17 Surprising (And Sometimes Alarming) Uses For And Results Of AI

Forbes Technology Council

Artificial intelligence has been the subject of innumerable headlines over the past several years, increasing the public’s interest in (and concern about) its potential. In recent weeks, the media has featured stories about both the many uses and the drawbacks of generative AI tools such as ChatGPT. And the entertainment industry has been debating both the pros (such as the rise of new art forms) and cons (deepfakes that can replicate a performer’s face and/or voice, with or without their permission) of the proliferation of AI.

Such stories make it clear that while AI is a powerful resource that’s not going away, industries, governments and the public at large need to stay updated on its developments and think carefully about the ethical implications of its use. Below, 17 members of Forbes Technology Council share some of the surprising—even unsettling—ways AI is or could be leveraged that the general public may not know about, but should.

1. Phishing Messages And Malware

AI has enhanced existing threats and introduced new ones. Chatbots can take the grunt work out of crafting phishing lures; eventually, hackers will combine internet access, automation and AI in a way that lets them write a script saying, “Learn about Target X and keep messaging them until they click.” We’ve even seen evidence of AI being used to write polymorphic malware. We need good AI to fight back. - Jim Taylor, RSA Security

2. Identity Theft

The general public should know that AI-generated deepfakes aren’t just targeting high-profile people. Fraudsters are leveraging them to steal individuals’ identities and gain access to bank accounts and confidential information. Luckily, verification platforms that use multiple identification factors can help deter fraud and the potential leakage of personal information and documents. - Andrew Sever, Sumsub


3. Increasingly Sophisticated Cyberattacks

Hackers are increasingly utilizing AI for sophisticated cyberattacks. AI can automate and optimize tasks, including reconnaissance and vulnerability scanning, and be trained to mimic human behavior, enabling attackers to launch convincing social engineering attacks and phishing emails. This poses a significant challenge for cybersecurity, requiring constant adaptation and advanced AI-driven defense. - Stephen O’Doherty, Gibraltar Solutions

4. Disinformation Campaigns

AI-generated text can be used to create sophisticated disinformation campaigns. By emulating the writing style of influential figures, AI can generate fake news articles, social media posts or blog entries that appear authentic. This raises concerns about the spread of misinformation and the erosion of trust in online content. - Roman Reznikov, Intellias

5. Revelation Of Personal Data

AI models trained on large data sets can capture patterns and knowledge from text, potentially including sensitive or personal information. This raises concerns about the privacy and security of individuals’ data, as AI-generated text can inadvertently reveal private details or be exploited for malicious purposes, such as social engineering attacks or identity theft. - Manan Shah, Avalance Global Solutions

6. Reputational Damage

It’s unsettling that deepfake technology could enable highly damaging revenge scenarios. A vengeful person could easily make it appear as though someone has cheated by swapping faces in an intimate video; create a fake video of the victim saying offensive things, damaging their career (even if the video is proven to be fake); or blackmail someone with a deepfake video, threatening to release it publicly unless demands are met. - Indiana (Indy) Gregg, Wedo

7. Impersonating Trusted Individuals

Deepfakes are on the rise and create security threats for both consumers and businesses. Bad actors can utilize AI to impersonate bank employees or even family members over the phone. These phishing attacks are very dangerous—their urgent and deceptive nature specifically targets human emotions with the ultimate goal of stealing personally identifiable information and/or money. - Caroline Wong, Cobalt

8. Manipulating Election Results

AI deepfakes can distort democratic discourse and manipulate elections. Deepfakes can be used to spread misinformation, propaganda and fake news about political candidates, parties or issues. Political leaders can be impersonated or discredited, as can political activists or journalists. This can influence voter behavior, undermine public trust and destabilize democracy. AI use needs to be controlled. - Namrata Sengupta, Stellar Data Recovery Inc. dba BitRaser

9. Autonomous Weapons Systems

I am sure the time will come when AI-powered autonomous weapons systems evolve. Such systems could make critical decisions about targeting and engagement without direct human control, which raises serious ethical concerns. - Dr. Vivek Bhandari, Powerledger

10. Image Manipulation

Most people do not realize that AI can be used to manipulate images. AI-powered image manipulation can take an existing image and change elements of it, such as the background, color and other features. This technology is used for everything from facial recognition to creating realistic deepfakes. It is a powerful tool that can be used both ethically and unethically, depending on the application. - Sandro Shubladze, Datamam

11. Surveillance

One unsettling way AI can be leveraged is as a surveillance tool. Facial recognition technology is becoming more common, and there’s a concern among some that it may be used to keep an eye on people without their knowledge. I think we need to be cautious and hold companies that use this tech accountable so people’s rights are not violated. - Thomas Griffin, OptinMonster

12. Adversarial Attacks

AI adversarial attacks represent a surprising and concerning application of the technology. These attacks subtly manipulate AI inputs to induce erroneous outputs, misleading systems including those used in autonomous cars or for facial recognition. This unfamiliar threat can lead to significant security risks, making it vital to improve public awareness and system resilience. - Amitkumar Shrivastava, Fujitsu
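To make the mechanism concrete, here is a minimal sketch of one well-known adversarial technique, the fast gradient sign method (FGSM), written in Python with PyTorch. The model, images and labels are hypothetical placeholders for illustration, not something taken from the article, and real attacks and defenses are considerably more elaborate.

```python
# Minimal FGSM sketch (illustrative only): nudge an input image just enough
# that a classifier's prediction can flip, while the change stays nearly
# invisible to a human viewer.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.01):
    """Return an adversarially perturbed copy of `image` (pixel values in [0, 1])."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)   # how wrong the model currently is
    loss.backward()                               # gradient of the loss w.r.t. the pixels
    # Step each pixel slightly in the direction that increases the loss.
    perturbed = image + epsilon * image.grad.sign()
    return torch.clamp(perturbed, 0.0, 1.0).detach()

# Hypothetical usage with some pretrained classifier `model`:
# adv = fgsm_perturb(model, batch_of_images, true_labels, epsilon=0.03)
# model(adv).argmax(dim=1) may now disagree with true_labels.
```

The perturbation is tiny, yet it is chosen precisely to push the classifier toward a wrong answer, which is why resilience testing matters for systems such as facial recognition or autonomous driving.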

13. More Pervasive And Invasive Advertising

AI can be used for more pervasive advertising. With AI, one can analyze the emotional state of a consumer and feed them highly personalized ads, exploiting their emotional vulnerabilities. AI algorithms can distinguish between happy and sad faces, understand text sentiments and tone of voice, and read other behavioral patterns to manipulate a user’s decision-making processes and nudge them into buying. - Konstantin Klyagin, Redwerk

14. Creation Of Echo Chambers

The most unsettling development to me is the way AI serves up only what people want to see and know about. The more you click on sites and pages expressing a certain viewpoint, the more that viewpoint is shown to you. It is causing people to take sides and think those who don’t believe the same things they do are misinformed, unintelligent or misguided. In reality, every one of us is only being shown things that align with our existing viewpoints. - Laureen Knudsen, Broadcom

15. Realistic Digital Influencers

Companies are creating AI-generated social media influencers: entirely virtual personas designed to appear and act like real people. They can amass large numbers of followers, endorse products and even collaborate with other influencers—all without being human. These blurred lines between real and virtual individuals raise ethical concerns regarding transparency and authenticity in influencer marketing. - Cristian Randieri, Intellisystem Technologies

16. Creation Of Synthetic Data

One way AI is being leveraged that the general public may not know about is the creation of synthetic data, which imitates real data such as images, text, audio or video. Synthetic data serves several worthwhile purposes, including training machine learning models, testing software and enhancing privacy. However, there are also challenges regarding quality, validity, fairness and safeguarding the rights of the original data owners and users. - Jagadish Gokavarapu, Wissen Infotech
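As a toy illustration of the idea (not drawn from the article), the following Python sketch generates synthetic tabular records by sampling from simple per-column statistics estimated on a small, invented “real” dataset; the column names and figures are assumptions made up for the example.

```python
# Toy synthetic-data sketch (illustrative only): estimate simple per-column
# statistics from a small "real" table, then sample new rows that imitate it.
import numpy as np

rng = np.random.default_rng(seed=0)

# Hypothetical "real" data: ages and annual incomes of five customers.
real = {
    "age": np.array([23, 35, 41, 29, 52], dtype=float),
    "income": np.array([38_000, 62_000, 75_000, 48_000, 90_000], dtype=float),
}

def synthesize(real_columns, n_rows):
    """Sample n_rows of synthetic data from per-column normal fits."""
    synthetic = {}
    for name, values in real_columns.items():
        mean, std = values.mean(), values.std()
        synthetic[name] = rng.normal(mean, std, size=n_rows)
    return synthetic

fake = synthesize(real, n_rows=1000)
print(fake["age"][:5].round(1), fake["income"][:5].round(-3))
```

A sketch this simple ignores correlations between columns; production synthetic-data tools model the joint distribution (and add privacy safeguards), but the principle of imitating real data rather than copying it is the same.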

17. Medical Image Interpretation

AI’s ability to interpret medical images, such as X-rays or MRIs, is astonishing yet disconcerting. While it can aid in early disease detection, if the algorithms are flawed or biased, it may lead to misdiagnoses and inappropriate treatments. It’s essential that we approach AI in healthcare with a balanced understanding of both its vast potential and the need for rigorous validation. - Marc Fischer, Dogtown Media LLC
