Scam Detection and AI
AI has captivated businesses, governments, and individuals, bringing both promise and risk. In a BBC interview, Apple co-founder Steve Wozniak voiced concerns about AI, particularly its role in making scams and falsehoods harder to spot.
AI Scams and Misinformation Worry Wozniak
Wozniak worries that AI makes scams and misinformation harder to detect, and he advised labeling and regulating AI-generated content. Because tools like ChatGPT can produce text that sounds “so intelligent,” he believes AI can make bad actors more convincing. In his view, anyone who publishes AI-generated content should be held accountable for it.
Wozniak also argues that big tech companies, which he says “feel they can kind of get away with anything,” require regulation, though he doubts regulators will get it right: “The forces that drive for money usually win out, which is sort of sad.”
AI Voice Scams Growing
AI voice scams are another major issue: criminals clone voices harvested from social media to deceive victims and extort money. Research from McAfee shows these scams are on the rise.
A quarter of adults worldwide have encountered an AI voice scam, with 10% targeted personally and 15% knowing someone who has been. The scams are most common in India, where 47% of respondents had either been a victim (20%) or knew one (27%). In the United States, 14% of respondents said they had been targeted and 18% said a friend or family member had. The U.K. ranks third, with 8% directly affected and 16% indirectly.
Lessons from the Internet’s Birth
Wozniak argues that opportunities squandered at the internet’s birth offer lessons for AI’s architects. He believes “we can’t stop the technology,” but we can teach people to recognize fraud and identity theft.
Regulation and Accountability
Wozniak’s worries underscore the need for regulation and accountability. AI can be used for good or ill, depending on the user and their intent. As AI grows more capable, scams and misinformation become harder to spot, making explicit regulation, and accountability for those who create and publish AI content, vital.