At first glance, there might seem to be nothing harmful about a Google-owned artificial intelligence misclassifying a picture of a turtle as a gun. Researchers, however, believe this poses a far more serious problem than it appears, and one that deserves more attention. After all, tricking an AI into believing that something is harmful could lead to serious consequences.
This disturbing development came out of an experiment by LabSix, a research group at MIT, in which the researchers deliberately tricked an AI into thinking that a 3D-printed turtle was a rifle, Motherboard reports. By manipulating a few pixels in the image, the researchers were able to fool the AI, a technique known as an "adversarial example."
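To give a sense of how small pixel changes can flip a classifier's prediction, here is a minimal sketch of the general adversarial-example idea using the Fast Gradient Sign Method. This is not the LabSix researchers' own method, which produced perturbations robust to 3D printing and changing viewpoints; the pretrained model, the input tensor, and the epsilon value below are illustrative assumptions.

```python
# Sketch: craft an adversarial image with the Fast Gradient Sign Method (FGSM).
# Assumptions: a pretrained torchvision classifier, and an input `x` given as a
# (1, 3, 224, 224) RGB tensor with values in [0, 1]; normalization is omitted
# for brevity. This is illustrative, not the LabSix turtle attack itself.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

def fgsm_attack(image, true_label, epsilon=0.01):
    """Return a slightly perturbed copy of `image` that the model is more
    likely to misclassify, while looking nearly identical to a human."""
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Nudge every pixel in the direction that increases the loss the most.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

# Usage (hypothetical): x is the input tensor, y the correct class index.
# x_adv = fgsm_attack(x, torch.tensor([y]))
# model(x_adv).argmax()  # often differs from y despite near-identical pixels
```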
Anish Athalye, one of the researchers behind the experiment, explained during a phone call with the publication why the result is so troubling. After all, until it was proven possible, robust adversarial examples of this kind were only theoretical.
"Just because other people haven't been able to do it, doesn't mean that adversarial examples can't be robust," Athalye explained. "This conclusively demonstrates that yes, adversarial examples are a real concern that could affect real systems, and we need to figure out how to defend against them. This is something we should be worried about."
In the paper the researchers published, they describe exactly how they achieved this feat and why it is such a cause for concern. In short, it comes down to the world's increasing reliance on AI.
Self-driving cars, research algorithms, even the data analytics employed by stock markets all rely on machine intelligence in one form or another. Feeding an AI false information that leads it to the wrong conclusions can cause massive damage. If an autonomous car misclassifies a sidewalk as part of the road, for example, pedestrians could be seriously injured.