Meta, Facebook's parent company, is taking steps to address the potential impact of AI-generated images on its platforms as the 2024 election season approaches. Meta plans to identify and label images created by third-party AI tools to combat the proliferation of misleading content.
According to Reuters, the company aims to ensure transparency and provide users with information about the origin of these images.
Partnerships and Labels
Meta will begin adding "AI generated" labels to images created using tools from prominent companies such as Google, Microsoft, OpenAI, Adobe, Midjourney, and Shutterstock. Meta already applies an "Imagined with AI" label to photorealistic images produced by its own AI image generator.
CNN noted that by collaborating with leading firms in the AI development space, Meta intends to implement common technical standards, including invisible metadata or watermarks, that will enable its systems to identify AI-generated images created with these tools.
Meta will roll out the labels in multiple languages across its platforms, including Facebook, Instagram, and Threads. This global approach addresses the risks associated with AI-generated images and their potential to mislead voters in the United States and numerous other countries facing elections in 2024.
Meta's announcement comes in response to growing concerns raised by experts, lawmakers, and even tech executives about the spread of false information, fueled by realistic AI-generated images and the speed at which social media can disseminate them. Meta's Oversight Board recently criticized the company's manipulated media policy as "incoherent," a finding prompted by its review of an altered video of US President Joe Biden.
Industry-Standard Markers
Meta's adoption of industry-standard markers will allow the company to label AI-generated images reliably. However, these markers do not yet extend to AI-generated video and audio.
To address this limitation, Meta plans to introduce a feature that enables users to identify and disclose when AI has generated the video or audio content they share. Failure to comply with this disclosure requirement may result in penalties.
In cases where digitally created or altered images, videos, or sounds pose a high risk of materially deceiving the public on significant matters, Meta may apply more prominent labels. Additionally, Meta is actively working to prevent users from removing the invisible watermarks from AI-generated images, ensuring the integrity and authenticity of the labeled content.
Photo: Meta Newsroom Facebook Page