Meta, Facebook's parent company, is taking steps to address the potential impact of AI-generated images on its platforms as the 2024 election season approaches. Meta plans to identify and label images created by third-party AI tools to combat the proliferation of misleading content.
According to Reuters, the company aims to ensure transparency and provide users with information about the origin of these images.
Partnerships and Labels
Meta will begin adding "AI generated" labels to images created using tools from prominent companies such as Google, Microsoft, OpenAI, Adobe, Midjourney, and Shutterstock. Meta already applies an "Imagined with AI" label to photorealistic images produced by its own AI image generator.
CNN noted that by collaborating with leading firms in the AI development space, Meta intends to implement common technical standards, including invisible metadata or watermarks, that will enable its systems to identify AI-generated images created with these tools.
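One published standard of this kind is the IPTC photo-metadata vocabulary, which defines a digital-source-type value, "trainedAlgorithmicMedia", for AI-generated media. As a rough illustration only (real detection systems parse the embedded XMP/IPTC metadata properly, and the function name here is hypothetical, not part of any Meta tool), a naive check might scan a file's raw bytes for that marker string:

```python
# Naive sketch: look for the IPTC "trainedAlgorithmicMedia" digital-source-type
# value in a file's raw bytes. Illustrative only; production systems parse the
# structured XMP/IPTC metadata rather than grepping bytes.

AI_SOURCE_MARKER = b"trainedAlgorithmicMedia"

def has_ai_provenance_marker(path: str) -> bool:
    """Return True if the file's bytes contain the AI-source marker string."""
    with open(path, "rb") as f:
        return AI_SOURCE_MARKER in f.read()
```

Such metadata is easy to strip or forge, which is why the companies involved also pursue invisible watermarks embedded in the pixels themselves.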
Meta will roll out the labels in multiple languages across its platforms, including Facebook, Instagram, and Threads. This global approach addresses the risks associated with AI-generated images and their potential to mislead voters in the United States and numerous other countries facing elections in 2024.
Meta's announcement comes in response to growing concerns raised by experts, lawmakers, and even tech executives about the spread of false information driven by realistic AI-generated images and the rapid reach of social media. Meta's Oversight Board recently criticized the company's manipulated-media policy as "incoherent," in a decision prompted by an altered video of US President Joe Biden.
Industry-Standard Markers
Meta's implementation of industry-standard markers will allow the company to label AI-generated images effectively. However, these markers currently do not extend to videos and audio generated by artificial intelligence.
To address this limitation, Meta plans to introduce a feature that enables users to identify and disclose when AI has generated the video or audio content they share. Failure to comply with this disclosure requirement may result in penalties.
In cases where digitally created or altered images, videos, or sounds pose a high risk of materially deceiving the public on significant matters, Meta may apply more prominent labels. Additionally, Meta is actively working to prevent users from removing the invisible watermarks from AI-generated images, ensuring the integrity and authenticity of the labeled content.
Photo: Meta Newsroom Facebook Page





