Meta has updated its help center to bar political advertisers from using its generative AI ad-campaign creation tools. The updated policy went live on November 6th.
The notice states that advertisers running ads related to Housing, Employment, Credit, Social Issues, Elections, Politics, Health, Pharmaceuticals, or Financial Services are currently not permitted to use these generative AI features.
Meta’s Standards and Fact-Checking
Reuters reported that Meta's decision to limit the use of its generative AI tools for sensitive topics in regulated industries is driven by a desire to better understand the potential risks. By building appropriate safeguards around AI-driven advertisements, Meta aims to navigate the regulatory landscape of these industries more effectively.
Meta's advertising standards do not yet include AI-specific rules, but they do bar ads containing content that has been debunked by its fact-checking partners, as per Cointelegraph. This provides a measure of accountability and helps curb the spread of false information in advertisements.
AI Accessibility and Political Bias Claims
The broad accessibility of AI has raised concerns about the proliferation of fake news, deepfakes, and other forms of misinformation. There have also been assertions of left-leaning political bias in ChatGPT, one of the most popular AI chatbots, though those claims remain disputed within the AI community and academia.
Google has also updated its guidelines, requiring verified election advertisers to disclose the use of AI in their campaign content. Google specifies that ads must carry clear and conspicuous notices when they include synthetic content that inauthentically depicts real or realistic-looking people or events.
Google does, however, exempt ads in which the synthetic content is inconsequential to the claims being made.
Beyond industry self-regulation, U.S. regulators are considering rules on political AI deepfakes ahead of the 2024 election cycle. The preemptive measure aims to address concerns that AI could be used to manipulate voter sentiment on social media through the spread of fake news.
Photo: Muhammad Asyfaul/Unsplash

