Artificial intelligence (AI) is transforming the media landscape for both news organisations and consumers. Applications such as ChatGPT, Bard, and Bing AI are creating new possibilities for assisting in the writing and researching of news, but they also raise ethical concerns.
One of the most pressing questions for news organisations is whether consumers should be told when they are reading a story created by, or with the help of, AI. Some outlets, such as the technology magazine Wired and the BBC, are already doing this, but others are not.
There are several arguments for and against disclosing this kind of information.

For disclosure
First, disclosure would help to ensure transparency and accountability. Consumers should know how the news they consume is produced, and they should be able to make informed choices about whether or not to trust it.
Second, disclosure could help mitigate the risks of bias. AI systems are trained on data, and that data can reflect the biases of the people who created it. As a result, AI-generated content can sometimes be biased. Disclosure would alert consumers to this potential bias and allow them to take it into account when evaluating the information.
Third, disclosure could help to protect consumers from misinformation. AI systems can be used to generate fake news that is difficult to distinguish from genuine reporting. Disclosure would encourage consumers to be more sceptical of AI-generated content and more likely to verify it before sharing it.
Against disclosure
One concern is that it could stifle innovation. If news organisations are required to disclose every time they use AI, they may be less likely to experiment with the technology.
Another is that disclosure could be confusing for consumers. Not everyone understands how AI works, and some people may be suspicious of AI-generated content, so requiring disclosure could make it harder for them to get the information they need.
How things could play out
Here are a couple of examples to illustrate these concerns:
Imagine a news organisation using AI to fact-check and verify statements made by public figures during live events, such as political debates or press conferences. An AI system could rapidly identify inaccuracies and provide viewers with accurate information in real time.
However, if the news organisation were required to disclose the use of AI each time, it might lead to a reluctance to deploy such a tool. The fear of public perception and potential backlash could deter news outlets from leveraging AI to enhance the accuracy of their reporting, ultimately depriving the audience of a valuable service.
Another scenario involves AI-driven personalised news curation. Many news platforms use AI algorithms to tailor news content to individual readers’ preferences, ensuring they receive information that aligns with their interests.
If news organisations were compelled to disclose the use of AI in this context, readers might become wary of perceived manipulation. This apprehension could deter news outlets from investing in AI-driven personalisation, limiting their ability to engage and retain audiences in an increasingly competitive media landscape.
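To make this scenario concrete, here is a minimal, hypothetical sketch of the kind of logic behind interest-based curation. Real recommendation systems are far more sophisticated, drawing on engagement signals, collaborative filtering and machine-learned models; the articles, topic tags and scoring below are invented purely for illustration.

```python
from dataclasses import dataclass


@dataclass
class Article:
    headline: str
    topics: set[str]


def rank_for_reader(articles: list[Article], interests: set[str]) -> list[Article]:
    """Order articles by how many of their topic tags match the reader's interests."""
    return sorted(articles, key=lambda a: len(a.topics & interests), reverse=True)


if __name__ == "__main__":
    feed = [
        Article("Climate summit ends in compromise", {"climate", "politics"}),
        Article("New chip promises faster AI training", {"technology", "ai"}),
        Article("Local team wins championship", {"sport"}),
    ]
    # Hypothetical reader profile inferred from past reading behaviour.
    for article in rank_for_reader(feed, {"technology", "ai"}):
        print(article.headline)
```

Even in this toy form, the design choice is visible: the system decides what a reader sees based on an inferred profile, which is exactly the kind of behind-the-scenes influence the disclosure debate is about.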
To mitigate these risks, publications such as the New York Times are offering “enhanced bylines” that give more detail about the journalists behind a story and about how it was produced.
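In structured form, such a disclosure note might look something like the sketch below. The field names and values are assumptions made for illustration; they do not reflect the New York Times’ actual format or any published industry standard.

```python
# A hypothetical, machine-readable "enhanced byline" for a single story.
# Field names and values are invented for illustration; this is not an
# actual New York Times format or an industry standard.
enhanced_byline = {
    "headline": "Example headline",
    "authors": [{"name": "A. Reporter", "role": "reporting and writing"}],
    "ai_assistance": {
        "used": True,
        "tools": ["large language model"],
        "tasks": ["summarising interview transcripts", "suggesting headlines"],
        "human_review": "All AI-assisted material was checked by an editor before publication.",
    },
}

print(enhanced_byline["ai_assistance"]["tasks"])
```

Publishing something along these lines alongside a story would let readers see at a glance where and how AI was involved.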
Ultimately, the decision of whether or not to require disclosure is a complex one.
However, it is essential to have a public conversation about this issue so that we can develop policies that protect consumers, promote responsible journalism, and help rebuild trust in journalism, which is falling in some countries.
In addition to disclosure, there are other steps news organisations can take to ensure AI is used ethically and responsibly. They should develop clear guidelines for its use, addressing issues such as bias, transparency and accountability, and they should invest in training and education so that journalists understand how AI works and how to use it responsibly.
Finally, news organisations should work with well-informed groups such as Harvard’s Nieman Lab, policy specialists, technology companies and academics to develop ethical standards for using AI and to tackle emerging issues critical to the future of public-interest news.
The use of AI tools in news is a significant development. It is vital to have a thoughtful and informed conversation about this technology’s potential benefits and risks. By working together, we can ensure that AI is used in a way that serves the public interest and upholds the values of responsible journalism.

