Meta will make its generative artificial intelligence (AI) models available to the United States’ government, the tech giant has announced, in a controversial move that raises a moral dilemma for everyone who uses the software.
Meta last week revealed it would make the models, known as Llama, available to government agencies, “including those that are working on defence and national security applications, and private sector partners supporting their work”.
The decision appears to contravene Meta’s own policy which lists a range of prohibited uses for Llama, including “[m]ilitary, warfare, nuclear industries or applications” as well as espionage, terrorism, human trafficking and exploitation or harm to children.
Meta’s exception also reportedly applies to similar national security agencies in the United Kingdom, Canada, Australia and New Zealand. The announcement came just three days after Reuters revealed China has reworked Llama for its own military purposes.
The situation highlights the increasing fragility of open source AI software. It also means users of Facebook, Instagram, WhatsApp and Messenger – some versions of which use Llama – may inadvertently be contributing to military programs around the world.
What is Llama?
Llama is a family of large language models – similar to ChatGPT – and large multimodal models that handle data other than text, such as audio and images.
Meta, the parent company of Facebook, released Llama in response to OpenAI’s ChatGPT. The key difference between the two is that all Llama models are marketed as open source and free to use. This means anyone can download the source code of a Llama model, and run and modify it themselves (if they have the right hardware). On the other hand, ChatGPT can only be accessed via OpenAI.
The Open Source Initiative, an authority that defines open source software, recently released a standard setting out what open source AI should entail. The standard outlines “four freedoms” an AI model must grant in order to be classified as open source:
- use the system for any purpose and without having to ask for permission
- study how the system works and inspect its components
- modify the system for any purpose, including to change its output
- share the system for others to use with or without modifications, for any purpose.
Meta’s Llama fails to meet these requirements. This is because of limitations on commercial use, prohibitions on activities that may be deemed harmful or illegal, and a lack of transparency about Llama’s training data.
Despite this, Meta still describes Llama as open source.
Meta no longer prohibits military uses of its AI models. QubixStudio/Shutterstock
The intersection of the tech industry and the military
Meta is not the only commercial technology company branching out to military applications of AI. In the past week, Anthropic also announced it is teaming up with Palantir – a data analytics firm – and Amazon Web Services to provide US intelligence and defence agencies access to its AI models.
Meta has defended its decision to allow US national security agencies and defence contractors to use Llama. The company claims these uses are “responsible and ethical” and “support the prosperity and security of the United States”.
Meta has not been transparent about the data it uses to train Llama. But companies that develop generative AI models often utilise user input data to further train their models, and people share plenty of personal information when using these tools.
ChatGPT and DALL-E provide options to opt out of your data being collected. However, it is unclear whether Llama offers the same.
The option to opt out is not made explicitly clear when signing up to use these services. This places the onus on users to inform themselves – and most users may not be aware of where or how Llama is being used.
For example, the latest version of Llama powers AI tools in Facebook, Instagram, WhatsApp and Messenger. When using the AI functions on these platforms – such as creating reels or suggesting captions – users are using Llama.
Llama powers AI tools in apps such as Facebook, Instagram and WhatsApp. AlexandraPopova/Shutterstock
The fragility of open source
The benefits of open source include open participation and collaboration on software. However, this can also lead to fragile systems that are easily manipulated. For example, following Russia’s invasion of Ukraine in 2022, members of the public made changes to open source software to express their support for Ukraine.
These changes included anti-war messages and deletion of systems files on Russian and Belarusian computers. This movement came to be known as “protestware”.
The intersection of open source AI and military applications will likely exacerbate this fragility, because the robustness of open source software depends on the public community. Large language models such as Llama require public use and engagement, as the models are designed to improve over time through a feedback loop between users and the AI system.
The mutual use of open source AI tools marries two parties – the public and the military – who have historically held separate needs and goals. This shift will expose unique challenges for both parties.
For the military, open access means the finer details of how an AI tool operates can easily be sourced, potentially leading to security and vulnerability issues. For the general public, the lack of transparency in how user data is being utilised by the military can lead to a serious moral and ethical dilemma.