Rare alliance between major U.S. artificial intelligence companies

Technology|7/4/2026

  • American AI companies unite to monitor exploitation of AI models
  • Concerns over Chinese copies threatening U.S. profits and security

Major American artificial intelligence companies, including OpenAI, Anthropic, and Google, have begun collaborating to counter attempts by some Chinese competitors to extract results from U.S. AI models for a competitive advantage in the global market.

This rare collaboration is taking place through the Frontier Model Forum, a nonprofit organization founded by the three companies along with Microsoft in 2023. The effort aims to detect so-called adversarial distillation that violates model terms of use, according to sources familiar with the matter.

Adversarial distillation has raised concerns among U.S. AI companies, as some users—especially in China—create imitation versions of American products that can undercut prices and attract customers away from the original companies, while posing potential security risks.

U.S. officials have estimated that these practices cost Silicon Valley labs billions of dollars in annual profits.

OpenAI confirmed its participation in the information-sharing efforts on adversarial distillation through the forum, pointing to a recent memo sent to Congress accusing the Chinese firm DeepSeek of attempting to “free-ride” on the capabilities developed by OpenAI and other American AI labs.

Distillation is a technique in which an existing AI model is used to train a new model that replicates its capabilities, often at a much lower cost than training a model from scratch.
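The mechanism can be sketched in a few lines. The example below is a toy illustration, not any lab's actual pipeline: a hypothetical black-box "teacher" model is queried for its soft output probabilities, and a "student" model is fit to reproduce them, which is all that distillation requires when only query access is available.

```python
import numpy as np

rng = np.random.default_rng(0)

def teacher(x):
    # Hypothetical black-box teacher: the distiller sees only its
    # output probabilities, never its weights or training data.
    logits = x @ np.array([[2.0, -1.0], [-1.5, 2.5]])
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Query the teacher on sampled inputs and record its soft outputs.
X = rng.normal(size=(500, 2))
soft_targets = teacher(X)

# Train the student to minimize cross-entropy against the teacher's
# soft outputs (plain gradient descent on a small linear model).
W = np.zeros((2, 2))
lr = 0.5
for _ in range(300):
    logits = X @ W
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs = e / e.sum(axis=1, keepdims=True)
    grad = X.T @ (probs - soft_targets) / len(X)
    W -= lr * grad

# Fraction of inputs on which student and teacher now agree.
agreement = np.mean((X @ W).argmax(axis=1) == soft_targets.argmax(axis=1))
print(f"student/teacher agreement: {agreement:.2%}")
```

The point of the sketch is the asymmetry the article describes: the student needs only the teacher's answers, so the replication cost is a fraction of the original training cost.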

Some forms of distillation are widely accepted and even encouraged within labs, such as creating smaller, more efficient versions of a lab's own models or allowing external developers to distill model outputs for non-competitive purposes.

However, distillation by third parties is controversial, as it can replicate proprietary work without permission and may remove safety controls that prevent the AI from being used for harmful purposes.

Many Chinese models are released as open systems, making them easier to download and run at lower cost. That creates an economic challenge for U.S. companies that keep their models proprietary and rely on customers paying for access to offset huge investments in data centers and infrastructure.

The issue drew significant attention after DeepSeek launched its R1 model in January 2025, prompting OpenAI and Microsoft to investigate potential unauthorized extraction of U.S. model data.

Since then, DeepSeek has continued using advanced methods to develop new versions of its chatbot through distillation.

Information-sharing among American AI companies aims to enhance their ability to detect these practices, identify responsible parties, and prevent unauthorized use, similar to cybersecurity firms exchanging attack data and adversary tactics to strengthen defenses.

Companies like Anthropic have blocked Chinese firms from using their models, noting that the threat goes beyond any single company or region and poses a national security risk, as distilled models often lack safety safeguards designed to prevent misuse.

Google has reported an increase in model extraction attempts. While the three U.S. companies have not provided clear evidence of the extent to which Chinese innovation relies on distillation, they note that the prevalence of attacks can be measured by the volume of large-scale data requests.
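The detection signal mentioned above, the volume of large-scale data requests, can be illustrated with a minimal sketch. Everything here is assumed for illustration (the log format, the `LARGE` token threshold, and the `FLAG_COUNT` limit are hypothetical); the idea is simply to flag clients whose count of large, generation-heavy requests far exceeds the norm.

```python
from collections import Counter

# Hypothetical request log: (client_id, tokens_requested) pairs.
requests = [
    ("acme", 200), ("acme", 150),
    ("scraper", 4000), ("scraper", 4000), ("scraper", 4000),
    ("scraper", 4000), ("scraper", 4000), ("scraper", 4000),
    ("blog", 300),
]

LARGE = 1000      # assumed cutoff for a "large-scale" request
FLAG_COUNT = 5    # assumed per-window count before a client is flagged

# Count only large requests per client, then flag heavy repeaters.
large_counts = Counter(
    client for client, tokens in requests if tokens >= LARGE
)
flagged = sorted(c for c, n in large_counts.items() if n >= FLAG_COUNT)
print(flagged)  # → ['scraper']
```

Real systems would combine many more signals (query diversity, timing, account linkage), which is precisely the kind of information the article says the companies now share, much as cybersecurity firms share adversary tactics.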