Anthropic wants to be the one good AI company in Trump's America

Anthropic, the artificial intelligence company behind the Claude chatbot, is trying to carve out a place as the good guy in the AI space. Fresh off being the only major AI company to throw its support behind an AI safety bill in California, the company has earned a headline from Semafor thanks to its apparent refusal to allow its models to be used for surveillance tasks, which is pissing off the Trump administration.
According to the report, law enforcement agencies have felt stifled by Anthropic's usage policy, which includes a section restricting the use of its technology for "criminal justice, censorship, surveillance, or prohibited law enforcement purposes." That includes prohibiting the use of its AI tools to "make determinations on criminal justice applications," "target or track a person's physical location, emotional state, or communication without their consent," and "analyze or identify specific content to censor on behalf of a government organization."
This has been a real problem for federal agencies, including the FBI, the Secret Service, and Immigration and Customs Enforcement, per Semafor, and has created tension between the company and the current administration, despite Anthropic giving the federal government access to its Claude chatbot and AI tools for just $1. According to the report, Anthropic's policy is broad, with fewer carve-outs than its competitors'. For example, OpenAI's usage policy restricts "unauthorized monitoring of individuals," which may not rule out use of the technology for "legal" monitoring.
OpenAI did not respond to a request for comment.
A source familiar with the matter explained that Anthropic's Claude is used by agencies for national security purposes, including cybersecurity, but the company's usage policy restricts uses related to domestic surveillance.
An Anthropic representative said the company developed ClaudeGov specifically for the intelligence community, and that the service has received "High" authorization from the Federal Risk and Authorization Management Program (FedRAMP), allowing it to be used with sensitive government workloads. The representative said Claude is available for use across the intelligence community.
An administration official weighed in to Semafor as well, but this is as much a legal question as a moral one. We live in a surveillance state; police can and have monitored people without a warrant in the past, and will almost certainly continue to do so in the future.
A company that chooses not to participate in that, insofar as it can resist, is covering its own ass as much as it is taking an ethical stand. If the federal government is irritated that a company's usage policy prevents it from carrying out domestic surveillance, perhaps the main takeaway is that the government is carrying out widespread domestic surveillance and is trying to automate it with AI systems.
Be that as it may, Anthropic's theoretically principled stance is the latest in its effort to position itself as the reasonable AI company. Earlier this month, it backed an AI safety bill in California that would require large AI companies to submit to new, stricter safety requirements meant to ensure their models aren't likely to cause catastrophic harm. Anthropic was the only major player in the AI space to throw its weight behind the bill, which now awaits Governor Newsom's signature (which may or may not come, as he already vetoed a similar bill). The company has also been working DC, pitching rapid adoption of AI with guardrails (but with the emphasis on the rapid part).
Its position as the chill AI company is perhaps a little compromised by the fact that it pirated millions of books and articles that it used to train its large language model, violating the rights of copyright holders and leaving authors high and dry without payment. A $1.5 billion settlement reached earlier this month will at least put a little money in the pockets of the people who actually created the works used to train the model. Meanwhile, Anthropic was just valued at nearly $200 billion in a recent funding round, which will make that court-ordered payout a rounding error.