Anthropic ships automated security reviews for Claude Code as AI-generated vulnerabilities surge

Anthropic launched automated security review capabilities for its Claude Code platform on Wednesday, introducing tools that can scan code for vulnerabilities and suggest fixes as artificial intelligence dramatically accelerates software development across the industry.
The new features arrive as companies increasingly rely on AI to write code faster than ever, raising critical questions about whether security practices can keep pace with AI-assisted development. Anthropic's solution embeds security analysis directly into developer workflows through a simple terminal command and automated GitHub reviews.
“People love Claude Code, they love using models to write code, and these models are already extremely good and getting better,” said Logan Graham, a member of Anthropic's Frontier Red Team who led development of the security features, in an interview with VentureBeat. “It seems really possible that in the next couple of years we're going to 10x, 100x, 1,000x the amount of code that gets written in the world. The only way to keep up is to use models themselves to figure out how to make it secure.”
The announcement comes one day after Anthropic released Claude Opus 4.1, an upgraded version of its most powerful AI model that shows significant improvements in coding tasks. The timing underscores intensifying competition among AI companies, with OpenAI expected to announce GPT-5 imminently and Meta aggressively poaching talent with signing bonuses as high as $100 million.
Why AI code generation creates a massive security problem
The security tools address a growing concern in the software industry: as AI models become more capable of writing code, the volume of code being produced is exploding, but traditional security review processes have not scaled to match. Today, security reviews depend on human engineers manually examining code for vulnerabilities, a process that cannot keep pace with AI-generated output.
Anthropic's approach uses AI to solve a problem AI helped create. The company has developed two complementary tools that leverage Claude's capabilities to automatically identify common vulnerabilities, including SQL injection risks, cross-site scripting (XSS) vulnerabilities, authentication flaws, and insecure data handling.
The first tool is a /security-review command that developers can run from the terminal to scan code before committing it. “It's literally about 10 keystrokes, and then it kicks off a Claude agent to review the code you're writing, or your repository,” Graham said. The system analyzes the code and returns high-confidence vulnerability assessments along with suggested fixes.
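In practice, the workflow is a single command inside an existing Claude Code session; the transcript below is illustrative:

```
$ claude              # start a Claude Code session in your repository
> /security-review    # scan the code for vulnerabilities and suggested fixes
```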
The second component is a GitHub Action that automatically triggers security reviews when developers submit pull requests. The system posts inline comments on the code with security concerns and recommended fixes, ensuring that every code change receives a baseline security review before reaching production.
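The article does not reproduce the workflow file, but wiring such an action into a repository would follow the usual GitHub Actions pattern. In the sketch below, the action reference and input name are assumptions for illustration, not confirmed details; consult Anthropic's documentation for the actual setup:

```yaml
# .github/workflows/security-review.yml
# Hypothetical configuration: the action reference and input name are
# assumptions, not confirmed by this article.
name: Claude security review
on:
  pull_request:

jobs:
  security-review:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      pull-requests: write  # needed to post inline review comments
    steps:
      - uses: actions/checkout@v4
      - uses: anthropics/claude-code-security-review@main  # assumed action name
        with:
          claude-api-key: ${{ secrets.ANTHROPIC_API_KEY }}  # assumed input name
```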
How Anthropic tested the security scanner on its own vulnerable code
Anthropic tested the tools internally on its own codebase, including Claude Code itself, providing real-world validation of their effectiveness. The company shared specific examples of vulnerabilities the system caught before they reached production.
In one case, engineers built a feature for an internal tool that launched a local HTTP server intended for local connections only. The GitHub Action identified a remote code execution vulnerability exploitable through DNS rebinding attacks, which was fixed before the code was merged.
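Anthropic did not publish the vulnerable code, but the bug class is well documented: a server bound to localhost is still reachable from a victim's browser via DNS rebinding unless it validates the Host header. A minimal Python sketch of the pattern and the standard mitigation (the names and port are illustrative):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hosts that legitimate local clients are expected to use.
ALLOWED_HOSTS = {"localhost:8765", "127.0.0.1:8765"}

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Binding to 127.0.0.1 alone is not enough: a DNS rebinding attack
        # points an attacker-controlled domain at 127.0.0.1, letting a
        # victim's browser send requests here. Rejecting unexpected Host
        # headers closes that hole.
        if self.headers.get("Host") not in ALLOWED_HOSTS:
            self.send_error(403, "Forbidden: unexpected Host header")
            return
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

HTTPServer(("127.0.0.1", 8765), Handler).serve_forever()
```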
Another example involved a proxy system designed to manage internal credentials securely. The automated review flagged the proxy as vulnerable to server-side request forgery (SSRF) attacks, prompting an immediate fix.
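Again, the flagged code is not public; the Python sketch below shows the kind of guard an SSRF finding typically calls for, rejecting user-supplied URLs that resolve to internal addresses (the function name and policy are invented for illustration):

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_safe_proxy_target(url: str) -> bool:
    """Return False for URLs that resolve to loopback/private/link-local IPs."""
    host = urlparse(url).hostname
    if host is None:
        return False
    try:
        resolved = socket.getaddrinfo(host, None)
    except socket.gaierror:
        return False
    for family, _, _, _, sockaddr in resolved:
        ip = ipaddress.ip_address(sockaddr[0])
        # Without this check, a proxy will happily fetch internal targets such
        # as http://169.254.169.254/ (cloud metadata) on an attacker's behalf.
        if ip.is_loopback or ip.is_private or ip.is_link_local:
            return False
    return True
```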
“We used it, and it already found vulnerabilities and flaws and suggested how to fix them in things before they hit production for us,” Graham said. “We thought, hey, this is so useful that we decided to release it publicly.”
Beyond meeting the scale challenges facing large companies, the tools could democratize sophisticated security practices for small development teams that lack dedicated security staff.
“One of the things that makes me most excited is that it means security review can be easily democratized down to the smallest teams, and those small teams can push a lot of code that they'll have more and more confidence in,” Graham said.
The system is designed to be immediately accessible. According to Graham, developers could start using the security review feature within seconds of the release, with launching it taking about 15 keystrokes. The tools integrate seamlessly into existing workflows, processing code locally through the same Claude API that powers Claude Code's other features.
Inside the AI architecture that scans millions of lines of code
The security review system works by invoking Claude through an “agent loop” that analyzes code systematically. According to Anthropic, Claude Code uses tool calls to explore large codebases, starting by understanding the changes made in a pull request, then proactively exploring the broader codebase to understand context, security invariants, and potential risks.
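Anthropic has not detailed the agent's internals, but the general shape of such a tool-calling loop can be sketched with the Anthropic Python SDK. Here a single hypothetical read_file tool stands in for Claude Code's much richer toolset, and the model ID and prompt are illustrative only:

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# One stand-in tool; the real agent has many more at its disposal.
TOOLS = [{
    "name": "read_file",
    "description": "Read a source file from the repository under review.",
    "input_schema": {
        "type": "object",
        "properties": {"path": {"type": "string"}},
        "required": ["path"],
    },
}]

def read_file(path: str) -> str:
    with open(path, encoding="utf-8") as f:
        return f.read()

messages = [{
    "role": "user",
    "content": "Review the changes in diff.patch for security issues. "
               "Read any other files you need to understand the context.",
}]

while True:
    response = client.messages.create(
        model="claude-opus-4-1",  # illustrative model ID
        max_tokens=4096,
        tools=TOOLS,
        messages=messages,
    )
    if response.stop_reason != "tool_use":
        break  # the model produced final findings instead of another tool call
    # Run each requested tool and feed the results back into the conversation.
    messages.append({"role": "assistant", "content": response.content})
    tool_results = []
    for block in response.content:
        if block.type == "tool_use":
            tool_results.append({
                "type": "tool_result",
                "tool_use_id": block.id,
                "content": read_file(block.input["path"]),
            })
    messages.append({"role": "user", "content": tool_results})

print(response.content[0].text)  # the high-confidence vulnerability report
```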
Enterprise customers can customize the security rules to match their specific policies. The system is built on Claude Code's extensible architecture, allowing teams to modify the existing security prompts or create entirely new scanning commands through simple markdown documents.
“You can take a look at the slash commands, because a lot of the time a slash command is actually just run via a Claude .md doc,” Graham said. “It's really simple for you to write your own, too.”
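For example, Claude Code picks up custom slash commands from markdown files in a project's .claude/commands/ directory; a minimal sketch of a team-specific scanning command (the file name and policy rules here are invented) might look like:

```markdown
<!-- .claude/commands/policy-security-review.md -->
Review the pending changes against our internal security policy:

1. Flag any SQL built by string concatenation; we require parameterized queries.
2. Flag outbound HTTP requests to hosts not on the internal allow-list.
3. For each finding, cite the file and line, state your confidence, and suggest a fix.
```

Saving that file exposes it as a custom slash command within the session.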
The $100 million talent war reshaping AI security development
The security announcement comes amid a broader industry reckoning with AI safety and responsible deployment. Recent Anthropic research has explored techniques for preventing AI models from developing harmful behaviors, including a controversial “vaccination” approach that exposes models to undesirable traits during training to build resilience.
The timing also reflects intense competition in the AI space. Anthropic released Claude Opus 4.1 on Tuesday, with the company claiming significant improvements in software engineering tasks, scoring 74.5% on the SWE-bench Verified coding evaluation, up from 72.5% for the previous model, Claude Opus 4.
Meanwhile, Meta has been aggressively recruiting AI talent with massive signing bonuses, though Anthropic CEO Dario Amodei recently said that many of his employees have turned those offers down. The company maintains an 80% retention rate for employees hired over the past two years, compared with 67% at OpenAI and 64% at Meta.
Government agencies can now buy Claude as enterprise AI adoption accelerates
The security features are part of Anthropic's broader push into enterprise markets. Over the past month, the company has shipped several enterprise-focused features for Claude Code, including analytics dashboards for administrators, native Windows support, and multi-directory support.
The U.S. government has also validated Anthropic's enterprise credentials, adding the company to the General Services Administration's approved vendor list alongside OpenAI and Google, making Claude available for federal agencies to purchase.
Graham emphasized that the security tools are designed to complement, not replace, existing security practices. “No single thing is going to solve the problem. This is just one additional tool,” he said. But he expressed confidence that AI-powered security tools will play an increasingly central role as code generation accelerates.
The race to secure AI-generated software before it breaks the internet
As AI reshapes software development at an unprecedented pace, Anthropic's security initiative represents a critical recognition that the same technology driving explosive growth in code generation must also be used to keep that code secure. Graham's team, known as the Frontier Red Team, focuses on identifying potential risks from advanced AI capabilities and building appropriate defenses.
“We've always been extremely determined to measure the cybersecurity capabilities of models, and I think it's time for defenses to increasingly exist in the world,” Graham said. The company is particularly encouraging cybersecurity firms and independent researchers to experiment with creative applications of the technology, with the ambitious goal of using AI to “review and help secure all of the most important software that powers the world's infrastructure.”
The security features are available immediately to all Claude Code users, with the GitHub Action requiring a one-time setup by development teams. But the industry's bigger question remains: can AI-powered defenses scale fast enough to match the exponential growth of AI-generated vulnerabilities?
For now, at least, the machines are racing to fix what other machines might break.