Anthropic takes on OpenAI and Google with new Claude AI features designed for students and developers
Anthropic is launching new “learning modes” for its AI assistant Claude that transform the chatbot from an answer-dispensing tool into a teaching companion, as major technology companies race to capture the rapidly growing AI education market while addressing mounting concerns that instant answers undermine real learning.
The San Francisco-based AI startup is rolling out the features today across its general Claude.ai service and its specialized programming tool, Claude Code. The learning modes mark a fundamental shift in how AI companies position their products for education, emphasizing guided discovery over immediate solutions at a time when educators fear students are becoming too dependent on AI-generated answers.
“We’re not building AI that replaces human capability; we’re building AI that enhances it thoughtfully for different users and use cases,” an Anthropic spokesperson told VentureBeat, underscoring the company’s philosophical approach as the industry wrestles with how to balance productivity gains against educational value.
The launch comes as competition in AI-powered education tools reaches fever pitch. OpenAI introduced Study Mode for ChatGPT in late July, while Google unveiled Guided Learning for its Gemini assistant in early August and committed $1 billion over three years to AI education initiatives. The timing is no coincidence: the back-to-school season is a critical window for capturing student and institutional adoption.
The education technology market, estimated at roughly $340 billion worldwide, has become a key battleground for AI companies looking to establish dominant positions before the technology matures. Educational institutions represent not only immediate revenue opportunities but also the chance to shape how an entire generation interacts with AI tools, potentially creating lasting competitive advantages.
“This highlights how we think about building AI: combining our incredible shipping velocity with thoughtful intention that serves different types of users,” the Anthropic spokesperson noted, pointing to the company’s recent launches, including Claude Opus 4.1 and automated security reviews, as evidence of its aggressive development pace.
How Claude’s new Socratic method addresses the instant response problem
For Claude.ai users, the new learning mode takes a Socratic approach, guiding users through challenging concepts with probing questions rather than immediate answers. Originally launched in April for Claude for Education users, the feature is now available to all users via a simple style dropdown menu.
The most innovative application may be in Claude Code, where Anthropic has developed two distinct learning modes for software developers. The “Explanatory” mode provides detailed narration of coding decisions and trade-offs, while the “Learning” mode pauses mid-task and asks developers to complete sections marked with “#TODO” comments, creating moments of collaborative problem-solving.
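To make that concrete, here is a hypothetical illustration (not actual Claude Code output) of the kind of scaffold the Learning mode is described as producing: the assistant writes most of a function and leaves a #TODO-marked section for the developer to fill in before the task continues.

```python
# Hypothetical illustration of a Learning-mode handoff: the assistant has
# written the scaffolding and left a marked gap for the developer to complete.

def moving_average(values: list[float], window: int) -> list[float]:
    """Return the simple moving average of `values` over a sliding window."""
    if window <= 0:
        raise ValueError("window must be positive")

    averages: list[float] = []
    for i in range(len(values) - window + 1):
        # TODO(you): compute the mean of values[i : i + window]
        # and append it to `averages`, then remove this placeholder.
        pass
    return averages
```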
This developer-focused approach addresses a growing concern in the tech industry: junior programmers who can generate code with AI tools but struggle to understand or debug their own work. “The reality is that junior developers using traditional AI coding tools can end up spending significant time reviewing and debugging code they didn’t write and sometimes don’t understand,” the Anthropic spokesperson said.
The business case for companies adopting learning modes may seem counterintuitive: why would employers want tools that deliberately slow their developers down? But Anthropic argues this reflects a more sophisticated understanding of productivity, one that weighs long-term skill development alongside immediate output.
“Our approach helps them learn as they work, building the skills to grow in their careers while still benefiting from the productivity boost of a coding agent,” the company said. This positioning runs counter to the industry’s broader push toward fully autonomous AI agents and reflects Anthropic’s commitment to a human-in-the-loop design philosophy.
The learning modes are powered by modified system prompts rather than fine-tuned models, which lets Anthropic iterate quickly based on user feedback. The company tested the modes internally with engineers across different levels of technical expertise and plans to track their impact now that the tools are available to a wider audience.
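For readers curious what a prompt-level approach looks like in practice, the sketch below shows how a Socratic tutoring behavior can be layered onto a model purely through a system prompt, using the publicly documented Messages API in Anthropic’s Python SDK. The system prompt and model ID here are illustrative assumptions, not Anthropic’s actual learning-mode configuration.

```python
# Minimal sketch of a prompt-driven "learning mode": the behavior change lives
# entirely in the system prompt, with no fine-tuning involved.
# Assumes the `anthropic` Python SDK and ANTHROPIC_API_KEY in the environment.
import anthropic

# Illustrative tutoring prompt; Anthropic's real learning-mode prompt is not public.
SOCRATIC_SYSTEM_PROMPT = (
    "You are a tutor. Do not give final answers directly. "
    "Ask one guiding question at a time, check the student's reasoning, "
    "and only confirm a solution after the student has attempted it."
)

client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # substitute any current Claude model ID
    max_tokens=512,
    system=SOCRATIC_SYSTEM_PROMPT,
    messages=[{"role": "user", "content": "Why is quicksort O(n log n) on average?"}],
)

print(response.content[0].text)
```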
Universities scramble to balance AI adoption with academic integrity concerns
The near-simultaneous launches of similar features from Anthropic, OpenAI and Google reflect mounting pressure to address legitimate concerns about AI’s impact on education. Critics argue that easy access to AI-generated answers undermines the cognitive struggle that is essential to deep learning and skill development.
A recent Wired analysis noted that while these study modes represent progress, they do not resolve the fundamental challenge: “It’s still up to users to engage with the software in a specific way, ensuring they actually understand the material.” The temptation to simply toggle off learning mode for quick answers remains one click away.
Educational institutions are wrestling with these trade-offs as they integrate AI tools into their curricula. Northeastern University, the London School of Economics and Champlain College have partnered with Anthropic for campus-wide Claude access, while Google has secured partnerships with more than 100 universities for its AI education initiatives.
Behind the technology: how Anthropic built AI that teaches instead of tells
Anthropic’s learning modes work by modifying system prompts to exclude the efficiency-focused instructions normally built into Claude Code, instead directing the AI to find strategic moments for educational insights and user interaction. The approach allows for rapid iteration but can lead to inconsistent behavior across conversations.
“We chose this approach because it lets us learn quickly from real student feedback and improve the experience, even if that results in some inconsistent behavior and errors across conversations,” the company said.
The company is also exploring richer visualizations for complex concepts, goal setting and progress tracking across conversations, and deeper personalization based on individual skill levels, features that could further differentiate Claude from competitors in the AI education space.
As students return to classrooms equipped with increasingly sophisticated AI tools, the ultimate test of learning modes won’t be measured in engagement metrics or revenue growth. Success will instead depend on whether a generation raised alongside artificial intelligence can retain the intellectual curiosity and critical thinking skills that no algorithm can replicate. The question is not whether AI will transform education; it is whether companies like Anthropic can ensure that transformation enhances rather than diminishes human potential.