Google’s AI diffusion agent mimics human writing to improve business research

Google researchers have developed a new framework for AI research agents that outperforms leading systems on key benchmarks.
The new agent, called Test-Time Diffusion Deep Researcher (TTD-DR), is inspired by the way humans write: through a process of drafting, searching for information and iterative revision.
The system uses diffusion mechanisms and self-evolving algorithms to produce more comprehensive and accurate research on complex topics.
For companies, this framework could power a new generation of tailor-made research assistants for high-value tasks that standard retrieval-augmented generation (RAG) systems struggle with, such as generating a competitive analysis or a market-entry report.
According to the paper's authors, these real-world business use cases were the system's primary focus.
The limits of current deep research agents
Deep research (DR) agents are designed to tackle complex queries that go beyond a simple search. They use large language models (LLMs) to plan, use tools such as web search to gather information, then synthesize the findings into a detailed report using test-time scaling techniques such as chain-of-thought (CoT), best-of-N sampling and Monte Carlo tree search.
However, many of these systems have fundamental design limitations. Most publicly available DR agents apply test-time scaling algorithms and tools without a structure that reflects human cognitive behavior. Open-source agents often follow a linear or rigidly parallel process of planning, searching and content generation, making it difficult for the different phases of the research to interact with and correct each other.
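Of the test-time scaling techniques mentioned above, best-of-N sampling is the simplest to illustrate. The sketch below is illustrative only: `generate` stands in for an LLM call and `score` for a reward model, both assumptions rather than anything from the paper.

```python
from itertools import cycle

def best_of_n(generate, score, n=8):
    """Sample n candidate answers and keep the highest-scoring one.

    `generate` stands in for an LLM call and `score` for a reward
    model; both are illustrative assumptions, not TTD-DR's code.
    """
    candidates = [generate() for _ in range(n)]
    return max(candidates, key=score)

# Toy demo: deterministic "generations" cycle through three drafts,
# and the dummy scorer simply prefers longer answers.
drafts = ["a", "an answer", "a detailed answer"]
gen = cycle(drafts)
best = best_of_n(lambda: next(gen), score=len)
print(best)  # -> a detailed answer
```

The same skeleton generalizes to the other techniques: chain-of-thought changes what `generate` produces, while tree search replaces the flat sampling loop with a guided exploration of partial answers.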

This can cause the agent to lose the global research context and miss critical connections between different pieces of information.
As the paper's authors note, this points to "a fundamental limitation of current DR agent work and highlights the need for a more cohesive framework, purposefully designed for DR agents, that mimics or surpasses human research capabilities."
A new approach inspired by human writing and diffusion
Unlike the linear process of most AI agents, human researchers work iteratively. They usually start with a high-level plan, create an initial draft, then engage in several cycles of revision. During these revisions, they search for new information to strengthen their arguments and fill gaps.
Google researchers observed that this human process could be emulated using a diffusion model augmented with a retrieval component. (Diffusion models are often used in image generation. They start with a noisy image and gradually refine it until it becomes a detailed picture.)
As the researchers explain: "In this analogy, a trained diffusion model initially generates a noisy draft, and the denoising module, aided by retrieval tools, revises this draft toward higher-quality (or higher-resolution) outputs."
TTD-DR is built on this blueprint. The framework treats the creation of a research report as a diffusion process, where a "noisy" initial draft is gradually refined into a polished final report.

This is carried out by two core mechanisms. The first, which the researchers call "denoising with retrieval," begins with a preliminary draft and improves it iteratively. At each step, the agent uses the current draft to formulate new search queries, retrieves external information and integrates it to "denoise" the report, correcting inaccuracies and adding detail.
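Based on that description, the denoising-with-retrieval loop can be sketched roughly as follows. Here `llm` and `search` are hypothetical callables standing in for Gemini 2.5 Pro and a web-search tool; the deterministic stubs below only trace the control flow, not real model behavior.

```python
def denoise_with_retrieval(question, llm, search, steps=3):
    """Iteratively refine a draft report, per TTD-DR's description
    of 'denoising with retrieval' (a sketch, not the paper's code)."""
    draft = llm(f"Write a rough first draft answering: {question}")
    for _ in range(steps):
        # The current draft drives the next search query...
        query = llm(f"Given this draft, what should we look up next?\n{draft}")
        # ...external evidence is retrieved...
        evidence = search(query)
        # ...and folded back in, correcting inaccuracies and adding detail.
        draft = llm(f"Revise the draft using this evidence:\n{evidence}\n---\n{draft}")
    return draft

# Deterministic stubs (hypothetical) that only trace the control flow:
def stub_llm(prompt):
    if prompt.startswith("Write"):
        return "draft-0"
    if prompt.startswith("Given"):
        return "next-query"
    old = prompt.rsplit("\n", 1)[-1]  # the current draft is the last line
    return f"draft-{int(old.split('-')[1]) + 1}"

def stub_search(query):
    return f"evidence for {query}"

report = denoise_with_retrieval("Why did the market shift?", stub_llm, stub_search)
print(report)  # -> draft-3 (three refinement passes over the initial draft)
```

The key design point is that retrieval is conditioned on the evolving draft rather than on a fixed upfront plan, which is what lets later findings correct earlier sections.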
The second mechanism, "self-evolution," ensures that each component of the agent (the planner, the question generator and the answer synthesizer) independently optimizes its own performance. In comments to VentureBeat, Rujun Han, a research scientist at Google and co-author of the paper, explained that this component-level evolution is crucial because it makes the "report more effective." It resembles an evolutionary process in which each part of the system gradually improves at its specific task, providing higher-quality context to the main revision process.
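The self-evolution idea can be sketched as a simple propose-and-select loop. Everything here is an illustrative stand-in: the real system evolves LLM components judged by learned fitness signals, not strings scored by length.

```python
def self_evolve(component, fitness, mutate, rounds=4):
    """Keep proposing variants of a component and retain the fittest.

    A sketch of TTD-DR's 'self-evolution' idea; `fitness` and `mutate`
    are illustrative stand-ins, not the paper's actual operators.
    """
    best, best_score = component, fitness(component)
    for _ in range(rounds):
        candidate = mutate(best)
        score = fitness(candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best

# Toy demo: the "component" is a prompt; mutation appends an
# instruction, and length is a dummy proxy for judged quality.
prompt = "Summarize the findings."
evolved = self_evolve(prompt, fitness=len,
                      mutate=lambda p: p + " Cite sources.")
print(evolved)  # the prompt with four appended refinements
```

In TTD-DR each component (planner, question generator, answer synthesizer) runs its own loop of this kind, so improvements compound before the main denoising revision consumes their outputs.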

"The complex interaction and the synergistic combination of these two algorithms are crucial for obtaining high-quality research results," the authors write. This iterative process translates directly into reports that are not only more accurate but also more logically coherent. As Han notes, since the model was evaluated on helpfulness, which includes coherence and consistency, the performance gains are a direct measure of its ability to produce well-structured business documents.
According to the paper, the resulting research companion is "capable of generating helpful and comprehensive reports for complex research questions across various industry domains, including finance, biomedicine, entertainment and technology," putting it in the same class as the deep research products from OpenAI, Perplexity and Grok.
TTD-DR in action
To build and test their framework, the researchers used Google's Agent Development Kit (ADK), an extensible platform for orchestrating complex AI workflows, with Gemini 2.5 Pro as the core LLM (although it can be swapped for other models).
They compared TTD-DR against leading commercial and open-source systems, including OpenAI Deep Research, Perplexity Deep Research, Grok DeepSearch and the open-source GPT-Researcher.
The evaluation focused on two main areas. For generating comprehensive long-form reports, they used the DeepConsult benchmark, a collection of business- and consulting-related prompts, as well as their own long-form research dataset. For answering multi-hop questions that require deep search and reasoning, they tested the agent on academic and real-world benchmarks such as Humanity's Last Exam (HLE) and GAIA.
The results showed that TTD-DR consistently outperformed its competitors. In side-by-side comparisons with OpenAI Deep Research on long-form report generation, TTD-DR achieved win rates of 69.1% and 74.5% on two different datasets. It also exceeded the OpenAI system on three separate benchmarks requiring multi-hop reasoning to find concise answers, with performance gains of 4.8%, 7.7% and 1.7%.

The future of test-time diffusion
Although the current research focuses on text reports using web search, the framework is designed to be highly adaptable. Han confirmed that the team plans to extend the work to integrate more tools for complex enterprise tasks.
A similar "denoising" process could be used to generate complex software code, create a detailed financial model or design a multi-step marketing campaign, where an initial "draft" of the project is iteratively refined with new information and feedback from various specialized tools.
"All these tools can be naturally incorporated into our framework," Han said, suggesting that this draft-centric approach could become a foundational architecture for a wide range of complex, multi-step tasks.