October 7, 2025

Google Cloud’s data agents promise to end the 80% grunt work problem plaguing enterprise data teams

Data doesn’t magically appear in the right place for enterprise analytics or AI; it has to be prepared and routed through data pipelines. That’s the domain of data engineering, and it has long been one of the most thankless and tedious tasks enterprises face.

Now, Google Cloud is taking direct aim at that data preparation drudgery with the launch of a series of AI agents. The new agents cover the entire data lifecycle. BigQuery’s data engineering agent automates the creation of complex pipelines through natural language commands. A data science agent turns notebooks into intelligent workspaces that can autonomously execute machine learning workflows. The enhanced conversational analytics agent now includes a code interpreter that handles advanced Python analysis for business users.

“When I think about who does data engineering today, it’s not just engineers; data analysts, data scientists, every data persona complains about how hard it is to find data, how hard it is to wrangle data, how hard it is to get access to high-quality data,” Yasmeen Ahmad, managing director of Data Cloud at Google Cloud, told VentureBeat. “Most of the workflows we hear about from our users are 80% bogged down in this work around data: data wrangling, data engineering, just getting good-quality data that they can work with.”

Tackling the data preparation bottleneck

Google built the data engineering agent in BigQuery to create complex data pipelines from natural language prompts. Users can describe multi-step workflows and the agent handles the technical implementation. That includes ingesting data from Cloud Storage, processing transformations and running quality checks.
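
To make that concrete, here is a minimal sketch of the kind of pipeline the agent is described as generating, written against the standard BigQuery Python client. The project, dataset, table, bucket and column names are hypothetical; in practice the agent writes and manages code like this itself.

```python
# Minimal sketch of the kind of pipeline described above: ingest CSV files from
# Cloud Storage into BigQuery, then run a basic quality check.
# Project, dataset, table, bucket and column names are hypothetical.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")      # hypothetical project
table_id = "my-project.sales_dataset.raw_orders"    # hypothetical table

# Step 1: ingest raw files from Cloud Storage, letting BigQuery infer the schema.
load_job = client.load_table_from_uri(
    "gs://my-bucket/orders/*.csv",                   # hypothetical bucket path
    table_id,
    job_config=bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.CSV,
        skip_leading_rows=1,
        autodetect=True,
        write_disposition="WRITE_TRUNCATE",
    ),
)
load_job.result()  # wait for the ingestion job to finish

# Step 2: a simple quality check, flagging rows with missing keys or negative amounts.
quality_sql = f"""
    SELECT COUNT(*) AS bad_rows
    FROM `{table_id}`
    WHERE order_id IS NULL OR amount < 0
"""
bad_rows = list(client.query(quality_sql).result())[0].bad_rows
print(f"Rows failing quality checks: {bad_rows}")
```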


The agent automatically writes the complex SQL and Python scripts. It handles anomaly detection, schedules pipelines and troubleshoots failures. These are tasks that traditionally require significant engineering expertise and continuous maintenance.

The agent breaks natural language requests down into multiple steps. It first recognizes the need to create connections to data sources. It then creates appropriate table structures, loads the data, identifies primary keys for joins, reasons about data quality issues and applies cleaning functions.
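
For illustration, the later steps in that decomposition, deduplicating on the inferred primary key and applying a cleaning function, could be expressed as a single SQL statement run from Python. The table and column names below are again hypothetical.

```python
# Sketch of the later pipeline steps described above: deduplicate on the inferred
# primary key and apply simple cleaning functions. Names are hypothetical.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # hypothetical project

clean_sql = """
    CREATE OR REPLACE TABLE `my-project.sales_dataset.orders_clean` AS
    SELECT
      order_id,                                       -- inferred primary key
      TRIM(LOWER(customer_email)) AS customer_email,  -- simple cleaning function
      SAFE_CAST(amount AS NUMERIC) AS amount
    FROM `my-project.sales_dataset.raw_orders`
    WHERE order_id IS NOT NULL
    QUALIFY ROW_NUMBER() OVER (PARTITION BY order_id ORDER BY ingest_time DESC) = 1
"""
client.query(clean_sql).result()  # wait for the cleaning job to finish
```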

“Typically, that whole workflow would have required a data engineer to write a lot of complex code to build that pipeline, then manage and iterate on that code over time,” Ahmad said. “Now, with the data engineering agent, it can create new pipelines from natural language. It can modify existing pipelines. It can troubleshoot issues.”

How enterprise data teams will work with data agents

Data engineers tend to be a very hands-on group.

The various tools commonly used to build a data pipeline, including data streaming, orchestration, quality and processing, don’t disappear with the new data engineering agent.

“Engineers are still aware of those underlying tools, because what we’re seeing in how data teams work is, yes, they love the agent, and they actually see the agent as an expert, a partner and a collaborator,” Ahmad said. “But often, our engineers really do want to see the code; they actually want to see the pipelines that have been created by these agents.”

So while the data engineering agent can operate autonomously, data engineers can still see exactly what it is doing. Ahmad explained that data professionals will often review the code the agent has written, then give it further suggestions to adjust or customize the pipeline.

Building a data agent ecosystem on an API foundation

Several vendors in the data space are building agentic workflows.

Startups like Altimate AI are building purpose-built agents for data workflows. Larger vendors, including Databricks, Snowflake and Microsoft, are all building their own agentic technologies that can also help data professionals.

Google’s approach is somewhat different in that it is building its agentic AI services for data on its Gemini Data Agents API. That approach can allow developers to embed Google’s natural language processing and code interpretation capabilities in their own applications. It represents a shift from closed, first-party tools to an extensible platform approach.

“Behind the scenes, all of these agents are actually built as a set of APIs,” Ahmad said. “With these API services, we increasingly intend to make these APIs available to our partners.”

The umbrella API service will expose both the foundational API services and the agent APIs. Google has lighthouse preview programs in which partners embed these APIs in their own interfaces, including notebook providers and ISV partners building data pipeline tools.
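
The article doesn’t spell out the API surface, but as a rough, purely illustrative sketch, a partner tool embedding a natural-language data question behind such an API might look like the following. The endpoint, payload fields and response shape here are hypothetical placeholders, not documented Gemini Data Agents API details.

```python
# Illustrative only: how a partner tool might send a natural-language data question
# to an agent service over HTTP. The endpoint, payload fields and response shape
# are hypothetical, not the documented Gemini Data Agents API surface.
import requests

ENDPOINT = "https://example.googleapis.com/v1/dataAgents:ask"  # hypothetical endpoint

resp = requests.post(
    ENDPOINT,
    headers={"Authorization": "Bearer YOUR_ACCESS_TOKEN"},      # placeholder credential
    json={
        "question": "Which regions had the largest week-over-week drop in orders?",
        "datasource": "bigquery://my-project.sales_dataset",     # hypothetical reference
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json())  # e.g. generated SQL, a natural-language answer, or chart data
```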

What it means for enterprise data teams

For enterprises looking to lead in AI-driven data operations, this announcement signals an acceleration toward autonomous data workflows. These capabilities could deliver meaningful competitive advantages in setup time and resource efficiency. Organizations should assess their current data team capacity and consider pilot programs for pipeline automation.

For enterprises planning later AI adoption, the integration of these capabilities into existing Google Cloud services changes the landscape. Advanced data agent infrastructure becomes standard rather than premium. That shift potentially raises baseline expectations for data platform capabilities across the industry.

Organizations must balance efficiency gains against the need for oversight and control. Google’s transparency-oriented approach may offer a middle ground, but data leaders should develop governance frameworks for autonomous agent operations before broad deployment.

The emphasis on API availability suggests that custom agent development will become a competitive differentiator. Enterprises should examine how to leverage these foundational services to build domain-specific agents that address their unique business processes and data challenges.

