October 5, 2025

Data centers are eating the economy – and we aren’t even using them

As technology giants announce hundreds of billions of dollars in investments in new data centers, we are witnessing a fundamental misunderstanding of our compute problem. The industry's current approach, throwing money at massive infrastructure projects, is like adding two more lanes to a congested highway. It may offer temporary relief, but it does not solve the underlying problem.

The figures are staggering. Data center capital expenditure rose 53% year over year to $134 billion in the first quarter of 2025 alone. Meta is reportedly exploring a $200 billion investment in data centers, while Microsoft has committed $80 billion for fiscal 2025. OpenAI, SoftBank, and Oracle announced the $500 billion Stargate initiative. McKinsey projects that data centers will require $6.7 trillion worldwide by 2030. And the list goes on.

Yet here is the uncomfortable truth: most of these resources will sit substantially underused. Average server utilization hovers between 12% and 18% of capacity, while an estimated 10 million servers sit completely idle, representing roughly $30 billion in wasted capital. Even active servers rarely exceed 50% utilization, which means the majority of our existing compute infrastructure is essentially burning energy while doing nothing productive.

The highway analogy holds

Faced with traffic congestion, the instinctive response is to add more lanes. But transportation researchers have documented what is called “induced demand”: the counterintuitive finding that added capacity reduces congestion only temporarily, until it attracts more drivers and traffic returns to previous levels. The same phenomenon applies to data centers.

Building new data centers is the easy solution, but it is neither sustainable nor efficient. As I have seen firsthand in developing compute orchestration platforms, the real problem is not capacity. It is allocation and optimization. There is already an abundant supply of idle compute sitting in thousands of data centers worldwide. The challenge lies in effectively connecting this dispersed, underused capacity with demand.

Data center energy consumption is expected to triple by 2030, reaching 2,967 TWh per year. Goldman Sachs estimates that data center power demand will grow 160% by 2030. While tech giants buy up entire nuclear power plants to fuel their data centers, cities across the country are hitting hard limits on the power capacity available for new facilities.

This energy crunch exposes the strain on our infrastructure and amounts to a tacit admission that we have built a fundamentally unsustainable system. The fact that companies are now buying their own power plants rather than relying on existing grids reveals how our exponential appetite for compute has outpaced our ability to power it responsibly.

The distributed alternative

The solution is not more centralized infrastructure. It is smarter orchestration of existing resources. Modern software can aggregate idle compute from data centers, enterprise servers, and even consumer devices into unified, on-demand pools. This distributed approach offers several advantages (a simple sketch of the underlying allocation logic follows the list):

Immediate availability: Instead of waiting years for new data centers to be built, distributed networks can tap existing idle capacity right away.

Cost efficiency: Leveraging underused resources costs far less than building new infrastructure.

Environmental sustainability: Maximizing the use of existing hardware reduces the need for new manufacturing and additional energy consumption.

Resilience: Distributed systems are inherently more fault-tolerant than centralized mega-facilities.
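
To make the orchestration idea concrete, here is a minimal, illustrative sketch in Python of the kind of allocation logic such a layer might use. The node names, capacity numbers, and greedy placement policy are hypothetical, chosen only to show the shape of the problem, not to describe any particular platform.

```python
# Illustrative sketch only: a toy orchestrator that places jobs on the
# least-loaded node drawn from several independent pools of idle capacity.
# Node names, core counts, and the greedy policy are hypothetical.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    total_cores: int
    used_cores: int

    @property
    def free_cores(self) -> int:
        return self.total_cores - self.used_cores

def place_job(pool: list[Node], cores_needed: int) -> Node | None:
    """Greedy placement: pick the node with the most free cores that fits the job."""
    candidates = [n for n in pool if n.free_cores >= cores_needed]
    if not candidates:
        return None  # a real system would queue the job or look for more capacity
    best = max(candidates, key=lambda n: n.free_cores)
    best.used_cores += cores_needed
    return best

if __name__ == "__main__":
    # Idle capacity aggregated from three different sources (hypothetical numbers).
    pool = [
        Node("colo-datacenter-a", total_cores=128, used_cores=20),
        Node("enterprise-server-b", total_cores=64, used_cores=10),
        Node("edge-device-c", total_cores=8, used_cores=1),
    ]
    chosen = place_job(pool, cores_needed=16)
    print(f"Job placed on {chosen.name}; {chosen.free_cores} cores still free")
```

The point is that placement is a software decision over capacity that already exists; nothing in this loop requires pouring new concrete.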

Technical reality

The technology to orchestrate distributed compute already exists. Several network models already show how software can abstract away the complexity of managing resources across multiple providers and locations. Docker containers and modern orchestration tools make workload portability seamless. The missing piece is simply the industry's willingness to adopt a fundamentally different approach.
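
To illustrate that portability claim, the sketch below uses the Docker SDK for Python to start the same container image on whichever of two hosts looks least loaded. The host endpoints, the image, and the crude memory-per-container heuristic are assumptions for illustration; a real orchestrator would rely on much richer scheduling signals.

```python
# Illustrative only: run the same containerized workload on whichever of two
# Docker hosts currently looks least loaded. Host URLs and the image are
# hypothetical stand-ins for a real orchestrator's inventory.
import docker

HOSTS = ["tcp://dc-node-1:2375", "tcp://office-server-2:2375"]  # hypothetical daemon endpoints

def least_loaded(urls):
    """Pick the host with the most total RAM per running container (crude heuristic)."""
    best_client, best_score = None, -1.0
    for url in urls:
        client = docker.DockerClient(base_url=url)
        info = client.info()                  # daemon stats, includes MemTotal in bytes
        running = client.containers.list()    # currently running containers
        score = info["MemTotal"] / (len(running) + 1)
        if score > best_score:
            best_client, best_score = client, score
    return best_client

if __name__ == "__main__":
    client = least_loaded(HOSTS)
    container = client.containers.run(
        "python:3.12-slim",                # any portable image
        "python -c 'print(2 ** 20)'",      # stand-in for the real workload
        detach=True,
    )
    print("Started", container.short_id, "on", client.api.base_url)
```

Because the workload is packaged as a container, the choice of where it runs becomes a one-line scheduling decision rather than a procurement project.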

Companies must recognize that most servers sit idle 70% to 85% of the time. This is not a hardware problem requiring more infrastructure, nor a capacity problem. It is an orchestration and allocation problem requiring smarter software.
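
A quick back-of-the-envelope calculation, using the utilization range cited above and an assumed, purely illustrative 60% target, shows how much useful compute better allocation could unlock from hardware we already own:

```python
# Back-of-the-envelope: effective capacity gained by raising utilization.
# The 12-18% range comes from the figures cited above; the 60% target is assumed.
current_utilization = [0.12, 0.18]
target_utilization = 0.60

for u in current_utilization:
    gain = target_utilization / u
    print(f"{u:.0%} -> {target_utilization:.0%} utilization: about {gain:.1f}x "
          f"the useful compute from the same servers")
```

Three to five times the effective capacity, in other words, without building a single new facility.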

Instead of trying to build our way out with ever more expensive and destructive mega-projects, we must embrace distributed orchestration that maximizes existing resources.

This requires a fundamental shift in thinking. Rather than treating compute as something that must be owned and housed in massive facilities, we should treat it as a utility available on demand from the most efficient sources, regardless of location or ownership.

So before we ask whether we can afford to build nearly $7 trillion worth of new data centers by 2030, we should ask whether we could instead pursue a smarter, more sustainable approach to compute infrastructure. The technology to orchestrate distributed compute at scale exists today. What we need now is the vision to implement it.

The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.

