Is the window for generative AI adoption closing for companies?

Companies must act swiftly to embrace generative AI or risk being left behind.

Jun 26, 2025 - 11:10

The technology world is no stranger to hype cycles, but the arrival of generative AI marks something fundamentally different: not a wave of disruption, but a new epoch in digital transformation. Just as cloud computing redefined business operations in the last decade, generative AI is poised to reshape how entire industries operate.

Generative AI should not be seen as an incremental productivity tool, but as a foundational capability that will dictate tomorrow’s winners and losers. In the next 12 to 18 months, companies that strategically embrace AI will redefine their value propositions, business models, and operational capacity. Those that hesitate risk being left behind in an increasingly divided digital economy.

This emerging divide signals what could be seen as a “Divergent Future” – a world where companies with access to powerful AI capabilities accelerate exponentially, while those without face systemic disadvantages. The division won’t just be commercial; it will be societal. Access to AI tools is already beginning to shape education, economic mobility, and organizational competitiveness. Companies with the foresight to invest and the means to implement will shape markets; those without may find themselves struggling more and more to compete.

So, is the window closing? It depends on how fast you're moving. The companies that act decisively now, by investing in sovereign, sustainable AI infrastructure and rethinking how their people and processes create value, are the ones most likely to lead in this new era. For those who hesitate, catching up may soon become not just difficult, but impossible.

AI sovereignty as a strategic priority

As demand for AI infrastructure surges globally, sovereign infrastructure is quickly becoming a key differentiator. But AI sovereignty isn't about ticking compliance boxes; it’s about having true control. This means owning your infrastructure, ensuring independence from foreign entities, managing proprietary data entirely within a given jurisdiction, and maintaining legal autonomy. These four areas – infrastructure control, foreign independence, data ownership, and legal autonomy – form the basis of meaningful AI sovereignty.

Recent shifts in geopolitical sentiment, especially regarding data residency and access, are driving demand for AI infrastructure located outside the jurisdiction of US-based hyperscalers. Sovereign cloud infrastructure offers organizations – especially those in regulated sectors such as finance, healthcare, and government – a secure alternative that avoids exposure to extraterritorial legislation like the US Patriot Act.

But genuine AI sovereignty doesn’t come from a sticker that says, “local cloud.” It requires intentional design: where your data lives, who owns the IT infrastructure, and where the provider itself is based all matter. Without aligning all three, claims of sovereignty can fall apart under scrutiny.

However, the AI landscape is rarely quiet, and recent political developments have only added to its complexity. In the United States, the Biden administration introduced the BIS AI diffusion rule, an export-control framework designed to govern the global distribution of GPUs. Under the framework, countries are categorized into tiers, with allied nations such as the UK and most of Europe granted wider access, while others face restrictions or outright bans. This has significant implications for AI development, tightly constraining where infrastructure can be located.

In this new reality, infrastructure strategy can no longer be treated as a back-office decision. Companies must now factor geopolitical volatility, supply chain dynamics, and regulatory risk into their AI infrastructure plans.

The environmental cost of AI: What questions companies must ask

The environmental footprint of AI cannot be ignored. With large-scale model training and inference workloads becoming the norm, data centers are consuming more energy than ever before. A single AI GPU today can draw over 1,200 watts, equivalent to 12 standard laptops. In aggregate, these GPUs are housed in facilities that can contain tens of thousands of units, representing a significant strain on energy systems globally.
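To make the scale concrete, the figures above can be turned into a rough back-of-envelope estimate. This sketch uses the article’s numbers (roughly 1,200 W per AI GPU, facilities housing tens of thousands of units); the laptop wattage, facility size, and overhead factor are illustrative assumptions, not vendor specifications.

```python
# Back-of-envelope estimate of aggregate GPU power draw.
# All parameters are illustrative assumptions based on the text.

GPU_POWER_W = 1_200          # per-GPU draw cited in the article
LAPTOP_POWER_W = 100         # assumed "standard laptop" draw (1,200 W ~ 12 laptops)
GPUS_PER_FACILITY = 50_000   # "tens of thousands" of units -- assumed midpoint
PUE = 1.3                    # assumed power usage effectiveness (cooling/overhead)

it_load_mw = GPU_POWER_W * GPUS_PER_FACILITY / 1e6   # raw IT load in megawatts
total_draw_mw = it_load_mw * PUE                     # including facility overhead

print(f"One GPU draws as much as ~{GPU_POWER_W // LAPTOP_POWER_W} laptops")
print(f"IT load: {it_load_mw:.0f} MW; with overhead: {total_draw_mw:.0f} MW")
```

On these assumptions, a single facility lands in the tens of megawatts of continuous draw, which is why siting decisions and energy sourcing matter as much as the hardware itself.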

Sustainability must be built into the process from the start. That begins with selecting data center locations based on proximity to abundant renewable energy. Unlike traditional infrastructure, AI data centers don’t need to be close to urban areas. They can be strategically located in regions with surplus wind, hydro, or solar energy, as long as they have the right fiber connectivity to handle real-time data flows.

However, despite this flexibility, many hyperscalers continue to site infrastructure in fossil-fuel-dominated grids, and there is often a lack of transparency in how energy is sourced or used. Companies should ask tough questions about where their AI workloads are running, what powers them, and what emissions are associated with that usage. Without this accountability, greenwashing will continue to undermine genuine sustainability efforts.

Data centers also need to be built for long-term efficiency. This includes using the latest generation of GPUs and implementing modular architecture that supports hardware swaps without costly retrofits. Advanced cooling systems are equally critical. Traditional air cooling just isn’t enough anymore. Closed-loop liquid cooling systems should become the norm, as they’re much more efficient and use less water, which helps protect local water resources.

Selecting AI infrastructure is no longer just a technical decision — it is a sustainability commitment that demands rigorous due diligence on energy use, location strategy, and cooling technologies. Companies must embed environmental considerations into their AI strategy from the outset, asking the hard questions when selecting an AI cloud partner.

Strategic timing: The 12–18-month window

We are now in a critical phase. The next 12 to 18 months represent a strategic window for companies to act. The market is maturing rapidly, foundational models are stabilizing, and the tools to deploy AI effectively across sectors are becoming more accessible.

But this accessibility comes with responsibility. Companies must think strategically about how they deploy AI. This means more than just selecting a tool or API. It’s time to align AI with business models, workforce planning, ethical values, and environmental goals. This should be accompanied by selecting AI infrastructure partners who provide sovereignty and sustainability.

The risk of waiting is real. Late adopters will not only miss early efficiency gains, but may find themselves structurally disadvantaged in adapting to a world where AI dictates economic competitiveness. The divide will grow, and catching up will become significantly harder.

AI isn’t a “maybe” anymore; it’s foundational. It’s going to be a key factor in defining competitive advantage, not just for companies, but for entire nations and economies in the years to come. But with this shift, we need to think long-term. We need to build AI infrastructure that is scalable, ethical, sovereign, and sustainable. We must regulate AI in ways that protect society without paralyzing innovation. And we should recognize that in this era, performance alone is not enough. Trust, responsibility, and transparency will be just as important as speed and scale.


This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro