Beyond the AI bubble — from hype to lasting impact

AI’s business promise is cooling as leaders confront implementation reality. Cian Clarke, Head of AI at Nearform, argues organisations must move beyond hype, rethink model strategy, and build durable systems grounded in real data and operational discipline.


AI arrived with grand promises, set to overhaul business operations and deliver exponential growth. Yet many business leaders are only now seeing the cracks form. While AI is genuinely powerful, the current state of investment and adoption shows warning signs reminiscent of the dotcom bubble, particularly around overstated returns on investment.

Recent progress has often been incremental, not transformative. This disconnect between hype and reality is causing confusion and frustration for leaders trying to navigate new terrain.

The AI ecosystem is rapidly moving from “one or two frontier models” to a diverse marketplace of open models, specialist models, and operational tooling. That means organisations should stop asking “which single model?” and start asking “which portfolio of models and runtimes does my product need, and how will I operate them safely and cost-effectively?”.

This requires deeper understanding of the available tools and how they can be integrated. Lightweight fine-tuning techniques and model-compression methods now make smaller or open models powerful, fast and affordable enough for many real workloads. At the same time, a more mature tooling layer – one that covers prompt management, retrieval pipelines, model routing, monitoring and safety – has emerged to support multi-model operations. With compute options broadening across clouds, on-premise and specialised hardware, the practical imperative for businesses is clear: choose models based on the task, not the vendor, and adopt a hybrid approach where frontier models handle the hardest reasoning problems, while smaller, customised or self-hosted models deliver cost-effective, privacy-aligned performance at scale.
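To make the hybrid approach concrete, here is a minimal sketch of task-based model routing. The model names, task categories and routing rules are illustrative assumptions, not a prescription – the point is simply that the decision logic lives in your system, not with a single vendor.

```python
# Illustrative sketch only: model names, task categories and routing
# rules below are assumptions for demonstration, not recommendations.
from dataclasses import dataclass

@dataclass
class Route:
    model: str    # which model serves the request
    hosting: str  # "self-hosted" or "api"
    reason: str   # why this route was chosen

def route_request(task_type: str, contains_pii: bool) -> Route:
    """Pick a model based on the task, not the vendor."""
    if contains_pii:
        # Privacy-sensitive work stays on infrastructure we control.
        return Route("local-open-model", "self-hosted",
                     "data must not leave our environment")
    if task_type in ("complex_reasoning", "multi_step_planning"):
        # Reserve the expensive frontier model for the hardest problems.
        return Route("frontier-model", "api",
                     "hard reasoning justifies the cost")
    # Routine classification, extraction and summarisation run on a
    # smaller fine-tuned model that is cheaper and faster at scale.
    return Route("small-finetuned-model", "self-hosted",
                 "cost-effective for routine work")

print(route_request("summarise", contains_pii=True))
```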

Essentially, the focus is shifting from simply adopting a large language model to building a cohesive system that leverages the right components for the job. This is essential for moving beyond simple experiments to creating dependable, enterprise-grade AI applications.

Many teams expected AI to be a ‘plug and play’ solution that would instantly boost productivity. Maybe we can blame overpromising marketing: early demos made large models look like magic – give them text, they give you answers, and suddenly whole workflows seem automatable. Yet the reality of implementation has been far more complex: when you move from experiments to real products, you hit the messy, engineering- (and people-) heavy reality behind the scenes.

Arguably, the more pressing crisis right now for many companies is the underwhelming results of rolling out AI across their workforces. The common ‘scattergun’ approach of enabling Copilot or Gemini across the business – and hoping it will yield good results – is proving fruitless.

The core issue is context. Foundation models don’t have correctly scoped access to the knowledge of a business, and a myriad of privacy, safety, security and compliance requirements restricts what organisations can actually deploy. The result is a shift from “just call a model” to building full AI infrastructure: retrieval pipelines, evaluation frameworks, model routing, fine-tuning loops, observability, fallbacks, and human-in-the-loop controls.
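As a rough illustration of that shift, the sketch below strings together retrieval, evaluation, fallback and human-in-the-loop steps. Every function, model name and threshold here is a hypothetical stand-in showing the shape of such a pipeline, not a real API.

```python
# A minimal sketch of the infrastructure shift described above.
# Every helper is a hypothetical stand-in, not a real library call.

def retrieve_documents(question: str) -> list[str]:
    """Stand-in for a retrieval pipeline over scoped business data."""
    return ["(permission-checked internal document snippets)"]

def call_model(model: str, prompt: str) -> str:
    """Stand-in for a routed model call (frontier or self-hosted)."""
    return f"[{model}] answer grounded in: {prompt[:40]}..."

def quality_score(answer: str) -> float:
    """Stand-in for an evaluation framework scoring the response."""
    return 0.8

def answer_question(question: str) -> str:
    # Retrieval: ground the model in company data, not generic knowledge.
    docs = retrieve_documents(question)
    prompt = f"Context: {docs}\nQuestion: {question}"

    answer = call_model("primary-model", prompt)

    # Fallback: retry on a second model if the first response scores poorly.
    if quality_score(answer) < 0.7:
        answer = call_model("fallback-model", prompt)

    # Human-in-the-loop: low-confidence answers are escalated, not shipped.
    if quality_score(answer) < 0.5:
        return "Escalated to a human reviewer."

    return answer  # observability/logging would hook in here

print(answer_question("What is our parental leave policy?"))
```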

Without a clear strategy to connect proprietary business data to these powerful tools, the outputs are generic and fall short of expectations.

As the initial excitement settles, a clearer picture of AI’s future is emerging. By 2026, we will see a divergence between fleeting fads and durable, high-value applications. The uses that prove lasting will be those grounded in solving specific business problems and delivering tangible results.

Could there be a reckoning coming? We predict that by 2026, many businesses that have rolled out generative AI tools in this scattergun way will be so disappointed with the results that they stop renewing licences, or even abandon their AI initiatives altogether. Survival and success will depend on strategic foresight.

The winners will be those who have thought more deeply about their rollout plans and deliver purposeful AI solutions grounded in the unique data, information and knowledge requirements of their business. This means moving beyond generic applications and building custom solutions that leverage a company’s unique assets – including its people.



