Are we heading for another big freeze? ‘AI winter’ is a term that first emerged in the late 1970s, as disillusionment with contemporary AI projects came to a peak. Expectations had been built up over the preceding decades, with scientists and researchers declaring their intention to create machines that could think like humans. But eventually, the public realized that these projects were not advancing at the rate promised.

With the wider business community and international governments similarly disappointed, interest in the technology – and, crucially, investment – dried up. Optimism gave way to realism and this had a major impact on research and development in the field. 

In the early part of the 21st century, AI was again gaining momentum. Machine learning and deep learning techniques enabled advances in neural networks, with image recognition and natural language processing capabilities developing quickly. Since then, we’ve been in an AI summer, with investment in AI reaching unprecedented levels.

The question on many people’s minds now is: will this momentum last? Gartner’s Hype Cycle for AI technologies, published in November 2024, suggests that many AI technologies – including generative AI, foundation models, edge AI and AI engineering – have either reached or gone past the ‘peak of inflated expectations’ and are now heading fast towards the dreaded ‘trough of disillusionment’. Those who subscribe to a cyclical view of history are anticipating another AI winter.

Why another AI winter may be coming

There are some signs that winter is coming. As Rohan Whitehead at the Institute of Analytics points out, the wave of excitement we saw around generative AI and its potential to generate text and art – as well as its ability to answer questions and provide personalized recommendations – has started to wane. Businesses are also running into hurdles implementing AI, with almost three-quarters of companies that have adopted AI failing to see any returns, according to research from the Boston Consulting Group.   

One of the biggest drawbacks present-day AI suffers from is poor performance on long-horizon tasks. When a task requires a significant amount of planning and multi-step reasoning, AI is more likely to fall short of the expected result due to shortcomings in its training data. Generative AI models are also prone to hallucinations, especially when little context is provided or when their reasoning is constrained by limited context.

These problems alone could be enough to sow the seeds of dissatisfaction. However, taking an objective position, it seems likely that these issues are merely growing pains that large, well-financed and heavily-resourced AI companies can solve over time. 

But AI is an umbrella term that covers a very broad range of technologies, and a lack of progress in other areas – such as Artificial General Intelligence (AGI) – could negatively affect wider perceptions of AI. AGI refers to AI that has human-like levels of cognition, reasoning and problem-solving and that could be applied to a wide range of problems or tasks.

Gartner’s hype cycle predicts that AGI is not likely to reach the peak of inflated expectations for around 10 years. While the progress we’ve made with AI in the past decade or so cannot be overstated, if there’s a perceived lack of progress towards AGI in the next few years, we could see the narrative change and positivity start to dwindle.

The limitations of technology 

For significant progress to be made towards AGI, there will also need to be similarly big steps forward in architecture, hardware, and computational engineering. The systems we use today — especially large language models built on transformer architectures — have delivered genuinely impressive results. Their ability to generalise across tasks and generate coherent and useful output is a feat of scale and design. 

Architecturally, transformers were never designed for general reasoning or long-term planning. They excel at pattern recognition and language prediction, and recent advances have shown encouraging signs in areas like structured reasoning and multi-step deduction. But these remain incremental steps rather than foundational breakthroughs. The road to AGI is still far from linear — with some prominent researchers, including Yann LeCun, expressing doubt that the current trajectory will be enough to reach it.

On the hardware side, we’re fast approaching physical constraints. While chipmakers promote 3nm and upcoming 2nm process nodes, these are, increasingly, marketing terms. At these scales, transistor gates are just about a dozen atoms wide. Beyond this, fabrication becomes unstable and unpredictable, nudging us into the realm of quantum limitations. If future models require orders of magnitude more compute, it’s not clear where that power will come from — at least not affordably or sustainably. Without major advances in chip-level efficiency or paradigm-shifting hardware, continued scaling may come with disproportionate cost.

Mathematically, deep learning is pushing up against its own engineering boundaries. Low-precision floating point operations introduce rounding errors and instability. Gradients can explode or vanish. Despite new tricks like grouped or sparse attention, handling long inputs is still expensive. And the internal layers of these networks, at extreme depths, often become ill-conditioned — fragile under small numerical perturbations.
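These numerical limits are easy to demonstrate. The sketch below (using NumPy, as an illustrative assumption rather than anything from the article) shows two of the effects mentioned above: a float16 addition that loses a small increment entirely to rounding, and a gradient signal that vanishes when repeatedly scaled down, as can happen during backpropagation through many layers.

```python
import numpy as np

# Rounding in low precision: float16 has a 10-bit mantissa, so the gap
# between representable values around 1024 (= 2^10) is a full 1.0.
# Adding 0.25 is therefore lost entirely to rounding.
a = np.float16(1024.0)
b = np.float16(0.25)
print(a + b)  # prints 1024.0 — the increment disappears

# Vanishing gradients: multiplying a gradient by a factor < 1 at each of
# many layers drives it towards zero, leaving early layers untrained.
grad = np.float32(1.0)
for _ in range(100):
    grad *= np.float32(0.5)  # stand-in for a per-layer Jacobian scale
print(grad)  # ~7.9e-31 — numerically present, but useless for learning
```

The same mechanism in reverse (per-layer factors greater than 1) produces exploding gradients, which is why techniques such as gradient clipping and careful normalisation are standard practice.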

There’s also the issue of data. Most publicly available text on the internet has already been used to train today’s leading models. We are rapidly approaching the ceiling of high-quality human-produced content. Going forward, we’ll either need to generate synthetic data at scale — or rely on increasingly narrow and domain-specific corpora. Both come with trade-offs in fidelity, diversity, and reliability.

None of this should be mistaken for pessimism: these are technical challenges, not existential blockers. But if progress in architecture, hardware, and computational engineering fails to keep pace with the growing complexity of AI systems, then the momentum we’ve seen could begin to slow. If that happens, then another AI winter seems inevitable.

What would an AI winter look like?

Another AI winter would mean a slowdown in AI development and a loss of momentum. But even if one arrives, we won’t lose the progress we’ve already made, unless there is a significant change in regulation around AI.

And there are two reasons why regulators could tighten the rules around AI. Firstly, it could be a move to protect jobs. If the workforce is significantly impacted by AI and mass layoffs occur, governments and lawmakers may step in. Secondly, the carbon footprint of AI is well-documented, and if use of these technologies rises to unsustainable levels, then we could see regulation introduced to restrict them.

However, if regulation stays the same, then another AI winter could very well prove to be a chance for the industry to step back from the relentless pressure to move forward. Companies could spend time catching up: ironing out the issues in their current AI implementations and attempting to harness the efficiencies and capabilities they were initially promised. They can assess the AI tools they use, their business model and how AI fits into it, and what their competitors are doing with AI.

In many ways, the absence of an AI winter presents a bigger risk for many organisations than the arrival of one. If AI keeps progressing at a sustained pace, companies without mature AI strategies will fall further and further behind, and may not even realise that it is happening.

Takeaway: stay sharp, not frozen

So, should companies start stockpiling for another AI winter? Not necessarily. For most businesses outside the AI industry, a slowdown wouldn’t be a catastrophe — it could even offer breathing room to catch up without racing against constant breakthroughs. But preparing just for a break, especially one that may not arrive, is a risky bet.

On the other hand, for AI-focused companies, a winter would sting. Fewer returns, tighter investment flows, and unmet expectations could shake confidence. Promising sweeping transformation powered by AI, only to fall short, can damage credibility and business momentum.

The truth is, we’re in a moment of uncertainty. Things are evolving fast, with signals pointing in all directions. Betting entirely on nonstop progress or a hard freeze would be premature. Caution is warranted — but so is awareness.

What’s clear, however, is that catching up with the current wave of AI is critical. Many sectors are already being reshaped, and the potential for AI to augment workflows, decision-making, and creative work is real and rapidly maturing. Companies that ignore it risk falling behind, regardless of the season ahead.