AI giants unready for human-level intelligence risks

AI firms face scrutiny over safety after a new report finds leading developers unprepared for human-level intelligence risks. Major industry players receive failing marks for planning, highlighting gaps in safety strategies as artificial general intelligence moves closer.


The world’s leading artificial intelligence (AI) companies are advancing rapidly towards human-level capabilities — but without a credible safety net. The Future of Life Institute (FLI), a US-based AI safety non-profit, has warned that major developers are “fundamentally unprepared” for the consequences of the systems they are building.

In its latest report, the FLI revealed that none of the seven major AI labs it assessed, including OpenAI, Google DeepMind, Anthropic, xAI, and the Chinese firms DeepSeek and Zhipu AI, scored higher than a D in "existential safety" planning. This category measures how seriously each company is preparing for the prospect of artificial general intelligence (AGI) — systems matching or exceeding human performance across nearly all intellectual tasks. Anthropic achieved the highest overall grade, a C+, followed by OpenAI (C) and Google DeepMind (C-), but no firm received a passing mark for its planning around existential risks, such as catastrophic failures in which AI could spiral out of human control.

Max Tegmark, FLI co-founder, likened the situation to “building a gigantic nuclear power plant in New York City set to open next week — but there is no plan to prevent it having a meltdown.”

The criticism arrives at a pivotal moment, as AI development accelerates, driven by advances in brain-inspired architectures and emotional modelling. Last month, researchers at the University of Geneva found that large language models — including ChatGPT-4, Claude 3.5, and Google's Gemini 1.5 — outperformed humans in emotional intelligence tests. Yet these increasingly human-like qualities conceal a deep vulnerability: a lack of transparency, control, and understanding.

FLI’s findings come just months after the international AI Action Summit in Paris, which called for collaboration across borders to ensure the safe development of AI. Since then, powerful new models such as xAI’s Grok 4 and Google’s Veo 3 have expanded what AI can do, but, according to FLI, there has not been a matching increase in risk mitigation efforts. SaferAI, another watchdog, released its own findings alongside FLI’s, describing the industry’s current safety regimes as “weak to very weak” and calling the approach “unacceptable.”

“The companies say AGI could be just a few years away,” said Tegmark. “But they still have no coherent, actionable safety strategy. That should worry everyone.”

AGI may be closer than anticipated

AGI, the so-called ‘holy grail’ of AI, was once thought to be decades away. Recent advances, however, suggest it may be closer than previously assumed. By increasing complexity in AI networks — adding ‘height’ in addition to width and depth — researchers say it is possible to create more intuitive, stable, and humanlike systems. This approach, pioneered at Rensselaer Polytechnic Institute and City University of Hong Kong, uses feedback loops and intra-layer links to mimic the brain’s local neural circuits, moving beyond the transformer architecture behind today’s large language models.

Ge Wang, one of the study’s authors, compared the shift to “adding a third dimension to a city map: you’re not just adding more streets or buildings, you’re connecting rooms inside the same structure in new ways. That allows for richer, more stable reasoning, closer to how humans think.”
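For readers curious what “intra-layer links” and “feedback loops” mean in practice, the sketch below is a minimal, hypothetical illustration in Python (using NumPy). It is not the Rensselaer or City University of Hong Kong implementation, and every name, weight matrix, and size is invented for the example: it simply contrasts a standard feedforward layer with one whose units also connect to each other and settle over a few feedback steps.

```python
# Illustrative sketch only: a layer with intra-layer (lateral) links and a
# feedback loop, versus a plain feedforward layer. All sizes are arbitrary.
import numpy as np

rng = np.random.default_rng(0)

def feedforward_layer(x, W_in):
    """Standard layer: activations flow strictly forward, in one pass."""
    return np.tanh(W_in @ x)

def laterally_connected_layer(x, W_in, W_lateral, steps=5):
    """Layer whose units also connect to each other (intra-layer links).
    The layer's own state is fed back and refined over several steps,
    loosely mimicking local recurrent circuits in the brain."""
    h = np.zeros(W_in.shape[0])
    for _ in range(steps):
        h = np.tanh(W_in @ x + W_lateral @ h)  # input drive + lateral feedback
    return h

x = rng.normal(size=16)                  # toy input vector
W_in = 0.3 * rng.normal(size=(8, 16))    # input-to-layer weights
W_lat = 0.2 * rng.normal(size=(8, 8))    # unit-to-unit (lateral) weights

print(feedforward_layer(x, W_in))                  # one-shot forward pass
print(laterally_connected_layer(x, W_in, W_lat))   # iteratively settled state
```

In this toy version, the “height” the researchers describe shows up as the extra feedback steps within a single layer, rather than as more layers stacked in depth or more units added in width.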

These innovations may fuel the next phase of AI development and help deepen understanding of the human brain itself, with potential applications in treating neurological disorders and exploring cognition. However, as capabilities grow, so do the risks. The AI firms named have been approached for comment.

