AI giants unready for human-level intelligence risks

AI firms face scrutiny over safety after a new report finds leading developers unprepared for human-level intelligence risks. Major industry players receive failing marks for planning, highlighting gaps in safety strategies as artificial general intelligence moves closer.


The world’s leading artificial intelligence (AI) companies are advancing rapidly towards human-level capabilities — but without a credible safety net. The Future of Life Institute (FLI), a US-based AI safety non-profit, has warned that major developers are “fundamentally unprepared” for the consequences of the systems they are building.

In its latest report, the FLI revealed that none of the seven major AI labs assessed, including OpenAI, Google DeepMind, Anthropic, xAI, and Chinese firms DeepSeek and Zhipu AI, scored higher than a D on its “existential safety” index. This index measures how seriously each company is preparing for the prospect of artificial general intelligence (AGI) — systems matching or exceeding human performance across nearly all intellectual tasks. Anthropic achieved the highest grade, a C+, followed by OpenAI (C) and Google DeepMind (C-), but no firm received a passing mark in planning for existential risks, such as catastrophic failures in which AI could spiral out of human control.

Max Tegmark, FLI co-founder, likened the situation to “building a gigantic nuclear power plant in New York City set to open next week — but there is no plan to prevent it having a meltdown.”

The criticism arrives at a pivotal moment as AI development accelerates, driven by advances in brain-inspired architecture and emotional modelling. Last month, researchers at the University of Geneva found that large language models — including ChatGPT-4, Claude 3.5, and Google’s Gemini 1.5 — outperformed humans in emotional intelligence tests. Yet these increasingly human-like qualities conceal a deep vulnerability: a lack of transparency, control, and understanding.

FLI’s findings come just months after the AI Action Summit in Paris — the follow-up to the UK’s 2023 AI Safety Summit — which called for international collaboration to ensure the safe development of AI. Since then, powerful new models such as xAI’s Grok 4 and Google’s Veo 3 have expanded what AI can do, but, according to FLI, there has not been a matching increase in risk mitigation efforts. SaferAI, another watchdog, released its own findings alongside FLI’s, describing the industry’s current safety regimes as “weak to very weak” and calling the approach “unacceptable.”

“The companies say AGI could be just a few years away,” said Tegmark. “But they still have no coherent, actionable safety strategy. That should worry everyone.”

AGI may be closer than anticipated

AGI, the so-called ‘holy grail’ of AI, was once thought to be decades away. Recent advances, however, suggest it may be closer than previously assumed. By increasing complexity in AI networks — adding ‘height’ in addition to width and depth — researchers say it is possible to create more intuitive, stable, and humanlike systems. This approach, pioneered at Rensselaer Polytechnic Institute and City University of Hong Kong, uses feedback loops and intra-layer links to mimic the brain’s local neural circuits, moving beyond the transformer architecture behind today’s large language models.

Ge Wang, one of the study’s authors, compared the shift to “adding a third dimension to a city map: you’re not just adding more streets or buildings, you’re connecting rooms inside the same structure in new ways. That allows for richer, more stable reasoning, closer to how humans think.”
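The idea of “height” — lateral links between units in the same layer, revisited over a few feedback steps — can be sketched in a toy form. The code below is an illustrative simplification, not the researchers’ published model: the weights are random placeholders, and the update rule is only meant to show how intra-layer feedback differs from a single feedforward pass.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer_with_lateral_links(x, W_in, W_lat, steps=3):
    """Toy layer with 'height': after the usual feedforward pass (W_in),
    units within the same layer exchange signals through lateral weights
    (W_lat) over a few settling iterations, so each unit's state is
    refined by its peers' outputs."""
    h = np.tanh(x @ W_in)                   # standard feedforward pass
    for _ in range(steps):                  # intra-layer feedback loop
        h = np.tanh(x @ W_in + h @ W_lat)   # revisit state using peers
    return h

# Hypothetical sizes and random weights, purely for illustration.
d_in, d_hidden = 4, 8
W_in = rng.normal(scale=0.5, size=(d_in, d_hidden))
W_lat = rng.normal(scale=0.1, size=(d_hidden, d_hidden))

x = rng.normal(size=(1, d_in))
h = layer_with_lateral_links(x, W_in, W_lat)
print(h.shape)  # (1, 8)
```

In a plain feedforward layer the loop would be absent; here the lateral pass lets activations settle, which is the “connecting rooms inside the same structure” that Wang describes.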

These innovations may fuel the next phase of AI development and help deepen understanding of the human brain itself, with potential applications in treating neurological disorders and exploring cognition. However, as capabilities grow, so do the risks. The AI firms named have been approached for comment.
