AI is becoming a market story and a trust problem

AI can lift valuations while quietly eroding confidence inside organisations. The emerging tension is no longer technological capability alone, but whether companies can promise productivity, faster growth, and leaner structures to markets without leaving employees convinced that the gains will be financed by diminished security, thinner career paths, and weaker reciprocity.

The same AI announcement now tends to travel in two directions at once. In the market, it can read as proof that management is moving fast enough, protecting margins, and adjusting to a new software economy before competitors do. Inside the company, the same move can register as a warning that the rewards of AI will be distributed unevenly and that the price of strategic urgency will be paid in job security, slower advancement, and thinner mutual confidence. That split is becoming one of the central tensions in corporate life. It is not a side effect of AI adoption. It is increasingly part of the story itself.

Atlassian’s decision to cut roughly 1,600 roles while shifting resources toward AI and enterprise sales brought that divide into sharp focus. The company said it was reducing its workforce by about 10% to self-fund further investment, move faster, and strengthen its financial profile. In extended trading, the stock rose. That reaction was not unusual. Markets tend to reward visible signals of adaptation, especially when investors are already nervous that advances in AI could disrupt established software models. Those concerns intensified after a broad sell-off in software stocks last month, when new AI agents and plugins raised fresh questions about how much of the traditional application layer might be automated, compressed, or bypassed.

From an investor’s perspective, the logic is straightforward. If AI lowers the cost of coding, shortens product cycles, and increases pressure on incumbent software businesses to prove they can still grow profitably, then a management team that leaves its operating model untouched may look complacent rather than compassionate. Announcements about workforce reallocation, leaner structures, or sharper product focus become signals of seriousness. They suggest that leadership understands where the industry is moving and is willing to absorb disruption early rather than let it accumulate. In that reading, AI is a strategy story first, and a labour story second.

Inside organisations, the order is often reversed. Employees do not hear an abstract efficiency case. They hear an argument about which work is becoming more valuable, which work is easier to automate, and how much of the resulting productivity dividend will be shared. When that answer is vague, AI transformation begins to look like a one-way bargain: greater expectations from management, tighter headcount discipline, and less certainty about the future shape of a career. That does not make staff anti-technology. It makes them alert to incentive asymmetry. The company is asking them to help build the next model while simultaneously warning that the old bargain has expired.

This matters because execution in a skills transition depends heavily on trust. A company can convince the market that it is moving decisively while still weakening the internal capacity needed to make the change succeed. Staff who believe AI is primarily a tool for labour compression are less likely to trust reskilling promises, experiment openly, or commit to the redesign of workflows that could make the gains durable. Atlassian’s own language acknowledged the underlying issue when co-founder and CEO Mike Cannon-Brookes wrote that it would be disingenuous to pretend AI does not change the mix of skills and the number of roles required in certain areas. The hard part begins after that sentence. What replaces the old path inside the company, and who is invited into it?

The companies that handle this best are likely to separate two conversations that are too often collapsed into one. One concerns the immediate structure of the workforce. The other concerns the future distribution of opportunity inside the organisation. If AI becomes only an investor-relations narrative about productivity and margin, the market may applaud while the culture frays underneath. That is not a soft problem. It is an operating risk. The AI era is now producing not just a new set of products and valuations, but a more fundamental question about whether companies can modernise fast without hollowing out the confidence on which real transformation depends.



  • UK transformation spending leaves millions unrealised

    UK transformations are leaking value at an alarming rate today. Sullivan & Stanley says businesses lose £27m per £100m invested, as board confidence outpaces execution, adoption, and scaled AI delivery.


  • ScottishPower apprentice demand hits record high

    Energy apprenticeships are drawing unprecedented interest across the UK now. ScottishPower says applications for 150 roles topped 6,000, up 25%, as it expands hiring to support a £24bn clean energy and grid investment plan.