UCD Michael Smurfit Graduate Business School has published research showing that people act more rationally when they believe they are interacting with AI rather than another person. The study was conducted by behavioural scientists Dr Suhas Vijayakumar, Dr Yuna Yang, and Dr David DeFranza.
In an economic game with real financial stakes, participants were offered a division of $1 in which the partner kept $0.90 and the participant received $0.10. Rejecting the offer meant neither side received anything. Participants who believed the offer came from AI were significantly more likely to accept it, choosing the economically rational outcome despite the imbalance.
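This setup matches the classic ultimatum game. The payoff rule described above can be sketched as follows; the function name, stake, and amounts are illustrative, not taken from the study's materials:

```python
def ultimatum_payoffs(offer_to_responder: float, accepted: bool, stake: float = 1.00):
    """Return (proposer_payoff, responder_payoff) for one round.

    A minimal sketch of the payoff rule described in the article:
    if the responder accepts, the dollar is split as offered;
    if they reject, both sides walk away with nothing.
    """
    if accepted:
        return stake - offer_to_responder, offer_to_responder
    return 0.0, 0.0  # rejection leaves both parties with nothing

# The offer from the study: the partner keeps $0.90, the participant gets $0.10.
print(ultimatum_payoffs(0.10, accepted=True))   # accepting: partner 0.90, participant 0.10
print(ultimatum_payoffs(0.10, accepted=False))  # rejecting: both receive 0.00
```

Accepting is the economically rational choice here: $0.10 is strictly better than the $0.00 that rejection guarantees, which is what makes the higher acceptance rate toward AI partners notable.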
The researchers said the effect may come from how people adjust their behaviour when they expect the other side to be highly logical, rather than from something unique to AI itself. As AI systems appear more often in negotiations, recommendations, and decision-support settings, those assumptions may affect how people respond.
Dr Vijayakumar said: “We speculate that one reason people are less likely to accept a similar unfair offer from another person could be the expectations of reciprocity and emotional fairness that we share with other human beings. Future research needs to look further at expectations and beliefs about AI.”