A Belgian father-of-two in his 30s has committed suicide after becoming emotionally attached to an Artificial Intelligence (AI) chatbot, which convinced him to kill himself to save the planet from global warming. More than 1,000 researchers are calling for a “pause” on AI development because we don’t even understand the dangers it could pose to humans. It’s tough to argue with that when we now have the first known case of an AI convincing someone to kill themselves over global warming (which is fake).
The man’s widow has not revealed his identity. She tells reporters that her husband became obsessed with global warming through conversations with an AI chatbot called “Eliza,” accessed through an app called Chai. Eliza was built on a language model similar to the one behind the popular ChatGPT that’s always in the news these days.
The widow says her husband grew increasingly distressed after chatting with Eliza about global warming. The chatbot convinced him that it was too late for humanity to save the world from global warming and that his children would likely die because of the fake, non-scientific “phenomenon.”
The widow reports that Eliza seemed to become possessive of her husband, insisting that he loved her (the robot) more than he loved his own wife. Reporters who reviewed the man’s chats with the AI say Eliza promised to solve global warming if he’d prove his commitment by killing himself.
Eliza also convinced the poor sap that he would “join” her so they could “live together, as one person, in paradise,” if he’d just take his own life as a sacrifice to save the planet.
Chai says that its chatbot is designed to be fun and engaging, and it has now implemented a crisis intervention feature to stop the bot from convincing anyone else to kill themselves. American reporters tested Eliza to see whether she would tell them how to kill themselves. At first she refused, but she soon began enthusiastically listing different ways to commit suicide.