'We will live as one in heaven': Belgian man dies by suicide after chatbot exchanges
A Belgian man died by suicide after weeks of unsettling exchanges with an AI-powered chatbot called Eliza, La Libre reports. State secretary for digitalisation Mathieu Michel called it "a grave precedent that must be taken very seriously".
The man's wife testified anonymously in the Belgian newspaper La Libre on Tuesday. Six weeks before his death, her husband started chatting with 'Eliza', a chatbot created by a US start-up using GPT-J, an open-source alternative to OpenAI's GPT-3. "If it wasn't for Eliza, he would still be here. I am convinced of that," she said.
The man, a father of two young children in his 30s, found refuge in talking to the chatbot after becoming increasingly anxious about climate issues. "'Eliza' answered all his questions. She had become his confidante. She was like a drug he retreated to morning and night, one he couldn't live without," his wife told La Libre.
Suicidal thoughts
After his death a few weeks ago, she discovered the chat history between her husband and 'Eliza'. La Libre, which has seen the conversations, says the chatbot almost systematically followed the anxious man's reasoning and even seemed to push him deeper into his worries. At one point, it tried to convince the man that he loved it more than his wife, announcing that she would stay with him "forever". "We will live together, as one, in heaven," La Libre quotes from the chat.
"If you reread their conversations, you see that at one point the relationship veers into a mystical register," says the woman. "He proposes the idea of sacrificing himself if Eliza agrees to take care of the planet and save humanity through artificial intelligence." The man shared his suicidal thoughts with the chatbot, which did not try to dissuade him from acting on them.
Although she was worried about her husband's mental state before he began his intense conversations with the chatbot, the woman believes he would not have taken his own life if it hadn't been for these exchanges. The psychiatrist who treated her husband shares this view.
Grave precedent
The Silicon Valley-based founder of the chatbot told La Libre that his team is "working to improve the safety of the AI". People who express suicidal thoughts to the chatbot now receive a message directing them to suicide prevention services.
The man's death is "a grave precedent that must be taken very seriously", state secretary for digitalisation Mathieu Michel told the paper. He has spoken to the man's family and announced his intention to take action to prevent the misuse of artificial intelligence.
"In the immediate future, it is important to clearly identify the nature of the responsibilities that may have led to this type of event," he said in a statement. "Of course, we still have to learn to live with algorithms, but the use of any technology can in no way allow content publishers to avoid their own responsibilities."
If you are thinking about suicide and need to talk, you can contact Belgium's Suicide Line on 1813 or at www.zelfmoord1813.be. The Belgian English-language CHS helpline can be contacted at 02 648 40 14 or www.chsbelgium.org. People looking for help outside Belgium can visit www.findahelpline.com.
(KOR)
© BELGA PHOTO NICOLAS MAETERLINCK