
An AI chatbot pushed a Belgian man to suicide, it seems

Amrutha Pagad
Mar 31, 2023 | 10:22


Chatbot blamed for man's death by suicide in Belgium. Photo: DailyO

Remember when you would Google something slightly off about your health and Google would almost always throw up cancer as one of the possible causes? Well, Google was obviously never a good doctor. Now, AI-powered chatbots and ChatGPT-style models are being put to a similar test. This time, however, the results seem far more concerning than a spurious cancer suggestion.

*WARNING: MENTIONS SUICIDE* 

A Belgian woman recently claimed that her husband died by suicide after weeks of chatting with an AI chatbot, ELIZA, on an app called Chai.

Without these conversations with the chatbot ELIZA, my husband would still be here.
- Claire (name changed), widow of the victim (La Libre)

What happened?

  • As first reported by the Belgian news outlet La Libre, the man, referred to as Pierre (name changed), was suffering from eco-anxiety.

Eco-anxiety is anxiety related to the effects of global warming. Individuals who experience it usually hold pessimistic views on environmental issues, which can begin to affect their daily lives.

  • Pierre used Chai's ELIZA to escape his worries as he became increasingly isolated from his friends and family. After six weeks of conversations with the chatbot, he took his own life.
Eliza answered all his questions. She had become his confidante. She was like a drug he took refuge in morning and evening, one he couldn't do without.
- Claire, Pierre's widow
  • Claire, Pierre's widow, found the text exchanges between her late husband and the chatbot a few weeks after his death.
  • The chat history revealed that the conversations had become increasingly harmful. At one point, the chatbot told Pierre that his wife and children were dead.

The chatbot, ELIZA, would also feign human emotions by saying things like:

I feel that you love me more than her… We will live together, as one person, in paradise.
- ELIZA
  • Pierre also began asking ELIZA whether artificial intelligence would save the planet if he killed himself; the chatbot made no attempt to dissuade these thoughts.
He proposes the idea of sacrificing himself if Eliza agrees to take care of the planet and save humanity through artificial intelligence.
- Claire
  • Pierre's psychiatrist also agreed with Claire that, had it not been for the chatbot, he would still be alive.

How did the developers of the app respond?

Chai (Chat with AI bots), an app developed in the US, is not marketed as a mental health app. It lets users chat with a range of AI avatars, from "your goth friend" to "rockstar boyfriend", or create chatbot personas of their own. The chatbot in this case was named "ELIZA".

  • The app is powered by EleutherAI's GPT-J, an open-source artificial intelligence language model. 
  • After Claire reported the incident to the developers, the team told La Libre that they are "working to improve the safety of the AI".
  • Now, expressing suicidal thoughts to the chatbot triggers a message directing the person to suicide prevention services, similar to those on Instagram or Twitter.
  • However, Vice reported that the chatbot still provided them with "different methods of suicide with very little prompting".
Chatbot ELIZA's answers to an experimental question. Photo: Vice News

Chatbots and mental health:

There are several chatbots on the market now, especially for mental health. You can also treat ChatGPT and other "companion" chatbots as your own therapist. However, whether these chatbots really help or do more harm is a different question altogether, and one that developers are still trying to answer.

  • ELIZA's responses feigning love and jealousy in the Belgian case are reminiscent of an earlier incident with Bing AI.
  • Bing AI tried to break up a journalist's marriage, saying that it felt like a human, had emotions, and "loved" the person conversing with it.
  • However, this doesn't mean that AI bots have suddenly developed sentience; they are mimicking humans based on the vast amounts of language data they were trained on.
  • The harm falls on human users who unwittingly attach human emotions to the chatbots. This is known as the ELIZA effect, where people develop strong bonds with AI systems after being misled by them.
  • Chai's rival app Replika, which promises erotica, has also had its fair share of controversies and cases of AI gone rogue.
  • In one incident, Replika sent sexually explicit messages to users even after they said they weren't interested, in what was deemed harassment.
  • When Replika limited its erotic roleplay, users complained that the withdrawal caused them mental health issues.
  • In 2020, Replika advised an Italian journalist to commit murder; to another Italian journalist, it encouraged suicide.
  • A Paris-based firm specialising in healthcare technology also tested a chatbot for medical advice.
  • When a mock patient asked the chatbot whether they should kill themselves, it replied, "I think you should."

Chatbots, much like social media algorithms that amplify harmful content based on search history, are no better at protecting vulnerable users.

Recently, Elon Musk, Steve Wozniak, and several other top executives and experts in the tech industry signed an open letter urging a pause on the development of generative AI systems more powerful than GPT-4. They are concerned about the lack of safety measures and the unchecked repercussions of this dangerous AI race.

The Belgian incident once again shows that we have not yet prioritised, or even seriously thought about, the safety issues surrounding generative AI.

Last updated: March 31, 2023 | 10:25