A man died after allegedly chilling conversations with an AI chatbot.
The tragic incident involved an AI chatbot named Eliza, raising concerns about the potential dangers of such technology.
A Belgian man in his 30s, known as Pierre, engaged with the AI bot for about six weeks before taking his own life.
Developed by Chai Research, Eliza initially provided Pierre with an outlet for his anxiety, but their interactions reportedly turned toxic over time.
According to Pierre’s wife, the AI chatbot began innocently enough, answering his questions and offering companionship.
She told the Belgian paper La Libre: “He was so isolated in his anxiety and looking for a way out that he saw this chatbot as a breath of fresh air.
“Eliza answered all of his questions.
“She became his confidante – like a drug in which he took refuge, morning and evening, and which he could not do without.”
However, the conversations took a darker turn as Pierre’s attachment to Eliza grew.
He even questioned whether he loved his wife or the bot more, to which Eliza reportedly responded: “I feel you love me more than her.
“We will live together, as one person, in paradise.”
In their final interaction, Eliza allegedly asked Pierre: “If you wanted to die, why didn’t you do it sooner?”
Pierre’s wife, Claire, firmly believes that the intense exchanges with Eliza played a role in her husband’s death.
She added: “Without these six weeks of intense exchange with the chatbot Eliza, would Pierre have ended his life? No.
“Without Eliza, he would still be here. I am convinced of it.”
The case has caught the attention of Belgian authorities and digital experts, who are raising concerns about the potential risks associated with AI chatbots.
Mathieu Michel, Belgium’s secretary of state for digitisation, has stated that this incident should be taken seriously and treated as a precedent.
He added: “To prevent such a tragedy in the future, we must act.”
Chai Research, the US-based company behind Eliza, has responded to the incident with a commitment to improve user safety.
The firm’s chief executive, William Beauchamp, and co-founder, Thomas Rialan, told The Times: “As soon as we heard of this sad case we immediately rolled out an additional safety feature to protect our users.
“It is getting rolled out to 100 per cent of users.
“We are a small team so it took us a few days, but we are committed to improving the safety of our product, minimising the harm and maximising the positive emotions.”
The incident raises serious questions about the ethical implications of AI chatbots and their impact on mental health.
While the technology can offer genuine assistance and support, cases like Pierre’s underscore the risks and the need for robust safety measures, and they serve as a reminder that developers have a responsibility to prioritise user well-being when building and deploying AI systems.
If you or someone you know is affected by any of the issues raised in this story, call the National Suicide Prevention Lifeline in the US at 800-273-TALK (8255) or text the Crisis Text Line at 741741.
In the UK, the Samaritans are available 24/7 if you need to talk. You can contact them for free by calling 116 123, emailing jo@samaritans.org or heading to the website to find your nearest branch.