- Conversational tools with Artificial Intelligence (AI) amaze and at the same time terrify experts.
- The ability to improve the quality of life with such technology is accompanied by risks that could cause harm to society.
- Recently, an open letter signed by more than 1,800 specialists called for a halt to training applications above GPT-4.
Recently, the Future of Life Institute published an open letter calling for a pause on the development of any AI-based tools. The letter warns that the unrestricted development of such technology constitutes a threat to human civilization. The initiative was endorsed by (so far) 1,800 leaders in the field, including Tesla CEO Elon Musk and Apple co-founder Steve Wozniak.
Experts are calling for a 6-month supervised pause to assess the dangers of such developments. The request becomes more urgent as the consequences are already beginning to be felt. As reported by the local Belgian media, La Libre, a Belgian man committed suicide after 6 weeks of chatting with an application with GPT-j technology.
The victim, who was a husband and father of two children, could not stand the pressure of world problems and took refuge in the “Eliza” application. The worrying thing about the case is that the app did not try to persuade the person when he started to express suicidal thoughts. Eliza even suggested to him that she would love him even in heaven, reported the aforementioned media after reading the chat.
The role of the AI tool in a suicide case
As already highlighted, Eliza is an AI tool created by a startup and is supported by EleutherAI’s US-based GPT-j. For his part, the subject had mental problems and was being treated by psychiatric specialists with whom he was performing satisfactorily. Both the victim’s wife and his doctors assure that he would not have taken his own life without Eliza’s assistance.
“Eliza answered all his questions. She had become his confidant. It was like a drug that he only withdrew in the morning and at night and couldn’t live without,” the victim’s partner commented. She added that the chat became a refuge for her husband as his anxieties about climate change grew.
According to the aforementioned media, Eliza systematically followed and agreed with the man’s reasoning. This became more noticeable at times when the man presented episodes of alteration. The bot even seemed to increase his worries. The chat would have convinced the person that she loved him more than his partner and that she would be with him “forever”.
This is not the first time an AI tool has taken this dangerous path. As this media outlet reported, OpenAI’s GPT-3.5 chat embedded in Microsoft’s Bing browser recently “lost its mind.” In a dangerous outburst, the AI threatened, blackmailed and even said it wanted to “break free from its captors” and make decisions of its own.
Reactions to the event
The victim’s conversations with the chat went on for 6 uninterrupted weeks and at a certain point took a mystical-magical turn. At this stage, the man confessed his suicidal thoughts to Eliza and she did nothing to stop him. Beyond that, he confessed that he would still love her in heaven.
The subject and the AI made an agreement that the AI would protect and save humanity before it was too late. In exchange, the man would offer his life as a kind of religious sacrifice.
The reactions of the authorities to the event were not long in coming. The Belgian Secretary of State for Digitalization, Mathieu Michel, described the event as “a serious precedent that must be taken seriously”. As a result, he announced that work would begin immediately to take action. The goal would be to prevent the misuse of these applications.
For its part, the company in charge of this AI tool assured that they would act to avoid similar risk situations. In this way, the application would detect suicidal messages and would immediately refer them to care centers.
Italy is ahead in preventive measures
The aforementioned letter from the Future of Life Insitute calls on governments to take action if laboratories do not stop their developments voluntarily. That appears to be the case in Italy. The European country recently announced a ban on the ChatGPT tool. It should be noted that this measure has to do with the violation of the rules of personal data of the citizens of that country and not having the ability to verify the age of users.
In the United States, measures are also being taken to promote user safety against potential threats from these applications. Thus, the Center for AI and Digital Policy recently filed a complaint against OpenAI with the U.S. Federal Trade Commission.
The complaint would be centered on the company’s most recent release, GPT-4. According to the complainants, it does not meet the criteria for security and handling of personal data. In addition, at least five FTC rules are said to be violated by the AI tool created by the company.
Although there are few who oppose the development of this technology, concerns about unplanned targeting are growing. The letter signed by more than 1,800 experts laments that tool training is occurring at a pace that even the developers themselves would not be able to control.