- AI could become “a threat to humanity,” more than 1,800 experts from the technology sector warn in an open letter.
- The specialists, including Elon Musk, call for a 6-month pause in development at artificial intelligence labs.
- Since the launch of ChatGPT at the end of last year, concerns have been raised about a potential negative drift of this technology.
Like the good guys in a science fiction movie, a large group of technology experts has called for caution with AI. In a public letter signed by more than 1,800 specialists, AI labs are asked to observe a 6-month pause. The signatories fear that a technology more powerful than GPT-4 could pose a threat to society.
The letter was published by the Future of Life Institute, and its main objective is to avoid the negative consequences of this type of technology. Its authors fear that Artificial Intelligence will be able to replace human faculties in the most diverse areas. For months, the most alarmist voices have been warning that this development will result in a more passive human species in every sense.
Consequently, if technology performs all manual and digital tasks quickly and with a lower error rate, human intelligence will become obsolete. In the not-too-distant future, it could become counterproductive for companies to employ human beings for the jobs they need done. This could lead to a dystopian reality, and it is here that the letter places special emphasis in calling for moderation.
The text goes further and warns that the companies developing these systems could be assuming a position of command for which no one elected them. In doing so, they could be making decisions that affect the lives of millions of people for the worse.
With the advent of AI, human intellect is in check
Although it may sound alarmist, the authors of the text are convinced that the human intellect would be among the first things affected. With AI delivering flawless reasoning and ready-made solutions to problems, ordinary citizens could be left as little more than pets of these technologies and of the companies behind them.
“Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and falsehoods?” the letter asks. Then, in a dystopian tone, it poses even more disturbing questions:
“Should we automate away all the jobs? Should we develop non-human minds that could eventually outnumber us, outsmart us, and make us obsolete and replace us? Should we risk losing control of our civilization?”
With these warnings, the aforementioned Cambridge-based organization seeks to lay the groundwork for ethical and responsible development of the technology. Among the founders of the institute are MIT cosmologist Max Tegmark and Skype co-founder Jaan Tallinn. The missive gained further weight when it was signed by prominent figures in the technology world.
Among those who signed the alert were SpaceX and Tesla CEO Elon Musk and Apple co-founder Steve Wozniak. The group of insiders who share the institute’s concern comfortably exceeds 1,800 people, making the letter one of the first large-scale reactions to recent developments in AI.
Musk himself has been a constant critic of developing this technology without regulation to keep it from overreaching its functions. Continuing without brakes, he believes, could do irreversible damage to the stability of society.
OpenAI “deviated from its original path”
It is worth keeping in mind that Tesla’s CEO was one of OpenAI’s co-founders in 2015. However, the tycoon left the board of the GPT-3.5 and GPT-4 creator in 2018 and has since distanced himself from it. With the company’s recent boom, the billionaire has repeatedly criticized it, saying it has deviated from its original path.
So far, OpenAI’s board has not commented on the institute’s letter or on its implications for how the company’s products are viewed. Some regulatory authorities are also beginning to voice concerns. Such is the case of the British government, which published a White Paper last Wednesday addressing the challenges of AI.
In the document, regulators are asked to closely monitor the development of this technology. Tools such as ChatGPT have become a sensation, and many experts believe they are the future of internet search. Their rise has pushed other much-hyped technologies, such as the metaverse, out of the spotlight for investors and the general public.
Even the combination of AI with cryptocurrencies is already being contemplated. It recently became known that, through an API, Bitcoin nodes running the Umbrel operating system will be able to use ChatGPT. This gives a general idea of the path this development is taking and how it could affect the lives of millions of people.
GPT-4 appeared on the market in mid-March and is, at its core, much more advanced than its predecessor, GPT-3.5. The chatbot quickly went viral thanks to its quasi-human ability to answer people’s questions. Although it sometimes “loses its mind,” its capacity to generate answers amazes users.
A monitored and audited break
But the pause the experts call for is not meant to be a breather before resuming at full speed. On the contrary, it would be a period in which ongoing developments are publicly audited. The aim would be to make known the real risks these tools pose when handled irresponsibly.
Currently, Artificial Intelligence laboratories are racing at a frenetic pace “to deploy ever more powerful digital minds that no one, not even their creators, can reliably understand, predict or control”. Competition between companies such as Microsoft and Alphabet could be the force that opens Pandora’s box, they warn. Hence the call for urgent action.
“This pause must be public and verifiable, and include all key players. If such a pause cannot be implemented quickly, governments should step in and institute a suspension.”
The specialists’ letter suggests that companies are placing their interests and the race itself above risk assessment. They stress that Artificial Intelligence is a technology that could radically change the history of life on Earth, and that it must therefore be approached, managed and planned with commensurate care and resources. However, they add: “unfortunately, this level of planning and management is not happening”.
“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” they continue. With that, they make clear that they have nothing against AI development as such, but rather against the anarchic way it is currently being pursued.
AI would lead to the disappearance of 300 million jobs
Another of the great concerns linked to Artificial Intelligence is its capacity to replace human labor. According to recent research by investment bank Goldman Sachs, AI could replace the equivalent of 300 million full-time jobs.
In Europe and the United States, the paper notes, this technology could take over a quarter of work tasks. The report also points to positive sides: AI would generate new, hitherto unknown jobs, and labor productivity would be lifted to a higher level. Still, experts agree that the impact on the labor market is difficult to predict at this stage.
The developer of GPT-4 itself warns of the havoc such technology could wreak as it advances. “A misaligned superintelligence could cause serious damage to the world. An autocratic regime with a decisive superintelligence lead could do that too,” the company states in a publication quoted by the BBC.
In that same vein, OpenAI says oversight is necessary. “At some point, it may be important to obtain an independent review before beginning to train future systems, and for more advanced efforts to agree to limit the rate of growth of computation used to create new models,” the company says.
The experts’ letter expresses full agreement with the company’s assessment, but adds that the right time is now. Dealing with AI before it becomes toxic to humanity is an immediate task, the specialists argue. Therefore, GPT-4 should be the last model trained before the pause takes effect.