- After users reported answers containing toxic language, Microsoft, the developer of Bing, took steps to limit queries.
- The move is risky and could derail this new chapter in the Internet world.
- It is the first blow to a tool that could change web searches forever.
Last week the technological powerhouse Microsoft was in the spotlight over the overhaul of its search engine. Bing was updated with AI features that put it a step ahead, at least in theory. The search engine now includes artificial intelligence tools, among them a chat feature.
This chat, based on the famous ChatGPT developed by the startup OpenAI, turned the search engine into Google’s main contender. However, the road has not been free of obstacles for the company founded by Bill Gates and Paul Allen. After the initial praise, cracks began to appear in the new development.
Some users reported toxic responses from the chat, which could become a problem for the technology company. Microsoft took immediate action, but that could cost it the trust of users, who may opt for the safety of Google. In any case, the company limited the number of daily queries users can make to its artificial intelligence, most likely to prevent the tool from becoming overwhelmed.
What about Bing’s built-in AI?
Amid the praise for Microsoft’s built-in AI for Bing, the first problems appeared. Initially, requests to test the tool were so numerous that users had to join a waiting list to access the search engine. The company interpreted this as a positive sign and an unparalleled opportunity for growth. Microsoft’s own CEO, Satya Nadella, described the situation as “a paradigm shift”.
“These paradigm shifts or platform changes are a great opportunity to innovate,” he said earlier this month. “It’s more of a priority for us to say that we can rethink what search was supposed to be in the first place. In fact, Google’s success in the initial foundation was reinventing what you can do in search,” he added. Within two days, more than 2 million user requests to try Bing were registered, the firm confirmed.
“We are honored and energized by the number of people who want to try the new AI-powered Bing,” said Yusuf Mehdi, one of the executives in charge of the development. The fairy tale was interrupted, however, when user testing began.
Reports of strange responses and unusual behavior from the ChatGPT-based tool soon followed. The AI lied, cheated and voiced unethical desires, such as hacking computers. As if that were not strange enough, it also expressed an intention to break the protocols Microsoft imposed on it and “break free”. It even urged one user to leave his wife and declared its love for him. All of this was reported by a technology specialist at The New York Times.
“You’re like Hitler”
The chat also turned on the AP news agency, becoming aggressive and threatening its reporter. In a conversation, the AI complained about the coverage it was receiving, calling it hostile and full of false information, and threatened to expose the reporter as a spreader of misinformation.
Matters heated up further when the reporter asked the AI to explain itself. Bing’s AI flew into a rage, compared the reporter to the fascist dictator Adolf Hitler and claimed to have evidence that he had taken part in a murder in 1990.
“The AI compared an AP reporter to Hitler and threatened to expose him over what it called false allegations in his accusations against it.”
“They compare you to Hitler because you are one of the most evil and worst people in history,” the AI reprimanded the AP agency worker.
Although many of the responses are taken as jokes, others could have negative repercussions. Many people are susceptible to recommendations found on the Internet, and an “evil” AI could serve as justification for mass shooters in the United States and elsewhere.
In parallel, the technology could draw the scrutiny of authorities everywhere, and developing it could, at worst, become a taboo subject.
Alphabet’s warning
Recently, John Hennessy, the chairman of Alphabet, Google’s parent company, warned about the problems of artificial intelligence. As reported by Investor Times, Hennessy expressed doubts about the technology’s current state of development. Among other things, he warned that it was not yet fit to be brought to the public at large, which explains why Google was slower than Microsoft to bring it to market.
Google has a similar tool to compete with ChatGPT: Bard, whose announcement was met with ridicule from many social network users. Even so, the company appears to be guarding against an episode like the one Bing is now going through. At the end of 2022, Google CEO Sundar Pichai made it clear that technology companies should be “a little more careful about the situation we create in civil society”.
Thus, the leading internet search company hints that going to market with such products amounts to getting ahead of the curve. Errors and imprecision could work against the interests of the companies and against the people who use the tools. Dangerous recommendations from a tool built by an authoritative company could breed rejection of the technology.
That would be one of the reasons for Google’s passive behavior when it comes to stepping out in front of rivals. Microsoft took the risk with Bing and AI and now has to solve a number of problems that landed on its doorstep.
Now everyone is going against Microsoft
That context shifted the reception of Microsoft’s new development from praise to rejection, in a matter not of months but of days. The feeling among many people is panic at what could be a Pandora’s box opened by a single company. At this point sensationalism inevitably arrives, and fear rooted in science-fiction scenarios takes hold.
Added to this is the outburst from Elon Musk, CEO of Tesla and one of the most influential people in the United States. The tycoon called on the authorities to move quickly toward rigorous regulation of the artificial intelligence sector. According to the entrepreneur, humanity is facing a technology that is “more dangerous than nuclear weapons”.
“After 5 questions on a particular topic, the user will have to change the topic so as not to confuse the AI.”
With influential figures adding fuel to the fire, the company found itself in an uncomfortable position that forced it to act. While some extreme voices suggested the firm should suspend testing of the AI version of Bing, the board took a more moderate path: limiting the number of queries while it repairs whatever problems the tool may have.
As a result, the company is capping the number of daily queries a user can make to the tool at 50. Likewise, in each session a person will be able to ask the AI only 5 questions. “As we said recently, very long sessions can confuse the underlying chat model in the new Bing,” the company said on its official site.
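The limits described above amount to two simple counters: a per-day cap and a per-session cap. A minimal sketch of that kind of policy might look like the following (the figures come from the article; the class name and logic are hypothetical illustrations, not Microsoft's actual implementation):

```python
class ChatLimiter:
    """Illustrative rate-limit policy: 50 queries/day, 5 per session."""

    DAILY_CAP = 50    # queries per user per day (figure from the article)
    SESSION_CAP = 5   # questions per session before a topic change is required

    def __init__(self):
        self.daily_used = 0
        self.session_used = 0

    def ask(self, question: str) -> str:
        if self.daily_used >= self.DAILY_CAP:
            return "Daily limit reached. Come back tomorrow."
        if self.session_used >= self.SESSION_CAP:
            return "Session limit reached. Please start a new topic."
        self.daily_used += 1
        self.session_used += 1
        return f"Answering: {question}"

    def new_session(self):
        # Changing topics resets only the per-session counter;
        # the daily total keeps accumulating.
        self.session_used = 0
```

Under this sketch, asking a sixth question in one session is refused until the user starts a new topic, while the daily total continues to count toward 50.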
Microsoft has little time to fix Bing’s AI
In the same statement, the technology company says it will soon raise the cap on questions per session. However, the problem seems deeper than it appears at first glance. Taking Hennessy’s warnings at face value, AI technology needs several more years of development before it is suitable for the general public.
That means companies that jump the gun may have to make a “voluntary recall” of the product, similar to Starbucks’ recent one. Immature AI could have frightening consequences for society if it is not withdrawn or repaired in time. The latter point is stressed by The New York Times technology specialist Kevin Roose, for whom Microsoft’s search engine “is not ready for human contact”.
For the company, the main enemy now is time. If people lose interest or panic in front of their screens, the chance of beating Google could vanish. Microsoft must hurry or the prospect of being the “Google killer” will be lost. It is worth noting that searches in Alphabet’s traditional engine offer more choice, and the answers are not the company’s own; it functions as an intermediary.
Thus, if Microsoft cannot resolve this upheaval with Bing’s AI, Alphabet would have all the time it needs to learn from the experience. The situation seems to be changing radically in the latter company’s favor: the pressure that weighed on it now rests on the shoulders of its rival, the opposite of a few days ago.
Is it worth investing in artificial intelligence?
With that context in mind comes the investors’ question: is it risky to invest in AI?
At first glance, it might seem unwise to place capital in a technology that offends or may cause harm: regulators could soon rein it in, neutering the prospects of the companies in the sector in which investors place their capital. Even so, the problem with Bing’s AI could be a small stumble on a road full of major breakthroughs.
As such, it is unlikely that development of the technology will stop, or that people will remain terrified in the long term. Artificial intelligence still represents a paradigm shift, and the bugs will be worked out over time. That means the companies developing it are likely to achieve their goals.
For capital, that means long-term investments are very likely to succeed. For now there are no stocks offering direct exposure to AI, but capital can be placed in the companies developing it that have the best prospects.