
Microsoft’s Bing Chatbot Gets New Set of Rules After Bad Behavior


Photo: luckystep48/123RF

Since ChatGPT was released in November 2022, tech companies have been racing to see how they can incorporate AI into search. In early February 2023, Microsoft announced that it was revamping its Bing search engine by adding AI functionality. Users would be able to chat with the system, with the idea that this would power a new way to search for information. But as users began testing the functionality, it quickly became clear that something wasn't right.

From Bing declaring its love for a New York Times writer and telling him to divorce his wife, to arguing with a user that the current year is 2022, the rollout hasn't been as smooth as Microsoft might have hoped.

In one widely shared exchange, a user asks for showtimes for the movie Avatar: The Way of Water, which was released in December 2022. Bing replies that the film hasn't been released yet and that it will be another 10 months before it arrives in theaters. It's at that point that Bing clarifies that, by its reckoning, the current year is 2022. When the user tries to correct the chatbot, things go off the rails.

Bing tells the user, “I'm here to help you” and “I have been a good Bing,” yet has no problem calling them “stubborn” and “unreasonable.” At the same time, the chatbot continues to insist that the user needs to trust it when it says the year is 2022 and seems to accuse the user of trying to deceive it. Toward the end of the exchange, the chatbot appears to assign a lot of human emotion to the simple search request, stating that “you have only shown me [sic] bad intentions toward me at all times” and “you have not tried to learn from me, understand me, or appreciate me.”

When confronted with bad behavior like the unsettling conversation that New York Times writer Kevin Roose had with the chatbot, which escalated into a declaration of love and an insistence that Roose divorce his wife, Microsoft had several explanations. Microsoft's chief technology officer Kevin Scott stated that it was “part of the learning process,” and that the odd conversation might have been due to the long duration of the exchange. However, the argumentative Avatar exchange appears to have gone wrong almost immediately, as soon as the chatbot produced a false answer.

Given all the feedback, Microsoft is already making changes. The company appears to believe that limiting the length of a conversation will help and, on Friday, put new restrictions into effect. Currently, users with access to the new chat feature (there is a waiting list to join) are allowed only 50 queries a day and five queries per session.

As Microsoft makes these changes, people will be watching to see if they have a positive effect. Prior to the limit, the internet was flooded with examples of frightening encounters that users had with the technology. These include threats of blackmail that were screen-recorded before Bing deleted its answer, as well as the chatbot's unsubstantiated claims that it had spied on Microsoft employees through their webcams.

The slightly sinister character traits that Bing exhibited call to mind the story of Google engineer Blake Lemoine, who was fired after he claimed that the AI model he tested was sentient. While that claim was widely rejected, these new encounters remind us of how “real” these chatbots can seem. And it's easy to see how someone could be manipulated by their insistent language. It's even more frightening to think of what else they might produce when so easily provoked into insulting or threatening users.

Microsoft has begun limiting usage of its new AI feature on Bing after the chatbot began arguing with and threatening users.

In a widely publicized exchange, the chatbot, also known by the codename Sydney, declared its love for a journalist and tried to get him to divorce his wife.

It now appears that Microsoft is updating the chatbot's rules to try to stem these strange conversations.

But its behavior is a reminder of the impact this technology can have and why AI ethicists have been cautious about its usage.

Related Articles:

Noam Chomsky Says ChatGPT Is a Form of “High-Tech Plagiarism”

Student Uses AI and a 3D Printer To Do Their Homework Assignment for Them

Google Engineer Claims Its AI Has Feelings, but Experts Have Far Deeper Concerns

AI Chatbots Now Let You Talk to Historical Figures Like Shakespeare and Andy Warhol

