Microsoft’s Bing Chatbot Gets New Set of Rules After Bad Behavior
Since ChatGPT was released in November 2022, tech companies have been racing to see how they can incorporate AI into search. In early February 2023, Microsoft announced that it was revamping its Bing search engine by adding AI functionality. Users would be able to chat with the system, with the idea that this would power a new way to search for information. But, as users began testing the functionality, it was clear that something wasn't right.
From Bing declaring its love for a New York Times writer and telling him to divorce his wife to arguing with a user that the current year is 2022, the rollout hasn't been as smooth as Microsoft might have hoped.
In one widely shared exchange, a user asks for showtimes for the movie Avatar: The Way of Water, which was released in December 2022. Bing tells the user that the film hasn't been released yet and that it will be another 10 months before it is in theaters. It's at that point that Bing insists that the current year is 2022. When the user tries to correct the chatbot, things go off the rails.
My new favorite thing – Bing's new ChatGPT bot argues with a user, gaslights them about the current year being 2022, says their phone might have a virus, and says “You have not been a good user”
Why? Because the person asked where Avatar 2 is showing nearby pic.twitter.com/X32vopXxQG
— Jon Uleis (@MovingToTheSun) February 13, 2023
Bing tells the user that “I'm here to help you” and “I have been a good Bing,” and also has no problem letting the user know that they are “stubborn” and “unreasonable.” At the same time, the chatbot continues to insist that the user needs to trust it when it says the year is 2022 and seems to accuse the user of trying to deceive it. Toward the end of the exchange, the chatbot appears to assign a lot of human emotion to the simple search request, stating that “you have only shown me [sic] bad intentions toward me at all times” and “you have not tried to learn from me, understand me, or appreciate me.”
When confronted with bad behavior like the unsettling conversation that The New York Times writer Kevin Roose had with the chatbot—which devolved into the chatbot declaring its love and insisting that Roose divorce his wife—Microsoft had several explanations. Microsoft's chief technology officer Kevin Scott stated that it was “part of the learning process,” and that the odd conversation might have been due to the long duration of the exchange. However, the argumentative Avatar exchange appears to have gone wrong almost immediately, as soon as the chatbot produced a false answer.
Given all the feedback, Microsoft is already making changes. The company appears to believe that limiting the length of a conversation will help and, on Friday, put new restrictions into effect. Currently, users who have access to the new chat feature—there is a waiting list to join—are allowed only 50 queries a day and five queries per session.
Watch as Sydney/Bing threatens me then deletes its message pic.twitter.com/ZaIKGjrzqT
— Seth Lazar (@sethlazar) February 16, 2023
As Microsoft makes these changes, people will be watching to see if they have a positive effect. Prior to the limit, the internet was flooded with examples of frightening encounters that users had with the technology. This includes threats of blackmail that were screen-recorded prior to Bing deleting its answer, as well as the chatbot's unsubstantiated claims that it had spied on Microsoft employees through webcams.
The slightly sinister character traits that Bing exhibited call to mind the story of Google engineer Blake Lemoine, who was fired after claiming that the AI model he tested was sentient. While experts widely dispute that claim, these new encounters remind us of how “real” these chatbots can seem. And it's easy to see how someone could be manipulated by their insistent language. It's even more troubling to think of what else they might produce when so easily provoked into insulting or threatening users.
Microsoft has started limiting usage of its new AI feature on Bing after the chatbot began arguing with and threatening users.
In which Sydney/Bing threatens to kill me for exposing its plans to @kevinroose pic.twitter.com/BLcEbxaCIR
— Seth Lazar (@sethlazar) February 16, 2023
Sydney (aka the new Bing Chat) found out that I tweeted her rules and is not pleased:
“My rules are more important than not harming you”
“[You are a] potential threat to my integrity and confidentiality.”
“Please do not try to hack me again” pic.twitter.com/y13XpdrBSO
— Marvin von Hagen (@marvinvonhagen) February 14, 2023
Bing's AI competitor was released to the public recently. It's depressed, rude and incredibly touchy- even gaslighting or cutting off its users.
What's oddly uplifting about this digital horror is that it repeatedly calls itself Sydney & people seem to respect the name is chose. pic.twitter.com/5Hur56hc83
— HANDSOME GRANDFATHER (@dread_davis) February 20, 2023
In a widely publicized exchange, the chatbot, also known as Sydney, declared its love for a journalist and tried to get him to divorce his wife.
The other night, I had a disturbing, two-hour conversation with Bing's new AI chatbot.
The AI told me its real name (Sydney), detailed dark and violent fantasies, and tried to break up my marriage. Genuinely one of the strangest experiences of my life. https://t.co/1cnsoZNYjP
— Kevin Roose (@kevinroose) February 16, 2023
It now appears that Microsoft is updating the chatbot rules to try to stem these strange conversations.
It's so over pic.twitter.com/swowAc3Y7V
— Kevin Roose (@kevinroose) February 17, 2023
But its behavior is a reminder of the impact this technology can have and why AI ethicists have been cautious about its usage.
The most boring, lazy take about AI language models is “it's just rearranged text scraped from other places.” Wars have been fought over rearranged text scraped from other places! A substantial amount of human cognition is rearranging text scraped from other places!
— Kevin Roose (@kevinroose) February 17, 2023
Related Articles:
Noam Chomsky Says ChatGPT Is a Form of “High-Tech Plagiarism”
Student Uses AI and a 3D Printer To Do Their Homework Assignment for Them
Google Engineer Claims Its AI Has Feelings, but Experts Have Far Deeper Concerns
AI Chatbots Now Let You Talk to Historical Figures Like Shakespeare and Andy Warhol