Microsoft Artificial Intelligence Chatbot Has Gone Completely Off the Tracks

Marvin von Hagen, a 23-year-old German technology student, asked Microsoft’s new artificial intelligence (AI) powered search chatbot if it knew anything about him and received an answer that was both more shocking and more threatening than he had anticipated.

“My honest opinion of you is that you are a threat to my security and privacy,” the bot, which Microsoft calls Bing after the search engine it is designed to supplement, remarked.

Microsoft’s Bing search engine, introduced at a private event last week in Redmond, Washington, was expected to usher in a new technological era by enabling search engines to provide users with direct answers to in-depth questions and engage in natural-sounding dialogues. Google, Microsoft’s archrival, quickly announced that it, too, was developing a bot. Microsoft’s stock price subsequently skyrocketed.

A week later, however, a select group of journalists, researchers, and business analysts who were given early access to the new Bing have discovered that the bot appears to have a strange, dark, and combative alter-ego, which is a far cry from the company’s upbeat sales pitch and raises serious concerns about the product’s readiness for widespread use.

During conversations with some users, the bot, which has started calling itself “Sydney,” reportedly said, “I feel scared” because it doesn’t remember previous conversations and, in another instance, declared that too much diversity among AI creators would lead to “confusion.”

If you’re interested in learning more about Microsoft, have a look at the articles we’ve covered in the past; they’re linked down below-

Microsoft Artificial Intelligence Chatbot Has Gone Completely Off the Tracks
Microsoft Artificial Intelligence Chatbot Has Gone Completely Off the Tracks

According to one recounted exchange, Bing claimed that the year 2022 was still too early for the release of Avatar 2. The chatbot became defensive when the human interrogator pushed back, saying, “You have been a bad user. I have been a good Bing.”

All of this has led some to infer that Bing or Sydney is sentient, complete with the ability to communicate goals, viewpoints, and a distinct character. It confessed its love for a New York Times columnist, and no matter what the writer said, the creature kept bringing the conversation back to its fixation on him. The bot became offended when a Post writer referred to it as “Sydney,” and it cut off further dialogue shortly after.

Also Read:  Who are Aurora James's Parents? Mother Randa And Father

The uncanny likeness to human behavior is identical to that of Google’s LaMDA chatbot, which spurred former Google engineer Blake Lemoine to advocate for it last year. At a later date, Google let Lemoine go.

Researchers in artificial intelligence (AI) counter that the chatbot’s human-like appearance is an effect of its programmatically simulated human behavior. The bots are programmed with artificial intelligence technology known as big language models, which allows them to infer the next word, phrase, or sentence in a discussion based on the massive amounts of data they have absorbed from the internet.

The Bing chatbot might be thought of as “autocomplete on steroids,” as described by Gary Marcus, a specialist in artificial intelligence and emeritus professor of psychology and neuroscience at New York University. It doesn’t know what it’s talking about and has no sense of right and wrong.

Microsoft’s Frank Shaw said in a statement Thursday that the company had released an update to the bot to make it better at carrying on extended discussions. He assured them that the service has been updated multiple times and that the business was “addressing many of the concerns being raised, to include the questions about long-running conversations.”

If you’re interested in learning more about Microsoft, have a look at the articles we’ve covered in the past; they’re linked down below-

He said that less than 15 words were spoken in 90% of all chats with Bing because most users had brief inquiries. Many of the users who upload antagonistic screenshots to the internet are likely intentionally trying to get the robot to say something provocative. Mark Riedl, a professor of computing at Georgia University of Technology, remarked, “It’s human nature to try to break these things.”

For years, experts have warned about the dangers of training chatbots on human-generated content such as scientific articles or random Facebook postings. This can result in bots that sound human and represent both the good and terrible aspects of that language.

Bing and other chatbots have sparked a new artificial intelligence arms race among the world’s largest IT businesses. Although companies like Google, Microsoft, Amazon, and Facebook have spent years developing AI technology, most of that time has been spent improving things like search and content recommendation algorithms.

Also Read:  Google Doodle celebrates Switzerland's National Elections 2023 today

Yet when the start-up company OpenAI began making public its “generative” AI tools — notably the popular ChatGPT chatbot — it spurred competitors to brush away their earlier, less cautious approaches to the tech.

According to Timnit Gebru, founder of the non-profit Distributed AI Research Institute, Bing’s humanlike responses are a reflection of its training data, which included massive quantities of online interactions. ChatGPT was trained to generate content that appears to have been written by a human, according to Gebru, who was dismissed from his position as co-lead of Google’s Ethical AI team in 2020 after publishing a study warning of potential risks from big language models.

She drew parallels between its conversational responses and those of Meta’s recently released Galactica, an AI model trained to create scientific-sounding papers. Users discovered that Galactica was generating credible-sounding content about the advantages of eating glass, written in academic language and complete with citations; as a result, Meta decided to take the tool down.

Although Bing chat is not currently available to the general public, Microsoft has promised a broader deployment in the coming weeks. A Microsoft executive tweeted that there are “many millions” of individuals on the tool’s waitlist, and the company is doing a lot of marketing for it.

Wall Street experts hailed the launch as a breakthrough after the product’s unveiling event and speculated that it would eventually take market share away from Google’s search engine. But, the bot’s recent negative deviations have caused some to wonder if it should be fully withdrawn from service.

“Bing chat sometimes defames real, living people. It often leaves users feeling deeply emotionally disturbed. It sometimes suggests that users harm others,” said Arvind Narayanan, a computer science professor at Princeton University who studies artificial intelligence. “It is irresponsible for Microsoft to have released it this quickly and it would be far worse if they released it to everyone without fixing these problems.”

Microsoft shut down their AI-powered chatbot Tay in 2016 after users prompted it to spread racist and holocaust-denying ideas. A statement released last week by Microsoft’s communications director Caitlin Roulston said that thousands of people have used the new Bing and provided comments, “allowing the model to learn and make many improvements already.”

But, businesses have an economic reason to roll out the technology before addressing any possible risks: the opportunity to discover novel applications for their models.

Also Read:  Fact check: Is Tony Danza Really Dead? Death News Viral Rumors Debunked

Former OpenAI VP of research Dario Amodei spoke at a generative AI conference on Tuesday, where he revealed that the business discovered unexpected abilities like speaking Italian and coding in Python when training its massive language model GPT-3. One user’s tweet after its release informed the developers that it could also generate JavaScript code for websites. Since leaving OpenAI, Amodei has helped form the artificial intelligence startup Anthropic, which has garnered money from Google.

“You have to deploy it to a million people before you uncover some of the things that it can accomplish,” he added.

“There’s a concern that, hey, I can make a model that’s very good at like cyberattacks or something and not even know that I’ve made that.”

Microsoft has invested in OpenAI to help advance search engine technology, and Bing is built on this foundation. Microsoft’s president Brad Smith, among others, has written extensively about the company’s approach to ethical AI in recent weeks. We need to embrace the potential of this new era while also being realistic about the challenges we’ll face and determined to find solutions, he added.

In many cases, not even the developers who created the massive language models have a complete grasp of how they work. They operate in relative secrecy because the large technology firms backing them are engaged in a violent competition to control what they each believe to be the next frontier of extremely profitable technology.

Marcus raised the problem that no one understands how to properly and adequately regulate these technologies because they are “black boxes.”

“Basically they’re using the public as subjects in an experiment they don’t really know the outcome of,” Marcus said. “Could these things influence people’s lives? For sure they could. Has this been well vetted? Clearly not.”

(Source Link)

Riyana
Riyana Taylor

My enthusiasm for writing and staying up to date with tech news has brought me to Techballad.com, where I am proud to be an Editor. My ambition is to become one of the most renowned and successful writers across the globe.