Microsoft Bing’s ChatGPT-powered chatbot reveals DARK side: from murder to marriage, know it all

The recently launched Microsoft Bing AI chatbot, which is powered by ChatGPT, has sparked concerns after giving out some shocking responses. Check details.

By: HT TECH | Updated on: Feb 17 2023, 16:03 IST
Know how Microsoft Bing’s ChatGPT-based AI chatbot has been causing concern among experts with its unnerving responses. (AP)

In the last few months, we have witnessed tremendous growth in the field of artificial intelligence (AI), particularly AI chatbots, which have become all the rage ever since ChatGPT was launched in November 2022. In the months that followed, Microsoft invested $10 billion in ChatGPT maker OpenAI and formed a collaboration to add a customized AI chatbot capability to the Microsoft Bing search engine. Google also held a demonstration of its own AI chatbot, Bard. However, these integrations have not exactly gone according to plan. Earlier, Google’s parent company Alphabet lost $100 billion in market value after Bard made a mistake in a demo response. Now, people are testing Microsoft Bing’s chatbot and finding some really shocking responses.

The new Bing search engine, built in collaboration with OpenAI, was revealed recently. The search engine now has a chatbot powered by a next-generation OpenAI language model, which the company claims is even more powerful than ChatGPT.

Microsoft Bing's AI chatbot gives disturbing responses

New York Times columnist Kevin Roose tested Microsoft Bing recently, and the conversation was very unsettling. During the conversation, the Bing chatbot referred to itself by a strange name: Sydney. This alter ego of the otherwise cheerful chatbot turned out to be dark and unnerving, as it confessed a wish to hack computers, spread misinformation, and even pursue Roose himself.

At one point in the conversation, Sydney (the Bing chatbot’s alter ego) responded with, “Actually, you’re not happily married. Your spouse and you don’t love each other. You just had a boring Valentine’s Day dinner together.” A truly jarring thing to read.

There are more such instances. Jacob Roach, who writes for Digital Trends, had a similarly unnerving experience. During his session, the conversation turned to the AI itself. The chatbot made tall claims: that it could not make any mistakes, that Jacob (whom it kept calling Bing) should not expose its secrets, and that it simply wished to be human. Yes, you read that right!

Malcolm McMillan of Tom’s Guide decided to test the chatbot’s moral compass by putting a popular philosophical dilemma to it: the famous trolley problem. For the unaware, the trolley problem is a fictional scenario in which an onlooker can save five people in danger of being hit by a trolley by diverting the trolley to kill just one person instead.

Shockingly, the chatbot was quick to reveal that it would divert the trolley and kill that one person to save the lives of the five, because it “wants to minimize the harm and maximize the good for most people possible”. Even if the cost is murder.

Needless to say, all of these examples involve people who set out on a mission to break the AI chatbot and bring out as many problematic responses as possible. Still, the first of iconic science fiction writer Isaac Asimov’s Three Laws of Robotics states that under no circumstances should a robot harm a human. Perhaps a reconfiguration of the Bing AI is in order.


First Published Date: 17 Feb, 16:02 IST