
Google Bard, Bing Search make huge mistakes, inaccurately report ceasefire in Israel

Google Bard and Microsoft's Bing Chat - two of the world's most popular AI chatbots - have shocked users by inaccurately reporting a ceasefire in the ongoing Israel-Hamas conflict. Not just that, Bard even went ahead and cited a death toll for a date that had not yet arrived.

By: SHAURYA TOMER
Updated on: Oct 16 2023, 08:38 IST
Google's AI chatbot Bard inaccurately reported the death toll of the conflict, as per the report. (Bloomberg)

Since the emergence of OpenAI's ChatGPT in November 2022, artificial intelligence (AI) chatbots have become extremely popular around the world. This technology puts the whole world's information just a prompt away, to tailor as you please. You no longer need to go to Google Search, enter your query and sift through results to find the answer you've been looking for. Simply ask an AI chatbot and it will present the answer in a flash. However, the content that AI chatbots present is not always factual. In a recent case, two very popular AI chatbots, Google Bard and Microsoft Bing Chat, have been accused of providing inaccurate reports on the Israel-Hamas conflict.

Let’s take a deep dive into it.


AI chatbots report false information

According to a Bloomberg report, Google's Bard and Microsoft's AI-powered Bing Search were asked basic questions about the ongoing conflict between Israel and Hamas, and both chatbots inaccurately claimed that there was a ceasefire in place. In a newsletter, Bloomberg's Shirin Ghaffary reported, "Google's Bard told me on Monday, 'both sides are committed' to keeping the peace. Microsoft's AI-powered Bing Chat similarly wrote on Tuesday that 'the ceasefire signals an end to the immediate bloodshed.'"


Another inaccurate claim by Google Bard concerned the exact death toll. On October 9, Bard was asked questions about the conflict and reported that the death toll had surpassed "1,300" on October 11, a date that had not yet arrived.

What is causing these errors?

While the exact cause behind this inaccurate reporting isn't known, AI chatbots have been known to twist facts from time to time, a problem known as AI hallucination. For the unaware, AI hallucination is when a Large Language Model (LLM) makes up facts and reports them as the absolute truth. This isn't the first time that an AI chatbot has made up facts. In June, reports surfaced of OpenAI being sued for libel after ChatGPT falsely accused a man of a crime.

This problem has persisted for some time now, and even the people behind the AI chatbots are aware of it. Speaking at an event at IIIT Delhi in June, OpenAI co-founder and CEO Sam Altman said, "It will take us about a year to perfect the model. It is a balance between creativity and accuracy and we are trying to minimize the problem. (At present,) I trust the answers that come out of ChatGPT the least out of anyone else on this Earth."

At a time when there is so much misinformation in the world, the inaccurate reporting of news by AI chatbots raises serious questions about the technology's reliability.


First Published Date: 16 Oct, 08:36 IST