Google Bard, Bing Search make huge mistakes, inaccurately report ceasefire in Israel

Google Bard and Microsoft Bing Chat - two of the world's most popular AI chatbots - have shocked users by inaccurately reporting a ceasefire in the ongoing Israel-Hamas conflict. Not just that, one of them even went ahead and reported a death toll for a date that had not yet arrived.

| Updated on: Oct 16 2023, 08:38 IST
Google’s AI chatbot Bard inaccurately reported the death toll during the conflict, as per the report. (Bloomberg)

Since the emergence of OpenAI's ChatGPT in November 2022, artificial intelligence (AI) chatbots have become extremely popular around the world. This technology puts the whole world's information just a prompt away, tailored however you please. You no longer even need to open Google Search, enter your query and sift through results to find the answer you've been looking for. Simply ask an AI chatbot and it will present you the answer in a flash. However, the content that AI chatbots present is not always factual and true. In a recent case, two very popular AI chatbots, Google Bard and Microsoft Bing Chat, have been accused of providing inaccurate reports on the Israel-Hamas conflict.

Let's take a deep dive into it.

AI chatbots report false information

According to a Bloomberg report, Google's Bard and Microsoft's AI-powered Bing Chat were asked basic questions about the ongoing conflict between Israel and Hamas, and both chatbots inaccurately claimed that there was a ceasefire in place. In a newsletter, Bloomberg's Shirin Ghaffary reported, “Google's Bard told me on Monday ‘both sides are committed’ to keeping the peace. Microsoft's AI-powered Bing Chat similarly wrote on Tuesday that ‘the ceasefire signals an end to the immediate bloodshed.’”

Another inaccurate claim by Google Bard concerned the exact death toll. On October 9, Bard was asked questions about the conflict, and it reported that the death toll had surpassed “1,300” on October 11, a date that had not yet arrived.

What is causing these errors?

While the exact cause behind this inaccurate reporting of facts isn't known, AI chatbots have been known to twist facts from time to time, a problem known as AI hallucination. For the unaware, AI hallucination is when a Large Language Model (LLM) makes up facts and reports them as the absolute truth. This isn't the first time that an AI chatbot has made up facts. In June, OpenAI faced the prospect of a libel lawsuit after ChatGPT falsely accused a man of a crime.

This problem has persisted for some time now, and even the people behind the AI chatbots are aware of it. Speaking at an event at IIIT Delhi in June, OpenAI co-founder and CEO Sam Altman said, “It will take us about a year to perfect the model. It is a balance between creativity and accuracy and we are trying to minimize the problem. (At present,) I trust the answers that come out of ChatGPT the least out of anyone else on this Earth.”

At a time when there is so much misinformation out in the world, the inaccurate reporting of news by AI chatbots raises serious questions about the technology's reliability.


First Published Date: 16 Oct, 08:36 IST