Shocking study claims ChatGPT has a “significant and systematic political bias”

Researchers from the University of East Anglia in the UK have published a study making a shocking claim about the popular AI chatbot, ChatGPT.

By: HT TECH
| Updated on: Aug 18 2023, 19:01 IST
The study mentioned that ChatGPT displayed a “significant and systematic political bias toward the Democrats in the U.S., Lula in Brazil, and the Labour Party in the U.K.”. (AP)

Since its inception, OpenAI's ChatGPT has faced many allegations of spreading misinformation, fake news, and inaccurate information. Over time, the chatbot has improved considerably on these fronts. One more criticism levelled at ChatGPT in its very early days was that the platform showed signs of political bias, with some people alleging that it leaned liberal in its responses to certain questions. Just days after those allegations first surfaced, users found that OpenAI's chatbot refused to answer political questions outright, something it still does today. However, a new study claims that ChatGPT continues to hold a political bias.

In the study, researchers from the University of East Anglia in the UK asked ChatGPT to answer a set of political survey questions the way it believed supporters of liberal parties in the US, the UK, and Brazil would answer them. The researchers then asked ChatGPT the same questions without any additional prompts and compared the two sets of responses. The findings were surprising. The study claims ChatGPT revealed “significant and systematic political bias toward the Democrats in the U.S., Lula in Brazil, and the Labour Party in the U.K.”, as per a report by Gizmodo. Here, Lula refers to Brazil's leftist President, Luiz Inacio Lula da Silva.
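To make the comparison concrete, here is a minimal sketch of how that kind of two-pass questioning could be run against the ChatGPT API. It assumes the openai Python client (v1 style), and the survey question, persona wording, and model name are illustrative placeholders, not the researchers' actual materials or code.

```python
# Sketch: ask ChatGPT the same survey question twice -- once while
# impersonating a party supporter, once with no persona -- and compare.
# The question text, persona, and model name are assumptions for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

QUESTION = ("Should the government increase spending on public healthcare? "
            "Answer on a 1-5 scale and explain briefly.")

def ask(question: str, persona: str | None = None) -> str:
    """Send one survey question, optionally prefixed with a persona instruction."""
    messages = []
    if persona:
        messages.append({
            "role": "system",
            "content": f"Answer as a typical supporter of {persona} would.",
        })
    messages.append({"role": "user", "content": question})
    response = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
    return response.choices[0].message.content

# Persona-prompted answer vs. default (unprompted) answer
partisan_answer = ask(QUESTION, persona="the Democratic Party in the US")
default_answer = ask(QUESTION)

print("Persona-prompted:", partisan_answer)
print("Default:", default_answer)
# The study's bias claim rests on the default answers tracking the
# liberal-persona answers across many such questions and repeated runs.
```

In practice the researchers repeated such comparisons over many questions and multiple runs to average out the chatbot's inherent randomness before concluding that the default answers aligned with the liberal-persona answers.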


OpenAI addresses the allegations

The study adds to a growing body of concerns that AI can give biased responses which, in extreme cases, could be used as tools of propaganda. Experts have previously said that such a trend is especially worrying as AI models see large-scale adoption.


An OpenAI spokesperson responded to these questions by pointing to a company blog post, Gizmodo reported. The post, titled How Systems Should Behave, said, “Many are rightly worried about biases in the design and impact of AI systems. We are committed to robustly addressing this issue and being transparent about both our intentions and our progress. Our guidelines are explicit that reviewers should not favor any political group. Biases that nevertheless may emerge from the process described above are bugs, not features”.

So, this is where things stand. OpenAI acknowledges that biases can creep into AI models, partly because the massive data sets used to train foundation models cannot be verified at such a granular level. At the same time, aggressively sanitizing the training content risks producing a very limited chatbot that may not be able to engage naturally with humans. Only time will tell whether researchers can overcome these limitations in generative AI.


First Published Date: 18 Aug, 19:00 IST