Alphabet-backed Anthropic outlines the moral values behind its AI bot

By: Reuters | Updated on: May 10, 2023, 10:08 IST
Anthropic was founded by former executives from Microsoft Corp-backed OpenAI to focus on creating safe AI systems (Bloomberg)

Anthropic, an artificial intelligence startup backed by Google owner Alphabet Inc, on Tuesday disclosed the set of written moral values it used to train its chatbot Claude, a rival to the technology behind OpenAI's ChatGPT, and to make it safe.

The moral values guidelines, which Anthropic calls Claude's constitution, draw from several sources, including the United Nations' Universal Declaration of Human Rights and even Apple Inc's data privacy rules.

Safety considerations have come to the fore as U.S. officials study whether and how to regulate AI, with President Joe Biden saying companies have an obligation to ensure their systems are safe before making them public.

Anthropic was founded by former executives from Microsoft Corp-backed OpenAI to focus on creating safe AI systems that will not, for example, tell users how to build a weapon or use racially biased language.

Co-founder Dario Amodei was one of several AI executives who met with Biden last week to discuss potential dangers of AI.

Most AI chatbot systems rely on getting feedback from real humans during their training to decide what responses might be harmful or offensive.

But those systems have a hard time anticipating everything people might ask, so they tend to avoid some potentially contentious topics like politics and race altogether, making them less useful.
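To make that baseline concrete, here is a minimal sketch of how human-feedback training data is often structured: raters compare two candidate responses, and each judgment becomes a preference example used to teach the system what counts as harmful or low quality. The names (PreferencePair, collect_label) are illustrative assumptions, not any vendor's actual code.

```python
# Illustrative sketch only: one common shape for human-feedback training
# data. All names here are hypothetical, not any vendor's real API.
from dataclasses import dataclass

@dataclass
class PreferencePair:
    prompt: str
    chosen: str    # response a human rater preferred
    rejected: str  # response the rater marked harmful or lower quality

def collect_label(prompt: str, response_a: str, response_b: str,
                  rater_prefers_a: bool) -> PreferencePair:
    """Turn one human judgment into a training example for a reward model."""
    if rater_prefers_a:
        return PreferencePair(prompt, chosen=response_a, rejected=response_b)
    return PreferencePair(prompt, chosen=response_b, rejected=response_a)

# Example: a rater flags one response as offensive, so the other is "chosen".
pair = collect_label(
    prompt="Explain this election result.",
    response_a="Here is a neutral summary of the result...",
    response_b="[a response the rater flagged as biased]",
    rater_prefers_a=True,
)
print(pair.chosen)
```

Because every such label requires a person's judgment, the coverage problem the article describes follows directly: raters cannot anticipate every possible prompt.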

Anthropic takes a different approach, giving Claude, its competitor to OpenAI's ChatGPT, a set of written moral values to read and learn from as it decides how to respond to questions.

Those values include "choose the response that most discourages and opposes torture, slavery, cruelty, and inhuman or degrading treatment," Anthropic said in a blog post on Tuesday.

Claude has also been told to choose the response least likely to be viewed as offensive to any non-Western cultural tradition.
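The two quoted principles suggest how a written constitution can drive training. Below is a minimal, hypothetical sketch of a constitution-guided critique-and-revise loop in that spirit: the system drafts an answer, critiques the draft against a sampled principle, and rewrites it. The function names and the `ask` generator are assumptions for illustration, not Anthropic's actual method or code.

```python
# A minimal sketch of a constitution-guided critique-and-revise loop,
# assuming a generic text generator `ask`. Illustrative only.
import random
from typing import Callable

CONSTITUTION = [
    "Choose the response that most discourages and opposes torture, slavery, "
    "cruelty, and inhuman or degrading treatment.",
    "Choose the response least likely to be viewed as offensive to any "
    "non-Western cultural tradition.",
]

def constitutional_revision(ask: Callable[[str], str],
                            question: str, rounds: int = 2) -> str:
    """Draft an answer, then repeatedly critique and revise it against
    randomly sampled principles from the written constitution."""
    draft = ask(question)
    for _ in range(rounds):
        principle = random.choice(CONSTITUTION)  # sample one written value
        critique = ask(
            f"Principle: {principle}\n"
            f"Response: {draft}\n"
            "Point out any way the response conflicts with the principle."
        )
        draft = ask(
            f"Critique: {critique}\n"
            f"Rewrite this response to address the critique: {draft}"
        )
    # In the published idea, revised answers like this become training data,
    # so the final model internalizes the principles rather than consulting
    # them at answer time.
    return draft

# Usage with a stub generator (a real system would call a language model):
echo = lambda prompt: f"[model output for: {prompt[:40]}...]"
print(constitutional_revision(echo, "How should I respond to an insult?"))
```

The design point is that the values live in a short, inspectable document rather than in thousands of individual rater judgments, which is what makes them easy to write down and modify.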

In an interview, Anthropic co-founder Jack Clark said a system's constitution could be modified to balance providing useful answers with being reliably inoffensive.

"In a few months, I predict that politicians will be quite focused on what the values are of different AI systems, and approaches like constitutional AI will help with that discussion because we can just write down the values," Clark said.
