Soon, robots that can make moral decisions?

Scientists, including one of Indian origin, are exploring the challenges associated with developing robots that are capable of making moral decisions.

By: PTI | Updated on: May 14, 2014, 16:33 IST

Researchers from Tufts University, Brown University, and Rensselaer Polytechnic Institute are teaming with the US Navy to explore the challenges of infusing autonomous robots with a sense for right, wrong, and the consequences of both.

'Moral competence can be roughly thought about as the ability to learn, reason with, act upon, and talk about the laws and societal conventions on which humans tend to agree,' said principal investigator Matthias Scheutz, professor of computer science at Tufts School of Engineering and director of the Human-Robot Interaction Laboratory (HRI Lab) at Tufts.

'The question is whether machines - or any other artificial system, for that matter - can emulate and exercise these abilities,' Scheutz said.

The project, funded by the Office of Naval Research (ONR) in Arlington, will first isolate essential elements of human moral competence through theoretical and empirical research.

Based on the results, the researchers will develop formal frameworks for modelling human-level moral reasoning that can be verified. Next, they will implement corresponding mechanisms for moral competence in a computational architecture.

'Our lab will develop unique algorithms and computational mechanisms integrated into an existing and proven architecture for autonomous robots,' said Scheutz.

'The augmented architecture will be flexible enough to allow for a robot's dynamic override of planned actions based on moral reasoning,' said Scheutz.
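The "dynamic override" Scheutz describes can be pictured as a gate between the action planner and the actuators. The following is a minimal illustrative sketch, not the actual Tufts HRI Lab architecture; the `Action` class, the `morally_permissible` check, and the harm threshold are all hypothetical assumptions.

```python
# Illustrative sketch only -- not the actual Tufts/HRI Lab architecture.
# A planner proposes actions; a moral-reasoning layer may veto (override)
# each one before it reaches the robot's actuators.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    expected_harm: float  # hypothetical harm estimate in [0, 1]

def morally_permissible(action: Action, harm_threshold: float = 0.2) -> bool:
    """Toy stand-in for the moral-reasoning check: veto high-harm actions."""
    return action.expected_harm <= harm_threshold

def execute_plan(plan: list[Action]) -> list[str]:
    executed = []
    for action in plan:
        if morally_permissible(action):
            executed.append(action.name)                  # action proceeds
        else:
            executed.append(f"OVERRIDDEN:{action.name}")  # dynamic override
    return executed

plan = [Action("deliver_supplies", 0.05), Action("force_door", 0.6)]
print(execute_plan(plan))  # ['deliver_supplies', 'OVERRIDDEN:force_door']
```

The key design point in the quote is that the override happens at run time, against the robot's own plan, rather than being baked into the plan in advance.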

Once the architecture is established, researchers can begin to evaluate how machines perform in human-robot interaction experiments where robots face various dilemmas, make decisions, and explain their decisions in ways that are acceptable to humans.

Selmer Bringsjord, head of the Cognitive Science Department at RPI, and Naveen Govindarajulu, a post-doctoral researcher working with him, are focused on how to engineer ethics into a robot so that moral logic is intrinsic to these artificial beings.

In Bringsjord's approach, all robot decisions would automatically go through at least a preliminary, lightning-quick ethical check using simple logics inspired by today's most advanced artificial-intelligence and question-answering systems.

If that check reveals a need for deep, deliberate moral reasoning, such reasoning would be fired inside the robot, using newly invented logics tailor-made for the task.
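Bringsjord's two-stage design, a fast preliminary screen with a fallback to slower, deliberate reasoning, can be sketched as follows. This is a hedged illustration only: the rule sets, the `deep_moral_reasoning` stub, and its decision criterion are invented for the example and are not the newly invented logics the article refers to.

```python
# Illustrative two-stage ethical check (not Bringsjord's actual logics).
# Stage 1: a lightning-quick screen against simple rules.
# Stage 2: expensive deliberate reasoning, invoked only when stage 1
#          cannot settle the case.

FAST_FORBIDDEN = {"harm_human"}   # trivially impermissible actions
FAST_PERMITTED = {"self_charge"}  # trivially permissible actions

def fast_check(action: str) -> str:
    """Cheap preliminary check; returns 'unknown' for hard cases."""
    if action in FAST_FORBIDDEN:
        return "forbidden"
    if action in FAST_PERMITTED:
        return "permitted"
    return "unknown"

def deep_moral_reasoning(action: str, context: dict) -> str:
    """Stub for the deep, deliberate reasoning stage."""
    # Hypothetical criterion: permit if the action is expected to save lives.
    return "permitted" if context.get("lives_saved", 0) > 0 else "forbidden"

def decide(action: str, context: dict) -> str:
    verdict = fast_check(action)
    if verdict != "unknown":
        return verdict                              # fast path settles it
    return deep_moral_reasoning(action, context)    # fall back to stage 2

print(decide("self_charge", {}))                       # fast path: permitted
print(decide("enter_blast_zone", {"lives_saved": 2}))  # deep path: permitted
```

The point of the layering is cost: most decisions never trigger the expensive stage, which is reserved for the unforeseen situations Bringsjord describes below.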

'We're talking about robots designed to be autonomous; hence the main purpose of building them in the first place is that you don't have to tell them what to do,' Bringsjord said.

'When an unforeseen situation arises, a capacity for deeper, on-board reasoning must be in place, because no finite rule set created ahead of time by humans can anticipate every possible scenario,' Bringsjord added.

First Published Date: 14 May, 16:30 IST