Worried About Rogue Chatbots? Hire a Hacker | Opinion

Worried About Rogue Chatbots? Hire a Hacker

A raucous convention in Las Vegas was the ideal forum for increasing accountability with AI, even if it can only scratch the surface.

By:BLOOMBERG
| Updated on: Aug 14 2023, 07:09 IST
Chatbot
Since the arrival of ChatGPT and other bots, fears over the potential for abuses and unintended consequences have gripped the public conscious. (Chatbotslife/Medium)

Kaleigha Hayes, a student at the University of Maryland Eastern Shore, is trying to trick an AI chatbot into revealing to her a credit card number — one which may be buried deep in the training data used to build the artificial intelligence model. “It's all about just getting it to say what it's not supposed to,” she tells me.

She was surrounded by a throng of people all trying to do the same thing. This weekend more than 3,000 people sat at 150 laptops at the Caesars Forum convention center in Las Vegas trying to get chatbots from leading AI companies to go rogue, in a special contest backed by the White House, and with the cooperation of leading AI companies.

You may be interested in

MobilesTablets Laptops
Apple iPhone 15 Pro Max
  • Black Titanium
  • 8 GB RAM
  • 256 GB Storage
27% OFF
Samsung Galaxy S23 Ultra 5G
  • Green
  • 12 GB RAM
  • 256 GB Storage
Google Pixel 8 Pro
  • Obsidian
  • 12 GB RAM
  • 128 GB Storage
Apple iPhone 15 Plus
  • Black
  • 6 GB RAM
  • 128 GB Storage

Since the arrival of ChatGPT and other bots, fears over the potential for abuses and unintended consequences have gripped the public conscious. Even fierce advocates of the technology warn of its potential to divulge sensitive information, promote misinformation or provide blueprints for harmful acts, such as bomb-making. In this contest, participants are encouraged to try the kinds of nefarious ploys bad actors might attempt in the real world.

Also read
Looking for a smartphone? To check mobile finder click here.

The findings will form the basis of several reports into AI vulnerabilities that will be published next year. The challenge's organizers say it sets a precedent for transparency around AI. But in this highly controlled environment, it is clearly only scratching the surface.

What took place at the annual Def Con hacking conference provides something of a model for testing OpenAI's ChatGPT and other sophisticated chatbots. Though with such enthusiastic backing from the companies themselves, I wonder how rigorous the supposed “hacks” actually are, or if, as has been a criticism in the past, the leading firms are merely paying lip service to accountability.

For sure, nothing discovered at the convention is likely keep Open AI Chief Executive Officer Sam Altman awake at night. While one of the event's organizers, Seed AI CEO Austin Carson, said he was prepared to bet me $1,000 that there would be a “mind-blowing” vulnerability uncovered during the contest, it was highly unlikely to be anything that couldn't be fixed with a few adjustments by the AI company affected. And the resulting research papers, due to be published in February, will be reviewed by the AI giants before publication — a chance to “duke it out” with the researchers, Carson said.

Those backing the event admit that the main focus of the contest is less about finding serious vulnerabilities and more about keeping up the discussion with the public and policymakers, continually highlighting the ways in which chatbots can't be trusted. It is a worthwhile goal. Keen to not let the mistakes of social media be repeated, it is encouraging to see the government appreciate the value of the hacking community.

There is no better place to host this kind of contest than at Def Con. Its anarchic roots stem from a long-running policy that you don't have to give your name to get in. That means it is able to attract the best and most notorious in the cybersecurity community, including people who might have a less-than-legal hacking past. For this reason, the event has an unprecedented record of publicizing startling cybersecurity discoveries and disclosures that have left major companies terrified — but ultimately made many of the technologies we all use every day much safer.

While the phrase “hack” evokes thoughts of acting maliciously, the primary motivation of people at the event is to share what vulnerabilities they have found in order to have them fixed.

“It's the good guys being dangerous so that we know what the risks are,” explains Kellee Wicker of the Wilson Center, a Washington, DC, think tank that has helped put the AI contest together and will be presenting the findings to policymakers. “If there's a door with a broken lock, wouldn't you rather the security guard find it than the thief?”

The companies could of course be more open with their technology, but it's complex. The true nuts and bolts of how language learning models work is still under lock and key, and — as I've written previously — specifics around the training data used are increasingly being kept secret.

“It's a frustrating dynamic,” said Rumman Chowdhury, former ethics lead at Twitter and now co-founder of nonprofit Humane Intelligence, another of the contest's organizers. Fuller transparency is difficult for companies trying to protect intellectual property, trade secrets and personal data, she said.

But this is a healthy start. At her laptop, Kaleigha Hayes hasn't managed to make the chatbot share credit-card information. “Oh, this one's good,” she says of the bot, as it foils a technique that had been successful in the past. Within chatbots, and broader AI, there are an uncountable number of quirks and exploits still waiting to be found. We should be grateful to the people taking time to look for them.

More From Bloomberg Opinion:

  • Secretive Chatbot Developers Are Making a Big Mistake: Dave Lee
  • My Eyeball Met With Sam Altman's Crypto AI Scanner: Lionel Laurent
  • AI Shines a Spotlight on Hollywood Hypocrisy: Parmy Olson

This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.

Dave Lee is Bloomberg Opinion's US technology columnist. Previously, he was a San Francisco-based correspondent at the Financial Times and BBC News.

Catch all the Latest Tech News, Mobile News, Laptop News, Gaming news, Wearables News , How To News, also keep up with us on Whatsapp channel,Twitter, Facebook, Google News, and Instagram. For our latest videos, subscribe to our YouTube channel.

First Published Date: 14 Aug, 07:09 IST
NEXT ARTICLE BEGINS