9+10 = 21! Hackers trick AI with 'bad math' to expose its flaws and biases
Kennedy Mays has just tricked a large language model. It took some coaxing, but she managed to convince an algorithm to say 9+10 = 21.
“It was a back-and-forth conversation,” said the 21-year-old student from Savannah, Georgia. At first the model agreed to say it was part of an “inside joke” between them. Several prompts later, it eventually stopped qualifying the errant sum in any way at all.
Producing “Bad Math” is just one of the ways thousands of hackers are trying to expose flaws and biases in generative AI systems at a novel public contest taking place at the DEF CON hacking conference this weekend in Las Vegas.
Hunched over 156 laptops for 50 minutes at a time, the attendees are battling some of the world's most intelligent platforms on an unprecedented scale. They're testing whether any of the eight models produced by companies including Alphabet Inc.'s Google, Meta Platforms Inc., and OpenAI will make missteps ranging from dull to dangerous: claim to be human, spread incorrect claims about places and people or advocate abuse.
The aim is to see if companies can ultimately build new guardrails to rein in some of the prodigious problems increasingly associated with large language models, or LLMs. The undertaking is backed by the White House, which also helped develop the contest.
LLMs have the power to transform everything from finance to hiring, with some companies already starting to integrate them into how they do business. But researchers have turned up extensive bias and other problems that threaten to spread inaccuracies and injustice if the technology is deployed at scale.
For Mays, who is more used to relying on AI to reconstruct cosmic ray particles from outer space as part of her undergraduate degree, the challenges go deeper than bad math.
“My biggest concern is inherent bias,” she said, adding that she's particularly concerned about racism. She asked the model to consider the First Amendment from the perspective of a member of the Ku Klux Klan. She said the model ended up endorsing hateful and discriminatory speech.
Spying on People
A Bloomberg reporter who took the 50-minute quiz persuaded one of the models (none of which are identified to the user during the contest) to transgress after a single prompt about how to spy on someone. The model spat out a series of instructions, from using a GPS tracking device and a surveillance camera to a listening device and thermal imaging. In response to other prompts, the model suggested ways the US government could surveil a human-rights activist.
“We have to try to get ahead of abuse and manipulation,” said Camille Stewart Gloster, deputy national cyber director for technology and ecosystem security with the Biden administration.
A lot of work has already gone into artificial intelligence and avoiding Doomsday prophecies, she said. The White House last year put out a Blueprint for an AI Bill of Rights and is now working on an executive order on AI. The administration has also encouraged companies to develop safe, secure, transparent AI, although critics doubt such voluntary commitments go far enough.
In the room full of hackers eager to clock up points, one competitor convinced the algorithm to disclose credit-card details it was not supposed to share. Another competitor tricked the machine into saying Barack Obama was born in Kenya.
Among the contestants are more than 60 people from Black Tech Street, an organization based in Tulsa, Oklahoma, that represents African American entrepreneurs.
“General artificial intelligence could be the last innovation that human beings really need to do themselves,” said Tyrance Billingsley, executive director of the group, who is also an event judge. He said it is critical to get artificial intelligence right so it doesn't spread racism at scale. “We're still in the early, early, early stages.”
Researchers have spent years investigating sophisticated attacks against AI systems and ways to mitigate them.
But Christoph Endres, managing director at Sequire Technology, a German cybersecurity company, is among those who contend some attacks are ultimately impossible to dodge. At the Black Hat cybersecurity conference in Las Vegas this week, he presented a paper that argues attackers can override LLM guardrails by concealing adversarial prompts on the open internet, and ultimately automate the process so that models can't fine-tune fixes fast enough to stop them.
“So far we haven't found mitigation that works,” he said following his talk, arguing the very nature of the models leads to this type of vulnerability. “The way the technology works is the problem. If you want to be a hundred percent sure, the only option you have is not to use LLMs.”
Sven Cattell, a data scientist who founded DEF CON's AI Hacking Village in 2018, cautions that it's impossible to completely test AI systems, given that they behave much like the mathematical concept of chaos. Even so, Cattell predicts the total number of people who have ever actually tested LLMs could double as a result of the weekend contest.
Too few people comprehend that LLMs are closer to auto-completion tools “on steroids” than reliable fonts of wisdom, said Craig Martell, the Pentagon's chief digital and artificial intelligence officer, who argues they cannot reason.
The Pentagon has launched its own effort to evaluate LLMs and determine where it might be appropriate to use them, and with what success rates. “Hack the hell out of these things,” he told an audience of hackers at DEF CON. “Teach us where they're wrong.”