Google’s AI Isn’t Too Woke. It’s Too Rushed.

Google has a chronic habit of dashing off half-baked AI products and neglecting safety checks.

By: Bloomberg | Updated on: Feb 29, 2024, 07:10 IST
Google faces backlash over diverse images generated by AI chatbot Gemini, leading to accusations of neglecting proper checks on products. (unsplash)

Did you hear? Google has been accused of having a secret vendetta against White people. Elon Musk posted about the conspiracy on X more than 150 times over the past week, all regarding portraits generated with Google's new AI chatbot Gemini. Ben Shapiro, The New York Post and Musk were driven apoplectic over how diverse the images were: Female popes! Black Nazis! Indigenous founding fathers! Google apologized and has paused the feature.

In reality, the issue is that the company did a shoddy job overcorrecting on tech that used to skew racist. No, its Chief Executive Officer Sundar Pichai hasn't been infected by the woke mind virus. Rather, he's too obsessed with growth and is neglecting the proper checks on his products.

Google got in trouble back in 2015 when its photo-tagging tool started labelling some Black people as apes. It shut the feature down, then made the problem worse years later by firing two of its leading AI ethics researchers. These were the people whose job was to make sure that Google's technology was fair in how it depicted women and minorities. Not overly diverse like the new Gemini, but equitable and balanced.

When Gemini this week started producing images of Black and Asian German World War II soldiers, it was a sign not that the ethics team had become more powerful, as Musk and others suggest, but that it was being ignored amid Google's race against Microsoft Corp. and OpenAI to dominate generative web search. Proper investment would have led to a smarter approach to diversity in image generation, but Google was neglecting that work.

The signs have been there for the past year. People who test artificial intelligence systems for safety are outnumbered 30-to-1 by those whose job is to make those systems bigger and more capable, according to an estimate from the Center for Humane Technology. Often they are shouting into a void and told not to get in the way. Google's earlier chatbot Bard was so faulty that it made factual errors in its marketing demo. Employees had sounded warnings about that, but managers wouldn't listen. One posted on an internal message board that Bard was “worse than useless: please do not launch,” and many of the 7,000 staffers who viewed the message agreed, according to a Bloomberg News investigation.

Not long after, engineers who'd carried out a risk assessment told their Google superiors that Bard could cause harm and wasn't ready. You can probably guess what Google did next: It released Bard to the public.

Google's rushed, faulty AI isn't alone. Microsoft's Bing chatbot wasn't just inaccurate; it was unhinged, telling a New York Times columnist soon after its release that it was in love with him and wanted to destroy things. Google has said that responsible AI is a top priority, and that it was “continuing to invest in the teams” that apply its AI principles to products. A spokeswoman for Google said the company is “continuing to quickly address instances in which [Gemini] isn't responding appropriately.”

OpenAI, which kickstarted Big Tech's race for a foothold in generative AI, normalized the rationale for treating us all like guinea pigs with new AI tools. Its website describes an “iterative deployment” philosophy, under which it releases products like ChatGPT quickly to study their safety and impact, and to prepare us for more powerful AI in the future. Google's Pichai now says much the same. By releasing half-baked AI tools, he's giving us “time to adapt” before AI becomes super powerful, according to comments he made in a 60 Minutes interview last year.

When asked what keeps him up at night, Pichai said, with no trace of irony, that it was knowing that AI could be “very harmful if deployed wrongly.” So what was his solution? Pichai didn't mention investing more in the researchers who make AI safe, accurate and ethical, but pointed instead to greater regulation, a solution that lay outside his control. “There have to be consequences for creating deepfake videos which cause harm to society,” he said, referring to AI videos that could spread misinformation. “Anybody who has worked with AI for a while, you know, you realize this is something so different and so deep that we would need societal regulations to think about how to adapt.”

This is a bit like the chef of a restaurant saying, “Making people sick with salmonella is bad, and we need more food inspectors to check our raw food,” while knowing full well there are no food inspectors to speak of and there won't be for years. It gives the chef license to continue dishing out tainted meat or fish. The same is true in AI. With regulations in the distant future, Pichai knows the onus is on his company to build AI systems that are fair and safe. But now that he is caught up in the race to put generative AI into everything quickly, there's little incentive to ensure that it is.

We know about Gemini's diversity bug because of all the tweets on X, but the AI model may have other problems we don't know about — issues that may not trigger Elon Musk but are no less insidious. The female popes and Black founding fathers are products of a deeper, years-long problem of putting growth and market dominance before safety. Expect our role as guinea pigs to continue until that changes.

Parmy Olson is a Bloomberg Opinion columnist covering technology. A former reporter for the Wall Street Journal and Forbes, she is author of “We Are Anonymous.”
