YouTube case at US Supreme Court could shape protections for ChatGPT and AI

That case tests whether a U.S. law that protects technology platforms from legal responsibility for content posted online by their users also applies when companies use algorithms to target users with recommendations.

By: Reuters | Updated on: Apr 25, 2023, 06:50 IST

When the U.S. Supreme Court decides in the coming months whether to weaken a powerful shield protecting internet companies, the ruling also could have implications for rapidly developing technologies like artificial intelligence chatbot ChatGPT.

The justices are due to rule by the end of June whether Alphabet Inc's YouTube can be sued over its video recommendations to users. That case tests whether a U.S. law that protects technology platforms from legal responsibility for content posted online by their users also applies when companies use algorithms to target users with recommendations.

What the court decides about those issues is relevant beyond social media platforms. Its ruling could influence the emerging debate over whether companies that develop generative AI chatbots like ChatGPT from OpenAI, a company in which Microsoft Corp is a major investor, or Bard from Alphabet's Google should be protected from legal claims like defamation or privacy violations, according to technology and legal experts.

That is because the algorithms that power generative AI tools like ChatGPT and its successor GPT-4 operate in a way somewhat similar to those that suggest videos to YouTube users, the experts added.

"The debate is really about whether the organization of information available online through recommendation engines is so significant to shaping the content as to become liable," said Cameron Kerry, a visiting fellow at the Brookings Institution think tank in Washington and an expert on AI. "You have the same kinds of issues with respect to a chatbot."

Representatives for OpenAI and Google did not respond to requests for comment.

During arguments in February, Supreme Court justices expressed uncertainty over whether to weaken the protections enshrined in the law, known as Section 230 of the Communications Decency Act of 1996. While the case does not directly relate to generative AI, Justice Neil Gorsuch noted that AI tools that generate "poetry" and "polemics" likely would not enjoy such legal protections.

The case is only one facet of an emerging conversation about whether Section 230 immunity should apply to AI models trained on troves of existing online data but capable of producing original works.

Section 230 protections generally apply to third-party content from users of a technology platform and not to information a company helped to develop. Courts have not yet weighed in on whether a response from an AI chatbot would be covered.

'CONSEQUENCES OF THEIR OWN ACTIONS'

Democratic Senator Ron Wyden, who helped draft that law while in the House of Representatives, said the liability shield should not apply to generative AI tools because such tools "create content."

"Section 230 is about protecting users and sites for hosting and organizing users' speech. It should not protect companies from the consequences of their own actions and products," Wyden said in a statement to Reuters.

The technology industry has pushed to preserve Section 230 despite bipartisan opposition to the immunity. Industry representatives have argued that tools like ChatGPT operate like search engines, directing users to existing content in response to a query.

"AI is not really creating anything. It's taking existing content and putting it in a different fashion or different format," said Carl Szabo, vice president and general counsel of NetChoice, a tech industry trade group.

Szabo said a weakened Section 230 would present an impossible task for AI developers, threatening to expose them to a flood of litigation that could stifle innovation.

Some experts forecast that courts may take a middle ground, examining the context in which the AI model generated a potentially harmful response.

In cases in which the AI model appears to paraphrase existing sources, the shield may still apply. But chatbots like ChatGPT have been known to create fictional responses that appear to have no connection to information found elsewhere online, a situation experts said would likely not be protected.

Hany Farid, a technologist and professor at the University of California, Berkeley, said that it stretches the imagination to argue that AI developers should be immune from lawsuits over models that they "programmed, trained and deployed."

"When companies are held responsible in civil litigation for harms from the products they produce, they produce safer products," Farid said. "And when they're not held liable, they produce less safe products."

The case before the Supreme Court was brought by the family of Nohemi Gonzalez, a 23-year-old college student from California who was fatally shot in a 2015 rampage by Islamist militants in Paris. The family is appealing a lower court's dismissal of its lawsuit against YouTube.

The lawsuit accused Google of providing "material support" for terrorism and claimed that YouTube, through the video-sharing platform's algorithms, unlawfully recommended videos by the Islamic State militant group, which claimed responsibility for the Paris attacks, to certain users.

(Reporting by Andrew Goudsward; Editing by Will Dunham)
