
OpenAI backs idea of requiring licenses for advanced AI systems

An internal policy memo drafted by OpenAI shows the company supports the idea of requiring government licenses from anyone who wants to develop advanced AI systems.

By: Bloomberg | Updated on: Jul 20 2023, 23:50 IST
Sam Altman, chief executive officer of OpenAI Inc., in Sun Valley, Idaho, US. (Bloomberg)

An internal policy memo drafted by OpenAI shows the company supports the idea of requiring government licenses from anyone who wants to develop advanced artificial intelligence systems. The document also suggests the company is willing to pull back the curtain on the data it uses to train image generators.

The creator of ChatGPT and DALL-E laid out a series of AI policy commitments in the internal document following a May 4 meeting between White House officials and tech executives including OpenAI Chief Executive Officer Sam Altman. “We commit to working with the US government and policy makers around the world to support development of licensing requirements for future generations of the most highly capable foundation models,” the San Francisco-based company said in the draft.

The idea of a government licensing system co-developed by AI heavyweights such as OpenAI sets the stage for a potential clash with startups and open-source developers who may see it as an attempt to make it more difficult for others to break into the space. It's not the first time OpenAI has raised the idea: During a US Senate hearing in May, Altman backed the creation of an agency that, he said, could issue licenses for AI products and yank them should anyone violate set rules.

The policy document comes just as Microsoft Corp., Alphabet Inc.'s Google and OpenAI are expected to publicly commit Friday to safeguards for developing the technology — heeding a call from the White House. According to people familiar with the plans, the companies will pledge to the responsible development and deployment of AI.

OpenAI cautioned that the ideas laid out in the internal policy document will differ from those soon to be announced by the White House alongside the tech companies. Anna Makanju, the company's vice president of global affairs, said in an interview that the company isn't “pushing” for licenses so much as it believes such permitting is a “realistic” way for governments to track emerging systems.

“It's important for governments to be aware if super powerful systems that might have potential harmful impacts are coming into existence,” she said, and there are “very few ways that you can ensure that governments are aware of these systems if someone is not willing to self-report the way we do.”

Makanju said OpenAI supports licensing regimes only for AI models more powerful than its current GPT-4 model, and that it wants to ensure smaller startups are free from too much regulatory burden. “We don't want to stifle the ecosystem,” she said.

OpenAI also signaled in the internal policy document that it's willing to be more open about the data it uses to train image generators such as DALL-E, saying it was committed to “incorporating a provenance approach” by the end of the year. Data provenance, the practice of documenting where data comes from so that developers can be held accountable for transparency in their work, has been raised by policy makers as critical to keeping AI tools from spreading misinformation and bias.

The commitments laid out in OpenAI's memo track closely with some of Microsoft's policy proposals announced in May. OpenAI has noted that, despite receiving a $10 billion investment from Microsoft, it remains an independent company.

The firm disclosed in the document that it's conducting a survey on watermarking — a method of tracking the authenticity of and copyrights on AI-generated images — as well as detection and disclosure in AI-made content. It plans to publish the results.

The company also said in the document that it was open to external red teaming — in other words, allowing outsiders to test vulnerabilities in its systems on multiple fronts, including offensive content, the risk of manipulation and misinformation, and bias. The firm said in the memo that it supports the creation of an information-sharing center to collaborate on cybersecurity.

In the memo, OpenAI appears to acknowledge the potential risk that AI systems pose to job markets and inequality. The company said in the draft that it would conduct research and make recommendations to policy makers to protect the economy against potential “disruption.”


First Published Date: 20 Jul 2023, 23:50 IST