Sam Altman asked for AI rules. The EU answered

Europe’s AI Act has some good ideas around transparency, but it’s currently too complex for its own good.

By: Bloomberg
Updated on: Jun 14, 2023, 21:44 IST
The world's first AI sculpture, "The Impossible Statue", is displayed at the Tekniska museum in Stockholm on June 8, 2023. (AFP)

America innovates, Europe regulates. Just as the world is starting to come to grips with OpenAI, whose boss Sam Altman has both leapfrogged the competition and pleaded for global rules, the European Union has responded with the Artificial Intelligence Act, its own bid for AI superpower status by being the first to set minimum standards. It faces a European Parliament vote on Wednesday.

Yet we're a long way from the deceptively simple world of Isaac Asimov's robot stories, in which sentient machines delivered the benefits of powerful "positronic brains" with just three rules in place: don't harm humans, obey humans, and defend your own existence. AI is clearly too important not to regulate thoroughly, but the EU will have its work cut out to reduce the Act's complexity while promoting innovation.

The AI Act has some good ideas focusing on transparency and trust: Chatbots will have to declare whether they're trained on copyrighted material, deepfakes will have to be labeled as such, and a raft of newly added obligations for the kind of models used in generative AI will require serious efforts to catalog datasets and take responsibility for how they're used.

Lifting the lid on opaque machines that process huge swathes of human output is the right idea, and it gets us closer to treating people's data with dignity. As Dragos Tudorache, co-rapporteur of the law, told me recently, the purpose is to promote "trust and confidence" in a technology that has attracted huge amounts of investment and excitement yet also produced some very dark failures. Self-regulation isn't an option, he says, and neither is "running into the woods" and doing nothing out of fear that AI could one day wipe out humanity.

The Act also carries a lot of complexity, however, and runs the paradoxical risk of setting the bar too high to promote innovation but not high enough to avoid unpredictable outcomes. The main approach is to categorize AI applications into buckets of risk, from minimal (spam filters, video games) to high (workplace recruitment) to unacceptable (real-time facial recognition).

That makes sense from a product-safety point of view, with providers of AI systems expected to meet rules and requirements before putting their products on the market. Yet the category of high-risk applications is a broad one, and the downstream chain of responsibility in an application like ChatGPT shows how tech can blur product-safety frameworks. When a lawyer unwittingly relies on AI to craft a motion that turns out to be full of made-up case law, are they using the product as intended or misusing it?

It's also not clear how exactly this will work with other data-privacy laws like the EU's GDPR, which was used by Italy as justification for a temporary block on ChatGPT. And while more transparency on copyright-protected training data makes sense, it could conflict with past copyright exceptions granted for data mining back when AI was viewed less nervously by creative industries.

All this means there's a real possibility that the actual outcome of the AI Act might entrench the EU's dependency on big US tech firms, from Microsoft Corp. to Nvidia Corp. European firms are champing at the bit to tap into the potential productivity benefits of AI, but it's likely that the large incumbent providers will be best positioned to handle the combination of estimated upfront compliance costs of at least $3 billion and non-compliance fines of up to 7% of global revenue.

Adobe Inc. has already offered to legally compensate businesses if they're sued for copyright infringement over any images its Firefly tool creates, according to Fast Company. Some firms may take the calculated risk of avoiding the EU entirely: Alphabet Inc. has yet to make its chatbot Bard available there.

The EU has a lot of fine-tuning to do as final negotiations begin on the AI Act, which might not come into force until 2026. Countries such as France that are nervous about losing more innovation ground to the US will likely push for more exemptions for smaller businesses. Bloomberg Intelligence analyst Tamlin Bason sees a possible “middle ground” on restrictions. That should be accompanied by initiatives to foster new tech ideas such as promoting ecosystems linking universities, startups and investors. There should also be more global coordination at a time when angst around AI is widespread — the G7's new Hiroshima AI process looks like a useful forum to discuss issues like intellectual property rights.

Perhaps one bit of good news is that AI is not about to destroy all jobs held by human compliance officers and lawyers. Technology consultant Barry Scannell says that companies will be looking at hiring AI officers and drafting AI impact assessments, similar to what happened in the aftermath of the GDPR. Reining in the robots requires more human brainpower — perhaps one twist you won't get in an Asimov story.

Lionel Laurent is a Bloomberg Opinion columnist covering digital currencies, the European Union and France. Previously, he was a reporter for Reuters and Forbes.

