
Europe's artificial intelligence rules are facing a do-or-die moment

Hailed as a world first, European Union artificial intelligence rules are facing a make-or-break moment as negotiators try to hammer out the final details this week — talks complicated by the sudden rise of generative AI that produces human-like work.

By: AP
| Updated on: Dec 05, 2023, 06:42 IST
First suggested in 2019, the EU's AI Act was expected to be the world's first comprehensive set of AI regulations. (Pexels)

First suggested in 2019, the EU's AI Act was expected to be the world's first comprehensive set of AI regulations, further cementing the 27-nation bloc's position as a global trendsetter when it comes to reining in the tech industry.


But the process has been bogged down by a last-minute battle over how to govern systems that underpin general purpose AI services like OpenAI's ChatGPT and Google's Bard chatbot. Big tech companies are lobbying against what they see as overregulation that stifles innovation, while European lawmakers want added safeguards for the cutting-edge AI systems those companies are developing.


Meanwhile, the U.S., U.K., China and global coalitions like the Group of 7 major democracies have joined the race to draw up guardrails for the rapidly developing technology, underscored by warnings from researchers and rights groups of the existential dangers that generative AI poses to humanity as well as the risks to everyday life.

“Rather than the AI Act becoming the global gold standard for AI regulation, there's a small but growing chance that it won't be agreed before the European Parliament elections” next year, said Nick Reiners, a tech policy analyst at Eurasia Group, a political risk advisory firm.

He said “there's simply so much to nail down” at what officials are hoping is a final round of talks Wednesday. Even if they work late into the night as expected, they might have to scramble to finish in the new year, Reiners said.

When the European Commission, the EU's executive arm, unveiled the draft in 2021, it barely mentioned general purpose AI systems like chatbots. The proposal to classify AI systems by four levels of risk — from minimal to unacceptable — was essentially intended as product safety legislation.

Brussels wanted to test and certify the information used by algorithms powering AI, much like consumer safety checks on cosmetics, cars and toys.

That changed with the boom in generative AI, which sparked wonder by composing music, creating images and writing essays resembling human work. It also stoked fears that the technology could be used to launch massive cyberattacks or create new bioweapons.

The risks led EU lawmakers to beef up the AI Act by extending it to foundation models. Also known as large language models, these systems are trained on vast troves of written works and images scraped off the internet.

Foundation models give generative AI systems such as ChatGPT the ability to create something new, unlike traditional AI, which processes data and completes tasks using predetermined rules.

Chaos last month at Microsoft-backed OpenAI, which built one of the most famous foundation models, GPT-4, reinforced for some European leaders the dangers of allowing a few dominant AI companies to police themselves.

While CEO Sam Altman was fired and swiftly rehired, some board members with deep reservations about the safety risks posed by AI left, signaling that AI corporate governance could fall prey to boardroom dynamics.

“At least things are now clear” that companies like OpenAI defend their businesses and not the public interest, European Commissioner Thierry Breton told an AI conference in France days after the tumult.

Resistance to government rules for these AI systems came from an unlikely place: France, Germany and Italy. The EU's three largest economies pushed back with a position paper advocating for self-regulation.

The change of heart was seen as a move to help homegrown generative AI players such as French startup Mistral AI and Germany's Aleph Alpha.

Behind it "is a determination not to let U.S. companies dominate the AI ecosystem like they have in previous waves of technologies such as cloud (computing), e-commerce and social media,” Reiners said.

A group of influential computer scientists published an open letter warning that weakening the AI Act this way would be “a historic failure.” Executives at Mistral, meanwhile, squabbled online with a researcher from an Elon Musk-backed nonprofit that aims to prevent “existential risk” from AI.

AI is “too important not to regulate, and too important not to regulate well,” Google's top legal officer, Kent Walker, said in a Brussels speech last week. “The race should be for the best AI regulations, not the first AI regulations."

Foundation models, used for a wide range of tasks, are proving the thorniest issue for EU negotiators because regulating them "goes against the logic of the entire law,” which is based on risks posed by specific uses, said Iverna McGowan, director of the Europe office at the digital rights nonprofit Center for Democracy and Technology.

The nature of general purpose AI systems means “you don't know how they're applied,” she said. At the same time, regulations are needed "because otherwise down the food chain there's no accountability” when other companies build services with them, McGowan said.

Altman has proposed a U.S. or global agency that would license the most powerful AI systems. He suggested this year that OpenAI could leave Europe if it couldn't comply with EU rules but quickly walked back those comments.

Aleph Alpha said a “balanced approach is needed" and supported the EU's risk-based approach. But it's “not applicable” to foundation models, which need “more flexible and dynamic” regulations, the German AI company said.

EU negotiators still have yet to resolve a few other controversial points, including a proposal to completely ban real-time public facial recognition. Countries want an exemption so law enforcement can use it to find missing children or terrorists, but rights groups worry that will effectively create a legal basis for surveillance.

The EU's three branches of government are facing one of their last chances to reach a deal Wednesday.

Even if they do, the bloc's 705 lawmakers still must sign off on the final version. That vote needs to happen by April, before they start campaigning for EU-wide elections in June. The law wouldn't take effect until after a transition period, typically two years.

If they can't make it in time, the legislation would be put on hold until later next year — after new EU leaders, who might have different views on AI, take office.

“There is a good chance that it is indeed the last one, but there is equally a chance that we would still need more time to negotiate,” Dragos Tudorache, a Romanian lawmaker co-leading the European Parliament's AI Act negotiations, said in a panel discussion last week.

His office said he wasn't available for an interview.

“It's a very fluid conversation still," he told the event in Brussels. “We're going to keep you guessing until the very last moment.”
