
EU's AI rules: How do they work and will they affect people everywhere? 4 questions answered

European Union officials worked into the late hours last week hammering out an agreement on world-leading rules meant to govern the use of artificial intelligence in the 27-nation bloc.

| Updated on: Dec 12 2023, 06:31 IST
The Artificial Intelligence Act is the latest set of regulations governing technology in Europe that is destined to have global impact. (Pexels)


Here's a closer look at the AI rules:

How do the AI rules work?

The AI Act takes a “risk-based approach" to products or services that use artificial intelligence and focuses on regulating the uses of AI rather than the technology. The legislation is designed to protect democracy, the rule of law and fundamental rights like freedom of speech, while still encouraging investment and innovation.

The riskier an AI application is, the stiffer the rules. Those that pose limited risk, such as content recommendation systems or spam filters, would have to follow only light rules such as revealing that they are powered by AI.

High-risk systems, such as medical devices, face tougher requirements like using high-quality data and providing clear information to users.

Some AI uses are banned because they're deemed to pose an unacceptable risk, like social scoring systems that govern how people behave, some types of predictive policing and emotion recognition systems in schools and workplaces.

People in public can't have their faces scanned by police using AI-powered remote “biometric identification” systems, except for serious crimes like kidnapping or terrorism.
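The tiered structure described above can be sketched as a simple lookup table. This is purely illustrative: the tier names and obligations below are paraphrases of this article's summary, not the Act's legal definitions.

```python
# Illustrative only: a simplified sketch of the AI Act's risk-based
# approach as summarized in this article. These labels and duties are
# paraphrases, not the regulation's legal text.
RISK_TIERS = {
    "minimal": "no new obligations",
    "limited": "light transparency duties, e.g. revealing the system is powered by AI",
    "high": "tougher requirements, e.g. high-quality data and clear information to users",
    "unacceptable": "banned outright, e.g. social scoring systems",
}

def obligations(tier: str) -> str:
    """Look up the (paraphrased) obligations attached to a risk tier."""
    return RISK_TIERS[tier]
```

The key design idea is that the regulation attaches duties to the use case's risk level, not to the underlying technology, so the same model could fall into different tiers depending on how it is deployed.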

The AI Act won't take effect until two years after final approval from European lawmakers, expected in a rubber-stamp vote in early 2024. Violations could draw fines of up to 35 million euros ($38 million) or 7% of a company's global revenue.
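The penalty ceiling reported above works as the greater of a fixed amount and a share of worldwide revenue. A minimal sketch of that arithmetic, assuming whole-euro amounts (the function name `max_fine` is ours, and the Act itself sets different ceilings for different violation types):

```python
def max_fine(global_revenue_eur: int) -> int:
    """Upper bound on a fine under the headline ceiling reported here:
    the greater of 35 million euros or 7% of global annual revenue.
    Uses integer euros to keep the arithmetic exact."""
    return max(35_000_000, global_revenue_eur * 7 // 100)

# For a company with 1 billion euros in global revenue, the ceiling is
# 7% of revenue: 70 million euros.
```

For smaller companies the flat 35 million euro figure dominates; the percentage only bites once global revenue exceeds 500 million euros.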

Will the rules affect people outside Europe?

The AI Act will apply to the EU's nearly 450 million residents, but experts say its impact could be felt far beyond because of Brussels' leading role in drawing up rules that act as a global standard.

The EU has played this role before with previous tech directives, most notably by mandating a common charging plug, which forced Apple to abandon its in-house Lightning cable.

While many other countries are figuring out whether and how they can rein in AI, the EU's comprehensive regulations are poised to serve as a blueprint.

“The AI Act is the world's first comprehensive, horizontal and binding AI regulation that will not only be a game-changer in Europe but will likely significantly add to the global momentum to regulate AI across jurisdictions,” said Anu Bradford, a Columbia Law School professor who's an expert on EU law and digital regulation.

"It puts the EU in a unique position to lead the way and show to the world that AI can be governed and its development can be subjected to democratic oversight,” she said.

Even what the law doesn't do could have global repercussions, rights groups said.

By not pursuing a full ban on live facial recognition, Brussels has “in effect greenlighted dystopian digital surveillance in the 27 EU Member States, setting a devastating precedent globally,” Amnesty International said.

The partial ban is “a hugely missed opportunity to stop and prevent colossal damage to human rights, civil space and rule of law that are already under threat throughout the EU.”

Amnesty also decried lawmakers' failure to ban the export of AI technologies that can harm human rights — including for use in social scoring, something China does to reward obedience to the state through surveillance.

What are other countries doing?

The world's two major AI powers, the U.S. and China, also have started the ball rolling on their own rules.

U.S. President Joe Biden signed a sweeping executive order on AI in October, which is expected to be bolstered by legislation and global agreements.

It requires leading AI developers to share safety test results and other information with the government. Agencies will create standards to ensure AI tools are safe before public release and issue guidance to label AI-generated content.

Biden's order builds on voluntary commitments made earlier by technology companies including Amazon, Google, Meta and Microsoft to make sure their products are safe before they're released.

China, meanwhile, has released “interim measures” for managing generative AI, which apply to text, pictures, audio, video and other content generated for people inside China.

President Xi Jinping has also proposed a Global AI Governance Initiative, calling for an open and fair environment for AI development.

How do the rules handle generative AI like ChatGPT?

The spectacular rise of OpenAI's ChatGPT showed that the technology was making dramatic advances and forced European policymakers to update their proposal.

The AI Act includes provisions for chatbots and other so-called general purpose AI systems that can do many different tasks, from composing poetry to creating video and writing computer code.

Officials took a two-tiered approach. Most general purpose systems face basic transparency requirements, such as disclosing details about their data governance and, in a nod to the EU's environmental sustainability efforts, reporting how much energy was used to train the models on vast troves of written works and images scraped off the internet.

They also need to comply with EU copyright law and summarize the content they used for training.

Stricter rules are in store for the most advanced AI systems with the most computing power, which pose “systemic risks” that officials want to stop spreading to services that other software developers build on top of them.



First Published Date: 12 Dec, 06:31 IST