New AI rules could ban surveillance and scoring in the EU
The European Union is poised to ban artificial intelligence systems used for mass surveillance or for ranking social behaviour, while companies developing AI could face fines as high as 4% of global revenue if they fail to comply with new rules governing the software applications.
The rules are part of legislation set to be proposed by the European Commission, the bloc's executive body, according to a draft of the proposal obtained by Bloomberg. The details could change before the commission unveils the measure, which is expected to be as soon as next week.
The EU proposal is expected to include the following rules:
- AI systems used to manipulate human behaviour, exploit information about individuals or groups, carry out social scoring or conduct indiscriminate surveillance would be banned in the EU. Some public-security exceptions would apply.
- Remote biometric identification systems used in public places, like facial recognition, would need special authorization from authorities.
- AI applications considered to be 'high-risk' would have to undergo inspections before deployment to ensure systems are trained on unbiased data sets, in a traceable way and with human oversight.
- High-risk AI would cover systems that could endanger people's safety, lives or fundamental rights, as well as the EU's democratic processes, such as self-driving cars and remote surgery.
- Some companies would be allowed to carry out assessments themselves, while others would be subject to checks by third parties. Compliance certificates issued by assessment bodies would be valid for up to five years.
- Rules would apply equally to companies based in the EU or abroad.
EU member states would be required to appoint assessment bodies to test, certify and inspect the systems, according to the document. Companies that develop prohibited AI services, supply incorrect information or fail to cooperate with national authorities could be fined up to 4% of global revenue.
The rules won't apply to AI systems used exclusively for military purposes, according to the document.
An EU spokesman declined to comment on the proposed rules. Politico reported on the draft document earlier.
As artificial intelligence has started to penetrate every part of society, from shopping suggestions and voice assistants to decisions around hiring, insurance and law enforcement, the EU wants to ensure technology deployed in Europe is transparent, has human oversight and meets its high standards for user privacy.
The proposed rules come as the EU tries to catch up to the U.S. and China on the roll-out of artificial intelligence and other advanced technology. The new requirements could hinder tech firms in the region from competing with foreign rivals if they are delayed in unveiling products because they first have to be tested.
Once proposed by the commission, the rules could still change following input from the European Parliament and the bloc's member states before becoming law.