The EU has set out plans to regulate the use of "risky" artificial intelligence (AI) such as facial recognition, with hefty fines for companies that breach the rules.
The draft rules, published by the European Commission today, mark the first international efforts to curb the use of technology that critics warn could be used by repressive regimes to control citizens.
If approved, the legislation could see the EU take the lead in the area, setting out a path for other countries including Britain.
Under the proposals, uses of AI for mass surveillance or systems that allow governments to carry out social scoring would be banned, alongside applications that exploit the vulnerabilities of children.
Applications of the technology considered high-risk, such as in recruitment, critical infrastructure, credit scoring, migration and law enforcement, would also be subject to strict controls.
Companies that breach the rules would be handed a fine of up to six per cent of their global turnover or €30m (£26m) — whichever is higher.
The proposed measures come as China moves ahead in the global race to roll out AI-based technologies.
The country has come under fire for its use of data and tech such as facial recognition in a range of social scoring systems being developed to monitor and assess citizens.
“The proposed EU AI regulation is extremely ambitious in its proposals,” said Herbert Swaniker, tech lawyer at Clifford Chance.
“The rules will give the EU unprecedented oversight of high-risk AI used within the bloc and require organisations to be more transparent with their AI uses.”
But he warned that significant work remains to clarify the definition of high-risk AI and to set out how vendors will be expected to comply with the regulations.
The draft laws will have to be agreed by the European Parliament and EU member states before coming into force, a process that could take more than a year.