Tuesday 11 May 2021 11:56 am Fladgate Talk

Regulating the robots – unpacking the EU’s new AI Regulation

Tim Wright is a partner in the corporate department at London law firm Fladgate. He specialises in commercial, outsourcing and technology transactions across various sectors, covering cloud computing, digital platforms and e-commerce, web development and hosting, software development and licensing, system integration projects, and business transformation and digital projects.

The European Commission recently published a first draft of its new Artificial Intelligence Regulation (AI Regulation).

The AI Regulation covers both Providers (i.e. developers) and business Users of AI systems. Providers have a broad set of obligations owed to Users. Under the AI Regulation, AI systems are categorised as:

– Prohibited: certain “manipulative” and “exploitative” uses of AI are outlawed altogether, as are AI systems operated by public authorities for social scoring of individuals.

– Low risk: AI systems which are “low risk” are subject to transparency requirements. For example, Providers must ensure that Users interacting with an AI system such as a chatbot are informed that they are doing so, unless this is obvious from the circumstances, and Users deploying AI systems to create “deep fakes” must ensure that such content is labelled.

– High risk: AI systems which are “high risk” (i.e. systems that have a potentially significant harmful impact on the health, safety or fundamental rights of persons in the EU, such as uses in the context of employment, education and credit scoring) are subject to a more complex set of regulatory requirements, which are primarily imposed on the Providers of such systems.

The new AI Regulation places human oversight and ultimate control over AI systems at its core. As such, Providers of high risk AI systems will need to comply with an array of new obligations, including:

– implementing risk and quality management systems;

– validating the quality of the data used to train AI systems;

– providing clear instructions for Users;

– ensuring “state of the art” consistency, accuracy, robustness and cybersecurity of AI systems;

– self-certifying conformity assessment via CE marking;

– registering details of the AI system on an EU database;

– monitoring performance of the AI system, reporting serious incidents and breaches, and correcting, withdrawing or recalling non-conforming AI systems; and

– cooperating with and providing information to regulators when required to do so.

In a similar vein to the General Data Protection Regulation (GDPR), the AI Regulation will have global reach, since it will govern AI systems, and the outputs of such systems, that have an impact in the European Union. As a result, AI developers around the world will need to pay careful attention to complying with the new AI Regulation if they wish to access the EU market (although AI systems developed or used exclusively for military purposes are specifically excluded). Providers located in third countries outside the EU will be required to appoint EU representatives.

Just like the GDPR, the AI Regulation brings with it the risk of enormous fines – in this case up to the greater of €30 million or 6% of annual turnover for the most serious offences, which relate to prohibited AI systems and the quality of training data (addressing the issue of AI bias and discrimination). The GDPR and the AI Regulation are not mutually exclusive – the AI Regulation will sit alongside the GDPR as an additional set of requirements.

The AI Regulation will now make its way through the EU’s legislative processes and is not expected to come into force until 2024 at the earliest. 
