Court systems must adapt to the rise of the robots

Michael Colledge
[Image: Continental AG Showcases New Automotive Technologies. Source: Getty]

Intellectual property disputes are big business. The UK has seen significant growth in intellectual property litigation, and the court system has adapted accordingly.

But this rise is not the only challenge facing the courts. With the advent of artificial intelligence, the law must adapt to take account of the new challenges created by automated decision-making and the errors that may arise from it.

At the Law Society’s London Technology Week, speakers argued for and against legislation to govern AI: on the one hand, there is no existing legal framework fitted to AI; on the other, innovation must be allowed to thrive.

Computers and mobile phones have developed to the point that robotics and algorithms are hidden features of our everyday lives; all of these incorporate some form of intellectual property that can become the subject of legal disputes.

In respect of legal issues arising from mistakes or accidents, the use of automated systems has not posed any particular problem for the law – the owner or manufacturer of the process or machinery will be liable for any negligent mistakes or contractual losses caused by their equipment or programs.

Biased algorithms

The Equality Act 2010 also illustrates the effectiveness of current legislation. If algorithms are used to assess a candidate’s eligibility for a product, service or even a job, a claim of discrimination may arise if the process is biased.

In December 2016 Google presented a research paper considering discrimination and bias in AI and algorithms. The purpose of the research was to test whether there was any racial or gender bias in decision-making. The research followed a number of high-profile incidents in 2016, including a crime prediction system that showed racial bias and a chatbot launched by Microsoft that could be persuaded to learn sexist and other inappropriate language.

Who is liable?

It will not be long before driverless cars are seen on our roads, and legal disputes will arise over liability when things go wrong. Courts hearing Road Traffic Act claims will have to adapt and consider AI, its role in accidents and where fault lies.

When a human makes a mistake, he or she is liable in contract or in tort (negligence). Until artificially intelligent systems acquire their own legal personality, the owners of the intellectual property or of the driverless car may be liable for any errors made by those systems.

This raises an interesting question where driverless cars are pre-programmed to favour the safety of either passengers or pedestrians: does fault lie with the manufacturer or with the owner? Here the law may also see developments and changes in insurance law and insurance markets.

But it is unlikely that the law will adapt to give artificial intelligence a separate legal personality in the near future, partly because of the various rationales behind legal personality, such as the attribution of blame.

Fortunately, a major reform of technical education has been proposed to prepare the UK for the high-skilled jobs of the future, and there are proposals to modernise the courts system within this parliament.