Artificial Intelligence: The importance of identifying the problem

Barry Jennings

Over the past few years, Artificial Intelligence (AI) has attracted a huge amount of attention. It is anticipated to transform almost every sector including key areas such as finance, healthcare, manufacturing, logistics, automotive and aviation.

The impact of this technology is so great that it has led the government and the AI sector to agree a Sector Deal to boost the UK's global position as a leader in developing AI technologies, which will see £1bn of investment put into the industry. For the legal sector and other professional services, AI poses a double challenge: clients must work out how they will operate their businesses in future (and will therefore seek advice on this), and the delivery of legal and professional services itself will change as AI replaces human review, analysis and advice.

However, before businesses jump on the AI bandwagon, it's important to assess whether AI is the right tool for the problem at hand. There are countless stories of businesses driven by marketing and media hype into implementing new technology that they don't actually need or are not ready for, resulting in a lack of trust and disillusionment with that technology. To be successful, businesses need to identify the specific problems that AI can solve before implementing it into their systems.

Reputation and trust

Many of the ways in which AI can assist us involve trusting the technology with things of high value. For example, AI can assist with large-scale corporate transactions, where errors can be costly to businesses and investors. In the case of driverless vehicles, we are quite literally putting lives in the hands of AI technology.

The capabilities of AI are extensive, but there are also limitations. Factors such as the narrow focus of AI and its dependence on the data it reads are reasons why human intervention is always required alongside current AI solutions. AI must be stringently tested before going to market to combat documented issues, such as sensor failure in driverless vehicles caused by vandalised street signs, and learned bias. Companies using AI should therefore ensure that failure modes have been analysed and that the data fed to AI systems is carefully evaluated for error or bias before use. Only once stringent testing has taken place and been demonstrated can AI customers develop the trust that is required for AI to grow.

Essential legal and ethical implications

In terms of the legal implications of AI, vicarious liability and agency cannot be applied to AI in the same way as they would be for employee liability. Due to the black box nature of AI and the lack of transparency in its reasoning, it is difficult to attribute liability. The Fairchild principle of a 'material increase in risk' could be applied in future to determine liability, but without legislative clarification, the position is not entirely clear.

Furthermore, AI can monitor price changes within a market and react very quickly, thereby potentially stifling competition by creating a form of collusion in the market. The European Commission is taking this competition threat seriously and is exploring solutions to resolve these types of issues.

From an intellectual property perspective, legislation has not been updated to cover the ownership of AI-generated intellectual property. Companies will need to ensure that ownership of any materials or intellectual property created by AI vests in, or is transferred to, them.

In terms of ethics, the law cannot cover every moral scenario. AI is already producing unintended gender, race and socio-economic bias based on the data it works with. And while AI works on the principle that more data produces better results, where should the line be drawn to prevent intense surveillance and control? Legislation may need to be updated to contemplate the changes AI will create in our lives and our industries, but there are no straightforward answers to these moral questions.

Artificial Intelligence and the legal sector

There is significant media hype around the dramatic effects AI will have on the legal sector. While we are on the cusp of momentous change, it's important to remember that AI has been used by law firms for years. AI is an excellent tool for providing answers, but for it to be a real solution, lawyers must still ask the questions and analyse the answers it produces. An initial investment of time is also required: lawyers must teach machines the relevant rules before machine learning can assist. So while the legal industry will change, there is currently time for these adjustments to be made, and law firms should work together with clients to meet the challenges posed.

A number of issues have prevented businesses from truly embracing AI solutions, with recurring themes including a lack of trust and confidence, a failure to identify a specific problem for AI to solve, and the need for AI solutions to outperform any comparable human alternative. The media hype around AI means customers expect more than it can currently deliver, and businesses must focus on what they want AI to help them achieve before investing in it.
