Tuesday 15 December 2020 9:28 am CFA Institute Talk

Artificial intelligence: people – not robots – are in the driving seat


Like it or not, artificial intelligence (AI) is already part of our daily lives. From smartphones to Alexa virtual assistants, AI and its applications are accepted norms today.
While we appreciate that AI can automate repetitive workplace tasks or even drive a car, its implications reach much further.

Luminaries including Elon Musk and Bill Gates have spoken out about the potential downsides of AI. At times, they have even issued warnings. Gates compared AI to nuclear energy — simultaneously dangerous and full of possibility.

Of course, these visionaries appreciate AI’s potential. Musk was among the founders of OpenAI, a research laboratory dedicated to ensuring that AI serves all of humanity.

While Musk and Gates have focused on how to best harness AI’s power, Andrew Yang has looked to cushioning its toll on workers: during his run for the 2020 Democratic presidential nomination, he pitched the idea of a universal basic income (UBI) for all to help offset AI-driven automation’s impact on the labour force.

The message is clear: now that AI applications have been developed, companies and governments — and investment professionals — must stay ahead of the curve of the AI revolution.

Need to tread carefully

What makes AI so powerful is its ability to go beyond making inferences to making predictions — and even decisions — by learning.

For most of us, AI is just a modern convenience: it feeds us adverts for snow boots in our weather app just as a winter storm is forecast, for example. AI can also answer our seemingly mundane questions. What’s the weather like? Yet even these mundane questions have more serious implications: they can be mined for patterns in our behaviour and used to collect data about our lives.

Without the proper guardrails, AI can produce unintended and biased outputs. That means ethical and optimisation criteria must be at the core of all effectively constructed AI systems. For example, an AI tool applied to college admissions, job applications or loan approvals must be designed and trained not to prioritise physical features or other irrelevant and potentially discriminatory characteristics.

AI can be just as susceptible to bias as its human programmers. That’s why we need to better understand the underlying algorithms and the data that feed them.
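How a model inherits its programmers’ (or its data’s) bias can be shown with a toy sketch. The data and the majority-rule “model” below are entirely hypothetical, not any real lending system: a naive rule learned from skewed historical decisions reproduces those decisions, rejecting one applicant and approving another who differ only in group membership.

```python
from collections import defaultdict

# Hypothetical past loan decisions: (income_band, group, approved).
# Group "B" applicants were historically approved less often.
history = [
    ("high", "A", True), ("high", "A", True), ("low", "A", True),
    ("high", "B", False), ("high", "B", True), ("low", "B", False),
]

def train(records):
    """'Learn' an approval rule: approve a profile if most past
    applicants with the same (income_band, group) were approved."""
    tally = defaultdict(list)
    for income, group, approved in records:
        tally[(income, group)].append(approved)
    return {key: sum(votes) > len(votes) / 2 for key, votes in tally.items()}

model = train(history)

# Two applicants identical in income, differing only in group:
print(model[("high", "A")])  # True  -- approved
print(model[("high", "B")])  # False -- rejected, bias reproduced
```

Nothing in the algorithm mentions group prejudice; the bias lives entirely in the training data, which is why scrutinising the data that feeds an algorithm matters as much as scrutinising the algorithm itself.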

The Silicon Valley mantra ‘move fast and break things’ is no longer prudent. We must decide, case by case, whether applying AI will improve a process, whether the system needs further refinement first, or whether relying on human judgment is best.

AI-related ethical issues need to be addressed in accessible ways that the public can understand. Only then can we chart the path forward. We must also recognise what we don’t know and ensure that decision makers who ‘trust’ AI know the risks. Being wrong about a customer’s classification is very different from being wrong about a person’s health.

AI expands and enhances what we can do with our computers, and we must make sure it remains under our control. Its capacity for ever-faster computations, combined with the decision-making power we delegate to systems built on those computations, can pose a threat.

That’s why we have to understand the processes behind the technology and have a say in where and how AI is applied. Indeed, systems can be tricked into ‘thinking’ something false is actually true. For example, an artist pulled a wagon loaded with 99 mobile phones through the streets of Berlin, fooling Google Maps into reporting a traffic jam on what were, in fact, empty streets.

Ethical questions on employment impact

One of the biggest ethical questions AI evokes relates to its potential impact on employment. What will happen to those workers whose jobs AI automates out of existence? This is not a new dilemma. Automation in the workplace has been a catalyst for economic change and political upheaval throughout history.

In recent generations, factory workers have been especially vulnerable to the disruption created by new technology. Today new categories of workers are casting a wary eye on AI. Junior journalists are now competing with AI-driven machines that write formulaic articles about sports results. And marketers are watching as AI takes on some of the more data-driven elements of their job, freeing them up – at least in theory – to do more creative work.

The current automation revolution differs from its predecessors. Machines can now perform tasks previously thought to require human involvement; computers can make decisions without ‘knowing’ they are making them. Previous industrial revolutions made human labour more efficient. In today’s industrial revolution, AI is bypassing the human element altogether.

AI has the potential to affect every job on the planet, from factory worker to investment adviser, and we must decide whether increased efficiency and profits are worth the cost in lost jobs.

Automation for the people?

But ultimately, humans design these machines. We make them and we feed them the data. This means that we are still in the driver’s seat. We will decide which AI systems to develop and how.

True progress is not determined by how far technology goes, but by how well we comprehend it and how well it contributes to solving the world’s greatest problems. Although we must always be mindful of AI’s potential liabilities, this evolving science can empower not only individuals and companies but also society.

The key to harnessing this latent power is ongoing education paired with intelligent discussion and decision-making around AI’s inherent ethical dilemmas. Only then can we effectively guide AI’s development and ensure its beneficial adoption by society at large.


If you liked this post, don’t forget to subscribe to the Enterprising Investor.


By Sameer S. Somal, CFA and Pablo A. Ruz Salmones

All posts are the opinion of the author. As such, they should not be construed as investment advice, nor do the opinions expressed necessarily reflect the views of CFA Institute or the author’s employer.

Image credit: ©Getty Images / FG Trade
