Is your workplace AI ethical, or even useful?

We already use AI in everyday life – like telling Alexa to play Ed Sheeran on loop (Source: Getty)

The artificial intelligence – or AI – revolution is already well underway. While it feels like almost every week another report comes out speculating about an imagined, doomed future of robots stealing our jobs, AI and chatbot technology are already seamlessly integrated into our daily lives without us even noticing.

Every time you ask Siri where the nearest post office is or ask Alexa to play Ed Sheeran’s Divide on loop, every time your iPhone suggests the next word in your text and Google predicts your search, this tech is motoring away in the background, making hundreds of processes around you quicker and easier.

So what does AI mean for the future of our workplaces? Everything – if we use it right.

Given that the AI market is forecast to grow from $643.7m in 2016 to $36.8bn by 2025, companies are desperate to make the most of the technology. It is already unlocking more productive, efficient and hassle-free workplaces, and has the potential to do so much more, if we harness it in the right way. Here are three fundamental questions we should ask about any AI before we create or deploy it.

1. Is it actually useful?

With companies like Spotify, Facebook and Google investing huge amounts in bot development and even appointing senior AI executives, it’s fair to say that bots and AI have firmly entered the mainstream, and it seems like every brand is trying to get in on the action.

This is all well and good, but we must make sure we are channelling this technology to solve actual problems. AI could transform offices by handling our mundane admin tasks so we can focus our energy on more rewarding work. A Harvard Business Review survey from late last year found that managers across all levels spend more than half their time on administrative tasks, from juggling illness and flexible working requests, to making sure data entry and reports are up to a consistent standard – all things machines could take over.

2. Where is the data behind it coming from?

AI is only as good as the data that we feed it. If we’re not careful about our sources, chatbots could end up seriously offending the audiences they’re supposed to be serving. We have the opportunity to create this new world without any of the biases – racial, gender, or otherwise – that exist in the real world. It’s our responsibility to invest time and capital in careful consideration of the ethical, legal and societal impacts of this new technology. Leading thinkers and AI developers are urging the government to ensure ethical codes are adhered to, to avoid reinforcing existing societal bias.

3. Was it built by a diverse team?

The best way to avoid bias, of course, is to ensure that a diverse team is building this technology in the first place – not always an easy thing to do in the famously male-dominated tech industry, but one that we must be constantly committed to.

Technology falls short when it is not built inclusively – early voice-activated software struggled to recognise female voices because it had only been tested on its all-male programming team, and early cameras were calibrated to capture white skin tones better than non-white ones.

The fourth industrial revolution is already here, and its potential to transform our workplaces – and our world – is limitless. But we owe it to the future generations who will be using this tech to get it right.