Meet the magical computing that puts the human into artificial intelligence

Ben Medlock
The internet of things is expanding (Source: Getty Images)
Think Artificial Intelligence and the first thing that comes to mind is probably robots. Thanks to popular culture, AI is considered distinctly un-human, and it can prompt people to worry about the role of technology in our daily lives and who is really in control. However, there’s a new wave of technology fighting to put the human back at the heart of the experience, one designed to make the tools you use every single day learn from you personally.
Called Machine Learning, it underlies technology that constantly evolves to adapt to every individual, anticipating what you want to do next and staying one step ahead. Machine Learning is about building software capable of learning how to perform tasks that are too complex to be solved by traditional programming techniques. During my PhD research at the University of Cambridge, for example, I built programs that could recognise topics in news articles and filter spam email.
As the boom in the connected home continues, and the “internet of things” expands, these principles may soon underpin all of the technology in our lives, while also leading to a fundamental shift in the way software engineers build complex systems. It’s changing our jobs, opening up new products and services, and transforming industries.
Historically, coding has been about distilling the expert knowledge of the programmer into a series of logical structures that cause the system to respond in predictable ways. But our world is increasingly information-saturated, meaning that many of the tasks we face require a level of sophistication that can’t be captured in human-engineered logical rules. So if I’m building a system to translate a sequence of text from one language into another, I can’t write a manageable set of rules to solve that problem. However, I can seek to solve it by creating a framework that allows the software to learn from examples of previously translated sequences to make new translations. In other words, the system gets the expertise it needs from the data.
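The contrast between hand-written rules and learning from data can be made concrete with a toy sketch. The example below is purely illustrative (the messages and labels are invented, and this is far simpler than any production system): rather than coding rules like "messages containing the word prize are spam", the program counts words in labelled examples and lets those counts do the classifying.

```python
from collections import Counter

# Hypothetical training data: (message, label) pairs.
examples = [
    ("win cash prize now", "spam"),
    ("claim your free prize", "spam"),
    ("lunch meeting at noon", "ham"),
    ("notes from the meeting", "ham"),
]

# "Training": learn word counts per label from the examples.
counts = {"spam": Counter(), "ham": Counter()}
for text, label in examples:
    counts[label].update(text.split())

def classify(text):
    """Score a new message by how often each class has seen its words."""
    scores = {
        label: sum(c[word] for word in text.split())
        for label, c in counts.items()
    }
    return max(scores, key=scores.get)

print(classify("free cash prize"))  # → spam
print(classify("meeting notes"))    # → ham
```

Nobody wrote a rule mentioning "prize" or "meeting"; the behaviour came entirely from the examples, which is the shift the paragraph above describes.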
This trend is also big business. Google is a clear leader in this space, with many of its services learning and adapting to individual user needs. The UK has a strong heritage in AI since Alan Turing: earlier this year, British company DeepMind was acquired by Google for £242m and Machine Learning underpins SwiftKey, the company I co-founded.
We use these highly complex techniques to solve a very common problem – typing easily on touchscreen devices like mobile phones. We make keyboard apps that learn from you as an individual to give you more relevant autocorrection and to predict the words you’ll want next. Our technology, which features on more than 200m devices worldwide, understands your personal patterns of behaviour. So when I type “Ben”, the software knows I’m likely to write “Medlock” next. Typing “I’m at the” could mean it predicts “football” or “cafe” next. It can learn to predict everything from quirky nicknames to multiple languages at the same time.
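At its simplest, this kind of next-word prediction amounts to counting which word tends to follow which in a user's own typing history. The sketch below is a minimal, assumed illustration of that idea (the history text is invented, and real keyboard software is vastly more sophisticated):

```python
from collections import defaultdict, Counter

# Hypothetical personal typing history.
history = ("I'm at the cafe . I'm at the football . "
           "Ben Medlock wrote this . Ben Medlock typed that").split()

# Learn bigram counts: for each word, which words have followed it.
following = defaultdict(Counter)
for prev, nxt in zip(history, history[1:]):
    following[prev][nxt] += 1

def predict(word):
    """Return the word most frequently observed after `word`, if any."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict("Ben"))  # → Medlock
print(predict("the"))  # "cafe" or "football", whichever this user types more
```

Because the counts come from an individual's history, two users running the same code would get different predictions – exactly the personalisation described above.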
For a long time, this technology has been the preserve of Android, but with iOS 8, Apple is opening up its mobile platform to allow replacement keyboard apps like SwiftKey’s. This is an exciting development and will change the way millions more people interact with their devices.
But these ideas extend beyond mobile technology and can challenge commonly held definitions of concepts such as understanding. For example, as part of my PhD, I built a system that could mark essays. If you can show mathematically that the system is as reliable as an examiner, does it matter that the machine doesn’t “understand” the content of the essay in the same way a human being would?
Data security and privacy must be taken seriously, and people are understandably wary of computers that can “think” and learn like humans. If algorithms start taking on activities associated with teachers, personal assistants and others, does this harm us or distance us from each other? I believe we need to be open and honest about these questions, and that the debate will ultimately lead us to more clearly understand what it means to be human in a technological world.
Despite the implications, this trend is here to stay and will benefit us as citizens, consumers and business leaders. Technology will be able to better adapt and anticipate our needs at an individual level. The days of the generic experience are numbered. Over the next two decades, invisible systems that mine a wealth of data about every aspect of our lives will become the norm. They will constantly learn, adapt and enhance our decision making, communications and wellbeing.
Ben Medlock will be presenting at the RE.WORK Technology Summit at LSO St Luke's in London on 18-19 September. To discover more emerging technologies, book your place to attend.
