Man vs machine: One chart showing how artificial intelligence created by Google’s Deep Mind beat humans at classic 80s Atari games

 
Lynsey Barber
Pacman and Space Invaders tested Deep Mind's AI (Source: Kurtxio/Flickr)

Google just made a massive leap forward in developing artificial intelligence (AI) - with the help of classic Atari games like Space Invaders and Pong.

Researchers working at the secretive London company Deep Mind have succeeded in creating a computer program which can learn how to play computer games, with no input whatsoever from humans.

Within hours of its first encounter with the classic Atari game Pong, the AI not only learned how the game worked and how to play it, but became better than a human player.

It's no wonder Google was so impressed with Deep Mind's progress in artificial intelligence that it snapped up the company last year for £400m.

The AI combines deep neural networks with reinforcement learning, which, researchers say, for the first time creates "artificial intelligence capable of learning to excel at a diverse array of challenging tasks".

Deep Blue, IBM's chess-playing computer with a similar-sounding name, did not in fact learn its skills on its own: it beat grandmasters by applying human programming and expert chess knowledge.

Deep Mind’s AI, on the other hand, takes in every piece of information about the game, gathering what is essentially a large data set, and can then learn from it to improve its game.
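The learning-from-experience idea can be sketched in a few lines of code. This is only a toy illustration of reinforcement learning, not Deep Mind's actual system: it uses a small lookup table over a handful of states rather than a deep neural network over raw pixels, and the tiny "game" (`toy_step`) is invented purely for demonstration.

```python
import random

def q_learning(n_states, n_actions, step, episodes=500,
               alpha=0.1, gamma=0.9, epsilon=0.1, seed=0):
    """Tabular Q-learning: learn action values purely from rewards."""
    rng = random.Random(seed)
    q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # Epsilon-greedy: mostly exploit the best-known action,
            # occasionally explore a random one.
            if rng.random() < epsilon:
                action = rng.randrange(n_actions)
            else:
                action = max(range(n_actions), key=lambda a: q[state][a])
            next_state, reward, done = step(state, action)
            # Nudge the value estimate toward reward + discounted future value.
            best_next = 0.0 if done else max(q[next_state])
            q[state][action] += alpha * (reward + gamma * best_next - q[state][action])
            state = next_state
    return q

def toy_step(state, action):
    """A trivial one-move game: in state 0, action 1 wins a point."""
    if state == 0 and action == 1:
        return 1, 1.0, True
    return 1, 0.0, True

q = q_learning(n_states=2, n_actions=2, step=toy_step)
print(q[0][1] > q[0][0])  # the agent has learned that action 1 scores better
```

The agent is never told the rules; like Deep Mind's system watching a score counter, it discovers which moves pay off simply by trying them and reinforcing what works.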

How good is AI? When it comes to playing Atari games, it gives human gamers a run for their money. Humans inched ahead, scoring higher on 26 of the 49 games tested, while the AI beat the human score on the remaining 23.

Humans still have the upper hand, but AI isn't far off being better than us - when it comes to Atari games at least.

"On the face it, it looks trivial in the sense that these are games from the 80s and you can write solutions to these games quite easily," said Deep Mind founder Demis Hassabis.

"What is not trivial is to have one single system that can learn from the pixels, as perceptual inputs, what to do. The same system can play 49 different games from the box without any pre-programming. You literally give it a new game, a new screen and it figures out after a few hours of gameplay what to do," he told the BBC.

While the next step is for AI to catch up a bit and be able to play games from the 90s, Hassabis has started partnering with financial institutions and satellite operators to see if AI could "play" their data sets for trading oil futures or predicting the weather, according to the New Yorker.

It's the first time Deep Mind has hinted at its progress in artificial intelligence and at its plans; its latest research is published in the journal Nature.

Watch how AI learned to play Pong.

And Space Invaders.
