
In what is being hailed as a giant leap towards true general artificial intelligence, Google scientists and engineers have created the first computer program capable of learning a wide variety of tasks completely independently.

The AI, or “agent” as Google refers to it, has learnt to play almost 50 different retro computer games, coming up with its own winning strategies entirely without human input. The same approach could be used to control self-driving cars or power personal assistants in smartphones.

This research was conducted at DeepMind, a British company that Google acquired a few years ago.

Demis Hassabis, who co-founded DeepMind, said:

“This is the first significant rung of the ladder towards proving a general learning system can work. It can work on a challenging task that even humans find difficult. It’s the very first baby step towards that grander goal … but an important one.”

He went on to draw a comparison with IBM’s Deep Blue chess computer:

“With Deep Blue, it was a team of programmers and grand masters that distilled the knowledge into a program. We’ve built algorithms that learn from the ground up.”
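The technique behind this, as described in DeepMind’s published research, is reinforcement learning: the agent sees only the screen and the game score, tries actions, and reinforces whichever ones lead to higher scores. Below is a minimal Python sketch of tabular Q-learning, the textbook idea underlying DeepMind’s deep Q-network. The toy corridor environment and every name and parameter here are illustrative assumptions, not DeepMind’s code, which replaces the lookup table with a deep neural network reading raw pixels.

```python
import random

# Toy corridor environment: states 0..4, start at 0, reward only at state 4.
# Purely illustrative -- the real DeepMind agent sees raw Atari pixels.
N_STATES = 5
ACTIONS = [-1, +1]                       # move left or right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1    # learning rate, discount, exploration

# Q-table: estimated future reward for each (state, action) pair.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Apply an action; return (next_state, reward, done)."""
    nxt = max(0, min(N_STATES - 1, state + action))
    if nxt == N_STATES - 1:
        return nxt, 1.0, True            # reached the goal: score a point
    return nxt, 0.0, False

for episode in range(500):
    state, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit what we know, sometimes explore.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, reward, done = step(state, action)
        # Q-learning update: nudge the estimate towards the reward plus the
        # discounted value of the best action available in the next state.
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = nxt

# After training, the agent has worked out on its own that moving right
# is the winning strategy in every state.
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)})
```

In the real system the same kind of update rule drives a convolutional network rather than a lookup table, which is what lets a single unchanged program learn dozens of visually different games.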

Google have provided a video (below) showing the agent learning to play a classic Atari game.

Image credit: Dieter R | Flickr
Via Proton4