Trained neural nets perform much like humans on classic psychological tests


Neural networks were inspired by the human brain. Now AI researchers have shown that they perceive the world in similar ways.

In the early part of the 20th century, a group of German experimental psychologists began to question how the brain acquires meaningful perceptions of a world that is otherwise chaotic and unpredictable. To answer this question, they developed the notion of the “gestalt effect”—the idea that when it comes to perception, the whole is something other than the sum of its parts.

Continue reading… “Trained neural nets perform much like humans on classic psychological tests”

A neural network can learn to organize the world it sees into concepts—just like we do


Generative adversarial networks are not just good for causing mischief. They can also show us how AI algorithms “think.”

GANs, or generative adversarial networks, are the social-media starlet of AI algorithms. They are responsible for creating the first AI painting ever sold at an art auction and for superimposing celebrity faces on the bodies of porn stars. They work by pitting two neural networks against each other to create realistic outputs based on what they are fed. Feed one lots of dog photos, and it can create completely new dogs; feed it lots of faces, and it can create new faces.
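The adversarial setup described above can be sketched at toy scale. The following is a minimal illustration, not the networks used in any of the research mentioned here: the “generator” is a single linear function that turns noise into candidate samples, the “discriminator” is a one-feature logistic regression, and each is updated with hand-derived gradients. All the names and constants (`REAL_MEAN`, `LR`, `STEPS`, and so on) are illustrative choices, not anything from the original work.

```python
import math
import random

random.seed(0)

REAL_MEAN, REAL_STD = 3.0, 0.5   # the "real" data distribution
LR, STEPS = 0.05, 3000

# Discriminator D(x) = sigmoid(w*x + b) scores how "real" x looks.
# Generator G(z) = a*z + c turns noise z into a candidate sample.
w, b = 0.1, 0.0
a, c = 1.0, 0.0

def sigmoid(x):
    x = max(-60.0, min(60.0, x))  # clamp to avoid math.exp overflow
    return 1.0 / (1.0 + math.exp(-x))

c_history = []
for _ in range(STEPS):
    real = random.gauss(REAL_MEAN, REAL_STD)
    z = random.gauss(0.0, 1.0)
    fake = a * z + c

    # Discriminator step: ascend the gradient of
    # log D(real) + log(1 - D(fake))
    s_real = sigmoid(w * real + b)
    s_fake = sigmoid(w * fake + b)
    w += LR * ((1 - s_real) * real - s_fake * fake)
    b += LR * ((1 - s_real) - s_fake)

    # Generator step: ascend log D(fake), i.e. try to fool the critic
    s_fake = sigmoid(w * fake + b)
    grad_fake = (1 - s_fake) * w      # d log D(fake) / d fake
    a += LR * grad_fake * z
    c += LR * grad_fake
    c_history.append(c)

# The two-player dynamics oscillate, so average the generator's offset
# over the last third of training; E[G(z)] = c because z has zero mean.
gen_mean = sum(c_history[-1000:]) / 1000
print(f"generated mean ~ {gen_mean:.2f} (real mean is {REAL_MEAN})")
```

The same tug-of-war drives the full-scale version: scale the two players up to deep networks and the scalar samples up to images, and the generator that starts out producing noise ends up producing dogs, faces, or paintings.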

As good as they are at causing mischief, researchers from the MIT-IBM Watson AI Lab realized GANs are also a powerful tool: because they paint what they’re “thinking,” they could give humans insight into how neural networks learn and reason. This has been something the broader research community has sought for a long time—and it’s become more important with our increasing reliance on algorithms.

Continue reading… “A neural network can learn to organize the world it sees into concepts—just like we do”

Google created AI that just needs a few snapshots to make 3D models of its surroundings


The algorithm needs only a couple of perspectives to figure out what objects look like.

A new type of artificial intelligence algorithm from Google can figure out what things look like from all angles — without needing to see them from those angles first.

After viewing something from just a few different perspectives, the Generative Query Network was able to piece together an object’s appearance, even as it would appear from angles not analyzed by the algorithm, according to research published today in Science. And it did so without any human supervision or training. That could save a lot of time as engineers prepare increasingly advanced algorithms for technology, but it could also extend the abilities of machine learning to give robots (military or otherwise) greater awareness of their surroundings.

Continue reading… “Google created AI that just needs a few snapshots to make 3D models of its surroundings”
