Advancing AI by teaching robots to learn

Robotics provides important opportunities for advancing artificial intelligence, because teaching machines to learn on their own in the physical world will help us develop more capable and flexible AI systems in other scenarios as well. Working with a variety of robots — including walking hexapods, articulated arms, and robotic hands fitted with tactile sensors — Facebook AI researchers are exploring new techniques to push the boundaries of what artificial intelligence can accomplish.

Doing this work means addressing the complexity inherent in using sophisticated physical mechanisms and conducting experiments in the real world, where the data is noisier, conditions are more variable and uncertain, and experiments have additional time constraints (because they cannot be accelerated when learning in a simulation). These are not simple issues to address, but they offer useful test cases for AI.

Continue reading… “Advancing AI by teaching robots to learn”

AI used to “fill in the blanks”

New AI sees like a human, filling in the blanks

Computer scientists at The University of Texas at Austin have taught an artificial intelligence agent how to do something that usually only humans can do: take a few quick glimpses around and infer its whole environment, a skill necessary for developing effective search-and-rescue robots that could one day improve the effectiveness of dangerous missions. The team, led by professor Kristen Grauman, Ph.D. candidate Santhosh Ramakrishnan, and former Ph.D. candidate Dinesh Jayaraman (now at the University of California, Berkeley), published its results today in the journal Science Robotics.

Most AI agents (computer systems that could endow robots or other machines with intelligence) are trained for very specific tasks, such as recognizing an object or estimating its volume, in an environment they have experienced before, like a factory. But the agent developed by Grauman and Ramakrishnan is general purpose, gathering visual information that can then be used for a wide range of tasks.

“We want an agent that’s generally equipped to enter environments and be ready for new perception tasks as they arise,” Grauman said. “It behaves in a way that’s versatile and able to succeed at different tasks because it has learned useful patterns about the visual world.”

Continue reading… “AI used to “fill in the blanks””

Powerful new AI framework turbocharges automated learning process

Framework improves ‘continual learning’ for artificial intelligence

Researchers have developed a new framework for deep neural networks that allows artificial intelligence (AI) systems to better learn new tasks while “forgetting” less of what they have learned from previous tasks. The researchers have also demonstrated that using the framework to learn a new task can make the AI better at performing previous tasks, a phenomenon called backward transfer.

“People are capable of continual learning; we learn new tasks all the time, without forgetting what we already know,” says Tianfu Wu, an assistant professor of electrical and computer engineering at NC State and co-author of a paper on the work. “To date, AI systems using deep neural networks have not been very good at this.”

“Deep neural network AI systems are designed for learning narrow tasks,” says Xilai Li, a co-lead author of the paper and a Ph.D. candidate at NC State. “As a result, one of several things can happen when learning new tasks. Systems can forget old tasks when learning new ones, which is called catastrophic forgetting. Systems can forget some of the things they knew about old tasks, while not learning to do new ones as well. Or systems can fix old tasks in place while adding new tasks – which limits improvement and quickly leads to an AI system that is too large to operate efficiently. Continual learning, also called lifelong-learning or learning-to-learn, is trying to address the issue.”
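
Catastrophic forgetting is easy to reproduce in miniature. The sketch below is not the NC State framework, just a toy illustration: it trains a single logistic-regression classifier on a task A, then continues training the same weights on a conflicting task B, and shows how accuracy on task A collapses.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task(n, flip):
    # 2-D points; label depends only on the first coordinate.
    # flip=True produces a task whose labels directly conflict with task A.
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] > 0).astype(float)
    return X, (1 - y) if flip else y

def train(w, X, y, lr=0.5, steps=200):
    # plain logistic-regression gradient descent on the given task only
    for _ in range(steps):
        p = 1 / (1 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def accuracy(w, X, y):
    return float(np.mean(((X @ w) > 0) == (y > 0.5)))

Xa, ya = make_task(500, flip=False)   # task A
Xb, yb = make_task(500, flip=True)    # task B, conflicting labels

w = np.zeros(2)
w = train(w, Xa, ya)
acc_a_before = accuracy(w, Xa, ya)    # high after learning task A

w = train(w, Xb, yb)                  # keep training on task B only
acc_a_after = accuracy(w, Xa, ya)     # task A has been "forgotten"

print(acc_a_before, acc_a_after)
```

Continual-learning frameworks like the one described aim to grow or isolate parameters per task so that the second training phase cannot simply overwrite what the first one learned.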

Continue reading… “Powerful new AI framework turbocharges automated learning process”

The new digital divide is between people who opt out of algorithms and people who don’t

Every aspect of life can be guided by artificial intelligence algorithms—from choosing what route to take for your morning commute, to deciding whom to take on a date, to complex legal and judicial matters such as predictive policing.

Big tech companies like Google and Facebook use AI to obtain insights on their gargantuan trove of detailed customer data. This allows them to monetize users’ collective preferences through practices such as micro-targeting, a strategy used by advertisers to narrowly target specific sets of users.

In parallel, many people now trust platforms and algorithms more than their own governments and civic society. An October 2018 study suggested that people demonstrate “algorithm appreciation”: they rely more on advice when they believe it comes from an algorithm than from a human.

Continue reading… “The new digital divide is between people who opt out of algorithms and people who don’t”

AI Ethics: Seven Traps

 

The question of how to ensure that technological innovation in machine learning and artificial intelligence leads to ethically desirable—or, more minimally, ethically defensible—impacts on society has generated much public debate in recent years. Most of these discussions have been accompanied by a strong sense of urgency: as more and more studies about algorithmic bias have shown, the risk that emerging technologies will not only reflect, but also exacerbate structural injustice in society is significant.

So which ethical principles ought to govern machine learning systems in order to prevent morally and politically objectionable outcomes? In other words: what is AI Ethics? And indeed, “is ethical AI even possible?”, as a recent New York Times article asked.

Continue reading… “AI Ethics: Seven Traps”

AI: The future of photography?

 

How machine learning and artificially generated images might replace photography as we know it.

When they hear the words ‘AI’, ‘machine learning’ or ‘bot’, most people tend to picture a walking, talking android robot that looks like something out of a sci-fi movie, and immediately assume such technology belongs to the distant future.

Continue reading… “AI: The future of photography?”

A robot has figured out how to use tools

In a startling demonstration, the machine drew on experimentation, data, and observation of humans to learn how simple implements could help it achieve a task.

Learning to use tools played a crucial role in the evolution of human intelligence. It may yet prove vital to the emergence of smarter, more capable robots, too.

New research shows that robots can figure out at least the rudiments of tool use, through a combination of experimenting and observing people.

Continue reading… “A robot has figured out how to use tools”

MIT’s ‘cyber-agriculture’ optimizes basil flavors

The days when you could simply grow a basil plant from a seed by placing it on your windowsill and watering it regularly are gone — there’s no point now that machine learning-optimized hydroponic “cyber-agriculture” has produced a superior plant with more robust flavors. The future of pesto is here.

This research didn’t come out of a desire to improve sauces, however. It’s a study from MIT’s Media Lab and the University of Texas at Austin aimed at understanding how to both improve and automate farming.

In the study, published today in PLOS ONE, the question was whether an automated growing environment could find and execute a growing strategy that achieved a given goal: in this case, basil with stronger flavors.
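
As a rough illustration of that find-and-execute loop (this is not the study's actual method; the variable, the objective function, and its peak value are entirely made up), one could sketch the idea as a greedy search over a single growing condition, where each iteration stands in for one grow-and-measure cycle:

```python
import random

random.seed(1)

def flavor_score(light_hours):
    # hypothetical stand-in for a real chemical assay of flavor compounds;
    # the peak at 18 h is an arbitrary choice, not a result from the study
    return -(light_hours - 18.0) ** 2

# start from a conventional recipe and greedily keep any improvement
best_hours, best_score = 12.0, flavor_score(12.0)
for _ in range(50):
    # propose a nearby recipe, clamped to a 0-24 h day
    candidate = min(24.0, max(0.0, best_hours + random.uniform(-3, 3)))
    score = flavor_score(candidate)
    if score > best_score:
        best_hours, best_score = candidate, score

print(round(best_hours, 1))  # drifts toward the (made-up) optimum
```

The real system replaces the toy objective with measured plant chemistry and uses far more sample-efficient experiment selection, since each "iteration" costs an entire growing cycle.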

Continue reading… “MIT’s ‘cyber-agriculture’ optimizes basil flavors”

Five key ideas you need to understand now if you want to be ready for the world of 2030

With a new series of the BBC science podcast Futureproofing on air this month, presenter Timandra Harkness explains how to get ready for the world of the future.

Just a building that keeps the rain out and your clean underwear in? Think again. The home of 2030 will be smart, connected and emotional.

“Our notion of home will change drastically,” says Sce Pike, CEO of Oregon-based IOTAS. “The notion of home is no longer four walls and a roof and a place, a location, but actually something that travels with you throughout your life.”

How does it do that? By learning your preferences and your habits, and automatically adjusting the light, heat, even your TV channels, before you have to ask. “It’s about how that home reacts to you and makes you comfortable,” says Pike.

Continue reading… “Five key ideas you need to understand now if you want to be ready for the world of 2030”

What will machine learning look like in twenty years?

What will machine learning look like 15-20 years from now? originally appeared on Quora: the place to gain and share knowledge, empowering people to learn from others and better understand the world.

Machine learning is a very rapidly moving field, so it’s hard to make predictions about the state of the art six months from now, let alone 15–20 years. I can, however, offer a series of educated guesses based on what I see happening right now.

We are still far away from AGI. Current-generation machine learning systems are still very far away from something that could legitimately be called artificial “intelligence”. The systems we have right now are phenomenal at pattern recognition from lots of data (even reinforcement learning systems are mostly about memorizing and recognizing patterns that worked well during training). This is certainly a necessary step, but it is very far from an intelligent system. In analogy to human cognition, what we have now resembles the subconscious processes that allow split-second activation of your sympathetic nervous system when your peripheral vision detects an approaching predator, or a former significant other turning a corner – in other words, pattern-based, semi-automatic decisions that our brain does “in hardware”. We don’t currently have anything I can see that would resemble intentional thought, and I’m not convinced we’ll get there from current-generation systems.

Continue reading… “What will machine learning look like in twenty years?”

What will our society look like when artificial intelligence is everywhere?

 Will robots become self-aware? Will they have rights? Will they be in charge? Here are five scenarios from our future dominated by AI.

SMITHSONIAN MAGAZINE | April 2018

In June of 1956, a few dozen scientists and mathematicians from all around the country gathered for a meeting on the campus of Dartmouth College. Most of them settled into the red-bricked Hanover Inn, then strolled through the famously beautiful campus to the top floor of the math department, where groups of white-shirted men were already engaged in discussions of a “strange new discipline”—so new, in fact, that it didn’t even have a name. “People didn’t agree on what it was, how to do it or even what to call it,” Grace Solomonoff, the widow of one of the scientists, recalled later. The talks—on everything from cybernetics to logic theory—went on for weeks, in an atmosphere of growing excitement.

What the scientists were talking about in their sylvan hideaway was how to build a machine that could think.

Continue reading… “What will our society look like when artificial intelligence is everywhere?”

Trained neural nets perform much like humans on classic psychological tests

Neural networks were inspired by the human brain. Now AI researchers have shown that they perceive the world in similar ways.

In the early part of the 20th century, a group of German experimental psychologists began to question how the brain acquires meaningful perceptions of a world that is otherwise chaotic and unpredictable. To answer this question, they developed the notion of the “gestalt effect”—the idea that when it comes to perception, the whole is something other than the parts.

Continue reading… “Trained neural nets perform much like humans on classic psychological tests”
