Industrial robots used to be big, unwieldy, and dangerous. Now new “human-safe” robots are commonplace on automotive lines, working right next to people. These robots are awkward coworkers; they coexist with us but do not meaningfully collaborate. They often need to be explicitly told how to be helpful or when to stay out of the way — things human teammates seem to learn intuitively.
A good human apprentice is a keen observer, inferring unspoken rules and customs, watching how others work, and then generalizing this knowledge to new situations. We accomplish this partly because the human mind processes very complex information remarkably efficiently. This type of inference has traditionally been hard for machines to perform.
Recent research indicates that we are at an inflection point in how robots observe and process data, and therefore how they work with people.
Roboticists are starting to reverse-engineer the human mind by translating the cognitive models that humans use intuitively into computational models that machines can use. With this approach, robots and humans working in pairs have been able to accomplish complex tasks as well as or better than human teams.
The implications are vast. Imagine a robot that participates as a team member in planning an emergency response deployment. The robot listens to the human team’s conversation to automatically learn the game plan. Such a robot would not have to wait until after the meeting to be told what to do — it could immediately take initiative to accomplish tasks that help the team achieve its goals. This basic ability is expected from humans working in emergency response and other time-critical situations, but it is transformative for robots.
The challenge is that collaborative dialog is complex: It unfolds in cycles, agreements are fluid, and proposals are often implicitly or passively communicated and accepted. The team may consider and reject many options and revise the plan many times. It is hard for a machine to infer our plans efficiently; the robot may have to consider and explore trillions of possible plans for even a simple scenario with just a few team members and a few goals.
In contrast, human team members do not need hours after a meeting to figure out what was agreed on. We are generally able to leave meetings with a clear picture of the plan. We do this by employing a mental scaffolding to piece together the conversation. Every member of the team is motivated by the same goals and has the same basic knowledge of the team’s capabilities, and every suggestion a team member makes is considered in context.
My research group, the Interactive Robotics Group at MIT, leveraged this insight to design a computational model that replicates the inherent structure in how a team discusses and negotiates a plan. We gave the machine just a little bit of information — for example, the number of team members, their capabilities, the team’s goals, and information about the tasks. The machine used this information to infer the final plan by piecing together the plan that was most likely to be valid, given the context of the problem. For instance, if the machine was told the team has four members but eight tasks are to be performed concurrently, it could infer that either the teammates will be multitasking or the plan needs to be revised around tasks being performed in sequence. The approach proved successful for emergency response planning. Teams of people were tasked with formulating a complex team response plan, and the machine was able to infer the final plan with up to 86% accuracy on average.
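The kind of context-based filtering described above can be illustrated with a toy sketch. This is not the MIT group's actual model — the function name, the plan representation (each task assigned to a time slot, with tasks sharing a slot running concurrently), and the numbers are all illustrative assumptions — but it shows how a simple capacity constraint can rule out implausible interpretations of a plan, such as four people running eight tasks at once without multitasking:

```python
# Toy sketch of context-constrained plan checking. All names and
# structures here are illustrative assumptions, not the published model.
from collections import Counter

def consistent(plan, num_members, multitasking=False):
    """Return True if no time slot demands more workers than the team has.

    A 'plan' maps each task to a time slot; tasks sharing a slot
    run concurrently, so a slot's load is its task count.
    """
    slot_load = Counter(plan.values())            # tasks per time slot
    limit = float("inf") if multitasking else num_members
    return all(load <= limit for load in slot_load.values())

# Eight tasks all scheduled in one slot, but only four team members:
concurrent_plan = {f"task{i}": 0 for i in range(8)}
print(consistent(concurrent_plan, num_members=4))                     # False
print(consistent(concurrent_plan, num_members=4, multitasking=True))  # True

# A sequenced alternative: two tasks per slot across four slots.
sequenced_plan = {f"task{i}": i // 2 for i in range(8)}
print(consistent(sequenced_plan, num_members=4))                      # True
```

In an inference setting, a check like this would prune candidate plans that contradict the known context, leaving only interpretations consistent with the team's size and goals.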
The same approach also enables robots to learn complex plans just by watching us make decisions in our jobs. In fact, our most recent studies show that robots can learn the complex decision-making strategies of experts performing real-world tasks in defense and health care.
The key was to design the structure of our model to make very efficient use of each observation of the human expert. Each decision an expert makes provides a great deal of information, revealing how that particular option was prioritized over other options. We designed a model to leverage this logical structure by transforming the observed data into pairwise rankings on options. This approach substantially improved the machine’s ability to efficiently learn a high-quality model of the human expert’s decision-making strategy.
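To make the pairwise-ranking idea concrete, here is a minimal sketch — an assumption about the general technique, not the published method. Options are represented as feature vectors; each observed decision (one option chosen from a set) expands into ranked pairs, and a simple perceptron-style update then learns a linear scoring of options from the feature differences of those pairs:

```python
# Illustrative sketch: expanding expert decisions into pairwise rankings,
# then learning a linear scorer from them. Names and the learning rule
# are illustrative assumptions, not the published algorithm.

def to_pairwise(chosen, alternatives):
    """One observed decision yields a (chosen > alternative) pair
    for every alternative the expert passed over."""
    return [(chosen, alt) for alt in alternatives if alt != chosen]

def train(decisions, num_features, lr=0.1, epochs=50):
    """Perceptron-style updates on feature differences of ranked pairs.

    'decisions' is a list of (chosen_option, candidate_options) where
    each option is a feature vector (list of floats).
    """
    w = [0.0] * num_features
    for _ in range(epochs):
        for chosen, alternatives in decisions:
            for better, worse in to_pairwise(chosen, alternatives):
                diff = [b - c for b, c in zip(better, worse)]
                score = sum(wi * di for wi, di in zip(w, diff))
                if score <= 0:  # pair ranked wrongly: nudge the weights
                    w = [wi + lr * di for wi, di in zip(w, diff)]
    return w

# Two observed decisions, each expanding into multiple ranked pairs:
decisions = [([3, 1], [[3, 1], [1, 2], [0, 3]]),
             ([2, 0], [[2, 0], [1, 1]])]
weights = train(decisions, num_features=2)
```

The point of the expansion is data efficiency: a single choice among n options yields n − 1 training pairs, which is one way a handful of expert demonstrations can carry enough signal to fit a useful model.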
This technique was successfully applied in two settings.
In the first, domain experts played a serious game in which ships were defended from a set of missile threats. A machine may need days or weeks to solve this kind of problem well, but human experts are able to make good decisions very quickly. Using our structured computational model, the machine was able to learn effective strategies with just 16 human expert demonstrations. In fact, the machine was able to outperform the average expert score on many of the missile defense tasks.
In the second setting, a machine learned strategies for coordinating patient care in a hospital unit from health care professionals. Specifically, it learned decision-making strategies for where and when to move patients among room types and how to assign nurses under varying workload conditions. The technology was evaluated through experiments in which a robot provided decision support to nurses and doctors as they made decisions on patient care in a high-fidelity simulation. The nurses and doctors complied with the robot’s recommendations at a rate of 90%, a strong indication that the robot had learned high-quality strategies for the task.
These recent advances indicate that there is tremendous potential for machines to collaborate with us in rich ways that will extend and enhance human capability in many sectors of the economy. Robots of the future won’t need to sit on the sidelines or wait to be told what to do. Robots will truly be at our service, ready, willing, and able to learn from watching us. They will work shoulder-to-shoulder on assembly lines, in hospitals, and on the front lines of emergency response. The awkward robots of the past will be replaced by valued members of the team.
Article via hbr.org