

A group of roboticists at the Georgia Institute of Technology is teaching robots to do something you wouldn’t normally peg as a good thing: deceiving others. Why would a robot need to do that? It’s a fine line, the researchers say, but it could be very beneficial.

Still, there’s an inherent element of danger, according to Ronald Arkin, an interactive-computing professor at the university: “We have been concerned from the very beginning with the ethical implications related to the creation of robots capable of deception and we understand that there are beneficial and deleterious aspects.”

So, when would deception be useful for a ‘bot? Right now, the researchers are having their robots learn to shake a tail, not by outrunning it but by laying a false trail that confuses the pursuer. “The hider’s set of false communications was defined by selecting a pattern of knocked over markers that indicated a false hiding position,” explained Alan Wagner, an engineer on the project, and apparently the trick worked three times out of four.
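To make the marker trick concrete, here is a minimal sketch in Python of what that kind of false trail could look like. The corridor layout, marker names, and random choices are hypothetical stand-ins for illustration, not the team’s actual code: the hider simply knocks over the markers along a decoy path rather than the one it actually takes.

```python
import random

# Hypothetical layout: three corridors, each lined with markers the hider
# can knock over on its way to the hiding spot at the corridor's end.
CORRIDORS = {
    "left":   ["L1", "L2", "L3"],
    "middle": ["M1", "M2", "M3"],
    "right":  ["R1", "R2", "R3"],
}

def plan_deceptive_hide(corridors):
    """Pick a real hiding corridor and a different decoy corridor,
    then return the markers to knock over along the decoy path."""
    true_hide = random.choice(list(corridors))
    decoys = [c for c in corridors if c != true_hide]
    false_trail = random.choice(decoys)
    # The "false communication": a pattern of knocked-over markers
    # pointing the seeker toward the decoy corridor.
    knocked_over = corridors[false_trail]
    return true_hide, false_trail, knocked_over

if __name__ == "__main__":
    hide, decoy, markers = plan_deceptive_hide(CORRIDORS)
    print(f"Hiding in the {hide} corridor")
    print(f"Knocking over markers {markers} to suggest the {decoy} corridor")
```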

Of course, not all robots use markers for navigation, but it’s a start. In the future, a military robot could use the same kind of trickery to evade enemy ‘bots, for instance, or even elude human pursuers.

“We have developed algorithms that allow a robot to determine whether it should deceive a human or other intelligent machine and we have designed techniques that help the robot select the best deceptive strategy to reduce its chance of being discovered,” Arkin says, adding that he thinks it’s important to consider this kind of technology now and to understand what kinds of restrictions should be imposed on robotics.
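Arkin’s description reads as two steps: first decide whether deception is warranted at all, then pick the false signal least likely to be seen through. The short sketch below only illustrates that two-step structure under made-up numbers; the conflict check, the payoff, and the discovery probabilities are hypothetical placeholders, not the team’s algorithm.

```python
def should_deceive(in_conflict, gain_from_deception):
    """Only consider deception when the agents' goals conflict and the
    deceiver actually stands to benefit from misleading the other."""
    return in_conflict and gain_from_deception > 0

def pick_strategy(strategies):
    """Choose the false communication with the lowest estimated chance
    of being discovered by the other agent (hypothetical estimates)."""
    return min(strategies, key=lambda s: s["p_discovered"])

if __name__ == "__main__":
    # Hypothetical candidate deceptions and how likely each is to be seen through.
    candidates = [
        {"name": "false trail left",   "p_discovered": 0.40},
        {"name": "false trail right",  "p_discovered": 0.25},
        {"name": "stay put, no trail", "p_discovered": 0.60},
    ]
    if should_deceive(in_conflict=True, gain_from_deception=1.0):
        best = pick_strategy(candidates)
        print(f"Deceiving via: {best['name']} (p_discovered={best['p_discovered']})")
    else:
        print("No deception warranted")
```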

Via Dvice