Evolutionary computing has been tapped to produce coherent robot behavior in simulation, and real robots have been used to evolve simple behaviors such as moving toward light sources and avoiding obstacles.

Researchers from North Carolina State University and the University of Utah have advanced the field by combining artificial neural networks with teams of real mobile robots, demonstrating that the behavior needed to play Capture the Flag can be evolved in simulation.

“The original idea… came from the desire to find a way to automatically program robots to perform tasks that humans don’t know how to do, or tasks which humans don’t know how to do well,” said Andrew Nelson, now a visiting researcher at the University of South Florida.


The method could eventually be used to develop components of the control systems of autonomous robots, said Nelson. “Any task that can be formulated into a competitive game — like clearing a minefield or searching for heat sources in a collapsed building — could potentially be learned by a neural network or other evolvable [system] without requiring a human to specify the details of the task,” he said.
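
To make that idea concrete, the sketch below shows one common way such a setup can be built: neural network controllers for competing teams are scored purely by the outcomes of simulated games, and the better-scoring teams are mutated into the next generation. This is a minimal illustration under assumed details, not the researchers' actual system; names such as `play_game`, `random_controller`, and `NUM_SENSORS`, and the toy game scoring, are hypothetical stand-ins.

```python
import random

import numpy as np

# Illustrative sketch of competitive neuroevolution. All names and details here
# are assumptions for demonstration, not the researchers' code.

NUM_SENSORS = 4   # e.g., bearings to the flag, nearest opponent, teammate, own goal
NUM_MOTORS = 2    # e.g., left/right wheel speeds


def random_controller(rng):
    """A single-layer neural network controller: weights mapping sensors to motors."""
    return rng.normal(0.0, 1.0, size=(NUM_MOTORS, NUM_SENSORS + 1))  # +1 for a bias input


def act(weights, sensors):
    """Compute motor commands from sensor readings with a tanh squashing function."""
    inputs = np.append(sensors, 1.0)      # append the constant bias input
    return np.tanh(weights @ inputs)      # motor outputs in [-1, 1]


def play_game(team_a, team_b, rng, steps=200):
    """Stand-in for a Capture the Flag match.

    A real implementation would simulate robot kinematics, sensing, and flag-capture
    rules; here the outcome is a toy score so the evolutionary loop runs end to end.
    Returns +1 if team A wins, -1 if team B wins.
    """
    score_a = sum(float(np.sum(act(w, rng.normal(size=NUM_SENSORS))))
                  for w in team_a for _ in range(steps // 10))
    score_b = sum(float(np.sum(act(w, rng.normal(size=NUM_SENSORS))))
                  for w in team_b for _ in range(steps // 10))
    return 1 if score_a > score_b else -1


def evolve(generations=50, pop_size=20, team_size=3, seed=0):
    rng = np.random.default_rng(seed)
    # Each individual in the population is a team of controllers.
    population = [[random_controller(rng) for _ in range(team_size)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        fitness = [0] * pop_size
        # Competitive fitness: each team's score comes only from game outcomes,
        # not from a hand-written description of "good" Capture the Flag play.
        for i in range(pop_size):
            for j in rng.choice(pop_size, size=5, replace=False):
                if i == int(j):
                    continue
                fitness[i] += play_game(population[i], population[int(j)], rng)
        # Keep the better half, refill with mutated copies of the survivors.
        ranked = sorted(range(pop_size), key=lambda k: fitness[k], reverse=True)
        survivors = [population[k] for k in ranked[: pop_size // 2]]
        children = [[w + rng.normal(0.0, 0.1, size=w.shape) for w in random.choice(survivors)]
                    for _ in range(pop_size - len(survivors))]
        population = survivors + children
    return population[0]  # top-ranked team after the final selection step


if __name__ == "__main__":
    best_team = evolve(generations=5)  # small run just to show the loop executes
    print("Evolved", len(best_team), "controllers with weight shape", best_team[0].shape)
```

In an actual setup, `play_game` would wrap a physics-based Capture the Flag simulator, and the evolved network weights would then be transferred to the real robot team, as the article describes.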
