Severely disabled people who cannot operate a motorised wheelchair may one day get their independence, thanks to a system that lets them steer a wheelchair using only their thoughts.



Unlike previous thought-communication devices, the system does not use surgical implants. Instead, a skullcap peppered with electrodes monitors the electrical activity of its wearer’s brain. Early trials using a steerable robot indicate that, after just two days’ training, controlling the robot by thought is as easy as controlling it manually.


“It’s a very positive step,” says Paul Smith, executive director of The Spinal Injuries Association in London. “The psychological benefits it would offer are huge.”



The options for giving quadriplegic people freedom of movement are currently limited, says Smith. For example, it is possible to steer a wheelchair using a chin-operated joystick or by blowing into a thin tube. But both options can be exhausting – and they are not suitable for those with very limited movement.



So José Millán at the Dalle Molle Institute for Perceptual Artificial Intelligence in Martigny, Switzerland, along with researchers from the Swiss Federal Institute of Technology in Lausanne and the Centre for Biomedical Engineering Research in Barcelona, Spain, has come up with a system that can reliably recognise different mental states.



If all goes to plan, it will be the first mind-controlled system able to operate something as complicated as a wheelchair, says Millán.



At the moment the system controls a simple wheeled robot. The user dons the electrode-lined skullcap, which monitors electrical activity on the surface of the head. A web of wires sends the information to a computer. Millán’s software then analyses the brain’s activity and, using a wireless link, passes on any commands it spots to the robot.
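The article does not describe Millán’s software itself, but the chain it sketches – skullcap, computer, classifier, wireless link – can be laid out in a few lines of Python. The following is a toy illustration only: the function names, the random placeholder signal and the feature choice are all assumptions, not the real system.

```python
import numpy as np

# Hypothetical stand-ins for the hardware in the article: the electrode
# skullcap streaming scalp EEG, and the wireless link to the robot.
# Names and signatures are illustrative, not Millán's software.

def read_eeg_window(n_channels: int = 32, n_samples: int = 128) -> np.ndarray:
    """Pretend to read one window of scalp EEG (channels x samples)."""
    return np.random.randn(n_channels, n_samples)  # placeholder signal

def extract_features(window: np.ndarray) -> np.ndarray:
    """Reduce a raw EEG window to a feature vector, here per-channel power."""
    return np.log(np.mean(window ** 2, axis=1))

def classify(features: np.ndarray) -> str:
    """Map features to one of the three mental commands (toy rule)."""
    commands = ["turn left", "turn right", "move forward"]
    return commands[int(np.argmax(features[:3]))]  # arbitrary toy decision

def send_to_robot(command: str) -> None:
    """Stand-in for the wireless link to the robot."""
    print(f"robot <- {command}")

# One pass through the loop the article describes:
# skullcap -> computer -> classifier -> wireless command.
send_to_robot(classify(extract_features(read_eeg_window())))
```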



The user can currently choose between three commands: “turn left”, “turn right” and “move forward”. Millán’s software exploits the fact that the desire to move in a particular direction generates a unique pattern of brain activity, and it identifies which command the user intends by spotting that telltale pattern.
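The article does not say how the software spots those patterns. One of the simplest schemes that fits the description is a nearest-centroid classifier: record the average pattern while the user rehearses each command, then label a new thought with the closest stored pattern. The sketch below fakes its training data; everything in it is an illustrative assumption, not Millán’s method.

```python
import numpy as np

rng = np.random.default_rng(0)
COMMANDS = ["turn left", "turn right", "move forward"]

# Fake training data: 50 feature vectors per command, each drawn around
# a command-specific "unique pattern of brain activity".
centres = rng.normal(size=(3, 8))
train = {c: centres[i] + 0.3 * rng.normal(size=(50, 8))
         for i, c in enumerate(COMMANDS)}

# Training amounts to remembering the mean pattern for each command.
prototypes = {c: x.mean(axis=0) for c, x in train.items()}

def decode(features: np.ndarray) -> str:
    """Return the command whose stored pattern is closest to `features`."""
    return min(prototypes, key=lambda c: np.linalg.norm(features - prototypes[c]))

# A new thought resembling the "turn left" pattern is decoded as such.
print(decode(centres[0] + 0.3 * rng.normal(size=8)))  # -> "turn left"
```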



To ensure the robot does not hit any objects, it contains some inbuilt intelligence. So, when the user thinks of one of the three states – for example, “turn left” – the software translates it into an appropriate command for the robot, such as “turn left at the next opportunity”.
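In code, that translation layer can be as simple as a lookup from decoded mental states to higher-level behaviours that the robot’s own intelligence knows how to carry out safely. The mapping below is a guess at the idea, not Millán’s actual command set.

```python
# Illustrative translation layer: the user's decoded mental state maps
# to a higher-level behaviour; the robot decides when it is safe to act.

MENTAL_TO_BEHAVIOUR = {
    "turn left": "turn left at the next opportunity",
    "turn right": "turn right at the next opportunity",
    "move forward": "follow the corridor",
}

def translate(mental_state: str) -> str:
    """Turn a decoded mental state into a command the robot can act on."""
    return MENTAL_TO_BEHAVIOUR[mental_state]

print(translate("turn left"))  # -> "turn left at the next opportunity"
```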



In this case, infrared sensors allow the robot to detect walls and objects and it will safely plod along until it reaches the next turning. And in case the software has got the command wrong, a light on the robot indicates what it is going to do, giving the user time to correct it.
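Putting the pieces together, one control cycle of such a robot might look like the sketch below. The infrared checks, the indicator light and the grace period for the user to cancel a misread command follow the article’s description, but the two-second window, the sensor stubs and every function name are assumptions.

```python
import time

CONFIRM_WINDOW_S = 2.0  # assumed grace period; the article gives no figure

def ir_blocked(side: str) -> bool:
    """Stub infrared sensor: pretend the left wall opens up."""
    return side == "right"

def user_vetoed() -> bool:
    """Stub: the user did not correct the announced command."""
    return False

def control_step(pending_turn: str | None) -> str | None:
    """One control cycle: turn at an opening, otherwise plod forward."""
    if pending_turn and not ir_blocked(pending_turn):
        print(f"indicator light: about to turn {pending_turn}")
        time.sleep(CONFIRM_WINDOW_S)      # give the user time to object
        if user_vetoed():
            return None                   # command cancelled, drop it
        print(f"motors: turn {pending_turn}")
        return None                       # turn executed
    print("motors: forward")              # keep following the corridor
    return pending_turn                   # still waiting for an opening

control_step("left")
```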
