In what is believed to be a medical first, researchers from Johns Hopkins Medicine (JHM) and the Johns Hopkins University Applied Physics Laboratory (APL) have enabled a quadriplegic man to control a pair of prosthetic arms with his mind.
In January 2019, surgeons implanted six electrodes into the brain of Robert “Buz” Chmielewski during a 10-hour operation. The goal was to improve sensation in his hands and enable him to mentally operate his prostheses. Paralyzed in a surfing accident as a teenager more than three decades ago, Chmielewski has only minimal movement in his arms and hands.
Now, almost two years into the joint JHM/APL research study that followed the surgery, Chmielewski has reached an important milestone: he can use both of his robotic appendages to perform simple tasks such as feeding himself.
“This type of research, known as brain-computer interface [BCI], has for the most part focused on only one arm, controlled from only one side of the brain,” says Pablo Celnik, M.D., professor and director of physical medicine and rehabilitation at the Johns Hopkins University School of Medicine and a member of the research team. “Thus, being able to control two robotic arms performing a basic activity of daily living—in this case, cutting a pastry and bringing it to the mouth using signals detected from both sides of the brain via implanted electrodes—is a clear step forward to achieve more complex task control directly fed from the brain.”
Chmielewski cutting food with his left hand and feeding himself with the right, at times simultaneously controlling both robot arms. Credit: Johns Hopkins University
“Simultaneous brain-machine interface control of two limbs is a particular challenge because it’s not a simple 1+1 summation of what the left arm is doing plus what the right arm is doing in the brain, but more like trying to calculate the sum of the two arms as 1 plus 1 equals 3.8,” adds Gabriela Cantarero, Ph.D., assistant professor of physical medicine and rehabilitation at the Johns Hopkins University School of Medicine and a member of the research team.
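Dr. Cantarero’s point can be made concrete. The sketch below is purely illustrative and is not the team’s actual decoder; the dimensions are assumptions and the data are synthetic placeholders. It shows the basic idea of fitting one joint decoder on the combined features from both hemispheres, rather than two independent one-arm decoders, so that the cross-hemisphere interactions that arise during two-arm movement can be captured:

```python
import numpy as np

# Illustrative dimensions only: feature channels recorded from electrode
# arrays in each hemisphere, and 3-D velocity commands for each arm.
N_LEFT_HEMI, N_RIGHT_HEMI = 96, 96
N_OUT = 6  # (vx, vy, vz) for the left arm + (vx, vy, vz) for the right arm

# Synthetic placeholder data standing in for recorded neural features and
# the corresponding two-arm movement targets.
rng = np.random.default_rng(0)
X = rng.standard_normal((5000, N_LEFT_HEMI + N_RIGHT_HEMI))
Y = rng.standard_normal((5000, N_OUT))

# Joint ridge-regression decoder: one linear map from the *combined*
# bilateral features to the full 6-D output. Fitting both arms together
# lets the decoder absorb cross-hemisphere interactions that appear during
# bimanual movement, which two separate one-arm decoders would miss.
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)

def decode(features: np.ndarray) -> np.ndarray:
    """Map one bilateral feature vector to velocities for both arms."""
    return features @ W
```

The joint fit matters because, as the “1 plus 1 equals 3.8” remark suggests, activity on each side of the brain shifts when both arms move at once, so decoding each arm in isolation would miss those interactions.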
The technology couples the brain-computer interface with a system of devices that uses artificial intelligence to automate a portion of the robotic control.
“Our goal is to make activities, such as eating, easy to accomplish by having the robot do one part of the work and leaving the user in charge of the details: which food to eat, where to cut, how big the cut piece should be, and so on,” says David Handelman, Ph.D., a senior roboticist at APL and a member of the research team. “By combining brain-computer interface signals with robotics and artificial intelligence, we allow the user to focus on the parts of the task that matter most.”
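As a rough illustration of this division of labor, consider the following minimal sketch. It is not the team’s actual software, and every name in it is hypothetical: decoded brain signals supply only the high-level choices, while the robot’s automation plans and executes the motion details.

```python
from dataclasses import dataclass

@dataclass
class UserChoice:
    """High-level decisions decoded from the user's brain signals."""
    food_item: str     # which food to eat
    cut_location: str  # where to cut
    piece_size: str    # how big the cut piece should be

def execute_feeding_task(choice: UserChoice) -> None:
    """Robot-side automation (a stand-in for a real motion planner).

    In a shared-control design, low-level details such as grasp points,
    joint trajectories, and collision avoidance are generated by the
    robot's planner rather than decoded moment-to-moment from the brain.
    """
    print(f"Left arm: cut a {choice.piece_size} piece of {choice.food_item} "
          f"at the {choice.cut_location}.")
    print("Right arm: bring the piece to the mouth.")

# The user supplies only the decisions that matter most; the robot
# fills in the motion details.
execute_feeding_task(UserChoice("pastry", "near edge", "bite-sized"))
```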
“Our next steps for this work include expanding the number and types of activities of daily living that we can demonstrate with this form of human-machine teaming, as well as providing users with additional sensory feedback as tasks are conducted,” says Francesco Tenore, Ph.D., an electrical engineer at APL and a member of the research team. “This means that the user won’t have to rely entirely on vision to know if he’s succeeding, in the same way that uninjured people can ‘feel’ how they’re tying their shoelaces without having to look.”