02/27/2004: Technologica
Plug It In! Plug It In To My Brain!
A direct interface between a brain and a machine?
from Popular Science
Something incredible is happening in a lab at Duke University's Center for Neuroengineering -- though, at first, it's hard to see just what it is. A robot arm swings from side to side, eerily lifelike, as if it were trying to snatch invisible flies out of the air. It pivots around and straightens as it extends its mechanical hand. The hand clamps shut and squeezes for a few seconds, then relaxes its grip and pulls back to shoot out again in a new direction. OK, nothing particularly astonishing here -- robot arms, after all, do everything from building our cars to sequencing our DNA. But those robot arms are operated by software; the arm at Duke follows commands of a different sort. To see where those commands are coming from, you have to follow a tangled trail of cables out of the lab and down the hall to another, smaller room.
Inside this room sits a motionless macaque monkey.
The monkey is strapped in a chair, staring at a computer screen. On the screen a black dot moves from side to side; when it stops, a circle widens around it. You wouldn't know just from watching, but that dot represents the movements of the arm in the other room. The circle indicates the squeezing of its robotic grip; as the force of the grip increases, the circle widens. In other words, the dot and the circle are responding to the robot arm's movements. And the arm? It's being directed by the monkey.
Did I mention the monkey is motionless?
Take another look at those cables: They snake into the back of the computer and then out again, terminating in a cap on the monkey's head, where they receive signals from hundreds of electrodes buried in its brain. The monkey is directing the robot with its thoughts.
For decades scientists have pondered, speculated on, and pooh-poohed the possibility of a direct interface between a brain and a machine -- only in the late 1990s did scientists start learning enough about the brain and signal-processing to offer glimmers of hope that this science-fiction vision could become reality. Since then, insights into the workings of the brain -- how it encodes commands for the body, and how it learns to improve those commands over time -- have piled up at an astonishing pace, and the researchers at Duke studying the macaque and the robotic arm are at the leading edge of the technology. "This goes way beyond what's been done before," says neuroscientist Miguel Nicolelis, co-director of the Center for Neuroengineering. Indeed, the performance of the center's monkeys suggests that a mind-machine merger could become a reality in humans very soon.

Nicolelis and his team are confident that in five years they will be able to build a robot arm that can be controlled by a person with electrodes implanted in his or her brain. Their chief focus is medical -- they aim to give people with paralyzed limbs a new tool to make everyday life easier. But the success they and other groups of scientists are achieving has triggered broader excitement in both the public and private sectors. The Defense Advanced Research Projects Agency has already doled out $24 million to various brain-machine research efforts across the United States, the Duke group among them. High on DARPA's wish list: mind-controlled battle robots, and airplanes that can be flown with nothing more than thought. You were hoping for something a bit closer to home? How about a mental telephone that you could use simply by thinking about talking?
The notion of decoding the brain's commands can seem, on the face of it, to be pure hubris. How could any computer eavesdrop on all the goings-on that take place in there every moment of ordinary life? Yet after a century of neurological breakthroughs, scientists aren't so intimidated by the brain; they treat it as just another information processor, albeit the most complex one in the world. "We don't see the brain as being a mysterious organ," says Craig Henriquez, Nicolelis's fellow co-director of the Center for Neuroengineering. "We see 1s and 0s popping out of the brain, and we're decoding it."
The source of all those 1s and 0s is, of course, the brain's billions of neurons. When a neuron gets an incoming stimulus at one end -- for example, photons strike the retina, which sends that visual information to a nearby neuron -- an electric pulse travels the neuron's length. Depending on the signals it receives, a neuron can crackle with hundreds of these impulses every second. When each impulse reaches the far end of the neuron, it triggers the cell to dump neurotransmitters that can spark a new impulse in a neighboring neuron. In this way, the signal gets passed around the brain like a baton in a relay race. Ultimately, this rapid-fire code gives rise to electrical impulses that travel along nerves that lead out of the brain and spread through the body, causing muscles to contract and relax in all sorts of different patterns, letting us blink, speak, walk, or play the sousaphone.
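To get a feel for what a computer actually receives from an electrode, here is a minimal sketch in Python (the spike times, bin width, and neuron count are invented for illustration, not taken from any lab's pipeline): the timestamps of a neuron's impulses are counted in short time bins, turning the crackle into rows of numbers that decoding software can digest.

    import numpy as np

    def bin_spikes(spike_times, duration, bin_width=0.1):
        # Count one neuron's spikes in fixed time bins; the resulting
        # counts are a crude firing-rate signal.
        edges = np.arange(0.0, duration + bin_width, bin_width)
        counts, _ = np.histogram(spike_times, bins=edges)
        return counts

    # Invented example: three neurons recorded for one second.
    neurons = [
        [0.05, 0.12, 0.13, 0.48, 0.90],  # neuron A
        [0.20, 0.21, 0.22, 0.23],        # neuron B fires in a burst
        [0.10, 0.50, 0.95],              # neuron C fires sparsely
    ]
    rates = np.array([bin_spikes(s, duration=1.0) for s in neurons])
    print(rates)  # one row per neuron, one column per 100-millisecond bin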
In the 1930s, neuroscientists began to record these impulses with implantable electrodes. Although each neuron is coated in an insulating sheath, an impulse still creates a weak electric field outside the cell. Researchers studying rat and monkey brains found that by placing the sensitive tip of an electrode near a neuron they could pick up the sudden changes in the electric field that occurred when signals coursed through the cell.
The more scientists studied this neural code, the more they realized that it wasn't all that different from the on-off digital code of computers. If scientists could decipher the code, translating one signal as "lift hand" and another as "look left," they could use the information to operate a machine. "This idea is not new," says John Chapin, a collaborator with the Duke researchers who works at the State University of New York Downstate Health Science Center in Brooklyn. "People have thought about it since the '60s."
But most researchers assumed that each type of movement was governed by a specific handful of the brain's billions of neurons -- and finding those few would mean monitoring the whole brain, making successful decoding a practical impossibility. "If you wanted to have a robot arm move left," Chapin explains, "you would have to find that small set of neurons that would carry the command to move to the left. But you don't know where those cells are in advance."
Thus everything that was known at the time suggested that brain-machine interfaces were a fool's errand. Everything, it turned out, was wrong.
In 1989, Miguel Nicolelis arrived from Brazil at Hahnemann University in Philadelphia, intent on cracking the neural code, regardless of how complex it might prove to be. At Hahnemann he found the perfect collaborator in John Chapin, who had spent the previous decade working on a device that could take 12 separate recordings from the brain at once; if the two of them could perfect it, they'd be among the first to listen in on many individual neurons at the same time.
Every aspect of the project posed new challenges. To work adequately, the electrodes needed to be tiny enough to be safely inserted into the brain, and precise enough to send a reliable stream of data to a computer. Conventional electrodes would get covered in scar tissue. The problem, Chapin and Nicolelis found, was that the electrodes, designed as rigid spikes, were damaging the surrounding brain tissue -- so the scientists subbed in electrodes with flexible tips. "They have to float around," Nicolelis says. "But if they are rigid and move around, the brain can be dissected."
By the mid-'90s, Nicolelis and Chapin finally were inserting their arrays of electrodes into the brains of living rats -- and what they discovered instantly challenged the conventional wisdom on the way neurons send their messages. What they found was that the commands for even the simplest of movements -- twitching a whisker, for example -- required far more than just a tiny cluster of neurons. In fact, a whole orchestra of neurons scattered across the brain played in synchrony. And the neurons behaved like an orchestra in another important way. Beethoven's Fifth Symphony and Gershwin's Rhapsody in Blue sound nothing alike, even if many of the same musicians are playing both pieces, on many of the same instruments, using many of the same notes. Likewise, many of the same neurons, it turned out, participated in generating many different kinds of body movement.
With this discovery, the biggest supposed roadblock to making a brain-machine interface suddenly disappeared. Rather than needing to find the tiny handful of neurons responsible for a particular movement, scientists could, by listening to a small fraction of neurons in a brain, generate enough information to recognize many different commands. Think again of the brain as an orchestra: You don't need to set up a microphone next to every instrument to tell whether the orchestra is playing Beethoven's Fifth or Rhapsody in Blue. You could probably figure it out by listening to just a handful of musicians.
To test this supposition, Chapin and Nicolelis inserted electrodes into a rat's brain and began monitoring 46 neurons. They then trained the rat to press a lever to get a drink of water, and used the electrodes to record the pattern of signals the animal produced to move its arm. Then Chapin and Nicolelis disconnected the lever from the water supply, so that pressing the lever did nothing. The rat went on pressing the lever, but now the scientists gave the rat a drink of water when it simply produced the "press lever" command in its brain. After a while, the rat stopped bothering to lift its arm, and just thought about lifting it.
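The article doesn't say how the computer recognized that command, but one can picture it as pattern matching: compare the population's current firing rates against a stored template of what real lever presses looked like, and dispense water when the match is close enough. A toy Python sketch, with entirely invented numbers:

    import numpy as np

    def looks_like_press(activity, template, threshold=0.8):
        # Cosine similarity between current population activity and the
        # recorded "press lever" pattern; all values here are invented.
        sim = np.dot(activity, template) / (
            np.linalg.norm(activity) * np.linalg.norm(template))
        return sim >= threshold

    template = np.array([12., 3., 0., 8., 15., 1.])  # mean rates during real presses
    resting  = np.array([2., 8., 9., 1., 2., 7.])    # unrelated activity
    intent   = np.array([11., 4., 1., 7., 14., 2.])  # "thinking about pressing"

    print(looks_like_press(resting, template))  # False -> no reward
    print(looks_like_press(intent, template))   # True  -> dispense water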
Not long after the rat breakthrough, Nicolelis got a job at Duke and began setting up a new lab to take the research to a higher level. There he began to implant electrodes into monkeys instead of rats, hoping to get them to operate more complex equipment with their brains. Nicolelis teamed up with biomedical engineers at Duke to design new arrays of electrodes, along with high-capacity signal processors, that could handle the new challenge. "Miguel always wants more channels," says biomedical engineer Patrick Wolf with a grin. "It's like, 'More power, Scotty.'"
By 2000, Nicolelis and his colleagues had invented a system that could recognize patterns in monkey brains well enough to let the animals swing a robot arm to the left or to the right with their thoughts. The success gave the researchers the confidence to set themselves a goal: to design a system that would allow paralyzed people to operate a prosthetic arm with a set of implanted electrodes. The arm wouldn't let people play a piano sonata, but it would let them do simpler things like drink a glass of water. "That's a fairly complicated action," says Henriquez. "Going out, grabbing a glass, grabbing with enough pressure to not let it slip, raising it, drinking from it, and putting it back."
The next steps toward that goal would be to make the robot arm move in more intricate ways, and then to add a simple hand that could also follow a monkey's commands. This is the system that's online today: A monkey learns how to use it by sitting at a computer screen and using a joystick to move a cursor across the screen. When a dot appears on the screen, the monkey drags the cursor on top of it in order to get a squirt of juice through a tube rigged up next to its mouth. The electrodes in the monkey's brain record the signals from its motor neurons as they form the commands that move its arm.
The signals are piped into a computer, which compares them to the joystick's movements and figures out how to predict the latter from the former. Once the computer has grown familiar enough with the monkey's brain patterns, it uses those signals rather than input from the joystick to move the cursor across the screen.
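How might software "figure out how to predict" one from the other? One standard approach, sketched below in Python with synthetic data, is a linear fit: find weights that best map each moment's firing rates onto the joystick's recorded position, then reuse those weights to turn fresh neural activity straight into cursor coordinates. (The article doesn't confirm that Duke's system used exactly this method, and every number here is invented.)

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic stand-ins: firing rates for 64 neurons over 500 time bins,
    # plus the joystick's (x, y) position in each bin. In the experiment
    # these would come from the electrode array and the joystick itself.
    n_bins, n_neurons = 500, 64
    hidden = rng.normal(size=(n_neurons, 2))
    rates = rng.poisson(lam=5.0, size=(n_bins, n_neurons)).astype(float)
    joystick = rates @ hidden + rng.normal(scale=2.0, size=(n_bins, 2))

    # Training phase: least-squares weights that predict joystick position
    # from simultaneous firing rates.
    weights, *_ = np.linalg.lstsq(rates, joystick, rcond=None)

    # Brain-control phase: the same weights convert new neural activity
    # directly into cursor (or robot-arm) coordinates; no joystick needed.
    new_rates = rng.poisson(lam=5.0, size=(1, n_neurons)).astype(float)
    print(new_rates @ weights)

Grip force, which enters the story below, would simply be one more output column fit the same way.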
"After a while, like the rats before her, she realizes she doesn't have to move her hand," says Nicolelis. The monkey simply thinks the cursor across the screen.
Then the monkey learns to use its mind to control a robot. (The monkey, however, doesn't realize the robot even exists; it is simply focused on moving the cursor to gain rewards.) The monkey operates the joystick again, but the signals from the joystick go to the robot arm. The cursor still moves across the screen; now, however, it's responding to the robot's movements rather than the joystick's. The switch is awkward at first for the monkey -- it's a bit like learning to type with the tips of two pens instead of your fingers. But by watching the cursor move on the screen, the monkey manages to control the robot with its brain signals alone.
When a monkey has learned this skill, it's ready for the third and final challenge: reaching plus grabbing. When the monkey moves the cursor to the dot, it now has to squeeze the joystick. Sensors measure how hard the monkey squeezes, and the computer screen displays the force as an expanding disc on the screen. By watching the disc expand, the monkey learns how to apply different amounts of force in order to get its reward. "She has to squeeze very precisely," says Nicolelis.
No one knew if a monkey could meet this challenge. Clearly, the electrode arrays could recognize commands to move the arm back and forth. But what if squeezing was controlled by neurons too far away from the electrodes to be monitored? Nicolelis put his faith in the orchestral nature of neurons -- and he wasn't disappointed. The system predicted how hard the monkey was squeezing just as accurately as it predicted where the monkey was moving the arm. "The predictions," he says with pride, "are unbelievably good."
Much of the money that funds Nicolelis's research comes from DARPA, which in 2003 ratcheted up its long-standing interest in brain-related research to a new level by launching the Brain-Machine Interface Program (BMI) with an initial grant of $24 million divided among six different labs. "Imagine how useful and important it could be for a war fighter to use only the power of his thoughts to do things at great distances," says Tony Tether, the director of DARPA.
DARPA is famous for funding futuristic technology of all sorts, from the precursor to the Internet to the ill-fated terrorist futures market, which was attacked by Congress last summer. And according to former BMI program director Alan Rudolph, DARPA is well aware that there's no guarantee that the brain-machine interface research will ever make it onto the battlefield. "There's plenty of risk," he says. "If there wasn't a lot of risk, we wouldn't be involved."
In addition to the Duke research, DARPA's funding is helping other scientists pursue the linkage of brain and machine. At the University of Michigan, for example, it's supporting research that may eventually let humans control a more classic free-standing robot with their thoughts. The robot in question, known as RHex, can scurry around on six legs like a mechanical cockroach. Researchers are investigating how to teach rats to control the movement of RHex by pressing levers that steer the robot left and right. Then, in a process similar to the one employed at Duke, scientists will decode the brain patterns the rats use to press the different levers, and enable the rat to guide RHex by thought alone. Humans could someday use the same system to guide robots into collapsed buildings or across rough terrain on distant planets -- or, DARPA hopes, into battle.
Not all of DARPA's research is limited to manipulating machines. The brain does more than just move arms and legs -- it also sends out complex commands that control muscles in the throat, tongue, and mouth, creating speech. It's conceivable that a computer could learn to recognize those commands before they leave the brain and then translate them into words. "You could imagine thinking about talking and having it projected into a room 2,000 miles away," says Craig Henriquez. "I don't see that that will be a problem. It's very, very possible."
But Henriquez and other neuroengineers do see one enormous roadblock in the way of DARPA's goal. According to Rudolph, it would be unethical to implant electrodes in the heads of healthy soldiers. He's betting that future technology will be able to read brain signals without actually being inside the brain. Today the most common way to attempt this is with electroencephalography (EEG), in which electrodes are placed on the scalp. But EEG has a serious drawback: it can pick up only a blurry, weak signal compared to what electrodes nestled in the brain can record. People can learn to control a computer by altering their EEG patterns, but it takes months of training to type just a few letters a minute. That's not the sort of bandwidth you want for operating an arm. "To the best of our knowledge, that doesn't look very promising at the moment," Henriquez says.
Rudolph expects other approaches to pay off down the road. "Out at 20 years I have a lot of hope," he says. He points to a new kind of brain imaging known as magnetoencephalography, or MEG, that uses magnets to pick up electrical activity in the brain. MEG has the sort of speed and resolution that might make a brain-machine interface possible. In their current form, MEG scanners have to be protected by shielded walls and cooled with giant tanks of helium. But Rudolph speculates that room-temperature superconductors and other materials of the future will make MEG portable. "If you think about using superconducting magnets, maybe you could figure out how to make a helmet," he says. It might be possible in a few decades to design a helmet-like scanner that a soldier could wear along with a signal-processing supercomputer in his backpack. "At least DARPA's got some people looking at that," Rudolph says.
One of the ways you can tell that the monkey-controlled robot arms at Duke aren't science fiction is that sometimes they don't work. Some days the circuit boards fry, and other days the prospect of a reward of juice just isn't enough to motivate monkeys to play the game. For all the progress the researchers have made in recent years, the work is still hard, and there's a lot more hard work ahead before they see their research making a difference in people's lives.
Take the equipment itself. Wires sprout from the implants in a monkey's head and are jacked into a big signal processor, which in turn is plugged into a computer, which in turn is connected by cables to a robot arm. The Duke researchers will need to design a far more portable, unobtrusive system to make it practical for humans. They envision implanting an array of electrodes in key regions of a quadriplegic patient's brain. The signals detected by the electrodes would travel through a wire to a small processor embedded in the skull. From there, the processor would wirelessly transmit its signals out of the body. "It's like having an implanted cellphone," says Nicolelis.
These signals would be picked up by a portable computer, which would then generate commands for the artificial limb. Patrick Wolf has been aggressively tackling this part of the system, and has already built a wireless backpack computer for the Duke monkeys, with enough power to transmit their brain signals 100 meters through the air.
The researchers are also grappling with the fact that getting commands out of the brain is not the full secret to controlling an arm. The brain also needs feedback in order to make its commands more precise. Imagine trying to pick up a glass of water without a sense of touch: Instead of guiding your fingers around its side, you might simply knock it over. Or, once you'd managed to grab the glass, you might crush it accidentally as you tried to pick it up. Or, after passing those stages successfully, you might just splash your face with water.
John Chapin is working on ways to give people the feedback they'll need to make the Duke brain-machine interface a reality. He's experimenting with how to deliver information directly into the brain -- particularly to the region of the brain that handles the sense of touch. But that's long-term research. In the short term, a group at MIT is designing a cloth-like material that can be attached to a place on a person's body where he or she still has a sense of touch. Force sensors on the limb can then relay their signals to the cloth, which will turn that information into different vibrations. It's not the same thing as feeling a glass in your hand, but your brain can probably learn to take advantage of the information.
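No design details of the MIT material are given, but the core idea, translating a force reading from the artificial hand into a vibration the skin can feel, reduces to a simple mapping. A hypothetical Python sketch (all ranges and scalings invented):

    def force_to_vibration(force_newtons, max_force=20.0,
                           min_hz=50.0, max_hz=250.0):
        # Map a grip-force reading to a vibration frequency for the
        # skin-worn patch. Linear and entirely hypothetical: the real
        # device's ranges and encoding are not described in the article.
        clamped = max(0.0, min(force_newtons, max_force))
        return min_hz + (max_hz - min_hz) * (clamped / max_force)

    for grip in (0.0, 5.0, 20.0):
        print(f"{grip:5.1f} N -> vibrate at {force_to_vibration(grip):5.1f} Hz")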
Learning, in fact, turns out to be the secret weapon of brain-machine interfaces. Nicolelis's latest studies have shown what is happening to the Duke monkeys on a neurological level as they use the dots and circles on the computer screen to alter the commands their brains generate. "Now we have plenty of evidence that the brain is changing, and in ways I didn't expect," says Nicolelis. "It happens in a matter of minutes." As the monkey trains, neurons in its brain begin to alter their firing patterns. More and more neurons get involved in producing commands -- in fact, the number can triple. At the same time, a special set of neurons emerges that becomes active only when the monkey operates the robot directly with its brain, and not when it uses the joystick. Remarkably, these neurons switch on as soon as Nicolelis disables the joystick.
With this extra set of neurons, Nicolelis explains, "the brain is assimilating the robot. It's creating a representation of it in different areas of the motor cortex" -- the part of the monkey's brain where movement commands are generated. As the brain carves out a special place for its representation of the robot, Nicolelis speculates, it's possible that the robot begins to feel as much a part of the body as the monkey's own arm.
If he's correct, this is very good news for people who might someday try to use his prosthetic limb. Their brains will reorganize themselves to master the limb, which will take on a natural feel. And since humans can be told what they should be learning -- instead of figuring it out on their own as monkeys do -- the training process may take even less time. "This could be done in a matter of a few trials, because you could instruct a human what to do," says Nicolelis.
The fact that the monkeys' brains adapt so readily gives the Duke researchers confidence in the face of all the challenges that lie ahead. While it's too soon to say whether brain-machine interfaces are going to turn up on the battlefield, they are almost certainly headed for the doctor's office. "We have a plan for every part of the puzzle," says Patrick Wolf, who strongly believes that the Duke team will meet its five-year deadline. "I don't see any showstopper."
Carl Zimmer is a science writer based in Guilford, Connecticut. His most recent book, Soul Made Flesh: The Discovery of the Brain -- and How It Changed the World, was published by the Free Press in January.