Sixteen years ago, Dennis DeGray was paralyzed in an accident. Now, computer implants in his brain allow him some semblance of control.
By Ferris Jabr
On the evening of Oct. 10, 2006, Dennis DeGray’s mind was nearly severed from his body. After a day of fishing, he returned to his home in Pacific Grove, Calif., and realized he had not yet taken out the trash or recycling. It was raining fairly hard, so he decided to sprint from his doorstep to the garbage cans outside with a bag in each hand. As he was running, he slipped on a patch of black mold beneath some oak trees, landed hard on his chin, and snapped his neck between his second and third vertebrae.
While recovering, DeGray, who was 53 at the time, learned from his doctors that he was permanently paralyzed from the collarbones down. With the exception of vestigial twitches, he cannot move his torso or limbs. “I’m about as hurt as you can get and not be on a ventilator,” he told me. For several years after his accident, he “simply laid there, watching the History Channel” as he struggled to accept the reality of his injury.
Some time later, while at a fund-raising event for stem-cell research, he met Jaimie Henderson, a professor of neurosurgery at Stanford University. The pair got to talking about robots, a subject that had long interested DeGray, who grew up around his family’s machine shop. As DeGray remembers it, Henderson captivated him with a single question: Do you want to fly a drone?
Henderson explained that he and his colleagues had been developing a brain-computer interface: an experimental connection between someone’s brain and an external device, like a computer, robotic limb or drone, which the person could control simply by thinking. DeGray was eager to participate, eventually moving to Menlo Park to be closer to Stanford as he waited for an opening in the study and the necessary permissions. In the summer of 2016, Henderson opened DeGray’s skull and exposed his cortex — the thin, wrinkled, outermost layer of the brain — into which he implanted two 4-millimeter-by-4-millimeter electrode arrays resembling miniature beds of nails. Each array had 100 tiny metal spikes that, collectively, recorded electric impulses surging along a couple of hundred neurons in the motor cortex, a brain region involved in voluntary movement.
After a recovery period, several of Henderson’s collaborators assembled at DeGray’s home and situated him in front of a computer screen displaying a ring of eight white dots the size of quarters, which took turns glowing orange. DeGray’s task was to move a cursor toward the glowing dot using his thoughts alone. The scientists attached cables to the metal pedestals protruding from DeGray’s head, carrying the electrical signals recorded in his brain to a decoder: a nearby network of computers running machine-learning algorithms.
The algorithms were constructed by David Brandman, at the time a doctoral student in neuroscience collaborating with the Stanford team through a consortium known as BrainGate. He designed them to rapidly associate different patterns of neural activity with different intended hand movements, and to update themselves every two to three seconds, in theory becoming more accurate each time. If the neurons in DeGray’s skull were like notes on a piano, then his distinct intentions were analogous to unique musical compositions. An attempt to lift his hand would coincide with one neural melody, for example, while trying to move his hand to the right would correspond to another. As the decoder learned to identify the movements DeGray intended, it sent commands to move the cursor in the corresponding direction.
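For readers curious how such a decoder might look in practice, here is a minimal, purely illustrative sketch in Python. It is not the BrainGate team’s code; the class, channel count and parameters are invented for the example. It maps a vector of per-electrode firing rates to a two-dimensional cursor velocity with ridge regression and refits that mapping as calibration examples accumulate, loosely mirroring the every-few-seconds updates described above.

```python
import numpy as np

# Purely illustrative sketch (not the BrainGate code): a decoder that maps
# per-electrode firing rates to a 2-D cursor velocity and refits that mapping
# every few seconds as calibration examples accumulate.

class OnlineCursorDecoder:
    def __init__(self, n_channels=200, ridge=1.0):
        # n_channels: assumed here, roughly two 100-electrode arrays.
        self.ridge = ridge                   # regularization strength (assumed)
        self.W = np.zeros((n_channels, 2))   # firing rates -> (vx, vy)
        self.X, self.Y = [], []              # accumulated training pairs

    def observe(self, firing_rates, intended_velocity):
        # During calibration, the "intended" velocity is assumed to point
        # from the cursor toward the currently glowing target.
        self.X.append(firing_rates)
        self.Y.append(intended_velocity)

    def refit(self):
        # Re-estimate the linear mapping (ridge regression) from all examples
        # gathered so far; meant to be called every couple of seconds.
        if not self.X:
            return
        X, Y = np.asarray(self.X), np.asarray(self.Y)
        A = X.T @ X + self.ridge * np.eye(X.shape[1])
        self.W = np.linalg.solve(A, X.T @ Y)

    def decode(self, firing_rates):
        # Turn the latest neural activity into a cursor-velocity command.
        return firing_rates @ self.W
```

The real system involves far more careful signal processing and richer statistical models; the sketch is meant only to convey the basic loop of observing, refitting and decoding.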
Brandman asked DeGray to imagine a movement that would give him intuitive control of the cursor. Staring at the computer screen, searching his mind for a way to begin, DeGray remembered a scene from the movie “Ghost” in which the deceased Sam Wheat (played by Patrick Swayze) invisibly slides a penny along a door to prove to his girlfriend that he still exists in a spectral form. DeGray pictured himself pushing the cursor with his finger as if it were the penny, willing it toward the target. Although he was physically incapable of moving his hand, he tried to do so with all his might. Brandman was ecstatic to see the decoder work as quickly as he had hoped. In 37 seconds, DeGray gained control of the cursor and reached the first glowing dot. Within several minutes he hit dozens of targets in a row.
Only a few dozen people on the planet have had neural interfaces embedded in their cortical tissue as part of long-term clinical research. DeGray is now one of the most experienced and dedicated among them. Since that initial trial, he has spent more than 1,800 hours spanning nearly 400 training sessions controlling various forms of technology with his mind. He has played a video game, manipulated a robotic limb, sent text messages and emails, purchased products on Amazon and even flown a drone — just a simulator, for now — all without lifting a finger. Together, DeGray and similar volunteers are exploring the frontier of a technology with the potential to fundamentally alter how humans and machines interact.
Scientists and engineers have been creating and studying brain-computer interfaces since the 1950s. Given how much of the brain’s behavior remains a mystery — not least how consciousness emerges from three pounds of electric jelly — the aggregate achievements of such systems are remarkable. Paralyzed individuals with neural interfaces have learned to play simple tunes on a digital keyboard, control exoskeletons and maneuver robotic limbs with enough dexterity to drink from a bottle. In March, a team of international scientists published a study documenting for the first time that someone with complete, bodywide paralysis used a brain-computer interface to convey their wants and needs by forming sentences one letter at a time.
Neural interfaces can also create bidirectional pathways of communication between brain and machine. In 2016, Nathan Copeland, who was paralyzed from the chest down in a car accident, not only fist-bumped President Barack Obama with a robotic hand, he also experienced the tactile sensation of the bump in his own hand as the prosthesis sent signals back to electrodes in his brain, stimulating his sensory cortex. By combining brain-imaging technology and neural networks, scientists have also deciphered and partly reconstructed images from people’s minds, producing misty imitations that resemble weathered Polaroids or smeared oil paintings.
Most researchers developing brain-computer interfaces say they are primarily interested in therapeutic applications, namely restoring movement and communication to people who are paralyzed or otherwise disabled. Yet the obvious potential of such technology and the increasing number of high-profile start-ups developing it suggest the possibility of much wider adoption: a future in which neural interfaces actually enhance people’s innate abilities and grant them new ones, in addition to restoring those that have been lost.
In the history of life on Earth, we have never encountered a mind without a body. Highly complex cognition has always been situated in an intricate physical framework, whether eight suction-cupped arms, four furry limbs or a bundle of feather and beak. Human technology often amplifies the body’s inherent abilities or extends the mind into the surrounding environment through the body. Art and writing, agriculture and engineering: All human innovations have depended on, and thus been constrained by, the body’s capacity to physically manipulate whatever tools the mind devises. If brain-computer interfaces fulfill their promise, perhaps the most profound consequence will be this: Our species could transcend those constraints, bypassing the body through a new melding of mind and machine.
On a spring morning in 1893, during a military training exercise in Würzburg, Germany, a 19-year-old named Hans Berger was thrown from his horse and nearly crushed to death by the wheel of an artillery gun. The same morning, his sister, 60 miles away in Coburg, was flooded with foreboding and persuaded her father to send a telegram inquiring about her brother’s well-being. That seemingly telepathic premonition obsessed Berger, compelling him to study the mysteries of the mind. His efforts culminated in the 1920s with the invention of electroencephalography (EEG): a method of recording electrical activity in the brain using electrodes attached to the scalp. The oscillating patterns his apparatus produced, reminiscent of a seismograph’s scribbling, were the first transcriptions of the human brain’s cellular chatter.
In the following decades, scientists learned new ways to record, manipulate and channel the brain’s electrical signals, constructing ever-more-elaborate bridges between mind and machine. In 1964, José Manuel Rodríguez Delgado, a Spanish neurophysiologist, brought a charging bull to a halt using radio-controlled electrodes embedded in the animal’s brain. In the 1970s, the University of California Los Angeles professor Jacques Vidal coined the term brain-computer interface and demonstrated that people could mentally guide a cursor through a simple virtual maze. By the early 2000s, the Duke University neuroscientist Miguel Nicolelis and his collaborators had published studies demonstrating that monkeys implanted with neural interfaces could control robotic prostheses with their minds. In 2004, Matt Nagle, who was paralyzed from the shoulders down, became the first human to do the same. He further learned how to use his thoughts alone to play Pong, change channels on a television, open emails and draw a circle on a computer screen.
Since then, the pace of achievements in the field of brain-computer interfaces has increased greatly, thanks in part to the rapid development of artificial intelligence. Machine-learning software has substantially improved the efficiency and accuracy of neural interfaces by automating some of the necessary computation and anticipating the intentions of human users, not unlike how your phone or email now has A.I.-assisted predictive text. Last year, the University of California San Francisco neurosurgeon Edward Chang and a dozen collaborators published a landmark study describing how a neural interface gave a paralyzed 36-year-old man a voice for the first time in more than 15 years. Following a car crash and severe stroke at age 20, the man, known as Pancho, lost the ability to produce intelligible speech. Over a period of about 20 months, 128 disk-shaped electrodes placed on top of Pancho’s sensorimotor cortex recorded electrical activity in brain regions involved in speech processing and vocal tract control as he attempted to speak words aloud. A decoder associated different patterns of neural activity with different words and, with the help of language-prediction algorithms, eventually learned to decipher 15 words per minute with 75 percent accuracy on average. Although this is only a fraction of the rate of typical speech in English (140 to 200 words a minute), it is considerably faster than many point-and-click methods of communication available to people with severe paralysis.
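To make the role of the language-prediction step concrete, the toy sketch below shows one way a word decoder could fold a language model into its choice. It is my own simplification, not the researchers’ pipeline: the function name, the bigram model and the unseen-pair penalty are all assumptions for the example. A neural classifier scores each candidate word, and a pre-trained bigram prior nudges the decision toward words that plausibly follow what came before, so that sentence context can rescue words the classifier alone is unsure about.

```python
# Toy illustration (not the study's pipeline): combine a neural classifier's
# score for each candidate word with a language-model prior, so that sentence
# context can rescue words the classifier is unsure about.

def decode_next_word(neural_log_likelihoods, previous_word, bigram_log_probs):
    """neural_log_likelihoods: dict mapping each vocabulary word to
    log P(neural activity | word), as scored by a trained classifier.
    bigram_log_probs: dict mapping (previous_word, word) pairs to
    log P(word | previous_word) from an assumed pre-trained language model."""
    best_word, best_score = None, float("-inf")
    for word, log_likelihood in neural_log_likelihoods.items():
        # Penalize word pairs the language model has never seen (assumed value).
        prior = bigram_log_probs.get((previous_word, word), -10.0)
        score = log_likelihood + prior
        if score > best_score:
            best_word, best_score = word, score
    return best_word
```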
In another groundbreaking study published last year, Jaimie Henderson and several colleagues, including Francis Willett, a biomedical engineer, and Krishna Shenoy, an electrical engineer, reported an equally impressive yet entirely different approach to communication by neural interface. The scientists recorded neurons firing in Dennis DeGray’s brain as he visualized himself writing words with a pen on a notepad, trying to recreate the distinct hand movements required for each letter. He mentally wrote thousands of words in order for the system to reliably recognize the unique patterns of neural activity specific to each letter and output words on a screen. “You really learn to hate M’s after a while,” he told me with characteristic good humor. Ultimately, the method was extremely successful. DeGray was able to type up to 90 characters or 18 words a minute — more than twice the speed of his previous efforts with a cursor and virtual keyboard. He is the world’s fastest mental typist. “Sometimes I get going so fast it’s just one big blur,” he said. “My concentration gets to a point where it’s not unusual for them to remind me to breathe.”
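Conceptually, the handwriting decoder can be pictured as a per-letter pattern classifier. The sketch below is a hypothetical nearest-centroid stand-in, not the study’s method, which relied on far more sophisticated machine-learning models: each attempted letter yields a window of neural features, and the decoder outputs whichever letter’s average training pattern that window most resembles.

```python
import numpy as np

# Hypothetical, greatly simplified stand-in for a handwriting decoder:
# a nearest-centroid classifier that labels each window of neural features
# with whichever letter's average training pattern it most resembles.

class LetterDecoder:
    def __init__(self):
        self.centroids = {}  # letter -> mean neural-feature vector

    def fit(self, features_by_letter):
        # features_by_letter: dict mapping each character to an array of shape
        # (n_examples, n_features), gathered while the participant repeatedly
        # attempted to handwrite that character.
        for letter, examples in features_by_letter.items():
            self.centroids[letter] = np.asarray(examples).mean(axis=0)

    def predict(self, features):
        # Return the letter whose average pattern is closest to this window.
        return min(self.centroids,
                   key=lambda c: np.linalg.norm(features - self.centroids[c]))
```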
Achievements in brain-computer interfaces to date have relied on a mix of invasive and noninvasive technologies. Many scientists in the field, including those who work with DeGray, rely on a surgically embedded array of spiky electrodes produced by a Utah-based company, Blackrock Neurotech. The Utah Array, as it’s known, can differentiate the signals of individual neurons, providing more refined control of connected devices, but the surgery it requires can result in infection, inflammation and scarring, which may contribute to eventual degradation of signal strength. Interfaces that reside outside the skull, like headsets that depend on EEG, are currently limited to eavesdropping on the collective firing of groups of neurons, sacrificing power and precision for safety. Further complicating the situation, most neural interfaces studied in labs require cumbersome hardware, cables and an entourage of computers, whereas most commercially available interfaces are essentially remote controls for rudimentary video games, toys and apps. These commercial headsets don’t solve any real-world problems, and the more powerful systems in clinical studies are too impractical for everyday use.
With this problem in mind, Elon Musk’s company Neuralink has developed an array of flexible polymer threads studded with more than 3,000 tiny electrodes connected to a bottlecap-size wireless radio and signal processor, as well as a robot that can surgically implant the threads in the brain, avoiding blood vessels to reduce inflammation. Neuralink has tested its system in animals and has said it would begin human trials this year.
Synchron, which is based in New York, has developed a device called a Stentrode that doesn’t require open-brain surgery. It is a four-centimeter, self-expanding tubular lattice of electrodes, which is inserted into one of the brain’s major blood vessels via the jugular vein. Once in place, a Stentrode detects local electric fields produced by nearby groups of neurons in the motor cortex and relays recorded signals to a wireless transmitter embedded in the chest, which passes them on to an external decoder. In 2021, Synchron became the first company to receive F.D.A. approval to conduct human clinical trials of a permanently implantable brain-computer interface. So far, four people with varied levels of paralysis have received Stentrodes and used them, some in combination with eye-tracking and other assistive technologies, to control personal computers while unsupervised at home.
Philip O’Keefe, 62, of Greendale, Australia, received a Stentrode in April 2020. Because of amyotrophic lateral sclerosis (A.L.S.), O’Keefe can walk only short distances, cannot move his left arm and is losing the ability to speak clearly. At first, he explained, he had to concentrate intensely on the imagined movements required to operate the system — in his case, thinking about moving his left ankle for different lengths of time. “But the more you use it, the more it’s like riding a bike,” he said. “You get to a stage where you don’t think so hard about the movement you need to make. You think about the function you need to execute, whether it’s opening an email, scrolling a web page or typing some letters.” In December, O’Keefe became the first person in the world to post to Twitter using a neural interface: “No need for keystrokes or voices,” he wrote by mind. “I created this tweet just by thinking it. #helloworldbci”
Thomas Oxley, a neurologist and the founding C.E.O. of Synchron, thinks future brain-computer interfaces will fall somewhere between LASIK and cardiac pacemakers in terms of their cost and safety, helping people with disabilities recover the capacity to engage with their physical surroundings and a rapidly evolving digital environment. “Beyond that,” he says, “if this technology allows anyone to engage with the digital world better than with an ordinary human body, that is where it gets really interesting. To express emotion, to express ideas — everything you do to communicate what is happening in your brain has to happen through the control of muscles. Brain-computer interfaces are ultimately going to enable a passage of information that goes beyond the limitations of the human body. And from that perspective, I think the capacity of the human brain is actually going to increase.”
There is no technology yet that can communicate human thoughts as fast as they occur. Fingers and thumbs will never move quickly enough. And there are many forms of information processing better suited to a computer than to a human brain. Oxley speculated about the possibility of using neural interfaces to enhance human memory, bolster innate navigational skills with a direct link to GPS, sharply increase the human brain’s computational abilities and create a new form of communication in which emotions are wordlessly “thrown” from one mind to another. “It’s just the beginning of the dawn of this space,” Oxley said. “It’s really going to change the way we interact with one another as a species.”
Frederic Gilbert, a philosopher at the University of Tasmania, has studied the ethical quandaries posed by neurotechnology for more than a decade. Through in-depth interviews, he and other ethicists have documented how some people have adverse reactions to neural implants, including self-estrangement, increased impulsivity, mania, self-harm and attempted suicide. In 2015, he traveled to Penola, South Australia, to meet Rita Leggett, a 54-year-old patient with a very different, though equally troubling, experience.
Several years earlier, Leggett participated in the first human clinical trial of a particular brain-computer interface that warned people with epilepsy of imminent seizures via a hand-held beeper, giving them enough time to take a stabilizing medication or get to a safe place. With the implant, she felt much more confident and capable and far less anxious. Over time, it became inextricable from her identity. “It was me, it became me,” she told Gilbert. “With this device I found myself.” Around 2013, NeuroVista, the company that manufactured the neural interface, folded because it could not secure new funding. Despite her resistance, Leggett underwent an explantation. She was devastated. “Her symbiosis was so profound,” Gilbert told me, that when the device was removed, “she suffered a trauma.”
In a striking parallel, a recent investigation by the engineering magazine IEEE Spectrum revealed that, because of insufficient revenues, the Los Angeles-based neuroprosthetics company Second Sight had stopped producing and largely stopped servicing the bionic eyes it sold to more than 350 visually impaired people around the world. At least one individual’s implant has already failed with no way to repair it — a situation that could befall many others. Some patients enrolled in clinical trials for Second Sight’s latest neural interface, which directly stimulates the visual cortex, have either removed the device or are contemplating doing so.
If sophisticated brain-computer interfaces eventually transcend medical applications and become consumer goods available to the general public, the ethical considerations surrounding them multiply exponentially. In a 2017 commentary on neurotechnology, the Columbia University neurobiologist Rafael Yuste and 24 colleagues identified four main areas of concern: augmentation; bias; privacy and consent; and agency and identity. Neural implants sometimes cause disconcerting shifts in patients’ self-perception. Some have reported feeling like “an electronic doll” or developing a blurred sense of self. Were someone to commit a crime and blame an implant, how would the legal system determine fault? As neural interfaces and artificial intelligence evolve, these tensions will probably intensify.
All the scientists and engineers I spoke to acknowledged the ethical issues posed by neural interfaces, yet most were more preoccupied with consent and safety than what they regarded as far-off or unproven concerns about privacy and agency. In the world of academic scientific research, the appropriate future boundaries for the technology remain contentious.
In the private sector, ethics are often a footnote to enthusiasm, when they are mentioned at all. As pressure builds to secure funding and commercialize, spectacular and sometimes terrifying claims proliferate. Christian Angermayer, a German entrepreneur and investor, has said he is confident that everyone will be using brain-computer interfaces within 20 years. “It is fundamentally an input-output device for the brain, and it can benefit a large portion of society,” he posted on LinkedIn last year. “People will communicate with each other, get work done and even create beautiful artwork, directly with their minds.” Musk has described the ultimate goal of Neuralink as achieving “a sort of symbiosis with artificial intelligence” so that humanity is not obliterated, subjugated or “left behind” by superintelligent machines. “If you can’t beat em, join em,” he once said on Twitter, calling it a “Neuralink mission statement.” And Max Hodak, a former Neuralink president who was forced out of the company and went on to found a new one called Science, dreams of using neural implants to make the human sensorium “directly programmable” and thereby create a “world of bits”: a parallel virtual environment, a lucid waking dream, that appears every time someone closes their eyes.
Today, DeGray, 68, still resides in the Menlo Park assisted-living facility he chose a decade ago for its proximity to Stanford. He still has the same two electrode arrays that Henderson embedded in his brain six years ago, as well as the protruding metal pedestals that provide connection points to external machines. Most of the time, he doesn’t feel their presence, though an accidental knock can reverberate through his skull as if it were a struck gong. In his everyday life, he relies on round-the-clock attention from caregivers and a suite of assistive technologies, including voice commands and head-motion tracking. He can get around in a breath-operated wheelchair, but long trips are taxing. He spends much of his time reading news articles, scientific studies and fiction on his computer. “I really miss books,” he told me. “They smell nice and feel good in your hands.”
DeGray’s personal involvement in research on brain-computer interfaces has become the focus of his life. Scientists from Stanford visit his home twice a week, on average, to continue their studies. “I refer to myself as a test pilot,” he said. “My responsibility is to take a nice new airplane out every morning and fly the wings off of it. Then the engineers drag it back into the hangar and fix it up, and we do the whole thing again the next day.”
Exactly what DeGray experiences when he activates his neural interface depends on his task. Controlling a cursor with attempted hand movements, for example, “boils the whole world down to an Etch A Sketch. All you have is left, right, up and down.” Over time, this kind of control becomes so immediate and intuitive that it feels like a seamless extension of his will. In contrast, maneuvering a robot arm in three dimensions is a much more reciprocal process: “I’m not making it do stuff,” he told me. “It’s working with me in the most versatile of ways. The two of us together are like a dance.”
No one knows exactly how long existing electrode arrays can remain in a human brain without breaking down or endangering someone’s health. Although DeGray can request explantation at any time, he wants to continue as a research participant indefinitely. “I feel very validated in what I’m doing here,” he said. “It would break my heart if I had to get out of this program for some reason.”
Regarding the long-term future of the technology in his skull, however, he is somewhat conflicted. “I actually spend quite a bit of time worrying about this,” he told me. “I’m sure it will be misused, as every technology is when it first comes out. Hopefully that will drive some understanding of where it should reside in our civilization. I think ultimately you have to trust in the basic goodness of man — otherwise, you would not pursue any new technologies ever. You have to just develop it and let it become monetized and see where it goes. It’s like having a baby: You only get to raise them for a while, and then you have to turn them loose on the world.”