Issue 19

Brain-computer interfaces

12th June 2025

Brain implants are letting people move, speak, and interact with machines using only their thoughts. The first FDA approvals may arrive within five years.

Brain-computer interfaces – systems that read the electrical activity of neurons and transmit those signals as instructions to an external device – are helping paralyzed people control computers, pilot prosthetic limbs, and regain the ability to speak and even move.

No interface has yet been approved for use outside of clinical trials, but that will likely change soon. The timeline I’ve heard from BCI researchers, users, and company founders is that the first FDA-approved BCIs will reach the market in the next five years.

The first brain-computer interface was trialed in a human in 2004, the culmination of some 80 years of research into parsing electrical activity in the brain. Since then, several dozen others – chiefly people with severe paralysis or loss of speech due to spinal cord injuries, strokes or neurodegenerative conditions – have received one.

What has led to this inflection point? An acceleration in scientific research, investment, and startup formation, coupled with rapid developments in consumer technologies and machine learning, which began roughly 20 years ago and have gained pace in the last decade. 

Charging bulls

The history of brain-computer interfaces can be traced back at least a century. In July 1924, German psychiatrist Hans Berger recorded the first electroencephalogram (EEG) in a 17-year-old boy undergoing brain surgery. Initially, he did so by placing conductive silver wires at the front and back of the boy’s skull; later, he developed scalp-mounted sensors that could produce a reading non-invasively, outside of the operating room. He published a paper in 1929 describing this new method of recording electrical activity in the brain, which continues to be used 100 years later to diagnose conditions like epilepsy, brain tumors, and stroke.

Decades of research tapping into the brain’s electrical wiring followed. In 1964, Spanish neurophysiologist José Manuel Delgado conducted a dramatic demonstration of the effects of electrical stimulation on the brain when he stopped a bull mid-charge using radio-controlled electrodes to deliver a shock to the animal’s caudate nucleus, which controls movement. That same year, American-British neurophysiologist W Grey Walter showed that electrical activity in the brain could be linked to the intention to perform specific actions. In a study published in Nature, he asked patients wearing EEG sensors to push a button to advance a slide projector. The button was not, in fact, connected to the projector; instead, participants’ neural activity – that is, their intention to press the button, as recorded by the sensors – was used to advance the slides. 

José Manuel Delgado stops a charging bull in its tracks using a radio transmitter. 

The term ‘brain-computer interface’ first appeared in 1973, in a paper by computer scientist Jacques Vidal. ‘Can these observable electrical brain signals be put to work’, he inquired, ‘as carriers of information in man-computer communication for the purpose of controlling such external apparatus as prosthetic devices or spaceships?’ Testing his own theory, Vidal showed in 1977 that brainwave activity, measured by EEG, could be used to enable study participants to mentally guide a cursor through a digital maze.

Vidal turned out to be right about the feasibility of brain-computer interfaces to control external devices (save for spaceships, for now) but wrong about EEG as the mechanism. By the 1990s, it had become clear that EEG was too crude a measurement, gathering only a gauzy impression of neural activity through layers of hair, bone, and fluid. In order to record a more faithful and interpretable signal, electrodes would need to be implanted under the skull and draped over the surface of the brain, or even embedded into the neural tissue. 

One of the first implanted brain-computer interfaces was developed by Irish neurologist Philip Kennedy (who, controversially, underwent a clandestine surgery to receive a neural implant of his own in 2014) in the late 1990s. Kennedy’s design consisted of a hollow glass cone encasing a set of Teflon-coated gold wires. The electrode was bathed in a proprietary cocktail of growth factors, which coaxed neurites – the branchlike protrusions of neurons – to grow into the cone, allowing the conductive wires to pick up the neurons’ electrical pulses. Using these so-called neurotrophic electrodes, Kennedy taught a man with locked-in syndrome – which paralyzes almost all of a person’s voluntary muscles, rendering them unable to move or speak – to control a computer cursor, allowing him to painstakingly type out words. By the early 2000s, researchers had demonstrated that monkeys implanted with rudimentary BCI systems could perform actions beyond cursor control, such as manipulating robotic limbs, paving the way for research in humans.

The Utah array

All this culminated in the first clinical trial of a brain-computer interface in 2004, dubbed BrainGate. A grainy video from the study released in 2005 shows Matthew Nagle, a 25-year-old with tetraplegia, playing Pong on a computer screen, using his neural activity to control the paddles. The breakthrough was received with a mixture of excitement, awe, and more than a little hyperbole: ‘Brain chip reads man’s thoughts’, proclaimed a BBC headline characteristic of the media fanfare at the time.

Enabling this feat was BrainGate’s Utah array, so named because its manufacturer, Blackrock Neurotech, is based in Salt Lake City. The postage stamp-size chip encases 96 electrodes, each the width of a human hair. Laid atop the motor cortex like a miniature bed of nails, the array’s electrodes penetrate the membrane of the brain, picking up the electrical activity of individual neurons. An oblong pedestal and a thick coaxial cable protruding from Nagle’s scalp transmitted his neural activity to a computer, where a set of machine-learning algorithms, known collectively as a decoder, correlated those electrical signals with the intention to perform specific actions – in this case, to move his hand to shift the paddle left or right – and translated them into motions of the computer cursor.

Led by John Donoghue, now a professor of neuroscience at Brown University, and Leigh Hochberg, professor of engineering and brain science at Brown, the first BrainGate study detailed how Nagle was able to use the interface to change TV channels, open emails, create digital drawings, and open and close a robotic hand. 

Advancements in BCI research continued into the 2010s. A 2011 paper documented that a woman with locked-in syndrome was still able to control a computer cursor 1,000 days after receiving a BrainGate implant – a record at the time. The next year, two people with tetraplegia were asked to imagine reaching for and grabbing a foam ball. The movement intent captured by the interface was then transmitted to a robotic arm, which was able to grasp the ball just over half the time on average. One participant was able to use the system well enough to grab a bottle of coffee and raise it to her lips to drink through a straw.

‘We were able to show how much information you can get out of the brain from a very limited sample’, Donoghue says of those early trials. ‘You have billions of cells, and the kinds of sensors we put into the brain sample a tiny piece of what’s going on. But because the brain is broken up into regions – there’s a motor area, and even a motor area just for your arm – the electrode array can pick up signals that tell us what you want to do: reach out in space, get your arm in the right place, control your hand. All that information can be extracted from a really tiny sample of your brain’.

As a subfield of brain-computer interface research, computer cursor control has advanced rapidly in the past decade. For Noland Arbaugh, the first human recipient of a Neuralink implant in 2024, the independence he’s gained with the interface has been life-altering. ‘I’m able to interact with the world a lot more easily’, he says. 

Established in 2016, Neuralink represented one of the first – and certainly the most public – challengers to the Utah array. Cofounded by Elon Musk, who has also funded it to the tune of $100 million, the company promised to achieve the holy grail of brain-computer interfaces: a fully implanted, wireless system. The Utah array, with its bristle-like electrodes and relatively long track record in humans, continues to be used in almost all academic BCI research today. But it still requires a round, bolt-like port to be permanently affixed to the user’s skull, to which pedestals and cables are attached to connect the decoder and other equipment. It’s not only a tad unsightly, but it also presents an infection risk. Blackrock Neurotech, the Utah array’s manufacturer, is actively developing a fully implanted version, but Neuralink got there first. 

Unlike the setup Matthew Nagle used in the early 2000s, Arbaugh’s requires no pedestal or cables. The Neuralink implant transmits his neural signals wirelessly to the decoder, and from there to the computer.

Previously, he used voice commands or a mouth stick to operate an iPad, which made using the device onerous and error-prone. Now, he can write emails, play games like chess and Mario Kart, and listen to music or audiobooks without outside help. An avid reader and learner, he also uses the interface to dictate notes as he’s reading. And he can do so while lying in bed, a position that causes less pain and fatigue than having to sit up to peck at the screen.

Arbaugh holds the current speed record for computer control with a brain-computer interface – 8 bits per second (a bit is the smallest unit of information a computer can process, the distinction between a 1 and a 0), compared to about 10 bits per second for a person using a computer mouse – and he continues to work with Neuralink researchers on training strategies to speed up the brain-computer interaction, so that he can match the average computer or smartphone user. 
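For readers curious how such a figure is arrived at: one common convention in BCI cursor studies scores a grid-selection task, where each correct pick among N targets conveys log2(N − 1) bits and wrong picks are penalized. The sketch below is purely illustrative – the grid size, selection counts, and timing are made up, not Arbaugh’s actual test conditions.

```python
import math

def bits_per_second(num_targets: int, correct: int, incorrect: int, seconds: float) -> float:
    """Throughput for a grid-selection task: each correct selection among
    num_targets conveys log2(num_targets - 1) bits; wrong selections are
    subtracted before dividing by elapsed time."""
    bits_per_selection = math.log2(num_targets - 1)
    net_selections = max(correct - incorrect, 0)
    return bits_per_selection * net_selections / seconds

# Illustrative numbers only: 35 correct picks and 1 miss
# on a 36-target grid in a 60-second block.
print(round(bits_per_second(num_targets=36, correct=35, incorrect=1, seconds=60), 2))
```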

Training decoders

Training the interface is an involved process. The exact mechanism differs depending on the device and decoding software used, but broadly speaking, it begins with the user being instructed to imagine performing a certain action – clicking a mouse with their right index finger, for example. Because the connection between the brain and the muscles has been severed – in Arbaugh’s case, by a spinal cord injury – the person’s fingers can’t respond to the neural signal. But the activity of the neurons that fire when the person thinks about performing the action is picked up by the electrodes and transmitted to the decoder. With enough repetition, the machine-learning algorithms learn to associate that specific electrical pattern with the intention to perform a mouse click. When the decoder detects that pattern, it sends a ‘click’ command to the computer.
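As a minimal sketch of that idea – not Neuralink’s or anyone else’s actual pipeline – the code below pairs synthetic ‘binned spike counts’ from 96 channels with the cued intention and fits a standard classifier; at run time, each new bin of activity is classified and a ‘click’ prediction is forwarded to the computer.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_channels = 96          # one feature per electrode (binned spike counts)
n_trials = 400

# Synthetic stand-in data: cued 'click' trials carry slightly elevated
# activity on a subset of channels.
labels = rng.integers(0, 2, n_trials)            # 1 = cued 'click', 0 = rest
features = rng.poisson(5, (n_trials, n_channels)).astype(float)
features[labels == 1, :10] += 2.0                # extra firing on 10 channels

# The 'decoder': learns to map the electrical pattern to the cued intention.
decoder = LogisticRegression(max_iter=1000).fit(features, labels)

# At run time, each new bin of neural activity is classified; a 'click'
# prediction is forwarded to the computer as a mouse click.
new_bin = rng.poisson(5, (1, n_channels)).astype(float)
print("send click" if decoder.predict(new_bin)[0] == 1 else "no click")
```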

When Arbaugh first received his implant, he spent eight hours a day, five days a week, participating in research onsite at Neuralink’s Fremont, California, headquarters. Now, he works remotely from his home in Yuma for around four hours a day. A lot of the research is focused on how to make the translation between brain and machine faster and more seamless. ‘It’s like a symbiotic relationship’, Arbaugh says. ‘I learn from the BCI and the BCI learns from me. I’m always so surprised by how quickly it can learn what I’m trying to do’.

Neuralink’s N1 implant. 
Source: Neuralink/Screenshot by Works in Progress.

Compared to cursor control, speech decoding has a much shorter research history, but progress in just a few years has been dramatic. In August 2024, a New England Journal of Medicine paper reported on a novel ‘speech prosthesis’ used by Casey Harrell, a 46-year-old with amyotrophic lateral sclerosis (ALS). Using a BCI system, Harrell, who is effectively unable to speak, was able to communicate verbally with his eight-year-old daughter for the first time in her memory.

‘I’m looking for a cheetah’, he said, memorably, as his daughter returned home in a cheetah onesie. Or, rather, a computer said it for him – the words were read aloud in a voice that sounded like Harrell’s, created using videos and podcast interviews from before his ALS diagnosis.

Inside Harrell’s skull, perched atop his brain’s precentral gyrus – the primary motor cortex, which controls voluntary movement – are four microelectrode arrays that transmit his intended speech to the decoder. When he is connected to the system, two HDMI cables protruding from two cube-shaped posts on his scalp tether him to the generator and bank of computers that perform the decoding.

‘It works well enough that the team can turn it on in the morning, go work at a cafe nearby while he uses it, and come back and unplug it at night’, says Sergey Stavisky, assistant professor in neuroengineering at the University of California, Davis, who led the study.

In their paper, Stavisky and colleagues showed that the system could translate Harrell’s neural signals into speech with 97 percent accuracy – a previously unreached benchmark. Even more impressive, that performance took just two training sessions to achieve.

In the first, Harrell was shown a selection of 50 words and asked to picture vocalizing them. The electrical activity representing his attempts to move the muscles of his mouth, lips, jaw, and tongue was decoded into basic units of speech known as phonemes, which were then run through a ChatGPT-style language model – not unlike the autocorrect feature on a smartphone – to assemble them into likely words.

‘As he’s speaking, words are appearing on the screen’, Stavisky explains. ‘Then he can, with his eyes or by moving a computer cursor by trying to move his hand, select if the sentence is 100 percent correct, mostly correct, or incorrect. If it’s 100 percent correct, we use that data to update the decoding model. If it’s mostly correct, the language model will give the next most likely sentences, and he’ll select the correct one. If it’s incorrect, he’ll resay the sentence’. 

After 30 minutes of training, the system was outputting the correct sentences over 99 percent of the time. The next day, the researchers increased the training vocabulary to 125,000 words, roughly the size of an English dictionary. After an additional 90 minutes of training, the model was predicting words with 90 percent accuracy. Further training bumped the system up to 97.5 percent accuracy, where it remained eight months later, at the time the study was reported.
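To make the two-stage idea concrete, here is a toy sketch in which a decoder’s per-frame phoneme probabilities are combined with a small language-model prior to pick the most likely word. Every number and word in it is invented for illustration; it bears no relation to the UC Davis system’s actual models or vocabulary.

```python
import math

# Hypothetical phoneme probabilities emitted by a neural decoder for three frames.
frames = [
    {"K": 0.7, "G": 0.2, "T": 0.1},
    {"AE": 0.6, "EH": 0.3, "IH": 0.1},
    {"T": 0.8, "D": 0.15, "P": 0.05},
]

# Tiny stand-in vocabulary: each word maps to its phonemes and a
# language-model prior probability.
vocabulary = {
    "cat": (["K", "AE", "T"], 0.6),
    "cad": (["K", "AE", "D"], 0.1),
    "get": (["G", "EH", "T"], 0.3),
}

def score(word_phonemes, lm_prob):
    """Combine decoder evidence (phoneme probabilities per frame) with the
    language-model prior, in log space."""
    evidence = sum(math.log(frame.get(p, 1e-6)) for frame, p in zip(frames, word_phonemes))
    return evidence + math.log(lm_prob)

best = max(vocabulary, key=lambda w: score(*vocabulary[w]))
print(best)  # -> 'cat'
```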

Speech decoding would have been impossible even ten years ago, Stavisky says, because the machine-learning algorithms simply weren’t sophisticated enough. ‘You used to have to program the algorithms from scratch. Now there are [existing] toolboxes and functions, and any college student can do it with five lines of code. We’re basically piggybacking off of a trillion-dollar industry of machine learning, AI, and hardware’.

The fact that speech prostheses have improved so rapidly has led some in the field to speculate that they may become the first FDA-approved use case for a brain-computer interface. Certainly, they contribute to the growing body of evidence that these devices can have a significant positive impact for people who have lost speech or motor function. 

‘I think things would have to go surprisingly wrong for that not to happen’, Stavisky says, ‘given what we’ve seen to date’.

Casey Harrell uses his BCI speech prosthetic to share his experience of being unable to communicate. 
Source: UC Davis Health/Screenshot by Works in Progress.

Hardware limitations

Like cursor control, research into using brain-computer interfaces to command robotic limbs dates back to the early days of the BrainGate trials. But advancements have been somewhat stymied by the need for more portable devices.

In October 2016, Nathan Copeland, who is paralyzed from the neck down, fist-bumped US President Barack Obama during a White House-sponsored science and technology conference. Not captured in the video of the feat were the four button-size microelectrode arrays recording his neural activity as he imagined making a fist and moving it toward the President’s, transmitting his intention down a scalp-mounted cable through a computer to the robotic limb. 

Copeland received his implant, a newer iteration of the bed-of-nails Utah array used by Matthew Nagle in the early BrainGate trial, in 2014. While he’s especially passionate about participating in research that uses the interface to control a robotic limb, Copeland notes that for now, all of the massive, industrial-scale robotic arms he has tried out have to stay in the lab. 

Copeland commutes 75 minutes from his home in Dunbar, Pennsylvania, to the University of Pittsburgh’s labs three days a week for three to four hours at a time. The tasks he performs there vary, but much of the work focuses on stimulating sensation in his hands, an area of research that clinicians believe will be critical to restoring limb function in people with paralysis and making it easier for them to control prosthetic devices.

In addition to allowing people to experience physical touch, that sensory information is critically important for performing functional tasks, says Chad Bouton, professor of bioelectronic medicine at the Feinstein Institutes for Medical Research in New York, who has conducted similar research focused on restoring movement and sensation in people with paralysis. ‘We can’t even button our shirts without that tactile feedback’.

Alongside advancements in robotics that could make robotic prostheses more practical in daily life, machine-learning advancements could make the training requirements less onerous. Some systems, like speech decoders, update automatically because training is built into everyday use – each time the user speaks, they indicate whether the predicted sentence is correct – but for applications like computer cursor control and motor prostheses, training can be more laborious.

‘You have to train the decoder every day’, Copeland explains. ‘The system can backshift even after a few hours. Sometimes it just might not be working as well’. Even the position his body is in when he’s training – whether he’s more or less reclined in his wheelchair, for example – can have an impact on how well the decoder interprets his neural activity.

But as researchers gather more data from Copeland and other long-term study participants, they hope to create what they call a ‘super-decoder’ – a sort of baseline decoder model that remains accurate with less training over time. 

‘We’re starting to get enough data that we can start to look at what we can learn from pooling data from all of these participants’, says Jennifer Collinger, associate professor and research operations director at the University of Pittsburgh’s Rehab Neural Engineering Lab, who has worked with Copeland and other participants in BCI studies. ‘For example, can we build a model that will predict control for the next participant?’
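In machine-learning terms, the ‘super-decoder’ resembles a familiar pattern: pretrain one model on pooled data from earlier participants, then adapt it to a new user with a short calibration session rather than training from scratch. The sketch below illustrates that pattern with synthetic data and a generic linear model – an assumption about the general approach, not the Pittsburgh group’s actual software.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(1)

def synthetic_session(shift, n=300, channels=96):
    """Stand-in for one participant's recorded features and cued intentions."""
    y = rng.integers(0, 2, n)
    X = rng.normal(0, 1, (n, channels)) + shift
    X[y == 1, :10] += 1.0            # intention-related signal on 10 channels
    return X, y

decoder = SGDClassifier(loss="log_loss")

# Pretrain on pooled data from three prior participants.
for shift in (0.0, 0.3, -0.2):
    X, y = synthetic_session(shift)
    decoder.partial_fit(X, y, classes=[0, 1])

# Brief calibration on a new participant, rather than training from scratch.
X_new, y_new = synthetic_session(shift=0.5, n=60)
decoder.partial_fit(X_new, y_new)
print("calibrated decoder ready")
```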

Nathan Copeland uses a BCI-controlled robotic arm to fist-bump US President Barack Obama in 2016.

For people with partial paralysis, it may even be possible to couple a brain-computer interface with electrical stimulation of the muscles to restore movement in their own limbs. Ian Burkhart, who runs a foundation that supports others with spinal cord injuries, participated in movement-restoration studies for roughly seven years before having his implant removed in August 2021.

Burkhart is paralyzed from the neck down, but he still has some movement in his shoulders and elbows. He has very little ability to use his hands and fingers, but researchers theorized that electrical stimulation might help restore some of that function. Using a combination of a BCI implant and a ribbonlike sleeve of electrodes that wrapped around his forearm, he worked on restoring strength to his hands and wrists. As he thought about moving his fingers, the implant would transmit that intention to the decoder, which would then pass the signal to the electrode sleeve on his arm, stimulating the muscles. By 2021, his hands and wrists were strong enough to move blocks, pick up a bottle and pour out its contents, play Guitar Hero, and drive a simulated car.
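Conceptually, the loop runs from neural features to a decoded intention to a stimulation pattern on the forearm sleeve. The sketch below traces that flow with placeholder components – a hypothetical channel map and a trivial stand-in decoder – rather than the study’s real hardware interface or decoding software.

```python
from typing import Dict, List

# Hypothetical mapping from decoded intentions to sleeve electrode channels.
STIMULATION_MAP: Dict[str, List[int]] = {
    "open_hand":  [1, 2, 3],
    "close_hand": [4, 5, 6],
    "wrist_up":   [7, 8],
    "rest":       [],
}

def decode_intention(neural_features: List[float]) -> str:
    """Stand-in for the trained decoder: here, a trivial threshold rule."""
    return "close_hand" if sum(neural_features) > 10 else "rest"

def stimulate(channels: List[int]) -> None:
    """Stand-in for the sleeve driver: would deliver pulses on these channels."""
    print(f"stimulating channels {channels}" if channels else "no stimulation")

# One pass of the loop: features in, stimulation out.
features = [0.4] * 30                      # placeholder neural feature vector
stimulate(STIMULATION_MAP[decode_intention(features)])
```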

The study Burkhart had been a part of, however, eventually ran out of money after years of funding uncertainty, and an infection at the port on his scalp put his health at risk. In 2021, he made the difficult decision to have the interface explanted.

Even though he had only been able to use the system in the lab, he felt bereft. ‘That was a challenging time’, he recalls, ‘to go from being able to use this device that restored hand function to having that taken away’. The tasks he was performing made him feel more capable, even if it was only in a clinical setting. ‘I’m not super concerned about being able to play the piano anymore, or type on a keyboard really fluidly. But I do want to be able to reach out and grab objects, open up packages, and be a bit more independent so I can rely less on caregivers’.

Brain-body interfaces

Fortunately, this line of research continues. In 2023, Chad Bouton, of the Feinstein Institutes for Medical Research, and colleagues reported the results of a ‘double neural bypass’, in which they combined a brain-computer interface with electrode patches placed on the skin to deliver electrical stimulation to the muscles. Over time, with the same sorts of training exercises that Burkhart used, 45-year-old Keith Thomas, who is paralyzed from the chest down, went from being unable to lift his arms from his wheelchair frame to being able to raise both hands to his face.

Bouton, who led the study, calls the system a brain-body interface. ‘This is an electronic bridge that not only links the computer to the brain, but also links the computer to a device that stimulates muscles to allow functional movement’, he explains.

For Thomas, the training involved watching hands moving on a computer screen and trying to copy those movements, which he was initially unable to do. The neural implant transmitted the electrical patterns produced by his attempted movements to the decoder, which passed the signal along to the electrode sleeve, delivering targeted pulses to his limbs. In addition to helping him build the strength to move his arms, the stimulation allowed him to feel sensation in his fingers for the first time in years. Since then, Thomas has also been able to perform more subtle finger movements, modulating his grip to pick up a cup and drink without spilling.

The system Bouton and colleagues developed also includes a third interface to the spine. Stimulating sensory fibers near the spinal cord, which controls the body’s motor reflexes – ‘when you touch something hot and pull your hand away, those reflexes are mediated by the spinal cord; your brain doesn’t even get the signal until a fraction of a second later’, Bouton explains – encourages neuroplasticity, or the brain’s ability to build new neural connections. This can make it easier for a person to regain limb strength.

‘Brain-computer interfaces have been viewed as an assistive technology for many years, and still are to this day’, Bouton says. ‘But in recent years [our group has] been focused on assistive plus rehabilitative or therapeutic outcomes. We want to use the double neural bypass to promote plasticity and recovery. That’s a powerful next step’.

Clinicians are intensely focused on having a positive impact on the lives of the people who need these devices most. ‘If you make a device that does something cool but doesn’t do anything for the patient, there’s no point’, says John Donoghue of Brown University, who led the early BrainGate trials. ‘For example, one patient told me, “All I want is to be able to scratch my nose”’. 

Sergey Stavisky, of the University of California, Davis, says there is a realistic possibility of one day using higher-channel-count interfaces to record signals deeper in the brain – places where mood, cognition, and memory are regulated. This could mean brain-computer interfaces to treat psychiatric conditions like obsessive-compulsive disorder or neurodegenerative conditions like Alzheimer’s disease. ‘That’s going to take a bit longer, but there’s a lot of potential impact there. The market sizes are orders of magnitude bigger. The societal-level costs are much larger as well’.

In the meantime, there’s much more to learn. Noland Arbaugh says that he and Neuralink’s researchers are constantly surprised by what they’re discovering. For example, the interface still decodes signals while he sleeps. ‘Early on, I fell asleep while I was using it’, he recalls. ‘I woke up a minute or two later and it had clicked a bunch of stuff on my screen.’ More recently, while he was training the system to output text in response to imagining writing words by hand, a researcher suggested he try it with his eyes closed. Suddenly, the accuracy of the output spiked. ‘It’s so bizarre’, Arbaugh says. ‘We run into stuff like that all the time.’

In the future, BCIs might be able to feed more robust information back into the brain, much as deep brain stimulators deliver targeted electrical pulses to treat conditions like tremor in Parkinson’s disease. In the long run, we might even achieve what Neuralink cofounder Elon Musk calls ‘a symbiosis between human and machine intelligence’.

But current brain-computer interfaces already offer great improvements to quality of life. ‘It’s not like you’re going to implant a BCI and magically cure someone of their injury or disease. But you’re going to allow them to be much more independent and do things they wouldn’t be able to do without it’, says Ian Burkhart. ‘The most important thing is to give more autonomy and independence to people who have had that ripped away’. They allow people with debilitating conditions to regain function. To text their friends, or drink a glass of water unassisted. Read a bedtime story to their kids, play Mario Kart, or scratch their nose. 

FDA approval of BCIs will arrive soon. With it will come the ability to improve the lives of those from whom disease has taken the most.
