
Translating Brain Waves to Reconstruct Sounds and Conversations You've Heard

Reconstructing Words: The top shows a spectrogram of six isolated words (deep, jazz, cause) and pseudo-words (fook, ors, nim) presented to an individual participant. At the bottom, the speech segments have been reconstructed based on readings from a set of electrodes attached to the patient's brain. Image: PLoS Biology
Researchers see a way to eavesdrop on our brains

As you listened to your colleagues' conversations at work today, or to a podcast on the train home, or to your personal trainer shouting "lift," your brain completed some complex tasks. The frequencies of syllables and whole words were decoded and given meaning, letting you make sense of the language-filled world we live in without actively thinking about it. Now a team of researchers from the University of California at Berkeley has figured out how to map some of these cortical computations. It's a major step toward understanding how we hear, and a possible step toward hearing what we think.

By decoding patterns of activity in the brain, doctors may one day be able to play back the imagined conversations in our heads, or to communicate with a person who can think and hear but cannot speak.

Brian Pasley and colleagues at UCB worked with 15 volunteer patients who were being treated for epilepsy; the team also included researchers from UC San Francisco, the University of Maryland and Johns Hopkins University. To pinpoint where the seizures originated, surgeons implanted electrodes directly onto the patients' brains, providing a rare opportunity to study electrical signals in various brain regions. Pasley said the research team visited patients in their hospital rooms and played them recorded words while monitoring activity in the superior temporal gyrus, a region of the auditory cortex.

"We're looking at which brain sites become active. Because we can determine some association between those brain sites and different frequencies, we can watch what brain sites are turning on and off for these recordings, and that lets us map back to the sound," he said.

Because neurologists know the frequencies of specific phonemes, the individual sounds of a language, this cortical spectroscopy can decode which sounds, and then perhaps which words, a person is hearing. Pasley compared it to piano playing: "If you're an expert pianist, you know what musical notes are associated with each piano key, and you understand that relationship between the key and the sound," he said. "If you turn the sound off, and have the pianist watch which piano keys are being pressed, this expert would have an idea what sound is being played even though they can't hear anything."
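Pasley's piano analogy maps naturally onto a simple stimulus-reconstruction model: if each electrode behaves like a key whose activity tracks particular sound frequencies, a time-lagged linear model can learn that mapping from training data and then run it in reverse to estimate the sound's spectrogram. The Python sketch below illustrates that idea under our own assumptions (ridge regression, invented array shapes and function names); it is not the team's actual code or their published method.

```python
# Illustrative sketch only: reconstruct a speech spectrogram from
# multichannel electrode recordings with a time-lagged linear model.
# The use of ridge regression and all names/shapes here are assumptions.
import numpy as np
from sklearn.linear_model import Ridge


def build_lagged_features(neural, n_lags):
    """Stack time-lagged copies of each electrode channel.

    neural : array of shape (n_times, n_electrodes)
    returns: array of shape (n_times, n_electrodes * n_lags)
    """
    n_times, n_elec = neural.shape
    lagged = np.zeros((n_times, n_elec * n_lags))
    for lag in range(n_lags):
        lagged[lag:, lag * n_elec:(lag + 1) * n_elec] = neural[:n_times - lag]
    return lagged


def fit_reconstruction_model(neural, spectrogram, n_lags=10, alpha=1.0):
    """Fit one ridge regression per spectrogram frequency band.

    spectrogram : array of shape (n_times, n_freq_bands)
    Returns a list of fitted models, one per frequency band.
    """
    X = build_lagged_features(neural, n_lags)
    return [Ridge(alpha=alpha).fit(X, spectrogram[:, band])
            for band in range(spectrogram.shape[1])]


def reconstruct(models, neural, n_lags=10):
    """Predict a spectrogram from new neural activity, band by band."""
    X = build_lagged_features(neural, n_lags)
    return np.column_stack([m.predict(X) for m in models])
```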

Each patient heard a single word or a single sentence falling in the range of normal speech, between 1 and 8,000 Hz, Pasley said. The words were spoken by both male and female speakers with a wide range of voice frequencies. As the patients listened, their brain activity was recorded. Pasley then developed two computational models that crunched the electrode recordings and predicted the word being heard. One of the two methods could produce a reconstruction so close to the original word that Pasley and his colleagues could guess what it was 90 percent of the time, he said.
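The 90 percent figure describes picking the right word out of a set of candidates rather than producing intelligible audio. As a rough, hypothetical illustration of such an identification step, the snippet below compares a reconstructed spectrogram against the spectrograms of candidate words and returns the closest match; the correlation-based similarity measure and every name here are assumptions, not details taken from the paper.

```python
# Hypothetical identification step: choose the candidate word whose
# actual spectrogram best matches the reconstruction.
import numpy as np


def identify_word(reconstructed, candidates):
    """Return the candidate word whose spectrogram best matches.

    reconstructed : array of shape (n_times, n_freq_bands)
    candidates    : dict mapping word -> spectrogram of the same shape
    """
    def similarity(a, b):
        # Pearson correlation between the flattened spectrograms
        return np.corrcoef(a.ravel(), b.ravel())[0, 1]

    return max(candidates, key=lambda w: similarity(reconstructed, candidates[w]))


# Example usage with random placeholder data:
# rng = np.random.default_rng(0)
# cands = {"deep": rng.random((100, 32)), "jazz": rng.random((100, 32))}
# noisy = cands["deep"] + 0.1 * rng.random((100, 32))
# print(identify_word(noisy, cands))  # expected: "deep"
```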

"It's not intelligible, but you can identify some similarities," he said. Watch the video below to hear what he means.

Neuroscientists have long been trying to decode the inner workings of the brain, associating neurons in the sensory cortices with the stimuli that fire them up. But the newest research, this paper included, peers more deeply into the recesses of our minds, promising to illuminate thoughts so they can be seen and shared with others.

In December, Boston University researchers published research explaining how they stimulated patients' visual cortices and induced brain patterns to create a learned behavior, even when the subjects did not know what they were supposed to be learning. Last fall, Jack Gallant - also at UCB - published a paper describing the reconstruction of video images by tapping the visual cortices of people who watched the videos.

This form of mind-reading, which neurologists prefer to call "decoding," is a long way from everyday use. And there are clearly some ethical questions surrounding it (although it would be hard to implant electrodes to peep in on an unwilling person). But there are practical, medically motivated reasons to pursue it, like communicating with locked-in patients, or with those who have lost the ability to speak because of a stroke or a degenerative muscle disease. Whether that will work depends on other vagaries of the brain that are still not well understood, Pasley said: the development of neural prostheses rests on the assumption that brain activity is the same during real experiences and imagined ones.

"There is some evidence that when people imagine visual stimuli or sound stimuli, some of the same brain areas do seem to activate as when you are actually looking at something or hearing something," he said. "But we didn't have a good idea at all, even if the same areas are activating, if they are processing the same way, and using the same rules, as during perception."

In this study the researchers focused only on English words and phonemes, but Pasley said he would like to study other languages too. The paper appears in the journal PLoS Biology.

