Idea number eight is an optical brain imaging system developed at Bloorview that decodes preference – with the ultimate goal of opening the world of choice to children who can’t speak or move.
Wearing a headband (see photo above) fitted with fibre-optics that emit light into the pre-frontal cortex of the brain, adults were shown two drinks on a computer monitor, one after the other, and asked to make a mental decision about which they liked more.
“When your brain is active, the oxygen in your blood increases and depending on the concentration, it absorbs more or less light,” says Sheena Luu, the PhD student who led the Bloorview study under the supervision of biomedical engineer Tom Chau.
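The physics Luu describes is usually captured by the modified Beer-Lambert law: a dip in the light that makes it back out of the head translates into changes in oxygenated and deoxygenated haemoglobin. Here's a rough sketch of that calculation in Python – the wavelengths, extinction coefficients and path-length numbers below are illustrative stand-ins, not values from the Bloorview system:

```python
import numpy as np

# Modified Beer-Lambert law: a change in absorption (optical density) at each
# wavelength is a weighted sum of changes in oxygenated (HbO) and
# deoxygenated (HbR) haemoglobin concentration. The coefficients below are
# illustrative round numbers, not values from the Bloorview headband.
EXTINCTION = np.array([   # rows: ~760 nm, ~850 nm; cols: [HbO, HbR], 1/(mM*cm)
    [1.4, 3.8],           # ~760 nm: deoxygenated blood absorbs more
    [2.5, 1.8],           # ~850 nm: oxygenated blood absorbs more
])

def hb_concentration_change(intensity_baseline, intensity_now,
                            source_detector_cm=3.0, dpf=6.0):
    """Estimate (dHbO, dHbR) in mM from detected light at two wavelengths."""
    i0 = np.asarray(intensity_baseline, dtype=float)
    i1 = np.asarray(intensity_now, dtype=float)
    delta_od = -np.log10(i1 / i0)              # dimmer light -> more absorption
    effective_path = source_detector_cm * dpf  # light scatters along a long path
    # Solve the 2x2 system: EXTINCTION @ [dHbO, dHbR] = delta_od / path
    return np.linalg.solve(EXTINCTION, delta_od / effective_path)

# Brain activity: detected light dims slightly, most at the HbO-sensitive wavelength.
d_hbo, d_hbr = hb_concentration_change([1.00, 1.00], [0.99, 0.97])
print(f"dHbO = {d_hbo:+.5f} mM, dHbR = {d_hbr:+.5f} mM")  # HbO up, HbR down
```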
After teaching the computer to recognize the unique pattern of brain activity associated with preference in each subject, Luu correctly predicted which drink participants preferred 80 per cent of the time.
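The paper is the place to go for the actual method; purely as an illustration, the training step is conceptually a supervised classification problem with one model per participant. A minimal sketch using scikit-learn, where the feature set (per-channel mean and slope of the oxygenation signal) and the choice of classifier are my assumptions:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def trial_features(hbo_window):
    """Summarize one trial's HbO time course (channels x samples) as
    per-channel mean and linear trend -- an assumed feature set."""
    t = np.arange(hbo_window.shape[1])
    means = hbo_window.mean(axis=1)
    slopes = np.polyfit(t, hbo_window.T, 1)[0]   # slope of each channel
    return np.concatenate([means, slopes])

# Stand-in data: 40 trials, 9 channels, 100 samples per trial.
# Label 1 = "preferred drink on screen", 0 = "other drink on screen".
rng = np.random.default_rng(0)
trials = rng.normal(size=(40, 9, 100))
labels = rng.integers(0, 2, size=40)
X = np.array([trial_features(tr) for tr in trials])

# One model per participant: train and test on that person's own trials.
clf = LinearDiscriminantAnalysis()
scores = cross_val_score(clf, X, labels, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")  # ~0.5 on random data
```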
The work was published in the Journal of Neural Engineering in February and is groundbreaking because preference was detected naturally – from spontaneous thoughts – without training the user.
Most brain-computer interfaces require users to perform an unrelated mental task – such as solving a math problem or singing a song – to indicate a response such as yes. This can be challenging for a child who doesn’t understand cause and effect, or for people with developmental disabilities.
I had a fascinating opportunity to see the latest progress with the infrared brain imaging system last Friday.
Ka Lun Tam, a research engineer in Chau’s lab, demonstrated how thoughts can be used to express intention or activate a switch that controls a computer, a communication aid or a household device.
He donned the fibre-optic headband, with a spray of a dozen red and yellow cables cascading down his body. Yellow lines emit light into parts of the brain activated during singing. Red ones detect the amount of light that bounces back.
Then Tam played a computer matching game. Two photos at a time were presented on the screen – things like a cyclist and a swimmer. Sometimes the photos were the same, sometimes different. If the photos matched, Tam sang a fast-paced song in his head. He chose “I Want You” by Savage Garden because of its frenetic beat.
When the photos didn’t match, he let his mind go blank.
In the bottom left of the screen, a circle appeared in green or red – green meant the system had detected a match from Tam’s brain activity, red the opposite. The circles grew from small to large depending on the strength of the signal.
How did the system know Tam was indicating a match? Singing gave certain parts of his brain a workout, drawing oxygenated blood into those regions, where it absorbed more of the infrared light.
The circles act as feedback for the user, indicating that mental singing – or silence – is triggering the signals.
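In other words, the circle is a live readout of the classifier’s decision (colour) and its confidence (size). A toy version of that mapping – the 50/50 decision threshold and the radius range here are invented for illustration:

```python
def feedback_circle(p_match, min_radius=5, max_radius=50):
    """Map a match probability in [0, 1] to the feedback circle:
    colour encodes the decision, radius encodes the evidence strength."""
    colour = "green" if p_match >= 0.5 else "red"
    confidence = abs(p_match - 0.5) * 2           # distance from the 50/50 line
    radius = min_radius + confidence * (max_radius - min_radius)
    return colour, round(radius)

for p in (0.52, 0.75, 0.98, 0.30, 0.05):
    print(p, feedback_circle(p))   # weak signals give small circles
```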
Tam is still getting used to the system and says that while some days it’s bang on, other times it doesn’t read his mind correctly.
The quick response of the circle – indicating whether or not there’s a match – is surprising, Chau says. “The blood-flow response is slow. It takes about 10 seconds to evolve. So we’re pleasantly surprised that we can generate a channel signal in a couple of seconds. That means the system is detecting a change in blood flow before the entire response.”
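One simple way a system could commit to a decision before the full curve evolves is to watch the slope of the signal over a short sliding window – to be clear, this is a sketch of the general idea, not Chau’s method, and the window length and threshold are made up:

```python
import numpy as np

def detect_onset(hbo, sample_rate_hz=10.0, window_s=2.0, slope_threshold=0.02):
    """Return the earliest time (s) at which HbO rises faster than
    slope_threshold over a short window, or None if it never does."""
    window = int(window_s * sample_rate_hz)
    t = np.arange(window) / sample_rate_hz
    for start in range(len(hbo) - window):
        slope = np.polyfit(t, hbo[start:start + window], 1)[0]  # units per second
        if slope > slope_threshold:
            return (start + window) / sample_rate_hz
    return None

# Synthetic response: flat baseline, then a slow rise that takes ~10 s to level off.
time = np.arange(0, 15, 0.1)
hbo = np.where(time < 3, 0.0, 0.5 * (1 - np.exp(-(time - 3) / 4)))
print("detected at", detect_onset(hbo), "s")  # fires well before the response peaks
```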
Chau says the team will explore other mental tasks that can generate responses. “For example, maybe it’s a child thinking about their pet or a TV show they like.”
While the research is in its early stages, Chau envisions a future portable system that uses a forehead sticker with light sensors.
The research is part of Chau’s body-talk research, which aims to give children who are “locked in” by disability a way to express themselves through subtle physiological processes like breathing pattern, skin temperature, heart rate and brain activity.