New York: A team of researchers in the US is developing a non-invasive brain-computer interface that may one day allow patients with paralysis, amputated limbs or other physical challenges to use their minds to control a device that assists with everyday tasks.
The team from the University of California San Diego used a completely non-invasive technique, magnetoencephalography (MEG), to distinguish among hand gestures that people are making, without using any information from the hands themselves.
"Our goal was to bypass invasive components," said Mingxiong Huang, co-director of the MEG Center at the Qualcomm Institute at UC San Diego. "MEG provides a safe and accurate option for developing a brain-computer interface that could ultimately help patients."
The researchers underscored the advantages of MEG, which uses a helmet with an embedded 306-sensor array to detect the magnetic fields produced by electric currents flowing between neurons in the brain. Alternative brain-computer interface techniques include electrocorticography (ECoG), which requires surgical implantation of electrodes on the brain surface, and scalp electroencephalography (EEG), which locates brain activity less precisely.
"With MEG, I can see the brain thinking without taking off the skull and putting electrodes on the brain itself," said Roland Lee, director of the MEG Center at the UC San Diego Qualcomm Institute. "I just have to put the MEG helmet on their head. There are no electrodes that could break while implanted inside the head; no expensive, delicate brain surgery; no possible brain infections."
Lee likens the safety of MEG to taking a patient's temperature. "MEG measures the magnetic energy your brain is putting out, like a thermometer measures the heat your body puts out. That makes it completely noninvasive and safe," he noted. The study, published online ahead of print in the journal Cerebral Cortex, evaluated the ability to use MEG to distinguish between hand gestures made by 12 volunteer subjects.
The volunteers were equipped with the MEG helmet and randomly instructed to make one of the gestures used in the game Rock Paper Scissors. MEG functional information was superimposed on MRI images, which provided structural information on the brain.
Using a high-performing deep learning model called MEG-RPSnet, the team interpreted the generated data. The results showed that their technique could distinguish among the hand gestures with more than 85 per cent accuracy.
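To make the decoding idea concrete: the study's MEG-RPSnet is a deep neural network trained on real MEG recordings, but the underlying task is to classify which of three gestures a trial's sensor readings correspond to. The toy sketch below illustrates that task on purely synthetic 306-channel data, using a simple nearest-centroid classifier in place of deep learning; the signal patterns, noise levels and accuracy here are invented for illustration and have no connection to the paper's data or results.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_trials = 306, 60  # 306 sensors, 60 synthetic trials per gesture

# Hypothetical class-specific spatial patterns standing in for the
# gesture-dependent magnetic field maps a real decoder would learn.
patterns = rng.normal(size=(3, n_sensors))

def make_trials(label, n):
    # Each synthetic trial: class pattern plus independent sensor noise.
    return patterns[label][None, :] + rng.normal(scale=1.0, size=(n, n_sensors))

X = np.vstack([make_trials(c, n_trials) for c in range(3)])
y = np.repeat(np.arange(3), n_trials)

# Shuffle, split into train/test, then classify each test trial by the
# nearest class centroid in sensor space.
idx = rng.permutation(len(y))
train, test = idx[:120], idx[120:]
centroids = np.stack([X[train][y[train] == c].mean(axis=0) for c in range(3)])
dists = ((X[test][:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
pred = dists.argmin(axis=1)
acc = (pred == y[test]).mean()
print(f"toy decoding accuracy: {acc:.2f}")
```

On this deliberately easy synthetic data the centroid classifier scores near-perfectly; real MEG trials are far noisier, which is why the authors reached for a deep network.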