Our hands do more than just hold objects. They also facilitate the processing of visual stimuli. When you move your hands, your brain first perceives and interprets sensory information, then selects the appropriate motor plan before initiating and executing the desired movement. The successful execution of that task is influenced by numerous factors, such as its difficulty, the presence of external stimuli (distractions), and how many times someone has performed it.
Take, for example, a baseball outfielder catching a ball. They want to make sure that when the ball heads their way, it ends up in their glove (the hand-movement goal). Once the batter hits the ball and it flies towards the outfielder, they begin to visually perceive and select what course of action is best (hand-movement preparation). They will then anticipate where they should position their hand and body in relation to the ball to ensure they catch it (future-hand location).
Researchers have long pondered whether the hand-movement goal influences endogenous attention. Sometimes referred to as top-down attention, endogenous attention acts like our own personal spotlight; we choose where to shine it. This can take the form of searching for an object, blocking out distractions while working, or holding a conversation in a noisy environment. Elucidating the mechanisms behind hand movements and attention may help develop AI systems that support the learning of complicated movements and manipulations.
Now, a team of researchers at Tohoku University has found that attention to the hand-movement goal acts independently of endogenous attention.
"We conducted two experiments to determine whether hand-movement preparation shifts endogenous attention to the hand-movement goal, or whether it is a separate process that facilitates visual processing," said Satoshi Shioiri, a researcher at Tohoku University's Research Institute of Electrical Communication (RIEC), and co-author of the paper.
In the first experiment, the researchers isolated attention to the hand-movement goal from top-down visual attention by having participants move their hands, based on cues, to either the same location as a visual target or a different location from it. Participants could not see their hands. In both cases, there was a control condition in which participants were not asked to move their hand.
The second experiment examined whether the order of the cues for the hand-movement goal and the visual target affected visual performance.
Shioiri and his team used electroencephalography (EEG) to measure the brain activity of participants, focusing on the steady-state visual evoked potential (SSVEP). When a person is exposed to a visual stimulus, such as a flashing light or moving pattern, their brain produces rhythmic electrical activity at the same frequency. The SSVEP is the resulting change in the EEG signal, and it helps assess the extent to which the brain selectively attends to or processes visual information at a given location, i.e., the spatial window.
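The study's own analysis pipeline is not detailed in this release, but as a rough illustration of how SSVEP strength is typically quantified, the sketch below estimates the amplitude of an EEG channel at a stimulus's flicker frequency. The function name and parameters are hypothetical and not taken from the paper.

```python
# Minimal illustrative sketch (not the study's code): quantify the SSVEP as the
# spectral amplitude of an EEG channel at the flicker frequency of the stimulus.
import numpy as np

def ssvep_amplitude(eeg, sampling_rate, flicker_hz):
    """eeg: 1-D array of samples from one EEG channel.
    sampling_rate: samples per second (e.g. 1000).
    flicker_hz: flicker frequency of the visual stimulus (e.g. 12.0)."""
    windowed = eeg * np.hanning(len(eeg))            # taper to reduce spectral leakage
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / sampling_rate)
    return spectrum[np.argmin(np.abs(freqs - flicker_hz))]  # amplitude at the flicker rate

# A larger amplitude for an attended (versus ignored) stimulus indicates
# enhanced visual processing at that location.
```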
"Based on the experiments, we concluded that when top-down attention is oriented to a location far from the future hand location, the visual processing of future hand location still occurs. We also found that this process has a much narrower spatial window than top-down attention, suggesting that the processes are separate," adds Satoshi.
The research group is hopeful that the knowledge from the study can be applied to develop systems that maintain appropriate attention states in different situations.
Details of the research were published in the Journal of Cognitive Neuroscience on May 8, 2023.
People can perform tasks simultaneously, directing their attention to different locations for different tasks. For example, when reaching for a coffee mug while working on a PC, attention can be directed to the mug while top-down attention stays on the display. Attention to the mug is related to the hand movement and can differ from top-down attention to the display. The study's results showed a difference in spatial profile between the two types of attention: the spatial extent of attention to the hand-movement goal (bottom right) is much narrower than that of top-down attention (top right). This suggests that there is an attention mechanism that shifts to the location where the hand intends to go, independent of top-down attention.
CREDIT
Tohoku University
ARTICLE TITLE
Different mechanisms for visual attention at the hand-movement goal and endogenous visual attention
New study shows noninvasive brain imaging can distinguish among hand gestures
The research from the Qualcomm Institute at UC San Diego points to a safe, accurate brain-computer interface that might help patients with paralysis and other challenges
Peer-Reviewed Publication

LA JOLLA, CA, May 19, 2023 — Researchers from the University of California San Diego have found a way to distinguish among hand gestures that people are making by examining only data from noninvasive brain imaging, without information from the hands themselves. The results are an early step in developing a noninvasive brain-computer interface that may one day allow patients with paralysis, amputated limbs or other physical challenges to use their mind to control a device that assists with everyday tasks.
The research, recently published online ahead of print in the journal Cerebral Cortex, represents the best results thus far in distinguishing single-hand gestures using a completely noninvasive technique, in this case, magnetoencephalography (MEG).
“Our goal was to bypass invasive components,” said the paper’s senior author Mingxiong Huang, PhD, co-director of the MEG Center at the Qualcomm Institute at UC San Diego. Huang is also affiliated with the Department of Electrical and Computer Engineering at the UC San Diego Jacobs School of Engineering and the Department of Radiology at UC San Diego School of Medicine, as well as the Veterans Affairs (VA) San Diego Healthcare System. “MEG provides a safe and accurate option for developing a brain-computer interface that could ultimately help patients.”
The researchers underscored the advantages of MEG, which uses a helmet with an embedded 306-sensor array to detect the magnetic fields produced by electric currents flowing between neurons in the brain. Alternative brain-computer interface techniques include electrocorticography (ECoG), which requires surgical implantation of electrodes on the brain surface, and scalp electroencephalography (EEG), which locates brain activity less precisely.
“With MEG, I can see the brain thinking without taking off the skull and putting electrodes on the brain itself,” said study co-author Roland Lee, MD, director of the MEG Center at the UC San Diego Qualcomm Institute, emeritus professor of radiology at UC San Diego School of Medicine, and physician with VA San Diego Healthcare System. “I just have to put the MEG helmet on their head. There are no electrodes that could break while implanted inside the head; no expensive, delicate brain surgery; no possible brain infections.”
Lee likens the safety of MEG to taking a patient’s temperature. “MEG measures the magnetic energy your brain is putting out, like a thermometer measures the heat your body puts out. That makes it completely noninvasive and safe.”
Rock Paper Scissors
The current study evaluated the ability to use MEG to distinguish between hand gestures made by 12 volunteer subjects. The volunteers were equipped with the MEG helmet and randomly instructed to make one of the gestures used in the game Rock Paper Scissors (as in previous studies of this kind). MEG functional information was superimposed on MRI images, which provided structural information on the brain.
To interpret the data generated, Yifeng (“Troy”) Bu, an electrical and computer engineering PhD student in the UC San Diego Jacobs School of Engineering and first author of the paper, wrote a high-performing deep learning model called MEG-RPSnet.
“The special feature of this network is that it combines spatial and temporal features simultaneously,” said Bu. “That’s the main reason it works better than previous models.”
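The published MEG-RPSnet architecture is not described in this release, so the following sketch only illustrates the general idea Bu mentions: mixing spatial (across-sensor) and temporal (across-time) features of an MEG epoch before classification. The class name, layer sizes, and input shape are assumptions, not the authors' implementation.

```python
# Illustrative sketch only, NOT the published MEG-RPSnet.
import torch
import torch.nn as nn

class SpatialTemporalNet(nn.Module):
    def __init__(self, n_sensors=306, n_classes=3):
        super().__init__()
        # Spatial stage: learn weighted combinations across all 306 sensors.
        self.spatial = nn.Conv1d(n_sensors, 64, kernel_size=1)
        # Temporal stage: learn patterns over time within each spatial component.
        self.temporal = nn.Sequential(
            nn.Conv1d(64, 64, kernel_size=25, padding=12),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(64, n_classes)   # rock / paper / scissors

    def forward(self, x):                  # x: (batch, sensors, time samples)
        x = torch.relu(self.spatial(x))    # (batch, 64, time)
        x = self.temporal(x).squeeze(-1)   # (batch, 64)
        return self.classifier(x)

logits = SpatialTemporalNet()(torch.randn(8, 306, 500))   # example forward pass
```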
The researchers found that their techniques could distinguish among hand gestures with more than 85% accuracy. These results were comparable to those of previous studies with much smaller sample sizes that used the invasive ECoG brain-computer interface.
The team also found that MEG measurements from only half of the brain regions sampled could generate results with only a small (2–3%) loss of accuracy, indicating that future MEG helmets might require fewer sensors.
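As a rough illustration of that kind of sensor-subset comparison, the sketch below trains the same simple decoder on all channels and on every other channel of synthetic stand-in data. The data, the logistic-regression decoder, and the time-averaged features are all assumptions made for illustration, not the study's analysis.

```python
# Illustrative sketch with synthetic data (not the study's analysis):
# compare decoding accuracy using all MEG sensors versus half of them.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_sensors, n_times = 120, 306, 50
X = rng.standard_normal((n_trials, n_sensors, n_times))  # stand-in MEG epochs
y = rng.integers(0, 3, size=n_trials)                    # rock / paper / scissors labels

def decoding_accuracy(sensor_idx):
    feats = X[:, sensor_idx, :].mean(axis=2)             # crude per-sensor feature
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, feats, y, cv=5).mean()

print("all sensors :", decoding_accuracy(np.arange(n_sensors)))
print("half sensors:", decoding_accuracy(np.arange(0, n_sensors, 2)))
```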
Looking ahead, Bu noted, “This work builds a foundation for future MEG-based brain-computer interface development.”
In addition to Huang, Lee and Bu, the article, “Magnetoencephalogram-based brain–computer interface for hand-gesture decoding using deep learning” (https://doi.org/10.1093/cercor/bhad173), was authored by Deborah L. Harrington, Qian Shen and Annemarie Angeles-Quinto of VA San Diego Healthcare System and UC San Diego School of Medicine; Hayden Hansen of VA San Diego Healthcare System; Zhengwei Ji, Jaqueline Hernandez-Lucas, Jared Baumgartner, Tao Song and Sharon Nichols of UC San Diego School of Medicine; Dewleen Baker of VA Center of Excellence for Stress and Mental Health and UC San Diego School of Medicine; Imanuel Lerman of UC San Diego, its School of Medicine and VA Center of Excellence for Stress and Mental Health; and Ramesh Rao (director of Qualcomm Institute), Tuo Lin and Xin Ming Tu of UC San Diego.
The work was supported in part by Merit Review Grants from the US Department of Veterans Affairs, Naval Medical Research Center's Advanced Medical Development program and Congressionally Directed Medical Research Programs/Department of Defense.
The new research from the Qualcomm Institute at UC San Diego used machine learning and a noninvasive imaging technique called magnetoencephalography (MEG). Illustrated here is the 306-sensor MEG helmet that detects nerve activity in the brain by measuring the magnetic field.
CREDIT
Courtesy of MEG Center at UC San Diego Qualcomm Institute
JOURNAL
Cerebral Cortex
DOI
10.1093/cercor/bhad173
METHOD OF RESEARCH
Experimental study
SUBJECT OF RESEARCH
People
ARTICLE TITLE
Magnetoencephalogram-based brain–computer interface for hand-gesture decoding using deep learning