Controlling your home by the power of thought
Walking across a room to switch on a light - such a simple, everyday activity involves enormously complex computations by the brain: it requires interpreting the scene, controlling gait and planning upcoming movements such as the arm movement to the light switch. Neuroscientists at the German Primate Center (DPZ) - Leibniz Institute for Primate Research have now investigated which brain areas encode movements toward distant targets that require both reaching and walking, and how these movements are planned before execution. For this purpose, they created a novel experimental environment, the "Reach Cage". First results with rhesus monkeys show that distant movement targets, which the animals have to walk to, are encoded in the same brain areas as nearby targets, even before the animal starts walking. This means that movement goals near and far from the body can be read out from the same brain areas, regardless of whether reaching the goal requires walking. These findings could be harnessed to develop brain-machine interfaces that control smart homes (eLife).
Our highly developed nervous system enables versatile and coordinated movement sequences in complex environments. We only notice its impact on our daily lives when we are no longer able to perform certain actions, for example as a result of paralysis caused by a stroke. A novel approach to giving patients back control is brain-computer interfaces, which read signals directly from the brain. Such signals can serve as control signals not only for neuroprosthetic devices, which aim to directly replace the lost motor function, but also for computerized devices such as smartphones, tablets or a smart home.
The development of brain-computer interfaces builds on decades of basic research on the planning and control of movements in the cerebral cortex of humans and animals, especially non-human primates. Until now, such experiments have mostly investigated the planning of controlled hand and arm movements to nearby targets within immediate reach. However, those experiments are too constrained to study action planning in large, realistic environments such as one's home. For example, turning on a light switch on the opposite wall involves several overlapping types of movement and the coordination of multiple parts of the body.
Experimental constraints have so far prevented scientists from studying the neural circuits involved in action planning during whole-body movements, since the animals must be able to move freely during brain recordings. Observing a combination of walking and reaching movements, as required for distant targets, called for a completely new experimental environment. The so-called "Reach Cage" provides a test environment that allows researchers to record and interpret movement behavior, and to link it to the related brain activity, while the animals move freely under highly controlled conditions.
For the experiment, two rhesus monkeys were trained to touch targets close to or distant from their body. For distant targets, a walking movement was required to bring the target within reach. Individual targets lit up to instruct the animals which target to touch. Using multiple video cameras, the movements were recorded in 3D with high temporal and spatial precision. Deep-learning algorithms were then used to automatically extract the 3D movements of the head, shoulder, elbow and wrist from the video images. Simultaneously, brain activity was recorded wirelessly so that the animals were never restricted in their movements. By measuring the activity of hundreds of neurons from 192 electrodes in three different brain regions, the researchers could draw conclusions about how movements are planned and executed in parallel.
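To illustrate one step of such a pipeline, the sketch below triangulates a single tracked joint (for example, the wrist) into 3D from its 2D detections in several calibrated cameras. This is a generic direct-linear-transform reconstruction under stated assumptions, not the study's actual code; the camera matrices, function names and data are hypothetical.

```python
# Minimal sketch: recovering a 3D joint position from multi-camera 2D
# detections via direct linear transform (DLT) triangulation. The study's
# pipeline is not published in this text; everything here is illustrative.
import numpy as np

def triangulate_point(proj_mats, points_2d):
    """Triangulate one keypoint observed by several calibrated cameras.

    proj_mats : list of 3x4 camera projection matrices.
    points_2d : list of (u, v) pixel coordinates, one per camera.
    Returns the estimated 3D position as a length-3 array.
    """
    rows = []
    for P, (u, v) in zip(proj_mats, points_2d):
        # Each camera view contributes two linear constraints on the
        # homogeneous 3D point X: u*(P[2] @ X) = P[0] @ X, and likewise for v.
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.stack(rows)
    # Least-squares solution: the right singular vector with the smallest
    # singular value is the homogeneous point that best fits all views.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # de-homogenize
```

In a full pipeline, a pose-estimation network would first supply the 2D (u, v) detections for each camera and frame, and a triangulation of this kind would run once per joint and frame to yield the 3D trajectories of head, shoulder, elbow and wrist.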
Over the course of training, the monkeys performed the reaching and walking movements with increasing confidence and optimized their behavior, achieving high precision even for targets at a greater distance. "In the video analysis we can track the movements very accurately. The wirelessly recorded brain signals are so precise and clear that the activity of individual neurons can be studied and linked to behavior," says Michael Berger.
The results show that motor planning areas of the brain process information about the goal of a specific movement even if that goal is at the other end of the room and a whole-body movement is required to get there first. Alexander Gail, head of the Sensorimotor Group, adds: "Such knowledge is not only important for understanding the deficits of patients who have difficulty planning and coordinating actions. The new insights might also prove particularly useful when developing brain-computer interfaces for controlling smart homes, in which goals such as doors, windows or lights are distributed throughout a complex environment."
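To make "reading out a movement goal" concrete, the sketch below cross-validates a simple linear classifier that predicts the cued target from firing rates in a pre-movement planning window. This is a common generic decoding analysis, not the study's published method; the data here are simulated and all variable names are hypothetical.

```python
# Illustrative sketch only: decoding which target the animal is planning to
# reach from neural firing rates recorded before movement onset. Simulated
# data stand in for real recordings; no real results are reproduced here.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_neurons, n_targets = 200, 100, 8

# firing_rates: trials x neurons, spike counts in the planning window;
# target_ids: which target was cued on each trial (hypothetical labels).
firing_rates = rng.poisson(5.0, size=(n_trials, n_neurons)).astype(float)
target_ids = rng.integers(0, n_targets, size=n_trials)

# Cross-validated decoding accuracy; chance level is 1 / n_targets.
acc = cross_val_score(LinearDiscriminantAnalysis(),
                      firing_rates, target_ids, cv=5)
print(f"decoding accuracy: {acc.mean():.2f} (chance = {1 / n_targets:.2f})")
```

Decoding accuracy reliably above the chance level of 1/n_targets in the planning window, before the animal moves, is the kind of evidence that goal information is already present in these brain areas.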
###
This research is part of "Plan4Act", an EU-funded project to develop brain-machine interfaces for smart homes with project partners in Germany, Spain and Denmark.