Wednesday, May 21, 2025
Level-ground and stair adaptation for hip exoskeletons based on continuous locomotion mode perception
Credit: Qining Wang, Department of Advanced Manufacturing and Robotics, College of Engineering, Peking University.
A research article published by researchers at Peking University presents a control framework for exoskeletons based on environmental perception. The framework effectively integrates environmental information with human kinematic data, improving the accuracy and lead time of transition detection and thereby enabling smooth switching of control strategies across different terrains. In addition, the adoption of a learning-free method eliminates the need for data collection and model training and demonstrates strong generalization across users.
The new research paper, published on Apr. 22 in the journal Cyborg and Bionic Systems, presents a study on adaptive control of hip exoskeletons in level-ground and stair environments that employs continuous locomotion mode perception based on a learning-free (non-data-driven) method.
Recent advances in hip exoskeleton control have demonstrated potential in enhancing human mobility across diverse terrains. However, achieving seamless adaptation to continuous locomotion modes (e.g., level-ground walking, stair ascent/descent) without user-specific data training remains a significant challenge. "By integrating depth-enhanced visual-inertial odometry and terrain reconstruction, our learning-free method eliminates dependency on datasets while maintaining high prediction accuracy across subjects," explained corresponding author Qining Wang, a professor at Peking University. The proposed three-layer control framework incorporates (a) a depth camera for real-time environment mapping, (b) pressure insoles for gait phase detection, and (c) physics-driven torque/damping strategies tailored to biomechanical profiles. "This approach ensures smooth transitions between control modes by predicting terrain changes before the end of transition periods," added lead author Zhaoyang Wang.
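The three-layer framework described above (terrain perception, strategy selection, execution) can be sketched in skeletal form. This is a minimal illustration, not the paper's implementation: the step-height threshold, the damping gain, and the torque profile shape are all placeholder assumptions.

```python
import math
from enum import Enum

class Mode(Enum):
    LEVEL_GROUND = "LG"
    STAIR_ASCENT = "SA"
    STAIR_DESCENT = "SD"

def perceive_mode(step_height_m: float) -> Mode:
    """High level: classify the upcoming terrain from the reconstructed
    step height. The 0.05 m threshold is an illustrative assumption."""
    if step_height_m > 0.05:
        return Mode.STAIR_ASCENT
    if step_height_m < -0.05:
        return Mode.STAIR_DESCENT
    return Mode.LEVEL_GROUND

def mid_level_command(mode: Mode, gait_phase: float) -> dict:
    """Mid level: choose a torque or damping strategy for the detected
    mode. gait_phase runs from 0.0 (heel strike) to 1.0 (next heel strike)."""
    if mode is Mode.STAIR_DESCENT:
        # The paper uses constant damping for stair descent;
        # the gain here is a placeholder.
        return {"type": "damping", "value": 0.8}
    # Phase-specific assistive torque with a single mid-cycle peak
    # (illustrative shape, not the paper's torque curve).
    return {"type": "torque", "value": 10.0 * math.sin(math.pi * gait_phase)}
```

The low-level layer would then track the commanded torque, or modulate braking when a damping command is active.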
The system was validated through experiments with 7 subjects performing continuous locomotion: level-ground walking (LG), stair ascent (SA), stair descent (SD), and the transitions between them. High-level perception achieved >95% accuracy for steady modes (LG: 98.1% ± 1.8%, SA: 97.3% ± 3.7%, SD: 95.8% ± 3.9%) and 87.5–100% accuracy for transitions, detecting shifts 14.5–30.5% earlier than transition completion. Mid-level control employed phase-specific torque curves (e.g., peak extension torque at 42% of the gait cycle for LG) and constant damping for SD, while low-level execution used PID-based torque tracking and PWM-modulated braking. Compared to CNN-based methods, this framework improved transition accuracy by 20–30% and reduced reliance on user calibration.
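The PID-based torque tracking used at the low level can be sketched as a standard discrete PID loop. The gains below are untuned placeholders for illustration, not values from the paper.

```python
class PIDTorqueTracker:
    """Low level: track a desired hip torque with a discrete PID loop.
    Gains are illustrative placeholders, not tuned values from the paper."""

    def __init__(self, kp: float = 2.0, ki: float = 0.5, kd: float = 0.05):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, desired: float, measured: float, dt: float) -> float:
        """Return the actuator command for one control tick of length dt."""
        error = desired - measured
        self.integral += error * dt                     # accumulate I term
        derivative = (error - self.prev_error) / dt     # finite-difference D term
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

In practice such a loop would run at the motor controller's rate, with the desired torque supplied by the mid-level phase-specific curve.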
"Vision-based terrain reconstruction enabled precise prediction of stair geometry, aligning exoskeleton assistance with real-time biomechanical demands," noted Wang. Limitations include sensitivity to lighting variations and unstructured environments. Future work will integrate multimodal sensors and validate efficacy in clinical populations. This study advances adaptive exoskeleton control by merging environmental intelligence with human-centric design, paving the way for robust, user-agnostic assistive technologies.
Authors of the paper include Zhaoyang Wang, Dongfang Xu, Shunyi Zhao, Zehuan Yu, Yan Huang, Lecheng Ruan, Zhihao Zhou, and Qining Wang.
This work is supported in part by the National Natural Science Foundation of China (nos. 91948302, 52005011, 62073038, and 52475001) and Beijing Municipal Science and Technology project (no. D181100000318002).
The paper, “Level-Ground and Stair Adaptation for Hip Exoskeletons Based on Continuous Locomotion Mode Perception,” was published in the journal Cyborg and Bionic Systems on Apr. 22, 2025 (DOI: 10.34133/cbsystems.0248).
Efficient hybrid environment expression for look-and-step behavior of bipedal walking
The feasible planar regions are used for footstep planning, preventing the body from hitting obstacles, and the heightmap is used to calculate foot trajectory to avoid foot collision during the swing process. The planar regions are efficiently extracted by leveraging the organized structure of points for nearest neighbor searches.
Credit: Xuechao Chen, School of Mechatronic Engineering, Beijing Institute of Technology
A research paper by scientists at Beijing Institute of Technology proposed an efficient and safe perception method tailored for the look-and-step behavior of bipedal robots.
The new research paper, published on Apr. 23 in the journal Cyborg and Bionic Systems, provides an efficient method for representing the surrounding environment as a hybrid of feasible planar regions and a heightmap. The method consists of two subsystems: feasible planar region extraction and heightmap construction.
The look-and-step behavior of biped robots requires quickly extracting planar regions and obstacles with limited computing resources. “The look-and-step behavior is a strategy for biped robots that involves rapidly perceiving the surrounding environment, taking a step, and repeating the process without pause or deliberation. This approach significantly enhances the robot’s adaptability to unknown environments and helps correct accumulated walking deviations during navigation,” explained study author Xuechao Chen, a professor at Beijing Institute of Technology. During walking, biped robots must extract information about landing areas and obstacles in their surroundings. However, current methods often require long computation times or rely on additional computing units, such as graphics processing units (GPUs), to reduce processing time. Although GPUs enhance computational speed, they add weight and increase space requirements, making them less suitable for biped robots.
In this work, the authors propose an efficient method that represents the environment as a hybrid of feasible planar regions and a heightmap. The feasible planar regions are used for footstep planning and to keep the body from hitting obstacles, while the heightmap is used to calculate foot trajectories that avoid collisions during the swing phase. The planar regions are efficiently extracted by leveraging the organized structure of points for nearest neighbor searches. To ensure safe locomotion, the extracted planar regions exclude areas that could cause the robot’s body to collide with the environment. The proposed method completes this perception process in 0.16 s per frame using only a central processing unit, making it suitable for the look-and-step behavior of biped robots. “Combined with the footstep planning algorithm developed by our group, we experimentally verify the efficiency and safety of the proposed method in typical artificial scenes,” said Chao Li.
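The efficiency trick described above, exploiting the organized (image-like) layout of a depth camera's point cloud so that spatial nearest-neighbor queries become constant-time pixel lookups rather than k-d tree searches, can be sketched roughly as follows. The array shapes, function names, and NaN-filling policy are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def heightmap_from_organized_cloud(points: np.ndarray) -> np.ndarray:
    """Build a heightmap from an organized point cloud of shape (H, W, 3),
    where each pixel stores (x, y, z). The heightmap is the z channel, with
    NaNs (invalid depth returns) filled by the minimum observed height
    (an assumed fill policy for illustration)."""
    z = points[..., 2].copy()
    z[np.isnan(z)] = np.nanmin(z)
    return z

def pixel_neighbors(points: np.ndarray, row: int, col: int,
                    radius: int = 1) -> np.ndarray:
    """Nearest-neighbor lookup exploiting the organized structure: the
    spatial neighbors of a point are (approximately) its pixel neighbors,
    so a window slice replaces a tree search. Returns an (N, 3) array."""
    h, w, _ = points.shape
    r0, r1 = max(0, row - radius), min(h, row + radius + 1)
    c0, c1 = max(0, col - radius), min(w, col + radius + 1)
    return points[r0:r1, c0:c1].reshape(-1, 3)
```

Because each query is a fixed-size window slice, the cost per point is constant, which is one plausible reason a CPU-only pipeline can stay within the reported 0.16 s per frame.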
The contributions of this paper are as follows:
1. The authors propose an efficient method to represent the environment as feasible planar regions and a heightmap within limited computing resources.
2. The authors apply the structured nature of points within an organized point cloud to the nearest neighbor search, which accelerates the planar extraction process.
3. The authors experimentally demonstrate that the proposed perception method can rapidly extract the necessary environmental information while ensuring the robot’s walking safety.
While the proposed method effectively plans footsteps for biped robots on planar regions, it has limitations. Chiefly, it depends on empirically set thresholds tailored to specific scenarios; while effective in the tested scenes, these thresholds may not automatically adapt to different environments or terrain variations. This is a manageable challenge that could be addressed by future work on adaptive thresholding techniques. Future work will also focus on extending the approach to continuous walking under the look-and-step paradigm, further enhancing the robot’s adaptability to changing scenarios.
Authors of the paper include Chao Li, Qingqing Li, Junhang Lai, Xuechao Chen, Zhangguo Yu, and Zhihong Jiang.
This work was supported by the Postdoctoral Fellowship Program of CPSF under grant GZC20233399 and the Beijing Natural Science Foundation under grant L243004.
The paper, “Efficient Hybrid Environment Expression for Look-and-Step Behavior of Bipedal Walking,” was published in the journal Cyborg and Bionic Systems on Apr. 23, 2025 (DOI: 10.34133/cbsystems.0244).