Deep Reinforcement Learning for Robot Collision Avoidance With Self-State-Attention and Sensor Fusion

Published in IEEE Robotics and Automation Letters, 2022

Abstract: 3D LiDAR sensors can provide 3D point clouds of the environment and are widely used in automobile navigation, while 2D LiDAR sensors can only provide point clouds in a 2D sweeping plane and are therefore only used for navigating robots of small height, e.g., floor-mopping robots. In this letter, we propose a simple yet effective deep reinforcement learning (DRL) method with our self-state-attention unit and present a solution that uses low-cost devices (i.e., a 2D LiDAR sensor and a monocular camera) to navigate a tall mobile robot of one-meter height. The overall pipeline is that we (1) infer dense depth information for RGB images with the aid of the 2D LiDAR sensor data (i.e., point clouds in a plane of fixed height), (2) further filter the dense depth map into 2D minimal depth data and fuse it with the 2D LiDAR data, and (3) apply a DRL module with our self-state-attention unit to a partially observable sequential decision-making problem that can deal with partially accurate data. We present a novel DRL training scheme for robot navigation, proposing a concise and effective self-state-attention unit and showing that applying this unit can replace multi-stage training, achieve better results, and improve generalization capability. Experiments on both simulated data and a real robot show that our method performs efficient collision avoidance using only a low-cost 2D LiDAR sensor and a monocular camera.
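Step (2) of the pipeline, reducing a dense depth map to a minimal-depth profile and fusing it with the 2D LiDAR scan, can be illustrated with a minimal NumPy sketch. The function names, the column-wise minimum convention, and the elementwise-minimum fusion rule are our own illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def min_depth_from_dense(depth_map):
    """Collapse a dense HxW depth map into a 1D minimal-depth profile:
    for each image column, keep the nearest (minimum) valid depth.
    Zeros are treated as invalid (no depth estimate)."""
    d = np.where(depth_map > 0, depth_map, np.inf)
    return d.min(axis=0)

def fuse_with_lidar(min_depth, lidar_ranges):
    """Conservative fusion (illustrative assumption): take the elementwise
    minimum so the robot reacts to the closest obstacle reported by either
    the camera-derived profile or the 2D LiDAR scan."""
    fused = np.minimum(min_depth, lidar_ranges)
    # Restore 0.0 where neither sensor returned a valid measurement.
    return np.where(np.isinf(fused), 0.0, fused)

# Toy example: a 2x2 depth map and a matching 2-beam LiDAR scan.
depth_map = np.array([[2.0, 0.0],
                      [1.0, 3.0]])
profile = min_depth_from_dense(depth_map)      # [1.0, 3.0]
fused = fuse_with_lidar(profile, np.array([0.5, 4.0]))  # [0.5, 3.0]
```

In practice the camera columns would first be aligned with the LiDAR beam angles; this sketch assumes that correspondence is already established.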

Download paper here

More information

Recommended citation: Yiheng Han, Irvin Haozhe Zhan, Wang Zhao, Jia Pan, Ziyang Zhang, Yaoyuan Wang, Yong-Jin Liu*. Deep Reinforcement Learning for Robot Collision Avoidance With Self-State-Attention and Sensor Fusion. IEEE Robotics and Automation Letters, vol. 7, no. 3, pp. 6886-6893, July 2022, doi: 10.1109/LRA.2022.3178791.