Reinforcement Learning based Energy-Efficient Fast Routing for FANETs
Published in IEEE Transactions on Communications, 2024
Abstract: Reinforcement learning (RL) based routing in flying ad-hoc networks (FANETs) enables unmanned aerial vehicles (UAVs) to choose the next hop so as to increase the packet delivery ratio, but the routing latency and energy consumption must be further reduced under inaccurate feedback in large-scale networks. In this paper, we propose an RL based energy-efficient fast routing scheme in which each UAV chooses its forwarding decision and transmit power. Based on a state consisting of the battery level, the channel conditions, and the forwarding decisions of the one-hop neighbors, the routing policy is chosen to enhance a utility defined as the weighted sum of the delivery success indicator, the latency, and the energy consumption. The number of latency violations and the learning parameters shared among the one-hop neighbors are exploited to update the routing policy distribution, so that the latency constraint is satisfied with reduced energy consumption. Deep neural networks address the state quantization error of the latency and the channel gain for UAVs with high mobility in large-scale networks. A performance bound on the end-to-end latency and the energy consumption is derived in terms of the network topology and the channel gain based on the packet forwarding game. The performance gain over the benchmark is demonstrated via both simulation and experimental results.
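To make the utility in the abstract concrete, the sketch below illustrates a per-hop utility formed as a weighted sum of the delivery success indicator, the latency, and the energy consumption. It is a minimal illustration, not the paper's implementation: the weight names, their values, and the signs are assumptions for exposition only.

```python
# Minimal sketch (assumed form, not the authors' code): per-hop utility as a
# weighted sum of the delivery success indicator, latency, and energy consumption.

from dataclasses import dataclass


@dataclass
class HopOutcome:
    delivered: bool      # delivery success indicator for the forwarded packet
    latency_s: float     # observed per-hop latency in seconds
    energy_j: float      # energy spent on the transmission in joules


def hop_utility(outcome: HopOutcome,
                w_success: float = 1.0,   # hypothetical weight choices
                w_latency: float = 0.5,
                w_energy: float = 0.2) -> float:
    """Reward delivery success; penalize latency and energy consumption."""
    return (w_success * float(outcome.delivered)
            - w_latency * outcome.latency_s
            - w_energy * outcome.energy_j)


# Example: a delivered packet with 30 ms latency and 0.4 J transmit energy.
print(hop_utility(HopOutcome(delivered=True, latency_s=0.03, energy_j=0.4)))
```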
Recommended citation: Jieling Li, Liang Xiao, Xuchen Qi, Zefang Lv, Qiaoxin Chen, Yong-Jin Liu*. Reinforcement Learning based Energy-Efficient Fast Routing for FANETs. IEEE Transactions on Communications, 2024, ISSN 0090-6778.