
Journal of Electrical Engineering, Vol. 75, No. 2 (2024), 94-101, https://doi.org/10.2478/jee-2024-0013

Deep reinforcement learning based computing offloading in unmanned aerial vehicles for disaster management

Anuratha Kesavan – Nandhini Jembu Mohanram – Soshya Joshi – Uma Sankar

   The emergence of the Internet of Things, enabled by mobile computing, has applications in the field of unmanned aerial vehicle (UAV) development. Mobile-edge computation offloading in UAVs supports low-latency applications such as disaster management, forest-fire control, and remote operations. Task-completion efficiency is improved by an edge-intelligence algorithm, and an optimal offloading policy is constructed using deep reinforcement learning (DRL) to meet the target demand and reduce transmission delay. The joint optimization minimizes the weighted sum of average energy consumption and execution delay. The edge-intelligence algorithm, combined with the DRL network, exploits the computing operation to increase the probability that at least one of tracking and data transmission remains usable. The proposed joint optimization performs well in terms of execution delay, offloading cost, and convergence compared with prevailing methodologies for UAV development. The proposed DRL enables the UAV to make real-time decisions based on the disaster scenario and the availability of computing resources.
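The objective and decision rule described above can be sketched in a minimal way. This is an illustrative assumption, not the authors' implementation: the weights, the two-action space (local vs. offload), and the epsilon-greedy selection over Q-values are all hypothetical placeholders standing in for the paper's DRL agent.

```python
import random

def weighted_cost(energy_j, delay_s, w_energy=0.5, w_delay=0.5):
    # Weighted sum of average energy consumption and execution delay,
    # the joint objective the DRL policy is trained to minimize.
    # The weights here are illustrative, not values from the paper.
    return w_energy * energy_j + w_delay * delay_s

def choose_offload_action(q_values, epsilon=0.1, rng=random):
    # Epsilon-greedy action selection, the standard exploration rule
    # in value-based DRL. Action 0 = execute the task locally on the
    # UAV, action 1 = offload it to the edge server (assumed encoding).
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])
```

In a full agent, `q_values` would come from a neural network evaluated on the current state (disaster scenario, channel quality, available computing resources), and `weighted_cost` would define the negative reward used during training.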

Keywords: deep reinforcement learning algorithm, edge intelligence, UAV energy consumption


[full-paper]


© 1997-2023  FEI STU Bratislava