Deep reinforcement learning for task offloading in unmanned aerial vehicle assisted intelligent farm network
AD |
Wen | Zhang BucaiEditor | Zhang Bucai1. IntroductionAs wireless network communication becomes increasingly powerful,It has become the norm for a scholar to know the world's affairs without going outDrones and artificial intelligence are becoming increasingly important in agriculture,Automatically monitor farmland to improve agricultural landscape and perform numerous image classification tasks to prevent damage to the farm in the event of fires or floods


Wen | Zhang Bucai
Editor | Zhang Bucai
1. Introduction
As wireless network communication becomes increasingly powerful,It has become the norm for a scholar to know the world's affairs without going outDrones and artificial intelligence are becoming increasingly important in agriculture,Automatically monitor farmland to improve agricultural landscape and perform numerous image classification tasks to prevent damage to the farm in the event of fires or floods. However, with current technology,Drones have limited energy and computing power and may not be able to executeAll intensive image classification tasksHow to improve the capabilities of drones has become a top priority.

2. Related work
The use of reinforcement learning (RL) to manage wireless network resources and optimize performance has been widely studied in many different applications. Investigated the challenges and opportunities of AI in 5G and 6G networks. Such as energy management and radio resource allocation. Using AI to achieve energy efficiency in 6G will be essential. In addition, someone has proposed a deep RL method,Joint optimization problem for maximizing computation and minimizing energy consumption through offloadingFor 5G and higher networks.
Their network also utilizes MEC servers as processing units to assist their network in computationally intensive tasks. Similarly, someone introduced the deep RL algorithm in the industrial internet of things environment.Just to find an optimal virtual network function placement and scheduling strategy,To minimize end-to-end latency and costs.

In the study of using drones in intelligent farms, a detailed introduction was given on how to use drones to capture aerial images and use image classification to identify crops and weeds in the field.The idea of using drones to spray insecticides was discussedThe trade-off between latency and battery usage.
In 5G and higher networks, using drones and MEC devices simultaneously is beneficial for applications. For different applications such as space air ground networks and emergency search and rescue missionsProvided extensive investigation on the use of drones and MECs. In addition, the use ofDrones provide the possibility of connecting to 6G car networking applications.

The existing methods for optimizing energy consumption and latency of drones are not limited to smart farm scenarios. For example, by optimizing the following parametersUser association, power control, computing power allocation, and location planning. A network composed of satellites, drones, ground base stations, and IoT devices was considered. Using deep RL as a task scheduling solution to considerMinimizing processing delay while minimizing drone energy limitation. Alternatively, clustering and trajectory planning can be used to optimize energy efficiency and task latency.
In addition, game theory solutions are used to solve the task unloading problem in UAV group scenarios. Although we are exploring similar issues, we focus on using DQLJointly solving energy and task delay optimization problems.

3. System model
j J...MECj0 J +.
t TKBjt.DjtPjt.scheduling algorithm In order to assign each task to the processing unit in a way,Enable tasks to be completed before their deadlines and maximize drone hover time.

WRj0vjtv.The first goal is to maximize the minimum remaining power,To extend the hover time of the drone network.Rj0

Energy consumed.

Pjtj0t0 is a binary decision variable,If processing unit j0 processes tasks, it is equal to 1.jtPjt.

P+jtj0t0 is a binary decision variable,If it is processing unit j0 that starts processing tasksThe time interval t0 is equal to 1t0j0tj.
.

xjtj0j0.When the task will be executed on processing unit j0, it is set to 1, otherwise it will be set to 0..

Q-Learning Q Q . Q Q .The Q value measures the performance of the action in a given stateFuture cumulative discount rewards. Q .
In deep Q-Learning, we use aDeep neural network Q . Q . Q .. Q .

DQL - Q Q .Q-Learning . DQL We use DNN to predict the Q value of each action in a given state, rather than looking up the Q value in the Q table..
Experience is a tuple that includes proxiesCurrent state, next state, actions, and rewards. DNN.DNN Q .

Each drone in the network will have its ownMDP framework...Drones must choose processing units that minimize deadline violations and energy consumption in order to receive the highest reward.MDP
Status: The status includes the uninstalled task type k, Lj0J MEC j1J+tTj2J+.

(L_ja-1)(1-E(vja)+V_L_ja*E(vja)).L_jaActions that do not result in a significant increase in energy consumption.e.V_L_ja.
.If a deadline violation is inevitable, the punishment will be lighter.

4. Benchmark method
1. Recurrent scheduling (RR):j0J+1J+..
2. Maximum Energy Priority (HEF)..Energy levelEnergy level1Energy level.
MECMEC.MEC1 / J +.

3. Minimum queue time and highest energy priority (QHEF).Minimum queuing time.Energy level.Energy levelEnergy level..
4. Q-LearningWe used the proposedQ-Learning algorithm.Q-Learning algorithmepsilon-greedy. Q-Learning algorithmj1J +tTj2J +.

5. Performance evaluation
Simu5G Omnet++ 5G .J=4 MEC L=1..
The task arrival time interval is modeled as an exponential distribution, and each task type has a uniqueAverage arrival rate and processing time.

. Q-Learning Deep Q-Learning 0.05 0.85. [6] ..
Simulate the performance of drones throughout the entire simulation processEnergy level (Bj0) 570 (Hj0) 211 17 4320 12960.


6. Conclusion:
Q-Learning algorithm.Incorporate observation values into the stateQ-Learning.RRHEFQHEFQ-LearningDQLQ-Learning13.
DQLQ-Learning.Q-Learning..
Reference:
[1] A. D. A. Aldabbagh, C. Hairu, and M. Hanafi, , 2020 IEEE (ICSET) pp. 213217IEEE202011.
[2] Y. Lina Y. Xiuming2020 (ICCR) pp. 2124IEEE202012.
[3] J. ZhaoY. WangZ. Fei X. Wang2020 IEEE/CIC (ICCC) pp. 424429IEEE20208.
[4] S. ZhangH. Zhang L. Song D2D6G IEEE 69 pp. 659266022020.
Disclaimer: The content of this article is sourced from the internet. The copyright of the text, images, and other materials belongs to the original author. The platform reprints the materials for the purpose of conveying more information. The content of the article is for reference and learning only, and should not be used for commercial purposes. If it infringes on your legitimate rights and interests, please contact us promptly and we will handle it as soon as possible! We respect copyright and are committed to protecting it. Thank you for sharing.(Email:[email protected])
Mobile advertising space rental |
Tag: Deep reinforcement learning for task offloading in unmanned aerial
Guess you like
-
Pinduoduo's "Trillion-Yuan Support" Plan: A Three-Year, 100 Billion Yuan Investment to Build a Multi-Win Business EcosystemDetail
2025-04-03 14:41:29 11
-
Huyu Xianxiang and AVIC Optoelectronics Institute Forge Strategic Partnership to Shape China's eVTOL Avionics LandscapeDetail
2025-04-02 18:39:02 1
-
Haier Smart Home's 8th Global R&D Innovation Awards: Illuminating Better Lives with Technology, Achieving User SatisfactionDetail
2025-04-02 15:57:33 21
-
Huawei's 2025 China Digital Power Partner Conference: Carbon-Neutral Path for China, Shared Value CreationDetail
2025-03-31 18:57:09 11
-
OPPO Think Tank: A New Paradigm for Chinese Enterprises' Globalization From Wusha Village to the Global High-End MarketDetail
2025-03-31 18:48:21 1
-
ICLR 2025: Chinese Universities and Companies Showcase AI Prowess with Numerous Accepted Papers; Stanford-HKUST Collaboration Achieves Perfect ScoreDetail
2025-03-31 14:54:45 11
-
Huawei HarmonyOS Smart Home Partner Summit: Deep Dive into Spatial Intelligence Transformation and Ecosystem Development StrategyDetail
2025-03-31 13:01:45 1
-
AI Large Models Drive Innovation in Humanoid Robots and Autonomous Driving: 2025 as a Key MilestoneDetail
2025-03-31 13:00:04 11
-
Eight Cities Pilot Credit Supervision Data Openness, Empowering Micro and Small Enterprises with Mobile Payment PlatformsDetail
2025-03-26 09:32:47 1
-
Xiaomi's "Just a Little Profit": The Deep Logic and Sustainability Behind its Low-Margin StrategyDetail
2025-03-25 15:07:32 21
- Detail
-
The Ninth Huawei ICT Competition China Challenge Finals Conclude Successfully: Kunpeng and Ascend Tracks Crown Their ChampionsDetail
2025-03-24 16:26:03 11
-
Ronshen Sugar Cube Refrigerator: The Official Product of the 2025 FIFA Club World Cup, Ushering in a New Era of Healthy Food PreservationDetail
2025-03-24 15:40:35 21
-
Zhihu Launches New Version of Zhihu Straight Answer: Deep Integration of AI and Community to Enhance Professionalism and CredibilityDetail
2025-03-24 14:04:38 1
-
China Construction Ninth Harmony (Zhongjian Jiuhe) and Huawei HarmonyOS Smart Home Deepen Strategic Partnership at AWE2025, Building a Green and Intelligent Future HomeDetail
2025-03-23 15:21:15 41
-
ZuoYeBang Books Leads the New Trend in Intelligent Education Publishing at Changsha Book FairDetail
2025-03-21 15:15:33 1
-
Tianyancha: Shielding Consumer Safety and Reshaping Business Trust with DataDetail
2025-03-21 08:47:58 1
-
Hisense at AWE2025: AI Empowerment, Leading the Transformation of Future Smart LivingDetail
2025-03-20 18:24:11 11
-
Haier TV Makes a Stunning Debut at AWE 2024: Zhiyuan AI Large Model and PureScene Care Screen Usher in a New Era of Smart HomesDetail
2025-03-20 15:17:20 1
-
China Power's Xin Yuan Zhi Chu (New Source Smart Storage): Open Energy Intelligence Computing Center Leads Intelligent Transformation of the Energy IndustryDetail
2025-03-20 15:15:39 1