Deep reinforcement learning for task offloading in unmanned aerial vehicle assisted intelligent farm network


Author | Zhang Bucai
Editor | Zhang Bucai
1. Introduction
As wireless network communication becomes increasingly powerful, drones and artificial intelligence are playing an ever larger role in agriculture: farmland can be monitored automatically, and numerous image-classification tasks can be performed to prevent damage to the farm in the event of fires or floods. However, with current technology, drones have limited energy and computing power and may not be able to execute all of these computation-intensive image-classification tasks on board. How to extend the capabilities of drones has therefore become a top priority.

2. Related work
The use of reinforcement learning (RL) to manage wireless network resources and optimize performance has been studied widely across many applications. Prior work has investigated the challenges and opportunities of AI in 5G and 6G networks, such as energy management and radio resource allocation; using AI to achieve energy efficiency will be essential in 6G. In addition, a deep RL method has been proposed for 5G and beyond networks that jointly maximizes computation and minimizes energy consumption through offloading. That network also uses MEC servers as processing units to assist with computation-intensive tasks. Similarly, a deep RL algorithm has been introduced in the industrial Internet of Things setting to find an optimal virtual network function placement and scheduling strategy that minimizes end-to-end latency and cost.

Among studies of drones on intelligent farms, one gave a detailed account of how to capture aerial images with drones and use image classification to identify crops and weeds in the field. Another discussed the idea of using drones to spray insecticide, including the trade-off between latency and battery usage.
In 5G and beyond networks, using drones and MEC devices together benefits many applications. Extensive surveys cover the use of drones and MEC for applications such as space-air-ground networks and emergency search-and-rescue missions. In addition, drones offer a way to provide connectivity for 6G vehicular networking applications.

Existing methods for optimizing drone energy consumption and latency are not limited to smart-farm scenarios. For example, one approach jointly optimizes user association, power control, computing-capacity allocation, and location planning. Another considered a network composed of satellites, drones, ground base stations, and IoT devices, using deep RL as a task-scheduling solution that minimizes processing delay while respecting the drones' energy limits. Alternatively, clustering and trajectory planning can be used to optimize energy efficiency and task latency. In addition, game-theoretic solutions have been applied to the task-offloading problem in UAV swarm scenarios. Although we explore similar issues, we focus on using deep Q-learning (DQL) to jointly solve the energy and task-delay optimization problems.

3. System model
The network consists of J drones and one MEC server, giving J + 1 processing units; drones are indexed j ∈ {1, …, J}, and index j′ = 0 denotes the MEC server.
In each time interval t ∈ {1, …, T}, a drone j may generate a task of one of K types, characterized by its size B_jt, its deadline D_jt, and its processing requirement P_jt. The scheduling algorithm must assign each task to a processing unit in a way that enables tasks to be completed before their deadlines while maximizing drone hover time.

The first objective is to maximize the minimum remaining energy R_j over all drones, so as to extend the hover time of the drone network; a drone's remaining energy is its battery level minus the energy consumed by hovering and by processing tasks.

P_{j,t,j′,t′} is a binary decision variable equal to 1 if processing unit j′ processes, during time interval t′, the task generated by drone j at time t.

P⁺_{j,t,j′,t′} is a binary decision variable equal to 1 if processing unit j′ starts processing, at time interval t′, the task generated by drone j at time t.

x_{j,t,j′} is set to 1 when the task generated by drone j at time t will be executed on processing unit j′, and to 0 otherwise.
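As a small illustration of the assignment variables above, the following sketch checks that each generated task is assigned to exactly one processing unit (the function and variable names, such as valid_assignment, are illustrative and not from the paper):

```python
# Hypothetical validity check for the binary assignment variables
# x[j][t][jp] defined above: each task (generated by drone j at
# slot t) must be executed on exactly one processing unit jp.

def valid_assignment(x, num_drones, num_slots, num_units):
    for j in range(num_drones):
        for t in range(num_slots):
            # Sum of the binary indicators over processing units must be 1.
            if sum(x[j][t][jp] for jp in range(num_units)) != 1:
                return False
    return True

# Example: 2 drones, 1 time slot, 3 processing units (index 0 = MEC server).
x = [[[1, 0, 0]], [[0, 0, 1]]]
print(valid_assignment(x, 2, 1, 3))  # prints True
```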

Q-learning maintains a Q-table that stores a Q value for every state-action pair. The Q value measures the performance of taking an action in a given state as the expected future cumulative discounted reward.
In deep Q-learning, we instead use a deep neural network (DNN) to approximate the Q value of each action in a given state.
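A minimal tabular Q-learning sketch of the update described above (the state/action encoding and the epsilon value are assumptions; the learning rate and discount factor match the values reported in Section 5):

```python
# Tabular Q-learning with epsilon-greedy exploration.
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.05, 0.85, 0.1  # learning rate, discount, exploration

Q = defaultdict(float)  # Q[(state, action)] -> estimated discounted return

def choose_action(state, actions):
    """Epsilon-greedy: explore randomly, otherwise pick the best known action."""
    if random.random() < EPSILON:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state, actions):
    """Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
```

With an empty table, a single `update("s", "a", 1.0, "s2", ["a"])` moves Q[("s","a")] to 0.05, i.e. one learning-rate step toward the reward.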

DQL combines Q-learning with a DNN: rather than looking up Q values in a Q-table, the DNN predicts the Q value of each action in a given state.
DQL also uses experience replay. An experience is a tuple that includes the agent's current state, the action taken, the reward received, and the next state; stored experiences are replayed in minibatches to train the DNN.
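The experience-replay mechanism above can be sketched as a simple bounded buffer (the capacity and batch size are illustrative assumptions, not the paper's settings):

```python
# Experience-replay buffer: stores (state, action, reward, next_state)
# tuples and serves uniform minibatches for DNN training steps.
import random
from collections import deque, namedtuple

Experience = namedtuple("Experience", ["state", "action", "reward", "next_state"])

class ReplayBuffer:
    def __init__(self, capacity=10_000):
        # deque with maxlen evicts the oldest experiences automatically.
        self.buffer = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state):
        self.buffer.append(Experience(state, action, reward, next_state))

    def sample(self, batch_size):
        """Uniformly sampled minibatch, which breaks temporal correlation."""
        return random.sample(self.buffer, batch_size)
```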

Each drone in the network has its own Markov decision process (MDP) framework; a drone must choose the processing unit that minimizes deadline violations and energy consumption in order to receive the highest reward.
State: the state includes the type k of the offloaded task, the queue length L_{j′} of each of the J + 1 processing units (the MEC server and the J drones), and the current time interval t ∈ {1, …, T}.

Reward: the reward takes the form (L_{j,a} − 1)(1 − E(v_{j,a}) + V_{L_{j,a}} · E(v_{j,a})), where L_{j,a} captures the deadline outcome of the chosen action a, E(v_{j,a}) its expected energy consumption, and V_{L_{j,a}} a weighting term; it favors actions that do not cause a significant increase in energy consumption. If a deadline violation is inevitable, the punishment is lighter.
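An illustrative reward shaping consistent with this description: reward meeting the deadline with low energy cost, and penalize a violation less when it was already unavoidable. The constants and functional form here are assumptions, not the paper's exact formula:

```python
# Hypothetical reward shaping for the offloading MDP described above.

def reward(deadline_met, energy_used, violation_inevitable,
           max_energy=1.0, penalty=-1.0, light_penalty=-0.2):
    if deadline_met:
        # Higher reward for actions that consume less energy.
        return 1.0 - energy_used / max_energy
    # Lighter punishment when no action could have met the deadline.
    return light_penalty if violation_inevitable else penalty
```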

4. Benchmark methods
1. Round-robin (RR): tasks are assigned to the J + 1 processing units in turn, cycling through them in a fixed order.
2. Highest Energy First (HEF): each task is assigned to the processing unit with the highest remaining energy level; since the MEC server is not energy constrained, it is selected with probability 1/(J + 1).
3. Lowest Queuing Time and Highest Energy First (QHEF): each task is assigned to the processing unit with the lowest queuing time, with ties broken in favor of the highest remaining energy level.
4. Q-Learning: the proposed scheduling problem is solved with a tabular Q-learning algorithm using epsilon-greedy exploration, over the same state and action spaces as DQL.
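The benchmark policies above can be sketched as follows (unit indices, attribute layout, and the QHEF tie-breaking order are illustrative assumptions):

```python
# Sketches of the RR, HEF, and QHEF benchmark schedulers; processing
# unit 0 is taken to be the MEC server.
import itertools

def round_robin(num_units):
    """RR: cycle through the J + 1 processing units in a fixed order."""
    return itertools.cycle(range(num_units))

def hef(energy_levels):
    """HEF: pick the unit with the highest remaining energy level."""
    return max(range(len(energy_levels)), key=lambda j: energy_levels[j])

def qhef(queue_times, energy_levels):
    """QHEF: lowest queuing time first; break ties by highest energy."""
    return min(range(len(queue_times)),
               key=lambda j: (queue_times[j], -energy_levels[j]))
```

For example, with queue times [2, 1, 1] and energy levels [5, 9, 7], QHEF picks unit 1: units 1 and 2 tie on queue time, and unit 1 has more energy.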

5. Performance evaluation
The simulations were implemented with Simu5G, a 5G network simulation library for OMNeT++. The network contains J = 4 drones and one MEC server (L = 1).
The task arrival time interval is modeled as an exponential distribution, and each task type has a unique average arrival rate and processing time.
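The arrival model above can be sampled as follows (the task-type names and mean inter-arrival values are illustrative placeholders, not the paper's parameters):

```python
# Exponentially distributed inter-arrival times, one mean per task type.
import random

MEAN_INTERARRIVAL = {"image_classification": 4.0, "fire_detection": 9.0}  # assumed

def next_arrival_times(task_type, horizon, seed=None):
    """Generate the arrival times of one task type up to the time horizon."""
    rng = random.Random(seed)
    t, arrivals = 0.0, []
    while True:
        # expovariate takes the rate lambda = 1 / mean.
        t += rng.expovariate(1.0 / MEAN_INTERARRIVAL[task_type])
        if t > horizon:
            return arrivals
        arrivals.append(t)
```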

Both the Q-learning and deep Q-learning agents use a learning rate of 0.05 and a discount factor of 0.85, with the remaining parameters taken from [6]. The simulated drones have a battery level (B_j) of 570 and a hover consumption (H_j) of 211, and the simulations run for 4320 and 12960 time intervals.


6. Conclusion
We proposed a DQL-based task-offloading algorithm that incorporates the drones' observed values (task type, queue lengths, and energy levels) into the state. In simulation it outperformed the RR, HEF, QHEF, and tabular Q-learning benchmarks, improving on Q-learning by 13%. DQL also scales better than tabular Q-learning, which must visit and store every state-action pair explicitly.
References:
[1] A. D. A. Aldabbagh, C. Hairu, and M. Hanafi, in 2020 IEEE International Conference (ICSET), pp. 213-217, IEEE, 2020.
[2] Y. Lina and Y. Xiuming, in 2020 International Conference (ICCR), pp. 21-24, IEEE, 2020.
[3] J. Zhao, Y. Wang, Z. Fei, and X. Wang, in 2020 IEEE/CIC International Conference on Communications in China (ICCC), pp. 424-429, IEEE, 2020.
[4] S. Zhang, H. Zhang, and L. Song, "D2D … 6G," IEEE, vol. 69, pp. 6592-6602, 2020.