Deep reinforcement learning driven autonomous flight UAV for construction progress monitoring
Keywords: Unmanned aerial vehicles (UAVs), Progress Monitoring, Deep Reinforcement Learning, Proximal Policy Optimization
DOI: https://doi.org/10.3311/CCC2023-005
Abstract
Recently, Unmanned Aerial Vehicles (UAVs) have been studied as a means of monitoring construction sites more safely and accurately. However, construction sites are complex environments in which heavy equipment and workers move constantly, making obstacles hard to predict and changes hard to anticipate. Using UAVs for on-site monitoring in such environments requires a control algorithm that can adapt to changing conditions. This study therefore proposes a reinforcement learning-based autonomous flight algorithm for UAVs. A UAV, obstacles, and target points are placed in a 3D training environment, and the obstacles are assigned random movements. The UAV detects objects with a LiDAR sensor; it receives a penalty when it collides with an obstacle and a reward when it reaches a target point. Trained with this method, the UAV achieved accuracy similar to an existing GPS-based autonomous flight algorithm while reducing the average time to reach the target point by up to 50%, highlighting its high potential for practical use.
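The reward scheme described in the abstract (penalty on collision, reward on reaching the target) can be sketched as a per-step reward function. This is a minimal illustration, not the paper's implementation: the function name `step_reward`, the reward magnitudes, the distance thresholds, and the small time penalty are all assumptions.

```python
import numpy as np

# Hypothetical reward shaping for the UAV navigation task described above.
# All constants below are illustrative assumptions, not values from the paper.
COLLISION_PENALTY = -10.0
GOAL_REWARD = 10.0
GOAL_RADIUS = 1.0        # metres within which the target counts as reached
COLLISION_RADIUS = 0.5   # metres below which a LiDAR return counts as a collision

def step_reward(lidar_ranges, uav_pos, target_pos):
    """Return (reward, episode_done) for one simulation step."""
    # Penalize and terminate if any LiDAR beam reports an obstacle too close.
    if np.min(lidar_ranges) < COLLISION_RADIUS:
        return COLLISION_PENALTY, True
    # Reward and terminate if the UAV is within the goal radius of the target.
    dist = np.linalg.norm(np.asarray(target_pos, float) - np.asarray(uav_pos, float))
    if dist < GOAL_RADIUS:
        return GOAL_REWARD, True
    # Small per-step penalty encourages shorter paths to the target,
    # consistent with the reported reduction in time-to-target.
    return -0.01, False

# Example: clear LiDAR readings, UAV 0.5 m from the target -> goal reached.
print(step_reward(np.array([3.0, 4.2, 2.8]), (0.0, 0.0, 0.0), (0.5, 0.0, 0.0)))
```

A policy-gradient learner such as Proximal Policy Optimization (listed among the keywords) would maximize the discounted sum of these per-step rewards over episodes in the 3D training environment.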