Active UAV Re-Scan Planning for Dynamic Object Artifact-Free Point Cloud Mapping
Topic description
Unmanned Aerial Vehicles (UAVs) are increasingly employed for 3D mapping in domains such as construction, forestry, infrastructure inspection, and disaster response. A major challenge is the presence of dynamic object artifacts in UAV point clouds: ghost points, trails, and duplicated geometry left by moving objects (e.g., vehicles, people), vegetation swaying in the wind, or reflective water surfaces. These artifacts compromise the reliability of Digital Elevation Models (DEMs), Digital Surface Models (DSMs), and 3D reconstructions.
UAVs possess the flexibility to revisit locations during the same mission. This capability enables the development of an active re-scan strategy, which involves detecting artifact-prone areas in real time or immediately after an initial flight, and autonomously planning targeted re-flights to refine the data.
This research integrates robotics principles (e.g., occupancy grids, ray tracing, Next-Best-View planning, and coverage optimization) with deep learning techniques for predicting artifact likelihood and uncertainty from point cloud patches or image features. The anticipated outcome is a system capable of generating clean, static 3D maps with minimal additional flight time.
Topic objectives and methodology
The research aims to design and evaluate an uncertainty-aware UAV mapping pipeline that:
1. Detects and quantifies dynamic object artifacts in UAV point clouds.
2. Estimates where re-scans are most beneficial.
3. Plans and executes UAV re-flights (active mapping) to minimize ghost points while respecting flight-time and battery constraints.
4. Demonstrates improved map quality and efficiency compared to full re-flights.
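Objective 4 requires a quantitative notion of "map quality" to compare against baselines. One simple candidate metric, sketched below under the assumption that a reference static point cloud is available for an evaluation tile, is the fraction of mapped points with no nearby static support (a "ghost point ratio"); the function name and the brute-force nearest-neighbour search are illustrative choices, not a fixed part of the pipeline.

```python
import numpy as np

def ghost_point_ratio(mapped, reference, tol=0.2):
    """Fraction of mapped points with no static reference point within
    `tol` metres. Brute-force nearest neighbour: fine for small
    evaluation tiles; a KD-tree would be used at scale.

    mapped:    (N, 3) array, point cloud produced by the pipeline
    reference: (M, 3) array, ground-truth static point cloud
    """
    # Pairwise distances via broadcasting: (N, M) matrix.
    d = np.linalg.norm(mapped[:, None, :] - reference[None, :, :], axis=2)
    # A mapped point is a "ghost" if its nearest static point is farther than tol.
    return float(np.mean(d.min(axis=1) > tol))
```

The same score, evaluated on single-pass maps, full re-flights, and the proposed targeted re-scans, together with the extra flight time each strategy consumed, gives the quality/efficiency trade-off Objective 4 asks for.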
The study will begin with UAV data collection over environments containing dynamic elements such as vehicles, pedestrians, or vegetation. Point clouds will be generated from either UAV LiDAR or photogrammetry. Dynamic object artifacts will first be detected using robotics principles including occupancy grids, ray tracing, and temporal consistency checks. To enhance detection, a lightweight deep learning model will be trained to predict the likelihood and uncertainty of artifacts from local point cloud patches, optionally supported by image features.
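The occupancy-grid and ray-tracing step above can be sketched as free-space carving: every LiDAR ray marks the voxels it traverses as observed-free, and a point falling in a voxel that some other ray saw as free is likely a dynamic artifact. The following is a minimal illustration of that idea, assuming per-scan sensor poses are known; the uniform ray sampling is a stand-in for a proper DDA/Bresenham voxel traversal, and all names are illustrative.

```python
import numpy as np

def voxel_key(points, origin, res):
    """Map (N, 3) points to integer voxel indices."""
    return np.floor((points - origin) / res).astype(int)

def trace_free_voxels(sensor, hit, res, origin):
    """Voxels traversed by the ray sensor -> hit, excluding the hit voxel.
    Uniform sampling at half-voxel steps approximates exact traversal."""
    dist = np.linalg.norm(hit - sensor)
    n = max(2, int(dist / (res * 0.5)))
    ts = np.linspace(0.0, 1.0, n, endpoint=False)[1:]
    samples = sensor + ts[:, None] * (hit - sensor)
    return {tuple(v) for v in voxel_key(samples, origin, res)}

def flag_dynamic(scans, res=0.5, origin=np.zeros(3)):
    """Flag points whose voxel was observed as free space by any ray.

    scans: list of (sensor_position, (N, 3) hit points) tuples.
    Returns one boolean mask per scan (True = likely dynamic artifact).
    """
    free = set()
    for sensor, hits in scans:
        for h in hits:
            free |= trace_free_voxels(np.asarray(sensor, float), h, res, origin)
    masks = []
    for _, hits in scans:
        keys = voxel_key(hits, origin, res)
        masks.append(np.array([tuple(k) in free for k in keys]))
    return masks
```

A point on a static wall survives (no ray ever passes through its voxel), while a car present in only one scan sits in space that other rays traversed, so it is flagged. Temporal consistency checks refine this by requiring a voxel to be seen free in a sufficiently different time window than the hit.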
Based on the uncertainty maps, an active re-scan strategy will be designed, where Next-Best-View or coverage optimization techniques identify the most beneficial re-flight locations under battery and time constraints. The UAV will then execute these targeted re-flights to replace unreliable data with new static observations. Finally, the improved point clouds and derived DEMs/DSMs will be evaluated against baselines such as single-pass mapping, full re-flights, and inpainting-based post-processing.
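The coverage-optimization step described above is naturally a budgeted maximum-coverage problem, for which a greedy gain-per-cost heuristic is the standard baseline. The sketch below assumes candidate viewpoints have already been scored (which uncertain map cells each would re-observe, and its flight-time cost); the function and variable names are illustrative, not a committed interface.

```python
def plan_rescans(candidates, cells_covered, uncertainty, costs, budget):
    """Greedy budgeted coverage: repeatedly pick the viewpoint with the
    best uncertainty-reduction-per-cost ratio until the budget is spent.

    candidates:    iterable of viewpoint ids
    cells_covered: dict id -> set of map cells re-observed from that view
    uncertainty:   dict cell -> artifact likelihood from the detector
    costs:         dict id -> flight-time cost to reach and scan the view
    budget:        total extra flight time available
    """
    chosen, covered, spent = [], set(), 0.0
    remaining = set(candidates)
    while remaining:
        def gain(v):
            # Uncertainty mass over cells not yet covered by chosen views.
            return sum(uncertainty[c] for c in cells_covered[v] - covered)
        best = max(remaining, key=lambda v: gain(v) / costs[v])
        remaining.discard(best)
        if gain(best) <= 0 or spent + costs[best] > budget:
            continue  # no useful gain, or view does not fit the budget
        chosen.append(best)
        covered |= cells_covered[best]
        spent += costs[best]
    return chosen, spent
```

Because coverage gain is submodular, this greedy rule carries the usual constant-factor approximation guarantees; a full Next-Best-View planner would additionally score candidate views online against the current occupancy map rather than from a precomputed table.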
References for further reading
• A Dynamic Object Removal and Reconstruction Algorithm for Point Clouds, DOI 10.1109/SOLI60636.2023.10425733
• Fu, H.; Xue, H.; Xie, G. MapCleaner: Efficiently Removing Moving Objects from Point Cloud Maps in Autonomous Driving Scenarios. Remote Sens. 2022, 14, 4496. https://doi.org/10.3390/rs14184496
• Peng, H.; Zhao, Z.; Wang, L. A Review of Dynamic Object Filtering in SLAM Based on 3D LiDAR. Sensors 2024, 24, 645. https://doi.org/10.3390/s24020645
• A. Bircher, M. Kamel, K. Alexis, H. Oleynikova and R. Siegwart, "Receding Horizon "Next-Best-View" Planner for 3D Exploration," 2016 IEEE International Conference on Robotics and Automation (ICRA), Stockholm, Sweden, 2016, pp. 1462-1468, doi: 10.1109/ICRA.2016.7487281.