QuadPlane Achieves Perception-Based Autonomous Landing in Unstructured Environments, Enabling Long-Range Missions
QuadPlanes represent a promising platform for long-range autonomous missions, combining the efficiency of fixed-wing aircraft with the agility of multi-rotor drones, but reliable operation in challenging environments demands robust autonomous landing capabilities. Ashik E Rasul, Humaira Tasnim, Ji Yu Kim, Young Hyun Lim, Scott Schmitz, and Bruce W. Jo from Tennessee Technological University present a newly developed lightweight QuadPlane system designed for efficient, vision-based autonomous landing and accurate visual-inertial odometry.
This research addresses a critical need for dependable landing in unstructured, GPS-denied environments, overcoming limitations imposed by payload constraints and the challenging flight characteristics of larger QuadPlane aircraft. By carefully optimising the hardware platform, sensor configuration, and embedded computing architecture, the team establishes a foundation for deploying truly autonomous landing capabilities in dynamic, real-world scenarios, paving the way for applications such as long-range aerial monitoring.
This research focuses on developing and testing a vision-based approach to enable safe and accurate landings in challenging environments, particularly when GPS signals are unavailable or unreliable.
The team investigates a system integrating a stereo vision camera, an inertial measurement unit, and a robust state estimation algorithm to perceive the landing site and guide the QuadPlane during the final approach. The method creates a detailed 3D map of the landing area, allowing the QuadPlane to accurately estimate its position and orientation relative to the ground. Furthermore, the researchers introduce a novel landing trajectory planning algorithm designed to optimise the landing sequence, minimising both landing time and energy consumption. Extensive simulations and flight tests demonstrate the effectiveness of the proposed system, achieving successful autonomous landings with a precision of under 0.2 metres. Safe landing is essential for reliable operation, especially in real-world scenarios where landing zones are often unstructured and highly variable. This demands strong generalisation from the perception system, and deep neural networks offer a scalable way to learn landing-site features across diverse visual and environmental conditions.
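The article does not reproduce the trajectory planner itself, but the trade-off it describes (landing time versus energy) can be illustrated with a toy model: grid-search a constant descent rate under a simple power model. All parameters and functional forms below are invented for illustration, not taken from the paper.

```python
# Hypothetical cost: total = descent time + lam * energy.
# P0 (hover power, W), k (rate-dependent power coefficient) and lam
# (time/energy trade-off weight) are illustrative placeholders.
def descent_cost(h, v, P0=200.0, k=50.0, lam=0.02):
    """Cost of descending h metres at a constant rate v (m/s)."""
    t = h / v                      # descent time [s]
    energy = (P0 + k * v**2) * t   # toy power model integrated over time [J]
    return t + lam * energy

def best_descent_rate(h, v_min=0.2, v_max=3.0, steps=200):
    """Grid-search the descent rate that minimises the combined cost."""
    candidates = [v_min + i * (v_max - v_min) / steps for i in range(steps + 1)]
    return min(candidates, key=lambda v: descent_cost(h, v))
```

With these numbers the optimum sits in the interior of the allowed range: descending too slowly wastes hover energy, descending too fast pays the quadratic power penalty. A real planner would of course optimise a full state trajectory, not a single scalar rate.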
Vision Guides Safe VTOL Landing Autonomy

This paper details the development and initial testing of a vision-based autonomous landing system for Vertical Take-Off and Landing (VTOL) air taxis. The research focuses on creating a robust and reliable system capable of autonomous landing in GPS-denied environments, a critical requirement for urban air mobility.
The team uses a quadplane as a test platform, equipped with an Arducam IMX519 camera and an Intel RealSense D435i depth sensor. Software components include YOLOv5/v8 object detection algorithms for identifying landing pads, nvblox for GPU-accelerated incremental signed distance field mapping, and visual-inertial odometry (VIO) for pose estimation and control. An NVIDIA Jetson Orin Nano provides onboard processing power, and Fused Deposition Modeling (FDM) is used for rapid prototyping and customisation of the airframe. Researchers conducted extensive testing in simulated environments, verified sensor functionality in controlled ground experiments, and performed initial flight tests to validate baseline stability and control.
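The article does not spell out how a detected landing pad becomes a 3D target, but a common approach with an aligned depth sensor is to back-project the bounding-box centre through a pinhole camera model. The sketch below assumes that pipeline; the intrinsics are placeholders roughly in the range of a 640×480 depth stream, not calibrated values from the paper.

```python
def bbox_center(x1, y1, x2, y2):
    """Centre of a detector bounding box in pixel coordinates."""
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

def pixel_to_camera_frame(u, v, depth_m, fx=615.0, fy=615.0, cx=320.0, cy=240.0):
    """Back-project pixel (u, v) with metric depth into the camera frame.

    fx, fy, cx, cy are placeholder pinhole intrinsics (assumed, not calibrated).
    Returns (x, y, z) in metres, z pointing along the optical axis.
    """
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)
```

In a real system the intrinsics would come from calibration (librealsense exposes them per stream), and the camera-frame point would then be transformed into the vehicle or world frame using the VIO pose estimate.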
Bayesian Data Augmentation improves the robustness of the perception deep neural network.
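The article does not define its Bayesian data augmentation scheme; one common reading is that augmentation parameters (brightness, rotation, blur, etc.) are sampled from prior distributions rather than fixed, so the detector sees a broader, probabilistically weighted range of conditions. A toy sketch under that assumption, with purely illustrative priors:

```python
import random

def sample_augmentation(rng):
    """Draw one augmentation setting from (illustrative) prior distributions."""
    return {
        "brightness": rng.gauss(1.0, 0.15),        # multiplicative gain prior
        "rotation_deg": rng.uniform(-10.0, 10.0),  # in-plane rotation prior
        "blur_sigma": abs(rng.gauss(0.0, 0.8)),    # half-normal blur strength
    }

def augment_brightness(pixels, gain):
    """Apply a brightness gain to flat pixel intensities, clipped to [0, 255]."""
    return [min(255.0, max(0.0, p * gain)) for p in pixels]
```

Each training image would be transformed with a fresh draw from `sample_augmentation`, which is what gives the perception network exposure to varied lighting and viewpoint conditions.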
Results demonstrate the feasibility of a vision-based autonomous landing system for VTOL air taxis, with YOLOv5/v8 achieving promising results in detecting landing pads and VIO providing accurate pose estimation.
The team plans to integrate all components into a fully functional prototype, validate the full flight envelope through comprehensive flight tests, and evaluate the system's performance in real-world urban environments.

Autonomous Landing via Onboard Deep Learning

This research presents a QuadPlane system designed for efficient, vision-based autonomous landing, specifically for long-range aerial monitoring applications. Scientists successfully transformed a fixed-wing aircraft into a platform capable of perception-based autonomous landing by integrating tightly coupled control, perception, and deep-learning processing modules.
The team validated the baseline flight stability of the QuadPlane through loiter-mode flight tests, demonstrating stable performance. This work establishes a foundation for deploying autonomous landing capabilities in challenging, unstructured environments where GPS signals are unavailable. The researchers acknowledge that further testing is needed to evaluate the complete system, including the depth camera, under full flight conditions with the full payload. Future work will focus on validating the full flight envelope through field tests and numerical analysis of the aerodynamic design, culminating in a full-system autonomous flight test demonstrating real-time helipad detection and controlled descent.

More information: Development and Testing for Perception Based Autonomous Landing of a Long-Range QuadPlane, arXiv: https://arxiv.org/abs/2512.09343
