New system employs a deep neural network to overcome the challenge of ground-effect turbulence
Landing multi-rotor drones smoothly is difficult. As a drone descends, the airflow from each rotor bounces off the ground, creating complex turbulence that grows stronger the closer the drone gets. This turbulence is not well understood, nor is it easy to compensate for, particularly for autonomous drones. That is why takeoff and landing are often the two trickiest parts of a drone flight. Drones typically wobble and inch slowly toward a landing until power is finally cut, and they drop the remaining distance to the ground.
At Caltech’s Center for Autonomous Systems and Technologies (CAST), artificial intelligence experts have teamed up with control experts to develop a system that uses a deep neural network to help autonomous drones “learn” how to land more safely and quickly, while gobbling up less power. The system they have created, dubbed the “Neural Lander,” is a learning-based controller that tracks the position and speed of the drone, and modifies its landing trajectory and rotor speed accordingly to achieve the smoothest possible landing.
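In broad strokes, such a controller combines standard feedback with a learned model of the forces that appear near the ground. The Python sketch below illustrates that structure only; the gains, the hypothetical residual_force_fn model, and the simplified thrust output are assumptions made for this example, not the team's actual implementation.

```python
import numpy as np

def control_step(pos, vel, target_pos, residual_force_fn,
                 kp=4.0, kd=2.5, mass=1.0, g=9.81):
    """One control update: proportional-derivative feedback on position
    and velocity, corrected by a learned estimate of near-ground forces.
    (Illustrative sketch; gains and structure are assumptions.)"""
    # Nominal feedback: accelerate toward the target while damping velocity.
    desired_acc = kp * (target_pos - pos) - kd * vel
    # Learned correction: a trained network estimates the residual
    # aerodynamic force (e.g., ground effect) at the current state.
    f_residual = residual_force_fn(pos, vel)
    # Thrust command compensates gravity and the learned disturbance.
    thrust = mass * (desired_acc + np.array([0.0, 0.0, g])) - f_residual
    return thrust

# Example call with a placeholder "learned" model that predicts no force.
print(control_step(np.array([0.0, 0.0, 0.3]), np.zeros(3), np.zeros(3),
                   lambda p, v: np.zeros(3)))
```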
“This project has the potential to help drones fly more smoothly and safely, especially in the presence of unpredictable wind gusts, and eat up less battery power as drones can land more quickly,” says Soon-Jo Chung, Bren Professor of Aerospace in the Division of Engineering and Applied Science (EAS) and research scientist at JPL, which Caltech manages for NASA. The project is a collaboration between Chung and Caltech artificial intelligence (AI) experts Anima Anandkumar, Bren Professor of Computing and Mathematical Sciences, and Yisong Yue, assistant professor of computing and mathematical sciences.
A paper describing the Neural Lander will be presented at the Institute of Electrical and Electronics Engineers (IEEE) International Conference on Robotics and Automation on May 22. Co-lead authors of the paper are Caltech graduate students Guanya Shi, whose PhD research is jointly supervised by Chung and Yue, and Xichen Shi and Michael O’Connell, both PhD students in Chung’s Aerospace Robotics and Control Group.
Deep neural networks (DNNs) are AI systems inspired by biological systems like the brain. The “deep” part of the name refers to the fact that data inputs are churned through multiple layers, each of which processes incoming information in a different way to tease out increasingly complex details. DNNs can learn automatically from data, which makes them well suited for repetitive tasks.
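As a rough illustration of that layered structure, here is a minimal feed-forward network in Python. The layer sizes, random weights, and the state-to-force mapping are invented for this example and are not the Neural Lander’s actual architecture.

```python
import numpy as np

def relu(x):
    # Simple nonlinearity applied between layers.
    return np.maximum(0.0, x)

def forward(x, layers):
    """Pass an input through successive layers; each layer re-processes
    the previous layer's output, extracting more complex features."""
    for W, b in layers[:-1]:
        x = relu(W @ x + b)
    W, b = layers[-1]
    return W @ x + b  # final linear layer produces the prediction

# Toy example: map a 6-D state (position and velocity) to a 3-D output.
rng = np.random.default_rng(0)
layers = [(rng.normal(size=(32, 6)), np.zeros(32)),
          (rng.normal(size=(32, 32)), np.zeros(32)),
          (rng.normal(size=(3, 32)), np.zeros(3))]
print(forward(rng.normal(size=6), layers))
```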
To make sure that the drone flies smoothly under the guidance of the DNN, the team employed a technique known as spectral normalization, which smooths out the neural net’s outputs so that it does not make wildly varying predictions as inputs or conditions shift. Improvements in landing were measured by examining deviation from an idealized trajectory in 3D space. Three types of tests were conducted: a straight vertical landing; a descending arc landing; and flight in which the drone skims across a broken surface, such as over the edge of a table, where the turbulence from the ground varies sharply.
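In practice, spectral normalization caps the largest singular value of each layer’s weight matrix, which bounds how sharply the network’s output can change as its input changes. Below is a minimal NumPy sketch of one common variant, using power iteration to estimate that value; the iteration count and the cap of 1 are illustrative choices rather than the paper’s exact settings.

```python
import numpy as np

def spectral_normalize(W, n_iter=20):
    """Rescale a weight matrix so its largest singular value is at most 1,
    limiting how sharply a layer's output can vary with its input."""
    rng = np.random.default_rng(0)
    u = rng.normal(size=W.shape[0])
    # Power iteration: converges to the top singular vectors of W.
    for _ in range(n_iter):
        v = W.T @ u
        v /= np.linalg.norm(v)
        u = W @ v
        u /= np.linalg.norm(u)
    sigma = u @ W @ v  # estimate of the largest singular value
    return W / max(sigma, 1.0)  # shrink only when sigma exceeds 1

# Applying this to every layer keeps the whole network's response smooth.
W = np.random.default_rng(1).normal(size=(32, 6))
print(np.linalg.svd(spectral_normalize(W), compute_uv=False)[0])  # <= 1
```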
The new system eliminates vertical error (a 100 percent decrease), allowing for controlled landings, and reduces lateral drift by up to 90 percent. In the team’s experiments, the new system achieved an actual landing rather than getting stuck about 10 to 15 centimeters above the ground, as unmodified conventional flight controllers often do. Further, during the skimming test, the Neural Lander produced a much smoother transition as the drone moved from skimming across the table to flying in the free space beyond the edge.
“With less error, the Neural Lander is capable of a speedier, smoother landing and of gliding smoothly over the ground surface,” Yue says. The new system was tested at CAST’s three-story-tall aerodrome, which can simulate a nearly limitless variety of outdoor wind conditions. Opened in 2018, CAST is a 10,000-square-foot facility where researchers from EAS, JPL, and Caltech’s Division of Geological and Planetary Sciences are uniting to create the next generation of autonomous systems, while advancing the fields of drone research, autonomous exploration, and bioinspired systems.
“This interdisciplinary effort brings together experts from machine learning and control systems. We have barely started to explore the rich connections between the two areas,” Anandkumar says.
Besides its obvious commercial applications (Chung and his colleagues have filed a patent on the new system), the Neural Lander could prove crucial to projects currently under development at CAST, including an autonomous medical transport that could land in difficult-to-reach locations, such as gridlocked traffic. “The importance of being able to land swiftly and smoothly when transporting an injured individual cannot be overstated,” says Morteza Gharib, Hans W. Liepmann Professor of Aeronautics and Bioinspired Engineering; director of CAST; and one of the lead researchers of the air ambulance project.
###
The paper is titled “Neural Lander: Stable Drone Landing Control Using Learned Dynamics.” It can be found online here: https:/
VIDEO HERE: https:/
Media Contact
Robert Perkins
[email protected]