SLAM: The L is for Localization – the M is for Mapping


Autonomous Drone Sensing Technologies for SLAM

There are many components and technologies that go into building a drone that can navigate on its own. An autonomous drone uses an on-going loop of sensing, thinking and acting. A robot is only as good as its sensors, so sensing is an essential component that allows the drone to reach its destination.

SLAM, short for Simultaneous Localization and Mapping, is the foundation of a self-navigating system and provides the framework within which the robot can plan a path. To arrive at its destination, a drone needs to know its location, build a map of its surroundings, and then plan a path or trajectory to where it’s going. In a dynamic environment, the drone needs to continuously update its map, and SLAM supports that in real time. If an object enters the drone’s path, the drone needs to generate a new one.


ModalAI is using Visual Inertial Odometry (VIO) to support localization capabilities in its SLAM implementation and Voxels for the mapping elements.

Visual Inertial Odometry (VIO)

Visual Inertial Odometry (VIO) fuses an image sensor and an Inertial Measurement Unit (IMU) to estimate the drone’s change in position relative to where it started. In VOXL we use a global shutter image sensor, which exposes every pixel simultaneously and so captures the low-distortion images computer vision needs, instead of a rolling shutter sensor whose row-by-row exposure is distorted by the rapid movement of a drone.
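As a rough sketch of the fusion idea (not ModalAI’s actual VIO pipeline, which uses a full state estimator), the snippet below dead-reckons position from IMU acceleration and then nudges that estimate toward a camera-derived position fix with a simple complementary filter. The function names and the blend factor `alpha` are illustrative assumptions.

```python
def propagate_imu(pos, vel, accel, dt):
    """Dead-reckon position and velocity from one IMU acceleration sample.
    IMU bias makes this drift over time, which is why a visual fix is fused in."""
    vel = vel + accel * dt
    pos = pos + vel * dt
    return pos, vel

def fuse_visual(pos_imu, pos_visual, alpha=0.1):
    """Complementary filter (toy stand-in for a real VIO estimator):
    blend the drifting IMU estimate toward the camera-derived position."""
    return (1 - alpha) * pos_imu + alpha * pos_visual
```

Even with a biased IMU, periodically blending in visual fixes keeps the position estimate bounded, which is the core benefit of fusing the two sensors.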

Voxels

Voxels, or volumetric pixels, are computed by taking the drone’s current location and projecting points from a depth map into 3D space. Voxel mapping lets the Unmanned Aerial System (UAS) understand the structure of its environment. For indoor applications, such as warehouse or security inspection, an active depth sensor like Time of Flight provides great results. For outdoor applications, passive stereo cameras work best, especially for computing depth maps at longer range.
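A minimal sketch of that projection step, assuming a pinhole camera model with intrinsics `fx, fy, cx, cy` (these parameters, and the omission of camera rotation, are simplifying assumptions, not the VOXL implementation):

```python
import numpy as np

def depth_to_voxels(depth, fx, fy, cx, cy, pose_t, voxel_size=0.1):
    """Back-project a depth image into a set of world-frame voxel indices.

    depth:  HxW array of metric depths (0 marks invalid pixels).
    pose_t: 3-vector camera position in the world frame. A real system
            applies the full camera-to-world rotation as well; it is
            dropped here for brevity.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth > 0
    # Pinhole back-projection: pixel (u, v) at depth z -> 3D point.
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    pts = np.stack([x[valid], y[valid], depth[valid]], axis=-1) + pose_t
    # Quantize points into voxel indices; the set is the occupancy map.
    return set(map(tuple, np.floor(pts / voxel_size).astype(int)))
```

Each new depth frame adds voxels to the set, so the map grows as the drone flies and observes more of its environment.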

How it Works

As the drone takes flight, the UAS tracks its position using VIO and voxel mapping begins.

Once the 3D structure is known, tree-based path planning (RRT*) can be used to generate a trajectory.
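To make the tree-based idea concrete, here is a simplified 2D RRT sketch (the rewiring step that makes it RRT* is omitted, and obstacles are modeled as circles purely for illustration):

```python
import math
import random

def rrt_plan(start, goal, obstacles, bounds, step=0.5, iters=2000, seed=0):
    """Simplified RRT: grow a tree of collision-free nodes from start
    toward random samples until it reaches the goal.

    start/goal: (x, y) tuples; obstacles: list of (cx, cy, r) circles;
    bounds: (xmin, ymin, xmax, ymax). Returns waypoints or None.
    Note: only sampled points are collision-checked here; a real planner
    also checks the segments between nodes.
    """
    rng = random.Random(seed)
    nodes = [start]
    parent = {start: None}

    def collides(p):
        return any(math.dist(p, (cx, cy)) <= r for cx, cy, r in obstacles)

    for _ in range(iters):
        # Bias a fraction of samples toward the goal to speed convergence.
        sample = goal if rng.random() < 0.1 else (
            rng.uniform(bounds[0], bounds[2]), rng.uniform(bounds[1], bounds[3]))
        near = min(nodes, key=lambda n: math.dist(n, sample))
        d = math.dist(near, sample)
        if d == 0:
            continue
        scale = min(step, d) / d  # extend at most one step toward the sample
        new = (near[0] + (sample[0] - near[0]) * scale,
               near[1] + (sample[1] - near[1]) * scale)
        if collides(new):
            continue
        nodes.append(new)
        parent[new] = near
        if math.dist(new, goal) < step:  # close enough: connect and trace back
            parent[goal] = new
            path, p = [], goal
            while p is not None:
                path.append(p)
                p = parent[p]
            return path[::-1]
    return None
```

RRT* adds a rewiring pass that reconnects nearby nodes through cheaper parents, so the trajectory converges toward the shortest path rather than the first one found.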

As the drone proceeds on its flight path, it continuously updates the mapping to stay on its mission and avoid obstacles. When new voxels obstruct the UAS’ planned path, it computes a new path to navigate around the obstacle.
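The replanning trigger can be sketched as a simple check of the current path against the voxel map (waypoint-only checking and the function name are assumptions for illustration):

```python
import math

def path_blocked(path, occupied_voxels, voxel_size=0.1):
    """Return True if any waypoint on the planned path lands in an
    occupied voxel -- the cue to compute a new path around the obstacle.

    path: iterable of (x, y, z) waypoints in world coordinates.
    occupied_voxels: set of integer (i, j, k) voxel indices from the mapper.
    """
    for x, y, z in path:
        idx = (math.floor(x / voxel_size),
               math.floor(y / voxel_size),
               math.floor(z / voxel_size))
        if idx in occupied_voxels:
            return True
    return False
```

Running this check each time new voxels arrive keeps the drone on mission: if it returns True, the planner is invoked again with the updated map.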

While performing its mission, the drone is streaming the video the operator needs for the application. Some examples are mission-critical defense and security applications or safety inspections of a bridge or road.

The drone typically records video at a much higher resolution than it streams while in flight. You can use the captured video for further offline analysis.