DRAFT 2024-2025 Airside system software architecture

This is a controlled document.

Only the following members may edit:

  • Autonomy leads

  • Airside project manager

  • Xierumeng (I’m leaving the team :surebud:)

If changes are required, contact one of these members on Discord.

Overview

Audience: All members.

The airside system is an autonomous perception-decision-control system that runs on the drone. It implements the multiprocessing worker model:

All workers use the local space coordinate system except for the Flight interface worker:

More information on autonomous systems in general:

Additional resources

System

The airside system implements multiprocessing:

There are 2 dataflow paths:

  • A: Selection of landing location

  • B: Autonomous landing

Key:

  • Red: Perception

  • Orange: Tracking

  • White: Planning

  • Green: Control (also perception from drone telemetry)

  • Blue: Not the responsibility of the system
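
As a rough illustration of the worker model described above, the following is a minimal sketch of a worker process connected to its neighbours by queues. All names are illustrative and do not correspond to the actual airside modules.

    # Minimal sketch of the multiprocessing worker model. Names are
    # illustrative and do not correspond to the actual airside modules.
    import multiprocessing
    import queue


    def example_worker(input_queue, output_queue):
        # Generic worker loop: receive a message from the previous worker,
        # process it, and send the result to the next worker.
        while True:
            try:
                message = input_queue.get(timeout=1.0)
            except queue.Empty:
                continue

            if message is None:  # Sentinel used to shut the worker down
                break

            output_queue.put(process(message))


    def process(message):
        # Stand-in for the worker's actual task (detection, geolocation, etc.)
        return message


    if __name__ == "__main__":
        in_queue = multiprocessing.Queue()
        out_queue = multiprocessing.Queue()
        worker = multiprocessing.Process(target=example_worker, args=(in_queue, out_queue))
        worker.start()

        in_queue.put("frame 0")  # Normally done by the upstream worker
        print(out_queue.get())   # Normally done by the downstream worker

        in_queue.put(None)       # Ask the worker to exit
        worker.join()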

Workers

General

Flight interface worker

Interfaces with the flight controller. The output of the worker is used for perception and decision making; the input is used for controlling the drone.

Task:

  • Telemetry:

    • Polls the common flight controller for drone telemetry

    • Converts the telemetry position from world to local

    • Timestamps the telemetry and sends it to the appropriate worker(s)

  • Commands:

    • Receives commands from decision worker

    • If applicable, converts the command from local space to world space (sketched below)
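
A minimal sketch of the conversions and message passing described above, assuming local space is world space translated so that a fixed home position is the origin (this is an assumption for illustration; see the coordinate system documentation for the actual convention). Flight controller calls and command fields are hypothetical.

    import time
    from dataclasses import dataclass


    @dataclass
    class Position:
        north: float  # metres
        east: float   # metres
        down: float   # metres


    # ASSUMPTION: local space is world space translated so that the home
    # position is the origin. The actual convention is defined elsewhere.
    def world_to_local(world, home):
        return Position(world.north - home.north,
                        world.east - home.east,
                        world.down - home.down)


    def local_to_world(local, home):
        return Position(local.north + home.north,
                        local.east + home.east,
                        local.down + home.down)


    def telemetry_step(flight_controller, home, telemetry_queue):
        # Poll the flight controller, convert to local space, timestamp, and send
        world_position = flight_controller.get_position()  # Hypothetical call
        telemetry_queue.put((time.time(), world_to_local(world_position, home)))


    def command_step(command_queue, flight_controller, home):
        # Receive a command from the decision worker, convert back to world
        # space if applicable, and forward it to the flight controller
        command = command_queue.get()
        if command.requires_world_space:  # Hypothetical field
            command.target = local_to_world(command.target, home)
        flight_controller.send_command(command)  # Hypothetical call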

Video input worker

Interfaces with the camera device.

Task:

  • Gets frames from the common camera and timestamps them
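
A minimal sketch of this task, assuming an OpenCV-compatible camera; the real worker goes through the common camera module.

    import time

    import cv2


    def get_timestamped_frame(capture):
        # Grab one frame from the camera and attach the time it was captured
        result, frame = capture.read()
        if not result:
            return None
        return time.time(), frame


    if __name__ == "__main__":
        camera = cv2.VideoCapture(0)  # Device index is an assumption
        print(get_timestamped_frame(camera))
        camera.release()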

Detect target worker

Detects objects (if any) in images.

Task:

  • Gets images and detects objects to create bounding boxes

    • There may be 0 object detections and therefore 0 bounding boxes in the list

  • Sends the bounding boxes with the forwarded timestamp

    • Decision worker controls which output queue is used
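
A minimal sketch of the output structure and queue selection, assuming an axis-aligned bounding box with a confidence value. The detection model itself is out of scope here.

    from dataclasses import dataclass


    @dataclass
    class BoundingBox:
        confidence: float
        x_min: float
        y_min: float
        x_max: float
        y_max: float


    def send_detections(timestamp, bounding_boxes, queue_a, queue_b, use_path_a):
        # bounding_boxes may be an empty list if nothing was detected.
        # The timestamp is forwarded unchanged from the video input worker.
        # The decision worker controls which dataflow path (A or B) receives
        # the output, represented here by the use_path_a flag.
        message = (timestamp, bounding_boxes)
        if use_path_a:
            queue_a.put(message)
        else:
            queue_b.put(message)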

Decision worker

Builds a model of the world and takes action.

Input list is sorted in descending order of confidence (i.e. index 0 is best, 1 is next best, etc.).

Task:

  • Maintain model of world

  • Decide which landing pad to investigate

    • Search if there is no landing pad: TODO Write documentation

TODO CREATE DECISION WORKER DOCUMENT
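
Until the decision worker document exists, the following is a minimal sketch of the landing pad selection step only, purely for illustration.

    def choose_landing_pad(landing_pads):
        # landing_pads is sorted in descending order of confidence, so index 0
        # is the best candidate. Returns None if there is nothing to
        # investigate, which would trigger a search instead.
        if len(landing_pads) == 0:
            return None
        return landing_pads[0]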

Selection of landing location

Data merge worker

Synchronizes telemetry and detected objects.

Task:

  • Gets telemetry data and bounding boxes along with their corresponding timestamps

  • Merges the telemetry data with the bounding boxes by closest timestamp
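
A minimal sketch of the merge step, assuming telemetry arrives as (timestamp, data) tuples.

    def merge_by_closest_timestamp(detection_timestamp, telemetry):
        # telemetry is a non-empty list of (timestamp, data) tuples. Returns
        # the entry whose timestamp is closest to the detection's timestamp.
        return min(telemetry, key=lambda entry: abs(entry[0] - detection_timestamp))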

Geolocation worker

Finds out where the detected objects are in the world.

Task:

  • Gets telemetry data and bounding boxes

  • Converts bounding boxes to locations in the world
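
One common way to do this conversion is to project the centre of each bounding box through a pinhole camera model onto the ground plane using the drone's pose. The sketch below assumes a downward-facing camera, flat ground, and no drone rotation, which may not match the actual implementation.

    def bounding_box_centre_to_ground(x_min, y_min, x_max, y_max,
                                      drone_north, drone_east, drone_height,
                                      focal_length_px, centre_u, centre_v):
        # Pinhole model: a pixel offset of one focal length from the image
        # centre corresponds to a ground offset equal to the drone's height
        # above the ground.
        u = (x_min + x_max) / 2.0
        v = (y_min + y_max) / 2.0

        # Camera mounting assumption: image +u points east, image +v points south
        offset_east = (u - centre_u) * drone_height / focal_length_px
        offset_north = -(v - centre_v) * drone_height / focal_length_px

        return drone_north + offset_north, drone_east + offset_east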

Cluster estimation worker

Estimates the location of objects in the world based on groups of detections.

Input list length is the number of simultaneous detections in a single frame.

Task:

  • Collects a scatter plot of locations of detected objects in the world

  • Estimates the location of each object
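
One way to estimate object locations from the collected scatter plot is to cluster nearby detections and take each cluster's centre. The sketch below uses DBSCAN purely as an illustration; it does not imply the worker actually uses DBSCAN.

    import numpy as np
    from sklearn.cluster import DBSCAN


    def estimate_object_locations(detections):
        # detections: N x 2 array of (north, east) locations accumulated over
        # many frames. Returns one estimated (north, east) location per cluster.
        labels = DBSCAN(eps=1.0, min_samples=5).fit_predict(detections)

        estimates = []
        for label in set(labels):
            if label == -1:  # Noise points not assigned to any cluster
                continue
            estimates.append(detections[labels == label].mean(axis=0))
        return estimates


    if __name__ == "__main__":
        points = np.random.normal(loc=(10.0, 5.0), scale=0.2, size=(50, 2))
        print(estimate_object_locations(points))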

Autonomous landing

Autonomous landing worker

Lands on the selected landing pad.

Task:

  • Gets bounding boxes

  • Converts the bounding boxes into TODO
