
Introduction

There is always contention about this topic, so I figured I would put it to rest. We have to lock in this architecture early so that we can develop the entire system around it and get to testing.

What is autonomy compute?

Essentially, our system needs to use a camera feed to make decisions. How and where that camera feed is processed is extremely important to optimize for latency and aircraft performance (weight, power consumption, etc.).

Options for Compute

  • Airside

    • video is not streamed to the ground and is processed on the drone

      • low latency guaranteed

      • reliable guaranteed data rate on hardwired airside systems

        • this is a massive pro for airside compute

      • simple

      • drone has to carry the computer

        • drone often has to carry a computer anyways for command and control

      • the only debug links are SSH, VNC, or serial, which cannot stream full-resolution, low-latency video

    • Computer Options

      • companion computer to flight controller

        • Pixhawk and RPi

        • this is what we're doing now; it is very simple and easy, and for sure the path forward (see the sketch after this list). The only drawback is that it is harder to debug, because a user can't visually look at the high-quality image stream. That drawback is negligible, especially with an SSH session into the RPi, in my opinion.

      • integrated single Linux-based computer

        • i.e. ArduPilot and the autonomy code on one processor

        • possibly the long-term path because it's easier to integrate hardware-wise.

  • Groundside

    • video is streamed to the ground for processing

      • high latency

      • high-bandwidth link required

      • potentially lighter as you don’t need to carry a GPU

        • in reality, though, the systems required to process commands often outweigh this saving.

      • easier to debug because operators can view the full-resolution video stream and a user interface

      • easier to restart the code if it breaks

    • Options

      • Analog video system

        • an analog VTX and VRX pair is piped into a laptop, and the laptop does the processing

        • we cannot do this because, historically, the image quality has not been good enough for our autonomy team

        • lower latency at the cost of significantly variable image quality

          • filtering out analog breakup can be difficult in software

      • Digital video system

        • something like a DJI O3 Air Unit, Walksnail, or OpenIPC; digital data is streamed to a laptop

        • these are expensive and have range concerns; we've tested some in the past with minimal luck

      • LTE video streaming

        • we stream images through an airside device to the internet.

        • A device on the ground does the processing and returns the commands to the drone.

        • This is attractive because LTE provides the high-bandwidth link required to stream data while also satisfying the range requirements quite well (a rough sketch follows this list).

        • LTE has proven reliable on flight tests and at competition.

        • an LTE connectivity device must be carried, which adds weight

        • Options for streaming

          • RPi with LTE hat

          • Phone

      • Options for the groundside compute

        • Laptop in the field

          • we can have a laptop in the field with us at the flight test that receives the images over the internet (through someone's phone's Wi-Fi hotspot), does the processing, and sends commands to the drone from that laptop

          • requires a laptop with lots of battery and a powerful GPU (very expensive)

          • easy to debug because you have the full video link on one laptop

          • This approach has been preferred historically; it is how we have attempted things over the last four years. Note that none of these attempts have resulted in a performance at competition.

        • Computer elsewhere

          • can remote into the external computer for full resolution debug data with some latency hit

          • cheaper than a laptop in the field (desktops are cheaper than laptops)

          • we already have a desktop in the field
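
Below is a minimal sketch of the companion-computer airside option described above (Pixhawk and RPi), assuming the RPi talks to the flight controller over a hardwired serial MAVLink link using pymavlink and reads camera frames with OpenCV. The serial device path, baud rate, message choice, and the detect() placeholder are illustrative assumptions, not the current autonomy code.

```python
# Sketch of airside compute: the RPi grabs camera frames, runs detection
# locally, and sends guidance to the Pixhawk over a hardwired MAVLink link.
# The serial device, baud rate, and detect() are placeholders.
import cv2
from pymavlink import mavutil


def detect(frame):
    """Placeholder for the computer vision model; returns (lat, lon, alt) or None."""
    return None


def main():
    # Hardwired serial link to the flight controller (hypothetical device/baud).
    mav = mavutil.mavlink_connection("/dev/ttyAMA0", baud=921600)
    mav.wait_heartbeat()

    cap = cv2.VideoCapture(0)  # airside camera; the feed never leaves the drone
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                continue
            target = detect(frame)
            if target is None:
                continue
            lat, lon, alt = target
            # Example guidance message: reposition toward the detected target.
            mav.mav.set_position_target_global_int_send(
                0,  # time_boot_ms (not used by ArduPilot)
                mav.target_system,
                mav.target_component,
                mavutil.mavlink.MAV_FRAME_GLOBAL_RELATIVE_ALT_INT,
                0b0000111111111000,  # use only the position fields
                int(lat * 1e7), int(lon * 1e7), alt,
                0, 0, 0,  # velocity (ignored)
                0, 0, 0,  # acceleration (ignored)
                0, 0)     # yaw, yaw rate (ignored)
    finally:
        cap.release()


if __name__ == "__main__":
    main()
```

Because everything runs on the drone, the only round trip is over the serial link to the Pixhawk, which is what keeps latency low and independent of any radio or LTE link.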
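
For contrast, here is an equally rough sketch of the LTE streaming option: the RPi compresses each frame and uploads it over the LTE link to a ground machine that does the processing. The endpoint URL, JPEG quality, and drop-the-frame error handling are assumptions for illustration only, not an existing WARG system.

```python
# Sketch of the groundside-compute LTE option: compress each frame and upload
# it over the LTE link; a ground machine does the processing and sends commands
# back to the drone separately. Endpoint and settings are hypothetical.
import cv2
import requests

GROUND_ENDPOINT = "http://example.invalid/frames"  # hypothetical ground server


def stream_frames():
    cap = cv2.VideoCapture(0)
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                continue
            # Re-encode to JPEG so the frame fits within the LTE uplink budget.
            ok, jpeg = cv2.imencode(".jpg", frame, [cv2.IMWRITE_JPEG_QUALITY, 70])
            if not ok:
                continue
            try:
                requests.post(
                    GROUND_ENDPOINT,
                    data=jpeg.tobytes(),
                    headers={"Content-Type": "image/jpeg"},
                    timeout=2,
                )
            except requests.RequestException:
                # LTE is never fully reliable; drop the frame and keep flying.
                pass
    finally:
        cap.release()


if __name__ == "__main__":
    stream_frames()
```

Note how much of the complexity here lives in the link itself (encoding, timeouts, dropped frames), which is exactly the reliability concern raised in the justification below.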

Historical Paths

In each of the past few years we've attempted some groundside compute solution during the season:

  • Competition 2021: Attempted to use GoPro with Lightbridge and groundside personal gaming laptop.

  • Competition 2022: Attempted to use PiCam with OpenHD and Jetson.

  • Competition 2023: Attempted to use pilot FPV camera and personal gaming laptop (was planned $200 CV camera with airside Jetson).

  • Competition 2024: Attempted to use $200 CV camera with LTE and Jetson.

Most of these attempts were not employed at competition, though some were.

Conclusions

  • All 2025 systems will use airside compute for simplicity.

    • This was agreed upon soon after the 2024 competition was completed.

    • This allows the autonomy team to start designing around this POR.

Justification

  • The RPi 5 has plenty of compute power for all our computer vision.

  • An LTE stream is never fully reliable. We don't want the autonomy team to design around its functionality and reliability; that has been a downfall in the past, resulting in systems that do not function. Airside compute is by far the most reliable approach to autonomy.

  • An LTE stream for video requires extra weight and power to implement in hardware, increasing assembly time and decreasing flight time and payload capability. An LTE stream for video is significantly more power and effort than an LTE system that is data-only for the Pixhawk as a backup for command and control. See https://uwarg-docs.atlassian.net/wiki/spaces/EL/pages/2701197313/RPi+Interface+Rev+C?search_id=a657dcd7-828f-4350-bc55-00ac02a9ccd3&additional_analytics=queryHash---1c222edcb8ab63d7700c11935d113ab058a8a39e4ea6075d3cf2bf51235d3fe4 for reference.

  • Debugging can still be done by SSHing into the RPi over its onboard Wi-Fi network when the drone is in proximity of the ground station. This should not be relied on in the intended system due to the range limitation of this debugging solution, but it can be used during early-stage testing of the autonomy system, when there are lots of untested pieces of software being operated.
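
As a hedged sketch of how that early-stage debugging could work: the airside code occasionally writes downscaled snapshots to disk so they can be pulled over the SSH/Wi-Fi debug link (for example with scp) after a test, since that link cannot carry full-resolution live video. The directory, snapshot period, and scale factor below are illustrative assumptions, not part of the current autonomy code.

```python
# Sketch of an airside debugging aid: save an occasional downscaled frame to
# disk so it can be fetched over SSH/scp later. Paths and rates are
# illustrative assumptions.
import time
from pathlib import Path

import cv2

DEBUG_DIR = Path("/home/pi/debug_frames")  # hypothetical location on the RPi
SAVE_PERIOD_S = 5.0                        # snapshot rate, not a video stream
_last_save = 0.0


def save_debug_frame(frame):
    """Write an occasional downscaled copy of the latest frame for offline review."""
    global _last_save
    now = time.monotonic()
    if now - _last_save < SAVE_PERIOD_S:
        return
    _last_save = now
    DEBUG_DIR.mkdir(parents=True, exist_ok=True)
    small = cv2.resize(frame, None, fx=0.25, fy=0.25)  # keep files small for SSH
    cv2.imwrite(str(DEBUG_DIR / f"frame_{int(now)}.jpg"), small)
```

Saved frames can then be copied off with scp (or reviewed over VNC) once the drone lands or is back within Wi-Fi range of the ground station.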
