
This page details the high-level design behind the autopilot code.


Highest level:

The autopilot will consist of 3 threads that each manage a state machine, plus a number of other threads that manage sensor data acquisition. The 3 state machines are: the attitude manager, responsible for putting and keeping the aircraft in a desired attitude; the path manager, responsible for instructing the attitude manager to achieve some attitude in order to navigate the aircraft to a particular location; and the telemetry manager (which has yet to be designed), responsible for all communications with the ground station. In parallel, there will be a thread for each sensor, ensuring data is collected at well-timed intervals.



Path Manager


The duty of this module is to determine where the aircraft needs to go and how to get it there. Using the GPS, the altimeter, and information from the ground station, this module decides how the aircraft should be oriented and at what speed it needs to fly to reach its destination. Those instructions are communicated to the attitude manager.

This state machine will be implemented in a thread of its own.

For the most part, the waypoint management algorithm should be fine to port from PicPilot. The same should be true of the PID algorithm (used to determine the ideal attitude/airspeed to ask the lower level to achieve).
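
To illustrate the kind of controller being ported, here is a minimal sketch of a generic PID routine. The class name, gains, and clamping behaviour are hypothetical placeholders, not the actual PicPilot implementation.

#include <algorithm>

// Minimal, hypothetical PID controller sketch. The real PicPilot code differs
// (tuned gains, integral windup handling, etc.).
class PIDController
{
    public:
        PIDController(float kp, float ki, float kd, float outMin, float outMax)
            : kp(kp), ki(ki), kd(kd), outMin(outMin), outMax(outMax) {}

        // setpoint: desired value (e.g. heading), measured: current value, dt: seconds since last call
        float compute(float setpoint, float measured, float dt)
        {
            float error = setpoint - measured;
            integral += error * dt;
            float derivative = (error - prevError) / dt;
            prevError = error;

            float output = kp * error + ki * integral + kd * derivative;
            return std::max(outMin, std::min(outMax, output)); // clamp to actuator limits
        }

    private:
        float kp, ki, kd;
        float outMin, outMax;
        float integral = 0.0f;
        float prevError = 0.0f;
};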

The reason we ask the attitude manager for data is that GPS is fairly inaccurate (only accurate to within a few meters, at best). We can therefore combine what we know about how the aircraft is oriented and how fast it is going with the GPS measurements to get a more accurate position estimate.
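
As a rough illustration of what that combination could look like (the actual sensor fusion algorithm comes from PicPilot and is more involved), here is a sketch of a simple weighted blend between a dead-reckoned position estimate and a GPS fix. All names and the blend factor are hypothetical.

// Hypothetical sketch: blend a dead-reckoned estimate with a GPS fix.
// The real fusion algorithm (ported from PicPilot) is more sophisticated.
struct PositionEstimate
{
    double north; // metres from some local origin
    double east;
};

PositionEstimate fusePosition(const PositionEstimate &deadReckoned,
                              const PositionEstimate &gpsFix,
                              double gpsWeight) // 0..1, how much to trust the GPS
{
    PositionEstimate fused;
    fused.north = gpsWeight * gpsFix.north + (1.0 - gpsWeight) * deadReckoned.north;
    fused.east  = gpsWeight * gpsFix.east  + (1.0 - gpsWeight) * deadReckoned.east;
    return fused;
}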


The above diagram shows the state flow of the Path manager. The following diagram shows which modules get which data from which other modules. (Note that no two sub-modules talk to each other directly; rather, it is the state machine that does the work of passing data from one to the other.)



Some notes:

  • Autonomous takeoffs/landings are still a ways away, and it is not yet clear how they would fit into this architecture. We probably want another state machine that takes control from this one when dealing with takeoffs/landings.
  • It is not yet clear how information coming from computer vision will be used. Does it belong in this state machine? We will probably need another state machine that takes over control from this one once we are close enough to start finding targets via computer vision.

Attitude/airspeed manager

The duty of this module is to continuously accept instructions from the path manager module (it is told what attitude and airspeed are required) and use information from the appropriate sensors to bring the aircraft to the desired attitude and airspeed as quickly as possible.
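
The exact format of those instructions has not been finalized; as an illustration, they could look something like the struct below (the name and fields here are hypothetical placeholders).

// Hypothetical instruction passed from the path manager down to the
// attitude/airspeed manager. Field names and units are placeholders.
typedef struct
{
    float desiredRoll;     // in degrees
    float desiredPitch;    // in degrees
    float desiredHeading;  // in degrees, 0-360
    float desiredAirspeed; // in m/s
} AttitudeManagerInput_t;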

This state machine is meant to be implemented inside a thread of its own.

The sensors relevant to the attitude manager are the IMU and the airspeed sensor.

The Sensor Fusion, the PID and the output mixing algorithms can, for the most part, be ported from PicPilot.
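
For context, output mixing is the step that converts the controller's roll/pitch commands into individual actuator outputs. The sketch below shows what elevon mixing might look like for a flying-wing layout; the actual mixing ported from PicPilot depends on the airframe and is configured differently.

// Hypothetical example of output mixing for a flying-wing (elevon) layout.
// The real mixing ported from PicPilot depends on the actual airframe.
struct ElevonOutputs
{
    float leftElevon;  // normalized actuator command, -1.0 to 1.0
    float rightElevon;
};

ElevonOutputs mixElevons(float rollCommand, float pitchCommand)
{
    ElevonOutputs out;
    out.leftElevon  = pitchCommand + rollCommand; // both surfaces deflecting together produces pitch,
    out.rightElevon = pitchCommand - rollCommand; // opposite deflection produces roll
    return out;
}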

Data is also transferred up to the higher-level Path manager, because it can make use of the IMU data in its own sensor fusion algorithms.


Some notes:

  • Any time we don't have a new sensor measurement (if we even run that fast), we may want to extrapolate what's going on based on previous measurements.
  • Stall protection / verifying whether the higher level is asking for garbage belongs in this module.
  • Autonomous takeoffs/landings should be opaque to this module. If a higher-level module tells it what speed and attitude the plane needs to be at, this module just obliges, having no idea whether we're at 10 000 feet, about to land, or on the ground.


Sensor interfaces


As an example, here is the GPS interface. Interfaces to all sensors should look very similar.

#include <cstdint>

typedef struct
{
    long double latitude;  // 8 bytes
    long double longitude; // 8 bytes
    float utcTime;         // 4 bytes. Time in seconds since 00:00 (midnight)
    float groundSpeed;     // in m/s
    int altitude;          // in m
    int16_t heading;       // in degrees. Should be between 0-360 at all times, but using an integer just in case
    uint8_t numSatellites; // 1 byte

    uint8_t sensorStatus;  // 0 = no fix, 1 = GPS fix, 2 = differential GPS fix (DGPS) (other codes are possible)
    bool dataIsNew;        // true if data has been refreshed since the previous time GetResult was called, false otherwise

} GpsData_t;

class Gps
{
    public:

        /**
        * Initialises internal parameters.
        * Does nothing here; derived classes take care of whatever they need themselves.
        */
        Gps() {}

        /**
        * Begins the process of collecting the sensor's data.
        * This is a non-blocking function that returns right away.
        */
        virtual void BeginMeasuring(void) = 0;

        /**
        * Gets the information about the aircraft's position (see the GpsData_t struct).
        * This is a non-blocking function that returns right away, either with new data,
        * or with old data (in case of old data, the dataIsNew flag of the result struct will be cleared).
        * @param[out]    Data    pointer to the result struct.
        */
        virtual void GetResult(GpsData_t *Data) = 0;
};

The design is such that a "GPS" thread would periodically call BeginMeasuring, which would begin the process of collecting the data from the sensor. That data should be available by the time the appropriate state machine thread gets around to calling GetResult to collect it.
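
A rough sketch of that interaction might look like the following; the iteration functions and the way fresh data is handled are placeholders, not the actual task code.

// Hypothetical usage sketch of the Gps interface.
void gpsThreadIteration(Gps &gps)
{
    gps.BeginMeasuring(); // kick off the (interrupt/DMA driven) data collection and return immediately
}

void pathManagerIteration(Gps &gps)
{
    GpsData_t data;
    gps.GetResult(&data); // returns right away, with either fresh or stale data

    if (data.dataIsNew && data.sensorStatus != 0) // fresh data and some kind of fix
    {
        // Feed the fresh fix into the sensor fusion / waypoint management code.
    }
    // Otherwise keep working with the previous estimate.
}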

The Data struct that gets populated by the GetResult function will contain, along with the fields that store the sensor-specific data, a "dataIsNew" field that indicates whether the sensor data has indeed been refreshed since the last time the caller called GetResult, and a "sensorStatus" field that indicates any sensor-specific failures.

None of the methods in the interface should be blocking. This means all communication with the sensor will be internally interrupt (and possibly DMA) based. As a result, we can guarantee the processor spends its time doing meaningful work rather than polling sensors.

The interface is made up entirely of pure virtual methods. This allows many implementations of the module (for many different parts). This way, we are able to easily select whichever part is available to us at the time of flight without having to modify any of the calling code.
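
For example, an implementation for one specific part might look roughly like this (the class name and the low-level driver details are hypothetical):

// Hypothetical implementation of the Gps interface for one specific receiver.
class NeoM8Gps : public Gps
{
    public:
        void BeginMeasuring(void) override
        {
            // Start an interrupt/DMA based UART transfer from the receiver here.
        }

        void GetResult(GpsData_t *Data) override
        {
            // Copy the most recently parsed fix into *Data.
            *Data = latestData;
            latestData.dataIsNew = false; // subsequent calls see stale data until a new fix arrives
        }

    private:
        GpsData_t latestData = {};
};

The calling code only ever sees a Gps reference, so swapping in a different receiver means swapping which derived class gets constructed at startup. (A real implementation would also need to guard latestData against the interrupt/DMA code that updates it; that detail is omitted here.)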


Note: We might find it useful to add a 4th function to the interface, “Sensor_Calibrate”, which might either load previously stored calibration values or execute a calibration routine.

RTOS

The main autopilot systems are built on FreeRTOS, an open-source Real-Time Operating System. FreeRTOS gives us the ability to run multiple concurrent tasks on a single embedded system, each of which has a priority and can "block" for a period of time while it waits for data. At any given time, the highest-priority task that is not blocked will be running.
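
As an illustration of the pattern (the task names, stack sizes, priorities, and periods below are placeholders, not the actual configuration), creating the manager tasks might look like this:

#include "FreeRTOS.h"
#include "task.h"

// Hypothetical task bodies; stack sizes and priorities are placeholders.
static void attitudeManagerTask(void *pvParameters)
{
    (void)pvParameters;
    for (;;)
    {
        // Run one iteration of the attitude manager state machine,
        // then block until the next control period.
        vTaskDelay(pdMS_TO_TICKS(5));
    }
}

static void pathManagerTask(void *pvParameters)
{
    (void)pvParameters;
    for (;;)
    {
        // Run one iteration of the path manager state machine.
        vTaskDelay(pdMS_TO_TICKS(20));
    }
}

void startAutopilotTasks(void)
{
    // Higher number = higher priority in FreeRTOS.
    xTaskCreate(attitudeManagerTask, "AttitudeMgr", 512, NULL, 3, NULL);
    xTaskCreate(pathManagerTask,     "PathMgr",     512, NULL, 2, NULL);
    vTaskStartScheduler(); // never returns
}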

A few notes about developing for real-time systems:

  • Treat timeouts as a design error. Most blocking OS functions on the autopilot allow (or require) specifying a timeout for the blocking operation; if such a timeout ever expires, something has gone wrong with the design rather than with normal operation (see the sketch below).
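
For instance, receiving from a queue with a timeout might look like the following sketch; the queue, the timeout value, and the error handler are placeholders.

#include "FreeRTOS.h"
#include "queue.h"

// Hypothetical example: block on a queue with a timeout, and treat an expired
// timeout as an error rather than a normal code path.
void waitForSensorData(QueueHandle_t sensorQueue)
{
    GpsData_t data;

    // Block for at most 100 ms waiting for new data.
    if (xQueueReceive(sensorQueue, &data, pdMS_TO_TICKS(100)) != pdPASS)
    {
        // The timeout expired. In a correctly designed system this should not
        // happen, so flag it instead of silently carrying on.
        // handleTimeoutError();  // placeholder error handler
        return;
    }

    // Process the received data here.
}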

Autopilot Drivers

Required:

  • GPS
  • XBee
  • Altimeter
  • IMU
  • Interchip
  • EEPROM

Optional:

  • Airspeed
  • Battery
  • Ultrasonic

Safety Drivers

Required:

  • PWM
  • PPM
  • Interchip

Autopilot Logic

Required:

  • Task management
  • PID
  • Waypoint management
  • Navigation
  • Telemetry
  • Autonomous level
  • EEPROM storage
  • Startup codes

Optional:

  • Sensor fusion
  • Landing/take off
  • Multi-vehicle

Safety Logic

Required:

  • Autonomous level
  • Safety switch
  • PPM disconnect
  • Startup codes
  • SPI heartbeat
