...

Once we know where the object is in the drone space, we need to convert it into a vector in the world space. To do this, we can take the drone’s yaw, pitch, and roll in the world space and build a rotation matrix. Applying this rotation to the drone-space vector gives us a vector in the world space that points from the drone to the detected object.
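
As a rough sketch of this step, assuming a numpy-based implementation and a yaw-pitch-roll (Z-Y-X) Euler convention (the actual convention and frame definitions used in the codebase may differ), the rotation could be built and applied like this:

```python
import numpy as np

def drone_to_world_rotation(yaw: float, pitch: float, roll: float) -> np.ndarray:
    """Build a 3x3 rotation matrix from the drone's yaw, pitch, and roll (radians)."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)

    # Elementary rotations about the z (yaw), y (pitch), and x (roll) axes.
    rz = np.array([[cy, -sy, 0.0], [sy, cy, 0.0], [0.0, 0.0, 1.0]])
    ry = np.array([[cp, 0.0, sp], [0.0, 1.0, 0.0], [-sp, 0.0, cp]])
    rx = np.array([[1.0, 0.0, 0.0], [0.0, cr, -sr], [0.0, sr, cr]])

    # Z-Y-X order: yaw, then pitch, then roll.
    return rz @ ry @ rx

# Rotate a direction expressed in the drone space into the world space.
vector_drone_space = np.array([0.0, 0.0, 1.0])  # e.g. straight along the camera axis
rotation = drone_to_world_rotation(yaw=0.5, pitch=-0.1, roll=0.0)
vector_world_space = rotation @ vector_drone_space
```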

Projective Perspective Transform Matrix

...

The diagram above displays the vectors used in the geolocation algorithm. The world space is the coordinate system used to describe the position of an object in the world (e.g. latitude and longitude). The world space is shown in the diagram with the black coordinate system. The camera space is the coordinate system used to describe the position of an object in relation to the camera. The camera space is shown in the diagram with the vectors c, u, and v. Note that bolded variables are vector quantities.

The table below outlines what each variable represents.

Vector | What it represents
o      | The location of the camera in the world space (latitude and longitude of camera).
c      | Orientation of the camera in the world space (yaw, pitch, and roll of camera).
u      | Horizontal axis of the image in the camera space (right is positive).
v      | Vertical axis of the image in the camera space (down is positive).
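
A hedged sketch of how these vectors might combine is shown below: a detection at pixel (x, y) gives a direction of roughly c + a·u + b·v from the camera position o, where a and b are the pixel coordinates scaled into a normalized image range. The exact scaling depends on the camera intrinsics, which are not covered here, and all names below are illustrative rather than taken from the codebase.

```python
import numpy as np

def pixel_to_world_ray(o, c, u, v, pixel_x, pixel_y, image_width, image_height):
    """Return (origin, unit direction) of the ray through a pixel, in world space."""
    o, c, u, v = (np.asarray(x, dtype=float) for x in (o, c, u, v))

    # Map the pixel to [-1, 1] coordinates about the image centre
    # (right positive along u, down positive along v, as in the table above).
    a = 2.0 * pixel_x / image_width - 1.0
    b = 2.0 * pixel_y / image_height - 1.0

    direction = c + a * u + b * v
    return o, direction / np.linalg.norm(direction)

# Example usage with made-up camera vectors and a 1920x1080 image.
origin, direction = pixel_to_world_ray(
    o=[0.0, 0.0, 100.0],
    c=[0.0, 0.0, -1.0],
    u=[1.0, 0.0, 0.0],
    v=[0.0, -1.0, 0.0],
    pixel_x=960, pixel_y=540,
    image_width=1920, image_height=1080,
)
```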

Geolocation Assumptions: Geolocation assumes the ground is flat (to be clear, I am not saying the Earth is flat, but the ground can be treated as flat over the small area of the Earth the drone operates in). The image below displays this assumption.
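
As a minimal sketch of this assumption, taking a world frame whose z axis points up and whose ground plane is z = 0 (names and conventions here are illustrative, not the project's actual code), the ray from the camera can be intersected with the flat ground as follows:

```python
import numpy as np

def intersect_flat_ground(origin: np.ndarray, direction: np.ndarray) -> np.ndarray:
    """Intersect a ray (camera position, direction) with the ground plane z = 0."""
    if abs(direction[2]) < 1e-9:
        raise ValueError("Ray is parallel to the ground and never intersects it.")
    t = -origin[2] / direction[2]
    if t < 0:
        raise ValueError("Ground intersection is behind the camera.")
    return origin + t * direction  # (x, y, 0) ground location in world space

# Example: camera 100 m above the ground, looking mostly downward.
ground_point = intersect_flat_ground(
    np.array([0.0, 0.0, 100.0]), np.array([0.3, 0.1, -1.0])
)
```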

World to Ground Space


...