Cameras

Overview

Cameras are complicated. This page covers the two main components: the sensor (shutter, exposure, quantum efficiency) and the lens (barrel distortion, field of view).

Sensor

The camera sensor is what detects the incoming light.

Shutter

A camera sensor is typically a rectangle divided into pixels. The lens of the camera focuses light onto the sensor, where photons hit the sensor pixels. Each pixel converts the photons it receives into a voltage or current. Electronics attached to the sensor read and reset the accumulated voltage/current at some fixed period (e.g. 24 FPS is a period of 41.667 ms/frame).
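
To make the timing concrete, the frame period is simply the reciprocal of the frame rate. A minimal Python sketch:

    def frame_period_ms(fps: float) -> float:
        """Time budget per frame in milliseconds."""
        return 1000.0 / fps

    print(frame_period_ms(24))   # 41.666... ms/frame
    print(frame_period_ms(60))   # 16.666... ms/frame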

In a rolling shutter camera, rows of pixels are exposed and read out one after another: each row of pixels is exposed to light and then read, followed by the next row, until the last row is reached, at which point scanning restarts at the first. Because a moving object is in a slightly different position each time a row is read, the resulting image is distorted. At low speeds this is basically unnoticeable. However, at high speeds the distortion is significant, as the object moves further within the readout period.
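
A toy sketch of the shear this produces, assuming the sensor is scanned top to bottom at a constant rate while an object moves horizontally at constant speed (all numbers are illustrative assumptions):

    ROWS = 1080                  # rows in the sensor (assumed)
    READOUT_MS = 30.0            # time to scan all rows (assumed)
    SPEED_PX_PER_MS = 0.5        # object speed in pixels/ms (assumed)

    for row in (0, 540, 1079):
        t = READOUT_MS * row / (ROWS - 1)   # when this row is sampled
        shift = SPEED_PX_PER_MS * t         # horizontal offset at that moment
        print(f"row {row}: sampled at {t:.1f} ms, object shifted {shift:.1f} px")

Each row captures the object at a later time, so the object's vertical edges lean sideways in the final image.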

A global shutter camera exposes and reads all pixels in the sensor simultaneously, which eliminates rolling shutter distortion. However, global shutter cameras are more expensive, as they require additional electronics to control and read all pixels at once. A global shutter also does not prevent motion blur from fast-moving objects, only their distortion.

[Image: a moving car captured with a global shutter vs a rolling shutter]
Global shutter (left) vs rolling shutter (right). Notice how the image on the left appears fine while the image on the right is sheared. Rolling shutter distortion can also manifest as a jello effect in video.

Exposure

Exposure is the amount of time each pixel receives photons in each frame, and is typically shorter than the frame period. Increasing the exposure time allows the camera to capture better images in a low light environment, as the sensor receives more photons. But because each pixel has a maximum capacity, it is possible to overexpose a pixel and even wash out information in neighbouring pixels (blooming).
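
A minimal sketch of overexposure, assuming a pixel accumulates charge linearly until it hits a fixed full-well capacity (both numbers below are illustrative assumptions):

    FULL_WELL = 10_000          # max electrons a pixel can hold (assumed)
    PHOTON_RATE = 2_000         # photons/ms arriving at the pixel (assumed)

    for exposure_ms in (1, 4, 8):
        incident = PHOTON_RATE * exposure_ms
        collected = min(incident, FULL_WELL)   # charge clips at capacity
        note = " (clipped: detail lost)" if incident > FULL_WELL else ""
        print(f"{exposure_ms} ms exposure: {collected} e-{note}")

Once a pixel clips, any extra light carries no information, which is why overexposed regions appear as featureless white.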

[Image: an overexposed photo]
Information is lost from the left flag and the top part of this image.

Moving objects are blurry because photons from the same point on an object land on different pixels over the exposure; higher speeds and longer exposure times make this effect more pronounced. Lowering the exposure time mitigates blurriness, but detail is lost as fewer photons are received in each frame and the resulting image is dark.
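
As a rough model, the smear of a point across the image is its apparent speed multiplied by the exposure time. A small sketch with assumed values:

    def blur_px(speed_px_per_ms: float, exposure_ms: float) -> float:
        """How many pixels a moving point smears across during one exposure."""
        return speed_px_per_ms * exposure_ms

    print(blur_px(0.5, 20))   # 10 px of smear
    print(blur_px(0.5, 2))    # 1 px: sharper, but 10x fewer photons per frame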

Quantum Efficiency

AKA spectral response, quantum response, spectral efficiency, spectral sensitivity.

Quantum efficiency is a sensor's ability to detect electromagnetic radiation (i.e. light) at various frequencies, usually expressed as the fraction of incident photons that are converted into signal. For a camera sensor, this information is typically found in its datasheet.

Sensors are typically specialized around a specific range of frequencies. For example, human eyes and visible light cameras have their largest response in the visible spectrum, while infrared cameras have their largest response in the infrared spectrum.
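
As a sketch of the definition, quantum efficiency at a given wavelength is the fraction of incident photons converted into electrons (the counts below are illustrative assumptions):

    def quantum_efficiency(electrons_generated: int, photons_incident: int) -> float:
        """Fraction of incident photons that produce signal electrons."""
        return electrons_generated / photons_incident

    print(quantum_efficiency(6_000, 10_000))   # 0.6 -> 60% QE at this wavelength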

Lens

The camera lens is what focuses the light from the world onto the camera sensor.

Barrel Distortion

Camera lenses can have barrel distortion. A lens with zero barrel distortion is known as rectilinear, while a lens with non-zero barrel distortion is known as curvilinear, or fisheye.

Straight lines in the world map to straight lines in an image produced by a rectilinear lens. This matches the ideal pinhole (perspective) projection, and therefore makes mathematical calculations easier.

Straight lines in the world map to curved lines in an image produced by a curvilinear lens, so additional steps (e.g. undistorting the image) are required before many mathematical calculations. The image from a curvilinear lens has more detail in the centre of the image (i.e. the centre has more pixels), and compresses the edges into relatively few pixels. A curvilinear lens can also have a field of view greater than 180°, while a rectilinear lens is limited to a maximum of 180°.
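
A small sketch contrasting the two ideal projection models, assuming a pinhole rectilinear mapping (r = f·tan θ) and an equidistant fisheye mapping (r = f·θ); real fisheye lenses vary in their exact mapping:

    import math

    def rectilinear(theta: float, f: float) -> float:
        """Image height r = f * tan(theta); blows up as theta -> 90 deg."""
        return f * math.tan(theta)

    def fisheye_equidistant(theta: float, f: float) -> float:
        """Image height r = f * theta; stays finite even past 90 deg."""
        return f * theta

    f = 3.0  # focal length in mm (assumed)
    for deg in (10, 45, 80):
        t = math.radians(deg)
        print(f"{deg:>2} deg off-axis: rectilinear {rectilinear(t, f):6.2f} mm, "
              f"fisheye {fisheye_equidistant(t, f):4.2f} mm")

The tan term grows without bound as the ray angle approaches 90°, which is why a rectilinear lens cannot reach a 180° field of view, while the fisheye mapping stays finite.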

Field of View

Field of view (FOV) describes how much of the scene the lens can capture at a given distance. A larger FOV allows a greater area to be seen from the same distance. However, detail is lost because the total number of pixels in the sensor is fixed, so each pixel covers a greater area. A smaller FOV captures additional detail per pixel, at the expense of the area seen.

If the area to be captured is constant, then decreasing the FOV requires the distance between the camera and the area to be increased.
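
A small sketch of this trade-off, using the pinhole relation width = 2 · d · tan(FOV/2) to find the distance d needed to fit a scene of a given width (values are illustrative):

    import math

    def distance_for_width(scene_width_m: float, fov_deg: float) -> float:
        """Distance at which a lens with the given FOV spans scene_width_m."""
        return scene_width_m / (2 * math.tan(math.radians(fov_deg) / 2))

    for fov in (90, 60, 30):
        d = distance_for_width(10.0, fov)
        print(f"{fov} deg FOV: stand {d:.1f} m back to fit a 10 m wide scene")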

Play with this tool to see various FOVs (move the orange slider around): https://morn91.github.io/exx/focal-length/#5&1&135&1
