Hyperspectral cameras used in industrial environments or onboard airborne platforms (UAVs or airplanes) often operate in a pushbroom fashion. These cameras, like Specim's, are all built following the same approach:

  • a front lens, to define a FOV
  • an entrance slit so that the sensor sees only a thin line of the target at once
  • dispersive optics, so that the thin beam of light entering the system is spread over the last main component
  • a matrix detector.

With such a configuration, the whole sample or scene is not seen at once, but only a thin line of it. A movement, provided by, e.g., a conveyor belt or a UAV, is needed to image the full object. However, we may wonder how much of the sample, scene, or target is actually seen by the sensor.

What the sensor effectively measures depends on several parameters:

  • the frame rate of the camera
  • the speed of the movement
  • the integration time
  • the slit width of the camera
  • the front objective
  • the measurement distance.
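These parameters can be gathered in one place before doing any arithmetic; a minimal sketch (the grouping and names are ours, not Specim's):

```python
from dataclasses import dataclass

@dataclass
class PushbroomSetup:
    """Acquisition parameters that determine what a pushbroom sensor measures."""
    frame_rate_hz: float        # frame rate of the camera
    speed_mm_s: float           # speed of the movement (belt, UAV, ...)
    integration_time_s: float   # exposure time per frame
    slit_width_um: float        # slit width of the camera
    fov_deg: float              # field of view of the front objective
    distance_m: float           # measurement distance
```

The worked example below fills these fields with concrete values for an FX17 over a conveyor belt.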

To fully understand this, let us consider Figure 1 with a concrete example: a Specim FX17 hyperspectral camera placed over a 1 m wide conveyor belt, sorting plastic flakes at 2 m/s.

Figure 1: A Specim FX17 hyperspectral camera over a 1 m wide conveyor belt, sorting plastic flakes at 2 m/s.
  1. As a first step, we will assume the user wants to keep the correct aspect ratio, i.e., a round object imaged as round. Since the FX17 measures spectra over 640 pixels per line, the pixel size on the belt is 1000 / 640 = 1.56 mm. The camera’s frame rate then needs to be set to 1282 fps to get “square” pixels. This is doable with the FX17 by reducing the number of spectral bands, for instance by measuring spectra over only 112 bands (the full 900–1700 nm range would require 224 bands).
  2. At 1282 fps, a new line (= frame) is acquired every 0.78 ms. During that time, the belt has moved by 1.56 mm, which is consistent with the user wanting to keep the aspect ratio. However, what matters most here is the distance covered by the belt while the camera is effectively capturing data, i.e., during the integration time. The maximum possible integration time at 1282 fps is about 0.5 ms; during that time, the belt moves by 1 mm, so the camera captures data over 1 mm of the belt. So, in Figure 1, A = 1 mm.
  3. With its default 38-degree front optics, the FX17 needs to be placed 1.45 m above the belt; from this distance, the width of the line seen by the camera (the image of the slit on the belt) is 3.48 mm.
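The three steps above can be reproduced with a short script (a minimal sketch; the variable names are ours, and the numbers come from the example in the text):

```python
import math

# Worked example: Specim FX17 over a 1 m wide belt moving at 2 m/s
belt_width_mm = 1000.0       # conveyor belt width
belt_speed_mm_s = 2000.0     # belt speed (2 m/s)
spatial_pixels = 640         # FX17 spatial pixels per line
fov_deg = 38.0               # default front optics field of view
integration_time_s = 0.5e-3  # max integration time at this frame rate

# Step 1: pixel size on the belt and the frame rate giving "square" pixels
pixel_size_mm = belt_width_mm / spatial_pixels          # 1.5625 ≈ 1.56 mm
fps_square = belt_speed_mm_s / pixel_size_mm            # 1280 fps
# (the text's 1282 fps comes from using the rounded 1.56 mm pixel size)

# Step 2: belt travel per frame, and during the exposure (A in Figure 1)
frame_period_s = 1.0 / fps_square                       # ≈ 0.78 ms
travel_per_frame_mm = belt_speed_mm_s * frame_period_s  # ≈ 1.56 mm
A_mm = belt_speed_mm_s * integration_time_s             # 1.0 mm

# Step 3: working distance so the 38° FOV spans the full 1 m belt width
distance_m = (belt_width_mm / 2) / math.tan(math.radians(fov_deg / 2)) / 1000
# ≈ 1.45 m
```

Note that frame rate and pixel size are tied through the belt speed alone: choosing any two of {pixel size, frame rate, speed} fixes the third.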

Here we are in a configuration where the image of the slit is much wider than A, and Figure 1 becomes as follows:

As can be seen, because of the wide slit image, two consecutive frames overlap each other. Even if the frames were acquired at half the pace (641 fps in our case), the aspect ratio would no longer be kept (a round object would be imaged as an oval), but the sample would still be fully imaged.

Looking at the width of the slit image convolved with the distance run by the belt during the integration time, the belt can travel up to 3.48 + 1 = 4.48 mm between frames, leading to a minimum frame rate of 446 fps for a fully imaged sample. Below that frame rate, two consecutive frames would no longer overlap, and part of the sample would not be captured.
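The same reasoning as a short sketch (values taken from the example above, names ours):

```python
# Minimum frame rate that still images the sample fully, given the wide slit
belt_speed_mm_s = 2000.0  # 2 m/s conveyor belt
slit_image_mm = 3.48      # width of the slit image on the belt
A_mm = 1.0                # belt travel during the 0.5 ms integration time

# Each frame effectively covers slit_image + A of the belt, so consecutive
# frames may be spaced by up to that distance without leaving gaps.
max_step_mm = slit_image_mm + A_mm          # 4.48 mm between frames
min_fps = belt_speed_mm_s / max_step_mm     # ≈ 446 fps

# At half the square-pixel pace (641 fps), the along-track pixel becomes
# 2000 / 641 ≈ 3.12 mm vs. 1.56 mm across-track: the aspect ratio is lost,
# but frames still overlap and the sample is fully imaged.
```

This is the lower bound for coverage; any frame rate between 446 fps and 1282 fps trades aspect ratio against data volume without losing parts of the sample.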