Last week we revealed the first half of our list of Top 10 Questions from our customers and demo tours. Now we’ll conclude the list with questions that are easy to ask but have slightly more complex answers.
6 – What is the spatial resolution?
Spatial resolution is the ability of the system to separate two objects, and it often gets mixed up with pixel size. As a rule of thumb, the spatial resolution of any imaging system is about twice the pixel size.
Our hyperspectral cameras are line scan devices, and the pixel size of the image is FOV / N, where FOV is the field of view of the camera and N the number of spatial pixels over a line.
For example, our PFD VNIR camera has 1312 spatial pixels per line. When it images an object with a FOV of 30 cm, the pixel size is 228 µm, leading to a spatial resolution of 456 µm.
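If you want to play with these numbers yourself, the rule of thumb is easy to script. Here is a minimal Python sketch (function names are just illustrative):

```python
def pixel_size_mm(fov_mm, n_pixels):
    """Pixel size on the sample for a line scan camera: FOV / N."""
    return fov_mm / n_pixels

def spatial_resolution_mm(fov_mm, n_pixels):
    """Rule of thumb: spatial resolution is about twice the pixel size."""
    return 2 * pixel_size_mm(fov_mm, n_pixels)

# PFD VNIR example: 1312 spatial pixels over a 30 cm (300 mm) FOV.
print(pixel_size_mm(300, 1312) * 1000)          # ~228.7 µm
print(spatial_resolution_mm(300, 1312) * 1000)  # ~457 µm
```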
7 – How long does it take to make an image?
Three things affect how long it takes to make a hyperspectral image:
- the size of the image
- the integration time
- the acquisition speed of the camera
As mentioned before, SPECIM hyperspectral cameras are line scan devices, so you need a scanner to image a full sample: either the camera or the sample is moving.
To illustrate the acquisition time, let’s concentrate on the second option, a moving sample. We have a fixed camera, and the sample is on a sample tray. We place the camera at the correct distance to get a field of view that covers the sample completely or partly.
To get an image while keeping the aspect ratio, the pixel size along the scanning direction needs to equal the pixel size along the FOV.
Then, if the sample has a length L, the number of lines the camera needs to acquire to scan it completely is L / pixel_size, which is actually (L x N) / FOV.
Now that we have that covered, we can concentrate on the integration time and the acquisition speed of the camera, also known as its frame rate.
The integration time (T_int) is the time the camera’s detector takes to collect the photons.
The full time to acquire a spectral line is then T_line = T_int + T_read, where T_read is the readout time of the detector. For a given integration time, the maximum achievable frame rate is 1 / T_line. If this figure is higher than the camera’s specified maximum frame rate, the frame rate is capped at that specified maximum.
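Expressed as a small Python sketch (the readout time value below is just a placeholder, not a camera specification):

```python
def max_frame_rate_fps(t_int_s, t_read_s, spec_max_fps):
    """Achievable frame rate: 1 / (T_int + T_read), capped at the specified maximum."""
    t_line = t_int_s + t_read_s
    return min(1.0 / t_line, spec_max_fps)

# Example: 0.7 ms integration, an assumed short 0.1 ms readout, 350 fps spec limit.
print(max_frame_rate_fps(0.7e-3, 0.1e-3, 350))  # 1250 fps possible -> capped at 350 fps
```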
Remark: a relevant integration time is one that fills the dynamic range of the detector to 75% at the peak sensitivity of the camera, when imaging a white reference tile under the illumination used.
A lower integration time can be set, but at the expense of the SNR of the system, whereas too high an integration time saturates the detector and makes the measured signal erroneous.
Thus the scanning time is (L x N) / (FOV x frame_rate).
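As a sketch, here is the same formula in Python; it is reused in the examples below:

```python
def scan_time_s(length_mm, n_pixels, fov_mm, frame_rate_fps):
    """Time to scan a sample of a given length while keeping the aspect ratio:
    (L x N) / (FOV x frame_rate)."""
    return (length_mm * n_pixels) / (fov_mm * frame_rate_fps)
```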
Now that we have got that sorted out, let’s take an example.
Scan a piece of meat (20 x 10 cm) with the RH NIR and PFD VNIR cameras. To scan the sample completely, it is wise to use a FOV of 12 cm and to scan over 25 cm.
1) RH NIR camera: 320 spatial pixels, maximum frame rate of 350 fps.
With proper illumination, e.g. in the SPECIM 40 x 20 scanner, a typical integration time is 0.7 ms and the readout time is short, so the camera can run at 350 fps. According to the above, the pixel size would be 375 µm, and the scan time (250 x 320) / (120 x 350) = 1.9 s.
2) PFD VNIR camera: 1312 spatial pixels, maximum frame rate of 65 fps.
Under the same illumination as in 1), a typical integration time with this camera is ca. 15 ms. The recommended maximum frame rate is then about 50 fps (with a bit of margin). The image would have a pixel size of 91 µm, and the acquisition time becomes 55 s.
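Both figures are easy to verify with the scan_time_s sketch from above:

```python
print(scan_time_s(250, 320, 120, 350))  # RH NIR: ~1.9 s
print(scan_time_s(250, 1312, 120, 50))  # PFD VNIR: ~54.7 s, i.e. about 55 s
```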
8 – What is the maximum speed of a conveyor belt?
From the above, this is quite straightforward. The scanning speed is L / acq_time, i.e. (FOV x frame_rate) / N. With the above examples, it becomes 131.25 mm/s (1) and 4.57 mm/s (2).
But these calculated speeds keep the aspect ratio of the sample: if an object is round, its image will be round. Is that always relevant, though? For the sorting industry, the point is not to image the samples but to sort them, so taking only 3 scan lines over each of them would be enough. If, for instance, you need to sort plastic particles of 5 mm, the distance between imaged lines could be up to 1.5 mm. Each particle would still be covered by 3 lines, each of them with a rather large number of pixels. The resulting scanning speeds for the RH NIR and PFD cameras would then become 525 mm/s and 75 mm/s.
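Both speed calculations can be sketched in a few lines of Python (names are illustrative):

```python
def belt_speed_mm_s(fov_mm, frame_rate_fps, n_pixels):
    """Speed that keeps the aspect ratio: (FOV x frame_rate) / N."""
    return fov_mm * frame_rate_fps / n_pixels

def sorting_speed_mm_s(line_spacing_mm, frame_rate_fps):
    """Sparse scanning for sorting: one imaged line every line_spacing_mm."""
    return line_spacing_mm * frame_rate_fps

print(belt_speed_mm_s(120, 350, 320))   # RH NIR: 131.25 mm/s
print(belt_speed_mm_s(120, 50, 1312))   # PFD VNIR: ~4.57 mm/s
print(sorting_speed_mm_s(1.5, 350))     # RH NIR, 1.5 mm line spacing: 525 mm/s
print(sorting_speed_mm_s(1.5, 50))      # PFD VNIR, 1.5 mm line spacing: 75 mm/s
```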
Also, the wider the conveyor belt, the wider the FOV of the camera needs to be to cover it, and the coarser the spatial resolution becomes. This needs to be taken into account.
Remark: binning options are available with some cameras, and these can also be used to speed up the frame rate. For fast acquisition, strong illumination is also recommended: it allows a shorter integration time and thus a faster frame rate.
9 – What is the size of the data set?
SPECIM cameras encode the raw data as 2-byte unsigned integers (12 or 16 bits depending on the camera model) with BIL interleave, ENVI data type 12. For an image of M x N pixels acquired with B spectral bands, the data cube size is therefore M x N x B x 2 bytes.
Taking the same example as above, i.e. the meat sample scanned with the RH NIR and PFD VNIR cameras, keeping the aspect ratio, we get:
1) RH NIR: 256 bands; the image size becomes 667 x 320 x 256 x 2 = 109 281 280 bytes
2) PFD VNIR: 768 bands; the image size becomes 2734 x 1312 x 768 x 2 = 5 509 644 288 bytes.
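The cube size formula in Python, verified against both examples (a minimal sketch):

```python
def cube_size_bytes(n_lines, n_pixels, n_bands, bytes_per_sample=2):
    """Raw data cube size: M x N x B x 2 bytes (2-byte unsigned integers, BIL)."""
    return n_lines * n_pixels * n_bands * bytes_per_sample

print(cube_size_bytes(667, 320, 256))    # 109 281 280 bytes (~104 MiB)
print(cube_size_bytes(2734, 1312, 768))  # 5 509 644 288 bytes (~5.1 GiB)
```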
Finally, be aware that with some cameras it might be relevant to use binning options. In the above example especially, spatial and spectral binning with the PFD VNIR camera would be useful to save space on the hard disk. As mentioned above, when scanning a piece of meat over a FOV of 12 cm with this system, the pixel size on the sample is 91 µm. Would 182 µm be enough (x2 spatial binning)? Or even 364 µm (x4 spatial binning)?
In those cases, with x2 spatial binning the size of the data decreases by a factor of 4 (half as many pixels are imaged along the FOV of the camera, and the same along the scanning direction in order to keep the aspect ratio), and by a factor of 16 with x4 spatial binning.
Combining this with spectral binning, x2 or x4, divides the needed disk space by a further factor of 2 or 4. Yet spectral binning may affect the spectral resolution.
With the PFD camera, an asymmetric binning of up to 8 x 8 (spectral x spatial) is available.
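The effect of binning on the cube size is simple to express, as a sketch under the assumption described above that spatial binning shrinks both image dimensions:

```python
def binned_size_bytes(raw_bytes, spatial_bin=1, spectral_bin=1):
    """Spatial binning by b shrinks both image dimensions (factor b**2);
    spectral binning by b shrinks the band count (factor b)."""
    return raw_bytes // (spatial_bin ** 2 * spectral_bin)

raw = 5_509_644_288                  # PFD VNIR cube from the example above
print(binned_size_bytes(raw, 2))     # x2 spatial: 4x smaller
print(binned_size_bytes(raw, 4, 4))  # x4 spatial + x4 spectral: 64x smaller
```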
Remark: binning improves the SNR of the system; 2 x 2 binned data have an SNR improved by 40% compared to non-binned data.
10 – Can I use my lens?
Right, I did not mention it before: with a hyperspectral camera, a lens, or fore objective, is needed. Depending on the camera, only our proprietary C mounts are accepted.
The fore objective defines the FOV of the “camera”, which should actually be called “camera + lens” (still called camera below for simplicity).
The FOV of the camera is then
· in degrees: 2 x arctan(Ls / 2f)
· in mm: (Ls x D) / f
with Ls the effective length of the slit, D the measurement distance and f the focal length of the lens. A key assumption in these formulas is that the magnification of the optical part of the camera (the spectrograph) is 1.
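Here are the two formulas as a Python sketch. As a consistency check, an effective slit length of 9.6 mm with an assumed 30 mm focal length gives the 12 cm FOV used in the earlier examples at a 375 mm working distance (the focal length and distance here are purely illustrative, not lens specifications):

```python
import math

def fov_degrees(slit_mm, focal_mm):
    """Angular FOV: 2 x arctan(Ls / 2f), spectrograph magnification assumed 1."""
    return 2 * math.degrees(math.atan(slit_mm / (2 * focal_mm)))

def fov_mm(slit_mm, distance_mm, focal_mm):
    """Linear FOV at measurement distance D: (Ls x D) / f."""
    return slit_mm * distance_mm / focal_mm

print(fov_degrees(9.6, 30))  # ~18.2 degrees
print(fov_mm(9.6, 375, 30))  # 120 mm, i.e. a 12 cm FOV
```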
Actually, Ls can be quite challenging to determine. It is the smallest among these:
· the physical length of the camera (spectrograph) entrance slit
· the effective length of the detector (N x pixel pitch on the detector)
· the image size of the lens
Once more, let’s take our examples:
1) RH NIR camera with an OLES30: the length of the slit is 14.2 mm, the length of the detector is 9.6 mm, and the image size of the OLES30 is 12.8 mm. The effective slit length is then 9.6 mm.
2) PFD camera with an OLE18.5: the length of the slit is 14.2 mm, the length of the detector is 10.5 mm, and the image size of the OLE18.5 is 12.4 mm. The effective slit length is then 10.5 mm.
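In other words, the effective slit length is just a min() over the three limits:

```python
def effective_slit_mm(slit_mm, detector_mm, lens_image_mm):
    """Effective slit length Ls: the smallest of the three limiting lengths."""
    return min(slit_mm, detector_mm, lens_image_mm)

print(effective_slit_mm(14.2, 9.6, 12.8))   # RH NIR + OLES30: 9.6 mm
print(effective_slit_mm(14.2, 10.5, 12.4))  # PFD + OLE18.5: 10.5 mm
```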
A key parameter of the lens is then its transmission. The lens should have adequate transmission over the full spectral range, without chromatic aberration (otherwise the system might be in focus at one wavelength, but not at the others).
Finally, the numerical aperture of the lens, or its F-number, is also quite important. With our VNIR/VIS spectral cameras the F-number of the optics is 2.4, with our NIR/SWIR/MWIR/LWIR C cameras it is 2.0, and with the LWIR HS it is 1.0. Using front optics with a higher F-number than the camera optics (spectrograph) would only limit the amount of light entering the camera, leading to a longer integration time. Having an F-number lower than that of the camera is problematic, though, as it produces excess stray light, affecting the quality of the data.
– Mathieu Marmion, French and Norwegian MSc in electrical engineering, Finnish PhD in physical geography, joined Specim in June 2011 as a Technical Sales Engineer. Since 2013, he has been working as a Sales Manager, responsible for Industrial and Research applications in Europe.