1. What is the purpose of airborne hyperspectral imaging?
Any material that is detectable, either directly or indirectly, based on its spectral features can be mapped with an airborne hyperspectral camera. The point of airborne hyperspectral imaging is to create a material map of the study area, whether land or water surface. Taking the sensor into the air gives you a vantage point to search for these materials, plant species and so on over a much larger area than what is immediately visible on the ground – easily hundreds of square kilometers at a time, limited only by altitude and time spent flying. While other airborne sensors typically create detailed geometric models (LiDAR) or imagery for human interpretation (multispectral cameras, SAR), hyperspectral sensors create data that is analysed into a thematic map of material features. Imagine that – you can be kilometers up in the air, travelling at hundreds of kilometers per hour, and still create an exact map of the materials, minerals or plant species in your survey area. No other passive imaging technology can do that!

Imagine you are in a boat on this river, tasked with assessing the presence of a particular plant species of interest in the area (red). How accurate a picture of the target plant quantity would you expect to build, compared to the real situation apparent from the aerial image above? (Image courtesy of SpecTIR LLC)
2. How does it differ from multispectral airborne imaging?
“Multispectral” is one of the most confused and misused umbrella terms in remote sensing. Multispectral imagers typically have 3–5 broad bands with gaps in between, depending on the applications the imager is built for. Remember that even the ordinary digital camera found in every smartphone is a multispectral imager with 3 spectral bands. Thus, “multispectral” can mean almost anything from a general consumer camera to an application-specific imager.
By definition, hyperspectral imaging collects hundreds of contiguous, narrow spectral bands, which means there are no “gaps” between the bands. Hyperspectral simply means far more, and far narrower, bands than multispectral imaging.
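To make the band-count difference concrete, the sketch below contrasts an assumed multispectral layout with an assumed contiguous hyperspectral sampling. The numbers are purely illustrative and are not the specification of any particular camera.

```python
import numpy as np

# Illustrative sketch only: band counts, centres and widths are assumptions,
# not the specification of any particular instrument.

# A typical multispectral layout: a handful of broad bands with gaps between them.
# Each entry is (centre_wavelength_nm, bandwidth_nm).
multispectral_bands = [
    (490, 65),   # blue
    (560, 35),   # green
    (665, 30),   # red
    (705, 15),   # red edge
    (842, 115),  # near infrared
]

# A hyperspectral layout: hundreds of narrow, contiguous bands with no gaps.
# Here: 2.5 nm sampling across 400-1000 nm, i.e. 241 contiguous band centres.
hyperspectral_centres = np.arange(400.0, 1000.1, 2.5)
hyperspectral_bandwidth = 2.5  # nm, equal to the sampling step, so coverage is gapless

print(f"Multispectral bands: {len(multispectral_bands)}")
print(f"Hyperspectral bands: {hyperspectral_centres.size}")
```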
The resulting differences for a user are threefold. First, with a hyperspectral imager you can differentiate materials (minerals, plants, etc.) with a much smaller spectral difference, thanks to much higher spectral fidelity. To put it in layman’s terms, with a multispectral imager you can, for example, tell whether an area has vegetation or not. With a hyperspectral imager, you can tell which species the vegetation consists of and, moreover, whether those plants are suffering from stress, and at best what the cause of that stress is. Multispectral imaging may be enough to tell asphalt apart from gravel or concrete, but in order to tell how old the asphalt is and what its material composition is, you need the more numerous and narrower bands provided by a hyperspectral imager. Only a hyperspectral imager can tell apart minerals that are important for geologists but exhibit such minute spectral differences that a multispectral imager cannot separate them. There are many such minerals!
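The gain from many narrow bands shows up directly in how the data is analysed. Below is a minimal sketch of the Spectral Angle Mapper (SAM), one common way of matching each hyperspectral pixel spectrum against a library of reference materials. The band count, material names and random spectra are illustrative assumptions, not part of the analyses discussed in this article.

```python
import numpy as np

def spectral_angle(pixel: np.ndarray, reference: np.ndarray) -> float:
    """Angle (radians) between a pixel spectrum and a reference spectrum.

    Smaller angles mean a closer spectral match; this is the core of the
    Spectral Angle Mapper (SAM) classifier often applied to hyperspectral data.
    """
    cos_theta = np.dot(pixel, reference) / (np.linalg.norm(pixel) * np.linalg.norm(reference))
    return float(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

def classify_pixel(pixel: np.ndarray, library: dict) -> str:
    """Assign the pixel to the library spectrum with the smallest spectral angle."""
    return min(library, key=lambda name: spectral_angle(pixel, library[name]))

# Hypothetical 200-band spectra; in practice these would come from the image
# cube and a spectral library of field- or lab-measured materials.
rng = np.random.default_rng(0)
library = {
    "healthy_vegetation": rng.random(200),
    "stressed_vegetation": rng.random(200),
    "asphalt": rng.random(200),
}
pixel = library["asphalt"] + 0.01 * rng.random(200)  # a noisy asphalt-like pixel
print(classify_pixel(pixel, library))  # -> "asphalt"
```

With only a few broad bands, many of these library spectra would collapse onto nearly identical values and the angles would no longer separate them.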

Land classification map with a roof/road separation task on the left, RGB multispectral image on the right. (Analysis work courtesy of Dr. Kati Laakso)
Second, a multispectral imager is always built for a particular application, whereas a hyperspectral imager is application agnostic: it collects all the spectral information from the target and in doing so serves all possible applications, limited only by the skill of the analyst. This means you can come back to the collected data and turn it into an application – or several applications – you were not originally tasked with. As long as the data is collected well, it can be turned into dozens of different applications. If an organization is collecting hyperspectral data for a client, it is wise to price the full rights to reflect the future potential of the dataset! Remember that every dataset you collect is a snapshot of the world at that moment, and thus a starting point for a temporal study.
Third, a multispectral image is typically a photo that is interpreted by the human eye. A hyperspectral image is a quantifiable set of data instead of just an image. While the end result may still be an image, such as a thematic map or a detection alert on the operator’s screen, it is the output of an algorithm run on characterised data, and it depicts phenomena that would not be detectable with fewer or broader spectral bands. Hyperspectral data is more like empirical “big data”. Whether it should be considered sparse or dense depends on the viewpoint.
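To make “quantifiable set of data” concrete, a hyperspectral scene is usually handled as a data cube: two spatial dimensions plus one spectral dimension, so every pixel carries a full spectrum rather than three display colours. A minimal sketch with assumed, illustrative dimensions:

```python
import numpy as np

# Assumed dimensions for illustration: 2000 scan lines x 1024 cross-track pixels x 300 bands.
lines, samples, bands = 2000, 1024, 300

# An RGB photo of the same footprint holds 3 values per pixel...
rgb_image = np.zeros((lines, samples, 3), dtype=np.uint8)

# ...whereas the hyperspectral cube holds a calibrated spectrum per pixel,
# typically stored as radiance or reflectance rather than display colours.
hyperspectral_cube = np.zeros((lines, samples, bands), dtype=np.float32)

# Any pixel can be queried for its full spectrum and fed to an algorithm,
# e.g. a classifier or a target-detection filter.
spectrum = hyperspectral_cube[500, 512, :]  # 300 values for one ground location

print(rgb_image.nbytes / 1e6, "MB vs", hyperspectral_cube.nbytes / 1e6, "MB")
```

The size difference alone (a few megabytes versus gigabytes per scene in this sketch) is why hyperspectral data behaves more like empirical “big data” than like a photo.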
3. Are pixels true-orthorectified?
Yes. When the data is georeferenced using a DSM (see the glossary at the end of the text), Specim’s CaliGeoPRO preprocessing software computes the exact distance between the sensor – the nodal point of the central projection – and every image pixel. The end result is an image where parallax errors due to elevation differences are removed, i.e. pixels are shifted to their correct locations so that the resulting image can be used as a map where directions, distances and geolocations are correct at every pixel. If a DSM is not available, a DEM/DTM can be used to remove the parallax errors of the ground surface. This of course still leaves parallax errors in everything on top of the surface, but it is still better than using a flat-earth model only, especially over rolling terrain.
Let’s take an example of the importance of a DSM compared to giving just a single elevation value to the whole image. The 115 m tall cooling towers of a coal power plant were collected using a SPECIM AisaFENIX. The cooling towers were symmetrical, meaning that both the roots and the mouths of the towers were perfectly round. When the distance was given to the roots of the towers, the roots were rendered round but the mouths were incorrectly oval, meaning they had a parallax error and thus also wrong geocoordinates. When the distance was given to the mouths of the towers, the mouths were round but the roots were oval in turn, meaning the parallax error had moved there (see image below). This is simple geometry and explains the georeferencing errors that result from using only a flat-earth model. True-orthorectification is required, and the DSM is the key, because through the DSM the exact distance to each point in the image can be calculated, whether it is a ground feature, a tree or a man-made structure like the cooling towers in this example.
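For intuition about the size of this error, the parallax displacement follows directly from central-projection geometry. The sketch below uses a simplified nadir-looking model with assumed numbers (a 1500 m flight altitude and a 400 m off-nadir position; neither figure comes from the tower survey itself), not CaliGeoPRO’s actual computation – only the geometry it has to solve for every pixel.

```python
def apparent_ground_position(x_true_m: float, height_m: float, flight_alt_m: float) -> float:
    """Where an elevated point appears on the flat datum plane.

    Simplified nadir-view central projection: the ray from the sensor through
    the elevated point is extended down to the reference plane, which is in
    effect what flat-earth georeferencing does. x_true_m is the true
    horizontal offset of the point from the nadir line.
    """
    return x_true_m * flight_alt_m / (flight_alt_m - height_m)

# Assumed numbers for illustration: sensor 1500 m above the datum, a 115 m tall
# structure (like the cooling towers above) whose top is 400 m off-nadir.
x_true, h, H = 400.0, 115.0, 1500.0
x_apparent = apparent_ground_position(x_true, h, H)
print(f"apparent position {x_apparent:.1f} m, parallax error {x_apparent - x_true:.1f} m")
# -> roughly 433 m apparent position, i.e. an error of about 33 m unless a DSM
#    supplies the correct distance for that pixel.
```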

Left: roots correctly round – mouths incorrectly oval. Right: mouths correctly round – roots incorrectly oval. Also note the difference in swath widths between the two georectification results of this same dataset.

The 115 m tall cooling towers seen from the air.
You can continue reading this article by downloading our free report: