Multispectral photography

Photography with ordinary digital cameras records a spectrum between approximately 400 and 750 nm, which roughly corresponds to the limits of the human visual range. Since human vision normally records color (i.e., different wavelength bands) with three different types of photoreceptors in the retina, ordinary digital cameras are designed to do the same, recording the three "primary" (for us humans) colors red, green and blue in a way that approximates the spectral sensitivity of the human visual system.

The sensitivity of human eyes to different wavelengths is not actually as clear-cut as the above oversimplification may suggest. For instance, the three (or occasionally four) different types of cones in the human retina have large overlaps in spectral sensitivity, as well as distinct topographic distributions and densities across the retina. The three "primary" colors and their respective wavelength bands are distinguished from one another by the human visual system thanks to a considerable amount of post-processing by both the peripheral and the central nervous system. This likely involves an opponent comparison of the signal levels coming from different cone types, together with a number of "assumptions" made by the nervous system in carrying out this post-processing. This may also explain the apparently counter-intuitive effects of some types of color blindness that result from genetic defects affecting the sensitivity of one or more of the cone types.

Of course, other organisms often perceive light with photoreceptors sensitive to different wavelength ranges than human eyes, and their nervous systems process the data recorded by eyes and other photosensitive organs in a different way than the human brain. For instance, many birds possess four types of photoreceptors sensitive to different wavelength bands. Therefore, their vision is quite different from ours, and probably richer than ours as far as color discrimination is concerned.

The optics of some eye types can be strikingly different from ours as well. For instance, certain ostracod crustaceans use reflector optics more similar to mirror telescopes than to our eyes. Different types of compound eyes exist, with very distinct image-forming capabilities. Some stomatopod crustaceans (mantis shrimps) are believed to use specialized regions of their eyes to achieve three-dimensional vision within a single eye, allowing each eye to concentrate its attention onto a separate prey or threat, and to detect the polarization of light in addition to different wavelengths. Even with eyes similar to ours, different types of nervous systems can extract vastly different amounts of information from a recorded scene, and therefore the nervous system must be regarded as an integral part of the visual system. These are just a few of the problems encountered when trying to understand how other species perceive their environment with their eyes.

Multispectral imaging uses cameras and videocameras to record images in multiple wavelength bands and, sometimes, at wavelengths outside those perceived by the human eye. For instance, a multispectral camera can subdivide the visible spectrum into a larger number of "primary" colors, or information channels, than the three perceived by the human visual system. With a sufficiently large number of channels, for example, it is possible to carry out accurate remote analyses of the composition of a planetary surface without landing a probe there. Another use of multispectral imaging is the recording of wavelengths that lie outside the human visual range. This is commonly done in astronomy, space exploration and remote sensing. Thermal infrared sensing is just one of these applications, with everyday uses in rescue, police and military operations. Hyperspectral imaging is a closely related term; it usually refers to recording a large number of narrow, contiguous wavelength bands, rather than the smaller number of broader, possibly non-contiguous bands typical of multispectral imaging.

A common misconception is that multispectral imaging, or multispectral photography, alone allows us to model and understand the vision of organisms that perceive different wavelength bands than humans. Because of the factors mentioned above, multispectral imaging alone cannot even begin to simulate the vision of another species. At most, it gives us a coarse idea of how a scene reflects light within the wavelength bands that the species in question can perceive. On the other hand, multispectral imaging is very useful in several other contexts.

Multispectral imaging is not the sole domain of scientists and the military. The sensors of consumer digital cameras have a native sensitivity to near-UV (NUV) and near-IR (NIR) that is suppressed by internal NUV- and NIR-cut filters to make the spectral response of these cameras similar to that of the human eye. By removing these filters, the sensitivity of ordinary digital cameras can be extended into the wavelength bands adjacent to the visible range. This, however, requires the use of lenses capable of imaging in these ranges, and of filters that separate these ranges from visible light and subdivide them into useful sub-ranges.

In practice, the NUV range corresponds roughly to the UV-A range (315-400 nm). However, there are no standard, generally accepted definitions of the NUV and NIR ranges, other than that they are, by definition, adjacent to the visible range.

It is quite feasible to remove the NUV- and NIR-cut filters placed in front of camera sensors. In some cameras, including DSLRs with optical viewfinders, this filter must be replaced with a NUV- and NIR-transparent window of similar thickness to allow accurate focusing through the viewfinder. In mirrorless cameras and cameras with live-view capabilities, the filter can simply be removed, although this often prevents focusing to infinity with dedicated camera lenses. In this way, the sensitivity of modern digital cameras can often be extended to a band roughly between 300 and 1,100 nm. Dedicated UV videocameras often use sensors without Bayer filters (which separate visible light into its three primary colors) and without microlenses (which increase the sensitivity to light but often absorb UV-B radiation). Sometimes these videocameras use "naked" sensors unprotected by a glass window. Such videocameras extend their imaging capabilities to 200 nm or even shorter wavelengths.

However a multispectral image is recorded, it has to be made visible to human eyes for visual interpretation. There is no way to display an image with more than three simultaneous primary colors, and all their possible combinations, to human eyes. If the recorded data contains more than three primary colors, at most three of them can be displayed. Each of these primary colors, or information channels, must be remapped (in practice, recolored) to one of the primary colors visible to the human eye. This allows the three colors to be blended into all perceptible combinations (or at least, on current computer displays, a fairly large subset and approximation of all perceptible colors and shades). If more than three channels must be combined in a single image, some other information, like intensity and color range, must be reduced or eliminated. For instance, one might think of remapping an ultraviolet channel to visible violet light, and visible light in the original image to its red, green and blue components. This works fine, as long as the red, green and blue channels do not combine to produce a violet color. However, if both the red and blue channels in a given pixel of the visible-range image contain a similar, higher-than-zero luminance, the mixture of red and blue will produce a violet that cannot be distinguished from the remapped ultraviolet.

For the same reason, it is possible to remap ultraviolet to blue and infrared to red without losing information, but if we also want to include visible light, we must remap all of its channels (the original red, green and blue) to the remaining unused channel, which in this case is the green channel. As a further example, if we have a multispectral image that contains three separate channels, all in the visible green region (for instance, such an image can be used in the remote sensing of vegetation type and health), we can remap one of these channels to blue, another to green and the third to red, and make all of them visible in the same false-color image.
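As a minimal sketch of this kind of remapping (assuming three registered grayscale exposures of the same scene saved as uv.tif, vis.tif and ir.tif; the file names and the use of NumPy and Pillow are my own illustrative choices, not part of any standard workflow), the ultraviolet exposure can be assigned to the blue channel, the visible-light luminance to green, and the infrared exposure to red:

    import numpy as np
    from PIL import Image

    # Load three registered exposures of the same scene, taken through
    # different filters, as 8-bit grayscale arrays (hypothetical file names).
    uv  = np.asarray(Image.open("uv.tif").convert("L"), dtype=np.uint8)
    vis = np.asarray(Image.open("vis.tif").convert("L"), dtype=np.uint8)  # luminance of the visible-light shot
    ir  = np.asarray(Image.open("ir.tif").convert("L"), dtype=np.uint8)

    # Remap: IR -> red, visible luminance -> green, UV -> blue.
    false_color = np.dstack([ir, vis, uv])

    Image.fromarray(false_color, mode="RGB").save("false_color.tif")

The same stacking step accepts any other assignment of recorded bands to display channels, such as the three green-region channels of the remote-sensing example above.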

More complex types of remapping are possible. For instance, one of the visible primary colors can be used to display the difference (or another mathematical operation) between two recorded channels of a multispectral image. Any type of color-remapping results in a false-color image, which may turn out to be easy to visually interpret or completely "alien" and unintelligible to untrained viewers. As a further example, in the following paper I combined images recorded in reflected IR and transmitted IR into false-color images:

  • Savazzi, E. & Sasaki, T. (2013): Observations on land snail shells in near-ultraviolet, visible and near-infrared radiation. Journal of Molluscan Studies. doi: 10.1093/mollus/eys039

A practical procedure for remapping multispectral images to false-color composites with Photoshop Elements is described here (dead link). I also recommend this page, from the same author, about insect vision.

Ideally, a lens for multispectral photography should transmit radiation throughout the spectrum of interest and, additionally, focus all wavelengths of interest equally well onto the sensor. No real lens can do the latter (although optical systems that use only mirrors and no refractive lenses can come close to this goal). Ordinary camera lenses are achromatic, i.e., they focus correctly at two distinct wavelengths. Between these wavelengths, and outside them, light is out of focus to varying degrees. This is called axial chromatic aberration. Different wavelengths are also focused onto different positions on the sensor in peripheral areas of the image. This is called transversal chromatic aberration. It is entirely possible (and actually very common) for an achromatic lens to correct only one type of chromatic aberration, or to correct both types to different extents and at different wavelengths. Usually, camera lenses are designed to correct transversal chromatic aberration and to leave axial chromatic aberration less well corrected, because the latter is masked by the depth of field of the lens in normal use.
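For reference, the textbook condition for an achromatic doublet (two thin lens elements in contact; this is a general optics relation, not a statement about any specific lens mentioned here) requires the element powers \phi_1, \phi_2 and Abbe numbers V_1, V_2 to satisfy:

    \phi = \phi_1 + \phi_2, \qquad \frac{\phi_1}{V_1} + \frac{\phi_2}{V_2} = 0

Since Abbe numbers are positive, the two elements must have powers of opposite sign and glasses of different dispersion; the doublet then brings two design wavelengths to a common focus, while intermediate and outlying wavelengths retain a residual focus error (the "secondary spectrum") that apochromatic designs reduce further.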

Apochromatic lenses correct chromatic aberration at three separate wavelengths, and superachromatic lenses at four. They are usually much more expensive than achromatic lenses. Some lens brands use the "apochromatic", "apochromat" and "apo" denominations more as a marketing gimmick than as a declaration of real design parameters. An "apo" denomination on a microscope objective, on the other hand, is usually more factual. Real apochromatic and superachromatic lenses are generally superior within their design range of wavelengths. However, in principle it is entirely possible for an apochromatic or superachromatic lens to have a poorer correction of chromatic aberration than an achromatic lens at wavelengths well outside the design range. Some (very expensive) microscope objectives correct axial chromatic aberration at one or two visible wavelengths and at one additional wavelength well into the UV-B, and are designed to visually center and focus an excimer laser beam onto its intended target.

Special-purpose lenses like the Nikon UV Nikkor 105 mm f/4.5 and Jenoptik CoastalOpt 60 mm f/4 are ideal for multispectral photography, because they display only minor amounts of chromatic aberration across a wide spectrum. Other lens types, like the UV Rodagon 60 mm f/4.5, are suitable across a narrower spectrum. Many of the "accidental" UV lenses can be used by substantially stopping down the aperture, which masks axial chromatic aberration within the increased depth of field. Several additional lenses can be useful for recording more restricted spectra.

Figure 1. Living snail in near UV, visible light and near IR.

When recording a multispectral image with an ordinary camera, different wavelength bands are recorded by taking a set of pictures with different types of filters mounted on the lens. This implies that only static subjects can be recorded. The camera, light source, diffusers/reflectors and subject must be mounted on rigid supports and must keep a constant relative position and orientation among images. Unless absolutely necessary, the lens should not be refocused, because this usually causes a slight change in image magnification, which in turn causes the images taken in different bands not to overlap properly. The lens aperture and illumination should also remain constant. Exposure can be adjusted by changing the exposure time or, with electronic flash, by manually adjusting the flash power output. Of course, these techniques are not feasible with all subjects, and compromises may be necessary. For instance, when recording living snails (Figure 1), I had no practical way to force the snail to keep the same position, and in this case it is simply not possible to combine the different images into a single composite image. The IR image (rightmost) shows anatomical details ("vein" patterns and air bubbles) of the snail mantle through the shell, not detectable in the UV and visible images.
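When the residual misalignment between exposures is only a small shift (rather than the subject moving freely, as with the snail above), a simple translational registration can sometimes rescue the set before compositing. A minimal sketch, assuming scikit-image and SciPy are available and that the misalignment is a pure translation (file names hypothetical):

    import numpy as np
    from PIL import Image
    from scipy.ndimage import shift as nd_shift
    from skimage.registration import phase_cross_correlation

    # Reference (e.g., visible-light) and moving (e.g., UV) exposures as float arrays.
    ref = np.asarray(Image.open("vis.tif").convert("L"), dtype=np.float32)
    mov = np.asarray(Image.open("uv.tif").convert("L"), dtype=np.float32)

    # Estimate the (row, column) shift between the two frames by phase correlation.
    offset, error, _ = phase_cross_correlation(ref, mov, upsample_factor=10)

    # Re-sample the moving frame onto the reference grid before building the composite.
    aligned = nd_shift(mov, shift=offset, order=1, mode="nearest")
    Image.fromarray(np.clip(aligned, 0, 255).astype(np.uint8)).save("uv_aligned.tif")

This only compensates for a rigid shift; changes in magnification or perspective between exposures cannot be corrected this way, which is why rigid supports and an unchanged focus setting remain the preferable approach.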

Figure 2. Orchid (Phalaenopsis sp.) photographed through different filters, and false-color remappings of the resulting images:
  A. Thorlabs FB340-10 filter, 335-345 nm NUV-pass.
  B. Asahi Spectra XRR0340 filter, 295-370 nm NUV-pass.
  C. Baader U filter, 330-370 nm NUV-pass.
  D. Schuler UV filter, 350-400 nm NUV-pass.
  E. B+W 486 filter, visible light only (UV- and IR-cut).
  F. NIR-pass filter, >850 nm.
  G. NIR-pass filter, >1,000 nm.
  H. Remapped 330-370 nm (red), 325-345 nm (green), 330-370 nm (blue).
  I. Remapped 315-375 nm (red), 330-370 nm (blue).
  J. Remapped 335-345 nm (green), 330-370 nm (blue).

Figure 2 shows an orchid (Phalaenopsis sp.) photographed with four types of UV-pass filters, a UV- and IR-cut filter that transmits only visible light, and two IR-pass filters. The transmission bands of the filters are only approximate, and do not reflect the actual peak of the spectral distribution in the recorded image, which is affected by several additional factors. The false-color image in Figure 2H uses three channels and displays color halos around the perimeter of some petals, caused by lateral chromatic aberration of the lens used and perhaps by slight movement of the subject between successive shots. False-color remappings that involve only two of the channels (Figure 2I, J) also show interesting differences in absorption in different, albeit partly overlapping, bands. It is often difficult to clearly identify the wavelength bands recorded in the different color channels when wideband filters are used. If overlap between adjacent bands is undesirable, narrow bandpass filters can be used instead.

Some of the above images recorded with wideband filters (especially Figure 2B) show different information recorded in different color channels, as a result of the different spectral sensitivity of each sensel type. These images could be split into separate channels and each channel could be processed and remapped separately.
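A minimal sketch of such per-channel processing, assuming the wideband-filter shot was saved as an ordinary RGB file (the file name and the simple contrast stretch are only illustrative choices):

    import numpy as np
    from PIL import Image

    rgb = np.asarray(Image.open("wideband_shot.tif").convert("RGB"), dtype=np.float32)

    channels = []
    for i in range(3):
        ch = rgb[:, :, i]
        # Stretch each channel independently to the full 0-255 range,
        # since each sensel type recorded a somewhat different band.
        lo, hi = ch.min(), ch.max()
        channels.append(((ch - lo) / max(hi - lo, 1e-6) * 255).astype(np.uint8))

    # Reassemble (or remap in a different order) the separately processed channels.
    Image.fromarray(np.dstack(channels), mode="RGB").save("per_channel_stretch.tif")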

Unlike orchids with petals of uniform colors, which typically are not very interesting in UV images, this species/cultivar shows a distinctly uneven color pattern in both visible and UV images. The red patches in the visible image correspond to a selectively high absorption at some UV wavelengths and a selectively low absorption (in effect, a reversal of the pattern) at other UV wavelengths. As is typical of flowers, no patterns are visible in the IR images, probably because no insects seem to be capable of vision in the near IR (although some insects do have medium- or far-IR sensors). Longer NIR wavelengths (1,000 nm and above, versus 850-950 nm) are more penetrating and produce a slightly more translucent appearance.

Incidentally, orchids have been known for quite some time to display UV patterns. For instance, see:

The UV filters used in Figure 2 are discussed in detail here.