Uploading and using multispectral data

How to process and interact with multispectral data

Written by Karen Joyce

Who can use this feature

Editors of a project within a Pro + workspace.

While standard cameras capture data from red, green, and blue (RGB) colours (just like our eyes), multispectral sensors capture data in additional wavelengths as 'bands' or 'layers'. Making measurements from light beyond what our eyes can see is remote sensing's 'superpower'. It's used in all sorts of applications including agriculture, forestry, fire management, weed detection, mineral identification, and habitat monitoring.

For more information about multispectral imaging, read on here.

What is the best way to upload multispectral data?

We recommend starting a new project, or opening an existing one and simply dragging your folder/s of data into the project.

Often you will have a sequence of RGB images, and each multispectral band will be stored in its own file. The RGB images will look like 'normal' photos, while the multispectral images will be in greyscale.

Make sure that you upload all of your images together as a single dataset. Don't separate out the RGB, and don't upload individual bands separately.

How do I view my multispectral data?

When your data are processed, you will see the layers appear in the table of contents of your project. If you uploaded RGB data as well as the multispectral bands, the multispectral orthomosaic will appear below the RGB orthomosaic. This means that you will need to 'turn off' the RGB layer to see the multispectral one.

The multispectral data behave as any other layer in your project. This means you can use the compare tool to swipe between layers, perform contrast enhancement, and inspect pixel values.

How are multispectral data processed on GeoNadir?

As soon as you upload geotagged images to GeoNadir, they will automatically be pushed through our processing pipeline.

Usually the RGB sensor has a higher spatial resolution than the multispectral ones. So we use these data to generate the DSM and DTM, as well as the RGB orthomosaic.

The multispectral layers will then be processed together to create the multispectral orthomosaic. As above, it's important that you upload these together, not as separate datasets. When uploaded together, we are able to calibrate for offsets in each sensor location and your layers will line up properly.

What multispectral sensors are compatible?

We are sensor agnostic! As long as your camera is capturing overlapping geotagged images, we can process them. We most commonly see data from the DJI Mavic 3 Multispectral, MicaSense, and Parrot Sequoia cameras.

However, not all sensors can be radiometrically calibrated (that is, have their pixel values converted from digital numbers (DN) to reflectance during processing), as this depends on the calibration information the manufacturer provides. Here is a list of sensors/drones that can be radiometrically calibrated:

  • DJI Phantom 4 Multispectral

  • DJI Mavic 3 Multispectral

  • MicaSense Altum

  • MicaSense RedEdge-M

  • MicaSense RedEdge-MX

  • Sentera 6X

We aim to support as many sensors as possible and will keep updating this list.
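Conceptually, radiometric calibration maps each band's raw digital numbers (DN) to reflectance. Here is a minimal sketch of the idea, assuming a simple linear gain/offset model; real pipelines (including GeoNadir's) derive per-band coefficients and irradiance corrections from the sensor's metadata, so treat the `gain` and `offset` values below as illustrative only:

```python
import numpy as np

def dn_to_reflectance(dn, gain, offset):
    """Convert raw digital numbers (DN) to reflectance with a linear model.

    Illustrative simplification: real calibration uses per-band
    coefficients and irradiance data from the sensor's metadata.
    """
    dn = np.asarray(dn, dtype=np.float64)
    return gain * dn + offset

# Example: scale a 16-bit band into the 0-1 reflectance range
band = np.array([[0, 32768], [49152, 65535]], dtype=np.uint16)
print(dn_to_reflectance(band, gain=1.0 / 65535, offset=0.0))
```

After calibration, pixel values across flights and sensors are directly comparable, which is what makes indices like NDVI meaningful.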

Can I download the processed data?

Absolutely! Once the data processing is complete, you will have the option to download the original images, RGB orthomosaic, multispectral orthomosaic, DSM, and DTM.

Can I use multispectral data in other software?

Most definitely. There are many powerful analytical tools available for analyzing multispectral data, and doing justice to the money you spent purchasing the camera! You will first need to download your data (as above) before opening in your favorite software. We like using ENVI, QGIS, and ArcGIS Pro.
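A classic first analysis on a downloaded multispectral orthomosaic is band math such as NDVI. Here is a minimal NumPy sketch, assuming you have already read the red and near-infrared bands into arrays (for example with rasterio or your GIS of choice); the sample values are made up for illustration:

```python
import numpy as np

def ndvi(red, nir):
    """Normalized Difference Vegetation Index: (NIR - red) / (NIR + red)."""
    red = np.asarray(red, dtype=np.float64)
    nir = np.asarray(nir, dtype=np.float64)
    denom = nir + red
    # Guard against division by zero where both bands are 0
    return np.where(denom == 0, 0.0, (nir - red) / np.where(denom == 0, 1, denom))

# Healthy vegetation reflects strongly in NIR, so NDVI approaches +1
red = np.array([[0.10, 0.20]])
nir = np.array([[0.50, 0.20]])
print(ndvi(red, nir))
```

The same pattern extends to other indices (NDRE, NDWI, and so on) by swapping in the relevant bands.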

Why are some of the pixel values in my multispectral data negative?

Negative values in radiometric data can be a common issue across various drone sensors, but here we explain the specifics for MicaSense multispectral sensors. These negative values can occur due to low-angle sunlight and the orientation of the drone during flight. When the drone banks away from the sun, especially at low sun angles, the DLS 2 sensor may fail to detect sufficient light, leading to erroneous horizontal irradiance measurements. This lack of light can result in negative values in the radiometric data. To avoid this, it's recommended to use a calibrated reflectance panel to ensure accurate calibration under varying lighting conditions. For the best practice of multispectral image capture or calibration, it is recommended to reach out to the manufacturers as well.
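If a few stray negative reflectance values remain after calibration, a common post-processing workaround is to clip them to zero before computing indices. A minimal NumPy sketch; note that clipping only masks the symptom, and capturing with a calibrated reflectance panel, as described above, is the proper fix:

```python
import numpy as np

def clip_negative_reflectance(band):
    """Clip physically impossible negative reflectance values to 0."""
    return np.clip(np.asarray(band, dtype=np.float64), 0.0, None)

# Small negative values sometimes appear from irradiance errors
band = np.array([-0.02, 0.15, 0.40])
print(clip_negative_reflectance(band))
```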
