If you're flying the Mavic 3 Enterprise or a similar drone with both RGB and multispectral capabilities, there are a few things you need to keep in mind about how you set your mission-planning parameters and how your orthomosaics will look once processed.
What are the key differences in the camera specs?
Besides the obvious spectral sensitivity differences, the factors that most affect your data capture are the camera field of view (FOV), image array size, and the number of effective pixels.
It's best to check on your specific model, but as an example, here's how the multispectral and RGB sensors on the Mavic 3M differ:
FOV: 84 deg for RGB; 73.91 deg for MS. This means the RGB sensor has a wider-angle lens (a bit like having better peripheral vision), so the area on the ground you are imaging will be larger than with the MS sensor. Note that the actual angle differs slightly depending on whether you are looking at the length or the width dimension, but don't worry about that here.
Below is an image showing the difference in coverage of a multispectral band (in this case, green) compared to the RGB image captured at the same time.

Image array size: 5280×3956 for RGB; 2592×1944 for MS. This is the number of pixels in each photo, expressed as width × height. The difference here means there will be many more pixels in your RGB data over the same area, which translates to a higher spatial resolution for the RGB data.
Number of effective pixels: 20 MP for RGB; 5 MP for MS. This is largely the same information as the image array size (5280 × 3956 ≈ 20 million pixels; 2592 × 1944 ≈ 5 million), but people are often more familiar with describing cameras in terms of megapixels. In short, the RGB camera captures more spatial detail than the MS camera.
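To make these numbers concrete, here's a rough sketch of how FOV and array size combine to give footprint and GSD. The 100 m altitude is illustrative only, and treating the quoted FOV as a simple across-frame angle is a simplification (manufacturers often quote the diagonal FOV), so take the outputs as ballpark figures rather than Mavic 3M specifications:

```python
import math

def footprint_m(altitude_m: float, fov_deg: float) -> float:
    """Ground distance spanned by the camera at nadir, assuming flat
    terrain and treating the stated FOV as acting across the frame
    (a simplification -- quoted FOVs are often the diagonal)."""
    return 2 * altitude_m * math.tan(math.radians(fov_deg / 2))

def gsd_cm(altitude_m: float, fov_deg: float, pixels: int) -> float:
    """Approximate ground sample distance in cm per pixel."""
    return footprint_m(altitude_m, fov_deg) / pixels * 100

# Illustrative comparison at 100 m altitude:
rgb_footprint = footprint_m(100, 84.0)    # wider lens -> larger footprint
ms_footprint = footprint_m(100, 73.91)
rgb_gsd = gsd_cm(100, 84.0, 5280)         # more pixels -> finer GSD
ms_gsd = gsd_cm(100, 73.91, 2592)
```

Despite its narrower FOV, the MS sensor's much smaller array means its GSD works out roughly 70% coarser than the RGB's at the same altitude under these assumptions.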
How do these differences affect my mission planning?
Since the FOV determines the area covered and the array size determines the spatial resolution (image detail), it's really important to take both into account when planning your mission.
Most mission planners will give you an indication of the expected area covered and the GSD (ground sample distance) that will be achieved based on the altitude you set. However, these calculations are often based on the RGB sensor only! So they will be incorrect when translated to the MS sensor.
Key things to note:
Because the MS image is smaller than the RGB image, the overlap and sidelap calculations will also differ. With each successive photo and flight line, the camera footprint shifts by a fixed distance on the ground, but that distance is a different proportion of each sensor's image size.
In the figure below, you can see that an 80% sidelap in RGB will translate to a 75% sidelap in MS. So if you want to achieve 80% sidelap in MS, you will need to set it higher than 80% in your mission planning software. The same is true for overlap.
Also, because the image footprint is smaller for the MS image, you may need to fly higher to cover the same area. Alternatively, you can fly additional flight lines.
Because the array size is smaller for the MS image, each pixel represents a larger area on the ground. This means that the ground sample distance is greater for the MS image, resulting in decreased detail. If spatial detail is important to you, you will need to fly at a lower altitude. Check out the video below to understand more about the relationship between detail and flying height.
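The notes above can be sketched numerically. Under some simplifying assumptions (flat terrain, the stated FOV treated as the across-track angle, and the Mavic 3M figures quoted earlier), the sidelap conversion and the altitude trade-off look like this; exact numbers will differ slightly from the figure because real planners work with the sensor's horizontal and vertical FOVs separately:

```python
import math

def footprint_m(altitude_m, fov_deg):
    # Ground span at nadir; FOV treated as across-track (simplification).
    return 2 * altitude_m * math.tan(math.radians(fov_deg / 2))

def achieved_ms_sidelap(rgb_sidelap, rgb_fov=84.0, ms_fov=73.91):
    """Sidelap the MS sensor actually gets when the planner spaces
    flight lines using the RGB footprint. Altitude cancels out."""
    spacing = (1 - rgb_sidelap) * footprint_m(1, rgb_fov)
    return 1 - spacing / footprint_m(1, ms_fov)

def required_rgb_sidelap(ms_target, rgb_fov=84.0, ms_fov=73.91):
    """RGB sidelap to request so the MS sensor hits ms_target."""
    spacing = (1 - ms_target) * footprint_m(1, ms_fov)
    return 1 - spacing / footprint_m(1, rgb_fov)

def altitude_for_ms_gsd(target_gsd_cm, ms_fov=73.91, ms_pixels=2592):
    """Flying height (m) that gives the MS band a target GSD.
    GSD scales linearly with altitude: halve the height, halve the GSD."""
    return (target_gsd_cm / 100) * ms_pixels / (
        2 * math.tan(math.radians(ms_fov / 2)))

print(f"80% RGB sidelap -> {achieved_ms_sidelap(0.80):.0%} MS sidelap")
print(f"For 80% MS sidelap, request {required_rgb_sidelap(0.80):.0%} in RGB")
```

With these assumptions, an 80% RGB sidelap works out to roughly 76% for the MS bands, in the same ballpark as the figure's 80% → 75% example; hitting 80% in MS means requesting around 83% in the planner.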
How will the differences affect my processed data?
Each MS image is smaller than the corresponding RGB image, so it is likely that your processed MS orthomosaic will also cover a reduced area.
There are fewer pixels representing the same area covered (smaller array), so your MS orthomosaic will have a larger ground sample distance and reduced detail.
The lenses for the RGB camera and each of the MS bands sit next to each other, so they are looking at slightly different locations on the Earth. Think of this like how your left eye sees something slightly different to your right. While the MS bands are georeferenced to each other during processing, they are not aligned with the RGB data, so you will likely see a slight offset between the orthomosaics generated from MS vs. RGB. This effect is more noticeable at lower altitudes (again, experiment with your eyes and targets at varying distances to experience this). If this is a concern, you can post-process your data in a GIS to align the layers. Alternatively, if it's possible to fly higher, we recommend trying that.