Keywords: platform

Summary

This demo features four cameras on a single platform. Each camera has a different attachment affine transform, so the cameras point in different directions with some overlap between their fields of view. DIRSIG automatically includes basic image geolocation information in the output ENVI header file, so the separate images can easily be mosaicked with standard software packages.



Details

This demo illustrates that a DIRSIG4-era .platform file can have multiple camera instruments attached to it. In this example, a single mount carries four identical cameras, but the model allows any combination of one or more mounts with one or more unique cameras per mount. For example, a system might have a lidar system mounted near the front of the aircraft and an RGB camera mounted near the rear for context imaging.

Important Files

This section highlights key files important to the simulation.

The Multi-Camera Platform File

The primary file of interest in this simulation is the multi-camera platform file (see multi.platform). When loaded into the DIRSIG Platform Editor, this platform presents as a series of four camera instruments attached to a single static mount. Each camera attachment has a unique set of rotations and translations that position the camera relative to the platform and, of course, to the other cameras.

The default platform coordinates are right-handed, with +Y being the forward (along-track) direction of the vehicle, +X the right wing and +Z up. The four cameras are set up so that (a) they are spatially offset from each other and (b) rotated to point in four orthogonal directions. For example, the first camera is at the front, looking forward. To look forward, this camera needs a +X rotation, and to shift forward it needs a +Y translation. To correctly leverage the order of operations in this affine transform, we rotate first and then translate. The remaining cameras use the other permutations of axis rotations and translations:

Table 1. Camera Rotations and Translations

Camera Name   Rotation   Translation
Camera #1     +X         +Y
Camera #2     +Y         +X
Camera #3     -X         -Y
Camera #4     -Y         -X
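The rotate-then-translate ordering matters because translating first would cause the offset itself to be rotated along with the boresight. A minimal numeric sketch of this (plain Python, not DIRSIG code; the 90-degree angle and unit offset are exaggerated for clarity):

```python
import math

def rotate_x(p, angle):
    # Rotate point p = (x, y, z) about the +X axis by `angle` radians.
    x, y, z = p
    c, s = math.cos(angle), math.sin(angle)
    return (x, y * c - z * s, y * s + z * c)

def translate(p, t):
    return tuple(a + b for a, b in zip(p, t))

boresight = (0.0, 0.0, -1.0)  # nominal look direction (down, platform frame)
angle = math.pi / 2           # exaggerated +X rotation
offset = (0.0, 1.0, 0.0)      # shift toward the nose (+Y)

# Rotate, then translate: the offset stays along the platform's +Y axis.
p1 = translate(rotate_x(boresight, angle), offset)  # approx (0, 2, 0)

# Translate, then rotate: the offset gets rotated out of the +Y axis.
p2 = rotate_x(translate(boresight, offset), angle)  # approx (0, 1, 1)
```

The two results differ, which is why the attachment applies the rotation before the translation.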

These attachment operations can be viewed directly in the XML of the multi.platform file as well:

The affine transform setup in the platform XML file for camera #1.
        <affinetransform>
          <xrotate>0.02</xrotate>
          <ytranslate>0.01</ytranslate>
        </affinetransform>
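By analogy, the attachments for the other cameras would pair the rotations and translations listed in Table 1. The fragment below is an illustrative sketch only: the <yrotate> and <xtranslate> element names and the magnitudes are assumed by analogy with the camera #1 entry above, not copied from the demo file.

```xml
<!-- Camera #2 (assumed by analogy with camera #1):
     rotate about +Y, then translate along +X -->
<affinetransform>
  <yrotate>0.02</yrotate>
  <xtranslate>0.01</xtranslate>
</affinetransform>
```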

For this demo, each camera is identical except for the previously discussed attachment transforms, and the output image filename for each camera is unique (otherwise all of the cameras would attempt to write to the same file).

Simulations and Results

This section includes any step-by-step instructions for running and visualizing the simulations included in this demo.

The Single-Camera, Wide-Area Simulation

Run the single-camera, wide-area wide.sim file using DIRSIG4:

$ dirsig4 wide.sim

or using DIRSIG5:

$ dirsig5 wide.sim

The output image can be viewed in the DIRSIG image viewer or scaled to a PNG on the command-line with the DIRSIG image_tool utility:

$ image_tool convert --autoscale=minmax --format=png wide.img
The output of the single-camera, wide-area simulation (wide.img).

The Multi-Camera Area Simulation

Run the multi-camera multi.sim file using DIRSIG4:

$ dirsig4 multi.sim

or using DIRSIG5:

$ dirsig5 multi.sim

The output of this simulation is four separate image files based on the file names in each of the camera focal planes (e.g., camera1.img, camera2.img, camera3.img and camera4.img). The output images can be viewed in the DIRSIG image viewer or scaled to a PNG on the command-line with the DIRSIG image_tool utility:

$ image_tool convert --autoscale=minmax --format=png camera?.img
Figure 1. Output of camera #1 (camera1.img).
Figure 2. Output of camera #2 (camera2.img).
Figure 3. Output of camera #3 (camera3.img).
Figure 4. Output of camera #4 (camera4.img).

The output image header files contain basic 2D ortho-projection data (see below). Each entry includes a 2D image coordinate pair followed by the corresponding latitude and longitude:

The 2D ortho-projection data in the camera1.img.hdr file.
geo points = {
  1, 1, 43.120394433, -78.450389345,
  1, 240, 43.119963839, -78.450388682,
  320, 1, 43.12039423, -78.449604824,
  320, 240, 43.119963935, -78.449605025
}
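Because the four tie points correspond to the image corners, any pixel can be mapped to latitude/longitude by bilinear interpolation. A minimal sketch in Python using the header values above (the pixel_to_geo helper is hypothetical, not part of DIRSIG or ENVI):

```python
# Corner tie points from camera1.img.hdr: (sample, line, lat, lon),
# using 1-based pixel coordinates per the ENVI "geo points" convention.
tie_points = [
    (1,   1,   43.120394433, -78.450389345),
    (1,   240, 43.119963839, -78.450388682),
    (320, 1,   43.12039423,  -78.449604824),
    (320, 240, 43.119963935, -78.449605025),
]

def pixel_to_geo(sample, line, pts=tie_points):
    """Bilinearly interpolate lat/lon from the four corner tie points."""
    (s0, l0, lat00, lon00) = pts[0]   # upper-left corner
    (_,  l1, lat01, lon01) = pts[1]   # lower-left corner
    (s1, _,  lat10, lon10) = pts[2]   # upper-right corner
    (_,  _,  lat11, lon11) = pts[3]   # lower-right corner
    u = (sample - s0) / (s1 - s0)     # fractional position across the image
    v = (line - l0) / (l1 - l0)       # fractional position down the image
    lat = (1-u)*(1-v)*lat00 + (1-u)*v*lat01 + u*(1-v)*lat10 + u*v*lat11
    lon = (1-u)*(1-v)*lon00 + (1-u)*v*lon01 + u*(1-v)*lon10 + u*v*lon11
    return lat, lon

print(pixel_to_geo(1, 1))        # recovers the first tie point exactly
print(pixel_to_geo(160.5, 120.5))  # approximate scene center
```

A full mosaicking tool would also handle projection and resampling, but this is the core of how the header tie points relate pixels to ground coordinates.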

Although beyond the scope of this demo, this geolocation data can be used by many image exploitation or electronic light table (ELT) software packages to stitch or mosaic the four individual camera frames into a single image. The image below was produced using the ENVI image processing and analysis package.

Figure 5. The auto-mosaicked image produced by ENVI from the individual camera frames.