The BasicPlatform plugin was created to provide backward compatibility in DIRSIG5 for the sensor collection capabilities in DIRSIG4.

The inputs to this plugin are the DIRSIG4 era platform, motion (either a GenericMotion or a FlexMotion description) and a tasks file. This allows the plugin to emulate the sensor collection capabilities in DIRSIG4. For more details, see the Usage section at the end of this manual.

Implementation Details

Spatial and Temporal Sampling

This sensor plugin performs spatial and temporal sampling differently than DIRSIG4 did. Specifically, DIRSIG4 used discrete sampling approaches to separately sample the spatial and temporal dimensions of flux onto a given pixel and DIRSIG5 simultaneously samples the spatial and temporal dimensions with uniformly distributed samples.

In DIRSIG4, the amount of sub-pixel sampling was controlled on a per focal plane basis using rigid N x M sub-pixel sampling. Furthermore, delta sampling vs. adaptive sampling was a choice for the user. In contrast the BasicPlatform plugin uniformly samples the detector area and the number of samples used for each pixel is adaptive and driven by the convergence parameters. In general, the entire spatial response description in the input file is ignored by this plugin.

The DIRSIG4 temporal integration feature included an integration time and an integer number of samples to use across that time window. The temporal sampling approach brute-force resampled the entire detector for each sample, which resulted in a proportional increase in run-time. The DIRSIG5 BasicPlatform plugin ignores the number of temporal samples and instead uniformly distributes the ray times across the integration time window.

Advanced Features

This plugin includes a few advanced features that should become permanent features at some point in the future.

Important These advanced feature configurations are not supported by the graphical user interface (GUI) and will get deleted if the user loads and saves the .platform file in the Platform Editor.

Color Filter Array (CFA) Patterns

As an alternative to assigning a focal plane one (or more) channels that are modeled at every pixel, the option also exists to generate mosaiced output of a color filter array (CFA) that can be demosaiced as a post processing step. The currently supported CFA patterns include the Bayer and TrueSense patterns. These are configured by adding the <channelpattern> section to the <spectralresponse> for a given focal plane. Each channel pattern has a list of pre-defined "filters" that must be mapped to the names of channels defined in the <channellist>. Examples for each pattern are shown in the following sections.

Note The DIRSIG package does not come with a tool to demosaic images produced with CFA patterns. Standard algorithms are available in a variety of software packages including OpenCV (see the cv::cuda::demosaicing() function).

Example Bayer Pattern

A 3-color Bayer pattern must have channel names assigned to its red, green and blue filters:

              <spectralresponse>
                <bandpass spectralunits="microns">
                  ...
                </bandpass>
                <channellist>
                  <channel name="RedChannel" ... >
                    ...
                  </channel>
                  <channel name="GreenChannel" ... >
                    ...
                  </channel>
                  <channel name="BlueChannel" ... >
                    ...
                  </channel>
                </channellist>
                <channelpattern name="bayer">
                  <mapping filtername="red" channelname="RedChannel"/>
                  <mapping filtername="green" channelname="GreenChannel"/>
                  <mapping filtername="blue" channelname="BlueChannel"/>
                </channelpattern>
              </spectralresponse>

Example TrueSense Pattern

A 4-color TrueSense pattern must have channel names assigned to its pan, red, green and blue filters:

              <spectralresponse>
                <bandpass spectralunits="microns">
                  ...
                </bandpass>
                <channellist>
                  <channel name="PanChannel" ... >
                    ...
                  </channel>
                  <channel name="RedChannel" ... >
                    ...
                  </channel>
                  <channel name="GreenChannel" ... >
                    ...
                  </channel>
                  <channel name="BlueChannel" ... >
                    ...
                  </channel>
                </channellist>
                <channelpattern name="truesense">
                  <mapping filtername="pan" channelname="PanChannel"/>
                  <mapping filtername="red" channelname="RedChannel"/>
                  <mapping filtername="green" channelname="GreenChannel"/>
                  <mapping filtername="blue" channelname="BlueChannel"/>
                </channelpattern>
              </spectralresponse>

See the ColorFilterArray1 demo for a working example.

Active Area Mask

The user can supply an "active area mask" for all the pixels in the array via a gray scale image file. The brightness of the pixels are assumed to convey the relative sensitivity across the pixel area. A mask pixel value of 0 translates to zero sensitivity and a mask pixel value of 255 translated to a unity sensitivity. Hence, areas of the pixel can be masked off to model front-side readout electronics or to change the shape of the pixel (e.g. circular, diamond, etc.). The mask is used to override the uniform pixel sample with an importance based approach. Therefore, the active area image can contain gray values (values between 0 and 255) that indicate the pixel has reduced sensitivity in that specific area.

The active area mask is supplied via the <activearea> XML element within the <focalplane> element (see example below):

           <focalplane ... >
             <capturemethod type="simple">
               <spectralresponse ... >
                 ...
               </spectralresponse>
               <spatialresponse ... >
                 ...
               </spatialresponse>
               <activearea>
                 
               </activearea>
             </capturemethod>
             <detectorarray spatialunits="microns">
               ..
             </detectorarray>
           </focalplane ... >

The 
                 <scale>10</scale>
               </psf>
             </capturemethod>
             <detectorarray spatialunits="microns">
               ..
             </detectorarray>
           </focalplane ... >
Note The example above mirrors the setup illustrated in the PSF diagram at the start of this section. The image file contains the Airy disk pattern shown as projected onto the focal plane. The size of that spread pattern captured in the image spans approximately 10 pixels. Hence, the scale is 10. The scale is not related to the number of pixels in the PSF image but rather to the effective size of the pattern in focal plane pixel units.

Internal Detector Modeling

By default, the BasicPlatform plugin is producing an at aperture radiometric data product that is by default in spectrally integrated radiance units of watts/(cm2 sr). Depending on the various options set, these units can change. For example:

  • The flux term can be switched from watts to milliwatts, microwatts, or photons/second.

  • The area term can be switched from per cm2 to per m2.

  • Enabling "temporal integration" will integrate the seconds in the flux term resulting in joules, millijoules, microjoules or photons (depending on the flux term settings).

The default output radiometric data products are provided so that users can externally perform modeling of how specific optical systems, detectors and read-out electronics would capture the inbound photons and produce a measured digital count.

For users that do not wish to perform this modeling externally, the internal detector model described here provides the user with the option to produce output data that is in digital counts and features Shot (arrival) noise, dark current noise, read noise and quantization. The general flow of the calculations is as follows:

  1. The at aperture spectral radiance [photons/(s cm2 sr)] is propagated through a user-defined, clear aperture to compute the at pixel spectral irradiance [photons/(s cm2)].

  2. The area of the pixel is used to compute the spectral photon rate onto the pixel [photons/s].

  3. The integration time of the pixel is used to temporally integrate how many photons arrive onto the pixel [photons].

  4. The channel relative spectral response (RSR) and a user-define scalar quantum efficiency (QE) is used to convert the spectral photon arrival rate into the number of electrons generated in the pixel [electrons].

  5. Shot noise is added using a Poisson distribution with a mean of the number of electrons [electrons].

  6. Dark current noise is added using a Poisson distribution with a mean of the number of electrons [electrons].

  7. Read noise is added using a Poisson distribution with a mean of the number of electrons [electrons].

  8. The number of electrons is scaled using a user-defined analog-to-digital converter (ADC) to produce a digital count (DC).

  9. The final digital count value is written to the output.

detection diagram
Figure 5. The flow diagram of the internal detector model.

The internal detector model must be configured by manually editing the .platform file to add the <detectormodel> element and few other elements. The following .platform file excerpt an be used as a template:

        <instrument type="generic" ... >
          <properties>
            <focallength>125</focallength>
            <aperturediameter>0.017857</aperturediameter>
            <aperturethroughput>0.50</aperturethroughput>
          </properties>
          <focalplane ... >
            <capturemethod type="simple">
              <imagefile areaunits="cm2" fluxunits="watts">
                <basename>vendor1</basename>
                <extension>img</extension>
                <schedule>simulation</schedule>
                <datatype>12</datatype>
              </imagefile>
              <spectralresponse ... >
                ...
              </spectralresponse>
              <spatialresponse ... >
                ...
              </spatialresponse>
              <detectormodel>
                <quantumefficiency>0.80</quantumefficiency>
                <readnoise>60</readnoise>
                <darkcurrentdensity>1e-05</darkcurrentdensity>
                <minelectrons>0</minelectrons>
                <maxelectrons>100e03</maxelectrons>
                <bitdepth>12</bitdepth>
              </detectormodel>
              <temporalintegration>
                <time>0.002</time>
                <samples>10</samples>
              </temporalintegration>
            </capturemethod>
          </focalplane>
        </instrument>

Requirements

  • The capture method type must be simple. This internal model is not available with the "raw" capture method, which is used to produce spatially and spectrally oversampled data for external detector modeling.

  • The aperture size must be provided to compute the acceptance angle of the system in order to compute the at focal plane irradiance from the at aperture radiance. This is specified using the <aperturediameter> element in the <instrument><properties>. The units are meters.

  • The focal plane must have temporal integration enabled (a non-zero integration time) in order to compute the total number of photons arriving onto the detectors.

Note The areaunits and fluxunits for the <imagefile> are ignored when the detector model is enabled. These unit options are automatically set to the appropriate values to support the calculation of photons onto the detectors.

Parameters

The following parameters are specific to the detector model and are defined within the <detectormodel> element of the focal plane’s capture method.

quantumefficiency

The average spectral quantum efficiency (QE) of the detectors. This spectrally constant parameter works in conjunction with the relative spectral responses (RSR) of the channel(s) applied to the detectors. The channel RSR might be normalized by area or peak response, hence the name relative responses. This QE term allows the RSR to be scaled to a spectral quantum efficiency. If the RSR curve you provide is already a spectral QE, then set this term to 1.0.

Important Don’t forget to account for the gain or bias values that are defined for each channel. In practice, we suggest not introducing an additional gain transform and leave the channel gain and bias values at the default values of 1.0 and 0.0, respectively.
readnoise

The mean read noise for all pixels in electrons (units are electrons).

darkcurrentdensity

The dark current noise as an area density. The use of density is common since it is independent of the pixel area and will appropriately scale as the pixel area is increased or decreased (units are Amps/m2).

<minelectrons>, <maxelectrons> and <bitdepth>

The analog-to-digital (A/D) converter (ADC) translates the electrons coming off the array into digital counts. Electron counts below the minimum or above the maximum will be clipped to these values. The ADC setup is used to linearly scale the electrons within the specified min/max range into the counts into the unsigned integer range defined by the <bitdepth>. For the example above, a 12-bit ADC will produced digital counts ranging from 04095.

Note Electron counts below the minimum or above the maximum values defined for the ADC will be clipped to the respective upper/lower value.

Options

  • Once the A/D converter scales the electrons into counts, the pixel values are now integer digital counts rather then floating-point, absolute radiometric quantities. Although those integer values can continue to be written to the standard output ENVI image file as double-precision floating-point values (the default), it is usually desirable to write to an integer format. The <datatype> element in the <imagefile> can be included to change the output data type to an integer type (see the the data type tag in the ENVI image header file description).

Note The example above has the A/D producing a 12-bit output value and the output image data type was selected to be 16-bit unsigned. In this case, the 12-bit values will be written on 16-bit boundaries. Similarly selecting 32-bit unsigned would write the 12-bit values on 32-bit boundaries. Selecting an 8-bit output would cause the upper 4-bits to be clipped from the 12-bit values.

Inline Processing

There is also an ability to have this plugin launch an external process to perform operations on the output image. The <processing> element can be added to each focal plane and can include one or more "tasks" to be performed on the supplied schedule. Currently these processing tasks are applied to the radiometric image product and not the truth image product.

The schedule options include:

simulation_started

A processing step that is performed on the radiometric image file at the start of the simulation.

simulation_completed

A processing step that can be performed on the radiometric image file at the end of the simulation.

task_started

A processing step that can be performed on the radiometric image file at the start of each task.

task_completed

A processing step that can be performed on the radiometric image file at the end of each task.

capture_started

A processing step that can be performed on the radiometric image file at the end of each capture.

capture_completed

A processing step that can be performed on the radiometric image file at the end of each capture.

Note The processing schedule is most likely related to the output file schedule for the focal plane. For example, if the focal plane produces a unique file per capture, then the processing schedule would most likely be capture_completed so that it can perform post-processing on the file just produced by the capture event.

The example below is meant to demonstrate how a (fictitious) program called demosaic can be run at the end of each capture. The options (provided in order by the <argument> elements) are specific to the program being executed:

           <focalplane ... >
             <capturemethod type="simple">
              <spectralresponse ... >
                ...
              </spectralresponse>
              <spatialresponse ... >
                ...
              </spatialresponse>
              <imagefile areaunits="cm2" fluxunits="watts">
                <basename>truesense</basename>
                <extension>img</extension>
                <schedule>capture</schedule>
              </imagefile>
              <processing>
                <task schedule="capture_completed">
                  <message>Running demosaic algorithm</message>
                  <program>demosaic</program>
                  <argument>--pattern=truesense</argument>
                  <argument>--output_filename=example.jpg</argument>
                  <inputargument>--input=</inputargument>
                  <removefile>true</removefile>
                </task>
              </processing>
           </focalplane>

The <inputargument> element is special as that argument will be automatically appended with the filename produced by the capture. In this example, the <imagefile> indicates that a file with the basename of truesense and a file extension of .img will be produced for each capture, resulting in filenames such as truesense_t0000-c0000.img, truesense_t0000-c0001.img, etc. When this processing task is triggered after the first capture, the input argument to the demosaic program will be --input=truesense_t0000-c0000.img. Subsequent processing calls will be automatically called with the appropriate filename that is changing from capture to capture.

The <removefile> option allows you to request that the current file produced by DIRSIG be removed after the processing task is complete. In the case of the example above, after the DIRSIG image has been demosaiced, the original mosaiced image could be removed (to save disk space, etc.). The default for this option is false.

Tip These processing steps can also be used to perform tasks unrelated to image data processing. For example, to append the current time into a log file, to perform file housekeeping tasks, etc. The intention is for any program that can be executed from the command-line to be triggerable as part of the simulation event schedule.

Usage

The BasicPlatform is implicitly used when the user launches DIRSIG5 with a DIRSIG4 era XML simulation file (.sim). To explicitly use the BasicPlatform plugin in DIRSIG5, the user must use the newer JSON formatted simulation input file (referred to a JSIM file with a .jsim file extension). At this time, these files are hand-crafted (no graphical editor is available). An example is shown below:

[{
    "scene_list" : [
        { "inputs" : "./demo.scene" }
    ],
    "plugin_list" : [
        {
            "name" : "BasicAtmosphere",
            "inputs" : {
                "atmosphere_filename" : "mls_dis4_rural_23k.atm"
            }
        },
        {
            "name" : "BasicPlatform",
            "inputs" : {
                "platform_filename" : "./demo.platform",
                "motion_filename" : "./demo.ppd",
                "tasks_filename" : "./demo.tasks",
                "output_folder" : "output"
                "output_prefix" : "test1_"
            }
        }
    ]
}]

The optional output_folder and output_prefix variables serve the same purpose as (and take precedence over) the respective command-line --output_folder and --output_prefix options. The value with providing these options in the JSIM file is that each BasicPlatform plugin instance can get a unique value for these options, where as the command-line options will provided each plugin instance with the same value.


1. Robert Fiete, "Modeling the imaging chain of digital cameras"', Tutorial Texts in Optical Engineering, TT92, SPIE Press, 2010