Summary

This is a DIRSIG5-era demonstration of viewing an exo-atmospheric object from a ground-based sensor. In this case, the object is a suspicious "moon" satellite in a static, geosynchronous orbit position.

The following demos, manuals and tutorials can provide additional information about the topics at the focus of this demo:

  • Related Demos

    • See MoonSat1 for the original DIRSIG4 version of this demo.

  • Related Manuals

    • N/A

  • Related Tutorials

    • N/A

Details

The key to this demo, and what makes it specific to DIRSIG5, is how the target object and the sensor are located. In DIRSIG4, the ray tracer used double precision, which allowed the user to create scenes that could span the solar system and still have 1 mm precision. DIRSIG5 adopted a single-precision ray tracer to reduce memory usage and increase speed. However, that prevents building a single scene that includes, for example, the entire Earth and the Moon in one coordinate space. Instead, the approach for building large scenes (including scenes with exo-atmospheric objects) is to make smaller, individual scenes that are positioned using the Earth-Centered, Earth-Fixed (ECEF) coordinate system. The ECEF coordinate system in DIRSIG5 is a double-precision coordinate system that serves as the glue for traversing between the various scenes and the sensors.
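As a concrete illustration, the short Python sketch below (not part of the demo) converts a geodetic latitude, longitude and altitude into ECEF coordinates, assuming the WGS84 ellipsoid. It reproduces the sensor location that appears later in the demo.ppd file:

import math

def geodetic_to_ecef(lat_deg, lon_deg, alt_m):
    # Convert geodetic coordinates to ECEF on the WGS84 ellipsoid.
    a = 6378137.0               # WGS84 semi-major axis [m]
    f = 1.0 / 298.257223563     # WGS84 flattening
    e2 = f * (2.0 - f)          # first eccentricity squared
    lat = math.radians(lat_deg)
    lon = math.radians(lon_deg)
    n = a / math.sqrt(1.0 - e2 * math.sin(lat) ** 2)  # prime vertical radius
    x = (n + alt_m) * math.cos(lat) * math.cos(lon)
    y = (n + alt_m) * math.cos(lat) * math.sin(lon)
    z = (n * (1.0 - e2) + alt_m) * math.sin(lat)
    return x, y, z

# The sensor in demo.ppd sits at 0 deg latitude, 0 deg longitude, 0 m altitude:
print(geodetic_to_ecef(0.0, 0.0, 0.0))  # -> (6378137.0, 0.0, 0.0)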

Here is a brief summary of the approach in this demo:

  • The "moon" object is a scene that is composed of just the moon object. Although it has a traditional geographic origin in the moon.scene file, it is largely irrelevant since the only object in the scene is positioned using the ECEF coordinate system.

    • This establishes a compact, localized coordinate system for this object that the ray tracer uses.

  • The sensor platform is also positioned via the demo.ppd file using the ECEF coordinate system.

    • As a result, the DIRSIG5 BasicPlatform plugin will shoot rays in the double-precision ECEF coordinate system that traverse into the local, single-precision coordinate systems of the scenes in the simulation.

Important
The most important aspect of small scenes and good precision management is not just the size of the scene, but also that the scene is localized around its local origin. Do not create a scene whose geometry is only 10 m across but is offset 1,000 km from the local origin. That large bias in the geometry coordinates uses up valuable bits of the single-precision storage. Instead, center the 10 m scene around the local origin and introduce the offset when the scene is inserted.
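The cost of such a bias is easy to demonstrate. This short NumPy sketch (not part of the demo) shows that a 1 mm feature becomes unrepresentable in single precision once the geometry is biased 1,000 km from the origin:

import numpy as np

offset = np.float32(1.0e6)          # 1,000 km bias from the local origin [m]
vertex = np.float32(1.0e6 + 0.001)  # a 1 mm feature on top of that bias

print(vertex - offset)      # -> 0.0 (the millimeter has been lost)
print(np.spacing(offset))   # -> 0.0625, the representable spacing at 1e6 m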

To summarize the setup, the "moon" is statically positioned above the Earth and will be imaged from a statically positioned sensor on the Earth, looking up at it. During the 24 hours that will be simulated, the Earth makes a complete rotation and the "moon" rotates with it. In the middle of the day, the sensor looks up at the "back side" of the "moon", which faces away from the Sun. In the middle of the night, the sensor looks up at the "front side" of the "moon", which faces toward the Sun. In between, we expect to see the Sun-illuminated portion of the "moon" move across the visible face.

Important Files

Geometry

The "moon" satellite is modeled using the built-in sphere geometry via the GLIST file, which uses the native UV mapping of the sphere to wrap a texture around it. The "moon" is positioned into the scene using an ECEF location via the FlexMotion model. Note that this single instance for the moon has the tag moon associated with it to simplify tasking our sensor to image it.

The moon.glist file.
<geometrylist>
  <object>
    <basegeometry>
      <sphere>
        <matid>moon</matid>
        <radius>9800</radius>
      </sphere>
    </basegeometry>
    <dynamicinstance tags="moon">
      <motion type="flexible">
        <locationengine type="waypoints">
          <data source="internal" datetime="relative" frame="ecef" delimiter="," loop="false">
            <![CDATA[
              0.0,40862089.098,0.000,5988075.458
            ]]>
          </data>
        </locationengine>
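        <!-- One entry: relative time [s], then XYZ Euler angles in radians (1.5707 ≈ π/2) -->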
        <orientationengine type="euler">
          <data source="internal" datetime="relative" frame="ecef" order="xyz" delimiter="," loop="false">
            <![CDATA[
              0.0,1.5707,1.5707,0.0
            ]]>
          </data>
        </orientationengine>
      </motion>
    </dynamicinstance>
  </object>
</geometrylist>

Materials

The materials for this scene are incidental, as the focus of the demo is on the positioning and viewing of an exo-atmospheric object. The material uses the ClassicEmissivity optical property with a texture map to introduce variation. The source image for the moon texture map can be found in maps/moon.png.

The moon material file.
MATERIAL_ENTRY {
    ID           = moon
    NAME         = Moon
    EDITOR_COLOR = 0.2, 0.2, 0.2
    DOUBLE_SIDED = TRUE

    RAD_SOLVER_NAME = Simple
    RAD_SOLVER {
        QUALITY = LOW
    }

    SURFACE_PROPERTIES {
        EMISSIVITY_PROP_NAME = ClassicEmissivity
        EMISSIVITY_PROP {
            FILENAME = gray.ems
            TEXTURE_MAP {
                IMAGE_LIST {
                    IMAGE {
                        FILENAME = moon.png
                        MIN_WAVELENGTH = 0.4
                        MAX_WAVELENGTH = 0.7
                    }
                }
                UV_PROJECTOR {
                    ORIGIN = IMAGE
                    FLIPX = FALSE
                    FLIPY = FALSE
                    EXTENDX = MIRROR
                    EXTENDY = MIRROR
                }
            }
        }
    }
}

Platform and Tasking

This simulation uses a platform with a simple 320 x 240 (QVGA) camera that has a read-out rate of 0.00055555556 Hz (a period of 1,800 seconds, or 30 minutes). The demo.tasks file defines an instantaneous capture, and the video.tasks file defines a roughly 24 hour (84,000 second) collection window, which produces 47 capture frames.
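The capture count follows directly from the window and the read-out period. As a quick sanity check (a Python sketch, assuming a capture at t = 0 and one every period thereafter):

# Number of captures in the video.tasks collection window.
period_s = 1800.0                        # 1 / 0.00055555556 Hz
window_s = 84000.0                       # collection window length [s]
frames = int(window_s // period_s) + 1   # captures at t = 0, 1800, ..., 82800
print(frames)                            # -> 47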

The platform uses a tracking mount (keyed to the moon tag associated with the moon instance in the GLIST file) to simplify pointing the sensor in ECEF space.

The platform PPD file using ECEF location and orientation.
<platformmotion type="generic">
  <method type="raw" />
  <locationjitter/>
  <orientationjitter/>
  <data rotationframe="ecef" rotationorder="yzx" angletype="absolute" angularunits="radians" spatialunits="meters">
    <entry>
      <datetime type="relative">0</datetime>
      <position>
        <location type="ecef">
          <x>6378137.0</x>
          <y>0</y>
          <z>0</z>
        </location>
      </position>
      <orientation>
        <eulerangles>
          <cartesiantriple><x>0.707</x><y>2.96965894</y><z>0</z></cartesiantriple>
        </eulerangles>
      </orientation>
    </entry>
  </data>
</platformmotion>

Simulations and Results

Single-Frame (Still) Simulation

To run the single-frame simulation, perform the following steps:

  1. Run the DIRSIG demo.jsim file

  2. Load the resulting demo-t0000-c0000.img file in the image viewer.

The single-frame simulation produces a single image frame.

Figure 1. Output of the single-frame simulation.

Multi-Frame (Video) Simulation

To run the multi-frame simulation, perform the following steps:

  1. Run the DIRSIG video.jsim file

  2. Load the resulting demo-t0000-c0000.img, demo-t0000-c0001.img, etc. files in the image viewer.

The imaging instrument is set up to use the "file per capture" output schedule. As a result, the simulation produces 47 separate image files for the 47 captures. The individual frames were scaled from radiances to 8-bit PNG files using the DIRSIG image_tool utility. Normally image_tool will autoscale each input image individually, but in this case we want the scaling used for each image to be the same. Furthermore, we want a fully illuminated frame to define the scaling for all the images. The trick used here is to add the --all_same flag, which computes the scales from the first image and applies them to the remaining images. To get the right scaling, we explicitly provide a fully illuminated moon frame as the first image file and then use a wildcard to specify the rest of the image files. The wildcard will include the fully illuminated image as well, which means it will get scaled a second time. But the scaling is a quick operation, and the redundancy is a small price for the convenience of this method.

Converting the individual frames to PNG with a common scaling.
$ image_tool convert --minmax --all_same demo-t0000-c0019.img demo-t0000-c00*.img

The video below was created from these 47 frames using the ffmpeg tool.
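The exact invocation is not part of the demo, but a minimal sketch would look like the following (assuming the PNG frames keep the demo-t0000-cNNNN naming from above; the 12 fps rate and the moonsat.mp4 output name are arbitrary choices):

$ ffmpeg -framerate 12 -i demo-t0000-c%04d.png -pix_fmt yuv420p moonsat.mp4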

The output of the multi-frame simulation (with the default Earth enabled).

When the "moon" is fully illuminated, it is directly facing the Sun. At the same time, the sensor is directly away from the Sun. Hence, the Earth is between the Sun and the "moon". Therefore, we should expect to see the shadow of the Earth on the "moon" in the middle of the night (aka, it’s "full moon" phase). This can be observed as the moon getting darker in the frame when it is expected to be fully illuminated (see demo-t0000-c0018.img corresponding to the darker frame in the above video). To eliminate the shadow of the Earth, you need to disable the built-in Earth core using the --disable_earth_core option. The video below was created with this option engaged and the dark "full moon" frame is eliminated.

The output of the multi-frame simulation with the default Earth disabled.