Before going on, I just want to clarify my reasons, again, for covering DigitalGlobe in this series. Primarily, they make it easy to find things. They have a lot of resources on their own site, talking about their imagery products (but maybe not so easy to read, depending on what you’re looking for). So, no budding bromance here for DigitalGlobe. But perhaps some admiration for what they are doing (especially with our floods, of late).
In the last lesson we identified, through DigitalGlobe’s own document about their WorldView-2 satellite, eight color bands available through the multispectral sensor arrays, as well as one panchromatic sensor array. DigitalGlobe notes there are three sensor arrays on board. We could infer the products differ because they list very different pixel resolutions for the multispectral and panchromatic products (roughly two meters versus under half a meter on the ground). At full width, the panchromatic sensor focal plane has 35,420 pixels; the multispectral sensors have only 8,881 per color band.
That’s a big difference: basically, the multispectral sensor produces images that distinguish objects on the ground about two yards long, like a car, or bigger, while the panchromatic sensor can help distinguish objects less than a couple of feet long, around 20 inches. Both figures assume ideal conditions.
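Those resolutions line up with the pixel counts above. As a quick sanity check, here is a rough ground-sample-distance calculation; the ~16.4 km nadir swath width is an assumption pulled from DigitalGlobe’s published WorldView-2 specs, not from this lesson:

```python
# Rough ground-sample-distance (GSD) check from the focal-plane pixel counts.
SWATH_M = 16_400          # assumed swath width at nadir, in meters
PAN_PIXELS = 35_420       # panchromatic pixels across the focal plane
MS_PIXELS = 8_881         # multispectral pixels per color band

pan_gsd = SWATH_M / PAN_PIXELS   # meters of ground per panchromatic pixel
ms_gsd = SWATH_M / MS_PIXELS     # meters of ground per multispectral pixel

print(f"pan GSD: {pan_gsd:.2f} m")   # just under half a meter
print(f"ms  GSD: {ms_gsd:.2f} m")    # about two meters (roughly two yards)
```

Note the ratio of the two pixel counts (35,420 / 8,881 ≈ 4) matches the ratio of the two resolutions, which is what you’d expect when both sensor types share the same swath.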
These images are taken, ultimately, through a method called pushbrooming. This describes how the detector subarrays (the sensors mounted on the focal plane of the imager) are arranged and used. Go here to see the difference between that and another method, called “whisk broom” (pushbroom is the second animated picture). On DigitalGlobe’s WorldView-2, the sensors are laid out as shown in Figure 1 of revision 1 of DigitalGlobe’s Radiometric Use of WorldView-2 Imagery (note there are more panchromatic sensors than multispectral):
Each long (or, in the case of panchromatic, short) bar represents a line of detector pixels. MS refers to multispectral and PAN to panchromatic. So, you can see that the multispectral detector rows line up in parallel, with each line, or row, able to detect a different color. And you can also see that the detectors for all of them are staggered. There are several reasons for staggering these arrays, which you can read here and here, should you want to fall asleep. But I believe the simplest explanation may be that such a configuration yields sharper images with fewer pixels.
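One way to picture that “sharper with fewer pixels” idea (a toy sketch of my reading, not DigitalGlobe’s stated design reasoning): two detector rows offset by half a pixel pitch, when interleaved, sample the scene twice as densely as either row alone.

```python
import numpy as np

# Two staggered detector rows, offset by half a pixel pitch.
pitch = 1.0                        # detector pixel pitch (arbitrary units)
row_a = np.arange(0, 10, pitch)    # pixel centers of the first row
row_b = row_a + pitch / 2          # second row, staggered half a pitch

# Interleaving the two rows halves the effective sample spacing.
combined = np.sort(np.concatenate([row_a, row_b]))
print(np.diff(combined))           # uniform 0.5 spacing: double the density
```

The real arrays are staggered for more reasons than this, as the linked documents explain, but the sampling-density payoff is the easiest one to see.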
These pixels can be exposed, pushbroom-style, for different lengths of time and, with some limitations, simultaneously. DigitalGlobe calls these modes, and there are three of them, identified in 2010: A, B, and C. It sounds like DigitalGlobe favors mode C, which they call the “nominal operating configuration.” This is a combined imaging collection using the panchromatic and multispectral sensor arrays, with different exposure times and data compression levels.
Honestly, there’s a lot going on with these sensors. It sounds like there’s a processor on board that helps collect and compress the images, manages the timing of each sensor band’s exposure, and even integrates all the sensors’ collected data. Please go to this document if you want more detail (and, heaven help you, math) about how things are calculated to produce certain products.
One thing to keep in mind during all the descriptions of these products is the clause at the bottom of the literature: “U.S. regulation requires imagery to be resampled to a minimum of .50 m pan and 2.0 m multispectral.” In other words, DigitalGlobe can probably give you better imagery, but US rules don’t allow it. If you wish to delve more into DigitalGlobe’s request for those rules to be relaxed or lifted, go here.
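To make the resampling step concrete, here’s a minimal sketch: panchromatic samples on a native ~0.46 m grid get re-gridded onto the coarser 0.50 m delivery spacing. Linear interpolation is an illustrative stand-in; nothing here says which resampling kernel DigitalGlobe actually uses.

```python
import numpy as np

native_gsd, delivered_gsd = 0.46, 0.50    # meters per sample
native_x = np.arange(100) * native_gsd    # ground positions of native samples
native_vals = np.sin(native_x)            # stand-in radiance values

# Re-grid onto the coarser, regulation-mandated spacing.
delivered_x = np.arange(0, native_x[-1], delivered_gsd)
delivered_vals = np.interp(delivered_x, native_x, native_vals)

# The delivered product carries fewer samples over the same ground span.
print(len(native_x), "->", len(delivered_x))
```

The point is simply that the resampled product throws information away: fewer samples cover the same stretch of ground, so detail finer than 0.50 m can’t be recovered downstream.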
Speaking of relaxation, it’s time to stop here. More to come, about colors in particular, next lesson.
- DigitalGlobe shows Colorado flood imagery (themadspaceball.wordpress.com)
- Multispectral SWIR Camera (rdmag.com)
- DigitalGlobe First Look Webinar – Essential Imagery for First Responders (gisuser.com)