Landsat-live is now twice as crisp! We’ve added pansharpening to our constantly refreshed Landsat-live pipeline, and are now serving NASA’s Landsat 8 data at 15 meter resolution. The improvement from 30 meter resolution to 15 meter resolution is like putting on a pair of glasses with the right prescription.
Hover over each scene below to compare it with the previous resolution:
Neighborhoods in Mexico City (Chapultepec to the north, Benito Juárez in the east)
Potassium chloride processing ponds in Lop Nur, China
Bilbao, Spain, around the Exhibition Center building
Halytsynove, Ukraine, on the Inhul River near where it meets the Black Sea
Pansharpening is the process of combining the color (spectral resolution) of Landsat 8’s RGB data with the sharpness (spatial resolution) of its panchromatic data. We used a variation on the Brovey transform for pansharpening, which is appropriate for data like Landsat where the pan band is relatively similar in resolution to the color bands. Different techniques would work better for some other data, where the ratio of the resolutions can go as high as 5:1. To understand pansharpening better, let’s look at the way Landsat collects data.
Landsat 8 carries two sensors, each specified for a different data collection task. Like an ordinary color camera, Landsat 8’s Operational Land Imager (OLI) measures reflected light at a number of visible and near infrared wavelengths, while the Thermal Infrared Sensor (TIRS) collects thermal information much farther away from the visible range. Together these instruments collect nearly a dozen wavelength ranges. Among them are what we need to composite true color imagery – the primary colors of visible light: red, green, and blue. The images from those bands each have 30 m (100 ft) spatial resolution. Here’s the city of Venice, Italy, in red light, green light, and blue light:
Left to right: the red, green, and blue color components of Venice
To make an image the way our eyes or an ordinary phone camera would see it, we “stack” the bands to create RGB color:
True-color RGB (red, green, and blue) image of Venice
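The stacking step above is easy to see in code. Here is a minimal sketch with NumPy, using tiny hypothetical arrays in place of real Landsat bands (the pixel values and array sizes are invented for illustration):

```python
import numpy as np

# Hypothetical single-band arrays standing in for the red, green, and
# blue Landsat bands (8-bit values after contrast stretching)
red   = np.array([[255, 0], [128, 64]], dtype=np.uint8)
green = np.array([[0, 255], [128, 64]], dtype=np.uint8)
blue  = np.array([[0, 0], [255, 64]], dtype=np.uint8)

# Stack along a third axis to get shape (rows, cols, 3) -- an RGB image
rgb = np.dstack([red, green, blue])
```

Each pixel of `rgb` now holds a red, green, and blue value, which is exactly what a screen needs to display true color.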
This is an example of a multispectral image, meaning it’s composed of multiple parts of the spectrum: our red, green and blue components. We can abbreviate that to RGB, or just call it true color. The full electromagnetic spectrum covers all possible frequencies of radiation, including radio waves (much lower frequency than visible light) and gamma rays (much higher frequency), and images that make those wavelengths visible to the eye are false color.
Landsat 8’s bands sample parts of the electromagnetic spectrum that are most useful for monitoring the Earth’s surface – making visible its broad cityscapes, geologies, and ecologies. Along with the red, green, and blue bands, Landsat 8 collects a panchromatic – or just “pan” – band at 15 m (50 ft) spatial resolution. An image from the pan band is similar to black-and-white film: it combines light from the red, green, and blue parts of the spectrum into a single measure of overall visible reflectance. It looks like this:
Panchromatic image of Venice
The panchromatic images are sharper than the multispectral images because the broader spectral width allows smaller detectors to maintain a high signal-to-noise ratio. In other words, when you look at a greater range of light, you collect more photons, allowing you to distinguish smaller features more reliably.
Landsat’s panchromatic spatial resolution is two times better than its multispectral spatial resolution. We usually talk about resolution linearly: 15 m is twice the resolution of 30 m. But because the pixels form 2D images, it’s also valid to say that 15 m is four times the resolution of 30 m, because it means four times as many pixels per unit area: a 30 m pixel covers 900 m², while a 15 m pixel covers only 225 m², so four 15 m pixels fit inside one 30 m pixel.
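The Brovey transform mentioned earlier can be sketched in a few lines of NumPy. This is a simplified, unweighted version: the function name, the lack of per-band weights, and the zero guard are our assumptions for illustration, not the pipeline's exact code.

```python
import numpy as np

def brovey_pansharpen(r, g, b, pan):
    """Unweighted Brovey transform: take brightness from the sharp
    pan band and hue from the RGB bands. All inputs are float arrays
    already resampled onto the 15 m pan grid."""
    rgb_mean = (r + g + b) / 3.0
    # Guard against division by zero over nodata / fully dark pixels
    safe_mean = np.where(rgb_mean > 0, rgb_mean, 1.0)
    ratio = np.where(rgb_mean > 0, pan / safe_mean, 0.0)
    return r * ratio, g * ratio, b * ratio
```

Each color band is scaled by the ratio of the pan value to the mean of the three color bands, so local contrast follows the 15 m pan data while the color balance of the original RGB is preserved.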
Here’s our view of Venice before and after pansharpening:
RGB (left) and pansharpened Venice.
Before pansharpening, we can see the backward S shape of the Grand Canal, but little else in the city. After pansharpening, the lines of smaller canals come out, as well as details like boats and even large buildings.
Rio-mucho: integrating pansharpening at scale
Our Landsat-live pipeline processes 120,000 megapixels (675 GB of images) on a typical day. To integrate image processing algorithms like pansharpening, and to work with images quickly and at scale, we build modular, robust optimization tools.
Our pansharpening process uses two essential Python libraries, rasterio and rio-mucho. Rasterio is a Python wrapper around GDAL for raster processing functions. It is designed to help Python programmers increase their efficiency and productivity in GIS data work, as well as to help GIS analysts learn important Python protocols and idioms. Rio-mucho is a companion library built on rasterio that parallelizes windowed image processing.
With Landsat, we work on several gigabytes of data at once in any single processing task. We feed rio-mucho a set of input rasters and a set of functions that we want applied to them. Rio-mucho then takes over, logically applying the passed in functions to small windowed chunks of the images in parallel, completing image processing jobs with a minimum of overhead for both the computer and the programmer. Rio-mucho gives us a standard way to efficiently run heavyweight algorithms on rasters that are larger than our available RAM.
The graph below shows the difference in memory consumption and processing time to pansharpen a 15,500×15,750 pixel image, with and without rio-mucho:
Rio-mucho cut memory consumption by 7×, thereby avoiding disk paging issues, and also sped up the running time by 3×. We hope you’ll find rio-mucho as useful as we have.
We’re just getting started!
Pansharpening is one of several image processing algorithms we are implementing in our Landsat-live pipeline to get a clearer, more accurate view of the world. The potential of the open, always-expanding Landsat 8 dataset is huge. Feel free to connect with me, @bluebweee, on Twitter, and follow @mapbox for any updates on pansharpening or upcoming projects! If you have any questions, please hit us up.