Expanding Mapnik’s Cartographic Capabilities

April 12 2012 by Artem Pavlenko

We are well into the year of the open map, and people are waking up to what this means for them in all areas of data and design. One size simply doesn’t fit all where maps are concerned, and we are actively developing new features in core Mapnik to give more power to designers.

Up until now Mapnik has concentrated mainly on vector processing - you configure your data sources, apply styles and filters, and get a beautiful map at the end of the process. In the case of tiled maps, these are small raster images which are assembled into the final map. This is where Mapnik would usually stop - after all, there are lots of tools out there for post-processing. But bearing in mind the new, innovative ways people want to use maps, things would get interesting if you could embed image processing/filtering in that chain. Vector and image processing should play together, complementing each other.

Consider the recent beautiful ‘watercolor’ maps from Stamen, or even our own process at MapBox of assembling (compositing) tiles from multiple sources. You can apply post-processing to a rendered map image, perhaps splitting a map into multiple maps, then processing them with third-party tools and finally combining them. But this requires a programmatic approach. Granted, writing a Python or JavaScript script to accomplish this is reasonably easy and fun, but having this implemented internally opens new, interesting doors.

What about making it possible to control both vector and raster processing from within the UI? This would give the power back to cartographers and designers and give them access to the custom cartography they’re looking for. If this is the goal, here are some first steps to get there. The features I’m working on fall into two categories: vector and raster processing or ‘vertex converters’ and ‘image filters’.

Converters

In the vector domain we have ‘converters’, a concept influenced by the design of AGG. A ‘converter’ takes an input (a vertex source) and applies some kind of useful transformation to each vertex. Most importantly, converters can be chained. For example, the ‘vertex converter’ pipeline for closed paths (PolygonSymbolizer) looks like:

[original source] -> [projection transform] -> [viewport transform]

To improve rendering speed at high zoom levels, a ‘clip’ converter can be introduced:

[original source] -> [clip_poly] -> [projection transform] -> [viewport transform]

LineSymbolizer uses two more converters (conv_stroke and, optionally, conv_dash):

[original source] -> [clip_poly] -> [projection transform] -> [viewport transform] -> if(dash):[dash conv] -> [stroke conv]

There is also a ‘smooth’ converter, which automatically approximates straight segments with Bezier curves and can be plugged into this chain:

[original source] -> [clip_poly] -> [projection transform] -> [viewport transform] -> [ smooth val:0.5] -> if(dash)[dash conv] -> [stroke conv]

The ability to ‘lazily’ chain conversions is a powerful concept. More converters can be added - for example, an ‘svg-style transform’ converter to offset/rotate/scale geometries. There are also use cases where clipping is undesirable and needs to be switched off (e.g. curvy ocean text).
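To make the lazy-chaining idea concrete, here is a minimal standalone sketch. Each converter wraps a vertex source and transforms vertices only as they are pulled, which is the same pull-based pattern AGG uses. Note that `path`, `conv_scale`, and `conv_offset` are invented for illustration - they are not Mapnik’s or AGG’s actual converters:

```cpp
#include <cassert>
#include <cstddef>
#include <utility>
#include <vector>

// A vertex source yields (x, y) vertices one at a time.
struct path
{
    std::vector<std::pair<double, double>> verts;
    std::size_t pos = 0;
    bool vertex(double & x, double & y)
    {
        if (pos >= verts.size()) return false;
        x = verts[pos].first;
        y = verts[pos].second;
        ++pos;
        return true;
    }
};

// A converter wraps another vertex source and transforms each vertex
// lazily, only when it is pulled - nothing is computed up front.
template <typename Source>
struct conv_scale
{
    Source & src;
    double factor;
    conv_scale(Source & s, double f) : src(s), factor(f) {}
    bool vertex(double & x, double & y)
    {
        if (!src.vertex(x, y)) return false;
        x *= factor;
        y *= factor;
        return true;
    }
};

template <typename Source>
struct conv_offset
{
    Source & src;
    double dx, dy;
    conv_offset(Source & s, double dx_, double dy_) : src(s), dx(dx_), dy(dy_) {}
    bool vertex(double & x, double & y)
    {
        if (!src.vertex(x, y)) return false;
        x += dx;
        y += dy;
        return true;
    }
};
```

Chaining is just nesting: `conv_offset<conv_scale<path>> moved(scaled, 5.0, 0.0);` - pulling a vertex from `moved` scales it first, then offsets it, with no intermediate copies of the geometry.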

From the user’s point of view, we want to be able to mix and match converters easily, but there is a problem here. While the use case is clear, the ‘lazy’/compiled nature of these pipelines makes implementing them cumbersome: all possible permutations result in lots of repetitive, redundant code. I think I’ve solved this with some help from metaprogramming and Boost.MPL. I abstracted the ‘converter’ logic into something more manageable and easy to use:

// c++
// declare all possible converters to choose from for this particular case (LineSymbolizer)
typedef boost::mpl::vector<clip_line_tag, transform_tag, smooth_tag, dash_tag, stroke_tag> conv_types;

// create 'converter' object supporting the types declared above
vertex_converter<box2d<double>, rasterizer, line_symbolizer,
                 proj_transform, CoordTransform, conv_types>
    converter(ext, *ras_ptr, sym, t_, prj_trans);

// enable individual converters as required
if (sym.clip()) converter.set<clip_line_tag>();      // optional clip
converter.set<transform_tag>();                      // always apply coordinate transform
if (sym.smooth() > 0.0) converter.set<smooth_tag>(); // optional smoothing
if (has_dash()) converter.set<dash_tag>();           // optional dash
converter.set<stroke_tag>();                         // always apply stroke converter

for (unsigned i = 0; i < feature->num_geometries(); ++i)
{
    geometry_type & geom = feature->get_geometry(i);
    converter.apply(geom); // apply chained conversions and feed into the rasterizer
}

The above might look scary, but it is far more readable and maintainable than what we had before.

Let’s see how this ability to turn converters on and off can be used. One example is applying ‘smoothing’ to water polygons:

Original style from demo/python

  <PolygonSymbolizer fill="rgb(153,204,255)"/>

[image: smooth=0.0]

Smoothed water polygons

  <PolygonSymbolizer fill="rgb(153,204,255)" smooth="0.7"/>

[image: smooth=0.7]

In the future I’m sure we can come up with some interesting converters to add to the pipelines.

Image filters and compositing

This is another area I’m working on, and one where I think Mapnik can lead the way. I’m experimenting with different ways of applying compositing and filters. As it currently stands, compositing can be applied at the <Symbolizer> and/or <Style> level, e.g.:

  <Style name="my-style" comp-op="color_dodge"/>
  <PolygonSymbolizer comp-op="src_atop"/>

All compositing modes defined in SVG 1.2 are supported.

One change to be aware of is that a ‘premultiplied’ pixel format is now used throughout the rendering process - this is important for correct compositing. At the end, the final image is ‘demultiplied’ to play nicely with the PNG format, which expects plain (not multiplied by alpha) R,G,B values.
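The premultiply/demultiply round trip can be sketched in a few lines. This is a standalone illustration using floating-point channels in [0, 1] - the names `premultiply`, `demultiply`, and `src_over` are my own, not Mapnik’s internal API, which works on AGG pixel formats:

```cpp
#include <cassert>
#include <cmath>

// One RGBA pixel, channels in [0, 1].
struct rgba { double r, g, b, a; };

// Premultiply: scale the colour channels by alpha, so compositing
// becomes a simple linear blend per channel.
rgba premultiply(rgba c) { return { c.r * c.a, c.g * c.a, c.b * c.a, c.a }; }

// Demultiply: reverse the scaling before writing plain-RGB output (PNG).
rgba demultiply(rgba c)
{
    if (c.a == 0.0) return { 0.0, 0.0, 0.0, 0.0 };
    return { c.r / c.a, c.g / c.a, c.b / c.a, c.a };
}

// Porter-Duff 'src over' on premultiplied pixels:
// one multiply-add per channel, alpha handled uniformly.
rgba src_over(rgba s, rgba d)
{
    double ia = 1.0 - s.a;
    return { s.r + d.r * ia, s.g + d.g * ia, s.b + d.b * ia, s.a + d.a * ia };
}
```

For example, compositing 50%-transparent red over opaque white and demultiplying yields the expected pink (1.0, 0.5, 0.5) - with plain (non-premultiplied) pixels the same blend would need a division per pixel to stay correct.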

This week I also started work on implementing image filters in the Mapnik core. The idea is to provide a framework for adding and chaining image filters at the <Style> level - similar to converters in many ways. So far I have implemented 3x3 2D convolution kernels, as in Gimp, using the Boost.GIL library. The syntax to expose this has not been finalised, but I envisage something along these lines:

<Style name="my-style" image-filter="grey|blur:rx=4,ry=4|emboss|..."/>
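The Boost.GIL implementation isn’t shown here, but the core of a 3x3 kernel pass is small. Here is a standalone single-channel sketch - `convolve3x3` and the replicate-at-border policy are my own illustration, not Mapnik’s code:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// Apply a 3x3 convolution kernel to a single-channel image stored
// row-major (w * h values), replicating edge pixels at the border.
std::vector<double> convolve3x3(std::vector<double> const& img,
                                int w, int h, double const (&k)[9])
{
    std::vector<double> out(img.size());
    for (int y = 0; y < h; ++y)
    {
        for (int x = 0; x < w; ++x)
        {
            double acc = 0.0;
            for (int ky = -1; ky <= 1; ++ky)
            {
                for (int kx = -1; kx <= 1; ++kx)
                {
                    // clamp sample coordinates to the image bounds
                    int sx = std::min(std::max(x + kx, 0), w - 1);
                    int sy = std::min(std::max(y + ky, 0), h - 1);
                    acc += img[sy * w + sx] * k[(ky + 1) * 3 + (kx + 1)];
                }
            }
            out[y * w + x] = acc;
        }
    }
    return out;
}
```

Blur, emboss, and sobel then differ only in the nine kernel weights - e.g. a box blur is all 1/9, so a uniform image passes through unchanged.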

Here are some maps I made using (and abusing) some of the above features:

Compositing/Blur/Smoothing

[image: compositing+blur]

Emboss

[image: emboss]

Sobel

[image: sobel]

Blurred text halos

[image: blurred text halos]

Watercolor textures/neon roads

[image: watercolour]

These new features are all works in progress and can be pulled from the origin/compositing branch.