As I mentioned in one of my last posts, I started using SRTM 1 arc data for rendering my maps. So far, with the 3 arc dataset, I was able to generate a huge image by stitching the tiles with gdal_merge.py, then generating the 3 final images (height, slope and hill shade), plus one intermediate image for the slope, all with gdaldem. This is no longer possible, as the new dataset is almost 10x the size of the old one, so instead of going that way, I decided to try another one.

With gdalbuildvrt it is possible to generate an XML file that 'virtually' stitches images. This means that any attempt to access data through this file will actually make the library (libgdal) find the proper tile(s) and access them directly.
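Building such a virtual mosaic is a one-liner (the tile names here are just illustrative):

gdalbuildvrt height.vrt N44E006.tif N44E007.tif N45E006.tif N45E007.tif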

So now the problem becomes processing each tile individually and then virtually stitching them. The first part is easy, as I just need to do to each tile what I was doing to the huge stitched image before. I also took the opportunity to use tiled files: instead of storing the image one scan line at a time (at 1 arc second resolution, each scan line has 3601 pixels; the extra one is for overlapping with the neighbors), the file is stored in 256x256 sub-tiles, possibly (that is, not tested) making rendering faster by clustering related data closer together. The second step, with gdalbuildvrt, should also be easy.
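Converting an original .hgt tile into such a tiled GeoTIFF can be done with gdal_translate and a couple of creation options (filenames illustrative):

gdal_translate -co TILED=YES -co BLOCKXSIZE=256 -co BLOCKYSIZE=256 N44E007.hgt N44E007.tif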

The first bump in the road is the fact that SRTM tiles above 50°N are only 1801 pixels wide, most probably because meridians get closer towards the poles, so full longitudinal resolution makes little sense anyway. This meant that I had to preprocess those tiles so libgdal didn't have to do the interpolation at render time (in fact, it already has to do it once while rendering, using the lanczos scaling algorithm). This was done with gdalwarp.
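Something along these lines does it (again, filenames illustrative):

gdalwarp -ts 3601 3601 -r lanczos N51E006.hgt N51E006.tif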

The second one came from the slope and hill shading tiles. As the algorithm goes, it generates some 'fade out' values at the edges, and when libgdal was stitching the tiles, I could see this as a line at the seams. This was fixed by passing -compute_edges to gdaldem.
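So the per-tile processing boils down to a few gdaldem runs, roughly like this (a sketch; the filenames and the color ramp files are my own):

gdaldem hillshade -compute_edges N44E007.tif N44E007-hillshade.tif
gdaldem slope -compute_edges N44E007.tif N44E007-slope.tif
gdaldem color-relief N44E007-slope.tif slope-ramp.txt N44E007-slopeshade.tif
gdaldem color-relief N44E007.tif height-ramp.txt N44E007-height.tif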

Finally, for some reason gdalbuildvrt was generating some very strange .vrt files. The format of these files is more or less the following:

  • For each band in the source tiles it creates a band in the result.
    • For each source tile, it describes:
      • The source file
      • The source band
      • The size and tiling of the source (3601², 256²)
      • The rectangle we want from the source (0², 3601²)
      • The rectangle in the result (x, y, 3601²)
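Expressed in the .vrt's XML, a single source in a two-tile mosaic looks more or less like this (a hand-written sketch; names and offsets are illustrative):

<VRTDataset rasterXSize="7201" rasterYSize="3601">
  <VRTRasterBand dataType="Int16" band="1">
    <SimpleSource>
      <SourceFilename relativeToVRT="1">N44E007.tif</SourceFilename>
      <SourceBand>1</SourceBand>
      <SourceProperties RasterXSize="3601" RasterYSize="3601" DataType="Int16" BlockXSize="256" BlockYSize="256" />
      <SrcRect xOff="0" yOff="0" xSize="3601" ySize="3601" />
      <DstRect xOff="3600" yOff="0" xSize="3601" ySize="3601" />
    </SimpleSource>
    <!-- one SimpleSource per source tile -->
  </VRTRasterBand>
</VRTDataset>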

The problem I saw was some weird declarations of the rectangles in the result, where the coordinates or the sizes didn't match what I expected. I will try to figure this out with the GDAL people in the following weeks, but first I want to make sure that the source tiles are easily downloadable (so far I have only found download options through USGS' EarthExplorer, which requires you to be logged in to download tiles; this means it is not very scriptable, and so not very reproducible).

So for the moment I'm using my own .vrt file generator, not nearly generic enough for release yet, but soon. I also took the opportunity to make the rectangles in the result non-overlapping, just 3600² in size. I know that the generated file works because I'm also generating smaller samples of the resulting layers (again: height, slope and hill shading) for rendering smaller zoom levels.

The only remaining question about huge DEM datasets is contour generation. So far I had just generated contour lines for each tile and lived with the fact that they too look ugly at the seams.


gdal gis srtm

Posted Thu 30 Apr 2015 03:03:22 PM CEST Tags: srtm

Since I started playing with rendering maps I have included some kind of elevation info for highlighting mountains. At the beginning it was just hillshading provided by some German guy (I don't have the reference on me right now), but after reading TileMill's terrain data guide, I started using DEMs to generate 4 different layers: elevation coloring, slope shading, hillshading and contour lines.

When I started I could find only three DEM sources: SRTM 3 arc and ViewFinderPanoramas (1 arc and 3 arc). The second one tries to flatten plains (for instance, the Po plain near where I live), where it generates some ugly-looking terracing. As for the third one, when I downloaded the corresponding tile (they're supposed to be 1x1 degrees), its metadata reported an extent between 7 and 48 degrees east and between 36 and 54 degrees north, and a size of 147602x64801 pixels. I also remember stitching all the tiles covering Europe, just to get a nice 1x1 degree hole in the North Adriatic sea. Not having much time to pre- or post-process the data, I decided to stick to SRTM.

Things changed at the end of last year. The US government decided to release 1 arc, 30m global coverage (previously that resolution covered only the US). I started playing with the data mid-January, only to find that it is not void-filled: these DEMs are derived from the Shuttle Radar Topography Mission, which used radar to get the data. Radar gets very confused when water is involved; this is no problem on rivers, lakes or the sea, where elevation is constant relative to the coast, but it is a problem on snow-covered mountains, glaciers and even clouds. This means that the data has NODATA holes. The SRTM 3 arc v4.1 I was using had these 'voids' filled; de Ferranti has also been painstakingly filling these voids by hand, using topographic maps as reference.

So I set out to fill these voids too. But first let's see what the original data looks like. All the images are for the area near Isola 2000, a ski station I often go to. The first image is how this looks in SRTM 3 arc v4.1:

This is a 4x4 grid of 256x256 pixel tiles (1024x1024 in total) at zoom level 13. The heights range from ~700m up to ~2600m, and the image combines all 4 layers. It already shows some roundness in the terrain, especially on ridges and mountain tops, and even at the bottom of the deep valleys.

For contrast, this is de Ferranti's data:

This is the first time I have really taken a look at the result; it doesn't seem to be much better than 3 arc. Here's the terracing I mentioned:

For contrast, check what 1 arc means:

From my point of view, the quality is definitely better. Peaks, crests and valleys are quite sharp, while the mountainsides look rugged. My impression is that this better reflects the nature of the terrain in question, but Christoph Hormann of Imagico.de views it as sampling noise. He has worked a lot on DEMs to generate very beautiful maps.

But then we have those nice blue lagoons courtesy of voids (the blue we can see is the water color I use in my maps). So, how to proceed?

The simplest way to fix this is to cover the voids with values interpolated from the data at the edges of the voids. GDAL has a tool for that called, of course, gdal_fillnodata.py.
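The invocation is simple (filenames illustrative; -md caps how far, in pixels, it will interpolate from the voids' edges):

gdal_fillnodata.py -md 100 n44_e007_1arc_v3.tif n44_e007_filled.tif

This is the outcome: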

At first this looks quite good, but once we start to zoom in (remember there are at least 5 more zoom levels), we start to see some regular patterns:

Another option is to use de Ferranti's data to fill the voids. For this we need to merge both datasets, and one way to do it is with GDAL's gdalwarp tool: we write into the same output file twice, piling up layers of data, first the most complete one, then the layer with holes:

gdalwarp deFerrantis/N44E007.hgt mixed.tif
gdalwarp SRTM_1arc_v3/n44_e007_1arc_v3.tif mixed.tif

The result looks like this:

I have to be honest, it doesn't look good. Both files declare the same extents and resolution (their metadata is similar, though the second file has more of it), but if you compare the renders for SRTM_1arc_v3 and deFerrantis, you will notice that they don't seem to align properly.

The last simple option would be to upsample SRTM_3arc_v4.1 and then merge like before, but it took me a while to figure out the right parameters:

gdalwarp -te 6.9998611 43.9998611 8.0001389 45.0001389 -tr 0.000277777777778 -0.000277777777778 -rb SRTM_3as_v4.1/srtm_38_04.tif srtm_1as_v3-3as_v4.1.tif
Creating output file that is 3601P x 3601L.
Processing input file SRTM_3as_v4.1/srtm_38_04.tif.
Using internal nodata values (eg. -32768) for image SRTM_3as_v4.1/srtm_38_04.tif.
0...10...20...30...40...50...60...70...80...90...100 - done.
gdalwarp SRTM_1as_v3/n44_e007_1arc_v3.tif srtm_1as_v3-3as_v4.1.tif
Processing input file SRTM_1as_v3/n44_e007_1arc_v3.tif.
Using internal nodata values (eg. -32767) for image SRTM_1as_v3/n44_e007_1arc_v3.tif.
0...10...20...30...40...50...60...70...80...90...100 - done.

The complex part was the -te and -tr parameters. The 3 arc file covers a 5x5 degree zone at 40-45°N, 5-10°E, while the 1 arc file covers only 44-45°N, 7-8°E, so I needed to cut the former down. I use the -te option for that; according to the doc it is «[to] set georeferenced extents of [the] output file to be created», that is, the degrees of the W, S, E and N limits of the output file. Note that the actual parameters are half an arc second off those limits (half a pixel: the extents refer to pixel edges, while .hgt coordinates refer to pixel centers); I took them from the original 1 arc file.

The second one is even more cryptic: «[to] set [the] output file resolution (in target georeferenced units)». That last word is the key: units. According to gdalinfo, both files declare a pixel size in the units defined by the projection. Both declare UNIT["degree",0.0174532925199433]; "degree" has a clear meaning, and the float beside it is the size of the unit in radians (π/180). So the parameters for -tr say how many units (degrees) a pixel represents (or, more accurately, how many degrees there are between one pixel center and the next). Notice that the vertical value is negative; that's because raster rows go from North to South, while latitude grows the other way around (North and East are positive, South and West negative). In any case, I also just copied the pixel size declared in the 1 arc file.
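As a sanity check: one arc second is 1/3600 of a degree, and 1/3600 = 0.000277777…, which is exactly the value passed to -tr; 3600 such steps plus the shared edge pixel give the 3601P x 3601L that gdalwarp reports above.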

After all this dissertation about GeoTIFF metadata, finally the resulting DEM:

As I said earlier, I don't have much time to invest in this; my map is mostly for personal consumption and I can't put much time into it. So my conclusion is this: if I can manage the 9x data size, I think I'm going this way.


elevation gdal gis srtm

Posted Sat 07 Feb 2015 02:25:17 AM CET Tags: srtm

Another long time without mentioning any advancements in my map-making efforts. While not much has changed, what has changed is a big step towards easier customization.

In the last post in this series I gave a quick list on how I make my own maps:

  • Use SRTM elevation data to generate a hypsometric base map, hill and slope shading, and finally hypsometric curves. For the latter I'm using gdal_contour and shp2pgsql (see the sketch after this list).
  • Use any extract you want and import it with osm2pgsql. Small extracts import quite quickly, but so far I have never succeeded in importing a big part of Europe (from Portugal to southern Finland, so it covers UK/Ireland and Istanbul) on a machine with 8GiB of RAM and hard disks. The recommendation is to use a machine with lots of RAM (16GiB+; I heard up to 256 for the whole planet) and SSDs.
  • Use TileMill for the initial design.
  • Do a lot of xml manipulation to try to customize OSM's mapnik style based on your design.
  • Use generate_tiles.py (from https://github.com/openstreetmap/mapnik-stylesheets) to, well, generate the tiles.
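For the first item, the contour step looks more or less like this (a sketch; the filenames, the 10m interval and the database name are illustrative):

gdal_contour -a height -i 10 N44E007.tif N44E007-contours.shp
shp2pgsql -s 4326 N44E007-contours.shp contours | psql -d gis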

But since August 2013 things have changed regarding the 3rd and 4th items on that list. Andy Allan has finished a first big iteration of redoing OSM's mapnik style in CartoCSS. The latter is a CSS-like language that is the native way to style things in TileMill. Since then, customizing is much easier, not only because of the Cascading part of CSS, but also because Andy took the time to turn a lot of things (widths, colors) into something similar to C/C++'s #defines, which you can override in one place and affect everywhere the definition is used.
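A minimal sketch of what that looks like in CartoCSS (variable and layer names made up by me):

@water: #b5d0d0;

#water-areas {
  polygon-fill: @water;
}

#rivers {
  line-color: @water;
}

Override @water once and every rule using it follows.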

So now, steps 3 and 4 are more like:

  • Use TileMill to change OSM's initial design[1].
  • Export the design to a mapnik XML file.

In fact, that last step could be avoided, given that TileMill can also render everything by itself.

The last step in having your own tiles is being able to use them. I use Marble on my phone, but I also set up a slippy map so I can brag about it. Unluckily I can't embed the map here (I should fix this).

The tiles served actually come from different rendering passes using different, incremental designs. The previous-to-last one can be seen in the Deutschland region; I will be rendering some parts of London with a newer design before the end of the month.


[1] You can use any text editor to change the CartoCSS files; TileMill will pick the changes up via inotify and re-render accordingly. The only problem is when you're editing multiple files that impact a zoom level that takes long to render (for me that's between 6 and 8).


openstreetmap elevation gdal gis srtm

Posted Wed 05 Feb 2014 01:59:00 AM CET Tags: srtm

It's been a long while since I talked about my quest towards the perfect map for in-car navigation and general orientation. This post is to fix that.

I started with marble, monav and OSM tiles. The tiles proved inefficient for in-car navigation because of the non-contrasting palette, which makes differentiating ways from blocks very difficult in sunlight. Then I switched to creating maps with CloudMade, which at the beginning seemed enough, but then I noticed that the solution was not complete, as some features (in OSM terms) couldn't be styled. When I started to look for alternatives, I found a site that showcased TileMill, which can use OSM data from a database, and in particular its ability to create relief maps with altitude coloring, slope shading and hillshading. I was doomed :)

I started exploring the relief part. Relief data is available from several sources, the most prominent being those derived from the Shuttle Radar Topography Mission, like CGIAR-CSI's or Jonathan de Ferranti's. Discussing with a friend, he told me to stick to the former, as de Ferranti's data has been smoothed by hand and is not always the best; also, his several sources have incompatible licenses, which could make it unusable. With this data I also generated contour lines, which you will recognize from topographic maps.

The next step was how to use OSM data. I started with GeoFabrik's extracts, first importing the data into a database, but this takes a lot of time, memory and disk space. I switched to the shapefiles, but this introduced several problems. First, the data contained in the several shapefiles for a region (landuse, natural, places, points, railways, roads and waterways) is not the entire set of data; for instance, everything related to ski lifts is missing. I wanted those. So I reluctantly reverted to importing the data into a database.

Then there's the problem of rendering a usable map. Both TileMill and OSM render the tiles using Mapnik, which implements the so-called painter's algorithm: you define layers and they're 'painted' from the bottom up, one on top of the other. So, what do you put in a layer? You define a data source, which is a select on the database, and you link it to a style. A style defines several rules, which can provide extra filters (which technically you could do in the database, but this simplifies the data source definitions), min and/or max scales (related to zoom levels) and how to paint the features: fill a polygon, draw a line, put a symbol, write some text, etc.
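In the XML this looks roughly like the following (a hand-written sketch, not lifted from osm.xml; the names are made up):

<Style name="minor-roads">
  <Rule>
    <MaxScaleDenominator>100000</MaxScaleDenominator>
    <Filter>[highway] = 'residential'</Filter>
    <LineSymbolizer stroke="#ffffff" stroke-width="2.5" />
  </Rule>
</Style>

<Layer name="minor-roads">
  <StyleName>minor-roads</StyleName>
  <Datasource>
    <Parameter name="type">postgis</Parameter>
    <Parameter name="table">(SELECT way, highway FROM planet_osm_line) AS roads</Parameter>
  </Datasource>
</Layer>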

Clearly, just like that, the complexity is high. Defining whether a feature appears at a zoom level or not, and how, especially if it changes between zoom levels, is quite complex. Then you have more factors: putting borders on a line is actually implemented as drawing two lines, one thicker one defining the borders and then a thinner one on top of it defining the inner part, done in two different layers. This is called casing. Also consider that you have to change the casing and the fill color when there is a tunnel, add an extra casing (layer) for a bridge, and cope with several bridges on top of each other (think of complex highway junctions; implemented with yet more layers)! TileMill proved to be good for testing, but this level of nitpicking was too much for it.
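Casing, in that same made-up sketch, is just the same data source painted twice, with the wider style in a layer below the fill:

<Style name="minor-roads-casing">
  <Rule>
    <Filter>[highway] = 'residential'</Filter>
    <!-- wider, darker line; the thinner white fill above is drawn on top of it -->
    <LineSymbolizer stroke="#999999" stroke-width="4" />
  </Rule>
</Style>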

For a while I flirted with the idea of generating the Mapnik input file with a structural description of what I wanted, but I aborted it before it became a monster. I learned something from Frankenstein!

So, why not come full circle and take a look at how the OSM tiles are generated? This was definitely the right move. Instead of trying to mimic the complex set of rules and layers that make OSM tiles perfect, I just edit the XML files to take out uninteresting data, modify some zoom levels here and there to make some things appear sooner (I'm very interested in gas stations, hospitals, parkings, archaeological sites and viewpoints), and edit the colors. I can even export what I have done in TileMill to a Mapnik project, extract the part where I define the relief layers and add them as the background.

So the final setup is as follows:

  • Use elevation information from CGIAR-CSI, processing it as TileMill's guide says.
  • Use Geofabrik's pbf extracts; import them into a PostGIS database as per this page in OSM's wiki (a typical invocation is sketched after this list).
  • Use TileMill for the initial design, especially for color changes.
  • Modify osm.xml by hand, picking out what's not wanted, changing zoom levels for some stuff.
  • Use a script to extract colors and line widths from osm.xml; use the colors designed with TileMill.
  • Use a modified version of generate_tiles.py to render big areas or special cities.
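For reference, the import step from the second item boils down to something like this (the database name, cache size and extract name are illustrative):

osm2pgsql --create --slim -d gis -C 6000 europe-latest.osm.pbf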

I have almost finished the scripts that allow me to iterate over the last two steps without much manual intervention (except tweaking the values, of course). When I'm done I'll publish them, then clean them up, then republish :)


openstreetmap gis srtm

Posted Fri 24 May 2013 11:09:48 PM CEST Tags: srtm