Writing a tile server in Python
Another dictated post[^1][^11], but heavily edited. Buyer beware.
I developed a tileset based on OpenStreetMap data and style, plus elevation information, but I don't have a render server.
What I have been doing is using my own version of an old script from the `mapnik` version of the OSM style. The script is called `generate_tiles`, and I made big modifications to it; now it's capable of doing many things,
including spawning several processes to handle the rendering. You can define regions that you want to render, or you can just provide a bbox, a set of tiles, or plain coordinates. You can change the size of the metatile, and it handles empty tiles. If you find a sea tile, most probably you will not need to render its children[^9], where the children are the four tiles just under it in the next zoom level. For instance, at zoom level zero we have only one tile, (0,0,0), and its children are (1,0,0), (1,0,1), (1,1,0) and (1,1,1). 75% of the planet's surface is water, and with the Mercator projection and the Antarctic Ocean the percentage of tiles could be even bigger, so this optimization cuts a lot of useless rendering time.
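The parent/child relation between tiles is easy to compute. A minimal sketch, assuming the usual (z, x, y) tile addressing:

```python
def children(z, x, y):
    """The four tiles covering (z, x, y) at the next zoom level."""
    return [(z + 1, 2 * x + dx, 2 * y + dy)
            for dx in (0, 1) for dy in (0, 1)]

# The single zoom-0 tile splits into the four zoom-1 tiles:
print(children(0, 0, 0))  # [(1, 0, 0), (1, 0, 1), (1, 1, 0), (1, 1, 1)]
```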
Another optimization is that it assumes that when you render zoom level N, you will be using at least the same data for zoom level N+1. Of course, I am not caching that data myself, because `mapnik` does not allow this, but the operating system does the caching. So if you have enough RAM, you should be able to reuse all the data that's already in buffers and cache, instead of having to fetch it again from disk. In theory this should accelerate rendering, and it probably does[^10].
The script works very well, and I've been using it for years already, rendering tiles in batches for several zoom levels. Because my personal computer is way more powerful than my server (and younger: 2018 vs 2011), I render on my computer and `rsync` the tiles to my server.
So now I wanted to make a tile server based on this. Why do I want to make my own and not use `renderd`? I think my main issue with `renderd` is that it does not store the individual tiles, but keeps metatiles of 8x8 tiles and serves the individual tiles from there. This saves inode usage and internal fragmentation. Since my main usage so far has been (and probably will continue to be) rendering regions by hand, and since my current (static) tile server stores the latest versions of all the tiles I have rendered since I started doing this some 15 years ago, I want updating the server to be fast. Most tile storage methods I know fail terribly at update time (see here); most of the time it means sending the whole file over the wire. Also, individual tiles are easier to convert to anything else, like creating an MBTiles file, pushing it to my phone, and having an offline tile service I can carry with me on treks where there is no signal. And serving the tiles can be as easy as running `python -m http.server` from the tileset root directory. So `renderd` is not useful for me. Another reason is, well, I already have the rendering engine working. So how does it work?
The rendering engine consists of one main thread, which I call Master, and the rendering threads[^3]. These rendering threads load the style and wait for work to do. The current style file is 6 MiB+ and takes `mapnik` 4 s+ to load and generate all its structures, which means these threads have to be created only once per service lifetime. I have one queue on which the Master can send commands to the renderer pool, asking for a metatile to be rendered, which is faster than rendering the individual tiles. One of the rendering threads picks the request from this queue, calls `mapnik`, generates the metatile, cuts it into subtiles and saves them to disk. The rendering thread then posts on another queue, telling the Master about the children metatiles that must be rendered, which due to emptiness can be between 0 and 4.
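The worker side of that design can be sketched as follows. All names here are hypothetical, the `mapnik` call is stubbed out with a comment, and a `Thread` stands in for what are really `multiprocessing` subprocesses, since the queue logic is identical either way:

```python
from multiprocessing import Queue
from threading import Thread

def render_worker(work, info):
    """One member of the render pool (sketch)."""
    # In the real service the 6 MiB+ style is loaded here, once per
    # worker lifetime (it takes mapnik 4 s+), hence long-lived workers.
    while True:
        metatile = work.get()       # blocks until the Master sends work
        if metatile is None:        # sentinel: shut down cleanly
            break
        # ... call mapnik, cut the metatile into subtiles, save them ...
        z, x, y = metatile
        # Report which children still need rendering (0 to 4 of them);
        # this sketch pretends all four are non-empty.
        info.put([(z + 1, 2 * x + dx, 2 * y + dy)
                  for dx in (0, 1) for dy in (0, 1)])

work, info = Queue(), Queue()
Thread(target=render_worker, args=(work, info), daemon=True).start()
work.put((0, 0, 0))
print(info.get())  # [(1, 0, 0), (1, 0, 1), (1, 1, 0), (1, 1, 1)]
work.put(None)     # tell the worker to exit
```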
To implement the caching optimization I mentioned before, I use a third structure, a stack. At the beginning I push the initial work onto it; later I pop one element from it, and when a renderer returns the list of children to be rendered, I push them on top of the rest. This is what tries to guarantee that a metatile's children will be rendered before moving to another region that would trash the cache. And because the renderers can inspect the tiles they are writing, they can figure out when a child is all sea tiles and simply not return it for rendering.
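That stack discipline is short enough to sketch, with a toy `render` callback standing in for the real round-trip through the pool:

```python
def schedule(initial_work, render):
    """Depth-first scheduling sketch: children go on top of the stack,
    so a metatile's subtree is finished while its data is still hot in
    the OS cache.  `render` returns the 0..4 non-empty children."""
    stack = list(initial_work)
    order = []                          # rendering order, for illustration
    while stack:
        metatile = stack.pop()          # most recently pushed work first
        order.append(metatile)
        stack.extend(render(metatile))  # children on top of everything else
    return order
```

With a toy `render` that only descends one level, `schedule([(0, 0, 0)], ...)` renders the root and then each child before touching anything else.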
At the beginning I thought that, because the multiprocessing queues are implemented with pipes, I could use `select()`[^4] to see whether the queue was ready for writing or reading, and use a typical non-blocking loop. These queues block on write when the queue is full, and block on read when the queue is empty. But those two conditions, full and empty, are actually handled by semaphores, not by the size of the pipe. That means selecting on those pipes, even if I could reach all the way down into the internals of `multiprocessing.Queue` and add them to a selector, would not work: yes, the read queue will not be selected if it's empty (nothing to read), but the write queue will still be selected even when the queue is full, since availability of space in the pipe does not mean the queue is not full.
So instead I'm peeking into these queues. For the work queue, I know that the Master thread[^8] is the only writer, so I can peek to see if it is full. If it is, I am not going to send any new work to be done, because it means that all the renderers are busy and the only work queued so far has not been picked up yet. The reading side is the same: Master is the only reader, so I can peek to see if the info queue is empty, and if it is, I am not going to try to read any information from it. So I have a loop, peeking first into the work queue and then into the info queue. If nothing has been done, I sleep a fraction of a second.
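A sketch of that loop, using `Queue.full()` and `Queue.empty()` as the "peek". The docs mark those methods as unreliable in general, but with a single writer on the work queue and a single reader on the info queue (both the Master), a peek can only err on the conservative side, so the subsequent `put()`/`get()` cannot block:

```python
import time
from multiprocessing import Queue

work = Queue(maxsize=4)   # metatile commands going to the render pool
info = Queue()            # children metatiles coming back

def single_step(pending):
    """One pass of the Master loop; returns True if it did anything."""
    busy = False
    if pending and not work.full():
        work.put(pending.pop())   # won't block: we just peeked
        busy = True
    if not info.empty():
        pending.extend(info.get())  # won't block: we just peeked
        busy = True
    return busy

pending = [(0, 0, 0)]
if not single_step(pending):
    time.sleep(0.01)              # nothing to do: back off a little
```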
Now let's think about how to replace this main loop with a web frontend. What is the web frontend going to do? It's going to be getting queries from different clients. It could be just a slippy map in a web page, so we have a browser as a client, or it could be any of the applications that can also render slippy maps. For instance, on Linux we have `marble`; on Android, I use MyTrails and OsmAnd.
One thing about these clients is that they have timeouts. Why am I mentioning this? Because rendering a metatile can take me between 3 and 120 seconds, depending on the zoom level. Some zoom levels are really, really expensive, like 7 through 10. If a client asks a rendering service directly for a tile, and the tile takes too long to render, the client will time out and close the connection. How do we handle this on the server side? Well, instead of the work stack, the server will have a request queue, which will collect the requests from the clients, and the Master will send these requests to the render pool.
So if a client closes the connection, I want to be able to react to that by removing any lingering requests made by that client from the request queue. If I don't do that, the request queue will keep piling up more and more requests, creating a denial of service. This is not possible with multiprocessing queues: you cannot remove an element. The container that can do that is a deque[^5], which is also optimized for pushing and popping at both ends (it's probably implemented with a circular buffer), which is perfect for a queue. As for the info queue, I will no longer care about children metatiles, because I will not be doing any work that the clients are not requesting.
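Dropping a disconnected client's pending requests from a `deque` is a one-liner rebuild; a minimal sketch with hypothetical client IDs:

```python
from collections import deque

# (client_id, tile) pairs, oldest first
requests = deque([('client-1', (7, 10, 20)),
                  ('client-2', (7, 10, 21)),
                  ('client-1', (7, 11, 20))])

def drop_client(requests, client_id):
    """Remove every queued request from a disconnected client.
    Impossible with a multiprocessing queue; trivial with a deque."""
    kept = [r for r in requests if r[0] != client_id]
    requests.clear()
    requests.extend(kept)

drop_client(requests, 'client-1')
print(list(requests))  # [('client-2', (7, 10, 21))]
```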
So, what framework would allow me to do this? Let's recap the requirements:
- Results are computed, and take several seconds.
- The library that generates the results is not async, nor thread safe, so I need to use subprocesses to achieve parallelization.
- A current batch implementation uses 2 queues to send and retrieve computations to a pool of subprocesses; my idea is to "just" add a web frontend to this.
- Each subprocess spends some seconds warming up, so I can't spawn a new process for each request.
- Since I will have a queue of requested computations, if a client dies: if its query is already being processed, I let it finish; if not, I should remove it from the waiting queue.
I started with FastAPI, but it doesn't have the support that I need. At first I just implemented a tile server; the idea was to grow from there[^6], but reading the docs I found it only allows doing long-running async stuff after the response has been sent.
Next was Flask. Flask is not async, unless you want to use `sendfile()`. `sendfile()` is a way to make the kernel read a file and write it directly to a socket, without intervention from the requesting process. The alternative is to open the file, read a block, write it on the socket, repeat; this definitely makes your code more complex, since you have to handle lots of cases. So `sendfile()` is very, very handy, and it's also faster because it's zero-copy. But Flask does not give you control over what happens when the client suddenly closes the connection. I can instruct it to cancel the tasks in flight, but as per all the previous explanation, that's not what I want.
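For the curious, here is roughly what the zero-copy path looks like on Linux with `os.sendfile()`; a sketch that ignores the zero-bytes-sent edge case a production server would handle:

```python
import os

def send_tile(sock, path):
    """Send a file over a socket with zero copies in user space: the
    kernel moves the file to the socket directly, instead of the
    open / read-a-block / write-it / repeat loop."""
    with open(path, 'rb') as f:
        size = os.fstat(f.fileno()).st_size
        offset = 0
        while offset < size:  # sendfile may send less than asked for
            offset += os.sendfile(sock.fileno(), f.fileno(),
                                  offset, size - offset)
```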
This same problem seems to affect all the async frameworks I looked into: `asyncio`, `aiohttp`, `tornado`. Except, of course, `twisted`, but its API for that is callback-based, and TBH, I was starting to get tired of all this, and the prospect of callback hell, even when the rest of the system could be developed in a more async way, was too much. And that's not counting the fact that I need to hook into the main loop to step the Master. This could be implemented with timed callbacks, such as `twisted`'s `callLater()`, but another thought had started to form in my head.
Why did I go directly for frameworks? Because they're supposed to make our lives easier, but from the beginning I had the impression that this would not be a run-of-the-mill service. The main issue came down to being able to send things to render, return the rendered data to the right clients, associate several clients with a single job before it finishes (more than one client might request the same tile, or several tiles that belong to the same metatile), and handle client and job cancellation when clients disappear. The more framework documentation I read, the more I started to fear that the only solution was to implement a non-blocking[^12] loop myself.
I gotta be honest: I dusted off an old Unix Network Programming book, 2nd Ed., 1998 (!!!), read half a chapter, and I was ready to do it. And thanks to the simple `selectors` API, it's a breeze:
- Create a listening socket.
- Register it for read events (connections).
- On connection, accept the client and wait for read events on that socket too.
- We don't register for write from the start, because the client socket is almost always ready for write, which would lead to tight loops.
- On client read, read the request and send the job to Master. Unregister for read.
- But if there's nothing to read, the client disconnected. Send an empty response, unregister for read and register for write.
- Step Master.
- If anything came back, generate the responses and queue them for sending. Register the right clients for write.
- On client write (almost always), send the response and the file with `sendfile()`, if any.
- Then close the connection and unregister.
- Loop to #3.
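The steps above (minus the Master stepping and `sendfile()`) fit in a small `selectors` skeleton. This is a sketch with a toy `handle_request`, and it cheerfully assumes whole-request reads and whole-response writes, exactly the simplification discussed below:

```python
import selectors
import socket

sel = selectors.DefaultSelector()

# 1-2. Listening socket, registered for read events (connections).
listener = socket.socket()
listener.bind(('127.0.0.1', 0))
listener.listen()
listener.setblocking(False)
sel.register(listener, selectors.EVENT_READ)

def serve_once(handle_request):
    """One pass of the loop: accept, read requests, write responses."""
    for key, events in sel.select(timeout=0.1):
        sock = key.fileobj
        if sock is listener:
            client, _ = sock.accept()       # 3. new client: watch for reads
            client.setblocking(False)
            sel.register(client, selectors.EVENT_READ)
        elif events & selectors.EVENT_READ:
            data = sock.recv(4096)          # assumes the whole request arrives
            if not data:                    # nothing to read: client is gone
                sel.unregister(sock)
                sock.close()
                continue
            response = handle_request(data)  # Master would be stepped here
            sel.modify(sock, selectors.EVENT_WRITE, response)
        elif events & selectors.EVENT_WRITE:
            sock.send(key.data)             # assumes the whole response fits
            sel.unregister(sock)            # then do the close dance
            sock.close()
```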
Initially all this, including reimplementing fake Master and render threads, took less than 200 lines of code and some 11 hours of on-and-off work. Now that I have finished, I have a better idea of how to implement this at least with `twisted`, which I think I will have to do, since step 4 assumes the whole query can be `recv()`'ed in one go, and step 7 similarly for `send()`'ing; luckily I don't need to do any handholding for `sendfile()`, even when the socket is non-blocking.
A more production-ready service needs to handle short reads and writes. Still, the HTTP/1.1 protocol all the clients are using allows me to assume that once a query is received, the client will wait for the answer before trying anything else, and that I can close the connection once a response has been sent and assume the client will open a new connection for more tiles. And even then, supporting keep-alive should not be that hard (instead of closing the client, unregister it for write, register it for read, and only do the close dance when the response is empty). And because I can simply step Master in the main loop, I don't have to worry about blocking queues.
Of course, now it's more complex, because it implements support for multiple clients with different queries requiring rendering the same metatile. This is because applications will open several connections to fetch tiles when showing a region, and unless it's only 4 tiles falling at the corners of 4 adjacent metatiles, that will always mean more than one client per metatile. Also, I could have several clients looking at the same region. The current code is approaching 500 lines, but all of that would also have to be present in any other implementation.
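The bookkeeping for "many clients, one render job per metatile" boils down to a dict of waiting lists. A sketch with hypothetical names, assuming 8x8 metatiles:

```python
from collections import defaultdict

waiting = defaultdict(list)     # metatile -> [(client, tile), ...]

def metatile_for(tile, size=8):
    """The metatile that contains this tile (assuming size x size metatiles)."""
    z, x, y = tile
    return (z, x // size * size, y // size * size)

def request_tile(client, tile, send_to_render):
    """Queue a render job only for the first client to want this metatile;
    later clients just join the waiting list."""
    mt = metatile_for(tile)
    if not waiting[mt]:         # nobody asked for this metatile yet
        send_to_render(mt)
    waiting[mt].append((client, tile))

def metatile_done(mt):
    """All the (client, tile) pairs to answer now that mt is rendered."""
    return waiting.pop(mt, [])
```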
I'm pretty happy about how fast I could make it work and how easy it was. Soon I'll finish integrating a real render thread, with saving the tiles, and implement the rule that if one of a metatile's tiles is not present, we can assume that's OK, but if none are present, I have to find out whether they were all empty or never rendered. A later step would be making all this testable. And of course, the `twisted` port.
[^1]: This is getting out of hand. The audio was 1 h long, I'm not sure how long it took to auto-transcribe, and when, while editing, I thought I was getting to the end of it, the preview told me I still had about half the text to go through.

[^2]: No idea what I wanted to write here :)

[^3]: Because `mapnik` is not thread safe, and because of the GIL, they're actually subprocesses via the `multiprocessing` module, but I'll keep calling them threads to simplify.

[^4]: Again, a simplification. Python provides the `selectors` module, whose abstract implementations spare us from having to select the best implementation for the platform.

[^5]: I just found out it's pronounced like 'deck'.

[^6]: All the implementations I did followed the same pattern. In fact, right now I haven't implemented the rendering tile server: it's only blockingly `sleep()`'ing for some time (up to 75 s, to trigger client timeouts), and then returning the tiles already present. What's currently missing is figuring out whether I should rerender or use the tiles already present[^7], and actually connecting the rendering part.

[^7]: Two reasons to rerender: the data is stale, or the style has changed. The latter requires reloading the styles, which will probably mean rebuilding the rendering threads.

[^8]: I keep calling this the Master thread, but at this point, instead of having its own main loop, it's just a function implementing the body of such a loop. Following previous usage for such functions, it's called `single_step()`.

[^9]: Except when you start rendering ferry routes.

[^10]: I never measured it :(

[^11]: Seems like `nikola` renumbers the footnotes based on the order in which they appear here at the bottom of the source. The first note was 0, but it renumbered it and all the rest to start counting from 1.

[^12]: Have in mind that I'm explicitly making a distinction between a non-blocking/`select()` loop and an `async/await` system, even though the latter is actually implemented with the former.