We have been writing Ansible tasks wrong

Look at the following Ansible task:

- name: Prometheus Exporters
  - name: Extend caps for ping - File
    template:
      src:   "roles/server/files{{ item }}"
      dest:  "{{ item }}"
      owner: prometheus
      group: root
      mode:  0600
    loop:
      - /etc/systemd/system/prometheus-blackbox-exporter.service.d/override.conf
    when: "'blackbox' in exporters"
    register: foo
    tags: config, monitoring
    notify:
    # - Reload systemd
    - Reload Blackbox Exporter

For a while I have been wishing to be able to write this like this:

class PrometheusExporters:
    def extend_caps_for_ping_File(self):
        if 'blackbox' in exporters:
            foo = []
            for exporter in exporters:
                # tags: config, monitoring
                foo.append(template(f"roles/server/files{exporter}", exporter, 'prometheus', 'root', 0o600, 
                           notify=(reload_blackbox_exporter,)))

Notice that these are not 100% equivalent: the Python version uses exporter as the loop variable, but in the Ansible code I never set loop_control.loop_var because it's so cumbersome.

Why do I prefer that notation? Because:

  • Even if Ansible aims to be a declarative language, it has many programming language features, like when (if), loop (for) and block/rescue (try/except).
  • Yes, the register equivalent I wrote there is not really nice, but the alternatives are, I think, worse.
  • When using certain modules I almost always use the same params. Notice that a proper translation would be template(src=f"roles/server/files{exporter}", dest=exporter, owner='prometheus', group='root', mode=0o600), and that this is very close to the inline module invocation a.k.a. free form arguments1: template: src="roles/server/files{{ exporter }}" dest="{{ exporter }}" owner=prometheus group=root mode=0600.
  • If the compiler would be clever enough, I could declare functions that would work as templates for tasks.
  • As a programmer, this order makes more sense.
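To make those last two points concrete, here is a sketch of what I mean by functions working as task templates. None of this is real Ansible API: template here is just a stand-in that records its arguments; in the imagined compiler it would be the actual Ansible module.

```python
# Hypothetical notation sketch; `template` is a stand-in that just records
# its arguments -- in the imagined compiler it would be the Ansible module.
def template(src, dest, owner, group, mode, notify=()):
    return {"src": src, "dest": dest, "owner": owner,
            "group": group, "mode": mode, "notify": list(notify)}

def config_file(path, owner="prometheus", group="root", mode=0o600, notify=()):
    """A function as task template: the params I almost always use become
    defaults, so each call only states what varies."""
    return template(f"roles/server/files{path}", path, owner, group, mode, notify)

override = "/etc/systemd/system/prometheus-blackbox-exporter.service.d/override.conf"
task = config_file(override, notify=("Reload Blackbox Exporter",))
```

The repetition of owner/group/mode across dozens of tasks collapses into one definition.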

But I don't have to wait until I have enough energy to write such a compiler myself; I can more or less already change the order:

- name: Prometheus Exporters
  - name: Extend caps for ping - File
    when: "'blackbox' in exporters"
    loop:
      - /etc/systemd/system/prometheus-blackbox-exporter.service.d/override.conf
    register: foo
    template:
      src:   "roles/server/files{{ item }}"
      dest:  "{{ item }}"
      owner: prometheus
      group: root
      mode:  0600
    notify:
    # - Reload systemd
    - Reload Blackbox Exporter
    tags: config, monitoring

Not 100% where I want it (setting loop_control.loop_var is still ugly), but in my head it's easier to read. I don't use free form arguments because it's not clear that I can split the line like I did in the Python code.


  1. Thanks oblikoamorale#ansible@libera.chat. 

Diacritics and the US intl with dead keys keyboard layout

Two days ago https://norcal.social/@superball asked about generating smart quotes on Linux. I never got to understand what they meant by 'smart quotes', but I answered with a reference to the US intl with dead keys keyboard layout. I have been using it for ages. This allowed me to write proper Spanish with a US layout, and it just mimicked my experience when learning to type on an Olivetti Lexicon 80.

I have known for a long while that I gloat, yes, gloat, about being able to write 'weird' letters, but it seems like I never really wrote them all. So today I sat down to waste, yes, waste a couple of hours, as in more than two, compiling the following table:

glyph | name | letters | key combo
` | grave | ẁèỳùìòàǹm̀ | `
~ | tilde | ẽỹũĩõãṽñ | Shift+`
˝ | double acute | űő | Alt-Gr+Shift+2
¯ | macron11 | ēȳūīōāḡǖ110 | Alt-Gr+Shift+3
¸ | cedilla2 | ȩŗ1016ţşḑģ10ḩķ1016ļ1016çņ1016 | Alt-Gr+Shift+5
^ | circumflex | ŵêŷûîôâŝĝĥĵẑĉ | Shift+6
̛4 | horn | ơ | Alt-Gr+Shift+7
˛ | ogonek | ęųįǫą | Alt-Gr+Shift+8
˘ | breve11 | ĕŭĭŏăğ | Alt-Gr+Shift+9
° | overring | ẘẙůå9 | Alt-Gr+Shift+0
̣5 | underdot | ẉẹṭỵụịọạṣḍḥḳḷẓṿḅṇṃ | Alt-Gr+Shift+-
´ | acute | ẃéŕýúíóṕáśǵj́ḱĺźćǘ110ńḿ | '
¨ | 'two dots'6 | ẅëẗÿüïöäḧẍ | Shift+'
˙ | overdot | å7910é10ṙṫẏı12ȯṗȧṡḋḟġḣȷ12ŀ13żẋċḃṅṁ | Alt-Gr+.
ˇ | caron | ěřť10ǔǐǒǎšď10ǧȟǰǩľ10žčǔň | Alt-Gr+Shift+.
15 | 14 | ʠⱳẻɼƭỷủỉỏƥảʂɗƒɠɦƙȥƈʋɓɲɱ | Alt-Gr+Shift+/

The biggest surprises about it are:

  • Overdot is the most used writable diacritic, even though 5 of the letters actually use a different diacritic.
  • Underdot and caron get more than I expected.
  • Only one letter with horn? Was it worth it? Maybe I don't know how to use it?
  • Ogonek is the best diacritic name. A shame it's so underused :)
  • If it wasn't for the overdot, x would be the only letter that can't be combined.
  • Best symbol? I'm partial to ř because it's used to represent a Czech sound that is also present in the west-northwest part of my country, one ocean and one Equator apart. See https://www.youtube.com/watch?v=uDpVPj49R8w and, wow, https://www.youtube.com/watch?v=9cZSKnfeigI.
  • But also and ơ for their different uniqueness.

Let me tell you about that last video. The guy is from Córdoba, from Cruz del Eje, so he has a mix of Cordobese accent but the ř sound from Traslasierra ('behind the mountains', referencing the Sierras Grandes) and La Rioja. Riojan people have another accent.

https://www.openstreetmap.org/relation/153536?mlat=-30.7336&mlon=-64.7943#map=8/-30.188/-66.621.

Of course, these are not all diacritics (see https://en.wikipedia.org/wiki/Diacritic), and probably not all the possible combinations. Not to mention that they can be accumulated; see both glyphs noted with 1, but also https://en.wikipedia.org/w/index.php?title=Zalgo_text.

Finally, diacritics are not the only thing you can write with this layout. These symbols are also writable without combining:

¡¹²³¤£€¼½¾‘’¥×÷ äåé®™þüúíóö«» áßðëïœø¶° æ·©¢ñµ±ç¿

and ^ combined with digits writes them superscripted:

¹²³⁴⁵⁶⁷⁸⁹⁰

Definitely a very versatile layout. If you want to fully explore your keyboard layout, install tastenbrett18 and take a look.

Ah! And smart quotes are not actual glyphs but a feature (I completely forgot about it because I always deactivate them):

https://en.wikipedia.org/w/index.php?title=Quotation_marks_in_English#Smart_quotes.


  1. With v

  2. I can't quite see the glyph3, so I have to trust Python and unicodedata.name()

  3. I'm getting old (almost 50!), presbyopia is hitting and these glasses are 1yo. Maybe it's also time I succumb and raise the minimum font size from 8 to 10. 

  4. These ones are weird. At least on my editor they 'glue' to the character on its left and it becomes difficult to deal with. 

  5. While typing these, I had to use Space to make them show in their isolated form. For this one I had to press it twice. Dunno why. 

  6. Not 'double dots'? This symbol is used for both diaeresis (diæresis?) and umlauts8

  7. With w

  8. How come all these names don't have themselves in it?!?! "Ümlaut", there, I fixed it for you :) 

  9. According to my editor and Python, these are both LATIN SMALL LETTER A WITH RING ABOVE

  10. Notice how the diacritic applied is different to the one you're supposed to be pressing. 

  11. Can't help myself; I pronounce these in French :) 

  12. Notice that the diacritic is an overdot, but these letters 'naturally' have dots in them, so it removes it. 

  13. Not sure if you see the same as me; I see a dot to the right of the l glyph. If this is the canonical representation, in some ways 10 applies. 

  14. This modifier changes the letters in many different ways, and the result not always looks like an actual diacritic, but I included it for completeness. 

  15. It doesn't show at all; maybe because of 14

  16. These actually get an undercomma. 17 

  17. I have the impression that this post has more footnotes than actual text :) 

  18. Literally 'keysboard' in German; 'tasten', keys; 'brett', board. 
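By the way, the trust-Python check from footnote 2 is worth showing, because it is also how the cedilla/undercomma distinction from footnote 16 reveals itself. A minimal example; the two letters below are my choice, not the exact ones I checked:

```python
import unicodedata

# What looks like the same little hook under a letter can be two
# different diacritics; unicodedata.name() tells them apart.
for char in "şș":
    print(char, unicodedata.name(char))
# ş LATIN SMALL LETTER S WITH CEDILLA
# ș LATIN SMALL LETTER S WITH COMMA BELOW
```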

The mastodon effect: meta discussion

Yesterday1 I did something that I don't know how to qualify. At the beginning I was just playing with a silly idea: I registered a new domain on a dynamic domain service, set up Apache on my home server to answer this domain with a single 211-byte index.html file, and wrote a toot with a link to that site. What I wanted to see is the mastodon effect, which is similar to the slashdot effect, but supposedly more automatic. The idea is that your followers' Fediverse servers would try to obtain a preview of the page to present to their users. And because they might boost that toot to their followers, you can easily either get DDoS'ed by this, or get a huge bill for network traffic.

But for reasons that will be apparent soon, this post is going to be meta. What happened with the toot? Many things.

It has been more than 48 hours1 since I posted the link and I have more than 2.4k boosts. It's definitely way, way more than I expected. My now second most popular toot2 was about an antipodes map3, and it had 200 boosts or so. A couple of months ago there was this European petition about taxing the rich, which needed a million signatures all over Europe. I tried to boost and write about it a lot, but I didn't get much traction; my toots about it only got 6 boosts. Those toots, in contrast to the experiment or the antipodes, were the ones that to me were more important to spread. And yet, what gets around more from me are the amusing toots.

And that is something I have been thinking about for a while. The people I follow who have lots of followers, boosts and replies are mostly people focused on one or maybe two main topics. My feed is all over the place: development, replacing cars with ebikes, 15 minute cities, things that amuse me, science, (astro)photography, languages, maps, history and god knows what else. And I came to the conclusion that I will never try to focus on one or two topics; I want to keep my feed like that and that's fine, just don't get down if your interactions are more limited. If I want something more I will have to find it elsewhere (and my psychologist agrees :)

The second thing I want to talk about is that the toot has 418 likes. That's really interesting. I don't know why people liked the post. Maybe because I explained a little bit the idea behind it, and of course that's why it has been boosted a lot, but why likes? I have sometimes expressed how I myself use replying, boosting, liking and bookmarking. Replying means I think I have something to add or ask, or maybe just a joke, which I do often. Boosting is for things that are really, really interesting, things I want to spread, and that I probably don't know how to write about or don't have the time to. Bookmarking is for either reading later (which most of the time never happens); saving something on the phone to reply from the computer or vice versa (the phone is awkward to type on, but it has a camera); or things I really want to keep as reference.

To me likes mean two things. The first one is, of course, 'I like what you did, but I don't consider it important enough to boost; I just want to encourage you to do more of that.' For instance, I like when people say that they have added some stuff to OpenStreetMap, made a little better something we all can enjoy, use and contribute to. Sometimes I boost, but most of the time I just like, because I feel like I'm preaching to the choir here. The second: I might like instead of saying 'Thanks'.

Third thing was that 56 people1 followed me after the experiment toot. Getting an extra 20% of followers in 48h feels weird. I don't know why people follow me, but this is definitely unexpected. I did not expect people to follow me just because I am doing this experiment; maybe they just saw something else in my feed and decided to follow :shrug:.

Fourth thing that this experiment did was to break my notifications. I mentioned that I use Phanpy, which luckily does not really send any notifications, it just adds a dot to the bell icon, so there has been no real intrusion in my life, but it means that now that dot by the bell is not useful for me anymore4 :) It just sits there because people keep boosting the toot. I have Phanpy configured with four columns: notifications, my home screen, the local feed for my server, and bookmarks (see above). I can see when notifications arrive in that left column, but the icon doesn't work anymore. I also use Phanpy on my mobile phone (no, no dedicated app; I don't consider Mastodon a critical enough service to require my attention when something happens), where I only have one column and the bell icon is completely useless4. I'm not sure if other Mastodon clients have this feature, but Phanpy has the option to only show notifications that have mentions, meaning replies. Phanpy has also long grouped notifications such as boosts and likes into a single element; I think newer Mastodon versions have this too. These three features have allowed me to keep effectively using this Mastodon account these last two5 days.

Finally, people started commenting about it. Someone wondered if this was GDPR compliant. I am no GDPR expert. I know that it's all about what you do with personal information. I don't have much personal information from the people that have boosted the toot. I could maybe ask the Mastodon API, but this is not the reason for the experiment. See the last paragraph for more details. The only thing I have is the Apache log, which records, among other things, the client's IP and user agent. The latter I will use to distinguish Fediverse services from actual people following the link. Then in 15 or so days logrotate will rotate these logfiles out of existence. There is no permanent storage whatsoever; this is just a basic home server setup, nothing fancy. There is not even a log aggregator or anything I can use to massage log lines apart from what I already explained.

People started asking about the results. Sorry people, you will have to be patient with me. I didn't expect to write this post, but now that I am doing it I think it's more important than the raw numbers. They say that they have bookmarked the post so they can come back later and see what happened. What I usually do in these cases is to edit the original post so anybody who has interacted with the toot gets a notification. But that is fine when it's just a few boosts and likes; notifying more than 2.4k people when only a handful has expressed interest in the outcome sounds like too much. So I will do exactly what they suggest: just reply to my own toot with the link to this post, and later with another linking to the future technical post.

One person started by sending a private message to another person who had evidently boosted the toot, telling them in Dutch that it was probably a scam and suggesting they unboost the toot. I answered that this is no scam, that it is just an idiot trying to do an experiment that went out of hand. And they told me the experiment has been done a million times, which is totally right, and that I am "profiting from friendly people who risk getting muted or blocked because I don't want to be bothered by experiments in my timeline". My answer to that was, literally, "I would have to think about this", and this is what spurred this post. I mean, I already did some meta commentary before, where I talked about how things that I felt really important and pressing got mostly ignored while silly stuff like antipodes or a stupid experiment got more attention.

I can understand the part of where something amusing gets a lot of attention. I understand people that are using Mastodon to amuse themselves, mostly because I do that too. But "profiting from friendly people", I never thought it that way. I mean, maybe this person is an idiot, but I try to think when my actions have unintended consequences. And despite the tone, it made me think, which is good. "Profiting", I kind of am, because I'm asking for a favor from people I don't know. But I didn't force anybody to do what they did. Yes, initially the toot did not explain what it was for, but I edited it not even 15 minutes later mentioning what it was for, and I think that's when it really took off. People have been genuinely boosting this on their own accord. I have no leverage to force anybody to do it; if they do it, they do it because they want. And if that makes this person mute or block people because of that, it's on them. But it took me the whole day to figure this out.

Another person said several things. One was that the domain name suggests I'm trying to DDoS Mastodon. Yeah, the name was very poorly chosen. It was a punny name in the sense that I just replaced the "do" in Mastodon with "DDoS", and it becomes something like "mastodohsn't", which completely smashes the meaning of everything. It's a crappy domain name, I agree. He mentions not visiting ddns.net domains. I didn't think about that. I had a dynamic domain between 2003 and 2012 for my home server; back then it was the easiest way to get a domain name. Now I use one that my ISP provides me. But for this experiment I set up a completely different domain name, mostly because I didn't want my personal domain name posted publicly. So far I have maintained a very low profile about it; I have definitely never posted a link to my home server on Mastodon. He mentions that chain letters are already being extensively researched. This is not what I'm doing; I'm interested in how much impact posting a link on the Fediverse has on web servers. "Either you have criminal intent or you are not a scientific researcher". I am not a scientific researcher, but I am not a criminal either. He mentions Samy Kamkar. Samy Kamkar created and released in 2005 the fastest-spreading virus of all time, the MySpace worm Samy, and was subsequently raided by the United States Secret Service under the Patriot Act. My answer to him was that it's a very poorly devised project that went out of hand, which is a very honest and accurate answer. I am definitely very far from being anything similar to Samy Kamkar. Again, I'm just going to peruse some logs. I'm going to try to figure out how to monitor those logs. And that's all. There is no other hidden agenda. There is no other interest. This is my home server. The logs were not going to be retained for any long period of time.

So, in conclusion, to whoever has boosted this, thank you. This is what I asked you to do and this is what you did, good job :) To whoever liked it, thank you too, I guess. To whoever has followed me because of that toot, welcome to a random guy's eclectic feed, I hope you stay. To those who have sent me messages because of the experiment or the toot or some other technical stuff, thank you for being so interested. Thank you to those who made me doubt myself, because it's one of the ways I grow: by double checking myself and deciding whether what I did was right, or whether it was wrong and I should apologize.

And I think that's it. I hope to write the analysis soon. Right now I'm getting around one hit per minute, so the analysis will probably be for tomorrow night, when I will have the time to sit down and work on this.


  1. The toot was posted on my Saturday noon. This post was dictated on Sunday night, edited a bit on Monday night, and cleaned up on Friday night. I'm not going to update the dates or values because they don't change much and I wouldn't like them to be accidentally out of sync. 

  2. I think it doesn't exist anymore, because Mastodon instances remove old toots. 

  3. https://en.wikipedia.org/wiki/Antipodes 

  4. It was not useful until Thursday; today Friday it became useful again. 

  5. Four. 

dpkg -S does not handle symlinks

Just a quick one: dpkg -S does not handle symlinks, and now that the usrmerge has mostly happened (I think?), it's getting more annoying:

mdione@ioniq:~$ dpkg -S /lib/x86_64-linux-gnu/libgexiv2.so.2.14.3
dpkg-query: no path found matching pattern /lib/x86_64-linux-gnu/libgexiv2.so.2.14.3

mdione@ioniq:~$ dpkg -L libgexiv2-2:amd64
/usr/lib/x86_64-linux-gnu/libgexiv2.so.2.14.3

mdione@ioniq:~$ namei -lx /lib/x86_64-linux-gnu/libgexiv2.so.2.14.3
f: /lib/x86_64-linux-gnu/libgexiv2.so.2.14.3
Drwxr-xr-x root root /
lrwxrwxrwx root root lib -> usr/lib
drwxr-xr-x root root   usr
drwxr-xr-x root root   lib
drwxr-xr-x root root x86_64-linux-gnu
-rw-r--r-- root root libgexiv2.so.2.14.3

/lib has been moved to /usr/lib and symlinked back for compatibility reasons. There's a 22 year old (!!!) bug about it, which was deemed important 8 years ago. I hope it gets fixed soon :)
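In the meantime, the workaround is to resolve the symlinks before asking dpkg, i.e. dpkg -S "$(realpath /lib/...)" on the shell. As a sketch in Python (the final dpkg call is only shown in a comment, and the helper name is mine):

```python
import os

def dpkg_pattern(path):
    """Resolve all symlinks so the path matches what's in dpkg's database.

    On a merged-usr system this turns /lib/... into /usr/lib/...,
    which is the path dpkg -S actually knows about.
    """
    return os.path.realpath(path)

# then something like:
# subprocess.run(["dpkg", "-S", dpkg_pattern("/lib/x86_64-linux-gnu/libgexiv2.so.2.14.3")])
```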

Measure your optimizations

One of the consequences of having my own map style with hypsometric contour lines is that I have to generate those contour lines. GDAL has a tool, gdal_contour, that works on DEM files and can generate shapefiles with contour lines that mapnik can read. But since my source files are 1x1° files, I would have to generate one layer for each shapefile, and that doesn't scale very well, especially at planet size.

So what I do is convert those shapefiles to SQL files and inject them into my database one by one; then I can use mapnik's own support for filtering by bbox when rendering, so that should be faster4.

I put the SQL files in my file system and import them by hand as I need them, and I'm running out of space again. A few years ago I had a 1TB disk and that was enough; now I am at a 2TB disk and it's getting small. I have the impression that the new DEMs I am using are bigger, even if I streamlined every layer so it uses as little space as possible.

One of the things I'm doing is converting my processing script into a Makefile, so I can remove intermediary files. My process starts from the original DEM files, which are in LatLon, and projects them to WebMercator. This reprojected file becomes the source for the terrain files, which give the hypsometric tints; I also generate the contours from it, and then I do a compensation for slope shade and hill shade. Notice that I get two intermediary files that I can easily remove: first, the reprojected file, because once I have the terrain and contour files I don't care about it anymore; and also the compensated file, which I don't need once I have the shade files. The Makefile covers that part: once the final files are generated, the intermediary files are gone.

Going back to the SQL files: I don't inject the SQL data directly into my database, because I don't have space for that. So I just generate the SQL file and compress it, so it's not using so much space, because SQL is really a lot of text. I've been using xz as the compressor, and I have been blindly using its highest compression level, CL 9. What do I mean by blindly? I noticed it actually takes a lot of time. I just measured it with one tile, and it took 451 seconds. That's 7.5 minutes per degree tile, which is a lot. So I asked myself: what's the compression ratio to time spent ratio?
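The measurement can be sketched with Python's lzma module, which implements the same algorithm as the xz tool (the actual numbers below came from the xz command line itself, and the filename in the comment is made up):

```python
import lzma
import time

def benchmark_levels(data, levels=range(1, 10)):
    """Compress `data` at each preset and record time, size and ratio."""
    rows = []
    for level in levels:
        start = time.perf_counter()
        compressed = lzma.compress(data, preset=level)
        elapsed = time.perf_counter() - start
        rows.append((level, elapsed, len(compressed), len(compressed) / len(data)))
    return rows

# e.g. with the SQL dump of one 1x1° tile:
# rows = benchmark_levels(open("N45E006.sql", "rb").read())
```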

I took a single file and I compressed it with all the compression levels between 1 and 9, and I took the time and the space in the final file. I made a scatter graph, and it looks like this pretty weird Z figure2:

Here's the raw data1:

level time_in_seconds readable_time size_in_bytes comp_ratio
1 57.84 57s 129_486_376 29.21%
2 117.40 1m57s 129_993_440 29.33%
3 252.28 4m12s 130_306_780 29.40%
4 212.26 3m32s 102_359_596 23.09%
5 347.51 5m47s 98_992_464 22.33%
6 344.58 5m44s 99_114_560 22.36%
7 370.20 6m10s 99_043_096 22.34%
8 416.48 6m56s 99_005_352 22.33%
9 451.85 7m31s 99_055_552 22.35%

I'm not going to explain the graph or table, except to point at the two obvious parts: the jump from CL 3 to 4, which is not only the first and only noticeable space gain, it also takes less time; and the fact that compression levels 1-3 and 4-9 show almost no change in space gained within each group. So I either use CL 1 or 4. I'll go for 1, until I run out of space again.

All this to say: whenever you make an optimization, measure all the dimensions, time, space, memory consumption, and maybe you have other constraints like, I don't know, heat produced, stuff like that. Measure and compare.


  1. Sorry for the ugly table style. I still don't know how to style it better. 

  2. Sorry for the horrible scales. Either I don't know it enough, or LibreOffice is quite limited on how to format the axises3

  3. No, I won't bother to see how the plural is made, this is taking me long enough already :-P 

  4. This claim has not been proven and it's not in the scope of this post. 

Writing a tile server in python

Another dictated post111, but heavily edited. Buyer beware.

I developed a tileset based on OpenStreetMap data and style and elevation information, but I don't have a render server. What I have been doing is using my own version of an old script from the mapnik version of the OSM style. This script is called generate_tiles; I made big modifications to it and now it's capable of doing many things, including spawning several processes for handling the rendering. You can define regions that you want to render, or you can just provide a bbox, a set of tiles, or just coordinates. You can change the size of the metatile, and it handles empty tiles: if you find a sea tile, most probably you will not need to render its children9, where the children are the four tiles just under it in the next zoom level. For instance, at zoom level zero we have only one tile, (0,0,0), and its children are (1,0,0), (1,0,1), (1,1,0) and (1,1,1). 75% of the planet's surface is water, and with the Mercator projection and the Antarctic Ocean, the percentage of sea tiles could be even bigger, so this optimization cuts a lot of useless rendering time.
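The children relationship is just the standard slippy-map tile pyramid; a minimal helper to make it concrete:

```python
def children(z, x, y):
    """The four tiles just under (z, x, y) in the next zoom level."""
    return [(z + 1, 2 * x + dx, 2 * y + dy) for dx in (0, 1) for dy in (0, 1)]

# children(0, 0, 0) gives (1,0,0), (1,0,1), (1,1,0) and (1,1,1)
```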

Another optimization is that it assumes that when you render zoom level N, you will be using at least the same data for zoom level N+1. Of course, I am not caching that data myself, because mapnik does not allow this, but the operating system does the caching. So if you have enough RAM, you should be able to reuse all the data that's already in buffers and cache, instead of having to fetch it again from disk. In theory this should accelerate rendering, and probably it does10.

The script works very well, and I've been using it for years already for rendering tiles in batches for several zoom levels. Because my personal computer is way more powerful than my server (and younger; 2018 vs 2011), I render in my computer and rsync to my server.

So now I wanted to make a tile server based on this. Why do I want to make my own and not use renderd? I think my main issue with renderd is that it does not store the individual tiles; it keeps metatiles of 8x8 tiles and serves the individual tiles from there. This saves inode usage and internal fragmentation. But since my main usage so far has been (and probably will continue to be) rendering regions by hand, and since my current (static) tile server stores all the latest versions of the tiles I have rendered since I started doing this some 15 years ago, I want updating the server to be fast. Most tile storage methods I know fail terribly at update time (see here); most of the time it means sending the whole file over the wire. Also, individual tiles are easier to convert to anything else, like creating an MBTiles file, pushing it to my phone, and having an offline tile service I can carry with me on treks where there is no signal. Also, serving the tiles can be as easy as python -m http.server from the tileset root directory. So renderd is not useful for me. Another reason is, well, I already have the rendering engine working. So how does it work?

The rendering engine consists of one main thread, which I call the Master, and rendering threads3. These rendering threads load the style and wait for work to do. The current style file is 6MiB+ and takes mapnik 4s+ to load and build all its structures, which means these threads should be created only once per service lifetime. I have one queue through which the Master sends commands to the renderer pool, asking for a metatile to be rendered, which is faster than rendering the individual tiles. One of the rendering threads picks the request from this queue, calls mapnik, generates the metatile, cuts it into the subtiles and saves them to disk. The rendering thread then posts in another queue, telling the Master about the children metatiles that must be rendered, which due to emptiness can be between 0 and 4.

To implement the caching optimization I mentioned before, I use a third structure to maintain a stack. At the beginning I push the initial work into it; later I pop one element from it, and when a renderer returns the list of children to be rendered, I push them on top of the rest. This is what tries to guarantee that a metatile's children will be rendered before moving to another region that would trash the cache. And because the renderers can inspect the tiles as they are being written, they can figure out when a child is all sea tiles and skip returning it for rendering.

At the beginning I thought that, because the multiprocessing queues are implemented with pipes, I could use select()4 to see whether the queue was ready for writing or reading and write a typical non-blocking loop. These queues block on write when the queue is full, and block on read when the queue is empty. But these two conditions, full and empty, are actually handled by semaphores, not by the size of the pipe. That means that even if I could reach all the way down into the structures of the multiprocessing.Queue and add its pipes to a selector, only half of it would work: the read end would correctly not be selected while the queue is empty (nothing to read), but the write end would be selected even when the queue is full, since availability of space in the pipe does not mean the queue is not full.

So instead I'm peeking into these queues. For the work queue, I know that the Master thread8 is the only writer, so I can peek to see if it is full. If it is, I am not going to send any new work to be done, because it means that all the renderers are busy and the only work queued has not been picked up yet. For the reading side it's the same: Master is the only reader, so I can peek whether it's empty, and if it is, I am not going to try to read any information from it. So I have a loop, peeking first into the work queue and then into the info queue. If nothing has been done, I sleep a fraction of a second.
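A minimal sketch of that loop, under my assumptions about the actual code (the names are mine; full() and empty() are the documented peeks on multiprocessing.Queue, and they are only safe here precisely because the Master is the single writer of one queue and the single reader of the other):

```python
import multiprocessing as mp
import time

def master_loop(work, info, pending, handle, should_stop):
    """Non-blocking main loop: peek at both queues instead of blocking.

    work    -- bounded mp.Queue of metatiles to render (Master is sole writer)
    info    -- mp.Queue of children lists from renderers (Master is sole reader)
    pending -- plain list used as the work stack
    """
    while not should_stop():
        busy = False
        # safe peek: nobody else writes to `work`, so full() can't go stale
        if pending and not work.full():
            work.put(pending.pop())
            busy = True
        # safe peek: nobody else reads `info`, so empty() can't go stale
        if not info.empty():
            children = info.get()
            pending.extend(children)  # on top: render them before anything else
            handle(children)
            busy = True
        if not busy:
            time.sleep(0.01)  # nothing to do; don't spin
```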

Now let's try to think about how to replace this main loop with a web frontend. What is the web frontend going to do? It's going to be getting queries by different clients. It could be just a slippy map in a web page, so we have a browser as a client, or it could be any of the applications that can also render slippy maps. For instance, on Linux, we have marble; on Android, I use MyTrails, and OsmAnd.

One of the things about these clients is that they have timeouts. Why am I mentioning this? Because rendering a metatile for me can take between 3 and 120 seconds, depending on the zoom level. There are zoom levels that are really, really expensive, like between 7 and 10. If a client asks a rendering service directly for a tile, and the tile takes too long to render, the client will time out and close the connection. How do we handle this on the server side? Well, instead of the work stack, the server will have a request queue, which will collect the requests from the clients, and the Master will be sending these requests to the render pool.

So if the client closes the connection, I want to be able to react to that, removing any lingering requests made by that client from the request queue. If I don't, the request queue will start piling up more and more requests, creating a denial of service. This is not possible with multiprocessing queues: you cannot remove an element. The only container that can do that is a deque5, which is also optimized for pushing and popping things at both ends (it's probably implemented using a circular buffer), which is perfect for a queue. As for the info queue, I will not care anymore about children metatiles, because I will not be doing any work that the clients are not requesting.
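The removal itself is simple; a sketch with names of my own choosing, assuming each queued request carries some client identifier from the frontend:

```python
from collections import deque

# request queue entries are (client_id, tile) pairs; `client_id` is
# whatever identifies the connection in the web frontend.
requests = deque()

def drop_client(requests, client_id):
    """Remove every request still queued for a client that disconnected."""
    for req in [r for r in requests if r[0] == client_id]:
        requests.remove(req)

requests.extend([("a", (7, 1, 2)), ("b", (7, 1, 3)), ("a", (8, 2, 4))])
drop_client(requests, "a")
# only client b's request remains in `requests`
```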

What framework would allow me to do this? Let's recap the requirements:

  • Results are computed, and take several seconds.
  • The library that generates the results is not async, nor thread safe, so I need to use subprocesses to achieve parallelization.
  • A current batch implementation uses 2 queues to send computations to, and retrieve results from, a pool of subprocesses; my idea is to "just" add a web frontend to this.
  • Each subprocess spends some seconds warming up, so I can't spawn a new process for each request.
  • Since I will have a queue of requested computations, if a client dies while its query is being processed, I let it finish; if it hasn't started yet, I should remove it from the waiting queue.

I started with FastAPI, but it doesn't have the support that I need. At first I just implemented a tile server; the idea was to grow from there6, but reading the docs, it only allows doing long-running async stuff after the response has been sent.

Next was Flask. Flask is not async, unless you want to use sendfile(). sendfile() is a way to make the kernel read a file and write it directly to a socket without intervention from the requesting process. The alternative is to open the file, read a block, write it on the socket, and repeat; that definitely makes your code more complex, as you have to handle lots of cases. So sendfile() is very, very handy, but it's also faster because it's 0-copy. But Flask does not give me control over what happens when the client suddenly closes the connection. I can instruct it to cancel the tasks in flight, but as per all the previous explanation, that's not what I want.
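To illustrate the difference between the two approaches (a toy sketch, not Flask internals; note that Python's socket.sendfile() quietly falls back to plain send() on platforms where the syscall is unavailable):

```python
import socket


def serve_file_manually(sock, path, block=64 * 1024):
    # The manual alternative: read a block, write it to the socket, repeat.
    # Every block round-trips through a userspace buffer.
    with open(path, "rb") as f:
        while True:
            chunk = f.read(block)
            if not chunk:
                break
            sock.sendall(chunk)


def serve_file_zero_copy(sock, path):
    # sendfile(): the kernel copies file -> socket without the data ever
    # entering this process's buffers.
    with open(path, "rb") as f:
        sock.sendfile(f)
```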

This same problem seems to affect all the async frameworks I looked into: asyncio, aiohttp, tornado. Except, of course, twisted, but its API for that is callback-based, and TBH, I was starting to get tired of all this, and the prospect of callback hell, even when all the rest of the system could be developed in a more async way, was too much. And that's not counting the fact that I need to hook into the main loop to step the Master. This could be implemented with timed callbacks, such as twisted's callLater(), but another thought had started to form in my head.

Why did I go directly for frameworks? Because they're supposed to make our lives easier, but from the beginning I had the impression that this would not be a run-of-the-mill service. The main issue came down to being able to send things to render, return the rendered data to the right clients, associate several clients to a single job before it finishes (more than one client might request the same tile or several tiles that belong to the same metatile), and handle client and job cancellation when clients disappear. The more frameworks' documentation I read, the more I started to fear that the only solution was to implement a non-blocking12 loop myself.

I gotta be honest, I dusted off an old Unix Network Programming book, 2nd Ed., 1998 (!!!), read half a chapter, and I was ready to do it. And thanks to the simple selector API, it's a breeze:

  1. Create a listening socket.
  2. Register it for read events (connections).
  3. On connection, accept the client and wait for read events in that one too.
  4. We were not registering for write before because the client is always ready for write before we start sending anything, which led to tight loops.
  5. On client read, read the request and send the job to Master. Unregister for read.
  6. But if there's nothing to read, the client disconnected. Send an empty response, unregister for read and register for write.
  7. Step Master.
  8. If anything came back, generate the responses and queue them for sending. Register the right clients for write.
  9. On client write (almost always), send the response and the file with sendfile() if any.
  10. Then close the connection and unregister.
  11. Loop to #3.
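The steps above can be condensed into a toy version like this one (`serve` and `handle_request` are stand-ins of my own; the real loop also steps Master between select() calls, which is elided here):

```python
import selectors
import socket


def serve(listener, handle_request, max_clients=1):
    # listener: an already bound and listening socket.
    # handle_request stands in for handing the query to Master.
    sel = selectors.DefaultSelector()
    listener.setblocking(False)
    sel.register(listener, selectors.EVENT_READ)           # step 2
    responses = {}
    served = 0
    while served < max_clients:
        for key, _ in sel.select():
            sock = key.fileobj
            if sock is listener:                           # step 3
                client, _addr = sock.accept()
                client.setblocking(False)
                sel.register(client, selectors.EVENT_READ)
            elif key.events & selectors.EVENT_READ:        # step 5
                data = sock.recv(65536)  # assumes whole query in one recv()
                # an empty read means the client disconnected (step 6)
                responses[sock] = handle_request(data) if data else b""
                sel.modify(sock, selectors.EVENT_WRITE)
            else:                                          # steps 9-10
                sock.send(responses.pop(sock))  # assumes one send() suffices
                sel.unregister(sock)
                sock.close()
                served += 1
    sel.unregister(listener)
    listener.close()
```

Just like the real first version, this one cheats on short reads and writes; that's exactly the part a production-ready implementation has to handle properly.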

Initially all this, including reimplementing fake Master and render threads, took less than 200 lines of code and some 11h of on-and-off work. Now that I have finished, I have a better idea of how to implement this at least with twisted, which I think I will have to do, since step 5 assumes the whole query can be recv()'ed in one go, and step 9 similarly for send()'ing; luckily I don't need to do any handholding for sendfile(), even when the socket is non-blocking. A more production-ready service needs to handle short reads and writes. Also, the HTTP/1.1 protocol all clients are using allows me to assume that once a query is received, the client will be waiting for an answer before trying anything else, and that I can close the connection once a response has been sent and assume the client will open a new connection for more tiles. And even then, supporting keep-alive should not be that hard (instead of closing the client, unregister for write, register for read, and only do the close dance when the response is empty). And because I can simply step Master in the main loop, I don't have to worry about blocking queues.

Of course, now it's more complex, because it implements support for multiple clients with different queries requiring rendering the same metatile. This is because applications will open several connections to fetch tiles when showing a region, and unless it's only 4 tiles that fall at the corners of 4 adjacent metatiles, that always means more than one client per metatile. Also, I could have several clients looking at the same region. The current code is approaching 500 lines, but all that would also have to be present in any other implementation.

I'm pretty happy about how fast I could make it work and how easy it was. Soon I'll finish integrating a real render thread, saving the tiles, and handling the fact that if one of a metatile's tiles is not present we can assume that's OK, but if all of them are missing, I have to find out whether they were all empty or never rendered. A last step would be making all this testable. And of course, the twisted port.


  1. This is getting out of hand. The audio was 1h long, not sure how long it took to auto transcribe, and when editing and thinking I was getting to the end of it, the preview told me I still had like half the text to go through. 

  2. No idea what I wanted to write here :) 

  3. Because mapnik is not thread safe and because of the GIL, they're actually subprocesses via the multiprocessing module, but I'll keep calling them threads to simplify. 

  4. Again, a simplification. Python provides the selector module that allows using abstract implementations that spare us from having to select the best implementation for the platform. 

  5. I just found out it's pronounced like 'deck'. 

  6. All the implementations I did followed the same pattern. In fact, right now, I haven't implemented the rendering part of the tile server: it's only blockingly sleep()'ing for some time (up to 75s, to trigger client timeouts), and then returning the tiles already present. What's currently missing is figuring out whether I should rerender or use the tiles already present7, and actually connecting the rendering part. 

  7. Two reasons to rerender: the data is stale, or the style has changed. The latter requires reloading the styles, which will probably mean rebuilding the rendering threads. 

  8. I keep calling this the Master thread, but at this point, instead of having its own main loop, I'm just calling a function that implements the body of such a loop. Following previous usage for such functions, it's called single_step(). 

  9. Except when you start rendering ferry routes. 

  10. I never measured it :( 

  11. Seems like nikola renumbers the footnotes based on which order they are here at the bottom of the source. The first note was 0, but it renumbered it and all the rest to start counting from 1. 

  12. Note that I'm explicitly distinguishing between a non-blocking/select() loop and an async/await system, but keep in mind that the latter is actually implemented with the former. 

Collating, processing, managing, backing up and serving a gallery of a 350GiB, 60k picture collection

In the last two days I have commented a little bit on how I process and manage my photos. I'm not a very avid photographer; I have about 350 gigabytes of photos, around 60,000 of them, most of them not yet processed. So I will comment a little bit more on how I manage all that.

I start with the camera, a 24Mpx camera, just a couple of lenses, nothing fancy. Go out, take some pictures, come back home.

I put the SD card in my computer and use my own software to import it. The import process is not fancy: it just empties the SD card, checks every file for EXIF information, uses the date and time to create the filename (plus a sequence number if needed), and puts them all in a single incoming directory where all the current unprocessed images live1.
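The renaming part of that import can be sketched like this (the exact naming scheme is my guess, and reading the timestamp out of each file's EXIF data is elided):

```python
from datetime import datetime


def import_name(taken_at, existing, ext=".jpg"):
    # taken_at: the EXIF capture timestamp, as a datetime.
    # existing: names already used in the incoming directory.
    base = taken_at.strftime("%Y%m%d-%H%M%S")
    name = base + ext
    seq = 1
    while name in existing:  # burst shots can share one timestamp
        name = f"{base}-{seq:02d}{ext}"
        seq += 1
    return name
```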

Then I use this software I developed in PyQt5. It's very, very basic, but it's really quick, and it's mostly keyboard driven. It reads the EXIF information and presents some of the tags at the left of the screen: things like date, time, size, orientation, and then focal length, aperture, ISO and various other data I can get from the images. It's mostly focused on my current camera and the previous one, both Nikons2. The previous one was an N90, right now it's an N7200. The image occupies most of the window, and the program is always in full screen. At the bottom there's the filename and a couple of toggles.

I can do several things with this:

  • Go forwards, backwards, by one, by ten, by a hundred and by a thousand, because that incoming directory right now has almost seven years of history, probably ten thousand pictures.

  • Move randomly, which allows me to pick up a new thing to collate when I get bored with the current one but I want to keep doing it to reduce the backlog.

  • Mark the images in different ways. The main ones are about selecting for storage, with two modes: one keeps the image at the original size, which I usually use for my best landscape or astro photos; the other resizes it down to twelve megapixels3, from 6000x4000 pixels to 4500x3000 pixels, 75% on each dimension.

  • Rotate the images, just in case the camera did not guess the orientation correctly, usually when I'm taking pictures pointing straight up or straight down.
  • Select several pictures for stitching, which will use hugin to do so. It's not 100% automatic, but at least it puts the pictures in a stitch directory and points hugin there.

  • Select a picture for cropping or editing; I'm not going to develop a whole image editor, so I just delegate to an existing program, gwenview.

  • Select images for deleting and delete them permanently.

  • Select several images for comparison and enter/exit comparison mode, which means that going backwards and forwards applies only to this set. This is good for comparing pictures that are not necessarily consecutive in the original sequence, which for me makes culling images faster.

  • It has two zoom levels, fit-to-screen and full size. I don't have much need for other options.
  • 99% of the pictures I take are freehand, so in a sequence there's always some movement between images. In full size I can give every image its own position, aligning the whole sequence and allowing culling based on blurriness or other factors.

  • Also in full size, I can lock the view, so when I pan one of the images and I switch to another one, it will also pan that second image to that position. It also helps when I'm checking for details between two different images of the same thing.

  • Move all the selected images, resizing them if needed, and put them in a folder. It also hardlinks each image from my categorization tree into a tree that collects all the images by date: there's one folder for each month and year with all the pictures of that month inside. Using hardlinks means the image file is not duplicated, saving space.

  • It also has a readonly mode, so I can hand the computer to my kids to watch the photos.

When culling, I use the comparison mode and individual position and lock view features a lot, going back and forth between images, discarding until only one is left.

That's the first part, the one I must spend my time on, just basic culling, selection and storage. My main tree is just a tree based on my way of categorizing the images.

My program doesn't have a directory view; instead, I just use gwenview again.

Notice there's no photo editing in this workflow. I rarely shoot in RAW, for two reasons: a) I'm really bad at postprocessing; and b) even if I were good, I don't have the time to do it; my free time is shared among several hobbies. I only do it for astro photography and on very few, rare occasions.

The third tool I use is digikam. I use it for two things, which are related: semi-automatic and manual tagging. The semi-automatic is face detection; digikam can find and guess faces, but requires manual confirmation4. The fully manual part is plain tagging, mostly with location5 and sometimes some other info. I sometimes also rate my pictures; I mostly use four and five, sometimes three, only for my best pictures.

Then there's another script that reads the digikam database and uses the tags to create another directory tree, this time by tag, which also uses hardlinks. It still doesn't do anything with the ratings, but I could easily add that.
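The core of such a script could look like this (querying digikam's database itself is elided; `build_tag_tree` and the shape of its input are my assumptions):

```python
import os


def build_tag_tree(images, tag_root):
    # images: (filepath, [tags]) pairs, as they would come out of the
    # digikam database. Each image is hardlinked under every tag it has.
    for path, tags in images:
        for tag in tags:
            tag_dir = os.path.join(tag_root, tag)
            os.makedirs(tag_dir, exist_ok=True)
            link = os.path.join(tag_dir, os.path.basename(path))
            if not os.path.exists(link):
                os.link(path, link)  # hardlink: same inode, no extra space
```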

That's all on my personal computer. I use rsync to make a copy on my home server, which has two purposes. One, it's a backup, which includes all the original 24Mpx images that I haven't culled yet, which I think is the biggest part of my collection.

The second one: it feeds a gallery program that is developed in PHP by a guy named Karl. It's probably the only paid software I use. It's a single PHP file that you put at the root of your gallery; you enable PHP processing in your web server (in my case, Apache), and it generates the gallery on the fly, just reading the directories and creating all the necessary thumbnails and all that. I made one small change to this program. The original algorithm creates thumbnails based on each file's path (and 4 or 5 other attributes, I think), but because I have all these hardlinks, it creates duplicated thumbnail files. So I changed it to use the filename instead of the filepath6.
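The gist of that change, sketched in Python even though the gallery itself is PHP (function names are mine, and the real cache key mixes in more attributes):

```python
import hashlib
import os


def thumb_key_by_path(filepath):
    # The original behaviour: every hardlink has a different path,
    # so every hardlink gets its own thumbnail file.
    return hashlib.sha1(filepath.encode()).hexdigest()


def thumb_key_by_name(filepath):
    # The changed behaviour: all hardlinks of an image share a basename,
    # so they share one thumbnail.
    return hashlib.sha1(os.path.basename(filepath).encode()).hexdigest()
```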

I don't have any kind of synchronization with my phone. Most of the pictures I take with it are not the kind of pictures I usually will put in my own gallery, except the times I go out without my camera and I end up taking pictures anyway. I still don't have a workflow for that, it's mostly manual. So if I ever lose my phone, I'm fscked because I have definitely no backups of it.

That lack of synchronization also means that the only way to see the pictures on my phone is by opening the gallery in the browser. It's not the best, but I don't do that that often. I have tried alternatives like NextCloud, which I also have installed on my home server, but I have some issues with permissions: again, this is a backup directory, so all the files are owned by me instead of the web server, which means NextCloud doesn't have the proper permissions to manage those files. Luckily files.gallery just needs a subdirectory.

Another reason is that before, I was using static gallery generators: sigal, gallerpy or even nikola, which drives this blog. All those can generate the gallery statically, so serving it is much easier. My old home server died at some point and I had to come up with something; I had a spare old laptop laying around and I used that. It's now enough to generate the gallery on the fly. I have plans for something bigger, but that's for another time.


  1. In fact I have another directory for all the unprocessed photos from another era, and I'm thinking of starting a new era. 

  2. Even if EXIF is a standard for storing tags, there's no standard for the tag names, so every manufacturer has its own set, which even changes between camera lines. For a better idea of what I'm talking about, just peruse Image::ExifTool's source code. 

  3. I currently own no screen that is 4500 pixels wide, let alone 6000. Maybe my kids will, but by then Mpx counts will be so different that it won't make any sense to accommodate that. Right now storage is expensive for me, so I'll keep it this way. 

  4. Or rejection: the false positive rate is bigger than I would like, and it doesn't have a way to say 'yes, this is that person, but don't train on this image'. This is the case for pictures where the face is semi-occluded, sometimes painted, sometimes badly lit, and mostly just blurry. 

  5. Most of my pictures don't have GPS info, not even the ones in the phone. The latter I only enable when I really need the info later, mostly for mapping. Later I either discard the photo or remove the info. 

  6. For a while now I've been making this distinction in my own code: filename vs filepath. 

Yuba Kombi + Virvolt 900 - a trip review

Editor's note: another dictated1

Last September or October we bought a Yuba Kombi and took it to a bicycle repairman to install a Virvolt 900, which is a 691Wh battery and an 80Nm, 36V motor. This is slightly above the usual sizes you can find around, but the bicycle is quite heavy: it has the footrests for the kids behind, plus the monkey bars, plus the seat for the youngest, and a quite big basket in the front. That's 24Kg for the bike, then 3Kg for the motor and another 3Kg for the battery, excluding all the accessories.

Today I decided to take it for a long spin. I've been using it for a while, but not for very long distances. Last weekend I took it almost to Callelongue, which is not that far: 20km there and back, with a pit stop at the supermarket for 2 full bags of groceries. It started with less than a full battery and I had to bring it back with some juice left for the next day, and it passed that non-test with good marks.

Today the trip was different: I wanted to get an idea of how long the battery lasts, and for that I selected a mixed trip: a mostly flat section, 34.5km long with only 240m of climb, and a quite steep one at the end, 9.2km long with 408m of climb. Technically I didn't choose that; what I chose was to go to Sainte-Baume, and I started looking at the routes proposed by every router I had available. The first thing I noticed is that many of these routes had a lot of uphill and downhill sections, ups and downs, ups and downs, which meant that there were several instances where every meter I gained I had to go down later and then climb again. That made no sense, especially since electric bikes do not have any kind of energy recovery at all: if you spend energy twice or thrice to climb the same elevation meters, you are basically wasting it.

I remembered BRouter, which is supposed to be a bicycle-centered router, and started playing with it. I also watched a talk from five years ago where the author, Arndt Brenschede, explained that the only energy you have to think about is what you lose to friction against road and wind. I don't think he ever explains it really well.

I chose the trekking bike profile, which was still showing me some ups and downs, so I started picking at it, and I discovered that the profile said going up had no penalty, which I thought was wrong. In fact going down had a penalty of 60, meaning every 100m you go down is equivalent to riding 6km flat. I said "well, if I want to penalize going up, then I'll just put a similar value for going up", and that gave me a route that was mostly flat and almost monotonically going up.
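In these cost terms, a factor translates elevation into equivalent flat distance; a quick sanity check of the 60 figure (the function name is mine):

```python
def equivalent_flat_km(elevation_m, cost_factor):
    # cost_factor: flat meters that each meter of elevation "costs".
    # With a downhill cost of 60, 100m of descent is as expensive
    # as 6km of flat road.
    return elevation_m * cost_factor / 1000.0
```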

The trip2

Of course, all this is according to the data these routers use, which is mostly based on SRTM. This dataset has several issues, the first one being the resolution: it's only 30 meters, and that's 30 meters of longitude at the Equator, but 30 meters of latitude everywhere. I already discussed that the actual longitudinal resolution changes with latitude: every degree becomes smaller and smaller the closer you get to the poles, but I guess that's taken into account by the router. And even if it's a DEM, meaning it doesn't include trees and buildings, the level of detail for the road network can be really bad, and for ebikes and acoustic bikes, every meter counts.
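That ~30m figure is one arcsecond at the Equator; a quick way to see how much an arcsecond of longitude shrinks with latitude (the 111,320 m/degree value is the usual approximation):

```python
import math


def arcsecond_of_longitude_m(latitude_deg):
    # Meters per degree of latitude (and of longitude at the Equator),
    # roughly; a degree of longitude shrinks with cos(latitude).
    meters_per_degree = 111_320
    return meters_per_degree / 3600 * math.cos(math.radians(latitude_deg))
```

At the Equator this gives about 30.9m; around 43°N (roughly Marseille) it's already down to about 22.6m.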

On the way up I stopped several times: to buy some food; to eat said food, get some water and rest a little bit; to take some pictures, because to me that's part of going around. Everything was fine, but my legs were getting really tired when I arrived at Saint-Zacharie, which is only 270 meters above sea level, right before the steep climb. So far I had mostly used level 2 out of five assistance levels, and gears 7-9 out of nine, which is the same as what I use in the city when I'm alone. This is an assistive motor, not a pulling motor, so it only works when you work. That means I had pedaled almost constantly for 35km when I got to this part. I switched to the lighter gears, between 6 and 3, and to assistance levels 4 and 5.

When I reached Plan d'Aups I was not really tired; it was a fun experience on a really nice day. Then I decided to go down, and I made one mistake: not taking the same route back home, but a shorter one that not only climbed 55m higher, it also had 80m of extra ascent6, for a total of 135m more than the original plan. I had only two bars out of five left, and like many other electric vehicles, when this system gets down to one bar, it stops giving much assistance to keep the battery from fully draining. My plug-in hybrid car does the same: when you reach the last 25% of the battery, it disables the full electric mode; at first it starts the ICE motor to charge the battery, then to drive the car, and there's no way to turn it off, even if you are close to your destination. I guess other cars would offer this option if you used the internal routing system, but those maps are usually really crappy and outdated; or, if they're not out of date, it probably means the car has a 4G chip that also sends a lot of telemetry home. That's a can of worms that I'll keep in the rack for a while.

I took the route down, which was really fun, even if the bike is not the best for that. In particular, the wheels do not have road tires, but more like gravel tires with lugs. I think it loses a lot of energy to friction, the same way you can feel the difference between good asphalt and bad or old asphalt. Despite the steep descent, I could not make it go above 50km/h :)

When I reached Geménos I had only one bar3 and the system started to skimp on energy, and that's when I got to the flats. I decided to save the battery, so I turned the assistance off and started pedaling back home. It was still more than 20 kilometers away and I was really tired. I'm definitely out of shape for this kind of thing; the whole trip was around 80km and 785m of climb, something I definitely could not have done on an acoustic bike; I barely could with this one. Let's see tomorrow what my body says4.

From there the way back home was really slow. I stopped several times, which allowed my legs to rest a little bit, to take a few pictures, and to find directions, making sure I was on the right road. I hit another issue: one of the roads the router told me to take was closed, and the only sensible option was another route that climbed some 15m in about 300m. I was definitely very, very tired by then and my left knee started to hurt, so at this point I was miserable and still had more than 15 kilometers to go.

I made a few other stops, bought a coffee and sweets in a bakery, and just went back, really, really careful about what I was doing, taking advantage of every downhill section and going at a pace that was not painful on every uphill one. It's a shame that my recording phone didn't work; the GPX would have shown how bad this was compared to being able to use the assistance, even when I was technically going down.

I'm now in the phase of preparing another testing trip: Clermont-Ferrand - Mont-Dore, some 45km +680m/+-380m6. My idea is to stop somewhere in the middle and try to recharge the battery, but the region is so sparsely populated that even finding a bakery, let alone a restaurant, means extending the trip by a few kms and +- meters. One route, passing by the single restaurant I could find in the general direction, adds 1.4km and +-120m; another, going by a charging station, adds 4.2km and only +-30m, but it means having a snack, waiting in a parking lot, and going off the scenic road I wanted.

I have several rabbit holes I would like to go down and those include:

  • Figure out whether I want a big bicycle like this or a smaller one with smaller tires (I still want to carry two young children for a while). Smaller tires mean a worse riding experience, especially since they don't have any suspension at all; but a lighter bike with the same system would have more range! One thing I know is that I'm not going to make this kind of trip with my kids. They're still young (almost 5 and 7yo) and they would get bored in the uninteresting or monotonous sections of the trip. It would also be a transitional bike; once they grow old enough, they will get their own bikes, but that's a few years from now.

  • Could we collect GPS data to improve the data these routers use for ascent/descent/slope calculations?

  • What router parameters make sense for ebikes5? I used my intuition to go against what the original developer decided, and they seem way more informed than me.

  • What about buying an extra battery to extend the range and also as backup?

  • Would commerces like restaurants, bars and bakeries allow or even offer recharging services, free or for a fee? I thought of asking to recharge the battery at the bakery, but it was one of those chain things and it could have put the single employee in trouble. Recharging a smartphone is one thing; a 700Wh ebike battery is another.

I'll probably only check the first one, since I do want to buy a bicycle.


  1. So dictated that I couldn't even finish this sentence while editing :) 

  2. This trip is not the one I parametrized but the same one I got just using a better profile. 

  3. My mistake here was to use assistance on the way down, so the motor was helping me reaccelerate after each hairpin, all 15 of them. 

  4. My left knee complained for a couple of days, but that was all. 

  5. And EVs in general. 

  6. My notation here is: distance in km, plain ascent in m, extra ascent in m. The extra ascent is meters that are climbed both up and down. Routers usually speak in terms of plain ascent (difference in elevation between start and destination) and (total) ascent, but I prefer to highlight the extra elevation. 

Monitoring the maximum latency for a scrape period with prometheus

Ever since I watched (several times) Gil Tene's talk about how we are measuring latency wrong, I set myself to try to get such values in a graph. His thesis is that we are using histograms and other similar tools for measuring percentiles of our response times, and that this is wrong because we never see the maximum time; our tools hide it behind averages. What can we do about it with our current tools, in particular Prometheus and Grafana? Let's first define exactly what we want:

On one side we have a stream of log lines. These lines will be parsed and some values will be extracted from them. On the other side we have Prometheus trying to scrape a representative of those values. The representative that we want for every scrape is the max of all the values collected in that period between scrapes. For the next scraping period, we must forget this max and calculate the new one, and so on for each scrape interval.

Prometheus offers us three different metric types that could allow us to get such value. The first and obvious one would be a gauge, but there is no tool that does what I described in the previous paragraph.
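What such a tool would look like if it existed: a gauge that reports the max seen since the last scrape and then forgets it. This is a sketch of the missing piece, not an existing Prometheus client API:

```python
import threading


class MaxGauge:
    """Record observations; report and reset the max on every scrape."""

    def __init__(self):
        self._lock = threading.Lock()
        self._max = None

    def observe(self, value):
        with self._lock:
            if self._max is None or value > self._max:
                self._max = value

    def collect(self):
        # Called once per scrape: hand out the current max and forget it,
        # so the next scrape period starts fresh.
        with self._lock:
            value, self._max = self._max, None
            return value
```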

The second one would be a histogram. For a histogram, we define a set of values that become boundaries. For each interval defined by those boundaries, the histogram gives us the amount of values that fall in it, and also below the lowest boundary and above the highest one. This is not enough, because we can never get even a ballpark of what the maximum value is, much less its actual value. The max is lost among all the values that fall in the interval above the last boundary we define; we only know which interval contains the max. To at least get a ballpark you need to figure out an upper boundary; otherwise the max could be anywhere between the highest boundary and infinity. Maybe you're lucky and you have something like a timeout, but it's not often that a timeout fires during active data transmission. On the other hand, if you're measuring transfer times for a service that returns files, you don't have an upper bound on how long it can take, because that depends on the whole network path and the size of the file.
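A tiny illustration of why the max is unrecoverable from a histogram (the boundaries are made-up example TTFB buckets, in seconds):

```python
import bisect


def bucket_for(value, boundaries):
    # A histogram only records which bucket a value fell into.
    # Anything above the last boundary lands in the open-ended +Inf
    # bucket, so a 0.8s and an 80s observation look identical there.
    i = bisect.bisect_left(boundaries, value)
    return boundaries[i] if i < len(boundaries) else float("inf")
```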

The last one is summaries. Summaries work slightly differently: instead of partitioning in terms of values, they partition in terms of percentiles. One aspect of summaries is that you cannot aggregate them; not only does Prometheus' documentation mention this, it also cites one of Gil Tene's articles about it1. But one of the peculiarities of percentiles is that the 0th percentile is the minimum and the 100th percentile is the maximum. And as Gil Tene mentions in that same article, those are the only two values you can aggregate: if you have all the minimums, the global minimum is just the minimum of all those values, and similarly for the maximum.
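That aggregation property is trivial to state in code (a sketch; `per_instance` stands for the (min, max) pairs scraped from each target):

```python
def aggregate_extremes(per_instance):
    # per_instance: (min, max) pairs, one per scraped target.
    # Unlike any other percentile, the 0th and 100th aggregate cleanly:
    # global min is the min of mins, global max is the max of maxes.
    return (min(lo for lo, _ in per_instance),
            max(hi for _, hi in per_instance))
```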

Now, how do we actually implement it? As I mentioned before, we have two tools for converting log lines into metrics for Prometheus: one is mtail and the other is the Grok exporter. mtail doesn't seem to have any support for summaries, so we are left with only the Grok exporter.

I tried to set up a Grok exporter configuration to do exactly that: only the minimum and the maximum for a given value, which in my case was time to first byte (TTFB), because I'm trying to apply it to a service that returns files as responses. The total latency of such a service depends on the size of the file, but TTFB should not, so that's what I'm interested in.

I want the minimum and the maximum, so I set up a metric for that: I define it as a summary and set it up for the 0th and 100th percentiles. And this is where the problem starts getting slightly more complicated.

First, the Grok exporter uses a library for computing the percentiles. That library does not accept just an array of the percentiles you are interested in; quoting the docs:

grok_exporter does not provide exact values for the quantiles, but only estimations. For each quantile, you also specify an uncertainty that is tolerated for the estimation

I don't know how much that impacts the two values I'm looking for, minimum and maximum, but I started with a 1% tolerance just to try:

- type: summary
  name: elevation_tile_ttfb_extremes
  help: Minimum and Maximum for TTFB
  match: '%{ELEVATION_LOG}'
  value: '{{multiply .first_byte 0.001}}'  # µs to ms
  quantiles: { 0: 0.01, 100: 0.01 }  # min, max
  max_age: 15s

And it didn't work.

First, it started complaining that, when calculating some values, it was trying to convert the string it got from the log lines into a number, but some of the strings were empty, so it was failing:

grok-exporter[1402204]: WARNING: skipping log line: error processing metric elevation_tile_ttfb:
    unexpected error while evaluating template for label __value__: template: __value__:1:2:
    executing "__value__" at <multiply .first_byte 0.001>: error calling multiply: error executing multiply function:
    cannot convert  to floating point number: strconv.ParseFloat: parsing "": invalid syntax

(I had to split it into several lines so you don't have to scroll horizontally [so much]. I'm not going to comment on whether Golang's tendency to wrap errors until, as shown above, the chain of messages becomes a kind of horizontal stack trace is a good thing or not.)
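The root cause is just ParseFloat being handed an empty capture. Purely as an illustration (this is Python, not the exporter's Go, and parse_ttfb is a name I made up), the failure and the obvious guard look like this:

```python
# Converting an empty capture to a number fails in any language;
# a small guard that skips (or defaults) such lines avoids the crash.
def parse_ttfb(raw, default=None):
    try:
        return float(raw)
    except ValueError:
        return default

print(parse_ttfb("123"))  # 123.0
print(parse_ttfb(""))     # None: empty capture, skip instead of crashing
```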

Once I fixed that, I instead got a panic pointing to an index error inside that library:

2024/03/10 13:42:33 http: panic serving 127.0.0.1:49912: runtime error: index out of range [99] with length 1
goroutine 39 [running]:
net/http.(*conn).serve.func1(0xc000420000)
        /usr/local/go/src/net/http/server.go:1801 +0x147
panic(0xb7d0e0, 0xc000038060)
        /usr/local/go/src/runtime/panic.go:975 +0x3e9
github.com/fstab/grok_exporter/vendor/github.com/beorn7/perks/quantile.(*Stream).Query(0xc00045cd20, 0x4059000000000000, 0x4051e3d70a3d70a4)
        /go/src/github.com/fstab/grok_exporter/vendor/github.com/beorn7/perks/quantile/stream.go:165 +0x16f
github.com/fstab/grok_exporter/vendor/github.com/prometheus/client_golang/prometheus.(*summary).Write(0xc000144c00, 0xc000175b20, 0x3, 0xc00023f412)
        /go/src/github.com/fstab/grok_exporter/vendor/github.com/prometheus/client_golang/prometheus/summary.go:310 +0x3c7
github.com/fstab/grok_exporter/vendor/github.com/prometheus/client_golang/prometheus.processMetric(0xca4840, 0xc000144c00, 0xc00023f5c0, 0xc00023f5f0, 0x0, 0x0, 0x0)
        /go/src/github.com/fstab/grok_exporter/vendor/github.com/prometheus/client_golang/prometheus/registry.go:596 +0xa2
github.com/fstab/grok_exporter/vendor/github.com/prometheus/client_golang/prometheus.(*Registry).Gather(0xc000100410, 0x0, 0x0, 0x0, 0x0, 0x0)
        /go/src/github.com/fstab/grok_exporter/vendor/github.com/prometheus/client_golang/prometheus/registry.go:490 +0x8ca
github.com/fstab/grok_exporter/vendor/github.com/prometheus/client_golang/prometheus/promhttp.HandlerFor.func1(0x7fe9cd2298f8, 0xc00050e280, 0xc00022e300)
        /go/src/github.com/fstab/grok_exporter/vendor/github.com/prometheus/client_golang/prometheus/promhttp/http.go:126 +0x99
net/http.HandlerFunc.ServeHTTP(0xc00017a230, 0x7fe9cd2298f8, 0xc00050e280, 0xc00022e300)
        /usr/local/go/src/net/http/server.go:2042 +0x44
github.com/fstab/grok_exporter/vendor/github.com/prometheus/client_golang/prometheus/promhttp.InstrumentHandlerInFlight.func1(0x7fe9cd2298f8, 0xc00050e280, 0xc00022e300)
        /go/src/github.com/fstab/grok_exporter/vendor/github.com/prometheus/client_golang/prometheus/promhttp/instrument_server.go:40 +0xab
net/http.HandlerFunc.ServeHTTP(0xc00050d170, 0x7fe9cd2298f8, 0xc00050e280, 0xc00022e300)
        /usr/local/go/src/net/http/server.go:2042 +0x44
github.com/fstab/grok_exporter/vendor/github.com/prometheus/client_golang/prometheus/promhttp.InstrumentHandlerCounter.func1(0xca9840, 0xc00024a000, 0xc00022e300)
        /go/src/github.com/fstab/grok_exporter/vendor/github.com/prometheus/client_golang/prometheus/promhttp/instrument_server.go:100 +0xda
net/http.HandlerFunc.ServeHTTP(0xc00050d260, 0xca9840, 0xc00024a000, 0xc00022e300)
        /usr/local/go/src/net/http/server.go:2042 +0x44
net/http.(*ServeMux).ServeHTTP(0x131bd20, 0xca9840, 0xc00024a000, 0xc00022e300)
        /usr/local/go/src/net/http/server.go:2417 +0x1ad
net/http.serverHandler.ServeHTTP(0xc000196000, 0xca9840, 0xc00024a000, 0xc00022e300)
        /usr/local/go/src/net/http/server.go:2843 +0xa3
net/http.(*conn).serve(0xc000420000, 0xcac140, 0xc000244000)
        /usr/local/go/src/net/http/server.go:1925 +0x8ad
created by net/http.(*Server).Serve
        /usr/local/go/src/net/http/server.go:2969 +0x36c

In my last post about this, I mentioned that the Grok exporter has been unmaintained for four years: nobody has written a single line of code since, there are many issues and even several pull requests still open, and nothing has happened in all that time. And this library, the one that calculates the percentiles, is even worse: it was last updated five years ago, which means there's no chance that, even if I report the bug, I will get releases of both the library and the Grok exporter fixing it. And I don't feel like reading a whole statistics paper at this point of my life just to solve this problem2.

So what options do we have? Well, I think there's only one: writing our own tool that fulfills the description in the second paragraph. I'm not really sure I have the energy to do it, but I think it will keep bouncing around my head until I succumb and do it, mostly because I think it's important.

But until that moment occurs, I have no solution for this problem, and that means that in the meantime I will have to live with histograms.
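For the record, the core of such a tool wouldn't need any quantile estimation at all. A rough sketch (all names are hypothetical) that tracks the exact extremes and renders them in the Prometheus text exposition format, resetting on every scrape:

```python
import threading

class Extremes:
    """Track the exact min and max observed since the last scrape."""
    def __init__(self, name):
        self.name = name
        self._lock = threading.Lock()
        self._min = self._max = None

    def observe(self, value):
        with self._lock:
            if self._min is None or value < self._min:
                self._min = value
            if self._max is None or value > self._max:
                self._max = value

    def collect(self):
        """Render as two gauges in the Prometheus text format, then reset,
        so every scrape covers exactly one window, with exact values."""
        with self._lock:
            mn, mx = self._min, self._max
            self._min = self._max = None
        if mn is None:
            return ''
        return (f'{self.name}_min {mn}\n'
                f'{self.name}_max {mx}\n')

ttfb = Extremes('elevation_tile_ttfb')
for v in (12.5, 3.0, 47.1):
    ttfb.observe(v)
print(ttfb.collect())  # elevation_tile_ttfb_min 3.0 / ..._max 47.1
```

Note the design choice: resetting on collect means the window is "since the last scrape" rather than a fixed max_age, but in exchange the values are exact, not estimations.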



  1. It's actually interesting that Prometheus people are aware of Gil Tene's ideas and yet don't have a solution for them. 

  2. I remember being a fervent free software advocate for reasons exactly like this. But these days I have neither the energy, nor do I think I can convince my manager to pay someone to fix it. 

Sending AWS CloudWatch alarms through SNS to MSTeams

I'm new to AWS, so please take the following statements with a grain of salt. Also, I'm tired, but I want to get this off my chest before the weekend begins (although, technically, it has already begun), so it might not be very coherent.

AWS provides some minimal monitoring of your resources with a tool called CloudWatch. Think of prometheus + grafana, but more limited. Still, it's good enough that it makes sense to set up some Alerts on it. Many of AWS's resources are not processes running on a computer you have access to, so you can't always install some exporters and do the monitoring yourself.

If you're like me, you want CloudWatch Alerts sent to the outside world so you can receive them and react. One way to do this1 is to channel them through SNS. SNS supports many protocols, most of them internal to AWS, but also HTTP/S. SNS is a pub-sub system, and it requires a little bit of protocol before it works.

On the other end we2 have MSTeams3. MSTeams has many ways of communicating. One is Chat, which is a crappy chat67, and another is some kind of mix between a blog and twitter, confusingly called Teams. The idea in a Team is that you can post... Posts? Articles? And from them you can have an unthreaded conversation. Only Teams have webhooks; Chats do not, so you can't point SNS there.

If you have read other articles about integrating CloudWatch Alerts or SNS with MSTeams, they will always tell you that you need not only SNS but also a Lambda function. Since we already handle a gazillion servers, not all of them in AWS (one in particular is dedicated HW we pay quite cheaply for), and since we're trying to slim down our AWS bill (who isn't?), I decided to see if I could build my own bridge between SNS and Teams.

I already said that SNS has a little protocol. The idea is that when you create an HTTP/S Subscription in SNS, it will POST a first message to the URL you define. This message will have a JSON payload. We're interested in two fields:

{
    "Type": "SubscriptionConfirmation",
    "SubscribeURL": "..."
}

What you have to do is get this URL and call it. That way SNS knows the endpoint exists and associates an ARN to the Subscription. Otherwise, the Subscription stays unconfirmed and no messages are sent to it. Interestingly, you can neither edit nor remove Subscriptions (at least not with the web interface), and I read that unconfirmed Subscriptions disappear after 3 days or so4.

SNS messages are also a JSON payload POST'ed to the URL. They look like this:

{
  "Type" : "Notification",
  "MessageId" : "<uuid1>",
  "TopicArn" : "<arn>",
  "Subject" : "...",
  "Message" : "...",
  "Timestamp" : "2024-01-19T14:29:54.147Z",
  "SignatureVersion" : "1",
  "Signature" : "cTQUWntlQW5evk/bZ5lkhSdWj2+4oa/4eApdgkcdebegX3Dvwpq786Zi6lZbxGsjof2C+XMt4rV9xM1DBlsVq6tsBQvkfzGBzOvwerZZ7j4Sfy/GTJvtS4L2x/OVUCLleY3ULSCRYX2H1TTTanK44tOU5f8W+8AUz1DKRT+qL+T2fWqmUrPYSK452j/rPZcZaVwZnNaYkroPmJmI4gxjr/37Q6gA8sK+WyC0U91/MDKHpuAmCAXrhgrJIpEX/1t2mNlnlbJpcsR9h05tHJNkQEkPwFY0HFTnyGvTM2DP6Ep7C2z83/OHeVJ6pa7Sn3txVWR5AQC1PF8UbT7zdGJL9Q==",
  "SigningCertURL" : "https://sns.eu-west-1.amazonaws.com/SimpleNotificationService-01d088a6f77103d0fe307c0069e40ed6.pem",
  "UnsubscribeURL" : "https://sns.eu-west-1.amazonaws.com/?Action=Unsubscribe&SubscriptionArn=<arn>:<uuid2>"
}

Now, CloudWatch Alerts delivered via SNS arrive in the Message field. As Message's value is a string and the Alert is encoded as JSON, yes, you guessed it, it's double encoded:

{
  "Message" : "{\"AlarmName\":\"foo\",...}"
}
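Concretely, unwrapping the Alarm is just two json.loads calls in a row:

```python
import json

# Simulate what SNS sends: the Alarm JSON embedded as a string in Message.
alarm_in = {"AlarmName": "foo", "NewStateValue": "ALARM"}
sns_body = json.dumps({"Type": "Notification",
                       "Message": json.dumps(alarm_in)})

# Double encoded, so decoded twice:
alarm = json.loads(json.loads(sns_body)["Message"])
print(alarm["AlarmName"])  # foo
```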

Sigh. After unwrapping it, it looks like this:

{
  "AlarmName": "...",
  "AlarmDescription": "...",
  "AWSAccountId": "...",
  "AlarmConfigurationUpdatedTimestamp": "2024-01-18T14:32:17.244+0000",
  "NewStateValue": "ALARM",
  "NewStateReason": "Threshold Crossed: 1 out of the last 1 datapoints [10.337853107344637 (18/01/24 14:28:00)] was greater than the threshold (10.0) (minimum 1 datapoint for OK -> ALARM transition).",
  "StateChangeTime": "2024-01-18T14:34:54.103+0000",
  "Region": "EU (Ireland)",
  "AlarmArn": "<alarm_arn>",
  "OldStateValue": "INSUFFICIENT_DATA",
  "OKActions": [],
  "AlarmActions": [
    "<sns_arn>"
  ],
  "InsufficientDataActions": [],
  "Trigger": {
    "MetricName": "CPUUtilization",
    "Namespace": "AWS/EC2",
    "StatisticType": "Statistic",
    "Statistic": "AVERAGE",
    "Unit": null,
    "Dimensions": [
      {
        "value": "<aws_id>",
        "name": "InstanceId"
      }
    ],
    "Period": 60,
    "EvaluationPeriods": 1,
    "DatapointsToAlarm": 1,
    "ComparisonOperator": "GreaterThanThreshold",
    "Threshold": 10.0,
    "TreatMissingData": "missing",
    "EvaluateLowSampleCountPercentile": ""
  }
}

The name and description are arbitrary texts you wrote when setting up the Alarm and the Subscription. Notice that the region is not the codename, as in eu-west-1, but a supposedly more human-readable text. The rest is mostly info about the Alarm itself. Also notice the Dimensions field. I don't know what other data comes here (probably the arbitrary fields and values you can set up in the Alarm); all I can say is that that format (a list of dicts with only two fields, one called name and the other value) is possibly the most annoying implementation of a simple dict. I hope they have a reason for it besides over-engineering.
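At least flattening that Dimensions format into a plain dict is a one-liner (the instance id here is made up):

```python
# Hypothetical Dimensions payload, flattened into a plain dict:
dimensions = [{"value": "i-0123456789abcdef0", "name": "InstanceId"}]
as_dict = {d["name"]: d["value"] for d in dimensions}
print(as_dict)  # {'InstanceId': 'i-0123456789abcdef0'}
```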

Notice, too, that the only info we get here about the source of the alarm is the InstanceId. As those are random strings, they don't mean anything to me. Maybe I can set up the Alarm so it also includes the instance's name5, and maybe even the URL pointing to the metric's graph.

Finally, Teams' webhooks also expect a JSON payload. I didn't delve much into what you can give them; I just used the title, text and themeColor fields. At least text can be written in MarkDown. You get such a webhook by going to the Team, clicking the ⋮ ("vertical ellipsis") icon, then "Connectors", adding a webhook and obtaining the URL from there. The @type and @context fields I copied from an SNS-to-Lambda-to-Teams post.

So to build a bridge from CloudWatch Alerts through SNS to MSTeams's Teams we just need a quite straightforward script. I decided to write it in Flask, but I'm pretty sure writing it with plain http.server and urllib.request to avoid dependencies would not be much more work; I just didn't want to. Maybe I should have tried FastAPI instead; I simply forgot about it.

Without further ado, here's the script. I'm running Python 3.8, so I don't have case/match yet.

#! /usr/bin/env python3

from flask import Flask, request
import json
import requests

app = Flask(__name__)

@app.route('/', methods=[ 'POST' ])
def root():
    print(f"{request.data=}")

    request_data = json.loads(request.data)

    # python3.8, not case/match yet
    message_type = request_data['Type']

    if message_type == 'SubscriptionConfirmation':
        response = requests.get(request_data['SubscribeURL'])
        print(response.text)

        return f"hello {request_data['TopicArn']}!"

    message = {
        '@type': 'MessageCard',
        '@context': 'http://schema.org/extensions',
        'themeColor': '4200c5',
    }

    if message_type == 'Notification':
        try:
            alarm = json.loads(request_data['Message'])
        except json.JSONDecodeError:
            message['title'] = request_data['Subject']
            message['text']  = request_data['Message']
        else:
            instance_id = alarm['Trigger']['Dimensions'][0]['value']
            state = alarm['NewStateValue']

            if state == 'ALARM':
                color = 'FF0000'
            else:
                color = '00FF00'

            message['title'] = f"{instance_id}: {alarm['Trigger']['MetricName']} {state}"
            message['text']  = f"""{alarm['AlarmName']}

{alarm['Trigger']['MetricName']} {alarm['Trigger']['ComparisonOperator']} {alarm['Trigger']['Threshold']}
for {int(alarm['Trigger']['Period']) // 60} minutes.

{alarm['AlarmDescription']}

{alarm['NewStateReason']}

for {instance_id} passed to {state} at {alarm['StateChangeTime']}."""
            message['themeColor'] = color

        response = requests.post('https://<company>.webhook.office.com/webhookb2/<uuid1>@<uuid2>/IncomingWebhook/<id>/<uuid3>', json=message)
        print(response.text)

    return "OK"
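And since I mentioned it: here's a rough, untested sketch of what the dependency-free variant could look like, with the routing split into a pure function. The Teams URL is a placeholder, and this builds a much simpler card than the script above.

```python
#! /usr/bin/env python3
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

TEAMS_URL = 'https://example.webhook.office.com/webhookb2/...'  # placeholder

def sns_to_teams(payload):
    """Decide what to call back: returns (url, json_body_or_None)."""
    if payload['Type'] == 'SubscriptionConfirmation':
        # a plain GET on SubscribeURL confirms the Subscription
        return payload['SubscribeURL'], None
    if payload['Type'] == 'Notification':
        card = {'@type': 'MessageCard',
                '@context': 'http://schema.org/extensions',
                'title': payload.get('Subject') or 'SNS Notification',
                'text': payload['Message']}
        return TEAMS_URL, card
    return None, None  # e.g. UnsubscribeConfirmation: nothing to do

class Handler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers['Content-Length']))
        url, card = sns_to_teams(json.loads(body))
        if url is not None:
            data = (json.dumps(card).encode()
                    if card is not None else None)  # data=None makes it a GET
            urlopen(Request(url, data=data,
                            headers={'Content-Type': 'application/json'}))
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b'OK')

# To actually run it:
# HTTPServer(('', 8080), Handler).serve_forever()
```

Keeping sns_to_teams free of I/O makes it trivial to unit-test without spinning up a server or talking to SNS.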

  1. Again, I'm new to AWS. This is how it's setup at $NEW_JOB, but there might be better ways. If there are, I'm happy to hear them. 

  2. 'we' as in me and my colleagues. 

  3. Don't get me started... 

  4. I know all this because right now I have like 5-8 unconfirmed Subscriptions because I had to figure all this out, mostly because I couldn't find sample data or, preferably, a tool that already does this. They're 5-8 because you can't create a second Subscription to the same URL, so I changed the port for every failed attempt to confirm the Subscription. 

  5. We don't have pets, but don't quite have cattle either. We have cows we name, and we get a little bit sad when we sell them, but we're happy when they invite us to the barbecue. 

  6. OK, I already started... 

  7. I added this footnote (I mean, the previous one... but this one too) while reviewing the post before publishing. Putting in the correct number would mean editing the whole post, changing each number twice, which is error prone. In theory nikola and/or MarkDown support auto-numbered footnotes, but I never managed to make them work. I used to have the same issue with the previous static blog/site compiler, ikiwiki, so this is not the first time I have out-of-order footnotes. In any case, I feel they're a quirk that I find cute and somehow defining.