Sending AWS CloudWatch alarms through SNS to MSTeams

I'm new to AWS so please take the following statements with a grain of salt. Also, I'm tired, but I want to get this off my chest before the weekend begins (although, technically, it has already begun), so it might not be so coherent.

AWS provides some minimum monitoring of your resources with a tool called CloudWatch. Think of prometheus + grafana, but more limited. Still, it's good enough to the point it makes sense to set up some Alerts on it. Many of AWS's resources are not processes running on a computer you have access to, so you can't always install some exporters and do the monitoring yourself.

If you're like me, you want CloudWatch Alerts sent to the outside world so you can receive them and react. One way to do this[1] is to channel them through SNS. SNS supports many protocols, most of them internal to AWS, but also HTTP/S. SNS is a pub-sub system, and requires a little bit of protocol before it works.

On the other end we[2] have MSTeams[3]. MSTeams has many ways of communicating. One is Chat, which is a crappy chat[6][7], and another is some kind of mix between a blog and Twitter, confusingly called Teams. The idea in a Team is that you can post... Posts? Articles? And from them you can have an unthreaded conversation. Only Teams have webhooks; Chats do not, so you can't point SNS there.

If you have read other articles about integrating CloudWatch Alerts or SNS with MSTeams, they will always tell you that you not only need SNS, but also a Lambda program. Since we already handle a gazillion servers, not all of them in AWS, and one in particular is dedicated HW we pay quite cheaply for, and also we're trying to slim down our AWS bill (who isn't?), I decided to see if I could build my own bridge between SNS and Teams.

I already said that SNS has a little protocol. The idea is that when you create an HTTP/S Subscription in SNS, it will POST a first message to the URL you define. This message will have a JSON payload. We're interested in two fields:

{
    "Type": "SubscriptionConfirmation",
    "SubscribeURL": "..."
}

What you have to do is get this URL and call it. That way SNS will know the endpoint exists and will associate an ARN to the Subscription. Otherwise, the Subscription will stay unconfirmed and no messages will be sent to it. Interestingly, you can neither edit nor remove Subscriptions (at least not with the web interface), and I read that unconfirmed Subscriptions will disappear after 3 days or so[4].
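
In code, the whole confirmation dance boils down to this (just a sketch, where body stands for the raw POST body; the full Flask script below does the same thing):

import json
import requests

payload = json.loads(body)

if payload['Type'] == 'SubscriptionConfirmation':
    # visiting the URL is all it takes; SNS then marks the Subscription as confirmed
    requests.get(payload['SubscribeURL'])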

SNS messages are also a JSON payload POST'ed to the URL. They look like this:

{
  "Type" : "Notification",
  "MessageId" : "<uuid1>",
  "TopicArn" : "<arn>",
  "Subject" : "...",
  "Message" : "...",
  "Timestamp" : "2024-01-19T14:29:54.147Z",
  "SignatureVersion" : "1",
  "Signature" : "cTQUWntlQW5evk/bZ5lkhSdWj2+4oa/4eApdgkcdebegX3Dvwpq786Zi6lZbxGsjof2C+XMt4rV9xM1DBlsVq6tsBQvkfzGBzOvwerZZ7j4Sfy/GTJvtS4L2x/OVUCLleY3ULSCRYX2H1TTTanK44tOU5f8W+8AUz1DKRT+qL+T2fWqmUrPYSK452j/rPZcZaVwZnNaYkroPmJmI4gxjr/37Q6gA8sK+WyC0U91/MDKHpuAmCAXrhgrJIpEX/1t2mNlnlbJpcsR9h05tHJNkQEkPwFY0HFTnyGvTM2DP6Ep7C2z83/OHeVJ6pa7Sn3txVWR5AQC1PF8UbT7zdGJL9Q==",
  "SigningCertURL" : "https://sns.eu-west-1.amazonaws.com/SimpleNotificationService-01d088a6f77103d0fe307c0069e40ed6.pem",
  "UnsubscribeURL" : "https://sns.eu-west-1.amazonaws.com/?Action=Unsubscribe&SubscriptionArn=<arn>:<uuid2>"
}

Now, CloudWatch Alerts sent via SNS arrive in the Message field. As Message's value is a string and the Alert itself is encoded as JSON, yes, you guessed it, it's double encoded:

{
  "Message" : "{\"AlarmName\":\"foo\",...}"
}

Sigh. After unwrapping it, it looks like this:

{
  "AlarmName": "...",
  "AlarmDescription": "...",
  "AWSAccountId": "...",
  "AlarmConfigurationUpdatedTimestamp": "2024-01-18T14:32:17.244+0000",
  "NewStateValue": "ALARM",
  "NewStateReason": "Threshold Crossed: 1 out of the last 1 datapoints [10.337853107344637 (18/01/24 14:28:00)] was greater than the threshold (10.0) (minimum 1 datapoint for OK -> ALARM transition).",
  "StateChangeTime": "2024-01-18T14:34:54.103+0000",
  "Region": "EU (Ireland)",
  "AlarmArn": "<alarm_arn>",
  "OldStateValue": "INSUFFICIENT_DATA",
  "OKActions": [],
  "AlarmActions": [
    "<sns_arn>"
  ],
  "InsufficientDataActions": [],
  "Trigger": {
    "MetricName": "CPUUtilization",
    "Namespace": "AWS/EC2",
    "StatisticType": "Statistic",
    "Statistic": "AVERAGE",
    "Unit": null,
    "Dimensions": [
      {
        "value": "<aws_id>",
        "name": "InstanceId"
      }
    ],
    "Period": 60,
    "EvaluationPeriods": 1,
    "DatapointsToAlarm": 1,
    "ComparisonOperator": "GreaterThanThreshold",
    "Threshold": 10.0,
    "TreatMissingData": "missing",
    "EvaluateLowSampleCountPercentile": ""
  }
}

The name and description are arbitrary texts you wrote when setting up the Alarm and the Subscription. Notice that the region is not the codename as in eu-west-1 but a supposedly more human-readable text. The rest is mostly info about the Alarm itself. Also notice the Dimensions field. I don't know what other data comes here (probably the arbitrary fields and values you can set up in the Alarm); all I can say is that that format (a list of dicts with only two fields, one called name and the other value) is possibly the most annoying implementation of a simple dict. I hope they have a reason for that, besides over-engineering.
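
In code, digging the useful bits out takes a couple of lines (again a sketch, with body standing for the raw POST body):

import json

# the Alarm is a JSON string inside the JSON payload, so decode twice
alarm = json.loads(json.loads(body)['Message'])

# flatten that list of {'name': ..., 'value': ...} pairs into a plain dict
dimensions = { d['name']: d['value'] for d in alarm['Trigger']['Dimensions'] }
instance_id = dimensions.get('InstanceId')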

Finally, notice that the only info we get here about the source of the alarm is the InstanceId. As those are random strings, to me they don't mean anything. Maybe I can set up the Alarm so it also includes the instance's name[5], and maybe even the URL pointing to the metric's graph.

On the Teams side, the webhook also expects a JSON payload. I didn't delve much into what you can give it; I just used the title, text and themeColor fields. At least text can be written in Markdown. You get such a webhook by going to the Team, clicking the ⋮ ("vertical ellipsis") icon, then "Connectors", adding a webhook and getting the URL from there. @type and @context I copied from an SNS-to-Lambda-to-Teams post.

So, to bridge CloudWatch Alerts through SNS into an MSTeams Team, we just need a quite straightforward script. I decided to write it with Flask, but I'm pretty sure writing it with plain http.server and urllib.request to avoid dependencies is not much more work; I just didn't want to do it. Maybe I should have tried FastAPI instead; I simply forgot about it.

Without further ado, here's the script. I'm running Python 3.8, so I don't have match/case yet.

#! /usr/bin/env python3

from flask import Flask, request
import json
import requests

app = Flask(__name__)

@app.route('/', methods=[ 'POST' ])
def root():
    print(f"{request.data=}")

    request_data = json.loads(request.data)

    # python3.8, not case/match yet
    message_type = request_data['Type']

    if message_type == 'SubscriptionConfirmation':
        response = requests.get(request_data['SubscribeURL'])
        print(response.text)

        return f"hello {request_data['TopicArn']}!"

    message = {
        '@type': 'MessageCard',
        '@context': 'http://schema.org/extensions',
        'themeColor': '4200c5',
    }

    if message_type == 'Notification':
        try:
            alarm = json.loads(request_data['Message'])
        except json.JSONDecodeError:
            message['title'] = request_data['Subject']
            message['text']  = request_data['Message']
        else:
            instance_id = alarm['Trigger']['Dimensions'][0]['value']
            state = alarm['NewStateValue']

            if state == 'ALARM':
                color = 'FF0000'
            else:
                color = '00FF00'

            message['title'] = f"{instance_id}: {alarm['Trigger']['MetricName']} {state}"
            message['text']  = f"""{alarm['AlarmName']}

{alarm['Trigger']['MetricName']} {alarm['Trigger']['ComparisonOperator']} {alarm['Trigger']['Threshold']}
for {int(alarm['Trigger']['Period']) // 60} minutes.

{alarm['AlarmDescription']}

{alarm['NewStateReason']}

for {instance_id} passed to {state} at {alarm['StateChangeTime']}."""
            message['themeColor'] = color

        response = requests.post('https://<company>.webhook.office.com/webhookb2/<uuid1>@<uuid2>/IncomingWebhook/<id>/<uuid3>', json=message)
        print(response.text)

        return f"OK"

  1. Again, I'm new to AWS. This is how it's setup at $NEW_JOB, but there might be better ways. If there are, I'm happy to hear them. 

  2. 'we' as in me and my colleagues. 

  3. Don't get me started... 

  4. I know all this because right now I have like 5-8 unconfirmed Subscriptions because I had to figure all this out, mostly because I couldn't find sample data or, preferably, a tool that already does this. They're 5-8 because you can't create a second Subscription to the same URL, so I changed the port for every failed attempt to confirm the Subscription. 

  5. We don't have pets, but don't quite have cattle either. We have cows we name, and we get a little bit sad when we sell them, but we're happy when they invite us to the barbecue. 

  6. OK, I already started... 

  7. I added this footnote (I mean, the previous one... but this one too) while reviewing the post before publishing. Putting the correct number means editing the whole post, changing each number twice, which is error-prone. In theory nikola and/or Markdown support auto-numbered footnotes, but I never managed to make them work. I used to have the same issue with the previous static blog/site compiler, ikiwiki, so this is not the first time I have out-of-order footnotes. In any case, I feel like they're a quirk that I find cute and somehow defining.

Debugging docker containers with the host's tools

At $NEW_JOB we heavily rely on Docker containers. The two biggest reasons I don't like them are that I think they're a nightmare to keep up to date security-wise without also getting new versions with potential changes in behavior; and that they're usually built with as few tools inside the image as possible. I understand that the reasons for that are probably also twofold: smaller image size, and maybe intentionally reducing the attack surface and/or the tools available to any hacker that might get a shell through your service. I would rather have better defenses than having no tools to help me debug a break-in.

For a while I've been thinking that it should be possible to mount the host's filesystem inside the container and use the host's tools to debug. I was attacking the problem kind of the wrong way, reading about how filesystem namespaces are implemented and how containers use them, until I finally changed my search query and found "Mount volumes into a running container" by Kynan Rilee.

The idea is really simple: find out the host's device that has the filesystem for the root partition (what? you have separate /usr?), create the device in the container, and then use nsenter without the --user namespace to mount it on /opt/host inside the container (otherwise you get a 'permission denied' error).

But that's still not enough. We have a few envvars to set before we can use the tools in /opt/host. The first one is obviously PATH:

export PATH="$PATH":/opt/host/bin:/opt/host/sbin:/opt/host/usr/bin:/opt/host/usr/sbin

Still not enough, you need to also be able to load libraries from the new tree:

root@3e282deec242:/# mtr
mtr: error while loading shared libraries: libgtk-3.so.0: cannot open shared object file: No such file or directory

Here we have a dichotomy: we have to prioritize one of the two trees, either the container's or the host's. I think it's best to use the container's, but YMMV:

export LD_LIBRARY_PATH=/lib:/usr/lib:/opt/host/lib:/opt/host/usr/lib:/opt/host/usr/lib/x86_64-linux-gnu

Perl tools will also complain:

root@3e282deec242:/# ack
Can't locate File/Next.pm in @INC (you may need to install the File::Next module) (@INC contains: /etc/perl /usr/local/lib/x86_64-linux-gnu/perl/5.36.0 /usr/local/share/perl/5.36.0 /usr/lib/x86_64-linux-gnu/perl5/5.36 /usr/share/perl5 /usr/lib/x86_64-linux-gnu/perl-base /usr/lib/x86_64-linux-gnu/perl/5.36 /usr/share/perl/5.36 /usr/local/lib/site_perl) at /opt/host/bin/ack line 11.
BEGIN failed--compilation aborted at /opt/host/bin/ack line 11.

So you need another one:

export PERL5LIB=/opt/host/etc/perl:/opt/host/usr/local/lib/x86_64-linux-gnu/perl/5.36.0:/opt/host/usr/local/share/perl/5.36.0:/opt/host/usr/lib/x86_64-linux-gnu/perl5/5.36:/opt/host/usr/share/perl5:/opt/host/usr/lib/x86_64-linux-gnu/perl-base:/opt/host/usr/lib/x86_64-linux-gnu/perl/5.36:/opt/host/usr/share/perl/5.36:/opt/host/usr/local/lib/site_perl

Incredibly, python3 works OOTB.

I think that's all. I'll update this post if I find more envvars to set.

Here's a scripted version, except for all the exports above. There are two or three reasons for this omission:

  • bash does not have a way to accept commands to run before showing the prompt. More below.
  • Some of those values are hard to guess; you will have to adapt them to your particular host's system.
  • I guess that's all :)

You can put them in your container's .bashrc and it will be read when bash starts.

Finally, the promised script:

#! /bin/bash

set -euo pipefail

container=$1

root_device=$(findmnt --noheadings --mountpoint / | awk '{ print $2 }')
container_pid=$(docker inspect --format {{.State.Pid}} "$container")

# create device and mount point
# the lack of double quotes around this -v----------------------------------v is intentional
docker exec "$container" mknod --mode 0660 "$root_device" b $(stat --format '%Hr %Lr' "$root_device")
docker exec "$container" mkdir -p /opt/host

# mount with host's root perms; that's why --user is not there
nsenter --target "$container_pid" --mount --uts --ipc --net --pid -- mount "$root_device" /opt/host

echo "go debug; don't forget to set envvars!"
docker exec "$container" /bin/bash

# cleanup
nsenter --target "$container_pid" --mount --uts --ipc --net --pid -- umount /opt/host
docker exec "$container" rm "$root_device"

You will probably need to run this as root, even if you can run docker naked, only because of nsenter.

Maybe I should also use nsenter for the debug session; that way I would be full root there too. I'll update this post if I ever run into situations where I need that.

ikiwiki to nikola: the script

People asked for it:

#! /usr/bin/python3

import argparse
from datetime import datetime
from glob import glob
from os import stat
from os.path import basename, splitext
import re
import sys
import time


footnote_re = re.compile(r'\[(?P<foot_number>\d+)\]')
taglink_re = re.compile(r'\[\[!taglink (?P<tag_name>[^\]]*)\]\]')
image_re = re.compile(r'\[\[!img (?P<path>.*)\]\]')
format_start_re = re.compile(r'^\[\[!format (?P<language>.*) """$')
format_end_re = re.compile(r'^"""\]\]$')


def rewrite_footnotes_line(line, text_block, footnote_block, taglink_block, foot_number):
    new_line = line
    changed = False

    while footnote := footnote_re.search(new_line):
        # remove the []s
        start = footnote.start('foot_number') - 1
        end = footnote.end('foot_number') + 1

        prefix = new_line[:start]
        postfix = new_line[end:]
        foot_number = footnote.group('foot_number')

        if text_block:
            new_line = f"{prefix}[^{foot_number}]{postfix}"
        elif footnote_block:
            new_line = f"{prefix}[^{foot_number}]:{postfix}"
        else:
            raise ValueError('found a footnote in the taglink_block!')

        changed = True
    else:
        if not changed and footnote_block and len(line) > 0:
            # '[^]: ' <-- 5 extra chars
            new_line = f"{' ' * (len(foot_number) + 5)}{line.strip()}"

    return new_line, foot_number


def rewrite_footnotes(src):
    lines = src.splitlines()
    hr_count = len([ line for line in lines if line.startswith('---') ])

    new_lines = []
    text_block = True
    footnote_block = False
    taglink_block = False
    hr_seen = 0
    foot_number = ''

    for line in lines:
        line_length = len(line)

        if line_length > 4 and line[:4] == '    ':
            # it's an inline code block, leave alone
            new_lines.append(line)
            continue

        if line.startswith('---'):
            hr_seen += 1

            # if there is only one hr, then we have text + taglink blocks
            # if there are two or more, it's text + footnote + taglink blocks
            if text_block and hr_count >= 2 and hr_seen == hr_count - 1:
                text_block = False
                footnote_block = True
                # don't keep it
                continue
            elif hr_seen == hr_count:
                text_block = False
                footnote_block = False
                taglink_block = True
                # we'll need it later
                new_lines.append(line)
                continue

        try:
            new_line, foot_number = rewrite_footnotes_line(line, text_block, footnote_block, taglink_block, foot_number)
        except Exception as e:
            print(f"got `{e}´ for `{line}´.")
            raise

        new_lines.append(new_line)

    return '\n'.join(new_lines) + '\n'


def rewrite_taglinks(src):
    new_lines = []
    new_tags = []

    for line in src.splitlines():
        if len(line) > 0 and line == '-' * len(line):
            # don't keep it
            continue

        tags = taglink_re.findall(line)
        if len(tags) > 0:
            new_tags.extend(tags)
        else:
            new_lines.append(line)

    return '\n'.join(new_lines) + '\n', new_tags


def rewrite_images(src):
    new_lines = []

    for line in src.splitlines():
        image = image_re.search(line)
        if image is not None:
            # get the text before and after the whole directive
            start = image.start(0)
            end = image.end(0)

            prefix = line[:start]
            postfix = line[end:]
            path = image.group('path')

            # the root to which this 'absolute' path points is the website's root
            new_line = f"{prefix}![](/{path}){postfix}"

            new_lines.append(new_line)
        else:
            new_lines.append(line)

    return '\n'.join(new_lines) + '\n'


lang_map = dict(
    py='python',
    sh='bash',
)


def rewrite_format(src):
    new_lines = []

    for line in src.splitlines():
        start = format_start_re.match(line)
        if start is not None:
            lang = start.group('language')
            # if there's no mapping return the same lang
            new_line = f"```{lang_map.get(lang, lang)}"
            new_lines.append(new_line)
            continue

        if format_end_re.match(line):
            new_lines.append('```')
            continue

        new_lines.append(line)

    return '\n'.join(new_lines) + '\n'


def titlify(src):
    words = src.split('-')
    words[0] = words[0].title()

    return ' '.join(words)


def test_offsetify():
    src = -3600
    dst = '+0100'

    assert offsetify(src) == dst


def offsetify(src):
    hours, seconds = divmod(src, 3600)
    # "offsets are always in minutes" sounds like one item in 'things dveloper believe about timezones'
    minutes, _ = divmod(seconds, 60)

    # NOTE: time.timezone returns seconds west of UTC, which is the opposite of how usual offsets go
    if src > 0:
        sign = '-'
    else:
        sign = '+'

    return f"{sign}{-hours:02d}{minutes:02d}"


def datify(src):
    '''1701288755.377908 -> 2023-11-29 21:12:35 +0100'''

    # BUG: I'm gonna assume current timezone.
    # thanks SirDonNick#python@libera.chat
    # dto=DT(2023,11,29, 12,13,59, tzinfo=UTC_TZ);  DT.astimezone( dto , getTZ('Europe/Brussels') ) #==> 2023-11-29 13:13:59+01:00
    offset = time.timezone
    dt = datetime.fromtimestamp(src)

    return f"{dt.strftime('%Y-%m-%d %H:%M:%S')} {offsetify(offset)}"


# zoneinfo for some reason doesn't know about CEST, so I'll just hack a mapping here
tzname_to_utc_offset = dict(
    CEST='+0200',
     CET='+0100',
)

month_name_to_number = dict(
    jan= 1,
    ene= 1,
    feb= 2,
    mar= 3,
    apr= 4,
    abr= 4,
    may= 5,
    jun= 6,
    jul= 7,
    aug= 8,
    ago= 8,
    sep= 9,
    oct=10,
    nov=11,
    dec=12,
    dic=12,
)


def dedatify(src):
    #       0         1      2     3      4       5           6     7
    # src=['Posted', 'Sun', '26', 'Aug', '2012', '11:27:16', 'PM', 'CEST']
    month = month_name_to_number[src[3].lower()]
    utc_offset = tzname_to_utc_offset[src[7]]
    h, m, s = [ int(x) for x in src[5].split(':') ]

    if src[6].upper() == 'PM':
        h += 12  # TODO: support 12PM

    return f"{src[4]}-{month:02d}-{int(src[2]):02d} {h:02d}:{m:02d}:{s:02d} {utc_offset}"


def build_meta(filepath, tags, date=None):
    filename = splitext(basename(filepath))[0]
    if date is None:
        mtime = stat(filepath).st_mtime
        date_string = datify(mtime)
    else:
        date_string = dedatify(date)

    meta = f""".. title: {titlify(filename)}
.. slug: {filename}
.. date: {date_string}
.. tags: {', '.join(tags)}
.. type: text
"""

    return filename, meta


def import_post(opts):
    src = open(opts.filepath).read()
    mid, tags = rewrite_taglinks(rewrite_footnotes(src))
    dst = rewrite_format(rewrite_images(mid))
    if opts.date is None:
        filename, meta = build_meta(opts.filepath, tags)
    else:
        filename, meta = build_meta(opts.filepath, tags, date=opts.date)

    open(f"posts/{filename}.md", 'w+').write(dst)
    open(f"posts/{filename}.meta", 'w+').write(meta)


def parse_args():
    parser = argparse.ArgumentParser()

    parser.add_argument('filepath', metavar='FILE')
    parser.add_argument('-d', '--date', nargs=8,
                        help='Just pass something like "Posted Wed 12 Sep 2012 08:19:23 PM CEST".')

    return parser.parse_args()


if __name__ == '__main__':
    opts = parse_args()

    import_post(opts)

I removed all the tests, but they all looked like this:

def test_dedatify():
    src = 'Posted Wed 12 Sep 2012 08:19:23 PM CEST'.split()
    dst = '2012-09-12 20:19:23 +0200'
    assert dedatify(src) == dst

Enjoy.

Logging shell scripts' executions: a better way

Some 11 years ago I wrote:

wanna debug it by at least tracing its execution? Well,
simply surround it by:

    (
    set -x
    ...
    ) > /tmp/$0-$(date +%F.%T).log

Well, a couple of years back at my $OLD_JOB we needed to do exactly that, even with the option to turn it on and off at will. For the latter, we decided to use an envvar. Here's how we did it; it's much more robust:

if [ -n "$DEBUG" ]; then
    export PS4='> $(date +"%Y-%m-%dT%H:%M:%S") > $(basename "${BASH_SOURCE}") > ${FUNCNAME[0]:=__main__}():${LINENO} > '
    exec 5>> /var/log/foo.log
    BASH_XTRACEFD=5
    set -x
fi

There's a lot there, so let's try to unpack it.

  • The last line, set -x, makes bash print every line executed
  • PS4 defines the "prompt" for such lines. We're doing a lot here, so:
  • We start by printing a date. Maybe the Y-m-d is too much, but it looks familiar from other log files. This is a command substitution that is executed every time a line is printed, the same way you can do fun stuff with your shell prompt.
  • We print the filename of the source file.
  • Then the function at the top of the stack, or __main__ in the case of the main body of the script. Guess who wrote this and what's their favorite language :)
  • Then the line number
  • Then bash prints the executed line
  • bash has grown organically, which is a nice way to say 'hacky'. This incantation is how you open a file for appending and attach it to file descriptor 5. Remember 0 is stdin, 1 is stdout and 2 is stderr. We're playing it safe with 5.
  • We tell bash to use fd 5 for printing all this, sending it to the file. By default stderr is used, which you probably want to keep for your own script.
  • All this only if DEBUG has any contents. We usually do export DEBUG=1.

Enjoy.

Update

Forgot to mention: this will slow down your execution time noticeably, especially if it's a 65k LoC beast. Ask me how I know, but only after buying me a beer :)

Migrating from ikiwiki to nikola

As I mentioned several times already, my ikiwiki setup for this glob is falling apart on my machine. As it is written in perl, a language I haven't touched in many, many years, and its community seems to have dwindled and almost disappeared, I've been thinking of migrating to something else. As a pythonista, one obvious option is nikola. Also because I know the original developer :)

But what would it take to do this? Well, my ikiwiki posts are written in Markdown, and nikola also reads that format. At the beginning I thought of converting to reStructuredText because I have an issue: because of a bad command (probably a cp instead of rsync or tar), I lost the original file times. With reStructuredText, I can provide the date as a directive, and I can recover the original dates from archive.org's snapshots of my glob. But then I read that the same data can be put in a sidecar .meta file, so I can keep my original file format. Also, many things I wanted work best with Markdown, most notably footnotes, which, I don't know if you noticed, never worked on this glob :) Thanks +ChrisWarrick#nikola@libera.chat for all the help!

Still, ikiwiki handles a few things not very Markdown'ly, including images, code snippets and tags. To be honest, the last two are not really a part of Markdown, but it still means I have to convert one markup into another.

I had used pytest in the past, but not much, really. I usually write a test() function where I test everything with asserts, and once all tests pass, I call main() at script start instead. This was another quick hack, but I wanted to give it a spin. I started with some pure TDD, writing inputs and outputs in test functions and just assert f(input) == output, and pytest did everything else for me, including showing me a diff that pointed out the small errors I was making. The iteration pace was feverish.
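
To give you an idea, most of them were no more complicated than this (not a literal test from the script, but the shape of them, using rewrite_images() from the script I posted above):

def test_rewrite_images():
    src = 'foo [[!img path/to/photo.jpg]] bar'
    dst = 'foo ![](/path/to/photo.jpg) bar\n'

    assert rewrite_images(src) == dst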

All in all, it took me three 21-23h hackathons to mostly finish it. I wrote one function for each step (footnotes, tags, images and code snippets), each of them looping over all the input lines again, but it doesn't really matter, as I have to import many files by hand anyway to specify the original publishing date. I also tested each regexp[1] individually, like I was discussing the other day[2]. They were short enough not to follow my first tip, but by $GOD I used the other two a lot. There are another four helper functions (convert slugs to titles; convert time.timezone format to UTC offset (for instance, +0100); convert timestamps to a certain date format; and convert another date format to the same one), all also well tested. Then one short function to write the sidecar file, one that glues everything together, and one for parsing command line parameters. All that, tests and their data and all, in 538 lines of very hacky Python :) I'll try to post the code some other day, but frankly I ran out of steam and I still have lots of posts to import by hand.

And that's it! Hopefully this will be the first post in the new glob version. I imported a few old posts already and it's working just fine. I expect a few tweaks in the future, as we're talking about ~300 posts and I can't promise the very old ones follow the same format. I set the feed size to one and I'll grow it for the next nine posts so I don't break planets and feed readers. I hope I got that right :)


  1. Yes, of course this includes regexps! But thanks to testing them, it didn't become the proverbial extra problem. 

  2. Are you also on mastodon? Come say hi! 

Monitoring apache with prometheus and grafana

NE: this one is not going to be that coherent either, even when I'm actually writing it. Again, sorry for the mess.

Since nobody seems to have ever written a blog post about this beyond the basic 'explain + install + graph one metric in grafana', here's my take.

Analyzing Apache activity with tools like prometheus and grafana is no easy task, mostly because the kinds of things Apache can do are quite varied: it could be serving static files of any size, receiving uploads, answering API calls, thin REST ones or big fat heavy ones, or even offloading SSL or just proxying to other services. So the first thing you have to ask yourself is: what does my Apache do?

In my case it's two things:

  • Serving static files, mostly map tiles and photos. These vary in size a lot: tiles are usually small, but I also have big 12MiB photos and at least 2 scaled down versions, the thumbnail and the gallery main image. These thumbnails and main image are generated from the originals and cached.

  • Serving a few services: webmail, RSS reader and 'cloud' suite (file sharing, calendars, etc).

Second, you have to define what you actually want to know about it. There are many options: What's my error rate? How long does it take to serve requests? What about queries whose payload can wildly vary in size? What about uploads?

You can also ask yourself about those values: Do I want to know them in general, by individual URL, by certain prefixes...?

These questions should lead you to define certain things, including changes you might need to make to Apache. For instance, Apache's 'combined' log format includes only the status and the written size (including headers), but not the read size, the response time or the time to first byte. TTFB is an important value to know for requests that either upload or download lots of data; for instance, in my case, the gallery's images. TTFB will tell you how much time you spend before starting your answer. Even then, this value can be misleading, since there's nothing to prevent a framework or application from sending the headers as soon as their values are calculated, but still doing a lot of computation before actually sending the payload of the response.

# enable time to first byte
LogIOTrackTTFB ON

LogFormat "%h %l %u %t \"%r\" %>s %I %O %^FB %D \"%{Referer}i\" \"%{User-Agent}i\"" combined_io

So, where do we start? A good starting point is Brian Brazil's oldish post, where he mentions the Grok Exporter, but there is no version in Debian. This means I need a binary release. Unfortunately, the GE seems to be abandoned by its original developer. The people at SysDigLabs maintain a fork, but they seem to have mostly focused on maintenance around CI and dependencies, with no major code changes. None of the 11 pull requests or the 70 issues in the original repo seem to be addressed. I don't blame them, I'm just pointing it out. Besides, they're not providing binaries, except if you count a docker image as such.

So I decided to pick another fork. A couple of them do provide binary releases, so I just picked one.

Meanwhile, I also thought that since I was already modifying Apache's log, I could do it in such a way that it would write CSV instead and use the CSV exporter. But since the GE is already there, some people seem to be using it, and this other one for some reason mentions SSH, I decided to stick with GE.

Ok, we install the GE, enable and start the systemd service. This involves extracting the zip, putting the binary and the systemd service unit file somewhere, calling systemctl link so systemd sees the new service, writing the configuration file and placing the pattern files you want in one of the pattern directories declared in the config file. Now what? Well, you could say this is the fun part.

I decided to cut my monitoring by service. The first to monitor was my raster-tile-based map. I butchered the original patterns for Apache, splitting them into a prefix, a middle part that matches the path in the request, and a postfix that also adds the three new fields. This way I can write a middle section for each service. Things could be different if I were serving each one from a different vhost. It also allowed me to capture parts of the path, for instance to count served map tiles per zoom level. This was easy once I figured out I had to do the last step of the previous paragraph; otherwise you get errors like Pattern %{IP:client_ip} not defined.

HTTPD_LOG_PREFIX %{IP:client_ip} %{HTTPDUSER:ident} %{HTTPDUSER:auth} \[%{HTTPDATE:timestamp}\]
HTTPD_LOG_POSTFIX %{NUMBER:status} (?:%{NUMBER:read}|-) (?:%{NUMBER:wrote}|-) (?:%{NUMBER:first_byte}|-) (?:%{NUMBER:total_time}|-) %{QS:referrer} %{QS:agent}

ELEVATION_LOG %{HTTPD_LOG_PREFIX} "(?:%{WORD:verb} /~mdione/Elevation/%{NUMBER:zoom_level}/%{NOTSPACE}(?: HTTP/%{NUMBER:http_version})?|%{DATA:rawrequest})" %{HTTPD_LOG_POSTFIX}

Have you noticed Apache's logs have this ugly format for timestamps? 18/Nov/2023:23:44:13 +0100. Luckily we don't really need them, otherwise this would be a nightmare just to parse and convert to something we can use.
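
For the record, if you ever do need them, Python's strptime can chew on that format just fine; the %b directive assumes an English locale for the month abbreviation:

from datetime import datetime

when = datetime.strptime('18/Nov/2023:23:44:13 +0100', '%d/%b/%Y:%H:%M:%S %z')
print(when.isoformat())  # 2023-11-18T23:44:13+01:00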

Now it's time to define the metrics. I wrote one metrics file per service. Since this is a static site, I built histograms for response time and response size. Just for fun, I also wrote one for TTFB. This is quite straightforward, except for one thing: defining the buckets. What I did was use awk to get the field I wanted, then sort -n and less -N. I was looking at around 1000 requests, so I made buckets of around 100 requests each. These values will not be so easy to pick for more wildly varying services. For instance, at $WORK we receive and send documents or bundles of documents to be signed. The sizes of these PDFs and bundles are virtually unlimited, and in fact we're starting to experience issues where the servers freeze for up to 10m just chewing and digesting big uploads.

- type: histogram
  name: elevation_tile_ttfb
  help: Elevation tile Time-To-First-Byte
  match: '%{ELEVATION_LOG}'
  value: '{{multiply .first_byte 0.001}}'  # µs to ms
  buckets: [ 1, 10, 20, 30, 40, 50, 75, 100, 150 ]

- type: counter
  name: elevation_tile_zoom_level_count
  help: Counts tiles per zoom level
  match: '%{ELEVATION_LOG}'
  labels:
    zoom_level: '{{.zoom_level}}'

The last step of the chain, and quite frankly the one that took me the most time: having a nice graph in Grafana. My Grafana version is somewhat old (6.7.2 vs 10.2.1![1]), so my only option is a heatmap panel. One thing to know about the grok exporter's metrics is that they're counters, so the first thing you have to do is use increase() to turn the ever-growing totals into deltas. It took me quite a long time to find this out (no, I can't say I figured it out, I read it here). Second, Grafana by default assumes it has to massage the data to build the heatmap, so you have to tell it that the data is already a histogram. In the same vein, you also have to tell it a second time that you want a heatmap, this time as the query's format. I also hid zero values and made the tooltip a histogram. Play around with the rest of the settings.

{
  "type": "heatmap",
  "targets": [
    {
      "expr": "increase(gallery_ttfb_bucket[1m])",
      "legendFormat": "{{le}}",
      "format": "heatmap"
    }
  ],
  "heatmap": {},
  "dataFormat": "tsbuckets",
  "yAxis": {
    "format": "ms"
  },
  "tooltip": {
    "show": true,
    "showHistogram": true
  },
  "hideZeroBuckets": true
}

Here's the Gallery's dashboard:

It's me traversing my gallery. The first part is me getting into a directory that has never been seen, so the gallery had to generate all the thumbnails. That's why we're seeing TTFBs and RTs so high, but the sizes stay low, because they're all thumbnails. Then it's me poking around here and there, most of the time seeing the thumbnail table, but sometimes navigating the photos one by one in their big version. No original version was downloaded. I think I'll have to let these run for a while and revisit the buckets. The one for the response size already needs an adjustment. Also notice that my home server is quite boring; I have to do synthetic usage just to have values to graph.

One thing to notice here: not once in this post do I mention the word latency, even though it's one of the buzzwords around web services. About that, I strongly suggest you watch Gil Tene's How NOT to measure latency talk. It's an oldie, but man, it packs a lot of info, including demystifying quantiles and nines.

Update

Today I noticed that an alternative to the Grok Exporter is mtail. It has the advantages that it's well maintained and already available in Debian, but on the other hand you have to program it. I'll stick with GE until a reason arises to switch.

Also:

  • Slight additions and clarifications.
  • Fixed typos and removed dupes.

  1. This was because I was using grafana from Debian and not their repository

Is dinant dead; or: A tip for writing regular expressions

NE: Another dictated and quickly revised post. Sorry for the mess.

Last night I was trying to develop a Prometheus exporter for Apache logs. There's only one already written, but it doesn't provide much information and I just wanted to try it myself (yes, a little NIH).

So I decided to start with the usual thing, that is, parsing the log lines. What better way to do this than regular expressions? And since I needed to capture a lot of stuff, and then be able to reference it, I thought "Oh yeah, now I remember my project dinant. What happened with it?"

I opened the last version of the source file and found out that it's incomplete code and it's not in good shape. So I said "look, it's too late, I'm not going to put this back in shape now because, even if I'm doing this for a hobby, eventually I will need it for work, so I will try to get something working fast, and then when I have the time I'll see if I can revive dinant". So the answer to the title question is "maybe".

One of the ideas of dinant was that you would build your regular expressions piece by piece. Because it provides blocks that you can easily combine, it made building the regular expression easy, but that doesn't mean you can't already do the same with plain regexps. For instance, the first thing I have to parse is an IP address. What's an IP address? It's four octets joined by three dots. So we just define a regular expression that matches the octet and then a regular expression that matches the whole IP. Then for the rest of the fields of the line I kept using the same idea.

Another tip is that for defining regular expressions I like to use r-strings, raw strings, so backslashes escape regular expression elements like . or * and not string elements like \n or \t; and given that they are prefixed by r, to me it's not only a raw string but also a regular expression string :)

Finally, building your regular expressions block by block and then combining them into a final regular expression should make them easier to test, because then you can build test code that tests each block individually, and then you test bigger and bigger expressions, exactly like I did for dinant.

Here are the regexps, quite well tested:

import re

capture = lambda name, regexp: f"(?P<{name}>{regexp})"

octect = r'([0-9]|[1-9][0-9]|1[0-9]{1,2}|2[0-4][0-9]|25[0-5])'
assert re.fullmatch(octect, '0') is not None
assert re.fullmatch(octect, '9') is not None
assert re.fullmatch(octect, '10') is not None
assert re.fullmatch(octect, '99') is not None
assert re.fullmatch(octect, '100') is not None
assert re.fullmatch(octect, '255') is not None
assert re.fullmatch(octect, '-1') is None
assert re.fullmatch(octect, '256') is None

IPv4 = r'\.'.join([octect] * 4)  # thanks to r'', the \ is a regexp escape symbol, not a string escape symbol
assert re.fullmatch(IPv4, '0.0.0.0') is not None
assert re.fullmatch(IPv4, '255.255.255.255') is not None
assert re.fullmatch(IPv4, '255.255.255') is None
assert re.fullmatch(IPv4, '255.255') is None
assert re.fullmatch(IPv4, '255') is None
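
The capture() helper from the top composes the same way; the field name and the sample line below are made up, but this is exactly the kind of composition dinant was about:

client_ip = capture('client_ip', IPv4)
log_prefix = rf"{client_ip} \S+ \S+"

m = re.fullmatch(log_prefix, '192.168.0.1 - frank')
assert m is not None
assert m.group('client_ip') == '192.168.0.1'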

Meanwhile, after reading this, I decided to just use the grok exporter. More on that soon.

Update

Talking this morning about it with a friend, I realized that the IPv4 regex is more complex than it needs to be: Apache logs will never have a wrong IP, unless they're badly misbehaving, at which point you should have better ways to detect that.

Automating blender based hillshading with Python

Remember my Blender-based hillshading? I promised to try to automate it, right? Well, it seems I have the interest and stamina now, so that's what I'm doing. But boys and girls and anything in between and beyond, the stamina is waning and the culprit is Blender's internals being exposed through a non-Pythonic API[3]. I swear if I worked in anything remotely close to this, I would be writing a wrapper for all of it. But in the meantime, it's all a discovery path towards something that does not resemble a hack. Just read some of Blender's Python Quickstart:

When you are familiar with other Python APIs you may be surprised that new data-blocks in the bpy API cannot be created by calling the class:

bpy.types.Mesh()
Traceback (most recent call last):
File "<blender_console>", line 1, in <module>
TypeError: bpy_struct.__new__(type): expected a single argument

This is an intentional part of the API design. The Blender Python API can’t create Blender data that exists outside the main Blender database (accessed through bpy.data), because this data is managed by Blender (save, load, undo, append, etc).

Data is added and removed via methods on the collections in bpy.data, e.g:

mesh = bpy.data.meshes.new(name="MyMesh")

That is, instead of making the constructor call this internal API, they make it fail miserably and force you to use the internal API! Today I was mentioning that Asterisk's programming language was definitely designed by a Telecommunications Engineer, so I guess this one was designed by a 3D artist? But I digress...

One of the first things about Blender's internals is that one of the ways to work is based on Contexts. This makes sense when developing plugins, where you mostly need to apply things to the selected object, but for someone really building everything from scratch like I need to, it feels weird.

One of the advantages is that you can open a Python console and let Blender show you the calls it makes for every step you make in the UI, but it's so context-based that the result is useless as a script. For instance, linking the output of a thing into the input of another is registered as a drag-and-drop call that includes the distance the mouse moved during the drag, so it's relative to the output dot where you started, and what it links to also depends on the physical and not the logical position of the things you're linking:

bpy.ops.node.link(detach=False, drag_start=(583.898, 257.74))

It takes quite a lot of digging around in a not very friendly REPL[1] with limited scrollback and not much documentation to find more reproducible, less context-dependent alternatives. This is what's eating up my stamina; it's not so fun anymore. Paraphrasing someone on Mastodon: what use is a nice piece of Open Software if its documentation is not enough to be useful[2]?

Another very important thing is that all objects have two views: one that has generic properties like position and rotation, which can be reached via bpy.data.objects; and one that has specific properties like a light's power or a camera's lens angle, which can be reached via, f.i., bpy.data.cameras. This was utterly confusing, especially since all of bpy.data's documentation is 4 lines long. Later I found out you can get the specific data from the generic one in the .data attribute, so the takeaway is: always get your objects from bpy.data.objects.
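
A tiny sketch of the difference; the object names are the ones from the default scene, and the values are just illustrative, not the ones the script ended up using:

import bpy

# the generic view: anything in the scene, with position, rotation, etc.
camera = bpy.data.objects['Camera']
camera.location = (0.0, 0.0, 10.0)
camera.rotation_euler = (0.0, 0.0, 0.0)

# the specific view hangs off .data; for the camera it's a bpy.types.Camera
camera.data.type = 'ORTHO'
camera.data.ortho_scale = 2.0

# same for the light: generic Object on one side, Sun-specific data on the other
sun = bpy.data.objects['Light']
sun.data.type = 'SUN'
sun.data.energy = 5.0      # the UI calls this Strength
sun.data.angle = 0.00918   # angular diameter in radians, roughly the real Sun's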

Once we get over that issue, things are quite straightforward, but not necessarily easy. The script as it is can already be used with blender --background --python <script_file>, but keep in mind that when you do that, you start with the default generic 3D setup, with a light, a camera and a cube. You have to delete the cube, but you can get a reference to the other two and reuse them.

Then comes the administrative stuff around just rendering the scene. To industrialize it and be able to quickly test stuff, you can try to accept command line options. You can use Python's argparse module for this, but keep in mind that those --background --python blender.py options are going to be passed to the script too, so you either ignore unknown options or you declare those as well:

mdione@ioniq:~/src/projects/elevation$ blender --background --python blender.py
Blender 3.6.2
Read prefs: "/home/mdione/.config/blender/3.6/config/userpref.blend"
usage: blender [-h] [--render-samples RENDER_SAMPLES] [--render-scale RENDER_SCALE] [--height-scale HEIGHT_SCALE] FILE
blender: error: unrecognized arguments: --background --python

Also, those options are going to be passed to Blender! So at the end of your run, Blender is going to complain that it doesn't understand your options:

unknown argument, loading as file: --render-samples
Error: Cannot read file "/home/mdione/src/projects/elevation/--render-samples": No such file or directory
Blender quit
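
For what it's worth, here's one way to deal with both directions at once (a sketch, not necessarily what the script does, using the option names from the usage message above): lean on the -- separator, which makes Blender stop parsing its own options and leave the rest to the script via sys.argv:

import argparse
import sys

def parse_args():
    parser = argparse.ArgumentParser()
    parser.add_argument('--render-samples', type=int, default=120)
    parser.add_argument('--render-scale', type=float, default=1.0)
    parser.add_argument('--height-scale', type=float, default=8.0)
    parser.add_argument('file', metavar='FILE')

    # invoked as: blender --background --python blender.py -- --render-samples 120 FILE
    # everything after '--' is ours; everything before it (--background, --python, ...) is Blender's
    own_args = sys.argv[sys.argv.index('--') + 1:] if '--' in sys.argv else []

    return parser.parse_args(own_args)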

The other step you should do is copy the Geo part of the GeoTIFF to the output file. I used rasterio, mostly because at first I tried gdal (I was already using gdal_edit.py to do this in my previous manual procedure), but its API was quite confusing and rasterio's is plainer. But rasterio can't actually open a file just to write the metadata like gdal does, so I had to open the output file, read all the data, open it again for writing (this truncates the file) and write metadata and data.
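
Roughly, that dance looks like this; a sketch with made-up filenames, and assuming the render has the same width and height as the source DEM:

import rasterio

# grab the geo metadata (CRS, transform, etc.) from the source GeoTIFF
with rasterio.open('N45E006.tif') as source:
    profile = source.profile

# rasterio can't edit metadata in place, so read the render back in full...
with rasterio.open('render.tif') as rendered:
    data = rendered.read()

# ...and rewrite the file ('w' truncates) with the copied profile; the render
# is RGB(A) while the DEM has a single band, hence the count/dtype updates
profile.update(driver='GTiff', count=data.shape[0], dtype=data.dtype.name)

with rasterio.open('render.tif', 'w', **profile) as output:
    output.write(data)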

Now, some caveats. First, as I anticipated in my last post, the method as it is right now has issues at the seams. Blender can't read GDAL VRT files, so either I build 9 planes instead of 1 (all the neighbors are needed to properly calculate the shadows, because Blender also takes into account light reflected back from other features, meaning mountains) or for each 1x1° tile I generate another one with some buffer. I will try the first one and see if it fixes this issue without much runtime impact.

Second, the script is not 100% parametrized. Sun size and power are fixed based on my tests. Maybe in the future. Third, I will try to add a scattering sky, so we get a bluish tint to the shadows, and set the Sun's color to something yellowish. These should probably be options too.

Fourth, and probably most important: I discovered that this hillshading method is really sensitive to missing or bad data, because those look like dark, deep holes. This is probably a deal breaker for many, so you either fix your data, or you search for better data, or you live with it. I'm not sure what I'm going to do.

So, what did I do with this? Well, first, find good parameters, one for render samples and another for height scale. Render time grows mostly linearly with render samples, so I just searched for the value past which no more detail appeared; the value I found was 120 samples. When we left off I was using 10 instead of 5 for height scale, but it looks too exaggerated on hills (although it looks AWESOME on mountains like Mont Blanc/Monte Bianco! See below), so I tried to pinpoint a good balance. For me it's 8, maybe 7.

Why get these values right? Because like I mentioned before, a single 1x1°, 3601x5137px tile takes some 40m on my laptop at 100 samples, so the more tuned the better. One nice way to quickly test is to lower the samples or use the --render-scale option of the script to reduce the size of the output. Note that because you reduce both dimensions at the same time, the final render (and the time it takes) actually shrinks with the square of this factor: 50% is actually 25% (because 0.50 * 0.50 = 0.25).

So, without further ado, here's my script. If you find it useful but want more power, open issues or PRs, everything is welcome.

https://github.com/StyXman/blender_hilllshading [5]

Try to use the main branch; develop is considered unstable and can be broken.

A couple of images of the shadows applied to my style as teaser, both using only 20 samples and x10 height scale:

Dhaulagiri:

Mont Blanc/Monte Bianco:

Man, I love the fact that the tail of the Ghiacciaio del Miage is in shadows, but the rest is not; or how Monte Bianco/Mont Blanc's shadow reaches across the valley to the base of la Tête d'Arp. But also notice the bad data close to la Mer de Glace.


  1. Ok, TBH here, I'm very much used to ipython's console; Blender's is really closer to the plain python one. No tab completion, so lots of calls to dir() and a few help()s.

  2. I couldn't find it again. Mastodon posts are not searchable by default, which I understand is good for privacy, but on the other hand the current clients don't store anything locally, so you can't even search what you already saw. I have several semi-ranting posts about this and I would show them to you, but they got lost on Mastodon. See what I mean? 

  3. So you have an idea: this took me a whole week of free time to finish, including (but not covered in the text) my old nemesis, the terracing effect. This thing is brittle.

  4. Yeah, maybe the API is mostly designed for this. 

  5. My site generator keeps breaking. This is the second time I have to publicly admit this. Maybe next weekend I'll gather steam and replace it with nikola

Finding data peaks with grafana and prometheus

While doing my research for the OSM import and update posts I used one of the simplest graphs I have in my Grafana instance: disk usage. Because the source is Prometheus' node_exporter's node_filesystem_* series, I have to use an expression:

node_filesystem_size_bytes - node_filesystem_free_bytes

In the same graph I also have just node_filesystem_size_bytes drawn as a dotted line to see the limit of each disk (this system has only two for the moment). But for the investigation itself I also wanted to know the peak of the data usage, and in particular when it happened. So my train of thought was this:

“I just need to take the max() of the data.”

Wrong. max() is an aggregation operator (yes, not a function; I thought it was one, as many examples use parentheses) and does not do what I want. If you graph that, you only get the same line as the max of all the graphed series, which, in retrospect, makes sense.

So I try more stuff and I can't get it done. As in many other cases, I head over to #prometheus@libera.chat for answers. The first part of the solution is max_over_time(), which is a function (yes, this time) that takes a range vector and produces an instant vector. Now, I want the max over the graphed range, which will change as I zoom in and out in Grafana looking at different parts of the process, not the max over the whole series or some other fixed range, so I'll be using Grafana's $__range variable. So this must be it:

max_over_time((node_filesystem_size_bytes - node_filesystem_free_bytes)[$__range])

Right?

Wrong, again. parse error: ranges only allowed for vector selectors.

sigh

I cheated again and got the answer from the IRC channel. The answer is subqueries:

max_over_time((node_filesystem_size_bytes - node_filesystem_free_bytes)[$__range:])

Spot the difference, I challenge you.

Let's take a step back.

To start with, I realized something quite subtle here. If you read the documentation closely, it talks not only about instant and range vectors, but also about instant and range vector selectors. In my head, as a programmer, I now have to think of them in terms of foo being an IVS that 'returns' an IV; if you add a range, as in foo[1m], you get an RVS that 'returns' an RV[1].

Then, as this video mentions, binary operators return mostly IVs, and that's why you can't apply a range to them, because they're not selectors, but final values.

But.

The subquery syntax[2], among other things, allows you to 'apply a range' to your IV and get an RV. One slight difference is that with an RVS you might get a different amount of data points across series, while with a subquery, I think, the IVS in the subquery is evaluated for every period, so you always get a value for each time. I don't have an example without data, but if you run a subquery, you'll see that the timestamps don't have decimal digits (they're integers) because they align with the scraping period (15s):

node_filesystem_free_bytes[1m]
------------------------------
node_filesystem_free_bytes{device="/dev/sda2",fstype="ext4",instance="127.0.0.1:9100",job="node",mountpoint="/"}
    217540681728 @1692981943.193
    217540648960 @1692981958.194
    217540624384 @1692981973.193
    217540595712 @1692981988.193
[...]

vs

node_filesystem_free_bytes[1m:]
-------------------------------
node_filesystem_free_bytes{device="/dev/sda2",fstype="ext4",instance="127.0.0.1:9100",job="node",mountpoint="/"}
    217540513792 @1692982035
    217540476928 @1692982050
    217540448256 @1692982065
    217540423680 @1692982080
[...]

Again, spot the difference. Go ahead.

So for now part of the solution feels like a black box. I hope to understand all this one day, but for now this will have to be enough.

But I haven't finished.

That query only gives you an instant vector. If you graph that, you also get a line following the original data! The tricks are:

  • Tell Grafana to do an 'instant query'.

This makes Grafana get a single value, not a value per graph period.

  • Tell Grafana to extend the graph by using a series override and apply "Transform: Constant".

This makes that single dot per series a line.

The final graph:

You can see the import peaks at the left. Disk capacity is drawn with thick dashes. The other disk's usage didn't change in all that time, so there's no peak and the max line follows the graph.


  1. At the beginning I thought the problem was that expression values were not instant vectors. They seemed to be something else, and that you couldn't range them. The alternative is to create a recording rule, which is a calculated (from an expression) data series, from which you can get ranges because you can use an IVS to reference them.

  2. Good luck trying to get something useful from that. Try this one instead. 

Disk usage while updating an OSM rendering database

Now the new part. I have never updated an OSM rendering database, so let's see what the docs say about it:

To keep an osm2pgsql database up to date you have two options: you either provide the original import data file, or you provide the replication URL. In my case, the latter is http://download.geofabrik.de/europe-updates/, but as you can see, it can be inferred from the extract; it even picks up the right date[1]:

$ osm2pgsql-replication init --verbose --port 5433 --database gis --osm-file europe-latest.osm.pbf
2023-08-23 15:07:18 [DEBUG]: Replication information found in OSM file header.
2023-08-23 15:07:18 [DEBUG]: Replication URL: http://download.geofabrik.de/europe-updates
2023-08-23 15:07:18 [DEBUG]: Replication sequence: 3787
2023-08-23 15:07:18 [DEBUG]: Replication timestamp: 2023-08-11T20:21:59Z
2023-08-23 15:07:18 [INFO]: Initialised updates for service 'http://download.geofabrik.de/europe-updates'.
2023-08-23 15:07:18 [INFO]: Starting at sequence 3787 (2023-08-11 20:21:59+00:00).

Then the update is really simple: you just tell it to do it and it will do it, automatically! The only condition is that you give the update the same parameters you gave to the original osm2pgsql invocation. It will proceed to download a batch of updates until some value is reached (around 120MiB?), then call osm2pgsql, rinse, repeat, until all missing updates have been applied:

$ osm2pgsql-replication update --verbose --port 5433 --database gis -- --cache 0 --number-processes 4 --slim --flat-nodes $(pwd)/nodes.cache --hstore --multi-geometry --style $osm_carto/openstreetmap-carto.style --tag-transform-script $osm_carto/openstreetmap-carto.lua
2023-08-23 15:13:16 [INFO]: Using replication service 'http://download.geofabrik.de/europe-updates'. Current sequence 3787 (2023-08-11 22:21:59+02:00).
2023-08-23 15:13:16 [DEBUG]: Starting new HTTP connection (1): download.geofabrik.de:80
2023-08-23 15:13:16 [DEBUG]: http://download.geofabrik.de:80 "GET /europe-updates/state.txt HTTP/1.1" 200 123
2023-08-23 15:13:16 [DEBUG]: Calling osm2pgsql with: /usr/bin/osm2pgsql --append --slim --prefix planet_osm --cache 0 --number-processes 4 --slim --flat-nodes /home/mdione/src/projects/osm/data/osm/nodes.cache --hstore --multi-geometry --style /home/mdione/src/projects/osm/osm-carto/openstreetmap-carto.style --tag-transform-script /home/mdione/src/projects/osm/osm-carto/openstreetmap-carto.lua -d gis -P 5433 /tmp/tmpk7ml1gi9/osm2pgsql_diff.osc.gz
2023-08-23 15:13:16 [DEBUG]: Importing from sequence 3787
2023-08-23 15:13:16 [DEBUG]: Starting new HTTP connection (1): download.geofabrik.de:80
2023-08-23 15:13:16 [DEBUG]: http://download.geofabrik.de:80 "GET /europe-updates/state.txt HTTP/1.1" 200 123
2023-08-23 15:13:16 [DEBUG]: Starting new HTTP connection (1): download.geofabrik.de:80
2023-08-23 15:13:16 [DEBUG]: http://download.geofabrik.de:80 "GET /europe-updates/000/003/788.osc.gz HTTP/1.1" 200 30348254
2023-08-23 15:13:25 [DEBUG]: Downloaded change 3788. (389531 kB available in download buffer)
2023-08-23 15:13:25 [DEBUG]: Starting new HTTP connection (1): download.geofabrik.de:80
2023-08-23 15:13:25 [DEBUG]: http://download.geofabrik.de:80 "GET /europe-updates/000/003/789.osc.gz HTTP/1.1" 200 35284953
2023-08-23 15:13:36 [DEBUG]: Downloaded change 3789. (245491 kB available in download buffer)
2023-08-23 15:13:36 [DEBUG]: Starting new HTTP connection (1): download.geofabrik.de:80
2023-08-23 15:13:36 [DEBUG]: http://download.geofabrik.de:80 "GET /europe-updates/000/003/790.osc.gz HTTP/1.1" 200 32891529
2023-08-23 15:13:46 [DEBUG]: Downloaded change 3790. (114339 kB available in download buffer)
2023-08-23 15:13:46 [DEBUG]: Starting new HTTP connection (1): download.geofabrik.de:80
2023-08-23 15:13:46 [DEBUG]: http://download.geofabrik.de:80 "GET /europe-updates/000/003/791.osc.gz HTTP/1.1" 200 35347966
2023-08-23 15:13:57 [DEBUG]: Downloaded change 3791. (-26371 kB available in download buffer)
2023-08-23 15:14:16  osm2pgsql version 1.8.0
2023-08-23 15:14:16  Database version: 15.3 (Debian 15.3-0+deb12u1)
2023-08-23 15:14:16  PostGIS version: 3.3
2023-08-23 15:14:16  Setting up table 'planet_osm_point'
2023-08-23 15:14:16  Setting up table 'planet_osm_line'
2023-08-23 15:14:16  Setting up table 'planet_osm_polygon'
2023-08-23 15:14:16  Setting up table 'planet_osm_roads'
2023-08-23 16:26:17  Reading input files done in 4321s (1h 12m 1s).
2023-08-23 16:26:17    Processed 3014131 nodes in 1506s (25m 6s) - 2k/s
2023-08-23 16:26:17    Processed 687625 ways in 977s (16m 17s) - 704/s
2023-08-23 16:26:17    Processed 28176 relations in 1838s (30m 38s) - 15/s
2023-08-23 16:27:11  Going over 217062 pending ways (using 4 threads)
Left to process: 0........
2023-08-23 16:30:04  Processing 217062 pending ways took 173s (2m 53s) at a rate of 1254.69/s
2023-08-23 16:30:04  Going over 89496 pending relations (using 4 threads)
Left to process: 0.......
2023-08-23 17:24:42  Processing 89496 pending relations took 3277s (54m 37s) at a rate of 27.31/s
2023-08-23 17:24:43  Done postprocessing on table 'planet_osm_nodes' in 0s
2023-08-23 17:24:43  Done postprocessing on table 'planet_osm_ways' in 0s
2023-08-23 17:24:43  Done postprocessing on table 'planet_osm_rels' in 0s
2023-08-23 17:24:43  All postprocessing on table 'planet_osm_point' done in 0s.
2023-08-23 17:24:43  All postprocessing on table 'planet_osm_line' done in 0s.
2023-08-23 17:24:43  All postprocessing on table 'planet_osm_polygon' done in 0s.
2023-08-23 17:24:43  All postprocessing on table 'planet_osm_roads' done in 0s.
2023-08-23 17:24:43  osm2pgsql took 7827s (2h 10m 27s) overall.
[...]

I'm not going to paste all 4 or 5 iterations; this should be enough. 2h10m to process around 120MiB of data, meaning around 1m/MiB. I have 12 days of updates, each around 30MiB, except for the last three, which are 90, 160 and 130 MiB each, due to an ongoing edit battle. A total of around 620MiB of updates; it's going to take around 10h at this pace.

[The next day]

[...]
2023-08-23 23:31:36 [INFO]: Data imported until 2023-08-22 20:21:53+00:00. Backlog remaining: 1 day, 1:09:43.865536

After ~8h10m it finished. Disk space grew 10GiB, which is more than the 10x ratio of the import (more like 15x!). At this pace I should run into space issues in around 220 days. On the other hand, these diffs are quite unusual, so the stats are probably skewed. Also, I guess at some point I should redo some of the clustering and analyzing done in the import process, but I don't see anything about it in the docs, so I'll ask around. There are no big peaks to talk about. The most space I see being freed is around 200MiB.

The osm2pgsql people have automated it to the point where they even provide in their documentation an example systemd unit file to do the updates for you. Check the docs linked above.


  1. Yes, I started writing these posts 13 days ago.