Optimizing Debian packages

If you follow me (I know, you don't), you know I render my own maps. For that I use mapnik, a library that takes data and a style and generates the maps I want. I install mapnik from Debian sid's official packaging. Lately, I've been wondering if I can shave some time off its rendering.

Debian binary packages are compiled from source (duh). Since there is a single binary package per CPU architecture, the compile options must be generic enough for the binaries to run on any CPU of that architecture. This means that they're optimized, but in the most generic way, with nothing specialized for the CPU you actually have (unless you have the most generic one, which might not even exist).

So the question is: can I get some more juice if I recompile at least mapnik, the lib that does most of the CPU crunching? Let's see.

But first, and this is the question that took me the most time to answer: how do I personalize the flags used during compilation? You can download Debian source packages with apt source mapnik and the packages needed to compile it with sudo apt build-dep mapnik. In particular, mapnik uses cmake and is developed in C++, so what we're looking for is a way to provide options to gcc and/or g++. Luckily, cmake understands CFLAGS and CXXFLAGS, so now it's a matter of telling Debian's build system to set those.

Unluckily, it seems like dpkg-buildpackage ignores those envvars. But that tool uses dpkg-buildflags, which has several ways to modify them. It defines two sets of operations to modify the flags passed to, in this case, cmake: one set is for the package's maintainer, and the other can be used by a user like us to customize the compilation.

So, without further ado:

DEB_CFLAGS_APPEND='-O3 -march=native -flto' DEB_CXXFLAGS_APPEND='-O3 -march=native -flto' dpkg-buildpackage --no-sign binary

This will build a new set of .deb files compiled with those flags. But it doesn't mean you can use them as is.

I mean, you can, but I would rather do this in a cleaner manner. I would like a package that is seen as an update to the one already provided by Debian, but that also looks older than any update that might later come from Debian (Debian sid is like a rolling release, and I update once a week). This can be achieved with version suffixes1, by adding, for instance, +0.1mdione to the Debian version, plus a new entry in debian/changelog:

mapnik (4.1.3+ds-2+0.1mdione) unstable; urgency=medium

  * Recompile with -O3 -march=native -flto

 -- Marcos Dione <mdione@grulic.org.ar>  Sun, 02 Nov 2025 11:55:22 +0200

More info at Section 6.3 of the Guide for Debian Maintainers.
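To check that the suffixed version sorts where we want it (newer than the current Debian revision, older than whatever Debian uploads next), dpkg itself can compare versions; a sketch, guarded for non-Debian systems:

```shell
# dpkg --compare-versions exits 0 when the stated relation holds.
if command -v dpkg >/dev/null; then
    dpkg --compare-versions '4.1.3+ds-2' lt '4.1.3+ds-2+0.1mdione' \
        && echo 'upgrades over the current package'
    dpkg --compare-versions '4.1.3+ds-2+0.1mdione' lt '4.1.3+ds-3' \
        && echo 'will be replaced by the next upload'
fi
```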

And now, to test our optimizations. I decided to render a small batch: once for warm up, once to measure; then upgrade, then run it again. But I ran (!!!) into trouble: the second run took more time than the first one! In the end, I found a 16%+ swing between the slowest and the fastest runs (I ran it several more times), way more than what any optimization of this kind might give. Still, I had no option but to continue, just in case.

And the winner is: no win. After several runs, I couldn't even match the fastest numbers from the original version. So, at least in my case, Gentoo'ing it did not make any sense2.


  1. this is not an official name. 

  2. OK, several caveats here: First, I didn't optimize the whole system, but in this case I'm not sure how much sense that would have made. I could optimize postgres3 (the data is mostly stored there) and the myriad of dependencies of mapnik4, but that would take more time from me than what I evidently can (or, actually, can't) get out of it. Second, the system was not 100% idle, so maybe that explains the swings. Third, I had a third, but I forgot. 

  3. I'm going to end this post here, but I noticed that, at least for zoom level 11, postgres was using a core at 100% for ~2/3rds of the time it takes to render that single tile (3m+!), but afterwards its usage is lower, even when more cores (1 vs up to 8) are rendering at the same time. 

  4. 168 libraries, according to ldd, and I don't really know how many are used in the code paths I exercise. 

Letting screen and tmux survive a logout under systemd, and attaching processes to systemd scopes

I'm not exactly sure since when, but for a while now I've been seeing a new behavior: logging out from a session that has some screen or tmux sessions running will terminate those too, which defeats one of the purposes of running stuff in them.

There is a workaround, which you can find everywhere:

loginctl enable-linger $USER 
systemd-run --scope --user screen

This is good for launching new sessions, but if you already have a session running, you're out of luck.

Or are you?

Unluckily, systemd does not have a nice way to do this. But it is still possible.

We can create a new scope and attach the screen master later:

systemd-run --scope --user sleep infinity
# Running as unit: run-p1787326-i1787327.scope; invocation ID: ed6adac8a0db4e8f9e9ce59731cc2aa5

We use infinity so the scope does not finish before we can attach to it. Now let's find the control group under which it's running:

systemctl --user show -P ControlGroup run-p1787326-i1787327.scope
# /user.slice/user-1000.slice/user@1000.service/app.slice/run-p1787326-i1787327.scope

This references a scope directory under /sys/fs/cgroup/. We need to add the PID of the screen master to the cgroup.procs file in it:

echo 1789758 > /sys/fs/cgroup/user.slice/user-1000.slice/user@1000.service/app.slice/run-p1787326-i1787327.scope/cgroup.procs
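To verify the move worked, the kernel exposes each process's current cgroup under /proc; a small check (demonstrated with the current shell's PID, since the screen PID above is from my session):

```shell
# /proc/<pid>/cgroup shows which cgroup a process currently belongs to;
# after the echo above, the screen master's entry should show the new
# run-*.scope path. Shown here for the current shell.
cat "/proc/$$/cgroup"
```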

Thanks to grawity on #systemd@libera.chat.

Defining reserved space in blocks vs percentage

A quick one.

Many (most?) filesystems in Linux can define reserved space. This is space that only the superuser can write to; it's usually reserved for times when you've run out of space, allowing the superuser to still operate the computer even when normal users can't write anymore. This is also why partitions have the concept of available vs free space.

Historically, reserved space was defined as a percentage. But with bigger and bigger disks (30TB now?), even 1% is a lot of space (300GB). Fortunately, we can also define it in blocks. At 4KiB per block, you can reserve "only" 1GiB with:

tune2fs -r $((1024 * 1024 / 4)) [partition]
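The arithmetic, spelled out: tune2fs -r takes a number of filesystem blocks, and 1024 * 1024 / 4 blocks of 4KiB each is 1GiB:

```shell
# Reserved space in bytes, divided by the block size, gives the -r argument.
BLOCK_SIZE=4096                          # bytes; check with: tune2fs -l <partition>
RESERVED_BYTES=$((1024 * 1024 * 1024))   # 1GiB
echo $((RESERVED_BYTES / BLOCK_SIZE))    # → 262144, i.e. 1024 * 1024 / 4
```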

Unluckily, many of these tools don't define --long-options.

screen to tmux

I use screen a lot, but not much of it. Given recent developments around several CVEs, I decided to learn the tmux equivalents of what I use. Both have the concept of windows. Both allow rebinding commands to keys, but the table shows the default bindings. tmux also has subcommands, like tmux ls. Both support splitting the window; screen calls the resulting areas 'regions', tmux calls them panes.

Feature                 screen          tmux
list active sessions    screen -ls      tmux ls
new named session       screen -dR foo  tmux -L foo
reattach a session      screen -dR foo  tmux -L foo attach
control key             C-a             C-b
detach                  d               d
new window              c               c
next window             n               n
previous window         p               p
split horizontally      S               "
split vertically        |               %
go to next pane         TAB             o
rename session          n/a1            $

  1. sessions can't be renamed in screen
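Since muscle memory is half the battle, a stock tmux configuration fragment (standard tmux syntax for ~/.tmux.conf; not something the table requires) makes tmux answer to screen's prefix:

```
# make tmux use screen's C-a as the prefix key
set -g prefix C-a
unbind C-b
bind C-a send-prefix
```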

Debugging system libraries in Debian

"My maps do not render". It's always a simple question like this. The error message was quite cryptic:

Failed to parse color: "D27E01"

This makes sense: that is not a color; #D27E01 is. I thought "I might have made a typo", but searching through my map style gave nothing. Hmm, that's weird. Maybe the database? I check a few colors from the data, f.i. for bus or subte/metro/subway/underground lines. Nope, not that either. So where is it coming from?

I fire up gdb and, thanks to automatic debug symbol loading, I get more info than I would otherwise. First rock I stumble upon: the function where that error is raised is called for every parsed color. In a map style there are gazillions, and remember I'm also parsing some coming from the db. So I set a conditional breakpoint. It's easy, right? Just break parse_color if [condition] and done!

Not so fast. To begin with, strings don't exist in C1; it's all arrays and \0. Also, arrays don't exist in C; it's all pointers and wishful thinking. condition will have to involve strcmp() and == 0. But the parameter is actually a std::string const& str, so we're in the C++ realm. After asking around, guinevere on #gdb@libera.chat suggested break parse_color if $_streq(str.data(), "D27E01"), which worked.
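For repeatability, the same conditional breakpoint can go in a gdb command file (a sketch; the breakpoint is the one from this session, the driving script name render.py is hypothetical):

```
# break-color.gdb -- load with: gdb -x break-color.gdb --args python3 render.py
break parse_color if $_streq(str.data(), "D27E01")
run
```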

The next step was to make sense of the code. mapnik, the library I'm debugging, has the most inscrutable code I have ever seen. Here's a full backtrace of the moment I hit the bug:

#0  mapnik::parse_color (str="D27E01") at ./src/color_factory.cpp:32
#1  0x00007ffff5921295 in mapnik::color::color (this=this@entry=0x7fffffffc0d0, str="D27E01", premultiplied=premultiplied@entry=false) at ./src/color.cpp:38
#2  0x00007ffff614051d in mapnik::evaluate_expression_wrapper<mapnik::color>::operator()<mapbox::util::variant<mapnik::value_null, bool, long, double, icu_76::UnicodeString, mapnik::attribute, mapnik::global_attribute, mapnik::geometry_type_attribute, mapbox::util::recursive_wrapper<mapnik::unary_node<mapnik::tags::negate> >, mapbox::util::recursive_wrapper<mapnik::binary_node<mapnik::tags::plus> >, mapbox::util::recursive_wrapper<mapnik::binary_node<mapnik::tags::minus> >, mapbox::util::recursive_wrapper<mapnik::binary_node<mapnik::tags::mult> >, mapbox::util::recursive_wrapper<mapnik::binary_node<mapnik::tags::div> >, mapbox::util::recursive_wrapper<mapnik::binary_node<mapnik::tags::mod> >, mapbox::util::recursive_wrapper<mapnik::binary_node<mapnik::tags::less> >, mapbox::util::recursive_wrapper<mapnik::binary_node<mapnik::tags::less_equal> >, mapbox::util::recursive_wrapper<mapnik::binary_node<mapnik::tags::greater> >, mapbox::util::recursive_wrapper<mapnik::binary_node<mapnik::tags::greater_equal> >, mapbox::util::recursive_wrapper<mapnik::binary_node<mapnik::tags::equal_to> >, mapbox::util::recursive_wrapper<mapnik::binary_node<mapnik::tags::not_equal_to> >, mapbox::util::recursive_wrapper<mapnik::unary_node<mapnik::tags::logical_not> >, mapbox::util::recursive_wrapper<mapnik::binary_node<mapnik::tags::logical_and> >, mapbox::util::recursive_wrapper<mapnik::binary_node<mapnik::tags::logical_or> >, mapbox::util::recursive_wrapper<mapnik::regex_match_node>, mapbox::util::recursive_wrapper<mapnik::regex_replace_node>, mapbox::util::recursive_wrapper<mapnik::unary_function_call>, mapbox::util::recursive_wrapper<mapnik::binary_function_call> >, mapnik::feature_impl, std::unordered_map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, mapnik::value_adl_barrier::value, std::hash<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::equal_to<std::__cxx11::basic_string<char, std::char_traits<char>, 
std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, mapnik::value_adl_barrier::value> > > > (this=<optimized out>, expr=..., feature=..., vars=...) at ./include/mapnik/symbolizer.hpp:284
#3  mapnik::extract_value<mapnik::color>::operator() (this=<optimized out>, expr=...) at ./include/mapnik/symbolizer.hpp:342
#4  apply<mapnik::detail::strict_value const&, mapnik::extract_value<mapnik::color> > (v=..., f=...) at /usr/include/mapbox/variant.hpp:332
#5  0x00007ffff61405e8 in apply<mapnik::detail::strict_value const&, mapnik::extract_value<mapnik::color> > (v=..., f=...) at /usr/include/mapbox/variant.hpp:336
#6  0x00007ffff6140640 in apply<mapnik::detail::strict_value const&, mapnik::extract_value<mapnik::color> > (v=..., f=...) at /usr/include/mapbox/variant.hpp:336
#7  0x00007ffff61406a0 in apply<mapnik::detail::strict_value const&, mapnik::extract_value<mapnik::color> > (v=..., f=...) at /usr/include/mapbox/variant.hpp:336
#8  0x00007ffff6140700 in apply<mapnik::detail::strict_value const&, mapnik::extract_value<mapnik::color> > (v=..., f=...) at /usr/include/mapbox/variant.hpp:336
#9  0x00007ffff6140760 in apply<mapnik::detail::strict_value const&, mapnik::extract_value<mapnik::color> > (v=..., f=...) at /usr/include/mapbox/variant.hpp:336
#10 0x00007ffff61407c0 in apply<mapnik::detail::strict_value const&, mapnik::extract_value<mapnik::color> > (v=..., f=...) at /usr/include/mapbox/variant.hpp:336
#11 0x00007ffff61407eb in visit<mapnik::extract_value<mapnik::color>, mapnik::detail::strict_value const&> (v=..., f=...) at /usr/include/mapbox/variant.hpp:838
#12 0x00007ffff6140837 in mapnik::util::apply_visitor<mapnik::extract_value<mapnik::color>, mapnik::detail::strict_value const&> (v=..., f=...) at ./include/mapnik/util/variant.hpp:42
#13 0x00007ffff614090b in mapnik::get<mapnik::color, (mapnik::keys)9> (sym=..., feature=..., vars=std::unordered_map with 0 elements) at ./include/mapnik/symbolizer.hpp:335
#14 mapnik::agg_renderer<mapnik::image<mapnik::rgba8_t>, mapnik::label_collision_detector4>::process (this=0x7fffffffc890, sym=..., feature=..., prj_trans=...) at ./src/agg/process_line_symbolizer.cpp:95
#15 0x00007ffff597d682 in mapnik::process_impl<true>::process<mapnik::agg_renderer<mapnik::image<mapnik::rgba8_t>, mapnik::label_collision_detector4>, mapnik::line_symbolizer, mapnik::feature_impl, mapnik::proj_transform> (ren=..., 
    sym=..., f=..., tr=...) at ./include/mapnik/symbolizer_dispatch.hpp:43
#16 0x00007ffff597d6e1 in mapbox::util::detail::dispatcher<void, mapnik::point_symbolizer, mapnik::line_symbolizer, mapnik::line_pattern_symbolizer, mapnik::polygon_symbolizer, mapnik::polygon_pattern_symbolizer, mapnik::raster_symbolizer, mapnik::shield_symbolizer, mapnik::text_symbolizer, mapnik::building_symbolizer, mapnik::markers_symbolizer, mapnik::group_symbolizer, mapnik::debug_symbolizer, mapnik::dot_symbolizer>::apply<mapbox::util::variant<mapnik::point_symbolizer, mapnik::line_symbolizer, mapnik::line_pattern_symbolizer, mapnik::polygon_symbolizer, mapnik::polygon_pattern_symbolizer, mapnik::raster_symbolizer, mapnik::shield_symbolizer, mapnik::text_symbolizer, mapnik::building_symbolizer, mapnik::markers_symbolizer, mapnik::group_symbolizer, mapnik::debug_symbolizer, mapnik::dot_symbolizer> const&, mapnik::symbolizer_dispatch<mapnik::agg_renderer<mapnik::image<mapnik::rgba8_t>, mapnik::label_collision_detector4> > > (
    v=..., f=...) at /usr/include/mapbox/variant.hpp:336
#17 0x00007ffff597d6f9 in mapbox::util::variant<mapnik::point_symbolizer, mapnik::line_symbolizer, mapnik::line_pattern_symbolizer, mapnik::polygon_symbolizer, mapnik::polygon_pattern_symbolizer, mapnik::raster_symbolizer, mapnik::shield_symbolizer, mapnik::text_symbolizer, mapnik::building_symbolizer, mapnik::markers_symbolizer, mapnik::group_symbolizer, mapnik::debug_symbolizer, mapnik::dot_symbolizer>::visit<mapnik::symbolizer_dispatch<mapnik::agg_renderer<mapnik::image<mapnik::rgba8_t>, mapnik::label_collision_detector4> >, mapbox::util::variant<mapnik::point_symbolizer, mapnik::line_symbolizer, mapnik::line_pattern_symbolizer, mapnik::polygon_symbolizer, mapnik::polygon_pattern_symbolizer, mapnik::raster_symbolizer, mapnik::shield_symbolizer, mapnik::text_symbolizer, mapnik::building_symbolizer, mapnik::markers_symbolizer, mapnik::group_symbolizer, mapnik::debug_symbolizer, mapnik::dot_symbolizer> const&, mapnik::point_symbolizer const&, void> (v=..., f=...) at /usr/include/mapbox/variant.hpp:838
#18 0x00007ffff597d712 in mapnik::util::apply_visitor<mapnik::symbolizer_dispatch<mapnik::agg_renderer<mapnik::image<mapnik::rgba8_t>, mapnik::label_collision_detector4> >, mapbox::util::variant<mapnik::point_symbolizer, mapnik::line_symbolizer, mapnik::line_pattern_symbolizer, mapnik::polygon_symbolizer, mapnik::polygon_pattern_symbolizer, mapnik::raster_symbolizer, mapnik::shield_symbolizer, mapnik::text_symbolizer, mapnik::building_symbolizer, mapnik::markers_symbolizer, mapnik::group_symbolizer, mapnik::debug_symbolizer, mapnik::dot_symbolizer> const&> (f=..., v=...) at ./include/mapnik/util/variant.hpp:42
#19 0x00007ffff598611a in mapnik::feature_style_processor<mapnik::agg_renderer<mapnik::image<mapnik::rgba8_t>, mapnik::label_collision_detector4> >::render_style (this=<optimized out>, p=..., style=0x1cb2e60, rc=..., 
    features=std::shared_ptr<mapnik::Featureset> (use count 3, weak count 0) = {...}, prj_trans=...) at ./include/mapnik/feature_style_processor_impl.hpp:592
#20 0x00007ffff59869af in mapnik::feature_style_processor<mapnik::agg_renderer<mapnik::image<mapnik::rgba8_t>, mapnik::label_collision_detector4> >::render_material (this=this@entry=0x7fffffffc890, mat=..., p=...)
    at ./include/mapnik/feature_style_processor_impl.hpp:552
#21 0x00007ffff5987994 in mapnik::feature_style_processor<mapnik::agg_renderer<mapnik::image<mapnik::rgba8_t>, mapnik::label_collision_detector4> >::render_submaterials (this=this@entry=0x7fffffffc890, parent_mat=..., p=...)
    at ./include/mapnik/feature_style_processor_impl.hpp:453
#22 0x00007ffff598c1e0 in mapnik::feature_style_processor<mapnik::agg_renderer<mapnik::image<mapnik::rgba8_t>, mapnik::label_collision_detector4> >::apply (this=this@entry=0x7fffffffc890, scale_denom=<optimized out>, 
    scale_denom@entry=0) at ./include/mapnik/feature_style_processor_impl.hpp:148
#23 0x00007ffff6d9d858 in agg_renderer_visitor_1::operator()<mapnik::image<mapnik::rgba8_t> > (this=<optimized out>, pixmap=...) at src/mapnik_python.cpp:220
#24 0x00007ffff6dabeea in mapbox::util::detail::dispatcher<void, mapnik::image<mapnik::rgba8_t>, mapnik::image<mapnik::gray8_t>, mapnik::image<mapnik::gray8s_t>, mapnik::image<mapnik::gray16_t>, mapnik::image<mapnik::gray16s_t>, mapnik::image<mapnik::gray32_t>, mapnik::image<mapnik::gray32s_t>, mapnik::image<mapnik::gray32f_t>, mapnik::image<mapnik::gray64_t>, mapnik::image<mapnik::gray64s_t>, mapnik::image<mapnik::gray64f_t> >::apply<mapnik::image_any&, agg_renderer_visitor_1> (v=..., f=...) at /usr/include/mapbox/variant.hpp:332
#25 0x00007ffff6dabf28 in mapbox::util::detail::dispatcher<void, mapnik::image<mapnik::null_t>, mapnik::image<mapnik::rgba8_t>, mapnik::image<mapnik::gray8_t>, mapnik::image<mapnik::gray8s_t>, mapnik::image<mapnik::gray16_t>, mapnik::image<mapnik::gray16s_t>, mapnik::image<mapnik::gray32_t>, mapnik::image<mapnik::gray32s_t>, mapnik::image<mapnik::gray32f_t>, mapnik::image<mapnik::gray64_t>, mapnik::image<mapnik::gray64s_t>, mapnik::image<mapnik::gray64f_t> >::apply<mapnik::image_any&, agg_renderer_visitor_1> (v=..., f=...) at /usr/include/mapbox/variant.hpp:336
#26 0x00007ffff6dabf89 in mapbox::util::variant<mapnik::image<mapnik::null_t>, mapnik::image<mapnik::rgba8_t>, mapnik::image<mapnik::gray8_t>, mapnik::image<mapnik::gray8s_t>, mapnik::image<mapnik::gray16_t>, mapnik::image<mapnik::gray16s_t>, mapnik::image<mapnik::gray32_t>, mapnik::image<mapnik::gray32s_t>, mapnik::image<mapnik::gray32f_t>, mapnik::image<mapnik::gray64_t>, mapnik::image<mapnik::gray64s_t>, mapnik::image<mapnik::gray64f_t> >::visit<agg_renderer_visitor_1, mapnik::image_any&, mapnik::image<mapnik::null_t>&, void> (v=..., f=...) at /usr/include/mapbox/variant.hpp:838
#27 0x00007ffff6dabfa2 in mapnik::util::apply_visitor<agg_renderer_visitor_1, mapnik::image_any&> (f=..., v=...) at /usr/include/mapnik/util/variant.hpp:42
#28 0x00007ffff6da4e41 in render (map=..., image=..., scale_factor=1, offset_x=0, offset_y=0) at src/mapnik_python.cpp:316
#29 0x00007ffff6dac584 in boost::python::detail::invoke<int, void (*)(mapnik::Map const&, mapnik::image_any&), boost::python::arg_from_python<mapnik::Map const&>, boost::python::arg_from_python<mapnik::image_any&> > (f=<optimized out>, 
    ac0=..., ac1=<synthetic pointer>...) at /usr/include/boost/python/detail/invoke.hpp:79
#30 boost::python::detail::caller_arity<2u>::impl<void (*)(mapnik::Map const&, mapnik::image_any&), boost::python::default_call_policies, boost::mpl::vector3<void, mapnik::Map const&, mapnik::image_any&> >::operator() (
    this=<optimized out>, args_=<optimized out>) at /usr/include/boost/python/detail/caller.hpp:233
#31 0x00007ffff6fdf3ae in boost::python::objects::py_function::operator() (this=0xec3670, args=0x7fffe7caf400, kw=<optimized out>) at ./boost/python/object/py_function.hpp:147
#32 boost::python::objects::function::call (this=0xec3ac0, args=0x7fffe7caf400, keywords=0x0) at libs/python/src/object/function.cpp:221
#33 0x00007ffff6fdf62c in boost::python::objects::(anonymous namespace)::bind_return::operator() (this=<optimized out>) at libs/python/src/object/function.cpp:581
#34 boost::detail::function::void_function_ref_invoker0<boost::python::objects::(anonymous namespace)::bind_return, void>::invoke (function_obj_ptr=...) at ./boost/function/function_template.hpp:193
#35 0x00007ffff6fe465b in boost::function0<void>::operator() (this=<optimized out>) at ./boost/function/function_template.hpp:771
#36 boost::python::detail::exception_handler::operator() (this=<optimized out>, f=...) at libs/python/src/errors.cpp:74
#37 0x00007ffff6da9b67 in boost::python::detail::translate_exception<std::runtime_error, void (*)(std::runtime_error const&)>::operator() (this=<optimized out>, handler=..., f=..., 
    translate=0x7ffff6d9d0f0 <runtime_error_translator(std::runtime_error const&)>) at /usr/include/boost/python/detail/translate_exception.hpp:39
#38 boost::_bi::list3<boost::arg<1>, boost::arg<2>, boost::_bi::value<void (*)(std::runtime_error const&)> >::operator()<bool, boost::python::detail::translate_exception<std::runtime_error, void (*)(std::runtime_error const&)>, boost::_bi::rrlist2<boost::python::detail::exception_handler const&, boost::function0<void> const&> > (this=<optimized out>, f=..., a=<synthetic pointer>...) at /usr/include/boost/bind/bind.hpp:368
#39 boost::_bi::bind_t<bool, boost::python::detail::translate_exception<std::runtime_error, void (*)(std::runtime_error const&)>, boost::_bi::list3<boost::arg<1>, boost::arg<2>, boost::_bi::value<void (*)(std::runtime_error const&)> > >::operator()<boost::python::detail::exception_handler const&, boost::function0<void> const&> (this=<optimized out>, a1=..., a2=...) at /usr/include/boost/bind/bind.hpp:1298
#40 boost::detail::function::function_obj_invoker2<boost::_bi::bind_t<bool, boost::python::detail::translate_exception<std::runtime_error, void (*)(std::runtime_error const&)>, boost::_bi::list3<boost::arg<1>, boost::arg<2>, boost::_bi::value<void (*)(std::runtime_error const&)> > >, bool, boost::python::detail::exception_handler const&, boost::function0<void> const&>::invoke (function_obj_ptr=..., a0=..., a1=...)
    at /usr/include/boost/function/function_template.hpp:137
#41 0x00007ffff6da9bc7 in boost::python::detail::translate_exception<mapnik::value_error, void (*)(mapnik::value_error const&)>::operator() (this=<optimized out>, handler=..., f=..., 
    translate=0x7ffff6d9d250 <value_error_translator(mapnik::value_error const&)>) at /usr/include/boost/python/detail/translate_exception.hpp:39
#42 boost::_bi::list3<boost::arg<1>, boost::arg<2>, boost::_bi::value<void (*)(mapnik::value_error const&)> >::operator()<bool, boost::python::detail::translate_exception<mapnik::value_error, void (*)(mapnik::value_error const&)>, boost::_bi::rrlist2<boost::python::detail::exception_handler const&, boost::function0<void> const&> > (this=<optimized out>, f=..., a=<synthetic pointer>...) at /usr/include/boost/bind/bind.hpp:368
#43 boost::_bi::bind_t<bool, boost::python::detail::translate_exception<mapnik::value_error, void (*)(mapnik::value_error const&)>, boost::_bi::list3<boost::arg<1>, boost::arg<2>, boost::_bi::value<void (*)(mapnik::value_error const&)> > >::operator()<boost::python::detail::exception_handler const&, boost::function0<void> const&> (this=<optimized out>, a1=..., a2=...) at /usr/include/boost/bind/bind.hpp:1298
#44 boost::detail::function::function_obj_invoker2<boost::_bi::bind_t<bool, boost::python::detail::translate_exception<mapnik::value_error, void (*)(mapnik::value_error const&)>, boost::_bi::list3<boost::arg<1>, boost::arg<2>, boost::_bi::value<void (*)(mapnik::value_error const&)> > >, bool, boost::python::detail::exception_handler const&, boost::function0<void> const&>::invoke (function_obj_ptr=..., a0=..., a1=...)
    at /usr/include/boost/function/function_template.hpp:137
#45 0x00007ffff6da9c27 in boost::python::detail::translate_exception<std::out_of_range, void (*)(std::out_of_range const&)>::operator() (this=<optimized out>, handler=..., f=..., 
    translate=0x7ffff6d9d120 <out_of_range_error_translator(std::out_of_range const&)>) at /usr/include/boost/python/detail/translate_exception.hpp:39
#46 boost::_bi::list3<boost::arg<1>, boost::arg<2>, boost::_bi::value<void (*)(std::out_of_range const&)> >::operator()<bool, boost::python::detail::translate_exception<std::out_of_range, void (*)(std::out_of_range const&)>, boost::_bi::rrlist2<boost::python::detail::exception_handler const&, boost::function0<void> const&> > (this=<optimized out>, f=..., a=<synthetic pointer>...) at /usr/include/boost/bind/bind.hpp:368
#47 boost::_bi::bind_t<bool, boost::python::detail::translate_exception<std::out_of_range, void (*)(std::out_of_range const&)>, boost::_bi::list3<boost::arg<1>, boost::arg<2>, boost::_bi::value<void (*)(std::out_of_range const&)> > >::operator()<boost::python::detail::exception_handler const&, boost::function0<void> const&> (this=<optimized out>, a1=..., a2=...) at /usr/include/boost/bind/bind.hpp:1298
#48 boost::detail::function::function_obj_invoker2<boost::_bi::bind_t<bool, boost::python::detail::translate_exception<std::out_of_range, void (*)(std::out_of_range const&)>, boost::_bi::list3<boost::arg<1>, boost::arg<2>, boost::_bi::value<void (*)(std::out_of_range const&)> > >, bool, boost::python::detail::exception_handler const&, boost::function0<void> const&>::invoke (function_obj_ptr=..., a0=..., a1=...) at /usr/include/boost/function/function_template.hpp:137
#49 0x00007ffff6da9c87 in boost::python::detail::translate_exception<std::exception, void (*)(std::exception const&)>::operator() (this=<optimized out>, handler=..., f=..., 
    translate=0x7ffff6d9d150 <standard_error_translator(std::exception const&)>) at /usr/include/boost/python/detail/translate_exception.hpp:39
#50 boost::_bi::list3<boost::arg<1>, boost::arg<2>, boost::_bi::value<void (*)(std::exception const&)> >::operator()<bool, boost::python::detail::translate_exception<std::exception, void (*)(std::exception const&)>, boost::_bi::rrlist2<boost::python::detail::exception_handler const&, boost::function0<void> const&> > (this=<optimized out>, f=..., a=<synthetic pointer>...) at /usr/include/boost/bind/bind.hpp:368
#51 boost::_bi::bind_t<bool, boost::python::detail::translate_exception<std::exception, void (*)(std::exception const&)>, boost::_bi::list3<boost::arg<1>, boost::arg<2>, boost::_bi::value<void (*)(std::exception const&)> > >::operator()<boost::python::detail::exception_handler const&, boost::function0<void> const&> (this=<optimized out>, a1=..., a2=...) at /usr/include/boost/bind/bind.hpp:1298
#52 boost::detail::function::function_obj_invoker2<boost::_bi::bind_t<bool, boost::python::detail::translate_exception<std::exception, void (*)(std::exception const&)>, boost::_bi::list3<boost::arg<1>, boost::arg<2>, boost::_bi::value<void (*)(std::exception const&)> > >, bool, boost::python::detail::exception_handler const&, boost::function0<void> const&>::invoke (function_obj_ptr=..., a0=..., a1=...) at /usr/include/boost/function/function_template.hpp:137
#53 0x00007ffff6fe4511 in boost::function2<bool, boost::python::detail::exception_handler const&, boost::function0<void> const&>::operator() (this=<optimized out>, a0=..., a1=...) at ./boost/function/function_template.hpp:771
#54 boost::python::detail::exception_handler::handle (this=<optimized out>, f=...) at ./boost/python/detail/exception_handler.hpp:41
#55 boost::python::handle_exception_impl (f=...) at libs/python/src/errors.cpp:24
#56 0x00007ffff6fdc3c6 in boost::python::handle_exception<boost::python::objects::(anonymous namespace)::bind_return> (f=...) at ./boost/python/errors.hpp:29
#57 boost::python::objects::function_call (func=<optimized out>, args=<optimized out>, kw=<optimized out>) at libs/python/src/object/function.cpp:622
#58 0x0000000000543b8b in _PyObject_MakeTpCall (tstate=0xa7d510 <_PyRuntime+283024>, callable=0xec3ac0, args=<optimized out>, nargs=2, keywords=<optimized out>) at ../Objects/call.c:242
#59 0x000000000055f191 in _PyEval_EvalFrameDefault (tstate=<optimized out>, frame=<optimized out>, throwflag=<optimized out>) at ../Python/generated_cases.c.h:813
#60 0x000000000064db6c in _PyEval_EvalFrame (tstate=0xa7d510 <_PyRuntime+283024>, frame=0x7ffff7fb2020, throwflag=0) at ../Include/internal/pycore_ceval.h:119
#61 _PyEval_Vector (args=0x0, argcount=0, kwnames=0x0, tstate=0xa7d510 <_PyRuntime+283024>, func=0x7ffff7497740, locals=<optimized out>) at ../Python/ceval.c:1814
#62 PyEval_EvalCode (co=0xbcf8d0, globals=<optimized out>, locals=<optimized out>) at ../Python/ceval.c:604
#63 0x000000000066da21 in run_eval_code_obj (tstate=0xa7d510 <_PyRuntime+283024>, co=0xbcf8d0, globals=0x7ffff7434a00, locals=0x7ffff7434a00) at ../Python/pythonrun.c:1381
#64 0x000000000066988c in run_mod (mod=<optimized out>, filename=<optimized out>, globals=0x7ffff7434a00, locals=0x7ffff7434a00, flags=<optimized out>, arena=<optimized out>, interactive_src=0x0, generate_new_source=0)
    at ../Python/pythonrun.c:1466
#65 0x0000000000682983 in pyrun_file (fp=fp@entry=0xb1dcd0, filename=filename@entry=0x7ffff744aa30, start=start@entry=257, globals=globals@entry=0x7ffff7434a00, locals=locals@entry=0x7ffff7434a00, closeit=closeit@entry=1, 
    flags=0x7fffffffd3a8) at ../Python/pythonrun.c:1295
#66 0x0000000000682283 in _PyRun_SimpleFileObject (fp=fp@entry=0xb1dcd0, filename=filename@entry=0x7ffff744aa30, closeit=closeit@entry=1, flags=flags@entry=0x7fffffffd3a8) at ../Python/pythonrun.c:517
#67 0x00000000006820be in _PyRun_AnyFileObject (fp=0xb1dcd0, filename=0x7ffff744aa30, closeit=1, flags=0x7fffffffd3a8) at ../Python/pythonrun.c:77
#68 0x0000000000680ef1 in pymain_run_file_obj (program_name=0x7ffff7434b30, filename=0x7ffff744aa30, skip_source_first_line=0) at ../Modules/main.c:410
#69 pymain_run_file (config=0xa4fc08 <_PyRuntime+96392>) at ../Modules/main.c:429
#70 pymain_run_python (exitcode=0x7fffffffd39c) at ../Modules/main.c:697
#71 Py_RunMain () at ../Modules/main.c:776
#72 0x000000000063d6eb in Py_BytesMain (argc=<optimized out>, argv=<optimized out>) at ../Modules/main.c:830
#73 0x00007ffff7c90ca8 in __libc_start_call_main (main=main@entry=0x63d640 <main>, argc=argc@entry=3, argv=argv@entry=0x7fffffffd5d8) at ../sysdeps/nptl/libc_start_call_main.h:58
#74 0x00007ffff7c90d65 in __libc_start_main_impl (main=0x63d640 <main>, argc=3, argv=0x7fffffffd5d8, init=<optimized out>, fini=<optimized out>, rtld_fini=<optimized out>, stack_end=0x7fffffffd5c8) at ../csu/libc-start.c:360
#75 0x000000000063cab1 in _start ()

What I get from it is that the color is not coming from the style or the db, but is the result of an expression; see frame #3. Thing is, those parameters have been optimized out into CPU registers and there is no easy way to inspect them as is:

#3  mapnik::extract_value<mapnik::color>::operator() (this=<optimized out>, expr=...) at ./include/mapnik/symbolizer.hpp:342
342     in ./include/mapnik/symbolizer.hpp
(gdb) print this
$1 = <optimized out>
(gdb) print expr
$2 = <optimized out>

So, what options do I have? Well, one is to compile mapnik myself (and python3-mapnik, because I'm driving it from a Python program; you can see it in the stacktrace from frame #29 onwards2). I tried this first, but even compiling the released code on my Debian failed in ways I could not fix myself without potentially introducing more bugs.

But luckily and ofcoursely3, the sources from Debian did compile. I built the debs using CXXFLAGS='-g -O0' CFLAGS='-g -O0' dpkg-buildpackage -us -uc4, installed them by hand, and now I can run my debug session normally.

Unluckily for this post, the error was in the data: when I searched for it I had used the wrong key for the Postgres HStore (color vs colour; damn Webster!). It's also a lucky hit, because I was reaching the point where I would have had to untangle a mess of C++ templating, and I was running out of time. mapnik is definitely the hardest code I have ever had to follow; you get a preview just in that backtrace.


  1. I know the code is in C++, it's just for the joke. 

  2. To be honest, it surprises me that most of the stack is Python layers; mapnik and agg seem to like very deep call stacks. Maybe it's just I'm not that far deep. 

  3. Sue me :) 

  4. You have no idea how much time I spent coming up with this invocation. In retrospect, it's obvious, but mapnik seems to be able to use both scons and cmake for configuring the build; I was using the scons style of params while I should have been using the cmake one5

  5. Part of the confusion comes from the fact that I used to compile mapnik from git, and as is my custom I still had the ./config.sh script I use (together with ./build.sh) to leave a trace of how I compiled something. This script was using scons, while debian/rules uses cmake. I was aware cmake was involved, mostly from the compilation output on the terminal, the classic [ 1%] Building CXX object CMakeFiles/wkt.dir/src/wkt/wkt_factory.cpp.o type of lines, but I thought that for some reason scons was calling it. 

Liberating your ebooks purchased on Amazon, A.K.A. the sigh post

Wow, first written (not dictated) post in a long time. Let's see if I can finish it.

Given the fact that Amazon is shutting down downloading your purchased ebooks, I decided to download them all before the cut in a couple of days. To me, the definitive guide comes in this toot:

rem@front-end.social, Feb 18

The first link is a tool that will automatically download the books for you; otherwise, you have to download them by hand/clickops. If your collection is big that can be tedious. The first problem is that that tool is developed in an obscure programming language called bun1, which of course gets installed by the classic curl | bash method, which... sigh. Let's just say it's not ideal. At least the options they use in curl are not bad, but it would be nice if they used the --long-options so I don't have to peruse curl's man page to check they're not doing anything even crappier, like ignoring bad SSL certs or something worse.

So instead of blindly doing that, I open the script and start reading. The usual crap, but it includes this gem:

install_env=BUN_INSTALL
bin_env=\$$install_env/bin

install_dir=${!install_env:-$HOME/.bun}
bin_dir=$install_dir/bin
exe=$bin_dir/bun

In a roundabout way, it's saying that it will honor the BUN_INSTALL envvar as the root of the installation, defaulting to $HOME/.bun, which is what I was looking for; I don't want more trash in my home directory. I also run the script with tracing to make sure it doesn't do anything ugly.
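For clarity, here's what that indirection amounts to, sketched in Python (my translation, not part of the installer):

```python
# What the installer's indirect expansion boils down to: honor the
# BUN_INSTALL envvar as the installation root, else default to ~/.bun.
import os

install_dir = os.environ.get("BUN_INSTALL", os.path.expanduser("~/.bun"))
bin_dir = os.path.join(install_dir, "bin")
exe = os.path.join(bin_dir, "bun")
```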

mdione@ioniq:~/src/system/fsck-amazon$ BUN_INSTALL=$(pwd) bash -x ./install

I should have read it more. The bloody thing helpfully adds these lines to the .bashrc:

# bun
export BUN_INSTALL="$HOME/src/system/fsck-amazon"
export PATH="$BUN_INSTALL/bin:$PATH"

sigh So I undo that and only set the envvars on the shell I run for all this.

Ok, now to follow the bulk tool's install instructions. The step that takes the most time is:

🚚 typescript...

sigh. To be slightly fair, my internet has been crappy for a while now. I blame the PLC network between my ISP router and my home made AP.

So far only (!!!) 121MiB have been used. sigh Let's see how much it will be at the end, because the next step is:

bunx puppeteer browsers install chrome

See that last one? This is because the only things that can use the web lately are full fledged browsers that include a fsck-ton of technologies, so this thing is going to drive Amazon's site with a full fledged Chrome browser. And of course it escapes my feeble attempt to give it a jail:

mdione@ioniq:~/src/system/fsck-amazon/amazon-kindle-bulk-downloader$ bunx puppeteer browsers install chrome
chrome@133.0.6943.98 /home/mdione/.cache/puppeteer/chrome/linux-133.0.6943.98/chrome-linux64/chrome

sigh

mdione@ioniq:~/src/system/fsck-amazon$ du -sm /home/mdione/.cache/puppeteer/
585     /home/mdione/.cache/puppeteer/

So 706MiB so far.

eyeroll

To avoid writing the auth on disk in plain text, I just run the tool with manual auth:

mdione@ioniq:~/src/system/fsck-amazon/amazon-kindle-bulk-downloader$ bun run start --manualAuth
$ bun run src/index.ts --manualAuth
 Enter the Amazon base URL  https://www.amazon.com/
 Press enter once you've logged in … yes
Got auth
[...]
Downloading complete. You can find your books in the 'downloads' folder.

And that's it. After this I cleaned up the two directories, making sure not to delete the downloads the tool created.

Next step is to download the Calibre plugin (I already have Calibre via the OS packages) and follow the install instructions. One of the steps is to write a list of serials for the ebooks you have. In my case, it was in drop-from-the-top menu -> All Settings -> Device Options -> Device Info -> Serial Number; it's a string of 4 groups of 4 characters, WWWW XXXX YYYY ZZZZ. Also in my case, the config file was ~/.config/calibre/plugins/dedrm.json. You can write it by hand or use the GUI to add new serials. For the latter, it's Calibre -> Preferences -> Plugins -> File type -> DeDRM -> Customize plugin -> Kindle eInk ebooks -> + (add serial). Just notice it wants the serial as a single 16-char string.

Last step: convert them all:

mdione@ioniq:~/src/system/fsck-amazon$ for file in downloads/*; do calibredb add "$file" --with-library=library; done

Notice two things: when I said "making sure not to delete the downloads the tool created", I meant moving that directory into this root and deleting everything else. Second, this just DeDRMs the files, and leaves them in a different directory (library), but Calibre won't see them until you really add them.


  1. Oh, fscking hell: "Bun is a fast JavaScript all-in-one toolkit". sigh 

Replacing an ORM with SQLite and dataclasses

Note: due to my new setup, this post came out almost in raw form. It is now heavily edited, as it should have been, but at least it forced me to finish it :)

I started using Mastodon some three years ago, and since the beginning I started having issues adapting to how timelines work and how you actually should use the platform. I listed several things that I didn't like, and I always thought that at some point I would just sit down and write my own client. This weekend looked like it would rain all weekend, so it was the perfect time to do it.

As with any project that uses a database, I usually go for an ORM. I have already used SQLAlchemy on 3 or 4 projects, so it was natural for me to use it again. One of them is probably dead, but the rest get repeated usage; not daily, but many times a year, for many years. Through all those years, most had the data layer written once and then forgotten about, because the data model is quite simple; this case is no different. The problem is that every time I mostly have to relearn how to use it, especially since, even though SQLAlchemy has evidently stayed quite backwards compatible, it has also evolved.

Version 2.0 has a very different interface. With the old version, if you wanted to fetch an object, a row from the database, you did something like this:

image = session.query(Image).filter_by(name=filename)[0]

But with the new version, they have exposed a thin abstraction of the SQL language; the previous example becomes:

image = session.scalars(select(Image).where(Image.name == filename)).first()

I love the magick1 they are using to convert that expression into something they can later render as SQL, which involves rewriting __getattribute__() to return another object that implements __eq__() to capture the value.
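A minimal sketch of the trick (my toy version, nothing like SQLAlchemy's actual implementation): a class attribute whose __eq__() returns a comparison object instead of a bool, which can later be rendered as SQL:

```python
# Toy expression capture: comparing a Column doesn't evaluate anything,
# it builds a Comparison object that remembers the operands.
class Comparison:
    def __init__(self, column, op, value):
        self.column, self.op, self.value = column, op, value

    def as_sql(self):
        # render the captured expression as parametrized SQL
        return f"{self.column.name} {self.op} ?", (self.value,)

class Column:
    def __init__(self, name):
        self.name = name

    def __eq__(self, value):
        # capture the comparison instead of computing True/False
        return Comparison(self, "=", value)

class Image:
    name = Column("name")

expr = Image.name == "photo.jpg"  # a Comparison object, not a bool
print(expr.as_sql())              # ('name = ?', ('photo.jpg',))
```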

It's like writing SQL in Python, right? And in some ways it makes a lot of sense, because one of the things people usually complain about with ORMs is that, at some point, things that are easy in SQL become too convoluted in the ORM. It's like bending over backwards just to kind of scratch your left ear with your right arm, but under your left leg.

So, putting a slight veneer of Python on top of SQL kind of makes sense, and I like that someone has explored that idea to the point that it has become the new way of doing things. I guess the old API still works; all the code I have written so far still works, and I still use it, but the new way looks better.

Another thing ORMs do, and SQLAlchemy in particular, is to have two levels of data returned. On one level you get objects: you fetch stuff from the database and you get objects in return, the objects you are thinking of in terms of your application model and not in terms of your tables. This is amazing because you can do one-to-many and many-to-many relationships and it's just natural. Let's say a person has several addresses: you just say person.addresses.append(address) and it just works; you don't have to think about anything, you just add stuff and it goes in the database. Or you have a many-to-many relationship between photos and tags: you fetch a tag, ask for all its photos, and it just gives you the photos, and you completely forget about the intermediary table. It just works and it's fantastic. Then there's the other API where you don't get objects, just data from the database: rows, tuples with values inside, with no structure at all.

What I did on Friday night was mostly to look at the responses from Mastodon servers, see what they look like, and try to make a data model based on that. I am not going to completely model accounts or toots, because they have way more information than I want to display in my client, but that's all I did that night.
Saturday morning I started writing the model in SQLAlchemy, and it was just fine: mapping JSON fields to tables and columns, and then to classes, objects and attributes. I started writing all the code and everything was kind of obvious. But at some point I hit a wall: I was fetching accounts based on an ID, and instead of returning objects, it was returning tuples. The flow was: I get a new toot, tooted by some account; if I don't know the account, I create a new one and then create the toot with it. But when the account already existed and I got a tuple instead of an object, saying toot.author is this thing was assigning just the string of the ID, because that's all the ORM was giving me. And of course later the ORM said: no, this doesn't work, I expect an object and you're giving me just an ID. I spent from Saturday afternoon to Sunday morning on that; okay, not 24 hours, maybe like 10 hours. I asked everywhere and didn't get a very good answer, so I just switched to sqlite3. I just had to do it.

Before, my workflow was: if I needed to add a new column to a table, I only had to add a new attribute to the class and then destroy the database and create it again. One of the good decisions I made at the beginning was to store all the original information raw, just getting the JSON and storing it in the database. That means that if I make any changes to the derived data structures, I can just read everything back from the database, create the new objects and save them in the database again. And this is something I can use to replace database structure migrations.
I just nuke all the secondary tables and create all the data again from scratch from the original data, so I can do these migrations very easily now. With SQLAlchemy, I could just drop the tables, create everything again and recreate all the objects, and that worked; but now I don't have that anymore. So what do I do? Well, my workflow now includes DB Browser for SQLite, a GUI application for manipulating SQLite files. I just had to add a new column to the Toot table, so I did just that in the GUI. What I'm going to do at some point is to make a dump of the structure and keep it in a SQL file, so if I ever have to do this migration again, which I will at some point, I can just execute that file from scratch when I have to create the tables again, and that's it.

I really wish SQLAlchemy would have been just transparent for me. But I guess that's exactly the type of criticism ORMs usually get: 90% of the work is really transparent, but then at some point it is no longer so transparent and you start doing weird shit just to accomplish what you want. What I'm going to do in the future, I'm not really sure; I will see how this raw sqlite3 approach works. This project is rather simple and I have very few tables. There are two tables for the original raw content: one for the actual raw content and one for the raw toot content. The thing is, when you boost another toot, you basically generate a toot that has another toot nested into it.
So the original raw content includes all that information, but for my client the most important thing is the original toot itself. So I have another table that only has the original toots, not the boosts, and the client will just work from that. And then I have toots, tooters, and tags so far. I'm not really sure if I will do tables for bookmarks and filters and all that, because those things usually stay on the server, in your instance, and I should just query them all the time. I'm also thinking of having an offline mode where you can do stuff, and if you cannot send it immediately to the instance, you just queue it for later, for when you have a connection, or when you want to use one. So that's it: a very, very simple data model. If this goes right but later I have a more complex data model, I might try other ORMs, especially simpler ones that are less magic and a bit more transparent, but we'll see when the time comes.
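To make the title concrete, here's a hedged sketch of the resulting setup: a dataclass per table and plain sqlite3, with the raw JSON stored verbatim so derived tables can be rebuilt at will. Table and field names here are made up for illustration, not my actual schema:

```python
# Dataclasses + sqlite3 instead of an ORM: raw JSON is the source of
# truth; derived tables can be dropped and refilled from it, which
# stands in for schema migrations.
import json
import sqlite3
from dataclasses import dataclass

@dataclass
class Toot:
    id: int
    account_id: int
    text: str

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE raw_toots (id INTEGER PRIMARY KEY, raw TEXT)")
db.execute("CREATE TABLE toots (id INTEGER PRIMARY KEY, account_id INTEGER, text TEXT)")

# store the original JSON verbatim...
raw = {"id": 1, "account": {"id": 7}, "content": "hello"}
db.execute("INSERT INTO raw_toots VALUES (?, ?)", (raw["id"], json.dumps(raw)))

# ...and (re)build the derived table from it at any time
for (raw_json,) in db.execute("SELECT raw FROM raw_toots").fetchall():
    data = json.loads(raw_json)
    db.execute("INSERT INTO toots VALUES (?, ?, ?)",
               (data["id"], data["account"]["id"], data["content"]))

row = db.execute("SELECT id, account_id, text FROM toots").fetchone()
toot = Toot(*row)  # always an object, never a bare tuple
print(toot)        # Toot(id=1, account_id=7, text='hello')
```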


  1. Typo intended. It's not real magic, just a clever data model, but sometimes indistinguishable from black magic :) 

Reducing latency in rendering tile servers vs rendering batches

Just a quick note. Many years ago I started rendering my maps in batches. Then I found out that to increase speed, I could use metatiles. I found that 8 was a good size.

Now I developed a rendering tile server, which means I can query for tiles and, if they have not been rendered yet, they can be rendered on the fly. But now the requirements changed. When rendering batches, I can keep the rendering pipeline busy because there's always another metatile to render (unless I'm at the end of the batch), so all the threads are always busy. But with such a big metatile, the latency is terrible. That's not a problem there, because I'm not consuming the tiles immediately.

With the rendering tile server, I do care about latency. Those big metatiles can take several seconds to render, and as a user I don't want to wait that long to see the map. Also, because the client can see at most 4 metatiles, but usually just 1 or 2, the rest of the threads sit idle. So by reducing the metatile size to 2, I can reduce the latency by parallelizing more. Granted, each individual tile takes more time to render on average, but the service becomes snappier. So far a metatile size of 2 has proven good enough; I don't think I'll ever try 1.
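The tradeoff can be sketched with a back-of-the-envelope model; the constants here are made up for illustration, not measured:

```python
# Toy model: each metatile pays a fixed setup cost (db queries, style
# setup) plus a per-tile cost; none of these numbers come from my setup.
SETUP = 1.0      # seconds of fixed cost per metatile
PER_TILE = 0.05  # seconds per tile

def metatile_latency(size):
    """Time until a size x size metatile is fully rendered."""
    return SETUP + PER_TILE * size * size

def per_tile_cost(size):
    """Average cost per tile; batch rendering wants this minimized."""
    return metatile_latency(size) / (size * size)

for size in (1, 2, 8):
    print(size, metatile_latency(size), per_tile_cost(size))
# size 8 amortizes the setup cost best (lowest per-tile cost) but makes
# the user wait longest; size 2 trades some throughput for snappiness.
```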

We have been writing Ansible tasks wrong

Look at the following Ansible task:

- name: Prometheus Exporters
  - name: Extend caps for ping - File
    template:
      src:   "roles/server/files{{ item }}"
      dest:  "{{ item }}"
      owner: prometheus
      group: root
      mode:  0600
    loop:
      - /etc/systemd/system/prometheus-blackbox-exporter.service.d/override.conf
    when: "'blackbox' in exporters"
    register: foo
    tags: config, monitoring
    notify:
    # - Reload systemd
    - Reload Blackbox Exporter

For a while I have been wishing to be able to write this like this:

class PrometheusExporters:
    def extend_caps_for_ping_File(self):
        if 'blackbox' in exporters:
            foo = []
            for exporter in exporters:
                # tags: config, monitoring
                foo.append(template(f"roles/server/files{exporter}", exporter, 'prometheus', 'root', 0o600, 
                           notify=(reload_blackbox_exporter,)))

Notice that these are not 100% equivalent; the Python version uses exporter as the loop variable, but in the Ansible code I never set that (via loop_control's loop_var) because it's so cumbersome.

Why do I prefer that notation? Because:

  • Even if Ansible aims to be a declarative language, it has many programming-language features like when (if), loop (for) and block/rescue (try/except).
  • Yes, the register equivalent I wrote there is not really nice, but alternatives are, I think, worse.
  • When using certain modules I almost always use the same params. Notice that a proper translation would be template(src=f"roles/server/files{exporter}", dest=exporter, owner='prometheus', group='root', mode=0o600), and that this is very close to the inline module invocation a.k.a. free form arguments1: template: src="roles/server/files{{ exporter }}" dest="{{ exporter }}" owner=prometheus group=root mode=0600.
  • If the compiler were clever enough, I could declare functions that would work as templates for tasks.
  • As a programmer, this order makes more sense.

But I don't have to wait to have enough energy to write such compiler myself; I can more or less already change the order:

- name: Prometheus Exporters
  - name: Extend caps for ping - File
    when: "'blackbox' in exporters"
    loop:
      - /etc/systemd/system/prometheus-blackbox-exporter.service.d/override.conf
    register: foo
    template:
      src:   "roles/server/files{{ item }}"
      dest:  "{{ item }}"
      owner: prometheus
      group: root
      mode:  0600
    notify:
    # - Reload systemd
    - Reload Blackbox Exporter
    tags: config, monitoring

Not 100% where I want it (setting the loop variable is still ugly), but in my head it's easier to read. I don't use free form arguments because it's not clear that I can split the line like I did in the Python code.


  1. Thanks oblikoamorale#ansible@libera.chat. 

Diacritics and the US intl with dead keys keyboard layout

Two days ago https://norcal.social/@superball asked about generating smart quotes on Linux. I never got to understand what they meant by 'smart quotes', but I answered with a reference to the US intl with dead keys keyboard layout. I have been using it for ages; it allows me to write proper Spanish with a US layout, and it just mimicked my experience when learning to type on an Olivetti Lexicon 80.

I knew for a long while that I gloat, yes, gloat, about being able to write 'weird' letters, but it seems like I never really wrote them all. So today I sat down to waste, yes, waste a couple of hours, as in more than two, compiling the following table:

glyph name letters key combo
` grave ẁèỳùìòàǹm̀ `
~ tilde ẽỹũĩõãṽñ Shift+`
˝ double grave űő Alt-Gr+Shift+2
¯ macron3 ēȳūīōāḡǖ14 Alt-Gr+Shift+3
¸ cedilla5 ȩŗ46ţşḑģ4ḩķ46ļ46çņ46 Alt-Gr+Shift+5
^ circumflex ŵêŷûîîôâŝĝĥĵẑĉ Shift+6
̛7 horn ơ Alt-Gr+Shift+7
˛ ogonek ęųįǫą Alt-Gr+Shift+8
˘ breve3 ĕŭĭŏăğ Alt-Gr+Shift+9
° overring ẘẙůå8 Alt-Gr+Shift+0
̣ ̣9 underdot ẉẹṭỵụịọạṣḍḥḳḷẓṿḅṇṃ Alt-Gr+Shift+-
´ acute ẃéŕýúíóṕáśǵj́ḱĺźćǘ14ńḿ '
¨ 'two dots'10 ẅëẗÿüïöäḧẍ Shift+'
˙ overdot å1184é4ṙṫẏı12ȯṗȧṡḋḟġḣȷ12ŀ13żẋċḃṅṁ Alt-Gr+.
ˇ caron ěřť4ǔǐǒǎšď4ǧȟǰǩľ4žčǔň Alt-Gr+Shift+.
14 15 ʠⱳẻɼƭỷủỉỏƥảʂɗƒɠɦƙȥƈʋɓɲɱ Alt-Gr+Shift+/

The biggest surprises about it are:

  • Overdot is the most used writable diacritic, even when 5 of the letters use a different diacritic.
  • Underdot and caron get more use than I expected.
  • Only one letter with horn? Was it worth it? Maybe I don't know how to use it?
  • Ogonek is the best diacritic name. A shame it's so underused :)
  • If it wasn't for the overdot, x would be the single letter that can't be combined.
  • Best symbol? I'm partial to ř because it's used to represent a Czech sound that is also present in the West North West part of my country, one ocean and one Equator apart. See https://www.youtube.com/watch?v=uDpVPj49R8w and, wow, https://www.youtube.com/watch?v=9cZSKnfeigI.
  • But also and ơ for their different uniqueness.

Let me tell you about that last video. The guy is from Córdoba, from Cruz del Eje, so he has a mix of Cordobese accent but the ř sound from Traslasierra ('behind the mountains', referencing the Sierras Grandes) and La Rioja. Riojan people have another accent.

https://www.openstreetmap.org/relation/153536?mlat=-30.7336&mlon=-64.7943#map=8/-30.188/-66.621.

Of course, these are not all diacritics (see https://en.wikipedia.org/wiki/Diacritic), and probably not all the possible combinations. Not to mention that they can be accumulated; see both glyphs noted with 1, but also https://en.wikipedia.org/w/index.php?title=Zalgo_text.
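By the way, the stdlib can do the glyph inspection the footnotes mention; a quick sketch with unicodedata:

```python
# Inspect composed glyphs: name() gives the canonical Unicode name, and
# NFD decomposition shows the stacked combining diacritics.
import unicodedata

print(unicodedata.name("ř"))  # LATIN SMALL LETTER R WITH CARON
for ch in unicodedata.normalize("NFD", "ǘ"):  # u + diaeresis + acute
    print(hex(ord(ch)), unicodedata.name(ch))
```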

Finally, diacritics are not the only thing you can write with this layout. These symbols are also writable without combining:

¡¹²³¤£€¼½¾‘’¥×÷ äåé®™þüúíóö«» áßðëïœø¶° æ·©¢ñµ±ç¿

and ^ combined with digits writes them superscripted:

¹²³⁴⁵⁶⁷⁸⁹⁰

Definitely a very versatile layout. If you want to fully explore your keyboard layout, install tastenbrett2 and take a look.

Ah! And smart quotes are not actual glyphs but a feature (I completely forgot about it because I always deactivate them):

https://en.wikipedia.org/w/index.php?title=Quotation_marks_in_English#Smart_quotes.


  1. With v

  2. Literally 'keysboard' in German; 'tasten', keys; 'brett', board. 

  3. Can't help myself; I pronounce these in French :) 

  4. Notice how the diacritic applied is different to the one you're supposed to be pressing. 

  5. I can't quite see the glyph16, so I have to trust Python and unicodedata.name()

  6. These actually get an undercomma. 18 

  7. These ones are weird. At least on my editor they 'glue' to the character on its left and it becomes difficult to deal with. 

  8. According to my editor and Python, these are both LATIN SMALL LETTER A WITH RING ABOVE

  9. While typing these, I had to use Space to make them show in their isolated form. For this one I had to press it twice. Dunno why. 

  10. Not 'double dots'? This symbol is used for both diaeresis (diæresis?) and umlauts17

  11. With w

  12. Notice that the diacritic is an overdot, but these letters 'naturally' have dots in them, so it removes it. 

  13. Not sure if you see the same as me; I see a dot to the right of the l glyph. If this is the canonical representation, in some ways 4 applies. 

  14. It doesn't show at all; maybe because of 15

  15. This modifier changes the letters in many different ways, and the result not always looks like an actual diacritic, but I included it for completeness. 

  16. I'm getting old (almost 50!), presbyopia is hitting and these glasses are 1yo. Maybe it's also time I succumb and raise the minimum font size from 8 to 10. 

  17. How come all these names don't have themselves in it?!?! "Ümlaut", there, I fixed it for you :) 

  18. I have the impression that this post has more footnotes than actual text :)