Printing and identifying disks

Another short one. It seems like I haven't completed anything interesting lately, but I have a couple of things on the back burner. Also, this is mostly a reminder to myself, because I always spend 30 minutes coming up with this invocation1. This is another of those you-wouldn't-need-to-read-this-post-if-you-read-manpages-from-time-to-time posts.

I have a laptop that's the web/home/media/backup server and the AP. For many reasons, it has 4 disks8. I also have like 10 other old disks lying around, waiting for the day I buy a not-so-old tower to replace the laptop (which is... 13yo). When that day comes, identifying disks will be a necessity. So far, the best tool I have found that gives me an idea of what is what is lsblk, but I'm not satisfied with its default output format:

mdione@diablo:~$ lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sdb      8:16   0   1.8T  0 disk 
└─sdb1   8:17   0   1.8T  0 part /mnt/trash/mdione/public_html/Pictures
                                 /home/mdione/public_html/Pictures
                                 /home/backup
                                 /mnt/data
sdc      8:32   0 931.5G  0 disk 
├─sdc1   8:33   0    16G  0 part [SWAP]
└─sdc2   8:34   0 915.5G  0 part /
sdd      8:48   0 698.6G  0 disk 
└─sdd1   8:49   0 686.6G  0 part /home/mdione/public_html
                                 /mnt/trash
sde      8:64   0   1.8T  0 disk 
├─sde1   8:65   0  23.8M  0 part 
├─sde2   8:66   0   1.8T  0 part 
└─sde3   8:67   0  48.8G  0 part

Why? Because I don't care about MAJ:MIN, and I also need more info: partition table type2, device model3, and at least available space4.

Now look at this9:

mdione@diablo:~$ lsblk --all --output NAME,TYPE,VENDOR,MODEL,REV,ROTA,HOTPLUG,RO,SIZE,PTTYPE,PARTTYPENAME,PARTUUID,PARTFLAGS,PARTLABEL,FSTYPE,FSSIZE,UUID,FSAVAIL,LABEL,FSROOTS,MOUNTPOINTS

This space intentionally left blank

It is a lot of info, but:

  • NAME: yes, and I love the tree.
  • TYPE: OK, this might be extra, but to this day nvme0n1 still feels like a partition, and not a whole device.
  • VENDOR: Yeah, OK, this looks useless for internal disks (the other two are connected via USB, see HOTPLUG).
  • MODEL: yes, please
  • REV: OK, not that I upgrade disk firmware ever...
  • ROTA: HDD vs SSD/Flash/NVME
  • HOTPLUG: mostly, USB, see VENDOR6
  • RO: maybe useful for RO SDs?
  • SIZE: of course. Unluckily I can't find how to ask for a consistent unit (see the G vs M vs T).
  • PTTYPE: dos/MBR vs gpt, which is BIOS/legacy vs UEFI.
  • PARTTYPENAME: might be misleading, see sdb1's and the file system it hosts.
  • PARTUUID: Debian refuses to allow specifying the root partition by label, and this machine detects disks in the 'wrong' order, rendering it unbootable if I use /dev/sdXY instead, so this is important8.
  • PARTFLAGS: 0x80 is DOS bootable partition, which I need in this non-UEFI machine7.
  • FSTYPE: of course.
  • FSSIZE: ditto
  • UUID: see PARTUUID.
  • FSAVAIL: of course.
  • LABEL: except for /, I mount by label. Sue me, Debian.
  • FSROOTS: My disk usage is weird8. To me it's important to know what is mounted where. The many entries with non-/ FSROOTS are bind mounts.
  • MOUNTPOINTS: Ditto.

So, all in all, this command replaces mount (and even mount -t ext4), df -h, fdisk -l, perusing dmesg, and maybe more.
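Footnote 1 mentions turning this into a script; here's a minimal sketch of such a wrapper as a shell function (the name lsdisks is my own invention, not an existing tool):

```shell
# lsdisks: wrap the long lsblk invocation in a function; put it in
# ~/.bashrc or a script on $PATH. Extra arguments (e.g. a specific
# device) are passed straight through to lsblk.
lsdisks() {
    local cols=NAME,TYPE,VENDOR,MODEL,REV,ROTA,HOTPLUG,RO,SIZE,PTTYPE,PARTTYPENAME,PARTUUID,PARTFLAGS,PARTLABEL,FSTYPE,FSSIZE,UUID,FSAVAIL,LABEL,FSROOTS,MOUNTPOINTS
    lsblk --all --output "$cols" "$@"
}
```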


  1. Yeah, OK, I made a script out of it, OK? And it's now deployed everywhere (2 machines :) via Ansible. 

  2. For when I'm upgrading disks and the new one is not booting or something, like I did recently. 

  3. To identify two very similar disks for said upgrades, or make sure that /dev/sda is not the main disk on the machine5 but a USB pen drive I want to write a rescue image on; otherwise, I would be /dev[astated]/sad

  4. Who doesn't want to know this, right? 

  5. Since I have a laptop that can handle NVME devices, this has been /dev/nvme0n1, but almost 3y of novelty can't beat more than two decades of precious /dev/sda

  6. Yes, circular 'see', sue me. 

  7. TBF, this machine supports it, but the hassle it means to activate that now is beyond what I'm prepared to fix. 

  8. The rationale is like this: I have an internal SSD (not NVME, this machine is too old), which is /; an internal HDD for trash, mainly semi-throwaway map tiles mounted in my public_html directory, and it's full; an external HDD for local backup, which also serves my pics from a subdir of my public_html (if I have to run and leave everything behind, this is the first/only thing I pick up); and an external NVME in a USB enclosure, which is soon to replace the external HDD. 

  9. Chízus, Nikola completely fsck'd it up. Sorry for the text file. And TBH, this does not even fit my terminal on a 170%10 3840x2160 27" monitor, but at least it looks fine in my browser. 

  10. I'm old, OK? 

Image saving failed: -28 while hibernating Linux

A short one.

If you're trying to hibernate your system but it doesn't shut down, you'll probably try again. After a few attempts1, you go check dmesg and find this error message:

PM: Image saving failed: -28

I only thought about it now, but -28 is ENOSPC, the 'no space left on device' error. This means that what you have in memory does not fit in the remaining swap, even when compressed, which is how hibernation works: the memory image is compressed and written to swap.

$ free -h
            total        used        free      shared  buff/cache   available
Mem:            31Gi        14Gi        12Gi       2.7Gi       7.0Gi        16Gi
Swap:           47Gi        40Gi       7.8Gi

Unluckily, nothing bubbles this error up to the user interface; hence the retrying.

In my case, the solution was to stop the program that was using a lot of memory, but another one would be to create a temporary swap file big enough.
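The second option can be sketched like this; a rough estimate based only on /proc/meminfo, which prints the commands you would then run as root (the file name /swapfile.tmp is an arbitrary choice):

```shell
# Check whether the in-memory data fits in the free swap; if not, print
# the commands that would create a temporary swap file big enough.
# This is an upper bound: the hibernation image is compressed, so less
# space may actually be needed.
used_kb=$(awk '/^MemTotal/ {t=$2} /^MemFree/ {f=$2} END {print t-f}' /proc/meminfo)
swap_free_kb=$(awk '/^SwapFree/ {print $2}' /proc/meminfo)
if [ "$used_kb" -gt "$swap_free_kb" ]; then
    extra_mb=$(( (used_kb - swap_free_kb) / 1024 + 1 ))
    echo "fallocate -l ${extra_mb}M /swapfile.tmp"
    echo "chmod 600 /swapfile.tmp"
    echo "mkswap /swapfile.tmp"
    echo "swapon /swapfile.tmp"
else
    echo "free swap already covers used memory"
fi
```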


  1. my logs say I tried 4 times! 

(More than) Everything you always wanted to know about...

From time to time you need to deep-dive into a subject and you need a good site to read about it. Over the years I have come across many such sites, and this is my attempt to collect them in a single place. I have revisited some of the same subjects, but since searching the web has become a worse and worse experience, finding those sites again has been almost impossible.

I hope to grow this list until it covers all the sites I want to put here. Also, I'll be using Archive.org links, to make sure you can find them even if the sites are down.

Certificates and CAs

Linux memory

UEFI

Bikes

Optimizing Debian packages

If you follow me (I know, you don't), you know I render my own maps. For that I use mapnik, a library that can take data and a style and generate the maps I want. I install mapnik from Debian sid's original packaging. Lately, I've been wondering if I can shave some time off its rendering.

Debian binary packages are compiled from source (duh). Since there is a single binary package per CPU architecture, the compile options must be generic enough for it to run on all CPUs of that arch. This means they're optimized, but in the most generic way, with nothing specialized for the CPU you actually have (unless you have the most generic one, which might not even exist).

So the question is: can I get some more juice if I recompile at least mapnik, the lib that does most of the CPU crunching? Let's see.

But first, and this is the question that took me the most time to answer: how do I customize the flags used during compilation? You can download Debian source packages with apt source mapnik and the packages needed for compiling it with sudo apt build-dep mapnik. In particular, mapnik uses cmake and is developed in C++, so what we're looking for is a way to provide options to gcc and/or g++. Luckily, cmake understands CFLAGS and CXXFLAGS, so now it's a matter of telling Debian's build system to set those.

Unluckily, it seems like dpkg-buildpackage ignores those. But that tool uses dpkg-buildflags, which has several ways to modify those envvars. It has two sets of operations to modify the flags passed to, in this case, cmake: one set is for the package's maintainer, and the other can be used by a user like us to customize the compilation.

So, without further ado:

DEB_CFLAGS_APPEND='-O3 -march=native -flto' DEB_CXXFLAGS_APPEND='-O3 -march=native -flto' dpkg-buildpackage --no-sign binary

This will build a new set of .deb files compiled with those flags. But it doesn't mean you can use them as is.

I mean, you can, but I would rather do this in a cleaner manner. I would like a package that is seen as an update to the one already provided by Debian, but that also looks like an older version to any update that might later come from Debian (sid is like a rolling release, and I update once a week). This can be achieved with version suffixes1: add, for instance, +0.1mdione to the Debian version, and a new entry in debian/changelog:

mapnik (4.1.3+ds-2+0.1mdione) unstable; urgency=medium

  * Recompile with -O3 -march=native -flto

 -- Marcos Dione <mdione@grulic.org.ar>  Sun, 02 Nov 2025 11:55:22 +0200

More info at Section 6.3 of the Guide for Debian Maintainers.

And now, to test our optimizations. I decided to run a small batch: once for warm-up, once to measure; then upgrade, then run it again. But I run (!!!) into trouble: the second run took more time than the first one! In the end, I found a 16%+ swing between the slowest and the fastest runs (I ran it several more times), way more than what any optimization of this kind might give. Still, I had no option but to continue, just in case.

And the winner is: no win. After several runs, I couldn't even get numbers as good as the fastest from the original version. So, at least in my case, Gentoo'ing it did not make any sense2.


  1. this is not an official name. 

  2. OK, several caveats here: First, I didn't optimize the whole system, but in this case I'm not sure how much sense that would have made. I could optimize postgres4 (the data is mostly stored there) and the myriad of dependencies of mapnik3, but that would take more of my time than what I evidently can (or, actually, can't) get out of it. Second, the system was not 100% idle, so maybe that explains the swings. Third, I had a third, but I forgot. 

  3. 168 libraries, according to ldd, and I don't really know how many are used in the code paths I exercise. 

  4. I'm going to end this post here, but I notice that at least for zoom level 11, postgres was using a core 100% for ~2/3rds of the time it takes to render that single tile (3m+!), but afterwards its usage is lower, even when more cores (1 vs up to 8) are rendering at the same time. 

Letting screen and tmux survive a logout under systemd, and attaching processes to systemd scopes

I'm not exactly sure since when, but for a while I have been seeing a new behavior: logging out from a session that has some screen or tmux sessions launched will terminate those too, which defeats one of the purposes of running stuff in them.

There is a workaround, which you can find everywhere:

loginctl enable-linger $USER 
systemd-run --scope --user screen

This is good for launching new sessions, but if you already have a session running, you're out of luck.

Or are you?

Unluckily systemd does not have a nice way to do this. But it is still possible.

We can create a new scope and attach the screen master later:

systemd-run --scope --user sleep infinity
# Running as unit: run-p1787326-i1787327.scope; invocation ID: ed6adac8a0db4e8f9e9ce59731cc2aa5

We use infinity so the scope does not finish before we can attach to it. Now let's find the control group under which it's running:

systemctl --user show -P ControlGroup run-p1787326-i1787327.scope
# /user.slice/user-1000.slice/user@1000.service/app.slice/run-p1787326-i1787327.scope

This references a scope directory under /sys/fs/cgroup/. We need to add the PID of the screen master process to the cgroup.procs file in it:

echo 1789758 > /sys/fs/cgroup/user.slice/user-1000.slice/user@1000.service/app.slice/run-p1787326-i1787327.scope/cgroup.procs

Thanks to grawity#systemd@libera.chat.

Defining reserved space in blocks vs percentage

A quick one.

Many (most?) filesystems in Linux can define reserved space. This is space that only the superuser can write to, and it's usually reserved for times when you've run out of space, allowing the superuser to still operate the computer even when normal users can't write anymore. This is also why there is the concept of available vs free space in partitions.

Historically, reserved space was defined as a percentage. But with bigger and bigger disks (30TB now?), even 1% is a lot of space (300GB). Fortunately, we can also define it in blocks. At 4KiB per block, you can reserve "only" 1GiB with:

tune2fs -r $((1024 * 1024 / 4)) [partition]

Unluckily, many of these tools don't define --long-options.
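The arithmetic behind the -r argument, spelled out (assuming the ext4 default of 4 KiB blocks; note that 1024 * 1024 / 4 blocks of 4 KiB each add up to 1 GiB):

```shell
# reserved blocks = desired reserved size (KiB) / block size (KiB)
desired_kib=$((1024 * 1024))      # 1 GiB expressed in KiB
block_kib=4                       # ext4 default block size, in KiB
reserved_blocks=$((desired_kib / block_kib))
echo "$reserved_blocks"           # the number to pass to tune2fs -r
```

Check the current block size with tune2fs -l [partition] before trusting the 4 KiB assumption.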

screen to tmux

I use screen a lot, but not much of it. Given recent developments around several CVEs, I decided to try to learn the tmux equivalents of what I use. Both have the concept of windows. Both allow rebinding commands to keys, but these are the default bindings. tmux also has subcommands like tmux ls. Both support splitting the window; screen calls the areas 'regions', tmux calls them panes.

Feature               screen          tmux
list active sessions  screen -ls      tmux ls
new named session     screen -dR foo  tmux -L foo
reattach a session    screen -dR foo  tmux -L foo attach
control key           C-a             C-b
detach                d               d
new window            c               c
next window           n               n
previous window       p               p
split horizontally    S               "
split vertically      |               %
go to next pane       TAB             o
rename session        (none)1         $

  1. sessions can't be renamed in screen

Debugging system libraries in Debian

"My maps do not render". It's always a simple question like this. The error message was quite cryptic:

Failed to parse color: "D27E01"

This makes sense; that is not a color, #D27E01 is. I thought "I might have made a typo". Searching through my map style gave nothing. Hmm, that's weird. Maybe the database? I picked a few colors from the data, f.i. for bus or subte/metro/subway/underground lines. Nope, not that either. So where is it coming from?

I fired up gdb and, thanks to automatic debug symbol loading, I got more info than I would otherwise. First rock I stumbled upon: the function where that error is raised is called for every parsed color. In a map style there are a gazillion of them, and remember, I'm also parsing some coming from the db. So I had to set a conditional breakpoint. It's easy, right? Just break parse_color if [condition] and done!

Not so fast. To begin with, strings don't exist in C1; it's all char arrays and \0. Also, arrays don't exist in C; it's pointers and wishful thinking. condition would have to involve strcmp() and == 0. But the parameter is actually a std::string const& str, so it's in the C++ realm. After asking around, guinevere#gdb@libera.chat suggested break parse_color if $_streq(str.data(), "D27E01"), which worked.
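To avoid retyping that every session, the breakpoint can live in a gdb command file; a sketch, where the file name and the render script are made up ($_streq is one of gdb's built-in convenience functions for string comparison):

```
# hypothetical file parse_color.gdb
# use with: gdb -x parse_color.gdb --args python3 render.py
# stop only when the bogus color string comes in
break mapnik::parse_color if $_streq(str.data(), "D27E01")
run
# once it stops, show where the call came from
bt
```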

The next step was to make sense of the code. mapnik, the library I'm debugging, is the most inscrutable code I have ever seen. Here's a full backtrace of the moment I hit the bug:

#0  mapnik::parse_color (str="D27E01") at ./src/color_factory.cpp:32
#1  0x00007ffff5921295 in mapnik::color::color (this=this@entry=0x7fffffffc0d0, str="D27E01", premultiplied=premultiplied@entry=false) at ./src/color.cpp:38
#2  0x00007ffff614051d in mapnik::evaluate_expression_wrapper<mapnik::color>::operator()<mapbox::util::variant<mapnik::value_null, bool, long, double, icu_76::UnicodeString, mapnik::attribute, mapnik::global_attribute, mapnik::geometry_type_attribute, mapbox::util::recursive_wrapper<mapnik::unary_node<mapnik::tags::negate> >, mapbox::util::recursive_wrapper<mapnik::binary_node<mapnik::tags::plus> >, mapbox::util::recursive_wrapper<mapnik::binary_node<mapnik::tags::minus> >, mapbox::util::recursive_wrapper<mapnik::binary_node<mapnik::tags::mult> >, mapbox::util::recursive_wrapper<mapnik::binary_node<mapnik::tags::div> >, mapbox::util::recursive_wrapper<mapnik::binary_node<mapnik::tags::mod> >, mapbox::util::recursive_wrapper<mapnik::binary_node<mapnik::tags::less> >, mapbox::util::recursive_wrapper<mapnik::binary_node<mapnik::tags::less_equal> >, mapbox::util::recursive_wrapper<mapnik::binary_node<mapnik::tags::greater> >, mapbox::util::recursive_wrapper<mapnik::binary_node<mapnik::tags::greater_equal> >, mapbox::util::recursive_wrapper<mapnik::binary_node<mapnik::tags::equal_to> >, mapbox::util::recursive_wrapper<mapnik::binary_node<mapnik::tags::not_equal_to> >, mapbox::util::recursive_wrapper<mapnik::unary_node<mapnik::tags::logical_not> >, mapbox::util::recursive_wrapper<mapnik::binary_node<mapnik::tags::logical_and> >, mapbox::util::recursive_wrapper<mapnik::binary_node<mapnik::tags::logical_or> >, mapbox::util::recursive_wrapper<mapnik::regex_match_node>, mapbox::util::recursive_wrapper<mapnik::regex_replace_node>, mapbox::util::recursive_wrapper<mapnik::unary_function_call>, mapbox::util::recursive_wrapper<mapnik::binary_function_call> >, mapnik::feature_impl, std::unordered_map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, mapnik::value_adl_barrier::value, std::hash<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::equal_to<std::__cxx11::basic_string<char, std::char_traits<char>, 
std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, mapnik::value_adl_barrier::value> > > > (this=<optimized out>, expr=..., feature=..., vars=...) at ./include/mapnik/symbolizer.hpp:284
#3  mapnik::extract_value<mapnik::color>::operator() (this=<optimized out>, expr=...) at ./include/mapnik/symbolizer.hpp:342
#4  apply<mapnik::detail::strict_value const&, mapnik::extract_value<mapnik::color> > (v=..., f=...) at /usr/include/mapbox/variant.hpp:332
#5  0x00007ffff61405e8 in apply<mapnik::detail::strict_value const&, mapnik::extract_value<mapnik::color> > (v=..., f=...) at /usr/include/mapbox/variant.hpp:336
#6  0x00007ffff6140640 in apply<mapnik::detail::strict_value const&, mapnik::extract_value<mapnik::color> > (v=..., f=...) at /usr/include/mapbox/variant.hpp:336
#7  0x00007ffff61406a0 in apply<mapnik::detail::strict_value const&, mapnik::extract_value<mapnik::color> > (v=..., f=...) at /usr/include/mapbox/variant.hpp:336
#8  0x00007ffff6140700 in apply<mapnik::detail::strict_value const&, mapnik::extract_value<mapnik::color> > (v=..., f=...) at /usr/include/mapbox/variant.hpp:336
#9  0x00007ffff6140760 in apply<mapnik::detail::strict_value const&, mapnik::extract_value<mapnik::color> > (v=..., f=...) at /usr/include/mapbox/variant.hpp:336
#10 0x00007ffff61407c0 in apply<mapnik::detail::strict_value const&, mapnik::extract_value<mapnik::color> > (v=..., f=...) at /usr/include/mapbox/variant.hpp:336
#11 0x00007ffff61407eb in visit<mapnik::extract_value<mapnik::color>, mapnik::detail::strict_value const&> (v=..., f=...) at /usr/include/mapbox/variant.hpp:838
#12 0x00007ffff6140837 in mapnik::util::apply_visitor<mapnik::extract_value<mapnik::color>, mapnik::detail::strict_value const&> (v=..., f=...) at ./include/mapnik/util/variant.hpp:42
#13 0x00007ffff614090b in mapnik::get<mapnik::color, (mapnik::keys)9> (sym=..., feature=..., vars=std::unordered_map with 0 elements) at ./include/mapnik/symbolizer.hpp:335
#14 mapnik::agg_renderer<mapnik::image<mapnik::rgba8_t>, mapnik::label_collision_detector4>::process (this=0x7fffffffc890, sym=..., feature=..., prj_trans=...) at ./src/agg/process_line_symbolizer.cpp:95
#15 0x00007ffff597d682 in mapnik::process_impl<true>::process<mapnik::agg_renderer<mapnik::image<mapnik::rgba8_t>, mapnik::label_collision_detector4>, mapnik::line_symbolizer, mapnik::feature_impl, mapnik::proj_transform> (ren=..., 
    sym=..., f=..., tr=...) at ./include/mapnik/symbolizer_dispatch.hpp:43
#16 0x00007ffff597d6e1 in mapbox::util::detail::dispatcher<void, mapnik::point_symbolizer, mapnik::line_symbolizer, mapnik::line_pattern_symbolizer, mapnik::polygon_symbolizer, mapnik::polygon_pattern_symbolizer, mapnik::raster_symbolizer, mapnik::shield_symbolizer, mapnik::text_symbolizer, mapnik::building_symbolizer, mapnik::markers_symbolizer, mapnik::group_symbolizer, mapnik::debug_symbolizer, mapnik::dot_symbolizer>::apply<mapbox::util::variant<mapnik::point_symbolizer, mapnik::line_symbolizer, mapnik::line_pattern_symbolizer, mapnik::polygon_symbolizer, mapnik::polygon_pattern_symbolizer, mapnik::raster_symbolizer, mapnik::shield_symbolizer, mapnik::text_symbolizer, mapnik::building_symbolizer, mapnik::markers_symbolizer, mapnik::group_symbolizer, mapnik::debug_symbolizer, mapnik::dot_symbolizer> const&, mapnik::symbolizer_dispatch<mapnik::agg_renderer<mapnik::image<mapnik::rgba8_t>, mapnik::label_collision_detector4> > > (
    v=..., f=...) at /usr/include/mapbox/variant.hpp:336
#17 0x00007ffff597d6f9 in mapbox::util::variant<mapnik::point_symbolizer, mapnik::line_symbolizer, mapnik::line_pattern_symbolizer, mapnik::polygon_symbolizer, mapnik::polygon_pattern_symbolizer, mapnik::raster_symbolizer, mapnik::shield_symbolizer, mapnik::text_symbolizer, mapnik::building_symbolizer, mapnik::markers_symbolizer, mapnik::group_symbolizer, mapnik::debug_symbolizer, mapnik::dot_symbolizer>::visit<mapnik::symbolizer_dispatch<mapnik::agg_renderer<mapnik::image<mapnik::rgba8_t>, mapnik::label_collision_detector4> >, mapbox::util::variant<mapnik::point_symbolizer, mapnik::line_symbolizer, mapnik::line_pattern_symbolizer, mapnik::polygon_symbolizer, mapnik::polygon_pattern_symbolizer, mapnik::raster_symbolizer, mapnik::shield_symbolizer, mapnik::text_symbolizer, mapnik::building_symbolizer, mapnik::markers_symbolizer, mapnik::group_symbolizer, mapnik::debug_symbolizer, mapnik::dot_symbolizer> const&, mapnik::point_symbolizer const&, void> (v=..., f=...) at /usr/include/mapbox/variant.hpp:838
#18 0x00007ffff597d712 in mapnik::util::apply_visitor<mapnik::symbolizer_dispatch<mapnik::agg_renderer<mapnik::image<mapnik::rgba8_t>, mapnik::label_collision_detector4> >, mapbox::util::variant<mapnik::point_symbolizer, mapnik::line_symbolizer, mapnik::line_pattern_symbolizer, mapnik::polygon_symbolizer, mapnik::polygon_pattern_symbolizer, mapnik::raster_symbolizer, mapnik::shield_symbolizer, mapnik::text_symbolizer, mapnik::building_symbolizer, mapnik::markers_symbolizer, mapnik::group_symbolizer, mapnik::debug_symbolizer, mapnik::dot_symbolizer> const&> (f=..., v=...) at ./include/mapnik/util/variant.hpp:42
#19 0x00007ffff598611a in mapnik::feature_style_processor<mapnik::agg_renderer<mapnik::image<mapnik::rgba8_t>, mapnik::label_collision_detector4> >::render_style (this=<optimized out>, p=..., style=0x1cb2e60, rc=..., 
    features=std::shared_ptr<mapnik::Featureset> (use count 3, weak count 0) = {...}, prj_trans=...) at ./include/mapnik/feature_style_processor_impl.hpp:592
#20 0x00007ffff59869af in mapnik::feature_style_processor<mapnik::agg_renderer<mapnik::image<mapnik::rgba8_t>, mapnik::label_collision_detector4> >::render_material (this=this@entry=0x7fffffffc890, mat=..., p=...)
    at ./include/mapnik/feature_style_processor_impl.hpp:552
#21 0x00007ffff5987994 in mapnik::feature_style_processor<mapnik::agg_renderer<mapnik::image<mapnik::rgba8_t>, mapnik::label_collision_detector4> >::render_submaterials (this=this@entry=0x7fffffffc890, parent_mat=..., p=...)
    at ./include/mapnik/feature_style_processor_impl.hpp:453
#22 0x00007ffff598c1e0 in mapnik::feature_style_processor<mapnik::agg_renderer<mapnik::image<mapnik::rgba8_t>, mapnik::label_collision_detector4> >::apply (this=this@entry=0x7fffffffc890, scale_denom=<optimized out>, 
    scale_denom@entry=0) at ./include/mapnik/feature_style_processor_impl.hpp:148
#23 0x00007ffff6d9d858 in agg_renderer_visitor_1::operator()<mapnik::image<mapnik::rgba8_t> > (this=<optimized out>, pixmap=...) at src/mapnik_python.cpp:220
#24 0x00007ffff6dabeea in mapbox::util::detail::dispatcher<void, mapnik::image<mapnik::rgba8_t>, mapnik::image<mapnik::gray8_t>, mapnik::image<mapnik::gray8s_t>, mapnik::image<mapnik::gray16_t>, mapnik::image<mapnik::gray16s_t>, mapnik::image<mapnik::gray32_t>, mapnik::image<mapnik::gray32s_t>, mapnik::image<mapnik::gray32f_t>, mapnik::image<mapnik::gray64_t>, mapnik::image<mapnik::gray64s_t>, mapnik::image<mapnik::gray64f_t> >::apply<mapnik::image_any&, agg_renderer_visitor_1> (v=..., f=...) at /usr/include/mapbox/variant.hpp:332
#25 0x00007ffff6dabf28 in mapbox::util::detail::dispatcher<void, mapnik::image<mapnik::null_t>, mapnik::image<mapnik::rgba8_t>, mapnik::image<mapnik::gray8_t>, mapnik::image<mapnik::gray8s_t>, mapnik::image<mapnik::gray16_t>, mapnik::image<mapnik::gray16s_t>, mapnik::image<mapnik::gray32_t>, mapnik::image<mapnik::gray32s_t>, mapnik::image<mapnik::gray32f_t>, mapnik::image<mapnik::gray64_t>, mapnik::image<mapnik::gray64s_t>, mapnik::image<mapnik::gray64f_t> >::apply<mapnik::image_any&, agg_renderer_visitor_1> (v=..., f=...) at /usr/include/mapbox/variant.hpp:336
#26 0x00007ffff6dabf89 in mapbox::util::variant<mapnik::image<mapnik::null_t>, mapnik::image<mapnik::rgba8_t>, mapnik::image<mapnik::gray8_t>, mapnik::image<mapnik::gray8s_t>, mapnik::image<mapnik::gray16_t>, mapnik::image<mapnik::gray16s_t>, mapnik::image<mapnik::gray32_t>, mapnik::image<mapnik::gray32s_t>, mapnik::image<mapnik::gray32f_t>, mapnik::image<mapnik::gray64_t>, mapnik::image<mapnik::gray64s_t>, mapnik::image<mapnik::gray64f_t> >::visit<agg_renderer_visitor_1, mapnik::image_any&, mapnik::image<mapnik::null_t>&, void> (v=..., f=...) at /usr/include/mapbox/variant.hpp:838
#27 0x00007ffff6dabfa2 in mapnik::util::apply_visitor<agg_renderer_visitor_1, mapnik::image_any&> (f=..., v=...) at /usr/include/mapnik/util/variant.hpp:42
#28 0x00007ffff6da4e41 in render (map=..., image=..., scale_factor=1, offset_x=0, offset_y=0) at src/mapnik_python.cpp:316
#29 0x00007ffff6dac584 in boost::python::detail::invoke<int, void (*)(mapnik::Map const&, mapnik::image_any&), boost::python::arg_from_python<mapnik::Map const&>, boost::python::arg_from_python<mapnik::image_any&> > (f=<optimized out>, 
    ac0=..., ac1=<synthetic pointer>...) at /usr/include/boost/python/detail/invoke.hpp:79
#30 boost::python::detail::caller_arity<2u>::impl<void (*)(mapnik::Map const&, mapnik::image_any&), boost::python::default_call_policies, boost::mpl::vector3<void, mapnik::Map const&, mapnik::image_any&> >::operator() (
    this=<optimized out>, args_=<optimized out>) at /usr/include/boost/python/detail/caller.hpp:233
#31 0x00007ffff6fdf3ae in boost::python::objects::py_function::operator() (this=0xec3670, args=0x7fffe7caf400, kw=<optimized out>) at ./boost/python/object/py_function.hpp:147
#32 boost::python::objects::function::call (this=0xec3ac0, args=0x7fffe7caf400, keywords=0x0) at libs/python/src/object/function.cpp:221
#33 0x00007ffff6fdf62c in boost::python::objects::(anonymous namespace)::bind_return::operator() (this=<optimized out>) at libs/python/src/object/function.cpp:581
#34 boost::detail::function::void_function_ref_invoker0<boost::python::objects::(anonymous namespace)::bind_return, void>::invoke (function_obj_ptr=...) at ./boost/function/function_template.hpp:193
#35 0x00007ffff6fe465b in boost::function0<void>::operator() (this=<optimized out>) at ./boost/function/function_template.hpp:771
#36 boost::python::detail::exception_handler::operator() (this=<optimized out>, f=...) at libs/python/src/errors.cpp:74
#37 0x00007ffff6da9b67 in boost::python::detail::translate_exception<std::runtime_error, void (*)(std::runtime_error const&)>::operator() (this=<optimized out>, handler=..., f=..., 
    translate=0x7ffff6d9d0f0 <runtime_error_translator(std::runtime_error const&)>) at /usr/include/boost/python/detail/translate_exception.hpp:39
#38 boost::_bi::list3<boost::arg<1>, boost::arg<2>, boost::_bi::value<void (*)(std::runtime_error const&)> >::operator()<bool, boost::python::detail::translate_exception<std::runtime_error, void (*)(std::runtime_error const&)>, boost::_bi::rrlist2<boost::python::detail::exception_handler const&, boost::function0<void> const&> > (this=<optimized out>, f=..., a=<synthetic pointer>...) at /usr/include/boost/bind/bind.hpp:368
#39 boost::_bi::bind_t<bool, boost::python::detail::translate_exception<std::runtime_error, void (*)(std::runtime_error const&)>, boost::_bi::list3<boost::arg<1>, boost::arg<2>, boost::_bi::value<void (*)(std::runtime_error const&)> > >::operator()<boost::python::detail::exception_handler const&, boost::function0<void> const&> (this=<optimized out>, a1=..., a2=...) at /usr/include/boost/bind/bind.hpp:1298
#40 boost::detail::function::function_obj_invoker2<boost::_bi::bind_t<bool, boost::python::detail::translate_exception<std::runtime_error, void (*)(std::runtime_error const&)>, boost::_bi::list3<boost::arg<1>, boost::arg<2>, boost::_bi::value<void (*)(std::runtime_error const&)> > >, bool, boost::python::detail::exception_handler const&, boost::function0<void> const&>::invoke (function_obj_ptr=..., a0=..., a1=...)
    at /usr/include/boost/function/function_template.hpp:137
#41 0x00007ffff6da9bc7 in boost::python::detail::translate_exception<mapnik::value_error, void (*)(mapnik::value_error const&)>::operator() (this=<optimized out>, handler=..., f=..., 
    translate=0x7ffff6d9d250 <value_error_translator(mapnik::value_error const&)>) at /usr/include/boost/python/detail/translate_exception.hpp:39
#42 boost::_bi::list3<boost::arg<1>, boost::arg<2>, boost::_bi::value<void (*)(mapnik::value_error const&)> >::operator()<bool, boost::python::detail::translate_exception<mapnik::value_error, void (*)(mapnik::value_error const&)>, boost::_bi::rrlist2<boost::python::detail::exception_handler const&, boost::function0<void> const&> > (this=<optimized out>, f=..., a=<synthetic pointer>...) at /usr/include/boost/bind/bind.hpp:368
#43 boost::_bi::bind_t<bool, boost::python::detail::translate_exception<mapnik::value_error, void (*)(mapnik::value_error const&)>, boost::_bi::list3<boost::arg<1>, boost::arg<2>, boost::_bi::value<void (*)(mapnik::value_error const&)> > >::operator()<boost::python::detail::exception_handler const&, boost::function0<void> const&> (this=<optimized out>, a1=..., a2=...) at /usr/include/boost/bind/bind.hpp:1298
#44 boost::detail::function::function_obj_invoker2<boost::_bi::bind_t<bool, boost::python::detail::translate_exception<mapnik::value_error, void (*)(mapnik::value_error const&)>, boost::_bi::list3<boost::arg<1>, boost::arg<2>, boost::_bi::value<void (*)(mapnik::value_error const&)> > >, bool, boost::python::detail::exception_handler const&, boost::function0<void> const&>::invoke (function_obj_ptr=..., a0=..., a1=...)
    at /usr/include/boost/function/function_template.hpp:137
#45 0x00007ffff6da9c27 in boost::python::detail::translate_exception<std::out_of_range, void (*)(std::out_of_range const&)>::operator() (this=<optimized out>, handler=..., f=..., 
    translate=0x7ffff6d9d120 <out_of_range_error_translator(std::out_of_range const&)>) at /usr/include/boost/python/detail/translate_exception.hpp:39
#46 boost::_bi::list3<boost::arg<1>, boost::arg<2>, boost::_bi::value<void (*)(std::out_of_range const&)> >::operator()<bool, boost::python::detail::translate_exception<std::out_of_range, void (*)(std::out_of_range const&)>, boost::_bi::rrlist2<boost::python::detail::exception_handler const&, boost::function0<void> const&> > (this=<optimized out>, f=..., a=<synthetic pointer>...) at /usr/include/boost/bind/bind.hpp:368
#47 boost::_bi::bind_t<bool, boost::python::detail::translate_exception<std::out_of_range, void (*)(std::out_of_range const&)>, boost::_bi::list3<boost::arg<1>, boost::arg<2>, boost::_bi::value<void (*)(std::out_of_range const&)> > >::operator()<boost::python::detail::exception_handler const&, boost::function0<void> const&> (this=<optimized out>, a1=..., a2=...) at /usr/include/boost/bind/bind.hpp:1298
#48 boost::detail::function::function_obj_invoker2<boost::_bi::bind_t<bool, boost::python::detail::translate_exception<std::out_of_range, void (*)(std::out_of_range const&)>, boost::_bi::list3<boost::arg<1>, boost::arg<2>, boost::_bi::value<void (*)(std::out_of_range const&)> > >, bool, boost::python::detail::exception_handler const&, boost::function0<void> const&>::invoke (function_obj_ptr=..., a0=..., a1=...) at /usr/include/boost/function/function_template.hpp:137
#49 0x00007ffff6da9c87 in boost::python::detail::translate_exception<std::exception, void (*)(std::exception const&)>::operator() (this=<optimized out>, handler=..., f=..., 
    translate=0x7ffff6d9d150 <standard_error_translator(std::exception const&)>) at /usr/include/boost/python/detail/translate_exception.hpp:39
#50 boost::_bi::list3<boost::arg<1>, boost::arg<2>, boost::_bi::value<void (*)(std::exception const&)> >::operator()<bool, boost::python::detail::translate_exception<std::exception, void (*)(std::exception const&)>, boost::_bi::rrlist2<boost::python::detail::exception_handler const&, boost::function0<void> const&> > (this=<optimized out>, f=..., a=<synthetic pointer>...) at /usr/include/boost/bind/bind.hpp:368
#51 boost::_bi::bind_t<bool, boost::python::detail::translate_exception<std::exception, void (*)(std::exception const&)>, boost::_bi::list3<boost::arg<1>, boost::arg<2>, boost::_bi::value<void (*)(std::exception const&)> > >::operator()<boost::python::detail::exception_handler const&, boost::function0<void> const&> (this=<optimized out>, a1=..., a2=...) at /usr/include/boost/bind/bind.hpp:1298
#52 boost::detail::function::function_obj_invoker2<boost::_bi::bind_t<bool, boost::python::detail::translate_exception<std::exception, void (*)(std::exception const&)>, boost::_bi::list3<boost::arg<1>, boost::arg<2>, boost::_bi::value<void (*)(std::exception const&)> > >, bool, boost::python::detail::exception_handler const&, boost::function0<void> const&>::invoke (function_obj_ptr=..., a0=..., a1=...) at /usr/include/boost/function/function_template.hpp:137
#53 0x00007ffff6fe4511 in boost::function2<bool, boost::python::detail::exception_handler const&, boost::function0<void> const&>::operator() (this=<optimized out>, a0=..., a1=...) at ./boost/function/function_template.hpp:771
#54 boost::python::detail::exception_handler::handle (this=<optimized out>, f=...) at ./boost/python/detail/exception_handler.hpp:41
#55 boost::python::handle_exception_impl (f=...) at libs/python/src/errors.cpp:24
#56 0x00007ffff6fdc3c6 in boost::python::handle_exception<boost::python::objects::(anonymous namespace)::bind_return> (f=...) at ./boost/python/errors.hpp:29
#57 boost::python::objects::function_call (func=<optimized out>, args=<optimized out>, kw=<optimized out>) at libs/python/src/object/function.cpp:622
#58 0x0000000000543b8b in _PyObject_MakeTpCall (tstate=0xa7d510 <_PyRuntime+283024>, callable=0xec3ac0, args=<optimized out>, nargs=2, keywords=<optimized out>) at ../Objects/call.c:242
#59 0x000000000055f191 in _PyEval_EvalFrameDefault (tstate=<optimized out>, frame=<optimized out>, throwflag=<optimized out>) at ../Python/generated_cases.c.h:813
#60 0x000000000064db6c in _PyEval_EvalFrame (tstate=0xa7d510 <_PyRuntime+283024>, frame=0x7ffff7fb2020, throwflag=0) at ../Include/internal/pycore_ceval.h:119
#61 _PyEval_Vector (args=0x0, argcount=0, kwnames=0x0, tstate=0xa7d510 <_PyRuntime+283024>, func=0x7ffff7497740, locals=<optimized out>) at ../Python/ceval.c:1814
#62 PyEval_EvalCode (co=0xbcf8d0, globals=<optimized out>, locals=<optimized out>) at ../Python/ceval.c:604
#63 0x000000000066da21 in run_eval_code_obj (tstate=0xa7d510 <_PyRuntime+283024>, co=0xbcf8d0, globals=0x7ffff7434a00, locals=0x7ffff7434a00) at ../Python/pythonrun.c:1381
#64 0x000000000066988c in run_mod (mod=<optimized out>, filename=<optimized out>, globals=0x7ffff7434a00, locals=0x7ffff7434a00, flags=<optimized out>, arena=<optimized out>, interactive_src=0x0, generate_new_source=0)
    at ../Python/pythonrun.c:1466
#65 0x0000000000682983 in pyrun_file (fp=fp@entry=0xb1dcd0, filename=filename@entry=0x7ffff744aa30, start=start@entry=257, globals=globals@entry=0x7ffff7434a00, locals=locals@entry=0x7ffff7434a00, closeit=closeit@entry=1, 
    flags=0x7fffffffd3a8) at ../Python/pythonrun.c:1295
#66 0x0000000000682283 in _PyRun_SimpleFileObject (fp=fp@entry=0xb1dcd0, filename=filename@entry=0x7ffff744aa30, closeit=closeit@entry=1, flags=flags@entry=0x7fffffffd3a8) at ../Python/pythonrun.c:517
#67 0x00000000006820be in _PyRun_AnyFileObject (fp=0xb1dcd0, filename=0x7ffff744aa30, closeit=1, flags=0x7fffffffd3a8) at ../Python/pythonrun.c:77
#68 0x0000000000680ef1 in pymain_run_file_obj (program_name=0x7ffff7434b30, filename=0x7ffff744aa30, skip_source_first_line=0) at ../Modules/main.c:410
#69 pymain_run_file (config=0xa4fc08 <_PyRuntime+96392>) at ../Modules/main.c:429
#70 pymain_run_python (exitcode=0x7fffffffd39c) at ../Modules/main.c:697
#71 Py_RunMain () at ../Modules/main.c:776
#72 0x000000000063d6eb in Py_BytesMain (argc=<optimized out>, argv=<optimized out>) at ../Modules/main.c:830
#73 0x00007ffff7c90ca8 in __libc_start_call_main (main=main@entry=0x63d640 <main>, argc=argc@entry=3, argv=argv@entry=0x7fffffffd5d8) at ../sysdeps/nptl/libc_start_call_main.h:58
#74 0x00007ffff7c90d65 in __libc_start_main_impl (main=0x63d640 <main>, argc=3, argv=0x7fffffffd5d8, init=<optimized out>, fini=<optimized out>, rtld_fini=<optimized out>, stack_end=0x7fffffffd5c8) at ../csu/libc-start.c:360
#75 0x000000000063cab1 in _start ()

What I get from it is that the color is not coming from the style in the db but is the result of an expression; see frame #3. Thing is, those parameters have been optimized out into CPU registers and there is no easy way to inspect them as-is:

#3  mapnik::extract_value<mapnik::color>::operator() (this=<optimized out>, expr=...) at ./include/mapnik/symbolizer.hpp:342
342     in ./include/mapnik/symbolizer.hpp
(gdb) print this
$1 = <optimized out>
(gdb) print expr
$2 = <optimized out>

So, what options do I have? Well, one is to compile mapnik myself (and python3-mapnik, because I'm driving it from a Python program; you can see it in the stacktrace since frame #292). I tried this first, but even compiling the released code on my Debian failed in ways I could not fix without potentially introducing more bugs.

But luckily and ofcoursely3, the sources from Debian did compile. I compiled the debs with CXXFLAGS='-g -O0' CFLAGS='-g -O0' dpkg-buildpackage -us -uc4, installed them by hand, and now I can run my debug session normally.

Unluckily for this post, the error was in the data: when I searched for it I had used the wrong key for the Postgres HStore (color vs colour; damn Webster!). It's also a lucky hit, because I was reaching the point where I would have had to untangle a mess of C++ templating and I was running out of time. mapnik is definitely the hardest code I have ever had to follow. You get a preview just in that backtrace.


  1. I know the code is in C++, it's just for the joke. 

  2. To be honest, it surprises me that most of the stack is Python layers; mapnik and agg seem to like very deep call stacks. Maybe it's just that I'm not that deep in the stack. 

  3. Sue me :) 

  4. You have no idea how much time I spent coming up with this invocation. In retrospect, it's obvious, but mapnik seems to be able to use both scons and cmake for configuring the build; I was using the scons style of params when I should have been using the cmake one5

  5. Part of the confusion comes from the fact that I used to compile mapnik from git, and as is my custom I still had the ./config.sh script I use (together with ./build.sh) to leave a trace of how I compiled something. This script was using scons, while debian/rules uses cmake. I was aware cmake was involved, mostly from the compilation output on the terminal, the classic [ 1%] Building CXX object CMakeFiles/wkt.dir/src/wkt/wkt_factory.cpp.o type of lines, but I thought that for some reason scons was calling it. 

Liberating your ebooks purchased on Amazon, A.K.A. the sigh post

Wow, first written (not dictated) post in a long time. Let's see if I can finish it.

Given the fact that Amazon is shutting down the ability to download your purchased ebooks, I decided to download them all before the cutoff in a couple of days. To me, the definitive guide comes in this toot:

rem@front-end.social, Feb 18

The first link is a tool that will automatically download the books for you; otherwise, you have to download them by hand/clickops, which can be tedious if your collection is big. The first problem is that the tool is developed in an obscure programming language called bun1, which of course gets installed by the classic curl | bash method, which... sigh. Let's just say it's not ideal. At least the options they use in curl are not bad, but it would be nice if they used the --long-options so I don't have to peruse curl's man page to check they're not doing anything even crappier, like ignoring bad SSL certs or worse.

So instead of blindly doing that, I open the script and start reading. The usual crap, but it includes this gem:

install_env=BUN_INSTALL
bin_env=\$$install_env/bin

install_dir=${!install_env:-$HOME/.bun}
bin_dir=$install_dir/bin
exe=$bin_dir/bun

In a roundabout way, it's saying that it will honor the BUN_INSTALL envvar as the root of the installation, defaulting to $HOME/.bun, which is what I was looking for; I don't want more trash in my home directory. I also run the script with tracing to make sure it doesn't do anything ugly.

mdione@ioniq:~/src/system/fsck-amazon$ BUN_INSTALL=$(pwd) bash -x ./install
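As an aside, that gem relies on bash's indirect expansion, which is easy to forget; a minimal demonstration (this is a bash-ism, not POSIX sh):

```shell
#!/bin/bash
# ${!var} expands the variable whose *name* is stored in var;
# the :- part supplies a default when that variable is unset or empty.
install_env=BUN_INSTALL

BUN_INSTALL=/tmp/bun-test
echo "${!install_env:-$HOME/.bun}"   # prints /tmp/bun-test

unset BUN_INSTALL
echo "${!install_env:-$HOME/.bun}"   # falls back to $HOME/.bun
```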

I should have read more of it. The bloody thing helpfully adds these lines to the .bashrc:

# bun
export BUN_INSTALL="$HOME/src/system/fsck-amazon"
export PATH="$BUN_INSTALL/bin:$PATH"

sigh So I undo that and only set the envvars on the shell I run for all this.

OK, now to follow the bulk tool's install instructions. The step that takes the most time is:

🚚 typescript...

sigh. To be slightly fair, my internet has been crappy for a while now. I blame the PLC network between my ISP router and my home made AP.

So far only (!!!) 121MiB have been used. sigh Let's see how much it will be at the end, because the next step is:

bunx puppeteer browsers install chrome

See that last one? That's because lately the only things that can use the web are full fledged browsers that include a fsck-ton of technologies, so this thing is going to drive Amazon's site with a full fledged Chrome browser. And of course it escapes my feeble attempt to give it a jail:

mdione@ioniq:~/src/system/fsck-amazon/amazon-kindle-bulk-downloader$ bunx puppeteer browsers install chrome
chrome@133.0.6943.98 /home/mdione/.cache/puppeteer/chrome/linux-133.0.6943.98/chrome-linux64/chrome

sigh

mdione@ioniq:~/src/system/fuck-amazon/amazon-kindle-bulk-downloader$ du -sm /home/mdione/.cache/puppeteer/
585     /home/mdione/.cache/puppeteer/

So 706MiB so far.

eyeroll

To avoid writing the auth to disk in plain text, I just run the thing with manual auth:

mdione@ioniq:~/src/system/fsck-amazon/amazon-kindle-bulk-downloader$ bun run start --manualAuth
$ bun run src/index.ts --manualAuth
 Enter the Amazon base URL  https://www.amazon.com/
 Press enter once you've logged in … yes
Got auth
[...]
Downloading complete. You can find your books in the 'downloads' folder.

And that's it. After this I cleaned up the two directories, making sure not to delete the downloads the tool created.

Next step is to download the Calibre plugin (I already have Calibre via the OS packages) and follow the install instructions. One of the steps is to write a list of serials for the ebooks you have. In my case, it was under drop-from-the-top menu -> All Settings -> Device Options -> Device Info -> Serial Number; it's a string of 4 groups of 4 characters, WWWW XXXX YYYY ZZZZ. Also in my case, the config file was ~/.config/calibre/plugins/dedrm.json. You can write it by hand or use the GUI to add new serials. For the latter, it's Calibre -> Preferences -> Plugins -> File type -> DeDRM -> Customize plugin -> Kindle eInk ebooks -> + (add serial). Just notice it wants the serial as a single 16 char string.
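For reference, after adding a serial the relevant part of dedrm.json looks roughly like this; I'm going from memory here, so treat the key name as an assumption and prefer the GUI route if in doubt:

```json
{
    "serials": ["WWWWXXXXYYYYZZZZ"]
}
```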

Last step: convert them all:

mdione@ioniq:~/src/system/fuck-amazon$ for file in downloads/*; do calibredb add "$file" --with-library=library; done

Notice two things. First, when I said "making sure not to delete the downloads the tool created", I meant moving that directory into this root and deleting everything else. Second, this just DeDRMs the files and leaves them in a different directory (library), but Calibre won't see them until you really add them.


  1. Oh, fscking hell: "Bun is a fast JavaScript all-in-one toolkit". sigh 

Replacing an ORM with SQLite and dataclasses

Note: due to my new setup, this post came out almost in raw form. It is now heavily edited, as it should have been, but at least it forced me to finish it :)

I started using Mastodon some three years ago, and since the beginning I had issues adapting to how timelines work and how you are actually supposed to use the platform. I listed several things I didn't like, and I always thought that at some point I would just sit down and write my own client. It looked like it would rain all weekend, so it was the perfect time to do it.

As with any project that uses a database, I usually go for an ORM. I have already used SQLAlchemy on 3 or 4 projects, so it was natural for me to use it again. One of them is probably dead, but the rest get repeated usage; not daily, but many times a year, for many years. In most of them the data layer was written once and then forgotten about, because the data model is quite simple; that's the case here too. The problem is that every time I mostly have to relearn how to use it, especially since, even though SQLAlchemy has evidently stayed quite backwards compatible, it has also evolved.

Version 2.0 has a very different interface. With the old version, if you wanted to fetch an object, a row from the database, you did something like this:

image = session.query(Image).filter_by(name=filename)[0]

But with the new version, they have exposed a thin abstraction of the SQL language; the previous example becomes:

image = session.scalars(select(Image).where(Image.name == filename)).first()

I love the magick1 they are using to convert that expression into something they can later render as SQL: the class attributes are special objects that implement __eq__() to capture the value instead of comparing against it.
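SQLAlchemy's real machinery is much more elaborate, but the core trick fits in a few lines. This is a toy sketch; every name in it is invented for illustration:

```python
# A comparison that, instead of evaluating to a bool, records itself for
# later rendering as SQL. This is the essence of the expression-capture trick.
class BinaryExpression:
    def __init__(self, column, op, value):
        self.column, self.op, self.value = column, op, value

    def to_sql(self):
        # Render the captured comparison as a parametrized SQL fragment.
        return f"{self.column} {self.op} :param", {"param": self.value}

class Column:
    def __init__(self, name):
        self.name = name

    def __eq__(self, other):
        # Don't compare; capture the comparison instead.
        return BinaryExpression(self.name, "=", other)

class Image:
    name = Column("name")

expr = Image.name == "photo.jpg"   # a BinaryExpression, not a bool
print(expr.to_sql())               # ('name = :param', {'param': 'photo.jpg'})
```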

It's like writing SQL in Python, right? And in some ways it makes a lot of sense, because one of the usual complaints about ORMs is that, at some point, things that are easy in SQL become too convoluted in the ORM. It's like bending over backwards just to scratch your left ear with your right arm, passing under your left leg.

So, putting a slight veneer of Python on top of SQL kind of makes sense, and I like that someone has explored that idea to the point that it has become the new way of doing things. By that I mean that I guess the old API still works (all the code I have written so far still works), but the new way looks better.

Another thing ORMs, and SQLAlchemy in particular, tend to do is to have two levels of data returned. On one level you get objects: you fetch stuff from the database and what you get back are objects in terms of your application model, not in terms of your tables. This is amazing because you get one-to-many and many-to-many relationships that feel natural: say a person has several addresses; you just do person.addresses.append(address) and it goes into the database. You don't have to think about anything, you just add stuff and it ends up in the database. With a many-to-many relationship you can go from photos to tags and from tags to photos, completely forgetting about the intermediary table: you fetch a tag, ask for all its photos, and it just gives you the photos. It just works and it's fantastic. Then there's the other API, where you don't get objects, you just get data from the database: rows, tuples with data inside, with no structure around them.

What I did on Friday night was mostly to look at the responses from Mastodon servers, see what they look like, and try to build a data model based on that. I'm not going to model accounts or toots completely, because they have way more information than I want to display in my client, but that's all I did on Friday night.
Saturday morning I started writing the model in SQLAlchemy, and it was just fine: mapping JSON fields to tables and columns, and then to classes, objects and attributes. I started writing all the code and everything was kind of obvious. But at some point I hit a wall: I was fetching accounts based on an ID, and instead of returning objects it was returning tuples. The idea was: I have a new toot, tooted by some account; if I don't know the account, I create a new one, then create the toot with this new account and push it into a list. But when the account already existed and I was getting a tuple instead of an object, saying toot.author is this "object" was actually assigning just the ID as a string, because that's all the ORM was giving me. And of course later the ORM says: no, this doesn't work, I expect an object and you're giving me just an ID. I spent like 24 hours on that; OK, not 24 hours, from Saturday afternoon to this morning, maybe like 10 hours. I asked everywhere and didn't get a good answer, so I just switched to raw sqlite3. I just had to do it.

Before, my workflow was: if I needed to add a new column to a table, I only had to add a new attribute to the class, then destroy the database and create it again. One of the good decisions I made at the beginning was to store all the original information raw: just take the JSON and store it in the database. That means that if I make any changes to the data structures I use later, I can just read everything back, create the new objects, and save them again. This is something I can use to replace database structure migrations.
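The regenerate-from-raw idea can be sketched like this; table and field names are invented for illustration, not my actual schema:

```python
import json
import sqlite3

db = sqlite3.connect(":memory:")
# One table keeps the server responses verbatim; the other is derived from it.
db.execute("CREATE TABLE raw_toots (id INTEGER PRIMARY KEY, payload TEXT)")
db.execute("CREATE TABLE toots (id INTEGER PRIMARY KEY, author TEXT, text TEXT)")

# Store whatever the server sent, untouched.
payload = {"id": 1, "account": {"acct": "mdione"}, "content": "hello"}
db.execute("INSERT INTO raw_toots VALUES (?, ?)", (1, json.dumps(payload)))

def rebuild_derived_tables(db):
    # The "migration": nuke the secondary table, regenerate it from raw data.
    db.execute("DELETE FROM toots")
    for toot_id, raw in db.execute("SELECT id, payload FROM raw_toots").fetchall():
        data = json.loads(raw)
        db.execute("INSERT INTO toots VALUES (?, ?, ?)",
                   (toot_id, data["account"]["acct"], data["content"]))

rebuild_derived_tables(db)
print(db.execute("SELECT author, text FROM toots").fetchall())
# [('mdione', 'hello')]
```

Change the derived schema, tweak the extraction, rerun the rebuild; no migration tooling needed.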
I just nuke all the secondary tables and recreate all the data from scratch, from the original data, so I can do these migrations very easily now. With SQLAlchemy I could just drop the tables, create everything again, recreate all the objects, and that worked. But now I don't have that anymore, so what do I do? Well, my workflow now includes SQLite browser, a GUI application for manipulating SQLite files. I had to add a new column to the toots table, so I did just that in the GUI. What I'm going to do at some point is make a dump of the structure and keep it in a SQL file; if I ever have to do this migration again, which I will at some point, I'll just execute that file whenever I have to create the tables from scratch, and that's it.

I really wish SQLAlchemy had been just transparent for me. But I guess that's exactly the criticism ORMs usually get: 90% of the work is really transparent, but at some point it's no longer so and you start doing weird shit just to accomplish what you want. What I'm going to do in the future I'm not really sure; I will see how this raw SQLite thing works. This project is rather simple. I have very few tables: two tables for the original raw content, one for the actual raw content and one for the raw toot content. The thing is, when you boost another toot, you basically generate a toot that has another toot nested inside it.
So the original raw content includes all that information, but for my client the most important thing is the original toot itself. That's why I have another table that has only the original toots, not the boosts, and the client will just work from that. Then I have toots, tooters, and tags so far. I'm not really sure I will do tables for bookmarks and filters and all that, because those things usually live on the server, in your instance, and I should just query them all the time. I'm also thinking of having an offline mode where you can do stuff, and if you can't send it to the instance immediately, it just gets queued for later, for when you have (or want to use) a connection. So that's it, a very, very simple data model. If this goes right but later I have a more complex data model, I might try other ORMs, especially simpler ones that are less magic, a little more transparent. We'll see when the time comes.
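Since the title promises dataclasses, here's roughly the shape of the ORM-less data layer: a dataclass per table plus hand-written SQL. Again, all names are invented for illustration:

```python
import sqlite3
from dataclasses import dataclass

# One plain dataclass per table; no mapper, no session, no magic.
@dataclass
class Toot:
    id: int
    author: str
    text: str

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE toots (id INTEGER PRIMARY KEY, author TEXT, text TEXT)")
db.execute("INSERT INTO toots VALUES (?, ?, ?)", (1, "mdione", "hello"))

def get_toot(db, toot_id):
    # Hand-written SQL instead of an ORM query; map the row to a dataclass.
    row = db.execute("SELECT id, author, text FROM toots WHERE id = ?",
                     (toot_id,)).fetchone()
    return Toot(*row) if row else None

print(get_toot(db, 1))   # Toot(id=1, author='mdione', text='hello')
```

You lose the relationship magic, but what you get back is always an object you defined, never a surprise tuple.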


  1. Typo intended. It's not real magic, just a clever data model, but sometimes indistinguishable from black magic :)