Marble, OSM and monav as in-car navigation for Maemo

Ever since it came out in November 2009 I wanted a Nokia N900. Many reasons made me buy one, and some reasons made me wait until this month. Maemo, its OS, is almost exactly what I wanted: a real GNU/busybox/Linux. I can do a lot of stuff with it because it's, I think, the closest thing to a mini computer with an integrated GPS, camera, cellular modem and WiFi, and it can act as either a USB master or a slave. I don't know about the N950 or the N9, but this really is a tinkerer's dream.

So far I haven't done much with it; most of my personal time is being invested in something else, but I've been using it mainly as an in-car computer and a WiFi-to-LAN router. The latter will be the subject of another post; I want to concentrate on the former in this one.

As an in-car computer, right now I want two things: navigation and an audio player. The audio player is done; so far the media player that comes in the... phone? computer? I'll say computer. The media player that comes with the computer is enough for me. Maybe sometime I'll do a satyr skin for Maemo and port the beast to it, just for the heck of it.

It also has Ovi Maps, the default app for navigation, which from what I've heard is very good, but it needs a data connection to work. I don't want to buy a data plan yet, and there are also places with no good data coverage. You might not even get enough signal to make a call or send an SMS. So I want something that can cache maps locally on the computer.

The answer is actually somewhat complex: Marble, monav and OpenStreetMap. Marble is the KDE map viewing application. It can show several types of maps, even of several celestial bodies. It can talk to a GPS to record and save tracks, and to monav, a routing application that uses OSM data. Of course, Marble can use OSM's tiles. And the good thing about both is that they can use data cached on the computer, which means no data link needed, at least not all the time.

Installing them on Maemo is as simple as selecting them in the App manager (although I suggest installing and using Faster Application Manager, which is not so far from a mix between a smartphone store and, let's say, aptitude). Both are in the extras repo.

Once they're installed, the next logical step is to download monav's data. For that you open Marble; from its menu you select «Routing», then «Configure». There you can select Monav, then «Configure» again, and download routing maps per country. Download all the ones you want, but remember they take space and some time to uncompress.

As for the maps themselves, you can select the tileset you want (so far the common Mapnik and Osmarender ones are available). Marble will start downloading and caching tiles from OSM's servers. You can even use «Download Region...» to download the tiles for a certain region or route over a range of zoom levels. Be careful with this, because OSM's servers will ban you for a little while if you pull too much data.

So now I have an almost complete offline map and routing solution. I still have to investigate how to add voices (so far it's mute). I also found a couple of problems with this setup. The simplest to solve are UI ones: the navigation instructions have too much text and too-small icons, and while the navigation widget is on, you lose the zoom controls. Luckily my dock leaves the volume/zoom hardware buttons free, so I can zoom with them (who needs multitouch :).

But the most prominent one is the map itself. Both Mapnik and Osmarender are too confusing for in-car navigation: they have too much detail, and minor streets are not clearly distinguished from the blocks themselves. Mapnik in particular has very low contrast, which also makes it hard to read in plain daylight.

Enter CloudMade. I still don't quite get what CloudMade is for exactly, but one thing I know: it makes it very easy to create tilesets from OSM data. First I browsed the styles already on the site and found one called «Almost Ordinance Survey 25k». I took it and modified it heavily, naming the result «daylight in-car navigation».

Then I found out that it wouldn't be easy to use the tiles with Marble, but at least I managed to find the right settings. The next chapter will be about running Osmarender locally.

Compiling KDE SC 4.7 packages for Debian sid i386

On November 26, the Qt/KDE Debian team (qkd from now on) released an experimental packaging of KDE SC 4.7. This means a substantial and much-awaited upgrade from the current version available in sid, 4.6.5. From what I've been told, it has taken so long because there have been a lot of changes both in source code structure (KDE recently restructured its source modularization) and in the packaging itself, and even with recent additions to the group, the developer resources are still scarce. In any case, thanks to them for getting this version out; I really appreciate their hard work.

But because of this lack of resources, they only put out packages for the amd64 architecture, and I'm running i386 at home, so I decided to try to compile it myself. This is the log of the attempt.

First, I added deb-src http://qt-kde.debian.net/debian experimental-snapshots main to my sources.list and got the code for kde4libs, the package I think is the root of the whole repo[1][2]:

mdione@mustang:~/src/system/debian$ apt-get source kde4libs
Reading package lists... Done
Building dependency tree
Reading state information... Done
NOTICE: 'kde4libs' packaging is maintained in the 'Git' version control system at:
git://git.debian.org/pkg-kde/kde-sc/kde4libs.git
Need to get 12.4 MB of source archives.
Get:1 http://qt-kde.debian.net/debian/ experimental-snapshots/main kde4libs 4:4.7.2-0r3 (dsc) [4,883 B]
Get:2 http://qt-kde.debian.net/debian/ experimental-snapshots/main kde4libs 4:4.7.2-0r3 (tar) [12.1 MB]
Get:3 http://qt-kde.debian.net/debian/ experimental-snapshots/main kde4libs 4:4.7.2-0r3 (diff) [334 kB]
Fetched 12.4 MB in 1min 54s (109 kB/s)
gpgv: Signature made Sat 22 Oct 2011 02:29:45 AM CEST using RSA key ID 73A85F31
gpgv: Can't check signature: public key not found
dpkg-source: warning: failed to verify signature on ./kde4libs_4.7.2-0r3.dsc
dpkg-source: info: extracting kde4libs in kde4libs-4.7.2
dpkg-source: info: unpacking kde4libs_4.7.2.orig.tar.bz2
dpkg-source: info: unpacking kde4libs_4.7.2-0r3.debian.tar.gz
dpkg-source: info: applying kconf_update_migrate_from_kde3_icon_theme.diff
dpkg-source: info: applying add_debian_build_type.diff
dpkg-source: info: applying disable_usr_lib_install_rpath.diff
dpkg-source: info: applying make_libkdeinit4_private.diff
dpkg-source: info: applying default_kde4_xdg_menu_prefix.diff
dpkg-source: info: applying qt4_designer_plugins_path.diff
dpkg-source: info: applying hardcode_ptm_device.diff
dpkg-source: info: applying kfreebsd_support.diff
dpkg-source: info: applying debian_menu.diff
dpkg-source: info: applying findservicebydesktoppath_try_realfilepath.diff
dpkg-source: info: applying findqt4_optional_x11_pthread.diff
dpkg-source: info: applying use_dejavu_as_default_font.diff
dpkg-source: info: applying hack_in_etc_kde4_in_kstandarddirs.diff
dpkg-source: info: applying ld_exclude_libs_qtuitools.diff
dpkg-source: info: applying konsole_kfreebsd_fix.diff
dpkg-source: info: applying hurd_support.diff
dpkg-source: info: applying kfileshare_kdesu_fileshareset.diff
dpkg-source: info: applying relax_plugin_kde_version_check.diff
dpkg-source: info: applying add_dlrestrictions_support.diff
dpkg-source: info: applying findpythonlibrary_layout_deb_on_debian.diff
dpkg-source: info: applying ktar_header_checksum_fix.diff
dpkg-source: info: applying ktar_longlink_length_in_bytes.diff
dpkg-source: info: applying nepomuk_unicode.diff

Nice, now get the build-deps:

mdione@mustang:~/src/system/debian$ sudo apt-get build-dep kde4libs
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Build-Depends dependency for kde4libs cannot be satisfied because candidate version of package shared-desktop-ontologies can't satisfy version requirements

Dang, I have an old shared-desktop-ontologies (v. 0.6.x in sid), and peeking into qkd's repo I find no newer version of it. The answer must be in experimental, so I add deb http://ftp.nl.debian.org/debian/ experimental main to my sources.list and run apt-get update. Now it must be just a matter of:

mdione@mustang:~/src/system/debian$ sudo apt-get install -t experimental shared-desktop-ontologies
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages will be upgraded:
  shared-desktop-ontologies
1 upgraded, 0 newly installed, 0 to remove and 103 not upgraded.
Need to get 129 kB of archives.
After this operation, 4,096 B of additional disk space will be used.
Get:1 http://ftp.nl.debian.org/debian/ experimental/main shared-desktop-ontologies all 0.8.0-1 [129 kB]
Fetched 129 kB in 1s (92.7 kB/s)
Retrieving bug reports... Done
Parsing Found/Fixed information... Done
Reading changelogs... Done
apt-listchanges: Mailing root: apt-listchanges: changelogs for mustang
(Reading database ... 218251 files and directories currently installed.)
Preparing to replace shared-desktop-ontologies 0.6.0-1 (using .../shared-desktop-ontologies_0.8.0-1_all.deb) ...
Unpacking replacement shared-desktop-ontologies ...
Setting up shared-desktop-ontologies (0.8.0-1) ...

And I try again:

mdione@mustang:~/src/system/debian$ sudo apt-get build-dep kde4libs
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following NEW packages will be installed:
  hspell libacl1-dev libattica-dev libattr1-dev libdbusmenu-qt-dev libdlrestrictions-dev libpolkit-qt-1-dev libqca2-dev libstreamanalyzer-dev libstreams-dev
  libutempter-dev pkg-kde-tools
0 upgraded, 12 newly installed, 0 to remove and 8 not upgraded.
Need to get 1,170 kB of archives.
After this operation, 3,551 kB of additional disk space will be used.
Do you want to continue [Y/n]?
[...]

We're set! Now to fire up the compilation itself. The method I use might not be the best one, but I remember it by heart and it works for me :)

mdione@mustang:~/src/system/debian$ cd kde4libs-4.7.2/
mdione@mustang:~/src/system/debian/kde4libs-4.7.2$ nice -n 19 fakeroot debian/rules binary

Notice I use nice so as not to hog my computer too much, and fakeroot. I don't remember why I must use it :| , but I remember I must. Later, asking in #debian-devel, I was pointed to dpkg-buildpackage, which takes a -j# option that makes the package compile with # processes in parallel (it only works if the build is make-based, as it is in this case), so from now on I'll use that.

The resulting packages are these:

kdelibs5-data_4.7.2-0r3_all.deb
kdelibs5-dbg_4.7.2-0r3_i386.deb
kdelibs5-dev_4.7.2-0r3_i386.deb
kdelibs5-plugins_4.7.2-0r3_i386.deb
kdelibs-bin_4.7.2-0r3_i386.deb
kdoctools_4.7.2-0r3_i386.deb
libkcmutils4_4.7.2-0r3_i386.deb
libkde3support4_4.7.2-0r3_i386.deb
libkdeclarative5_4.7.2-0r3_i386.deb
libkdecore5_4.7.2-0r3_i386.deb
libkdesu5_4.7.2-0r3_i386.deb
libkdeui5_4.7.2-0r3_i386.deb
libkdewebkit5_4.7.2-0r3_i386.deb
libkdnssd4_4.7.2-0r3_i386.deb
libkemoticons4_4.7.2-0r3_i386.deb
libkfile4_4.7.2-0r3_i386.deb
libkhtml5_4.7.2-0r3_i386.deb
libkidletime4_4.7.2-0r3_i386.deb
libkimproxy4_4.7.2-0r3_i386.deb
libkio5_4.7.2-0r3_i386.deb
libkjsapi4_4.7.2-0r3_i386.deb
libkjsembed4_4.7.2-0r3_i386.deb
libkmediaplayer4_4.7.2-0r3_i386.deb
libknewstuff2-4_4.7.2-0r3_i386.deb
libknewstuff3-4_4.7.2-0r3_i386.deb
libknotifyconfig4_4.7.2-0r3_i386.deb
libkntlm4_4.7.2-0r3_i386.deb
libkparts4_4.7.2-0r3_i386.deb
libkprintutils4_4.7.2-0r3_i386.deb
libkpty4_4.7.2-0r3_i386.deb
libkrosscore4_4.7.2-0r3_i386.deb
libkrossui4_4.7.2-0r3_i386.deb
libktexteditor4_4.7.2-0r3_i386.deb
libkunitconversion4_4.7.2-0r3_i386.deb
libkutils4_4.7.2-0r3_i386.deb
libnepomuk4_4.7.2-0r3_i386.deb
libnepomukquery4a_4.7.2-0r3_i386.deb
libnepomukutils4_4.7.2-0r3_i386.deb
libplasma3_4.7.2-0r3_i386.deb
libsolid4_4.7.2-0r3_i386.deb
libthreadweaver4_4.7.2-0r3_i386.deb

Next package is kdebase. Again, we find the missing deps with:

mdione@mustang:~/src/system/debian$ sudo apt-get build-dep kdebase
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Build-Depends dependency for kdebase cannot be satisfied because candidate version of package kde-sc-dev-latest can't satisfy version requirements

The message is similar to the one about shared-desktop-ontologies, but I don't have even the slightest idea which one of the packages above is responsible for it, so I'll just install everything, as most probably they're build-deps for most of the remaining packages (it's kdelibs, after all):

mdione@mustang:~/src/system/debian$ sudo dpkg -i *.deb
(Reading database ... 218481 files and directories currently installed.)
[...]

After that I try again, but still the same thing. Poking around I find that a package with that name does exist (it's the first time I've met it, enchanté), so I get the source in the usual way:

mdione@mustang:~/src/system/debian$ apt-get source meta-kde
Reading package lists... Done
Building dependency tree
Reading state information... Done
Need to get 14.0 kB of source archives.
Get:1 http://qt-kde.debian.net/debian/ experimental-snapshots/main meta-kde 5:71~pre15 (dsc) [2,098 B]
Get:2 http://qt-kde.debian.net/debian/ experimental-snapshots/main meta-kde 5:71~pre15 (tar) [11.9 kB]
Fetched 14.0 kB in 0s (49.3 kB/s)
gpgv: Signature made Mon 21 Nov 2011 09:38:12 PM CET using RSA key ID 73A85F31
gpgv: Can't check signature: public key not found
dpkg-source: warning: failed to verify signature on ./meta-kde_71~pre15.dsc
dpkg-source: info: extracting meta-kde in meta-kde-71~pre15
dpkg-source: info: unpacking meta-kde_71~pre15.tar.gz

And to compile:

mdione@mustang:~/src/system/debian$ ( cd meta-kde-71~pre15 && nice -n 19 dpkg-buildpackage -j3 -b -us -uc )
dpkg-buildpackage: source package meta-kde
dpkg-buildpackage: source version 5:71~pre15
dpkg-buildpackage: source changed by Debian Qt/KDE Maintainers <debian-qt-kde@lists.debian.org>
dpkg-buildpackage: host architecture i386
[...]

That generates several packages, but I just install the one I want:

mdione@mustang:~/src/system/debian$ sudo dpkg -i kde-sc-dev-latest_4.7.2+5.71~pre15_all.deb
Selecting previously unselected package kde-sc-dev-latest.
(Reading database ... 218469 files and directories currently installed.)
Unpacking kde-sc-dev-latest (from kde-sc-dev-latest_4.7.2+5.71~pre15_all.deb) ...
Setting up kde-sc-dev-latest (4:4.7.2+5.71~pre15) ...

Now the build-deps again:

mdione@mustang:~/src/system/debian$ sudo apt-get build-dep kdebase
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages will be REMOVED:
  libkatepartinterfaces4
The following NEW packages will be installed:
  libqimageblitz-dev libtidy-dev
0 upgraded, 2 newly installed, 1 to remove and 8 not upgraded.
Need to get 205 kB of archives.
After this operation, 1,932 kB disk space will be freed.
Do you want to continue [Y/n]?
[...]

You probably won't notice (as I didn't) that it has removed a package, libkatepartinterfaces4. This is because my system has the old version of it and there seems to be some kind of conflict somewhere. I just cross my fingers that it won't break much[3].

Now get the sources and compile:

mdione@mustang:~/src/system/debian$ apt-get source kdebase
Reading package lists... Done
Building dependency tree
Reading state information... Done
NOTICE: 'kdebase' packaging is maintained in the 'Git' version control system at:
git://git.debian.org/pkg-kde/kde-sc/kdebase.git
Need to get 2,721 kB of source archives.
Get:1 http://qt-kde.debian.net/debian/ experimental-snapshots/main kdebase 4:4.7.1-0r2 (dsc) [3,217 B]
Get:2 http://qt-kde.debian.net/debian/ experimental-snapshots/main kdebase 4:4.7.1-0r2 (tar) [2,685 kB]
Get:3 http://qt-kde.debian.net/debian/ experimental-snapshots/main kdebase 4:4.7.1-0r2 (diff) [33.1 kB]
Fetched 2,721 kB in 26s (105 kB/s)
gpgv: Signature made Tue 27 Sep 2011 09:43:53 PM CEST using RSA key ID 73A85F31
gpgv: Can't check signature: public key not found
dpkg-source: warning: failed to verify signature on ./kdebase_4.7.1-0r2.dsc
dpkg-source: info: extracting kdebase in kdebase-4.7.1
dpkg-source: info: unpacking kdebase_4.7.1.orig.tar.bz2
dpkg-source: info: unpacking kdebase_4.7.1-0r2.debian.tar.gz
dpkg-source: info: applying enable_debianabimanager.diff
dpkg-source: info: applying enable_dlrestrictions.diff

mdione@mustang:~/src/system/debian$ ( cd kdebase-4.7.1 && nice -n 19 dpkg-buildpackage -j3 -b -us -uc )
[...]

Installing the resulting packages is another story, as some of them depend on kde-runtime, so it's better if you compile kde-runtime first[4]. Trying to install its build-deps, you realize that you actually need to compile soprano first.

You get the idea. Now I'll just show the generated packages I installed by hand after each compilation, but only those that suited my needs:

soprano:

mdione@mustang:~/src/system/debian$ sudo dpkg -i \
soprano-daemon_2.7.3+dfsg.1-0r0_i386.deb \
libsoprano4_2.7.3+dfsg.1-0r0_i386.deb \
libsoprano-dev_2.7.3+dfsg.1-0r0_i386.deb

kde-runtime:

mdione@mustang:~/src/system/debian$ sudo dpkg -i \
kdebase-runtime_4.7.2-0r3_all.deb \
khelpcenter4_4.7.2-0r3_i386.deb \
kde-config-phonon-xine_4.7.2-0r3_i386.deb \
kde-runtime-data_4.7.2-0r3_all.deb \
kde-runtime_4.7.2-0r3_i386.deb \
plasma-scriptengine-javascript_4.7.2-0r3_i386.deb

While installing kde-runtime-data I had to remove kdebase-runtime-data:

[...]
Selecting previously unselected package kde-runtime-data.
dpkg: regarding kde-runtime-data_4.7.2-0r3_all.deb containing kde-runtime-data:
 kde-runtime-data breaks kdebase-runtime-data (<< 4:4.7.2)
  kdebase-runtime-data (version 4:4.6.5-1) is present and installed.
[...]

mdione@mustang:~/src/system/debian$ sudo dpkg --remove kdebase-runtime-data
(Reading database ... 218612 files and directories currently installed.)
Removing kdebase-runtime-data ...
[...]

kdebase:

mdione@mustang:~/src/system/debian$ sudo dpkg -i \
dolphin_4.7.1-0r2_i386.deb \
kdebase-bin_4.7.1-0r2_i386.deb \
kdebase-data_4.7.1-0r2_all.deb \
kdepasswd_4.7.1-0r2_i386.deb \
konq-plugins_4.7.1-0r2_i386.deb \
konqueror_4.7.1-0r2_i386.deb \
konqueror-nsplugins_4.7.1-0r2_i386.deb \
libkonq5abi1_4.7.1-0r2_i386.deb \
libkonq-common_4.7.1-0r2_i386.deb \
plasma-widget-folderview_4.7.1-0r2_i386.deb

While trying to get akonadi I got this error message:

Failed to fetch http://qt-kde.debian.net/debian/pool/main/a/akonadi/akonadi_1.6.2-0r1.dsc  Hash Sum mismatch

Checking the MD5, SHA1 and SHA256 checksums and comparing to the ones in the .dsc file revealed no difference. lisandro told me to use dget instead:

mdione@mustang:~/src/system/debian$ dget http://qt-kde.debian.net/debian/pool/main/a/akonadi/akonadi_1.6.2-0r1.dsc
dget: retrieving http://qt-kde.debian.net/debian/pool/main/a/akonadi/akonadi_1.6.2-0r1.dsc
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  2620  100  2620    0     0   6842      0 --:--:-- --:--:-- --:--:--  8161
dget: using existing akonadi_1.6.2.orig.tar.bz2
dget: using existing akonadi_1.6.2-0r1.debian.tar.gz
akonadi_1.6.2-0r1.dsc:
dscverify: akonadi_1.6.2-0r1.dsc failed signature check:
gpg: Signature made Thu 03 Nov 2011 08:55:52 AM CET using RSA key ID 73A85F31
gpg: Can't check signature: public key not found
Validation FAILED!!

Unpacking by hand:

mdione@mustang:~/src/system/debian$ dpkg-source -x akonadi_1.6.2-0r1.dsc
gpgv: Signature made Thu 03 Nov 2011 08:55:52 AM CET using RSA key ID 73A85F31
gpgv: Can't check signature: public key not found
dpkg-source: warning: failed to verify signature on ./akonadi_1.6.2-0r1.dsc
dpkg-source: info: extracting akonadi in akonadi-1.6.2
dpkg-source: info: unpacking akonadi_1.6.2.orig.tar.bz2
dpkg-source: info: unpacking akonadi_1.6.2-0r1.debian.tar.gz
dpkg-source: info: applying x11_not_required.diff

akonadi:

mdione@mustang:~/src/system/debian$ sudo dpkg -i \
akonadi-server_1.6.2-0r1_i386.deb \
akonadi-backend-sqlite_1.6.2-0r1_i386.deb \
libakonadi-dev_1.6.2-0r1_i386.deb \
libakonadiprotocolinternals1_1.6.2-0r1_i386.deb

Here I cheated: I really didn't want to spend 10 minutes of iterative attempts to find the minimal set of lib packages to install:

libkdepim:

mdione@mustang:~/src/system/debian$ sudo dpkg -i \
kdepimlibs5-dev_4.7.2-0r1_i386.deb \
kdepimlibs-kio-plugins_4.7.2-0r1_i386.deb \
lib*.deb
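
A lazier alternative I could have used here: ask each .deb for its declared dependencies instead of iterating over dpkg -i failures. This is only a sketch; deb_depends is my own name for a hypothetical helper, not a real tool:

```shell
# Hypothetical helper: print each package's declared Depends field,
# so you can map out which lib packages are actually needed.
deb_depends() {
    for deb in "$@"; do
        printf '%s: %s\n' "$deb" "$(dpkg-deb -f "$deb" Depends)"
    done
}
# usage: deb_depends lib*.deb
```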

kde-wallpapers:

mdione@mustang:~/src/system/debian$ sudo dpkg -i \
kde-wallpapers-default_4.7.2-0r0_all.deb

kde-workspace:

sudo dpkg -i --auto-deconfigure \
systemsettings_4.7.2-0r7_i386.deb \
plasma-desktop_4.7.2-0r7_i386.deb \
plasma-scriptengine-python_4.7.2-0r7_all.deb \
plasma-widgets-workspace_4.7.2-0r7_i386.deb \
plasma-dataengines-workspace_4.7.2-0r7_i386.deb \
klipper_4.7.2-0r7_i386.deb \
kdm_4.7.2-0r7_i386.deb \
kde-workspace-bin_4.7.2-0r7_i386.deb \
kde-workspace-data_4.7.2-0r7_all.deb \
kde-window-manager_4.7.2-0r7_i386.deb \
kdebase-workspace_4.7.2-0r7_all.deb \
kdebase-workspace-bin_4.7.2-0r7_all.deb \
libkworkspace4_4.7.2-0r7_i386.deb \
kde-workspace-kgreet-plugins_4.7.2-0r7_i386.deb \
kde-style-oxygen_4.7.2-0r7_i386.deb \
libkdecorations4_4.7.2-0r7_i386.deb \
libkephal4abi1_4.7.2-0r7_i386.deb \
libkwineffects1abi2_4.7.2-0r7_i386.deb \
libkworkspace4_4.7.2-0r7_i386.deb \
libplasmagenericshell4_4.7.2-0r7_i386.deb \
libtaskmanager4abi2_4.7.2-0r7_i386.deb \
libplasmaclock4abi2_4.7.2-0r7_i386.deb \
libksgrd4_4.7.2-0r7_i386.deb \
libplasma-geolocation-interface4_4.7.2-0r7_i386.deb \
libsolidcontrol4abi2_4.7.2-0r7_i386.deb \
libprocesscore4abi1_4.7.2-0r7_i386.deb \
libweather-ion6_4.7.2-0r7_i386.deb \
libkscreensaver5_4.7.2-0r7_i386.deb \
libprocessui4a_4.7.2-0r7_i386.deb \
libsolidcontrolifaces4abi2_4.7.2-0r7_i386.deb \
kde-workspace_4.7.2-0r7_all.deb \
ksysguard_4.7.2-0r7_i386.deb \
freespacenotifier_4.7.2-0r7_i386.deb \
libksignalplotter4_4.7.2-0r7_i386.deb \
ksysguardd_4.7.2-0r7_i386.deb

I'll just finish this post here; the rest is just more of the same. The final list of packages I compiled, in the right order, is:

kde4libs
meta-kde
soprano
kde-runtime
kdebase
akonadi
prison
kdepimlibs
kde-wallpapers
kde-workspace

All in all, this took some 10 hours of finding deps, compiling and installing, plus 6.4 GiB of disk space across original .tar.gz files, compilation dirs and generated packages.
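
The whole per-package cycle I repeated by hand can be sketched as a small script. The commands are the same ones used throughout the post; build_kde_pkg is my own name for the wrapper, and this ignores the by-hand dpkg -i steps between builds:

```shell
# The source packages, in build order, and the cycle applied to each one.
PKG_ORDER="kde4libs meta-kde soprano kde-runtime kdebase \
akonadi prison kdepimlibs kde-wallpapers kde-workspace"

build_kde_pkg() {
    pkg="$1"
    sudo apt-get build-dep -y "$pkg" &&
    apt-get source "$pkg" &&
    ( cd "$pkg"-* && nice -n 19 dpkg-buildpackage -j3 -b -us -uc )
}
# then: for pkg in $PKG_ORDER; do build_kde_pkg "$pkg" || break; done
```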


  1. It's nice that I don't need to compile libqt4*; those take ages[5]. 

  2. Later lisandro from qkd pointed me to the dependency graph that confirms my guess. 

  3. I did all this while running a KDE SC 4.6.5 session, even writing this post in kate. Thanks to Linux and Debian not much actually broke; I only lost the ability to browse with konqueror after I installed the first batch of packages, because KIO could not instantiate an HTTP/HTTPS ioslave anymore. 

  4. Note that the dependency graph two notes above is actually a build dependency graph. 

  5. Not that this compilation was fast: kdelibs without parallelism took some 3 hours, and kdepimlibs and kde-workspace are two huge beasts. 

satyr on GitHub

After several months thinking about it, and just two requests, I finally decided to publish satyr's code. I decided to use GitHub because I had already switched satyr from hg to git, mainly to test and understand it. I think I could live with hg, although branch management in git seems to have been given more thought and a better implementation.

So, without further ado: satyr in github

Remember, it's still test software, by no means polished or ready for human consumption, and with very low development force. Still, I think it has some nice features, like interchangeable skins and a well-defined backend, D-Bus support, quick tag editing, reasonable collection management, and, thanks to Phonon, almost-gapless playback and things like «stop after playing the current file» (but not «after any given file» yet).

In Debian Sid it mostly works only with the GStreamer backend; I haven't tried the xine one and I know VLC does not emit a signal needed for queueing the next song, so you have to press «next» after each song. AFAIK this is fixed upstream.

Recovering partitions with pen and paper

Note: this is a translation of an old post. I decided to translate it because now I'm looking for a SysAdmin position (tell your friends!) and I would like this post to show how I work.

Last Saturday I received an email from one of the guys from work with the subject «urgennnnnnnnt: heeeeeeeeeelp» [sic]. He said he was idling on Friday night when his machine stopped emitting sound through the soundcard and then behaved erratically. When he tried rebooting it, it didn't boot anymore: «It says something about disk not bootable...».

Monday morning I go to work and go see the machine. Sure enough, it says something about «disk not bootable». I boot from a USB key with GRML and find that the disk has no partitions.

Panic.

The guy is doing a PostDoc in something astronomical (literally) and all his work is on that machine. No backups, as usual, so I prepare myself to rescue the partitions.

On that same USB key I have a system with parted. I boot it and try parted's rescue tool. Nothing. I ask the guy how the disk was partitioned, etc. He tells me he only installed Kubuntu clicking 'Next'. Kubuntu by default creates a swap partition and an ext3 partition for / and that's it, which made what was coming relatively easy.

I reboot into GRML and use hexdump -C /dev/sda | more to see the disk's contents. This is not the first time I've juggled partitions and MBRs, but the last time I did it I used a tool that is now discontinued (DiskEdit, included in The Norton Utilities), which had special edit modes for MBRs, FATs and a lot of other useful things... in the MS universe.

First I confirm that, yes, the first sector is an MBR (at least it has the 0x55aa signature at the end), and that the whole partition table is empty, but in the second sector of the disk there seems to be a copy. I take pen and paper and write down what I find, but it turns out that not only do I have only half the data, the partition I thought I had found was also too small.

So I decide to look for the partition by hand. To do that I first needed to find out how the ext3 kernel code knows whether a partition is ext3 or not. I knew it would be some kind of magic signature, but I had no idea which. So I installed the sources for 2.6.29 on my laptop and started to look at ext3's code. After going around a lot, I ended up reading the code that is executed when you mount a filesystem of type ext3, where we can see that it uses a magic signature[3], and the structure of the ext3 superblock, where we can see the magic's offset is 0x38.

So the problem of finding an ext3 partition is reduced to the problem of finding 0x53ef (damn little endian) at offset 0x38 of some sector on the disk. Luckily more has a find tool, so I sit down to search for every occurrence of 53 ef, hoping that the address at the left ends in 30 and that they would be the 9th and 10th bytes in the line (damn 0-based offsets).
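
The manual search could also be automated. A sketch of the idea: find_ext3_magic is my own name for it, and the regex assumes hexdump -C's exact canonical output format (8-digit hex address, two spaces, two groups of 8 bytes):

```shell
# Look for the ext3 magic 0xEF53, stored little-endian as "53 ef" at
# offset 0x38 of the superblock's sector: in hexdump -C output that is
# bytes 9-10 of a line whose address ends in 30.
find_ext3_magic() {
    hexdump -C "$1" | grep -E '^[0-9a-f]+30  ([0-9a-f]{2} ){8} 53 ef'
}
# usage: find_ext3_magic /dev/sda
```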

A few 'next's later, I get my first candidate. It looks good, because I was also comparing my findings against a similar dump of my USB key (which I had formatted as ext2; luckily ext2 and ext3 share those structures), and I also spot something that looks like a UUID.

This candidate's address is 0x80731038. I subtract 0x38 and get 0x80731000, a nice round number for a superblock. Converted to decimal that's 2,155,024,384, some 2 GiB from the disk's beginning. Looks really good! The swap partition could be before the root one, and could have that size.

I use fdisk /dev/sda to see the (still empty) partition table. It says there are 16,065 sectors per cylinder, times 512 bytes per sector, equals 8,225,280 bytes per cylinder. Almost all distros (actually, I think all of them) partition disks at cylinder boundaries[1], so if the sector I found is right at the beginning of a cylinder...

I divide 2,155,024,384 / 8,225,280 = ...

(suspense pause)[2]

262.000124494...

Damn! I almost had it... Hmm, how much is the fractional part? (262.000124494 - 262) * 8,225,280 = ... 1024! Could it be that...?

I run strace debugfs -R show_super_stats /dev/sdb1 (the partition on my USB key) and I see that it actually seeks 1024 bytes into the partition before reading the superblock!
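
The arithmetic above can be re-checked in shell:

```shell
# 0x80731038 is where the magic was found; 0x38 before it is the
# candidate superblock, which sits 1024 bytes into its partition.
superblock=$((0x80731038 - 0x38))   # 2155024384
cyl=$((16065 * 512))                # 8225280 bytes per cylinder
echo "cylinder:  $((superblock / cyl))"   # 262
echo "remainder: $((superblock % cyl))"   # 1024
```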

This is it. With 262 in my head, I fire up fdisk /dev/sda and create two partitions: swap on cylinders 1-261 and root from cylinder 262 to the end. I save, cross my fingers and run debugfs -R show_super_stats /dev/sda1. It fails! What's wrong? I reboot and try again, just in case the kernel did not correctly re-read the partition table. It fails again. WTF?

Ah, duh, it's sda2, where is my head... OK, debugfs -R show_super_stats /dev/sda2... it works, the sonofabitch works! I can't believe it. I risk it: fsck -n /dev/sda2. «Filesystem is clean». Damn, I try harder: fsck -n -f /dev/sda2...

Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/sda2 etc etc...

It's fine! But the MBR doesn't have GRUB installed, so I do the usual GRUB reinstall process, reboot...

It boots like nothing happened, and finishes in a beautiful login. Satisfied, I pat myself on the back, pack my things and start my weekend[4].


  1. ... wasting some 8MiB between the MBR and the first partition. 

  2. The sharp ones reading this will notice that this cannot possibly give an integer. 

  3. Reiser magics are funny. Looks like he started the fad that now AdOlEsCeNtS use. 

  4. Update from 2023: yes, you got the math right. It took me a whole week of trial and error between other pressing tasks. 

Upgrade Debian lenny to squeeze

Today I decided to upgrade my home server (the one that serves this blog) from lenny to squeeze. Here is a 'log' of the experience.

My first mistake was the name: it is not squeezy, it's squeeze.

Second, the server was once also a minimal desktop, so I deinstalled a lot of desktop software to make the upgrade smaller and easier. I simply used my favorite package manipulation tool, dselect[1], and selected for purging all the optional and extra packages in the libs, python and perl sections. When the consequences were shown to me, I just marked as install the software I wanted. After that, ~450 packages were removed.

Following the release notes, and after checking the known upgrade issues, I did the first suggested step: a minimal upgrade.

mdione@cobra:~$ sudo apt-get upgrade
[...]
233 upgraded, 0 newly installed, 0 to remove and 142 not upgraded.
Need to get 67.2MB of archives.
After this operation, 11.5MB of additional disk space will be used.
[...]
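
For reference, the sequence from the release notes boils down to something like this. It's only a sketch: the wrappers are my own names for the two phases, and in my case the final dist-upgrade was replaced by a dselect session:

```shell
# The squeeze upgrade, schematically. In practice you run the
# apt-get commands directly, in this order.
minimal_upgrade() {
    apt-get update &&
    apt-get upgrade       # upgrades in place; installs and removes nothing
}
full_upgrade() {
    apt-get dist-upgrade  # the rest: new packages and removals allowed
}
# order: minimal_upgrade first, then full_upgrade (or dselect, in my case)
```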

I have apt-listbugs installed, so I got this question just before accepting the upgrade:

serious bugs of libslang2 (2.1.3-3 -> 2.2.2-4) <marked as done in some version>
 #615909 - Copyright file does not clearly state licence terms (Fixed: slang2/2.2.3-1)
serious bugs of deborphan (1.7.27 -> 1.7.28.3) <marked as done in some version>
 #618895 - orphaner enteres infinite loop on sparc (Fixed: deborphan/1.7.28.4)
critical bugs of initscripts (2.86.ds1-61 -> 2.88dsf-13.1) <unfixed>
 #612594 - On boot thw wait have no job to wait for, and fail into reboot.
serious bugs of libpcre3 (7.8-2 -> 8.02-1.1) <unfixed>
 #616660 - /usr/bin/pcretest must not be shipped in libpcre3
Summary:
 libpcre3(1 bug), libslang2(1 bug), deborphan(1 bug), initscripts(1 bug)
Are you sure you want to install/upgrade the above packages? [Y/n/?/...]

The critical bug for initscripts looked ugly, so I checked it in more depth. It seemed to affect usplash, which I don't use (no use for it in a headless server, right?), so I bit the bullet and continued. The bug's discussion said that the solution was to purge usplash anyway... The rest of the bugs were, pragmatically speaking, not interesting to me.

Then apt-listchanges showed me the unread entries in the NEWS.Debian.gz files of the upgraded packages, with no news that applied to my server. Interesting was the split of pam_cracklib into itself and pam_pwhistory: now you can test for the reuse of passwords without checking for dictionary attacks, in the strange case you'd want to do so, or the other way around. That also means that if you want both, you have to enable both.
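In PAM terms, enabling both would look something like these lines in /etc/pam.d/common-password (a hedged sketch, not my actual config; the module options are up to you):

```
# dictionary-based strength checks
password  requisite  pam_cracklib.so retry=3
# remember the last 5 passwords and refuse their reuse
password  required   pam_pwhistory.so remember=5
password  required   pam_unix.so obscure use_authtok sha512
```

Drop either of the first two lines and you get only the other check.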

Besides some conffile resolution (it would be nice to be able to resolve the diffs with meld or xxdiff), the upgrade went smoothly.

Upgrading the kernel was painless too. The kernel's NEWS included a note about PATA devices being renamed due to the new SCSI/PATA drivers, but I was aware of that because I had read about the upgrade issues first :) I just had to tell linux-base to please update the disk devices in my config files, which is the default anyways.

udev was a little bit more bumpy:

critical bugs of udev (0.125-7+lenny3 -> 164-3) <unfixed>
 #593083 - udev - system hangs at login screen
serious bugs of util-linux (2.13.1.1-1 -> 2.17.2-9) <unfixed>
 #613592 - /sbin/fdisk: Can't create at sector 63
 #613589 - /sbin/cfdisk: Bad Table error after fresh Squeeze install

The first one seemed quite manageable, and the other two were not interesting to me. The bullet just needed a little bit more squeezing, that's all (Hahahahaaa! I told a joke! I can do this crap too!2).

For the last step I used dselect again, as I love the way it presents dependency resolution. I took the opportunity to purge all the obsolete packages, and I got no dependency problems with that, which means the upgrade should be complete and smooth. This last step meant:

128 upgraded, 68 newly installed, 32 to remove and 0 not upgraded.
Need to get 96.5MB of archives.
After this operation, 33.1MB disk space will be freed.

Yeah, smooth indeed, except for these:

serious bugs of wget (1.11.4-2+lenny2 -> 1.12-2.1) <marked as done in some version>
 #614373 - wget: mixes dpatch and 3.0 (quilt) (Fixed: wget/1.12-3)
serious bugs of lvm2 (2.02.39-8 -> 2.02.66-5) <marked as done in some version>
 #603710 - root and swap devices on lvm do not correctly show up in udev (missing symlinks) (Fixed: lvm2/2.02.84-1)
   Merged with: 593625
serious bugs of libgssapi-krb5-2 ( -> 1.8.3+dfsg-4) <marked as done in some version>
 #611906 - GSSAPI in krb5 1.8 fails to delegate credentials to W2K8R2 (Fixed: krb5/1.8.3+dfsg-5)
grave bugs of libdpkg-ruby1.8 (0.3.2 -> 0.3.6+nmu1) <unfixed>
 #585448 - Leaves files open as it scans, resulting in too many open files
   Merged with: 600260
grave bugs of dash (0.5.4-12 -> 0.5.5.1-7.4) <unfixed>
 #540512 - dash upgrade breaks mksh-as-/bin/sh
 #538822 - dash fails to upgrade if /bin/sh is locally diverted
grave bugs of grub-pc ( -> 1.98+20100804-14) <unfixed>
 #593648 - grub-pc install fails on RAID1 (unknown filesystem)
 #590884 - grub-pc: upgrading with vmlinuz-2.6.32-5-amd64 kernel fails on device detection
 #612220 - after update to squeeze grub2 don't load the system
 #620663 - grub-pc hangs after upgrading lenny to squeezy
grave bugs of openssh-client (1:5.1p1-5 -> 1:5.5p1-6) <unfixed>
 #607267 - /usr/bin/scp: fails to notice close() errors
grave bugs of elinks (0.12~pre2.dfsg0-1 -> 0.12~pre5-2) <unfixed>
 #617713 - Caches documents in violation of HTTP spec and general sanity
serious bugs of apt (0.7.20.2+lenny2 -> 0.8.10.3) <unfixed>
 #558784 - apt: re-adds removed keys
serious bugs of python2.5 (2.5.2-15+lenny1 -> 2.5.5-11) <unfixed>
 #598372 - python2.5: uses the network during build
serious bugs of lvm2 (2.02.39-8 -> 2.02.66-5) <unfixed>
 #603036 - lvm2: fails to install due to incorrect dependencies in init.d LSB header
serious bugs of munin (1.2.6-10~lenny2 -> 1.4.5-3) <unfixed>
 #619399 - munin shouldn't recreate apache conf on every update
serious bugs of grub (0.97-47lenny2 -> 0.97-64) <unfixed>
 #594283 - grub: non-standard gcc/g++ used for build (gcc-4.3)
serious bugs of insserv ( -> 1.14.0-2) <unfixed>
 #598020 - barfs when there are "invalid" init scripts

I focused on the grub-pc bugs, because I don't have a monitor, and having the server unbootable is not my idea of a fun way to spend my afternoon. 612220 and 620663 seemed the gravest ones, so I checked them. The latter seemed more complicated, so I just added another bullet to my mouth and continued. The rest of the bugs seemed harmless enough to me. The NEWS had nothing either. The moment of truth was coming closer.

During the debconf part, I chose to put grub-pc in the boot sector of the root partition and chainload it from grub-legacy, which is still installed in the MBR. I also kept the old kernel, just in case.
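The chainloading works through a stanza like this one in grub-legacy's menu.lst (a sketch of what grub-pc's setup adds; the root device is illustrative):

```
title           Chainload into GRUB 2
root            (hd0,0)
kernel          /boot/grub/core.img
```

That way grub-legacy stays in charge of the MBR until you've seen grub-pc boot at least once.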

With a mouthful of bullets in different states of chewedness, I rebooted the server. As expected, nothing happened; that is, it booted fine, no problems, all services still there. Disappointed, I'm now looking intently for a more complicated server to upgrade.


  1. Yes, I know dselect is in maintenance mode now, and that it knows nothing about automatic packages, but I mostly know what I need and what I don't. In any case, whatever is missing can be installed later. Nothing critical runs on this server.

  2. If you haven't, you really have to see Ahmed, the suicide terrorist

Porting big codebases to Android

For the last year and a half I was working in research. The position was about, among other things, porting a programming language (two, actually: Hop and Bigloo) to the Android platform. I had already written something about it, but this time I want to give my high-level impressions of the platform. What follows is part of a report I wrote at the end of that job, which includes the things I wanted to say.

The Android port can be viewed as four separate subtasks. Hop is software developed in the Scheme language; more particularly, it must be compiled with the Bigloo Scheme compiler, which in turn uses gcc for the final compilation. That means we also needed to port Bigloo to the platform first; not because we were planning to use it on the platform, but because we needed the Bigloo runtime libraries ported to Android, as Hop and any other program compiled with Bigloo uses them. The other three subtasks, discussed later, are porting Hop itself; developing libraries to access devices and other features present in the platform; and, for reasons we'll see later, making the port work with threads.

When we started to investigate how to port native code to the platform we found that there wasn't much support. At first the only documentation we could find was blog posts by people trying to do it by hand. They were using the compiler provided in Android's source code to compile static binaries that could be run on the platform. Because Bigloo uses dynamic libraries to implement platform-dependent code and modules, we aimed to find a way to compile things dynamically. After 3 or 4 weeks we found a wrapper written in Ruby that managed all the details of calling gcc with the proper arguments. With this we should be able to port anything that uses gcc as its compiler, just like Bigloo does. At the same time, the first version of Android's NDK (Native Development Kit) appeared, but it wasn't easy to integrate into our build system.

(Note: actually I think most of the problems we faced doing this port stem from this. The NDK forces you to write a new set of Makefiles, but our hand-made build system and build hierarchy made such an effort quite big. Also, that meant supporting a parallel build system, while it should not be so crazy to expect a cleaner way to integrate the toolchain into an existing build system; not only a hand-made one like in this case, but also the most common ones, like autotools, cmake, etc.)

Even with the proper compiler, we found several obstacles related to the platform itself. First of all, Bigloo relies heavily on the C and pthread libraries to implement low-level functionality. Bigloo can use either glibc, GNU's implementation, or µClibc, an implementation aimed at embedded applications. Bigloo also relies on Boehm's Garbage Collector (GC) for its memory management. The C library implementation in Android is neither glibc nor µClibc, but an implementation developed by Google for the platform, called Bionic. This version of the C library is tailored to the platform's needs, with little to no regard for native application development.

The first problem we found was that GC compiled fine with Bionic, but the applications that used GC did not link: there was a missing symbol that is normally defined in the libc, but that Bionic did not define. We tried cooperating with the GC developers, and we tried inspecting a Mono port to Android, given that this project also uses GC, trying to find a solution that could be useful for everyone; but in the end we just patched our sources to fake that symbol with a value that remotely made sense.

We also found that Bionic's implementation of pthreads is not only incomplete, but also has some glitches. For instance, in our build system we test for the existence of a function like everybody else: we compile a small test program which uses it. With this method we found at least one function that is declared but never defined; that is, Bionic declares that the function exists, but then never implements it. Another example is the declaration and definition of a function, but the lack of definition of the constants normally used when calling it.
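The existence check we used looks roughly like this (a sketch; pthread_cancel is just a stand-in for whichever function is being probed, and cc stands for the cross compiler). Because the probe links as well as compiles, it is exactly this kind of check that exposed the declared-but-never-defined case:

```shell
# probe for a function by compiling and linking a minimal program that uses it
cat > conftest.c <<EOF
#include <pthread.h>
int main(void) { return pthread_cancel(pthread_self()); }
EOF
if cc -o conftest conftest.c -lpthread 2>/dev/null; then
    echo "pthread_cancel: found"
else
    echo "pthread_cancel: missing"
fi
```

A compile-only check (without the link step) would have happily accepted Bionic's phantom declarations.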

Also, because most of the tests must also be executed to report the peculiarities of each implementation, we had to change our build system to be able to execute the produced binaries in the Android emulator.

Google also decided to implement its own set of tools, again trimmed down to the needs of the platform, instead of using old and proven versions like Busybox. This means that some tools behave differently, with no documentation about it, so we mostly had to work around these differences every time a new one appeared.

All in all, we spent two and a half months just getting Bigloo to run on Android, setting aside the problem that Boehm's GC, using its own build system, detected that the compiler declared not to support threads, and refused to compile with threads enabled. This meant that Bigloo itself could not be compiled with pthreads support.

With this caveat in mind, we tackled the second subtask, porting Hop itself. This again raised problems with the peculiarities of the platform. We quickly found that the dynamic linker wasn't honoring the LD_LIBRARY_PATH environment variable, which we were trying to use to tell the system where to find the dynamic libraries.

The Android platform installs new software using a package manager. The package manager creates a directory in the SD card that is only writable by the application being installed. Within this directory the installer puts the libraries declared in the package. Bigloo, besides the dynamic libraries, requires some additional files that initialize the global objects. These files are not extracted by the installer, so we had to make a frontend in Java that opens the package and extracts them by hand. But the installer creates the directory for the libraries in such a way that the application cannot later write to it.

Also, we found that the dynamic linker works for libraries linked at startup, but not for dlopen()'ing them, so we also had to rewrite a great part of our build system for both Bigloo and Hop to produce static libraries and binaries. This also meant disabling the dynamic loading of libraries, and with it their initialization, so we had to initialize them by hand.

To add more unsuspected work, the Android package builder provided with the SDK ignores hidden files, which Bigloo uses to map Scheme module names to dynamic libraries. We had to work around this feature in the unpacking algorithm.

Then we moved on to improving the friendliness of the frontend. So far, we could install Hop on the platform, either on a phone or in the emulator, but we could only run it in the emulator, because we were using a shell that runs as root on the emulator but as a normal user on a real device. This user, for the reasons given above, cannot even get into Hop's install dir. Even though Android has a component interface that allows applications to use components from other apps, none of the terminal apps we found at that time declared the terminal itself as a reusable component. We decided to use the code from the most popular one, which was based on a demo available in Android's source code but not installed on actual devices. We had to copy the source code and trim it down to our needs.

Having a more or less usable Hop package for Android, we decided to try and fix the issue we mentioned before: GC didn't compile with threads enabled. This meant we couldn't use the pthreads library, which is very useful for Hop: Hop uses threads to attend several requests at the same time. Bigloo implements two thread APIs, one based on pthreads and another which implements fair threads. Hop is able to use 5 different request schedulers, but works best with the one based on pthreads.

For these reasons we decided to focus on getting GC to use threads on the Android platform. GC's build system tests for the existence of a threading platform by checking the thread model declared by gcc. The gcc version provided with Android's SDK declares a 'single' thread model, but we couldn't find out what this means in terms of the code produced by gcc or how it could affect GC's execution.

(Note: we didn't manage to make GC compile with threads.)

With a threadless Hop running, we had to add code to the server so it could talk to the frontend while at the same time attending the requests from a web client. After several attempts at this problem, we decided that the best solution was to make this interface another service served by Hop. This meant fewer modifications to Hop itself, but a bigger one to the frontend we already had.

During these changes we found a problem with JNI. The terminal component we imported into our code uses a small C library for executing the application inside it (normally a shell in the original code, but Hop in our case), which is accessed from Java using JNI. The original Term application exported this class as com.android.term.Exec, but our copy exported it as fr.inria.hop.Exec. Even with this namespace difference, JNI got confused and tried to use the Exec class from the original Term app. This is just another example of how hard the platform is to work with. We found that the community support is more centered around Java, and that very few people know about JNI, the NDK or any other native-related technologies. We couldn't find an answer to this problem, so we worked around it by renaming the class.

So that's it. I can provide the technical details for most of the assertions I made above, but that would make this post unreadable due to its length. If you have any questions about them, just contact me.

Merging disjoint histories with mercurial

At work I was tasked with developing three different lines of work. Each one seemed to be self-contained, with a specific and compact objective. Of course, for each one of these projects I set up a VCS repo. The chosen one here at work is Mercurial.

But then the fourth task proved to be the integration of the three previous ones into one demo that would show them off and then evolve into an application. At first I just relied on setting up a fourth repo, adding to it only the new files, and symlinking the files I needed from the other repos. It was hacky, but it allowed me to continue the development quickly, and as long as I kept making commits in the four different repos, all was peachy.

Today, after the demo was presented, it occurred to me that this setup wasn't easy to share with the rest of the team. So far I had been working alone, but now that this project will get more and more developers, sharing was crucial.

So I asked in #mercurial and the short answer was:

11:13 < hstuart> StucKman, you can pull --force and merge the unrelated heads together, but it'll be a two-way merge and there'll potentially be a lot of conflict resolution if they have diverged in the same files

As the four repos had disjoint file sets, this seemed enough for me. So the first step was to set up a fifth, clean, empty repo:

$ hg init

No surprises there. Now I pulled the four lines of history from the other directories:

$ hg pull --force ../soap/
pulling from ../soap/
requesting all changes
adding changesets
adding manifests
adding file changes
added 7 changesets with 9 changes to 3 files
(run 'hg update' to get a working copy)

$ hg pull --force ../stomp/
pulling from ../stomp/
searching for changes
warning: repository is unrelated
adding changesets
adding manifests
adding file changes
added 43 changesets with 49 changes to 8 files (+1 heads)
(run 'hg heads' to see heads, 'hg merge' to merge)

$ hg pull --force ../phidgets/
pulling from ../phidgets/
searching for changes
warning: repository is unrelated
adding changesets
adding manifests
adding file changes
added 29 changesets with 40 changes to 7 files (+1 heads)
(run 'hg heads' to see heads, 'hg merge' to merge)

$ hg pull --force ../smartimmo/
pulling from ../smartimmo/
searching for changes
warning: repository is unrelated
adding changesets
adding manifests
adding file changes
added 26 changesets with 58 changes to 20 files (+1 heads)
(run 'hg heads' to see heads, 'hg merge' to merge)

So far so good. I checked with hgview what I had: four lines of history with no common point. Do you see that each pull made a new head? Here, look:

$ hg heads
changeset:   104:a923c43e33fa
tag:         tip
user:        Marcos Dione <mdione@sophia.inrira.fr>
date:        Fri Dec 17 11:04:24 2010 +0100
summary:     * soft-coded image filepaths (patch by Manuel).

changeset:   78:1a3faf6e6e8a
branch:      single-ifkit
user:        Marcos Dione <mdione@sophia.inrira.fr>
date:        Thu Dec 16 16:12:40 2010 +0100
summary:     * wrong message for up-> nobody.

changeset:   74:53c2942a8548
user:        Marcos Dione <mdione@grulic.org.ar>
date:        Wed Dec 01 14:58:53 2010 +0100
summary:     * s/state-update/light-state-update/ in event name, so it doesn´t clash with the same event in the temp controller.

changeset:   49:9480820604fe
user:        Marcos Dione <mdione@sophia.inrira.fr>
date:        Thu Dec 16 16:13:04 2010 +0100
summary:     + si-heater-switch.

changeset:   6:be775f2b6af5
user:        Marcos Dione <mdione@sophia.inrira.fr>
date:        Fri Jul 09 11:43:58 2010 +0200
summary:     * cleanup.

Notice how the tip is set to the tip of the last repo I pulled from. If you're very 'detaillist' (I'm not sure that word exists in English; it does in Spanish), take that into account. Me, I didn't care that much. So the next step was to merge them:

$ hg merge --force 6
3 files updated, 0 files merged, 0 files removed, 0 files unresolved
(branch merge, don´t forget to commit)
$ hg ci -m "|\ soap history."

The first one went fine, but the second one presented a small problem:

$ hg merge --force 49
merging smart-immo.scm
merging smart-immo.scm failed!
7 files updated, 0 files merged, 0 files removed, 1 files unresolved
use 'hg resolve' to retry unresolved file merges or 'hg update -C' to abandon

So I went back to before the merge, moved the file, and continued:

$ hg update -C 105
1 files updated, 0 files merged, 7 files removed, 0 files unresolved
$ hg mv smart-immo.scm smart-immo-soap.scm
$ hg ci -m "* rename to avoid file clashes."
$ hg merge --force 49
8 files updated, 0 files merged, 0 files removed, 0 files unresolved
(branch merge, don´t forget to commit)
$ hg ci -m "|\ obix/stomp history."
$ hg merge --force 74
6 files updated, 0 files merged, 0 files removed, 0 files unresolved
(branch merge, don´t forget to commit)
$ hg ci -m "|\ smartimmo demo history."

I still had something left: one of the histories had a branch which wasn't default but was the tip, so I had to merge that too. I don't like how this is done in Mercurial, but I know the moves:

$ hg update -C single-ifkit
2 files updated, 0 files merged, 31 files removed, 0 files unresolved
$ hg ci --close -m "_ closing before merging."
$ hg update -C default
33 files updated, 0 files merged, 0 files removed, 0 files unresolved
$ hg merge single-ifkit
2 files updated, 0 files merged, 0 files removed, 0 files unresolved
(branch merge, don´t forget to commit)
$ hg ci -m "|\ single-ifkit."
$ hg heads
changeset:   110:bfc9eb117a76
tag:         tip
parent:      108:d53b4b9a3ffc
parent:      109:b41570e653c9
user:        Marcos Dione <mdione@sophia.inrira.fr>
date:        Fri Dec 17 11:34:26 2010 +0100
summary:     |\ single-ifkit.

Tadaaa! Yes, I would like an hg merge --close <head> command that did all this dance for me, but as I said, as long as I know the hoops I have to jump through, and as long as they're not in flames, I can live with it. The only thing I miss from other VCSs is Bazaar's --show-diff commit option, which lets me review the diff while editing the commit message.
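In fact the dance is mechanical enough to wrap in a small shell function (a sketch of my own, not a real hg command; it assumes the branch merges cleanly into default):

```shell
# close a named branch and merge it into default, as done by hand above
hg_merge_close () {
    branch="$1"
    hg update -C "$branch"
    hg commit --close-branch -m "_ closing before merging."
    hg update -C default
    hg merge "$branch"
    hg commit -m "|\\ $branch."
}
```

With it, `hg_merge_close single-ifkit` would replay the session above.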

One thing I figured out from all this: histories in Mercurial are directed acyclic graphs, yes, but they don't need to have only one root. Nice.

Greedy record labels

Note: I began to write this post a long time ago, then I thought it was too much a rant, and left it there. Today, while procrastinating a lot from work, I read it again and decided it wasn't so bad after all. So, here it is.

Today I sat down to write some lines of code for satyr. It has been more than a month since I wrote any code for it, which is a shame, but I really wasn't in the mood. So I queued some songs, keeping the music not too eclectic. Just to not be too boring, I put on Slash's solo album. During my small breaks from the code (I take them from time to time; it seems I can't stay focused on anything for too long) I was editing the tags of the songs and reading Wikipedia's page about the album. When I got to the track listing I noticed that there were no more and no less than 9, yes, 9 different editions.

This is not the first time I've seen this; I had seen alternative editions before, especially Japanese ones, most of the time with one extra or different track, but it never really bothered me. I really don't know why, but this time I think it's plain abuse.

Combining the other 8 editions with the one I have, there are 5 more songs, plus an English version of one of those (as it is a song on the 2, yes, 2 Japanese versions, I can only assume the original is in Japanese) and a DVD track. And of course none of the editions has them all; the biggest one is the Canadian deluxe, with only 3 of those. So, no matter which disc one buys, there is no way to have them all.

Of course, besides buying several discs. How many? Hold on to your socks, pants and jaw: the Japanese deluxe edition, the iTunes version, the Canadian deluxe edition and the Monster energy drink edition; and then you get the same song twice (the English version of 'Sahara'). Yes, count them: 4. Not to mention that two of them are only sold in two countries not only far away from one another, but also far away from where I'm living, and you can't buy them online and have them delivered to your house.

I think the message is obvious already: those [expletive] from the record labels are out to get your money more than ever. Just look at the list of different labels involved in this opportunity and you'll find the usual suspects: EMI, Universal Music, Roadrunner Records and Sony Music. What a bunch of greedy bastards, already well known for their greediness and stupidity:

  • Sony and its copy protection scandal.
  • Universal has at least 4 dubious actions.
  • EMI, which used to have DRM in their sales through iTunes, but not anymore. They also tried to buy Warner and convert two greedy bastards into one huge greedy bastard, but got bought by Terra Firma instead. They laid off ~2k jobs.
  • Roadrunner is part of Warner, which blocks videos in YouTube even when they're not officially WMG content. And they have more.

Another example of how things are wrong is the album 'Live at The Greek', a live album recorded by The Black Crowes and Jimmy Page, the guitarist from Led Zeppelin. On this album we can only find the songs they played that were not from TBC's repertoire, 'due to contractual problems with their [TBC's] record company'. Of course, most of what's left are songs from Led Zeppelin, which makes it a fantastic live/cover album of Led Zeppelin songs, but then I can't hear TBC songs with the legend Jimmy Page as first/second guitar, something that is never ever going to happen again. I simply can't. Ok, I can, and you know how? Because someone recorded the concert with a video camera, and then this hero/heroine put it online, and I downloaded it. Yes, completely illegal.

I simply can't wait till these greedy bastards disappear once and for all, and culture goes back to flowing more freely among us, as it should. To finish, I give you the translation of a quote I keep among my random fortunes. Unluckily, the original article this comes from is not available online anymore:

La razón por la que la industria de medios es tan amiga del término “piratería” es que inmediatamente evoca una imagen negativa, de violencia y saqueo, en la que ellos son las víctimas que se ven privadas de “su propiedad”. [...] en una conferencia que diera la abogada de la Federación Internacional de la Industria Fonográfica, [...] escuchamos cómo [este] abogado se alarmaba del hecho de que “la sociedad se está apropiando de la cultura”.

The reason the media industry is so fond of the term “piracy” is that it immediately evokes a negative image, of violence and plunder, in which they are the victims deprived of “their property”. [...] in a conference given by the lawyer of the International Federation of the Phonographic Industry, [...] we heard how [this] lawyer was alarmed by the fact that “society is appropriating culture”.

Create a disk image with a booting running Debian

Recently I had to do something that sounds very simple to any good sysadmin: create a disk image with a booting Debian installation, from a script, with no human interaction. The idea is to later install our software on it. Those who want to test our soft would just need to download the image and boot it in any virtual machine they have: qemu, virtualbox1, you name it.

So the process can be thought of like this: create a disk image, partition it, install Debian, install a bootloader, profit! Let's tackle these steps separately, looking at different approaches2:

A disk image is simply a file big enough: 1GiB, 10GiB, whatever you want. A string of 1Gi of 0s should be enough:

dd bs=$((1024*1024)) count=1024 if=/dev/zero of=stable.img

Now, that file is using 1GiB of space, but we're not sure we're going to use it all, so it's kind of a waste of space. Luckily, Linux is able to handle sparse files: files that do not reserve all the file system blocks that would normally be needed, only those where data is actually written. So for instance, a way to create a 1GiB (almost) empty sparse file is this:

dd bs=1 count=1 seek=$((1024*1024*1024 - 1)) if=/dev/zero of=stable.img

That is, we write a single 0 at the end of a 1GiB file3, but even though the file is that big, it actually uses only one file system block (4096 bytes, according to dumpe2fs).
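To convince yourself that the file really is sparse, compare the apparent size with the blocks actually allocated (a quick sketch using GNU coreutils; sparse.img is just a scratch file):

```shell
# write a single zero byte at offset 1GiB-1; everything before it is a hole
dd bs=1 count=1 seek=$((1024*1024*1024 - 1)) if=/dev/zero of=sparse.img 2>/dev/null
ls -l sparse.img    # apparent size: 1073741824 bytes
du -h sparse.img    # actual allocation: a few KiB at most
```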

A simpler, or maybe more-intuitive-when-you-read-it4 alternative is to use a tool that comes in qemu-utils:

qemu-img create -f raw stable.img 1G

That was easy. Now, how do we partition it? The first answer is obvious, namely fdisk, but it is not scriptable. So we look for alternatives, and one that comes to mind is parted: it was designed with scriptability in mind, it should be perfect!

Almost. parted needs a partition table signature in the MBR5 and has no way to create one. This is at least surprising, but a little more (ab)use of dd can save the day. It's just a matter of writing the bytes 0x55 0xaa8 in the last two bytes of the first sector of the image. A disk sector, until recently, was just 512 bytes, so:

echo -e "\x55\xaa" | dd bs=1 count=2 seek=510 of=stable.img conv=notrunc

The notrunc is there so dd doesn't truncate the image to be 512 bytes long (it took me a while to figure that out). Now on to parted:
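To see what notrunc buys us, here is a tiny demonstration on a scratch file (test.img is mine, not part of the procedure). Without conv=notrunc, the second dd would leave a 512-byte file:

```shell
# make a 1024-byte scratch file, then poke the signature into bytes 510-511
dd if=/dev/zero of=test.img bs=512 count=2 2>/dev/null
printf '\x55\xaa' | dd bs=1 count=2 seek=510 of=test.img conv=notrunc 2>/dev/null
od -A d -t x1 -j 510 -N 2 test.img   # the signature is there...
stat -c %s test.img                  # ...and the file is still 1024 bytes long
```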

parted -s stable.img mkpart primary ext2 0 1G
Warning: The resulting partition is not properly aligned for best performance.

But this raises a problem that I'll mention later, at its proper time. So instead, and given that we're gonna install another package anyways, we're gonna use sfdisk67:

sfdisk -D stable.img <<EOF
,,L,*
;
;
;
EOF

The next step is to format the partition inside the image. If this were a partition image we could simply apply mkfs.ext2 (or whatever filesystem type you want) to the file, because the filesystem would start at the beginning of the file. But as this is a disk image, the partition starts at an offset from the beginning:

sfdisk --list stable.img
Disk stable.img: cannot get geometry

Disk stable.img: 130 cylinders, 255 heads, 63 sectors/track
Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0

    Device Boot Start     End   #cyls    #blocks   Id  System
stable.img1   *      0+    129     130-   1044193+  83  Linux
stable.img2          0       -       0          0    0  Empty
stable.img3          0       -       0          0    0  Empty
stable.img4          0       -       0          0    0  Empty

The 0+ in the third column tells us that the partition doesn't start exactly at cylinder 0; that would mean it starts where the MBR is. Actually it starts in cylinder 0 but at the second head. According to the CHS geometry reported by sfdisk, there are 63 sectors per track, so we just need to skip that many sectors: 63x512=32256 bytes. Coincidentally, the 130- in the fourth column means that the partition does not reach the end of cylinder 130, which is exactly what parted was complaining about above9.
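In shell terms, the offset to skip is simply (a trivial sketch, using the geometry sfdisk reported; we'll need this number again for losetup below):

```shell
# the partition starts at head 1, i.e. one full track into the image
bytes_per_sector=512
sectors_per_track=63
offset=$(( bytes_per_sector * sectors_per_track ))
echo $offset   # 32256
```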

To fix the alignment we have to do it the other way around: instead of deriving the CHS geometry from the image size, we'll compute the image size from a desired CHS geometry and a minimum image size. This can be done like so:

# bytes per sector
bytes=512
# sectors per track
sectors=63
# heads per cylinder
heads=255
# bytes per cylinder is bytes*sectors*heads
bpc=$(( bytes*sectors*heads ))
# minimum desired size: 1GiB
img_size=$(( 1024*1024*1024 ))
# number of whole cylinders
cylinders=$(( img_size/bpc ))
# round the size up to a whole number of cylinders
img_size=$(( (cylinders+1)*bpc ))
qemu-img create -f raw stable.img $img_size

So we will have to somehow tell mkfs.ext2 about the partition offset inside the disk image. We can use something that we have been using unknowingly for years: loopback devices. Who hasn't mounted an ISO-9660 image? We used something like this:

mount -o loop debian-505-i386-CD-1.iso /mnt

This is more or less equivalent to:

losetup /dev/loop0 debian-505-i386-CD-1.iso
mount /dev/loop0 /mnt

The good thing is, we can tell losetup to simulate that the device starts some bytes into the file. And given that everything is a file in Linux, we can even chain loop devices, like this:

losetup /dev/loop0 stable.img
losetup -o 32256 /dev/loop1 /dev/loop0

Now /dev/loop0 points to the disk image and /dev/loop1 points to the partition. There's really not much choice here, so we skip to the formatting part, which is even more straightforward:

mkfs.ext2 /dev/loop1

Now to install Debian on this beast. Here we won't be exploring much either, but I will explain a couple of tricks I learned to complete this task successfully. The tool of choice is debootstrap, which is able to install packages into a directory as if it were the root partition, so we will need to mount it first:

mount /dev/loop1 mnt

In my case I will need to install several packages besides the base install:

debootstrap --arch i386 --include=cdbs,debhelper,libsqlite3-dev,\
libssl-dev,libgstreamer-plugins-base0.10-dev,libgmp3-dev,build-essential,\
linux-image-2.6-686,grub-pc stable mnt

Notice that the base set of packages includes neither a kernel nor a boot loader, because these are normally installed by the Debian Installer, so I added them to the list of packages. But this is not the only thing the installer does (and that there is no way to repeat other than by hand): it also sets up the environment, users, the apt config (from the settings used to install) and more. We will have to set those up by hand.

Before running anything else, which will run under chroot, we will also need to set up some of the virtual filesystems that are present on a normal GNU/Linux system; namely, /dev, /dev/pts and /proc. We will reuse the host's, using the ability to bind-mount one directory onto another:

mount -o bind /dev/ mnt/dev
mkdir mnt/dev/pts
mount -o bind /dev/pts mnt/dev/pts
mount -o bind /proc mnt/proc

Some minimal config needed includes:

# apt
echo "deb http://http.us.debian.org/debian stable         main" >  mnt/etc/apt/sources.list
echo "deb http://security.debian.org       stable/updates main" >> mnt/etc/apt/sources.list
# otherwise perl complains during installation that it can't set the locale
# actually we will have to do some little more than just this; see below
echo "en_US.UTF-8 UTF-8" > mnt/etc/locale.gen
# when installing the kernel, if this setting is not present, it thinks the
# bootloader is not able to handle initrd images[^10]
echo "do_initrd = Yes" > mnt/etc/kernel-img.conf
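
Among the things the installer would also have created are /etc/fstab and a hostname; here is a minimal hand-made sketch. The fstab entries and the hostname are illustrative choices of mine, and the UUID is the one blkid reports for my root partition (see below in the post); yours will differ:

```shell
# mkdir -p only makes this sketch standalone; debootstrap
# has already created mnt/etc in the real image
mkdir -p mnt/etc
# root filesystem by UUID, plus proc
cat > mnt/etc/fstab <<EOF
UUID=e3447f08-f8b2-4c25-93e4-76420c467384 / ext2 errors=remount-ro 0 1
proc /proc proc defaults 0 0
EOF
echo stable > mnt/etc/hostname
# and a root password, set from inside the chroot:
# chroot mnt passwd root
```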

So we use this basic config to complete the installation a little further:

chroot="chroot mnt"
# compile the locales as per /etc/locale.gen
$chroot locale-gen
# download package definitions
$chroot apt-get update
# resolve virtual packages and finish the setup of packages
$chroot apt-get -f -y --force-yes install
# while we're at it, install upgrades
$chroot apt-get -y --force-yes upgrade

The last step is to install a bootloader. Here we have several options. lilo was the first Linux bootloader, started in 1992. Even though it can boot almost any operating system from lots of filesystems, one of its main drawbacks is its static nature: it reads a config file, compiles the bootloader and installs it. After that you can't change anything (except for adding more boot parameters to the kernel), so if you wrote something wrong and your system does not boot, it's hard to recover. Also, if you change anything in the config file, you have to compile and install the bootloader again.

The second and third options are the two flavors of grub, the GNU GRand Unified Bootloader. The first iteration of grub, grub1 or grub-legacy as it is called now, is no longer developed or supported, but a lot of people still use it for its simplicity and power. First released in 1999, it has the ability to read its config file at boot time, and it lets you edit that config and browse the filesystems before booting. Its successor, grub2 or grub-pc, is even more modular and flexible, but it takes time to relearn.

Even with these last two options, I couldn't manage to reliably get a booting image. To be fair, I managed to do it with grub-pc, but my script had to work on a machine that boots with grub-legacy. Installing both at the same time is impossible, and I need to use the host's bootloader because I can't reliably fake the devices in a chrooted environment, and using any virtual machine was impossible because the image doesn't boot yet! Talk about chicken and eggs... For the record, here's how I managed to make it work with grub-pc:

grub-install --root-directory=mnt/ --no-floppy --modules 'part_msdos ext2' /dev/loop0

So, I needed to find a bootloader that could be installed from the host machine without changing the bootloader actually in use. Luckily I talked to a sysadmin/guru friend, Ignacio Sánchez, who pointed me to extlinux, which is part of the syslinux family of bootloaders. This family also includes isolinux, famously known for booting the iso images of most distributions for years. I knew about those two, and in a company I worked for two years ago I even used syslinux on a floppy disk (!!!) used to boot the old firewall[^12], and on another set of diskettes for two diskless thin clients. extlinux is the youngest of the family, and is able to read and boot from extX partitions. The config file looks like a very simple lilo.conf:

default Hop
timeout 30

label Hop
    kernel /boot/vmlinuz-2.6.28-5
    append initrd=/boot/initrd.img-2.6.28-5 root=UUID=e3447f08-f8b2-4c25-93e4-76420c467384 ro

The UUID can be obtained with this command:

blkid /dev/loop1
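
Putting the two together, the config file can be generated with the UUID filled in; extlinux reads extlinux.conf from the directory it is installed into. On the real partition you would grab just the value with `blkid -o value -s UUID /dev/loop1`; it's hardcoded here (from my image) so the sketch stands on its own:

```shell
# uuid=$(blkid -o value -s UUID /dev/loop1) on the real thing
uuid=e3447f08-f8b2-4c25-93e4-76420c467384
# extlinux looks for extlinux.conf in its install directory
mkdir -p mnt/boot/syslinux
cat > mnt/boot/syslinux/extlinux.conf <<EOF
default Hop
timeout 30

label Hop
    kernel /boot/vmlinuz-2.6.28-5
    append initrd=/boot/initrd.img-2.6.28-5 root=UUID=$uuid ro
EOF
```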

Installing it actually consists of two steps. The first is installing MBR code that boots from the partition marked as bootable[^11]. The syslinux family comes with such MBR code, so we use it:

dd if=/usr/lib/extlinux/mbr.bin of=stable.img conv=notrunc

We're almost there. Installing extlinux itself is really straightforward (the directory it's installed into must exist first):

mkdir -p mnt/boot/syslinux
extlinux --heads $heads --sectors $sectors --install mnt/boot/syslinux

All that's left is to unmount everything and dismantle the loop devices, in reverse order:

umount mnt/proc mnt/dev/pts mnt/dev
umount mnt/
losetup -d /dev/loop1
losetup -d /dev/loop0

Sometimes you need to wait a couple of seconds between these commands, because they seem to be asynchronous. Otherwise you'll get errors saying the device is still busy: the previous command has finished, but the corresponding async work in the kernel has not.
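
If you're scripting this, a small retry helper (my own, not part of any tool) papers over that asynchrony:

```shell
# retry a command up to 5 times, sleeping between attempts,
# to cope with the kernel releasing devices asynchronously
retry() {
    for attempt in 1 2 3 4 5; do
        "$@" && return 0
        sleep 1
    done
    return 1
}

# usage (needs root, like the rest of the teardown):
# retry umount mnt/
# retry losetup -d /dev/loop1
# retry losetup -d /dev/loop0
```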

The image as it is boots in qemu and virtualbox, but if you want to make it bootable in other, closed virtual machines, you must convert it to vmdk. qemu-utils to the rescue again:

qemu-img convert -O vmdk stable.img stable.vmdk

I have lots more things to mention, mostly references to the sites I got info from, but this post has gotten long enough as it is. I know that if I try to clean them up I will procrastinate for another month or so and probably forget about it.


  1. Currently this needs an image conversion. 

  2. Of course, I strongly recommend checking the manpages of the mentioned tools. 

  3. Quick: what is the actual size of the file? You can answer with powers of 2 if that makes it easier for you :) 

  4. I have the tendency to write as-understandable-as-possible code; that is, I know that I'll have to read it and try to understand it 6 months after I wrote it, so I try to make it as readable as possible. That includes using long options when I invoke tools in scripts and, of course, sensible class and variable names. 

  5. Master Boot Record, the first sector in a disk. 

  6. You will have to read sfdisk's manpage to understand what's all that. 

  7. sfdisk has a neat trick: you can dump the partition table from one disk and pipe it into an sfdisk run affecting another disk, effectively copying the partition scheme. It comes in very handy when adding disks to a raid setup. 

  8. Technically we're marking it as an MSDOS-type partition table. 

  9. Notice that it only complains about the end bound, not the beginning bound. 

  10. One interesting note: even though above I told debootstrap to install a Linux kernel, it actually hasn't. The package linux-image-2.6-686 is a virtual one, and debootstrap seems to not resolve these, but it doesn't complain either. 

  11. See that Boot column in the output of sfdisk at the beginning of the post? And the * in the first and only partition? That marks it as bootable. This is an old relic from the times when operating systems relied on dumb MBR code to boot. And now we're using exactly that to load a bootloader. 

  12. Really old; we're talking about a Cyrix 486DX2 at 50MHz with 16MB of RAM and 4 NICs, all of them ISA, two of them still donning 10Base2 connectors and configurable via jumpers. We really didn't need anything bigger, since the ADSL line was merely 2.5Mib/s. 

OSM to the rescue

A few years back I made a map of Córdoba's downtown by hand, using inkscape, to be used by people coming to a free software conference. You can see the results here, but I can assure you that if it weren't for the mostly squarish/triangularish shape of Córdoba's blocks, it would have been very hard to do.

Today someone asked me for the .svg file to do something similar. I searched and searched through my files, but couldn't find the originals. Then I remembered one of my current projects, which I'll mention below, which is based on OpenStreetMap's data. So I told him to find the region he wanted and export it to PNG or something with the 'export' tab. Then thoughts kept coming, and I even remembered MapOsMatic, which lets you select a region on OSM's maps and generate not only a nice printable map in several formats, but also a street index for the back of that same map (even as a .csv, so you could have a .png of the map on your site and populate a database to search for streets). I even went ahead and generated a map that covers a slightly bigger area than my humble attempt from 4 years ago. So, there.

The project I'm toying with now is to make a bird's-eye-view map of the region where I live now. This map will have only the towns and the main interconnecting ways between them, much like maps like this one[^1]. For that I first tried using josm to download the data for the region, filter out the small routes, and work from there, but the region is quite big, and OSM's servers sensibly refuse to dump so much data at once. Asking in freenode's #openstreetmap channel[^3], I was told I could use XAPI to filter out only the data I wanted. So that's what I did: I downloaded all roads from highways down to secondary roads, plus rivers. While I was at it, I also downloaded the coastlines, because I'm close to the sea and there are also several lakes around. So, without further ado, here are the requests I used:

wget -S -O routes.osm 'http://www.informationfreeway.org/api/0.6/way[highway=motorway|motorway_link|trunk|primary|secondary][bbox=5.5,43,8.5,45.5]'
wget -S -O rivers.osm 'http://www.informationfreeway.org/api/0.6/way[waterway=river][bbox=5.5,43,8.5,45.5]'
wget -S -O coastline.osm 'http://www.informationfreeway.org/api/0.6/way[natural=coastline][bbox=5.5,43,8.5,45.5]'

Notes: I also wanted waterway=riverbank, but for some reason the server never answered that request. This data does not include town names, hmmm... I could also have included historic=* and tourism=*. I might as well. There might be some follow-ups to this project.


  1. RANT: the webmaster of the official tourism site decided it was a good idea to put most of the contents in a flash applet, so I can't actually link to the map I wanted. Instead, here's a map from a neighbouring province, San Luis. 

  2. If you follow the link, yes, you'll get a 'site not found' error. That's because it's actually a curse to that site's webmaster 'encoded' into an url. 

  3. If you try to get into #osm, you're redirected to #joomla. Go figure.