Near future of ayrton

It might look like ayrton's development has stalled. First, yes, a little: last week I barely put any time into it. Second, development will be less release-centered from now on, because I have three things in mind that are very interesting to me and that I want to implement, and it seems I will be tackling them more or less on an «I want to hack on this right now» basis.

One is to replace sh. So far I have been tiptoeing around it to make output go to the terminal by default, which forced the introduction of Capture, and to not raise an exception when a command fails. But recently I tried to write an ayrton script to more or less automate releases, and I found out that sh really messes with the subprocess' view of the terminal. In short, editors are not usable if they're launched from sh. I wouldn't say this is the last straw; I still think that sh is very good at what it does, but this is definitely like a sequoia trunk after a few straws. This is tracked as issue #15.

Another is to make several changes to the way output is captured, piped and redirected. I find this business of messing with _out too annoying. Piping could be handled by |, but currently it's not possible to do so. Redirections could be handled with >, but as long as I don't have |, it doesn't make much sense. I'm already playing with inspecting the AST produced by the source, because I'm trying to minimize the situations in which unknown names are resolved into executables, and these things also require AST handling, so it should be more or less easy to achieve.
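
Just to give an idea of the kind of inspection involved, here's a minimal sketch (not ayrton's actual code) that walks a script's AST collecting the names that would be candidates for resolution into executables:

import ast

# hypothetical input; any Python/ayrton source works here
source= 'ls (); foo= 42; echo (foo)'

tree= ast.parse (source)
for node in ast.walk (tree):
    # names read in a Load context are the ones that might have to be
    # resolved into executables if they're not defined anywhere else
    if isinstance (node, ast.Name) and isinstance (node.ctx, ast.Load):
        print (node.id)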

There is another, more ambitious thing. So far I've been using Python3's syntax, but this imposes some restrictions. I like sh's keyword arguments, but Python's syntax doesn't allow you to put positional arguments after keyword arguments, like:

rsync (archive=True, src, dst)

So the goal is to use another parser. As this is the only modification I have in mind, it only makes sense to take Python3's parser, modify what I want, and ship with that. But the question is: where is the parser code?

The answer is not simple, but at least it exists, besides the source code. I mean, I tried to untangle how this all works, but as Eli Bendersky shows in this post, it is not a simple thing. Luckily, what I plan to change is not implemented in the parser itself, but in the syntax checker, which is implemented in the file ast.c. So, by simply copying that one and ast.py, modifying the latter so it uses my modified syntax checker, and modifying the setup.py file, I should be done. Let's see how it goes...

ayrton 0.2

It's that time of the year already: a new ayrton version. This time I've got more improvements than before, and for the first time1 I introduce a backwards-incompatible change, but at least it's well documented in the Debian-like NEWS.rst file, and it's for a good cause2.

So, without further ado, the ChangeLog (mixed with the NEWS)3:

  • New function options(), similar to bash's set command. So far only errexit and its short version are accepted.
  • The ssh() context manager was renamed to remote() so the ssh executable is still reachable from code. This was due to the fact that ssh is too complex to mimic.
  • New function shift(), similar to bash's command of the same name. See the docs.
  • New function source(), similar to bash's command4, allows you to "import" another script's local definitions into the "sourcing" script's namespace. There's a rough usage sketch below.
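
Purely as an illustration of how these read in a script (this is my guess at the signatures based on the descriptions above, not something taken from ayrton's docs):

# hypothetical ayrton script; the exact signatures may differ
options ('errexit')           # like bash's set -e: abort if a command fails

source ('common.ay')          # "import" another script's definitions

first= shift ()               # like bash's shift: consume an argument

with remote ('example.com'):  # formerly ssh(); the body runs remotely
    ls ()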

As usual, you can grab the sources from pypi or GitHub.


  1. No, really, who can actually come up with such phrases when the product has only 3 previous releases... take it with a grain of salt. 

  2. That cause being that I'm lazy and I don't plan to reimplement all the functionality available in the ssh executable. 

  3. Actually, this changelog is proofread and more complete than any other that you will find elsewhere. Code burns in the disk and I'm not so used to this release thing yet. I hope to get better soon. 

  4. Do you see the pattern here? 

ayrton 0.1.2

I've put quite a lot of time into ayrton now that it's on the loose. This led to two rapid-fire, consecutive minor releases, one unnamed and one called "Release early". Here are both changelogs:

ayrton (0.1.2) unstable; urgency=low

  • The "Release early" release.
  • RunningCommand.exit_code is a property, not a function. Closes #13.

ayrton (0.1.1) unstable; urgency=low

  • The remote code (the body of a with ssh (..): ...) can be either pure Python or ayrton. Pure Python imposes fewer dependencies on the remote.
  • You can access the original argv in the remote.
  • More documentation, more examples, even some that are useful!

I've been creating more examples that show how sh actually breaks input for the spawned processes. Unluckily, this is not scheduled to be fixed soon.

I started using GitHub's issue system, so I can plan releases better than with just a TODO file in the sources. Those interested can go check here.

ayrton 0.1

So, after finally managing to hand over the local variables so they can still be reached in the remote (which was not really complicated, except for some internal changes in ayrton), I decided to release the first version. As before, the code can be found on its GitHub page, but now you can also find it in pypi and you can find some docs here.

From now on I will try to focus on real-world examples, so I can shape it better. It's a pity that I started developing it when I'm no longer a full-time sysadmin. But I still have some ideas up my sleeve.

with ssh() pt2: Transferring Python code and executing it remotely

Yesterday4 I left off when the problem got really interesting: how to transfer code to another machine and execute it there. I already advanced part of the solution: use pickle to convert something returned by ast.parse() into something transferable. Let's see how hard it really is:

import paramiko
import ast
import pickle

# parse the code and pickle the resulting AST
p= ast.parse ('print ("yes!")')
pick= pickle.dumps (p)

c= paramiko.SSHClient()
c.load_host_keys ('/home/mdione/.ssh/known_hosts')
c.connect ('localhost', allow_agent=False, password='foobarbaz')

# the remote end unpickles the AST from stdin, compiles and executes it
(i, o, e)= c.exec_command ('''python -c "import pickle
from ast import Print, Module, Str
import sys
c= pickle.load (sys.stdin)
code= compile (c, 'remote', 'exec')
exec (code)"''')
i.write (pick)
o.readline ()

This happily prints 'yes!' on the last line.

There are a lot of caveats in this code. First, this doesn't work on Python3, only because there's no official/working port of paramiko for that version. Jan N. Schulze a.k.a. nischu7 has made a port, which looks quite active (last commit from around a month ago), but I tried it with Python 3.3 and it didn't work out of the box. Furthermore, even though pickle's docs say that it automatically detects the format of the stream, which means that technically I could pickle something in Python2 and unpickle it back in Python3, the same does not happen with the ast module. Hence, I'm also using Python2 on the remote end2. This implies that I will have to check whether the reconstruction works and whether the reconstructed code actually compile()'s. But I already knew that.

Second, this assumes that you already have the remote machine in the known_hosts file. Third, I'm importing things from ast specifically for reconstructing the parsed code (ast.dump (p) returns "Module(body=[Print(dest=None, values=[Str(s='yes!')], nl=True)])"). I haven't checked yet, but somehow from ast import * is not enough. Last, the transferred code is simple enough: it makes no references to local or remote variables (for whichever definition of local and remote; I will have to be consistent in the future when using those words), nor does it reference other modules, present or not in the remote machine (there, remote is the machine mentioned in the parameter of ssh()3)1. But this is a promising step.

Another thing to notice is that the code is sent via stdin. This might cause trouble with scripts expecting input that way; let's see:

import paramiko
import ast
import pickle

p= ast.parse ('foo= raw_input (); print (foo)')
pick= pickle.dumps (p)
c= paramiko.SSHClient()
c.load_host_keys ('/home/mdione/.ssh/known_hosts')
c.connect ('localhost', allow_agent=False, password='foobarbaz')

# read exactly len(pick) bytes of pickle, leaving the rest of stdin for the script
command= '''python -c "import pickle
from ast import Print, Module, Str, Assign, Name, Call, Load, Store, dump
import sys
c= pickle.loads (sys.stdin.read (%d))
code= compile (c, 'remote', 'exec')
exec (code)"''' % len (pick)

(i, o, e)= c.exec_command (command)
i.write (pick)
i.write ('bar!\n')
o.readline ()

This works, but only after someone tells you that you should use raw_input() instead of input(), which triggers the realization that you're reading Python3's docs but using Python2. Damn you, paramiko!

So, in conclusion, this technique starts to show promise. The good thing about it is that it barely requires any setup. Future developments could include an ssh client cache. The next step is to get the variables into the remote machine and glue this with the previous developments.


  1. Another caveat: that one is definitely not my password :) 

  2. Clearly this sentence was written before this other one3. 

  3. There, I just invented bodynotes :) 

  4. I have again been bitten by a post that takes days, if not months, to publish. 

with ssh() pt3.5: Clarifying a little

Earlier I posted the current implementation of ayrton's with ssh(). I made a mistake in the examples of the resulting code, and there was no explicit example, so this post is mostly to clarify that:

with ssh ('localhost', allow_agent=False, password='foobarbaz') as s:
    foo= input ()
    print ('yes!', foo)
    print ('I said yes!')

(i, o, e)= s
i.write (b'no! maybe!\n')

print (o.readlines ())
print (e.readlines ())

ayrton's code will rewrite this into something equivalent to:

with ssh (pickle.dumps (<module>), 'localhost', allow_agent=False,
          password='foobarbaz') as <random_var>:
    s= <random_var>

(i, o, e)= s
i.write (b'no! maybe!\n')

print (o.readlines ())
print (e.readlines ())

where <module> is built by creating an ast.Module whose body is the with's body, and <random_var> is a randomly generated name (a sequence of 4 groups of a lowercase letter and a digit; for instance, j6r3t8y9).
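
ayrton's actual generator is not shown in these posts, but a minimal sketch producing names of that shape could be:

import random
import string

def random_var ():
    # 4 groups of a lowercase letter followed by a digit, like j6r3t8y9
    return ''.join (random.choice (string.ascii_lowercase)+random.choice (string.digits)
                    for _ in range (4))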

The execution of said program gives the following:

[b'yes! no! maybe!\n', b'I said yes!\n']
[]

But while trying to explain why we need the s= <random_var> step, I noticed that actually we don't: according to Python's documentation, the only blocks that define binding scopes are modules, classes and functions/methods. This means that a name bound as the target of a with statement outlives the body of said with statement. This puts us back in the simpler situation where we just replace the with's body with pass.
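
This is easy to check in plain Python (assuming any file that exists; the target of the with is still bound after the block ends):

with open ('/etc/hostname') as f:
    pass

# f outlives the with block: the file is closed, but the name is still bound
print (f.closed)  # True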

So, finally, our previous example ends up like:

with ssh (pickle.dumps (<module>), 'localhost', allow_agent=False,
          password='foobarbaz') as s:
    pass

(i, o, e)= s
i.write (b'no! maybe!\n')

print (o.readlines ())
print (e.readlines ())

and works all the same :)

with ssh() pt3: Modifying the AST to avoid local execution

Putting part 1 and part 2 together is not much more effort. We already know how to find the body of a with statement, and we already know how to compile it, transfer it, and execute it remotely. Putting them together looks like this1:

    def __enter__ (self):
        # figure out the file and line this with statement is being executed from
        file, line= traceback.extract_stack (limit=2)[0][:2]
        code= ast.parse (open (file).read ())

        # find the with ssh(): statement on that line and extract its body
        found= None
        for node in ast.walk (code):
            if type (node)==ast.With:
                if node.items[0].context_expr.func.id=='ssh':
                    if node.lineno==line:
                        found= ast.Module(body=node.body)
                        break

        if found is not None:
            data= pickle.dumps (found)
            print (ast.dump (found))

            self.client= paramiko.SSHClient ()
            self.client.load_host_keys (bash ('~/.ssh/known_hosts')[0])
            self.client.connect (*self.args, **self.kwargs)

            # the remote end reads exactly len(data) bytes, unpickles the AST,
            # compiles and executes it
            command= '''python3 -c "import pickle
from ast import Module, Assign, Name, Store, Call, Load, Expr
import sys
c= pickle.loads (sys.stdin.buffer.read (%d))
code= compile (c, 'remote', 'exec')
exec (code)"''' % len (data)

            (i, o, e)= self.client.exec_command (command)
            i.write (data)
            return (i, o, e)
        else:
            raise BodyNotFoundError (file, line, code)

There are two complications that arise. One is already fixed in that code: detecting, from the code entering the context (the execution of the above __enter__() method), which file and line we're being executed from. This is solved in the first line with extract_stack() from the traceback module. The only difference with the original body-extraction mechanism is that we also check that we're on the right line number. Just in case, an exception is raised when we don't manage to find the original code.

The second complication is... well, more complicated. We successfully execute the body in the remote and we're amused that it even works. But here's the hitch: the body is also executed locally. This is annoying.

This means that we have to not only manage to find the body of the with statement to execute it remotely, we have to make sure that it is not executed locally. In other words, we have to locally replace it with innocuous code, like pass.

Luckily, ayrton already loads and compiles the script's code before executing it. Adding a step that somehow saves the body of all with ssh() statements and replaces them with pass should be easy. In fact, it's disappointingly easy:

import ast
import pickle
from ast import Module, Bytes, Pass

class CrazyASTTransformer (ast.NodeTransformer):

    def visit_With (self, node):
        call= node.items[0].context_expr
        if call.func.id=='ssh':
            # pickle the body wrapped in a Module and prepend it
            # to ssh()'s arguments as a Bytes literal
            m= Module (body=node.body)
            data= pickle.dumps (m)
            s= Bytes (s=data)
            s.lineno= node.lineno
            s.col_offset= node.col_offset
            call.args.insert (0, s)

            # locally, replace the whole body with a pass statement
            p= Pass ()
            p.lineno= node.lineno+1
            p.col_offset= node.col_offset+4
            node.body= [p]

        return node

This time I'm using a NodeTransformer for the task. I'm simply taking the body, wrapping it in a Module, pickling that, creating a new Bytes object with that pickle and prepending it to the arguments of ssh(). Then I'm replacing the whole body with a pass statement. So:

with ssh (...):
    <body>

Becomes:

with ssh (pickle.dumps (Module (body=<body>)), ...):
    pass

Easy, right? Back in the context manager, its constructor now takes the pickle of the code to execute remotely as the first argument, and the __enter__() method no longer has to look for the code.

There is one more complication that I want to address in this post, so I can more or less finish with all this. paramiko's SSHClient.exec_command() method returns a sequence of 3 objects that represent the stdin, stdout and stderr of the remote. It would be nice if we could refer to them locally so we can interact with the remote; particularly, get its output. This means that somehow we have to manage to capture that sequence and bind it to a local name before it's too late. There is no obvious answer for this, especially because it means that I have to create a local name, or take it from somewhere, in such a way that either it doesn't clash with the local environment, or the user expects it under a particular name.

So I more or less chose the latter. I'm extending the construct in such a way that if we write with ssh() as foo: ..., that sequence ends up in foo and you can use it after the with's body. So instead of the pass statement for local execution, I want that sequence assigned to foo. For that, we'll need a random variable that will replace foo in the as foo part, and replace pass with foo= <random_var>. It complicates things a little, but nothing really otherworldly :)

            # take the `as foo`, make it `with ssh() as <random>: foo= <random>`
            # so we can use foo locally
            local_var= node.items[0].optional_vars.id
            remote_var= random_var ()

            # ` ... as <random>`
            node.items[0].optional_vars.id= remote_var

            # add `foo= <random>` to the body
            last_lineno= node.body[-1].lineno
            col_offset= node.body[0].col_offset

            target= Name(id=local_var, ctx=Store())
            target.lineno= last_lineno+1
            target.col_offset= col_offset

            value= Name(id=remote_var, ctx=Load())
            value.lineno= last_lineno+1
            # this is a little AR :)
            value.col_offset= col_offset+len (local_var)+ len ('= ')

            ass= Assign (targets=[target], value=value)
            ass.lineno= last_lineno+1
            ass.col_offset= col_offset

            node.body= [ ass ]

Just to be clear, the final code looks like:

with ssh (pickle.dumps (Module (body=<body>)), ...) as <random_var>:
    foo= <random_var>

The next step is to be able to pass the locals() to the remote so it can access the local values.


  1. One note here: I finally managed to make paramiko behave in Python3, so this code is slightly different from the one in the previous post. 

Shelling Python

For a long time I've been toying with the idea of having a better programming language for (shell) scripts than bash. Put another way, a better shell language. Note that I'm not looking for a better shell per se, I just want a language that has better data manipulation than the rudimentary set of tools that normal shells provide. I might be overlooking shells more powerful than bash, like zsh, but so far I have seen them as more pluggable than anything else. And if that language could be Python, all the better.

Enter sh. It's a nice Python module that allows you to call programs as if they were mere functions defined in it. Behind the curtains, sh does some magic passes to make it so. It is fairly well documented, commented (in the code) and maintained (via GitHub issues).
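
The idea is easier shown than told; this is sh's basic, documented usage (the commands are just examples):

import sh

# every attribute lookup resolves to an executable found in $PATH
print (sh.uname ('-a'))

# or import commands as if they were functions defined in the module
from sh import ls
print (ls ('-l', '/tmp'))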

So I started using sh to replace some shell scripts I had with their Python equivalents. So far the experience has been more or less satisfactory, with some papercuts. I think it's easier to explain with a simple-ish example:

Exhibit A. Exhibit B. Try to view them side by side; I aligned them as much as I could. The Python version later diverged to using another set of data.

The most notable thing is that the data manipulation is done so much better in Python. This stems from the fact that bash has no float handling, much less concepts like floor or ceiling. So instead of a couple of ifs in the inner loop, I have to define three arrays and fill them according to some cases that handle the 'signs' of two different 'floats' (they're strings, really). Also, setting the variables west, south, east and north is not only simpler, it also has more error checking. We also save a loop: Python's version has two nested loops, bash's has three.

Now, if you squint a little, you'll see where Python starts to drag. One of the first things to do is to import a lot of modules. It's impressive how many: seven from the standard library, sh itself and one of my own (file_test). Then we try to figure out the extent of this PBF file by piping the output of one command into another. In bash this is just a matter of, you know, using a pipe. sh provides us functions, and we can even nest them, making the output of the inner command go to the outer one. I can live with that, but someone coming from shell scripting might (just might) find it confusing.
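
Nesting is sh's documented way of building a pipe; the commands here are just an illustration:

import sh

# equivalent to `ls -1 | wc -l`: the inner command's output
# becomes the outer command's stdin
print (sh.wc (sh.ls ('-1'), '-l'))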

There's something I think will definitely confuse a shell scripter (a SysAdmin?): the fact that by default, sh makes the commands think that their stdout is a TTY, while that is not the case. In my case that meant that osmpbf-outline1 spat out colors for formatting the output, which meant that I had to explicitly say that the stdout should be a plain file (_tty_out=False). Also, at the beginning, the error handling of sh took me by surprise. That's why at first I said that an exit code of 1 is ok (_ok_code=1), while later I did proper error handling with a try: ... except: block.
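
Both quirks look something like this (_tty_out and _ok_code are sh's real special keyword arguments; the command and file names are made up):

import sh

# without _tty_out=False the command thinks stdout is a TTY
# and may emit ANSI color escapes
out= sh.osmpbf_outline ('region.osm.pbf', _tty_out=False)

# sh raises on unexpected exit codes, so either whitelist them...
sh.grep ('pattern', 'somefile', _ok_code=[0, 1])

# ... or handle the failure properly
try:
    sh.grep ('pattern', 'somefile')
except sh.ErrorReturnCode:
    pass  # grep exits with 1 when nothing matches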

Notice that I also tried to use Python modules and functions where they made as much or more (or the only) sense than using an external command, like when I use os.chdir()2 or os.unlink() instead of rm.

Something else I find lacking in sh is more functions to do shell expansion. It only handles globbing, and only because Python's glob.glob() returns an empty list if the pattern does not match any file, while bash leaves the pattern as it is.
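
The difference is easy to demonstrate:

import glob

# a pattern with no matches simply vanishes...
print (glob.glob ('*.nosuchextension'))   # []

# ... whereas bash would pass the literal string '*.nosuchextension'
# through to the command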

So my conclusion is that sh is already a very good step towards what I would like to have in a shell scripting language, but I see space for some improvements. That's why I started hacking on another module, called ayrton, to try to tackle all this. Notice that at the beginning I tried to hack it in such a way that instead of having to say sh.ls or from sh import ls, you would simply use ls, and I, backstage, would do all the juggling necessary to be equivalent to those3. That is not possible without breaking Python itself, but now that I'm starting to convert my scripts, I see a place for it. I will also try to incorporate whatever I hack back into sh. I'll blog about its details soon.

In the meantime, before I start to really document it, you can take a look at the current version of the most advanced script in ayrton so far4.


  1. Notice how in the Python script this is written as osmpbf_outline. This is due to the fact that function names in Python cannot have '-' in them (it's the 'minus' operator), so sh does a trick: if you put a _ in the function name and such a command does not exist, it will try replacing the underscores with dashes and try again. Hacky, if you want, but it works for me. 

  2. There's no cd command; that's a bash builtin, and it wouldn't make any sense anyway, as the change would only affect the subcommand and not the Python process. 

  3. This includes messing with the resolution of builtins, which strangely works in Python3 but only from the interactive interpreter. I tried to figure out why, but after a while, given that it didn't work out of the box and that what I wanted to do was a terribly ugly hack anyway, I dropped it. 

  4. I chose the .ay extension. I hope it doesn't become .ay! :) 

Checking out from Google

I never wanted Gmail. I always had an email account with which I identify more strongly than with any free service around, even when it means not having the ability of «ubiquitousness», which I never needed anyway. I also already had a Jabber account with which I identified a lot; both accounts were from two NGOs where I participated.

But then a lot of friends already had Gmail accounts when Gtalk came out. Before that we mostly used other IM networks to communicate, and now a new one came along. Everybody was using it, and even though we knew that the backing protocol was XMPP/Jabber, initially Gtalk's servers were not open to other networks, which meant that the only way to communicate via Jabber with people with Gmail accounts was to have one too. So I did.

Even then, I never used that Gmail/Gtalk account as an email account. At some point I started using it as a disposable account for registering with services that I suspected or knew could be spammy, or that I didn't want associated with my real email.

Then I went to Europe and that changed a few things. First, I started another blog on Blogspot for communicating with my family; I already had one, but from before Google bought them. I also started using Picasa to show my family and friends where I had been and what I had seen. This meant that I started mixing a disposable account with a not-so-disposable one, but at the time I didn't care.

Google was never shy about using the network effect. Google Plus was another of the products I was dragged into (just to see a friend's photos); I never correctly set up my personal account for using Google Groups, so I used the Gmail account instead; also Calendar and Docs. Currently my Google account also includes Profile (whatever that is), AdSense (yes, I tried to jump onto that bandwagon too), Android Devices and Play, Chrome Sync (I should have disabled it at first use; I don't really use the browser), iGoogle (does anybody use that?), Tasks (I don't even remember setting those two up) and of course YouTube (which I used to access «adult-only» content).

Apart from all that, I don't always use Google search; I switch between Yahoo! and DuckDuckGo, or more specialized sites like Wikipedia or the documentation sites for the languages and software I use. I can say that the only product I would miss if I couldn't use it, and for which I don't need an account, is Maps. Most of the time I use it for calculating travel times from place A to place B, sometimes passing through place C. For maps themselves I prefer OpenStreetMap. An interesting note about Maps: Google has been using GeoIP to «guess» the language for searches, completely ignoring the settings in my browser (hint: I live in a country whose official language is neither English, which I prefer for technical stuff, nor Spanish, which is my native tongue), but Maps always starts in the USA (I definitely don't live there).

Yesterday morning I opened that Gmail account and got this beautiful surprise message:

«Oye, esto es importante: si alguna vez pierdes el acceso a tu cuenta, podemos enviarte los datos de restablecimiento de contraseña a [my personal email account]». Translated: «Hey, this is important: if you ever lose access to your account, we can send the password reset data to [my personal email account]».

This was too much: I never told Google that the two accounts were related; in particular, this is not a message reminding me that they're using that account for password recovery. They're telling me for the first time that somehow they figured out that that's my other email account, and that's that. I'm pretty sure it's not hard to figure out, but still, I think it's another step too far into my digital life.

Today I will start the process of making this account disappear. A few months ago I started using a Jabber account associated with my personal email account (the address is the same), and given that for some years now you have been able to connect to Gmail/Gtalk users from other XMPP/Jabber servers, I've been slowly migrating contacts to the new Jabber address. Today I sent a last batch of «indispensables» and I'll add the rest on an as-needed basis.

Initially, the data export offered me 4 options: my Picasa photos, which I don't care about, as I already host my own galleries; my Profile data; my Stream data, whatever that is; and my Circles and Contacts data. The good thing is that once you actually are in Google Takeout, you can select which services you want to export, including Blogger, which is the one I'm most interested in.

So after selecting Profile, Blogger and Contacts, I created and downloaded the archive. What is interesting is that during this process I was asked for my password several times. It's probably just extra security for those who have their accounts constantly logged in in their browsers.

The data I extracted was mostly complete, or at least at the level I wanted: all the posts from the 3 blogs in Atom format (for a second I thought it would only include those that are normally in the feed, like the last 10 or 15), the contacts in VCS format and a JSON file of my profile, with not much else. All this contained in a zip file. Mike Hommey has an analysis of the data you get while exporting your Reader account, and how it's not enough for him (and, I would guess, for a lot of people that used Reader for more than just reading feeds).

It's incredible that after all this, it still took me 10 minutes of staring at «you're about to delete everything on Google» to finally hit the «yes, I do» button. The process was way less painful than I envisioned, but then Google is the company whose motto used to be «don't be evil». Definitely, they're not Facebook.

For closing, a statistic that shows the mean lifespan of all the services closed by Google.

Acer Iconia A200

Disclaimer: rough language ahead.

So we bought an Acer Iconia A200. The reasons are not important, and I was hesitant about an Android device, but the features are good, and I can't indefinitely deny that Android is the alternative to Apple. A clear case of choosing the lesser of all evils.

Today it arrived. I did the classic unboxing photo shoot, and that was that. I opted out of all the options that were presented to me that meant telling Google what I was doing or where, got the wifi working, upgraded to 4.0.3, and started testing.

First things first, play with the widgets. There's a 4-square-panel one that uses the top 2 panels for showing the hour, the lower right one for the date (which is not a shortcut to the agenda... WTF?), and the lower left one has a small sun-and-clouds-and-sky image that suggested a weather applet. So I hit it, and it tells me that because I opted out of using wifi spots as an alternative way to locate the device on the surface of our unique world planet, I can't simply use the applet. I mean, there is no way to tell the applet, "hey, you know what? for no particular reason I want to know the weather in, let's say, Saint-Symphorien-du-Mahun, France". I can either enable the "network Location Service" (or something like that, I set the device up in Spanish) or not use it.

So I forget about it. Next step, install a real map application. For that, of course, I had to yield a Google account. I have one, or several, lying around, so I use one of those to let Google Play install things. That's already shitty. But then what does it do? It also uses that account for configuring the Gmail client. Who the fuck told it to do that? I was expecting to set it up with my girlfriend's account. I hope I will be able to do that later. Did these people ever think that a tablet could be used by more than one person? 1 Also, account sync got activated. I had to go to the settings and turn everything off. Still, the Gmail app remains linked to that Google account.

At some point, too, a multimedia server went on. WTF, where did that come from? Who set it up? Why is it enabled by default? Who decides these things? Ok, this is getting close to a rant; I see a tag coming. It also comes with an app that shows up in the "task list" (there is no real concept of app in Android), which is actually an "apps you last used, up to certain values of «last»" list or something like that; the app is called VirusScan. When I try to activate it, it throws me back to the home screen. Go figure.

So back to Google Play. I had already skipped yielding credit card info. Now it's time to opt out of being spammed with news and offers from Google Play. I think I see a pattern here, but at least some things are being asked: "can I, pretty pretty please?". Most of the time, my answer will be "No!", sometimes followed by "Go to your bedroom, no dinner or dessert!".

I select Maverick, which I remember from the time I had a Motorola Droid from work. I chose it because it can handle offline maps. This tablet has a GPS, but it does not have GPRS. The permissions it asks for are a little excessive (for instance, taking photos and videos), but I hope it's just a matter of not using those features and that's it.

I fire it up. I choose OSM maps and it works just fine. The guide for the offline maps is nice; not fancy, but it doesn't have to be. It only needs a µSD card, VFAT formatted, which I don't have (I recently lost an 8GB one with a lot of labels from wines that I liked; it's hard to live in a country where wines are sold by features completely orthogonal to what you're used to). Makes me wonder why I bought a 32GiB device. I'll have to fix that soon.

For some reason I try the image viewer. Of course, it went to the Picasa account associated with that Google account. That was a really bad move. Even the calendar is tied to that account. I'll have to check how to reset that shit. Probably I'll have to reset it to shipping state. Not that I did a lot on it. Also, I missed some privacy settings, like the backup in Google's servers, which at least, when deactivated, erases that data from Google's servers... or so they say >:-)

Android apart, the tablet is nice. Not very heavy, looks good, responsive, can act as USB master, has internal GPS as I already said, and whatnot. I will be actively looking for an app that lets me embed another two, so I can use it as a GPS and media center in the car, which was the primary goal I had for my non-charging-from-USB-anymore N900 :(

So, last tests: the GPS didn't get a good signal on my balcony (my god it's cold outside. What's the temperature? I don't know 2: the weather applet does not work, the second one I put on the home screen reset itself to NYC (!??!) and I didn't try the third one I installed), but at least it got a fix fast. The first guess put me some 80 meters from my actual position, but it slowly got it right.

Now audio: let's see how well it plays MP3s, OGGs and FLACs. I connect it via USB and simultaneously get two surprises: a) the tablet does not seem to charge via USB. Now, sincerely, WTF? I corroborate this by plugging it into a wall USB charger. Nothing. I should have suspected something when I first saw that the wallwart has a cylindrical connector. And b) it announces itself as an MTP device, not as USB mass storage like others do, with an option of PTP for the photos. Amarok can handle MTP devices, but then Amarok tries to be too clever, and handling a big-ish collection with lots of missing data and inconsistencies is a nightmare, so I skip it. mtpfs is somehow broken in Debian Sid, and I don't have the willpower to compile kio-mtp.

So that's it: I have a nice 11" tablet with an operating system I knew I'd hate, which despite having GPS can't charge via USB and is not even USB mass storage, even though it can act as a USB master. Probably the bottom line is: I should have investigated the device better before buying it. If only my N900 would charge via USB again 5...


  1. Ok, yes, it has support for several Google accounts, or at least Gmail ones. 

  2. It's 15°, according to the weather applet on my notebook. How difficult would it be to install GNU/Linux on this beauty? 3

  3. Just kidding 4

  4. Actually, not so much. I mean, the device runs a Google/Linux system. 

  5. For EUR 300 I probably could buy a not refurbished-but-sold-as-new one. But then, I'm not the only intended user for this device.