Articles from edv

  • Holidays in remind: Now Nationwide

    In front of a colourful shop window, a colourful figure of Jesus stands on a small pedestal wrapped in a linen cloth, below it plenty of green scatter and flowers.

    Perhaps a post on holiday dates does not strictly need an illustration. But where else would I put this competition between the Corpus Christi cult (Walldürn, 2014) and modern shop window decoration?

    In my post on holidays in remind I said:

    With a little effort, this should be adaptable to the circumstances in other Bundesländer. Whoever does that is welcome to send the results here. As a great friend of the holiday as such, I would very much like to maintain a repository of holiday files here.

    Well, it turns out it is not actually worth crowdsourcing this at all, because there is a rather useful overview of the holidays in the Astronomische Grundlagen für den Kalender, and that in turn is quickly translated into Python (which is to say: the errors are mine). The result: remind-feiertage.

    This is a Python script that runs without further dependencies and takes one or more Bundesland codes:

    $ python remind-feiertage.py
    Usage: remind-feiertage.py land {land}.
    Gibt remind-Feiertagsdateien für deutsche Länder aus.
    Länderkürzel: BW BY BE BB HB HH HE MV NDS NRW RLP SA SH SL SN TH.
    Erklärung: SL=Saarland, SN=Sachsen, SA=Sachsen-Anhalt.
    

    If you pass all the codes, all the holiday files come out. So you can also simply cut and paste the data for your Bundesland from here:

    $ python remind-feiertage.py BW BY BE BB HB HH HE MV NDS NRW RLP SA SH SL SN TH
    
    ============= BB =============
    # Feiertage in BB
    # CC0; siehe auch https://codeberg.org/AnselmF/remind-feiertage
    
    SET ostern EASTERDATE($Uy)
    
    REM Jan 1 MSG Neujahr
    REM [ostern-2] MSG Karfreitag
    REM [ostern+1] MSG Ostermontag
    REM May 1 MSG Maifeiertag
    REM [ostern+39] MSG Himmelfahrt
    REM [ostern+50] MSG Pfingstmontag
    REM Oct 3 MSG Nationalfeiertag
    REM Dec 25 MSG Weihnachten 1
    REM Dec 26 MSG Weihnachten 2
    REM Oct 31 MSG Reformationstag
    
    
    ============= BE =============
    # Feiertage in BE
    # CC0; siehe auch https://codeberg.org/AnselmF/remind-feiertage
    
    SET ostern EASTERDATE($Uy)
    
    REM Jan 1 MSG Neujahr
    REM [ostern-2] MSG Karfreitag
    REM [ostern+1] MSG Ostermontag
    REM May 1 MSG Maifeiertag
    REM [ostern+39] MSG Himmelfahrt
    REM [ostern+50] MSG Pfingstmontag
    REM Oct 3 MSG Nationalfeiertag
    REM Dec 25 MSG Weihnachten 1
    REM Dec 26 MSG Weihnachten 2
    REM Mar 8 MSG Frauentag
    
    
    ============= BW =============
    # Feiertage in BW
    # CC0; siehe auch https://codeberg.org/AnselmF/remind-feiertage
    
    SET ostern EASTERDATE($Uy)
    
    REM Jan 1 MSG Neujahr
    REM [ostern-2] MSG Karfreitag
    REM [ostern+1] MSG Ostermontag
    REM May 1 MSG Maifeiertag
    REM [ostern+39] MSG Himmelfahrt
    REM [ostern+50] MSG Pfingstmontag
    REM Oct 3 MSG Nationalfeiertag
    REM Dec 25 MSG Weihnachten 1
    REM Dec 26 MSG Weihnachten 2
    REM Jan 6 MSG Epiphanias
    REM [ostern+60] MSG Fronleichnam
    REM Nov 1 MSG Allerheiligen
    
    
    ============= BY =============
    # Feiertage in BY
    # CC0; siehe auch https://codeberg.org/AnselmF/remind-feiertage
    
    SET ostern EASTERDATE($Uy)
    
    REM Jan 1 MSG Neujahr
    REM [ostern-2] MSG Karfreitag
    REM [ostern+1] MSG Ostermontag
    REM May 1 MSG Maifeiertag
    REM [ostern+39] MSG Himmelfahrt
    REM [ostern+50] MSG Pfingstmontag
    REM Oct 3 MSG Nationalfeiertag
    REM Dec 25 MSG Weihnachten 1
    REM Dec 26 MSG Weihnachten 2
    REM Jan 6 MSG Epiphanias
    REM [ostern+60] MSG Fronleichnam
    REM Aug 15 MSG M. Himmelfahrt
    REM Oct 31 MSG Reformationstag
    
    
    ============= HB =============
    # Feiertage in HB
    # CC0; siehe auch https://codeberg.org/AnselmF/remind-feiertage
    
    SET ostern EASTERDATE($Uy)
    
    REM Jan 1 MSG Neujahr
    REM [ostern-2] MSG Karfreitag
    REM [ostern+1] MSG Ostermontag
    REM May 1 MSG Maifeiertag
    REM [ostern+39] MSG Himmelfahrt
    REM [ostern+50] MSG Pfingstmontag
    REM Oct 3 MSG Nationalfeiertag
    REM Dec 25 MSG Weihnachten 1
    REM Dec 26 MSG Weihnachten 2
    REM Oct 31 MSG Reformationstag
    
    
    ============= HE =============
    # Feiertage in HE
    # CC0; siehe auch https://codeberg.org/AnselmF/remind-feiertage
    
    SET ostern EASTERDATE($Uy)
    
    REM Jan 1 MSG Neujahr
    REM [ostern-2] MSG Karfreitag
    REM [ostern+1] MSG Ostermontag
    REM May 1 MSG Maifeiertag
    REM [ostern+39] MSG Himmelfahrt
    REM [ostern+50] MSG Pfingstmontag
    REM Oct 3 MSG Nationalfeiertag
    REM Dec 25 MSG Weihnachten 1
    REM Dec 26 MSG Weihnachten 2
    REM [ostern+60] MSG Fronleichnam
    
    
    ============= HH =============
    # Feiertage in HH
    # CC0; siehe auch https://codeberg.org/AnselmF/remind-feiertage
    
    SET ostern EASTERDATE($Uy)
    
    REM Jan 1 MSG Neujahr
    REM [ostern-2] MSG Karfreitag
    REM [ostern+1] MSG Ostermontag
    REM May 1 MSG Maifeiertag
    REM [ostern+39] MSG Himmelfahrt
    REM [ostern+50] MSG Pfingstmontag
    REM Oct 3 MSG Nationalfeiertag
    REM Dec 25 MSG Weihnachten 1
    REM Dec 26 MSG Weihnachten 2
    REM Oct 31 MSG Reformationstag
    
    
    ============= MV =============
    # Feiertage in MV
    # CC0; siehe auch https://codeberg.org/AnselmF/remind-feiertage
    
    SET ostern EASTERDATE($Uy)
    
    REM Jan 1 MSG Neujahr
    REM [ostern-2] MSG Karfreitag
    REM [ostern+1] MSG Ostermontag
    REM May 1 MSG Maifeiertag
    REM [ostern+39] MSG Himmelfahrt
    REM [ostern+50] MSG Pfingstmontag
    REM Oct 3 MSG Nationalfeiertag
    REM Dec 25 MSG Weihnachten 1
    REM Dec 26 MSG Weihnachten 2
    REM Mar 8 MSG Frauentag
    REM Oct 31 MSG Reformationstag
    
    
    ============= NDS =============
    # Feiertage in NDS
    # CC0; siehe auch https://codeberg.org/AnselmF/remind-feiertage
    
    SET ostern EASTERDATE($Uy)
    
    REM Jan 1 MSG Neujahr
    REM [ostern-2] MSG Karfreitag
    REM [ostern+1] MSG Ostermontag
    REM May 1 MSG Maifeiertag
    REM [ostern+39] MSG Himmelfahrt
    REM [ostern+50] MSG Pfingstmontag
    REM Oct 3 MSG Nationalfeiertag
    REM Dec 25 MSG Weihnachten 1
    REM Dec 26 MSG Weihnachten 2
    REM Oct 31 MSG Reformationstag
    
    
    ============= NRW =============
    # Feiertage in NRW
    # CC0; siehe auch https://codeberg.org/AnselmF/remind-feiertage
    
    SET ostern EASTERDATE($Uy)
    
    REM Jan 1 MSG Neujahr
    REM [ostern-2] MSG Karfreitag
    REM [ostern+1] MSG Ostermontag
    REM May 1 MSG Maifeiertag
    REM [ostern+39] MSG Himmelfahrt
    REM [ostern+50] MSG Pfingstmontag
    REM Oct 3 MSG Nationalfeiertag
    REM Dec 25 MSG Weihnachten 1
    REM Dec 26 MSG Weihnachten 2
    REM [ostern+60] MSG Fronleichnam
    REM Oct 31 MSG Reformationstag
    
    
    ============= RLP =============
    # Feiertage in RLP
    # CC0; siehe auch https://codeberg.org/AnselmF/remind-feiertage
    
    SET ostern EASTERDATE($Uy)
    
    REM Jan 1 MSG Neujahr
    REM [ostern-2] MSG Karfreitag
    REM [ostern+1] MSG Ostermontag
    REM May 1 MSG Maifeiertag
    REM [ostern+39] MSG Himmelfahrt
    REM [ostern+50] MSG Pfingstmontag
    REM Oct 3 MSG Nationalfeiertag
    REM Dec 25 MSG Weihnachten 1
    REM Dec 26 MSG Weihnachten 2
    REM [ostern+60] MSG Fronleichnam
    REM Oct 31 MSG Reformationstag
    
    
    ============= SA =============
    # Feiertage in SA
    # CC0; siehe auch https://codeberg.org/AnselmF/remind-feiertage
    
    SET ostern EASTERDATE($Uy)
    
    REM Jan 1 MSG Neujahr
    REM [ostern-2] MSG Karfreitag
    REM [ostern+1] MSG Ostermontag
    REM May 1 MSG Maifeiertag
    REM [ostern+39] MSG Himmelfahrt
    REM [ostern+50] MSG Pfingstmontag
    REM Oct 3 MSG Nationalfeiertag
    REM Dec 25 MSG Weihnachten 1
    REM Dec 26 MSG Weihnachten 2
    REM Jan 6 MSG Epiphanias
    REM Oct 31 MSG Reformationstag
    
    
    ============= SH =============
    # Feiertage in SH
    # CC0; siehe auch https://codeberg.org/AnselmF/remind-feiertage
    
    SET ostern EASTERDATE($Uy)
    
    REM Jan 1 MSG Neujahr
    REM [ostern-2] MSG Karfreitag
    REM [ostern+1] MSG Ostermontag
    REM May 1 MSG Maifeiertag
    REM [ostern+39] MSG Himmelfahrt
    REM [ostern+50] MSG Pfingstmontag
    REM Oct 3 MSG Nationalfeiertag
    REM Dec 25 MSG Weihnachten 1
    REM Dec 26 MSG Weihnachten 2
    REM Oct 31 MSG Reformationstag
    
    
    ============= SL =============
    # Feiertage in SL
    # CC0; siehe auch https://codeberg.org/AnselmF/remind-feiertage
    
    SET ostern EASTERDATE($Uy)
    
    REM Jan 1 MSG Neujahr
    REM [ostern-2] MSG Karfreitag
    REM [ostern+1] MSG Ostermontag
    REM May 1 MSG Maifeiertag
    REM [ostern+39] MSG Himmelfahrt
    REM [ostern+50] MSG Pfingstmontag
    REM Oct 3 MSG Nationalfeiertag
    REM Dec 25 MSG Weihnachten 1
    REM Dec 26 MSG Weihnachten 2
    REM [ostern+60] MSG Fronleichnam
    REM Aug 15 MSG M. Himmelfahrt
    REM Oct 31 MSG Reformationstag
    
    
    ============= SN =============
    # Feiertage in SN
    # CC0; siehe auch https://codeberg.org/AnselmF/remind-feiertage
    
    SET ostern EASTERDATE($Uy)
    
    REM Jan 1 MSG Neujahr
    REM [ostern-2] MSG Karfreitag
    REM [ostern+1] MSG Ostermontag
    REM May 1 MSG Maifeiertag
    REM [ostern+39] MSG Himmelfahrt
    REM [ostern+50] MSG Pfingstmontag
    REM Oct 3 MSG Nationalfeiertag
    REM Dec 25 MSG Weihnachten 1
    REM Dec 26 MSG Weihnachten 2
    REM [ostern+60] MSG Fronleichnam
    REM Oct 31 MSG Reformationstag
    REM Wednesday Nov 16 MSG Buß+Bettag
    
    
    ============= TH =============
    # Feiertage in TH
    # CC0; siehe auch https://codeberg.org/AnselmF/remind-feiertage
    
    SET ostern EASTERDATE($Uy)
    
    REM Jan 1 MSG Neujahr
    REM [ostern-2] MSG Karfreitag
    REM [ostern+1] MSG Ostermontag
    REM May 1 MSG Maifeiertag
    REM [ostern+39] MSG Himmelfahrt
    REM [ostern+50] MSG Pfingstmontag
    REM Oct 3 MSG Nationalfeiertag
    REM Dec 25 MSG Weihnachten 1
    REM Dec 26 MSG Weihnachten 2
    REM [ostern+60] MSG Fronleichnam
    REM Sep 20 MSG Weltkindertag
    REM Oct 31 MSG Reformationstag
    

    You will find hints on how to use this with remind in the Baden-Württemberg post.
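
    If you would rather have one file per Land than cut and paste, something like this should do – a quick sketch against my reading of the output above (in particular, the stripping of the ===== separators and the file names are my assumptions):

    import subprocess

    # Sketch: write one remind file per Land from the script's output.
    LAENDER = "BW BY BE BB HB HH HE MV NDS NRW RLP SA SH SL SN TH".split()
    for land in LAENDER:
        out = subprocess.run(
            ["python", "remind-feiertage.py", land],
            capture_output=True, text=True, check=True).stdout
        keep = [l for l in out.splitlines() if not l.startswith("=====")]
        with open(f"feiertage-{land.lower()}.rem", "w") as f:
            f.write("\n".join(keep).strip() + "\n")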

    For clarity, and as my utmost concession to Search Engine Optimisation, let me spell out the Bundesland codes:

    BW:Baden-Württemberg
    BY:Bayern
    BE:Berlin
    BB:Brandenburg
    HB:Bremen
    HH:Hamburg
    HE:Hessen
    MV:Mecklenburg-Vorpommern
    NDS:Niedersachsen
    NRW:Nordrhein-Westfalen
    RLP:Rheinland-Pfalz
    SA:Sachsen-Anhalt
    SH:Schleswig-Holstein
    SL:Saarland
    SN:Sachsen
    TH:Thüringen
  • Holidays in Baden-Württemberg for the Calendar Tool remind

    Screenshot of a terminal with a blue background. Shown are the command line remind -cu+2 ~/.reminders 2024-03-24 and an ASCII calendar in which Karfreitag and Ostermontag are marked.

    Granted: in real life I mostly look at my remind calendar as a Tk widget or in HTML, but in a pinch ASCII will do, for instance when, as now, I want to show off my holidays.

    When I recently migrated to Debian bookworm, I finally had to say goodbye to the GPE calendar[1], because after long years as an orphaned package it had at last picked up a conflict with something important. It was high time anyway to migrate to something more sensible for managing appointments. In my case: remind. This now feels – together with tkremind (also packaged for Debian) and a:

    import subprocess

    # Run remind and pipe its output through rem2html to get an HTML table.
    reminders = subprocess.run(["remind", "-pp", "-c+3",
        "/home/msdemlei/.reminders"],
      capture_output=True).stdout
    reminders_html = subprocess.run(["rem2html", "-tableonly"],
      capture_output=True, input=reminders).stdout
    

    in the Python script that produces my daily summary in HTML – as if it could last for the next 20 years.

    With that feeling I finally wanted to configure the display of holidays, something I had procrastinated on with the GPE calendar, year after year, right up to its bitter end. Alas, neither Google nor Duckduckgo could come up with anything useful for a query like "remind" Feiertage "Baden-Württemberg".

    To change that, I am writing this post. Specifically, I have just written the following remind file with the legal holidays in Baden-Württemberg:

    # Feiertage in Baden-Württemberg (Stand 2024)
    #
    # Verteilt unter CC0.
    
    SET ostern EASTERDATE($Uy)
    
    REM Jan 1 MSG Neujahr
    REM Jan 6 MSG Epiphania
    REM May 1 MSG Kampftag
    REM Oct 3 MSG Nationalfeiertag
    REM Nov 1 MSG Allerheiligen
    REM Dec 25 MSG Weihnachten 1
    REM Dec 26 MSG Weihnachten 2
    REM [ostern-2] MSG Karfreitag
    REM [ostern+1] MSG Ostermontag
    REM [ostern+39] MSG Himmelfahrt
    REM [ostern+50] MSG Pfingstmontag
    REM [ostern+60] MSG Fronleichnam
    

    With a little effort, this should be adaptable to the circumstances in other Bundesländer. Whoever does that is welcome to send the results here. As a great friend of the holiday as such, I would very much like to maintain a repository of holiday files here.

    How do I use this? Well, I have a directory for all sorts of stuff that should live somewhere in my home for a longer time, but not exactly in its root: ~/misc. That is where these holidays now live, as bawue.rem.

    The actual appointments are – as you could already guess from the Python above, and with great pleasure decidedly XDG-nonconformant – in a file ~/.reminders. And in there I now have:

    INCLUDE /usr/share/remind/lang/de.rem
    DO misc/bawue.rem
    

    The first line gives German-language labels; the DO (rather than INCLUDE) in the second line is important because it makes remind resolve the path relative to the location of the reminders file.

    And with that, I will never again put work appointments on holidays. So there.

    [1]GPE here stands for the long-forgotten GPE Palmtop Environment; accordingly, the GPE calendar had already been smelling rather ripe for a decade.
  • Select And Merge Pages From Lots Of PDFs Using pdftk

    For most of my ad-hoc PDF manipulation needs (cut and paste pages, watermark, fill forms, attach files, decrypt, etc), I am relying on pdftk: Fast, Debian-packaged (in pdftk-java), and as reliable as expectable given the swamp of half-baked PDF writers. So, when I recently wanted to create a joint PDF from the first pages of about 50 other PDFs, I immediately started thinking along the lines of ls and perhaps a cat -b (which would number the lines and thus files) and then pdftk.

    Why cat -b? Well, to do cut-and-merge with pdftk, you have to come up with a command line like:

    pdftk A=input1.pdf B=input2.pdf cat A1-4 B5-8 output merged.pdf
    

    This would produce a document merged.pdf from pages 1 through 4 of input1.pdf and pages 5 through 8 of input2.pdf. I hence need to produce a “handle” for each input file, for which something containing the running number would appear an obvious choice.

    My initial plan had therefore been to turn lines like 1 foo.pdf from ls | cat -b into doc1=foo.pdf with a dash of sed and go from there. If I were more attentive than I am, I would immediately have realised that won't fly: With handles containing digits, pdftk would have no robust way to tell whether doc12 means “page 12 from doc“, “page 2 from doc1“, or “all pages from doc12”. Indeed, pdftk's man page says:

    Input files can be associated with handles, where a handle is one or more upper-case letters[.]

    Oh dang. I briefly meditated whether I could cook up unique sequences of uppercase handles (remember, I had about 50 files, so just single uppercase letters wouldn't have done it) using a few shell hacks. But I then decided[1] that's beyond my personal shell script limit and calls for a more systematic programming language like, umm, python[2].

    The central function in the resulting little program is something that writes integers using uppercase letters only. Days later, I can't explain why I did not simply exploit the fact that there are a lot more uppercase letters than there are decimal digits, which makes producing uppercase labels from integers solvable using str.translate. A slightly overcompact rendering of that would be:

    # Map the code points of the digits 0..9 (48..57) to the letters A..J.
    DIGIT_TO_LETTER = {ascii: chr(ascii+17) for ascii in range(48, 58)}
    def int_to_uppercase(i):
      return str(i).translate(DIGIT_TO_LETTER)
    

    (if you don't remember the ASCII table: 48 is the ASCII code for zero, and 48+17 is 65, which is the ASCII code for the uppercase A).
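
    A couple of quick checks (mine, not from the post's script) that the mapping does what it should:

    # Digits map to A..J, so distinct integers yield distinct
    # uppercase-only strings:
    assert int_to_uppercase(0) == "A"
    assert int_to_uppercase(12) == "BC"
    assert int_to_uppercase(509) == "FAJ"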

    But that's not what I did, perhaps because of professional deformation (cf. my crusade against base-60). Instead, I went for a base-26 representation using uppercase letters only, just like the common base-16 (“hex”) representation that, however, uses 0-9 and A-F and thus is unsuitable here. With this, you would count like this (where more significant “digits” are on the right rather than on the western-conventional left here because it doesn't matter and saves a reverse):

    A, B, C, ... X, Y, Z, AB, BB, CB, ... ZB, AC, BC, ...
    0, 1, 2, ... 23, 24, 25, 26, 27, 28, ... 51, 52, 53, ...
    

    I freely admit I was at first annoyed that my handles went from Z to AB (rather than AA). It did take me longer than I care to confess here to realise that's because A is the zero here, and just like 01 is the same as 1 decimal[3], AA is equal to A (and BA equal to B) in that system. Consequently, my function for unique handles didn't produce AA even though I hadn't realised the problem when writing the function – there's nothing as practical as a good theory.
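
    If the A-is-the-zero argument still seems fishy, a small inverse function (my own illustration, not part of the script) makes the equivalences explicit:

    def handle_to_int(hdl):
        # Inverse of the counting above: A=0 ... Z=25, least significant
        # "digit" first.
        return sum((ord(ch) - 65) * 26**i for i, ch in enumerate(hdl))

    assert handle_to_int("A") == handle_to_int("AA") == 0   # A is the zero
    assert handle_to_int("B") == handle_to_int("BA") == 1
    assert handle_to_int("Z") == 25 and handle_to_int("AB") == 26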

    With that function, the full ad-hoc script to pick page one (that's encoded in the f"{hdl}1" in case you want other page ranges) from all files matching /some/dir/um*.pdf looks like this:

    import glob
    import os
    import subprocess
    
    def make_handle(ind):
        """returns a pdftk handle for a non-negative integer.
    
        This is a sequence of one or more uppercase letters.
        """
        hdl = []
        while True:
            hdl.append(chr(65+ind%26))
            ind = ind//26
            if not ind:
                break
        return "".join(hdl)
    
    
    sources = [(make_handle(ind), name)
      for ind, name in enumerate(sorted(glob.glob("/some/dir/um*.pdf")))]
    subprocess.check_call(["pdftk"]+[f"{hdl}={name}" for hdl, name in sources]+
        ["cat"]+[f"{hdl}1" for hdl, _ in sources]+
        ["output", "output.pdf"])
    

    Looking back, it is not only the massively silly base-26 handles that are unnecessarily complicated. Had I realised from the beginning that I would be using python in the end, I would probably have gone for pdfrw right away; while the complexity in terms of Debian dependencies is roughly the same (“one over what you'll already have”), avoiding a subprocess call is almost always a win[4].

    But these misgivings are one reason why I wrote this post: This is a compact illustration of the old programmers' wisdom to “Plan to throw one away – you will anyway”. Except that for tiny little ad-hoc scripts like this, a bit of baroque adornment and an extra process do not hurt, and the code above ought to work just fine if you need to produce a PDF document from some fixed page range of a few dozen or hundred other PDF documents.

    [1]Decided foolishly, by the way, as tr 0123456789 ABCDEFGHIJ immediately turns a sequence of distinct integers into a sequence of distinct uppercase-only strings.
    [2]I don't feel too good about being in the mainstream for a change, but I can prove that I'd have chosen python long before it became fashionable.
    [3]Not in Python, though, where 01 thankfully is a syntax error, and not necessarily in C, where you may be surprised to see that, for instance, 077 works out to 63 decimal. I would rank this particular folly among the most questionable design decisions in the history of programming languages.
    [4]That, and my growing suspicion that “you'll already have a Java runtime on your box” is quickly becoming a rather daring assumption. Once the assumption is plain wrong, pdftk stops being a cheap dependency, as it will pull in a full JRE.
  • Saner Timestamps With DIT: In Pelican and Beyond

    The other day Randall Munroe posted XKCD 2867:

    This lament about time calculus struck me as something of a weird (pun alarm) synchronicity, as one evening or two before that I had written a few lines of flamboyant time-related code.

    Admittedly, I was neither concerned with “sin to ask” nor with “impossible to know”: Both are a consequence of the theory of relativity, which literally states that (against Newton) there is no absolute time, and hence when two clocks are in two different places, even synchronising them once is deep science.

    Sold on Decimal Internet Time

    No, my coding was exclusively about the entirely unnecessary trouble of having to account for time zones, daylight saving time, factors of 60, 24, sometimes 30, 31, 29, or 28, and quite a few other entirely avoidable warts in our time notation. Civil time on Earth is not complicated because of physics. On human scales of time, space, velocity, gravitation, and precision, it is not particularly hard to define an absolute time even though it physically does not exist.

    Rather, civil time calculations are difficult because of the (pun alarm) Byzantine legacy from Babylon – base-60 and base-12, seven-day weeks, moon calendar – exacerbated by misguided attempts of patching that legacy up for the railway age (as in: starting in 1840, by and large done about 1920). On top of that, these patches don't work particularly well even for rail travel. I speak from recent experience in this particular matter.

    Against this backdrop I was almost instantly sold on DIT, the Decimal Internet Time apparently derived from a plan a person named Anarkat (the Mastodon link on the spec page is gone now) proposed: Basically, you divide the common day in what currently is the time zone UTC-12 into 10 parts and write the result in decimal. Call the integer part “Dek” and the first two digits after the dot “Sim”. That's a globally valid timestamp precise to about a (Babylonian) minute. For example, in central Europe what's now 14:30 (or 15:30 during daylight saving time; sigh!) would be 0.62 in DIT, and so would Babylonian 13:30 in the UK or 8:30 in Boston, Mass. This may look like a trivial simplification, but it makes a universe of a difference in how much less painful time calculations become.
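
    For the sake of concreteness, here is a minimal Python sketch of that rule (the function name and the truncation of the second decimal are my own choices):

    from datetime import datetime, timedelta, timezone

    def to_dit(t):
        # DIT (dek.sim) for an aware datetime, following the rule above:
        # take the time in UTC-12 and write the day fraction in decimal.
        local = t.astimezone(timezone(timedelta(hours=-12)))
        deks = (local.hour*3600 + local.minute*60 + local.second)/8640
        return f"{int(deks)}.{int(deks % 1 * 100):02d}"

    # 14:30 in central Europe (UTC+1) indeed comes out as 0.62:
    print(to_dit(datetime(2024, 1, 15, 14, 30,
        tzinfo=timezone(timedelta(hours=1)))))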

    I admit I'd much rather have based time keeping on the second (the SI unit of time), but I have to give Anarkat that the day is much more important in most people's lives than the second. Thus, their plan obviously is a lot saner for human use than any I would have come up with (“let's call the kilosecond kes and use that instead of an hour…”)[1].

    If you use pelican…

    Since I think that this would be a noticeably better world if we adopted DIT (clearly, in a grassrootsy step-by-step process), I'd like to do a bit of propaganda for it. Well, a tiny bit perhaps, but I am now giving the timestamps of the posts on this blog in StarDIT, which is an extension of DIT where you count the days in a (Gregorian, UTC-12) year and number the years from the “Holocene epoch”, which technically means “prepend a one to the Gregorian year number” (in other words, add 10'000 to “AD”).

    Like DIT itself, with sufficient adoption StarDIT would make some people's lives significantly simpler, in this case in particular historians (no year 0 problem any more!). I would like that a lot, too, as all that talk about “Domini” doesn't quite cater to my enlightened tastes.
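
    Expressed as code, the recipe is compact enough; a minimal sketch (function name and details mine, not from the spec):

    from datetime import datetime, timedelta, timezone

    def to_stardit(t):
        # StarDIT as described above: Holocene year and running day
        # number within the year, both taken in UTC-12.
        local = t.astimezone(timezone(timedelta(hours=-12)))
        return f"{local.year + 10000}:{local.timetuple().tm_yday}"

    print(to_stardit(datetime.now(timezone.utc)))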

    How do I produce the StarDITs? Well, I first wrote a rather trivial extension for my blog engine, pelican, which adds an attribute starDIT to posts. You will find it as ditdate.py in my pelican plugins repo on codeberg. Activate it by copying the file into your blog's plugins directory and adding "ditdate" to the PLUGINS list in your pelicanconf.py. You can then use the new attribute in your templates. In mine, there is something like:

    <a href="http://blog.tfiu.de/mach-mit-bei-dit.html">DIT</a>
    <abbr class="StarDIT">{{ article.starDIT[:-4] }}</abbr>
    (<abbr class="date">{{ article.date.strftime("%Y-%m-%d") }}</abbr>)
    

    If you don't use pelican…

    I have also written a Python module to convert between datetimes and DITs which shows a Tkinter UI when called as a program:

    A small grey window on top of some bright background; sans-serif letters say 12023:351 (small) 1.08.5 (large).

    I have that on my desktop now. And since alarmingly many people these days use a web browser as their primary execution platform, I have also written some HTML/Javascript to have the DIT on a web page and its title (also hosted here).

    Both of these things are in my dit-py repo on codeberg, available under CC0: Do with them whatever you want. (Almost) anything furthering the cause of DIT is – or so I think I have argued above – very likely progress overall.

    [1]If you speak German or trust automatic translation, I have a longer elaboration of DIT aspects I don't like in a previous blogpost.
  • Join In With DIT

    [In case you're coming here from an English-language article, see here]

    A small grey window on top of some bright background; sans-serif letters say 12023:351 (small) 1.08.5 (large).

    Here, my DIT clock is showing the time (and the date) in my sawfish dock. No, this is not Star Trek nonsense. I rather hope that something of this kind will, over time, become the in-accessory: whoever doesn't have one may no longer say “Digitalisierung” [well: fortunately, nobody who might want that sort of thing has the means to enforce such a ban].

    Out of the Babylonian Confusion!

    After 3000 years, there are not many reasons left to be cross with the great warlords of Babylon and their Mesopotamian colleagues. With the Babylonian clergy, things are different: not only do sexagesimal coordinates, in astronomy for instance, go back to them, but also all the crooked stuff with factors of 60 or 24 or 7 that we still, entirely without need[1], wrestle with in our timekeeping.

    The Mesopotamian priests bear no guilt for the nuisance of time zones and the related daylight-saving misery, but I have long wanted to get rid of those too, and not only out of recent personal affliction.

    And so the Decimal Internet Time (DIT) took my heart (almost) by storm: a proposal to write the time by dividing the day into tenths. This hour replacement is called a Dek (from Dekatag) and corresponds to almost two and a half (namely 24/10) Babylonian hours.

    Even for fairly coarse times, Deks will usually not suffice, which is why they are divided into a hundred Sims (from Decimal Minute). Such a Sim corresponds to 86 seconds and is thus quite close to a Babylonian minute. That would be about the unit for appointments: “lunch at nine point seventy-five”, or for all I care “twenty-five to zero”, since waiting for some 100 seconds should be a problem for nobody, and trains don't run much more precisely even in Switzerland. But because it is decimal, it would also be no problem to simply stop after the tens: “I'll leave at 7.8”, a statement accurate to about a quarter of an hour – very much on a human scale, in my book.

    I find all this entirely plausible; if it seems odd to you instead, that is – I have to tell you – very much parallel to the aversion of people raised on imperial units to saying something like “ein Meter fünfundachtzig”, when “six foot two inches” is soo much more intuitive.

    To get a feeling for decimal time, I can offer the following quick reference for German (CET) habits:

    DIT CET in words
    0 noon (13:00)
    1.5 afternoon (~16:30)
    2 early evening (~18:00)
    3 evening (20:00)
    4.5 midnight
    6 ungodly hour (3:30)
    7.5 morning (7:00)
    9 mid-morning (10:30)
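
    That table is pure arithmetic; if you mistrust my mental maths, this throwaway snippet of mine reproduces it:

    # Check of the quick reference: 1 dek = 2.4 h, and 0 dek = 13:00 CET.
    for dit in [0, 1.5, 2, 3, 4.5, 6, 7.5, 9]:
        hours = (dit * 2.4 + 13) % 24
        print(f"{dit:>4} -> {int(hours):02d}:{int(hours % 1 * 60):02d} CET")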

    Deseks: Perhaps Not So Useful

    I am less enthusiastic about DIT's smallest unit of time, the decimal second, Desek or Sek for short; that is a day/100'000, versus a day/86'400 for the SI second.

    As an SI taliban, I would really have preferred to build the whole decimal timekeeping on the second and to establish the kilosecond (roughly a quarter of an hour) as the hour replacement. I do admit, though, that DIT's choice of the day as reference is a better plan for human use than the kilosecond (of which there are 86.4 in a day, which is admittedly silly).

    But for purely human use (appointments, day plans, timetables…), times on the scale of seconds usually play no role, and so I would simply have left the Deseks out and said: whoever needs more precision should go to the physicists and take their second. That a Sim consists of pretty much exactly 86.4 of these SI seconds is a cute curiosity rather than a practical difficulty, and at any rate no more bothersome than the 60 seconds of a Babylonian minute.

    And no, redefining the physical second as day/100'000 is not worth the effort; the Earth's rotation has long become too inaccurate for that, and besides, we don't want the leap-second nonsense any more anyway. The second is physics; it need not have anything to do with human times. In that sense: it would be nicer if there were no Desek, but I don't want to quarrel over it either.

    Good Riddance, Time Zones

    Besides the use of the decimal system, DIT's second great advance is that it runs uniformly across the whole world. In DIT, there are no more time zones.

    More in passing, things are arranged so that the Babylonian 12 o'clock – noon, or 5 Deks in DIT – roughly coincides with the culmination of the Sun, i.e. a naive definition of noon, in the current UTC-12 time zone (the “earliest” one there is). But – contrary to the somewhat anti-British-sounding sentiment in the DIT spec – that really doesn't matter. All that matters is that DIT clocks show the same value all over the world. Let me update my recent fantasy for DIT:

    Would it really be a problem if people living in Kazakhstan considered 2 Deks a good time for lunch, while the folks in New York tucked into theirs at around seven and a half Deks? I bet everyone would get used to it quickly. It is certainly easier than the daylight-saving mantra “spring forward, fall back”.

    Granted: had I designed DIT, I would have done without the reference point 12 Babylonian hours away from UTC, since all decent timestamps already are in UTC. Once you move away from that for DIT, date and time become entangled when converting those decent timestamps to DIT – that is, in the transition from Babylonian time to DIT, the date may change as well.

    That is a complication without any discernible benefit; it is, after all, no privilege to have the Sun culminate at 5 Deks, and so the attempt to “favour” as few people as possible in this way is silly. But so be it.

    The Date to Go With the Time: StarDIT

    In particular, none of this matters any more once you write the date in DIT as well. For that there is an extension of DIT towards longer time spans, called StarDIT in the proposal. Is society nerdy enough yet to get away with such a name? I don't know.

    While we are at names: the I in DIT, “Internet”, is not quite serious either. I would perhaps rather read it as “International” – internationalism is and remains one of the more likeable isms.

    In the StarDIT plan, at any rate, the date consists of the (Gregorian) year relative to a mildly de-Christianised epoch plus the running day number within the year, separated by a colon – for today, something like 12023:350. If you want weeks, take the tens part of the day number and write an x after it; right now we are thus in week 35x.

    Ten-day weeks carry a certain risk of five working days turning into eight; an analogous effect already broke the neck of the French revolutionary calendar (in my telling of history). But we need to talk about drastic working time reduction anyway if we are somehow to get the still-growing CO₂ emissions under control. The transition to DIT could thus well go along with an interim model of still five days of wage labour, but then also five days of self-determination (“weekend”) – before wage labour shrinks further, of course.

    Cute, if not terribly practical for everyday life, is the DIT idea of replacing the Christian epoch (for my own reservations, cf. footnote 1 here) with the Holocene year. That is just the ordinary Gregorian year, except that the count starts at 9'999 BCE (which is to say: simply add 10'000 to CE years).

    It is surely splendid if people stop spreading what are ultimately pious fairy tales through labels like “BC” and “AD”, and it is also great if the year-0 problem (there is no year 0: the current count goes straight from 1 BC to AD 1, which is also why DIT's reference epoch is a little crooked) disappears completely from post-mesolithic historiography at least. But is that a good deal if you pay for it with an extra digit in year numbers? Five, haha, is not necessarily trumps.

    Implementation: Python and Javascript

    To lead by good example, I want to get a feeling for DIT myself. To that end, I have written a Python module that supports conversions of Python datetimes to and from DIT. It is so little code that I would rather not tempt anyone into importing it as a dependency. Hence I have not pushed it to PyPI either; just have a look at my codeberg repo. My suggested procedure is copy-paste (or simply putting the module into your own source tree).

    The module also works as a program; just drop it into your path and make it executable. It will then show a DIT clock in a Tkinter window. I have put that into my sawfish dock – see the opening image.

    I have also written a piece of Javascript that can compute and display DITs. It is embedded in the file dit.html in the repo, or reachable at https://blog.tfiu.de/media/2023/dit.html. People who (quite unlike me) use tabs extensively in their browsers can open that web page and, with a bit of luck (namely, if the browser actually runs the Javascript …

  • A Critical Listening Tip: “Afrika im Aufbruch”

    A relatively empty, wide road with high-rise buildings around it and palm trees on it.

    Luanda in 2013 (by now it looks much worse): high-rises, wide roads, and lots of concrete. Is that really something you would want to set out towards? Photo CC-BY Fabio Vanin

    In November, Deutschlandfunk's Hintergrund slot (daily, 18:40 to 19:00) ran a small series titled “Afrika im Aufbruch” (roughly: Africa on the move). That was quite gratifying at first, because its narrative was not the usual one of the European gaze southward.

    It was not (primarily) about murder, manslaughter, civil war with machetes, or “waves” of people “surging against our borders”. The alternative and presumably considerably more representative narratives are well worth listening to.

    In detail, though, many of the stories do hurt. I myself grew up into a world in which the name “Third World shop” was no longer thinkable without quotation marks, in which problems with terms like “developing country” were commonplace. That word quite openly claims that “the others” should please “develop”, and a little less openly implies “towards us”. Back then, more critical people may have said “Tricont” instead (consequences), which by now has turned into “Global South”.

    In the DLF series, by contrast, there is certainly talk of decolonisation in matters of culture and identity, but economically almost everything is downright frighteningly orthodox. In the episode of 14 November, for instance, a manager of a South African “start-up” that wants to combine raw material extraction with large-scale industry says:

    We have the chance to build a brand-new mega-industry in Africa. The batteries are to be used here on the continent first. Seventy percent of Africa has no stable energy supply, and without electricity there is no industrialisation. If we solve this problem with storage, more can be produced, jobs are created, and poverty is reduced.

    The thought that poverty today does not come from people working (or producing) too little does briefly flash up just before that in connection with Botswana, but here – as, really badly, in the payment apps episode – the presupposition is quite clear: “the Africans”, too, are supposed to copy our nonsense.

    And nonsense it is when as many people as possible lock themselves into a rolling tin cage for an hour every day, only to spend eight hours doing tedious rubbish that, typically while releasing plenty of filth, tends to make the world worse (manufacturing cars and other weapons, advertising and other intoxicant media, financial “products” and other legal addictive substances, single-family homes and oil pipelines, glyphosate and high fructose corn syrup, and so on and so forth).

    To round the madness off completely, the participants in this exercise – despite a historically necessarily unique squandering of natural resources – are still (and not even without reason) scared about whether they will have a roof over their heads next year, or whether their pension will “be enough to live on”; more scared, presumably, than average people in large parts of present-day Africa.

    If instead you soberly worked out the minimum of production that covers the basic needs of all people durably and reliably, at minimal cost to environment and humans: well, that would really be the “leapfrogging” of the North's mistakes briefly hinted at in the episode of 16 November. Instead, Antje Diekhans there pins the leapfrogging first on mobile versus landline telephony and, indirectly, of all things on “fintech” – honestly: can you imagine a wilder waste of human creativity?

    I freely admit that I have no idea who, in the various regions of this fifth of the Earth's land area, advocates less European approaches. Of course the growth religion will be as widespread in Africa as it is here. In that respect you may argue that Afrika im Aufbruch is simply good journalism, that is, a look at the world as it happens to be. Perhaps it is just the word “Aufbruch” that doesn't sit well with me, although nobody says you can only set out for somewhere better.

    Incidentally, I was pointed to the Luanda development symbolised in the opening image (the link goes to the government version) by a talk by Boniface Mabanza Bambu in June 2023. There is also quite a bit of counter-programming to the Aufbruch series from him on the net, e.g. his article in Uneven Earth of June 2019, or an interview with the programmatic title Die EU sollte Afrika in Ruhe lassen (“The EU should leave Africa alone”) in the taz of last April (cf. this). Or, particularly relevant for people in the Rhine-Neckar area because of the BASF connection, his treatise on the eighth anniversary of the Marikana massacre.

  • Another Bookworm Regression: D-bus, X11 Displays, purple-remote, Oh My!

    When I reported on what broke when I upgraded to Debian bookworm, I overlooked that my jabber presence management (where I'm offline at night and on weekends) no longer worked. Figuring out why and fixing it was a dive into D-Bus and X11 that may read like a noir detective novel, at least if you are somewhat weird. Let me write it up for your entertainment and perhaps erudition.

    First off, in contrast to the March post, I have migrated to pidgin as my XMPP (“jabber”) client; at its core, presence management still involves a script in /etc/network/if-*.d where I used to call something like:

    su $DESKTOP_USER -c "DISPLAY=:0 purple-remote getstatus"
    

    whenever a sufficiently internetty network interface went up or down, where DESKTOP_USER contains the name under which I'm running my X session (see below for the whole script with the actual presence-changing commands).

    Purple-remote needs to run as me because it should use my secrets rather than root's. But it was the DISPLAY=:0 thing that told purple-remote how to connect to the pidgin instance to interrogate and control. Like most boxes today, mine is basically a single-user machine (at least as far as “in front of the screen” goes), and hence guessing the “primary” X display is simple and safe.

    Between X11 and the D-Bus

    That purple-remote needed the DISPLAY environment variable was actually almost a distraction from the start. There are many ways for Unix programs to talk to each other, and DISPLAY might have pointed towards 1980ies-style X11 inter-client communication. But no, the purple-remote man page already says:

    This program uses DBus to communicate with Pidgin/Finch.

    Correctly spelled D-Bus, this is one of the less gruesome things to come out of the freedesktop.org cauldron, although it is still riddled with unnecessarily long strings, unnecessarily deep hierarchies, and perhaps even unnecessary use of XML (though I feel sympathies in particular for that last point).

    But that's not what this post is about. I'm writing this because after upgrading to Debian bookworm, purple-remote no longer worked when used from my if-up.d script. Executing the command in a root shell (simulating how it would be called from ifupdown) showed this:

    # DESKTOP_USER=anselm su $DESKTOP_USER -c "DISPLAY=:0 purple-remote getstatus"
    No existing libpurple instance detected.
    

    A quick glance at the D-Bus Specification gives a hint at how this must have worked: dbus-launch – which is usually started by your desktop environment, and in my case by a:

    export $(dbus-launch --exit-with-x11)
    

    in ~/.xinitrc – connects to the X server and leaves a “property” (something like a typed environment variable attached to an X11 window) named _DBUS_SESSION_BUS_ADDRESS in, ah… for sure the X server's root window [careful: read on before believing this]. As the property's value, a D-Bus client would find a path like:

    unix:path=/tmp/dbus-1cAbvsX6FD,guid=795a0d...
    

    and it could open that socket to talk to all other D-Bus clients started within the X session.

    Via apropos to xprop to Nowhere

    So… Does that property exist in the running X server? Hm. Can I figure that out without resorting to C programming? Let's ask the man page system:

    $ apropos property
    [..lots of junk...]
    xprop (1)            - property displayer for X
    [...]
    

    Typing in man xprop told me I was on the right track:

    $ man xprop
    
    SYNOPSIS
         xprop  […] [format [dformat] atom]*
    
    SUMMARY
      The xprop utility is for displaying window and font properties in an
      X server.
    
    OPTIONS
      […]
      -root   This argument specifies that X's root window is the target win‐
              dow.   This  is  useful  in situations where the root window is
              completely obscured.
    

    So, let's see:

    $ xprop -root _DBUS_SESSION_BUS_ADDRESS
    _DBUS_SESSION_BUS_ADDRESS:  not found.
    

    Hu? Has dbus-launch stopped setting the property? Let's inspect Debian's change log; a major change like that would have to be noted there, wouldn't it? Let's first figure out which package to look at; the documentation then is in /usr/share/doc/<packagename>:

    $ dpkg -S dbus-launch
    dbus-x11: /usr/bin/dbus-launch
    $ zless /usr/share/doc/dbus-x11/changelog.Debian.gz
    

    Looking for “property” or “BUS_ADDRESS” in there doesn't yield anything; that would make it unlikely that the property was somehow dropped intentionally. I have to admit I had halfway expected that, with something like “for security reasons”. But then if someone can read your root window's properties, access to your session bus is probably the least of your problems.

    Still, perhaps someone is slowly dismantling X11 support on grounds that X11 is kinda uncool? Indeed, you can build dbus-launch without X11 support. If the Debian maintainers built it that way, the respective strings should be missing in the binary, but:

    $ strings `which dbus-launch` | grep _DBUS_SESSION
    _DBUS_SESSION_BUS_PID
    _DBUS_SESSION_BUS_ADDRESS
    _DBUS_SESSION_BUS_SELECTION_
    

    No, that's looking good; dbus-launch should still set the properties.

    Skimming the Docs is Not Reading the Docs.

    If I did not see the property a moment ago, perhaps I have used xprop the wrong way? Well, actually: I didn't read the D-Bus spec properly, because what it really says is this:

    For the X Windowing System, the application must locate the window owner of the selection represented by the atom formed by concatenating:

    • the literal string "_DBUS_SESSION_BUS_SELECTION_"
    • the current user's username
    • the literal character '_' (underscore)
    • the machine's ID

    – and then find the _DBUS_SESSION_BUS_ADDRESS property on the window owning that selection. The root window thing was my own fantasy.

    If you bothered to skim the ICCCM document I linked to above, you may recognise the pattern: that's just conventional X inter-client communication – no wonder everyone prefers D-Bus.

    This is beyond what I'd like to do in the shell (though I wouldn't be surprised if xdotool had a hack to make that feasible). I can at least establish that dbus-launch still produces what the spec is talking about, because the “atoms” – a sort of well-known string within the X server and as a concept probably part of why folks are trying to replace X11 with Wayland – are all there:

    $ xlsatoms | grep DBUS
    488   _DBUS_SESSION_BUS_SELECTION_anselm_d162...
    489   _DBUS_SESSION_BUS_ADDRESS
    490   _DBUS_SESSION_BUS_PID
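
    Just for completeness, here is roughly what that lookup might look like in Python with python3-xlib – an untested sketch, and the location of the machine-id file is an assumption:

    import getpass
    from Xlib import display   # Debian: python3-xlib

    # Sketch of the lookup the spec describes: find the selection owner,
    # then read the address property off that window.
    with open("/var/lib/dbus/machine-id") as f:
        machine_id = f.read().strip()
    d = display.Display()
    sel = d.intern_atom("_DBUS_SESSION_BUS_SELECTION_"
        + getpass.getuser() + "_" + machine_id)
    owner = d.get_selection_owner(sel)
    prop = owner.get_full_property(
        d.intern_atom("_DBUS_SESSION_BUS_ADDRESS"), 0)  # 0: AnyPropertyType
    print(prop.value)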
    

    The Next Suspect: libdbus

    Given that, dbus-launch clearly is exonerated as the thing that broke. The next possible culprit is purple-remote. It turns out that's a python program:

    $ grep -i dbus `which purple-remote`
    import dbus
        obj = dbus.SessionBus().get_object("im.pidgin.purple.PurpleService", "/im/pidgin/purple/PurpleObject")
    purple = dbus.Interface(obj, "im.pidgin.purple.PurpleInterface")
                data = dbus.Interface(obj, "org.freedesktop.DBus.Introspectable").\
    

    So, this is using the python dbus module. Let's see if its changelog says anything about dropping X11 support:

    $ zless /usr/share/doc/python3-dbus/changelog.Debian.gz
    

    Again, nothing for X11, property, or anything like that. Perhaps we should have a brief look at the code:

    $ cd /some/place/for/source
    $ apt-get source python3-dbus
    […]
    dpkg-source: info: extracting dbus-python in dbus-python-1.3.2
    […]
    $ cd dbus-python-1.3.2/
    

    You will see that the python source is in a subdirectory called dbus. Let's see if that talks about our property name:

    $ find . -name "*.py" | xargs grep _DBUS_SESSION_BUS_ADDRESS
    $
    

    No[1]. Interestingly, there's no mention of X11 either. Digging a bit deeper, however, I found a C module dbus_bindings next to the python code in dbus. While it does not contain promising strings (X11, property, SESSION_BUS…) either, that lack made me really suspicious, since at least the environment variable name should really be visible in the source. The answer is in the package's README: “In addition, it uses libdbus” – so, that's where the connection is being made?

    Another Red Herring

    That's a fairly safe bet. Let's make sure we didn't miss something in the libdbus changelog:

    $ zless /usr/share/doc/libdbus-1-3/changelog.Debian.gz
    

    You will have a déjà-vu if you had a look at dbus-x11's changelog above: the two packages are built from the same source and hence share a Debian changelog. Anyway, again there are no suspicious entries. On the contrary: An entry from September 2023 (red-hot by Debian stable standards!) says:

    dbus-user-session: Copy XDG_CURRENT_DESKTOP to activation environment. Previously this was only done if dbus-x11 was installed. This is needed by various freedesktop.org specifications…

    I can't say I understand much of what this says, but it definitely doesn't look as if they had given up on X11 just yet. But does that library still contain the property names?

    $ dpkg -L libdbus-1-3
    […]
    /lib/i386-linux-gnu/libdbus-1.so.3
    […]
    $ strings /lib/i386-linux-gnu/libdbus-1.so.3 | grep SESSION_BUS
    DBUS_SESSION_BUS_ADDRESS
    $
    

    No, it doesn't. That's looking like a trace of evidence: the name of the environment variable is found, but there's nothing said of the X11 property. If libdbus evaluated that property, it would stand to reason that it would embed its name somewhere (though admittedly there are about 1000 tricks with which it would still do the right thing without the literal string in its binary).

    Regrettably, that's another red herring. Checking the libdbus from the package in bullseye (i.e., the Debian version before bookworm) does not yield the property …

  • How to Pin a Wifi Access Point in Debian – and Why You Probably Don't Want to in Lufthansa Planes

    A vertical gradient from black to light blue, lots of unfilled template variables in double curly braces in white.

    That's what you see in Lufthansa's onboard wifi when you don't let just about anyone execute client-side Javascript on your machine. See below for a more useful URI in the onboard wifi.

    I have already confessed I was flying recently (albeit only in German). What was new versus the last time I've been in a plane five years ago[1]: Not only did wifi signals apparently no longer confuse the aircraft's navigation systems but there was actually an onboard wifi network with no less than seven access points within my machine's range.

    Somewhat surprisingly, I had a hard time getting a connection that would not break after a few seconds. I'll confess that's not the first time I've had trouble connecting to fancy networks recently, where the syslog contained cryptic messages like:

    kernel: wlan0: deauthenticated from <redacted> (Reason: 252=<unknown>)
    kernel: wlan0: disassociated from <redacted> (Reason: 1=UNSPECIFIED)
    

    In all these cases, there were a lot of access points with the same ESSID around, and so I suspect whatever selects the access points is currently broken on my machine; it chooses really weak access points and then gets badly mangled signals. While I'm waiting for this to heal by itself, I am resorting to manually picking and pinning the access points. In case you use ifupdown to manage your wifi, perhaps this little story is useful for you, too.

    The first part is to pick an access point. To do that, I ignore the warning of the authors of iw (from the eponymous package) not to parse its output and run:

    sudo iw wlan0 scan | egrep "^BSS|signal: .*dBm|SSID:"
    

    Addendum (2023-11-02)

    Well, in non-plane situations it's wise to get the SSIDs, too, so you see which APs actually are for the network you want to join. Hence, I've updated the grep in the command line above.

    The output of this looked like this on the plane I was in:

    BSS 00:24:a8:83:37:93(on wlan0)
            signal: -68.00 dBm
    BSS 00:24:a8:ac:1d:93(on wlan0)
            signal: -41.00 dBm
    BSS 00:24:a8:83:37:82(on wlan0)
            signal: -62.00 dBm
    BSS 00:24:a8:ac:1d:82(on wlan0)
            signal: -48.00 dBm
    BSS 00:24:a8:83:37:91(on wlan0)
            signal: -60.00 dBm
    BSS 00:24:a8:83:76:53(on wlan0)
            signal: -76.00 dBm
    BSS 00:24:a8:83:77:e2(on wlan0)
            signal: -82.00 dBm
    

    The things after the “BSS” are the MAC addresses of the access points, and the numbers after signal are a measure of the power that reaches the machine's antenna[2] from that access point, where less negative means more power. So, with the above output you want to pick the access point 00:24:a8:ac:1d:93.
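
    If you do this a lot, you can let Python do the squinting; this sketch parses iw's output (against its authors' explicit advice, so the regular expression is an assumption about that output's stability):

    import re
    import subprocess

    # Sketch: list access points sorted by signal, strongest first.  The
    # regexp assumes each "signal:" follows its "BSS" line, which iw's
    # authors explicitly refuse to promise.
    scan = subprocess.run(["sudo", "iw", "wlan0", "scan"],
        capture_output=True, text=True, check=True).stdout
    aps = re.findall(
        r"^BSS ([0-9a-f:]{17}).*?signal: (-?[\d.]+) dBm", scan, re.M | re.S)
    for mac, dbm in sorted(aps, key=lambda p: float(p[1]), reverse=True):
        print(f"{mac}  {dbm} dBm")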

    With ifupdown, you do that by editing the stanza for that Wifi and add a wireless-ap line; for me, this then looks like:

    iface roam inet dhcp
      wireless-essid Telekom_FlyNet
      wireless-ap 00:24:a8:ac:1d:93
    

    – and this yields a stable connection.

    I must say, however, that the services on that network (I'm too stingy for actual internet access, of course) are a bit lacking, starting with the entirely botched non-Javascript fallback (see above). At least there is http://services.inflightpanasonic.aero/inflight/services/flightdata/v1/flightdata where you will see some basic telemetry in JSON. Or wait: it's actually perimetry if you see speed, height, and other stuff for the plane you're on.
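
    The no-frills way to have a look at what that endpoint returns might be something like this (I am not claiming anything about the field names, hence the plain pretty-print):

    import json
    from urllib.request import urlopen

    # Sketch: dump the inflight telemetry JSON; the field names are not
    # documented here, so this just pretty-prints whatever comes back.
    URL = ("http://services.inflightpanasonic.aero/inflight/services/"
           "flightdata/v1/flightdata")
    with urlopen(URL) as f:
        print(json.dumps(json.load(f), indent=2))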

    By fetching the numbers from the JSON you will save a lot of power versus the web page, which becomes extremely network-chatty and CPU-hoggy (on webkit, at least) once you let Lufthansa execute Javascript. I'm afraid I have too much flight shame (and hence too little use for it) to cobble something nice together with that API and qmapshack. But it certainly looks like a fun project.

    [1]Ah wait… now that I think again, I seem to remember that during one of my last sinful travels there has already been a plane that had on-board Wifi. But it certainly is a nicer story with the little lie of news when coming back after five years.
    [2]Since “dBm” stands for “decibel milliwatt”, you could compute that power as 10^(s/10) mW. I'd not trust the absolute numbers, as they would indicate here that one access point is a factor of ten thousand stronger than another one, which sounds implausible primarily because I'd be surprised if the circuitry of the Wifi card could deal with such a high dynamic range. And “I'm getting 0.0001 milliwatts from the AP” is a statement in dire need of interpretation anyway (e.g., “in the carrier? Bolometric?”). But let's not go there.
  • How to Disable pdf.js in Webkit on Debian

    A window of the zathura PDF viewer showing the GDPR.

    This is how I want my PDFs rendered. And I want a j to scroll down a bit. That pdf.js fails on both counts is just the first two of its defects.

    When I upgraded to Debian bookworm, I noticed with great dismay that the webkit browser engine it comes with has a pdf.js-based PDF renderer built in.

    That means that my preferred browser, luakit, is basically broken when dealing with PDFs: where I disable Javascript (i.e., by default), I see nothing at all. Where I allow Javascript, my PDFs appear in a UI I consider rather nasty. On top of that, I lose the nice archive of PDFs I've recently read that came with luakit's viewpdf extension. That holds true even if I do manage to properly open the PDF in my preferred renderer (zathura) using pdf.js's Save, as that blindly calls all PDFs “document.pdf”.

    Regrettably, there doesn't seem to be a runtime switch to turn off the in-browser PDF rendering. After poking around a bit in webkit's source code, I have convinced myself that I won't add that switch myself. I am just not desperate enough to start hacking on one of the major browser engines.

    But there is a build-time switch to turn pdf.js off. I have always shied away from building my own webkit packages because there's so horribly much code and C++ compilers are so terribly resource-hungry. But my suffering with the pdf.js disaster has reached a level that made me overcome that horror. So, here's how to build a Webkit such that browsers based on it will again handle PDFs properly (sc. by handing them over to the system). All this is for Debian bookworm and derivatives; let's hope it won't be necessary beyond that.

    1. Get the source:

      mkdir -p src/webkit
      cd src/webkit
      apt-get source webkit2gtk
      cd webkit2gtk*
      

      This will only work if you have configured a source repo for your suite in your /etc/apt/sources.list (or equivalent) and run apt update after that.

      This pulls in about 50 Megabytes, which in itself is an argument in favour of netsurf. But these 50 Megs are peanuts compared to what's coming: by the time you've done a full build, this directory will have exploded into more than 3 GB (on i386). Let's fix the web so plain browsing doesn't require such monsters.

    2. Configure your build. Fortunately, you mostly only need to touch the debian/rules file. In there, change:

      ENABLE_SOUP2=YES
      ENABLE_SOUP3=YES
      ENABLE_GTK4=YES
      

      to (presumably):

      ENABLE_SOUP2=YES
      ENABLE_SOUP3=NO
      ENABLE_GTK4=NO
      

      That's for luakit, which is built on top of soup2; if your browser uses a different API, make a different choice here. Each build takes forever and gobbles up about 3 Gigs in the process, so be stingy here.

      Then, locate the line -DENABLE_MINIBROWSER=ON (which currently concludes the EXTRA_CMAKE_ARGUMENTS) and change it to:

      -DENABLE_MINIBROWSER=ON \
      -DENABLE_PDFJS=OFF \
      -DENABLE_JOURNALD_LOG=OFF
      

      Disabling the journald log is not strictly necessary, but it helps building on non-systemd boxes, and I doubt it actually hurts anyone.

      Addendum (2024-01-21)

      At least with 2.40.3, this procedure ends in a:

      dh_install: error: missing files, aborting
      

      presumably because we are not building for two APIs. I think that's a bug, but from dh_install's manpage I cannot even understand why it thinks it should fail because of missing files, and consequently futzing around with debian/not-installed or the various options went nowhere. Because I'm really grumpy with the whole state of affairs, I quickly resigned myself to simply emptying all debian/*.install files not pertinent to the packages I want to build.
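
      For the record, a brute-force way to do that emptying might look like this (just a sketch: the *-4.0* pattern keeps the soup2/4.0 API packages built above and may need adjusting for your selection):

      for f in debian/*.install; do
        case $f in
          *-4.0*) ;;        # keep the 4.0 (soup2) packages
          *) : > "$f";;     # truncate everything else
        esac
      done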

    3. Remove the systemd build dependency. We can do that because we have just disabled the JOURNALD_LOG. So, in debian/control, delete the line:

      libsystemd-dev [linux-any],
      
    4. Install the build dependencies:

      sudo apt-get build-dep webkit2gtk
      

      On non-systemd boxes, this will say something like:

      libelogind0 : Conflicts: libsystemd0
      

      because you have not removed the libsystemd dependency from apt's database in step (3), and webkit at this point doesn't know it could build with libelogind0-dev, too. Don't worry about it as long as all the other build-dependencies came in.

    5. Make a changelog entry so your system knows your build is “newer” than Debian's and you can later tell it's your custom build:

      dch -i
      

      You probably want to put something like “rebuild with PDFJS disabled“ in there, but that's exclusively for your own comfort unless you start distributing your package.

    6. Do the build:

      dpkg-buildpackage -j6 -b -uc -us -rfakeroot
      

      Do that on a cold day, because this will turn your machine into a space heater for several hours (unless you have a very fast machine, in which case you probably don't need another space heater in the first place).

    7. When this is done, you will have about a dozen binary packages in the build directory's parent. You probably don't want to dpkg -i *.deb, as there's no point installing debug packages (for starters). For luakit, I've run this:

      sudo dpkg -i gir1.2-javascriptcoregtk-4.0_2.*.deb gir1.2-webkit2-4.0_2.*.deb libjavascriptcoregtk-4.0-18_2.*.deb libjavascriptcoregtk-4.0-bin_2.*.deb libjavascriptcoregtk-4.0-dev_2.*.deb libwebkit2gtk-4.0-37_2.*.deb libwebkit2gtk-4.0-dev_2.*.deb
      

      This could be a slight over-installation.

    By the way, in case the build fails somewhere in the middle but is fundamentally sane, you can resume it by calling:

    fakeroot debian/rules binary
    

    Doing dpkg-buildpackage as above resets the build and will discard everything the computer has built in perhaps hours.

    Given the extreme cost of building a webkit, getting pdf.js out in this way is not a long-term plan, at least if you want your webkit to be halfway up-to-date (which is a good idea in particular if you're indiscriminate as to who can execute Javascript in your browser). Until someone kindly implants a run-time switch, I'm going to shut out pdfjs-infested upgrades until some really, really unnerving (that is, even more unnerving than usual) webkit vulnerability surfaces. To do that, I'm dropping:

    # my webkit with patched-out pdfjs
    Package: libjavascriptcoregtk-4.0-18
    Pin: version 2.40.5-1~deb12u1.1
    Pin-Priority: 1001
    

    into /etc/apt/preferences.d/10pins (where your version will probably be different; check the version tag in the names of the generated package files). That will make the messages from apt upgrade quite a bit uglier, and of course I'll have a webkit with published security bugs (you have been warned in case you're doing as I do). But in my book that's totally worth it just to get rid of the wretched pdf.js.
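
    By the way, you can check that the pin actually took effect; your own version should show up as installed with priority 1001 in the output of:

    $ apt-cache policy libjavascriptcoregtk-4.0-18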

  • mdns-scan complains that IP_ADD_MEMBERSHIP failed

    Last weekend I had to use a video projector via Google cast or chromecast or whatever it's called this month – it was mounted at the ceiling and was unreachable by cables.

    What I could work out about Google cast from a few web searches sounded like it should be simple: encode what's on the local screen to a video and then transmit that to some more or less bespoke endpoint through – I think – Secure Reliable Transport, a UDP-based protocol for which there's a Debian package called srt-tools.

    Whether or not that's roughly right, what I failed to answer is: Where do you transmit to? It seems the way to figure that out is to ask zeroconf alias Bonjour the right questions, and that in turn seems to require multicasting DNS-ish requests and then collecting responses from devices that reply to these multicasts. Aw! If only avahi – the usual mDNS implementation on Linux – wasn't among the first things I purge from machines if I find it.

    While trying to nevertheless cobble something together that would tell me where to send my stream to, I got an interesting error message when I experimentally ran mdns-scan:

    IP_ADD_MEMBERSHIP failed: No such device
    

    This was while I was connected to the projector's built-in Wifi access point. And I didn't have the foggiest idea what the thing was saying. Search engines didn't bring up satisfying explanations (although there was some unspecific mumbling about “routes”). So, I straced the thing to see what syscalls it does before giving up:

    $ strace mdns-scan
    [the dynamic linker in action]
    ugetrlimit(RLIMIT_STACK, {rlim_cur=8192*1024, rlim_max=RLIM_INFINITY}) = 0
    munmap(0xf7f13000, 132486)              = 0
    socket(AF_INET, SOCK_DGRAM, IPPROTO_IP) = 3
    setsockopt(3, SOL_IP, IP_MULTICAST_TTL, [255], 4) = 0
    setsockopt(3, SOL_SOCKET, SO_REUSEADDR, [1], 4) = 0
    bind(3, {sa_family=AF_INET, sin_port=htons(5353), sin_addr=inet_addr("224.0.0.251")}, 16) = 0
    setsockopt(3, SOL_IP, IP_ADD_MEMBERSHIP, {imr_multiaddr=inet_addr("224.0.0.251"), imr_interface=inet_addr("0.0.0.0")}, 12) = -1 ENODEV (No such device)
    write(2, "IP_ADD_MEMBERSHIP failed: No suc"..., 41IP_ADD_MEMBERSHIP failed: No such device
    ) = 41
    close(3)                                = 0
    exit_group(1)                           = ?
    

    – failing after so few syscalls is actually brilliant. Try strace on your average web browser or even your python interpreter and you know what I mean.

    And while I don't really know much about multicasting, this gave me an idea what was going on. You see, the projector hadn't set a default route. My box's routing table was simply:

    $ ip route
    192.168.2.0/24 dev wlan0 proto kernel scope link src 192.168.2.32
    

    I guess that's rather typical, and that's why I'm writing this: I'd expect other people trying Google cast or Airplay to projectors may run into that same problem.

    The reason this is a problem is that mdns-scan wants to (I think; don't believe me without researching further) subscribe to the address 224.0.0.251 via some network interface. That particular IP address is less crazy than it looks, because it's a multicast address, which makes it mean something rather special, and this one is special special because it basically means “I want to see multicast DNS packets floating around on the local network” (and send them to everyone using the same router as I do). Subscribing in this way means the machine has to have an idea where to send packets to, and with the routing table the projector's DHCP server had set up, the kernel felt it didn't know that. I have to admit I have not quite worked out just why it felt that, but I'm rather confident that's why the setsockopt above failed.

    In that special situation – when you are not connected to the internet anyway – it is safe to just set the default route to the projector:

    $ ip route add default via 192.168.2.1 dev wlan0
    

    (where you will probably have to change the IP address to whatever your projector binds to; it's almost certainly the address of the DHCP server on the projector's access point, which you'll find in your syslog). This is enough to let mdns-scan do its multicasting magic, and indeed it now yielded a handle for the chromecasting service.

    But that still was not enough to figure out where to srt-live-transmit my video stream to, and hence I didn't get to improvise screen mirroring in the time I had before the event started. I eventually projected using a Windows (well: at least not Android…) box with a silly chromecast dongle that came with the projector and had some nasty driver software for the dongle on some built-in USB mass storage.

    Regrettably (or fortunately, if you want), I don't have access to the device any more, so I cannot continue my chromecast hacking. If you are aware of a little script that does the zeroconf queries and connection setup in a few lines, please let me know: It would be nice to be prepared when next I encounter one of these beasts.
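
    In case it helps the next person: if, unlike me, you don't mind having avahi's tools on your box for a moment, I would expect something like the following to reveal the cast endpoint – untested, since the device is gone, but _googlecast._tcp is the service type chromecast devices announce:

    $ sudo apt install avahi-utils
    $ avahi-browse -rt _googlecast._tcp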

  • Who Knows the BVG's s2ram Killer?

    Photo: a bus stop with a departing bus; in the background, an S-Bahn train crossing.

    February 2019: Back then, I rode BVG buses without my computer's sleep ever being disturbed.

    TL;DR: Does anyone else see the effect that something in Berlin, presumably something in buses or U-Bahn trains, hard-powers-off sleeping (“suspend to RAM”, s2ram, ACPI S3) computers?

    For 20 years I have essentially always been walking and riding around with a sleeping computer in my backpack. And even though this computer – for the past 10 years a Lenovo X240 – is admittedly somewhat flimsy plastic stuff, that astonishingly works out fine throughout: when I sit down and want to type again, the thing reliably wakes up.

    Until I was in Berlin recently. Within one week, the computer was hard-off three times when I unpacked it after excursions – no longer in S3 sleep, then, but S5-off, so to speak. And since the machine first went through its file system journals when booting up, nothing had shut down gently either: the box had died a sudden (if only temporary) death.

    Fine, I thought, the strains of travel have presumably produced some loose contact or hairline crack on the mainboard. To harden that suspicion, I eagerly shook, knocked, and squeezed. Nothing, however, was able to disturb the machine's righteous sleep: the lid LED heartbeated away peacefully, and the box always dutifully woke up again. By now I have been back home for almost a week, and there have been no further cases of sudden computer death.

    Loose-contact and hairline-crack theories are thus losing plausibility. So what if the failures were caused by something in Berlin? That would then very likely not be mechanical influences, but rather electromagnetic ones.

    It does not seem all that plausible to me that something would induce currents with real oomph and that the box would respond to that out of its sleep with a swift suicide – but who knows, and above all, who knows what the creepy embedded controller does when a sufficient amount of unexpected current arrives from somewhere?

    Unfortunately I have no good guess as to what might have triggered radio interference of that magnitude in each case, since I started too late to look for patterns. My prime suspects are buses or U-Bahn trains – the latter in particular have hefty electric motors, presumably with quite respectable alternating fields around them – but I have ridden both in their BVG incarnations before, and with this machine too, without anything odd happening.

    Granted, I may simply have happened to be at a sufficient distance from the sources of interference back then. But how exactly would I now suddenly have come close enough three times in short succession? I cannot say I would place that explanation anywhere near the semantic field of “satisfactory”.

    Or maybe it's some fancy new electric cars that drive around Berlin but not (yet) around Heidelberg? Or could it even be clandestine but numerous anti-digitisation activists (they would have my sympathy) who have cobbled something together to get people on public transport to briefly look up again by way of mild sabotage?

    If you have seen something like this too, or even know what is going on: I would very much like to hear from you.

  • OpenSSL, Syslog, and Unexpected Consequences of Usrmerge: Upgrading to bookworm

    A few weeks after the release of Debian bookworm, I have recently dist-upgraded my main, ah well, workstation, too. As mentioned in my bullseye upgrade post, that box's file system is ancient, and the machine does many things in perhaps unusual ways, which includes still booting with sysvinit rather than systemd for quite a few reasons. Hence, it always brings up some interesting upgrade probl^H^H^H^H^Hchallenges. While for bullseye the main… um… challenge for me was the migration to python3, this time the big theme was dropped crypto engines.

    Rsyslogd, wmnet

    Much more harmless than those, but immediately visible after the upgrade, was that my syslog display remained empty. The direct reason was that the rsyslog daemon was not running. The reason for that, in turn, was that there was not even an init script for it in /etc/init.d, let alone rc.d links to it. But the rsyslogd package was installed. What would be the purpose of installing a daemon package without an init script?

    The Debian bug tracker had something like an answer: the maintainer took it out, presumably to shed files they considered cruft in the age of systemd. Although I have to concur with Naranyan's remark in the bug report that rsyslog will typically be in place exactly when systemd (with its own log daemon) is not, at least that bug (#1037039) offers the (simple) fix: Install the orphan-sysvinit-scripts package.
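
    In other words, the fix boils down to:

    $ sudo apt install orphan-sysvinit-scripts
    $ sudo service rsyslog start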

    Something a bit harder to explain is that the nice wmnet applet for monitoring transfers on network interfaces came up blank after the upgrade. This is fixed by passing a -n option to it, which tells it to draw into a normal window rather than something suitable for the Windowmaker dock. Wmnet (as perhaps other Windowmaker applets, too) tries to guess where to draw based on some divination. Perhaps my window manager sawfish started to pretend it's Windowmaker in bookworm? Or indicated to wmnet in some other way it was living in a Windowmaker dock? Hm: Given that the last changelog entry on sawfish itself is from 2014 (which I consider a sign of quality), that seems unlikely, but then I can't bring myself to investigate more closely.

    The usr Merge and Bashism in the Woodwork

    Although I had merged the root and usr file systems on that box last time I migrated to a new machine, I had postponed doing the usrmerge thing (i.e., making the content of /bin and /usr/bin identical) on the box until the last possible moment – that is, the bookworm installation – because I had a hunch some hack I may have made 20 years ago would blow up spectacularly.

    None did. Except… it turned out I had linked /bin/sh to /bin/bash for some long-forgotten and presumably silly reason; if you had asked me before the upgrade, I'd have confidently claimed that of course all my little glue scripts are executed by Debian's parsimonious dash rather than the relatively lavish bash. Turns out: they weren't.

    With the installation of the usrmerge package during the bookworm dist-upgrade that is over. /bin/sh is now dash as I had expected it to be all the time. I have to admit I am a bit disappointed that I do not notice any difference in system snappiness at all.

    But I did notice that plenty of my scripts were now failing because they contained a bashism: Comparison for string equality in POSIX-compliant [ ... ] constructs is not the C-like == but the SQL-like = even though bash accepts both. I don't know when I forgot this (or, for that matter, whether I ever knew it), but a dozen or so of my (often rather deeply embedded) shell scripts started to fail with messages like:

    script name: 22: [: tonline: unexpected operator
    

    So, repeat after me: In shell scripts, compare strings with = and numbers with -eq. And I have to admit that this experience made me a bit more sympathetic to the zero shell paradigm behind systemd. But for the record: I still think the advantages of having hooks for shell scripts almost everywhere overall outweigh these little annoyances.
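
    If you would like to see the difference in isolation, here is a quick demo (the exact error wording may vary between dash versions):

    $ dash -c '[ tonline == tonline ] && echo equal'
    dash: 1: [: tonline: unexpected operator
    $ dash -c '[ tonline = tonline ] && echo equal'
    equal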

    The OpenSSL Upgrade

    With the bookworm upgrade, a fair number of hashes and ciphers were declared “legacy” in openssl, which means that in the default configuration, it will reject them. That had a few rather disruptive consequences: For one, I needed to update a few snake-oil certificates I had generated for playing with https on my box.
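
    Regenerating snake oil of that kind is fortunately quick; a sketch (file names and the subject are, of course, made up):

    $ openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
        -subj "/CN=localhost" -keyout snakeoil.key -out snakeoil.pem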

    Also, fetchmail failed for a POP server I had configured with a message like:

    fetchmail: <hostname> SSL connection failed.
    fetchmail: socket error while fetching from <whatever>
    

    I was puzzled for a while until I realised that the recipe said:

    with proto TLS1
    

    That was probably valuable in, like, 2004, to suppress ancient (relatively) easily breakable SSL versions, but by now it didn't let fetchmail negotiate crypto that was still allowed by openssl. Removing the proto TLS1 fixed that problem.

    The most unnerving breakage, however, was that my preferred disk crypto, encfs (cf. this advocacy in German), broke for some volumes I had created more than a decade ago: they failed to mount because openssl now refuses (I think) the blowfish cipher. I fiddled around a bit with re-enabling legacy algorithms as per Debian bug 1014193 but quickly lost my patience with the slightly flamboyant semantics of openssl.cnf. To my surprise, downgrading to encfs_1.9.5-1+b2_i386.deb from bullseye (by briefly re-adding the sources.list lines) let me mount the old volumes again. I then simply created new encfs volumes and rsync -av-ed from the old decrypted volume into the new decrypted volume. Finally, after unmounting everything encfs, I overwrote the old encrypted volumes with the new encrypted volumes and upgraded back to bookworm encfs.

    Since I can't explain why downgrading encfs would have fixed the problem as I've analysed it, and hence suspect that a part of my analysis (and fix) is wrong, I'd strongly recommend running:

    encfsctl info <encrypted volume>
    

    on each encfs directory you have before the upgrade. If it says something like:

    Filesystem cipher: "ssl/blowfish", version 2:1:1 (using 3:0:2)
    

    or even just:

    Version 5 configuration; created by EncFS 1.2.5 (revision 20040813)
    

    (where I have not researched the version where encfs defaults became acceptable for bookworm openssl; 1.9 is ok, at any rate), copy over the decrypted content into a newly created encfs container; it's quick and easy.
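
    The migration itself, as a sketch (the paths are made up; encfs asks for the passwords interactively and creates the new volume, with current defaults, on first mount):

    $ encfs /crypt/old.enc /mnt/old
    $ encfs /crypt/new.enc /mnt/new
    $ rsync -av /mnt/old/ /mnt/new/
    $ fusermount -u /mnt/old
    $ fusermount -u /mnt/new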

    Relatedly, bookworm ssh also disallows a few crypto methods now deemed insecure by default, in particular SHA-1 hashes for host keys. Now, I have to connect to a few hosts I cannot upgrade (either because I'm not root or because they are stuck on some ancient kernel because of proprietary kernel components). For these, when trying to connect I now get messages like this:

    Unable to negotiate with 192.168.20.21 port 22: no matching host key type found. Their offer: ssh-rsa,ssh-dss
    

    You could reasonably argue I should discard boxes of that type. On the other hand, nobody will spend 50'000 Euro to eavesdrop on my communications with these machines[1] – that's the current estimate for producing a hash collision for an ssh host key, which this is about. Hence, I'm happy to risk man-in-the-middle attacks for these machines.

    To deal with such situations, openssh lets you selectively re-allow SHA-1 hashes on RSA host keys. Helpfully, /usr/share/doc/openssh-client/NEWS.Debian.gz gives a recipe to save those hosts; put host stanzas like:

    Host ancient-and-unupdatable.some.domain
      HostKeyAlgorithms=+ssh-rsa
      PubkeyAcceptedKeyTypes +ssh-rsa
    

    into ~/.ssh/config (and do read ssh_config (5) if you are not sure what I'm talking about, regardless of whether or not you have this particular problem). Incidentally, just to save that one machine where you forgot to update your ancient DSA public key, you can for a brief moment change the second line to:

    PubkeyAcceptedKeyTypes +ssh-rsa,ssh-dss
    

    If you don't have an RSA key yet, create one (ssh-keygen -t rsa) – RSA keys work even on the most venerable openssh installations that don't yet know about the cool ed25519 keys. Connect to the server, install the RSA public key, and re-remove the ssh-dss part in the config again.

    Kudos to the openssh maintainers for keeping compatibility even in crypto over more than 20 years. And shame on many others – including me – who don't manage to do that even in non-crypto software.

    Terrible Font Rendering in Swing

    One of the more unexpected breakages after the upgrade was that some Java Swing (a once-popular GUI toolkit) applications suddenly had terribly jagged fonts, such as my beloved TOPCAT:

    Part of a screenshot of a menu with horribly jaggy letters

    I cannot exactly say why this looks so terrible[2]. Perhaps in the age of 300 dpi displays font hinting – which is supposed to avoid overly jagged pixelisation when rendering vector fonts at low resolutions – has become out of fashion, perhaps OpenJDK now …

  • Fixing “libqca-ossl is missing”

    In all honesty, I don't expect many people who might profit from this post will ever see the message below. But since common web searches don't yield anything for it (yet), I figure I should at least document that it can happen. I also want to praise kwallet's author(s), because whatever went wrong yielded what turned out to be a rather useful error message rather than a spectacular crash:

    createDLGroup failed: maybe libqca-ossl is missing
    

    Here's what led up to it: in Debian bookworm, my old Mastodon client tootle started crashing when viewing images. Its development has moved to a new client called Tuba, and even though that is not packaged yet, I figured I might as well move on now rather than fiddle with tootle. Tuba, however, needs a password manager more sophisticated than the PGP-encrypted text file I use otherwise. So I bit the bullet and installed kwalletmanager; among the various password managers, it seemed to have the most reasonable dependencies.

    With that, Tuba can do the oauth dance it needs to be able to communicate with the server. But when it tried to save the oauth token it gets from the Mastodon instance, I got the error message above. Tuba can still talk to the server, but once the session is over, the oauth token is lost, and the next time I start Tuba, I have to do the oauth dance again.

    Fixing the error seemed simple enough:

    $ apt-file search libqca-ossl
    libqca-qt5-2-plugins: /usr/lib/i386-linux-gnu/qca-qt5/crypto/libqca-ossl.so
    $ sudo apt install libqca-qt5-2-plugins
    

    – as I said: kwallet's is a damn good error message. Except that the apt install did not fix the problem (which is why I bother to write this post). That's because kwalletmanager starts a daemon, and that daemon is not restarted just because the plugins are installed.

    Interestingly, just killing that daemon didn't seem to fix the problem; instead, I had to hit “Close“ in kwalletmanager explicitly and then kill the daemon (as in killall kwalletd):

    Screenshot: kdewallet with a close button and two (irrelevant) tabs.

    I give you that last part sounds extremely unlikely, and it's possible that I fouled something up the first time I (thought I) killed kwalletd. But if you don't want to do research of your own: Just hit Close and relax.

    You could also reasonably ask: Just what is this “ossl” thing? Well… I have to admit that password wallets rank far down in my list of interesting software categories, and hence I just gave up that research once nothing useful came back when I asked Wikipedia about OSSL.

  • Taming an LTE card in Linux

    When I wrote my request for help on how to do voice calls with a PCI-attached cellular modem I realised that writing a bit about how I'm dealing with the IP part of that thing might perhaps be mildly entertaining to some subset of the computer-literate public. After all, we are dealing with rather serious privacy issues here. So, let's start with these:

    Controlling registration

    Just like almost any other piece of mobile phone tech, an LTE card with a SIM inserted will by default try to register with the network operator's infrastructure when it is switched on (or resumed, more likely, in the case of a notebook part). If this is successful, it will create a data point in the logs there, which in turn will be stored for a few days or, depending on the human rights situation in the current jurisdiction (as in: is telecom data retention in effect?), for up to two years. This record links you with a time (at which you used your computer) and a location (at which you presumably were at that point). That's fairly sensitive data by any measure.

    So: You don't want to create these records unless you really want network. But how do you avoid registration? There are various possible ways, but I found the simplest and probably most robust one is to use Linux's rfkill framework, which is in effect a refined version of airline mode. To make that convenient, I am defining two aliases:

    alias fon="sudo rfkill unblock wwan"
    alias keinfon="sudo rfkill block wwan"
    

    (“keinfon“ is “no phone“ in German); put these into your .bashrc or perhaps into .aliases if your .bashrc includes that file.

    Since I consider rfkill a relatively unlikely target for privilege escalation, I have added , NOPASSWD: /usr/sbin/rfkill to my user's line in a file below /etc/sudoers.d, but that's of course optional.
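
    In full, such a line might look about like this (the user name is of course yours; edit files below /etc/sudoers.d with visudo -f so a typo cannot lock you out):

    anselm ALL=(ALL) NOPASSWD: /usr/sbin/rfkill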

    With that, when I want to use internet over LTE, I type fon, wait a few seconds for the registration to happen and then bring up the interface. When done, I bring down the interface and say keinfon. It would probably be more polite to the service operators if I de-registered from the infrastructure before that, but for all I can see only marginally so; they'll notice you're gone at the next PLU (periodic location update). I don't think there are major privacy implications either way.

    It might be wiser to do the block/unblock routine in pre-up and post-down scripts in /etc/network/interfaces, but since registration is slow and I rather regularly find myself reconnecting while on the cell network, I'd consider that over-automation. And, of course, I still hope that one day I can do GSM voice calls over the interface, in which case the card shouldn't be blocked just because nobody is using it for an internet connection.

    Phone Status

    In case I forget the keinfon, I want to be warned about my gear leaking all the data to o2 (my network operator). I hence wrote a shell script display-phone-status.sh like this:

    #!/bin/sh
    if /usr/sbin/rfkill list | grep -A3 "Wireless WAN" | grep 'blocked: yes' > /dev/null; then
      echo "WWAN blocked."
    else
      /usr/games/xcowsay -t 10 -f "Steve Italic 42" --at 0,520 --image ~/misc/my-icons/telephone.xpm 'Ich petze gerade!'
    fi
    

    You'll want to change the notification, for instance because you won't have the nice icon and may not find the font appropriate. The German in there means “I'm squealing on you.” Here's how this works out:

    Screenshot: an old-style telephone with a balloon saying „Ich petze gerade“

    I execute that at every wakeup, which is a bit tricky because xcowsay needs to know the display. If you still run pm-utils and are curious how I'm doing that, poke me and I'll write a post.

    Connection

    Mainly because tooling for MBIM and other more direct access methods felt fairly painful last time I looked, I am still connecting through PPP, even though that's extremely silly over an IP medium like LTE. Part of the reason I'm writing this post is that duckduckgo currently returns nothing useful if you look for “o2 connection string” or something like that. I tried yesterday because, surprisingly, while the internet connection worked over GSM, when connected over LTE (see below on how I'm controlling that) executing the good old:

    AT+CGDCONT=1, "IPV4V6", "internet"
    

    would get me an ERROR. That command – basically specifying the protocol requested and the name of an „access point“ (and no, I have never even tried to figure out what role that „access point“ might have had even in GSM) – admittedly seems particularly silly in LTE, where you essentially have an internet connection right from the start. I'm pretty sure it didn't use to hurt LTE connections three years ago, though. Now it does, and so that's my chat script for o2 (put it into /etc/ppp/chat-o2 with the peer definition below):

    TIMEOUT 5
    ECHO ON
    ABORT 'BUSY'
    ABORT 'ERROR'
    ABORT 'NO ANSWER'
    ABORT 'NO CARRIER'
    ABORT 'NO DIALTONE'
    ABORT 'RINGING\r\n\r\nRINGING'
    '' "ATZ"
    OK 'ATQ0 V1 E1 S0=0 &C1 &D2 +FCLASS=0'
    OK "\d\dATD*99#"
    CONNECT ""
    

    You can probably do without almost all of this and just run ATD*99# if you're stingy; but over the past 15 years of using cellular modems in one way or another, each piece of configuration was useful at one time. I'm not claiming they are now.
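
    The stingy version would then be something like this (untested in this bare form):

    ABORT 'ERROR'
    '' "ATD*99#"
    CONNECT ""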

    Similarly, my /etc/ppp/peers/o2 configuration file might contain a bit of cruft:

    /dev/ttyACM0
    115200
    debug
    noauth
    usepeerdns
    ipcp-accept-remote
    ipcp-accept-local
    remotename any
    user thing
    local
    nocrtscts
    defaultroute
    noipdefault
    connect "/usr/sbin/chat -v -f /etc/ppp/chat-o2"
    
    lcp-echo-interval 300
    lcp-echo-failure 10
    

    I'd expect the liberal LCP configuration at the bottom of the file is still very beneficial in the o2 network.

    To manage the network, I use normal Debian ifupdown with this stanza in /etc/network/interfaces:

    iface o2 inet ppp
      provider o2
    

    To bring up the interface, I have an icon on my desktop that executes sudo ifup o2.

    Monitoring

    To see what's going through a network connection, I have a script monitor in /etc/network/if-up.d; this is unconditionally executed once an interface comes up. A case statement brings up wmnet instances with parameters somewhat adapted to the respective interfaces:

    #!/bin/sh
    case $IFACE in
    wlan* )
      su - anselm -c 'DISPLAY=:0 nohup wmwave -r 200' > /dev/null 2>&1 &
      su - anselm -c "DISPLAY=:0 nohup wmnet -l -x 1000000 -d 200000 -r green -t red -W $IFACE" > /dev/null 2>&1 &
      ;;
    ppp*)
      su - anselm -c "DISPLAY=:0 nohup wmnet -l -x 1000000 -d 200000 -r green -t red -W $IFACE" > /dev/null 2>&1 &
      ;;
    o2 | n900)
      su - anselm -c "DISPLAY=:0 nohup wmnet -l -x 1000000 -d 200000 -r green -t red -W ppp0" > /dev/null 2>&1 &
      ;;
    esac
    

    The complicated su logic is necessary because again, the little window maker dockapps need access to the X display.

    That whole part is a bit weak, not only because of the hard-coded user name and DISPLAY (these are fairly safe bets for a personal computer) but also because it relies on some configuration of your window manager to place the dockapps at predictable positions.

    More importantly, ifupdown executes the script too early: To ifupdown, the interface is up when the pppd is up. But it is only then that pppd starts to negotiate, and these negotiations fail quite easily (e.g., when you're in a dead zone, and there are plenty of those with o2). If that happens, you have an essentially dead wmnet on the desktop. I clean up rather unspecifically in /etc/network/if-down.d/monitor:

    #!/bin/sh
    case $IFACE in
    wlan* )
      killall wmwave
      killall wmnet
      ;;
    ppp*|o2|n900)
      killall wmnet
      ;;
    esac
    exit 0
    

    The implicit assumption here is that the computer will only have one wireless network connection at a time.

    Modem Configuration

    I used to have to do a bit of modem configuration in the olden days. It's rarer these days, but I thought I might as well publish the source of a program I wrote back then to encapsulate that configuration. I still find it useful now and then to choose between the access methods LTE (fast, but perhaps smaller cells and hence less stable) and GSM (slow, but perhaps more robust with larger cells and better coverage), which this script can do if your card supports the AT+XACT command. While I would guess that includes many Sierra modems, I have no idea how many that may be. Anyway, here's how that ought to look (and perhaps the most relevant piece of information is the >home<, which means there's an infrastructure connection – as opposed to, for instance, >offline<):

    $ modemconfig.py -a LTE
    Modem is >home<
    Using LTE
    Running on band BAND_LTE_1
    

    If you think you might find this kind of thing useful: It's on https://codeberg.org/AnselmF/sierra-config, and it's trivial to install.

  • Help wanted: PCM telephony on a Sierra EM7345

    For fairly silly reasons I would like to do voice calls using a Sierra Wireless EM7345 4G LTE wireless modem built into a Lenovo Thinkpad X240. However, I am stuck between a lack of documentation and the horrors of proprietary firmware blobs to the extent that I am even unsure whether that's possible without reprogramming the whole device. If you can help me, I'd appreciate any sort of hand-holding. If you think you know someone who might be able to help me, I'd appreciate if you pointed them to this post.

    The analog stuff

    What works nicely is calling voice numbers. After a minicom -D /dev/ttyACM0, I can do:

    AT DT 062210000000
    NO CARRIER
    AT DT 062210000000;
    OK
    
    NO CARRIER
    

    The first command is attempting a data connection that fails because it's a real telephone at the other end. The semicolon in the second command says “do voice”. It actually makes the remote phone ring a few seconds after the modem said OK. I can pick up the call on that remote phone, too, but fairly unsurprisingly there is just silence at the computer, and whatever I say at either end goes nowhere. The eventual NO CARRIER is when I hang up the phone.

    The other way round works as well: seeing the good old RING and typing ATA like in the good old days warmed my heart. Hanging up with an ATH was fun, too. But when no sound is being transported through the Sierra card, these games quickly become boring.

    As usual for entities like Sierra, they don't give their documentation to just anyone (as in „me”). I still happen to have a PDF titled „MP 700 Series GPS Rugged Wireless Modem AT Command Reference”, which pertains to some different but reasonably similar device. There, it says:

    If your account allows for it, you can attach a headset to your modem and use it as a mobile phone. You require a 4-wire headset with a 2.5 mm connector, to use your modem as a phone. (This plugs into the Audio connector on the back of the modem. You may need an extension cable if the modem is installed in the trunk. Contact your service provider to determine what extension cables are supported.)

    Well… The small EM 7345 certainly does not have a 2.5 mm connector. There aren't even soldering pads visible without peeling off stickers:

    Photo of the interior of a computer with some small extension cards.  One of them has a big sticker on it saying it's a Sierra device.

    The Sierra modem as fitted into a Lenovo X240: Certainly no 2.5 mm connectors visible.

    There is also no trace of the Sierra card in the ALSA mixer, and neither is there an ALSA card they could have put in as a USB audio device. Hence, at this point I believe getting out some sort of analog-ish audio is unrealistic.

    Go digital

    However, what if I could pull PCM bytes or perhaps GSM-encoded audio from the device in some way? A thread in the Sierra forum seems to indicate it could work but then trails off into mumbling about firmware versions. Some further mindless typing into search engines suggested to me that a “version 6” of the firmware should be able to do PCM voice (in some way not discussed in useful detail). Version 6 sounds a bit menacing to me in that my device says:

    at+GMR
    V1.1,00
    

    I faintly remember having once tried to update the firmware and eventually giving up after some quality time with WINE. On that background, skipping five major versions sounds particularly daring („the Evel Knievel upgrade: what could possibly go wrong except 40 to 50 broken bones“). But then Sierra's support page doesn't even acknowledge the 7345's existence any more.

    While Sierra itself does not give its documentation to the unwashed masses, on some more or less shady page I found documentation on the AT commands of one of its successors, the EM7355. That appears to have a lot of PCM-related commands. In particular:

    Note: To enable audio on an audio-capable device, use the “ISVOICEN” customization for AT!CUSTOM (see page 32 for details).

    Regrettably, on my box:

    AT !CUSTOM ISVOICEEN=1
    ERROR
    

    Actually, it would seem that none of the various Sierra-proprietary AT commands starting with a bang are present in my firmware.

    That's where I stand. Does anyone have deeper insights into whether I could have GSM voice calls on that board without reverse-engineering the whole firmware?

    A tale of two cards

    In case you are wondering why I would even want to do GSM telephony with my computer… Well, I have a 4.99 Euro/month 1 GB+telephony flatrate with Winsim (turn off Javascript to avoid their broken cookie banner). While I can recommend Winsim for telephone support far better than you'd expect at that price (of course: the network coverage isn't great, it's just a Telefonica reseller, and forget about using the e-mail support), they'll charge you another five Euro or so monthly for a second SIM card in that plan, whereas you can get a SIM card for free if you get a second pre-paid contract.

    I'm not sure what reasoning is behind two contracts with two cards being cheaper than one contract with two cards, but then telephony prices stopped making any sense a long time ago.

    Since my phone can only do UMTS and GSM (i.e., only GSM these days in Germany) and I have the LTE modem inside the computer anyway, I recently transferred the SIM with the flatrate into the LTE modem so my garden office has a faster internet connection than when I'm using the phone as a modem. Consequently, I now have another (pre-paid) card in the phone. The net effect is that I could do telephone calls for free on the computer if I could just figure out the audio part – whereas naive VoIP doesn't really work in much of the network because of packet loss, latencies, low bandwidth and so on – and I pay 9 ct per minute for GSM telephony on the phone.

    I give you that's probably not a sufficient reason to sink hours of research into the stupid Sierra card. But I'd also have the BIGGEST PHONE ON THE WHOLE TRAIN if I just could pull it off!

    Addendum (2023-06-21)

    Well, on the “new firmware“ part, I found https://lists.freedesktop.org/archives/libqmi-devel/2018-August/002951.html. And oh my, of course Intel don't publish the sources to their firmware flash thingy. That's extremely bad for me because they don't bother to build i386 binaries and now I have to dual-arch in half a linux system:

    ldd /opt/intel/platformflashtoollite/bin/platformflashtoollite
            linux-vdso.so.1 (0x00007ffcf43ed000)
            libdldrapi.so => /opt/intel/platformflashtoollite/lib/libdldrapi.so (0x00007f61d4000000)
            libCore.so => /opt/intel/platformflashtoollite/lib/libCore.so (0x00007f61d3c00000)
            libNetwork.so => /opt/intel/platformflashtoollite/lib/libNetwork.so (0x00007f61d3800000)
            libDeviceManager.so => /opt/intel/platformflashtoollite/lib/libDeviceManager.so (0x00007f61d3400000)
            libLogger.so => /opt/intel/platformflashtoollite/lib/libLogger.so (0x00007f61d3000000)
            libJson.so => /opt/intel/platformflashtoollite/lib/libJson.so (0x00007f61d2c00000)
            libDldrManager.so => /opt/intel/platformflashtoollite/lib/libDldrManager.so (0x00007f61d2800000)
            libUtilityWidgets.so => /opt/intel/platformflashtoollite/lib/libUtilityWidgets.so (0x00007f61d2400000)
            libQt5Xml.so.5 => /opt/intel/platformflashtoollite/lib/libQt5Xml.so.5 (0x00007f61d2000000)
            libQt5Widgets.so.5 => /opt/intel/platformflashtoollite/lib/libQt5Widgets.so.5 (0x00007f61d1600000)
            libQt5Gui.so.5 => /opt/intel/platformflashtoollite/lib/libQt5Gui.so.5 (0x00007f61d0c00000)
            libQt5Network.so.5 => /opt/intel/platformflashtoollite/lib/libQt5Network.so.5 (0x00007f61d0800000)
            libQt5Script.so.5 => /opt/intel/platformflashtoollite/lib/libQt5Script.so.5 (0x00007f61d0200000)
            libxfstk-dldr-api.so => /opt/intel/platformflashtoollite/lib/libxfstk-dldr-api.so (0x00007f61cfe00000)
            libPlatformUtils.so => /opt/intel/platformflashtoollite/lib/libPlatformUtils.so (0x00007f61cfa00000)
            libQt5Core.so.5 => /opt/intel/platformflashtoollite/lib/libQt5Core.so.5 (0x00007f61cf200000)
            libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f61d3ebc000)
            libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f61d4530000)
            libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f61d322c000)
            libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f61d450c000)
            librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007f61d4502000)
            libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f61d44fc000)
            /lib64/ld-linux-x86-64.so.2 (0x00007f61d457b000)
            libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007f61d2e33000)
            libUSBScan.so => /opt/intel/platformflashtoollite/lib/libUSBScan.so (0x00007f61cee00000)
            libgobject-2.0.so.0 => /usr/lib/x86_64-linux-gnu/libgobject-2.0.so.0 (0x00007f61d44a0000)
            libgthread-2.0.so.0 => /usr/lib/x86_64-linux-gnu/libgthread-2.0.so.0 (0x00007f61d449b000)
            libglib-2.0.so.0 => /usr/lib/x86_64-linux-gnu/libglib-2.0.so.0 (0x00007f61d3ad1000)
            libXext.so.6 => /usr/lib/x86_64-linux-gnu/libXext.so.6 (0x00007f61d4486000)
            libX11.so.6 => /usr/lib/x86_64-linux-gnu/libX11.so.6 (0x00007f61d36bd000)
            libGL.so.1 => /usr/lib/x86_64-linux-gnu/libGL.so.1 (0x00007f61d3e35000)
            libusb-0.1.so.4 => /lib/x86_64-linux-gnu/libusb-0.1.so.4 (0x00007f61cea00000)
            libboost_program_options.so.1.46.1 => /opt/intel/platformflashtoollite/lib/libboost_program_options.so.1.46.1 (0x00007f61ce600000)
            libicui18n.so.54 => /opt/intel/platformflashtoollite/lib/libicui18n.so.54 (0x00007f61ce000000)
            libicuuc.so.54 => /opt/intel/platformflashtoollite/lib/libicuuc.so.54 (0x00007f61cdc00000)
            libicudata.so.54 => /opt/intel/platformflashtoollite/lib/libicudata.so.54 (0x00007f61cc000000)
            libudev.so.0 => /usr/lib/x86_64-linux-gnu/libudev.so.0 (0x00007f61d447d000)
            libffi.so.7 => /usr/lib/x86_64-linux-gnu/libffi.so.7 (0x00007f61d4471000)
            libpcre.so.3 …
  • Feedback and Addenda in Pelican Posts

    Screenshot: a (relatively) rude comment and a reply, vaguely reminiscent of classic slashdot style.

    Blog comments may be dead out there; here, I'd like to at least pretend they're still alive, and thus I've written a pelican plugin to properly mark them up.

    When I added a feedback form to this site about a year ago, I also created a small ReStructuredText (RST) extension for putting feedback items into the files I feed to my blog engine Pelican. The extension has been sitting in my pelican plugins repo on codeberg since then, but because there has not been a lot of feedback on either it or the posts here (sigh!), that was about it.

    But occasionally a few interesting (or at least provocative) pieces of feedback did come in, and I thought it's a pity that basically nobody will notice them[1] or, (cough) much worse, my witty replies.

    At the same time, I had quite a few addenda to older articles, and I felt some proper markup for them (plus better chances for people to notice they're there) would be nice. After a bit of consideration, I figured the use cases are similar enough, and I started extending the feedback plugin to cover addenda, too. So, you can pull the updated plugin from codeberg now. Other people running it on their sites would certainly encourage me to add it to the upstream's plugin collection (after some polishing, that is).

    Usage is simple – after copying the file to your plugins folder and adding "rstfeedback" to PLUGINS in pelicanconf.py, you write:

    .. feedback::
        :author: someone or other
        :date: 2022-03-07
    
        Example, yadda.
    

    for some feedback you got (you can nest these for replies) or:

    .. addition::
      :date: 2022-03-07
    
      Example, yadda.
    

    for some addition you want to make to an article; always put in a date in ISO format.

    In both cases a structured div element is generated in the HTML, which you can style in some way; the module comment shows how to get what's shown in the opening figure.

    The extension also adds a template variable LAST_FEEDBACK_ITEMS containing a list of the last ten changes to old posts. Each item is an instance of some ad-hoc class with attributes url, kind (feedback or addendum), the article title, and the date. On this blog, I'm currently formatting it like this in my base template:

    <h2>Letzte Ergänzungen</h2>
    <ul class="feedback">
    {% for feedback_item in LAST_FEEDBACK_ITEMS %}
            <li><a href="{{ SITEURL }}/{{ feedback_item.url }}">{{ feedback_item.kind }} zu „{{ feedback_item.title }}“</a> ({{ feedback_item.date }})</li>
    {% endfor %}
    </ul>
    

    As of this post, this block is at the very bottom of the page, but I plan to give it a more prominent place at least on wide displays real soon now. Let's see when I feel like a bit of CSS hackery.

    Caveats

    First of all, I have not localised the plugin, and for now it generates German strings for “Kommentar” (comment), “Nachtrag” (addendum) and “am” (on). This is relatively easy to fix, in particular because I can tell an article's language from within the extension from the article metadata. True, that doesn't help for infrastructure pages, but then these won't have additions anyway. If this found a single additional user, I'd happily put in support for your preferred language(s) – I should really be doing English for this one.

    This will only work with pages written in ReStructuredText; no markdown here, sorry. Since in my book RST is so much nicer and better defined than markdown and at the same time so easy to learn, I can't really see much of a reason to put in the extra effort. Also, legacy markdown content can be converted to RST using pandoc reasonably well.

    If you don't give a slug in your article's metadata, the plugin uses the post's title to generate a slug like pelican itself does by default. If you changed that default, the links in the LAST_FEEDBACK_ITEMS will be wrong. This is probably easy to fix, but I'd have to read a bit more of pelican's code to do it.

    I suppose the number of recent items – now hardcoded to be 10 – should become a configuration variable, which again ought to be easy to do. A more interesting (but also more involved) additional feature could be to have per-year (say) collections of such additions. Let's say I'm thinking about it.

    Oh, and error handling sucks. That would actually be the first thing I'd tackle if other people took the rstfeedback plugin up. So… If you'd like to have these or similar things in your Pelican – don't hesitate to use the feedback form (or even better your mail client) to make me add some finish to the code.

    [1]I made nginx write logs (without IP addresses, of course) for a while recently, and the result was that there's about a dozen human visitors a day here, mostly looking at rather recent articles, and so chances are really low anyone will ever see comments on old articles without some extra exposure.
  • Browsing Peace and Privacy With dnsmasq

    Screenshot of the dnsmasq extra configuration page in freetz

    You can even have the DNS-based adblocking discussed here in your whole network if your router runs dnsmasq (it probably does) and you can edit its configuration (you probably can't). As shown here, with freetz you can.

    I'm not a big fan of in-browser adblocking. For one, I have my doubts about several of the extensions – Adblock plus, for instance, comes from a for-profit, though I give you this critique might be partisan. Also, I like to switch browsers freely and certainly don't want to maintain block lists for each of them, and finally quite a few clients other than browsers may render HTML and hence ads.

    At least with the pages I want (and don't want) to read, there's a much lighter alternative: DNS-based adblocking. You see, on the relatively few commercial pages I occasionally have reason to visit, ads, tracking pixels, and nasty javascript typically are served from a rather small set of domains – doubleclick.net, googleadservices.com, and a few more like these. If I can make my computer resolve these names to 127.0.0.1 – that is, my computer in IPv4, or yours, if you type that address –, everything your browser would pull from these servers is instantly gone in everything rendering HTML.

    So, how do you do that? Well, you first make sure that your computer does the name resolution itself[1]. On Debian, you do that by installing the packages resolvconf (without a second e; in a systemd environment I think you want to use systemd-resolved instead) and dnsmasq; that's really all, and that ought to work out of the box in all reasonably common situations:

    $ sudo apt install resolvconf dnsmasq
    

    You will probably have to bring your network down and up again for this to take effect.
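
    You can tell the local resolver is in place when the machine queries itself for DNS; once the network is back up, /etc/resolv.conf should read something like:

    $ cat /etc/resolv.conf
    # Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
    nameserver 127.0.0.1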

    Once that's done, you can tell dnsmasq what names to resolve to what. The man page dnsmasq(8) documents what to do under the --address option – you could actually configure dnsmasq through command line options exclusively –, where you can read:

    -A, --address=/<domain>[/<domain>...]/[<ipaddr>]

    Specify an IP address to return for any host in the given domains. […] A common use of this is to redirect the entire doubleclick.net domain to some friendly local web server to avoid banner ads. The domain specification works in the same was [sic, as of bullseye] as for --server […]

    – and from the documentation of --server you learn that <domain> is interpreted as a suffix (if you will), such that if you give an address for, say, google.com, it will also be used for foo.google.com or foo.bar.google.com.

    But where do these address expressions go? Well, at least in Debian, dnsmasq will read (essentially, see the README in there) any file you drop into /etc/dnsmasq.d and add its content to its configuration. Having configuration snippets in different files really helps maintenance and dist-upgrades in general; in this case, it also helps distributing the blacklist, as extra configuration that may be inappropriate on a different host is kept in some other file.

    I tend to prefix snippet names with numbers in case order might one day matter. So, I have a file /etc/dnsmasq.d/10spamreduce.conf containing:

    address=/doubleclick.net/127.0.0.1
    address=/xiti.com/127.0.0.1
    address=/adform.net/127.0.0.1
    address=/qualtrics.com/127.0.0.1
    address=/criteo.com/127.0.0.1
    address=/exactag.com/127.0.0.1
    address=/optimizely.com/127.0.0.1
    address=/googleadservices.com/127.0.0.1
    address=/googletagmanager.com/127.0.0.1
    address=/ivwbox.com/127.0.0.1
    address=/ivwbox.de/127.0.0.1
    address=/connect.facebook.de/127.0.0.1
    address=/facebook.net/127.0.0.1
    address=/facebook.com/127.0.0.1
    address=/addthis.com/127.0.0.1
    address=/update.googleapis.com/127.0.0.1
    address=/googleusercontent.com/127.0.0.1
    address=/edgekey.net/127.0.0.1
    address=/ioam.de/127.0.0.1
    address=/cookiebot.com/127.0.0.1
    address=/moatads.com/127.0.0.1
    address=/fonts.gstatic.com/127.0.0.1
    address=/fonts.googleapis.com/127.0.0.1
    address=/ping.chartbeat.net/127.0.0.1
    address=/cookielaw.org/127.0.0.1
    

    When you do the same thing, you should restart dnsmasq and then see the effect like this:

    $ sudo service dnsmasq restart
    $ dig +short fonts.gstatic.com
    127.0.0.1
    

    As you can see, I have also included some trackers and other sources of annoyance in my address list. Of course, if you actually want to read Facebook (ugh) or need to pull Google's fonts (ughugh), you'll have to adapt that list a bit.

    In case you have interesting and useful contributions to this list: Please do write in!

    [1]Regrettably, with things like DNS over HTTPS, it could be that your browser actually will not use your computer's DNS resolver. Adblocking hence is one extra reason to disable DoH when you see it.
  • What to do when github eats 100% CPU in luakit

    I can't help it: As probably just about every other programming life form on this planet I have to be on github now and then. Curse the network effect and all those taking part in it (which would by now include me).

    Anyway, that's why the last iteration of luakit bug #972 (also on github. Sigh) bit me badly: as long as the browser is on a github page, it will spend a full 100% of a CPU on producing as many error messages as it can, each reading:

    https://github.githubassets.com/<alphabet soup>1:8116:
    CONSOLE JS ERROR Unhandled Promise Rejection:
    TypeError: undefined is not an object (evaluating 'navigator.clipboard.read')
    

    Github being a commercial entity, I figured it's a waste of time trying to file a bug report. And the problem didn't fix itself, either.

    So, I went to fix it (in a fashion) with a userscript. Since the problem apparently is that some github code doesn't properly catch a missing (or blacklisted) clipboard API in a browser (and I still consider blacklisting that API an excellent idea), I figured things should improve if I gave github something similar enough to an actual clipboard. It turns out it does not need to be terribly similar at all. So, with a few lines of Javascript, while github still sucks, at least it doesn't eat my CPU any more.

    What do you need to do? Just create a userscript like this (for luakit; other browsers will have other ways):

    cd
    mkdir -p .local/share/luakit/scripts
    cat > .local/share/luakit/scripts/github.user.js
    

    Then paste the following piece of Javascript into the terminal:

    // ==UserScript==
    // @name          clipboard-for-github
    // @namespace     http://blog.tfiu.de
    // @description   Fix github's 100% CPU usage due to unhandled clipboard errors
    // @include       https://github.com*
    // ==/UserScript==
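    // What follows is a minimal stub standing in for the real clipboard
    // API: github only needs to see *something* there, so an empty
    // read() is enough to stop the unhandled promise rejections.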
    navigator.clipboard = Object()
    navigator.clipboard.read = function() {
            return "";
    }
    

    As usual with this kind of thing, at least have a quick glance at what this code does; the four lines of source code that suffice here are at least easy to review. Finish off with a control-D, go to a luakit window and say :uscripts-reload.

    If you then go to, say, bug #972, your CPU load should stay down. Of course, as long as github blindly tries to use the navigator.clipboard object for “copy link”-type operations, these still won't work. But that's now github's problem, not mine.

    And anyway: Give up Github.

  • Work-Life Balance and Privacy with Bash, D-Bus, gajim and ifupdown

    A small screenshot showing an offline icon

    Sunday morning: my gajim is automatically offline. This post explains how I'm doing that.

    I still consider XMPP the open standard for “chat” (well, instant messaging), and I have been using Psi as an XMPP client for almost 20 years now. However, since Psi has occasionally crashed on me recently (as in: at least since Bullseye), presumably on receiving some message, I consider it a certainty that it is remotely exploitable. Given its large codebase, I don't think I want to fix whatever is wrong myself, and I don't think there are still people maintaining Psi.

    I therefore migrated to gajim last week; after all, one of the nice things about open standards is that there are usually multiple implementations. This, however, made me update an ancient hack to automatically manage my status so that I'm XMPP-offline when it's nobody's business whether or not my machine is on.

    In this post, I'd like to tell you how that works, hoping it may be useful to solve other (but similar; for instance: get offline when doing talks) problems, too.

    Not Always Online

    First off, the major reason I'm not much of a fan of synchronous messaging (which IM is, and email is not) is that it requires some sort of “presence” notification: something needs to know whether I am online, and where I can be reached. In XMPP, at least, all your contacts get to know that, too.[1]

    While I admit that can be useful at times, during the night and on weekends I really don't want to publish when my computer is on and when it's not. Hence, I have so far told my Psi, and I am now telling my gajim, not to automatically re-connect on weekends or between 20:00 and 7:00. That I can specify this perhaps somewhat unusual preference at all illustrates how great it is to have shell integration everywhere. The ingredients are:

    • ifupdown, Debian's native network management. If you're using systemd or NetworkManager or something, I think these use other hooks [if you've tried it, let me know so I can update this].
    • D-Bus, a framework to communicate between programs sitting on a common X11 display (though with gajim, D-Bus becomes somewhat hidden).
    • the shell, which lets you write little ad-hoc programlets and duct-tape together all the small utilities that have accumulated in Unix since the early 1970s (here: logger, date, and egrep).

    Inter-Process Communication with D-Bus

    The first thing I want to do is take gajim offline before a network interface goes down. That way, people don't have to wait for timeouts to see I am unavailable (unless someone pulls the cable or the Wifi disappears – without a network, gajim can't sign off). That means I have to control a running gajim from the outside, and the standard way to do that these days is through D-Bus, a nifty, if somewhat over-complicated, way of calling functions within programs from other programs.

    One of these other programs is qdbus, which lets you inspect what listens on your session's (or, with an option, the system's) D-Bus and what functions you can call where. For instance:

    $ qdbus org.gajim.Gajim /org/gajim/Gajim
    ...
    method void org.gtk.Actions.SetState(QString action_name, QDBusVariant value, QVariantMap platform_data)
    ...
    

    In Psi, with a bit of fiddling, a generic D-Bus tool was enough to switch the state. Since there's a QDBusVariant in the arguments gajim's SetState method wants according to the qdbus output, I don't think I could get away with that after the migration – qdbus does not seem to be able to generate that kind of argument.
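
    By the way, if you would rather do this kind of spelunking from Python than with qdbus, the dbus module from Debian's python3-dbus package covers the inspection part, too. A minimal sketch, merely listing who is connected to the session bus:

    #!/usr/bin/python
    # List the well-known names on the session bus -- roughly what
    # qdbus prints when called without arguments.
    import dbus

    bus = dbus.SessionBus()
    for name in sorted(bus.list_names()):
        if not name.startswith(":"):  # skip anonymous connection ids
            print(name)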

    Enter gajim-remote

    But gajim comes with a D-Bus wrapper of its own, gajim-remote, and with that, you can run something like:

    gajim-remote change_status offline
    

    Except that won't work out of the box. That's because gajim comes with remote control disabled by default.

    To enable it, go to Preferences → Advanced, click Advanced Configuration Editor there, and then look for the remote_control configuration item. I have no idea why they've hidden that eminently useful setting so well.

    Anyway, once you've done that, you should be able to change your status with the command above and:

    gajim-remote change_status online
    

    ifupdown's Hooks

    I now need to arrange for these commands to be executed when network interfaces go up and down. These days, it would probably be smart to go all the way and run a little daemon listening to D-Bus events, but let me be a bit less high-tech, because last time I looked, something like that required actual and non-trivial programming.

    In contrast, if you are using ifupdown to manage your machine's network interfaces (and I think you should), all it takes is a bit of shell scripting. That's because ifupdown executes the scripts in /etc/network/if-up.d once a connection is up, and the ones in /etc/network/if-down.d before it brings a connection down in a controlled fashion. These scripts see a few environment variables that tell them what's going on (see interfaces(5) for a full list), the most important of which are IFACE (the name of the interface being operated on), and MODE, which would be start or stop, depending on what ifupdown is doing.

    The idea is to execute my change_status commands from these scripts. To make that a bit more manageable, I have a common script for both if-up.d and if-down.d. I have created a new subdirectory /etc/network/scripts for such shared ifupdown scripts, and I have placed the following file in there as jabber:

    #!/bin/sh
    # State management of gajim
    
    DESKTOP_USER=msdemlei

    logger Jabber: $MODE $IFACE $LOGICAL

    case $MODE in
    start)
      case $IFACE in
      eth* | wlan* | n900)
        if ! date +'%w/%H' | grep '[1-5]/\(0[789]\|1[0-9]\)'  > /dev/null; then
          exit 0
        fi
        su - $DESKTOP_USER -c 'DISPLAY=:0 gajim-remote change_status online "Got net"' > /dev/null || exit 0
        ;;
      esac
      ;;
    
    stop)
      case $IFACE in
      eth* | wlan* | n900)
        if [ tonline = "t`su $DESKTOP_USER -c 'DISPLAY=:0 gajim-remote get_status'`" ]; then
          su - $DESKTOP_USER -c "DISPLAY=:0 gajim-remote change_status offline 'Losing network'" || exit 0
          sleep 0.5
        fi
        ;;
      esac
      ;;
    esac
    

    After chmod +x-ing this file, I made symbolic links like this:

    ln -s /etc/network/scripts/jabber /etc/network/if-down.d/
    ln -s /etc/network/scripts/jabber /etc/network/if-up.d/
    

    – and that should basically be it (once you configure DESKTOP_USER).
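
    In case the date +'%w/%H' | grep line looks cryptic: %w is the weekday (0 being Sunday), %H the hour, and so the regular expression matches exactly on Monday through Friday between 07:00 and 19:59 – only then will the script go online. If you would like to play with the window, here is the same test as a small Python sketch (purely illustrative; the shell script above does not use it):

    #!/usr/bin/python
    # The working-hours test from the jabber script, spelled out:
    # online Monday through Friday between 07:00 and 19:59 only.
    import datetime

    def want_online(now=None):
        now = now or datetime.datetime.now()
        return now.isoweekday() <= 5 and 7 <= now.hour <= 19

    if __name__ == "__main__":
        print(want_online())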

    Nachtrag (2023-12-02)

    Let me admit that this never really worked terribly well with gajim, mainly because – I think – its connections don't time out, and so once a status update has failed for one reason or another, gajim would end up in a sort of catatonic state. That's one of the reasons I switched to pidgin, whose state management in turn broke when upgrading to Debian bookworm. My current script is near the bottom of this December 2023 post.

    Debugging Admin Scripts

    Because it is a mouthful, let me comment a bit about what is going on:

    logger Jabber: $MODE $IFACE $LOGICAL
    

    logger is a useful program for when you have scripts started deep within the bowels of your system. It writes messages to syslog, which effectively lets you do printf debugging of your scripts. Once everything works for a script like this, you probably want to comment the logger lines out.

    Note that while developing scripts of this kind, it is usually better to just get a normal shell, set the environment variables (or pass the arguments) that you may have obtained through logger, and then run the script interactively, possibly with a -x option (print all statements executed) passed to sh. For instance:

    $ MODE=start IFACE=wlan0 sh -x /etc/network/scripts/jabber
    + DESKTOP_USER=anselmf
    + logger Jabber: start wlan0
    + case $MODE in
    + case $IFACE in
    + date +%w/%H
    + grep '[1-5]/\(0[789]\|1[0-9]\)'
    + exit 0
    

    – that way, you see exactly what commands are executed, and you don't have to continually watch /var/log/syslog (or journalctl if that's what you have), not to mention (for instance) bring network interfaces up and down all the time.

    Case Statements in Bourne's Legacy

    The main control structure in the script is:

    case $MODE in
    start)
      ...
      ;;
    stop)
      ...
      ;;
    esac
    

    Case statements are one of the more powerful features of descendants of the Bourne shell. Read about them in the excellent ABS in case you are a bit mystified by the odd syntax and the critically important ;; lines.

    The particular case construct here is there so I can use the same script for if-up.d and if-down.d: it dispatches on whatever is in MODE. In case MODE is something other than start or stop, we silently do nothing. That is not always a good idea – programs failing without complaint are a major reason for the lack of hair on my head – but since this isn't really user-callable, it's probably acceptable behaviour.

    General rule of thumb, though: Be wary of case .. esac without a *) (which gives commands executed when nothing …

  • Garmin GPSmap 60Cx and 60CSx do support SDHC and SDXC

    Photo of a hand-held GPS receiver showing, among others “Höhe 9662 m”

    Ah, nostalgia! I just migrated back to the type of my first GPS receiver, in action here when I was – flight shame! – at just about 9700 metres.

    Since there is so much confident misinformation about this on the net, I'm stooping to the depravities of SEO and have put my main point into the headline: If you have a Garmin GPSmap device from the later 60 series and worry whether you'll still get SD cards that will work with it, relax: I just bought a 64 GB SDXC card and made it work inside a 60CSx that will replace the 64s I have had since 2017 (which hopefully has found a new home in Paris. Ahem).

    While it is by no means certain that a vintage 2006 machine can deal with SDHC (since that was only specified in January 2006), and machines that only support SD 1.0 usually get confused by higher-capacity cards, Garmin got it right for these devices. You may be out of luck with the vintage 2003 60C and 60CS, but I have no means of ascertaining that.

    In case you're wondering why anyone bothers with hardware that's coming of age these days: I was surprised myself that the things still go for about 100 Euro on ebay, but they are well-made, and very frankly: except for the support for multiple gmapsupp.imgs, I see the GPSmap 64s as a step back from my original 60Cx, in particular when it comes to finding features. The GPSmaps are also sturdy devices that survive 20 years of (ab)use, and they run on standard NiMH cells. What's not to like?

    But, sure enough, when you insert a card you get at your local chemist's today (which will be SDXC) into the 60CSx, it will not (immediately) be recognised. That is because while the machine can deal with SDHC (and thus SDXC) Card-Specific Data – so the hardware is fine – it cannot deal with exFAT file systems (and preformatted exFAT, really, is the difference between SDXC and SDHC). To my surprise, it could deal with a FAT32 file system, so running, on a linux host with a card reader, sudo mkfs.fat /dev/mmcblk0p1 was all I needed to do to make the device see the file system.

    For reference: the way to figure this out is to create a Garmin subdirectory on the card, and put some (sufficiently small; there's still the 4GB limit on file sizes with FAT32) gmapsupp.img file in there. If you don't have one, my favourite sources at this point are Computerteddy's classics or frikart's renderings.

    You should now see the map data on the device.

    In case reformatting as FAT32 does not do the trick for you: I seem to remember my old 60CS insisted on FAT16, which would explain the talk about a 4 GB limit for the card that's reported in multiple places. If this is true for your 60CS, fetch gparted from your distribution's repository, run it on your SD card, resize[1] the existing partition to 4096 MB, tell gparted to put a FAT16 file system on it, and then try again.

    Nachtrag (2023-10-03)

    There is another snag when you run GPSmap devices on suitably configured Linux systems and want to use the pleasantly unconventional gpsman to manage tracks and waypoints: power management.

    The way gpsman interacts with the USB port makes Linux suspect it doesn't use it at all and then suspend the device if you have enabled USB autosuspend; in consequence, you cannot talk to the device any more, and gpsman's “check device” will fail. And you probably have enabled USB autosuspend if you are on a mobile platform and use software like tlp.

    To still make gpsman work, turn off USB autosuspend at least for the GPSmap. In the tlp case, the way to do that is to figure out the USB id for the GPSmap (use lsusb), which turns up 091e:0003 for the 60CSx. Then tell tlp to leave such devices alone by adding that id to tlp's USB_DENYLIST variable. Don't edit /etc/tlp.conf directly, though, because that will make your dist-upgrades more painful. Rather, create a file /etc/tlp.d/10-stop-autosuspend and put something like:

    USB_DENYLIST="091e:0003"
    

    there.

    [1]Resize instead of drop and recreate because the card vendors seem to be doing some voodoo when aligning their pre-created partitions, supposedly to improve speed (or perhaps even to reduce wear, which probably isn't an issue on the Garmin devices which essentially never write). By resizing, you don't disturb the voodoo. Because, you know, it may work even if you don't believe in it.
  • BahnBonus without a Google Id and on your own computer

    Screenshot: a colourful app screen with little information and an appeal for donations. It is the Bahn's BahnBonus app.

    Object of desire: the BahnBonus app that will get me into the DB lounges again. And that entirely without Apple, with just a simple overdose of Google.

    Almost a year ago I made a grand confession: yes, I take part in one of those stupid, snooping customer loyalty programmes, and of all things in the Bahn's, where frequent travellers cosily sip cocoa in armchairs while ordinary passengers freeze on the platform outside or have to fight over the few seats in the station buildings: formerly bahn.comfort, now BahnBonus.

    In the post quoted above I lamented the loss of lounge access, because for almost a year now the Bahn has only been letting people into the lounges who use remotely-administered computers (“smartphones”), and rather new ones at that. Instead of the old plastic card, it now takes a, ahem, app. That app (as I now know and suspected back then) does little more than display the login screen of the Bahn web site and then generate links to QR codes. Probably somewhat naively, I hoped at the time that the Bahn would put the few lines of Javascript this takes onto their normal web page, too.

    That did not happen. When the Bahn recently sent me BahnBonus advertising on paper (“You have gold status!”), I once more wrote a mail to Bahn support asking what had become of QR codes on the web site. Once more, the answer was an unexplained no. That the Bahn included quite a few vouchers (in particular for lounge access) with the negative reply I took as “nah, we will never do that, even if it is simple”. Perhaps it is about collecting data, perhaps it is simply corporate policy.

    In any case: if I want to sip cocoa in the warmth again, I need some durable way to get at the QR codes. Remotely-administered computers are, for me, acceptable at best inside virtual machines, so I thought to myself: let me try whether I can get the BahnBonus app running on my normal computer.

    Turns out: it works, and if you let Google romp around inside a VM, even with tolerable effort. Let me write down what it took for me; this may help a bit with other digital coercions, too.

    Setting up Android in QEMU

    For the following steps I assume a Debian (bullseye) on an Intel or AMD system not much older than 15 years. In principle, though, almost all of this should work on any other platform that runs Qemu.

    If you skid off the road anywhere in the following steps, please let me know – I will gladly extend this narrative so that it makes sense to not overly nerdy people, too.

    (1) Install Qemu – Qemu is, first of all, an emulator of all kinds of hardware. But since Android is enormously resource-hungry (well: by my standards), everything would be terribly sluggish if the Android code were not executed directly by the CPU in your machine – so I will use Qemu as a virtualiser and only very secondarily as an emulator. In any case, make sure your qemu can do KVM. In return, you only need the amd64 flavour, not all the other architectures, and in particular not ARM. On Bullseye, something like this should do:

    apt install qemu-system-gui qemu-system-amd64
    

    [I myself, out of stinginess, took qemu-system-x86 at this point; that works too, and it makes everything a bit more compact].

    (2) Obtain Android-x86 – I honestly confess that I have not worried much about the trustworthiness of the people behind the port of Android to x86 processors. I simply downloaded a suitable ISO image from their FOSSHUB page (a crapicity of 10 gives hope); if you have installed the amd64 Qemu, you now want the “64-bit ISO file”.

    (3) Create a container for the Android file system – your Android has to store its files somewhere, and you certainly do not want to give it access to your real file system. So create a “virtual” hard disk for the Qemu data. One gigabyte will not be enough, not even with i386. If you are not worried about disk space: better build one with four gigabytes right away (4G at the end of the command line).

    Also find yourself a place where a lump of that size does not hurt too much. I will use ~/containers here (which you should then probably exclude from your backup):

    mkdir -p ~/containers
    qemu-img create -f qcow2 ~/containers/android.img 2G
    

    Display Trouble

    Now there is the problem that your future Android has to send its screen output somewhere. Qemu can render into an ordinary X window, but that is – for reasons I have not investigated – terribly slow. What worked well for me: VNC. If you cannot get along with that, try QDISPLAY="-display gtk" below (which may just be dead slow).

    (4) Start the Android installer – this takes a few options so the thing gets onto the net and finds the two files it needs (the virtual disk and the Android installer):

    QDISPLAY="-display vnc=localhost:0"
    qemu-system-amd64 $QDISPLAY -enable-kvm -m 2000 \
      -net nic -net user -drive file=$HOME/containers/android.img,format=qcow2 \
      -boot d -cdrom /media/downloads/android-x86-9.0-r2.iso
    

    You will most certainly have to adapt the path in the -cdrom option so it points to the ISO you have just downloaded. Now let a VNC client loose on localhost:5900 (that is where -display vnc=localhost:0 listens); these days I recommend remmina (from the Debian package of the same name).[1]

    (5) Configure the Android container – choose Installation to Hard disk, then Create/Modify Devices. You end up in a good old text-based partitioner. For the disk label you do not want GPT (because that gives trouble with the boot loader GRUB later). The storage you are partitioning here is what you created in step 3. Put the whole “disk” into one partition, say Write (don't worry, with the options above you cannot destroy any of your own data) and then Quit.

    You then return to the Android installer. After an Ok you can choose the file system – take ext4.

    The installer then asks whether you want a GRUB – yes, you do, otherwise your Android will later come up only with a lot of effort.

    You probably do not want the System Directory read/write (unless you seriously intend to play with the Android). That saves quite a bit of space.

    (6) Let Android onto the net – at this point the installer should offer to start Android-x86. Do that. Choose a language – I left it at “English (United States)”.

    It may happen (it did for me) that the thing crashes after the language question and ends up at the installer's grub prompt again. If that happens, terminate qemu (i.e., control-C in its window) and look below, under starting the VM, for the command line that brings Qemu up without the installer. We are dealing with commercial software here, so rebooting until it is healthy is an entirely legitimate option.

    In any case: after the language selection, the thing wants to get onto the net, and I am afraid there is no point in forbidding it that. So search for networks. You should see exactly one, VirtWifi or some such. Select it, sigh, and meanwhile start a tcpdump -n on your real machine to see what your new Android is chatting with (cf. Die Wunden lecken).

    The “Checking for Updates” burned 100% CPU for minutes on end for me (one does not want to know what there is to compute there). Since I generally find the “I am doing something” feedback in the emulated Android rather poor, you may as well spend the time sprucing up the CPU load display on your desktop (my tip: wmcore).

    Then Android asks whether it can pull your data from somewhere. Sure: Google would love that. Fortunately there is a small “Don't Copy” button. Likewise, the Skip button in the next dialog, the Google sign-in, is rather small, and Google nags one extra time if you choose it. Choose it anyway. Date and Time, for a change, can be waved through without trouble, then comes a dialog on “Google Services”, all of which have to be switched off manually.

    This, apparently, is the user-friendliness (“user experience”) whose absence in the free software world I keep hearing so much about. There is, I believe, no way around accepting that Google can put stuff onto the VM anytime it pleases. But then, that is what it is a VM for.

    I find the following “Protect your Tablet” dialog interesting, because the user guidance that just now wanted to foist trust in Google onto me now sows distrust of other people, complete with a second extra warning dialog if I don't feel like device PINs. Honestly: when I am dealing with Google, human thieves are the least of my worries…

    Nor do I understand the final question about the home app. Just do something. With that, you are in the Android business.

    Apps without an App Store

    (7) Tidy up the home screen – if you want to tidy up the “home screen” right away: long-press an icon and drag it. A “Remove” field appears onto which you can drag the icon. Best do that with everything except Chrome. We will need that in a moment. The revolting Google bar cannot, I believe, be removed by these means. Why bother anyway – as you have just nodded through, the container belongs to Google.

    (8) Find the Bahn app – as far as I know, the Bahn does not publish APKs (i.e., packages) of its app. So you will have to …

  • A new metric for web pages: Crapicity

    Screenshot of a web page with a large “Crapicity” banner above an input line and a bar chart that looks like a lognormal distribution.

    The crapicity of the web pages I have linked to from here (and a few more): works with netsurf!

    Shortly after the web lost its academic innocence, at the end of the 1990s, I wrote the UNiMUT Schwobifying Proxy, a little script that made the whole web, ahem, experienceable in Swabian. That actually got me my 15 minutes of fame around 2002, including 200,000 hits per day (a raging lot for the net of that time), reviews on heise.de, spiegel.de and, frankly most flattering to me, on Forschung aktuell (all right: Computer und Kommunikation) on Deutschlandfunk.

    A secretly related web experiment was the Dummschwätzranking of 1998, which – not entirely frivolously – judged the density of (then fashionable) hot-air words in web pages; people are interested[1] in that to this day.

    Both toys are, in practice, switched off now: partly because the commercial internet and SEO blow up such things legally or practically, but partly also because they rely on what people see in their browsers being roughly what is in the HTML such a program gets from the web servers. Sadly, in the course of the javascriptification of the web, that is less and less the case.

    Now, people's habituation to “just let everyone you want to read something from execute code on your machine” is certainly the distinctly more poisonous consequence of the post-web-1 megatrend. But it is still a pity that, at least on the commercial web, nobody writes into their pages any more what will later be displayed in the fat Javascript browsers.

    And because that is a pity, I have written a postmodern successor to the Dummschwätzranking: the Crapicity machine (sorry, not localised, but linguistically rather poor anyway). Its metric, namely the crapicity or c7y for short: the ratio of the total length of the page, with all embedded markup, Javascript and CSS (i.e., external Javascript, CSS and so forth not counted), to the length of the readable text (the characters that end up readable on the screen without Javascript). In Python, with the wonderful BeautifulSoup module, this is quickly computed:

    def compute_crapicity(doc):
      """returns the crapicity of html in doc.
    
      doc really should be a str -- but if len() and BeautifulSoup() return
      something sensible with it, you can get away with something else, too.
      """
      parsed = BeautifulSoup(doc, "html.parser")
      content_length = max(len(parsed.text), 1)
      return len(doc)/content_length
    

    Around this terse function I have crocheted almost 700 lines that keep the results in an SQLite database and provide a web interface. As a Debian-plus-one-file program, it should run in many places fairly easily – if you like, you can get the software at codeberg.
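
    If you just want a one-off c7y for a single page, you do not need the database and the web interface at all; a sketch like the following (no error handling, and charset detection left to chance) will do:

    #!/usr/bin/python
    # One-off crapicity of a URL, reusing compute_crapicity from above.
    import sys
    import urllib.request

    from bs4 import BeautifulSoup


    def compute_crapicity(doc):
        """returns the crapicity of html in doc (as above)."""
        parsed = BeautifulSoup(doc, "html.parser")
        content_length = max(len(parsed.text), 1)
        return len(doc)/content_length


    if __name__ == "__main__":
        with urllib.request.urlopen(sys.argv[1]) as f:
            doc = f.read().decode("utf-8", "ignore")
        print(f"{compute_crapicity(doc):.1f}")

    Call it as, say, python c7y.py https://blog.tfiu.de.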

    The web interface at https://blog.tfiu.de/c7y, however, has the advantage that the scores accumulate. Do play around with it and pass it on – I would find it cute to have perhaps 10,000 pages represented there. I have already fed almost 200 web resources through it myself, mostly links from this blog.

    Roughly, what comes out is what everyone probably expected: the commercial net stinks, old stuff and techno pages are mostly quite ok. However, I have not yet fully debugged the current front-runner, a Reddit page with a c7y of over 17,000: astonishingly, the page is readable even in netsurf and without Javascript. How it does that: well, I have not found out yet in 800 kB of confusion, and the page's source looks so terrible that the score is certainly deserved.

    I assume that at the moment all youtube pages sit at c7y=8222; there is, after all, nothing at all to see there without Javascript, so this score fits nicely, too. taz.de (currently 892) is perhaps treated less fairly, since the page actually works quite well without Javascript. Possibly BeautifulSoup is to blame here. Thoroughly deserved, on the other hand, are the 682 of nina.no – that is empty without Javascript. A Twitter page sits at 413, Bandcamp at 247.

    Sensible pages, by contrast, lie between a bit over one (minimal markup) and ten (e.g., little text with a lot of CSS). Dishonorable mention: uni-heidelberg.de sits at 177 despite academia. The page is, in fact, halfway readable in normal browsers[2]. The bad score is mostly due to embedded SVGs, so it is a tiny bit unfair. But honestly: whoever bloats a few hundred characters of text to a whopping 680k for a bit of glitz has earned a large crapicity, even if the page itself is not really broken. Those who absolutely want that much glitz should use external images – I do not have to download those if I do not want to.

    If you find interesting crapicities: the comment box awaits you.

    [1]Well, much of that interest recognisably came from SEO circles; that was one of the reasons I disabled the submission of links to the Dummschwätzranking.
    [2]Defined as: everything except the monsters Firefox, Chrome, Webkit, and their derivatives.
  • Speech Recognition with Whisper.cpp

    Today I stumbled across Whispers of A.I.'s Modular Future by James Somers, a piece that, at least by the standards of publications aimed at the general public, makes an excellent case for why whisper.cpp might finally be some useful and non-patronising output of the current AI hype.

    What can I say? I think I'm sold. And perhaps I'm now a little bit scared, too. If you want to understand why and speak a bit of German, you can skip to The Crazy right away.

    The Good

    You know, so far I've ignored most of the current statistical modelling (“AI”, “Machine Learning“) – if you need a graphics chip with drivers even worse than Intel's, and that then needs 8 GB of video RAM before anything works, I'm out. And I'm also out when the only way I can use some software is on some web page because there's proprietary data behind it.

    Not so for whisper.cpp. This is software as it was meant to be: trivial dependencies, compact, works on basically any hardware there is. To build it, you just run:

    git clone https://github.com/ggerganov/whisper.cpp/
    cd whisper.cpp
    make
    

    – and that's it. No dependency juggling down to incompatible micro versions, no fancy build system, just a few C(++) sources and a Makefile. The thing works in place without a hitch, and it has a sensible command line interface.

    Well, you need the language models, of course. There are some reasonably free ones for English. The whisper.cpp distribution's models/README.md explains how to obtain some. I got myself ggml-small.en.bin, recorded a few words of English into a file zw.wav and ran:

    ./main -m models/ggml-small.en.bin ~/zw.wav
    

    The machine demanded I use a sample rate of 16 kHz; I made audacity oblige, ran the thing again and was blown away when – admittedly after a surprisingly long time – my words appeared on the screen.

    I immediately tried to figure out how to stream in data but then quickly decided that's probably not worth the effort; the software needs to see words in context, and for what I plan to do – transcribing radio shows – having an intermediate WAV file really does not hurt.

    I quickly cobbled together a piece of Python wrapping the conversion (using the perennial classic of audio processing, sox) somewhat cleverly, like this:

    #!/usr/bin/python
    # A quick hack to transcribe audio files
    #
    # Dependencies:
    # * sox (would be mpv, but that's somehow broken)
    # * a build of whispercpp (https://github.com/ggerganov/whisper.cpp/)
    # * a language model (see models/README.md in the whisper source)
    
    import contextlib
    import os
    import subprocess
    import sys
    import tempfile
    
    WHISPER_DIR = "/usr/src/whisper.cpp"
    
    
    @contextlib.contextmanager
    def workdir(wd):
            prev_dir = os.getcwd()
            try:
                    os.chdir(wd)
                    yield
            finally:
                    os.chdir(prev_dir)
    
    
    def transcribe(audio_source, model, lang):
            """transcribes an audio file, creating an in-place .txt.
    
            model must be the name of a model file in WHISPER_DIR/models;
            lang is the ISO language code in which the output should turn up.
            """
            audio_source = os.path.join(os.getcwd(), audio_source)
            with tempfile.TemporaryDirectory(suffix="transcribe", dir="/var/tmp") as wd:
                    with workdir(wd):
                            subprocess.check_call(["sox",
                                    audio_source,
                                    "-b", "16", "-r", "16000", "-c", "1",
                                    "audiodump.wav"])
    
                            out_name = os.path.splitext(audio_source)[0]
                            subprocess.check_call([WHISPER_DIR+"/main",
                                    "-l", lang,
                                    "-m", WHISPER_DIR+"/models/"+model,
                                    "-otxt", "-of", out_name,
                                    "audiodump.wav"])
    
    
    def parse_command_line():
            import argparse
            parser = argparse.ArgumentParser(description="Wrap whisper.cpp to"
                    " bulk-transcribe audio files.")
            parser.add_argument("model", type=str, help="name of ggml language"
                    f" model to use, relative to {WHISPER_DIR}/models")
            parser.add_argument("audios", type=str, nargs="+",
                    help="Sox-translatable audio file to transliterate.")
            parser.add_argument("--lang", type=str, default="en",
                    help="Spoken language to try and recognise")
    
            return parser.parse_args()
    
    
    if __name__=="__main__":
            args = parse_command_line()
            for audio in args.audios:
                    transcribe(audio, args.model, args.lang)
    

    Nachtrag (2023-06-26)

    (Added a --lang option as per ron's feedback below)

    I have that as transcribe.py in my path, and I can now go to the rip of an audiobook and say:

    transcribe.py ggml-small.en.bin *.ogg
    

    (provided I have downloaded the model as per whisper.cpp's instructions). After a little while (with high CPU usage), there is a transcript on my disk that's better than what I had typed myself even after two rounds of proof-reading, except that whisper.cpp doesn't get the paragraphs right.

    For the first time in the current AI hype, I start getting carried away, in particular when I consider how much speech recognition sucked when I last played with it around 2003, using a heap of sorry failure called viavoice.

    The Bad

    Skip the rant to get to the exciting part.

    Trouble is: What I'd mainly like to transcribe is German radio, and whisper.cpp does not come with a German language model. Not to worry, one would think, as whisper.cpp comes with conversion scripts for the pyTorch-based whisper models like those one can get from Hugging Face. I downloaded what I think is the model file and cheerfully ran:

    $ python convert-h5-to-ggml.py /media/downloads/model.bin
    Traceback (most recent call last):
      File "/home/src/whisper.cpp/models/convert-h5-to-ggml.py", line 24, in <module>
        import torch
    ModuleNotFoundError: No module named 'torch'
    

    Oh bummer. Well, how hard can it be? Turns out: surprisingly hard. There is no pytorch package in Debian stable. Ah… much later I realised there is; it's just that my main system still has an i386 userland, and pytorch is only available for amd64. But I hadn't figured that out then. So, I enabled a virtual python environment (never mix your system python and pip) and ran:

    $ pip install torch
    ERROR: Could not find a version that satisfies the requirement torch
    ERROR: No matching distribution found for torch
    

    Huh? What's that? I ran pip with a couple of -v sprinkled in, which at least yielded:

    [...]
    Skipping link: none of the wheel's tags match: cp38-cp38-win_amd64: https://download.pytorch.org/whl/cpu/torch-1.9.0%2Bcpu-cp38-cp38-win_amd64.whl (from https://download.pytorch.org/whl/cpu/torch/)
    [...]
    Given no hashes to check 0 links for project 'torch': discarding no candidates
    ERROR: Could not find a version that satisfies the requirement torch
    ERROR: No matching distribution found for torch
    [...]
    

    The message with “Given no” has a certain lyric quality, but other than that, from the “Skipping” messages I concluded they don't have 32 bit builds any more.

    Well, how hard can it be? Pypi says the sources are on github, and so I cloned that repo. Oh boy, AI at its finest. The thing pulls in a whopping 3.5 Gigabytes of who-knows-what. Oh, come on.

    python setup.py build fails after a short while, complaining about missing typing_extensions. Manually running pip install typing_extensions fixes that. But I killed setup.py build after a few minutes when there were only 50/5719 files built. Has AI written that software?

    In the meantime, I had gone to a machine with a 64 bit userland, and to be fair the experience wasn't too bad there, except for the hellish amount of dependencies that pytorch pulls in.

    So, my expectations regarding “AI code” were by and large met in that second part of the adventure, including the little detail that the internal links on https://pypi.org/project/torch/ are broken because right now their document processor does not produce id attributes on the headlines. Yeah, I know, they're giving it all away for free and all that. But still, after the brief glimpse into the paradise of yesteryear's software that whisper.cpp afforded, this was a striking contrast.

    The Crazy

    So, I converted the German language model doing, in effect:

    git clone https://github.com/openai/whisper.git
    git lfs install
    git clone https://huggingface.co/bofenghuang/whisper-small-cv11-german
    python convert-h5-to-ggml.py whisper-small-cv11-german/ whisper tmp
    

    (where I took convert-h5-to-ggml.py from whisper.cpp's repo). Then I moved the resulting tmp/ggml-model.bin to german-small.ggml and ran:

    transcribe.py german-small.ggml peer_review_wie_objektiv_ist_das_wissenschaftliche_dlf_20221214_1646_8a93e930.mp3
    

    with my script above and this German-language mp3 from Deutschlandfunk. From the English experience, I had expected to get an almost flawless transliteration of the German text. What I got instead was (paragraphs inserted by me); listen to the audio in parallel if you can:

    Germany. Research is on [that was: Deutschlandfunk Forschung aktuell]

    A Nobel Prize for Science is not easy without further ado. They really need to find something out. For example, Vernon Smith, who is now 95 years old, is now the father of the Experimental Economy. In 2002 he won the Nobel Prize for Science.

    This made such a prize and renommee also make impression on other Fachleuteen and that actually influenced the unabhängig well-office method for scientific publications. This has recently shown a study of Business Science in the Fachmagazin PNS. Anike Meyer spoke with one of the authors.

    When Jürgen Huber and his colleagues thought about the experiment, it was clear to them that this is not fair. The same manuscript was given by two different authors, Vernon …

  • Trailing blanks, vim and git

    Trailing blanks may be␣␣␣␣␣
    evil when git displays diffs.␣␣␣␣␣␣␣
    Time to remove them.
    

    I'm currently going through a major transition on my main machine in that I have configured my vim to strip trailing blanks, that is, to automatically remove space characters (as in U+0020) immediately before the ends of lines[1].

    Why do I do this? I suppose it mainly started with PEP 8, a style guide for Python source code which says trailing whitespace is evil. It has a point, but I have to say trailing whitespace really became a problem only when style checkers started rejecting trailing blanks, which then made all kinds of tools – including other people's editors – automatically strip trailing whitespace.

    That, in turn, causes the diffs coming out of version control systems to inflate, usually without anyone – neither the people leaving the trailing whitespace nor the ones whose tools remove them – actually wanting that. And well, I tackled this just now because I was fed up with humongous continuous integration runs failing at the very end because they found a blank at the end of some source file.

    So, while I can't say I'm convinced trailing whitespace actually is as evil as all that, I still have to stomp it out to preserve everyone's nerves.

    Configuring vim to replace trailing blanks with nothing when saving files is relatively straightforward (at least if you're willing to accept a cursor jump now and then). The internet is full of guides explaining what to do to just about any depth and sophistication.

    Me, I am using a variant of a venerable vintage 2010 recipe that uses an extra function to preserve the state over a search/replace operation to avoid jumping cursors. What I particularly like about it is that the Preserve function may come in handy in other contexts, too:

    function! Preserve(command)
      " run command without changing vim's internal state (much)
      let _s=@/
      let prevpos = getcurpos()
      execute a:command
      let @/=_s
      call cursor(prevpos[1], prevpos[2])
    endfunction
    
    au BufWritePre * if !&binary | call Preserve("%s/  *$//e") | endif
    

    That is now in my ~/.vimrc.

    But I still have all the repositories containing files having trailing blanks. To keep their histories comprehensible, I want to remove all trailing blanks in one commit and have that commit only do these whitespace fixes. The trouble is that even with version control (that lets you back out of overzealous edits) you will want to be careful what files you change. Strip trailing blanks in a (more or less) binary file and you will probably break that file.

    So, here is what I do to fix trailing blanks in files that need it while leaving alone the ones that would break, using this blog's VCS (about) as an example:

    1. In preparation, make sure you have committed all other changes. Bulk operations are dangerous, and you may want to roll back everything in case of a fateful typo. Also, you don't want to pollute some other, meaningful commit with all the whitespace noise.

    2. In the root of the repository, look for candidate files containing trailing blanks, combining find and grep like this:

      find . -type f | xargs grep -l ' $'
      

      A brief reminder of what's going on here: grep -l just lists the names of files with matches of the regular expression, ' $' is a regular expression matching a blank at the end of a line; xargs is a brilliant program that reads, from stdin, command line arguments for the program named in its own arguments, and the find invocation prints the names of all actual files (as opposed to directories) below the current directory.

      It may be preferable to use some grep with built-in find functionality (I sometimes use ripgrep), but if I can make do with basic GNU or even better POSIX, I do, because that's something that's on many boxes rather reliably.

      The price to pay in this particular case: this recipe won't work if you have blanks in your file names (using -print0 in find and -0 in xargs would fix things here, but then the next step would break). Do yourself a favour and don't have blanks in your filenames. Having dashes in them looks-better-anyway: it makes you look like a die-hard-LISP-person.

    3. Now use egrep -v to filter file names, building patterns of names to ignore and to process later, respectively. For instance, depending on your VCS, you will usually have lots of matches in .git or .svn or whatever, and most of these will break when you touch them (not to mention they won't spoil your history anyway). Conversely, it is clear that I want to strip trailing blanks from ReStructuredText files. My two patterns now grow in two separate egrep calls, one for files I don't want to look at, the other for files I will want to strip trailing blanks in:

      find . -type f |\
        egrep -v '\.git' |\
        egrep -v '\.rst$' | xargs grep -l ' $'
      

      This prints a much smaller list of names of files for which I have not yet decided whether or not to strip them.

    4. Repeat this: On the side of files I shouldn't touch, I spot some names ending in .jpeg, .png, and .db. On the side of files that need processing, I notice .html, .css, and .py. So, my next iteration is:

      find . -type f |\
        egrep -v '\.git|\.(jpeg|png|db)$' |\
        egrep -v '\.(rst|html|css|py)$' |\
        xargs grep -l ' $'
      

      That's a still smaller list of file names, among which I spot the index files used by my search engine in .xapian_db, .pyc files used by Python, and a vim .swp file. On the other hand I do want to process some files without an extension, so my next search command ends up as:

      find . -type f |\
        egrep -v '\.git|\.xapian_db|\.(jpeg|png|db|pyc|swp)$' |\
        egrep -v 'README|build-one|\.(rst|html|css|py)$' |\
        xargs grep -l ' $'
      

      That's it – this only leaves a few files as undecided, and I can quickly eyeball their names to ascertain I do not want to touch them. My second pattern now describes the set of files that I want to strip trailing blanks from.

    5. Stripping trailing blanks is easily done from the command line with sed and its inline (-i) option: sed -i 's/  *$//' <file1> <file2>...[2]. The file names I can produce with find alone, because at least GNU find supports the extended regular expressions I have just produced in my patterns; it needs a -regextype option to correctly interpret them, though:

      find . -regextype egrep -regex 'README|build-one|.*\.(rst|html|css|py)$' |\
        xargs grep -l ' $'
      

      The advantage of using find alone over simply inverting the egrep (by dropping the -v) is that my gut feeling is the likelihood of false positives slipping through is lower this way. However, contrary to the egrep above, find's -regex needs to match the entire file name, and so I need the .* before my pattern of extensions, and editing REs might very well produce false positives to begin with… Ah well.

      Have a last look at the list and then run the in-place sed (if you would rather do the whole sweep in one program, see the Python sketch after this list):

      find . -regextype egrep -regex 'README|build-one|.*\.(rst|html|css|py)$' |\
        xargs grep -l ' $' |\
        xargs sed -i 's/  *$//'
      
    6. Skim the output of git diff (or svn diff or whatever). Using the blacklist built above, you can see whether you have indeed removed trailing whitespace from files you wanted to process:

      find . -type f |\
        egrep -v '\.git|\.xapian_db|\.(jpeg|png|db|pyc|swp)$' |\
        xargs grep -l ' $'
      

      If these checks have given you some confidence that the trailing blanks have vanished and nothing else has been damaged, commit with a comment stressing that only whitespace has been changed. Then take a deep breath before tackling the next repo in this way.
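
    If, once your patterns have stabilised, you would rather have steps 2 through 5 in one program (as promised above), here is a sketch of the same sweep in Python; the whitelist regular expression is the one worked out above, and you will want to adapt it – and the ignored directories – to your repository:

    #!/usr/bin/python
    # Strip trailing blanks below the current directory in one sweep.
    import os
    import re

    WHITELIST = re.compile(r"README|build-one|.*\.(rst|html|css|py)$")

    def strip_trailing_blanks(path):
        with open(path, "rb") as f:
            content = f.read()
        # the same replacement as sed 's/  *$//', on every line
        cleaned = re.sub(rb" +$", b"", content, flags=re.M)
        if cleaned != content:
            with open(path, "wb") as f:
                f.write(cleaned)
            print(path)

    for dirpath, dirnames, filenames in os.walk("."):
        dirnames[:] = [d for d in dirnames if d != ".git"]
        for name in filenames:
            if WHITELIST.match(name):
                strip_trailing_blanks(os.path.join(dirpath, name))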

    [1]This post assumes your sed and you agree on what marks the end of the line. Given it's been quite a while since I've last had to think about CRs or CRLFs, it would seem that's far less of a problem these days than it used to be.
    [2]Incidentally, that's a nice example of why I was so hesitant about stripping white space for all these years: imagine some edits make it so a line break sneaks in between sed -i 's/ and *$//'. Then both blanks that are there are gone, and even if the text is reflowed again later, it will still be broken (though not catastrophically so in this particular case).
