Feeds

March 21, 2023

Gunnar Wolf

Impact of parallelism and processor architecture while building a kernel

Given that Bálint just bragged (err, blogged) about how efficiently he can build a Linux kernel (less than 8 seconds, wow! Well, yes, until you read that it is the result of aggressive caching and is achieved only on a second run), and that a question just popped up today on the Debian ARM mailing list, «is an ARM computer a good choice? Which one?», I decided to share the results of an experiment I did several months ago, to graphically show my students the effects of parallelism, the artifacts of hyperthreading, the effects of different architecture sets, and even to illustrate the actual futility of my experiment (somewhat referring to John Gustafson’s reevaluation of Amdahl’s law, already 30 years ago — «One does not take a fixed-size problem and run it on various numbers of processors except when doing academic research»; thanks for referring to my inconsequential reiterative compilations as academic research! 😉)

I don’t expect any of the following images to be groundbreaking, but at least, next time I need to find them it is quite likely I’ll be able to find them — and I will be able to more easily refer to them in online discussions 😉

So… What did I do? I compiled Linux repeatedly, on several of the machines I had available, varying the -j flag (how many cores to use simultaneously), starting with single-core, and pushing up until just a bit over the physical number of cores the CPU has.
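
Something along these lines, give or take; a minimal sketch of the measurement loop, where the defconfig, the range of -j values and GNU time's -o flag are illustrative assumptions rather than the exact setup used here:

    # time repeated kernel builds while varying parallelism (illustrative sketch)
    for j in 1 2 3 4 5 6 7 8 9 10; do
        make mrproper                                     # start from a pristine tree
        make defconfig                                    # a generic configuration
        /usr/bin/time -v -o "time-j${j}.log" make -j"$j"  # record wall-clock and CPU time
    done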

Sadly, I lost several of my output images, but the three following are enough to tell interesting bits of the story:

  • A nice little server my Institute acquired in early 2021: Xeon Silver 4208, with 8 physical cores (plus hyperthreading)

  • My laptop, an 8-ARM-core Lenovo Yoga C630. Do note it's a "big.LITTLE" system, where 4 cores are smaller and 4 are bigger.

  • A Raspberry Pi 4 (8GB version)

Of course, I have to add that this is not a scientific comparison; the server and my laptop have much better I/O than the Raspberry's puny micro-SD card (and compiling hundreds of thousands of files is quite an I/O-stressed job), even though the full task does exhibit the very low single-threaded performance of the Raspberry, even compared with the Yoga.

No optimizations were done (they would be harmful to the effects I wanted to show!), the compile was made straight from the upstream sources.

21 March, 2023 06:02PM

March 20, 2023

Russ Allbery

Review: The Star Fraction

Review: The Star Fraction, by Ken MacLeod

Series: Fall Revolution #1
Publisher: Orbit
Copyright: 1995
Printing: 2001
ISBN: 1-85723-833-8
Format: Trade paperback
Pages: 341

Ken MacLeod is a Scottish science fiction writer who has become amusingly famous for repeatedly winning the libertarian Prometheus Award despite being a (somewhat libertarian-leaning) socialist. The Star Fraction is the first of a loose series of four novels about future solar system politics and was nominated for the Clarke Award (as well as winning the Prometheus). It was MacLeod's first novel.

Moh Kohn is a mercenary, part of the Felix Dzerzhinsky Workers' Defence collective. They're available for hire to protect research labs and universities against raids from people such as animal liberationists and anti-AI extremists (or, as Moh calls them, creeps and cranks). As The Star Fraction opens, he and his smart gun are protecting a lab against an attack.

Janis Taine is a biologist who is currently testing a memory-enhancing drug on mice. It's her lab that is attacked, although it isn't vandalized the way she expected. Instead, the attackers ruined her experiment by releasing the test drug into the air, contaminating all of the controls. This sets off a sequence of events that results in Moh, Janis, and Jordon Brown, a stock trader for a religious theocracy, on the run from the US/UN and Space Defense.

I had forgotten what it was like to read the uncompromising old-school style of science fiction novel that throws you into the world and explains nothing, leaving it to the reader to piece the world together as you go. It's weirdly fun, but I'm either out of practice or this was a particularly challenging example of the genre. MacLeod throws a lot of characters at you quickly, including some that have long and complicated personal histories, and it's not until well into the book that the pieces start to cohere into a narrative. Even once that happens, the relationship between the characters and the plot is unobvious until late in the book, and comes from a surprising direction.

Science fiction as a genre is weirdly conservative about political systems. Despite the grand, futuristic ideas and the speculation about strange alien societies, the human governments rarely rise to the sophistication of a modern democracy. There are a lot of empires, oligarchies, and hand-waved libertarian semi-utopias, but not a lot of deep engagement with the speculative variety of government systems humans have proposed. The rare exceptions therefore get a lot of attention from those of us who find political systems fascinating.

MacLeod has a reputation for writing political SF in that sense, and The Star Fraction certainly delivers. Moh (despite the name of his collective, which is explained briefly in the book) is a Trotskyist with a family history with the Fourth International that is central to the plot. The setting is a politically fractured Britain full of autonomous zones with wildly different forms of government, theoretically ruled by a restored monarchy. That monarchy is opposed by the Army of the New Republic, which claims to be the legitimate government of the United Kingdom and is considered by everyone else to be terrorists. Hovering in the background is a UN entirely subsumed by the US, playing global policeman over a chaotic world shattered by numerous small-scale wars.

This satisfyingly different political world is a major plus for me. The main drawback is that I found the world-building and politics more interesting than the characters. It's not that I disliked them; I found them enjoyably quirky and odd. It's more that so much is happening and there are so many significant characters, all set in an unfamiliar and unexplained world and often divided into short scenes of a few pages, that I had a hard time keeping track of them all. Part of the point of The Star Fraction is digging into their tangled past and connecting it up with the present, but the flashbacks added a confused timeline on top of the other complexity and made it hard for me to get lost in the story. The characters felt a bit too much like puzzle pieces until the very end of the book.

The technology is an odd mix with a very 1990s feel. MacLeod is one of the SF authors who can make computers and viruses believable, avoiding the cyberpunk traps, but AI becomes relevant to the plot and the conception of AI here feels oddly retro. (Not MacLeod's fault; it's been nearly 30 years and a lot has changed.) On-line discussion in the book is still based on newsgroups, which added to the nostalgic feel. I did like the eventual explanation for the computing part of the plot, though; I can't say much while avoiding spoilers, but it's one of the more believable explanations for how a technology could spread in a way required for the plot that I've read.

I've been planning on reading this series for years but never got around to it. I enjoyed my last try at a MacLeod series well enough to want to keep reading, but not well enough to keep reading immediately, and then other books happened and now it's been 19 years. I feel similarly about The Star Fraction: it's good enough (and in a rare enough subgenre of SF) that I want to keep reading, but not enough to keep reading immediately. We'll see if I manage to get to the next book in a reasonable length of time.

Followed by The Stone Canal.

Rating: 6 out of 10

20 March, 2023 04:08AM

March 19, 2023

Review: Allow Me to Retort

Review: Allow Me to Retort, by Elie Mystal

Publisher: The New Press
Copyright: 2022
ISBN: 1-62097-690-0
Format: Kindle
Pages: 257

If you're familiar with Elie Mystal's previous work (writer for The Nation, previously editor for Above the Law, Twitter gadfly, and occasional talking head on news commentary programs), you'll have a good idea what to expect from this book: pointed liberal commentary, frequently developing into rants once he works up a head of steam. The subtitle of A Black Guy's Guide to the Constitution tells you that the topic is US constitutional law, which is very on brand. You're going to get succinct and uncompromising opinions at the intersection of law and politics. If you agree with them, you'll probably find them funny; if you disagree with them, you'll probably find them infuriating.

In other words, Elie Mystal is the sort of writer one reads less for "huh, I disagreed with you but that's a good argument" and more for "yeah, you tell 'em, Elie!" I will be very surprised if this book changes anyone's mind about a significant political debate. I'm not sure if people who disagree are even in the intended audience.

I'm leery of this sort of book. Usually its function is to feed confirmation bias with some witty rejoinders and put-downs that only sound persuasive to people who already agree with them. If I want that, I can just read Twitter (and you will be unsurprised to know that Mystal has nearly 500,000 Twitter followers). This style can also be boring at book length if the author is repeating variations on a theme.

There is indeed a lot of that here, particularly in the first part of this book. If you don't generally agree with Mystal already, save yourself the annoyance and avoid this like the plague. It's just going to make you mad, and I don't think you're going to get anything useful out of it. But as I got deeper into this book, I think Mystal has another, more interesting purpose that's aimed at people who do largely agree. He's trying to undermine a very common US attitude (even on the left) about the US constitution.

I don't know if most people from the US (particularly if they're white and male) realize quite how insufferably smug we tend to be about the US constitution. When you grow up here, the paeans to the constitution and the Founding Fathers (always capitalized like deities) are so ubiquitous and unremarked that it's difficult not to absorb them at a subconscious level. There is a national mythology about the greatness of our charter of government that crosses most political divides. In its modern form, this comes with some acknowledgment that some of its original provisions (the notorious three-fifths of a person clause, for instance) were bad, but we subsequently fixed them and everything is good now. Nearly everyone gets taught this in school, and it's almost never challenged. Even the edifices of the US left, such as the ACLU and the NAACP, tend to wrap themselves in the constitution.

It's an enlightening experience to watch someone from the US corner a European with a discussion of the US constitution and watch the European plan escape routes while their soul attempts to leave their body. And I think it's telling that having that experience, as rare as it might be given how oblivious we can be, is still more common than a white person having a frank conversation with a black person in the US about the merits of the constitution as written. For various reasons, mostly because this is not very safe for the black person, this rarely happens.

This book is primarily Mystal giving his opinion on various current controversies in constitutional law, but the underlying refrain is that the constitution is a trash document written by awful people that sets up a bad political system. That system has been aggressively defended by reactionary Supreme Courts, which along with the designed difficulty of the amendment process has prevented fixing many obviously broken parts. This in turn has led to numerous informal workarounds and elaborate "interpretations" to attempt to make the system vaguely functional.

In other words, Mystal is trying to tell the US reader to stop being so precious about this specific document, and is using its truly egregious treatment of black people as the main fulcrum for his argument. Along the way, he gives an abbreviated tour of the highlights of constitutional law, but if you're at all interested in politics you've probably heard most of that before. The main point, I think, is to dig up any reverence left over from a US education, haul it out into the light of day, and compare it to the obvious failures of the constitution as a body of law and the moral failings of its authors. Mystal then asks exactly why we should care about original intent or be so reluctant to change the resulting system of government.

(Did I mention you should not bother with this book if you don't agree with Mystal politically? Seriously, don't do that to yourself.)

Readers of my reviews will know that I'm fairly far to the left politically, particularly by US standards, and yet I found it fascinating how much lingering reverence Mystal managed to dig out of me while reading this book. I found myself getting defensive in places, which is absurd because I didn't write this document. But I grew up surrounded by nigh-universal social signaling that the US constitution was the greatest political document ever, and in a religious tradition that often argued that it was divinely inspired. If one is exposed to enough of this, it becomes part of your background understanding of the world. Sometimes it takes someone being deliberately provocative to haul it back up to the surface where it can be examined.

This book is not solely a psychological intervention in national mythology. Mystal gets into detailed legal arguments as well. I thought the most interesting was the argument that the bizarre and unconvincing "penumbras" and "emanations" reasoning in Griswold v. Connecticut (which later served as the basis of Roe v. Wade) was in part because the Lochner era Supreme Court had, in the course of trying to strike down all worker protection laws, abused the concept of substantive due process so badly that Douglas was unwilling to use it in the majority opinion and instead made up entirely new law. Mystal argues that the Supreme Court should have instead tackled the true meaning of substantive due process head-on and decided Griswold on 14th Amendment equal protection and substantive due process grounds. This is probably a well-known argument in legal circles, but I'd not run into it before (and Mystal makes it far more interesting and entertaining than my summary).

Mystal also joins the tradition of thinking of the Reconstruction Amendments (the 13th, 14th, and 15th amendments passed after the Civil War) as a second revolution and an attempt to write a substantially new constitution on different legal principles, an attempt that subsequently failed in the face of concerted and deadly reactionary backlash. I first encountered this perspective via Jamelle Bouie, and it added a lot to my understanding of Reconstruction to see it as a political fight about the foundational principles of US government in addition to a fight over continuing racism in the US south. Maybe I was unusually ignorant of it (I know I need to read W.E.B. DuBois), but I think this line of reasoning doesn't get enough attention in popular media. Mystal provides a good introduction.

But, that being said, Allow Me to Retort is more of a vibes book than an argument. As in his other writing, Mystal focuses on what he sees as the core of a controversy and doesn't sweat the details too much. I felt like he was less trying to convince me and more trying to model a different way of thinking and talking about constitutional law that isn't deferential to ideas that are not worthy of deference. He presents his own legal analysis and possible solutions to current US political challenges, but I don't think the specific policy proposals are the strong part of this book. The point, instead, is to embrace a vigorous politics based on a modern understanding of equality, democracy, and human rights, without a lingering reverence for people who mostly didn't believe in any of those things. The role of the constitution in that politics is a flawed tool rather than a sacred text.

I think this book is best thought of as an internal argument in the US left. That argument is entirely within the frame of the US legal tradition, so if you're not in the US, it will be of academic interest at best (and probably not even that). If you're on the US right, Mystal offers lots of provocative pull quotes to enjoy getting outraged over, but he provides that service on Twitter for free.

But if you are on the US left, I think Allow Me to Retort is worth more consideration than I'd originally given it. There's something here about how we engage with our legal history, and while Mystal's approach is messy, maybe that's the only way you can get at something that's more emotion than logic. In some places it degenerates into a Twitter rant, but Mystal is usually entertaining even when he's ranting. I'm not sorry I read it.

Rating: 7 out of 10

19 March, 2023 03:59AM

March 18, 2023

Jonathan Dowland

Qi charger stand

I've got a Qi-charging phone cradle at home which orients the phone up at an angle which works with Apple's Face ID. At work, I've got a simpler "puck"-shaped one which is less convenient, so I designed a basic cradle to raise both the charger and the phone up.

cradle without phone

I did two iterations, and the second iteration was "good enough" to use, so I stopped there, although I would make some further alterations if I were to print it again: more of a cut-out for the USB-C cable, raising the plinth for the Qi charger so that USB-C cables with long collars have enough room, and elongating the base to compensate for the changed weight distribution.

cradle with phone

18 March, 2023 10:02PM

March 17, 2023

Sean Whitton

consfigurator 1.3.0

I’ve just released Consfigurator 1.3.0, with some readtable enhancements. So now instead of writing

      (firewalld:has-policy "athenet-allow-fwd"
#>EOF><?xml version="1.0" encoding="utf-8"?>
<policy priority="-40" target="ACCEPT">
  <ingress-zone name="trusted"/>
  <egress-zone name="internal"/>
</policy>
EOF)

you can write

      (firewalld:has-policy "athenet-allow-fwd" #>>~EOF>>
                            <?xml version="1.0" encoding="utf-8"?>
                            <policy priority="-40" target="ACCEPT">
                              <ingress-zone name="trusted"/>
                              <egress-zone name="internal"/>
                            </policy>
                            EOF)

which is a lot more readable when it appears in a list of other properties. In addition, instead of writing

(multiple-value-bind (match groups)
      (re:scan-to-strings "^uid=(\\d+)" (connection-connattr connection 'id))
    (and match (parse-integer (elt groups 0))))

you can write just (#1~/^uid=(\d+)/p (connection-connattr connection 'id)). On top of the Perl-inspired syntax, I’ve invented the new trailing option p to attempt to parse matches as numbers.

Another respect in which Consfigurator’s readtable has become much more useful in this release is that I’ve finally taught Emacs about these reader macros, such that unmatched literal parentheses within regexps or heredocs don’t cause Emacs (and especially Paredit) to think that the code couldn’t be valid Lisp. Although I was able mostly to reuse propertising algorithms from the built-in perl-mode, I did have to learn a lot more about how parse-partial-sexp really works, which was pretty cool.

17 March, 2023 06:39PM

March 16, 2023

Antoine Beaupré

Picking a USB-C dock and charger

Dear lazy web, help me pick the right hardware to make my shiny new laptop work better. I want a new USB-C dock and travel power supply.

Background

I need advice on hardware, because my current setup in the office doesn't work so well. My new Framework laptop has four (4!) USB-C ports which is great, but it only has those ports (there's a combo jack, but I don't use it because it's noisy). So right now I have the following setup:

  • HDMI: monitor one
  • HDMI: monitor two
  • USB-A: Yubikey
  • USB-C: USB-C hub, which has:
    • RJ-45 network
    • USB-A keyboard
    • USB-A mouse
    • USB-A headset

... and I'm missing a USB-C port for power! So I get into this annoying situation where I need to actually unplug the USB-A Yubikey, unplug the USB-A expansion card, plug in the power for a while so it can charge, and then do that in reverse when I need the Yubikey again (which is: often).

Another option I have is to unplug the headset, but I often need both the headset and the Yubikey at once. I also have a pair of earbuds that work in the combo jack, but, again, they are noticeably noisy.

So this doesn't work.

I'm thinking I should get a USB-C Dock of some sort. The Framework forum has a long list of supported docks in a "megathread", but I figured people here might have their own experience with docks and laptop/dock setups.

So, what USB-C dock should I get?

Should I consider changing to a big monitor with a built-in USB-C dock and power?

Ideally, I'd like to just walk into the office, put the laptop down, insert a single USB-C cable and be done with it. Does that even work with Wayland? I have read reports of DisplayLink not working in Sway specifically... does that apply to any multi-monitor-over-a-single-USB-C-cable setup?

Oh, and what about travel options? Do you have a fancy small form factor USB-C power charger that you really like?

Current ideas

Here are the devices I'm considering right now...

USB chargers

The spec here is at least 65W USB-C with international plugs.

TOFU power station

I found that weird little thing through this Twitter post from Benedict Reuschling, from this blog post, from 2.5 admins episode 127 (phew!).

I ordered a TOFU power station in February (2023-02-20) and it landed on my doorstep about two weeks later (2023-03-08).

The power output is a little disappointing: my laptop tells me it's charging at 30W instead of the rated 45W, which is already less than the 65W provided by the normal Framework charger. I suspect it will have a hard time keeping up with a full-on, all CPU blaring power consumption, so I'm still considering a separate charger. It should be fine for charging the laptop overnight during my travels, which is basically my use case here.
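
(How do I get those wattage numbers? The graphs further down come from GNOME Power Statistics; a rough check from a shell looks something like the following, assuming the battery exposes the usual sysfs attributes, which varies between machines:)

    # rough charge/discharge rate check; battery name and attributes are assumptions
    for bat in /sys/class/power_supply/BAT*; do
        if [ -r "$bat/power_now" ]; then
            echo "$bat: $(( $(cat "$bat/power_now") / 1000000 )) W"   # power_now is in µW
        elif [ -r "$bat/current_now" ] && [ -r "$bat/voltage_now" ]; then
            # µA times µV gives picowatts, hence the 10^12 divisor
            echo "$bat: $(( $(cat "$bat/current_now") * $(cat "$bat/voltage_now") / 1000000000000 )) W"
        fi
    done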

The "travel" thing is a little plastic contraption that holds three different power adapters: Australian, British_plugs_and_sockets), Europe, and USA. The clever thing here is the other end is what looks like a IEC 60320 C7/C8 coupler, AKA a "figure-8", "infinity" or "shotgun", according to Wikipedia. It seems design to fit with Macbook charger cable adapters, but it also seems to physically fit inside a classic Thinkpad power supply, which means you can use this thing to turn a normal Thinkpad power supply into an international power supply, at the cost of removing a good chunk of wire. It is not compatible with the Framework power supply, which uses a three pin, grounded, C5/C6 coupler, AKA a "cloverleaf" or "Mickey Mouse" connector.

Strangely, the travel adapters also have a fourth adapter which is not really an adapter, it's a flashlight, rechargeable with Micro USB connector.

I'm still a little worried about overload: this thing is supposed to be designed as a power bar and a charger, but they warn against "overloading" it, with a picture of a hair drier... So what is it? Is it a full on 15A power bar or not? 220V? There's an odd lack of documentation about all of this. The specifications on the cover are:

AC:

  • Input: 100V-240V
  • Output: 100V-240V

DC:

  • Type-C: 36W/45W (PD)
  • Type-C: 18W (PD)
  • USB-Ax2: 15W (share)

Dimensions:

  • 82mm(ø)x28mm(H)
  • Weight: 201g
  • 7A auto-reset fuse
  • Cable: 85cm

Update: I found the main TOFU website and the user manual which is a little more detailed.

So I guess you can only draw 7A from the power source? That would mean 700W at 100V, or 1680W at 240V, which I'm a little suspicious of.

The specs for the "traveler" are:

Dimensions:

  • 3cm x 3.8cm x 5.8cm
  • UK/EU/AU/US
  • Weight: 62g

The two devices come in a small carrying case that is about 5" x 3.75" x 2" (or 12.7cm x 9.25cm x 5.08cm), so it's actually pretty bulky once everything is packed together. The power cable that wraps around the device is actually 2'7", or 78.74cm; the 85cm figure above probably counts the width of the device itself, which is a little disingenuous. There's a USB-C cable provided to actually charge your laptop, but it's tiny, only about a foot (11⅝") or 30cm.

Compared to the Framework power supply, which has a 6'8" (203cm) USB-C cable and a 3'2" (96cm) power cable (so 9'10" total, or 3 meter long!), it's kind of ridiculous. That said, I can easily take the USB-C cable from the Framework power supply and carry it alongside the TOFU to get a ~280cm (~9'2") cable, which is then somewhat reasonable. It feels very "crammed" in the carrying case with the longer cable, unfortunately.

At this stage, I'll definitely try this device as my main power source when I leave the office, but I'll probably bring a backup for my first international travels in case something goes wrong. I'm looking at Ugreen and Volta chargers as a backup for those.

Update: in a real-world charging test, the power supply provided about 28W (not 45W!) of charge, so it definitely can't sustain full-power operation. An Anker GaNPrime charger rated for 65W also doesn't provide the full 60W and peaks at 38W. This graph shows the Framework laptop (rated for PD 3.0, 100W) charging for about 15 minutes, then switching to the Anker charger.

A graph from the GNOME Power Statistics program showing samples oscillating between 24 and 30W and then jumping to about 36W

Ugreen

So I was recommended the Ugreen chargers, but unfortunately it seems their international edition just disappeared from their website. A first attempt at contacting them yielded no response, and a second one yielded a bounce from qq.com telling me (in Chinese) "出错原因:该邮件内容涉嫌大量群发,并且被多数用户投诉为垃圾邮件。" which Google translates to "Reason for error: The content of this email is suspected of being mass-sent, and is complained by most users as spam."

The Support button on their website does exactly fuckall, so I guess that's it for Ugreen.

Volta

Volta has been a little more helpful and clarified it's possible to get extra international adapters for their chargers by email (which wasn't obvious from the website). But their charger is currently (2023-03-13) marked as "sold out", so I guess I'm stuck there as well.

USB Docks

Specification:

  • must have 2 or more USB-A ports (3 is ideal, otherwise I need a new headset adapter)
  • at least one USB-C port, preferably more
  • works in Linux
  • 2 display support (or one big monitor?), ideally 2x4k for future-proofing, HDMI or Display-Port, ideally also with double USB-C/Thunderbolt for future-proofing
  • all on one USB-C wire would be nice
  • power delivery over the USB-C cable
  • not too big, preferably

Note that I move from 4 USB-A ports down to 2 or 3 because I can change the USB-A cable on my keyboard for USB-C. But that means I need a slot for a USB-C port on the dock of course. I also could live with one less USB-A cable if I find a combo jack adapter, but that would mean a noisy experience.

Options found so far:

  • ThinkPad universal dock (40ay0090us): 300$USD, 65-100W, combo jack, 3x USB3.1, 2x USB2.0, 1x USB-C, 2x DisplayPort, 1x HDMI port, 1x Gigabit Ethernet

  • Caldigit docks are apparently good, and the USB-C HDMI Dock seems like a good candidate (not on sale in their Canada shop), but it leaves me wondering whether I want to keep my old analog monitors around or instead get proper monitors with USB-C inputs, and use something like the Thunderbolt Element hub (230$USD). Update: I wrote Caldigit and they don't seem to have any dock that would work for me; they suggest the TS3 Plus, which only has a single DP connector (!?). The USB-C HDMI dock is actually discontinued, and they mentioned that they do have trouble with Linux in general.

  • I was also recommended OWC docks as well. update: their website is a mess, and live chat has confirmed they do not actually have any device that fits the requirement of two HDMI/DP outputs.

  • Anker also has docks (e.g. the Anker 568 USB-C Docking Station 11-in-1 looks nice, but holy moly, 300$USD...). Also, Anker docks are not all equal; I've heard reports of some of them being bad. Update: I reached out to Anker to clarify whether or not their docks will work on Linux and to advise on which dock to use, and their response is that they "do not recommend you use our items with Linux system". So I guess that settles it with Anker.

  • Cable Matters are promising, and their "USB-C Docking Station with Dual 4K HDMI and 80W Charging for Windows Computers" might just actually work. It was out of stock on their website and Amazon, but after reaching out to their support by email, they pointed out a product page that works in Canada.

Also: this post from Big Mess Of Wires has me doubting that anything will work at all. It's where I found the Cable Matters reference, however...

Update: I ordered this dock from Cable Matters from Amazon (reluctantly). It promises “Linux” support and checked all the boxes for me (4x USB-A, audio, network, 2x HDMI).

It kind of works? I tested the USB-A ports, charging, networking, and the HDMI ports; all worked the first time. But! When I disconnect and reconnect the hub, the HDMI ports stop working. It’s quite infuriating, especially since there’s very little diagnostics available. It’s unclear how the devices show up on my computer; I can’t even tell what device provides the HDMI connectors in lsusb.
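
(For what it's worth, the generic way to poke at a dock like this from a shell is something along these lines; standard tools, not a fix, and no promise they shed much light:)

    # generic USB-C dock diagnostics (illustrative, not a fix)
    lsusb -t                          # USB topology: what hangs off which hub
    for c in /sys/class/drm/card*-*/status; do
        echo "$c: $(cat "$c")"        # which display connectors the kernel sees as connected
    done
    sudo dmesg --follow               # watch kernel messages while replugging the dock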

I’ve also seen the USB keyboard drop keypresses, which is also ... not fun. I suspect foul play inside Sway.

And yeah, those things are costly! This one goes for 300$ a pop, not great.

Update 2: Cable Matters support responded by simply giving me this hack that solved it at least for now. Just reverse the USB-C cable, and poof, everything works. Magic.

Your turn!

So what's your desktop setup like? Do you have docks? a laptop? a desktop? did you build it yourself?

Did you solder a USB-C port in the back of your neck and interface directly with the matrix and there's no spoon?

Do you have a 4k monitor? Two? An 8k monitor that curves around your head in a fully immersive display? Do you work on an Oculus Rift and only interface with the world through 3D virtual reality, including terminal emulators?

Thanks in advance!

16 March, 2023 08:01PM

March 15, 2023

Thomas Koch

My exclusion from the Debian project

Posted on March 15, 2023

As requested, I lay out what happened from my point of view.

Date: Sun, 15 Nov 2020 10:01:53 +0100 (CET)
From: Thomas Koch
To: debian-private@lists.debian.org
Subject: Basic information about Corona in German

TL;DR: If you’re not concerned about the worldwide restrictions to basic rights implemented to combat Covid-19, you can stop reading now.

I wrote a summary about this so called pandemic in German language: http://corona.koch.ro

Never before have I been so concerned in my life. Please have an open mind and take care what you believe.

Otherwise, sorry for the noise.

Date: Sun, 27 Feb 2022 13:31:59 +0200 (EET)
From: Thomas Koch
To: debian-private@lists.debian.org
Subject: transparency and accountability of DAM work?

BCC: da-managers@ (sorry for double-post)

Dear fellow Debian members,

yesterday I reactivated my blog and re-added it to planet-debian. My first post was about the things that I am starting to work on right now and the motivation for it.

My blog has been removed afterwards from planet-debian and I received this message:

“”" Hello Thomas,

Back in November 2020, we warned you that Debian is not a platform for COVID19-related (or any) conspiracy theories, and asked you to please keep them out of Debian.

Today you added your blog to planet.debian.org, and then posted https://blog.koch.ro/posts/2022-02-26-corona-plandemic.html to it.

Are you able to, and willing to commit to, keep your Debian involvement disconnected from your Corona activism?

For DAM: <REDACTED> “”"

I believe that the DAMs carry a lot of responsibility and therefor should be thanked a lot. They also have the power as can be seen in this case, to censor voices. I don’t deny that there should be oversight about what should and shouldn’t be written on any debian owned platform.

However, to my knowledge there is no way for regular DDs to follow what actions DAMs take and for what reason. (Of course one could watch the git history of planet-debian…) This seems problematic. How many other DDs have been warned not to discuss Corona or any other controversial topic? Am I really the only one?

May I suggest, that DAMs consider ways to be transparent and accountable, e.g. by a DD-only mailing list where such serious actions and their justifications are logged? After all, DAMs receive their mandate from DDs. (Of course this means additional overhead. I’d volunteer to help but this is of course silly in the current circumstances.)

To reply to the message sent to me:

I don’t intend to write anything further about corona on any Debian platform or list after this mail. I will also not write any additional mail to debian-private@ in this thread. History will (soon) show who was right.

I removed the “debian” tag from the indicated blog post and would like to re-add the debian-tag feed to planet-debian.

I already started looking into the distributed search engine YaCy.net (RFP #768171) and consider packaging it so that it could end up on freedombox. I’d like to post about stuff that I learn along the way.

Thomas

Date: Wed, 23 Mar 2022 10:51:56 +0200 (EET)
From: Thomas Koch <thomas@koch.ro>
To: debian-private@lists.debian.org, REDACTED (individuals)
Subject: Re: Covid restrictions in Germany?

Hi REDACTED,

please don’t get fooled to believe that Novavax would be harmless!

Dr. Wodarg explains the problems with Novavax here starting from 3:29:00: https://odysee.com/@Corona-Investigative-Committee:5/Sitzung-77-eng:5

Of course Wodarg is a conspiracy theorist has the wrong friends and thus it’s no use to listen to him!

The slides are here: https://www.wodarg.com/impfen/

TL;DR: Novavax also contains Spike-Proteins and these proteins are what cause the many side effects.

So much on that topic on debian-private@. I’m happy to talk more in private.

I just hope that we meet again healthy! I remember our nice conversations in Heidelberg in 2015.

Date: Sun, 3 Apr 2022 13:05:32 +0300 (EEST)
From: Thomas Koch
To: REDACTED (individuals)
Cc: debian-private@lists.debian.org
Subject: DPL candidate question: geographical diversity

Dear DPL candidates,

to my knowledge, most Debian project members come from the US and western Europe. Do you have any ideas, plans or motivation to make Debian more geographically universal?

I thought about this question when I read about a ban of foreign Software in Russia and the tiny(?) number of Russian DDs, but the question is of course not limited to current events or one country:

Thank you for your candidacy!

Thomas

Date: Fri, 12 Aug 2022 10:07:57 +0300 (EEST)
From: Thomas Koch
To: "debian-private@lists.debian.org" <debian-private@lists.debian.org>
Cc: REDACTED (individuals)
Subject: neutrality of publicity

“But remember that the COVID-19 pandemic is not over yet, so take all necessary measures to protect attendees.” - https://bits.debian.org/2022/08/debianday2022-call-for-celebration.html

“Team members must word articles in a way that reflects the Debian project’s stance on issues rather than their own personal opinion.” - (Updating the Debian Publicity Team delegation) https://lists.debian.org/debian-devel-announce/2018/05/msg00004.html

Please don’t comment on hot political topics like Covid-19, Ukraine, Taiwan, Kosovo, etc. on official Debian channels. I continue to try to do the same.

Thank you for your work for Debian!

Team members taken from: https://wiki.debian.org/Teams/Publicity#DPL-delegated_members

From: REDACTED <da-manager@debian.org>
To: debian-private@lists.debian.org
Subject: Debian membership of Thomas Koch
Date: Mon, 15 Aug 2022 16:02:15 +0200

Hello everyone,

Today we revoked Thomas Koch’s official Debian Membership.

A timeline of events:

  • In November 2020 Thomas posted about “this so-called pandemic” to debian-private@l.d.o

  • DAM issued a warning on the 15th November 2020, clearly telling him that Debian is not a platform for spreading conspiracy theories and to please keep them out of Debian.

  • In February 2022 he added his blog to Planet Debian to have a “Corona Plandemic” post appear.

  • His blog was removed from Planet on the same day (added later again, with the condition to not have any further corona posts appear on Planet Debian).

  • DAM sent another mail asking to keep this off Debian; Thomas committed to “not write anything further about corona on any Debian platform or list after this mail”.

  • In March 2022 he sent mail to REDACTED and debian-private about Novavax and spike proteins and so breaking his earlier commitment.

  • In August 2022 he flamed on debian-private against publicity team’s standard COVID-19 warning in a post about organising parties.

We don’t mean to make COVID-19 off-topic in Debian: there are lots of ways in which it affects the project, which is important to be able to discuss. But one should not use Debian as a platform to amplify conspiracy theories. In this specific case, one is expected to take repeated feedback into account.

– Greetings, REDACTED, for the DAMs

15 March, 2023 12:00AM

March 13, 2023

Dima Kogan

Debian at SCaLE 20x

SCaLE 20x just wrapped up. We spent three days running the Debian booth: passing out stickers, penguin swag, coffee and cookies, and telling everyone who would listen about our great OS. As usual, Richard Hecker, Chris McKenzie and I attended as the "LA Debian contingent". Mathias Gibbens flew in from Albuquerque, and Ha Lam and Syed Reza stopped by periodically.

Chris created extra demand by restricting the supply of plushy penguins. Some kid was shocked at my old laptop, only to see Mathias pull out an even older one. And we finished off the conference by listening to Ken Thompson's tale about his music collection. Good times.

The crew:

[two photos of the booth crew]

Looking forward to next year!

13 March, 2023 07:58PM by Dima Kogan

Antoine Beaupré

Framework 12th gen laptop review

The Framework is a 13.5" laptop body with swappable parts, which makes it somewhat future-proof and certainly easily repairable, scoring an "exceedingly rare" 10/10 score from ifixit.com.

There are two generations of the laptop's main board (both compatible with the same body): the Intel 11th and 12th gen chipsets.

I received my Framework 12th generation "DIY" device in late September 2022 and will update this page as I go along in the process of ordering, burning in, setting up and using the device over the years.

Overall, the Framework is a good laptop. I like the keyboard, the touch pad, the expansion cards. Clearly there's been some good work done on industrial design, and it's the most repairable laptop I've had in years. Time will tell, but it looks sturdy enough to survive me many years as well.

This is also one of the most powerful devices I have ever laid my hands on. I have managed, remotely, more powerful servers, but this is the fastest computer I have ever owned, and it fits in this tiny case. It is an amazing machine.

On the downside, there's a bit of proprietary firmware required (WiFi, Bluetooth, some graphics) and the Framework ships with a proprietary BIOS, with currently no Coreboot support. Expect to need the latest kernel, firmware, and hacking around a bunch of things to get resolution and keybindings working right.

Like others, I at first found significant power management issues, but many of them can actually be solved with some configuration. Some of the expansion ports (HDMI, DP, MicroSD, and SSD) use power when idle, so don't expect week-long suspend, or "full day" battery life while those are plugged in.

Finally, the expansion ports are nice, but there's only four of them. If you plan to have a two-monitor setup, you're likely going to need a dock.

Read on for the detailed review. For context, I'm moving from the Purism Librem 13v4 because it basically exploded on me. I had, in the meantime, reverted back to an old ThinkPad X220, so I sometimes compare the Framework with that venerable laptop as well.

This blog post has been maturing for months now. It started in September 2022 and I declared it completed in March 2023. It's the longest single article on this entire website, currently clocking in at about 13,000 words. It will take an average reader a full hour to go through this thing, so I don't expect anyone to actually do that. This introduction should be good enough for most people; read the first section if you intend to actually buy a Framework. Jump around the table of contents as you see fit after you have bought the laptop, as it might include some crucial hints on how to make it work best for you, especially on (Debian) Linux.

Advice for buyers

Those are things I wish I would have known before buying:

  1. consider buying 4 USB-C expansion cards, or at least a mix of 4 USB-A or USB-C cards, as they use less power than other cards and you do want to fill those expansion slots otherwise they snag around and feel insecure

  2. you will likely need a dock or at least a USB hub if you want a two-monitor setup, otherwise you'll run out of ports

  3. you have to do some serious tuning to get proper (10h+ idle, 10 days suspend) power savings; see the sketch after this list for a generic starting point

  4. in particular, beware that the HDMI, DisplayPort and particularly the SSD and MicroSD cards take a significant amount of power, even when sleeping, up to 2-6W for the latter two

  5. beware that the MicroSD card is what it says: Micro; normal SD cards won't fit, and while there might be a full-sized one eventually, it's currently only at the prototyping stage

  6. the Framework monitor has an unusual aspect ratio (3:2): I like it (and it matches classic and digital photography aspect ratio), but it might surprise you
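
As a generic illustration of item 3 above, and explicitly not the exact tuning described in the power management section later in the post, a common starting point looks like this:

    # a common starting point for power tuning, not the recipe used in this review
    sudo apt install powertop
    sudo powertop --html=powertop.html   # report on what is drawing power
    sudo powertop --auto-tune            # apply the suggested tunables for this boot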

Current status

I have the Framework! It's set up with a fresh new Debian bookworm installation. I've run through a large number of tests and burn-in.

I have decided to use the Framework as my daily driver, and had to buy a USB-C dock to get my two monitors connected, which was its own adventure.

Specifications

Those are the specifications of the 12th gen, in general terms. Your build will of course vary according to your needs.

  • CPU: i5-1240P, i7-1260P, or i7-1280P (Up to 4.4-4.8 GHz, 4+8 cores), Iris Xe graphics
  • Storage: 250-4000GB NVMe (or bring your own)
  • Memory: 8-64GB DDR4-3200 (or bring your own)
  • WiFi 6e (AX210, vPro optional, or bring your own)
  • 296.63mm X 228.98mm X 15.85mm, 1.3Kg
  • 13.5" display, 3:2 ratio, 2256px X 1504px, 100% sRGB, >400 nit
  • 4 x USB-C user-selectable expansion ports, including
    • USB-C
    • USB-A
    • HDMI
    • DP
    • Ethernet
    • MicroSD
    • 250-1000GB SSD
  • 3.5mm combo headphone jack
  • Kill switches for microphone and camera
  • Battery: 55Wh
  • Camera: 1080p 60fps
  • Biometrics: Fingerprint Reader
  • Backlit keyboard
  • Power Adapter: 60W USB-C (or bring your own)
  • ships with a screwdriver/spudger
  • 1 year warranty
  • base price: 1000$CAD, but doesn't give you much, typical builds around 1500-2000$CAD

Actual build

This is the actual build I ordered. Amounts in CAD. (1CAD = ~0.75EUR/USD.)

Base configuration

  • CPU: Intel® Core™ i5-1240P, 1079$
  • Memory: 16GB (1 x 16GB) DDR4-3200, 104$

Customization

  • Keyboard: US English, included

Expansion Cards

  • 2 USB-C $24
  • 3 USB-A $36
  • 2 HDMI $50
  • 1 DP $50
  • 1 MicroSD $25
  • 1 Storage – 1TB $199
  • Sub-total: 384$

Accessories

  • Power Adapter - US/Canada $64.00

Total

  • Before tax: 1606$
  • After tax and duties: 1847$
  • Free shipping

Quick evaluation

This is basically the TL;DR here, just focusing on the broad pros/cons of the laptop.

Pros

Cons

  • the 11th gen is out of stock, except for the higher-end CPUs, which are much less affordable (700$+)

  • the 12th gen has compatibility issues with Debian (followup in the DebianOn page), but basically: brightness hotkeys, power management, and WiFi all need work; the webcam is okay, even though the chipset is the infamous Alder Lake, because it does not have the fancy camera; most issues currently seem solvable, and upstream is working with mainline to get their shit working

  • 12th gen might have issues with thunderbolt docks

  • they used to have some difficulty keeping up with orders: the first two batches shipped, the third batch sold out, and the fourth batch should have shipped (?) in October 2021; they generally seem to keep up with shipping now. Update (August 2022): they rolled out a second line of laptops (12th gen); the first batch shipped, the second batch shipped late, and the September 2022 batch was generally on time (see this spreadsheet for a crowdsourced effort to track those). Supply chain issues seem to be under control as of early 2023; I got the Ethernet expansion card shipped within a week.

  • compared to my previous laptop (Purism Librem 13v4), it feels strangely bulkier and heavier; it's actually lighter than the purism (1.3kg vs 1.4kg) and thinner (15.85mm vs 18mm) but the design of the Purism laptop (tapered edges) makes it feel thinner

  • no space for a 2.5" drive

  • rather bright LED around the power button; it can be dimmed in the BIOS (not low enough for my taste), but I got used to it

  • fan quiet when idle, but can be noisy when running, for example if you max a CPU for a while

  • battery described as "mediocre" by Ars Technica (above), confirmed poor in my tests (see below)

  • no RJ-45 port, and attempts at designing one were failing because the modular plugs are too thin to fit (according to Linux After Dark), so it seemed unlikely to have one in the future. Update: they cracked that nut and now ship a 2.5Gbps Ethernet expansion card with a Realtek chipset, without any firmware blob (!)

  • a bit pricey for the performance, especially when compared to the competition (e.g. Dell XPS, Apple M1)

  • 12th gen Intel has glitchy graphics; it seems like Intel hasn't fully landed proper Linux support for that chipset yet

Initial hardware setup

A breeze.

Accessing the board

The internals are accessed through five Torx screws, but there's a nice screwdriver/spudger that works well enough. The screws actually hold in place so you can't even lose them.

The first setup is a bit counter-intuitive coming from the Librem laptop, as I expected the back cover to lift and give me access to the internals. But instead the screws release the keyboard and touch pad assembly, so you actually need to flip the laptop back upright and lift the assembly off (!) to get access to the internals. Kind of scary.

I also accidentally unplugged a connector while lifting the assembly, because I lifted it towards the monitor, while you actually need to lift it to the right. Thankfully, the connector didn't break, it just snapped off and I could plug it back in, no harm done.

Once there, everything is well indicated, with QR codes all over the place supposedly leading to online instructions.

Bad QR codes

Unfortunately, the QR codes I tested (in the expansion card slot, the memory slot and CPU slots) did not actually work so I wonder how useful those actually are.

After all, they need to point to something and that means a URL, a running website that will answer those requests forever. I bet those will break sooner than later and in fact, as far as I can tell, they just don't work at all. I prefer the approach taken by the MNT reform here which designed (with the 100 rabbits folks) an actual paper handbook (PDF).

The first QR code that's immediately visible from the back of the laptop, in an expansion card slot, is a 404. It seems to be some serial number URL, but I can't actually tell because, well, the page is a 404.

I was expecting that bar code to lead me to an introduction page, something like "how to set up your Framework laptop". Support actually confirmed that it should point to a quickstart guide. But in a bizarre twist, they somehow sent me the URL with the plus (+) signs escaped, like this:

https://guides.frame.work/Guide/Framework\+Laptop\+DIY\+Edition\+Quick\+Start\+Guide/57

... which Firefox immediately transforms in:

https://guides.frame.work/Guide/Framework/+Laptop/+DIY/+Edition/+Quick/+Start/+Guide/57

I'm puzzled as to why they would send the URL that way, the proper URL is of course:

https://guides.frame.work/Guide/Framework+Laptop+DIY+Edition+Quick+Start+Guide/57

(They have also "let the team know about this for feedback and help resolve the problem with the link" which is a support code word for "ha-ha! nope! not my problem right now!" Trust me, I know, my own code word is "can you please make a ticket?")

Seating disks and memory

The "DIY" kit doesn't actually have that much of a setup. If you bought RAM, it's shipped outside the laptop in a little plastic case, so you just seat it in as usual.

Then you insert your NVMe drive, and, if that's your fancy, you also install your own mPCI WiFi card. If you ordered one (which was my case), it's pre-installed.

Closing the laptop is also kind of amazing, because the keyboard assembly snaps into place with magnets. I have actually used the laptop with the keyboard unscrewed as I was putting the drives in and out, and it actually works fine (and will probably void your warranty, so don't do that). (But you can.) (But don't, really.)

Hardware review

Keyboard and touch pad

The keyboard feels nice, for a laptop. I'm used to mechanical keyboards and I'm rather violent with those poor things. Yet the key travel is nice and it's clickety enough that I don't feel too disoriented.

At first, I felt the keyboard was more laggy than my normal workstation setup, but it turned out this was a graphics driver issue. After enabling a composition manager, everything feels snappy.

The touch pad feels good. The double-finger scroll works well enough, and I don't have to wonder too much where the middle button is, it just works.

Taps don't work, out of the box: that needs to be enabled in Xorg, with something like this:

cat > /etc/X11/xorg.conf.d/40-libinput.conf <<EOF
Section "InputClass"
      Identifier "libinput touch pad catchall"
      MatchIsTouchpad "on"
      MatchDevicePath "/dev/input/event*"
      Driver "libinput"
      Option "Tapping" "on"
      Option "TappingButtonMap" "lmr"
EndSection
EOF

But be aware that once you enable that tapping, you'll need to deal with palm detection... So I have not actually enabled this in the end.

Power button

The power button is a little dangerous. It's quite easy to hit, as it's right next to one expansion card where you are likely to plug in a power cable. And because the expansion cards are kind of hard to remove, you might squeeze the laptop (and the power key) when trying to remove the expansion card next to the power button.

So obviously, don't do that. But that's not very helpful.

An alternative is to make the power button do something else. With systemd-managed systems, it's actually quite easy. Add a HandlePowerKey stanza to (say) /etc/systemd/logind.conf.d/power-suspends.conf:

[Login]
HandlePowerKey=suspend
HandlePowerKeyLongPress=poweroff

You might have to create the directory first:

mkdir /etc/systemd/logind.conf.d/

Then restart logind:

systemctl restart systemd-logind

And the power button will suspend! Long-press to power off doesn't actually work as the laptop immediately suspends...

Note that there's probably half a dozen other ways of doing this, see this, this, or that.

Special keybindings

There is a series of "hidden" (as in: not labeled on the key) keybindings related to the fn keybinding that I actually find quite useful.

Key   Equivalent   Effect                   Command
p     Pause        lock screen              xset s activate
b     Break        ?                        ?
k     ScrLk        switch keyboard layout   N/A

It looks like those are defined in the microcontroller so it would be possible to add some. For example, the SysRq key is almost bound to fn s in there.

Note that most other shortcuts like this are clearly documented (volume, brightness, etc). One key that's less obvious is F12 that only has the Framework logo on it. That actually calls the keysym XF86AudioMedia which, interestingly, does absolutely nothing here. By default, on Windows, it opens your browser to the Framework website and, on Linux, your "default media player".
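
If you want that key to do something, you can always bind the keysym yourself; a hypothetical i3 binding, where the playerctl target is just an example and not something this laptop ships with:

    # hypothetical i3 config: bind the Framework (F12) key, which emits XF86AudioMedia
    bindsym XF86AudioMedia exec --no-startup-id playerctl play-pause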

The keyboard backlight can be cycled with fn-space. The dimmer version is dim enough, and the keybinding is easy to find in the dark.

A skinny elephant would be performed with alt PrtScr (above F11) plus a letter key, so for example alt fn F11 b should do a hard reset. This comment suggests you need to hold fn only if "function lock" is on, but that's actually the opposite of my experience.

Out of the box, some of the fn keys don't work. Mute, volume up/down, brightness, monitor changes, and the airplane mode key all do basically nothing. They don't send proper keysyms to Xorg at all.

This is a known problem and it's related to the fact that the laptop has light sensors to adjust the brightness automatically. Somehow some of those keys (e.g. the brightness controls) are supposed to show up as a different input device, but don't seem to work correctly. It seems like the solution is for the Framework team to write a driver specifically for this, but so far no progress since July 2022.

In the meantime, the fancy functionality can be supposedly disabled with:

echo 'blacklist hid_sensor_hub' | sudo tee /etc/modprobe.d/framework-als-blacklist.conf

... and a reboot. This solution is also documented in the upstream guide.

Note that there's another solution flying around that fixes this by changing permissions on the input device but I haven't tested that or seen confirmation it works.

Kill switches

The Framework has two "kill switches": one for the camera and the other for the microphone. The camera one actually disconnects the USB device when turned off, and the mic one seems to cut the circuit. It doesn't show up as muted, it just stops feeding the sound.

Both kill switches are around the main camera, on top of the monitor, and quite discreet. They turn "red" when enabled (i.e. "red" means "turned off").

Monitor

The monitor looks pretty good to my untrained eyes. I have yet to do photography work on it, but some photos I looked at look sharp and the colors are bright and lively. The blacks are dark and the screen is bright.

I have yet to use it in full sunlight.

The dimmed light is very dim, which I like.

Screen backlight

I bind brightness keys to xbacklight in i3, but out of the box I get this error:

sep 29 22:09:14 angela i3[5661]: No outputs have backlight property

It just requires this blob in /etc/X11/xorg.conf.d/backlight.conf:

Section "Device"
    Identifier  "Card0"
    Driver      "intel"
    Option      "Backlight"  "intel_backlight"
EndSection

This way I can control the actual backlight power with the brightness keys, and they do significantly reduce power usage.
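
For reference, the i3 side of this is just a pair of bindings; a minimal sketch, assuming xbacklight is installed and the Xorg snippet above is in place (the step size is arbitrary):

    # minimal i3 brightness bindings; 10% steps are an arbitrary choice
    bindsym XF86MonBrightnessUp exec --no-startup-id xbacklight -inc 10
    bindsym XF86MonBrightnessDown exec --no-startup-id xbacklight -dec 10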

Multiple monitor support

I have been able to hook up my two old monitors to the HDMI and DisplayPort expansion cards on the laptop. The lid closes without suspending the machine, and everything works great.

I actually run out of ports, even with a 4-port USB-A hub, which gives me a total of 7 ports:

  1. power (USB-C)
  2. monitor 1 (DisplayPort)
  3. monitor 2 (HDMI)
  4. USB-A hub, which adds:
  5. keyboard (USB-A)
  6. mouse (USB-A)
  7. Yubikey
  8. external sound card

Now the latter, I might be able to get rid of if I switch to a combo-jack headset, which I do have (and still need to test).

But still, this is a problem. I'll probably need a powered USB-C dock and better monitors, possibly with some Thunderbolt chaining, to save yet more ports.

But that means more money into this setup, argh. And figuring out my monitor situation is the kind of thing I'm not that big of a fan of. And neither is shopping for USB-C (or is it Thunderbolt?) hubs.

My normal autorandr setup doesn't work: I have tried saving a profile and it doesn't get autodetected, so I also first need to do:

autorandr -l framework-external-dual-lg-acer

The magic:

autorandr -l horizontal

... also works well.
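
For reference, such a profile gets created by first arranging the monitors with xrandr (or arandr) and then recording the layout; a minimal sketch, using standard autorandr usage (the profile name is just the one I picked above):

# arrange the displays with xrandr/arandr first, then record the layout
autorandr --save framework-external-dual-lg-acer
# and load it back explicitly when autodetection fails
autorandr -l framework-external-dual-lg-acer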

The worst problem with those monitors right now is that they have a radically smaller resolution than the laptop's main screen, so I need to reset the font scaling to normal every time I switch between the external monitors and the laptop. That means I actually need to do this:

autorandr -l horizontal &&
echo Xft.dpi: 96 | xrdb -merge &&
systemctl restart terminal xcolortaillog background-image emacs &&
i3-msg restart

Kind of disruptive.

Expansion ports

I ordered a total of 10 expansion cards.

I did manage to initialize the 1TB drive as encrypted storage, mostly to keep photos, as those take a massive amount of space (500GB and counting) and I (unfortunately) don't work on them very often (but still carry them around).
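
For the curious, the initialization looks roughly like this; a sketch only, assuming the card shows up as /dev/sdb and using a hypothetical "photos" label, not necessarily the exact commands I ran:

# WARNING: this destroys everything on the card; check lsblk to confirm the device name
cryptsetup luksFormat /dev/sdb
cryptsetup open /dev/sdb photos_crypt
mkfs.ext4 -L photos /dev/mapper/photos_crypt
mount /dev/mapper/photos_crypt /mnt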

The expansion cards are fancy and nice, but not actually that convenient. They're a bit hard to take out: you really need to dig your fingernails in and pull hard. There's a little button next to them to release them, I think, but at first it feels a little scary to pull those pucks out of there. You get used to it though, and eventually it's one of those things you can do without looking.

There are only four expansion ports. Once you have two monitors, the drive, and power plugged in, bam, you're out of ports; there's nowhere left to plug in my Yubikey. So if this is going to be my daily driver, with a dual monitor setup, I will need a dock, which means more crap firmware and uncertainty, which isn't great. There are actually plans to make a dual-USB card, but that is blocked on designing an actual board for it.

I can't wait to see more expansion cards produced. There's an Ethernet expansion card which went out of stock basically the day it was announced, but it was eventually restocked.

I would like to see a proper SD card reader. There's a MicroSD card reader, but that obviously doesn't work for normal SD cards, which would be more broadly compatible anyways (because you can put a MicroSD card in an SD card adapter, but I have never heard of the reverse). Someone actually found an SD card reader that fits, and someone else managed to cram it into a 3D-printed case, which is kind of amazing.

Still, I really like the idea that I can carry all those little adapters in a pouch when I travel and basically do anything I want. It does mean I need to shuffle through them to find the right one, which is a little annoying. I have an elastic band to keep them lined up so that all the ports show the same side, to make it easier to find the right one, but that quickly gets undone and instead I have a pouch full of loose expansion cards.

Another awesome thing with the expansion cards is that they don't just work on the laptop: anything that takes USB-C can take those cards, which means you can use it to connect an SD card to your phone, for backups, for example. Heck, you could even connect an external display to your phone that way, assuming that's supported by your phone of course (and it probably isn't).

The expansion ports do take up some power, even when idle. See the power management section below, and particularly the power usage tests for details.

USB-C charging

One thing that is really a game changer for me is USB-C charging. It's hard to overstate how convenient this is. I often have a USB-C cable lying around to charge my phone, and I can just grab that thing and pop it in my laptop. And while it will obviously not charge as fast as the provided charger, it will stop draining the battery at least.

(As I wrote this, I had the laptop plugged in the Samsung charger that came with a phone, and it was telling me it would take 6 hours to charge the remaining 15%. With the provided charger, that flew down to 15 minutes. Similarly, I can power the laptop from the power grommet on my desk, reducing clutter as I have that single wire out there instead of the bulky power adapter.)

I also really like the idea that I can charge my laptop with a power bank or, heck, with my phone, if push comes to shove. (And vice-versa!)

This is awesome. And it works from any of the expansion ports, of course. There's a little led next to the expansion ports as well, which indicate the charge status:

  • red/amber: charging
  • white: charged
  • off: unplugged

I couldn't find documentation about this, but the forum answered.

This is something of a recurring theme with the Framework. While it has a good knowledge base and repair/setup guides (and the forum is awesome), it doesn't have a good "owner's manual" that shows you the different parts of the laptop and what they do. Again, something the MNT Reform did well.

Another thing that people are asking about is an external sleep indicator: because the power LED is on the main keyboard assembly, you don't actually see whether the device is active or not when the lid is closed.

Finally, I wondered what happens when you plug in multiple power sources and it turns out the charge controller is actually pretty smart: it will pick the best power source and use it. The only downside is it can't use multiple power sources, but that seems like a bit much to ask.

Multimedia and other devices

Those things also work:

  • webcam: splendid, best webcam I've ever had (but my standards are really low)
  • onboard mic: works well, good gain (maybe a bit much)
  • onboard speakers: sound okay, a little metal-ish, loud enough to be annoying, see this thread for benchmarks, apparently pretty good speakers
  • combo jack: works, with slight hiss, see below

There's also a light sensor, but it conflicts with the keyboard brightness controls (see above).

There's also an accelerometer, but it's off by default and will be removed from future builds.

Combo jack mic tests

The Framework laptop ships with a combo jack on the left side, which allows you to plug in a CTIA (source) headset. In human terms, it's a device that has both a stereo output and a mono input, typically a headset or ear buds with a microphone somewhere.

It works, which is better than the Purism (which only had audio out), but is par for the course for that kind of onboard hardware. Because of electrical interference, such sound cards very often pick up lots of noise from the board.

With a Jabra Evolve 40, the built-in USB sound card generates basically zero noise on silence (invisible down to -60dB in Audacity) while plugging it in directly generates a solid -30dB hiss. There is a noise-reduction system in that sound card, but the difference is still quite striking.

On a comparable setup (curie, a 2017 Intel NUC), there is also a hiss with the Jabra headset, but it's quieter, more in the order of -40/-50 dB, a noticeable difference. Interestingly, testing with my Mee Audio Pro M6 earbuds leads to a little more hiss on curie, more in the -35/-40 dB range, close to the Framework.

Also note that another sound card, the Antlion USB adapter that comes with the ModMic 4, also gives me pretty close to silence on a quiet recording, picking up less than -50dB of background noise. It's actually probably picking up the fans in the office, which do make audible noises.

In other words, the hiss of the sound card built in the Framework laptop is so loud that it makes more noise than the quiet fans in the office. Or, another way to put it is that two USB sound cards (the Jabra and the Antlion) are able to pick up ambient noise in my office but not the Framework laptop.
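
If you want to reproduce that kind of measurement without firing up Audacity, something like this should give comparable numbers; a sketch, assuming sox and alsa-utils are installed and the right capture device is selected in your mixer:

# record 10 seconds of "silence" from the default capture device...
arecord -f cd -d 10 silence.wav
# ... then look at the "RMS lev dB" line for the noise floor
sox silence.wav -n stats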

See also my audio page.

Performance tests

Compiling Linux 5.19.11

On a single core, compiling the Debian version of the Linux kernel takes around 100 minutes:

5411.85user 673.33system 1:37:46elapsed 103%CPU (0avgtext+0avgdata 831700maxresident)k
10594704inputs+87448000outputs (9131major+410636783minor)pagefaults 0swaps

This was using 16 watts of power, with full screen brightness.

With all 16 cores (make -j16), it takes less than 25 minutes:

19251.06user 2467.47system 24:13.07elapsed 1494%CPU (0avgtext+0avgdata 831676maxresident)k
8321856inputs+87427848outputs (30792major+409145263minor)pagefaults 0swaps

I had to plug in the normal power supply after a few minutes because the battery would actually run out using my desk's power grommet (34 watts).

During compilation, the fans were spinning really hard and were quite noisy, but not painfully so.

The laptop was sucking 55 watts of power, steadily:

  Time    User  Nice   Sys  Idle    IO  Run Ctxt/s  IRQ/s Fork Exec Exit  Watts
-------- ----- ----- ----- ----- ----- ---- ------ ------ ---- ---- ---- ------
 Average  87.9   0.0  10.7   1.4   0.1 17.8 6583.6 5054.3 233.0 223.9 233.1  55.96
 GeoMean  87.9   0.0  10.6   1.2   0.0 17.6 6427.8 5048.1 227.6 218.7 227.7  55.96
  StdDev   1.4   0.0   1.2   0.6   0.2  3.0 1436.8  255.5 50.0 47.5 49.7   0.20
-------- ----- ----- ----- ----- ----- ---- ------ ------ ---- ---- ---- ------
 Minimum  85.0   0.0   7.8   0.5   0.0 13.0 3594.0 4638.0 117.0 111.0 120.0  55.52
 Maximum  90.8   0.0  12.9   3.5   0.8 38.0 10174.0 5901.0 374.0 362.0 375.0  56.41
-------- ----- ----- ----- ----- ----- ---- ------ ------ ---- ---- ---- ------
Summary:
CPU:  55.96 Watts on average with standard deviation 0.20
Note: power read from RAPL domains: package-0, uncore, package-0, core, psys.
These readings do not cover all the hardware in this device.
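
For the record, the timings above are GNU time output and the power table comes from powerstat; a rough sketch of how to reproduce that kind of run (the exact kernel tree, configuration, and make invocation I used may differ):

# build the Debian-packaged kernel source under GNU time (sketch; targets may differ)
apt source linux && cd linux-*
make olddefconfig
/usr/bin/time make -j1       # single-core run
make clean
/usr/bin/time make -j16      # all 16 threads
# in another terminal, sample power from the RAPL domains while it builds
sudo powerstat -R 1 300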

memtest86+

I ran Memtest86+ v6.00b3. It shows something like this:

Memtest86+ v6.00b3      | 12th Gen Intel(R) Core(TM) i5-1240P
CLK/Temp: 2112MHz    78/78°C | Pass  2% #
L1 Cache:   48KB    414 GB/s | Test 46% ##################
L2 Cache: 1.25MB    118 GB/s | Test #3 [Moving inversions, 1s & 0s] 
L3 Cache:   12MB     43 GB/s | Testing: 16GB - 18GB [1GB of 15.7GB]
Memory  :  15.7GB  14.9 GB/s | Pattern: 
--------------------------------------------------------------------------------
CPU: 4P+8E-Cores (16T)    SMP: 8T (PAR))  | Time:  0:27:23  Status: Pass     \
RAM: 1600MHz (DDR4-3200) CAS 22-22-22-51  | Pass:  1        Errors: 0
--------------------------------------------------------------------------------

Memory SPD Information
----------------------
 - Slot 2: 16GB DDR-4-3200 - Crucial CT16G4SFRA32A.C16FP (2022-W23)







                          Framework FRANMACP04
 <ESC> Exit  <F1> Configuration  <Space> Scroll Lock            6.00.unknown.x64

So about 30 minutes for a full 16GB memory test.

Software setup

Once I had everything in the hardware setup, I figured, voilà, I'm done, I'm just going to boot this beautiful machine and I can get back to work.

I don't understand why I am so naïve sometimes. It's mind-boggling.

Obviously, it didn't happen that way at all, and I spent the better part of the following three days tinkering with the laptop.

Secure boot and EFI

First, I couldn't boot off of the NVMe drive I transferred from the previous laptop (the Purism) and the BIOS was not very helpful: it was just complaining about not finding any boot device, without dropping me in the real BIOS.

At first, I thought it was a problem with my NVMe drive, because it's not listed in the compatible SSD drives from upstream. But I figured out how to enter BIOS (press F2 manically, of course), which showed the NVMe drive was actually detected. It just didn't boot, because it was an old (2010!!) Debian install without EFI.

So from there, I disabled secure boot, and booted a grml image to try to recover. And by "boot" I mean, I managed to get to the grml boot loader which promptly failed to load its own root file system somehow. I still have to investigate exactly what happened there, but it failed some time after the initrd load with:

Unable to find medium containing a live file system

This, it turns out, was recently fixed in Debian, so a daily GRML build will not have this problem. The upcoming 2022 release (likely 2022.10 or 2022.11) will also get the fix.

I did manage to boot the development version of the Debian installer, which was a surprisingly good experience: it mounted the encrypted drives and did everything pretty smoothly. It even offered to reinstall the boot loader, but that ultimately (and correctly, as it turns out) failed because I didn't have a /boot/efi partition.

At this point, I realized there was no easy way out of this, and I just proceeded to completely reinstall Debian. I had a spare NVMe drive lying around (backups FTW!) so I just swapped that in, rebooted in the Debian installer, and did a clean install. I wanted to switch to bookworm anyways, so I guess that's done too.

Storage limitations

Another thing that happened during setup is that I tried to copy over the internal 2.5" SSD drive from the Purism to the Framework 1TB expansion card. There's no 2.5" slot in the new laptop, so that's pretty much the only option for storage expansion.

I was tired and did something wrong. I ended up wiping the partition table on the original 2.5" drive.

Oops.

It might be recoverable, but just restoring the partition table didn't work either, so I'm not sure how to recover the data there. Normally, everything on my laptops and workstations is designed to be disposable, so that wasn't that big of a problem. I did manage to recover most of the data thanks to git-annex reinit, but that was a little hairy.

Bootstrapping Puppet

Once I had some networking, I had to install all the packages I needed. The time I spent setting up my workstations with Puppet has finally paid off. What I actually did was to restore two critical directories:

/etc/ssh
/var/lib/puppet

So that I would keep the previous machine's identity. That way I could contact the Puppet server and install whatever was missing. I used my Puppet optimization trick to do a batch install, and then I had a good base setup, although not exactly as it was before: 1700 packages had been installed manually on angela before the reinstall and were not managed by Puppet.

I did not inspect each one individually, but I did go through /etc and copied over more SSH keys, for backups and SMTP over SSH.
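
Concretely, the bootstrap looks something like this; a sketch with a hypothetical backup path, and without the batch-install optimization:

# restore the machine identity from backups (backup path is hypothetical)
rsync -a /backup/angela/etc/ssh/ /etc/ssh/
rsync -a /backup/angela/var/lib/puppet/ /var/lib/puppet/
# then let Puppet converge the rest
apt install puppet
puppet agent --test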

LVFS support

It looks like there's support for the (de-facto) standard LVFS firmware update system. At least I was able to update the UEFI firmware with a simple:

apt install fwupd-amd64-signed
fwupdmgr refresh
fwupdmgr get-updates
fwupdmgr update

Nice. The 12th gen BIOS updates, currently (January 2023) beta, can be deployed through LVFS with:

fwupdmgr enable-remote lvfs-testing
echo 'DisableCapsuleUpdateOnDisk=true' >> /etc/fwupd/uefi_capsule.conf 
fwupdmgr update

Those instructions come from the beta forum post. I performed the BIOS update on 2023-01-16T16:00-0500.

Resolution tweaks

The Framework laptop resolution (2256x1504) is high enough to give you a pretty small font size, so welcome to the marvelous world of "scaling".

The Debian wiki page has a few tricks for this.

Console

This will make the console and grub fonts more readable:

cat >> /etc/default/console-setup <<EOF
FONTFACE="Terminus"
FONTSIZE=32x16
EOF
echo GRUB_GFXMODE=1024x768 >> /etc/default/grub
update-grub

Xorg

Adding this to your .Xresources will make everything look much bigger:

! 1.5*96
Xft.dpi: 144

Apparently, some of this can also help:

! These might also be useful depending on your monitor and personal preference:
Xft.autohint: 0
Xft.lcdfilter:  lcddefault
Xft.hintstyle:  hintfull
Xft.hinting: 1
Xft.antialias: 1
Xft.rgba: rgb

In my experience it also makes things look a little fuzzier, which is frustrating because you have this awesome monitor but everything looks out of focus. Just bumping Xft.dpi by a factor of 1.5 looks good to me.

The Debian Wiki has a page on HiDPI, but it's not as good as the Arch Wiki, where the above blurb comes from. I am not using the latter because I suspect it's causing some of the "fuzziness".

TODO: find the equivalent of this GNOME hack in i3? (gsettings set org.gnome.mutter experimental-features "['scale-monitor-framebuffer']"), taken from this Framework guide

Issues

BIOS configuration

The Framework BIOS has some minor issues. One issue I personally encountered is that I had disabled Quick boot and Quiet boot in the BIOS to diagnose the above boot issues. This, in turn, triggered a bug where the BIOS boot manager (F12) would just hang completely. It would also fail to boot from an external USB drive.

The current fix (as of BIOS 3.03) is to re-enable both Quick boot and Quiet boot. Presumably this is something that will get fixed in a future BIOS update.

Note that the following keybindings are active in the BIOS POST check:

Key Meaning
F2 Enter BIOS setup menu
F12 Enter BIOS boot manager
Delete Enter BIOS setup menu

WiFi compatibility issues

I couldn't make WiFi work at first. Obviously, the default Debian installer doesn't ship with proprietary firmware (although that might change soon) so the WiFi card didn't work out of the box. But even after copying the firmware through a USB stick, I couldn't quite manage to find the right combination of ip/iw/wpa-supplicant (yes, after repeatedly copying a bunch more packages over to get those bootstrapped). (Next time I should probably try something like this post.)

Thankfully, I had a little USB-C dongle with a RJ-45 jack lying around. That also required a firmware blob, but it was a single package to copy over, and with that loaded, I had network.

Eventually, I did manage to make WiFi work; the problem was more on the side of "I forgot how to configure a WPA network by hand from the command line" than anything else. NetworkManager worked fine and got WiFi working correctly.
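
For my own future reference, this is more or less the by-hand incantation I was fumbling for; a sketch, assuming the interface is called wlan0 and that wpasupplicant and a DHCP client are installed (the network name and passphrase are placeholders):

# generate a wpa_supplicant configuration from the network name and passphrase
wpa_passphrase 'MyNetwork' 'secret passphrase' > /etc/wpa_supplicant/wlan0.conf
# bring the interface up, associate, then get a lease
ip link set wlan0 up
wpa_supplicant -B -i wlan0 -c /etc/wpa_supplicant/wlan0.conf
dhclient wlan0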

Note that this is with Debian bookworm, which has the 5.19 Linux kernel, and with the firmware-nonfree (firmware-iwlwifi, specifically) package.

Battery life

I was getting about 7 hours of battery life on the Purism Librem 13v4, and that's after a year or two of wear on that battery. Now, I still get about 7 hours of battery life, which is nicer than my old ThinkPad X220 (20 minutes!) but really, it's not that good for a new generation laptop. The 12th generation Intel chipset probably improved things compared to the previous Framework laptop, but I don't have an 11th gen Framework to compare with.

(Note that those are estimates from my status bar, not wall clock measurements. They should still be comparable between the Purism and Framework, that said.)

The battery life doesn't seem up to par with, say, a Dell XPS 13 or a ThinkPad X1, and certainly not with the Apple M1, where I would expect 10+ hours of battery life out of the box.

That said, I do get those kinds of estimates when the machine is fully charged and idle. In fact, when everything is quiet and nothing is plugged in, I get dozens of hours of estimated battery life (I've seen 25h!). So power usage fluctuates quite a bit depending on usage, which I guess is expected.

Concretely, so far, light web browsing, reading emails and writing notes in Emacs (e.g. this file) takes about 8W of power:

Time    User  Nice   Sys  Idle    IO  Run Ctxt/s  IRQ/s Fork Exec Exit  Watts
-------- ----- ----- ----- ----- ----- ---- ------ ------ ---- ---- ---- ------
 Average   1.7   0.0   0.5  97.6   0.2  1.2 4684.9 1985.2 126.6 39.1 128.0   7.57
 GeoMean   1.4   0.0   0.4  97.6   0.1  1.2 4416.6 1734.5 111.6 27.9 113.3   7.54
  StdDev   1.0   0.2   0.2   1.2   0.0  0.5 1584.7 1058.3 82.1 44.0 80.2   0.71
-------- ----- ----- ----- ----- ----- ---- ------ ------ ---- ---- ---- ------
 Minimum   0.2   0.0   0.2  94.9   0.1  1.0 2242.0  698.2 82.0 17.0 82.0   6.36
 Maximum   4.1   1.1   1.0  99.4   0.2  3.0 8687.4 4445.1 463.0 249.0 449.0   9.10
-------- ----- ----- ----- ----- ----- ---- ------ ------ ---- ---- ---- ------
Summary:
System:   7.57 Watts on average with standard deviation 0.71

Expansion cards matter a lot for battery life (see below for a thorough discussion); my normal setup is 2xUSB-C and 1xUSB-A (yes, with an empty slot, and yes, to save power).

Interestingly, playing a (720p) video in a window takes up more power (10.5W) than in full screen (9.5W), but I blame that on my desktop setup (i3 + compton)... I'm not sure if mpv hits the VA-API, maybe not in windowed mode. Similar results with 1080p, interestingly, except the windowed playback struggles to keep up altogether. Full screen playback takes a relatively comfortable 9.5W, which means a solid 5h+ of playback, which is fine by me.

Fooling around on the web, small edits, youtube-dl, and I'm at around 80% battery after about an hour, with an estimated 5h left, which is a little disappointing. I had a 7h remaining estimate before I started goofing around on Discourse, so I suspect that website is a pretty big battery drain, actually. I see about 10-12W, while I was probably at around half that (6-8W) just playing music with mpv in the background...

In other words, it looks like editing posts in Discourse with Firefox takes a solid 4-6W of power. Amazing and gross.

(When writing about abusive power usage generates more power usage, is that a heisenbug? Or a schrödinbug?)

Power management

Compared to the Purism Librem 13v4, the ongoing power usage seems to be slightly better. An anecdotal metric is that the Purism would take 800mA idle, while the more powerful Framework manages a little over 500mA as I'm typing this, fluctuating between 450 and 600mA. That is without any active expansion card, except the storage. Those numbers come from the output of tlp-stat -b and, unfortunately, the "ampere" unit makes it quite hard to compare those, because voltage is not necessarily the same between the two platforms.

  • TODO: review Arch Linux's tips on power saving
  • TODO: i915 driver has a lot of parameters, including some about power saving, see, again, the arch wiki, and particularly enable_fbc=1

TL;DR: power management on the laptop is an issue, but there are various tweaks you can make to improve it. Try:

  • powertop --auto-tune
  • apt install tlp && systemctl enable tlp
  • nvme.noacpi=1 mem_sleep_default=deep on the kernel command line may help with standby power usage
  • keep only USB-C expansion cards plugged in, all others suck power even when idle
  • consider upgrading the BIOS to latest beta (3.06 at the time of writing), unverified power savings
  • latest Linux kernels (6.2) promise power savings as well (unverified)
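
To make the powertop --auto-tune tip from the list above stick across reboots, a small systemd unit along these lines can be used; a sketch (the unit name and path are arbitrary), presumably similar to the powertop service I mention later in this post:

# /etc/systemd/system/powertop.service (hypothetical path/name)
[Unit]
Description=Apply powertop auto-tune power savings at boot

[Service]
Type=oneshot
ExecStart=/usr/sbin/powertop --auto-tune

[Install]
WantedBy=multi-user.target

Enable it with systemctl enable --now powertop.service.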

Background on CPU architecture

There were power problems in the 11th gen Framework laptop, according to this report from Linux After Dark, so the issues with power management on the Framework are not new.

The 12th generation Intel CPU (AKA "Alder Lake") is a big-little architecture with "power-saving" and "performance" cores. There used to be performance problems introduced by the scheduler in Linux 5.16, but those were eventually fixed in 5.18, which uses Intel's hardware as an "intelligent, low-latency hardware-assisted scheduler". According to Phoronix, the 5.19 release improved the power saving, at the cost of some performance penalty. There were also patch series to make the scheduler configurable, but it doesn't look like those have been merged as of 5.19. There was also a session about this at the 2022 Linux Plumbers conference, but it stopped short of going into the specific problems Linux is facing on Alder Lake:

Specifically, the kernel's energy-aware scheduling heuristics don't work well on those CPUs. A number of features present there complicate the energy picture; these include SMT, Intel's "turbo boost" mode, and the CPU's internal power-management mechanisms. For many workloads, running on an ostensibly more power-hungry Pcore can be more efficient than using an Ecore. Time for discussion of the problem was lacking, though, and the session came to a close.

All this to say that the 12th gen Intel line shipped with this Framework series should have better power management thanks to its power-saving cores, and Linux has had the scheduler changes to make use of this (but may still be having trouble). In any case, this might not be the source of the power management problems on my laptop; quite the opposite.

Also note that the firmware updates for various chipsets are supposed to improve things eventually.

On the other hand, The Verge simply declared the whole P-series a mistake...

Attempts at improving power usage

I did try to follow some of the tips in this forum post. The tricks powertop --auto-tune and tlp's PCIE_ASPM_ON_BAT=powersupersave basically did nothing: I was stuck at 10W power usage in powertop (600+mA in tlp-stat).

Apparently, I should be able to reach the C8 CPU power state (or even C9, C10) in powertop, but I seem to be stuck at C7. (Although I'm not sure how to read that tab in powertop: in the Core(HW) column there are only C3/C6/C7 states, and most cores are 85% in C7 or maybe C6, but the next column over does show many CPUs in C10 states...)

As it turns out, the graphics card actually takes up a good chunk of power unless proper power management is enabled (see below). After tweaking this, I did manage to get down to around 7W power usage in powertop.

Expansion cards actually do take up power, and so does the screen, obviously. The fully-lit screen takes a solid 2-3W of power compared to the fully dimmed screen. When removing all expansion cards and making the laptop idle, I can spin it down to 4 watts power usage at the moment, and an amazing 2 watts when the screen turned off.

Caveats

The abusive (10W+) power usage that I initially found could be a problem with my desktop configuration: I have this silly status bar that updates every second and probably causes redraws... The CPU certainly doesn't seem to spin down below 1GHz. Also note that this is with an actual desktop running with everything: it could very well be that some things (I'm looking at you, Signal Desktop) take up an unreasonable amount of power on their own (hello, 1W per Electron app, sheesh). Syncthing and containerd (Docker!) also seem to take a good 500mW just sitting there.

Beyond my desktop configuration, this could, of course, be a Debian-specific problem; your favorite distribution might be better at power management.

Idle power usage tests

Some expansion cards waste energy, even when unused. Here is a summary of the findings from the powerstat page. I also include other devices tested in this page for completeness:

Device Minimum Average Max Stdev Note
Screen, 100% 2.4W 2.6W 2.8W N/A
Screen, 1% 30mW 140mW 250mW N/A
Backlight 1 290mW ? ? ? fairly small, all things considered
Backlight 2 890mW 1.2W 3W? 460mW? geometric progression
Backlight 3 1.69W 1.5W 1.8W? 390mW? significant power use
Radios 100mW 250mW N/A N/A
USB-C N/A N/A N/A N/A negligible power drain
USB-A 10mW 10mW ? 10mW almost negligible
DisplayPort 300mW 390mW 600mW N/A not passive
HDMI 380mW 440mW 1W? 20mW not passive
1TB SSD 1.65W 1.79W 2W 12mW significant, probably higher when busy
MicroSD 1.6W 3W 6W 1.93W highest power usage, possibly even higher when busy
Ethernet 1.69W 1.64W 1.76W N/A comparable to the SSD card

So it looks like all expansion cards but the USB-C ones are active, i.e. they draw power even when idle. The USB-A cards are the least concern, sucking up 10mW, pretty much within the margin of error. But both the DisplayPort and HDMI cards do take a few hundred milliwatts. It looks like USB-A connectors have a fundamental flaw: they necessarily draw some power because they lack the power negotiation features of USB-C. At least according to this post:

It seems the USB A must have power going to it all the time, that the old USB 2 and 3 protocols, the USB C only provides power when there is a connection. Old versus new.

Apparently, this is a problem specific to the USB-C to USB-A adapter that ships with the Framework. Some people have actually changed their orders to all USB-C because of this problem, but I'm not sure the problem is as serious as claimed in the forums. I couldn't reproduce the "one watt" power drains suggested elsewhere, at least not repeatedly. (A previous version of this post did show such a power drain, but it was in a less controlled test environment than the series of more rigorous tests above.)

The worst offenders are the storage cards: the SSD drive takes at least one watt of power and the MicroSD card seems to want to take all the way up to 6 watts of power, both just sitting there doing nothing. This confirms claims of 1.4W for the SSD (but not 5W) power usage found elsewhere. The former post has instructions on how to disable the card in software. The MicroSD card has been reported as using 2 watts, but I've seen it as high as 6 watts, which is pretty damning.

The Framework team has a beta update for the DisplayPort adapter but currently only for Windows (LVFS technically possible, "under investigation"). A USB-A firmware update is also under investigation. It is therefore likely at least some of those power management issues will eventually be fixed.

Note that the upcoming Ethernet card has a reported 2-8W power usage, depending on traffic. I did my own power usage tests in powerstat-wayland and they seem lower than 2W.

The upcoming 6.2 Linux kernel might also improve battery usage when idle, see this Phoronix article for details, likely in early 2023.

Idle power usage tests under Wayland

Update: I redid those tests under Wayland, see powerstat-wayland for details. The TL;DR: is that power consumption is either smaller or similar.

Idle power usage tests, 3.06 beta BIOS

I redid the idle tests after the 3.06 beta BIOS update and ended up with these results:

Device Minimum Average Max Stdev Note
Baseline 1.96W 2.01W 2.11W 30mW 1 USB-C, screen off, backlight off, no radios
2 USB-C 1.95W 2.16W 3.69W 430mW USB-C confirmed as mostly passive...
3 USB-C 1.95W 2.16W 3.69W 430mW ... although with extra stdev
1TB SSD 3.72W 3.85W 4.62W 200mW unchanged from before upgrade
1 USB-A 1.97W 2.18W 4.02W 530mW unchanged
2 USB-A 1.97W 2.00W 2.08W 30mW unchanged
3 USB-A 1.94W 1.99W 2.03W 20mW unchanged
MicroSD w/o card 3.54W 3.58W 3.71W 40mW significant improvement! 2-3W power saving!
MicroSD w/ card 3.53W 3.72W 5.23W 370mW new measurement! increased deviation
DisplayPort 2.28W 2.31W 2.37W 20mW unchanged
1 HDMI 2.43W 2.69W 4.53W 460mW unchanged
2 HDMI 2.53W 2.59W 2.67W 30mW unchanged
External USB 3.85W 3.89W 3.94W 30mW new result
Ethernet 3.60W 3.70W 4.91W 230mW unchanged

Note that the table summary is different than the previous table: here we show the absolute numbers while the previous table was doing a confusing attempt at showing relative (to the baseline) numbers.

Conclusion: the 3.06 BIOS update did not significantly change idle power usage stats except for the MicroSD card which has significantly improved.

The new "external USB" test is also interesting: it shows how the provided 1TB SSD card performs (admirably) compared to existing devices. The other new result is the MicroSD card with a card which, interestingly, uses less power than the 1TB SSD drive.

Standby battery usage

I wrote a quick hack to evaluate how much power is used during sleep. Apparently, this is one of the areas that should have improved since the first Framework model, so let's find out.
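
The hack is essentially a hook that logs the battery charge before suspend and after resume; a rough sketch of the idea (hypothetical file name, not necessarily my exact script):

#!/bin/sh
# /lib/systemd/system-sleep/battery-log (hypothetical name): systemd runs this
# with $1=pre before suspend and $1=post after resume
for f in /sys/class/power_supply/BAT*/charge_now; do
    printf '%s = %s [mAh]\n' "$f" "$(cat "$f")" | systemd-cat -t systemd-sleep
done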

My baseline for comparison is the Purism laptop, which, in 10 minutes, went from this:

sep 28 11:19:45 angela systemd-sleep[209379]: /sys/class/power_supply/BAT/charge_now                      =   6045 [mAh]

... to this:

sep 28 11:29:47 angela systemd-sleep[209725]: /sys/class/power_supply/BAT/charge_now                      =   6037 [mAh]

That's 8mAh per 10 minutes (and 2 seconds), or 48mA, or, with this battery, about 127 hours or roughly 5 days of standby. Not bad!

In comparison, here is my really old x220, before:

sep 29 22:13:54 emma systemd-sleep[176315]: /sys/class/power_supply/BAT0/energy_now                     =   5070 [mWh]

... after:

sep 29 22:23:54 emma systemd-sleep[176486]: /sys/class/power_supply/BAT0/energy_now                     =   4980 [mWh]

... which is 90 mWh in 10 minutes, or a whopping 540mW, which was possibly okay when this battery was new (62,000 mWh, so about 115 hours, or almost 5 days), but this battery is almost dead and has only 5210 mWh when full, so only about 10 hours of standby.

And here is the Framework performing a similar test, before:

sep 29 22:27:04 angela systemd-sleep[4515]: /sys/class/power_supply/BAT1/charge_full                    =   3518 [mAh]
sep 29 22:27:04 angela systemd-sleep[4515]: /sys/class/power_supply/BAT1/charge_now                     =   2861 [mAh]

... after:

sep 29 22:37:08 angela systemd-sleep[4743]: /sys/class/power_supply/BAT1/charge_now                     =   2812 [mAh]

... which is 49mAh in a little over 10 minutes (and 4 seconds), or 292mA, much more than the Purism, but half of the X220. At this rate, the battery would last on standby only 12 hours!! That is pretty bad.

Note that this was done with the following expansion cards:

  • 2 USB-C
  • 1 1TB SSD drive
  • 1 USB-A with a hub connected to it, with keyboard and LAN

Preliminary tests without the hub (over one minute) show that it doesn't significantly affect this power consumption (300mA).

This guide also suggests booting with nvme.noacpi=1 but this still gives me about 5mAh/min (or 300mA).

Adding mem_sleep_default=deep to the kernel command line does make a difference. Before:

sep 29 23:03:11 angela systemd-sleep[3699]: /sys/class/power_supply/BAT1/charge_now                     =   2544 [mAh]

... after:

sep 29 23:04:25 angela systemd-sleep[4039]: /sys/class/power_supply/BAT1/charge_now                     =   2542 [mAh]

... which is 2mAh in 74 seconds, which is 97mA, brings us to a more reasonable 36 hours, or a day and a half. It's still above the x220 power usage, and more than an order of magnitude more than the Purism laptop. It's also far from the 0.4% promised by upstream, which would be 14mA for the 3500mAh battery.

It should also be noted that this "deep" sleep mode is a little more disruptive than regular sleep. As you can see by the timing, it took more than 10 seconds for the laptop to resume, which feels a little alarming as you're banging the keyboard to bring it back to life.

You can confirm the current sleep mode with:

# cat /sys/power/mem_sleep
s2idle [deep]

In the above, deep is selected. You can change it on the fly with:

printf s2idle > /sys/power/mem_sleep

Here's another test:

sep 30 22:25:50 angela systemd-sleep[32207]: /sys/class/power_supply/BAT1/charge_now                     =   1619 [mAh]
sep 30 22:31:30 angela systemd-sleep[32516]: /sys/class/power_supply/BAT1/charge_now                     =   1613 [mAh]

... better! 6 mAh in about 6 minutes, works out to 63.5mA, so more than two days standby.

A longer test:

oct 01 09:22:56 angela systemd-sleep[62978]: /sys/class/power_supply/BAT1/charge_now                     =   3327 [mAh]
oct 01 12:47:35 angela systemd-sleep[63219]: /sys/class/power_supply/BAT1/charge_now                     =   3147 [mAh]

That's 180mAh in about 3.5h, 52mA! Now at 66h, or almost 3 days.

I wasn't sure why I was seeing such fluctuations in those tests, but as it turns out, expansion card power tests show that they do significantly affect power usage, especially the SSD drive, which can take up to two full watts of power even when idle. I didn't control for expansion cards in the above tests — running them with whatever card I had plugged in without paying attention — so it's likely the cause of the high power usage and fluctuations.

It might be possible to work around this problem by disabling USB devices before suspend. TODO. See also this post.

In the meantime, I have been able to get much better suspend performance by unplugging all modules. Then I get this result:

oct 04 11:15:38 angela systemd-sleep[257571]: /sys/class/power_supply/BAT1/charge_now                     =   3203 [mAh]
oct 04 15:09:32 angela systemd-sleep[257866]: /sys/class/power_supply/BAT1/charge_now                     =   3145 [mAh]

Which is 14.8mA! Almost exactly the number promised by Framework! With a full battery, that means about 10 days of suspend time. This is actually pretty good, and far beyond what I was expecting when I started down this journey.

So, once the expansion cards are unplugged, suspend power usage is actually quite reasonable. More detailed standby tests are available in the standby-tests page, with a summary below.

There is also some hope that the Chromebook edition (specifically designed with a specification of 14 days of standby time) could bring some firmware improvements back to the regular product line. Some of those issues were reported upstream in April 2022, but there doesn't seem to have been any progress there since.

TODO: one final solution here is suspend-then-hibernate, which Windows uses for this

TODO: consider implementing the S0ix sleep states , see also troubleshooting

TODO: consider https://github.com/intel/pm-graph

Standby expansion cards test results

This table is a summary of the more extensive standby-tests I have performed:

Device Wattage Amperage Days Note
baseline 0.25W 16mA 9 sleep=deep nvme.noacpi=1
s2idle 0.29W 18.9mA ~7 sleep=s2idle nvme.noacpi=1
normal nvme 0.31W 20mA ~7 sleep=s2idle without nvme.noacpi=1
1 USB-C 0.23W 15mA ~10
2 USB-C 0.23W 14.9mA same as above
1 USB-A 0.75W 48.7mA 3 +500mW (!!) for the first USB-A card!
2 USB-A 1.11W 72mA 2 +360mW
3 USB-A 1.48W 96mA <2 +370mW
1TB SSD 0.49W 32mA <5 +260mW
MicroSD 0.52W 34mA ~4 +290mW
DisplayPort 0.85W 55mA <3 +620mW (!!)
1 HDMI 0.58W 38mA ~4 +250mW
2 HDMI 0.65W 42mA <4 +70mW (?)

Conclusions:

  • USB-C cards take no extra power on suspend, possibly less than empty slots, more testing required

  • USB-A cards take a lot more power on suspend (300-500mW) than on regular idle (~10mW, almost negligible)

  • 1TB SSD and MicroSD cards seem to take a reasonable amount of power (260-290mW), compared to their runtime equivalents (1-6W!)

  • DisplayPort takes a surprisingly large amount of power (620mW), almost double its average runtime usage (390mW)

  • HDMI cards take, surprisingly, less power (250mW) in standby than the DP card (620mW)

  • and oddly, a second card adds less power usage (70mW?!) than the first, maybe a circuit is used by both?

A discussion of those results is in this forum post.

Standby expansion cards test results, 3.06 beta BIOS

Framework recently (2022-11-07) announced that they will publish a firmware upgrade to address some of the USB-C issues, including power management. This could positively affect the above result, improving both standby and runtime power usage.

The update came out in December 2022 and I redid my analysis with the following results:

Device Wattage Amperage Days Note
baseline 0.25W 16mA 9 no cards, same as before upgrade
1 USB-C 0.25W 16mA 9 same as before
2 USB-C 0.25W 16mA 9 same
1 USB-A 0.80W 62mA 3 +550mW!! worse than before
2 USB-A 1.12W 73mA <2 +320mW, on top of the above, bad!
Ethernet 0.62W 40mA 3-4 new result, decent
1TB SSD 0.52W 34mA 4 a bit worse than before (+2mA)
MicroSD 0.51W 22mA 4 same
DisplayPort 0.52W 34mA 4+ upgrade improved by 300mW
1 HDMI ? 38mA ? same
2 HDMI ? 45mA ? a bit worse than before (+3mA)
Normal 1.08W 70mA ~2 Ethernet, 2 USB-C, USB-A

Full results in standby-tests-306. The big takeaway for me is that the update did not improve power usage on the USB-A ports which is a big problem for my use case. There is a notable improvement on the DisplayPort power consumption which brings it more in line with the HDMI connector, but it still doesn't properly turn off on suspend either.

Even worse, the USB-A ports now sometimes fail to resume after suspend, which is pretty annoying. This is a known problem that will hopefully get fixed in the final release.

Battery wear protection

The BIOS has an option to limit charge to 80% to mitigate battery wear. There's a way to control the embedded controller at runtime with fw-ectool, partly documented here. The command would be:

sudo ectool fwchargelimit 80

I looked at building this myself but failed to run it. I opened an RFP in Debian so that we can ship this in Debian, and also documented my work there.

Note that there is now a counter that tracks charge/discharge cycles. It's visible in tlp-stat -b, which is a nice improvement:

root@angela:/home/anarcat# tlp-stat -b
--- TLP 1.5.0 --------------------------------------------

+++ Battery Care
Plugin: generic
Supported features: none available

+++ Battery Status: BAT1
/sys/class/power_supply/BAT1/manufacturer                   = NVT
/sys/class/power_supply/BAT1/model_name                     = Framewo
/sys/class/power_supply/BAT1/cycle_count                    =      3
/sys/class/power_supply/BAT1/charge_full_design             =   3572 [mAh]
/sys/class/power_supply/BAT1/charge_full                    =   3541 [mAh]
/sys/class/power_supply/BAT1/charge_now                     =   1625 [mAh]
/sys/class/power_supply/BAT1/current_now                    =    178 [mA]
/sys/class/power_supply/BAT1/status                         = Discharging

/sys/class/power_supply/BAT1/charge_control_start_threshold = (not available)
/sys/class/power_supply/BAT1/charge_control_end_threshold   = (not available)

Charge                                                      =   45.9 [%]
Capacity                                                    =   99.1 [%]

One thing that is still missing is the charge threshold data (the (not available) above). There's been some work in August to make that accessible, so stay tuned. This would also make it possible to implement hysteresis support.
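
For what it's worth, once the kernel exposes those attributes, hysteresis would boil down to something like this; a sketch with illustrative values, since the files are still "(not available)" as shown above:

# stop charging at 80%, and only resume charging once the battery drops below 70%
echo 70 > /sys/class/power_supply/BAT1/charge_control_start_threshold
echo 80 > /sys/class/power_supply/BAT1/charge_control_end_threshold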

Ethernet expansion card

The Framework ethernet expansion card is a fancy little doodle: "2.5Gbit/s and 10/100/1000Mbit/s Ethernet", the "clear housing lets you peek at the RTL8156 controller that powers it". Which is another way to say "we didn't completely finish prod on this one, so it kind of looks like we 3D-printed this in the shop"....

The card is a little bulky, but I guess that's inevitable considering the RJ-45 form factor when compared to the thin Framework laptop.

I have had a serious issue when trying it at first: the link LEDs just wouldn't come up. I made a full bug report in the forum and with upstream support, but eventually figured it out on my own. It's (of course) a power saving issue: if you reboot the machine, the links come up when the laptop is running the BIOS POST check and even when the Linux kernel boots.

I first thought that the problem was likely related to the powertop service which I run at boot time to tweak some power saving settings.

It seems like this:

echo 'on' > '/sys/bus/usb/devices/4-2/power/control'

... is a good workaround to bring the card back online. You can even return to power saving mode and the card will still work:

echo 'auto' > '/sys/bus/usb/devices/4-2/power/control'

Further research by Matt_Hartley from the Framework Team found this issue in the tlp tracker that shows how the USB_AUTOSUSPEND setting enables the power saving even if the driver doesn't support it, which, in retrospect, just sounds like a bad idea. To quote that issue:

By default, USB power saving is active in the kernel, but not force-enabled for incompatible drivers. That is, devices that support suspension will suspend, drivers that do not, will not.

So the fix is actually to uninstall tlp or disable that setting by adding this to /etc/tlp.conf:

USB_AUTOSUSPEND=0

... but that disables auto-suspend on all USB devices, which may hurt power usage elsewhere. I have found that a combination of:

USB_AUTOSUSPEND=1
USB_DENYLIST="0bda:8156"

and this on the kernel commandline:

usbcore.quirks=0bda:8156:k

... actually does work correctly. I now have this in my /etc/default/grub.d/framework-tweaks.cfg file:

# net.ifnames=0: normal interface names ffs (e.g. eth0, wlan0, not wlp166s0)
# nvme.noacpi=1: reduce SSD disk power usage (not working)
# mem_sleep_default=deep: reduce power usage during sleep (not working)
# usbcore.quirks is a workaround for the ethernet card suspend bug: https://guides.frame.work/Guide/Fedora+37+Installation+on+the+Framework+Laptop/108?lang=en
GRUB_CMDLINE_LINUX="net.ifnames=0 nvme.noacpi=1 mem_sleep_default=deep usbcore.quirks=0bda:8156:k"

# fix the resolution in grub for fonts to not be tiny
GRUB_GFXMODE=1024x768

Other than that, I haven't been able to max out the card because I don't have other 2.5Gbit/s equipment at home, which is strangely satisfying. But running against my Turris Omnia router, I could pretty much max a gigabit fairly easily:

[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  1.09 GBytes   937 Mbits/sec  238             sender
[  5]   0.00-10.00  sec  1.09 GBytes   934 Mbits/sec                  receiver

The card doesn't require any proprietary firmware blobs, which is surprising. Other than the power saving issues, it just works.

In my power tests (see powerstat-wayland), the Ethernet card seems to use about 1.6W of power idle, without link, in the above "quirky" configuration where the card is functional but without autosuspend.

Proprietary firmware blobs

The Framework does need proprietary firmware to operate. Specifically:

  • the WiFi network card shipped with the DIY kit is an AX210 card that requires a 5.19 kernel or later, and the firmware-iwlwifi non-free firmware package
  • the Bluetooth adapter also loads the firmware-iwlwifi package (untested)
  • the graphics work out of the box without firmware, but certain power management features come only with special proprietary firmware, normally shipped in the firmware-misc-nonfree package but currently missing from it

Note that, at the time of writing, the latest i915 firmware from linux-firmware has a serious bug where loading all the accessible firmware results in noticeable (I estimate 200-500ms) lag between the keyboard (not the mouse!) and the display. Symptoms also include tearing and shearing of windows; it's pretty nasty.

One workaround is to delete the two affected firmware files:

cd /lib/firmware/i915 && rm adlp_guc_70.1.1.bin adlp_guc_69.0.3.bin
update-initramfs -u

You will get the following warning during build, which is good as it means the problematic firmware is disabled:

W: Possible missing firmware /lib/firmware/i915/adlp_guc_69.0.3.bin for module i915
W: Possible missing firmware /lib/firmware/i915/adlp_guc_70.1.1.bin for module i915

But then it also means that critical firmware isn't loaded, which means, among other things, a higher battery drain. I was able to move from 8.5-10W down to the 7W range after making the firmware work properly. This is also after turning the backlight all the way down, as that takes a solid 2-3W at full blast.

The proper fix is to use some compositing manager. I ended up using compton with the following systemd unit:

[Unit]
Description=start compositing manager
PartOf=graphical-session.target
ConditionHost=angela

[Service]
Type=exec
ExecStart=compton --show-all-xerrors --backend glx --vsync opengl-swc
Restart=on-failure

[Install]
RequiredBy=graphical-session.target

compton is orphaned, however, so you might be tempted to use picom instead, but in my experience the latter uses much more power (an extra 1-2W). I also tried compiz, but it would just crash with:

anarcat@angela:~$ compiz --replace
compiz (core) - Warn: No XI2 extension
compiz (core) - Error: Another composite manager is already running on screen: 0
compiz (core) - Fatal: No manageable screens found on display :0

When running from the base session, I would get this instead:

compiz (core) - Warn: No XI2 extension
compiz (core) - Error: Couldn't load plugin 'ccp'
compiz (core) - Error: Couldn't load plugin 'ccp'

Thanks to EmanueleRocca for figuring all that out. See also this discussion about power management on the Framework forum.

Note that Wayland environments do not require any special configuration here and actually work better, see my Wayland migration notes for details.

Note that the iwlwifi firmware also looks incomplete. Even with the package installed, I get those errors in dmesg:

[   19.534429] Intel(R) Wireless WiFi driver for Linux
[   19.534691] iwlwifi 0000:a6:00.0: enabling device (0000 -> 0002)
[   19.541867] iwlwifi 0000:a6:00.0: firmware: failed to load iwlwifi-ty-a0-gf-a0-72.ucode (-2)
[   19.541881] iwlwifi 0000:a6:00.0: firmware: failed to load iwlwifi-ty-a0-gf-a0-72.ucode (-2)
[   19.541882] iwlwifi 0000:a6:00.0: Direct firmware load for iwlwifi-ty-a0-gf-a0-72.ucode failed with error -2
[   19.541890] iwlwifi 0000:a6:00.0: firmware: failed to load iwlwifi-ty-a0-gf-a0-71.ucode (-2)
[   19.541895] iwlwifi 0000:a6:00.0: firmware: failed to load iwlwifi-ty-a0-gf-a0-71.ucode (-2)
[   19.541896] iwlwifi 0000:a6:00.0: Direct firmware load for iwlwifi-ty-a0-gf-a0-71.ucode failed with error -2
[   19.541903] iwlwifi 0000:a6:00.0: firmware: failed to load iwlwifi-ty-a0-gf-a0-70.ucode (-2)
[   19.541907] iwlwifi 0000:a6:00.0: firmware: failed to load iwlwifi-ty-a0-gf-a0-70.ucode (-2)
[   19.541908] iwlwifi 0000:a6:00.0: Direct firmware load for iwlwifi-ty-a0-gf-a0-70.ucode failed with error -2
[   19.541913] iwlwifi 0000:a6:00.0: firmware: failed to load iwlwifi-ty-a0-gf-a0-69.ucode (-2)
[   19.541916] iwlwifi 0000:a6:00.0: firmware: failed to load iwlwifi-ty-a0-gf-a0-69.ucode (-2)
[   19.541917] iwlwifi 0000:a6:00.0: Direct firmware load for iwlwifi-ty-a0-gf-a0-69.ucode failed with error -2
[   19.541922] iwlwifi 0000:a6:00.0: firmware: failed to load iwlwifi-ty-a0-gf-a0-68.ucode (-2)
[   19.541926] iwlwifi 0000:a6:00.0: firmware: failed to load iwlwifi-ty-a0-gf-a0-68.ucode (-2)
[   19.541927] iwlwifi 0000:a6:00.0: Direct firmware load for iwlwifi-ty-a0-gf-a0-68.ucode failed with error -2
[   19.541933] iwlwifi 0000:a6:00.0: firmware: failed to load iwlwifi-ty-a0-gf-a0-67.ucode (-2)
[   19.541937] iwlwifi 0000:a6:00.0: firmware: failed to load iwlwifi-ty-a0-gf-a0-67.ucode (-2)
[   19.541937] iwlwifi 0000:a6:00.0: Direct firmware load for iwlwifi-ty-a0-gf-a0-67.ucode failed with error -2
[   19.544244] iwlwifi 0000:a6:00.0: firmware: direct-loading firmware iwlwifi-ty-a0-gf-a0-66.ucode
[   19.544257] iwlwifi 0000:a6:00.0: api flags index 2 larger than supported by driver
[   19.544270] iwlwifi 0000:a6:00.0: TLV_FW_FSEQ_VERSION: FSEQ Version: 0.63.2.1
[   19.544523] iwlwifi 0000:a6:00.0: firmware: failed to load iwl-debug-yoyo.bin (-2)
[   19.544528] iwlwifi 0000:a6:00.0: firmware: failed to load iwl-debug-yoyo.bin (-2)
[   19.544530] iwlwifi 0000:a6:00.0: loaded firmware version 66.55c64978.0 ty-a0-gf-a0-66.ucode op_mode iwlmvm

Some of those are available in the latest upstream firmware package (iwlwifi-ty-a0-gf-a0-71.ucode, -68, and -67), but not all (e.g. iwlwifi-ty-a0-gf-a0-72.ucode is missing). It's unclear what those do or don't do, as the WiFi seems to work well without them.

I still copied them in from the latest linux-firmware package in the hope they would help with power management, but I did not notice a change after loading them.

There are also multiple knobs on the iwlwifi and iwlmvm drivers. The latter has a power_scheme setting which defaults to 2 (balanced); setting it to 3 (low power) could, in theory, improve battery usage as well. The iwlwifi driver also has power_save (defaults to disabled) and power_level (1-5, defaults to 1) settings. See also the output of modinfo iwlwifi and modinfo iwlmvm for other driver options.
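
If one wanted to experiment with the low-power settings described above, the knobs would go in a modprobe snippet like this; a sketch I haven't validated for actual power savings:

# /etc/modprobe.d/iwlwifi-power.conf (hypothetical file name)
options iwlwifi power_save=1
options iwlmvm power_scheme=3
# reload the modules (or reboot) for the options to take effect:
# modprobe -r iwlmvm iwlwifi && modprobe iwlwifi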

Graphics acceleration

After loading the latest upstream firmware and setting up a compositing manager (compton, above), I tested the classic glxgears.

Running in a window gives me odd results, as the gears basically grind to a halt:

Running synchronized to the vertical refresh.  The framerate should be
approximately the same as the monitor refresh rate.
137 frames in 5.1 seconds = 26.984 FPS
27 frames in 5.4 seconds =  5.022 FPS

Ouch. 5FPS!

But interestingly, once the window is in full screen, it does hit the monitor refresh rate:

300 frames in 5.0 seconds = 60.000 FPS

I'm not really a gamer and I'm not normally using any of that fancy graphics acceleration stuff (except maybe my browser does?).

I installed intel-gpu-tools for the intel_gpu_top command to confirm the GPU was engaged when doing those simulations. A nice find. Other useful diagnostic tools include glxgears and glxinfo (in mesa-utils) and vainfo (in the vainfo package).

Following this post, I also made sure to have this setting in my about:config in Firefox, or, in user.js:

user_pref("media.ffmpeg.vaapi.enabled", true);

Note that the guide suggests many other settings to tweak, but those might actually be overkill, see this comment and its parents. I did try forcing hardware acceleration by setting gfx.webrender.all to true, but everything became choppy and weird.

The guide also mentions installing the intel-media-driver package, but I could not find that in Debian.

The Arch wiki has, as usual, an excellent reference on hardware acceleration in Firefox.

Chromium / Signal desktop bugs

It looks like both Chromium and Signal Desktop misbehave with my compositor setup (compton + i3). The fix is to add a persistent flag to Chromium. In Arch, it's conveniently in ~/.config/chromium-flags.conf but that doesn't actually work in Debian. I had to put the flag in /etc/chromium.d/disable-compositing, like this:

export CHROMIUM_FLAGS="$CHROMIUM_FLAGS --disable-gpu-compositing"

It's possible another one of the hundreds of flags might fix this issue better, but I don't really have time to go through this entire, incomplete, and unofficial list (!?!).

Signal Desktop has a similar problem, and doesn't reuse those flags (because of course it doesn't). I had to rewrite the wrapper script in /usr/local/bin/signal-desktop to use this instead:

exec /usr/bin/flatpak run --branch=stable --arch=x86_64 org.signal.Signal --disable-gpu-compositing "$@"

This was mostly done in this Puppet commit.

I haven't figured out the root of this problem. I did try using picom and xcompmgr; they both suffer from the same issue. Another Debian testing user on Wayland told me they haven't seen this problem, so hopefully this can be fixed by switching to Wayland.

Graphics card hangs

I believe I might have this bug which results in a total graphical hang for 15-30 seconds. It's fairly rare so it's not too disruptive, but when it does happen, it's pretty alarming.

The comments on that bug report are encouraging though: it seems this is a bug in either mesa or the Intel graphics driver, which means many people have this problem so it's likely to be fixed. There's actually a merge request on mesa already (2022-12-29).

It could also be that bug because the error message I get is actually:

Jan 20 12:49:10 angela kernel: Asynchronous wait on fence 0000:00:02.0:sway[104431]:cb0ae timed out (hint:intel_atomic_commit_ready [i915]) 
Jan 20 12:49:15 angela kernel: i915 0000:00:02.0: [drm] GPU HANG: ecode 12:0:00000000 
Jan 20 12:49:15 angela kernel: i915 0000:00:02.0: [drm] Resetting chip for stopped heartbeat on rcs0 
Jan 20 12:49:15 angela kernel: i915 0000:00:02.0: [drm] GuC firmware i915/adlp_guc_70.1.1.bin version 70.1 
Jan 20 12:49:15 angela kernel: i915 0000:00:02.0: [drm] HuC firmware i915/tgl_huc_7.9.3.bin version 7.9 
Jan 20 12:49:15 angela kernel: i915 0000:00:02.0: [drm] HuC authenticated 
Jan 20 12:49:15 angela kernel: i915 0000:00:02.0: [drm] GuC submission enabled 
Jan 20 12:49:15 angela kernel: i915 0000:00:02.0: [drm] GuC SLPC enabled

It's a solid 30-second graphical hang; the keyboard and everything else seems to keep working, as far as I can tell. The latter bug report is quite long, with many comments, but this one from January 2023 seems to say that Sway 1.8 fixed the problem. There's also an earlier patch adding an extra kernel parameter that supposedly fixes it too. There are all sorts of other workarounds in there, for example this:

echo "options i915 enable_dc=1 enable_guc_loading=1 enable_guc_submission=1 edp_vswing=0 enable_guc=2 enable_fbc=1 enable_psr=1 disable_power_well=0" | sudo tee /etc/modprobe.d/i915.conf

from this comment... So that one is unsolved, as far as the upstream drivers are concerned, but maybe could be fixed through Sway.

Weird USB hangs / graphical glitches

I have had weird connectivity glitches better described in this post, but basically: my USB keyboard and mice (connected over a USB hub) drop keys, lag a lot or hang, and I get visual glitches.

The fix was to tighten the screws around the CPU on the motherboard (!), which is, thankfully, a rather simple repair.

Shipping details

I ordered the Framework in August 2022 and received it about a month later, which is sooner than expected because the August batch was late.

People (including me) expected this to have an impact on the September batch, but it seems Framework have been able to fix the delivery problems and keep up with the demand.

As of early 2023, their website announces that laptops ship "within 5 days". I have myself ordered a few expansion cards in November 2022, and they shipped on the same day, arriving 3-4 days later.

The supply pipeline

There are basically 6 steps in the Framework shipping pipeline, each (except the last) accompanied by an email notification:

  1. pre-order
  2. preparing batch
  3. preparing order
  4. payment complete
  5. shipping
  6. (received)

This comes from the crowdsourced spreadsheet, which should be updated when the status changes here.

I was part of the "third batch" of the 12th generation laptop, which was supposed to ship in September. It ended up arriving on my doorstep on September 27th, about 33 days after ordering.

It seems current orders are not processed in "batches" but in real time; see this blog post for details on shipping.

Shipping trivia

I don't know about the others, but my laptop shipped through no less than four different airplane flights. Here are the hops it took:

I can't quite figure out how to calculate exactly how much mileage that is, but it's huge. The ride through Alaska is surprising enough, but the bounce back through Winnipeg is especially weird. I guess the route happens that way because of FedEx shipping hubs.

There was a related oddity when I had my Purism laptop shipped: it left from the west coast and seemed to embark on an endless, two-week-long road trip across the continental US.

Other resources

13 March, 2023 02:38PM

how to audit for open services with iproute2

The computer world has a tendency to reinvent the wheel once in a while. I am not a fan of that process, but sometimes I just have to bite the bullet and adapt to change. This post explains how I adapted to one particular change: the transition from netstat to ss, the socket statistics tool from iproute2.

I used to do this to show which processes were listening on which port on a server:

netstat -anpe

It was a handy mnemonic as, in France, ANPE was the agency responsible for the unemployed (basically). That would list all sockets (-a), not resolve hostnames (-n, because it's slow), and show processes attached to the socket (-p) with extra info like the user (-e). This still works, but it sometimes fails to find the actual process hooked to the port. Plus, it lists a whole bunch of UNIX sockets and non-listening sockets, which are generally irrelevant for such an audit.

What I really wanted to use was really something like:

netstat -pleunt | sort

... which has the "pleut" mnemonic ("rains", but plural, which makes no sense and would be badly spelled anyway). That also only lists listening (-l) and network sockets, specifically UDP (-u) and TCP (-t).

But enough with the legacy; let's try the brave new world of ss, the socket statistics tool from iproute2, with its unfortunately terse name.

The equivalent ss command to the above is:

ss -pleuntO

It's similar to the above, except we need the -O flag, otherwise ss does that confusing thing where it splits the output over multiple lines. But I actually use:

ss -pluntO

... i.e. without the -e as the information it gives (cgroup, fd number, etc) is not much more useful than what's already provided with -p (service and UID).

All of the above also show sockets that are not actually a concern because they only listen on localhost. Those should be filtered out. So now we embark on that wild filtering ride.

This is going to list all open sockets and show the port number and service:

ss -pluntO --no-header | sed 's/^\([a-z]*\) *[A-Z]* *[0-9]* [0-9]* *[0-9]* */\1/' | sed 's/^[^:]*:\(:\]:\)\?//;s/\([0-9]*\) *[^ ]*/\1\t/;s/,fd=[0-9]*//' | sort -gu

For example on my desktop, it looks like:

anarcat@angela:~$ sudo ss -pluntO --no-header | sed 's/^\([a-z]*\) *[A-Z]* *[0-9]* [0-9]* *[0-9]* */\1/' | sed 's/^[^:]*:\(:\]:\)\?//;s/\([0-9]*\) *[^ ]*/\1\t/;s/,fd=[0-9]*//' | sort -gu
          [::]:* users:(("unbound",pid=1864))        
22  users:(("sshd",pid=1830))           
25  users:(("master",pid=3150))        
53  users:(("unbound",pid=1864))        
323 users:(("chronyd",pid=1876))        
500 users:(("charon",pid=2817))        
631 users:(("cups-browsed",pid=2744))   
2628    users:(("dictd",pid=2825))          
4001    users:(("emacs",pid=3578))          
4500    users:(("charon",pid=2817))        
5353    users:(("avahi-daemon",pid=1423))  
6600    users:(("systemd",pid=3461))       
8384    users:(("syncthing",pid=232169))   
9050    users:(("tor",pid=2857))            
21027   users:(("syncthing",pid=232169))   
22000   users:(("syncthing",pid=232169))   
33231   users:(("syncthing",pid=232169))   
34953   users:(("syncthing",pid=232169))   
35770   users:(("syncthing",pid=232169))   
44944   users:(("syncthing",pid=232169))   
47337   users:(("syncthing",pid=232169))   
48903   users:(("mosh-client",pid=234126))  
52774   users:(("syncthing",pid=232169))   
52938   users:(("avahi-daemon",pid=1423))  
54029   users:(("avahi-daemon",pid=1423))  
anarcat@angela:~$

But that doesn't filter out the localhost stuff, so there are lots of false positives (like emacs, above). And this is where it gets... not fun, as you need to match "localhost" but we don't resolve names, so you need to do some fancy pattern matching:

ss -pluntO --no-header | \
    sed 's/^\([a-z]*\) *[A-Z]* *[0-9]* [0-9]* *[0-9]* */\1/;s/^tcp//;s/^udp//' | \
    grep -v -e '^\[fe80::' -e '^127.0.0.1' -e '^\[::1\]' -e '^192\.' -e '^172\.' | \
    sed 's/^[^:]*:\(:\]:\)\?//;s/\([0-9]*\) *[^ ]*/\1\t/;s/,fd=[0-9]*//' |\
    sort -gu

This is kind of horrible, but it works; those are the actually open ports on my machine:

anarcat@angela:~$ sudo ss -pluntO --no-header |         sed 's/^\([a-
z]*\) *[A-Z]* *[0-9]* [0-9]* *[0-9]* */\1/;s/^tcp//;s/^udp//' |      
   grep -v -e '^\[fe80::' -e '^127.0.0.1' -e '^\[::1\]' -e '^192\.' -
e '^172\.' |         sed 's/^[^:]*:\(:\]:\)\?//;s/\([0-9]*\) *[^ ]*/\
1\t/;s/,fd=[0-9]*//' |        sort -gu
22  users:(("sshd",pid=1830))           
500 users:(("charon",pid=2817))        
631 users:(("cups-browsed",pid=2744))   
4500    users:(("charon",pid=2817))        
5353    users:(("avahi-daemon",pid=1423))  
6600    users:(("systemd",pid=3461))       
21027   users:(("syncthing",pid=232169))   
22000   users:(("syncthing",pid=232169))   
34953   users:(("syncthing",pid=232169))   
35770   users:(("syncthing",pid=232169))   
48903   users:(("mosh-client",pid=234126))  
52938   users:(("avahi-daemon",pid=1423))  
54029   users:(("avahi-daemon",pid=1423))

Surely there must be a better way. It turns out that lsof can do some of this, and it's relatively straightforward. This lists all listening TCP sockets:

lsof -iTCP -sTCP:LISTEN +c 15 | grep -v localhost | sort

A shorter version from Adam Shand is:

lsof -i @localhost

... which basically replaces the grep -v localhost line.

In theory, this would do the equivalent for UDP:

lsof -iUDP -sUDP:^Idle

... but in reality, it looks like lsof on Linux can't figure out the state of a UDP socket:

lsof: no UDP state names available: UDP:^Idle

... which, honestly, I'm baffled by. It's strange because ss can figure out the state of those sockets, heck it's how -l vs -a works after all. So we need something else to show listening UDP sockets.

The following actually looks pretty good after all:

ss -pluO

That will list localhost sockets of course, so we can explicitly ask ss to resolve those and filter them out with something like:

ss -plurO | grep -v localhost

Oh, and look here! ss supports pattern matching, so we can actually tell it to ignore localhost directly, which removes that horrible grep/sed filtering we used earlier:

ss -pluntO '! ( src = localhost )'

That actually gives a pretty readable output. One annoyance is we can't really modify the columns here, so we still need some god-awful sed hacking on top of that to get a cleaner output:

ss -nplutO '! ( src = localhost )'  | \
    sed 's/\(udp\|tcp\).*:\([0-9][0-9]*\)/\2\t\1\t/;s/\([0-9][0-9]*\t[udtcp]*\t\)[^u]*users:(("/\1/;s/".*//;s/.*Address:Port.*/Port\tNetid\tProcess/' | \
    sort -nu

That looks horrible and is basically impossible to memorize. But it sure looks nice:

anarcat@angela:~$ sudo ss -nplutO '! ( src = localhost )'  | sed 's/\(udp\|tcp\).*:\([0-9][0-9]*\)/\2\t\1\t/;s/\([0-9][0-9]*\t[udtcp]*\t\)[^u]*users:(("/\1/;s/".*//;s/.*Address:Port.*/Port\tNetid\tProcess/' | sort -nu

Port    Netid   Process
22  tcp sshd
500 udp charon
546 udp NetworkManager
631 udp cups-browsed
4500    udp charon
5353    udp avahi-daemon
6600    tcp systemd
21027   udp syncthing
22000   udp syncthing
34953   udp syncthing
35770   udp syncthing
48903   udp mosh-client
52938   udp avahi-daemon
54029   udp avahi-daemon
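
Short of better ideas, one way around the memorization problem is to hide the whole incantation behind a small shell function, say in ~/.bashrc (a sketch; the name open-ports is made up, the pipeline is the one above):

open-ports() {
    # externally reachable listening TCP/UDP ports, with the owning process name
    sudo ss -nplutO '! ( src = localhost )' | \
        sed 's/\(udp\|tcp\).*:\([0-9][0-9]*\)/\2\t\1\t/;s/\([0-9][0-9]*\t[udtcp]*\t\)[^u]*users:(("/\1/;s/".*//;s/.*Address:Port.*/Port\tNetid\tProcess/' | \
        sort -nu
}

A plain open-ports then prints the table above.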

Better ideas welcome.

13 March, 2023 01:46PM

Russell Coker

Firebuild

After reading Bálint’s blog post about Firebuild (a compile cache) [1] I decided to give it a go. It’s non-free; the project web site [2] says that it’s free for non-commercial use or commercial trials.

My first attempt at building a Debian package failed due to man-recode using a seccomp() sandbox; I filed Debian bug #1032619 [3] about this (thanks for the quick response, Bálint). The solution for me was to edit /etc/firebuild.conf and add man-recode to the dont_intercept list. The new version that’s just been uploaded to Debian fixes it by disabling seccomp() and will presumably allow slightly better performance.

Here are the results of building the refpolicy package: a regular build, the first build with Firebuild (30% slower), and a rebuild with Firebuild that reduced the time by almost 42%.

real    1m32.026s
user    4m20.200s
sys     2m33.324s

real    2m4.111s
user    6m31.769s
sys     3m53.681s

real    0m53.632s
user    1m41.334s
sys     3m36.227s
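
For reference, a comparison like this boils down to timing the same build command with and without the firebuild prefix, roughly as follows (a sketch; the exact build invocation and cache handling here are assumptions, not necessarily what was run above):

# regular build, for the baseline timing
time dpkg-buildpackage -us -uc

# first build under firebuild: everything is traced and cached (cold cache)
time firebuild dpkg-buildpackage -us -uc

# second build under firebuild: cache hits from ~/.cache/firebuild
time firebuild dpkg-buildpackage -us -uc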

Next I did a test of building a Linux 6.1.10 kernel with “make bzImage -j18“; here are the results from a normal build, the first build with firebuild, the second build, and finally the cache-on-tmpfs build described below. The real time is worse with firebuild for this on my machine. I think that the relative speeds of my CPU (a reasonably fast 18-core part) and storage (two of the slower NVMe devices in a BTRFS RAID-1) are the cause of the first build being relatively so much slower for “make bzImage” than for building refpolicy, as the kernel build process involves a lot more data. For the final build I moved ~/.cache/firebuild to a tmpfs (I have 128G of RAM and not much running on my machine at the time of the tests); even then, building with firebuild was slightly slower in real time but took significantly less CPU time (user+sys being about 20 minutes instead of 36). I also ran several tests with the kernel source tree on a tmpfs, but for unknown reasons those tests each took about 6 minutes. Does firebuild or the Linux kernel build process dislike tmpfs for some reason?

real    2m43.020s
user    31m30.551s
sys     5m15.279s

real    8m49.675s
user    64m11.258s
sys     19m39.016s

real    3m6.858s
user    7m47.556s
sys     9m22.513s

real    2m51.910s
user    10m53.870s
sys     9m21.307s

One thing I noticed from the kernel build tests is that the total CPU time taken by the firebuild process (as reported by ps) was more than 2/3 of the run time, and top usually reported it as taking around 75% of a CPU core. It seems to me that the firebuild process itself is a bottleneck on build speed. Building refpolicy without firebuild has an average of 4.5 cores in use while building the kernel has 13.5. Unless they make a multi-threaded version of firebuild it seems that it won’t give the performance one would hope for from a CPU with 18+ cores. I presume that if I had been running with hyper-threading enabled then firebuild would have been even worse for kernel builds, as it would sometimes land on the second thread of a core. It looks like firebuild would perform better on AMD CPUs, as they tend to have fewer CPU cores with greater average performance per core, so a single CPU core for firebuild will be less of a limit. I presume that the firebuild developers will make it perform better with large numbers of cores in future; the latest Intel laptop CPUs have 16+ cores and servers with 2*40 core CPUs are common.

The performance improvement for refpolicy is significant as a portion of build time, but insignificant in terms of real time. A full build of refpolicy doesn’t take enough time to go and get a Coke, so reducing it doesn’t offer a huge benefit; if Firebuild had been available in past years when refpolicy took 20 minutes to build (when DDR2 was the best RAM available) then it would have been a different story.

There is some potential to optimise the build of refpolicy for the non-firebuild case. Getting it to average more than 4.5 cores in use when there are 18 available should be possible; there are a number of shell for loops in the main Makefile and maybe some of them can be replaced by make constructs to allow running in parallel. If it used 7 cores on average then it would be faster in a regular build than it currently is with firebuild and a hot cache. Any advice from make experts would be appreciated.

13 March, 2023 12:07PM by etbe

Xmpp Tools

For a while I’ve had my monitoring systems alert me via XMPP (Jabber). To do that I used the sendxmpp command-line program, which worked well for its basic tasks. I recently noticed that my laptop and workstation, which I had upgraded to Debian/Testing, weren’t sending messages. I’m not sure when it started, as my main monitoring of such machines is to touch a key and see if there’s a response – if I’m not at the keyboard then a failure doesn’t bother me too much.

I’ve filed Debian bug #1032868 [1] about this. As sendxmpp is apparently not supported upstream and we are preparing for a release, it could be that the next version of Debian is released without this working (if it’s specific to talking to Prosody) or without sendxmpp (if it fails on all Jabber servers).

I next tested xmppc, which doesn’t send messages (it gives no error when I have apparently correct parameters, it just doesn’t send anything) and doesn’t display any text output for info-related commands, again without giving error messages or an error return code. I filed Debian bug #1032869 [2] about this.

Currently the only success I’ve found with Debian/Testing for this is with go-sendxmpp. To configure that you set up a file named ~/.config/go-sendxmpp/config with the following contents:

username: JABBER-ID
password: PASSWORD

Go-sendxmpp can take a username and password on the command-line, but that’s bad for security: in the absence of SE Linux or other advanced security systems, the password can be seen by any user on the same system who runs ps. To send a message, run “echo $MESSAGE | go-sendxmpp $ADDR”, which sends $MESSAGE to $ADDR. There is also the option “go-sendxmpp -l” to listen for incoming messages. I don’t have an immediate need to receive messages from the command-line but it’s handy to have the option.
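
For example, a quick smoke test from the shell (the address is a placeholder):

# send a one-off test message
echo "test alert from $(hostname)" | go-sendxmpp someone@example.org

# and, in another terminal, wait for incoming messages
go-sendxmpp -l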

I probably won’t be able to get a new version of etbemon into Debian for the Bookworm release. So to get go-sendxmpp to work with etbemon you need to edit /usr/lib/mon/alert.d/mailxmpp.alert and change the first (sendxmpp) line below to the second (go-sendxmpp) line:

open (XMPP, "| /usr/bin/sendxmpp -a /etc/ssl/certs -t @xmpprec -r $host") ||

open (XMPP, "| /usr/bin/go-sendxmpp @xmpprec") ||

13 March, 2023 07:13AM by etbe

March 11, 2023

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

pkgKitten 0.2.3 on CRAN: Minor Update

kitten

A new release 0.2.3 of pkgKitten arrived on CRAN earlier, and will be uploaded to Debian. pkgKitten makes it simple to create new R packages via a simple function invocation. A wrapper kitten.r exists in the littler package to make it even easier.

This release improves the created ‘Description:’ field and updates some of the continuous integration setup.

Changes in version 0.2.3 (2023-03-11)

  • Small improvement to generated Description: field and Title:

  • Maintenance for continuous integration setup

More details about the package are at the pkgKitten webpage, the pkgKitten docs site, and the pkgKitten GitHub repo.

Courtesy of my CRANberries site, there is also a diffstat report for this release.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

11 March, 2023 06:35PM

March 10, 2023

Thorsten Alteholz

My Debian Activities in February 2023

FTP master

This month I accepted 284 and rejected 49 packages. The overall number of packages that got accepted was 286.

I love this calm and peaceful time now within the Debian project, when everybody only cares for RC bugs and NEW does not grow.

Debian LTS

This was my hundred-and-fourth month of doing work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my overall workload was 8h.

During that time I uploaded:

  • [DLA 3310-1] xorg-server security update for one CVE

As I added all missing ELA uploads to the git repository I also had a look at package-operations and added stuff to make my life a bit easier.

Debian ELTS

This month was the fifty-fifth ELTS month.

  • [ELA-794-1] xorg-server security update of Jessie and Stretch for one CVE

I also made myself familiar with the mandatory git workflow and committed all my packages of this year's ELAs to the corresponding repository.

Debian Astro

This month I uploaded improved packages or new versions of:

Debian Printing

This month I uploaded new versions or improved packages of:

As ippsample does not build on i386, I filed an RM bug for this architecture. Maybe in a later upstream release it will be available again on all architectures.

I could also close lots of bugs that happened to be fixed upstream but had not been closed with the upload of the new version.

Parts of this work are generously funded by Freexian!

Other stuff

This month I uploaded improved packages of:

The upload of feynmf could only happen due to the help of several people (please see #1029439). Thanks a lot!

10 March, 2023 06:45AM by alteholz

March 09, 2023

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

Solving a 1998 problem with 2023 methods

A long time ago, in 1998, our family entered a contest with a puzzle; given a bunch of company names (they were the ones participating in a loyalty program known as Domino, which has since gone defunct), try to spell out as many Norwegian names as possible. (The name list was fixed, but you actually had to buy a book to find it.)

The prize was fairly attractive, so I went to work with a computer program instead of trying to figure it out by hand. I remember running it literally for weeks on my 400 MHz machine at the time; at some point, we even went on vacation for more than a month, and I came back disappointed to see that the search hadn't really gone that much further. Over time, I optimized it to use randomization in addition to backtracking, some bit fiddling tricks and so on. We thought we had a good shot.

Unfortunately, it turned out we had interpreted the rules differently from what was intended (or what others could get away with; I don't honestly think the organizers had thought it much through), and the prize was split between four other competitors who all had used the same name multiple times, giving them more names than the 25 we found.

A couple of weeks ago, this contest just struck my mind a bit out of nowhere, and I wanted to finally figure out how to attack this old problem. I dug up the code and name list (complete with RCS logs!), and set about solving it using 2023 technology while on a plane. It turns out that with modern SAT solvers (I used the constraint solver from OR-Tools), this is really really easy even on my laptop; before I'd landed, I had the answer:

Allowing only one of each name
==============================

Letters available: BOHUS CUBUS DRESSMANN EXPERT ICA MAXBO MEKKA RIMI STATOIL
SPARMAT TELENOR MOBIL TYBRINGGJEDDE
Best solution: BO, JO, ASK, BEN, DAG, GRY, INE, ISA, KIM, LIN, LIS, MAX, NUP,
PER, RUT, SAM, SOL, TEA, TED, TIM, TOM, URD, BETH, EBBE, MARC
Found 25 words, used 76/81 letters
./domino.py  0,89s user 0,68s system 251% cpu 0,622 total

Allowing each name multiple times
=================================
Letters available: BOHUS CUBUS DRESSMANN EXPERT ICA MAXBO MEKKA RIMI STATOIL
SPARMAT TELENOR MOBIL TYBRINGGJEDDE
Best solution: BO x 4, JO, BEN, DAG, DAN, GRY, KEN x 2, LIS x 3, MAX x 2, PER
x 2, RUT, SAM x 2, TEA, TED, TIM x 2, TUE, CHRIS
Found 27 words, used 78/81 letters
./domino.py  0,74s user 0,66s system 307% cpu 0,453 total

So our answer of 25 was optimal all along… under that rule set.

(For reference, I don't think there were any tiebreakers, but my original program tried to use more letters for some reason. You can do it with as few as 74 letters in the only-one case, or you can use all 81. Similarly, in the repeat case, you can use as few as 76, or all. The formulation is dead simple: just make a 0–1 integer variable per possible name and add constraints that the sum of the names with A can't be more than 7, the sum of the names with B can't be more than 5, etc.—and then remember that some names can have the same letter multiple times. The objective to maximize is the sum of all variables. To allow repeats, allow each integer variable to go up to 100 or whatever.)
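
In symbols, the integer program sketched above is just (a restatement, with c_{i,l} the number of times letter l occurs in name i, a_l the number of copies of l available, and x_i the variable for name i):

\[
\max \sum_i x_i
\quad \text{subject to} \quad
\sum_i c_{i,\ell}\, x_i \le a_\ell \ \text{for every letter } \ell,
\qquad x_i \in \{0, 1\}
\]

The repeat variant only relaxes the domain to x_i ∈ {0, 1, ..., 100} (or any sufficiently large bound).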

Closure, I guess?

Edit: Not quite closure; looking through some logs I had missed, it turned out I had only gotten to 23 before the end of the contest (so we were not optimal until afterwards), but more intriguingly, the winners all had 29! So one wonders what their solutions looked like, and how they could have been accepted. There is, of course, the chance of an error in my name list, but I tried with a newer, larger one (almost certainly not allowed under the 1998 rules), and it still didn't get to 29, so I'm pretty sure there's some foul play here. Unfortunately, I can't come back 25 years later and accuse someone of cheating :-)

09 March, 2023 10:32PM

hackergotchi for Charles Plessy

Charles Plessy

If you work at Dreamhost, can you help us?

Update: thanks to the very kind involvement of the widow of our webmaster, we could provide enough private information to Dreamhost, who finally agreed to reset the password and the MFA. We have recovered everything! Many thanks to everybody who helped us!

Due to tragic circumstances, one association that I am part of, Sciencescope, got locked out of its account at Dreamhost. Locked out, we cannot pay the annual bill. Dreamhost contacted us about the payment, but will not let us recover access to our account in order to pay, so they will soon close the account. Our website, mailing lists and archives will be erased. We provided plenty of evidence that we are not scammers and that we are the legitimate owners of the account, but reviewing it is above the pay grade of the customer support (I don't blame them) and I could not convince them to let somebody higher have a look at our case.

If you work at Dreamhost and want to keep us as customers instead of kicking us out like that, please ask the support service in charge of ticket 225948648 to send the recovery URL to the secondary email addresses (the ones you used to contact us about the bill!) in addition to the primary one (which nobody will read anymore). You can encrypt it for my Debian Developer key 73471499CC60ED9EEE805946C5BD6C8F2295D502 if you worry about it getting into the wrong hands. If you still have doubts, I am available for calls any time.

If you know somebody working at Dreamhost, can you pass them the message? This would be a big, big relief for our non-profit association.

09 March, 2023 01:35PM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppRedis 0.2.3 on CRAN: Maintenance

A new minor release 0.2.3 of our RcppRedis package arrived on CRAN today. RcppRedis is one of several packages connecting R to the fabulous Redis in-memory datastructure store (and much more). RcppRedis does not pretend to be feature complete, but it may do some things faster than the other interfaces, and also offers an optional coupling with MessagePack binary (de)serialization via RcppMsgPack. The package has carried production loads on a trading floor for several years.

This update is fairly mechanical. CRAN wants everybody off the C++11 train, which is fair game given that it is 2023 and most sane and lucky people are facing sane and modern compilers, so this makes sense. (And I raise a toast to all those poor souls facing RHEL 7 / CentOS 7 with a compiler from many moons ago: I hear it is a vibrant job market out there so maybe time to make a switch…). As with a few of my other packages, this release simply does away with the imposition of C++11 as the package will compile just fine under C++14 or C++17 (as governed by your version of R).

The detailed changes list follows.

Changes in version 0.2.3 (2023-03-08)

  • No longer set a C++ compilation standard as the default choices by R are sufficient for the package

  • Switch include to Rcpp/Rcpp which signals use of all Rcpp features including Modules

Courtesy of my CRANberries, there is also a diffstat report for this release. More information is on the RcppRedis page.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

09 March, 2023 12:57AM

March 08, 2023


Jelmer Vernooij

The Kali Janitor

The Debian Janitor is an automated system that commits fixes for (minor) issues in Debian packages that can be fixed by software. It gradually started proposing merges in early December. The first set of changes sent out ran lintian-brush on sid packages maintained in Git. This post is part of a series about the progress of the Janitor.

Kali Linux have been running their own instance of the Janitor for the last year, under the kali-bot user on GitLab. Their web site has some excellent documentation explaining how the bot works.

Both projects share some common components - the core janitor codebase, Silver-Platter and the various codemods (lintian-brush and deb-new-upstream). The site and some of the review logic are different for Kali.

The Kali bot has several campaigns:

The last campaign doesn’t exist in the Debian janitor, and pulls in new changes from packages that have been imported from other distributions.

For more information about the Janitor’s lintian-fixes efforts, see the landing page.

08 March, 2023 09:25PM by Jelmer Vernooij

Outreachy Dating

Recognizing relationships and false accusations in GSoC and Outreachy

For about five years now Debian fanatics and their rent-a-mob have been spreading rumors about a mentor.

Many of us trust Debian as an operating system for our computers and servers. But can we really trust the people who make Debian?

Here is Ariadne Conill spreading rumours about a mentor girlfriending one of the GSoC interns:

Ariadne Conill

The last woman this mentor was responsible for is Elena Gjevukaj. In the middle of her internship, she sent the mentor a picture of her wedding.

Oops. Debian lies. Ariadne lies. If the woman got married in the middle of the internship then it is both very rude and very absurd for Debian people to suggest she was the mentor's girlfriend.

Subject: 	Surprise
Date: 	Wed, 15 Aug 2018 01:14:54 +0200
From: 	Elena Gjevukaj <gjevukaje@gmail.com>
To: 	Daniel Pocock <daniel@pocock.pro>

We got married! 😂
Elena Gjevukaj

Yet Ariadne persists. She is even stalking the mentor on Twitter, despite the fact the mentor doesn't have any social media accounts.

Ariadne Conill

There is a lot more evidence too. In fact, the mentor was denied funding to attend DebConf18 in 2018. Here is the email:


Subject: Your bursary request for DebConf18: status updated
Date: Wed, 13 Jun 2018 18:35:52 -0000
From: <bursaries@debconf.org>
To: <daniel@pocock.pro>

Dear Daniel Pocock,

The bursaries team has updated the status of your bursary request for DebConf18.

Travel bursary
--------------

Your request for a travel bursary has been evaluated and ranked. However, we are
unable to grant it at this time: our travel budget is very limited, and we had
to defer a lot of strong applications. We will let you know as soon as possible,
hopefully before the end of June, if we can grant you the amount you have
requested, as our budget evolves and higher ranked applicants finalize their
plans.


Food bursary
------------

You have told us that you would be completely unable to come to DebConf if you
weren't granted a travel bursary. Your food bursary is therefore pending an
update on the travel bursaries front. If you're able to join us nonetheless,
let the bursaries team know so we can update your "level of need". Note that
this will be reflected in your travel bursary ranking.


Accommodation bursary
---------------------

You have told us that you would be completely unable to come to DebConf if you
weren't granted a travel bursary. Your accommodation bursary is therefore
pending an update on the travel bursaries front. If you're able to join us
nonetheless, let the bursaries team know so we can update your "level of need".
Note that this will be reflected in your travel bursary ranking.


You can review the full status of your bursary request in your profile[1] on the
DebConf website.

[1] https://debconf18.debconf.org/users/pocock/
-- 
The DebConf18 bursaries team

Mentors do a lot of unpaid work for Google and Outreachy. Why did Debian and Google block this mentor going to DebConf18? Were they hiding something from the mentor?

It looks like other developers, inner members of the Debian cabal, wanted to have some personal time with the female interns. Jiin-Mei Lin published a photo gallery.

The gallery includes one inconvenient photo. It is the developer Lior Kaplan with his arm around an Outreachy. In fact, the woman concerned was subsequently employed by GNOME Foundation. She joined GNOME at the same time as Molly de Blanc.

Congratulations to this woman. She survived DebConf and she outlasted both Molly de Blanc and Neil McGovern at GNOME. The Albanian woman in Lior's arms is the last man standing.

Lior Kaplan, DebConf18, GNOME, Outreachy

At DebConf19 in Brazil, there was an even bigger controversy. We saw pictures of the Debian Project Leader, Chris Lamb (top left), with a table full of Albanian women at the conference dinner:

Chris Lamb, Anisa Kuci, DebConf19, Brazil, Albanian women

If travel budgets are so tight, how did they find money to buy all these tickets from Albania to Brazil?

Eight weeks later and the woman sitting closest to Lamby won the Outreachy internship, $6,000 and more free trips:

Anisa Kuci, Chris Lamb, Outreachy, favoritism

How do other women feel when they waste two or three evenings doing the Outreachy application test and then they see photos suggesting the Debian leader had a romantic history with the winner?

FSFE is at it too

Here is that picture from OSCAL in Tirana, Albania where we see the FSFE president Matthias Kirschner (on the right) with a table full of young Albanian girls.

Matthias Kirschner, OSCAL, Tirana, Albania, FSFE, women

Now we found another picture, it is Kirschner's predecessor at the FSFE, Karsten Gerloff taking a patriarchal pose with his arm around a smiling young woman from Eastern Europe:

Karsten Gerloff, FSFE, Women

RMS signatories were not victims

Here one of the women tells us she was not a victim.

RMS, Richard Stallman, petition

Nicolas Dandrimont was at it too

Dandrimont is one of the Debian Account Managers. He tried to bring his girlfriend into Outreachy.

Subject: Recusing myself from Outreachy applicant selection decisions, internships funding
Date: Fri, 14 Oct 2016 12:37:46 +0200
From: Nicolas Dandrimont <olasd@debian.org>
To: <leader@debian.org>, <outreach@debian.org>
CC: <mapreri@debian.org>, <pocock@debian.org>

Hey all,

As of today, the person I'm involved with, Pauline Pommeret, is applying to an
Outreachy internship in Debian (on the GPG cleanroom environment project - I
don't see her mail on the list archive yet, so something must have gone wrong,
but it should arrive soon enough).

To avoid an obvious conflict of interest, I am recusing myself for any
decisions regarding applicant selections for this round.

I am of course still happy to serve as a liaison with the Outreachy program
administrators, and to forward our applicants to them for general funding when
selected, if the money allocated by Debian runs out.

This would especially be relevant, in my opinion, to RTC projects, as I'm not
sure at all that we should fund them from Debian money directly. Karen Sandler
also told me that one of the Outreachy sponsors was interested in funding
interns on Reproducible Builds. All in all, we should be able to have two or
three internship slots with Debian only disbursing one.

I'll stay on the outreach@d.o alias for now, but let me know if you need help
ranking applicants, and I'll ask DSA to remove me so you can discuss at ease.

Cheers,
-- 
Nicolas Dandrimont

Debian women: marriages and children

The story of Alexander Reichle-Schmehl and Meike Reichle is not uncommon in Debian

Subject: Ditto: Retiring
Date: Tue, 31 Dec 2013 12:40:37 +0100
From: Meike Reichle 
To: debian-private@lists.debian.org

Hi all

> I'm very sorry but as I'm unable to dedicate any time to anything
> related to Debian, I think it's best to retire. Sad truth is, that I'm
> quite busy with my family and my job.
> 
> Last year I hoped, that it would only be a temporary thing, but well I
> still don't have much time, and the time I have left I prefer to spend
> with my family.
> 
> However, I really hope, I'll be able to rejoin when time permits it.

As expected, the same goes for me :-/

With job(s), kid, house, life etc. computers been playing a continuously
smaller role in our family life. Most of these days I am glad if I manage
to check my email once a day. I'd really hoped to find a way to combine
family and Free Software, but I don't seem to be able to really pull it off.

So, as Alex, I am herewith declaring my resignation. *sniff*

Being a part of the Debian project was one of the greatest experiences in
my life and I owe a lot to you (including my lovely husband). I hope we'll
still be able to maintain a close connection to the project and find other
ways to support it until we can return to full DD'dom.

In the meantime there'll at least always be a free guest room waiting,
should anyone of you ever need a place to stay in the Southern
Germany/Switzerland area.

Best Regards,
Meike

PS This message shall never be disclosed.


-- 
Please respect the privacy of this mailing list. Some posts may be declassified
3 years after posting as per http://www.debian.org/vote/2005/vote_002

Archive: file://master.debian.org/~debian/archive/debian-private/

To UNSUBSCRIBE, use the web form at .

08 March, 2023 06:15PM

hackergotchi for Thomas Lange

Thomas Lange

FAI 6.0 released and new ISO images using Debian 12 bookworm/testing

After more than a year, a new major FAI release is ready to download.

The following new features are included:

  • add support for release specification in package_config via release=<name>
  • the partitioning tool now supports partition labels with GPT
  • support partition labels and partition uuids in fstab
  • support for Alpine Linux and Arch Linux package managers in install_packages
  • Ubuntu 22.04 and Rocky Linux 9 support added
  • add support for NVme devices in fai-kvm
  • add ssh key for root remote access using classes

We have included a lot of bug fixes for free of course.

Even though FAI 6.0 will only be included in Debian bookworm, you can install it on a bullseye FAI server and create an nfsroot using bookworm without any problems. The combination of a bullseye FAI server with FAI 6.0 and a bullseye nfsroot should also work.
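
For what it's worth, a minimal sketch of that setup on a bullseye server (assuming the stock /etc/fai/nfsroot.conf layout and a standard Debian mirror; adjust to taste):

# /etc/fai/nfsroot.conf (excerpt): have debootstrap build a bookworm nfsroot
FAI_DEBOOTSTRAP="bookworm http://deb.debian.org/debian"

# then (re)build the nfsroot on the bullseye FAI server
fai-make-nfsroot -v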

New ISO images are available at https://fai-project.org/fai-cd/

The FAI.me build service is not yet using FAI 6.0, but support will be added in the future.

FAI

08 March, 2023 12:01PM

Launch of new FAI project website

After more than 13 years, I've launched a new design for the FAI project web site

https://fai-project.org

It now uses Materialize CSS and will work much better on mobile devices. Thanks to Thorsten Bülo who did the first part of converting the web pages to the new design.

I hope you all enjoy the new layout.

FAI

08 March, 2023 12:00PM

March 07, 2023

hackergotchi for Norbert Preining

Norbert Preining

End of support and updates to the KDE/Plasma Debian builds

For many years I have provided up-to-date builds of KDE/Plasma for Debian stable, testing, and unstable. It has now been more than a year since I stopped using Debian myself. Time to send this off.

As already mentioned in some comments to various blog posts here, I will not invest more work into the current repositories. I invite anyone with an interest in continuing the work to contact me. I will also write up a short howto guide on what I generally did and how I worked with this number of packages.

I feel sad about leaving this behind, but also relieved of the amount of work, not to speak of the insults (“You are a Nazi” etc.) I often got from the Debian side. I also feel sorry for all of you who have relied on these packages for a long time, and have given valuable feedback and helpful comments.

It was a nice and long run.

So long, and thanks for all the fish.

07 March, 2023 08:19PM by Norbert Preining

hackergotchi for Jonathan Dowland

Jonathan Dowland

Welcome Oblivion 10th Anniversary

I haven’t done one of these for a while, and they’ll be less frequent than I once planned as I’m working from home less and less. I'm also trying to get back into exploring my digital music collection, and more generally engaging with digital music again.

 picture of a vinyl record

It’s the ten year anniversary of the first (and last) LP by How To Destroy Angels (HTDA), the side-project of Trent Reznor with his wife, his Nine Inch Nails (NIN) partner in crime Atticus Ross and visual artist (and NIN artistic director) Rob Sheridan.

This album was a real pleasure. For NIN fans, it wasn't clear what the future held after the start of HTDA. But this work really stood alone, similar in some ways to NIN but sufficiently different to be fresh and exciting. In stark contrast to NIN (at the time), it was interesting to see the members of HTDA presented on an equal footing, especially Rob Sheridan, who wasn't a musician. The intent was to try and put the visual work on the same level of esteem as the musical.

HTDA performed a few live shows, but none outside the US. They were apparently quite a spectacle.

As an artefact, this is a gorgeous LP. The gatefold cover and all four sides of the two record sleeves are covered in unique pieces of Sheridan's glitch art. When I originally bought this I had a rather generously-sized individual office at the University, so I framed and displayed many of these pieces on my office walls.

Sheridan has since written extensively on the processes and techniques he used for this style of art, and has produced many more works using the same techniques. You can see some on his website, patreon, fine art print shop or Threadless store.

Late last year I treated myself to a large print of some related work, analog(Oblivion)000b, which (once the framing is done) I'm going to hang in my home office.

The LP had two tracks that were not present in the CD or digital release versions of the album, although a CD was bundled in the LP which included the tracks. (The Knife did something similar with Shaking the Habitual, at around the same time).

I've had some multitrack stems from this album sitting in my "for archive.org" folder for a while, so I took the opportunity of the 10th anniversary to upload them, here: https://archive.org/details/htda_multitracks

07 March, 2023 11:08AM

hackergotchi for Robert McQueen

Robert McQueen

Flathub in 2023

It’s been quite a few months since the most recent updates about Flathub last year. We’ve been busy behind the scenes, so I’d like to share what we’ve been up to at Flathub and why—and what’s coming up from us this year. I want to focus on:

  • Where Flathub is today as a strong ecosystem with 2,000 apps
  • Our progress on evolving Flathub from a build service to an app store
  • The economic barrier to growing the ecosystem, and its consequences
  • What’s next to overcome our challenges with focused initiatives

Today

Flathub is going strong: we offer 2,000 apps from over 1,500 collaborators on GitHub. We’re averaging 700,000 app downloads a day, with 898 million HTTP requests totalling 88.3 TB served by our CDN each day (thank you Fastly!). Flatpak has, in my opinion, solved the largest technical issue which has held back the mainstream growth and acceptance of Linux on the desktop (or other personal computing devices) for the past 25 years: namely, the difficulty for app developers to publish their work in a way that makes it easy for people to discover, download (or sideload, for people in challenging connectivity environments), install and use. Flathub builds on that to help users discover the work of app developers and helps that work reach users in a timely manner.

Initial results of this disintermediation are promising: even with its modest size so far, Flathub has hundreds of apps that I have never, ever heard of before—and that’s even considering I’ve been working in the Linux desktop space for nearly 20 years and spent many of those staring at the contents of dselect (showing my age a little) or GNOME Software, attending conferences, and reading blog posts, news articles, and forums. I am also heartened to see that many of our OS distributor partners have recognised that this model is hugely complementary and additive to the indispensable work they are doing to bring the Linux desktop to end users, and that “having more apps available to your users” is a value-add allowing you to focus on your core offering and not a zero-sum game that should motivate infighting.

Ongoing Progress

Getting Flathub into its current state has been a long ongoing process. Here’s what we’ve been up to behind the scenes:

Development

Last year, we concluded our first engagement with Codethink to build features into the Flathub web app to move from a build service to an app store. That includes accounts for users and developers, payment processing via Stripe, and the ability for developers to manage upload tokens for the apps they control. In parallel, James Westman has been working on app verification and the corresponding features in flat-manager to ensure app metadata accurately reflects verification and pricing, and to provide authentication for paying users for app downloads when the developer enables it. Only verified developers will be able to make direct uploads or access payment settings for their apps.

Legal

So far, the GNOME Foundation has acted as an incubator and legal host for Flathub even though it’s not purely a GNOME product or initiative. Distributing software to end users along with processing and forwarding payments and donations also has a different legal profile in terms of risk exposure and nonprofit compliance than the current activities of the GNOME Foundation. Consequently, we plan to establish an independent legal entity to own and operate Flathub which reduces risk for the GNOME Foundation, better reflects the independent and cross-desktop interests of Flathub, and provides flexibility in the future should we need to change the structure.

We’re currently in the process of reviewing legal advice to ensure we have the right structure in place before moving forward.

Governance

As Flathub is something we want to set outside of the existing Linux desktop and distribution space—and ensure we represent and serve the widest community of Linux users and developers—we’ve been working on a governance model that ensures that there is transparency and trust in who is making decisions, and why. We have set up a working group with myself and Martín Abente Lahaye from GNOME, Aleix Pol Gonzalez, Neofytos Kolokotronis, and Timothée Ravier from KDE, and Jorge Castro flying the flag for the Flathub community. Thanks also to Neil McGovern and Nick Richards who were also more involved in the process earlier on.

We don’t want to get held up here creating something complex with memberships and elections, so at first we’re going to come up with a simple/balanced way to appoint people into a board that makes key decisions about Flathub and iterate from there.

Funding

We have received one grant for 2023 of $100K from Endless Network which will go towards the infrastructure, legal, and operations costs of running Flathub and setting up the structure described above. (Full disclosure: Endless Network is the umbrella organisation which also funds my employer, Endless OS Foundation.) I am hoping to grow the available funding to $250K for this year in order to cover the next round of development on the software, prepare for higher operations costs (e.g., accounting gets more complex), and bring in a second full-time staff member in addition to Bartłomiej Piotrowski to handle enquiries, reviews, documentation, and partner outreach.

We’re currently in discussions with NLnet about funding further software development, but have been unfortunately turned down for a grant from the Plaintext Group for this year; this Schmidt Futures project around OSS sustainability is not currently issuing grants in 2023. However, we continue to work on other funding opportunities.

Remaining Barriers

My personal hypothesis is that our largest remaining barrier to Linux desktop scale and impact is economic. On competing platforms—mobile or desktop—a developer can offer their work for sale via an app store or direct download with payment or subscription within hours of making a release. While we have taken the “time to first download” time down from months to days with Flathub, as a community we continue to have a challenging relationship with money. Some creators are lucky enough to have a full-time job within the FLOSS space, while a few “superstar” developers are able to nurture some level of financial support by investing time in building a following through streaming, Patreon, Kickstarter, or similar. However, a large proportion of us have to make do with the main payback from our labours being a stream of bug reports on GitHub interspersed with occasional conciliatory beers at FOSDEM (other beverages and events are available).

The first and most obvious consequence is that if there is no financial payback for participating in developing apps for the free and open source desktop, we will lose many people in the process—despite the amazing achievements of those who have brought us to where we are today. As a result, we’ll have far fewer developers and apps. If we can’t offer access to a growing base of users or the opportunity to offer something of monetary value to them, the reward in terms of adoption and possible payment will be very small. Developers would be forgiven for taking their time and attention elsewhere. With fewer apps, our platform has less to entice and retain prospective users.

The second consequence is that this also represents a significant hurdle for diverse and inclusive participation. We essentially require that somebody is in a position of privilege and comfort that they have internet, power, time, and income—not to mention childcare, etc.—to spare so that they can take part. If that’s not the case for somebody, we are leaving them shut out from our community before they even have a chance to start. My belief is that free and open source software represents a better way for people to access computing, and there are billions of people in the world we should hope to reach with our work. But if the mechanism for participation ensures their voices and needs are never represented in our community of creators, we are significantly less likely to understand and meet those needs.

While these are my thoughts, you’ll notice a strong theme to this year will be leading a consultation process to ensure that we are including, understanding and reflecting the needs of our different communities—app creators, OS distributors and Linux users—as I don’t believe that our initiative will be successful without ensuring mutual benefit and shared success. Ultimately, no matter how beautiful, performant, or featureful the latest versions of the Plasma or GNOME desktops are, or how slick the newly rewritten installer is from your favourite distribution, all of the projects making up the Linux desktop ecosystem are subdividing between ourselves an absolutely tiny market share of the global market of personal computers. To make a bigger mark on the world, as a community, we need to get out more.

What’s Next?

After identifying our major barriers to overcome, we’ve planned a number of focused initiatives and restructuring this year:

Phased Deployment

We’re working on deploying the work we have been doing over the past year, starting first with launching the new Flathub web experience as well as the rebrand that Jakub has been talking about on his blog. This also will finally launch the verification features so we can distinguish those apps which are uploaded by their developers.

In parallel, we’ll also be able to turn on the Flatpak repo subsets that enable users to select only verified and/or FLOSS apps in the Flatpak CLI or their desktop’s app center UI.

Consultation

We would like to make sure that the voices of app creators, OS distributors, and Linux users are reflected in our plans for 2023 and beyond. We will be launching this in the form of Flathub Focus Groups at the Linux App Summit in Brno in May 2023, followed up with surveys and other opportunities for online participation. We see our role as interconnecting communities and want to be sure that we remain transparent and accountable to those we are seeking to empower with our work.

Whilst we are being bold and ambitious with what we are trying to create for the Linux desktop community, we also want to make sure we provide the right forums to listen to the FLOSS community and prioritise our work accordingly.

Advisory Board

As we build the Flathub organisation up in 2023, we’re also planning to expand its governance by creating an Advisory Board. We will establish an ongoing forum with different stakeholders around Flathub: OS vendors, hardware integrators, app developers and user representatives to help us create the Flathub that supports and promotes our mutually shared interests in a strong and healthy Linux desktop community.

Direct Uploads

Direct app uploads are close to ready, and they enable exciting stuff like allowing Electron apps to be built outside of flatpak-builder, or driving automatic Flathub uploads from GitHub actions or GitLab CI flows; however, we need to think a little about how we encourage these to be used. Even with its frustrations, our current Buildbot ensures that the build logs and source versions of each app on Flathub are captured, and that the apps are built on all supported architectures. (Is 2023 when we add RISC-V? Reach out if you’d like to help!). If we hand upload tokens out to any developer, even if the majority of apps are open source, we will go from this relatively structured situation to something a lot more unstructured—and we fear many apps will be available on only 64-bit Intel/AMD machines.

My sketch here is that we need to establish some best practices around how to integrate Flathub uploads into popular CI systems, encouraging best practices so that we promote the properties of transparency and reproducibility that we don’t want to lose. If anyone is a CI wizard and would like to work with us as a thought partner about how we can achieve this—make it more flexible where and how build tasks can be hosted, but not lose these cross-platform and inspectability properties—we’d love to hear from you.

Donations and Payments

Once the work around legal and governance reaches a decent point, we will be in the position to move ahead with our Stripe setup and switch on the third big new feature in the Flathub web app. At present, we have already implemented support for one-off payments either as donations or a required purchase. We would like to go further than that, in line with what we were describing earlier about helping developers sustainably work on apps for our ecosystem: we would also like to enable developers to offer subscriptions. This will allow us to create a relationship between users and creators that funds ongoing work rather than what we already have.

Security

For Flathub to succeed, we need to make sure that as we grow, we continue to be a platform that can give users confidence in the quality and security of the apps we offer. To that end, we are planning to set up infrastructure to help ensure developers are shipping the best products they possibly can to users. For example, we’d like to set up automated linting and security scanning on the Flathub back-end to help developers avoid bad practices, unnecessary sandbox permissions, outdated dependencies, etc. and to keep users informed and as secure as possible.

Sponsorship

Fundraising is a forever task—as is running such a big and growing service. We hope that one day, we can cover our costs through some modest fees built into our payments—but until we reach that point, we’re going to be seeking a combination of grant funding and sponsorship to keep our roadmap moving. Our hope is very much that we can encourage different organisations that buy into our vision and will benefit from Flathub to help us support it and ensure we can deliver on our goals. If you have any suggestions of who might like to support Flathub, we would be very appreciative if you could reach out and get us in touch.

Finally, Thank You!

Thanks to you all for reading this far and supporting the work of Flathub, and also to our major sponsors and donors without whom Flathub could not exist: GNOME Foundation, KDE e.V., Mythic Beasts, Endless Network, Fastly, and Equinix Metal via the CNCF Community Cluster. Thanks also to the tireless work of the Freedesktop SDK community to give us the runtime platform most Flatpaks depend on, particularly Seppo Yli-Olli, Codethink and others.

I wanted to also give my personal thanks to a handful of dedicated people who keep Flathub working as a service and as a community: Bartłomiej Piotrowski is keeping the infrastructure working essentially single-handedly (in his spare time from keeping everything running at GNOME); Kolja Lampe and Bart built the new web app and backend API for Flathub which all of the new functionality has been built on, and Filippe LeMarchand maintains the checker bot which helps keeps all of the Flatpaks up to date.

And finally, all of the submissions to Flathub are reviewed to ensure quality, consistency and security by a small dedicated team of reviewers, with a huge amount of work from Hubert Figuière and Bart to keep the submissions flowing. Thanks to everyone­—named or unnamed—for building this vision of the future of the Linux desktop together with us.

(originally posted to Flathub Discourse, head there if you have any questions or comments)

07 March, 2023 11:00AM by ramcq

hackergotchi for Jonathan Dowland

Jonathan Dowland

date warping in HLedger

My credit card and bank account rarely agree on the date when I pay it off [1]. Since I added balance assertions for bank account transactions, I need the transaction in my ledger to match what the bank thinks, otherwise the balance assertions would start to fail.

The skew is not normally more than a couple of days, and could be corrected by changing the date for just one of the two postings. But the skew is not very important, and altering the posting date could be used for something more useful.

date warping credit card repayments

My credit card bills land halfway through the month, so February's bill covers transactions between January 15th and February 14th. I pay off the bill in full each month using Direct Debit. The credit card company consider the bill paid immediately, but they don't actually draw it until the end of the month (Jan 31 in the running example). This means the payment transaction for a given month lands halfway through the period covered by the next month's bill.

The credit card bill itself shows the payment date at the end of the month but presents the transaction "warped" right to the start of the billing period. This is actually useful, because it means the running balance is zero before the first purchase on the bill.

The credit card data in CSV form has the repayment transaction at the date it occurred, not warped to the start of the period. When I import this into HLedger, the credit card account balance for each new transaction does not match the statement right up to the point of the repayment, half way through. This makes spot-checking that the imported data matches the statement a bit more awkward.

So, I have started "warping" the payment transaction to the start of the billing period, just like the credit card statement does:

2022-12-31  pay credit card
  asset:bank    £ -500
  liabilities:credit card  ;  date:2022-12-15

I can then spot-check the transactions in HLedger after import, in particular the final one, and the final account balance, and then write a manual balance assertion when I'm finished.

I'd quite like to automate the adjusted posting date, too, but I haven't figured out how to do that just yet.
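
One rough sketch of what that automation could look like (an illustration only, not an existing hledger feature; it assumes the payment transaction is always described as "pay credit card" and that the billing period starts on the 15th of the month the payment is drawn in) is a small filter run over the freshly imported entries:

#!/usr/bin/python3
# Sketch: append a "; date:YYYY-MM-15" comment to the credit card posting
# of each "pay credit card" transaction, warping it to the start of the
# billing period.
import re
import sys

period_start = None
for line in sys.stdin:
    m = re.match(r"^(\d{4})-(\d{2})-\d{2}\s+pay credit card", line)
    if m:
        period_start = f"{m.group(1)}-{m.group(2)}-15"
    elif period_start and line.lstrip().startswith("liabilities:credit card"):
        line = line.rstrip("\n") + f"  ; date:{period_start}\n"
        period_start = None
    sys.stdout.write(line)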

date warping for refunds

Another thing I've found "date warping" useful is for marrying up refunds with their related purchase. Imagine I spent £200 on some shoes in late January, but returned most of them in early February:

2023-01-25  buy some shoes. hedging on the size
  liabilities:credit card    £ -200
  expenses:shoes

2023-02-05  return the ones that don't fit
  liabilities:credit card    £  150
  expenses:shoes

If I look at how much I've spent on shoes per month, it looks odd: £200 in January (although ultimately I only spent £50), and £-150 in February.

$ hledger bal -Mt expenses:shoes
Balance changes in 2023-01-01..2023-02-28:

                ||   Jan     Feb 
================++===============
 expenses:shoes || £ 200  £ -150 
----------------++---------------
                || £ 200  £ -150 

By "warping" the refund's posting to the expense account to the purchase date, how much I ultimately spent on shoes is more properly reflected:

2023-02-05  return the ones that don't fit
  liabilities:credit card    £  150
  expenses:shoes  ; date:2023-01-25

resulting in

$ hledger bal -Mt expenses:shoes
Balance changes in 2023-01-01..2023-02-28:

                ||  Jan  Feb 
================++===========
 expenses:shoes || £ 50    0 
----------------++-----------
                || £ 50    0 

I suppose whether you'd want to do this is a matter of taste.


  1. Amazon rarely agrees with my bank on when we've paid for things either. For that and other reasons, Amazon is a beast to tackle in another blog post.

07 March, 2023 10:28AM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppFastAD 0.0.1 and 0.0.2: New Package on CRAN!

James Yang and I are thrilled to announce the new CRAN package RcppFastAD which arrived at CRAN last Monday as version 0.0.1, and is as of today at version 0.0.2 with a first set of small updates.

It is based on the FastAD header-only C++ library by James, which provides a C++ implementation of both forward and reverse mode automatic differentiation in an easy-to-use header library (which we wrapped here) that is both lightweight and performant. With a little bit of Rcpp glue, it is also easy to use from R in simple C++ applications. Included in the package are three examples: a simple quadratic expression evaluating x' S x for given x and S, returning the expression value along with its gradient; a linear regression example generalising this and using the gradient to arrive at the least-squares solution; as well as the well-known Black-Scholes options pricer and its important partial derivatives delta, rho, theta and vega, derived via automatic differentiation.

The NEWS file for these two initial releases follows.

Changes in version 0.0.2 (2023-03-05)

  • One C++ operation is protected from operating on a nullptr

  • Additional tests have been added, tests now cover all three demo / example functions

  • Return values and code for the examples linear_regression and quadratic_expression have been adjusted

Changes in version 0.0.1 (2023-02-24)

  • Initial release version and CRAN upload

Courtesy of my CRANberries, there is also a diffstat report for the most recent release. More information is available at the repository or the package page.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

07 March, 2023 01:34AM

March 06, 2023

Vincent Bernat

DDoS detection and remediation with Akvorado and Flowspec

Akvorado collects sFlow and IPFIX flows, stores them in a ClickHouse database, and presents them in a web console. Although it lacks built-in DDoS detection, it’s possible to create one by crafting custom ClickHouse queries.

DDoS detection​

Let’s assume we want to detect DDoS targeting our customers. As an example, we consider a DDoS attack as a collection of flows over one minute targeting a single customer IP address, from a single source port and matching one of these conditions:

  • an average bandwidth of 1 Gbps,
  • an average bandwidth of 200 Mbps when the protocol is UDP,
  • more than 20 source IP addresses and an average bandwidth of 100 Mbps, or
  • more than 10 source countries and an average bandwidth of 100 Mbps.

Here is the SQL query to detect such attacks over the last 5 minutes:

SELECT *
FROM (
  SELECT
    toStartOfMinute(TimeReceived) AS TimeReceived,
    DstAddr,
    SrcPort,
    dictGetOrDefault('protocols', 'name', Proto, '???') AS Proto,
    SUM(((((Bytes * SamplingRate) * 8) / 1000) / 1000) / 1000) / 60 AS Gbps,
    uniq(SrcAddr) AS sources,
    uniq(SrcCountry) AS countries
  FROM flows
  WHERE TimeReceived > now() - INTERVAL 5 MINUTE
    AND DstNetRole = 'customers'
  GROUP BY
    TimeReceived,
    DstAddr,
    SrcPort,
    Proto
)
WHERE (Gbps > 1)
   OR ((Proto = 'UDP') AND (Gbps > 0.2)) 
   OR ((sources > 20) AND (Gbps > 0.1)) 
   OR ((countries > 10) AND (Gbps > 0.1))
ORDER BY
  TimeReceived DESC,
  Gbps DESC

Here is an example output1 where two of our users are under attack. One from what looks like an NTP amplification attack, the other from a DNS amplification attack:

TimeReceived DstAddr SrcPort Proto Gbps sources countries
2023-02-26 17:44:00 ::ffff:203.0.113.206 123 UDP 0.102 109 13
2023-02-26 17:43:00 ::ffff:203.0.113.206 123 UDP 0.130 133 17
2023-02-26 17:43:00 ::ffff:203.0.113.68 53 UDP 0.129 364 63
2023-02-26 17:43:00 ::ffff:203.0.113.206 123 UDP 0.113 129 21
2023-02-26 17:42:00 ::ffff:203.0.113.206 123 UDP 0.139 50 14
2023-02-26 17:42:00 ::ffff:203.0.113.206 123 UDP 0.105 42 14
2023-02-26 17:40:00 ::ffff:203.0.113.68 53 UDP 0.121 340 65

DDoS remediation​

Once detected, there are at least two ways to stop the attack at the network level:

  • blackhole the traffic to the targeted user (RTBH), or
  • selectively drop packets matching the attack patterns (Flowspec).

Traffic blackhole​

The easiest method is to sacrifice the attacked user. While this helps the attacker, this protects your network. It is a method supported by all routers. You can also offload this protection to many transit providers. This is useful if the attack volume exceeds your internet capacity.

This works by advertising with BGP a route to the attacked user with a specific community. The border router modifies the next hop address of these routes to a specific IP address configured to forward the traffic to a null interface. RFC 7999 defines 65535:666 for this purpose. This is known as a “remote-triggered blackhole” (RTBH) and is explained in more detail in RFC 3882.

It is also possible to blackhole the source of the attacks by leveraging unicast Reverse Path Forwarding (uRPF) from RFC 3704, as explained in RFC 5635. However, uRPF can be a serious tax on your router resources. See “NCS5500 uRPF: Configuration and Impact on Scale” for an example of the kind of restrictions you have to expect when enabling uRPF.

On the advertising side, we can use BIRD. Here is a complete configuration file to allow any router to collect them:

log stderr all;
router id 192.0.2.1;

protocol device {
  scan time 10;
}

protocol bgp exporter {
  ipv4 {
    import none;
    export where proto = "blackhole4";
  };
  ipv6 {
    import none;
    export where proto = "blackhole6";
  };
  local as 64666;
  neighbor range 192.0.2.0/24 external;
  multihop;
  dynamic name "exporter";
  dynamic name digits 2;
  graceful restart yes;
  graceful restart time 0;
  long lived graceful restart yes;
  long lived stale time 3600;  # keep routes for 1 hour!
}

protocol static blackhole4 {
  ipv4;
  route 203.0.113.206/32 blackhole {
    bgp_community.add((65535, 666));
  };
  route 203.0.113.68/32 blackhole {
    bgp_community.add((65535, 666));
  };
}
protocol static blackhole6 {
  ipv6;
}

We use BGP long-lived graceful restart to ensure routes are kept for one hour, even if the BGP connection goes down, notably during maintenance.

On the receiver side, if you have a Cisco router running IOS XR, you can use the following configuration to blackhole traffic received on the BGP session. As the BGP session is dedicated to this usage, the community is not used, but you can also forward these routes to your transit providers.

router static
 vrf public
  address-family ipv4 unicast
   192.0.2.1/32 Null0 description "BGP blackhole"
  !
  address-family ipv6 unicast
   2001:db8::1/128 Null0 description "BGP blackhole"
  !
 !
!
route-policy blackhole_ipv4_in_public
  if destination in (0.0.0.0/0 le 31) then
    drop
  endif
  set next-hop 192.0.2.1
  done
end-policy
!
route-policy blackhole_ipv6_in_public
  if destination in (::/0 le 127) then
    drop
  endif
  set next-hop 2001:db8::1
  done
end-policy
!
router bgp 12322
 neighbor-group BLACKHOLE_IPV4_PUBLIC
  remote-as 64666
  ebgp-multihop 255
  update-source Loopback10
  address-family ipv4 unicast
   maximum-prefix 100 90
   route-policy blackhole_ipv4_in_public in
   route-policy drop out
   long-lived-graceful-restart stale-time send 86400 accept 86400
  !
  address-family ipv6 unicast
   maximum-prefix 100 90
   route-policy blackhole_ipv6_in_public in
   route-policy drop out
   long-lived-graceful-restart stale-time send 86400 accept 86400
  !
 !
 vrf public
  neighbor 192.0.2.1
   use neighbor-group BLACKHOLE_IPV4_PUBLIC
   description akvorado-1

When the traffic is blackholed, it is still reported by IPFIX and sFlow. In Akvorado, use ForwardingStatus >= 128 as a filter.

While this method is compatible with all routers, it makes the attack successful as the target is completely unreachable. If your router supports it, Flowspec can selectively filter flows to stop the attack without impacting the customer.

Flowspec​

Flowspec is defined in RFC 8955 and enables the transmission of flow specifications in BGP sessions. A flow specification is a set of matching criteria to apply to IP traffic. These criteria include the source and destination prefix, the IP protocol, the source and destination port, and the packet length. Each flow specification is associated with an action, encoded as an extended community: traffic shaping, traffic marking, or redirection.

To announce flow specifications with BIRD, we extend our configuration. The extended community used here shapes the matching traffic to 0 bytes per second, effectively dropping it.

flow4 table flowtab4;
flow6 table flowtab6;

protocol bgp exporter {
  flow4 {
    import none;
    export where proto = "flowspec4";
  };
  flow6 {
    import none;
    export where proto = "flowspec6";
  };
  # […]
}

protocol static flowspec4 {
  flow4;
  route flow4 {
    dst 203.0.113.68/32;
    sport = 53;
    length >= 1476 && <= 1500;
    proto = 17;
  }{
    bgp_ext_community.add((generic, 0x80060000, 0x00000000));
  };
  route flow4 {
    dst 203.0.113.206/32;
    sport = 123;
    length = 468;
    proto = 17;
  }{
    bgp_ext_community.add((generic, 0x80060000, 0x00000000));
  };
}
protocol static flowspec6 {
  flow6;
}

If you have a Cisco router running IOS XR, the configuration may look like this:

vrf public
 address-family ipv4 flowspec
 address-family ipv6 flowspec
!
router bgp 12322
 address-family vpnv4 flowspec
 address-family vpnv6 flowspec
 neighbor-group FLOWSPEC_IPV4_PUBLIC
  remote-as 64666
  ebgp-multihop 255
  update-source Loopback10
  address-family ipv4 flowspec
   long-lived-graceful-restart stale-time send 86400 accept 86400
   route-policy accept in
   route-policy drop out
   maximum-prefix 100 90
   validation disable
  !
  address-family ipv6 flowspec
   long-lived-graceful-restart stale-time send 86400 accept 86400
   route-policy accept in
   route-policy drop out
   maximum-prefix 100 90
   validation disable
  !
 !
 vrf public
  address-family ipv4 flowspec
  address-family ipv6 flowspec
  neighbor 192.0.2.1
   use neighbor-group FLOWSPEC_IPV4_PUBLIC
   description akvorado-1

Then, you need to enable Flowspec on all interfaces with:

flowspec
 vrf public
  address-family ipv4
   local-install interface-all
  !
  address-family ipv6
   local-install interface-all
  !
 !
!

As with the RTBH setup, you can filter dropped flows with ForwardingStatus >= 128.

DDoS detection (continued)​

In the example using Flowspec, the flows were also filtered on the length of the packet:

route flow4 {
  dst 203.0.113.68/32;
  sport = 53;
  length >= 1476 && <= 1500;
  proto = 17;
}{
  bgp_ext_community.add((generic, 0x80060000, 0x00000000));
};

This is an important addition: legitimate DNS requests are smaller than this and therefore not filtered.2 With ClickHouse, you can get the 10th and 90th percentiles of the packet sizes with quantiles(0.1, 0.9)(Bytes/Packets).
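
As a sketch (using the same clickhouse-driver client as the script further down; the host name is a placeholder and the columns follow the flows schema used throughout this post), the percentiles for one target can be turned directly into a Flowspec length clause:

from clickhouse_driver import Client

client = Client(host="clickhouse.akvorado.net")
# quantiles() returns one row whose single column is the [p10, p90] array.
(sizes,) = client.execute("""
  SELECT quantiles(0.1, 0.9)(Bytes / Packets)
  FROM flows
  WHERE TimeReceived > now() - INTERVAL 5 MINUTE
    AND DstAddr = toIPv6('::ffff:203.0.113.68')
    AND SrcPort = 53
""")[0]
print(f"length >= {int(sizes[0])} && <= {int(sizes[1])};")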

The last issue we need to tackle is how to optimize the request: it may need several seconds to collect the data and it is likely to consume substantial resources from your ClickHouse database. One solution is to create a materialized view to pre-aggregate results:

CREATE TABLE ddos_logs (
  TimeReceived DateTime,
  DstAddr IPv6,
  Proto UInt32,
  SrcPort UInt16,
  Gbps SimpleAggregateFunction(sum, Float64),
  Mpps SimpleAggregateFunction(sum, Float64),
  sources AggregateFunction(uniqCombined(12), IPv6),
  countries AggregateFunction(uniqCombined(12), FixedString(2)),
  size AggregateFunction(quantiles(0.1, 0.9), UInt64)
) ENGINE = SummingMergeTree
PARTITION BY toStartOfHour(TimeReceived)
ORDER BY (TimeReceived, DstAddr, Proto, SrcPort)
TTL toStartOfHour(TimeReceived) + INTERVAL 6 HOUR DELETE ;

CREATE MATERIALIZED VIEW ddos_logs_view TO ddos_logs AS
  SELECT
    toStartOfMinute(TimeReceived) AS TimeReceived,
    DstAddr,
    Proto,
    SrcPort,
    sum(((((Bytes * SamplingRate) * 8) / 1000) / 1000) / 1000) / 60 AS Gbps,
    sum(((Packets * SamplingRate) / 1000) / 1000) / 60 AS Mpps,
    uniqCombinedState(12)(SrcAddr) AS sources,
    uniqCombinedState(12)(SrcCountry) AS countries,
    quantilesState(0.1, 0.9)(toUInt64(Bytes/Packets)) AS size
  FROM flows
  WHERE DstNetRole = 'customers'
  GROUP BY
    TimeReceived,
    DstAddr,
    Proto,
    SrcPort

The ddos_logs table is using the SummingMergeTree engine. When the table receives new data, ClickHouse replaces all the rows with the same sorting key, as defined by the ORDER BY directive, with one row which contains summarized values using either the sum() function or the explicitly specified aggregate function (uniqCombined and quantiles in our example).3

Finally, we can modify our initial query with the following one:

SELECT *
FROM (
  SELECT
    TimeReceived,
    DstAddr,
    dictGetOrDefault('protocols', 'name', Proto, '???') AS Proto,
    SrcPort,
    sum(Gbps) AS Gbps,
    sum(Mpps) AS Mpps,
    uniqCombinedMerge(12)(sources) AS sources,
    uniqCombinedMerge(12)(countries) AS countries,
    quantilesMerge(0.1, 0.9)(size) AS size
  FROM ddos_logs
  WHERE TimeReceived > now() - INTERVAL 60 MINUTE
  GROUP BY
    TimeReceived,
    DstAddr,
    Proto,
    SrcPort
)
WHERE (Gbps > 1)
   OR ((Proto = 'UDP') AND (Gbps > 0.2)) 
   OR ((sources > 20) AND (Gbps > 0.1)) 
   OR ((countries > 10) AND (Gbps > 0.1))
ORDER BY
  TimeReceived DESC,
  Gbps DESC

Gluing everything together​

To sum up, building an anti-DDoS system requires following these steps:

  1. define a set of criteria to detect a DDoS attack,
  2. translate these criteria into SQL requests,
  3. pre-aggregate flows into SummingMergeTree tables,
  4. query and transform the results to a BIRD configuration file, and
  5. configure your routers to pull the routes from BIRD.

A Python script like the following one can handle the fourth step. For each attacked target, it generates both a Flowspec rule and a blackhole route.

import filecmp
import logging
import os
import socket
import subprocess
import types

from clickhouse_driver import Client as CHClient

logger = logging.getLogger("ddos-mitigation")

# Put your SQL query here!
SQL_QUERY = "…"

# How many anti-DDoS rules we want at the same time?
MAX_DDOS_RULES = 20

def empty_ruleset():
    ruleset = types.SimpleNamespace()
    ruleset.flowspec = types.SimpleNamespace()
    ruleset.blackhole = types.SimpleNamespace()
    ruleset.flowspec.v4 = []
    ruleset.flowspec.v6 = []
    ruleset.blackhole.v4 = []
    ruleset.blackhole.v6 = []
    return ruleset

current_ruleset = empty_ruleset()

client = CHClient(host="clickhouse.akvorado.net")
while True:
    results = client.execute(SQL_QUERY)
    seen = {}
    new_ruleset = empty_ruleset()
    for (t, addr, proto, port, gbps, mpps, sources, countries, size) in results:
        if (addr, proto, port) in seen:
            continue
        seen[(addr, proto, port)] = True

        # Flowspec
        if addr.ipv4_mapped:
            address = addr.ipv4_mapped
            rules = new_ruleset.flowspec.v4
            table = "flow4"
            mask = 32
            nh = "proto"
        else:
            address = addr
            rules = new_ruleset.flowspec.v6
            table = "flow6"
            mask = 128
            nh = "next header"
        if size[0] == size[1]:
            length = f"length = {int(size[0])}"
        else:
            length = f"length >= {int(size[0])} && <= {int(size[1])}"
        header = f"""
# Time: {t}
# Target: {address}, protocol: {proto}, port: {port}
# Gbps/Mpps: {gbps:.3}/{mpps:.3}, packet size: {int(size[0])}<=X<={int(size[1])}
# Sources: {sources}, countries: {countries}
"""
        rules.append(
                f"""{header}
route {table} {{
  dst {address}/{mask};
  sport = {port};
  {length};
  {nh} = {socket.getprotobyname(proto)};
}}{{
  bgp_ext_community.add((generic, 0x80060000, 0x00000000));
}};
"""
        )

        # Blackhole
        if addr.ipv4_mapped:
            rules = new_ruleset.blackhole.v4
        else:
            rules = new_ruleset.blackhole.v6
        rules.append(
            f"""{header}
route {address}/{mask} blackhole {{
  bgp_community.add((65535, 666));
}};
"""
        )

    # Deduplicate and cap the number of rules before writing them out
    new_ruleset.flowspec.v4 = list(
        set(new_ruleset.flowspec.v4[:MAX_DDOS_RULES])
    )
    new_ruleset.flowspec.v6 = list(
        set(new_ruleset.flowspec.v6[:MAX_DDOS_RULES])
    )

    # TODO: advertise changes by mail, chat, ...

    current_ruleset = new_ruleset
    changes = False
    for rules, path in (
        (current_ruleset.flowspec.v4, "v4-flowspec"),
        (current_ruleset.flowspec.v6, "v6-flowspec"),
        (current_ruleset.blackhole.v4, "v4-blackhole"),
        (current_ruleset.blackhole.v6, "v6-blackhole"),
    ):
        path = os.path.join("/etc/bird/", f"{path}.conf")
        with open(f"{path}.tmp", "w") as f:
            for r in rules:
                f.write(r)
        changes = (
            changes
            or not os.path.exists(path)
            or not filecmp.cmp(path, f"{path}.tmp", shallow=False)
        )
        os.rename(f"{path}.tmp", path)

    if not changes:
        continue

    proc = subprocess.Popen(
        ["birdc", "configure"],
        stdin=subprocess.DEVNULL,
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
    )
    stdout, stderr = proc.communicate(None)
    stdout = stdout.decode("utf-8", "replace")
    stderr = stderr.decode("utf-8", "replace")
    if proc.returncode != 0:
        logger.error(
            "{} error:\n{}\n{}".format(
                "birdc reconfigure",
                "\n".join(
                    [" O: {}".format(line) for line in stdout.rstrip().split("\n")]
                ),
                "\n".join(
                    [" E: {}".format(line) for line in stderr.rstrip().split("\n")]
                ),
            )
        )

Until Akvorado integrates DDoS detection and mitigation, the ideas presented in this blog post provide a solid foundation to get started with your own anti-DDoS system. 🛡️


  1. ClickHouse can export results using Markdown format when appending FORMAT Markdown to the query. ↩

  2. While most DNS clients should retry with TCP on failures, this is not always the case: until recently, musl libc did not implement this. ↩

  3. The materialized view also aggregates the data at hand, both for efficiency and to ensure we work with the right data types. ↩

06 March, 2023 07:34AM by Vincent Bernat

March 05, 2023

Enrico Zini

Heart-driven drum loop

I have Python code for reading a heart rate monitor.

I have Python code to generate MIDI events.

Could I resist putting them together? Clearly not.

Here's Jack Of Hearts, a JACK MIDI drum loop generator that uses the heart rate for BPM, and an improvised way to compute heart rate increase/decrease to add variations in the drum pattern.
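
At its core, the mapping is tiny. Roughly (an illustration of the idea, not the actual Jack Of Hearts code):

# Illustration only: the heart rate drives the beat interval, and its trend
# decides whether to vary the drum pattern.
def beat_delay(heart_rate: float) -> float:
    """Seconds between beats when the heart rate is used as the BPM."""
    return 60.0 / heart_rate

def variation(previous: float, current: float) -> int:
    """+1 when the heart rate is rising, -1 when falling, 0 when steady."""
    delta = current - previous
    if delta > 2:
        return 1
    if delta < -2:
        return -1
    return 0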

It's very simple minded and silly. To me it was a fun way of putting unrelated things together, and Python worked very well for it.

05 March, 2023 10:53PM

Generating MIDI events with JACK and Python

I had a go at trying to figure out how to generate arbitrary MIDI events and send them out over a JACK MIDI channel.

Setting up JACK and Pipewire

Pipewire has a JACK interface, which in theory means one could use JACK clients out of the box without extra setup.

In practice, one needs to tell JACK clients which set of libraries to use to communicate with servers, and Pipewire's JACK server is not the default choice.

To tell JACK clients to use Pipewire's server, you can either:

  • on a client-by-client basis, wrap the commands with pw-jack
  • to change the system default: cp /usr/share/doc/pipewire/examples/ld.so.conf.d/pipewire-jack-*.conf /etc/ld.so.conf.d/ and run ldconfig (see the Debian wiki for details)

Programming with JACK

Python has a JACK client library that worked flawlessly for me so far.

Everything with JACK is designed around minimizing latency. Everything happens around a callback that gets called from a separate thread, and which gets a buffer to fill with events.

All the heavy processing needs to happen outside the callback, and the callback is only there to do the minimal amount of work needed to shovel the data your application produced into JACK channels.
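
A minimal sketch of that shape with the JACK client library (simplified on purpose: the timed queue a real player needs is described below, and a plain Queue is used here just to show the division of work):

#!/usr/bin/python3
# Minimal JACK MIDI output sketch: other threads put raw MIDI bytes on a
# queue, and the process callback only shovels them into the port buffer.
import queue

import jack

client = jack.Client("midi-sketch")
outport = client.midi_outports.register("out")
pending: queue.Queue = queue.Queue()


@client.set_process_callback
def process(frames: int):
    outport.clear_buffer()
    while True:
        try:
            msg = pending.get_nowait()
        except queue.Empty:
            break
        outport.write_midi_event(0, msg)  # at the start of this period


with client:
    # Note on, channel 10 (percussion), acoustic bass drum, velocity 64
    pending.put(bytes([0x99, 35, 64]))
    input("press enter to quit\n")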

Generating MIDI messages

The Mido library can be used to parse and create MIDI messages and it also worked flawlessly for me so far.

One needs to study a bit what kind of MIDI message one needs to generate (like "note on", "note off", "program change") and what arguments they get.

It also helps to read about the General MIDI standard which defines mappings between well-known instruments and channels and instrument numbers in MIDI messages.
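
For instance (a small sketch; channel 9 is the zero-based General MIDI percussion channel and note 38 is the acoustic snare):

import mido

# A "note on" for the acoustic snare on the percussion channel, and the raw
# bytes that would end up in a JACK MIDI buffer.
on = mido.Message("note_on", channel=9, note=38, velocity=100)
off = mido.Message("note_off", channel=9, note=38)
print(on.bytes())   # [153, 38, 100]
print(off.bytes())  # [137, 38, 64]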

A timed message queue

To keep a queue of events that happen over time, I implemented a Delta List that indexes events by their future frame number.

I called the humble container for my audio experiments pyeep and here's my delta list implementation.

A JACK player

The simple JACK MIDI player backend is also in pyeep.

It needs to protect the delta list with a mutex since we are working across thread boundaries, but it tries to do as little work under lock as possible, to minimize the risk of locking the realtime thread for too long.

The play method converts delays in seconds to frame counts, and the on_process callback moves events from the queue to the jack output.
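
In outline, that conversion and hand-off look something like this (an illustrative stand-in, not the actual pyeep code):

import heapq
import threading

class TimedQueue:
    """Events indexed by absolute frame number, drained by the JACK callback."""

    def __init__(self, samplerate: int):
        self.samplerate = samplerate
        self.lock = threading.Lock()
        self.events = []          # heap of (frame, midi bytes) tuples
        self.frame_time = 0       # advanced by the process callback

    def play(self, msg: bytes, delay_sec: float = 0.0):
        # Convert a delay in seconds to an absolute frame number
        frame = self.frame_time + int(delay_sec * self.samplerate)
        with self.lock:
            heapq.heappush(self.events, (frame, msg))

    def pop_due(self, frames: int):
        """Called from the process callback: (offset, msg) pairs due this period."""
        due = []
        with self.lock:
            end = self.frame_time + frames
            while self.events and self.events[0][0] < end:
                frame, msg = heapq.heappop(self.events)
                due.append((max(0, frame - self.frame_time), msg))
            self.frame_time = end
        return due

The process callback would then call pop_due(frames) and hand each (offset, msg) pair to write_midi_event.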

Here's an example script that plays a simple drum pattern:

#!/usr/bin/python3

# Example JACK midi event generator
#
# Play a drum pattern over JACK

import time

from pyeep.jackmidi import MidiPlayer

# See:
# https://soundprogramming.net/file-formats/general-midi-instrument-list/
# https://www.pgmusic.com/tutorial_gm.htm

DRUM_CHANNEL = 9

with MidiPlayer("pyeep drums") as player:
    beat: int = 0
    while True:
        player.play("note_on", velocity=64, note=35, channel=DRUM_CHANNEL)
        player.play("note_off", note=38, channel=DRUM_CHANNEL, delay_sec=0.5)
        if beat == 0:
            player.play("note_on", velocity=100, note=38, channel=DRUM_CHANNEL)
            player.play("note_off", note=36, channel=DRUM_CHANNEL, delay_sec=0.3)
        if beat + 1 == 2:
            player.play("note_on", velocity=100, note=42, channel=DRUM_CHANNEL)
            player.play("note_off", note=42, channel=DRUM_CHANNEL, delay_sec=0.3)

        beat = (beat + 1) % 4
        time.sleep(0.3)

Running the example

I ran the jack_drums script, and of course not much happened.

First I needed a MIDI synthesizer. I installed fluidsynth, and ran it on the command line with no arguments. It registered with JACK, ready to do its thing.

Then I connected things together. I used qjackctl, opened the graph view, and connected the MIDI output of "pyeep drums" to the "FLUID Synth input port".

fluidsynth's output was already automatically connected to the audio card and I started hearing the drums playing! 🎉

05 March, 2023 11:14AM

Reproducible Builds

Reproducible Builds in February 2023

Welcome to the February 2023 report from the Reproducible Builds project. As ever, if you are interested in contributing to our project, please visit the Contribute page on our website.


FOSDEM 2023 was held in Brussels on the 4th & 5th of February and featured a number of talks related to reproducibility. In particular, Akihiro Suda gave a talk titled Bit-for-bit reproducible builds with Dockerfile discussing deterministic timestamps and deterministic apt-get (original announcement). There was also an entire ‘track’ of talks on Software Bill of Materials (SBOMs). SBOMs are an inventory for software with the intention of increasing the transparency of software components (the US National Telecommunications and Information Administration (NTIA) published a useful Myths vs. Facts document in 2021).


On our mailing list this month, Larry Doolittle was puzzled why the Debian verilator package was not reproducible [], but Chris Lamb pointed out that this was due to the use of Python’s datetime.fromtimestamp over datetime.utcfromtimestamp [].


James Addison also was having issues with a Debian package: in this case, the alembic package. Chris Lamb was also able to identify the Sphinx documentation generator as the cause of the problem, and provided a potential patch that might fix it. This was later filed upstream [].


Anthony Harrison wrote to our list twice, first by introducing himself and their background and later to mention the increasing relevance of Software Bill of Materials (SBOMs):

As I am sure everyone is aware, there is a growing interest in [SBOMs] as a way of improving software security and resilience. In the last two years, the US through the Exec Order, the EU through the proposed Cyber Resilience Act (CRA) and this month the UK has issued a consultation paper looking at software security and SBOMs appear very prominently in each publication. []


Tim Retout wrote a blog post discussing AlmaLinux in the context of CentOS, RHEL and supply-chain security in general []:

Alma are generating and publishing Software Bill of Material (SBOM) files for every package; these are becoming a requirement for all software sold to the US federal government. What’s more, they are sending these SBOMs to a third party (CodeNotary) who store them in some sort of Merkle tree system to make it difficult for people to tamper with later. This should theoretically allow end users of the distribution to verify the supply chain of the packages they have installed?


Debian


F-Droid & Android


diffoscope

diffoscope is our in-depth and content-aware diff utility. Not only can it locate and diagnose reproducibility issues, it can provide human-readable diffs from many kinds of binary formats.

This month, Chris Lamb released versions 235 and 236; Mattia Rizzolo later released version 237.

Contributions include:

  • Chris Lamb:
    • Fix compatibility with PyPDF2 (re. issue #331) [][][].
    • Fix compatibility with ImageMagick version 7.1 [].
    • Require at least version 23.1.0 to run the Black source code tests [].
    • Update debian/tests/control after merging changes from others [].
    • Don’t write test data during a test [].
    • Update copyright years [].
    • Merged a large number of changes from others.
  • Akihiro Suda edited the .gitlab-ci.yml configuration file to ensure that versioned tags are pushed to the container registry [].

  • Daniel Kahn Gillmor provided a way to migrate from PyPDF2 to pypdf (#1029741).

  • Efraim Flashner updated the tool metadata for isoinfo on GNU Guix [].

  • FC Stegerman added support for Android resources.arsc files [], improved a number of file-matching regular expressions [][] and added support for Android dexdump []; they also fixed a test failure (#1031433) caused by Debian’s black package having been updated to a newer version.

  • Mattia Rizzolo:
    • updated the release documentation [],
    • fixed a number of Flake8 errors [][],
    • updated the autopkgtest configuration to only install aapt and dexdump on architectures where they are available [], making sure that the latest diffoscope release is a good fit for the upcoming Debian bookworm freeze.

reprotest

Reprotest version 0.7.23 was uploaded to both PyPI and Debian unstable, including the following changes:

  • Holger Levsen improved a lot of documentation [][][], tidied the documentation as well [][], and experimented with a new --random-locale flag [].

  • Vagrant Cascadian adjusted reprotest to no longer randomise the build locale and use a UTF-8 supported locale instead […] (re. #925879, #1004950), and to also support passing --vary=locales.locale=LOCALE to specify the locale to vary [].

Separate to this, Vagrant Cascadian started a thread on our mailing list questioning the future development and direction of reprotest.


Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:


Testing framework

The Reproducible Builds project operates a comprehensive testing framework (available at tests.reproducible-builds.org) in order to check packages and other artifacts for reproducibility. In February, the following changes were made by Holger Levsen:

  • Add three new OSUOSL nodes [][][] and decommission the osuosl174 node [].
  • Change the order of listed Debian architectures to show the 64-bit ones first [].
  • Reduce the frequency that the Debian package sets and dd-list HTML pages update [].
  • Sort “Tested suite” consistently (and Debian unstable first) [].
  • Update the Jenkins shell monitor script to only query disk statistics every 230min [] and improve the documentation [][].

Other development work

disorderfs version 0.5.11-3 was uploaded by Holger Levsen, fixing a number of issues with the manual page [][][].


Bernhard M. Wiedemann published another monthly report about reproducibility within openSUSE.


If you are interested in contributing to the Reproducible Builds project, please visit the Contribute page on our website. You can get in touch with us via:

05 March, 2023 08:53AM

March 04, 2023

hackergotchi for Matt Brown

Matt Brown

Retrospective: Feb 2023

February ended up being a very short work month as I made a last minute decision to travel to Adelaide for the first 2 weeks of the month to help my brother with some house renovations he was undertaking. I thought I might be able to keep up with some work and my writing goals in the evenings while I was there, but days of hard manual labour are such an unfamiliar routine for me that I didn’t have any energy left to make good on that intention.

The majority of my time and focus for the remaining one and half weeks of the month was catching up on the consulting work that I had pushed back while in Adelaide.

So while it doesn’t make for a thrilling first month to look back and report on, overall I’m not unhappy with what I achieved given the time available. Next month, I hope to be able to report some more exciting progress on the product development front as well.

Monthly Scoring Rubric

I’m evaluating each goal using a 10 point scale based on execution velocity and risk level, rather than absolute success (which is what I will look at in the annual/mid-year review). If velocity is good and risk is low or well managed the score is high, if either the velocity is low, or risk is high then the score is low. E.g:

  • 10 - perfect execution with low-risk, on track for significantly overachieving the goal.
  • 7 - good execution with low or well managed risk, highly likely to achieve the goal.
  • 5 - execution and risk are OK, should achieve the goal if all goes well.
  • 3 - execution or risk have problems, goal is at risk.
  • 0 - stalled, with no obvious path to recovery or success.

Goals

Consulting - 6/10

Goal: Execute a series of successful consulting engagements, building a reputation for myself and leaving happy customers willing to provide testimonials that support a pipeline of future opportunities.

  • I have one active local engagement assisting a software team with migrating their application from a single to multi-region architecture.
  • Two promising international engagements which were close to starting both cancelled based on newly issued company policies freezing their staffing/outsourcing budgets due to the current economic climate.

I’m happy with where this is at - I hit 90% of my target hours in February (taking into account 2 weeks off) and the feedback I’m receiving is positive. The main risk is the future pipeline of engagements, particularly if the cancellations indicate a new pattern. I’m not overly concerned yet, as all the opportunities to date have been from direct or referred contacts in my personal network, so there’s plenty of potential to more actively solicit work to create a healthier pipeline.

Product Development - 3/10

Goal: Grow my product development skill set by taking several ideas to MVP stage with customer feedback received, and launch at least one product which generates revenue and has growth potential.

  • Accelerating electrification - I continued to keep up with industry news and added some interesting reports to my reading queue, but made no significant progress towards identifying a specific product opportunity.

  • Farm management SaaS - no activity or progress at all.

  • co2mon.nz - I put significant thought and planning into how to approach a second iteration of this product. I started writing and completed 80% of a post to communicate the revised business plan, but it’s not ready for publication yet, and even if it was, the real work towards it would need to actually happen to score more points here.

I had high hopes to make at least some progress in all three areas in February, but it just didn’t happen due to lack of time. The good news is that since the low score here is purely execution driven, there’s no new risks or blockers that will hinder much better progress here in March.

Professional Network Development - 8/10

Goal: To build a professional relationship with at least 30 new people this year.

This is off to a strong start, I made 4 brand new connections and re-established contact with 9 other existing people I’d not talked to for a while. I’ve found the conversations energising and challenging and I’m looking forward to continuing to keep this up.

Writing - 2/10

Goal: To publish a high-quality piece of writing on this site at least once a week.

Well off track as already noted. I am enjoying the writing process and I continue to find it useful in developing my thoughts and forcing me to challenge my assumptions, but coupling the writing process with the thinking/planning that is a prerequisite to get those benefits definitely makes my output a lot slower than I was expecting.

The slower speed, combined with the obvious time constraints of this month, is not a great double whammy to be starting with, but I think with some planning and preparation it could have been avoided by having a backlog of pre-written content for use in weeks where I'm on holiday or otherwise busy.

It’s worth noting that among all the useful feedback I received, this writing target was often called out as overly ambitious, or likely to be counterproductive to producing quality writing. The feedback makes sense - for now I’m not planning to change the goal (I might at my 6-month review point), but I am going to be diligent about adhering to my quality standard, which in turn means I’m choosing to accept missing a weekly post here and there and taking a lower score on the goal overall.

I apologise if you’ve been eagerly waiting for writing that never arrived over February!

Community - 5/10

Goal: To support the growth of my local technical community by volunteering my experience and knowledge with others through activities such as mentoring, conference talks and similar.

  • I was an invited participant of the monthly KiwiSRE meet-up which was discussing SRE team models, and in particular I was able to speak to my experiences as described in an old CRE blog post on this topic.

  • I joined the program committee for SREcon23 APAC which is scheduled for mid-June in Singapore. I also submitted two talk proposals of my own (not sharing the details for now, since the review process is intended to be blind) which I’m hopeful might make the grade with my fellow PC members!

Feedback

As always, I’d love to hear from you if you have thoughts or feedback triggered by anything I’ve written above. In particular, it would be useful to know whether you find this type of report interesting to read and/or what you’d like to see added/removed or changed.

04 March, 2023 01:03AM

March 03, 2023

hackergotchi for Louis-Philippe Véronneau

Louis-Philippe Véronneau

Goodbye Bullseye — report from the Montreal 2023 BSP

Hello World! I haven't really had time to blog here since the start of the semester, as I've been pretty busy at work1.

All this to say, this report for the Bug Squashing Party we held in Montreal last weekend is a little late, sorry :)

First of all, I'm pleased to announce our local community seems to be doing great and has recovered from the pandemic-induced lull. May COVID stay away from our bodies forever.

This time around, a total of 9 people made it to what has become somewhat of a biennial tradition2. We worked on a grand total of 14 bugs and even managed to close some!

It looks like I was too concentrated on bugs to take a picture of the event... To redeem myself, I hereby offer you a picture of a cute-but-hairless cat I met on Sunday morning:

Picture of a curious sphinx cat on a table

You should try to join an upcoming BSP or to organise one if you can. It's loads of fun and you'll be helping the project make the next release happen sooner!

As always, thanks to Debian for granting us a budget for the food and to rent the venue.

Goodbye Bullseye!


  1. Which I guess is a good thing, since it means I actually have work this semester :O 

  2. See our previous BSPs in 2017, 2019 and 2021

03 March, 2023 10:13PM by Louis-Philippe Véronneau

Sven Hoexter

exfat-fuse 1.4 in experimental

I know a few people hold on to the exFAT fuse implementation due to its support for timezone offsets, so here is a small update for you. Andrew released 1.4.0, which includes the timezone offset support that was so far only part of the git master branch. It also fixes a security issue, CVE-2022-29973, which is from my point of view very minor. In addition to that, it's the first build with fuse3 support. If you still use this driver, pick it up in experimental (we're in the bookworm freeze right now), and give it a try. I'm personally not using it anymore beyond a very basic "does it mount" test.

03 March, 2023 04:23PM

Russell Coker

Hyper Threading on the E5-2696v3

I just did some quick tests of hyper-threading on my new E5-2696v3 CPU. I compiled the Linux 6.0.10 kernel with and without hyper-threading enabled. Here’s the times for “make -j36 bzImage” and “make -j36 modules” with HT enabled:

real    2m26.540s
user    55m25.121s
sys     9m56.443s

real    10m57.374s
user    309m21.531s
sys     58m1.070s

Here’s the times for “make -j18 bzImage” and “make -j18 modules” with HT disabled:

real    2m40.501s
user    31m35.295s
sys     5m43.523s

real    11m39.313s
user    170m46.840s
sys     31m37.756s

That’s 9.6% faster for bzImage and 6.4% faster for modules.

So for a performance boost that’s between 5% and 10% I get greater exposure to kernel security issues and more difficulty tracking CPU time. That doesn’t seem like a good trade-off so I’ve put the “nosmt” kernel command-line option back.

03 March, 2023 10:35AM by etbe

March 02, 2023

Ian Jackson

hackergotchi for Ben Hutchings

Ben Hutchings

Debian LTS work, January/February 2023

In January I was assigned 24 hours by Freexian's Debian LTS initiative and worked 8 hours. In February I was assigned another 8 hours and worked 8 hours.

I updated the linux (4.19) package to the latest stable update, but didn't upload it. I merged the latest bullseye security update into the linux-5.10 package and uploaded that.

02 March, 2023 04:16PM

March 01, 2023

Russ Allbery

Small book haul

I'm a bit behind on both free software maintenance and on writing reviews, what with one thing and another, but hopefully will have time to catch up next month. Meanwhile, publishing continues and books keep catching my eye.

Blake Crouch (ed.) — Forward (sff anthology)
Kate Elliott — The Keeper's Six (sff)
Ruthanna Emrys — A Half-Built Garden (sff)
R.F. Kuang — Babel (sff)
Seanan McGuire — The Unkindest Tide (sff)
Seanan McGuire — A Killing Frost (sff)
Seanan McGuire — When Sorrows Come (sff)
Seanan McGuire — Be the Serpent (sff)
Terry Pratchett — Thief of Time (sff)
Terry Pratchett — The Last Hero (sff)
Terry Pratchett — The Amazing Maurice and His Educated Rodents (sff)
Terry Pratchett — Night Watch (sff)
Terry Pratchett — The Wee Free Men (sff)
Terry Pratchett — Monstrous Regiment (sff)

I keep hearing amazing things about Babel, so it's very high on the list.

01 March, 2023 05:28AM

hackergotchi for Junichi Uekawa

Junichi Uekawa

Got crosvm building in Debian.

Got crosvm building in Debian. Now to rebase and try to upload. Or maybe upload the version I have first and then rebase.

01 March, 2023 04:32AM by Junichi Uekawa

hackergotchi for Debian XMPP Team

Debian XMPP Team

XMPP What's new in Debian 12 bookworm

On Tue 13 July 2021 there was a blog post about new XMPP-related software releases which had been uploaded to Debian 11 (bullseye). Today, we will inform you about updates for the upcoming Debian release bookworm.

A lot of new releases have been provided by the upstream projects. There were a lot of changes to XMPP clients like Dino, Gajim, Profanity, Poezio and others. The XMPP servers have also been enhanced.

Unfortunately, we cannot provide a list of all the changes which have been made, but we will try to highlight some of the changes and new features.

BTW, feel free to join the Debian User Support on Jabber at xmpp:debian@conference.debian.org?join.

You can find a list of 58 packages of the Debian XMPP team on the XMPP QA Page.

  • Dino, the modern XMPP client, has been upgraded from 0.2.0 to 0.4.0. The new version supports encrypted calls and group calls, and reactions give you a way to respond to a message with an emoji. You can find more information about Dino 0.3.0 and Dino 0.4.0 in the release notes of the upstream project. Dino is using GTK4 / libadwaita, which provides widgets for mobile-friendly UIs. Changes have also been made to the main view of Dino.
  • Gajim, a GTK+-based Jabber client, has been upgraded from 1.3.1 to 1.7.1. Since 1.4 Gajim has a new UI, which supports spaces. 1.5.2 supports a content viewer for PEP nodes. 1.6.0 uses libsoup3 and Python 3.10. Audio preview looks a lot nicer with a wave graph visualization, and profile images (avatars) are no longer limited to JPG. The plugins gajim-appindicatorintegration, gajim-plugininstaller, gajim-syntaxhighlight and gajim-urlimagepreview are obsolete; these features have been moved into Gajim. There were a lot of releases in Gajim. You can find the full story at https://gajim.org/post/
  • Profanity, the console-based XMPP client, has been upgraded from 0.10.0 to 0.13.1. Profanity supports XEP-0377 Spam Reporting and XEP-0157 server contact information discovery. It now marks a window with an attention flag, has updated HTTP Upload (XEP-0363) support, and messages can be composed with an external editor. It also features easy quoting, in-band account registration (XEP-0077), printing the OMEMO verification QR code, and more.
  • Kaidan, a simple and user-friendly Jabber/XMPP client based on Qt has been updated from 0.7.0 to 0.8.0. The new release supports XEP-0085: Chat State Notifications and XEP-0313: Message Archive Management.
  • Poezio, a console-based XMPP client, has been updated from 0.13.1 to 0.14. Poezio is now under GPLv3+. The new release supports requests for voice, and the /join command supports using an XMPP URI. More information at https://lab.louiz.org/poezio/poezio/-/raw/v0.14/CHANGELOG.
  • Swift, the cross-platform XMPP client written in C++, is back in Debian. In 2015 the client was removed from testing, and it returns with version 5.0.

Server

  • prosody, the lightweight extensible XMPP server, has been upgraded from 0.11.9 to 0.12.2. It brings mobile and connectivity optimizations, a new module for HTTP file sharing, and audio/video calling support. See the release announcement for more info. You will also find a lot of new modules which have been added in 0.12.0. Version 0.12.3 is awaiting migration from unstable to testing.
  • ejabberd, extensible realtime platform (XMPP server + MQTT broker + SIP service) has been updated from Version 21.01 to 23.01. The new version supports the latest version of MIX (XEP-0369). There were also changes for SQL and MUC. See the release information for 22.10 and 23.01 for more details.

Libs

  • libstrophe, the XMPP C library, has been upgraded from 0.10.1 to 0.12.2. The library has SASL EXTERNAL support (XEP-0178), support for manual certificate verification, and Stream Management support (XEP-0198).
  • python-nbxmpp 2.0.2 to 4.2.0 - used by gajim
  • qxmpp 1.3.2 to 1.4.0
  • slixmpp 1.7.0 to 1.8.3 (see https://lab.louiz.org/poezio/slixmpp/-/tags/slix-1.8.0)
  • loudmouth 1.5.3 to 1.5.4
  • libomemo-c, new in Debian with version 0.5.0 - a fork of libsignal-protocol-c

Others

  • There were some changes to the Libervia (formerly known as Salut à Toi, SaT) packages in Debian. The most visible change is that Salut à Toi has been renamed to libervia:
  • salutatoi is now libervia-backend (0.9.0)
  • sat-xmpp-primitivus is now libervia-tui
  • sat-xmpp-core is now libervia-backend
  • sat-xmpp-jp is now libervia-cli
  • sat-pubsub is now libervia-pubsub (0.4.0)
  • gsasl has been updated from 1.10.0 to 2.2.0
  • libxeddsa 2.0.0 is new in Debian - toolkit around Curve25519 and Ed25519 key pairs

Happy chatting - keep in touch with your family and friends via Jabber / XMPP - XMPP is an open standard of the Internet Engineering Task Force (IETF) for instant messaging.

01 March, 2023 12:00AM by Debian XMPP Team

February 28, 2023

Paul Wise

FLOSS Activities Feb 2023

Focus

This month I didn't have any particular focus. I just worked on issues in my info bubble.

Changes

Issues

Review

Administration

  • Debian BTS: unarchive/reopen/triage bugs for reintroduced package servefile
  • Debian IRC: turn an old channel into a redirect to the right one
  • Debian wiki: unblock IP addresses, approve accounts

Communication

  • Respond to queries from Debian users and contributors on the mailing lists and IRC

Sponsors

The pyemd/sptag work was sponsored. All other work was done on a volunteer basis.

28 February, 2023 11:48PM

hackergotchi for Shirish Agarwal

Shirish Agarwal

Cutting off body parts and Lenovo

I would suggest that this blog post may be slightly unpleasant, and I do wish there was a standardized way, just like movies have ratings such as General, 14+, 16+, Adult and whatnot, so people could share without getting into trouble. I would suggest considering this blog post as somewhat mature and perhaps disturbing.

Cutting off body parts

For the last couple of months or so we have been getting daily reports of men or women killed and then chopped into pieces, and this is being ‘normalized’. During my growing-up years, the only such case I remember was the 1995 Tandoor case, and it jolted the conscience of the nation. But it seems a lot of water has passed under the bridge, as no one seems to be shocked anymore 😦 Also shocking is the number of heart attacks that young people are getting. Dunno the reason for either. I just saw this yesterday. The first thing that came to my mind was: at least she wasn’t chopped. It was only later I realized that the younger sister may have wanted to educate herself or had some other dreams, but because of some evil customs had to give her hand in marriage. No outrage here for anything, not even child marriage :(. How have we become so insensitive? And it’s mostly Hindus killing Hindus, but still no outrage. We have been killing Muslims and Christians, so I guess that is just par for the course :(. I wish I could say there is a solution but there seems to be none 😦 Even child abuse cases have been going up, but sad to say even they are being normalised. It’s only when a US agency or somebody like that feels shocked that we feel shocked; otherwise we have become numb 😦

AMD and Lenovo Lappies

About a couple of months ago I made a blog post about lappies. Then Russel reached out to me on Twitter and we engaged. One thing led to another, and soon, on some other topic somewhere, I came across this –

The above is a video presentation given by Mark Pearson. Sad to say, it was not illuminating enough, especially the whole ‘boothole’ thing. I did see three blog posts to get some more insight. The security entry did also share some news. I also reached out to Mr. Pearson to learn the status and to enquire whether there are any new lappies without an OS that I can buy from Lenovo. Sadly, both these e-mails went unanswered. Maybe they went to spam or something else; I have no clue. While other organizations did work on it, Debian was kinda side-lined, hence the annoyance from the Debian Maintainers that the whole thing came out of left field. And this doesn’t just affect Debian but all those downstream distributions that rely on Debian 😦 Now, while it’s almost a year since then and probably all has been fixed, there haven’t been any instructions that I could find that tell me whether there is a new way or the old way just works. In any case, I do think the bookworm release probably would have all the fixes needed. IIRC, we entered ‘soft freeze’ just a couple of weeks back.

I have to admit something though: I have never used Secure Boot as it has been designed, partially because I always run testing, irrespective of whatever device I use. And AFAIK the whole idea of Secure Boot is to have few updates, unlike Testing, which is kinda a rolling-release thing. While Secure Boot wants the same bits, all the underlying bits, in Testing it’s hard to ensure that, as the idea is to test new releases of software and see what works and what breaks until we send it to a final release (something like ‘Bookworm’). FWIW, currently ‘bookworm’ and ‘Testing’ are one and the same until Bookworm releases, and then Testing will have its own updates from the next hour/day after.



28 February, 2023 02:09PM by shirishag75

Russell Coker

February 26, 2023

Russ Allbery

Review: An Informal History of the Hugos

Review: An Informal History of the Hugos, by Jo Walton

Publisher: Tor
Copyright: August 2018
ISBN: 1-4668-6573-3
Format: Kindle
Pages: 564

An Informal History of the Hugos is another collection of Jo Walton's Tor.com posts. As with What Makes This Book So Great, these are blog posts that are still available for free on-line. Unlike that collection, this series happened after Tor.com got better at tags, so it's much easier to find. Whether to buy it therefore depends on whether having it in convenient book form is worth it to you.

Walton's previous collection was a somewhat random assortment of reviews of whatever book she felt like reviewing. As you may guess from the title, this one is more structured. She starts at the first year that the Hugo Awards were given out (1953) and discusses the winners for each year up through 2000. Nearly all of that discussion is about the best novel Hugo, a survey of other good books for that year, and, when other awards (Nebula, Locus, etc.) start up, comparing them to the winners and nominees of other awards. One of the goals of each discussion is to decide whether the Hugo nominees did a good job of capturing the best books of the year and the general feel of the genre at that time.

There are a lot of pages in this book, but that's partly because there's a lot of filler. Each post includes all of the winners and (once a nomination system starts) nominees in every Hugo category. Walton offers an in-depth discussion of the novel in every year, and an in-depth discussion of the John W. Campbell Award for Best New Writer (technically not a Hugo but awarded with them and voted on in the same way) once those start. Everything else gets a few sentences at most, so it's mostly just lists, all of which you can readily find elsewhere if you cared. Personally, I would have omitted categories without commentary when this was edited into book form.

Two other things are included in this book. Most helpfully, Walton's Tor.com reviews of novels in the shortlist are included after the discussion of that year. If you like Walton's reviews, this is great for all the reasons that What Makes This Book So Great was so much fun. Walton has a way of talking about books with infectious enthusiasm, brief but insightful technical analysis, and a great deal of genre context without belaboring any one point. They're concise and readable and never outlast my attention span, and I wish I could write reviews half as well.

The other inclusion is a selection of the comments from the original blog posts. When these posts originally ran, they turned into a community discussion of the corresponding year of SF, and Tor included a selection of those comments in the book. Full disclosure: one of those comments is mine, about the way that cyberpunk latched on to some incorrect ideas of how computers work and made them genre conventions to such a degree that most cyberpunk takes place in a parallel universe with very different computer technology. (I suppose that technically makes me a published author to the tune of a couple of pages.) While I still largely agree with the comment, I blamed Neuromancer for this at the time, and embarrassingly discovered when re-reading it that I had been unfair. This is why one should never express opinions in public where someone might record them.

Anyway, there is a general selection of comments from random people, but the vast majority of the comments are discussions of the year's short fiction by Rich Horton and Gardner Dozois. I understand why this was included; Walton doesn't talk about the short fiction, Dozois was a legendary SF short fiction editor and multiple Hugo winner, and both Horton and Dozois reviewed short fiction for Locus. But they don't attempt reviews. For nearly all stories under discussion, unless you recognized the title, you would have no idea even what sub-genre it was in. It's just a sequence of assertions about which title or author was better.

Given that there are (in most years) three short fiction categories to the one novel category and both Horton and Dozois write about each category, I suspect there are more words in this book from Horton and Dozois than Walton. That's a problem when those comments turn into tedious catalogs.

Reviewing short fiction, particularly short stories, is inherently difficult. I've tried to do a lot of that myself, and it's tricky to find something useful to say that doesn't spoil the story. And to be fair to Horton and Dozois, they weren't being paid to write reviews; they were just commenting on blog posts as part of a community conversation, and I doubt anyone thought this would turn into a book. But when read as a book, their inclusion in this form wasn't my favorite editorial decision.

This is therefore a collection of Walton's commentary on the selections for best novel and best new writer alongside a whole lot of boring lists. In theory, the padding shouldn't matter; one can skip over it and just read Walton's parts, and that's still lots of material. But Walton's discussion of the best novels of the year also tends to turn into long lists of books with no commentary (particularly once the very-long Locus recommended list starts appearing), adding to the tedium. This collection requires a lot of skimming.

I enjoyed this series of blog posts when they were first published, but even at the time I skimmed the short fiction comments. Gathered in book form with such light editing, I think it was less successful. If you are curious about the history of science fiction awards and never read the original posts, you may enjoy this, but I would rather have read another collection of straight reviews.

Rating: 6 out of 10

26 February, 2023 05:17AM

February 25, 2023

hackergotchi for Gregor Herrmann

Gregor Herrmann

demo video: dpt(1) in pkg-perl-tools

in the Debian Perl Group we are maintaining a lot of packages (around 4000 at the time of writing). this also means that we are spending some time on improving our tools which allow us to handle this amount of packages in a reasonable time.

many of the tools are shipped in the pkg-perl-tools package since 2013, & lots of them are scripts which are called as subcommands of the dpt(1) wrapper script.

in the last years I got the impression that not all team members are aware of all the useful tools, & that some more promotion might be called for. & last week I was in the mood for creating a short demo video to showcase how I use some dpt(1) subcommands when updating a package to a new upstream release. (even though I prefer text over videos myself :))

probably not a cinematographic masterpiece but as the feedback of a few viewers has been positive, I'm posting it here as well:

(direct link as planets ignore iframes …)

25 February, 2023 10:36PM

Jelmer Vernooij

Silver Platter Batch Mode

Background

Silver-Platter makes it easier to publish automated changes to repositories. However, in its default mode, the only option for reviewing changes before publishing them is to run in dry-run mode. This can be quite cumbersome if you have a lot of repositories.

A new “batch” mode now makes it possible to generate a large number of changes against different repositories using a script, review and optionally alter the diffs, and then publish them all (and potentially refresh them later if conflicts appear).

Example running pyupgrade

I’m using the pyupgrade example recipe that comes with silver-platter.

 ---
 name: pyupgrade
 command: 'pyupgrade --exit-zero-even-if-changed $(find -name "test_*.py")'
 mode: propose
 merge-request:
   commit-message: Upgrade Python code to a modern version

And a list of candidate repositories to process in candidates.yaml.

 ---
 - url: https://github.com/jelmer/dulwich
 - url: https://github.com/jelmer/xandikos

With these in place, the updated repositories can be created:

 $ svp batch generate --recipe=pyupgrade.yaml --candidates=candidates.yaml pyupgrade

The intermediate results

This will create a directory called pyupgrade, with a clone of each of the repositories.

$ ls pyupgrade
batch.yaml  dulwich  xandikos

$ cd pyupgrade/dulwich
$ git log
commit 931f9ffb26e9143c56f20e0b85e6ddb0a8eee2eb (HEAD -> master)
Author: Jelmer Vernooij <jelmer@jelmer.uk>
Date:   Sat Feb 25 22:28:12 2023 +0000

Run pyupgrade
diff --git a/dulwich/tests/compat/test_client.py b/dulwich/tests/compat/test_client.py
index 02ab6c0a..9b0661ed 100644
--- a/dulwich/tests/compat/test_client.py
+++ b/dulwich/tests/compat/test_client.py
@@ -628,7 +628,7 @@ class HTTPGitServer(http.server.HTTPServer):
         self.server_name = "localhost"

     def get_url(self):
-        return "http://{}:{}/".format(self.server_name, self.server_port)
+        return f"http://{self.server_name}:{self.server_port}/"


 class DulwichHttpClientTest(CompatTestCase, DulwichClientTestBase):
...

There is also a file called batch.yaml that describes the pending changes:

name: pyupgrade
work:
- url: https://github.com/dulwich/dulwich
  name: dulwich
  description: Upgrade to modern Python statements
  commit-message: Run pyupgrade
  mode: propose
- url: https://github.com/jelmer/xandikos
  name: xandikos
  description: Upgrade to modern Python statements
  commit-message: Run pyupgrade
  mode: propose
recipe: ../pyupgrade.yaml

At this point the changes can be reviewed, and batch.yaml edited as the user sees fit - they can remove entries that don’t appear to be correct, edit the metadata for the merge requests, etc. It’s also possible to make changes to the clones.

Once you’re happy, publish the results:

$ svp batch publish pyupgrade

This will publish all the changes, using the mode and parameters specified in batch.yaml.

batch.yaml is automatically stripped of any entries in work that have fully landed, i.e. where the pull request has been merged or where the changes were pushed to the origin.

To check up on the status of your changes, run svp batch status:

$ svp batch status pyupgrade

To refresh any merge proposals that may have become out of date, simply run publish again:

svp batch publish pyupgrade

25 February, 2023 09:44PM by Jelmer Vernooij

hackergotchi for Holger Levsen

Holger Levsen

20230225-Debian-Reunion-Hamburg-2023

Debian Reunion Hamburg 2023 from May 23 to 30

As in the last years there will be a Debian Reunion Hamburg 2023 event taking place at the same location as previous years, from May 23rd until the 30th (with the 29th being a public holiday in Germany and elsewhere).

This is just a short announcement to get the word out, that this event will happen, so you can ponder and prepare attending. The wiki page has more information and some fine folks have even already registered! Announcements on the appropriate mailinglists will follow soon.

And once again, a few things still need to be sorted out, e.g. a call for papers and a call for sponsors. Also this year I'd like to distribute the work on more shoulders, especially dealing with accommodation (there are 34 beds available on-site), accommodation payments and finances in general.

If you want to help with any of that or have questions about the event, please reach out via #debconf-hamburg on irc.oftc.net or via the debconf-hamburg mailinglist.

I'm very much looking forward to meeting some of you once again and to getting to know some others for the first time! Yay.

25 February, 2023 06:59PM

February 24, 2023

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

ttdo 0.0.9 on CRAN: Small Update

A new minor release of our ttdo package arrived on CRAN a few days ago. The ttdo package extends the excellent (and very minimal / zero depends) unit testing package tinytest by Mark van der Loo with the very clever and well-done diffobj package by Brodie Gaslam to give us test results with visual diffs (as shown in the screenshot below), an idea seemingly so compelling that it eventually got copied by another package which shall remain unnamed…

ttdo screenshot

This release adds a versioned dependency on the just released tinytest version 1.4.1. As we extend tinytest (for use in the autograder we deploy within the lovely PrairieLearn framework) by consuming its code, we have to update in sync with it.

There were no other code changes in the package besides the usual maintenance of badges and the continuous integration setup.

As usual, the NEWS entry follows.

Changes in ttdo version 0.0.9 (2023-02-21)

  • Minor cleanup in README.md

  • Minor continuous integration update

  • Updated (versioned) depends on tinytest to 1.4.1

My CRANberries provides the usual summary of changes to the previous version. Please use the GitHub repo and its issues for any questions.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

24 February, 2023 10:33PM

February 23, 2023

hackergotchi for Steve Kemp

Steve Kemp

A quick hack for Emacs

As I've mentioned in the past I keep a work-log, or work-diary, recording my activities every day.

I have a bunch of standard things that I record, but one thing that often ends up happening is that I make references to external bug trackers, be they Jira, Bugzilla, or something else.

Today I hacked up a simple emacs minor-mode for converting these references to hyperlinks, automatically, via the use of regular expressions.

Given this configuration:

(setq linkifier-patterns '(
          ("\\\<XXX-[0-9]+\\\>" "https://jira.example.com/browse/%s")
          ("\\\<BUG-[0-9]+\\\>" "https://bugzilla.example.com/show?id=%s")))

When the minor-mode is active, any literal text that matches a pattern, for example "XXX-1234", will suddenly become a clickable button that opens Jira, and BUG-1234 will become a clickable button that opens the appropriate bug in Bugzilla.

There's no rewriting of the content, this is just a bit of magic that changes the display of the text (i.e. I'm using a button/text-property).
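
If you're curious what the skeleton of such a mode looks like, here is a minimal sketch of one way to do it. To be clear: this is an illustration, not my actual implementation; the use of make-button and the property names are just assumptions.

(define-minor-mode linkifier-mode
  "Turn bug tracker references into clickable buttons."
  :lighter " Linkify"
  (if linkifier-mode
      (save-excursion
        (dolist (pattern linkifier-patterns)
          (goto-char (point-min))
          (while (re-search-forward (car pattern) nil t)
            ;; store the URL on the button itself, so the action
            ;; doesn't need to capture any local variables
            (make-button (match-beginning 0) (match-end 0)
                         'linkifier t
                         'url (format (cadr pattern) (match-string 0))
                         'follow-link t
                         'action (lambda (btn)
                                   (browse-url (button-get btn 'url)))))))
    ;; turning the mode off removes only our own buttons
    (remove-overlays (point-min) (point-max) 'linkifier t)))

A real version would also want to rescan when the buffer changes (via jit-lock or an after-change hook), but the general idea is the same.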

Since I mostly write in org-mode I could have written my text like so:

[[jira:XXX-1234][XXX-1234]]

But that feels like an ugly thing to do, and that style of links wouldn't work outside org-files anyway. That said, it's a useful approach if you're only using org-mode, and the setup is simple:

(add-to-list 'org-link-abbrev-alist
    '("jira" . "http://jira.example.com/browse/%s"))

Anyway, cute hack. Useful too.

23 February, 2023 10:10PM

February 22, 2023

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppArmadillo 0.12.0.1.0 on CRAN: New Upstream, New Features

armadillo image

Armadillo is a powerful and expressive C++ template library for linear algebra and scientific computing. It aims towards a good balance between speed and ease of use, has a syntax deliberately close to Matlab, and is useful for algorithm development directly in C++, or quick conversion of research code into production environments. RcppArmadillo integrates this library with the R environment and language–and is widely used by (currently) 1042 other packages on CRAN, downloaded 28.1 million times (per the partial logs from the cloud mirrors of CRAN), and the CSDA paper (preprint / vignette) by Conrad and myself has been cited 513 times according to Google Scholar.

This release brings a new upstream release 12.0.1. We found a small regression with the 12.0.0 release when we tested prior to a CRAN upload. Conrad very promptly fixed this with a literal one liner and made it 12.0.1, which we wrapped up as 0.12.0.1.0. Subsequent testing revealed no issues for us, and CRAN autoprocessed it as I tweeted earlier. This is actually quite impressive given the over 1000 CRAN packages using it, all of which got tested again by CRAN. All this is a testament to the rigour, as well as the well-oiled process, at the repository. Our thanks go to the tireless maintainers!

The release actually has a rather nice set of changes (detailed below), to which we added one robustification thanks to Kevin.

The full set of changes follows. We include the previous changeset as we may have skipped the usual blog post here.

Changes in RcppArmadillo version 0.12.0.1.0 (2023-02-20)

  • Upgraded to Armadillo release 12.0.1 (Cortisol Profusion)

    • faster fft() and ifft() via optional use of FFTW3

    • faster min() and max()

    • faster index_min() and index_max()

    • added .col_as_mat() and .row_as_mat() which return matrix representation of cube column and cube row

    • added csv_opts::strict option to loading CSV files to interpret missing values as NaN

    • added check_for_zeros option to form 4 of sparse matrix batch constructors

    • inv() and inv_sympd() with options inv_opts::no_ugly or inv_opts::allow_approx now use a scaled threshold similar to pinv()

    • set_cout_stream() and set_cerr_stream() are now no-ops; instead use the options ARMA_WARN_LEVEL, or ARMA_COUT_STREAM, or ARMA_CERR_STREAM

    • fix regression (mis-compilation) in shift() function (reported by us in #409)

  • The include directory order is now more robust (Kevin Ushey in #407 addressing #406)

Changes in RcppArmadillo version 0.11.4.4.0 (2023-02-09)

  • Upgraded to Armadillo release 11.4.4 (Ship of Theseus)

    • extended pow() with various forms of element-wise power operations

    • added find_nan() to find indices of NaN elements

    • faster handling of compound expressions by sum()

  • The package no longer sets a compilation standard, or propagates one in the generated packages, as R ensures C++11 on all non-ancient versions

  • The CITATION file was updated to the current format

Courtesy of my CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

22 February, 2023 10:34PM

February 21, 2023

Antoine Beaupré

Wayland: i3 to Sway migration

I started migrating my graphical workstations to Wayland, specifically migrating from i3 to Sway. This is mostly to address serious graphics bugs in the latest Framework laptop, but also something I felt was inevitable.

The current status is that I've been able to convert my i3 configuration to Sway, and adapt my systemd startup sequence to the new environment. Screen sharing only works with Pipewire, so I also did that migration, which basically requires an upgrade to Debian bookworm to get a nice enough Pipewire release.

I'm testing Wayland on my laptop, but I'm not using it as a daily driver because I first need to upgrade to Debian bookworm on my main workstation.

Most irritants have been solved one way or the other. My main problem with Wayland right now is that I spent a frigging week doing the conversion: it's exciting and new, but it basically sucked the life out of all my other projects and it's distracting, and I want it to stop.

The rest of this page documents why I made the switch, how it happened, and what's left to do. Hopefully it will keep you from spending as much time as I did in fixing this.

TL;DR: Wayland is mostly ready. The main blockers you might find are that you need to do manual configuration, that DisplayLink (multiple monitors on a single cable) doesn't work in Sway, and that HDR and color management are still in development.

I had to install the following packages:

apt install \
    brightnessctl \
    foot \
    gammastep \
    gdm3 \
    grim slurp \
    pipewire-pulse \
    sway \
    swayidle \
    swaylock \
    wdisplays \
    wev \
    wireplumber \
    wlr-randr \
    xdg-desktop-portal-wlr

And I did some tweaks in my $HOME, mostly dealing with my esoteric systemd startup sequence, which you won't have to deal with if you are not a fan.

Why switch?

I originally held back from migrating to Wayland: it seemed like a complicated endeavor hardly worth the cost. It also didn't seem actually ready.

But after reading this blurb on LWN, I decided to at least document the situation here. The actual quote that convinced me it might be worth it was:

It’s amazing. I have never experienced gaming on Linux that looked this smooth in my life.

... I'm not a gamer, but I do care about latency. The longer version is worth a read as well.

The point here is not to bash one side or the other, or even do a thorough comparison. I start with the premise that Xorg is likely going away in the future and that I will need to adapt some day. In fact, the last major Xorg release (21.1, October 2021) is rumored to be the last ("just like the previous release..."); that said, minor releases are still coming out (e.g. 21.1.4). Indeed, it seems even core Xorg people have moved on to developing Wayland, or at least Xwayland, which was spun off into its own source tree.

X, or at least Xorg, is in maintenance mode and has been for years. Granted, the X Window System is getting close to forty years old at this point: it got us amazingly far for something that was designed around the time of the first graphical interfaces. Since Mac and (especially?) Windows released theirs, they have rebuilt their graphical backends numerous times, but UNIX derivatives have stuck with Xorg this entire time, which is a testament to the design and reliability of X. (Or our incapacity at developing meaningful architectural change across the entire ecosystem, take your pick I guess.)

What pushed me over the edge is that I had some pretty bad driver crashes with Xorg while screen sharing under Firefox, in Debian bookworm (around November 2022). The symptom would be that the UI would completely crash, reverting to a text-only console, while Firefox would keep running, audio and everything still working. People could still see my screen, but I couldn't, of course, let alone interact with it. All processes still running, including Xorg.

(And no, sorry, I haven't reported that bug; maybe I should have, and it's actually possible it comes up again in Wayland, of course. But at first, screen sharing didn't work under Wayland at all either, so it's coming from much further behind. After making screen sharing work, though, the bug didn't occur again, so I consider this a Xorg-specific problem until further notice.)

There were also frustrating glitches in the UI, in general. I actually had to setup a compositor alongside i3 to make things bearable at all. Video playback in a window was lagging, sluggish, and out of sync.

Wayland fixed all of this.

Wayland equivalents

This section documents each tool I have picked as an alternative to the current Xorg tool I am using for the task at hand. It also touches on other alternatives and how the tool was configured.

Note that this list is based on the series of tools I use in desktop.

TODO: update desktop with the following when done, possibly moving old configs to a xorg archive.

Window manager: i3 → sway

This seems like kind of a no-brainer. Sway is around, it's feature-complete, and it's in Debian.

I'm a bit worried about the "Drew DeVault community", to be honest. There's a certain aggressiveness in the community I don't like so much; at least an open hostility towards more modern UNIX tools like containers and systemd, which makes it hard to do my work while interacting with that community.

I'm also concerned about the lack of unit tests and a user manual for Sway. The i3 window manager was designed by a fellow (ex-)Debian developer I have a lot of respect for (Michael Stapelberg), partly because of i3 itself, but also from working with him on other projects. Beyond the people involved, i3 has a user guide, a code of conduct, and lots more documentation. It has a test suite.

Sway has... manual pages, with the homepage just telling users to use man -k sway to find what they need. I don't think we need that kind of elitism in our communities, to put this bluntly.

But let's put that aside: Sway is still a no-brainer. It's the easiest thing to migrate to, because it's mostly compatible with i3. I had to immediately fix those resources to get a minimal session going:

i3 Sway note
set_from_resources set no support for X resources, naturally
new_window pixel 1 default_border pixel 1 actually supported in i3 as well

That's it. All of the other changes I had to do (and there were actually a lot) were all Wayland-specific changes, not Sway-specific changes. For example, use brightnessctl instead of xbacklight to change the backlight levels.
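
To give one concrete example, the equivalent Sway bindings look roughly like this (a sketch; the key names and step size are just illustrative):

# backlight keys via brightnessctl instead of xbacklight
bindsym XF86MonBrightnessUp exec brightnessctl set +5%
bindsym XF86MonBrightnessDown exec brightnessctl set 5%-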

See a copy of my full sway/config for details.

Other options include:

  • dwl: tiling, minimalist, dwm for Wayland, not in Debian
  • Hyprland: tiling, fancy animations, not in Debian
  • Qtile: tiling, extensible, in Python, not in Debian (1015267)
  • river: Zig, stackable, tagging, not in Debian (1006593)
  • velox: inspired by xmonad and dwm, not in Debian
  • vivarium: inspired by xmonad, not in Debian

Status bar: py3status → waybar

I have invested quite a bit of effort in setting up my status bar with py3status. It supports Sway directly, and did not actually require any change when migrating to Wayland.

Unfortunately, I had trouble making nm-applet work. Based on this nm-applet.service, I found that you need to pass --indicator for it to show up at all.

In theory, tray icon support was merged in 1.5, but in practice there are still several limitations, like icons not being clickable. Also, on startup, nm-applet --indicator triggers this error in the Sway logs:

nov 11 22:34:12 angela sway[298938]: 00:49:42.325 [INFO] [swaybar/tray/host.c:24] Registering Status Notifier Item ':1.47/org/ayatana/NotificationItem/nm_applet'
nov 11 22:34:12 angela sway[298938]: 00:49:42.327 [ERROR] [swaybar/tray/item.c:127] :1.47/org/ayatana/NotificationItem/nm_applet IconPixmap: No such property “IconPixmap”
nov 11 22:34:12 angela sway[298938]: 00:49:42.327 [ERROR] [swaybar/tray/item.c:127] :1.47/org/ayatana/NotificationItem/nm_applet AttentionIconPixmap: No such property “AttentionIconPixmap”
nov 11 22:34:12 angela sway[298938]: 00:49:42.327 [ERROR] [swaybar/tray/item.c:127] :1.47/org/ayatana/NotificationItem/nm_applet ItemIsMenu: No such property “ItemIsMenu”
nov 11 22:36:10 angela sway[313419]: info: fcft.c:838: /usr/share/fonts/truetype/dejavu/DejaVuSans.ttf: size=24.00pt/32px, dpi=96.00

... but that seems innocuous. The tray icon displays but is not clickable.

Note that there is currently (November 2022) a pull request to hook up a "Tray D-Bus Menu" which, according to Reddit might fix this, or at least be somewhat relevant.

If you don't see the icon, check the bar.tray_output property in the Sway config, try: tray_output *.

The non-working tray was the biggest irritant in my migration. I have used nmtui to connect to new Wifi hotspots or change connection settings, but that doesn't support actions like "turn off WiFi".

I eventually fixed this by switching from py3status to waybar, which was another yak horde shaving session, but ultimately, it worked.

Other alternatives include:

Web browser: Firefox

Firefox has had support for Wayland for a while now, with the team enabling it by default in nightlies around January 2022. It's actually not easy to figure out the state of the port; the meta bug report is still open and it's huge: it currently (Sept 2022) depends on 76 open bugs, it was opened twelve years ago (2010), and it's still getting daily updates (mostly linking to other tickets).

Firefox 106 presumably shipped with "Better screen sharing for Windows and Linux Wayland users", but I couldn't quite figure out what those were.

TL;DR: echo MOZ_ENABLE_WAYLAND=1 >> ~/.config/environment.d/firefox.conf && apt install xdg-desktop-portal-wlr

How to enable it

Firefox depends on this silly variable to start correctly under Wayland (otherwise it starts inside Xwayland and looks fuzzy and fails to screen share):

MOZ_ENABLE_WAYLAND=1 firefox

To make the change permanent, many recipes recommend adding this to an environment startup script:

if [ "$XDG_SESSION_TYPE" == "wayland" ]; then
    export MOZ_ENABLE_WAYLAND=1
fi

At least that's the theory. In practice, Sway doesn't actually run any startup shell script, so that can't possibly work. Furthermore, XDG_SESSION_TYPE is not actually set when starting Sway from gdm3 which I find really confusing, and I'm not the only one. So the above trick doesn't actually work, even if the environment (XDG_SESSION_TYPE) is set correctly, because we don't have conditionals in environment.d(5).

(Note that systemd.environment-generator(7) does support running arbitrary commands to generate environment, but for some reason does not support user-specific configuration files: it only looks at system directories... Even then it may be a solution to have a conditional MOZ_ENABLE_WAYLAND environment, but I'm not sure it would work because ordering between those two isn't clear: maybe the XDG_SESSION_TYPE wouldn't be set just yet...)

At first, I made this ridiculous script to workaround those issues. Really, it seems to me Firefox should just parse the XDG_SESSION_TYPE variable here... but then I realized that Firefox works fine in Xorg when the MOZ_ENABLE_WAYLAND is set.

So now I just set that variable in environment.d and It Just Works™:

MOZ_ENABLE_WAYLAND=1

Screen sharing

Out of the box, screen sharing doesn't work until you install xdg-desktop-portal-wlr or similar (e.g. xdg-desktop-portal-gnome on GNOME). I had to reboot for the change to take effect.

Without those tools, it shows the usual permission prompt with "Use operating system settings" as the only choice, but when we accept... nothing happens. After installing the portals, it actually works, and works well!

This was tested in Debian bookworm/testing with Firefox ESR 102 and Firefox 106.

Major caveat: we can only share a full screen, we can't currently share just a window. The major upside to that is that, by default, it streams only one output which is actually what I want most of the time! See the screencast compatibility for more information on what is supposed to work.

This is actually a huge improvement over the situation in Xorg, where Firefox can only share a window or all monitors, which led me to use Chromium a lot for video-conferencing. With this change, in other words, I will not need Chromium for anything anymore, whoohoo!

If slurp, wofi, or bemenu are installed, one of them will be used to pick the monitor to share, which effectively acts as some minimal security measure. See xdg-desktop-portal-wlr(1) for how to configure that.
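
For reference, a minimal sketch of what that configuration can look like, in ~/.config/xdg-desktop-portal-wlr/config (the output name is machine-specific and the values here are only an example):

# sketch only: pick the output to share with slurp
[screencast]
output_name=eDP-1
max_fps=30
chooser_type=simple
chooser_cmd=slurp -f %o -or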

Side note: Chrome fails to share a full screen

I was still using Google Chrome (or, more accurately, Debian's Chromium package) for some videoconferencing. It's mainly because Chromium was the only browser which would allow me to share only one of my two monitors, which is extremely useful.

To start chrome with the Wayland backend, you need to use:

chromium  -enable-features=UseOzonePlatform -ozone-platform=wayland

If it shows an ugly gray border, check the Use system title bar and borders setting.

It can do some screen sharing. Sharing a window and a tab seems to work, but sharing a full screen doesn't: it's all black. Maybe not ready for prime time.

And since Firefox can do what I need under Wayland now, I will not need to fight with Chromium to work under Wayland:

apt purge chromium

Note that a similar fix was necessary for Signal Desktop, see this commit. Basically you need to figure out a way to pass those same flags to signal:

--enable-features=WaylandWindowDecorations --ozone-platform-hint=auto

Email: notmuch

See Emacs, below.

File manager: thunar

Unchanged.

News: feed2exec, gnus

See Email, above, or Emacs in Editor, below.

Editor: Emacs okay-ish

Emacs is being actively ported to Wayland. According to this LWN article, the first (partial, to Cairo) port was done in 2014 and a working port (to GTK3) was completed in 2021, but wasn't merged until late 2021. That is: after Emacs 28 was released (April 2022).

So we'll probably need to wait for Emacs 29 to have native Wayland support in Emacs, which, in turn, is unlikely to arrive in time for the Debian bookworm freeze. There are, however, unofficial builds for both Emacs 28 and 29 provided by spwhitton which may provide native Wayland support.

I tested the snapshot packages and they do not quite work well enough. First off, they completely take over the built-in Emacs — they hijack the $PATH in /etc! — and certain things are simply not working in my setup. For example, this hook never gets run on startup:

(add-hook 'after-init-hook 'server-start t) 

Still, like many X11 applications, Emacs mostly works fine under Xwayland. The clipboard works as expected, for example.

Scaling is a bit of an issue: fonts look fuzzy.

I have heard anecdotal evidence of hard lockups with Emacs running under Xwayland as well, but haven't experienced any problem so far. I did experience a Wayland crash with the snapshot version however.

TODO: look again at Wayland in Emacs 29.

Backups: borg

Mostly irrelevant, as I do not use a GUI.

Color theme: srcery, redshift → gammastep

I am keeping Srcery as a color theme, in general.

Redshift is another story: it has no support for Wayland out of the box, but it's apparently possible to apply a hack on the TTY before starting Wayland, with:

redshift -m drm -PO 3000

This tip is from the arch wiki which also has other suggestions for Wayland-based alternatives. Both KDE and GNOME have their own "red shifters", and for wlroots-based compositors, they (currently, Sept. 2022) list the following alternatives:

I configured gammastep with a simple gammastep.service file associated with the sway-session.target.
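
Such a unit can be quite small; a sketch (not my exact file, and the target name assumes the systemd setup described below):

# sketch: tie gammastep to the graphical session
[Unit]
Description=Colour temperature adjustment (gammastep)
PartOf=sway-session.target
After=sway-session.target

[Service]
ExecStart=/usr/bin/gammastep
Restart=on-failure

[Install]
WantedBy=sway-session.target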

Display manager: lightdm → gdm3

Switched because lightdm failed to start sway:

nov 16 16:41:43 angela sway[843121]: 00:00:00.002 [ERROR] [wlr] [libseat] [common/terminal.c:162] Could not open target tty: Permission denied

Possible alternatives:

Terminal: xterm → foot

One of the biggest question marks in this transition was what to do about Xterm. After writing two articles about terminal emulators as a professional journalist, spending decades working in the terminal, and probably using dozens of different terminal emulators, I'm still not happy with any of them.

This is such a big topic that I actually have an entire blog post specifically about this.

For starters, using xterm under Xwayland works well enough, although the font scaling makes things look a bit too fuzzy.

I have also tried foot: it ... just works!

Fonts are much crisper than Xterm and Emacs. URLs are not clickable but the URL selector (control-shift-u) is just plain awesome (think "vimperator" for the terminal).

There's a cool hack to jump between prompts.

Copy-paste works. True colors work. The word-wrapping is excellent: it doesn't lose one byte. Emojis are nicely sized and colored. Font resize works. There's even scroll back search (control-shift-r).

Foot went from a question mark to being a reason to switch to Wayland, just for this little goodie, which says a lot about the quality of that software.

The selection clicks are not quite what I would expect though. In rxvt and others, you have the following patterns:

  • single click: reset selection, or drag to select
  • double: select word
  • triple: select quotes or line
  • quadruple: select line

I particularly find the "select quotes" bit useful. It seems like foot just supports double and triple clicks, with word and line selected. You can select a rectangle while holding control. It correctly extends the selection word-wise with a right click if double-click was used first.

One major problem with Foot is that it's a new terminal, with its own termcap entry. Support for foot was added to ncurses in the 20210731 release, which was shipped after the current Debian stable release (Debian bullseye, which ships 6.2+20201114-2). A workaround for this problem is to install the foot-terminfo package on the remote host, which is available in Debian stable.

This should eventually resolve itself, as Debian bookworm has a newer version. Note that some corrections were also shipped in the 20211113 release, but that is also shipped in Debian bookworm.

That said, I am almost certain I will have to revert to xterm under Xwayland at some point in the future. Back when I was using GNOME Terminal, it would mostly work for everything until I had to use the serial console on a (HP ProCurve) network switch, which has a fancy TUI that was basically unusable there. I fully expect such problems with foot, or any terminal other than xterm, for that matter.

The foot wiki has good troubleshooting instructions as well.

Update: I did find one tiny thing to improve with foot, and it's the default logging level which I found pretty verbose. After discussing it with the maintainer on IRC, I submitted this patch to tweak it, which I described like this on Mastodon:

today's reason why i will go to hell when i die (TRWIWGTHWID?): a 600-word, 63 lines commit log for a one line change: https://codeberg.org/dnkl/foot/pulls/1215

It's Friday.

Launcher: rofi → rofi??

rofi does not support Wayland. There was a rather disgraceful battle in the pull request that led to the creation of a fork (lbonn/rofi), so it's unclear how that will turn out.

Given how relatively trivial the problem space is, there is of course a profusion of options:

Tool In Debian Notes
alfred yes general launcher/assistant tool
bemenu yes, bookworm+ inspired by dmenu
cerebro no Javascript ... uh... thing
dmenu-wl no fork of dmenu, straight port to Wayland
Fuzzel ITP 982140 dmenu/drun replacement, app icon overlay
gmenu no drun replacement, with app icons
kickoff no dmenu/run replacement, fuzzy search, "snappy", history, copy-paste, Rust
krunner yes KDE's runner
mauncher no dmenu/drun replacement, math
nwg-launchers no dmenu/drun replacement, JSON config, app icons, nwg-shell project
Onagre no rofi/alfred inspired, multiple plugins, Rust
πmenu no dmenu/drun rewrite
Rofi (lbonn's fork) no see above
sirula no .desktop based app launcher
Ulauncher ITP 949358 generic launcher like Onagre/rofi/alfred, might be overkill
tofi yes, bookworm+ dmenu/drun replacement, C
wmenu no fork of dmenu-wl, but mostly a rewrite
Wofi yes dmenu/drun replacement, not actively maintained
yofi no dmenu/drun replacement, Rust

The above list comes partly from https://arewewaylandyet.com/ and awesome-wayland. It is likely incomplete.

I have read some good things about bemenu, fuzzel, and wofi.

A particularly tricky issue is that my rofi password management depends on xdotool for some operations. At first, I thought this was just going to be (thankfully?) impossible, because we actually like the idea that one app cannot send keystrokes to another. But it seems there are actually alternatives to this, like wtype or ydotool, the latter of which requires root access. wl-ime-type does that through the input-method-unstable-v2 protocol (sample emoji picker), but it is not packaged in Debian.

As it turns out, wtype just works as expected, and fixing this was basically a two-line patch. Another alternative, not in Debian, is wofi-pass.
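
The resulting glue can be as simple as this hypothetical helper (a sketch, not my actual script), which types a password stored in pass(1) into the currently focused window:

#!/bin/sh
# sketch: type the first line of a pass(1) entry into the focused window
# (note that this briefly exposes the secret in the process list)
secret="$(pass show "$1" | head -n 1)"
exec wtype "$secret"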

The other problem is that I actually heavily modified rofi. I use "modis" which are not actually implemented in wofi or tofi, so I'm left with reinventing those wheels from scratch or using the rofi + wayland fork... It's really too bad that fork isn't being reintegrated...

For now, I'm actually still using rofi under Xwayland. The main downside is that fonts are fuzzy, but it otherwise just works.

Note that wlogout could be a partial replacement (just for the "power menu").

Image viewers: geeqie → ?

I'm not very happy with geeqie in the first place, and I suspect the Wayland switch will just pile more problems on top of the things I already find irritating (Geeqie doesn't support copy-pasting images).

In practice, Geeqie doesn't seem to work so well under Wayland. The fonts are fuzzy and the thumbnail preview just doesn't work anymore (filed as Debian bug 1024092). It seems it also has problems with scaling.

Alternatives:

See also this list and that list for other list of image viewers, not necessarily ported to Wayland.

TODO: pick an alternative to geeqie; nomacs would be gorgeous if it weren't basically abandoned upstream (no release since 2020): it has an unpatched CVE-2020-23884 since July 2020, does bad vendoring, and is in bad shape in Debian (4 minor releases behind).

So for now I'm still grumpily using Geeqie.

Media player: mpv, gmpc / sublime

This is basically unchanged. mpv seems to work fine under Wayland, better than under Xorg on my new laptop (as mentioned in the introduction), and that is before the version which improves Wayland support significantly by bringing native Pipewire support and DMA-BUF support.

gmpc is more of a problem, mainly because it is abandoned. See 2022-08-22-gmpc-alternatives for the full discussion, one of the alternatives there will likely support Wayland.

Finally, I might just switch to sublime-music instead... In any case, not many changes here, thankfully.

Screensaver: xsecurelock → swaylock

I was previously using xss-lock and xsecurelock as a screensaver, with xscreensaver "hacks" as a backend for xsecurelock.

The basic screensaver in Sway seems to be built with swayidle and swaylock. It's interesting because it's the same "split" design as xss-lock and xsecurelock.

That, unfortunately, does not include the fancy "hacks" provided by xscreensaver, and that is unlikely to be implemented upstream.

Other alternatives include gtklock and waylock (zig), which do not solve that problem either.

It looks like swaylock-plugin, a swaylock fork, at least attempts to solve this problem, although not by directly using the real xscreensaver hacks. swaylock-effects is another attempt at this, but it only adds more effects; it doesn't delegate the image display.

Other than that, maybe it's time to just let go of those funky animations and just let swaylock do its thing, which is to display a static image or just a black screen, which is fine by me.

In the end, I am just using swayidle with a configuration based on the systemd integration wiki page but with additional tweaks from this service, see the resulting swayidle.service file.
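
The actual invocation boils down to something like this (a sketch; the timeouts and commands are only examples):

# lock after 5 minutes, blank outputs after 10, lock before sleep
swayidle -w \
    timeout 300 'swaylock -f -c 000000' \
    timeout 600 'swaymsg "output * dpms off"' \
         resume 'swaymsg "output * dpms on"' \
    before-sleep 'swaylock -f -c 000000'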

Interestingly, damjan also has a service for swaylock itself, although it's not clear to me what its purpose is...

Screenshot: maim → grim, pubpaste

I'm a heavy user of maim (and a package uploader in Debian). It looks like the direct replacement to maim (and slop) is grim (and slurp). There's also swappy which goes on top of grim and allows preview/edit of the resulting image, nice touch (not in Debian though).

See also awesome-wayland screenshots for other alternatives: there are many, including X11 tools like Flameshot that also support Wayland.

One key problem here was that I have my own screenshot / pastebin software which needed an update for Wayland as well. That, thankfully, meant actually cleaning up a lot of horrible code that involved calling xterm and xmessage for user interaction. Now, pubpaste uses GTK for prompts and looks much better. (And before anyone freaks out, I already had to use GTK for proper clipboard support, so this isn't much of a stretch...)

Screen recorder: simplescreenrecorder → wf-recorder

In Xorg, I have used both peek or simplescreenrecorder for screen recordings. The former will work in Wayland, but has no sound support. The latter has a fork with Wayland support but it is limited and buggy ("doesn't support recording area selection and has issues with multiple screens").

It looks like wf-recorder will just do everything correctly out of the box, including audio support (with --audio, duh). It's also packaged in Debian.
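
Typical usage is a one-liner (a sketch; the region selection via slurp is optional):

# record a selected region, with audio, into a file
wf-recorder --audio -g "$(slurp)" -f recording.mp4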

One has to wonder how this works while keeping the "between app security" that Wayland promises, however... Would installing such a program make my system less secure?

Many other options are available, see the awesome Wayland screencasting list.

RSI: workrave → nothing?

Workrave has no support for Wayland. activity watch is a time tracker alternative, but is not a RSI watcher. KDE has rsiwatcher, but that's a bit too much on the heavy side for my taste.

SafeEyes looks like an alternative at first, but it has many issues under Wayland (escape doesn't work, idle doesn't work, it just doesn't work really). timekpr-next could be an alternative as well, and has support for Wayland.

I am also considering just abandoning workrave, even if I stick with Xorg, because it apparently introduces significant latency in the input pipeline.

And besides, I've developed a pretty unhealthy alert fatigue with Workrave. I have used the program for so long that my fingers know exactly where to click to dismiss those warnings very effectively. It makes my work just more irritating, and doesn't fix the fundamental problem I have with computers.

Other apps

This is a constantly changing list, of course. There's a bit of a "death by a thousand cuts" in migrating to Wayland because you realize how many things you were using are tightly bound to X.

  • .Xresources - just say goodbye to that old resource system, it was used, in my case, only for rofi, xterm, and ... Xboard!?

  • keyboard layout switcher: built-in to Sway since 2017 (PR 1505, 1.5rc2+), requires a small configuration change, see this answer as well, looks something like this command:

     swaymsg input 0:0:X11_keyboard xkb_layout de
    

    or using this config:

     input * {
         xkb_layout "ca,us"
         xkb_options "grp:sclk_toggle"
     }
    

    That works refreshingly well, even better than in Xorg, I must say.

    swaykbdd is an alternative that supports per-window layouts (in Debian).

  • wallpaper: currently using feh, will need a replacement, TODO: figure out something that does, like feh, a random shuffle. swaybg just loads a single image, duh. oguri might be a solution, but unmaintained, used here, not in Debian. wallutils is another option, also not in Debian. For now I just don't have a wallpaper, the background is a solid gray, which is better than Xorg's default (which is whatever crap was left around a buffer by the previous collection of programs, basically)

  • notifications: currently using dunst in some places, which works well in both Xorg and Wayland, not a blocker, salut a possible alternative (not in Debian), damjan uses mako. TODO: install dunst everywhere

  • notification area: as described in the status bar section above, I had trouble making nm-applet work. The tray icon shows up once you pass --indicator, but it is not clickable; if you don't see it at all, check the bar.tray_output property in the Sway config (try: tray_output *). This was the biggest irritant in my migration, and I eventually fixed it by switching from py3status to waybar.

  • window switcher: in i3 I was using this bespoke i3-focus script, which doesn't work under Sway; swayr is an option, but not in Debian. So I put together this other bespoke hack from multiple sources, which works.

  • PDF viewer: currently using atril and sioyek (both of which support Wayland), could also just switch to zathura/mupdf permanently, see also calibre for a discussion on document viewers

See also this list of useful addons and this other list for other app alternatives.

More X11 / Wayland equivalents

For all the tools above, it's not exactly clear what options exist in Wayland, or when they do, which one should be used. But for some basic tools, it seems the options are actually quite clear. If that's the case, they should be listed here:

X11 Wayland In Debian
arandr wdisplays yes
autorandr kanshi yes
xdotool wtype yes
xev wev, xkbcli interactive-wayland yes
xlsclients swaymsg -t get_tree yes
xprop wlprop or swaymsg -t get_tree no
xrandr wlr-randr yes

lswt is a more direct replacement for xlsclients but is not packaged in Debian.

xkbcli interactive-wayland is part of the libxkbcommon-tools package.

See also:

Note that arandr and autorandr are not directly part of X. arewewaylandyet.com refers to a few alternatives. We suggest wdisplays and kanshi above (see also this service file) but wallutils can also do the autorandr stuff, apparently, and nwg-displays can do the arandr part. Neither are packaged in Debian yet.

So I have tried wdisplays and it Just Works, and well. The UI even looks better and more usable than arandr, so another clean win from Wayland here.

TODO: test kanshi as a autorandr replacement

Other issues

systemd integration

I've had trouble getting session startup to work. This is partly because I had a kind of funky system to start my session in the first place. I used to have my whole session started from .xsession like this:

#!/bin/sh

. ~/.shenv

systemctl --user import-environment

exec systemctl --user start --wait xsession.target

But obviously, the xsession.target is not started by the Sway session. It seems to just start a default.target, which is really not what we want because we want to associate the services directly with the graphical-session.target, so that they don't start when logging in over (say) SSH.

damjan on #debian-systemd showed me his sway-setup which features systemd integration. It involves starting a different session in a completely new .desktop file. That work was submitted upstream but refused on the grounds that "I'd rather not give a preference to any particular init system." Another PR was abandoned because "restarting sway does not makes sense: that kills everything".

The work was therefore moved to the wiki.

So. Not a great situation. The upstream wiki systemd integration suggests starting the systemd target from within Sway, which has all sorts of problems:

  • you don't get Sway logs anywhere
  • control groups are all messed up

I have done a lot of work trying to figure this out, but I remember that starting systemd from Sway didn't actually work for me: my previously configured systemd units didn't correctly start, and especially not with the right $PATH and environment.

So I went down that rabbit hole and managed to correctly configure Sway to be started from the systemd --user session. I have partly followed the wiki but also picked ideas from damjan's sway-setup and xdbob's sway-services. Another option is uwsm (not in Debian).

This is the config I have in .config/systemd/user/:

I have also configured those services, but that's somewhat optional:

You will also need at least part of my sway/config, which sends the systemd notification (because, no, Sway doesn't support any sort of readiness notification, that would be too easy). And you might like to see my swayidle-config while you're there.
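
To give an idea of the shape of this, here is a heavily simplified sketch of such a unit (not my exact file; names and dependencies here are assumptions):

# sketch: Type=notify pairs with an "exec systemd-notify --ready"
# line in the Sway config itself
[Unit]
Description=sway - Wayland window manager
BindsTo=graphical-session.target
Before=graphical-session.target

[Service]
Type=notify
NotifyAccess=all
ExecStart=/usr/bin/sway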

Finally, you need to hook this up somehow to the login manager. This is typically done with a desktop file, so drop sway-session.desktop in /usr/share/wayland-sessions and sway-user-service somewhere in your $PATH (typically /usr/bin/sway-user-service).
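
The desktop file itself is tiny; a sketch (the Exec value is an assumption, point it at wherever the wrapper script actually lives):

# sketch only; adjust Exec to the actual wrapper location
[Desktop Entry]
Name=Sway (systemd user session)
Comment=Wayland compositor started through the systemd user instance
Exec=sway-user-service
Type=Application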

The session then looks something like this:

$ systemd-cgls | head -101
Control group /:
-.slice
├─user.slice (#472)
│ → user.invocation_id: bc405c6341de4e93a545bde6d7abbeec
│ → trusted.invocation_id: bc405c6341de4e93a545bde6d7abbeec
│ └─user-1000.slice (#10072)
│   → user.invocation_id: 08f40f5c4bcd4fd6adfd27bec24e4827
│   → trusted.invocation_id: 08f40f5c4bcd4fd6adfd27bec24e4827
│   ├─user@1000.service … (#10156)
│   │ → user.delegate: 1
│   │ → trusted.delegate: 1
│   │ → user.invocation_id: 76bed72a1ffb41dca9bfda7bb174ef6b
│   │ → trusted.invocation_id: 76bed72a1ffb41dca9bfda7bb174ef6b
│   │ ├─session.slice (#10282)
│   │ │ ├─xdg-document-portal.service (#12248)
│   │ │ │ ├─9533 /usr/libexec/xdg-document-portal
│   │ │ │ └─9542 fusermount3 -o rw,nosuid,nodev,fsname=portal,auto_unmount,subt…
│   │ │ ├─xdg-desktop-portal.service (#12211)
│   │ │ │ └─9529 /usr/libexec/xdg-desktop-portal
│   │ │ ├─pipewire-pulse.service (#10778)
│   │ │ │ └─6002 /usr/bin/pipewire-pulse
│   │ │ ├─wireplumber.service (#10519)
│   │ │ │ └─5944 /usr/bin/wireplumber
│   │ │ ├─gvfs-daemon.service (#10667)
│   │ │ │ └─5960 /usr/libexec/gvfsd
│   │ │ ├─gvfs-udisks2-volume-monitor.service (#10852)
│   │ │ │ └─6021 /usr/libexec/gvfs-udisks2-volume-monitor
│   │ │ ├─at-spi-dbus-bus.service (#11481)
│   │ │ │ ├─6210 /usr/libexec/at-spi-bus-launcher
│   │ │ │ ├─6216 /usr/bin/dbus-daemon --config-file=/usr/share/defaults/at-spi2…
│   │ │ │ └─6450 /usr/libexec/at-spi2-registryd --use-gnome-session
│   │ │ ├─pipewire.service (#10403)
│   │ │ │ └─5940 /usr/bin/pipewire
│   │ │ └─dbus.service (#10593)
│   │ │   └─5946 /usr/bin/dbus-daemon --session --address=systemd: --nofork --n…
│   │ ├─background.slice (#10324)
│   │ │ └─tracker-miner-fs-3.service (#10741)
│   │ │   └─6001 /usr/libexec/tracker-miner-fs-3
│   │ ├─app.slice (#10240)
│   │ │ ├─xdg-permission-store.service (#12285)
│   │ │ │ └─9536 /usr/libexec/xdg-permission-store
│   │ │ ├─gammastep.service (#11370)
│   │ │ │ └─6197 gammastep
│   │ │ ├─dunst.service (#11958)
│   │ │ │ └─7460 /usr/bin/dunst
│   │ │ ├─wterminal.service (#13980)
│   │ │ │ ├─69100 foot --title pop-up
│   │ │ │ ├─69101 /bin/bash
│   │ │ │ ├─77660 sudo systemd-cgls
│   │ │ │ ├─77661 head -101
│   │ │ │ ├─77662 wl-copy
│   │ │ │ ├─77663 sudo systemd-cgls
│   │ │ │ └─77664 systemd-cgls
│   │ │ ├─syncthing.service (#11995)
│   │ │ │ ├─7529 /usr/bin/syncthing -no-browser -no-restart -logflags=0 --verbo…
│   │ │ │ └─7537 /usr/bin/syncthing -no-browser -no-restart -logflags=0 --verbo…
│   │ │ ├─dconf.service (#10704)
│   │ │ │ └─5967 /usr/libexec/dconf-service
│   │ │ ├─gnome-keyring-daemon.service (#10630)
│   │ │ │ └─5951 /usr/bin/gnome-keyring-daemon --foreground --components=pkcs11…
│   │ │ ├─gcr-ssh-agent.service (#10963)
│   │ │ │ └─6035 /usr/libexec/gcr-ssh-agent /run/user/1000/gcr
│   │ │ ├─swayidle.service (#11444)
│   │ │ │ └─6199 /usr/bin/swayidle -w
│   │ │ ├─nm-applet.service (#11407)
│   │ │ │ └─6198 /usr/bin/nm-applet --indicator
│   │ │ ├─wcolortaillog.service (#11518)
│   │ │ │ ├─6226 foot colortaillog
│   │ │ │ ├─6228 /bin/sh /home/anarcat/bin/colortaillog
│   │ │ │ ├─6230 sudo journalctl -f
│   │ │ │ ├─6233 ccze -m ansi
│   │ │ │ ├─6235 sudo journalctl -f
│   │ │ │ └─6236 journalctl -f
│   │ │ ├─afuse.service (#10889)
│   │ │ │ └─6051 /usr/bin/afuse -o mount_template=sshfs -o transform_symlinks -…
│   │ │ ├─gpg-agent.service (#13547)
│   │ │ │ ├─51662 /usr/bin/gpg-agent --supervised
│   │ │ │ └─51719 scdaemon --multi-server
│   │ │ ├─emacs.service (#10926)
│   │ │ │ ├─ 6034 /usr/bin/emacs --fg-daemon
│   │ │ │ └─33203 /usr/bin/aspell -a -m -d en --encoding=utf-8
│   │ │ ├─xdg-desktop-portal-gtk.service (#12322)
│   │ │ │ └─9546 /usr/libexec/xdg-desktop-portal-gtk
│   │ │ ├─xdg-desktop-portal-wlr.service (#12359)
│   │ │ │ └─9555 /usr/libexec/xdg-desktop-portal-wlr
│   │ │ └─sway.service (#11037)
│   │ │   ├─6037 /usr/bin/sway
│   │ │   ├─6181 swaybar -b bar-0
│   │ │   ├─6209 py3status
│   │ │   ├─6309 /usr/bin/i3status -c /tmp/py3status_oy4ntfnq
│   │ │   └─6969 Xwayland :0 -rootless -terminate -core -listen 29 -listen 30 -…
│   │ └─init.scope (#10198)
│   │   ├─5909 /lib/systemd/systemd --user
│   │   └─5911 (sd-pam)
│   └─session-7.scope (#10440)
│     ├─5895 gdm-session-worker [pam/gdm-password]
│     ├─6028 /usr/libexec/gdm-wayland-session --register-session sway-user-serv…
[...]

I think that's pretty neat.

Environment propagation

At first, my terminals and rofi didn't have the right $PATH, which broke a lot of my workflow. It's hard to tell exactly how Wayland gets started or where to inject environment. This discussion suggests a few alternatives and this Debian bug report discusses this issue as well.

I eventually picked environment.d(5) since I already manage my user session with systemd, and it fixes a bunch of other problems. I used to have a .shenv that I had to manually source everywhere. The only problem with that approach is that it doesn't support conditionals, but that's something that's rarely needed.

Pipewire

This is a whole topic onto itself, but migrating to Wayland also involves using Pipewire if you want screen sharing to work. You can actually keep using Pulseaudio for audio, that said, but that migration is actually something I've wanted to do anyways: Pipewire's design seems much better than Pulseaudio, as it folds in JACK features which allows for pretty neat tricks. (Which I should probably show in a separate post, because this one is getting rather long.)

I first tried this migration in Debian bullseye, and it didn't work very well. Ardour would fail to export tracks and I would get into weird situations where streams would just drop mid-way.

A particularly funny incident is when I was in a meeting and I couldn't hear my colleagues speak anymore (but they could) and I went on blabbering on my own for a solid 5 minutes until I realized what was going on. By then, people had tried numerous ways of letting me know that something was off, including (apparently) coughing, saying "hello?", chat messages, IRC, and so on, until they just gave up and left.

I suspect that was also a Pipewire bug, but it could also have been that I muted the tab by mistake, as I recently learned that clicking on the little speaker icon on a tab mutes that tab. Since the tab itself can get pretty small when you have lots of them, it actually happens quite frequently that I mistakenly mute tabs.

Anyways. Point is: I already knew how to make the migration, and I had already documented how to make the change in Puppet. It's basically:

apt install pipewire pipewire-audio-client-libraries pipewire-pulse wireplumber 

Then, as a regular user:

systemctl --user daemon-reload
systemctl --user --now disable pulseaudio.service pulseaudio.socket
systemctl --user --now enable pipewire pipewire-pulse
systemctl --user mask pulseaudio

An optional (but key, IMHO) configuration you should also make is to "switch on connect", which will make your Bluetooth or USB headset automatically be the default route for audio, when connected. In ~/.config/pipewire/pipewire-pulse.conf.d/autoconnect.conf:

context.exec = [
    { path = "pactl"        args = "load-module module-always-sink" }
    { path = "pactl"        args = "load-module module-switch-on-connect" }
    #{ path = "/usr/bin/sh"  args = "~/.config/pipewire/default.pw" }
]

See the excellent — as usual — Arch wiki page about Pipewire for that trick and more information about Pipewire. Note that you must not put the file in ~/.config/pipewire/pipewire.conf (or pipewire-pulse.conf, maybe) directly, as that will break your setup. If you want to add to that file, copy the template from /usr/share/pipewire/pipewire-pulse.conf first.

So far I'm happy with Pipewire in bookworm, but I've heard mixed reports from it. I have high hopes it will become the standard media server for Linux in the coming months or years, which is great because I've been (rather boldly, I admit) on the record saying I don't like PulseAudio.

Rereading this now, I feel it might have been a little unfair, as "over-engineered and tries to do too many things at once" applies probably even more to Pipewire than PulseAudio (since it also handles video dispatching).

That said, I think Pipewire took the right approach by implementing existing interfaces like Pulseaudio and JACK. That way we're not adding a third (or fourth?) way of doing audio in Linux; we're just making the server better.

Keypress drops

Sometimes I lose keyboard presses. This correlates with the following warning from Sway:

déc 06 10:36:31 curie sway[343384]: 23:32:14.034 [ERROR] [wlr] [libinput] event5  - SONiX USB Keyboard: client bug: event processing lagging behind by 37ms, your system is too slow 

... and corresponds to an open bug report in Sway. It seems the "system is too slow" should really be "your compositor is too slow" which seems to be the case here on this older system (curie). It doesn't happen often, but it does happen, particularly when a bunch of busy processes start in parallel (in my case: a linter running inside a container and notmuch new).

The proposed fix for this in Sway is to gain real time privileges and add the CAP_SYS_NICE capability to the binary. We'll see how that goes in Debian once 1.8 gets released and shipped.
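
If you want to experiment before that lands in a package, one way to grant the capability by hand is with setcap. This is only my own sketch, assuming the binary lives at /usr/bin/sway, and the packaged fix may well do it differently:

sudo setcap cap_sys_nice+ep /usr/bin/sway

You can inspect the result with getcap /usr/bin/sway and undo it with setcap -r /usr/bin/sway.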

Output mirroring

Sway does not support output mirroring, a strange limitation considering the flexibility that software like wdisplays seems to offer.

(In practice, if you lay out two monitors on top of each other in that configuration, they do not actually mirror. Instead, sway assigns a workspace to each monitor, as if they were next to each other, but, confusingly, the cursor appears on both monitors. It's extremely disorienting.)

The bug report has been open since 2018 and has seen a long discussion, but basically no progress. Part of the problem is the ticket tries to tackle "more complex configurations" as well, not just output mirroring, so it's a long and winding road.

Note that other Wayland compositors (e.g. Hyprland, GNOME's Mutter) do support mirroring, so it's not a fundamental limitation of Wayland.

One workaround is to use a tool like wl-mirror, which creates a window that mirrors a specific output, and place that window in a different workspace. That way the window sits on the output you want to mirror to, showing the contents of the output you want to mirror from. The problem is that wl-mirror is not packaged in Debian yet.
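
For the record, my understanding is that you point wl-mirror at the name of the output you want to copy (as reported by swaymsg -t get_outputs) and then fullscreen the resulting window on the other screen, something like this (the output name is just an example):

swaymsg -t get_outputs
wl-mirror HDMI-A-1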

Another workaround mentioned in the thread is to use a presentation tool which supports mirroring on its own, or presenter notes. So far I have generally found workarounds for the problem, but it might be a big limitation for others.

Improvements over i3

Tiling improvements

There are a lot of improvements Sway could bring over plain i3. There are pretty neat auto-tilers that could replicate the configurations I used to have in Xmonad or Awesome, see:

Display latency tweaks

TODO: You can tweak the display latency in wlroots compositors with the max_render_time parameter, possibly getting lower latency than X11 in the end.
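
For example, I believe the setting looks something like this in the Sway config, with the output name and the 2 millisecond budget being purely illustrative values I haven't benchmarked:

output eDP-1 max_render_time 2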

Sound/brightness changes notifications

TODO: Avizo can display a pop-up to give feedback on volume and brightness changes. Not in Debian. Other alternatives include SwayOSD and sway-nc, also not in Debian.

Debugging tricks

The xeyes tool (in the x11-apps package) will run in a Wayland session and can be used to easily check whether a given window is native Wayland: if the "eyes" follow the cursor while it hovers over a window, that app is running through xwayland, and therefore not natively in Wayland.

Another way to see what is using Wayland in Sway is with the command:

swaymsg -t get_tree
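
The tree is a rather large JSON blob, so filtering it helps. Here is a sketch with jq that, assuming view nodes carry a shell field the way they do on my system, lists the windows still going through xwayland:

swaymsg -t get_tree | jq '.. | objects | select(.shell? == "xwayland") | {name, app_id, pid}'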

Other documentation

Conclusion

In general, this took me a long time, but it mostly works. The tray icon situation is pretty frustrating, but there's a workaround and I have high hopes it will eventually fix itself. I'm also worried about the DisplayLink support because I eventually want to be using this, but that's another thing that will hopefully fix itself before I need it.

A word on the security model

I'm kind of worried about all the hacks that have been added to Wayland just to make things work. Pretty much everywhere we need to, we punched a hole in the security model:

Wikipedia describes the security properties of Wayland as follows: it "isolates the input and output of every window, achieving confidentiality, integrity and availability for both." I'm not sure those properties are realized in the actual implementation, because of all those holes punched in the design, at least in Sway. For example, apparently the GNOME compositor doesn't have the virtual-keyboard protocol, but they do have (another?!) text input protocol.

Wayland does offer a better basis to implement such a system, however. It feels like the Linux application security model lacks critical decision points in the UI, like the user approving "yes, this application can share my screen now". Applications themselves might have some of those prompts, but it's not mandatory, and that is worrisome.

21 February, 2023 09:00PM

DebianProject.org

What is the difference between a Project and a proper Association?

We regularly see people referring to Debian and Fedora as Projects. They tell us that Debian is a Project. They tell us that Fedora is a Project. There is something fishy about this.

Here is one of those emails where a volunteer is referred to as a project member.

Why are we project members and not simply members?

The Cambridge English dictionary gives us the following definition of a project:

a piece of planned work or an activity that is finished over a period of time and intended to achieve a particular purpose

The key word is finished. A project starts with a plan and finishes with a product or outcome. Projects are transient in nature. Therefore, being part of a project team also implies a somewhat transient status.

When you consider the quantity and quality of the work and intellectual property that volunteers contribute to Debian and Fedora, this inferior and transient status is somewhat insulting.

This inconvenient vocabulary is no accident. Every time there is an election for the Debian Project Leader, somebody raises the idea of creating a proper Debian foundation or a Debian association. In other words, creating a body with its own legal status where all the volunteers have a status as equal members. In the 2022 election, Christian Kastner raised the topic here.

In reality, all of the money and other assets associated with Debian development have been siphoned off into other legal entities. They are described as "trusted organizations" and there is a list of them on the Debian wiki site. Each of these organizations puts out an obscure financial report from time to time.

In 2023, we will be celebrating 30 years of the Debian Project. To put it in the terminology of the Afghan war, Debian and Fedora are forever-projects, like forever-wars, that seem to be losing their way. The Debian Social Contract gives us promises of transparency but in 30 years, we have never seen anybody publish a consolidated set of Debian financial accounts.

What is there to hide and why?

Subject: Call for ideas -- useful ways of spending Debian money
Date: Tue, 1 Oct 2013 21:46:17 +0200
From: Lucas Nussbaum 
To: debian-private@lists.debian.org
CC: auditor@debian.org, philipp@hug.cx

[ TL;DR: ETOO_MUCH_MONEY -- need ideas to flush queue ]

Hi,

Thanks to the fantastic work of the DebConf13 sponsorship (fundraising)
team, DebConf13 generated a surplus. The current estimate of it is CHF 38k (that's USD 42k, or EUR 31k). That's excellent news.

The not-so-excellent news is that it means that the debconf13
association will have to pay income taxes for it. (no estimate yet;
Philipp Hug (DC13 and debian.ch treasurer) will get in touch with a tax
expert).

Even if Switzerland has been very welcoming towards Debian, it would not
be a bad idea to try to avoid paying too much taxes. A good way to do
that is to spend some of the surplus (in ways useful to Debian, of
course).

Could you start thinking of useful ways to spend some money? servers?
porter boxes? buildds? sprints? Of course, it would need to be spent before
the end of 2013. There are no known restrictions on what we can buy or
where we can ship. What we end up buying will of course be made public
as usual. To move forward, please reply to this mail, providing an
estimate and a justification. Or mail leader@ + auditor@ if you prefer.


Somehow related: we are participating in GNOME's Outreach Program for
Women, winter edition[1].

As already stated in April[2], I wouldn't favor a situation where Debian
funds are used to pay OPW participation on a regular basis. However, as
an experiment, it makes sense to help that happen for the first time (it
didn't happen in the summer edition).

So, if the fundraising effort currently being set up fails to raise
enough money for one stipend, but still raises a significant amount of
money, I will authorize the use of Debian money for the difference
(likely for at most $2900 -- that's half the stipend, so the other half
needs to come from fundraising).

[1] https://lists.debian.org/debian-women/2013/09/msg00058.html
[2] https://lists.debian.org/debian-project/2013/04/msg00108.html


Q: Why is this on -private@?
A: Because I'm not sure yet how cautious we need to be about the DC13
   surplus situation. Better safe than sorry. We can restart the
   discussion on a public list when/if things are cleared.
 Thanks,

Lucas

and then there was this... $300,000 from Google hidden behind another $300,000 donation from Handshake Foundation:

Subject: Realizing Good Ideas with Debian Money
Date: Wed, 29 May 2019 07:49:25 -0400
From: Sam Hartman 
Reply-To: debian-project@lists.debian.org
To: Mo Zhou 
CC: Andrey Rahmatullin , debian-devel@lists.debian.org, debian-project@lists.debian.org


[moving a discussion from -devel to -project where it belongs]

>>>>> "Mo" == Mo Zhou  writes:

    Mo> Hi,
    Mo> On 2019-05-29 08:38, Raphael Hertzog wrote:
    >> Use the $300,000 on our bank accounts?

So, there were two $300k donations in the last year.
One of these was earmarked for a DSA equipment upgrade.
DSA has a couple of options to pursue, but it's possible they may
actually spend $400k on an equipment refresh.

$200k doesn't really go that far in terms of big infrastructure projects
like bikeshed or similar.

I'm looking for someone who would be willing to guide a discussion of
the Money issues Martin brought up in his campaign.  I don't have time
to guide that effor myself.  Real thought needs to be put into it; it
will be at least as much work as the discussions I'm leading on
packaging practices and git if done correctly.

However it could be very valuable for the project.

--Sam

21 February, 2023 05:00PM

February 20, 2023

hackergotchi for Jonathan McDowell

Jonathan McDowell

Fixing mobile viewing

It was brought to my attention recently that the mobile viewing experience of this blog was not exactly what I’d hope for. In my poor defence I proofread on my desktop and the only time I see my posts on mobile is via FreshRSS. Also my UX ability sucks.

Anyway. I’ve updated the “theme” to a more recent version of minima and tried to make sure I haven’t broken it all in the process (I did break tagging, but then I fixed it again). I double checked the generated feed to confirm it was the same (other than some re-tagging I did), so hopefully I haven’t flooded anyone’s feed.

Hopefully I can go back to ignoring the underlying blog engine for another 5+ years. If not I’ll have to take a closer look at Enrico’s staticsite.

20 February, 2023 07:09PM

February 19, 2023

Debian Chat

Debian Chat in Context

Before we can fix chat on Debian, we need to consider where we are today and how we got here.

The beginnings

The traditional chat solution for Debian Developers and the wider world of free software developers is Internet Relay Chat, denoted by the acronym IRC.

IRC began in a period when most people were using a desktop, few people were using a laptop and virtually nobody had a wireless device such as a mobile phone. The main competitors were dial-up bulletin board systems, each of them being an island.

The size of the Internet in that era meant that only a limited number of servers were necessary and username collisions were not a major problem.

Where we are now

We won't go through the evolutionary changes step by step. Rather, we will simply fast forward to the problem today.

Today, many people already have a range of chat services before they even begin participating in free software. For example, somebody may have created accounts using social media during their teenage years. These users already have a critical mass of friends on those platforms and they are comfortable with the user interfaces on their mobile phones.

Asking them to start using IRC requires a big jump and a steep learning curve. It is obvious they can't bring their old friends with them. Using IRC means trying to maintain an entirely new persona alongside their existing personas on other platforms. This in itself is a burden on the mental capacity of any user. Many people only use the most basic features of IRC and only when they have to.

Where we are going

There have recently been attempts to coordinate multiple chat programs into a single interface. The popular Matrix chat software attempts to provide full integration for legacy IRC. Nonetheless, the Matrix developers themselves admit that they don't have a comprehensive solution to federation and identity, in other words, Matrix is marginally better than some alternatives but it is not a silver bullet.

Federated solutions are not new: both SIP and XMPP are federated real-time protocols that support chat messaging. It raises the question: why didn't Matrix simply extend one of those existing protocols?

In parallel, while Matrix has pursued a federated approach, other developers have explored peer-to-peer and blockchain oriented solutions. One example of this is the Ring platform, now known as Jami.

The peer-to-peer nature of Jami complements the federated strategy behind Matrix.

In the world of SIP, we also have SIP RELOAD, a peer-to-peer, serverless technology that is another alternative to Jami.

Next steps

The above comments attempt to clarify the current situation. In the next blog, we will examine strategic considerations for Debian and other open source users to move forward productively in the world of chat and IM.

19 February, 2023 01:30PM

Russell Coker

New 18 Core CPU and NVMe

I just got an E5-2696 v3 CPU for my ML110 Gen9 home workstation. It has a Passmark score of 23326, roughly 2.5 times the 9224 of the E5-2620 v4. Previously it took over 40 minutes of real time to compile a 6.10 kernel based on the Debian kernel configuration; now it takes 14 minutes of real time, 202 minutes of user time, and 37 minutes of system CPU time. That's a definite benefit of having a faster CPU: I don't often compile kernels, but when I do I don't want to wait 40+ minutes for a result. I also expanded the system from 96G of RAM to 128G. Most of the time I don't need so much RAM, but it's better to have too much than too little, particularly as my friend got me a good deal on RAM. The extra RAM might have helped improve performance too, as going from 6/8 DIMM slots full to 8/8 might help the CPU balance access.

That series of HP machines has a plastic mounting bracket for the CPU, see this video about the HP Proliant Smart Socket for details [1]. I was working on this with a friend who has the same model of HP server as I do, after buying myself a system I was so happy with it that I bought another the same when I saw it going for a good price and then sold it to my friend when I realised that I had too many tower servers at home. It turns out that getting the same model of computer as a friend is a really good strategy so then you can work together to solve problems with it. My friend’s first idea was to try and buy new clips for the new CPUs (which would have delayed things and cost more money), but Reddit and some blog posts suggested that you can just skip the smart-socket guide clip and when the chip was resting in the socket it felt secure as the protrusions on the sides of the socket fit firmly enough into the notches in the CPU to prevent it moving far enough to short a connection. Testing on 2 systems showed that you don’t need the clip. As an aside it would be nice if Intel made every CPU that fits a particular socket have the same physical dimensions so clips and heatsinks can work well on all CPUs.

The TDP of the new CPU is 145W and the old one was 85W. One would hope that in a server class system that wouldn’t make a lot of difference but unfortunately the difference was significant. Previously I could have the system running 7/8 cores with BOINC 24*7 and I wouldn’t notice the fans being louder. It is possible that 100% CPU use on a hot day might make the fans sound louder if I didn’t have an air-conditioner on that was loud enough to drown them out, but the noteworthy fact is that with the previous CPU the system fans were a minor annoyance. Now if I have 16 cores running BOINC it’s quite loud, the sort of noise that makes most people avoid using tower servers as workstations! I’ve found that if I limit it to 4 or 5 cores then the system is about as quiet as it was before. As a rough approximation I can use as much CPU power as before without making the fans louder but if I use more CPU power than was previously available it gets noisy.

I also got some new NVMe devices, I was previously using 2*Crucial 1TB P1 NVMes in a BTRFS RAID-1 and now I have 2*Crucial 1TB P3 NVMes (where P1 is the slowest Crucial offering, P3 is better and more expensive, P5 is even better, etc). When doing the BTRFS migrations to move my workstation to new NVMe devices and my server to the old NVMe devices I found that the P3 series seem to have a limit of about 70MB/s for sustained random writes and the P1 series is about 35MB/s. Apparently with the cheaper NVMe devices they slow down if you do lots of random writes, pity that all the review articles talking about GB/s speeds don’t mention this. To see how bad reviews are Google some reviews of these SSDs, you will find a couple of comment threads on places like Reddit of them slowing down with lots of writes and lots of review articles on well known sites that don’t mention it. Generally I’d recommend not upgrading from P1 to P3 NVMe devices, the benefit isn’t enough to cover the effort. For every capacity of NVMe devices the most expensive devices cost more than twice as much as the cheapest devices, and sometimes it will be worth the money. Getting the most expensive device won’t guarantee great performance but getting cheap devices will guarantee that it’s slow.

It seems that CPU development isn't progressing as well as it used to: the CPU I just bought was released in 2015 and scored 23,343 according to Passmark [2]. The most expensive Intel CPU on offer at my local computer store is the i9-13900K, which was released this year and scores 62,914 [3]. One might say that CPUs designed for servers are different from ones designed for desktop PCs, but the i9 in question has a “TDP Up” of 253W, which is too big for the PSU I have! According to the HP web site, the new ML110 Gen10 servers aren't sold with a CPU as fast as the E5-2696 v3! In the period from 1988 to about 2015, every year there were new CPUs with new capabilities that were worth an upgrade. Now, for the last 8 years or so, there hasn't been much improvement at all. Buy a new PC for better USB ports or something, not for a faster CPU!

19 February, 2023 12:13PM by etbe

Debian Ireland

Making Debian work for Ireland

Well known Debian Developer Daniel Pocock, founder of the Software Freedom Institute is an Irish citizen.

Pocock has been disappointed by the progress of adapting Debian for Ireland and created this page to help volunteers begin.

Please see these pages:

More details will be added here as the internationalization project progresses.

If you wish to discuss Irish in Debian, please contact Daniel Pocock at the Software Freedom Institute

19 February, 2023 08:00AM

Outreachy Dating

Recognizing relationships and false accusations in GSoC and Outreachy

For about five years now Debian fanatics and their rent-a-mob have been spreading rumors about a mentor.

Many of us trust Debian as an operating system for our computers and servers. But can we really trust the people who make Debian?

Here is Ariadne Conill spreading rumours about a mentor girlfriending one of the GSoC interns:

Ariadne Conill

The last woman this mentor was responsible for is Elena Gjevukaj. In the middle of her internship, she sent the mentor a picture of her wedding.

Oops. Debian lies. Ariadne lies. If the woman got married then it is totally absurd to suggest she was the mentor's girlfriend.

Subject: 	Surprise
Date: 	Wed, 15 Aug 2018 01:14:54 +0200
From: 	Elena Gjevukaj <gjevukaje@gmail.com>
To: 	Daniel Pocock <daniel@pocock.pro>

We got married! 😂
Elena Gjevukaj

Yet Ariadne persists. She is even stalking the mentor on Twitter, despite the fact the mentor doesn't have any social media accounts.

Ariadne Conill

There is a lot more evidence too. In fact, the mentor was denied funding to attend DebConf18 in 2018. Here is the email:

Subject: Your bursary request for DebConf18: status updated
Date: Wed, 13 Jun 2018 18:35:52 -0000
From: <bursaries@debconf.org>
To: <daniel@pocock.pro>

Dear Daniel Pocock,

The bursaries team has updated the status of your bursary request for DebConf18.

Travel bursary
--------------

Your request for a travel bursary has been evaluated and ranked. However, we are
unable to grant it at this time: our travel budget is very limited, and we had
to defer a lot of strong applications. We will let you know as soon as possible,
hopefully before the end of June, if we can grant you the amount you have
requested, as our budget evolves and higher ranked applicants finalize their
plans.


Food bursary
------------

You have told us that you would be completely unable to come to DebConf if you
weren't granted a travel bursary. Your food bursary is therefore pending an
update on the travel bursaries front. If you're able to join us nonetheless,
let the bursaries team know so we can update your "level of need". Note that
this will be reflected in your travel bursary ranking.


Accommodation bursary
---------------------

You have told us that you would be completely unable to come to DebConf if you
weren't granted a travel bursary. Your accommodation bursary is therefore
pending an update on the travel bursaries front. If you're able to join us
nonetheless, let the bursaries team know so we can update your "level of need".
Note that this will be reflected in your travel bursary ranking.


You can review the full status of your bursary request in your profile[1] on the
DebConf website.

[1] https://debconf18.debconf.org/users/pocock/
-- 
The DebConf18 bursaries team

Mentors do a lot of unpaid work for Google and Outreachy. Why did Debian and Google block this mentor going to DebConf18?

It looks like other developers wanted to have some personal time with the female interns. Jiin-Mei Lin published a photo gallery.

The gallery includes one inconvenient photo. It is the developer Lior Kaplan with his arm around an Outreachy. In fact, the woman concerned was subsequently employed by GNOME Foundation. She joined GNOME at the same time as Molly de Blanc.

Congratulations to this woman. She survived DebConf and she outlasted both Molly de Blanc and Neil McGovern at GNOME. The Albanian woman in Lior's arms is the last man standing.

Lior Kaplan, DebConf18, GNOME, Outreachy

At DebConf19 in Brazil, it was even worse. We saw pictures of the Debian Project Leader, Chris Lamb, with a table full of Albanian women at the conference dinner:

Chris Lamb, Anisa Kuci, DebConf19, Brazil, Albanian women

Eight weeks later and the woman sitting closest to Lamby won the Outreachy internship, $6,000 and more free trips:

Anisa Kuci, Chris Lamb, Outreachy, favoritism

How do other women feel when they waste two or three evenings doing the Outreachy application test and then they see photos suggesting the Debian leader had a romantic history with the winner?

FSFE is at it too

Here is that picture from OSCAL in Tirana, Albania where we see the FSFE president Matthias Kirschner with a table full of young Albanian girls.

Matthias Kirschner, OSCAL, Tirana, Albania, FSFE, women

Now we found another picture, it is Kirschner's predecessor at the FSFE, Karsten Gerloff taking a patriarchal pose with his arm around a smiling young woman from Eastern Europe:

Karsten Gerloff, FSFE, Women

RMS signatories were not victims

Here one of the women tells us she was not a victim.

RMS, Richard Stallman, petition

Nicolas Dandrimont was at it too

Dandrimont is one of the Debian Account Managers. He tried to bring his girlfriend into Outreachy.

Subject: Recusing myself from Outreachy applicant selection decisions, internships funding
Date: Fri, 14 Oct 2016 12:37:46 +0200
From: Nicolas Dandrimont <olasd@debian.org>
To: <leader@debian.org>, <outreach@debian.org>
CC: <mapreri@debian.org>, <pocock@debian.org>

Hey all,

As of today, the person I'm involved with, Pauline Pommeret, is applying to an
Outreachy internship in Debian (on the GPG cleanroom environment project - I
don't see her mail on the list archive yet, so something must have gone wrong,
but it should arrive soon enough).

To avoid an obvious conflict of interest, I am recusing myself for any
decisions regarding applicant selections for this round.

I am of course still happy to serve as a liaison with the Outreachy program
administrators, and to forward our applicants to them for general funding when
selected, if the money allocated by Debian runs out.

This would especially be relevant, in my opinion, to RTC projects, as I'm not
sure at all that we should fund them from Debian money directly. Karen Sandler
also told me that one of the Outreachy sponsors was interested in funding
interns on Reproducible Builds. All in all, we should be able to have two or
three internship slots with Debian only disbursing one.

I'll stay on the outreach@d.o alias for now, but let me know if you need help
ranking applicants, and I'll ask DSA to remove me so you can discuss at ease.

Cheers,
-- 
Nicolas Dandrimont

19 February, 2023 06:00AM

Debian Finance

Migrating from xTuple Postbooks to Tryton

At xTupleCon 2014, Debian Developer Daniel Pocock was awarded Community Member of the Year for his work converting the Postbooks software into Debian packages.

Daniel Pocock, xTuple, Debian, Community Member of the Year

The Debian and Red Hat Postbooks packages remain available to this day for anybody who is willing to compile them locally. However, xTuple decided in 2019 that they would no longer provide an open source model for collaboration with Postbooks users. In other words, future versions of the PostBooks source code will not be published by xTuple, and access to the discussion forums and bug trackers is limited to paying customers.

Using Tryton for new projects

People who are beginning a new accounting or ERP project will find Tryton is more interesting than xTuple. The Tryton organization has been designed to promote openness and inclusion. Everybody participates on an equal footing as volunteers.

Migrating an existing Postbooks database to Tryton

There is no standard tool for this database migration. Each migration tends to be unique and depends on the attributes of the business.

If you wish to discuss a migration from Postbooks to Tryton, please contact Daniel Pocock at the Software Freedom Institute

19 February, 2023 06:00AM

February 17, 2023

Enrico Zini

Monitoring a heart rate monitor

I bought myself a cheap wearable Bluetooth LE heart rate monitor in order to play with it, and this is a simple Python script to monitor it and plot data.

Bluetooth LE

I was surprised that these things seem decently interoperable.

You can use hcitool to scan for devices:

hcitool lescan

You can then use gatttool to connect to devices and poke at them interactively from the command line.
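
For example, something like this drops you into an interactive prompt (the address is made up, and -t random may be needed for devices advertising a random address):

gatttool -t random -b AA:BB:CC:DD:EE:FF -I

From there, connect and characteristics let you explore what the device exposes.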

Bluetooth LE from Python

There is a nice library called Bleak which is also packaged in Debian. It's modern Python with asyncio and works beautifully!
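
As a taste of what that looks like, here is a minimal sketch that subscribes to the standard GATT Heart Rate Measurement characteristic and prints beats per minute. The address is a placeholder and the parsing only covers the 8-bit and 16-bit value formats from the spec:

import asyncio
from bleak import BleakClient

# Standard GATT Heart Rate Measurement characteristic UUID
HR_MEASUREMENT = "00002a37-0000-1000-8000-00805f9b34fb"

def on_hr(_sender, data: bytearray):
    # First byte is a flags field; bit 0 tells whether the value is uint8 or uint16
    if data[0] & 0x01:
        bpm = int.from_bytes(data[1:3], "little")
    else:
        bpm = data[1]
    print(f"{bpm} bpm")

async def main(address: str):
    async with BleakClient(address) as client:
        await client.start_notify(HR_MEASUREMENT, on_hr)
        await asyncio.sleep(60)  # listen for a minute

asyncio.run(main("AA:BB:CC:DD:EE:FF"))  # placeholder address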

Heart rate monitors

Things I learnt:

How about a proper fitness tracker?

I found OpenTracks, also on F-Droid, which seems nice.

Why script it from a desktop computer?

The question is: why not?

A fitness tracker on a phone is useful, but there are lots of silly things one can do from one's computer that one can't do from a phone. A heart rate monitor is, after all, one more input device, and there are never enough input devices!

There are so many extremely important use cases that seem entirely unexplored:

  • Log your heart rate with your git commits!
  • Add your heart rate as a header in your emails!
  • Correlate heart rate information with your work activity tracker to find out what tasks stress you the most!
  • Sync ping intervals with your own heartbeat, so you get faster replies when you're more anxious!
  • Configure workrave to block your keyboard if you get too excited, to improve the quality of your mailing list contributions!
  • You can monitor the monitor script of the heart rate monitor that monitors you! Forget buffalo, be your monitor monitor monitor monitor monitor monitor monitor monitor...

17 February, 2023 10:22PM