
June 04, 2023

Thorsten Alteholz

My Debian Activities in May 2023

FTP master

This month I accepted 157 and rejected 22 packages. The overall number of packages that got accepted was 160.

Debian LTS

This was my hundred and seventh month in which I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my overall workload was 14h.

During that time I uploaded:

  • [DLA 3430-1] cups-filters security update for one CVE
  • [DSA 5407-1] cups-filters security update for one CVE
  • [unstable] upload of cups-filters to fix CVE-2023-24805
  • [#1036548] unblock bug to fix CVE-2023-24805 in bookworm
  • [unstable] upload of sniproxy to fix CVE-2023-25076
  • [DSA 5413-1] sniproxy security update in Bullseye for one CVE
  • [cups] working to fix CVE-2023-32324 in unstable, Bookworm, Bullseye, Buster

The CVEs for cups-filters and cups were embargoed, so the work on cups was done in May but the uploads will happen in June.

I also did some work on security-master to inject missing dependencies for hugo and gitlab-workhorse.

Last but not least I did some days on frontdesk duties.

Debian ELTS

This month was the fifty-eighth ELTS month.

  • [ELA-852-1] cups-filters security update in Jessie and Stretch for one CVE
  • [ELA-856-1] freetype security update in Jessie and Stretch for two CVEs
  • [ELA-857-1] libtasn1-6 security update in Jessie and Stretch for one CVE
  • [cups] working to fix CVE-2023-32324 in Jessie and Stretch

The CVEs for cups-filters and cups were embargoed, so the work on cups was done in May but the uploads will happen in June.

Last but not least I did some days on frontdesk duties.

Debian Astro

This month I uploaded some packages to fix RC bugs that were detected by one of the many QA tools:

Thanks a lot to all the hardworking people who run these tools!

Debian Printing

This month I could fix RC bugs in:

This work is generously funded by Freexian!

Debian Mobcom

This month I could fix RC bugs in:

Other stuff

Some other packages also had last minute RC bugs:

I even did an upload of a new package, force-ip-protocol. I finally had enough of people who use IPv6 for their hosts but are unable to configure it properly. Now I can force firefox, or whatever software, to use only IPv4. One nuisance settled.

04 June, 2023 10:43AM by alteholz

June 03, 2023


Ben Hutchings

FOSS activity in May 2023

03 June, 2023 04:50PM

June 02, 2023

Jelmer Vernooij

Porting Python projects to Rust

I’ve recently been working on porting some of my Python code to rust, both for performance reasons, and because of the strong typing in the language. As a fan of Haskell, I also just really enjoy using the language.

Porting any large project to a new language can be a challenge. There is a temptation to do a rewrite from the ground up in idiomatic rust, using all the fancy new features of the language.

Porting in one go

However, this is a bit of a trap:

  • It blocks other work. It can take a long time to finish the rewrite, during which time there is no good place to make other bug fixes/feature changes. If you make the change in the python branch, then you may also have to patch the in-progress rust fork.
  • No immediate return on investment. While the rewrite is happening, all of the investment in it is sunk costs.
  • Throughout the process, you can only run the tests for subsystems that have already been ported. It’s common to find subtle bugs later in code ported early.
  • Understanding existing code, porting it, and making it idiomatic rust all at the same time takes more time and requires more post-facto debugging.

Iterative porting

Instead, we’ve found that it works much better to take an iterative approach. One of the hidden gems of rust is the excellent PyO3 crate, which allows creating python bindings for rust code in a way that is several times less verbose and less painful than C or SWIG. Because of rust’s strong ownership model, it’s also really hard to muck up e.g. reference counts when creating Python bindings for rust code.

We port individual functions or classes to rust one at a time, starting with functionality that doesn’t have dependencies on other python code and gradually working our way up the call stack.

Each subsystem of the code is converted to two matching rust crates: one with a port of the code to pure rust, and one with python bindings for the rust code. Generally multiple python modules end up being a single pair of rust crates.

The signatures of the pure Rust code follow rust conventions, but the business logic is mostly ported as-is (just in rust syntax) and the signatures of the python bindings match those of the original python code.

This then allows running the original python tests to verify that the code still behaves the same way. Changes can also immediately land on the main branch.

A subsequent step is usually to refactor the rust code to be more idiomatic - all the while keeping the tests passing. There is also the potential to e.g. switch to using external rust crates (with perhaps subtly different behaviour), or drop functionality altogether.

At some point, we will also port the tests from python to rust, and potentially drop the python bindings - once all the callers have been converted to rust.

Example

For example, imagine I have a Python module janitor/mail_filter.py with this function:

def parse_plain_text_body(text):
    lines = text.splitlines()

    for i, line in enumerate(lines):
        if line == 'Reply to this email directly or view it on GitHub:':
            return lines[i + 1].split('#')[0]
        if (line == 'For more details, see:'
                and lines[i + 1].startswith('https://code.launchpad.net/')):
            return lines[i + 1]
        try:
            (field, value) = line.split(':', 1)
        except ValueError:
            continue
        if field.lower() == 'merge request url':
            return value.strip()
    return None

Porting this naively to rust (in a crate I’ve called “mailfilter”), it might look something like this:

pub fn parse_plain_text_body(text: &str) -> Option<String> {
    let lines: Vec<&str> = text.lines().collect();

    for (i, line) in lines.iter().enumerate() {
        if line == &"Reply to this email directly or view it on GitHub:" {
            return Some(lines[i + 1].split('#').next().unwrap().to_string());
        }
        if line == &"For more details, see:"
            && lines[i + 1].starts_with("https://code.launchpad.net/")
        {
            return Some(lines[i + 1].to_string());
        }
        if let Some((field, value)) = line.split_once(':') {
            if field.to_lowercase() == "merge request url" {
                return Some(value.trim().to_string());
            }
        }
    }
    None
}

Bindings are created in a crate called mailfilter-py, which looks like this:

use pyo3::prelude::*;

#[pyfunction]
fn parse_plain_text_body(text: &str) -> Option<String> {
    janitor_mail_filter::parse_plain_text_body(text)
}

#[pymodule]
pub fn _mailfilter(py: Python, m: &PyModule) -> PyResult<()> {
    m.add_function(wrap_pyfunction!(parse_plain_text_body, m)?)?;

    Ok(())
}

The metadata for the crates is what you’d expect. mailfilter-py uses PyO3 and depends on mailfilter.

[package]
name = "mailfilter-py"
version = "0.0.0"
authors = ["Jelmer Vernooij <jelmer@jelmer.uk>"]
edition = "2018"

[lib]
crate-type = ["cdylib"]

[dependencies]
janitor-mail-filter = { path = "../mailfilter" }
pyo3 = { version = ">=0.14", features = ["extension-module"] }

I use python-setuptools-rust to get the python ecosystem to build the python bindings. Here is what setup.py looks like:

#!/usr/bin/python3
from setuptools import setup
from setuptools_rust import RustExtension, Binding

setup(
    rust_extensions=[RustExtension(
        "janitor._mailfilter", "crates/mailfilter-py/Cargo.toml",
        binding=Binding.PyO3)],
)

And of course, setuptools-rust needs to be listed as a setup requirement in pyproject.toml or setup.cfg.

After that, we can replace the original python code with a simple import and verify that the tests still run:

from ._mailfilter import parse_plain_text_body
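
For instance, a test along these lines (the file name and the sample message are invented for illustration; the real project's tests are more extensive) should pass identically against the pure-python implementation and the rust-backed import:

# test_mail_filter.py - hypothetical example; it exercises whichever
# implementation janitor.mail_filter currently re-exports.
from janitor.mail_filter import parse_plain_text_body

def test_github_notification():
    text = (
        "Reply to this email directly or view it on GitHub:\n"
        "https://github.com/example/repo/pull/123#issuecomment-1\n"
    )
    assert parse_plain_text_body(text) == "https://github.com/example/repo/pull/123"

def test_unrelated_text():
    assert parse_plain_text_body("Hello world\n") is None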

Of course, not all bindings are as simple as this. Iterators in particular are more complicated, as is code that has a loose idea of ownership in python. But I’ve found that the time investment is usually well worth the ability to land changes on the development head early and often.

I’d be curious to hear if people have had success with other approaches to porting Python code to Rust. If you do, please leave a comment.

02 June, 2023 05:00PM by Jelmer Vernooij


Matt Brown

Calling time on DNSSEC: The costs exceed the benefits

I’m calling time on DNSSEC. Last week, prompted by a change in my DNS hosting setup, I began removing it from the few personal zones I had signed. Then this Monday the .nz ccTLD experienced a multi-day availability incident triggered by the annual DNSSEC key rotation process. This incident broke several of my unsigned zones, which led me to say very unkind things about DNSSEC on Mastodon and now I feel compelled to more completely explain my thinking:

For almost all domains and use-cases, the costs and risks of deploying DNSSEC outweigh the benefits it provides. Don’t bother signing your zones.

The .nz incident, while topical, is not the motivation or the trigger for this conclusion. Had it been a novel incident, it would still have been annoying, but novel incidents are how we learn so I have a small tolerance for them. The problem with DNSSEC is precisely that this incident was not novel, just the latest in a long and growing list.

It’s a clear pattern. DNSSEC is complex and risky to deploy. Choosing to sign your zone will almost inevitably mean that you will experience lower availability for your domain over time than if you leave it unsigned. Even if you have a team of DNS experts maintaining your zone and DNS infrastructure, the risk of routine operational tasks triggering a loss of availability (unrelated to any attempted attacks that DNSSEC may thwart) is very high - almost guaranteed to occur. Worse, because of the nature of DNS and DNSSEC these incidents will tend to be prolonged and out of your control to remediate in a timely fashion.

The only benefit you get in return for accepting this almost certain reduction in availability is trust in the integrity of the DNS data a subset of your users (those who validate DNSSEC) receive. Trusted DNS data that is then used to communicate across an untrusted network layer. An untrusted network layer which you are almost certainly protecting with TLS which provides a more comprehensive and trustworthy set of security guarantees than DNSSEC is capable of, and provides those guarantees to all your users regardless of whether they are validating DNSSEC or not.

In summary, in our modern world where TLS is ubiquitous, DNSSEC provides only a thin layer of redundant protection on top of the comprehensive guarantees provided by TLS, but adds significant operational complexity, cost and a high likelihood of lowered availability.

In an ideal world, where the deployment cost of DNSSEC and the risk of DNSSEC-induced outages were both low, it would absolutely be desirable to have that redundancy in our layers of protection. In the real world, given the DNSSEC protocol we have today, the choice to avoid its complexity and rely on TLS alone is not at all painful or risky to make as the operator of an online service. In fact, it’s the prudent choice that will result in better overall security outcomes for your users.

Ignore DNSSEC and invest the time and resources you would have spent deploying it improving your TLS key and certificate management.

Ironically, the one use-case where I think a valid counter-argument for this position can be made is TLDs (including ccTLDs such as .nz). Despite its many failings, DNSSEC is an Internet Standard, and as infrastructure providers, TLDs have an obligation to enable its use. Unfortunately this means that everyone has to bear the costs, complexities and availability risks that DNSSEC burdens these operators with. We can’t avoid that fact, but we can avoid creating further costs, complexities and risks by choosing not to deploy DNSSEC on the rest of our non-TLD zones.

But DNSSEC will save us from the evil CA ecosystem!

Historically, the strongest motivation for DNSSEC has not been the direct security benefits themselves (which as explained above are minimal compared to what TLS provides), but in the new capabilities and use-cases that could be enabled if DNS were able to provide integrity and trusted data to applications.

Specifically, the promise of DNS-based Authentication of Named Entities (DANE) is that with DNSSEC we can be free of the X.509 certificate authority ecosystem and along with it the expensive certificate issuance racket and dubious trust properties that have long been its most distinguishing features.

Ten years ago this was an extremely compelling proposition with significant potential to improve the Internet. That potential has gone unfulfilled.

Instead of maturing as deployments progressed and associated operational experience was gained, DNSSEC has been beset by the discovery of issue after issue. Each of these has necessitated further changes and additions to the protocol, increasing complexity and deployment cost. For many zones, including significant zones like google.com (where I led the attempt to evaluate and deploy DNSSEC in the mid 2010s), it is simply infeasible to deploy the protocol at all, let alone in a reliable and dependable manner.

While DNSSEC maturation and deployment has been languishing, the TLS ecosystem has been steadily and impressively improving. Thanks to the efforts of many individuals and companies, although still founded on the use of a set of root certificate authorities, the TLS and CA ecosystem today features transparency, validation and multi-party accountability that comprehensively build trust in the ability to depend and rely upon the security guarantees that TLS provides. When you use TLS today, you benefit from:

  • Free/cheap issuance from a number of different certificate authorities.
  • Regular, automated issuance/renewal via the ACME protocol.
  • Visibility into who has issued certificates for your domain and when through Certificate Transparency logs.
  • Confidence that certificates issued without certificate transparency (and therefore lacking an SCT) will not be accepted by the leading modern browsers.
  • The use of modern cryptographic protocols as a baseline, with a plausible and compelling story for how these can be steadily and promptly updated over time.

DNSSEC with DANE can match the TLS ecosystem on the first benefit (up front price) and perhaps makes the second benefit moot, but has no ability to match any of the other transparency and accountability measures that today’s TLS ecosystem offers. If your ZSK is stolen, or a parent zone is compromised or coerced, validly signed TLSA records for a forged certificate can be produced and spoofed to users under attack with minimal chances of detection.

Finally, in terms of overall trust in the roots of the system, the CA/Browser forum requirements continue to improve the accountability and transparency of TLS certificate authorities, significantly reducing the ability for any single actor (say a nefarious government) to subvert the system. The DNS root has a well established transparent multi-party system for establishing trust in the DNSSEC root itself, but at the TLD level, almost intentionally thanks to the hierarchical nature of DNS, DNSSEC has multiple single points of control (or coercion) which exist outside of any formal system of transparency or accountability.

We’ve moved from DANE being a potential improvement in security over TLS when it was first proposed, to being a definite regression from what TLS provides today.

That’s not to say that TLS is perfect, but given where we’re at, we’ll get a better security return from further investment and improvements in the TLS ecosystem than we will from trying to fix DNSSEC.

But TLS is not ubiquitous for non-HTTP applications

The arguments above are most compelling when applied to the web-based HTTP-oriented ecosystem which has driven most of the TLS improvements we’ve seen to date. Non-HTTP protocols are lagging in adoption of many of the improvements and best practices TLS has on the web. Some claim this need to provide a solution for non-HTTP, non-web applications provides a motivation to continue pushing DNSSEC deployment.

I disagree, I think it provides a motivation to instead double-down on moving those applications to TLS. TLS as the new TCP.

The problem is that the costs of deploying and operating DNSSEC are largely fixed regardless of how many protocols you intend to protect with it, and worse, the negative side-effects of DNSSEC deployment can and will easily spill over to affect zones and protocols that don't want or need DNSSEC's protection. To justify continued DNSSEC deployment and operation in this context means using a smaller set of benefits (just for the non-HTTP applications) to justify the already high costs of deploying DNSSEC itself, plus the cost of the risk that DNSSEC poses to the reliability of your websites. I don't see how that equation can ever balance, particularly when you evaluate it against the much lower costs of just turning on TLS for the rest of your non-HTTP protocols instead of deploying DNSSEC. MTA-STS is a worked example of how this can be achieved.

If you’re still not convinced, consider that even DNS itself is considering moving to TLS (via DoT and DoH) in order to add the confidentiality/privacy attributes the protocol currently lacks. I’m not a huge fan of the latency implications of these approaches, but the ongoing discussion shows that clever solutions and mitigations for that may exist.

DoT/DoH solve distinct problems from DNSSEC and in principle should be used in combination with it, but in a world where DNS itself is relying on TLS and therefore has eliminated the majority of spoofing and cache poisoning attacks through DoT/DoH deployment the benefit side of the DNSSEC equation gets smaller and smaller still while the costs remain the same.

OK, but better software or more careful operations can reduce DNSSEC’s cost

Some see the current DNSSEC costs simply as teething problems that will reduce as the software and tooling matures to provide more automation of the risky processes and operational teams learn from their mistakes or opt to simply transfer the risk by outsourcing the management and complexity to larger providers to take care of.

I don’t find these arguments compelling. We’ve already had 15+ years to develop improved software for DNSSEC without success. What’s changed that we should expect a better outcome this year or next? Nothing.

Even if we did have better software or outsourced operations, the approach is still only hiding the costs behind automation or transferring the risk to another organisation. That may appear to work in the short-term, but eventually when the time comes to upgrade the software, migrate between providers or change registrars the debt will come due and incidents will occur.

The problem is the complexity of the protocol itself. No amount of software improvement or outsourcing addresses that.

After 15+ years of trying, I think it’s worth considering that combining cryptography, caching and distributed consensus, some of the most fundamental and complex computer science problems, into a slow-moving and hard to evolve low-level infrastructure protocol while appropriately balancing security, performance and reliability appears to be beyond our collective ability.

That doesn't have to be the end of the world: the improvements achieved in the TLS ecosystem over the same time frame provide a positive counter-example - perhaps DNSSEC is simply focusing our attention at the wrong layer of the stack.

Ideally secure DNS data would be something we could have, but if the complexity of DNSSEC is the price we have to pay to achieve it, I'm out. I would rather opt to remain with the simpler yet insecure DNS protocol and compensate for its shortcomings at higher transport or application layers, where experience shows we are able to more rapidly improve and develop our security capabilities.

Summing up

For the vast majority of domains and use-cases there is simply no net benefit to deploying DNSSEC in 2023. I’d even go so far as to say that if you’ve already signed your zones, you should (carefully) move them back to being unsigned - you’ll reduce the complexity of your operating environment and lower your risk of availability loss triggered by DNS. Your users will thank you.

The threats that DNSSEC defends against are already amply defended by the now mature and still improving TLS ecosystem at the application layer, and investing in further improvements here carries far more return than deployment of DNSSEC.

For TLDs, like .nz whose outage triggered this post, DNSSEC is not going anywhere and investment in mitigating its complexities and risks is an unfortunate burden that must be shouldered. While the full incident report of what went wrong with .nz is not yet available, the interim report already hints at some useful insights. It is important that InternetNZ publishes a full and comprehensive review so that the full set of learnings and improvements this incident can provide can be fully realised by .nz and other TLD operators stuck with the unenviable task of trying to safely operate DNSSEC.

Postscript

After taking a few days to draft and edit this post, I've just stumbled across a presentation from the well-respected Geoff Huston at last week's RIPE86 meeting. I've only had time to skim the slides (video here) - they don't seem to disagree with my thinking regarding the futility of the current state of DNSSEC, but they also contain some interesting ideas for what it might take for DNSSEC to become a compelling proposition.

Probably worth a read/watch!

02 June, 2023 12:20AM

June 01, 2023


Gunnar Wolf

Cheatable e-voting booths in Coahuila, Mexico, detected at the last minute

It's been a very long time since I last blogged about e-voting, although some might remember it's a topic I have long worked on; in particular, it was the topic of my 2018 Masters thesis, plus some five articles I wrote in the 2010-2018 period. After the thesis, I have to admit I got weary of the subject, and haven't pursued it anymore.

So, I was saddened and dismayed to read that –once again, as has already happened before– the electoral authorities would set up a pilot e-voting program in this year's local elections, which would probably lead to a wider deployment next year, in the Federal elections.

This year (…this week!), two States will have elections for their Governors and local Legislative branches: Coahuila (North, bordering with Texas) and Mexico (Center, surrounding Mexico City). They are very different states, demographically and in their development level.

Pilot programs with e-voting booths have been seen in four states, TTBOMK, in the last ~15 years: Jalisco (West), Mexico City, State of Mexico and Coahuila. In Coahuila, several universities have teamed up with the Electoral Institute to develop their e-voting booth; a good thing that I can say about how this has been done in my country is that, at least, the Electoral Institute is providing its own implementations, instead of sourcing from e-booth vendors (which have a long, tragic history, mostly in the USA, but also in other places). Not only that: they are subjecting the machines to audit processes. Not open audit processes, as demanded by academics in the field, but nevertheless external, rigorous audit processes.

But still, what I and other colleagues with a Computer Security background oppose is not a specific e-voting implementation, but the adoption of e-voting in general. If for nothing else, because of the extra complexity it brings, because of the many more checks that have to be put in place, and… because as programmers, we are aware of the ease with which bugs can creep into any given implementation… both honest bugs (mistakes) and, much worse, bugs that are secretly requested and paid for.

Anyway, leave this bit aside for a while. I’m not implying there was any ill intent in the design or implementation of these e-voting booths.

Two days ago, the Electoral Institute announced that an important bug had been found in the Coahuila implementation. The bug, as far as I can understand from the information reported in newspapers, involves the following process:

  • Each voter approaches their electoral authorities, who verify their identity and their authorization to vote in that precinct
  • The voter is given an activation code, with which they go to the voting booth
  • The booth is activated and enables each voter to cast a vote only once

The problem was that the activation codes remained active after voting, so a voter could vote multiple times.
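
To make the failure mode concrete, here is a minimal, purely hypothetical sketch of the intended flow (invented names, in-memory state only, nothing to do with the actual booth software); the crucial detail is that the activation code must be consumed the moment the vote is cast:

import secrets

active_codes = set()

def issue_activation_code():
    # Electoral officials verify the voter's identity, then hand out a one-time code.
    code = secrets.token_hex(8)
    active_codes.add(code)
    return code

def cast_vote(code, choice, ballot_box):
    if code not in active_codes:
        raise PermissionError("activation code is not valid")
    ballot_box.append(choice)
    # The reported bug was equivalent to omitting this step: the code
    # stayed active, so it could authorize further votes.
    active_codes.discard(code)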

This seems like an easy problem to patch — it most likely is. However, given the inability to patch, properly test, and deploy the fix to all of the booths in a timely manner (even though only 74 e-voting booths were to be deployed for this pilot), the whole pilot for Coahuila was scrapped; Mexico State is voting with a different implementation that is not affected by this issue.

This illustrates very well one of the main issues with e-voting technology: It requires a team of domain-specific experts to perform a highly specialized task (code and physical audits). I am happy and proud to say that part of the auditing experts were the professors of the Information Security Masters program of ESIME Culhuacán (the Masters program I was part of).

The reaction by the Electoral Institute was correct. As far as I understand, there is no evidence suggesting this bug could have been purposefully built, but it cannot be ruled out either.

A traditional, paper-and-ink-based process is not only immune to attacks (or mistakes!) based on code such as this one, but can be audited by anybody. And that is, I believe, a fundamental property of democracy: ensuring the process is done right is not limited to a handful of domain experts. Not only that: In Mexico, I am sure there are hundreds of very proficient developers that could perform a code and equipment audit such as this one, but the audits are open by invitation only, so being an expert is not enough to get clearance to do this.

In a democracy, the whole process should be observable and verifiable by anybody interested in doing so.

Some links about this news:

01 June, 2023 04:22PM


Holger Levsen

20230514-fwupd

How-To use fwupd

As one cannot use fwupd on Qubes OS to update firmware, this is a quick how-to for using fwupd on Grml, for future me. (Qubes 4.2 will bring qubes-fwupd.)

  • boot into Grml.
  • mkdir /efi ; mount /boot/efi to /efi or set OverrideESPMountPoint=/boot/efi/EFI if you mount to the usual path.
  • apt update ; apt install fwupd fwupd-amd64-signed udisks2 policykit-1
  • fwupdmgr get-devices
  • fwupdmgr refresh
  • fwupdmgr get-updates
  • fwupdmgr update
  • reboot into Qubes OS.

01 June, 2023 01:41PM

20230601-developers-reference-translations

src:developers-reference translations wanted

I've just uploaded developers-reference 12.19, bringing the German translation status back to 100% complete, thanks to Carsten Schoenert. Some other translations however could use some updates:

$ make status
for l in de fr it ja ru; do     \
    if [ -d source/locales/$l/LC_MESSAGES ] ; then  \
        echo -n "Stats for $l: " ;          \
        msgcat --use-first source/locales/$l/LC_MESSAGES/*.po | msgfmt --statistics - 2>&1 ; \
    fi ;                            \
done
Stats for de: 1374 translated messages.
Stats for fr: 1286 translated messages, 39 fuzzy translations, 49 untranslated messages.
Stats for it: 869 translated messages, 46 fuzzy translations, 459 untranslated messages.
Stats for ja: 891 translated messages, 26 fuzzy translations, 457 untranslated messages.
Stats for ru: 870 translated messages, 44 fuzzy translations, 460 untranslated messages.

01 June, 2023 01:39PM

Russell Coker

Do Desktop Computers Make Sense?

Laptop vs Desktop Price

Currently the smaller and cheaper USB-C docks start at about $25 and Dell has a new Vostro with 8G of RAM and 2*USB-C ports for $788. That gives a bit over $800 for a laptop and dock vs $795 for the cheapest Dell desktop which also has 8G of RAM. For every way of buying laptops and desktops (e.g. buying from Officeworks, buying on ebay, etc.) the prices for laptops and desktops seem very similar. For all those comparisons the desktop will typically have a faster CPU and more options for PCIe cards, larger storage, etc. But if you don't want to expand storage beyond the affordable 4TB NVMe/SSD devices, don't need to add PCIe cards, and don't need much CPU power then a laptop will do well. For the vast majority of the computer work I do, my Thinkpad X1 Carbon Gen1 (from 2012) had plenty of CPU power.

If someone who’s not an expert in PC hardware was to buy a computer of a given age then laptops probably aren’t more expensive than desktops even disregarding the fact that a laptop works without the need to purchase a monitor, a keyboard, or a mouse. I can get regular desktop PCs for almost nothing and get parts to upgrade them very cheaply but most people can’t do that. I can also get a decent second-hand laptop and USB-C dock for well under $400.

Servers and Gaming Systems

For people doing serious programming or other compute or IO intensive tasks some variation on the server theme is the best option. That may be something more like the servers used by the r/homelab people than the corporate servers, or it might be something in the cloud, but a server is a server. If you are going to have a home server that’s a tower PC then it makes sense to put a monitor on it and use it as a workstation. If your server makes so much noise that you can’t spend much time in the same room or if it’s hosted elsewhere then using a laptop to access it makes sense.

Desktop computers for PC gaming make sense as no-one seems to be making laptops with moderately powerful GPUs. The most powerful GPUs draw 150W which is more than most laptop PSUs can supply and even if a laptop PSU could supply that much there would be the issue of cooling. The Steam Deck [1] and the Nintendo Switch [2] can both work with USB-C docks. The PlayStation 5 [3] has a 350W PSU and doesn't support video over USB-C. The Steam Deck can do 8K resolution at 60Hz or 4K at 120Hz but presumably the newer Steam games will need a desktop PC with a more powerful GPU to properly use such resolutions.

For people who want the best FPS rates on graphics intensive games it could make sense to have a tower PC. Also a laptop that's run at high CPU/GPU use for a long time will tend to have its vents clogged by dust and possibly have the cooling fan wear out.

Monitor Resolution

Laptop support for a single 4K monitor became common in 2012 with the release of Intel's Ivy Bridge mobile CPUs. My own experience of setting up 4K monitors for a Linux desktop in 2019 was that it was unreasonably painful; the soon to be released Debian/Bookworm will make things work nicely for 4K monitors with KDE on X11. So laptop hardware has handled the case of a single high resolution monitor since before such monitors were cheap or common and before software supported it well. Of course at that time you had to use either a proprietary dock or a mini-DisplayPort to HDMI adaptor to get 4K working. But that was still easier than getting PCIe video cards supporting 4K resolution which is something that according to spec sheets wasn't well supported by affordable cards in 2017.

Since USB-C became a standard feature in laptops in about 2017 support of more monitors than most people would want through a USB-C dock became standard. My Thinkpad X1 Carbon Gen5 which was released in 2017 will support 2*FullHD monitors plus a 4K monitor via a USB-C dock, I suspect it would do at least 2*4K monitors but haven’t had a chance to test. Cheap USB-C docks supporting this sort of thing have only become common in the last year or so.

How Many Computers per Home

Among middle class Australians it’s common to have multiple desktop PCs per household. One for each child who’s over the age of about 13 and one for the parents seems to be reasonably common. Students in the later years of high-school and university students are often compelled to have laptops so having the number of laptops plus the number of desktops be larger than the population of the house probably isn’t uncommon even among people who aren’t really into computers. As an aside it’s probably common among people who read my blog to have 2 desktops, a laptop, and a cloud server for their own personal use. But even among people who don’t do that sort of thing having computers outnumber people in a home is probably common.

A large portion of the computer users can do everything they need on a laptop. For gamers the graphics intensive games often run well on a console and that’s probably the most effective way of getting to playing the games. Of course the fact that there is “RGB RAM” (RAM with Red, Green, and Blue LEDs to light up) along with a lot of other wild products sold to gamers suggests that gaming PCs are not about what runs the game most effectively and that an art/craft project with the PC is more important than actually playing games.

Instead of having one desktop PC per bedroom and laptops for school/university as well it would make more sense to have a laptop per person and have a USB-C dock and monitor in each bedroom and a USB-C dock connected to a large screen TV in the lounge. This gives plenty of flexibility for moving around to do work and sharing what's on your computer with other people. It also allows taking a work computer home and having it work with your monitor, having a friend bring their laptop to your home to work on something together, etc.

For most people desktop computers don’t make sense. While I think that convergence of phones with laptops and desktops is the way of the future [4] for most people having laptops take over all functions of desktops is the best option today.

01 June, 2023 12:38PM by etbe

Jamie McClelland

Enough about the AI Apocalypse Already

After watching Democracy Now’s segment on artificial intelligence I started to wonder - am I out of step on this topic?

When people claim artificial intelligence will surpass human intelligence and thus threaten humanity with extinction, they seem to be referring specifically to advances made with large language models.

As I understand them, large language models are probability machines that have ingested massive amounts of text scraped from the Internet. They answer questions based on the probability of one series of words (their answer) following another series of words (the question).
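
In caricature (a toy sketch with an invented two-word lookup table; real models learn these probabilities with large neural networks over tokens, not a dictionary), the core loop is just repeated weighted sampling of a likely next word:

import random

# Toy "model": probability of the next word given the previous two words.
next_word_probs = {
    ("the", "sky"): {"is": 0.9, "was": 0.1},
    ("sky", "is"): {"blue": 0.7, "falling": 0.3},
}

def continue_text(words, steps=2):
    for _ in range(steps):
        dist = next_word_probs.get(tuple(words[-2:]))
        if not dist:
            break
        choices, weights = zip(*dist.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(continue_text(["the", "sky"]))  # e.g. "the sky is blue"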

It seems like a stretch to call this intelligence, but if we accept that definition then it follows that this kind of intelligence is nothing remotely like human intelligence, which makes the claim that it will surpass human intelligence confusing. Hasn’t this kind of machine learning surpassed us decades ago?

Or when we say “surpass” does that simply refer to fooling people into thinking an AI machine is a human via conversation? That is an important milestone, but I’m not ready to accept the turing test as proof of equal intelligence.

Furthermore, large language models “hallucinate” and also reflect the biases of their training data. The word “hallucinate” seems like a euphemism, as if it could be corrected with the right medication when in fact it seems hard to avoid when your strategy is to correlate words based on probability. But even if you could solve the “here is a completely wrong answer presented with sociopathic confidence” problem, reflecting the biases of your data sources seems fairly intractable. In what world would a system with built-in bias be considered on the brink of surpassing human intelligence?

The danger from LLMs seems to be their ability to convince people that their answers are correct, including their patently wrong and/or biased answers.

Why do people think they are giving correct answers? Oh right… terrifying right wing billionaires with terrifying agendas have been claiming AI will exceed human intelligence and threaten humanity, and every time they sign a hyperbolic statement they get front page mainstream coverage. And even progressive news outlets are spreading this narrative with minimal space for contrary opinions (thank you Tawana Petty from the Algorithmic Justice League for providing the only glimpse of reason in the segment).

The belief that artificial intelligence is or will soon become omnipotent has real world harms today: specifically it creates the misperception that current LLMs are accurate, which paves the way for greater adoption among police forces, social service agencies, medical facilities and other places where racial and economic biases have life and death consequences.

When the CEO of OpenAI calls the technology dangerous and in need of regulation, he gets both free advertising promoting the power and supposed accuracy of his product and the possibility of freezing further developments in the field that might challenge OpenAI’s current dominance.

The real threat to humanity is not AI, it's massive inequality and the use of tactics ranging from mundane bureaucracy to deadly force and incarceration to segregate the affluent from the growing number of people unable to make ends meet. We have spent decades training bureaucrats, judges and cops to robotically follow biased laws to maintain this order without compassion or empathy. Replacing them with AI would make things worse and should be stopped. But, let's be clear, the narrative that AI is poised to surpass human intelligence and make humanity extinct is a dangerous distraction that runs counter to a much more important story about "the very real and very present exploitative practices of the [companies building AI], who are rapidly centralizing power and increasing social inequities."

Maybe we should talk about that instead?

01 June, 2023 12:27PM


Junichi Uekawa

Already June.

Already June.

01 June, 2023 02:23AM by Junichi Uekawa

Paul Wise

FLOSS Activities May 2023

Focus

This month I didn't have any particular focus. I just worked on issues in my info bubble.

Changes

Issues

Review

Administration

  • Debian IRC: set topic on new #debian-sa channel
  • Debian wiki: unblock IP addresses, approve accounts

Communication

  • Respond to queries from Debian users and contributors on the mailing lists and IRC

Sponsors

The SIMDe, gensim, sptag work was sponsored. All other work was done on a volunteer basis.

01 June, 2023 12:11AM

May 31, 2023

Arturo Borrero González

Wikimedia Hackathon 2023 Athens summary


During the weekend of 19-23 May 2023 I attended the Wikimedia Hackathon 2023 in Athens, Greece. The event reunited folks interested in the more technological aspects of the Wikimedia movement in person for the first time since 2019. The scope of the hacking projects included (but was not limited to) tools, wikipedia bots, gadgets, server and network infrastructure, data and other technical systems.

My role in the event was two-fold: on the one hand, I was there because of my role as SRE in the Wikimedia Cloud Services team, where we provide very valuable services to the community, and I was expected to support the technical contributors of the movement who were around. Additionally, and because of that same role, I did some hacking myself too, which was especially augmented by the fact that I generally collaborate on a daily basis with some community members who were present in the hacking room.

The hackathon had a conference-style track, and I ran a session with my coworker Bryan called Past, Present and Future of Wikimedia Cloud Services (Toolforge and friends) (slides), which was very satisfying to deliver given the friendly space that it was. I attended a bunch of other sessions, and all of them were interesting and well presented. The number of ML themes present in the program schedule was exciting. I definitely learned a lot from attending those sessions, from how LLMs work, to some fascinating applications for them in the wikimedia space, to some of the industry trends for training and hosting ML models.


Despite the sessions, the main purpose of the hackathon was, well, hacking. While I was in the hacking space for more than 12 hours each day, my ability to get things done was greatly reduced by the constant conversations, help requests, and other social interactions with the folks. Don’t get me wrong, I embraced that reality with joy, because the social bonding aspect of it is perhaps the main reason why we gathered in person instead of virtually.

That being said, this is a rough list of what I did:

The hackathon also marked the final days of Technical Engagement as an umbrella group for the WMCS and Developer Advocacy teams within the Technology department of the Wikimedia Foundation, because of an internal reorg. We used the chance to reflect on the pleasant time we have had together since 2019 and take a final picture of the few of us who were present in person at the event.


It wasn’t the first Wikimedia Hackathon for me, and I felt the same as in previous iterations: it was a welcoming space, and I was surrounded by friends and nice human beings. I ended the event with a profound feeling of being privileged, because I was part of the Wikimedia movement, and because I was invited to participate in it.

31 May, 2023 12:11PM

Russell Coker

Genesis GV60

I recently test drove a Genesis GV70, but the GV60 [1] which I didn’t test drive is a nicer car.

The GV70 and GV60 are all electric so they are quiet and perform well. The GV70 has a sun-roof that opens, it was the first car I’ve driven like that and I decided I don’t like it. Having the shade open so I can see the sky while stuck in a traffic jam is nice though. The GV60 has a non-opening sun-roof with a shade that can be retracted, this is a feature I’d really like to have in my next car.

Electric cars as a general rule have good acceleration and are quiet, the GV70 performed as expected in that regard. It has a head-up display projected on the windscreen for the speed and the speed limit on the road in question which is handy. When driving in a car park it showed images from all sides which is really handy, I wish I had explored that feature more.

The console is all electronic with a TFT display instead of mechanical instruments but the only significant difference this makes in driving is that when a turn indicator is used the console display shows a video feed for the blind-spot that matches the lane change direction. This is a significant safety feature and will reduce the incidence of collisions. But the capabilities of the hardware seem under utilised, hopefully they will release a software update at some future time to do more with it.

The most significant benefit of the GV60 over the GV70 is that it has cameras instead of mirrors at the sides of the car. This reduces drag and also removes the need to adjust mirrors to match the height of the driver. Also for driver instruction the instructor and learner get to see the same view. A logical development of such cars is an expansion pack for instruction that has displays in the passenger seat to show the instructor the same instrument view as the driver sees.

The minimum list driveaway price for the GV60 is $117,171.50 and for the GV70 it is $138,119.89 – both of which are more than I’m prepared to pay for a car. The GV60 apparently can be started by fingerprint which seems like a bad idea given the poor security of fingerprint sensors, but as regular car keys tend not to be too difficult to work around it probably doesn’t matter. The Genesis web site makes it difficult to find the ranges of electric cars which is surprising. A Google search suggests that the GV60 can do 466Km and the GV70 can do 410Km which are both reasonable numbers and nothing to be ashamed of.

The GV70 was a fun car to drive and the GV60 looks like it would be even better. I recommend that everyone who likes technology take one for a test drive, but for my own use I’m looking for something that costs less than half as much.

31 May, 2023 11:03AM by etbe

Russ Allbery

Review: Night Watch

Review: Night Watch, by Terry Pratchett

Series: Discworld #29
Publisher: Harper
Copyright: November 2002
Printing: August 2014
ISBN: 0-06-230740-1
Format: Mass market
Pages: 451

Night Watch is the 29th Discworld novel and the sixth Watch novel. I would really like to tell people they could start here if they wanted to, for reasons that I will get into in a moment, but I think I would be doing you a disservice. The emotional heft added by having read the previous Watch novels and followed Vimes's character evolution is significant.

It's the 25th of May. Vimes is about to become a father. He and several of the other members of the Watch are wearing sprigs of lilac for reasons that Sergeant Colon is quite vehemently uninterested in explaining. A serial killer named Carcer, whom the Watch has been after for weeks, has just murdered an off-duty sergeant. It's a tense and awkward sort of day and Vimes is feeling weird and wistful, remembering the days when he was a copper and not a manager who has to dress up in ceremonial armor and meet with committees.

That may be part of why, when the message comes over the clacks that the Watch have Carcer cornered on the roof of the New Hall of the Unseen University, Vimes responds in person. He's grappling with Carcer on the roof of the University Library in the middle of a magical storm when lightning strikes. When he wakes up, he's in the past, shortly after he joined the Watch and shortly before the events of the 25th of May that the older Watch members so vividly remember and don't talk about.

I have been saying recently in Discworld reviews that it felt like Pratchett was on the verge of a breakout book that's head and shoulders above Discworld prior to that point. This is it. This is that book.

The setup here is masterful: the sprigs of lilac that slowly tell the reader something is going on, the refusal of any of the older Watch members to talk about it, the scene in the graveyard to establish the stakes, the disconcerting fact that Vetinari is wearing a sprig of lilac as well, and the feeling of building tension that matches the growing electrical storm. And Pratchett never gives into the temptation to explain everything and tip his hand prematurely. We know the 25th is coming and something is going to happen, and the reader can put together hints from Vimes's thoughts, but Pratchett lets us guess and sometimes be right and sometimes be wrong. Vimes is trying to change history, which adds another layer of uncertainty and enjoyment as the reader tries to piece together both the true history and the changes. This is a masterful job at a "what if?" story.

And, beneath that, the commentary on policing and government and ethics is astonishingly good. In a review of an earlier Watch novel, I compared Pratchett to Dickens in the way that he focuses on a sort of common-sense morality rather than political theory. That is true here too, but oh that moral analysis is sharp enough to slide into you like a knife. This is not the Vimes that we first met in Guards! Guards!. He has turned his cynical stubbornness into a working theory of policing, and it's subtle and complicated and full of nuance that he only barely knows how to explain. But he knows how to show it to people.

Keep the peace. That was the thing. People often failed to understand what that meant. You'd go to some life-threatening disturbance like a couple of neighbors scrapping in the street over who owned the hedge between their properties, and they'd both be bursting with aggrieved self-righteousness, both yelling, their wives would either be having a private scrap on the side or would have adjourned to a kitchen for a shared pot of tea and a chat, and they all expected you to sort it out.

And they could never understand that it wasn't your job. Sorting it out was a job for a good surveyor and a couple of lawyers, maybe. Your job was to quell the impulse to bang their stupid fat heads together, to ignore the affronted speeches of dodgy self-justification, to get them to stop shouting and to get them off the street. Once that had been achieved, your job was over. You weren't some walking god, dispensing finely tuned natural justice. Your job was simply to bring back peace.

When Vimes is thrown back in time, he has to pick up the role of his own mentor, the person who taught him what policing should be like. His younger self is right there, watching everything he does, and he's desperately afraid he'll screw it up and set a worse example. Make history worse when he's trying to make it better. It's a beautifully well-done bit of tension that uses time travel as the hook to show both how difficult mentorship is and also how irritating one's earlier naive self would be.

He wondered if it was at all possible to give this idiot some lessons in basic politics. That was always the dream, wasn't it? "I wish I'd known then what I know now"? But when you got older you found out that you now wasn't you then. You then was a twerp. You then was what you had to be to start out on the rocky road of becoming you now, and one of the rocky patches on that road was being a twerp.

The backdrop of this story, as advertised by the map at the front of the book, is a revolution of sorts. And the revolution does matter, but not in the obvious way. It creates space and circumstance for some other things to happen that are all about the abuse of policing as a tool of politics rather than Vimes's principle of keeping the peace. I mentioned when reviewing Men at Arms that it was an awkward book to read in the United States in 2020. This book tackles the ethics of policing head-on, in exactly the way that book didn't.

It's also a marvelous bit of competence porn. Somehow over the years, Vimes has become extremely good at what he does, and not just in the obvious cop-walking-a-beat sort of ways. He's become a leader. It's not something he thinks about, even when thrown back in time, but it's something Pratchett can show the reader directly, and have the other characters in the book comment on.

There is so much more that I'd like to say, but so much would be spoilers, and I think Night Watch is more effective when you have the suspense of slowly puzzling out what's going to happen. Pratchett's pacing is exquisite. It's also one of the rare Discworld novels where Pratchett fully commits to a point of view and lets Vimes tell the story. There are a few interludes with other people, but the only other significant protagonist is, quite fittingly, Vetinari. I won't say anything more about that except to note that the relationship between Vimes and Vetinari is one of the best bits of fascinating subtlety in all of Discworld.

I think it's also telling that nothing about Night Watch reads as parody. Sure, there is a nod to Back to the Future in the lightning storm, and it's impossible to write a book about police and street revolutions without making the reader think about Les Miserables, but nothing about this plot matches either of those stories. This is Pratchett telling his own story in his own world, unapologetically, and without trying to wedge it into parody shape, and it is so much the better book for it.

The one quibble I have with the book is that the bits with the Time Monks don't really work. Lu-Tze is annoying and flippant given the emotional stakes of this story, the interludes with him are frustrating and out of step with the rest of the book, and the time travel hand-waving doesn't add much. I see structurally why Pratchett put this in: it gives Vimes (and the reader) a time frame and a deadline, it establishes some of the ground rules and stakes, and it provides a couple of important opportunities for exposition so that the reader doesn't get lost. But it's not good story. The rest of the book is so amazingly good, though, that it doesn't matter (and the framing stories for "what if?" explorations almost never make much sense).

The other thing I have a bit of a quibble with is outside the book. Night Watch, as you may have guessed by now, is the origin of the May 25th Pratchett memes that you will be familiar with if you've spent much time around SFF fandom. But this book is dramatically different from what I was expecting based on the memes. You will, for example see a lot of people posting "Truth, Justice, Freedom, Reasonably Priced Love, And a Hard-Boiled Egg!", and before reading the book it sounds like a Pratchett-style humorous revolutionary slogan. And I guess it is, sort of, but, well... I have to quote the scene:

"You'd like Freedom, Truth, and Justice, wouldn't you, Comrade Sergeant?" said Reg encouragingly.

"I'd like a hard-boiled egg," said Vimes, shaking the match out.

There was some nervous laughter, but Reg looked offended.

"In the circumstances, Sergeant, I think we should set our sights a little higher—"

"Well, yes, we could," said Vimes, coming down the steps. He glanced at the sheets of papers in front of Reg. The man cared. He really did. And he was serious. He really was. "But...well, Reg, tomorrow the sun will come up again, and I'm pretty sure that whatever happens we won't have found Freedom, and there won't be a whole lot of Justice, and I'm damn sure we won't have found Truth. But it's just possible that I might get a hard-boiled egg."

I think I'm feeling defensive of the heart of this book because it's such an emotional gut punch and says such complicated and nuanced things about politics and ethics (and such deeply cynical things about revolution). But I think if I were to try to represent this story in a meme, it would be the "angels rise up" song, with all the layers of meaning that it gains in this story. I'm still at the point where the lilac sprigs remind me of Sergeant Colon becoming quietly furious at the overstep of someone who wasn't there.

There's one other thing I want to say about that scene: I'm not naturally on Vimes's side of this argument. I think it's important to note that Vimes's attitude throughout this book is profoundly, deeply conservative. The hard-boiled egg captures that perfectly: it's a bit of physical comfort, something you can buy or make, something that's part of the day-to-day wheels of the city that Vimes talks about elsewhere in Night Watch. It's a rejection of revolution, something that Vimes does elsewhere far more explicitly.

Vimes is a cop. He is in some profound sense a defender of the status quo. He doesn't believe things are going to fundamentally change, and it's not clear he would want them to if they did.

And yet. And yet, this is where Pratchett's Dickensian morality comes out. Vimes is a conservative at heart. He's grumpy and cynical and jaded and he doesn't like change. But if you put him in a situation where people are being hurt, he will break every rule and twist every principle to stop it.

He wanted to go home. He wanted it so much that he trembled at the thought. But if the price of that was selling good men to the night, if the price was filling those graves, if the price was not fighting with every trick he knew... then it was too high.

It wasn't a decision that he was making, he knew. It was happening far below the areas of the brain that made decisions. It was something built in. There was no universe, anywhere, where a Sam Vimes would give in on this, because if he did then he wouldn't be Sam Vimes any more.

This is truly exceptional stuff. It is the best Discworld novel I have read, by far. I feel like this was the Watch novel that Pratchett was always trying to write, and he had to write five other novels first to figure out how to write it. And maybe to prepare Discworld readers to read it.

There are a lot of Discworld novels that are great on their own merits, but also it is 100% worth reading all the Watch novels just so that you can read this book.

Followed in publication order by The Wee Free Men and later, thematically, by Thud!.

Rating: 10 out of 10

31 May, 2023 02:51AM

May 30, 2023

Antoine Beaupré

Wayland: i3 to Sway migration

I started migrating my graphical workstations to Wayland, specifically migrating from i3 to Sway. This is mostly to address serious graphics bugs in the latest Framework laptop, but also because it is something I felt was inevitable.

The current status is that I've been able to convert my i3 configuration to Sway, and adapt my systemd startup sequence to the new environment. Screen sharing only works with Pipewire, so I also did that migration, which basically requires an upgrade to Debian bookworm to get a nice enough Pipewire release.

I'm testing Wayland on my laptop and I'm using it as a daily driver.

Most irritants have been solved one way or the other. My main problem with Wayland right now is that I spent a frigging week doing the conversion: it's exciting and new, but it basically sucked the life out of all my other projects and it's distracting, and I want it to stop.

The rest of this page documents why I made the switch, how it happened, and what's left to do. Hopefully it will keep you from spending as much time as I did in fixing this.

TL;DR: Wayland is mostly ready. Main blockers you might find are that you need to do manual configurations, DisplayLink (multiple monitors on a single cable) doesn't work in Sway, HDR and color management are still in development.

I had to install the following packages:

apt install \
    brightnessctl \
    foot \
    gammastep \
    gdm3 \
    grim slurp \
    pipewire-pulse \
    sway \
    swayidle \
    swaylock \
    wdisplays \
    wev \
    wireplumber \
    wlr-randr \
    xdg-desktop-portal-wlr

And did some tweaks in my $HOME, mostly dealing with my esoteric systemd startup sequence, which you won't have to deal with if you are not a fan.

Note that this page is bound to be out of date as I make minute changes to my environment. Typically, changes will be visible in my Puppet repository, somewhere like the desktop.pp file, but I do not make any promise that the content below is up to date.

Why switch?

I originally held back from migrating to Wayland: it seemed like a complicated endeavor hardly worth the cost. It also didn't seem actually ready.

But after reading this blurb on LWN, I decided to at least document the situation here. The actual quote that convinced me it might be worth it was:

It’s amazing. I have never experienced gaming on Linux that looked this smooth in my life.

... I'm not a gamer, but I do care about latency. The longer version is worth a read as well.

The point here is not to bash one side or the other, or even do a thorough comparison. I start with the premise that Xorg is likely going away in the future and that I will need to adapt some day. In fact, the last major Xorg release (21.1, October 2021) is rumored to be the last ("just like the previous release...", that said, minor releases are still coming out, e.g. 21.1.4). Indeed, it seems even core Xorg people have moved on to developing Wayland, or at least Xwayland, which was spun off into its own source tree.

X, or at least Xorg, is in maintenance mode and has been for years. Granted, the X Window System is getting close to forty years old at this point: it got us amazingly far for something that was designed around the time of the first graphical interfaces. Since Mac and (especially?) Windows released theirs, they have rebuilt their graphical backends numerous times, but UNIX derivatives have stuck with Xorg this entire time, which is a testament to the design and reliability of X. (Or our incapacity at developing meaningful architectural change across the entire ecosystem, take your pick I guess.)

What pushed me over the edge is that I had some pretty bad driver crashes with Xorg while screen sharing under Firefox, in Debian bookworm (around November 2022). The symptom would be that the UI would completely crash, reverting to a text-only console, while Firefox would keep running, audio and everything still working. People could still see my screen, but I couldn't, of course, let alone interact with it. All processes still running, including Xorg.

(And no, sorry, I haven't reported that bug, maybe I should have, and it's actually possible it comes up again in Wayland, of course. But at first, screen sharing didn't work at all of course, so it has already come a long way. After making screen sharing work, though, the bug didn't occur again, so I consider this an Xorg-specific problem until further notice.)

There were also frustrating glitches in the UI, in general. I actually had to setup a compositor alongside i3 to make things bearable at all. Video playback in a window was lagging, sluggish, and out of sync.

Wayland fixed all of this.

Wayland equivalents

This section documents each tool I have picked as an alternative to the current Xorg tool I am using for the task at hand. It also touches on other alternatives and how the tool was configured.

Note that this list is based on the series of tools I use in desktop.

TODO: update desktop with the following when done, possibly moving old configs to a xorg archive.

Window manager: i3 → sway

This seems like kind of a no-brainer. Sway is around, it's feature-complete, and it's in Debian.

I'm a bit worried about the "Drew DeVault community", to be honest. There's a certain aggressiveness in the community I don't like so much; at least an open hostility towards more modern UNIX tools like containers and systemd that makes it hard to do my work while interacting with that community.

I'm also concerned about the lack of unit tests and user manual for Sway. The i3 window manager was designed by a fellow (ex-)Debian developer I have a lot of respect for (Michael Stapelberg), partly because of i3 itself, but also because of working with him on other projects. Beyond the characters, i3 has a user guide, a code of conduct, and lots more documentation. It has a test suite.

Sway has... manual pages, with the homepage just telling users to use man -k sway to find what they need. I don't think we need that kind of elitism in our communities, to put this bluntly.

But let's put that aside: Sway is still a no-brainer. It's the easiest thing to migrate to, because it's mostly compatible with i3. I had to immediately fix those resources to get a minimal session going:

i3                   Sway                     note
set_from_resources   set                      no support for X resources, naturally
new_window pixel 1   default_border pixel 1   actually supported in i3 as well

That's it. All of the other changes I had to do (and there were actually a lot) were all Wayland-specific changes, not Sway-specific changes. For example, use brightnessctl instead of xbacklight to change the backlight levels.
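
For example, the backlight keys can be bound straight to brightnessctl in the Sway config. A minimal sketch (the XF86 key names and the 5% step are assumptions, adjust to your keyboard and taste):

# sway config: backlight keys via brightnessctl instead of xbacklight
bindsym XF86MonBrightnessUp exec brightnessctl set +5%
bindsym XF86MonBrightnessDown exec brightnessctl set 5%-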

See a copy of my full sway/config for details.

Other options include:

  • dwl: tiling, minimalist, dwm for Wayland, not in Debian
  • Hyprland: tiling, fancy animations, not in Debian
  • Qtile: tiling, extensible, in Python, not in Debian (1015267)
  • river: Zig, stackable, tagging, not in Debian (1006593)
  • velox: inspired by xmonad and dwm, not in Debian
  • vivarium: inspired by xmonad, not in Debian

Status bar: py3status → waybar

I have invested quite a bit of effort in setting up my status bar with py3status. It supports Sway directly, and did not actually require any change when migrating to Wayland.

Unfortunately, I had trouble making nm-applet work. Based on this nm-applet.service, I found that you need to pass --indicator for it to show up at all.

In theory, tray icon support was merged in 1.5, but in practice there are still several limitations, like icons not being clickable. Also, on startup, nm-applet --indicator triggers these errors in the Sway logs:

nov 11 22:34:12 angela sway[298938]: 00:49:42.325 [INFO] [swaybar/tray/host.c:24] Registering Status Notifier Item ':1.47/org/ayatana/NotificationItem/nm_applet'
nov 11 22:34:12 angela sway[298938]: 00:49:42.327 [ERROR] [swaybar/tray/item.c:127] :1.47/org/ayatana/NotificationItem/nm_applet IconPixmap: No such property “IconPixmap”
nov 11 22:34:12 angela sway[298938]: 00:49:42.327 [ERROR] [swaybar/tray/item.c:127] :1.47/org/ayatana/NotificationItem/nm_applet AttentionIconPixmap: No such property “AttentionIconPixmap”
nov 11 22:34:12 angela sway[298938]: 00:49:42.327 [ERROR] [swaybar/tray/item.c:127] :1.47/org/ayatana/NotificationItem/nm_applet ItemIsMenu: No such property “ItemIsMenu”
nov 11 22:36:10 angela sway[313419]: info: fcft.c:838: /usr/share/fonts/truetype/dejavu/DejaVuSans.ttf: size=24.00pt/32px, dpi=96.00

... but that seems innocuous. The tray icon displays but is not clickable.

Note that there is currently (November 2022) a pull request to hook up a "Tray D-Bus Menu" which, according to Reddit might fix this, or at least be somewhat relevant.

If you don't see the icon, check the bar.tray_output property in the Sway config, try: tray_output *.
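
Concretely, that looks something like this in the Sway config (a sketch, assuming a single bar block):

bar {
    # make the tray show up on every output
    tray_output *
}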

The non-working tray was the biggest irritant in my migration. I have used nmtui to connect to new Wifi hotspots or change connection settings, but that doesn't support actions like "turn off WiFi".

I eventually fixed this by switching from py3status to waybar, which was another yak horde shaving session, but ultimately, it worked.

Other alternatives include:

Web browser: Firefox

Firefox has had support for Wayland for a while now, with the team enabling it by default in nightlies around January 2022. It's actually not easy to figure out the state of the port, the meta bug report is still open and it's huge: it currently (Sept 2022) depends on 76 open bugs, it was opened twelve years ago (2010), and it's still getting daily updates (mostly linking to other tickets).

Firefox 106 presumably shipped with "Better screen sharing for Windows and Linux Wayland users", but I couldn't quite figure out what those were.

TL;DR: echo MOZ_ENABLE_WAYLAND=1 >> ~/.config/environment.d/firefox.conf && apt install xdg-desktop-portal-wlr

How to enable it

Firefox depends on this silly variable to start correctly under Wayland (otherwise it starts inside Xwayland and looks fuzzy and fails to screen share):

MOZ_ENABLE_WAYLAND=1 firefox

To make the change permanent, many recipes recommend adding this to an environment startup script:

if [ "$XDG_SESSION_TYPE" == "wayland" ]; then
    export MOZ_ENABLE_WAYLAND=1
fi

At least that's the theory. In practice, Sway doesn't actually run any startup shell script, so that can't possibly work. Furthermore, XDG_SESSION_TYPE is not actually set when starting Sway from gdm3 which I find really confusing, and I'm not the only one. So the above trick doesn't actually work, even if the environment (XDG_SESSION_TYPE) is set correctly, because we don't have conditionals in environment.d(5).

(Note that systemd.environment-generator(7) does support running arbitrary commands to generate environment, but for some reason does not support user-specific configuration files: it only looks at system directories... Even then it may be a solution to have a conditional MOZ_ENABLE_WAYLAND environment, but I'm not sure it would work because ordering between those two isn't clear: maybe the XDG_SESSION_TYPE wouldn't be set just yet...)

At first, I made this ridiculous script to work around those issues. Really, it seems to me Firefox should just parse the XDG_SESSION_TYPE variable here... but then I realized that Firefox works fine in Xorg when MOZ_ENABLE_WAYLAND is set.

So now I just set that variable in environment.d and It Just Works™:

MOZ_ENABLE_WAYLAND=1

Screen sharing

Out of the box, screen sharing doesn't work until you install xdg-desktop-portal-wlr or similar (e.g. xdg-desktop-portal-gnome on GNOME). I had to reboot for the change to take effect.

Without those tools, it shows the usual permission prompt with "Use operating system settings" as the only choice, but when we accept... nothing happens. After installing the portals, it actually works, and works well!

This was tested in Debian bookworm/testing with Firefox ESR 102 and Firefox 106.

Major caveat: we can only share a full screen, we can't currently share just a window. The major upside to that is that, by default, it streams only one output which is actually what I want most of the time! See the screencast compatibility for more information on what is supposed to work.

This is actually a huge improvement over the situation in Xorg, where Firefox can only share a window or all monitors, which led me to use Chromium a lot for video-conferencing. With this change, in other words, I will not need Chromium for anything anymore, whoohoo!

If slurp, wofi, or bemenu are installed, one of them will be used to pick the monitor to share, which effectively acts as some minimal security measure. See xdg-desktop-portal-wlr(1) for how to configure that.
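
For reference, here is a minimal sketch of what that looks like in ~/.config/xdg-desktop-portal-wlr/config, assuming you want wofi as the picker (chooser_type and chooser_cmd are described in the manual page):

[screencast]
chooser_type=dmenu
chooser_cmd=wofi --dmenu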

Side note: Chrome fails to share a full screen

I was still using Google Chrome (or, more accurately, Debian's Chromium package) for some videoconferencing. It's mainly because Chromium was the only browser which would allow me to share only one of my two monitors, which is extremely useful.

To start chrome with the Wayland backend, you need to use:

chromium --enable-features=UseOzonePlatform --ozone-platform=wayland

If it shows an ugly gray border, check the Use system title bar and borders setting.

It can do some screen sharing. Sharing a window and a tab seems to work, but sharing a full screen doesn't: it's all black. Maybe not ready for prime time.

And since Firefox can do what I need under Wayland now, I will not need to fight with Chromium to work under Wayland:

apt purge chromium

Note that a similar fix was necessary for Signal Desktop, see this commit. Basically you need to figure out a way to pass those same flags to signal:

--enable-features=WaylandWindowDecorations --ozone-platform-hint=auto
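
One way to do that is a tiny wrapper script earlier in your $PATH. A sketch, assuming the binary is called signal-desktop as in the official package:

#!/bin/sh
# always start Signal with native Wayland rendering and decorations
exec /usr/bin/signal-desktop \
    --enable-features=WaylandWindowDecorations \
    --ozone-platform-hint=auto "$@"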

Email: notmuch

See Emacs, below.

File manager: thunar

Unchanged.

News: feed2exec, gnus

See Email, above, or Emacs in Editor, below.

Editor: Emacs okay-ish

Emacs is being actively ported to Wayland. According to this LWN article, the first (partial, to Cairo) port was done in 2014 and a working port (to GTK3) was completed in 2021, but wasn't merged until late 2021. That is: too late to make it into Emacs 28 (released April 2022).

So we'll probably need to wait for Emacs 29 to have native Wayland support in Emacs, which, in turn, is unlikely to arrive in time for the Debian bookworm freeze. There are, however, unofficial builds for both Emacs 28 and 29 provided by spwhitton which may provide native Wayland support.

I tested the snapshot packages and they do not quite work well enough. First off, they completely take over the built-in Emacs — they hijack the $PATH in /etc! — and certain things are simply not working in my setup. For example, this hook never gets run on startup:

(add-hook 'after-init-hook 'server-start t) 

Still, like many X11 applications, Emacs mostly works fine under Xwayland. The clipboard works as expected, for example.

Scaling is a bit of an issue: fonts look fuzzy.

I have heard anecdotal evidence of hard lockups with Emacs running under Xwayland as well, but haven't experienced any problem so far. I did experience a Wayland crash with the snapshot version however.

TODO: look again at Wayland in Emacs 29.

Backups: borg

Mostly irrelevant, as I do not use a GUI.

Color theme: srcery, redshift → gammastep

I am keeping Srcery as a color theme, in general.

Redshift is another story: it has no support for Wayland out of the box, but it's apparently possible to apply a hack on the TTY before starting Wayland, with:

redshift -m drm -PO 3000

This tip is from the arch wiki which also has other suggestions for Wayland-based alternatives. Both KDE and GNOME have their own "red shifters", and for wlroots-based compositors, they (currently, Sept. 2022) list the following alternatives:

I configured gammastep with a simple gammastep.service file associated with the sway-session.target.
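
That unit boils down to something like this (a sketch of my setup; adjust the target names if your session uses different ones):

# ~/.config/systemd/user/gammastep.service
[Unit]
Description=gammastep color temperature adjuster
PartOf=graphical-session.target
After=graphical-session.target

[Service]
ExecStart=/usr/bin/gammastep
Restart=on-failure

[Install]
WantedBy=sway-session.target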

Display manager: lightdm → gdm3

Switched because lightdm failed to start sway:

nov 16 16:41:43 angela sway[843121]: 00:00:00.002 [ERROR] [wlr] [libseat] [common/terminal.c:162] Could not open target tty: Permission denied

Possible alternatives:

Terminal: xterm → foot

One of the biggest question marks in this transition was what to do about Xterm. After writing two articles about terminal emulators as a professional journalist, decades of working on the terminal, and probably using dozens of different terminal emulators, I'm still not happy with any of them.

This is such a big topic that I actually have an entire blog post specifically about this.

For starters, using xterm under Xwayland works well enough, although the font scaling makes things look a bit too fuzzy.

I have also tried foot: it ... just works!

Fonts are much crisper than Xterm and Emacs. URLs are not clickable but the URL selector (control-shift-u) is just plain awesome (think "vimperator" for the terminal).

There's a cool hack to jump between prompts.

Copy-paste works. True colors work. The word-wrapping is excellent: it doesn't lose one byte. Emojis are nicely sized and colored. Font resize works. There's even scroll back search (control-shift-r).

Foot went from a question mark to being a reason to switch to Wayland, just for this little goodie, which says a lot about the quality of that software.

The selection clicks are not quite what I would expect, though. In rxvt and others, you have the following patterns:

  • single click: reset selection, or drag to select
  • double: select word
  • triple: select quotes or line
  • quadruple: select line

I particularly find the "select quotes" bit useful. It seems like foot just supports double and triple clicks, with word and line selected. You can select a rectangle with control,. It correctly extends the selection word-wise with right click if double-click was first used.

One major problem with Foot is that it's a new terminal, with its own termcap entry. Support for foot was added to ncurses in the 20210731 release, which was shipped after the current Debian stable release (Debian bullseye, which ships 6.2+20201114-2). A workaround for this problem is to install the foot-terminfo package on the remote host, which is available in Debian stable.

This should eventually resolve itself, as Debian bookworm has a newer version. Note that some corrections were also shipped in the 20211113 release, but that is also shipped in Debian bookworm.

That said, I am almost certain I will have to revert back to xterm under Xwayland at some point in the future. Back when I was using GNOME Terminal, it would mostly work for everything until I had to use the serial console on a (HP ProCurve) network switch, which has a fancy TUI that was basically unusable there. I fully expect such problems with foot, or any other terminal than xterm, for that matter.

The foot wiki has good troubleshooting instructions as well.

Update: I did find one tiny thing to improve with foot, and it's the default logging level which I found pretty verbose. After discussing it with the maintainer on IRC, I submitted this patch to tweak it, which I described like this on Mastodon:

today's reason why i will go to hell when i die (TRWIWGTHWID?): a 600-word, 63 lines commit log for a one line change: https://codeberg.org/dnkl/foot/pulls/1215

It's Friday.

Launcher: rofi → fuzzel

rofi does not support Wayland. There was a rather disgraceful battle in the pull request that led to the creation of a fork (lbonn/rofi), so it's unclear how that will turn out.

Given how relatively trivial the problem space is, there is of course a profusion of options:

Tool                  In Debian        Notes
alfred                yes              general launcher/assistant tool
bemenu                yes, bookworm+   inspired by dmenu
cerebro               no               Javascript ... uh... thing
dmenu-wl              no               fork of dmenu, straight port to Wayland
Fuzzel                ITP 982140       dmenu/drun replacement, app icon overlay
gmenu                 no               drun replacement, with app icons
kickoff               no               dmenu/run replacement, fuzzy search, "snappy", history, copy-paste, Rust
krunner               yes              KDE's runner
mauncher              no               dmenu/drun replacement, math
nwg-launchers         no               dmenu/drun replacement, JSON config, app icons, nwg-shell project
Onagre                no               rofi/alfred inspired, multiple plugins, Rust
πmenu                 no               dmenu/drun rewrite
Rofi (lbonn's fork)   no               see above
sirula                no               .desktop based app launcher
Ulauncher             ITP 949358       generic launcher like Onagre/rofi/alfred, might be overkill
tofi                  yes, bookworm+   dmenu/drun replacement, C
wmenu                 no               fork of dmenu-wl, but mostly a rewrite
Wofi                  yes              dmenu/drun replacement, not actively maintained
yofi                  no               dmenu/drun replacement, Rust

The above list comes partly from https://arewewaylandyet.com/ and awesome-wayland. It is likely incomplete.

I have read some good things about bemenu, fuzzel, and wofi.

A particularly tricky issue is that my rofi password management depends on xdotool for some operations. At first, I thought this was just going to be (thankfully?) impossible, because we actually like the idea that one app cannot send keystrokes to another. But it seems there are actually alternatives to this, like wtype or ydotool, the latter of which requires root access. wl-ime-type does that through the input-method-unstable-v2 protocol (sample emoji picker), but is not packaged in Debian.

As it turns out, wtype just works as expected, and fixing this was basically a two-line patch. Another alternative, not in Debian, is wofi-pass.
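
To give an idea, typing a password picked from pass into the focused window looks something like this under Wayland (a sketch, assuming $entry holds the selected pass entry name):

# type the first line of the selected pass entry into the focused window
wtype "$(pass show "$entry" | head -n1)"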

The other problem is that I actually heavily modified rofi. I use "modis" which are not actually implemented in wofi or tofi, so I'm left with reinventing those wheels from scratch or using the rofi + wayland fork... It's really too bad that fork isn't being reintegrated...

Note that wlogout could be a partial replacement (just for the "power menu").

Fuzzel

I ended up completely switching to fuzzel after realizing it was the same friendly author as foot. I did have to severely hack around its limitations, by rewriting my rofi "modis" with plain shell scripts. I wrote the following:

  • dmenu-ssh.py: reads your SSH config and extracts hostnames, keeps history sorted by frequency in ~/.cache/dmenu-ssh
  • dmenu-bash-history: reads your .history and .bash_history files and prompts for a command to run, appending dmenu_path, which is basically all available commands in your $PATH, also saves the command in your .history file (also required me to bump the size of that file to really be useful)
  • pass-dmenu: was already in use, just a little patch to support Wayland, basically list the pass entries sorted by domains (pass-domains) and piped the picked password to the clipboard or wl-type
  • dmenu-unicode: (NEW!) grep around the unicode database for emojis and other stuff

With those, I can basically use fuzzel or any other dmenu-compatible program and not care, it will "just work".
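
As an illustration, here is roughly what such a dmenu-compatible command launcher can look like. This is a simplified sketch of the dmenu-bash-history idea above, assuming fuzzel's --dmenu mode and the dmenu_path helper from the suckless-tools package:

#!/bin/sh
# offer recent shell history plus everything in $PATH, then run whatever gets picked
{ tac "$HOME/.bash_history" 2>/dev/null; dmenu_path; } \
    | awk '!seen[$0]++' \
    | fuzzel --dmenu \
    | ${SHELL:-/bin/sh} &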

Image viewers: geeqie → ?

I'm not very happy with geeqie in the first place, and I suspect the Wayland switch will just add more impossible things on top of the things I already find irritating (Geeqie doesn't support copy-pasting images).

In practice, Geeqie doesn't seem to work so well under Wayland. The fonts are fuzzy and the thumbnail preview just doesn't work anymore (filed as Debian bug 1024092). It seems it also has problems with scaling.

Alternatives:

See also this list and that list for other list of image viewers, not necessarily ported to Wayland.

TODO: pick an alternative to geeqie, nomacs would be gorgeous if it wouldn't be basically abandoned upstream (no release since 2020), has an unpatched CVE-2020-23884 since July 2020, does bad vendoring, and is in bad shape in Debian (4 minor releases behind).

So for now I'm still grumpily using Geeqie.

Media player: mpv, gmpc / sublime

This is basically unchanged. mpv seems to work fine under Wayland, better than Xorg on my new laptop (as mentioned in the introduction), and that is before the version which improves Wayland support significantly by bringing native Pipewire support and DMA-BUF support.

gmpc is more of a problem, mainly because it is abandoned. See 2022-08-22-gmpc-alternatives for the full discussion, one of the alternatives there will likely support Wayland.

Finally, I might just switch to sublime-music instead... In any case, not many changes here, thankfully.

Screensaver: xsecurelock → swaylock

I was previously using xss-lock and xsecurelock as a screensaver, with xscreensaver "hacks" as a backend for xsecurelock.

The basic screensaver in Sway seems to be built with swayidle and swaylock. It's interesting because it's the same "split" design as xss-lock and xsecurelock.

That, unfortunately, does not include the fancy "hacks" provided by xscreensaver, and that is unlikely to be implemented upstream.

Other alternatives include gtklock and waylock (zig), which do not solve that problem either.

It looks like swaylock-plugin, a swaylock fork, at least attempts to solve this problem, although not by directly using the real xscreensaver hacks. swaylock-effects is another attempt at this, but it only adds more effects; it doesn't delegate the image display.

Other than that, maybe it's time to just let go of those funky animations and just let swaylock do its thing, which is to display a static image or just a black screen, which is fine by me.

In the end, I am just using swayidle with a configuration based on the systemd integration wiki page but with additional tweaks from this service, see the resulting swayidle.service file.

Interestingly, damjan also has a service for swaylock itself, although it's not clear to me what its purpose is...

Screenshot: maim → grim, pubpaste

I'm a heavy user of maim (and a package uploader in Debian). It looks like the direct replacement to maim (and slop) is grim (and slurp). There's also swappy which goes on top of grim and allows preview/edit of the resulting image, nice touch (not in Debian though).

See also awesome-wayland screenshots for other alternatives: there are many, including X11 tools like Flameshot that also support Wayland.

One key problem here was that I have my own screenshot / pastebin software which needed an update for Wayland as well. That, thankfully, meant actually cleaning up a lot of horrible code that involved calling xterm and xmessage for user interaction. Now, pubpaste uses GTK for prompts and looks much better. (And before anyone freaks out, I already had to use GTK for proper clipboard support, so this isn't much of a stretch...)

Screen recorder: simplescreenrecorder → wf-recorder

In Xorg, I have used both peek or simplescreenrecorder for screen recordings. The former will work in Wayland, but has no sound support. The latter has a fork with Wayland support but it is limited and buggy ("doesn't support recording area selection and has issues with multiple screens").

It looks like wf-recorder will just do everything correctly out of the box, including audio support (with --audio, duh). It's also packaged in Debian.

One has to wonder how this works while keeping the "between app security" that Wayland promises, however... Would installing such a program make my system less secure?

Many other options are available, see the awesome Wayland screencasting list.

RSI: workrave → nothing?

Workrave has no support for Wayland. activity watch is a time tracker alternative, but is not an RSI watcher. KDE has rsiwatcher, but that's a bit too much on the heavy side for my taste.

SafeEyes looks like an alternative at first, but it has many issues under Wayland (escape doesn't work, idle doesn't work, it just doesn't work really). timekpr-next could be an alternative as well, and has support for Wayland.

I am also considering just abandoning workrave, even if I stick with Xorg, because it apparently introduces significant latency in the input pipeline.

And besides, I've developed a pretty unhealthy alert fatigue with Workrave. I have used the program for so long that my fingers know exactly where to click to dismiss those warnings very effectively. It makes my work just more irritating, and doesn't fix the fundamental problem I have with computers.

Other apps

This is a constantly changing list, of course. There's a bit of a "death by a thousand cuts" in migrating to Wayland because you realize how many things you were using are tightly bound to X.

  • .Xresources - just say goodbye to that old resource system, it was used, in my case, only for rofi, xterm, and ... Xboard!?

  • keyboard layout switcher: built-in to Sway since 2017 (PR 1505, 1.5rc2+), requires a small configuration change, see this answer as well, looks something like this command:

     swaymsg input 0:0:X11_keyboard xkb_layout de
    

    or using this config:

     input * {
         xkb_layout "ca,us"
         xkb_options "grp:sclk_toggle"
     }
    

    That works refreshingly well, even better than in Xorg, I must say.

    swaykbdd is an alternative that supports per-window layouts (in Debian).

  • wallpaper: currently using feh, will need a replacement, TODO: figure out something that does, like feh, a random shuffle. swaybg just loads a single image, duh. oguri might be a solution, but unmaintained, used here, not in Debian. wallutils is another option, also not in Debian. For now I just don't have a wallpaper, the background is a solid gray, which is better than Xorg's default (which is whatever crap was left around a buffer by the previous collection of programs, basically)

  • notifications: currently using dunst in some places, which works well in both Xorg and Wayland, not a blocker, fnott (not in Debian), salut (not in Debian) possible alternatives, damjan uses mako. TODO: install dunst everywhere

  • notification area: I had trouble making nm-applet work; see the full discussion in the status bar section above. The short version: you need to pass --indicator, the resulting tray icon displays but is not clickable, and if you don't see the icon at all, check the bar.tray_output property in the Sway config (try tray_output *). I eventually fixed this by switching from py3status to waybar.

  • window switcher: in i3 I was using this bespoke i3-focus script, which doesn't work under Sway, swayr an option, not in Debian. So I put together this other bespoke hack from multiple sources, which works.

  • PDF viewer: currently using atril and sioyek (both of which support Wayland), could also just switch to zathura/mupdf permanently, see also calibre for a discussion on document viewers

See also this list of useful addons and this other list for other app alternatives.

More X11 / Wayland equivalents

For all the tools above, it's not exactly clear what options exist in Wayland, or when they do, which one should be used. But for some basic tools, it seems the options are actually quite clear. If that's the case, they should be listed here:

X11          Wayland                            In Debian
arandr       wdisplays                          yes
autorandr    kanshi                             yes
xclock       wlclock                            no
xdotool      wtype                              yes
xev          wev, xkbcli interactive-wayland    yes
xlsclients   swaymsg -t get_tree                yes
xprop        wlprop or swaymsg -t get_tree      no
xrandr       wlr-randr                          yes

lswt is a more direct replacement for xlsclients but is not packaged in Debian.

xkbcli interactive-wayland is part of the libxkbcommon-tools package.

See also:

Note that arandr and autorandr are not directly part of X. arewewaylandyet.com refers to a few alternatives. We suggest wdisplays and kanshi above (see also this service file) but wallutils can also do the autorandr stuff, apparently, and nwg-displays can do the arandr part. shikane is a promising kanshi rewrite in Rust. None (but kanshi) are packaged in Debian yet.

So I have tried wdisplays and it Just Works, and well. The UI even looks better and more usable than arandr, so another clean win from Wayland here.

I'm currently using kanshi as an autorandr replacement and it mostly works. It can be hard to figure out the right configuration, and auto-detection doesn't always work. A key feature missing for me is the profile-saving functionality that autorandr has, which makes it much easier to use.

Other issues

systemd integration

I've had trouble getting session startup to work. This is partly because I had a kind of funky system to start my session in the first place. I used to have my whole session started from .xsession like this:

#!/bin/sh

. ~/.shenv

systemctl --user import-environment

exec systemctl --user start --wait xsession.target

But obviously, the xsession.target is not started by the Sway session. It seems to just start a default.target, which is really not what we want because we want to associate the services directly with the graphical-session.target, so that they don't start when logging in over (say) SSH.

damjan on #debian-systemd showed me his sway-setup which features systemd integration. It involves starting a different session in a completely new .desktop file. That work was submitted upstream but refused on the grounds that "I'd rather not give a preference to any particular init system." Another PR was abandoned because "restarting sway does not makes sense: that kills everything".

The work was therefore moved to the wiki.

So. Not a great situation. The upstream wiki systemd integration suggests starting the systemd target from within Sway, which has all sorts of problems:

  • you don't get Sway logs anywhere
  • control groups are all messed up

I have done a lot of work trying to figure this out, but I remember that starting systemd from Sway didn't actually work for me: my previously configured systemd units didn't correctly start, and especially not with the right $PATH and environment.

So I went down that rabbit hole and managed to correctly configure Sway to be started from the systemd --user session. I have partly followed the wiki but also picked ideas from damjan's sway-setup and xdbob's sway-services. Another option is uwsm (not in Debian).

This is the config I have in .config/systemd/user/:

I have also configured those services, but that's somewhat optional:

You will also need at least part of my sway/config, which sends the systemd notification (because, no, Sway doesn't support any sort of readiness notification, that would be too easy). And you might like to see my swayidle-config while you're there.

Finally, you need to hook this up somehow to the login manager. This is typically done with a desktop file, so drop sway-session.desktop in /usr/share/wayland-sessions and sway-user-service somewhere in your $PATH (typically /usr/bin/sway-user-service).
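
The desktop file itself is tiny; something like this (a sketch, with the Exec path being an assumption about where you dropped the script):

# /usr/share/wayland-sessions/sway-session.desktop
[Desktop Entry]
Name=Sway (systemd user session)
Comment=Wayland compositor started through the systemd user manager
Exec=/usr/bin/sway-user-service
Type=Application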

The session then looks something like this:

$ systemd-cgls | head -101
Control group /:
-.slice
├─user.slice (#472)
│ → user.invocation_id: bc405c6341de4e93a545bde6d7abbeec
│ → trusted.invocation_id: bc405c6341de4e93a545bde6d7abbeec
│ └─user-1000.slice (#10072)
│   → user.invocation_id: 08f40f5c4bcd4fd6adfd27bec24e4827
│   → trusted.invocation_id: 08f40f5c4bcd4fd6adfd27bec24e4827
│   ├─user@1000.service … (#10156)
│   │ → user.delegate: 1
│   │ → trusted.delegate: 1
│   │ → user.invocation_id: 76bed72a1ffb41dca9bfda7bb174ef6b
│   │ → trusted.invocation_id: 76bed72a1ffb41dca9bfda7bb174ef6b
│   │ ├─session.slice (#10282)
│   │ │ ├─xdg-document-portal.service (#12248)
│   │ │ │ ├─9533 /usr/libexec/xdg-document-portal
│   │ │ │ └─9542 fusermount3 -o rw,nosuid,nodev,fsname=portal,auto_unmount,subt…
│   │ │ ├─xdg-desktop-portal.service (#12211)
│   │ │ │ └─9529 /usr/libexec/xdg-desktop-portal
│   │ │ ├─pipewire-pulse.service (#10778)
│   │ │ │ └─6002 /usr/bin/pipewire-pulse
│   │ │ ├─wireplumber.service (#10519)
│   │ │ │ └─5944 /usr/bin/wireplumber
│   │ │ ├─gvfs-daemon.service (#10667)
│   │ │ │ └─5960 /usr/libexec/gvfsd
│   │ │ ├─gvfs-udisks2-volume-monitor.service (#10852)
│   │ │ │ └─6021 /usr/libexec/gvfs-udisks2-volume-monitor
│   │ │ ├─at-spi-dbus-bus.service (#11481)
│   │ │ │ ├─6210 /usr/libexec/at-spi-bus-launcher
│   │ │ │ ├─6216 /usr/bin/dbus-daemon --config-file=/usr/share/defaults/at-spi2…
│   │ │ │ └─6450 /usr/libexec/at-spi2-registryd --use-gnome-session
│   │ │ ├─pipewire.service (#10403)
│   │ │ │ └─5940 /usr/bin/pipewire
│   │ │ └─dbus.service (#10593)
│   │ │   └─5946 /usr/bin/dbus-daemon --session --address=systemd: --nofork --n…
│   │ ├─background.slice (#10324)
│   │ │ └─tracker-miner-fs-3.service (#10741)
│   │ │   └─6001 /usr/libexec/tracker-miner-fs-3
│   │ ├─app.slice (#10240)
│   │ │ ├─xdg-permission-store.service (#12285)
│   │ │ │ └─9536 /usr/libexec/xdg-permission-store
│   │ │ ├─gammastep.service (#11370)
│   │ │ │ └─6197 gammastep
│   │ │ ├─dunst.service (#11958)
│   │ │ │ └─7460 /usr/bin/dunst
│   │ │ ├─wterminal.service (#13980)
│   │ │ │ ├─69100 foot --title pop-up
│   │ │ │ ├─69101 /bin/bash
│   │ │ │ ├─77660 sudo systemd-cgls
│   │ │ │ ├─77661 head -101
│   │ │ │ ├─77662 wl-copy
│   │ │ │ ├─77663 sudo systemd-cgls
│   │ │ │ └─77664 systemd-cgls
│   │ │ ├─syncthing.service (#11995)
│   │ │ │ ├─7529 /usr/bin/syncthing -no-browser -no-restart -logflags=0 --verbo…
│   │ │ │ └─7537 /usr/bin/syncthing -no-browser -no-restart -logflags=0 --verbo…
│   │ │ ├─dconf.service (#10704)
│   │ │ │ └─5967 /usr/libexec/dconf-service
│   │ │ ├─gnome-keyring-daemon.service (#10630)
│   │ │ │ └─5951 /usr/bin/gnome-keyring-daemon --foreground --components=pkcs11…
│   │ │ ├─gcr-ssh-agent.service (#10963)
│   │ │ │ └─6035 /usr/libexec/gcr-ssh-agent /run/user/1000/gcr
│   │ │ ├─swayidle.service (#11444)
│   │ │ │ └─6199 /usr/bin/swayidle -w
│   │ │ ├─nm-applet.service (#11407)
│   │ │ │ └─6198 /usr/bin/nm-applet --indicator
│   │ │ ├─wcolortaillog.service (#11518)
│   │ │ │ ├─6226 foot colortaillog
│   │ │ │ ├─6228 /bin/sh /home/anarcat/bin/colortaillog
│   │ │ │ ├─6230 sudo journalctl -f
│   │ │ │ ├─6233 ccze -m ansi
│   │ │ │ ├─6235 sudo journalctl -f
│   │ │ │ └─6236 journalctl -f
│   │ │ ├─afuse.service (#10889)
│   │ │ │ └─6051 /usr/bin/afuse -o mount_template=sshfs -o transform_symlinks -…
│   │ │ ├─gpg-agent.service (#13547)
│   │ │ │ ├─51662 /usr/bin/gpg-agent --supervised
│   │ │ │ └─51719 scdaemon --multi-server
│   │ │ ├─emacs.service (#10926)
│   │ │ │ ├─ 6034 /usr/bin/emacs --fg-daemon
│   │ │ │ └─33203 /usr/bin/aspell -a -m -d en --encoding=utf-8
│   │ │ ├─xdg-desktop-portal-gtk.service (#12322)
│   │ │ │ └─9546 /usr/libexec/xdg-desktop-portal-gtk
│   │ │ ├─xdg-desktop-portal-wlr.service (#12359)
│   │ │ │ └─9555 /usr/libexec/xdg-desktop-portal-wlr
│   │ │ └─sway.service (#11037)
│   │ │   ├─6037 /usr/bin/sway
│   │ │   ├─6181 swaybar -b bar-0
│   │ │   ├─6209 py3status
│   │ │   ├─6309 /usr/bin/i3status -c /tmp/py3status_oy4ntfnq
│   │ │   └─6969 Xwayland :0 -rootless -terminate -core -listen 29 -listen 30 -…
│   │ └─init.scope (#10198)
│   │   ├─5909 /lib/systemd/systemd --user
│   │   └─5911 (sd-pam)
│   └─session-7.scope (#10440)
│     ├─5895 gdm-session-worker [pam/gdm-password]
│     ├─6028 /usr/libexec/gdm-wayland-session --register-session sway-user-serv…
[...]

I think that's pretty neat.

Environment propagation

At first, my terminals and rofi didn't have the right $PATH, which broke a lot of my workflow. It's hard to tell exactly how Wayland gets started or where to inject environment. This discussion suggests a few alternatives and this Debian bug report discusses this issue as well.

I eventually picked environment.d(5) since I already manage my user session with systemd, and it fixes a bunch of other problems. I used to have a .shenv that I had to manually source everywhere. The only problem with that approach is that it doesn't support conditionals, but that's something that's rarely needed.
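
For the $PATH problem specifically, a single drop-in file is enough. A sketch (environment.d(5) supports simple $VAR references):

# ~/.config/environment.d/50-path.conf
PATH=$HOME/bin:$HOME/.local/bin:$PATH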

Pipewire

This is a whole topic onto itself, but migrating to Wayland also involves using Pipewire if you want screen sharing to work. You can actually keep using Pulseaudio for audio, that said, but that migration is actually something I've wanted to do anyways: Pipewire's design seems much better than Pulseaudio, as it folds in JACK features which allows for pretty neat tricks. (Which I should probably show in a separate post, because this one is getting rather long.)

I first tried this migration in Debian bullseye, and it didn't work very well. Ardour would fail to export tracks and I would get into weird situations where streams would just drop mid-way.

A particularly funny incident is when I was in a meeting and I couldn't hear my colleagues speak anymore (but they could) and I went on blabbering on my own for a solid 5 minutes until I realized what was going on. By then, people had tried numerous ways of letting me know that something was off, including (apparently) coughing, saying "hello?", chat messages, IRC, and so on, until they just gave up and left.

I suspect that was also a Pipewire bug, but it could also have been that I muted the tab by mistake, as I recently learned that clicking on the little speaker icon on a tab mutes that tab. Since the tab itself can get pretty small when you have lots of them, it actually happens quite frequently that I mistakenly mute tabs.

Anyways. Point is: I already knew how to make the migration, and I had already documented how to make the change in Puppet. It's basically:

apt install pipewire pipewire-audio-client-libraries pipewire-pulse wireplumber 

Then, as a regular user:

systemctl --user daemon-reload
systemctl --user --now disable pulseaudio.service pulseaudio.socket
systemctl --user --now enable pipewire pipewire-pulse
systemctl --user mask pulseaudio
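
To confirm the switch actually took, ask the PulseAudio-compatible socket who is answering (a sketch, assuming pulseaudio-utils is installed for pactl):

pactl info | grep '^Server Name'
# with pipewire-pulse running, this reports something like:
# Server Name: PulseAudio (on PipeWire 0.3.65)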

An optional (but key, IMHO) configuration you should also make is to "switch on connect", which will make your Bluetooth or USB headset automatically be the default route for audio, when connected. In ~/.config/pipewire/pipewire-pulse.conf.d/autoconnect.conf:

context.exec = [
    { path = "pactl"        args = "load-module module-always-sink" }
    { path = "pactl"        args = "load-module module-switch-on-connect" }
    #{ path = "/usr/bin/sh"  args = "~/.config/pipewire/default.pw" }
]

See the excellent — as usual — Arch wiki page about Pipewire for that trick and more information about Pipewire. Note that you must not put the file in ~/.config/pipewire/pipewire.conf (or pipewire-pulse.conf, maybe) directly, as that will break your setup. If you want to add to that file, copy the template from /usr/share/pipewire/pipewire-pulse.conf first.

So far I'm happy with Pipewire in bookworm, but I've heard mixed reports from it. I have high hopes it will become the standard media server for Linux in the coming months or years, which is great because I've been (rather boldly, I admit) on the record saying I don't like PulseAudio.

Rereading this now, I feel it might have been a little unfair, as "over-engineered and tries to do too many things at once" applies probably even more to Pipewire than PulseAudio (since it also handles video dispatching).

That said, I think Pipewire took the right approach by implementing existing interfaces like Pulseaudio and JACK. That way we're not adding a third (or fourth?) way of doing audio in Linux; we're just making the server better.

Keypress drops

Sometimes I lose keyboard presses. This correlates with the following warning from Sway:

déc 06 10:36:31 curie sway[343384]: 23:32:14.034 [ERROR] [wlr] [libinput] event5  - SONiX USB Keyboard: client bug: event processing lagging behind by 37ms, your system is too slow 

... and corresponds to an open bug report in Sway. It seems the "system is too slow" should really be "your compositor is too slow" which seems to be the case here on this older system (curie). It doesn't happen often, but it does happen, particularly when a bunch of busy processes start in parallel (in my case: a linter running inside a container and notmuch new).

The proposed fix for this in Sway is to gain real time privileges and add the CAP_SYS_NICE capability to the binary. We'll see how that goes in Debian once 1.8 gets released and shipped.

Output mirroring

Sway does not support output mirroring, a strange limitation considering the flexibility that software like wdisplays seem to offer.

(In practice, if you layout two monitors on top of each other in that configuration, they do not actually mirror. Instead, sway assigns a workspace to each monitor, as if they were next to each other but, confusingly, the cursor appears in both monitors. It's extremely disorienting.)

The bug report has been open since 2018 and has seen a long discussion, but basically no progress. Part of the problem is the ticket tries to tackle "more complex configurations" as well, not just output mirroring, so it's a long and winding road.

Note that other Wayland compositors (e.g. Hyprland, GNOME's Mutter) do support mirroring, so it's not a fundamental limitation of Wayland.

One workaround is to use a tool like wl-mirror to make a window that mirrors a specific output and place that in a different workspace. That way you place the output you want to mirror to next to the output you want to mirror from, and use wl-mirror to copy between the two outputs. The problem is that wl-mirror is not packaged in Debian yet.

Another workaround mentioned in the thread is to use a presentation tool which supports mirroring on its own, or presenter notes. So far I have generally found workarounds for the problem, but it might be a big limitation for others.

Improvements over i3

Tiling improvements

There's a lot of improvements Sway could bring over using plain i3. There are pretty neat auto-tilers that could replicate the configurations I used to have in Xmonad or Awesome, see:

Display latency tweaks

TODO: You can tweak the display latency in wlroots compositors with the max_render_time parameter, possibly getting lower latency than X11 in the end.
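
In Sway, that is an output setting; a sketch of what the tweak looks like in the config (the 5ms budget is just an example, see the sway-output(5) manual page for the semantics):

# give the compositor at most 5ms to render each frame, trading safety margin for latency
output * max_render_time 5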

Sound/brightness changes notifications

TODO: Avizo can display a pop-up to give feedback on volume and brightness changes. Not in Debian. Other alternatives include SwayOSD and sway-nc, also not in Debian.

Debugging tricks

The xeyes tool (in the x11-apps package) will run under Wayland, and can actually be used to easily see whether a given window is running natively. Hover the cursor over the window you want to test: if the "eyes" follow it, the app is actually running in Xwayland, so not natively in Wayland.

Another way to see what is using Wayland in Sway is with the command:

swaymsg -t get_tree
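
If you have jq installed, you can filter that JSON dump to list only the windows still going through Xwayland (a sketch; Sway marks those nodes with "shell": "xwayland"):

swaymsg -t get_tree | jq -r '.. | select(.shell? == "xwayland") | .name'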

Other documentation

Conclusion

In general, this took me a long time, but it mostly works. The tray icon situation is pretty frustrating, but there's a workaround and I have high hopes it will eventually fix itself. I'm also actually worried about the DisplayLink support because I eventually want to be using this, but hopefully that's another thing that will fix itself before I need it.

A word on the security model

I'm kind of worried about all the hacks that have been added to Wayland just to make things work. Pretty much everywhere we need to, we punched a hole in the security model:

Wikipedia describes the security properties of Wayland as it "isolates the input and output of every window, achieving confidentiality, integrity and availability for both." I'm not sure those are actually realized in the actual implementation, because of all those holes punched in the design, at least in Sway. For example, apparently the GNOME compositor doesn't have the virtual-keyboard protocol, but they do have (another?!) text input protocol.

Wayland does offer a better basis to implement such a system, however. It feels like the Linux applications security model lacks critical decision points in the UI, like the user approving "yes, this application can share my screen now". Applications themselves might have some of those prompts, but it's not mandatory, and that is worrisome.

30 May, 2023 05:27PM

Russ Allbery

Review: The Mimicking of Known Successes

Review: The Mimicking of Known Successes, by Malka Older

Series: Mossa and Pleiti #1
Publisher: Tordotcom
Copyright: 2023
ISBN: 1-250-86051-2
Format: Kindle
Pages: 169

The Mimicking of Known Successes is a science fiction mystery novella, the first of an expected series. (The second novella is scheduled to be published in February of 2024.)

Mossa is an Investigator, called in after a man disappears from the eastward platform on the 4°63' line. It's an isolated platform, five hours away from Mossa's base, and home to only four residential buildings and a pub. The most likely explanation is that the man jumped, but his behavior before he disappeared doesn't seem consistent with that theory. He was bragging about being from Valdegeld University, talking to anyone who would listen about the important work he was doing — not typically the behavior of someone who is suicidal. Valdegeld is the obvious next stop in the investigation.

Pleiti is a Classics scholar at Valdegeld. She is also Mossa's ex-girlfriend, making her both an obvious and a fraught person to ask for investigative help. Mossa is the last person she expected to be waiting for her on the railcar platform when she returns from a trip to visit her parents.

The Mimicking of Known Successes is mostly a mystery, following Mossa's attempts to untangle the story of what happened to the disappeared man, but as you might have guessed there's a substantial sapphic romance subplot. It's also at least adjacent to Sherlock Holmes: Mossa is brilliant, observant, somewhat monomaniacal, and very bad at human relationships. All of this story except for the prologue is told from Pleiti's perspective as she plays a bit of a Watson role, finding Mossa unreadable, attractive, frustrating, and charming in turn. Following more recent Holmes adaptations, Mossa is portrayed as probably neurodivergent, although the story doesn't attach any specific labels.

I have no strong opinions about this novella. It was fine? There's a mystery with a few twists, there's a sapphic romance of the second chance variety, there's a bit of action and a bit of hurt/comfort after the action, and it all felt comfortably entertaining but kind of predictable. Susan Stepney has a "passes the time" review rating, and while that may be a bit harsh, that's about where I ended up.

The most interesting part of the story is the science fiction setting. We're some indefinite period into the future. Humans have completely messed up Earth to the point of making it uninhabitable. We then took a shot at terraforming Mars and messed that planet up to the point of uninhabitability as well. Now, what's left of humanity (maybe not all of it — the story isn't clear) lives on platforms connected by rail lines high in the atmosphere of Jupiter. (Everyone in the story calls Jupiter "Giant" for reasons that I didn't follow, given that they didn't rename any of its moons.) Pleiti's position as a Classics scholar means that she studies Earth and its now-lost ecosystems, whereas the Modern faculty focus on their new platform life.

This background does become relevant to the mystery, although exactly how is not clear at the start.

I wouldn't call this a very realistic setting. One has to accept that people are living on platforms attached to artificial rings around the solar system's largest planet and walk around in shirt sleeves with only minor technological support, due to "atmoshields" of some unspecified capability, and where the native atmosphere plays the role of London fog. Everything feels vaguely Edwardian, down to the occasional human porter and message runner, which matches the story concept but seems unlikely as a plausible future culture. I also disbelieve in humanity's ability to do anything to Earth that would make it less inhabitable than the clouds of Jupiter.

That said, the setting is a lot of fun, which is probably more important. It's fun to try to visualize, and it has that slightly off-balance, occasionally surprising feel of science fiction settings where everyone is recognizably human but the things they consider routine and unremarkable are unexpected by the reader.

This novella also has a great title. The Mimicking of Known Successes is simultaneously a reference to a specific plot point from late in the story, a nod to the shape of the romance, and an acknowledgment of the Holmes pastiche, and all of those references work even better once you know what the plot point is. That was nicely done.

This was not very memorable apart from the setting, but it was pleasant enough. I can't say that I'm inspired to pre-order the next novella in this series, but I also wouldn't object to reading it. If you're in the mood for gender-swapped Holmes in an exotic setting, you could do worse.

Followed by The Imposition of Unnecessary Obstacles.

Rating: 6 out of 10

30 May, 2023 02:09AM

May 29, 2023

hackergotchi for Shirish Agarwal

Shirish Agarwal

Pearls of Luthra, Dahaad, Tetris & Discord.

Pearls of Luthra

Pearls of Luthra is the first book I have read by Brian Jacques and I think I am going to be a fan of his work. You have to be wary of this particular book. While it is a beautiful book with quite a few illustrations, I have to warn that if you are somebody who feels hungry at the very mention of food, then you will be hungry throughout the book. There isn't a single page where food isn't mentioned, and not just any kind of food, but the kind of food that is geared towards the sweet tooth. So if you fancy tarts or chocolates or anything sweet, you will be right at home. The book also touches upon various teas, wines and liquors, but food is where it literally shines. The tale is very much like a Harry Potter adventure but isn't as dark as HP was. In fact, apart from one death and one missing ear, the rest of our heroes and heroines (and there are quite a few) come through fine. I don't want to give too much away as it's a book to be treasured.

Dahaad

Dahaad (the roar) is Sonakshi Sinha's entry into OTT/web series. The stage is set somewhere in North India, while the exploits are based on a real-life person called Cyanide Mohan who killed 20 women between 2005 and 2009. In the web series, however, the antagonist's crimes are spread over a period of 12 years and he has 29 women as his victims. Apart from that, it's pretty much a copy of what was done by the person above. It's a melting pot of a series with quite a few stories enmeshed along with the main one. The main onus and plot of the series is about women from lower economic and caste orders whose families want them to be wed but cannot due to huge demands for dowry. Now in such a situation, if a person were to give them a bit of attention, promise marriage, and ask them to steal a bit and come with him and whatever, they will do it. The same modus operandi was used by Cyanide Mohan. He had a car that was not actually his but used it to show off that he was from a richer background, entice the women, have sex, promise marriage, and then the morning-after pill would contain cyanide, which the women would unwittingly consume.

This is also framed by the protagonist (played by Sonakshi Sinha) to her mother, who is also pressuring her to get married as she is getting older. She shows some of the photographs of the victims and says that while the perpetrator is guilty, so is the overall society that puts women in such vulnerable positions. AFAIK, that is still the state of things. In fact, there is a series called 'Indian Matchmaking' that has all the snobbishness that you want. How many people could have a lifestyle like the ones shown in it? Less than 2% of the population. It's actually shows like that which make the whole thing even more precarious 😦

Apart from it, the show also shows prejudice about caste and background. I wouldn’t go much into it as it’s worth seeing and experiencing.

Tetris

Tetris in many ways is a story of greed. It’s also a story of a lone inventor who had to wait almost 20 odd years to profit from his invention. Forbes does a marvelous job of giving some more background and foreground info about Tetris, the inventor, and the producer who went on to strike it rich. The movie also shows how copyright misrepresentation happens but does nothing to address it. I could talk a whole lot more, but it’s better to see the movie and draw your own conclusions. For me it was 4/5.

Discord

Discord became Discord 2.0 and is a blank to me. A blank page. I can’t do anything. First I thought it was a bug and waited for a few days, as sometimes web services do fix themselves. But two weeks on it still wasn’t fixed, so I decided to look under the hood. One of the tools in Firefox is the Web Developer Tools (Ctrl+Shift+I), which tells you if an element of a page is not appearing, or at least gives you a hint. It gave me the following –


Content Security Policy: Ignoring “'unsafe-inline'” within script-src or style-src: nonce-source or hash-source specified
Content Security Policy: The page’s settings blocked the loading of a resource at data:text/css,%0A%20%20%20%20%20%20%20%2… (“style-src”). data:44:30
Content Security Policy: Ignoring “'unsafe-inline'” within script-src or style-src: nonce-source or hash-source specified
TypeError: AudioContext is not a constructor 138875 https://discord.com/assets/cbf3a75da6e6b6a4202e.js:262 l https://discord.com/assets/f5f0b113e28d4d12ba16.js:1ed46a18578285e5c048b.js:241:118

What is happening is that dom.webaudio.enabled is disabled in Firefox.

Then on a hunch, I searched on Reddit and saw the following. Be careful while visiting the link as it’s labelled NSFW, although to my mind there wasn’t anything remotely NSFW about it. They do mention using another tool, ‘AudioContext Fingerprint Defender’, which supposedly fakes or spoofs an id. As this add-on isn’t tracked by the Firefox privacy team, it’s hard for me to say anything positive or negative.

So, in the end I stopped using Discord, as the alternative was to be tracked by them 😦

Last but not least, I saw this about a week back. Sooner or later this had to happen as Elon tries to make money off Twitter.


29 May, 2023 11:49PM by shirishag75

John Goerzen

Recommendations for Tools for Backing Up and Archiving to Removable Media

I have several TB worth of family photos, videos, and other data. This needs to be backed up — and archived.

Backups and archives are often thought of as similar. And indeed, they may be done with the same tools at the same time. But the goals differ somewhat:

Backups are designed to recover from a disaster that you can fairly rapidly detect.

Archives are designed to survive for many years, protecting against disaster not only impacting the original equipment but also the original person that created them.

Reflecting on this, it implies that while a nice ZFS snapshot-based scheme that supports twice-hourly backups may be fantastic for that purpose, if you think about things like family members being able to access it if you are incapacitated, or accessibility in a few decades’ time, it becomes much less appealing for archives. ZFS doesn’t have the wide software support that NTFS, FAT, UDF, ISO-9660, etc. do.

This post isn’t about the pros and cons of the different storage media, nor is it about the pros and cons of cloud storage for archiving; these conversations can readily be found elsewhere. Let’s assume, for the point of conversation, that we are considering BD-R optical discs as well as external HDDs, both of which are too small to hold the entire backup set.

What would you use for archiving in these circumstances?

Establishing goals

The goals I have are:

  • Archives can be restored using Linux or Windows (even though I don’t use Windows, this requirement will ensure the broadest compatibility in the future)
  • The archival system must be able to accommodate periodic updates consisting of new files, deleted files, moved files, and modified files, without requiring a rewrite of the entire archive dataset
  • Archives can ideally be mounted on any common OS and the component files directly copied off
  • Redundancy must be possible. In the worst case, one could manually copy one drive/disc to another. Ideally, the archiving system would automatically track making n copies of data.
  • While a full restore may be a goal, simply finding one file or one directory may also be a goal. Ideally, an archiving system would be able to quickly tell me which discs/drives contain a given file.
  • Ideally, preserves as much POSIX metadata as possible (hard links, symlinks, modification date, permissions, etc). However, for the archiving case, this is less important than for the backup case, with the possible exception of modification date.
  • Must be easy enough to do, and sufficiently automatable, to allow frequent updates without error-prone or time-consuming manual hassle

I would welcome your ideas for what to use. Below, I’ll highlight different approaches I’ve looked into and how they stack up.

Basic copies of directories

The initial approach might be one of simply copying directories across. This would work well if the data set to be archived is smaller than the archival media. In that case, you could just burn or rsync a new copy with every update and be done. Unfortunately, this is much less convenient with data of the size I’m dealing with: a simple rsync of everything onto a single device isn’t an option when the data doesn’t fit. With some datasets, you could manually design some rsyncs to store individual directories on individual devices, but that gets unwieldy fast and isn’t scalable.

You could use something like my datapacker program to split the data across multiple discs/drives efficiently. However, updates will be a problem; you’d have to re-burn the entire set to get a consistent copy, or rely on external tools like mtree to reflect deletions. Not very convenient in any case.

So I won’t be using this.

tar or zip

While you can split tar and zip files across multiple media, they have a lot of issues. GNU tar’s incremental mode is clunky and buggy; zip is even worse. tar files can’t be read randomly, making it extremely time-consuming to extract just certain files out of a tar file.

The only thing going for these formats (and especially zip) is the wide compatibility for restoration.

dar

Here we start to get into the more interesting tools. Dar is, in my opinion, one of the best Linux tools that few people know about. Since I first wrote about dar in 2008, it’s added some interesting new features; among them, binary deltas and cloud storage support. So, dar has quite a few interesting features that I make use of in other ways, and could also be quite helpful here:

  • Dar can both read and write files sequentially (streaming, like tar), or with random-access (quick seek to extract a subset without having to read the entire archive)
  • Dar can apply compression to individual files, rather than to the archive as a whole, facilitating both random access and resilience (corruption in one file doesn’t invalidate all subsequent files). Dar also supports numerous compression algorithms including gzip, bzip2, xz, lzo, etc., and can omit compressing already-compressed files.
  • The end of each dar file contains a central directory (dar calls this a catalog). The catalog contains everything necessary to extract individual files from the archive quickly, as well as everything necessary to make a future incremental archive based on this one. Additionally, dar can make and work with “isolated catalogs” — a file containing the catalog only, without data.
  • Dar can split the archive into multiple pieces called slices. This can best be done with fixed-size slices (--slice and --first-slice options), which lets the catalog record the slice number and preserves random access capabilities. With the --execute option, dar can easily wait for a given slice to be burned, etc.
  • Dar normally stores an entire new copy of a modified file, but can optionally store an rdiff binary delta instead. This has the potential to be far smaller (think of a case of modifying metadata for a photo, for instance).

Additionally, dar comes with a dar_manager program. dar_manager makes a database out of dar catalogs (or archives). This can then be used to identify the precise archive containing a particular version of a particular file.

All this combines to make a useful system for archiving. Isolated catalogs are tiny, and it would be easy enough to include the isolated catalogs for the entire set of archives that came before (or even the dar_manager database file) with each new incremental archive. This would make restoration of a particular subset easy.
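
As a concrete illustration, here is a rough sketch of the kind of workflow this enables. It is untested; the paths, slice size and archive names are placeholders, and the exact options should be checked against the dar and dar_manager man pages.

# create a full archive of /data, split into 23G slices, with per-file compression
dar -c /mnt/stage/photos-full -R /data -s 23G -z

# isolate a small catalog from the full archive
dar -C /mnt/catalogs/photos-full-cat -A /mnt/stage/photos-full

# later: make an incremental archive, using the isolated catalog as the reference
dar -c /mnt/stage/photos-incr1 -R /data -A /mnt/catalogs/photos-full-cat -s 23G -z

# track everything in a dar_manager database and ask which archive holds a file
dar_manager -C /mnt/catalogs/photos.dmd
dar_manager -B /mnt/catalogs/photos.dmd -A /mnt/stage/photos-full
dar_manager -B /mnt/catalogs/photos.dmd -A /mnt/stage/photos-incr1
dar_manager -B /mnt/catalogs/photos.dmd -f 2023/vacation/img_0001.jpg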

The main thing to address with dar is that you do need dar to extract the archive. Every dar release comes with source code and a win64 build. dar also supports building a statically-linked Linux binary. It would therefore be easy to include win64 binary, Linux binary, and source with every archive run. dar is also a part of multiple Linux and BSD distributions, which are archived around the Internet. I think this provides a reasonable future-proofing to make sure dar archives will still be readable in the future.

The other challenge is user ability. While dar is highly portable, it is fundamentally a CLI tool and will require CLI abilities on the part of users. I suspect, though, that I could write up a few pages of instructions to include and make that a reasonably easy process. Not everyone can use a CLI, but I would expect that a person who could follow those instructions could be found readily enough.

One other benefit of dar is that it could easily be used with tapes. The LTO series is liked by various hobbyists, though it could pose formidable obstacles to non-hobbyists trying to access data in future decades. Additionally, since the archive is a big file, it lends itself to working with par2 to provide redundancy against certain amounts of data corruption.
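
For example, adding parity data to a slice could look something like this (a sketch only; the 10% redundancy figure and the file name are arbitrary):

# create ~10% parity data for a dar slice, then verify or repair it later
par2 create -r10 photos-full.1.dar
par2 verify photos-full.1.dar.par2
par2 repair photos-full.1.dar.par2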

git-annex

git-annex is an interesting program that is designed to facilitate managing large sets of data and moving it between repositories. git-annex has particular support for offline archive drives and tracks which drives contain which files.

The idea would be to store the data to be archived in a git-annex repository. Then git-annex commands could generate filesystem trees on the external drives (or trees to be burned to read-only media).

In a post about using git-annex for blu-ray backups, an earlier thread about DVD-Rs was mentioned.

This has a few interesting properties. For one, with due care, the files can be stored on archival media as regular files. There are some different options for how to generate the archives; some of them would place the entire git-annex metadata on each drive/disc. With that arrangement, one could access the individual files without git-annex. With git-annex, one could reconstruct the final (or any intermediate) state of the archive appropriately, handling deletions, renames, etc. You would also easily be able to know where copies of your files are.
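
To make this concrete, here is a minimal sketch of how such a repository might be used. It is untested, the repository paths and remote names are placeholders, and it uses plain clones on the drives rather than the directory special remote discussed below.

# create the main repository and add content
git init ~/archive && cd ~/archive
git annex init "main"
git annex add photos/
git commit -m "add photos"

# clone onto an external drive and let git-annex know about it
git clone ~/archive /mnt/drive2023/archive
(cd /mnt/drive2023/archive && git annex init "drive2023")

# copy file content to the drive and record that it is there
git remote add drive2023 /mnt/drive2023/archive
git annex copy --to drive2023 photos/

# ask git-annex which repositories hold a given file
git annex whereis photos/2023/img_0001.jpg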

The practice is somewhat more challenging. Hundreds of thousands of files — what I would consider a medium-sized archive — can pose some challenges, running into hours-long execution if used in conjunction with the directory special remote (but only minutes-long with a standard git-annex repo).

Ruling out the directory special remote, I had thought I could maybe just work with my files in git-annex directly. However, I ran into some challenges with that approach as well. I am uncomfortable with git-annex mucking about with hard links in my source data. While it does try to preserve timestamps in the source data, these are lost on the clones. I wrote up my best effort to work around all this.

In a forum post, the author of git-annex comments that “I don’t think that CDs/DVDs are a particularly good fit for git-annex, but it seems a couple of users have gotten something working.” The page he references is Managing a large number of files archived on many pieces of read-only medium. Some of that discussion is a bit dated (for instance, the directory special remote has the importtree feature that implements what was being asked for there), but has some interesting tips.

git-annex supplies win64 binaries, and git-annex is included with many distributions as well. So it should be nearly as accessible as dar in the future. Since git-annex would be required to restore a consistent recovery image, similar caveats as with dar apply; CLI experience would be needed, along with some written instructions.

Bacula and BareOS

Although primarily tape-based archivers, these do also nominally support drives and optical media. However, they are much more tailored as backup tools, especially with the ability to pull from multiple machines. They require a database and extensive configuration, making them a poor fit for both the creation and future extractability of this project.

Conclusions

I’m going to spend some more time with dar and git-annex, testing them out, and hope to write some future posts about my experiences.

29 May, 2023 04:57PM by John Goerzen

hackergotchi for Jonathan Carter

Jonathan Carter

MiniDebConf Germany 2023

This year I attended Debian Reunion Hamburg (aka MiniDebConf Germany) for the second time. My goal for this MiniDebConf was just to talk to people and make the most of the time I have there. No other specific plans or goals. Despite this simple goal, it was a very productive and successful event for me.

Tuesday 23rd:

  • Arrived much later than planned after about 18h of travel, went to bed early.

Wednesday 24th:

  • Was in a discussion about individual package maintainership.
  • Was in a discussion about the nature of Technical Committee.
  • Co-signed a copy of The Debian System book along with the other DDs
  • Submitted a BoF request for people who are present to bring issues to the attention of the DPL (and to others who are around).
  • Noticed I still had a blog entry draft about this event last year, and posted it just to get it done.
  • Had a stand-up meeting, was nice to see what everyone was working on.
  • Had some event budgeting discussions with Holger.
  • Worked a bit on a talk I haven’t submitted yet called “Current events” (it’s slightly punny, get it?) – it’s still very raw but I’m passively working on it just in case we need a backup talk over the weekend.
  • Had a discussion over lunch with someone who runs their HPC on Debian and learned about Octopus and Pac.
  • TIL (from -python) about pyproject.toml (https://pip.pypa.io/en/stable/reference/build-system/pyproject-toml/)
  • Was in a discussion about amd64 build times on our buildds and referred them to DSA. I also e-mailed DSA to ask them if there’s anything we can do to improve build times (since it affects both productivity and motivation).
  • Did some premium cola tasting with andrewsh
  • Had a discussion with Ilu about installers (and luks2 issues in Calamares), accessibility and organisational stuff.

Thursday 25th:

  • Spent quite a chunk of the morning in a usrmerge BoF. I’m very impressed by the amount of reading and research the people in the BoF did in gathering all the facts/data; it seems that there is now a way forward that will fix usrmerge in Debian in a way that could work for everyone. An extensive summary/proposal will be posted to debian-devel as soon as possible.
  • Mind was in zombie mode. So I did something easy and upgraded the host running this blog and a few other hosts to bookworm to see what would break.
  • Cheese and wine party, which resulted in a mao party that ran waaaay too late.

Friday 26th:

Saturday 27th:

  • Attended talks:
    • HTTP all the things – The rocky path from the basement into the “cloud”
    • Running Debian on a Smartphone
    • debvm – Ephemeral Virtual Debian Machines
    • Network Configuration on Debian Systems
    • Discussing changes to the Debian key package definition
    • Meet the Release Team
    • Towards collective decision-making and maintenance in the Debian base system
  • Performed some PGP key signing.
  • Edited group photo.

Sunday 28th:

  • Had a BoF where we had an open discussion about things on our collective minds (Debian Therapy Session).
  • Had a session on upcoming legislature in the EU (like CRA).
  • Some web statistics with MrFai.
  • Talked to Marc Haber about a DebConf bid for Heidelberg for DebConf 25.
  • Closing session.

Monday 29th:

  • Started the morning with Helmut and Jochen convincing me to switch from cowbuilder to sbuild (I’m tentatively sold; the huge new plus is that you don’t need schroot anymore, which trashed two of my systems in the past and effectively made sbuild a no-go for me until now).
  • Dealt with more laptop hardware failures, removing a stick of RAM seems to have solved that for now!

Das is nicht gut.

  • Dealt with some delegation issues for release team and publicity team.
  • Attended my last stand-up meeting.
  • Wrapped things up, blogged about the event. Probably forgot to list dozens of things in this blog entry. It is fine.

Tuesday 30th:

  • Didn’t attend the last day, basically a travel day for me.

Thank you to Holger for organising this event yet again!

29 May, 2023 12:48PM by jonathan

Russell Coker

Considering Convergence

What is Convergence

In 2013 Kyle Rankin (at the time Linux Journal columnist and CSO of Purism) wrote a Linux Journal article about Linux convergence [1] (which means using a phone and a dock to replace a desktop) featuring the Nokia N900 smart phone and a chroot environment on the Motorola Droid 4 Android phone. Both of them had very limited hardware even by the standards of the day, and neither was a system I’d consider using all the time. None of the Android phones I used at that time were at all comparable to any sort of desktop system I’d want to use.

Hardware for Convergence – Comparing a Phone to a Laptop

The first hardware issue for convergence is docks and other accessories to attach a small computer to hardware designed for larger computers. Laptop docks have been around for decades and for decades I haven’t been using them because they have all been expensive and specific to a particular model of laptop. Having an expensive dock at home and an expensive dock at the office and then replacing them both when the laptop is replaced may work well for some people but wasn’t something I wanted to do. The USB-C interface supports data, power, and DisplayPort video over the same cable and now USB-C docks start at about $20 on eBay and dock functionality is built in to many new monitors. I can take a USB-C device to the office of any large company and know there’s a good chance that there will be a USB-C dock ready for me to use. The fact that USB-C is a standard feature for phones gives obvious potential for convergence.

The next issue is performance. The Passmark benchmark seems like a reasonable way to compare CPUs [2]. It may not be the best benchmark but it has an excellent set of published results for Intel and AMD CPUs. I ran that benchmark on my Librem5 [3] and got a result of 507 for the CPU score. At the end of 2017 I got a Thinkpad X301 [4] which rates 678 on Passmark. So the Librem5 has 3/4 the CPU power of a laptop that was OK for my use in 2018. Given that the X301 was about the minimum specs for a PC that I can use (for things other than serious compiles, running VMs, etc) the Librem 5 has 3/4 the CPU power, only 3G of RAM compared to 6G, and 32G of storage compared to 64G. Here is the Passmark page for my Librem5 [5]. As an aside my Librem5 is apparently 25% faster than the other results for the same CPU – did the Purism people do something to make their device faster than most?

For me the Librem5 would be at the very low end of what I would consider a usable desktop system. A friend’s N900 (like the one Kyle used) won’t complete the Passmark test apparently due to the “Extended Instructions (NEON)” test failing. But of the rest of the tests most of them gave a result that was well below 10% of the result from the Librem5 and only the “Compression” and “CPU Single Threaded” tests managed to exceed 1/4 the speed of the Librem5. One thing to note when considering the specs of phones vs desktop systems is that the MicroSD cards designed for use in dashcams and other continuous recording devices have TBW ratings that compare well to SSDs designed for use in PCs, so swap to a MicroSD card should work reasonably well and be significantly faster than the hard disks I was using for swap in 2013!

In 2013 I was using a Thinkpad T420 as my main system [6], it had 8G of RAM (the same as my current laptop) although I noted that 4G was slow but usable at the time. Basically it seems that the Librem5 was about the sort of hardware I could have used for convergence in 2013. But by today’s standards and with the need to drive 4K monitors etc it’s not that great.

The N900 hardware specs seem very similar to the Thinkpads I was using from 1998 to about 2003. However a device for convergence will usually do more things than a laptop (IE phone and camera functionality) and software had become significantly more bloated over the 1998 to 2013 time period. A Linux desktop system performed reasonably with 32MB of RAM in 1998 but by 2013 even 2G was limiting.

Software Issues for Convergence

Jeremiah Foster (Director PureOS at Purism) wrote an interesting overview of some of the software issues of convergence [7]. One of the most obvious is that the best app design for a small screen is often very different from that for a large screen. Phone apps usually have a single window that shows a view of only one part of the data that is being worked on (EG an email program that shows a list of messages or the contents of a single message but not both). Desktop apps of any complexity will either have support for multiple windows for different data (EG two messages displayed in different windows) or a single window with multiple different types of data (EG message list and a single message). What we ideally want is all the important apps to support changing modes when the active display is changed to one of a different size/resolution. The Purism people are doing some really good work in this regard. But it is a large project that needs to involve a huge range of apps.

The next thing that needs to be addressed is the OS interface for managing apps and metadata. On a phone you swipe from one part of the screen to get a list of apps while on a desktop you will probably have a small section of a large monitor reserved for showing a window list. On a desktop you will typically have an app to manage a list of items copied to the clipboard while on Android and iOS there is AFAIK no standard way to do that (there is a selection of apps in the Google Play Store to do this sort of thing).

Purism has a blog post by Sebastian Krzyszkowiak about some of the development of the OS to make it work better for convergence and the status of getting it in Debian [8].

The limitations in phone hardware force changes to the software. Software needs to use less memory because phone RAM can’t be upgraded. The OS needs to be configured for low RAM use which includes technologies like the zram kernel memory compression feature.
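
As an example of the kind of configuration involved, a minimal manual zram swap setup looks roughly like this (a sketch only; the size and compression algorithm are arbitrary, and packages such as zram-tools or systemd's zram-generator automate the same thing):

# create a 1G compressed swap device backed by RAM
modprobe zram
echo zstd > /sys/block/zram0/comp_algorithm
echo 1G > /sys/block/zram0/disksize
mkswap /dev/zram0
swapon -p 100 /dev/zram0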

Security

When mobile phones first came out they were used for less secret data. Loss of a phone was annoying and expensive but not a security problem. Now phone theft for the purpose of gaining access to resources stored on the phone is becoming a known crime, here is a news report about a thief stealing credit cards and phones to receive the SMS notifications from banks [9]. We should expect that trend to continue, stealing mobile devices for ssh keys, management tools for cloud services, etc is something we should expect to happen.

A problem with mobile phones in current use is that they have one login used for all access from trivial things done in low security environments (EG paying for public transport) to sensitive things done in more secure environments (EG online banking and healthcare). Some applications take extra precautions for this EG the Android app I use for online banking requires authentication before performing any operations. The Samsung version of Android has a system called Knox for running a separate secured workspace [10]. I don’t think that the Knox approach would work well for a full Linux desktop environment, but something that provides some similar features would be a really good idea. Also running apps in containers as much as possible would be a good security feature, this is done by default in Android and desktop OSs could benefit from it.

The Linux desktop security model of logging in to a single account and getting access to everything has been outdated for a long time, probably ever since single-user Linux systems became popular. We need to change this for many reasons and convergence just makes it more urgent.

Conclusion

I have become convinced that convergence is the way of the future. It has the potential to make transporting computers easier, purchasing cheaper (buy just a phone and not buy desktop and laptop systems), and access to data more convenient. The Librem5 doesn’t seem up to the task for my use due to being slow and having short battery life, the PinePhone Pro has more powerful hardware and allegedly has better battery life [11] so it might work for my needs. The PinePhone Pro probably won’t meet the desktop computing needs of most people, but hardware keeps getting faster and cheaper so eventually most people could have their computing needs satisfied with a phone.

The current state of software for convergence and for Linux desktop security needs some improvement. I have some experience with Linux security so this is something I can help work on.

To work on improving this I asked Linux Australia for a grant for me and a friend to get PinePhone Pro devices and a selection of accessories to go with them. Having both a Librem5 and a PinePhone Pro means that I can test software in different configurations which will make developing software easier. Also having a friend who’s working on similar things will help a lot, especially as he has some low level hardware skills that I lack.

Linux Australia awarded the grant and now the PinePhones are in transit. Hopefully I will have a PinePhone in a couple of weeks to start work on this.

29 May, 2023 07:41AM by etbe

Russ Allbery

Book haul

I think this is partial because I also have a stack of other books that I missed recording. At some point, I should stop using this method to track book acquisitions in favor of one of the many programs intended for this purpose, but it's in the long list of other things I really should do one of these days.

As usual, I have already read and reviewed a few of these. I might be getting marginally better at reading books shortly after I acquire them? Maybe?

Steven Brust — Tsalmoth (sff)
C.L. Clark — The Faithless (sff)
Oliver Darkshire — Once Upon a Tome (non-fiction)
Hernan Diaz — Trust (mainstream)
S.B. Divya — Meru (sff)
Kate Elliott — Furious Heaven (sff)
Steven Flavall — Before We Go Live (non-fiction)
R.F. Kuang — Babel (sff)
Laurie Marks — Dancing Jack (sff)
Arkady Martine — Rose/House (sff)
Madeline Miller — Circe (sff)
Jenny Odell — Saving Time (non-fiction)
Malka Older — The Mimicking of Known Successes (sff)
Sabaa Tahir — An Ember in the Ashes (sff)
Emily Tesh — Some Desperate Glory (sff)
Valerie Valdes — Chilling Effect (sff)

29 May, 2023 04:31AM

hackergotchi for Louis-Philippe Véronneau

Louis-Philippe Véronneau

Python 3.11, pip and (breaking) system packages

As we get closer to Debian Bookworm's release, I thought I'd share one change in Python 3.11 that will surely affect many people.

Python 3.11 implements the new PEP 668, Marking Python base environments as “externally managed”1. If you use pip regularly on Debian, it's likely you'll eventually hit the externally-managed-environment error:

error: externally-managed-environment
× This environment is externally managed
╰─> To install Python packages system-wide, try apt install
    python3-xyz, where xyz is the package you are trying to
    install.

    If you wish to install a non-Debian-packaged Python package,
    create a virtual environment using python3 -m venv path/to/venv.
    Then use path/to/venv/bin/python and path/to/venv/bin/pip. Make
    sure you have python3-full installed.

    If you wish to install a non-Debian packaged Python application,
    it may be easiest to use pipx install xyz, which will manage a
    virtual environment for you. Make sure you have pipx installed.

    See /usr/share/doc/python3.11/README.venv for more information.
note: If you believe this is a mistake, please contact your Python installation or OS distribution provider. You can override this, at the risk of breaking your Python installation or OS, by passing --break-system-packages.
hint: See PEP 668 for the detailed specification.

With this PEP, Python tools can now distinguish between packages that have been installed by the user with a tool like pip and ones installed using a distribution's package manager, like apt.

This is generally great news: it was previously too easy to break a system by mixing the two types of packages. This PEP will simplify our role as a distribution, as well as improve the overall Python user experience in Debian.

Sadly, it's also likely this change will break some of your scripts, especially CI that (legitimately) install packages via pip alongside system packages. For example, I use the following gitlab-ci snippet to make sure my PRs don't break my build process2:

build:flit:
  stage: build
  script:
  - apt-get update && apt-get install -y flit python3-pip
  - FLIT_ROOT_INSTALL=1 flit install
  - metalfinder --help

With Python 3.11, this snippet will error out, as pip will refuse to install packages alongside the system's. The fix is to tell pip it's OK to "break" your system packages, either using the --break-system-packages parameter, or the PIP_BREAK_SYSTEM_PACKAGES=1 environment variable3.

This, of course, is not something you should be using in production to restore the old behavior! The "proper" way to fix this issue, as the externally-managed-environment error message aptly (har har) informs you, is to use virtual environments.
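
For instance, the gitlab-ci snippet above could be adapted along these lines (an untested sketch; the venv path is arbitrary, and FLIT_ROOT_INSTALL is kept from the original in case the job runs as root):

build:flit:
  stage: build
  script:
  - apt-get update && apt-get install -y python3-venv python3-pip
  - python3 -m venv /tmp/venv
  - . /tmp/venv/bin/activate
  - pip install flit
  - FLIT_ROOT_INSTALL=1 flit install
  - metalfinder --help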

Happy hacking!


  1. Kudos to our own Matthias Klose, Stefano Rivera and Elana Hashman, who worked on designing and implementing this PEP! 

  2. Which is something that bit me before... You push some changes to your git repository, everything seems fine and all the tests pass, so you merge it and make a new git tag. When the time comes to build and upload this tag to PyPi, you find out some minor thing broke your build system (which you weren't testing) and you have to scramble to make a point-release to fix the issue. Sad! 

  3. Don't go searching for this environment variable in pip's code though, as you won't find it! All of pip's command line options can be passed as env vars using the PIP_<UPPER_LONG_NAME> format. Useful for tools that use pip indirectly, like flit

29 May, 2023 04:00AM by Louis-Philippe Véronneau

May 27, 2023

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppArmadillo 0.12.4.0.0 on CRAN: New Upstream Minor

armadillo image

Armadillo is a powerful and expressive C++ template library for linear algebra and scientific computing. It aims towards a good balance between speed and ease of use, has a syntax deliberately close to Matlab, and is useful for algorithm development directly in C++, or quick conversion of research code into production environments. RcppArmadillo integrates this library with the R environment and language–and is widely used by (currently) 1074 other packages on CRAN, downloaded 29.3 million times (per the partial logs from the cloud mirrors of CRAN), and the CSDA paper (preprint / vignette) by Conrad and myself has been cited 535 times according to Google Scholar.

This release brings a new upstream release 12.4.0 made by Conrad a day or so ago. I prepared the usual release candidate, tested on the over 1000 reverse depends (which sadly takes almost a day on old hardware), found no issues and sent it to CRAN. Where it got tested again and was once again auto-processed smoothly by CRAN within a few hours on a Friday night which is just marvelous. So this time I tweeted about it too.

The release actually has a relatively small set of changes as a second follow-up release in the 12.* series.

Changes in RcppArmadillo version 0.12.4.0.0 (2023-05-26)

  • Upgraded to Armadillo release 12.4.0 (Cortisol Profusion Redux)

    • Added norm2est() for finding fast estimates of matrix 2-norm (spectral norm)

    • Added vecnorm() for obtaining the vector norm of each row or column of a matrix

Courtesy of my CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

If you like my open-source work, you may consider sponsoring me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

27 May, 2023 09:35PM

hackergotchi for Daniel Pocock

Daniel Pocock

JSON CEE structured logging for WebRTC, SIP and XMPP

I've recently added JSON CEE structured logging to reSIProcate and submitted pull requests for identical functionality in some related projects.

The use case for structured logging is quite compelling in the RTC world, which includes WebRTC, SIP and XMPP software. In the early days, we would do everything with a single process like Asterisk and we would only have to deal with a single log file. Today, especially with WebRTC, we often have multiple processes involved in a single phone call or video meeting. When something goes wrong, we need to be able to look at the logs from all of these processes. Structured logging provides a convenient way to combine and analyze the log files.

For structured logging to be useful in a distributed system, every process needs to use the same structure. The syntax of the logs and the semantics of individual values from different processes need to be identical.

JSON CEE is a specific standard for structured logging. It solves this problem. The JSON CEE standard was never fully completed. Nonetheless, it has been documented widely and it appears to be more than good enough for the ecosystem of free, open source RTC applications. Using JSON CEE is far better than every individual SIP and XMPP project reinventing the wheel with their own JSON schema.

Free-form text-based logs can become particularly awkward in the RTC world. For example, we often log multi-line SIP or SDP message bodies as they undergo various transformations in the stack. Text-based log files have no standard way to cope with multi-line log messages. JSON solves this problem.

With free-form log entries, tools have to be adapted to look for certain patterns. Structured logging may eliminate the need for any customization in log analysis tools.

The latest JSON CEE standard 1.0-beta1 is documented here.

Here is an example of a JSON CEE structured log message. Notice it includes a multi-line message body.

{
    "hostname": "host1.example.org",
    "pri": "DEBUG",
    "syslog": {
        "level": 7
    },
    "time": "2023-05-26T23:59:42.616853697Z",
    "pname": "testSdp",
    "subsys": "RESIP",
    "proc": {
        "id": "4003132",
        "tid": 70366700277776
    },
    "file": {
        "name": "rutil/ParseBuffer.cxx",
        "line": 986
    },
    "native": {
        "function": "fail"
    },
    "msg": "resip/stack/SdpContents.cxx:1648, Parse failed Too many connection addresses in context: Contents\nv=0\no=Evil 3559 3228 IN IP4 192.168.2.122\ns=SIP Call\nt=0 0\nm=audio 17124 RTP/AVP 18\nc=IN IP4 192.168.2.122/127/2000000000[CRLF]\n                                     ^[CRLF]\na=rtpmap:18 G729/8000\n"
}

Adding custom fields to JSON log messages

Applications can add extra fields to the JSON message if they really want to. For example, a SIP application could add the dialog ID to all log entries that relate to a specific dialog. A TURN server, rtpproxy or signalling server such as a SIP proxy could add UDP port numbers in a special field of their log messages. These values would allow reporting tools to quickly filter all log messages from different applications that relate to a specific session.
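
As an illustration, such a log entry might look like the following. The "sip" object and its field names are hypothetical; they are not part of the CEE schema or of any of the projects listed below.

{
    "hostname": "host1.example.org",
    "pri": "INFO",
    "time": "2023-05-26T23:59:42.616853697Z",
    "pname": "proxy",
    "subsys": "RESIP",
    "msg": "INVITE forwarded to next hop",
    "sip": {
        "call_id": "a84b4c76e66710@host1.example.org",
        "dialog_id": "a84b4c76e66710;to-tag=8321234356;from-tag=9fxced76sl"
    }
}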

Progress adding JSON CEE support to popular RTC applications

  • reSIProcate: Done. When calling Log::initialize, add Log::JSON_CEE
  • Kamailio: Done. See pull requests 2826 and 2848
  • OpenSIPS: Done. See the Syslog config and the stderr config
  • Gstreamer: Pull request submitted (#847), please follow up with the Gstreamer developers
  • Kurento: Pull request submitted (#1), please follow up with the Kurento developers
  • Asterisk: Bug report submitted, see ASTERISK-29517

Best practices for implementing JSON CEE support

Applications can output JSON CEE log messages as strings.

To maximize performance and minimize the risk of bugs, it is a good idea to avoid using a JSON library and simply write the JSON CEE log messages using basic string manipulation techniques. For example, in reSIProcate, we write JSON log entries as strings using the C++ stream operator (the code is here). In regular C, such as Gstreamer, the printf function can be used to write JSON log strings.

When using the Syslog API functions, the JSON strings need to be prefixed with the string @cee: like this:

#include <syslog.h>

syslog(LOG_CRIT, "@cee: { \"msg\" : \"feed me\" }");

That's all there is to it. Many application developers can simply copy-and-paste from the pull requests mentioned above.

Adding JSON CEE support to dependency libraries and pre-compiled binaries

In some cases, dependency libraries are calling the syslog API and sending log messages directly to syslog.

If the library provides logging callbacks, they may be able to override the syslog calls.

If not, the LD_PRELOAD mechanism can be used to intercept calls to the Syslog API. This is a bit of a hack and may not be able to fill all the fields in the JSON CEE schema.

There is an example in my logredir code.
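
As a rough illustration of the mechanism only (this is not the logredir implementation, and it is deliberately simplified: it does not JSON-escape the message and fills in none of the other CEE fields):

/* build with: gcc -shared -fPIC -o cee_shim.so cee_shim.c -ldl
   run with:   LD_PRELOAD=./cee_shim.so some-program */
#define _GNU_SOURCE
#include <stdio.h>
#include <stdarg.h>
#include <syslog.h>
#include <dlfcn.h>

void syslog(int priority, const char *format, ...)
{
    /* look up the real syslog() from libc the first time we are called */
    static void (*real_syslog)(int, const char *, ...);
    if (!real_syslog)
        real_syslog = (void (*)(int, const char *, ...))dlsym(RTLD_NEXT, "syslog");

    /* render the caller's message, then wrap it in a minimal CEE envelope */
    char msg[4096];
    va_list ap;
    va_start(ap, format);
    vsnprintf(msg, sizeof(msg), format, ap);
    va_end(ap);

    real_syslog(priority, "@cee: { \"msg\": \"%s\" }", msg);
}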

Build it and they will come

A standard like JSON CEE, even if it was never finished, helps to unify the developers who write the log messages and the developers who create tools to interpret them.

At the most basic level, the log messages from all applications have the same syntax and field names.

More significantly, the standard unifies the semantics of the values in log messages. For example, severity values from one application should be comparable to severity values from another application. This should make life a lot easier for everybody in both development and support teams.

Reporting on JSON CEE data

ElasticSearch/Kibana and the equivalent OpenSearch Dashboard stacks provide a convenient way to store and analyze JSON data sets such as CEE.

There are already many online examples for integrating rsyslog with ElasticSearch or OpenSearch. I was able to make this work by cutting-and-pasting from the examples.
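
For reference, the core of such a setup is typically only a few lines of rsyslog configuration. The following is a sketch based on those examples and is untested here; the server, port and index name are placeholders.

# /etc/rsyslog.d/cee-to-elasticsearch.conf
module(load="mmjsonparse")        # parse @cee: JSON payloads into properties
module(load="omelasticsearch")    # output module for ElasticSearch/OpenSearch

# re-serialize the parsed properties as a JSON document
template(name="cee-json" type="list") {
    property(name="$!all-json")
}

action(type="mmjsonparse")
action(type="omelasticsearch"
       server="localhost"
       serverport="9200"
       searchIndex="rtc-logs"
       template="cee-json"
       bulkmode="on")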

The fact that many people already wrote documents and blogs about doing this with JSON CEE is another good reason for RTC applications to use the CEE schema.

Here are some screenshots:

ElasticSearch, Kibana, Syslog (screenshots)

27 May, 2023 12:00AM

May 25, 2023

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

qlcal 0.0.6 on CRAN: More updates from QuantLib

The sixth release of the still new-ish qlcal package arrived at CRAN today.

qlcal delivers the calendaring parts of QuantLib. It is provided (for the R package) as a set of included files, so the package is self-contained and does not depend on an external QuantLib library (which can be demanding to build). qlcal covers over sixty country / market calendars and can compute holiday lists, its complement (i.e. business day lists) and much more.

This release brings updates to a few calendars which happened since the QuantLib 1.30 release, and also updates several of the (few) non-calendaring functions.

Changes in version 0.0.6 (2023-05-24)

  • Several calendars (India, Singapore, South Africa, South Korea) updated with post-QuantLib 1.3.0 changes (Sebastian Schmidt in #6)

  • Three now-unused scheduled files were removed (Dirk in #7)

  • A number of non-calendaring files used were synchronised with the current QuantLib repo (Dirk in #8)

Last release, we also added a quick little demo using xts to column-bind calendars produced from each of the different US sub-calendars. This is a slightly updated version of the sketch we tooted a few days ago. The output now is

> print(Reduce(cbind, Map(makeHol, cals)))
           LiborImpact NYSE GovernmentBond NERC FederalReserve
2023-01-02        TRUE TRUE           TRUE TRUE           TRUE
2023-01-16        TRUE TRUE           TRUE   NA           TRUE
2023-02-20        TRUE TRUE           TRUE   NA           TRUE
2023-04-07          NA TRUE             NA   NA             NA
2023-05-29        TRUE TRUE           TRUE TRUE           TRUE
2023-06-19        TRUE TRUE           TRUE   NA           TRUE
2023-07-04        TRUE TRUE           TRUE TRUE           TRUE
2023-09-04        TRUE TRUE           TRUE TRUE           TRUE
2023-10-09        TRUE   NA           TRUE   NA           TRUE
2023-11-10        TRUE   NA             NA   NA             NA
2023-11-23        TRUE TRUE           TRUE TRUE           TRUE
2023-12-25        TRUE TRUE           TRUE TRUE           TRUE
> 

Courtesy of my CRANberries, there is a diffstat report for this release. See the project page and package documentation for more details, and more examples.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

25 May, 2023 10:33PM

hackergotchi for Jonathan Carter

Jonathan Carter

Upgraded this host to Debian 12 (bookworm)

I upgraded the host running my blog to Debian 12 today. My website has existed in some form since 1997, it changed from pure html to a Python CGI script in the early 2000s, and when blogging became big around then, I migrated to WordPress around 2004.

This WordPress instance ran on Ubuntu up until 2010, and on Debian ever since. Upgrades are just too easy. I did end up hitting one small bug with today’s upgrade though: I run the PHP FastCGI process manager on the Apache MPM event server, and during the upgrade php8.2-fpm wasn’t enabled somehow (contrary to what I would expect). At least a simple 'a2enconf php8.2-fpm' saved my site again after a (very rare) few minutes of downtime.

25 May, 2023 10:10AM by jonathan

hackergotchi for Bits from Debian

Bits from Debian

New Debian Developers and Maintainers (March and April 2023)

The following contributors got their Debian Developer accounts in the last two months:

  • James Lu (jlu)
  • Hugh McMaster (hmc)
  • Agathe Porte (gagath)

The following contributors were added as Debian Maintainers in the last two months:

  • Soren Stoutner
  • Matthijs Kooijman
  • Vinay Keshava
  • Jarrah Gosbell
  • Carlos Henrique Lima Melara
  • Cordell Bloor

Congratulations!

25 May, 2023 10:00AM by Jean-Pierre Giraud

May 24, 2023

hackergotchi for Jonathan McDowell

Jonathan McDowell

RIP Brenda McDowell

My mother died earlier this month. She’d been diagnosed with cancer back in February 2022 and had been through major surgery and a couple of rounds of chemotherapy, so it wasn’t a complete surprise even if it was faster at the end than expected. That doesn’t make it easy, but I’m glad to be able to say that her immediate family were all with her at home at the end.

I was touched by the number of people who turned up, both to the wake and the subsequent funeral ceremony. Mum had done a lot throughout her life and was settled in Newry, and it was nice to see how many folk wanted to pay their respects. It was also lovely to hear from some old school friends who had fond memories of her.

There are many things I could say about her, but I don’t feel that here is the place to do so. My father and brother did excellent jobs at eulogies at the funeral. However, while I blog less about life things than I did in the past, I did not want it to go unmarked here. She was my Mum, I loved her, and I am sad she is gone.

24 May, 2023 07:02PM

hackergotchi for Jonathan Carter

Jonathan Carter

Debian Reunion MiniDebConf 2022

It wouldn’t be inaccurate to say that I’ve had a lot on my plate in the last few years, and that I have a *huge* backlog of little tasks to finish. Just last week, I finally got to all my keysigning from DebConf22. This week, I’m at MiniDebConf Germany in Hamburg. It’s the second time I’m here! And it’s great already. Last year I drafted a blog entry, but never got around to publishing it. So, in order to mentally tick off yet another thing, here follows a somewhat imperfect (I had to delete a lot of short-hand because I didn’t know what it means anymore), but at least published post about my activities from a year ago.

This week (well, last year) I attended my first ever in-person MiniDebConf and MiniDebCamp in Hamburg, Germany. The last time I was in Germany was 7 years ago for DebConf15 (or at time of publishing, actually, last year… for this same event).

My focus for the week was to work on Debian live related stuff.

In preparation for the week I tried to fix/close as many Calamares bugs as I could, so before the event I closed:

  • File calamares upstream issue #1944 ‘Calamares allows me to select a username of “root”‘ for Debian bug #976617.
  • File calamares upstream issue #1945 ‘Calamares needs support for high DPI’ for Debian bug #992162.
  • Comment on calamares bug #1005212 ‘Calamares installer fails at partitioning disks’ requesting further info.
  • Close calamares bug #1009876 ‘There is no /tmp item in the list during the partitioning step in the debian calamares installer’ – /tmp partitions can be created, not a bug, really just a small UI issue.
  • Close calamares bug #974998 ‘SegFault when clicked on “Create” in manual partitioning’, not reproducible in bullseye.
  • Close calamares bug #971266 ‘Debian fails to start when /home is encrypted during installation’ – this works fine since bullseye.
  • Close calamares bug #976617 ‘Calamares allows me to select a username of “root”‘ – has since been fixed upstream.

Monday to Friday we worked together on various issues, and the weekend was for talks.

On Monday morning, I had a nice discussion with Roland Clobus who has been working on making Debian live images reproducible. He’s also been working on testing Debian Live on openqa.debian.net. We’re planning on integrating his work so that Debian 12 live images will be reproducible. For automated testing on openqa, it will be ongoing work, one blocker has been that snapshots.debian.org limits connections after a while, so builds on there start failing fast.

On Monday afternoon, I went ahead and uploaded the latest Calamares hotfix (Calamares 3.2.58.1-1) release that fixes a UI issue on the partitioning screen where it could get stuck. On 15:00 we had a stand-up meeting where we introduced ourselves and talked a bit about our plans. It was great to see how many people could benefit from each other being there. For example, someone wanting to learn packaging, another wanting to improve packaging documentation, another wanting help with packaging something written in Rust, another wanting to improve Rust packaging in general and lots of overlap when it comes to reproducible builds! I also helped a few people with some of their packaging issues.

On Monday evening, someone in the videoteam managed to convince me to put together a loopy loop for this MiniDebConf. There really wasn’t enough time to put together something elaborate, but I put something together based on the previous loopy, with some experiments that I’ve been working on for the upcoming DC22 loopy, and we can use this loop to do a call for content for the DC22 loop.

On Tuesday morning I had some chats with urbec and Ilu. Tuesday afternoon I talked to the MIA team about upcoming removals and did some admin on debian.ch payments for hosting. On Tuesday evening I worked on live image stuff (d-i downloader, download module for dmm).

On Wednesday morning I slept a bit late, then had to deal with some DPL admin/legal things. Wednesday afternoon, more chats with people.

On Thursday: Talked to a bunch more people about a lot of issues, got loopy in a reasonably shape, edited and published the Group photo!

On Friday: prepared my talk slides, learned about Brave (https://github.com/bbc/brave) – it initially looked like a great compositor for DebConf video stuff (and a possible replacement for OBS), but it turned out it wasn’t really maintained upstream. In the evening we had the Cheese and Wine party, where lots of deliciousness was experienced.

On Saturday, I learned from Felix’s talk that Tensorflow is now in experimental! (And now in 2023 I checked again and that’s still the case, although it hasn’t made its way into unstable yet; hopefully that improves over the trixie cycle.)

I know most of the people who attended quite well, but it was really nice to also see a bunch of new Debianites that I’ve only seen online before and to properly put some faces to names. We also had a bunch of enthusiastic new contributors and we did some key signing.

24 May, 2023 01:32PM by jonathan

May 23, 2023

Antoine Beaupré

Framework 12th gen laptop review

The Framework is a 13.5" laptop body with swappable parts, which makes it somewhat future-proof and certainly easily repairable, scoring an "exceedingly rare" 10/10 score from ifixit.com.

There are two generations of the laptop's main board (both compatible with the same body): the Intel 11th and 12th gen chipsets.

I received my Framework 12th generation "DIY" device in late September 2022 and will update this page as I go along in the process of ordering, burning-in, setting up and using the device over the years.

Overall, the Framework is a good laptop. I like the keyboard, the touch pad, the expansion cards. Clearly there's been some good work done on industrial design, and it's the most repairable laptop I've had in years. Time will tell, but it looks sturdy enough to survive me many years as well.

This is also one of the most powerful devices I ever lay my hands on. I have managed, remotely, more powerful servers, but this is the fastest computer I have ever owned, and it fits in this tiny case. It is an amazing machine.

On the downside, there's a bit of proprietary firmware required (WiFi, Bluetooth, some graphics) and the Framework ships with a proprietary BIOS, with currently no Coreboot support. Expect to need the latest kernel, firmware, and hacking around a bunch of things to get resolution and keybindings working right.

Like others, I have first found significant power management issues, but many issues can actually be solved with some configuration. Some of the expansion ports (HDMI, DP, MicroSD, and SSD) use power when idle, so don't expect week-long suspend, or "full day" battery while those are plugged in.

Finally, the expansion ports are nice, but there's only four of them. If you plan to have a two-monitor setup, you're likely going to need a dock.

Read on for the detailed review. For context, I'm moving from the Purism Librem 13v4 because it basically exploded on me. I had, in the meantime, reverted back to an old ThinkPad X220, so I sometimes compare the Framework with that venerable laptop as well.

This blog post has been maturing for months now. It started in September 2022 and I declared it completed in March 2023. It's the longest single article on this entire website, currently clocking in at about 13,000 words. It will take an average reader a full hour to go through this thing, so I don't expect anyone to actually do that. This introduction should be good enough for most people; read the first section if you intend to actually buy a Framework. Jump around the table of contents as you see fit after you buy the laptop, as it might include some crucial hints on how to make it work best for you, especially on (Debian) Linux.

Advice for buyers

Those are things I wish I would have known before buying:

  1. consider buying 4 USB-C expansion cards, or at least a mix of 4 USB-A or USB-C cards, as they use less power than other cards and you do want to fill those expansion slots otherwise they snag around and feel insecure

  2. you will likely need a dock or at least a USB hub if you want a two-monitor setup, otherwise you'll run out of ports

  3. you have to do some serious tuning to get proper (10h+ idle, 10 days suspend) power savings

  4. in particular, beware that the HDMI, DisplayPort and particularly the SSD and MicroSD cards take a significant amount power, even when sleeping, up to 2-6W for the latter two

  5. beware that the MicroSD card is what it says: Micro, normal SD cards won't fit, and while there might be a full sized one eventually, it's currently only at the prototyping stage

  6. the Framework monitor has an unusual aspect ratio (3:2): I like it (and it matches classic and digital photography aspect ratio), but it might surprise you

Current status

I have the Framework! It's set up with a fresh new Debian bookworm installation. I've run through a large number of tests and burn-in.

I have decided to use the Framework as my daily driver, and had to buy a USB-C dock to get my two monitors connected, which was its own adventure.

Update: Framework just announced (2023-03-23) a whole bunch of new stuff:

The recording is available in this video and it's not your typical keynote. It starts ~25 minutes late, audio is crap, lighting and camera are crap, clapping seems to be from whatever staff they managed to get together in a room, decor is bizarre, colors are shit. It's amazing.

Reviews:

Specifications

Those are the specifications of the 12th gen, in general terms. Your build will of course vary according to your needs.

  • CPU: i5-1240P, i7-1260P, or i7-1280P (Up to 4.4-4.8 GHz, 4+8 cores), Iris Xe graphics
  • Storage: 250-4000GB NVMe (or bring your own)
  • Memory: 8-64GB DDR4-3200 (or bring your own)
  • WiFi 6e (AX210, vPro optional, or bring your own)
  • 296.63mm X 228.98mm X 15.85mm, 1.3Kg
  • 13.5" display, 3:2 ratio, 2256px X 1504px, 100% sRGB, >400 nit
  • 4 x USB-C user-selectable expansion ports, including
    • USB-C
    • USB-A
    • HDMI
    • DP
    • Ethernet
    • MicroSD
    • 250-1000GB SSD
  • 3.5mm combo headphone jack
  • Kill switches for microphone and camera
  • Battery: 55Wh
  • Camera: 1080p 60fps
  • Biometrics: Fingerprint Reader
  • Backlit keyboard
  • Power Adapter: 60W USB-C (or bring your own)
  • ships with a screwdriver/spludger
  • 1 year warranty
  • base price: 1000$CAD, but doesn't give you much, typical builds around 1500-2000$CAD

Actual build

This is the actual build I ordered. Amounts in CAD. (1CAD = ~0.75EUR/USD.)

Base configuration

  • CPU: Intel® Core™ i5-1240P (AKA Alder Lake P 8 4.4GHz P-threads, 8 3.2GHz E-threads, 16 total, 28-64W), 1079$
  • Memory: 16GB (1 x 16GB) DDR4-3200, 104$

Customization

  • Keyboard: US English, included

Expansion Cards

  • 2 USB-C $24
  • 3 USB-A $36
  • 2 HDMI $50
  • 1 DP $50
  • 1 MicroSD $25
  • 1 Storage – 1TB $199
  • Sub-total: 384$

Accessories

  • Power Adapter - US/Canada $64.00

Total

  • Before tax: 1606$
  • After tax and duties: 1847$
  • Free shipping

Quick evaluation

This is basically the TL;DR: here, just focusing on broad pros/cons of the laptop.

Pros

Cons

  • the 11th gen is out of stock, except for the higher-end CPUs, which are much less affordable (700$+)

  • the 12th gen has compatibility issues with Debian, followup in the DebianOn page, but basically: brightness hotkeys, power management, wifi, the webcam is okay even though the chipset is the infamous alder lake because it does not have the fancy camera; most issues currently seem solvable, and upstream is working with mainline to get their shit working

  • 12th gen might have issues with thunderbolt docks

  • they used to have some difficulty keeping up with the orders: first two batches shipped, third batch sold out, fourth batch should have shipped (?) in October 2021. They generally seem to keep up with shipping. Update (August 2022): they rolled out a second line of laptops (12th gen); first batch shipped, second batch shipped late, September 2022 batch was generally on time, see this spreadsheet for a crowdsourced effort to track those. Supply chain issues seem to be under control as of early 2023. I got the Ethernet expansion card shipped within a week.

  • compared to my previous laptop (Purism Librem 13v4), it feels strangely bulkier and heavier; it's actually lighter than the purism (1.3kg vs 1.4kg) and thinner (15.85mm vs 18mm) but the design of the Purism laptop (tapered edges) makes it feel thinner

  • no space for a 2.5" drive

  • rather bright LED around the power button, which can be dimmed in the BIOS (not low enough for my taste), but I got used to it

  • fan quiet when idle, but can be noisy when running, for example if you max a CPU for a while

  • battery described as "mediocre" by Ars Technica (above), confirmed poor in my tests (see below)

  • no RJ-45 port, and attempts at designing one were failing because the modular plugs are too thin to fit (according to Linux After Dark), so unlikely to have one in the future. Update: they cracked that nut and now ship a 2.5Gbps Ethernet expansion card with a Realtek chipset, without any firmware blob (!)

  • a bit pricey for the performance, especially when compared to the competition (e.g. Dell XPS, Apple M1)

  • 12th gen Intel has glitchy graphics, seems like Intel hasn't fully landed proper Linux support for that chipset yet

Initial hardware setup

A breeze.

Accessing the board

The internals are accessed through five Torx screws, but there's a nice screwdriver/spudger that works well enough. The screws actually hold in place so you can't even lose them.

The first setup is a bit counter-intuitive coming from the Librem laptop, as I expected the back cover to lift and give me access to the internals. But instead the screws release the keyboard and touch pad assembly, so you actually need to flip the laptop back upright and lift the assembly off (!) to get access to the internals. Kind of scary.

I also unplugged a connector while lifting the assembly, because I lifted it towards the monitor, when you actually need to lift it to the right. Thankfully, the connector didn't break, it just snapped off and I could plug it back in, no harm done.

Once there, everything is well indicated, with QR codes all over the place supposedly leading to online instructions.

Bad QR codes

Unfortunately, the QR codes I tested (in the expansion card slot, the memory slot and CPU slots) did not actually work so I wonder how useful those actually are.

After all, they need to point to something and that means a URL, a running website that will answer those requests forever. I bet those will break sooner than later and in fact, as far as I can tell, they just don't work at all. I prefer the approach taken by the MNT reform here which designed (with the 100 rabbits folks) an actual paper handbook (PDF).

The first QR code that's immediately visible from the back of the laptop, in an expansion card slot, is a 404. It seems to be some serial number URL, but I can't actually tell because, well, the page is a 404.

I was expecting that bar code to lead me to an introduction page, something like "how to setup your Framework laptop". Support actually confirmed that it should point to a quickstart guide. But in a bizarre twist, they somehow sent me the URL with the plus (+) signs escaped, like this:

https://guides.frame.work/Guide/Framework\+Laptop\+DIY\+Edition\+Quick\+Start\+Guide/57

... which Firefox immediately transforms into:

https://guides.frame.work/Guide/Framework/+Laptop/+DIY/+Edition/+Quick/+Start/+Guide/57

I'm puzzled as to why they would send the URL that way; the proper URL is, of course:

https://guides.frame.work/Guide/Framework+Laptop+DIY+Edition+Quick+Start+Guide/57

(They have also "let the team know about this for feedback and help resolve the problem with the link" which is a support code word for "ha-ha! nope! not my problem right now!" Trust me, I know, my own code word is "can you please make a ticket?")

Seating disks and memory

The "DIY" kit doesn't actually have that much of a setup. If you bought RAM, it's shipped outside the laptop in a little plastic case, so you just seat it in as usual.

Then you insert your NVMe drive, and, if that's your fancy, you also install your own mPCI WiFi card. If you ordered one (which was my case), it's pre-installed.

Closing the laptop is also kind of amazing, because the keyboard assembly snaps into place with magnets. I have actually used the laptop with the keyboard unscrewed as I was putting the drives in and out, and it actually works fine (and will probably void your warranty, so don't do that). (But you can.) (But don't, really.)

Hardware review

Keyboard and touch pad

The keyboard feels nice, for a laptop. I'm used to mechanical keyboards and I'm rather violent with those poor things. Yet the key travel is nice and it's clickety enough that I don't feel too disoriented.

At first, the keyboard felt more laggy than my normal workstation setup, but it turned out this was a graphics driver issue. After enabling a compositing manager, everything feels snappy.

The touch pad feels good. The double-finger scroll works well enough, and I don't have to wonder too much where the middle button is, it just works.

Taps don't work, out of the box: that needs to be enabled in Xorg, with something like this:

cat > /etc/X11/xorg.conf.d/40-libinput.conf <<EOF
Section "InputClass"
      Identifier "libinput touch pad catchall"
      MatchIsTouchpad "on"
      MatchDevicePath "/dev/input/event*"
      Driver "libinput"
      Option "Tapping" "on"
      Option "TappingButtonMap" "lmr"
EndSection
EOF

But be aware that once you enable that tapping, you'll need to deal with palm detection... So I have not actually enabled this in the end.

Power button

The power button is a little dangerous. It's quite easy to hit, as it's right next to an expansion card slot where you are likely to plug in a power cable. And because the expansion cards are kind of hard to remove, you might squeeze the laptop (and the power key) when trying to remove the expansion card next to the power button.

So obviously, don't do that. But that's not very helpful.

An alternative is to make the power button do something else. With systemd-managed systems, it's actually quite easy. Add a HandlePowerKey stanza to (say) /etc/systemd/logind.conf.d/power-suspends.conf:

[Login]
HandlePowerKey=suspend
HandlePowerKeyLongPress=poweroff

You might have to create the directory first:

mkdir /etc/systemd/logind.conf.d/

Then restart logind:

systemctl restart systemd-logind

And the power button will suspend! Long-press to power off doesn't actually work as the laptop immediately suspends...

Note that there's probably half a dozen other ways of doing this, see this, this, or that.

Special keybindings

There is a series of "hidden" (as in: not labeled on the key) keybindings related to the fn keybinding that I actually find quite useful.

Key  Equivalent  Effect                  Command
p    Pause       lock screen             xset s activate
b    Break       ?                       ?
k    ScrLk       switch keyboard layout  N/A

It looks like those are defined in the microcontroller so it would be possible to add some. For example, the SysRq key is almost bound to fn s in there.

Note that most other shortcuts like this are clearly documented (volume, brightness, etc). One key that's less obvious is F12 that only has the Framework logo on it. That actually calls the keysym XF86AudioMedia which, interestingly, does absolutely nothing here. By default, on Windows, it opens your browser to the Framework website and, on Linux, your "default media player".
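If you want that key to actually do something under i3, a one-line binding is enough; the playerctl command here is just an example of something to hang off it, substitute whatever you prefer:

# in ~/.config/i3/config: give the Framework (F12) key a job
bindsym XF86AudioMedia exec --no-startup-id playerctl play-pause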

The keyboard backlight can be cycled with fn-space. The dimmer version is dim enough, and the keybinding is easy to find in the dark.

A "skinny elephant" (magic SysRq) sequence is performed with alt PrtScr (above F11) followed by the command key, so for example alt fn F11 b should do a hard reset. This comment suggests you need to hold the fn only if "function lock" is on, but that's actually the opposite of my experience.

Out of the box, some of the fn keys don't work. Mute, volume up/down, brightness, monitor changes, and the airplane mode key all do basically nothing. They don't send proper keysyms to Xorg at all.

This is a known problem and it's related to the fact that the laptop has light sensors to adjust the brightness automatically. Somehow some of those keys (e.g. the brightness controls) are supposed to show up as a different input device, but don't seem to work correctly. It seems like the solution is for the Framework team to write a driver specifically for this, but so far no progress since July 2022.

In the meantime, the fancy functionality can be supposedly disabled with:

echo 'blacklist hid_sensor_hub' | sudo tee /etc/modprobe.d/framework-als-blacklist.conf

... and a reboot. This solution is also documented in the upstream guide.

Note that there's another solution flying around that fixes this by changing permissions on the input device but I haven't tested that or seen confirmation it works.

Kill switches

The Framework has two "kill switches": one for the camera and the other for the microphone. The camera one actually disconnects the USB device when turned off, and the mic one seems to cut the circuit. It doesn't show up as muted, it just stops feeding the sound.

Both kill switches are around the main camera, on top of the monitor, and quite discreet. They turn "red" when enabled (i.e. "red" means "turned off").

Monitor

The monitor looks pretty good to my untrained eyes. I have yet to do photography work on it, but some photos I looked at look sharp and the colors are bright and lively. The blacks are dark and the screen is bright.

I have yet to use it in full sunlight.

The dimmed light is very dim, which I like.

Screen backlight

I bind brightness keys to xbacklight in i3, but out of the box I get this error:

sep 29 22:09:14 angela i3[5661]: No outputs have backlight property

It just requires this blob in /etc/X11/xorg.conf.d/backlight.conf:

Section "Device"
    Identifier  "Card0"
    Driver      "intel"
    Option      "Backlight"  "intel_backlight"
EndSection

This way I can control the actual backlight power with the brightness keys, and they do significantly reduce power usage.

Multiple monitor support

I have been able to hook up my two old monitors to the HDMI and DisplayPort expansion cards on the laptop. The lid closes without suspending the machine, and everything works great.

I actually run out of ports, even with a 4-port USB-A hub, which gives me a total of 7 ports:

  1. power (USB-C)
  2. monitor 1 (DisplayPort)
  3. monitor 2 (HDMI)
  4. USB-A hub, which adds:
  5. keyboard (USB-A)
  6. mouse (USB-A)
  7. Yubikey
  8. external sound card

Now the latter, I might be able to get rid of if I switch to a combo-jack headset, which I do have (and still need to test).

But still, this is a problem. I'll probably need a powered USB-C dock and better monitors, possibly with some Thunderbolt chaining, to save yet more ports.

But that means more money into this setup, argh. And figuring out my monitor situation is the kind of thing I'm not that big of a fan of. And neither is shopping for USB-C (or is it Thunderbolt?) hubs.

My normal autorandr setup doesn't work: I have tried saving a profile and it doesn't get autodetected, so I also first need to do:

autorandr -l framework-external-dual-lg-acer

The magic:

autorandr -l horizontal

... also works well.

The worst problem with those monitors right now is that they have a radically smaller resolution than the main screen on the laptop, which means I need to reset the font scaling to normal every time I switch back and forth between those monitors and the laptop, which means I actually need to do this:

autorandr -l horizontal &&
echo Xft.dpi: 96 | xrdb -merge &&
systemctl restart terminal xcolortaillog background-image emacs &&
i3-msg restart

Kind of disruptive.

Expansion ports

I ordered a total of 10 expansion ports.

I did manage to initialize the 1TB drive as an encrypted storage, mostly to keep photos as this is something that takes a massive amount of space (500GB and counting) and that I (unfortunately) don't work on very often (but still carry around).
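For the record, that encrypted volume is just the usual LUKS dance; this is a rough sketch and not the exact commands I ran, and /dev/sda is only an example device name (check lsblk first, the wrong target will eat your data):

cryptsetup luksFormat /dev/sda        # the 1TB expansion card, verify with lsblk!
cryptsetup luksOpen /dev/sda photos
mkfs.ext4 /dev/mapper/photos
mount /dev/mapper/photos /mnt/photos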

The expansion ports are fancy and nice, but not actually that convenient. They're a bit hard to take out: you really need to crimp your fingernails on there and pull hard to take them out. There's a little button next to them to release, I think, but at first it feels a little scary to pull those pucks out of there. You get used to it though, and it's one of those things you can do without looking eventually.

There are only four expansion ports. Once you have two monitors, the drive, and power plugged in, bam, you're out of ports; there's nowhere to plug my Yubikey. So if this is going to be my daily driver, with a dual monitor setup, I will need a dock, which means more crap firmware and uncertainty, which isn't great. There are actually plans to make a dual-USB card, but that is blocked on designing an actual board for this.

I can't wait to see more expansion ports produced. There's an Ethernet expansion card which quickly went out of stock basically the day it was announced, but was eventually restocked.

I would like to see a proper SD-card reader. There's a MicroSD card reader, but that obviously doesn't work for normal SD cards, which would be more broadly compatible anyways (because you can have a MicroSD to SD card adapter, but I have never heard of the reverse). Someone actually found a SD card reader that fits and then someone else managed to cram it in a 3D printed case, which is kind of amazing.

Still, I really like that idea that I can carry all those little adapters in a pouch when I travel and can basically do anything I want. It does mean I need to shuffle through them to find the right one which is a little annoying. I have an elastic band to keep them lined up so that all the ports show the same side, to make it easier to find the right one. But that quickly gets undone and instead I have a pouch full of expansion cards.

Another awesome thing with the expansion cards is that they don't just work on the laptop: anything that takes USB-C can take those cards, which means you can use it to connect an SD card to your phone, for backups, for example. Heck, you could even connect an external display to your phone that way, assuming that's supported by your phone of course (and it probably isn't).

The expansion ports do take up some power, even when idle. See the power management section below, and particularly the power usage tests for details.

USB-C charging

One thing that is really a game changer for me is USB-C charging. It's hard to overstate how convenient this is. I often have a USB-C cable lying around to charge my phone, and I can just grab that thing and pop it in my laptop. And while it will obviously not charge as fast as the provided charger, it will stop draining the battery at least.

(As I wrote this, I had the laptop plugged in the Samsung charger that came with a phone, and it was telling me it would take 6 hours to charge the remaining 15%. With the provided charger, that flew down to 15 minutes. Similarly, I can power the laptop from the power grommet on my desk, reducing clutter as I have that single wire out there instead of the bulky power adapter.)

I also really like the idea that I can charge my laptop with a power bank or, heck, with my phone, if push comes to shove. (And vice-versa!)

This is awesome. And it works from any of the expansion ports, of course. There's a little led next to the expansion ports as well, which indicate the charge status:

  • red/amber: charging
  • white: charged
  • off: unplugged

I couldn't find documentation about this, but the forum answered.

This is something of a recurring theme with the Framework. While it has a good knowledge base and repair/setup guides (and the forum is awesome), it doesn't have a good "owner's manual" that shows you the different parts of the laptop and what they do. Again, something the MNT reform did well.

Another thing that people are asking about is an external sleep indicator: because the power LED is on the main keyboard assembly, you don't actually see whether the device is active or not when the lid is closed.

Finally, I wondered what happens when you plug in multiple power sources and it turns out the charge controller is actually pretty smart: it will pick the best power source and use it. The only downside is it can't use multiple power sources, but that seems like a bit much to ask.

Multimedia and other devices

Those things also work:

  • webcam: splendid, best webcam I've ever had (but my standards are really low)
  • onboard mic: works well, good gain (maybe a bit much)
  • onboard speakers: sound okay, a little metal-ish, loud enough to be annoying, see this thread for benchmarks, apparently pretty good speakers
  • combo jack: works, with slight hiss, see below

There's also a light sensor, but it conflicts with the keyboard brightness controls (see above).

There's also an accelerometer, but it's off by default and will be removed from future builds.

Combo jack mic tests

The Framework laptop ships with a combo jack on the left side, which allows you to plug in a CTIA (source) headset. In human terms, it's a device that has both a stereo output and a mono input, typically a headset or ear buds with a microphone somewhere.

It works, which is better than the Purism (which only had audio out), but is par for the course for that kind of onboard hardware. Because of electrical interference, such sound cards very often pick up lots of noise from the board.

With a Jabra Evolve 40, the built-in USB sound card generates basically zero noise on silence (invisible down to -60dB in Audacity) while plugging it in directly generates a solid -30dB hiss. There is a noise-reduction system in that sound card, but the difference is still quite striking.

On a comparable setup (curie, a 2017 Intel NUC), there is also a hiss with the Jabra headset, but it's quieter, more in the order of -40/-50 dB, a noticeable difference. Interestingly, testing with my Mee Audio Pro M6 earbuds leads to a little more hiss on curie, more in the -35/-40 dB range, close to the Framework.

Also note that another sound card, the Antlion USB adapter that comes with the ModMic 4, also gives me pretty close to silence on a quiet recording, picking up less than -50dB of background noise. It's actually probably picking up the fans in the office, which do make audible noises.

In other words, the hiss of the sound card built in the Framework laptop is so loud that it makes more noise than the quiet fans in the office. Or, another way to put it is that two USB sound cards (the Jabra and the Antlion) are able to pick up ambient noise in my office but not the Framework laptop.

See also my audio page.

Performance tests

Compiling Linux 5.19.11

On a single core, compiling the Debian version of the Linux kernel takes around 100 minutes:

5411.85user 673.33system 1:37:46elapsed 103%CPU (0avgtext+0avgdata 831700maxresident)k
10594704inputs+87448000outputs (9131major+410636783minor)pagefaults 0swaps

This was using 16 watts of power, with full screen brightness.

With all 16 cores (make -j16), it takes less than 25 minutes:

19251.06user 2467.47system 24:13.07elapsed 1494%CPU (0avgtext+0avgdata 831676maxresident)k
8321856inputs+87427848outputs (30792major+409145263minor)pagefaults 0swaps

I had to plug in the normal power supply after a few minutes because the battery would actually run out when powered from my desk's power grommet (34 watts).

During compilation, fans were spinning really hard, quite noisy, but not painfully so.

The laptop was sucking 55 watts of power, steadily:

  Time    User  Nice   Sys  Idle    IO  Run Ctxt/s  IRQ/s Fork Exec Exit  Watts
-------- ----- ----- ----- ----- ----- ---- ------ ------ ---- ---- ---- ------
 Average  87.9   0.0  10.7   1.4   0.1 17.8 6583.6 5054.3 233.0 223.9 233.1  55.96
 GeoMean  87.9   0.0  10.6   1.2   0.0 17.6 6427.8 5048.1 227.6 218.7 227.7  55.96
  StdDev   1.4   0.0   1.2   0.6   0.2  3.0 1436.8  255.5 50.0 47.5 49.7   0.20
-------- ----- ----- ----- ----- ----- ---- ------ ------ ---- ---- ---- ------
 Minimum  85.0   0.0   7.8   0.5   0.0 13.0 3594.0 4638.0 117.0 111.0 120.0  55.52
 Maximum  90.8   0.0  12.9   3.5   0.8 38.0 10174.0 5901.0 374.0 362.0 375.0  56.41
-------- ----- ----- ----- ----- ----- ---- ------ ------ ---- ---- ---- ------
Summary:
CPU:  55.96 Watts on average with standard deviation 0.20
Note: power read from RAPL domains: package-0, uncore, package-0, core, psys.
These readings do not cover all the hardware in this device.
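For reference, the power numbers above look like powerstat reading the RAPL counters; something like this should reproduce the measurement, although the exact kernel build target here is an assumption on my part:

# build the Debian kernel source with all cores, timing it
/usr/bin/time make -j16 bindeb-pkg

# in another terminal, sample power once per second for a minute
powerstat -R 1 60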

memtest86+

I ran Memtest86+ v6.00b3. It shows something like this:

Memtest86+ v6.00b3      | 12th Gen Intel(R) Core(TM) i5-1240P
CLK/Temp: 2112MHz    78/78°C | Pass  2% #
L1 Cache:   48KB    414 GB/s | Test 46% ##################
L2 Cache: 1.25MB    118 GB/s | Test #3 [Moving inversions, 1s & 0s] 
L3 Cache:   12MB     43 GB/s | Testing: 16GB - 18GB [1GB of 15.7GB]
Memory  :  15.7GB  14.9 GB/s | Pattern: 
--------------------------------------------------------------------------------
CPU: 4P+8E-Cores (16T)    SMP: 8T (PAR))  | Time:  0:27:23  Status: Pass     \
RAM: 1600MHz (DDR4-3200) CAS 22-22-22-51  | Pass:  1        Errors: 0
--------------------------------------------------------------------------------

Memory SPD Information
----------------------
 - Slot 2: 16GB DDR-4-3200 - Crucial CT16G4SFRA32A.C16FP (2022-W23)







                          Framework FRANMACP04
 <ESC> Exit  <F1> Configuration  <Space> Scroll Lock            6.00.unknown.x64

So about 30 minutes for a full 16GB memory test.

Software setup

Once I had everything in the hardware setup, I figured, voilà, I'm done, I'm just going to boot this beautiful machine and I can get back to work.

I don't understand why I am so naïve sometimes. It's mind-boggling.

Obviously, it didn't happen that way at all, and I spent the best of the three following days tinkering with the laptop.

Secure boot and EFI

First, I couldn't boot off of the NVMe drive I transferred from the previous laptop (the Purism) and the BIOS was not very helpful: it was just complaining about not finding any boot device, without dropping me in the real BIOS.

At first, I thought it was a problem with my NVMe drive, because it's not listed in the compatible SSD drives from upstream. But I figured out how to enter BIOS (press F2 manically, of course), which showed the NVMe drive was actually detected. It just didn't boot, because it was an old (2010!!) Debian install without EFI.

So from there, I disabled secure boot, and booted a grml image to try to recover. And by "boot" I mean, I managed to get to the grml boot loader which promptly failed to load its own root file system somehow. I still have to investigate exactly what happened there, but it failed some time after the initrd load with:

Unable to find medium containing a live file system

This, it turns out, was fixed in Debian lately, so a daily GRML build will not have this problem. The upcoming 2022 release (likely 2022.10 or 2022.11) will also get the fix.

I did manage to boot the development version of the Debian installer which was a surprisingly good experience: it mounted the encrypted drives and did everything pretty smoothly. It even offered to reinstall the boot loader, but that ultimately (and correctly, as it turns out) failed because I didn't have a /boot/efi partition.

At this point, I realized there was no easy way out of this, and I just proceeded to completely reinstall Debian. I had a spare NVMe drive lying around (backups FTW!) so I just swapped that in, rebooted in the Debian installer, and did a clean install. I wanted to switch to bookworm anyways, so I guess that's done too.

Storage limitations

Another thing that happened during setup is that I tried to copy over the internal 2.5" SSD drive from the Purism to the Framework 1TB expansion card. There's no 2.5" slot in the new laptop, so that's pretty much the only option for storage expansion.

I was tired and did something wrong. I ended up wiping the partition table on the original 2.5" drive.

Oops.

It might be recoverable, but just restoring the partition table didn't work either, so I'm not sure how to recover the data there. Normally, everything on my laptops and workstations is designed to be disposable, so that wasn't that big of a problem. I did manage to recover most of the data thanks to git-annex reinit, but that was a little hairy.
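For the curious, the general shape of that git-annex recovery was something like the following; this is heavily simplified, the paths and $OLD_UUID are placeholders, and it assumes the old repository's UUID can be dug out of another clone's git-annex branch:

# (hypothetical) after recreating a filesystem and restoring the surviving files:
git init /srv/recovered && cd /srv/recovered
git remote add origin user@elsewhere:photos.git && git fetch origin
git annex reinit $OLD_UUID     # take over the lost repository's identity
git annex fsck --fast          # re-register whatever content survived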

Bootstrapping Puppet

Once I had some networking, I had to install all the packages I needed. The time I spent setting up my workstations with Puppet has finally paid off. What I actually did was to restore two critical directories:

/etc/ssh
/var/lib/puppet

So that I would keep the previous machine's identity. That way I could contact the Puppet server and install whatever was missing. I used my Puppet optimization trick to do a batch install and then I had a good base setup, although not exactly as it was before. 1700 packages were installed manually on angela before the reinstall, and not in Puppet.

I did not inspect each one individually, but I did go through /etc and copied over more SSH keys, for backups and SMTP over SSH.
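Concretely, the whole bootstrap boils down to something like this; the backup host and paths are made up, the point is just to carry the old identity over and let Puppet do the rest:

rsync -a backup-host:angela/etc/ssh/ /etc/ssh/
rsync -a backup-host:angela/var/lib/puppet/ /var/lib/puppet/
apt install puppet-agent       # or just "puppet", depending on the release
puppet agent --test            # contact the Puppet server and converge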

LVFS support

It looks like there's support for the (de-facto) standard LVFS firmware update system. At least I was able to update the UEFI firmware with a simple:

apt install fwupd-amd64-signed
fwupdmgr refresh
fwupdmgr get-updates
fwupdmgr update

Nice. The 12th gen BIOS updates, currently (January 2023) beta, can be deployed through LVFS with:

fwupdmgr enable-remote lvfs-testing
echo 'DisableCapsuleUpdateOnDisk=true' >> /etc/fwupd/uefi_capsule.conf 
fwupdmgr update

Those instructions come from the beta forum post. I performed the BIOS update on 2023-01-16T16:00-0500.

Resolution tweaks

The Framework laptop resolution (2256px X 1504px) is big enough to give you a pretty small font size, so welcome to the marvelous world of "scaling".

The Debian wiki page has a few tricks for this.

Console

This will make the console and grub fonts more readable:

cat >> /etc/default/console-setup <<EOF
FONTFACE="Terminus"
FONTSIZE=32x16
EOF
echo GRUB_GFXMODE=1024x768 >> /etc/default/grub
update-grub

Xorg

Adding this to your .Xresources will make everything look much bigger:

! 1.5*96
Xft.dpi: 144

Apparently, some of this can also help:

! These might also be useful depending on your monitor and personal preference:
Xft.autohint: 0
Xft.lcdfilter:  lcddefault
Xft.hintstyle:  hintfull
Xft.hinting: 1
Xft.antialias: 1
Xft.rgba: rgb

In my experience it also makes things look a little fuzzier, which is frustrating because you have this awesome monitor but everything looks out of focus. Just bumping Xft.dpi by a 1.5 factor looks good to me.

The Debian Wiki has a page on HiDPI, but it's not as good as the Arch Wiki, where the above blurb comes from. I am not using the latter because I suspect it's causing some of the "fuzziness".

TODO: find the equivalent of this GNOME hack in i3? (gsettings set org.gnome.mutter experimental-features "['scale-monitor-framebuffer']"), taken from this Framework guide

Issues

BIOS configuration

The Framework BIOS has some minor issues. One issue I personally encountered is that I had disabled Quick boot and Quiet boot in the BIOS to diagnose the above boot issues. This, in turn, triggers a bug where the BIOS boot manager (F12) would just hang completely. It would also fail to boot from an external USB drive.

The current fix (as of BIOS 3.03) is to re-enable both Quick boot and Quiet boot. Presumably this is something that will get fixed in a future BIOS update.

Note that the following keybindings are active in the BIOS POST check:

Key Meaning
F2 Enter BIOS setup menu
F12 Enter BIOS boot manager
Delete Enter BIOS setup menu

WiFi compatibility issues

I couldn't make WiFi work at first. Obviously, the default Debian installer doesn't ship with proprietary firmware (although that might change soon) so the WiFi card didn't work out of the box. But even after copying the firmware through a USB stick, I couldn't quite manage to find the right combination of ip/iw/wpa-supplicant (yes, after repeatedly copying a bunch more packages over to get those bootstrapped). (Next time I should probably try something like this post.)

Thankfully, I had a little USB-C dongle with a RJ-45 jack lying around. That also required a firmware blob, but it was a single package to copy over, and with that loaded, I had network.

Eventually, I did manage to make WiFi work; the problem was more on the side of "I forgot how to configure a WPA network by hand from the command line" than anything else. NetworkManager worked fine and got WiFi working correctly.

Note that this is with Debian bookworm, which has the 5.19 Linux kernel, and with the firmware-nonfree (firmware-iwlwifi, specifically) package.

Battery life

I was getting about 7 hours of battery life on the Purism Librem 13v4, and that's after a year or two of wear on the battery. Now, I still have about 7 hours of battery life, which is nicer than my old ThinkPad X220 (20 minutes!) but really, it's not that good for a new generation laptop. The 12th generation Intel chipset probably improved things compared to the previous Framework laptop, but I don't have an 11th gen Framework to compare with.

(Note that those are estimates from my status bar, not wall clock measurements. They should still be comparable between the Purism and Framework, that said.)

The battery life doesn't seem up to, say, Dell XPS 13, ThinkPad X1, and of course not the Apple M1, where I would expect 10+ hours of battery life out of the box.

That said, I do get those kinds of estimates when the machine is fully charged and idle. In fact, when everything is quiet and nothing is plugged in, I get dozens of hours of estimated battery life (I've seen 25h!). So power usage fluctuates quite a bit depending on usage, which I guess is expected.

Concretely, so far, light web browsing, reading emails and writing notes in Emacs (e.g. this file) takes about 8W of power:

Time    User  Nice   Sys  Idle    IO  Run Ctxt/s  IRQ/s Fork Exec Exit  Watts
-------- ----- ----- ----- ----- ----- ---- ------ ------ ---- ---- ---- ------
 Average   1.7   0.0   0.5  97.6   0.2  1.2 4684.9 1985.2 126.6 39.1 128.0   7.57
 GeoMean   1.4   0.0   0.4  97.6   0.1  1.2 4416.6 1734.5 111.6 27.9 113.3   7.54
  StdDev   1.0   0.2   0.2   1.2   0.0  0.5 1584.7 1058.3 82.1 44.0 80.2   0.71
-------- ----- ----- ----- ----- ----- ---- ------ ------ ---- ---- ---- ------
 Minimum   0.2   0.0   0.2  94.9   0.1  1.0 2242.0  698.2 82.0 17.0 82.0   6.36
 Maximum   4.1   1.1   1.0  99.4   0.2  3.0 8687.4 4445.1 463.0 249.0 449.0   9.10
-------- ----- ----- ----- ----- ----- ---- ------ ------ ---- ---- ---- ------
Summary:
System:   7.57 Watts on average with standard deviation 0.71

Expansion cards matter a lot in the battery life (see below for a thorough discussion), my normal setup is 2xUSB-C and 1xUSB-A (yes, with an empty slot, and yes, to save power).

Interestingly, playing a (720p) video in a window takes up more power (10.5W) than in full screen (9.5W) but I blame that on my desktop setup (i3 + compton)... Not sure if mpv hits the VA-API, maybe not in windowed mode. Similar results with 1080p, interestingly, except the windowed playback struggles to keep up altogether. Full screen playback takes a relatively comfortable 9.5W, which means a solid 5h+ of playback, which is fine by me.

Fooling around on the web, small edits, youtube-dl, and I'm at around 80% battery after about an hour, with an estimated 5h left, which is a little disappointing. I had a 7h remaining estimate before I started goofing around in Discourse, so I suspect the website is a pretty big battery drain, actually. I see about 10-12W, while I was probably at half that (6-8W) just playing music with mpv in the background...

In other words, it looks like editing posts in Discourse with Firefox takes a solid 4-6W of power. Amazing and gross.

(When writing about abusive power usage generates more power usage, is that a heisenbug? Or a schrödinbug?)

Power management

Compared to the Purism Librem 13v4, the ongoing power usage seems to be slightly better. An anecdotal metric is that the Purism would take 800mA idle, while the more powerful Framework manages a little over 500mA as I'm typing this, fluctuating between 450 and 600mA. That is without any active expansion card, except the storage. Those numbers come from the output of tlp-stat -b and, unfortunately, the "ampere" unit makes it quite hard to compare those, because voltage is not necessarily the same between the two platforms.
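One way around the unit problem is to compute watts directly from the sysfs values, which are exposed in micro-units; a quick sketch (BAT1 is the Framework's battery name, and voltage_now availability depends on the driver):

v=$(cat /sys/class/power_supply/BAT1/voltage_now)    # µV
i=$(cat /sys/class/power_supply/BAT1/current_now)    # µA
echo "$(( v / 1000 * i / 1000000 )) mW"              # µV × µA scaled down to mW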

  • TODO: review Arch Linux's tips on power saving
  • TODO: i915 driver has a lot of parameters, including some about power saving, see, again, the arch wiki, and particularly enable_fbc=1

TL;DR: power management on the laptop is an issue, but there are various tweaks you can make to improve it. Try:

  • powertop --auto-tune
  • apt install tlp && systemctl enable tlp
  • nvme.noacpi=1 mem_sleep_default=deep on the kernel command line may help with standby power usage
  • keep only USB-C expansion cards plugged in, all others suck power even when idle
  • consider upgrading the BIOS to latest beta (3.06 at the time of writing), unverified power savings
  • latest Linux kernels (6.2) promise power savings as well (unverified)

Update: also try to follow the official optimization guide. It was made for Ubuntu but will probably also work for your distribution of choice with a few tweaks. They recommend using tlpui but it's not packaged in Debian. There is, however, a Flatpak release. In my case, it resulted in the following diff to tlp.conf: tlp.patch.
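To make powertop --auto-tune stick across reboots (that's the "powertop service" I mention in the Ethernet section below), the usual trick is a small oneshot unit; this is a common pattern rather than something shipped by Debian:

# /etc/systemd/system/powertop.service
[Unit]
Description=apply powertop --auto-tune power savings at boot

[Service]
Type=oneshot
ExecStart=/usr/sbin/powertop --auto-tune

[Install]
WantedBy=multi-user.target

... then systemctl daemon-reload && systemctl enable --now powertop.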

Background on CPU architecture

There were power problems in the 11th gen Framework laptop, according to this report from Linux After Dark, so the issues with power management on the Framework are not new.

The 12th generation Intel CPU (AKA "Alder Lake") is a big-little architecture with "power-saving" and "performance" cores. There used to be performance problems introduced by the scheduler in Linux 5.16 but those were eventually fixed in 5.18, which uses Intel's hardware as an "intelligent, low-latency hardware-assisted scheduler". According to Phoronix, the 5.19 release improved the power saving, at the cost of some performance penalty. There were also patch series to make the scheduler configurable, but it doesn't look like those have been merged as of 5.19. There was also a session about this at the 2022 Linux Plumbers, but they stopped short of talking more about the specific problems Linux is facing on Alder Lake:

Specifically, the kernel's energy-aware scheduling heuristics don't work well on those CPUs. A number of features present there complicate the energy picture; these include SMT, Intel's "turbo boost" mode, and the CPU's internal power-management mechanisms. For many workloads, running on an ostensibly more power-hungry Pcore can be more efficient than using an Ecore. Time for discussion of the problem was lacking, though, and the session came to a close.

All this to say that the 12th gen Intel line shipped with this Framework series should have better power management thanks to its power-saving cores. And Linux has had the scheduler changes to make use of this (but maybe is still having trouble). In any case, this might not be the source of power management problems on my laptop, quite the opposite.

Also note that the firmware updates for various chipsets are supposed to improve things eventually.

On the other hand, The Verge simply declared the whole P-series a mistake...

Attempts at improving power usage

I did try to follow some of the tips in this forum post. The tricks powertop --auto-tune and tlp's PCIE_ASPM_ON_BAT=powersupersave basically did nothing: I was stuck at 10W power usage in powertop (600+mA in tlp-stat).

Apparently, I should be able to reach the C8 CPU power state (or even C9, C10) in powertop, but I seem to be stuck at C7. (Although I'm not sure how to read that tab in powertop: in the Core(HW) column there are only C3/C6/C7 states, and most cores are 85% in C7 or maybe C6, but the next column over does show many CPUs in C10 states...)

As it turns out, the graphics card actually takes up a good chunk of power unless proper power management is enabled (see below). After tweaking this, I did manage to get down to around 7W power usage in powertop.

Expansion cards actually do take up power, and so does the screen, obviously. The fully-lit screen takes a solid 2-3W of power compared to the fully dimmed screen. When removing all expansion cards and making the laptop idle, I can spin it down to 4 watts power usage at the moment, and an amazing 2 watts when the screen turned off.

Caveats

Abusive (10W+) power usage that I initially found could be a problem with my desktop configuration: I have this silly status bar that updates every second and probably causes redraws... The CPU certainly doesn't seem to spin down below 1GHz. Also note that this is with an actual desktop running with everything: it could very well be that some things (I'm looking at you, Signal Desktop) take up an unreasonable amount of power on their own (hello, 1W/electron, sheesh). Syncthing and containerd (Docker!) also seem to take a good 500mW just sitting there.

Beyond my desktop configuration, this could, of course, be a Debian-specific problem; your favorite distribution might be better at power management.

Idle power usage tests

Some expansion cards waste energy, even when unused. Here is a summary of the findings from the powerstat page. I also include other devices tested in this page for completeness:

Device Minimum Average Max Stdev Note
Screen, 100% 2.4W 2.6W 2.8W N/A
Screen, 1% 30mW 140mW 250mW N/A
Backlight 1 290mW ? ? ? fairly small, all things considered
Backlight 2 890mW 1.2W 3W? 460mW? geometric progression
Backlight 3 1.69W 1.5W 1.8W? 390mW? significant power use
Radios 100mW 250mW N/A N/A
USB-C N/A N/A N/A N/A negligible power drain
USB-A 10mW 10mW ? 10mW almost negligible
DisplayPort 300mW 390mW 600mW N/A not passive
HDMI 380mW 440mW 1W? 20mW not passive
1TB SSD 1.65W 1.79W 2W 12mW significant, probably higher when busy
MicroSD 1.6W 3W 6W 1.93W highest power usage, possibly even higher when busy
Ethernet 1.69W 1.64W 1.76W N/A comparable to the SSD card

So it looks like all expansion cards but the USB-C ones are active, i.e. they draw power even when idle. The USB-A cards are the least concern, sucking out 10mW, pretty much within the margin of error. But both the DisplayPort and HDMI do take a few hundred milliwatts. It looks like USB-A connectors have this fundamental flaw that they necessarily draw some power because they lack the power negotiation features of USB-C. At least according to this post:

It seems the USB A must have power going to it all the time, that the old USB 2 and 3 protocols, the USB C only provides power when there is a connection. Old versus new.

Apparently, this is a problem specific to the USB-C to USB-A adapter that ships with the Framework. Some people have actually changed their orders to all USB-C because of this problem, but I'm not sure the problem is as serious as claimed in the forums. I couldn't reproduce the "one watt" power drains suggested elsewhere, at least not repeatedly. (A previous version of this post did show such a power drain, but it was in a less controlled test environment than the series of more rigorous tests above.)

The worst offenders are the storage cards: the SSD drive takes at least one watt of power and the MicroSD card seems to want to take all the way up to 6 watts of power, both just sitting there doing nothing. This confirms claims of 1.4W for the SSD (but not 5W) power usage found elsewhere. The former post has instructions on how to disable the card in software. The MicroSD card has been reported as using 2 watts, but I've seen it as high as 6 watts, which is pretty damning.

The Framework team has a beta update for the DisplayPort adapter but currently only for Windows (LVFS technically possible, "under investigation"). A USB-A firmware update is also under investigation. It is therefore likely at least some of those power management issues will eventually be fixed.

Note that the upcoming Ethernet card has a reported 2-8W power usage, depending on traffic. I did my own power usage tests in powerstat-wayland and they seem lower than 2W.

The upcoming 6.2 Linux kernel might also improve battery usage when idle, see this Phoronix article for details, likely in early 2023.

Idle power usage tests under Wayland

Update: I redid those tests under Wayland, see powerstat-wayland for details. The TL;DR: is that power consumption is either smaller or similar.

Idle power usage tests, 3.06 beta BIOS

I redid the idle tests after the 3.06 beta BIOS update and ended up with this results:

Device Minimum Average Max Stdev Note
Baseline 1.96W 2.01W 2.11W 30mW 1 USB-C, screen off, backlight off, no radios
2 USB-C 1.95W 2.16W 3.69W 430mW USB-C confirmed as mostly passive...
3 USB-C 1.95W 2.16W 3.69W 430mW ... although with extra stdev
1TB SSD 3.72W 3.85W 4.62W 200mW unchanged from before upgrade
1 USB-A 1.97W 2.18W 4.02W 530mW unchanged
2 USB-A 1.97W 2.00W 2.08W 30mW unchanged
3 USB-A 1.94W 1.99W 2.03W 20mW unchanged
MicroSD w/o card 3.54W 3.58W 3.71W 40mW significant improvement! 2-3W power saving!
MicroSD w/ card 3.53W 3.72W 5.23W 370mW new measurement! increased deviation
DisplayPort 2.28W 2.31W 2.37W 20mW unchanged
1 HDMI 2.43W 2.69W 4.53W 460mW unchanged
2 HDMI 2.53W 2.59W 2.67W 30mW unchanged
External USB 3.85W 3.89W 3.94W 30mW new result
Ethernet 3.60W 3.70W 4.91W 230mW unchanged

Note that the table summary is different than the previous table: here we show the absolute numbers while the previous table was doing a confusing attempt at showing relative (to the baseline) numbers.

Conclusion: the 3.06 BIOS update did not significantly change idle power usage stats except for the MicroSD card which has significantly improved.

The new "external USB" test is also interesting: it shows how the provided 1TB SSD card performs (admirably) compared to existing devices. The other new result is the MicroSD card with a card which, interestingly, uses less power than the 1TB SSD drive.

Standby battery usage

I wrote some quick hack to evaluate how much power is used during sleep. Apparently, this is one of the areas that should have improved since the first Framework model, let's find out.
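That quick hack is essentially a hook dropped in /lib/systemd/system-sleep/, whose output lands in the journal under the systemd-sleep tag; a rough reconstruction (not the exact script, remember to make it executable) looks like this:

#!/bin/sh
# /lib/systemd/system-sleep/battery-log: dump battery levels before and
# after suspend, so the drain can be computed from the journal timestamps
case "$1" in
    pre|post) tlp-stat -b | grep -E 'charge_(now|full)' ;;
esac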

My baseline for comparison is the Purism laptop, which, in 10 minutes, went from this:

sep 28 11:19:45 angela systemd-sleep[209379]: /sys/class/power_supply/BAT/charge_now                      =   6045 [mAh]

... to this:

sep 28 11:29:47 angela systemd-sleep[209725]: /sys/class/power_supply/BAT/charge_now                      =   6037 [mAh]

That's 8mAh per 10 minutes (and 2 seconds), or 48mA, or, with this battery, about 127 hours or roughly 5 days of standby. Not bad!

In comparison, here is my really old x220, before:

sep 29 22:13:54 emma systemd-sleep[176315]: /sys/class/power_supply/BAT0/energy_now                     =   5070 [mWh]

... after:

sep 29 22:23:54 emma systemd-sleep[176486]: /sys/class/power_supply/BAT0/energy_now                     =   4980 [mWh]

... which is 90 mWh in 10 minutes, or a whopping 540mW, which was possibly okay when this battery was new (62000 mWh, so about 100 hours, or about 5 days), but this battery is almost dead and has only 5210 mWh when full, so only 10 hours standby.

And here is the Framework performing a similar test, before:

sep 29 22:27:04 angela systemd-sleep[4515]: /sys/class/power_supply/BAT1/charge_full                    =   3518 [mAh]
sep 29 22:27:04 angela systemd-sleep[4515]: /sys/class/power_supply/BAT1/charge_now                     =   2861 [mAh]

... after:

sep 29 22:37:08 angela systemd-sleep[4743]: /sys/class/power_supply/BAT1/charge_now                     =   2812 [mAh]

... which is 49mAh in a little over 10 minutes (and 4 seconds), or 292mA, much more than the Purism, but half of the X220. At this rate, the battery would last on standby only 12 hours!! That is pretty bad.

Note that this was done with the following expansion cards:

  • 2 USB-C
  • 1 1TB SSD drive
  • 1 USB-A with a hub connected to it, with keyboard and LAN

Preliminary tests without the hub (over one minute) show that it doesn't significantly affect this power consumption (300mA).

This guide also suggests booting with nvme.noacpi=1 but this still gives me about 5mAh/min (or 300mA).

Adding mem_sleep_default=deep to the kernel command line does make a difference. Before:

sep 29 23:03:11 angela systemd-sleep[3699]: /sys/class/power_supply/BAT1/charge_now                     =   2544 [mAh]

... after:

sep 29 23:04:25 angela systemd-sleep[4039]: /sys/class/power_supply/BAT1/charge_now                     =   2542 [mAh]

... which is 2mAh in 74 seconds, which is 97mA, brings us to a more reasonable 36 hours, or a day and a half. It's still above the x220 power usage, and more than an order of magnitude more than the Purism laptop. It's also far from the 0.4% promised by upstream, which would be 14mA for the 3500mAh battery.

It should also be noted that this "deep" sleep mode is a little more disruptive than regular sleep. As you can see by the timing, it took more than 10 seconds for the laptop to resume, which feels a little alarming as you're banging the keyboard to bring it back to life.

You can confirm the current sleep mode with:

# cat /sys/power/mem_sleep
s2idle [deep]

In the above, deep is selected. You can change it on the fly with:

printf s2idle > /sys/power/mem_sleep

Here's another test:

sep 30 22:25:50 angela systemd-sleep[32207]: /sys/class/power_supply/BAT1/charge_now                     =   1619 [mAh]
sep 30 22:31:30 angela systemd-sleep[32516]: /sys/class/power_supply/BAT1/charge_now                     =   1613 [mAh]

... better! 6 mAh in about 6 minutes, works out to 63.5mA, so more than two days standby.

A longer test:

oct 01 09:22:56 angela systemd-sleep[62978]: /sys/class/power_supply/BAT1/charge_now                     =   3327 [mAh]
oct 01 12:47:35 angela systemd-sleep[63219]: /sys/class/power_supply/BAT1/charge_now                     =   3147 [mAh]

That's 180mAh in about 3.5h, 52mA! Now at 66h, or almost 3 days.

I wasn't sure why I was seeing such fluctuations in those tests, but as it turns out, expansion card power tests show that they do significantly affect power usage, especially the SSD drive, which can take up to two full watts of power even when idle. I didn't control for expansion cards in the above tests — running them with whatever card I had plugged in without paying attention — so it's likely the cause of the high power usage and fluctuations.

It might be possible to work around this problem by disabling USB devices before suspend. TODO. See also this post.
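The rough idea would be a pre-suspend hook that pushes the expansion card USB devices into runtime power saving (or de-authorizes them entirely) before sleeping; this is a completely untested sketch of that idea, not something I run:

#!/bin/sh
# /lib/systemd/system-sleep/usb-powersave (untested sketch)
case "$1" in
    pre)
        # force runtime power management on all USB devices before suspend
        for f in /sys/bus/usb/devices/*/power/control; do
            echo auto > "$f"
        done
        ;;
esac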

In the meantime, I have been able to get much better suspend performance by unplugging all modules. Then I get this result:

oct 04 11:15:38 angela systemd-sleep[257571]: /sys/class/power_supply/BAT1/charge_now                     =   3203 [mAh]
oct 04 15:09:32 angela systemd-sleep[257866]: /sys/class/power_supply/BAT1/charge_now                     =   3145 [mAh]

Which is 14.8mA! Almost exactly the number promised by Framework! With a full battery, that means a 10-day suspend time. This is actually pretty good, and far beyond what I was expecting when starting down this journey.

So, once the expansion cards are unplugged, suspend power usage is actually quite reasonable. More detailed standby tests are available in the standby-tests page, with a summary below.

There is also some hope that the Chromebook edition — specifically designed with a specification of 14 days standby time — could bring some firmware improvements back down to the normal line. Some of those issues were reported upstream in April 2022, but there doesn't seem to have been any progress there since.

TODO: one final solution here is suspend-then-hibernate, which Windows uses for this

TODO: consider implementing the S0ix sleep states, see also troubleshooting

TODO: consider https://github.com/intel/pm-graph

Standby expansion cards test results

This table is a summary of the more extensive standby-tests I have performed:

Device Wattage Amperage Days Note
baseline 0.25W 16mA 9 sleep=deep nvme.noacpi=1
s2idle 0.29W 18.9mA ~7 sleep=s2idle nvme.noacpi=1
normal nvme 0.31W 20mA ~7 sleep=s2idle without nvme.noacpi=1
1 USB-C 0.23W 15mA ~10
2 USB-C 0.23W 14.9mA same as above
1 USB-A 0.75W 48.7mA 3 +500mW (!!) for the first USB-A card!
2 USB-A 1.11W 72mA 2 +360mW
3 USB-A 1.48W 96mA <2 +370mW
1TB SSD 0.49W 32mA <5 +260mW
MicroSD 0.52W 34mA ~4 +290mW
DisplayPort 0.85W 55mA <3 +620mW (!!)
1 HDMI 0.58W 38mA ~4 +250mW
2 HDMI 0.65W 42mA <4 +70mW (?)

Conclusions:

  • USB-C cards take no extra power on suspend, possibly less than empty slots, more testing required

  • USB-A cards take a lot more power on suspend (300-500mW) than on regular idle (~10mW, almost negligible)

  • 1TB SSD and MicroSD cards seem to take a reasonable amount of power (260-290mW), compared to their runtime equivalents (1-6W!)

  • DisplayPort takes a surprisingly large amount of power (620mW), almost double its average runtime usage (390mW)

  • HDMI cards take, surprisingly, less power (250mW) in standby than the DP card (620mW)

  • and oddly, a second card adds less power usage (70mW?!) than the first, maybe a circuit is used by both?

A discussion of those results is in this forum post.

Standby expansion cards test results, 3.06 beta BIOS

Framework recently (2022-11-07) announced that they will publish a firmware upgrade to address some of the USB-C issues, including power management. This could positively affect the above result, improving both standby and runtime power usage.

The update came out in December 2022 and I redid my analysis with the following results:

Device Wattage Amperage Days Note
baseline 0.25W 16mA 9 no cards, same as before upgrade
1 USB-C 0.25W 16mA 9 same as before
2 USB-C 0.25W 16mA 9 same
1 USB-A 0.80W 62mA 3 +550mW!! worse than before
2 USB-A 1.12W 73mA <2 +320mW, on top of the above, bad!
Ethernet 0.62W 40mA 3-4 new result, decent
1TB SSD 0.52W 34mA 4 a bit worse than before (+2mA)
MicroSD 0.51W 22mA 4 same
DisplayPort 0.52W 34mA 4+ upgrade improved by 300mW
1 HDMI ? 38mA ? same
2 HDMI ? 45mA ? a bit worse than before (+3mA)
Normal 1.08W 70mA ~2 Ethernet, 2 USB-C, USB-A

Full results in standby-tests-306. The big takeaway for me is that the update did not improve power usage on the USB-A ports which is a big problem for my use case. There is a notable improvement on the DisplayPort power consumption which brings it more in line with the HDMI connector, but it still doesn't properly turn off on suspend either.

Even worse, the USB-A ports now sometimes fail to resume after suspend, which is pretty annoying. This is a known problem that will hopefully get fixed in the final release.

Battery wear protection

The BIOS has an option to limit charge to 80% to mitigate battery wear. There's a way to control the embedded controller from runtime with fw-ectool, partly documented here. The command would be:

sudo ectool fwchargelimit 80

I looked at building this myself but failed to run it. I opened an RFP in Debian so that we can ship this in Debian, and also documented my work there.

Note that there is now a counter that tracks charge/discharge cycles. It's visible in tlp-stat -b, which is a nice improvement:

root@angela:/home/anarcat# tlp-stat -b
--- TLP 1.5.0 --------------------------------------------

+++ Battery Care
Plugin: generic
Supported features: none available

+++ Battery Status: BAT1
/sys/class/power_supply/BAT1/manufacturer                   = NVT
/sys/class/power_supply/BAT1/model_name                     = Framewo
/sys/class/power_supply/BAT1/cycle_count                    =      3
/sys/class/power_supply/BAT1/charge_full_design             =   3572 [mAh]
/sys/class/power_supply/BAT1/charge_full                    =   3541 [mAh]
/sys/class/power_supply/BAT1/charge_now                     =   1625 [mAh]
/sys/class/power_supply/BAT1/current_now                    =    178 [mA]
/sys/class/power_supply/BAT1/status                         = Discharging

/sys/class/power_supply/BAT1/charge_control_start_threshold = (not available)
/sys/class/power_supply/BAT1/charge_control_end_threshold   = (not available)

Charge                                                      =   45.9 [%]
Capacity                                                    =   99.1 [%]

One thing that is still missing is the charge threshold data (the (not available) above). There's been some work to make that accessible in August, stay tuned? This would also make it possible to implement hysteresis support.

Ethernet expansion card

The Framework ethernet expansion card is a fancy little doodle: "2.5Gbit/s and 10/100/1000Mbit/s Ethernet", the "clear housing lets you peek at the RTL8156 controller that powers it". Which is another way to say "we didn't completely finish prod on this one, so it kind of looks like we 3D-printed this in the shop"....

The card is a little bulky, but I guess that's inevitable considering the RJ-45 form factor when compared to the thin Framework laptop.

I have had a serious issue when trying it at first: the link LEDs just wouldn't come up. I made a full bug report in the forum and with upstream support, but eventually figured it out on my own. It's (of course) a power saving issue: if you reboot the machine, the links come up when the laptop is running the BIOS POST check and even when the Linux kernel boots.

I first thought that the problem was likely related to the powertop service which I run at boot time to tweak some power saving settings.

It seems like this:

echo 'on' > '/sys/bus/usb/devices/4-2/power/control'

... is a good workaround to bring the card back online. You can even return to power saving mode and the card will still work:

echo 'auto' > '/sys/bus/usb/devices/4-2/power/control'

Further research by Matt_Hartley from the Framework Team found this issue in the tlp tracker that shows how the USB_AUTOSUSPEND setting enables the power saving even if the driver doesn't support it, which, in retrospect, just sounds like a bad idea. To quote that issue:

By default, USB power saving is active in the kernel, but not force-enabled for incompatible drivers. That is, devices that support suspension will suspend, drivers that do not, will not.

So the fix is actually to uninstall tlp or disable that setting by adding this to /etc/tlp.conf:

USB_AUTOSUSPEND=0

... but that disables auto-suspend on all USB devices, which may hurt other power usage performance. I have found that a combination of:

USB_AUTOSUSPEND=1
USB_DENYLIST="0bda:8156"

and this on the kernel commandline:

usbcore.quirks=0bda:8156:k

... actually does work correctly. I now have this in my /etc/default/grub.d/framework-tweaks.cfg file:

# net.ifnames=0: normal interface names ffs (e.g. eth0, wlan0, not wlp166s0)
# nvme.noacpi=1: reduce SSD disk power usage (not working)
# mem_sleep_default=deep: reduce power usage during sleep (not working)
# usbcore.quirk is a workaround for the ethernet card suspend bug: https://guides.frame.work/Guide/Fedora+37+Installation+on+the+Framework+Laptop/108?lang=en
GRUB_CMDLINE_LINUX="net.ifnames=0 nvme.noacpi=1 mem_sleep_default=deep usbcore.quirks=0bda:8156:k"

# fix the resolution in grub for fonts to not be tiny
GRUB_GFXMODE=1024x768

Other than that, I haven't been able to max out the card because I don't have other 2.5Gbit/s equipment at home, which is strangely satisfying. But running against my Turris Omnia router, I could pretty much max a gigabit fairly easily:

[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  1.09 GBytes   937 Mbits/sec  238             sender
[  5]   0.00-10.00  sec  1.09 GBytes   934 Mbits/sec                  receiver
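That output looks like iperf3; the test is just the usual client/server pair, with the server running on the router (the address is an example):

# on the router:  iperf3 -s
# on the laptop:
iperf3 -c 192.168.1.1 -t 10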

The card doesn't require any proprietary firmware blobs which is surprising. Other than the power saving issues, it just works.

In my power tests (see powerstat-wayland), the Ethernet card seems to use about 1.6W of power idle, without link, in the above "quirky" configuration where the card is functional but without autosuspend.

Proprietary firmware blobs

The framework does need proprietary firmware to operate. Specifically:

  • the WiFi network card shipped with the DIY kit is a AX210 card that requires a 5.19 kernel or later, and the firmware-iwlwifi non-free firmware package
  • the Bluetooth adapter also loads the firmware-iwlwifi package (untested)
  • the graphics work out of the box without firmware, but certain power management features come only with special proprietary firmware, normally shipped in the firmware-misc-nonfree but currently missing from the package

Note that, at the time of writing, the latest i915 firmware from linux-firmware has a serious bug where loading all the accessible firmware results in noticeable lag (I estimate 200-500ms) between the keyboard (not the mouse!) and the display. Symptoms also include tearing and shearing of windows; it's pretty nasty.

One workaround is to delete the two affected firmware files:

cd /lib/firmware && rm adlp_guc_70.1.1.bin adlp_guc_69.0.3.bin
update-initramfs -u

You will get the following warnings during the initramfs rebuild, which is good, as it means the problematic firmware is disabled:

W: Possible missing firmware /lib/firmware/i915/adlp_guc_69.0.3.bin for module i915
W: Possible missing firmware /lib/firmware/i915/adlp_guc_70.1.1.bin for module i915

But then it also means that critical firmware isn't loaded, which means, among other things, a higher battery drain. I was able to move from 8.5-10W down to the 7W range after making the firmware work properly. This is also after turning the backlight all the way down, as that takes a solid 2-3W at full blast.

The proper fix is to use some compositing manager. I ended up using compton with the following systemd unit:

[Unit]
Description=start compositing manager
PartOf=graphical-session.target
ConditionHost=angela

[Service]
Type=exec
ExecStart=compton --show-all-xerrors --backend glx --vsync opengl-swc
Restart=on-failure

[Install]
RequiredBy=graphical-session.target
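
Assuming this is installed as a systemd user unit (e.g. ~/.config/systemd/user/compton.service), it can be enabled with something like:

systemctl --user daemon-reload
systemctl --user enable --now compton.service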

compton is orphaned, however, so you might be tempted to use picom instead, but in my experience the latter uses much more power (1-2W extra in an otherwise similar setup). I also tried compiz but it would just crash with:

anarcat@angela:~$ compiz --replace
compiz (core) - Warn: No XI2 extension
compiz (core) - Error: Another composite manager is already running on screen: 0
compiz (core) - Fatal: No manageable screens found on display :0

When running from the base session, I would get this instead:

compiz (core) - Warn: No XI2 extension
compiz (core) - Error: Couldn't load plugin 'ccp'
compiz (core) - Error: Couldn't load plugin 'ccp'

Thanks to EmanueleRocca for figuring all that out. See also this discussion about power management on the Framework forum.

Note that Wayland environments do not require any special configuration here and actually work better, see my Wayland migration notes for details.

Note that the iwlwifi firmware also looks incomplete. Even with the package installed, I get those errors in dmesg:

[   19.534429] Intel(R) Wireless WiFi driver for Linux
[   19.534691] iwlwifi 0000:a6:00.0: enabling device (0000 -> 0002)
[   19.541867] iwlwifi 0000:a6:00.0: firmware: failed to load iwlwifi-ty-a0-gf-a0-72.ucode (-2)
[   19.541881] iwlwifi 0000:a6:00.0: firmware: failed to load iwlwifi-ty-a0-gf-a0-72.ucode (-2)
[   19.541882] iwlwifi 0000:a6:00.0: Direct firmware load for iwlwifi-ty-a0-gf-a0-72.ucode failed with error -2
[   19.541890] iwlwifi 0000:a6:00.0: firmware: failed to load iwlwifi-ty-a0-gf-a0-71.ucode (-2)
[   19.541895] iwlwifi 0000:a6:00.0: firmware: failed to load iwlwifi-ty-a0-gf-a0-71.ucode (-2)
[   19.541896] iwlwifi 0000:a6:00.0: Direct firmware load for iwlwifi-ty-a0-gf-a0-71.ucode failed with error -2
[   19.541903] iwlwifi 0000:a6:00.0: firmware: failed to load iwlwifi-ty-a0-gf-a0-70.ucode (-2)
[   19.541907] iwlwifi 0000:a6:00.0: firmware: failed to load iwlwifi-ty-a0-gf-a0-70.ucode (-2)
[   19.541908] iwlwifi 0000:a6:00.0: Direct firmware load for iwlwifi-ty-a0-gf-a0-70.ucode failed with error -2
[   19.541913] iwlwifi 0000:a6:00.0: firmware: failed to load iwlwifi-ty-a0-gf-a0-69.ucode (-2)
[   19.541916] iwlwifi 0000:a6:00.0: firmware: failed to load iwlwifi-ty-a0-gf-a0-69.ucode (-2)
[   19.541917] iwlwifi 0000:a6:00.0: Direct firmware load for iwlwifi-ty-a0-gf-a0-69.ucode failed with error -2
[   19.541922] iwlwifi 0000:a6:00.0: firmware: failed to load iwlwifi-ty-a0-gf-a0-68.ucode (-2)
[   19.541926] iwlwifi 0000:a6:00.0: firmware: failed to load iwlwifi-ty-a0-gf-a0-68.ucode (-2)
[   19.541927] iwlwifi 0000:a6:00.0: Direct firmware load for iwlwifi-ty-a0-gf-a0-68.ucode failed with error -2
[   19.541933] iwlwifi 0000:a6:00.0: firmware: failed to load iwlwifi-ty-a0-gf-a0-67.ucode (-2)
[   19.541937] iwlwifi 0000:a6:00.0: firmware: failed to load iwlwifi-ty-a0-gf-a0-67.ucode (-2)
[   19.541937] iwlwifi 0000:a6:00.0: Direct firmware load for iwlwifi-ty-a0-gf-a0-67.ucode failed with error -2
[   19.544244] iwlwifi 0000:a6:00.0: firmware: direct-loading firmware iwlwifi-ty-a0-gf-a0-66.ucode
[   19.544257] iwlwifi 0000:a6:00.0: api flags index 2 larger than supported by driver
[   19.544270] iwlwifi 0000:a6:00.0: TLV_FW_FSEQ_VERSION: FSEQ Version: 0.63.2.1
[   19.544523] iwlwifi 0000:a6:00.0: firmware: failed to load iwl-debug-yoyo.bin (-2)
[   19.544528] iwlwifi 0000:a6:00.0: firmware: failed to load iwl-debug-yoyo.bin (-2)
[   19.544530] iwlwifi 0000:a6:00.0: loaded firmware version 66.55c64978.0 ty-a0-gf-a0-66.ucode op_mode iwlmvm

Some of those are available in the latest upstream firmware package (iwlwifi-ty-a0-gf-a0-71.ucode, -68, and -67), but not all (e.g. iwlwifi-ty-a0-gf-a0-72.ucode is missing). It's unclear what those firmware versions actually add, as the WiFi seems to work well without them.

I still copied them in from the latest linux-firmware package in the hope they would help with power management, but I did not notice a change after loading them.

There are also multiple knobs on the iwlwifi and iwlmvm drivers. The latter has a power_scheme setting which defaults to 2 (balanced); setting it to 3 (low power) could improve battery usage as well, in theory. The iwlwifi driver also has power_save (defaults to disabled) and power_level (1-5, defaults to 1) settings. See also the output of modinfo iwlwifi and modinfo iwlmvm for other driver options.
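
As a hedged sketch, those parameters could be set persistently through a modprobe configuration file; the actual values here are arbitrary, untested examples:

# /etc/modprobe.d/iwlwifi-power.conf (untested sketch)
# power_scheme: 2 = balanced (default), 3 = low power
options iwlmvm power_scheme=3
# enable the iwlwifi power saving knobs mentioned above (example values)
options iwlwifi power_save=1 power_level=3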

Graphics acceleration

After loading the latest upstream firmware and setting up a compositing manager (compton, above), I tested the classic glxgears.

Running in a window gives me odd results, as the gears basically grind to a halt:

Running synchronized to the vertical refresh.  The framerate should be
approximately the same as the monitor refresh rate.
137 frames in 5.1 seconds = 26.984 FPS
27 frames in 5.4 seconds =  5.022 FPS

Ouch. 5FPS!

But interestingly, once the window is in full screen, it does hit the monitor refresh rate:

300 frames in 5.0 seconds = 60.000 FPS

I'm not really a gamer and I'm not normally using any of that fancy graphics acceleration stuff (except maybe my browser does?).

I installed intel-gpu-tools for the intel_gpu_top command to confirm the GPU was engaged when doing those simulations. A nice find. Other useful diagnostic tools include glxgears and glxinfo (in mesa-utils) and vainfo (in the vainfo package).

Following this post, I also made sure to have those settings in my about:config in Firefox, or, in user.js:

user_pref("media.ffmpeg.vaapi.enabled", true);

Note that the guide suggests many other settings to tweak, but those might actually be overkill, see this comment and its parents. I did try forcing hardware acceleration by setting gfx.webrender.all to true, but everything became choppy and weird.
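
To check whether the VA-API path is actually being used, something like this gives a quick hint (both tools are packaged as mentioned above):

vainfo | grep -i -e driver -e VAProfile
sudo intel_gpu_top   # watch the "Video" engine while playing a clip in Firefox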

The guide also mentions installing the intel-media-driver package, but I could not find that in Debian.

The Arch wiki has, as usual, an excellent reference on hardware acceleration in Firefox.

Chromium / Signal desktop bugs

It looks like both Chromium and Signal Desktop misbehave with my compositor setup (compton + i3). The fix is to add a persistent flag to Chromium. In Arch, it's conveniently in ~/.config/chromium-flags.conf but that doesn't actually work in Debian. I had to put the flag in /etc/chromium.d/disable-compositing, like this:

export CHROMIUM_FLAGS="$CHROMIUM_FLAGS --disable-gpu-compositing"

It's possible another one of the hundreds of flags might fix this issue better, but I don't really have time to go through this entire, incomplete, and unofficial list (!?!).

Signal Desktop has a similar problem, and doesn't reuse those flags (because of course it doesn't). Instead, I had to rewrite the wrapper script in /usr/local/bin/signal-desktop to use this:

exec /usr/bin/flatpak run --branch=stable --arch=x86_64 org.signal.Signal --disable-gpu-compositing "$@"

This was mostly done in this Puppet commit.

I haven't figured out the root of this problem. I did try using picom and xcompmgr; they both suffer from the same issue. Another Debian testing user on Wayland told me they haven't seen this problem, so hopefully this can be fixed by switching to wayland.

Graphics card hangs

I believe I might have this bug which results in a total graphical hang for 15-30 seconds. It's fairly rare so it's not too disruptive, but when it does happen, it's pretty alarming.

The comments on that bug report are encouraging though: it seems this is a bug in either mesa or the Intel graphics driver, which means many people have this problem so it's likely to be fixed. There's actually a merge request on mesa already (2022-12-29).

It could also be that bug because the error message I get is actually:

Jan 20 12:49:10 angela kernel: Asynchronous wait on fence 0000:00:02.0:sway[104431]:cb0ae timed out (hint:intel_atomic_commit_ready [i915]) 
Jan 20 12:49:15 angela kernel: i915 0000:00:02.0: [drm] GPU HANG: ecode 12:0:00000000 
Jan 20 12:49:15 angela kernel: i915 0000:00:02.0: [drm] Resetting chip for stopped heartbeat on rcs0 
Jan 20 12:49:15 angela kernel: i915 0000:00:02.0: [drm] GuC firmware i915/adlp_guc_70.1.1.bin version 70.1 
Jan 20 12:49:15 angela kernel: i915 0000:00:02.0: [drm] HuC firmware i915/tgl_huc_7.9.3.bin version 7.9 
Jan 20 12:49:15 angela kernel: i915 0000:00:02.0: [drm] HuC authenticated 
Jan 20 12:49:15 angela kernel: i915 0000:00:02.0: [drm] GuC submission enabled 
Jan 20 12:49:15 angela kernel: i915 0000:00:02.0: [drm] GuC SLPC enabled

It's a solid 30-second graphical hang; the keyboard and everything else seem to keep working. The latter bug report is quite long, with many comments, but this one from January 2023 seems to say that Sway 1.8 fixed the problem. There's also an earlier patch to add an extra kernel parameter that supposedly fixes that too. There's all sorts of other workarounds in there, for example this:

echo "options i915 enable_dc=1 enable_guc_loading=1 enable_guc_submission=1 edp_vswing=0 enable_guc=2 enable_fbc=1 enable_psr=1 disable_power_well=0" | sudo tee /etc/modprobe.d/i915.conf

from this comment... So that one is unsolved, as far as the upstream drivers are concerned, but maybe could be fixed through Sway.

Weird USB hangs / graphical glitches

I have had weird connectivity glitches better described in this post, but basically: my USB keyboard and mice (connected over a USB hub) drop keys, lag a lot or hang, and I get visual glitches.

The fix was to tighten the screws around the CPU on the motherboard (!), which is, thankfully, a rather simple repair.

USB docks are hell

Note that the monitors are hooked up to angela through a USB-C / Thunderbolt dock from Cable Matters, with the lovely name of 201053-SIL. It has issues, see this blog post for an in-depth discussion.

Shipping details

I ordered the Framework in August 2022 and received it about a month later, which is sooner than expected given that the August batch was late.

People (including me) expected this to have an impact on the September batch, but it seems Framework have been able to fix the delivery problems and keep up with the demand.

As of early 2023, their website announces that laptops ship "within 5 days". I have myself ordered a few expansion cards in November 2022, and they shipped on the same day, arriving 3-4 days later.

The supply pipeline

There are basically 6 steps in the Framework shipping pipeline, each (except the last) accompanied with an email notification:

  1. pre-order
  2. preparing batch
  3. preparing order
  4. payment complete
  5. shipping
  6. (received)

This comes from the crowdsourced spreadsheet, which should be updated when the status changes here.

I was part of the "third batch" of the 12th generation laptop, which was supposed to ship in September. It ended up arriving on my door step on September 27th, about 33 days after ordering.

It seems current orders are not processed in "batches", but in real time, see this blog post for details on shipping.

Shipping trivia

I don't know about the others, but my laptop shipped through no less than four different airplane flights. Here are the hops it took:

I can't quite figure out how to calculate exactly how much mileage that is, but it's huge. The ride through Alaska is surprising enough but the bounce back through Winnipeg is especially weird. I guess the route happens that way because of Fedex shipping hubs.

There was a related oddity when I had my Purism laptop shipped: it left from the west coast and seemed to embark on an endless, two-week-long road trip across the continental US.

Other resources

23 May, 2023 05:35PM

Picking a USB-C dock and charger

Dear lazy web, help me pick the right hardware to make my shiny new laptop work better. I want a new USB-C dock and travel power supply.

Background

I need advice on hardware, because my current setup in the office doesn't work so well. My new Framework laptop has four (4!) USB-C ports which is great, but it only has those ports (there's a combo jack, but I don't use it because it's noisy). So right now I have the following setup:

  • HDMI: monitor one
  • HDMI: monitor two
  • USB-A: Yubikey
  • USB-C: USB-C hub, which has:
    • RJ-45 network
    • USB-A keyboard
    • USB-A mouse
    • USB-A headset

... and I'm missing a USB-C port for power! So I get into this annoying situation where I need to actually unplug the USB-A Yubikey, unplug the USB-A expansion card, plug in the power for a while so it can charge, and then do that in reverse when I need the Yubikey again (which is: often).

Another option I have is to unplug the headset, but I often need both the headset and the Yubikey at once. I also have a pair of earbuds that work in the combo jack, but, again, they are noticeably noisy.

So this doesn't work.

I'm thinking I should get a USB-C Dock of some sort. The Framework forum has a long list of supported docks in a "megathread", but I figured people here might have their own experience with docks and laptop/dock setups.

So what USB-C dock should I get?

Should I consider changing to a big monitor with a built-in USB-C dock and power?

Ideally, I'd like to just walk into the office, put the laptop down, plug in a single USB-C cable and be done with it. Does that even work with Wayland? I have read reports of DisplayLink not working in Sway specifically... does that apply to any multi-monitor over a single USB-C cable setup?

Oh, and what about travel options? Do you have a fancy small form factor USB-C power charger that you really like?

Current ideas

Here are the devices I'm considering right now...

USB chargers

The spec here is at least 65W USB-C with international plugs.

  • Anker nano II: 50$USD sold out, not international? they have the PowerPort III (65W, UK/US/EU, not AU), but it's sold out
  • Ugreen 65W 2 USB-C 1 USB-A UK/US/EU: 56$ USD (disappeared?)
  • Thinkpad power adapter: 54$USD, basically your normal ThinkPad charger, meh
  • TOFU Power station: 95$USD 2 USB-A (15W), 2 USB-C (30-45W PD), AU/US/UK/EU Mac-compatible adapters, 3-port 7A power strip (!)
  • Volta GIGA 130W GAN charger: 99$ 3 USB-C, 1 USB-A, 5$ extra for each international adapter
  • One World 65: 69$ 3 USB-C (one with 65W PD), 2 USB-A, slide-out international plugs, also acts as a 7A international adapter, built-in fuse, mentioned by Wired, 15% off with code OneWorld65_15%Off
  • The LinkOn 166W looks really promising (2 USB-A, 2 USB-C, near universal), delivering the full 100W permitted under power delivery (PD) 3.0 (PD 3.1 allows for 240W delivery) and 166W (100+30+18+18) when all ports are in use, untested otherwise (update: a previous version of this entry expressed concern about their certification, but LinkOn actually wrote me to clarify they only have the PD 3.0 certification, while offering some affiliate links and free stuff, to which I have said (basically) sure, send me stuff, and then they said "oh, Canada, we don't ship there"; interesting gear nevertheless)

TOFU power station

I found that weird little thing through this Twitter post from Benedict Reuschling, from this blog post, from 2.5 admins episode 127 (phew!).

I ordered a TOFU power station in February (2023-02-20) and it landed on my doorstep about two weeks later (2023-03-08).

The power output is a little disappointing: my laptop tells me it's charging at 30W instead of the rated 45W, which is already less than the 65W provided by the normal Framework charger. I suspect it will have a hard time keeping up with full-on, all-cores-blazing power consumption, so I'm still considering a separate charger. It should be fine for charging the laptop overnight during my travels, which is basically my use case here.

The "travel" thing is a little plastic contraption that holds three different power adapters: Australian, British_plugs_and_sockets), Europe, and USA. The clever thing here is the other end is what looks like a IEC 60320 C7/C8 coupler, AKA a "figure-8", "infinity" or "shotgun", according to Wikipedia. It seems design to fit with Macbook charger cable adapters, but it also seems to physically fit inside a classic Thinkpad power supply, which means you can use this thing to turn a normal Thinkpad power supply into an international power supply, at the cost of removing a good chunk of wire. It is not compatible with the Framework power supply, which uses a three pin, grounded, C5/C6 coupler, AKA a "cloverleaf" or "Mickey Mouse" connector.

Strangely, the travel adapters also have a fourth adapter which is not really an adapter: it's a flashlight, rechargeable through a Micro-USB connector.

I'm still a little worried about overload: this thing is supposed to be designed as a power bar and a charger, but they warn against "overloading" it, with a picture of a hair drier... So what is it? Is it a full on 15A power bar or not? 220V? There's an odd lack of documentation about all of this. The specifications on the cover are:

AC:

  • Input: 100V-240V
  • Output: 100V-240V

DC:

  • Type-C: 36W/45W (PD)
  • Type-C: 18W (PD)
  • USB-Ax2: 15W (share)

Dimensions:

  • 82mm(ø)x28mm(H)
  • Weight: 201g
  • 7A auto-reset fuse
  • Cable: 85cm

Update: I found the main TOFU website and the user manual which is a little more detailed.

So I guess you can only draw 7A from the power source? That would mean 700W at 100V, or 1680W at 240V, which I'm a little suspicious of.

The specs for the "traveler" are:

Dimensions:

  • 3cm x 3.8cm x 5.8cm
  • UK/EU/AU/US
  • Weight: 62g

The two devices come in a small carrying case that is about 5" x 3.75" x 2" (or 12.7cm x 9.5cm x 5.08cm), so it's actually pretty bulky once everything is packed together. The power cable that wraps around the device is actually 2'7", or 78.74cm; the 85cm figure above probably includes the width of the device itself, which is a little disingenuous. There's a USB-C cable provided to actually charge your laptop, but it's tiny, only about a foot (11⅝") or 30cm.

Compared to the Framework power supply, which has a 6'8" (203cm) USB-C cable and a 3'2" (96cm) power cable (so 9'10" total, or 3 meter long!), it's kind of ridiculous. That said, I can easily take the USB-C cable from the Framework power supply and carry it alongside the TOFU to get a ~280cm (~9'2") cable, which is then somewhat reasonable. It feels very "crammed" in the carrying case with the longer cable, unfortunately.

At this stage, I'll definitely try this device as my main power source when I leave the office, but I'll probably bring a backup for my first international travels in case something goes wrong. I'm looking at Ugreen and Volta chargers as a backup for those.

Update: in a real-world charging test, the power supply provided only about 28W (not 45W!) of charge, so it definitely can't sustain full power operation. An Anker GANPrime charger rated for 65W also doesn't provide the full 60W and peaks at 38W. This graph shows the Framework laptop (rated for PD 3.0, 100W) charging for about 15 minutes then switching to the Anker charger.

[Graph: GNOME Power Statistics samples oscillating between 24 and 30W, then jumping to about 36W]

Update 2: I traveled quite a bit with this device and I like it. The main downside is the cable is just too damn short and a larger cable doesn't fit well in the case. Otherwise it's really nice.

TOFU YOYO Cable

I also bought the YOYO cable in the hope it would fix that problem while simultaneously covering a few other purposes I carry stuff for:

  • multi-USB connector (USB-C, micro-USB, Lightning) for charging
  • longer charging cable
  • phone stand
  • "eject SIM" pin

That device is a little more disappointing. First off, like the TOFU power station, the cable needs to be manually rewound which makes it kind of annoying. Also, the cable is kind of short: only 1m long, so with the TOFU, we're still 1m+ short of the 3m cable offered stock by the Framework laptop.

The design is also a little gimmicky: it has a more "plasticky" feeling than the power station, and some parts are hard to take out. For example, there's a Micro-USB to USB-C adapter that I almost broke trying to figure out how to pry it out of there.

It's also a bit annoying to have all those adapters dangling around when the basic use case is "I just want to power my laptop". I guess it does fulfill the "I want just one thing" purpose, and I haven't actually carried it around while traveling, so we'll see how useful this actually is.

Specifications:

  • YoYo cable case x1
  • Silicon cable 100cm x1
  • SIM ejector x1
  • (nano?)SIM card storage
  • SD card reader?
  • 1W LED??
  • Adapter cap x2
  • Type-c to Lightning adapter x1
  • Type-c to Micro adapter x1
  • Type-c to Type-A adapter x1
  • Dimensions: 56Øx29mm
  • Weight: 55g
  • User manual
  • Home page
  • Store (29$USD)

The funny thing with this is there's so much stuff crammed in there that the manual doesn't even mention all of it. For example, the specifications mention an LED and an SD card reader somewhere in there, but I haven't found those yet, and they're not in the manual.

This and the MASA power bank (below) were ordered together and took over a month to ship.

TOFU MASA power bank

This is getting off-topic but...

I also bought the MASA power bank which promises a 68.4Wh supply so, in theory, it could act as a second battery for my Framework laptop. I'll believe it when I see it though. It also acts as a wireless charger, which would be nice if I had any wireless charging thing. It ships in a nice case with a USB-C wire and two adapters that actually fit in the case if you roll them up just so.

A little bulky. Doesn't seem to actually charge anything, hugely disappointing.

Update: in contact with tech support, it seems I am misinterpreting the output of the LEDs. Also, when the battery is fully discharged, it can't charge fast with USB-C.

Here are the LED meanings I could gather:

  • when clicked:
    • all four LEDs steady: battery full, 100% charged
    • 3 LEDs steady: 75% charged
    • 2 LEDs steady: 50% charged
    • 1 LED steady: 25% charged
    • all four LEDs blinking: low battery warning, plug in a USB-A slow charger for an hour
    • no LED: flat dead, plug in a slow charger
  • when charging:
    • rightmost LED blinking: 0-25% charged, need slow charging
    • one steady LED, second LED blinking: 25%+ charged, can charge fast

The LED button can be pushed for two seconds to reset the protection circuits.

Specifications:

  • Full recharge in 3 hours with 33W input, 50% in one hour
  • Type-c Port1: Support USB-PD/Maximum 30W(5/9/12/15/20V)
  • Output Port1: Maximum 36W(20/15/12/9/5V)
  • Output Port2: Maximum 18W(9V2A,5V3A)
  • Wireless: QI standard Maximum 7.5W
  • Battery: Li-ion Polymer battery
  • 3 cells: 68.4Wh (18000mAh±5%)
  • Dimensions, closed: 80x80x28 mm
  • Dimensions, opened: 82x80x57 mm
  • Weight: 320g
  • User manual
  • Home page
  • Shop (88$USD)

Ugreen

So I was recommended the Ugreen chargers, but unfortunately it seems their international edition just disappeared from their website. A first attempt at contacting them yielded no response, and a second one yielded a bounce from qq.com telling me (in Chinese) "出 错原因:该邮件内容涉嫌大量群发,并且被多数用户投诉为垃圾邮件。" which Google translates to "Reason for error: The content of this email is suspected of being mass-sent, and is complained by most users as spam."

The Support button on their website does exactly fuckall, so I guess that's it for Ugreen.

Volta

Volta has been a little more helpful and clarified it's possible to get extra international adapters for their chargers by email (which wasn't obvious from the website). But their charger is currently (2023-03-13) marked as "sold out", so I guess I'm stuck there as well.

One World

I have ordered a One World 65 as well. At 69$USD, it boasts 2 USB-A and 3 USB-C ports, with one 65W PD. It has slide-out international plugs which means it basically works everywhere. It also acts as a 7A international adapter, as it has this funky array of connectors in the back where you can plug in other AC devices. It has a built-in timeout fuse.

I found it on Tech advisor but when I noticed it was quoting Wired, I found it was indeed mentioned by Wired, which also provided a promo code OneWorld65_15%Off, so this ended up being around 50$USD, a bargain.

Ordered on 2023-03-28, we'll see if it ever gets here or if it works. I mean to use it as a backup to the TOFU.

Update: I eventually got the device, some weeks later, but too late for my trip. It works pretty well, so well that I actually use it as a daily driver at home. It's compact, holds well in the plug, delivers fast charging for my laptop and other USB-C devices, and has plenty of ports.

A good choice.

ZMI

ZMI has interesting products like this 65W international travel adapter. Found out about the company in this post on the Framework forum referring to some battery pack of theirs they were happy with.

One charger I was puzzled by is this combined charger / battery. It's a 45W charger with a small battery (6700mAh, so presumably about 25Wh). It has a USB-A and a USB-C port. Otherwise they have a single 30W 10Ah battery which can presumably charge a Framework laptop in an hour.

They're also doing this crowdfunding campaign for "ZMI No. 20: The World's Most Powerful PowerPack 25000mAh Battery w/ 3 PD Ports | Revolutionary 210W Max Output | 100W USB-C/USB-A | Fast Charge". That product is actually "shipping" but is not on their main store page yet, and it's not possible to buy it on the IndieGogo page either.

Interestingly, it seems to embed a 21700 battery, similar to the 18650 but more compact and apparently used in Tesla cars, see also this comparison with the 18650. This gives at least some promise that the batteries could be eventually changed, although there's no promise on the repairability of this thing, which I would assume to be poor unless proven otherwise.

Untested.

USB Docks

Specification:

  • must have 2 or more USB-A ports (3 is ideal, otherwise i need a new headset adapter)
  • at least one USB-C port, preferably more
  • works in Linux
  • 2 display support (or one big monitor?), ideally 2x4k for future-proofing, HDMI or Display-Port, ideally also with double USB-C/Thunderbolt for future-proofing
  • all on one USB-C wire would be nice
  • power delivery over the USB-C cable
  • not too big, preferably

Note that I move from 4 USB-A ports down to 2 or 3 because I can change the USB-A cable on my keyboard for USB-C. But that means I need a slot for a USB-C port on the dock of course. I also could live with one less USB-A cable if I find a combo jack adapter, but that would mean a noisy experience.

Options found so far:

  • ThinkPad universal dock (40AY0090US): 300$USD, 65-100W, combo jack, 3x USB 3.1, 2x USB 2.0, 1x USB-C, 2x DisplayPort, 1x HDMI port, 1x Gigabit Ethernet

  • Caldigit docks are apparently good, and the USB-C HDMI Dock seems like a good candidate (not on sale in their Canada shop), but it leaves me wondering whether I want to keep my old analog monitors around or instead get proper monitors with USB-C inputs, and use something like the Thunderbolt Element hub (230$USD). Update: I wrote Caldigit and they don't seem to have any dock that would work for me; they suggest the TS3 plus which only has a single DP connector (!?). The USB-C HDMI dock is actually discontinued and they mentioned that they do have trouble with Linux in general.

  • I was also recommended OWC docks as well. update: their website is a mess, and live chat has confirmed they do not actually have any device that fits the requirement of two HDMI/DP outputs.

  • Anker also has docks (e.g. the Anker 568 USB-C Docking Station 11-in-1 looks nice, but holy moly, 300$USD...). Also, Anker docks are not all equal; I've heard reports of some of them being bad. Update: I reached out to Anker to clarify whether or not their docks will work on Linux and to advise on which dock to use, and their response is that they "do not recommend you use our items with Linux system". So I guess that settles it with Anker.

  • Cable Matters are promising, and their "USB-C Docking Station with Dual 4K HDMI and 80W Charging for Windows Computers" might just actually work. It was out of stock on their website and Amazon, but after reaching out to their support by email, they pointed out a product page that works in Canada.

  • a friend recommended this Belkin 11-in-1 and Pwaytech 11-in-1

Also: this post from Big Mess Of Wires has me worried whether anything will work at all. It's where I found the Cable Matters reference, however...

Update: I ordered this dock from Cable Matters on Amazon (reluctantly). It promises “Linux” support and checked all the boxes for me (4x USB-A, audio, network, 2x HDMI).

It kind of works? I tested the USB-A ports, charging, networking, and the HDMI ports, and all worked the first time. But! When I disconnect and reconnect the hub, the HDMI ports stop working. It’s quite infuriating, especially since there’s very little diagnostics available. It’s unclear how the devices show up on my computer; I can’t even tell what device provides the HDMI connectors in lsusb.

I’ve also seen the USB keyboard drop keypresses, which is also ... not fun. I suspect foul play inside Sway.

And yeah, those things are costly! This one goes for 300$ a pop, not great.

Update 2: Cable Matters support responded by simply giving me this hack that solved it at least for now. Just reverse the USB-C cable, and poof, everything works. Magic.

Update 3: turns out that was overly optimistic. It seems the problem actually resides in Sway, because when it happens (and it still does), logging out fixes the issue: GDM3 takes over and reinitializes the monitors properly. Then Sway can do its thing when I log back in again.

Your turn!

So what's your desktop setup like? Do you have docks? a laptop? a desktop? did you build it yourself?

Did you solder a USB-C port in the back of your neck and interface directly with the matrix and there's no spoon?

Do you have a 4k monitor? Two? An 8k monitor that curves around your head in a fully immersive display? Do you work on an Oculus Rift and only interface with the world through 3D virtual reality, including terminal emulators?

Thanks in advance!

23 May, 2023 05:35PM

hackergotchi for Jonathan Dowland

Jonathan Dowland

neovim plugins and distributions

I've been watching the neovim community for a while and what seems like a cambrian explosion of plugins emerging. A few weeks back I decided to spend most of a "day of learning" on investigating some of the plugins and technologies that I'd read about: Language Server Protocol, TreeSitter, neorg (a grandiose organiser plugin), etc.

It didn't go so well. I spent most of my time fighting version incompatibilities or tracing through scant documentation or code to figure out what plugin was incompatible with which other.

There's definitely a line beyond which you're spending too much time playing with your tools instead of creating. On the other hand, there's definitely value in honing your tools and learning about new technologies. Everyone's line is probably in a different place. I've come to the conclusion that I don't have the time or inclination (or both) to approach exploring the neovim universe in this way. There exist a number of plugin "distributions" (such as LunarVim): collections of pre-configured and integrated plugins that you can try out-of-the-box. Next time I think I'll pick one up and give it a try, independently from my existing configuration, and see which ideas from it I might like to adopt.

shared vimrcs

Some folks upload their vim or neovim configurations in their entirety for others to see. I noticed Jess Frazelle had published hers so I took a look. I suppose one could evaluate a bunch of plugins and configuration in isolation using a shared vimrc like this, in the same way as a distribution.

bufferline

Amongst the plugins she uses was bufferline, a plugin to re-work neovim's tab bar to behave like tab bars from more conventional editors [1]. I don't make use of neovim's tabs at all [2], so I would lose nothing by having the (presently hidden) tab bar reworked, and thought I'd give it a go.

I had to disable an existing plugin lightline, which I've had enabled for years but I wasn't sure I was getting much value from. Apparently it also messes with the tab bar! Disabling it, at least for now, at least means I'll find out if I miss it.

I am already using vim-buffergator as a means of seeing and managing open buffers: a hotkey opens a sidebar with a list of open buffers, to switch between or close. Bufferline gives me a more immediate, always-present view of open buffers, which is faintly useful, but not much. Perhaps I'd like it more if I was coming from an editor that had made it more of an expected feature. Two things make it less useful for me: when browsing around vimwiki pages, I quickly open a lot of buffers, and the horizontal line fills up very quickly. Even when I don't, I habitually have quite a lot of buffers open, and the horizontal line is quickly overwhelmed.

I have found myself closing open buffers with the mouse, which I didn't do before.

vert

Since I have brought up a neovim UI feature (tabs) I thought I'd briefly mention my new favourite neovim built-in command: vert.

Quite a few plugins and commands open up a new window (e.g. git-fugitive, Man, etc.) and they typically do so in a horizontal split. I'm increasingly preferring vertical splits. Prefixing any [3] such command with vert forces the split to be vertical instead.
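
For example (using the built-in Man command and help, both mentioned above):

:vert Man 3 printf
:vert help :vertical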


  1. in this case the direct influence was apparently DOOM Emacs
  2. (neo)vim's notion of tabs is completely different to what you might expect from other UI models.
  3. at least, I haven't found one that doesn't work yet

23 May, 2023 11:04AM

hackergotchi for Bits from Debian

Bits from Debian

Proxmox Platinum Sponsor of DebConf23

We are pleased to announce that Proxmox has committed to sponsor DebConf23 as a Platinum Sponsor.

Proxmox develops powerful, yet easy-to-use open-source server software. The product portfolio from Proxmox, including server virtualization, backup, and email security, helps companies of any size, sector, or industry to simplify their IT infrastructures. The Proxmox solutions are based on the great Debian platform, and we are happy that we can give back to the community by sponsoring DebConf23.

With this commitment as Platinum Sponsor, Proxmox is contributing to make possible our annual conference, and directly supporting the progress of Debian and Free Software, helping to strengthen the community that continues to collaborate on Debian projects throughout the rest of the year.

Thank you very much Proxmox, for your support of DebConf23!

Become a sponsor too!

DebConf23 will take place from September 10th to 17th, 2023 in Kochi, India, and will be preceded by DebCamp, from September 3rd to 9th.

And DebConf23 is accepting sponsors! Interested companies and organizations may contact the DebConf team through sponsors@debconf.org, and visit the DebConf23 website at https://debconf23.debconf.org/sponsors/become-a-sponsor/.

23 May, 2023 09:17AM by Sahil Dhiman

Sergio Durigan Junior

Using WireGuard to host services at home

It’s been a while since I had this idea to leverage the power of WireGuard to self-host stuff at home. Even though I pay for a proper server somewhere in the world, there are some services that I don’t consider critical to put there, or that I consider too critical to host outside my home.

It’s only NATural

With today’s ISP packages for end users, I find the amount of trouble they create when you try to host anything at home very annoying. Dynamic IPs, NAT/CGNAT, port blocking and traffic shaping are only a few examples of the methods or limitations that prevent users from making local services reachable in a reliable way from outside.

WireGuard comes to help

If you already pay for a VPS or a dedicated server somewhere, why not use its existing infrastructure (and public availability) in your favour? That’s what I thought when I started this journey.

My initial idea was to use a reverse proxy to redirect external requests to the service running at my home. But how could I make sure that these requests reach my dynamic-IP-behind-a-NAT-behind-another-NAT? Well, let’s create a tunnel! WireGuard is the perfect tool for that because of many things: it’s stateless, very performant, secure, and requires very little configuration.

Setting up on the server

On the server side (i.e., VPS or dedicated server), you will create the first endpoint. Something like the following should do:

[Interface]
PrivateKey = PRIVATE_KEY_HERE
Address = 10.0.0.1/32
ListenPort = 51821

[Peer]
PublicKey = PUBLIC_KEY_HERE
AllowedIPs = 10.0.0.2/32
PersistentKeepalive = 10

A few interesting points to note:

  • The Peer section contains information about the home service that will be configured below.
  • I’m using PersistentKeepalive because I have a dynamic IP at my home. If you have a static IP, you could get rid of PersistentKeepalive and specify an Endpoint here (don’t forget to set a ListenPort below, in the Interface section).
  • Now you have an IP where you can forward requests to. If we’re talking about HTTP traffic, Apache and nginx are absolutely capable of doing it (see the sketch just below). If we’re talking about other kinds of traffic, you might want to look into other utilities, like HAProxy, Traefik and others.
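
As a minimal sketch (not from this particular setup), an nginx virtual host on the server could forward HTTP requests over the tunnel to the home peer; the server_name and the port 8080 are assumptions:

server {
    listen 80;
    server_name service.example.com;  # hypothetical domain

    location / {
        # forward to the home peer over the WireGuard tunnel
        proxy_pass http://10.0.0.2:8080;  # assumed service port at home
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}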

Setting up at your home

At your home, you will configure the peer:

[Interface]
PrivateKey = PRIVATE_KEY_HERE
Address = 10.0.0.2/32

[Peer]
PublicKey = PUBLIC_KEY_HERE
AllowedIPs = 10.0.0.1/32
Endpoint = YOUR_SERVER:51821
PersistentKeepalive = 10
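
Assuming both configurations are saved as /etc/wireguard/wg0.conf on their respective machines, the tunnel can be brought up with the wg-quick helper and made persistent with the systemd unit shipped by wireguard-tools:

wg-quick up wg0
systemctl enable --now wg-quick@wg0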

A few notes about security

I would be remiss if I didn’t say anything about security, especially because we’re talking about hosting services at home. So, here are a few recommendations:

  • Make sure to put your services in a separate local network. Using VLANs is also a good option.
  • Don’t run services on your personal (or work!) computer, even if they’ll be running inside a VM.
  • Run a firewall on the WireGuard interface and make sure that you only allow traffic over the required ports (a minimal sketch follows below).
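
As a hedged illustration of that last point, a minimal nftables snippet on the home peer could look like this, assuming the only service exposed over the tunnel listens on TCP port 8080:

table inet wg_filter {
    chain input {
        type filter hook input priority 0; policy accept;
        # over the tunnel, only the service port is reachable
        iifname "wg0" tcp dport 8080 accept
        iifname "wg0" drop
    }
}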

Have fun!

23 May, 2023 04:56AM

Russ Allbery

Review: A Half-Built Garden

Review: A Half-Built Garden, by Ruthanna Emrys

Publisher: Tordotcom
Copyright: 2022
ISBN: 1-250-21097-6
Format: Kindle
Pages: 340

The climate apocalypse has happened. Humans woke up to the danger, but a little bit too late. Over one billion people died. But the world on the other side of that apocalypse is not entirely grim. The corporations responsible for so much of the damage have been pushed out of society and isolated on their independent "aislands," traded with only grudgingly for the few commodities the rest of the world has not yet learned how to manufacture without them. Traditional governments have largely collapsed, although they cling to increasingly irrelevant trappings of power. In their place arose the watershed networks: a new way of living with both nature and other humans, built around a mix of anarchic consensus and direct democracy, with conservation and stewardship of the natural environment at its core.

Therefore, when the aliens arrive near Bear Island on the Potomac River, they're not detected by powerful telescopes and met by military jets. Instead, their waste sets off water sensors, and they're met by the two women on call for alert duty, carrying a nursing infant and backed by the real-time discussion and consensus technology of the watershed's dandelion network. (Emrys is far from the first person to name something a "dandelion network," so be aware that the usage in this book seems unrelated to the charities or blockchain network.)

This is a first contact novel, but it's one that skips over the typical focus of the subgenre. The alien Ringers are completely fluent in English down to subtle nuance of emotion and connotation (supposedly due to observation of our radio and TV signals), have translation devices, and in some cases can make our speech sounds directly. Despite significantly different body shapes, they are immediately comprehensible; differences are limited mostly to family structure, reproduction, and social norms. This is Star Trek first contact, not the type more typical of written science fiction. That feels unrealistic, but it's also obviously an authorial choice to jump directly to the part of the story that Emrys wants to write.

The Ringers have come to save humanity. In their experience, technological civilization is inherently incompatible with planets. Technology will destroy the planet, and the planet will in turn destroy the species unless they can escape. They have reached other worlds multiple times before, only to discover that they were too late and everyone is already dead. This is the first time they've arrived in time, and they're eager to help humanity off its dying planet to join them in the Dyson sphere of space habitats they are constructing. Planets, to them, are a nest and a launching pad, something to eventually abandon and break down for spare parts.

The small, unexpected wrinkle is that Judy, Carol, and the rest of their watershed network are not interested in leaving Earth. They've finally figured out the most critical pieces of environmental balance. Earth is going to get hotter for a while, but the trend is slowing. What they're doing is working. Humanity would benefit greatly from Ringer technology and the expertise that comes from managing closed habitat ecosystems, but they don't need rescuing.

This goes over about as well as a toddler saying that playing in the road is perfectly safe.

This is a fantastic hook for a science fiction novel. It does exactly what a great science fiction premise should do: takes current concerns (environmentalism, space boosterism, the debatable primacy of humans as a species, the appropriate role of space colonization, the tension between hopefulness and doomcasting about climate change) and uses the freedom of science fiction to twist them around and come at them from an entirely different angle.

The design of the aliens is excellent for this purpose. The Ringers are not one alien species; they are two, evolved on different planets in the same system. The plains dwellers developed space flight first and went to meet the tree dwellers, and while their relationship is not entirely without hierarchy (the plains dwellers clearly lead on most matters), it's extensively symbiotic. They now form mixed families of both species, and have a rich cultural history of stories about first contact, interspecies conflicts and cooperation, and all the perils and misunderstandings that they successfully navigated. It makes their approach to humanity more believable to know that they have done first contact before and are building on a model. Their concern for humanity is credibly sincere. The joining of two species was wildly successful for them and they truly want to add a third.

The politics on the human side are satisfyingly complicated. The watershed network may have made first contact, but the US government (in the form of NASA) is close behind, attempting to lean on its widely ignored formal power. The corporations are farther away and therefore slower to arrive, but the alien visitors have a damaged ship and need space to construct a subspace beacon and Asterion is happy to offer a site on one of its New Zealand islands. The corporate representatives are salivating at the chance to escape Earth and its environmental regulation for uncontrolled space construction and a new market of trillions of Ringers. NASA's attitude is more measured, but their representative is easily persuaded that the true future of humanity is in space. The work the watershed networks are doing is difficult, uncertain, and involves a lot of sacrifice, particularly for corporate consumer lifestyles. With such an attractive alien offer on the table, why stay and work so hard for an uncertain future? Maybe the Ringers are right.

And then the dandelion networks that the watersheds use as the core of their governance and decision-making system all crash.

The setup was great; I was completely invested. The execution was more mixed. There are some things I really liked, some things that I thought were a bit too easy or predictable, and several places where I wish Emrys had dug deeper and provided more detail. I thought the last third of the book fizzled a little, although some of the secondary characters Emrys introduces are delightful and carry the momentum of the story when the politics feel a bit lacking.

If you tried to form a mental image of ecofeminist political science fiction with 1970s utopian sensibilities, but updated for the concerns of the 2020s, you would probably come very close to the politics of the watershed networks. There are considerably more breastfeedings and diaper changes than the average SF novel. Two of the primary characters are transgender, but with very different experiences with transition. Pronoun pins are an ubiquitous article of clothing. One of the characters has a prosthetic limb. Another character who becomes important later in the story codes as autistic. None of this felt gratuitous; the characters do come across as obsessed with gender, but in a way that I found believable. The human diversity is well-integrated with the story, shapes the characters, creates practical challenges, and has subtle (and sometimes not so subtle) political ramifications.

But, and I say this with love because while these are not quite my people they're closely adjacent to my people, the social politics of this book are a very specific type of white feminist collaborative utopianism. When religion makes an appearance, I was completely unsurprised to find that several of the characters are Jewish. Race never makes a significant appearance at all. It's the sort of book where the throw-away references to other important watershed networks includes African ones, and the characters would doubtless try to be sensitive to racial issues if they came up, but somehow they never do. (If you're wondering if there's polyamory in this book, yes, yes there is, and also I suspect you know exactly what culture I'm talking about.)

This is not intended as a criticism, just more of a calibration. All science fiction publishing houses could focus only on this specific political perspective for a year and the results would still be dwarfed by the towering accumulated pile of thoughtless paeans to capitalism. Ecofeminism has a long history in the genre but still doesn't show up in that many books, and we're far from exhausting the space of possibilities for what a consensus-based politics could look like with extensive computer support. But this book has a highly specific point of view, enough so that there won't be many thought-provoking surprises if you're already familiar with this school of political thought.

The politics are also very earnest in a way that I admit provoked a bit of eyerolling. Emrys pushes all of the political conflict into the contrasts between the human factions, but I would have liked more internal disagreement within the watershed networks over principles rather than tactics. The degree of ideological agreement within the watershed group felt a bit unrealistic. But, that said, at least politics truly matters and the characters wrestle directly with some tricky questions. I would have liked to see more specifics about the dandelion network and the exact mechanics of the consensus decision process, since that sort of thing is my jam, but we at least get more details than are typical in science fiction. I'll take this over cynical libertarianism any day.

Gender plays a huge role in this story, enough so that you should avoid this book if you're not interested in exploring gender conceptions. One of the two alien races is matriarchal and places immense social value on motherhood, and it's culturally expected to bring your children with you for any important negotiation. The watersheds actively embrace this, or at worst find it comfortable to use for their advantage, despite a few hints that the matriarchy of the plains aliens may have a very serious long-term demographic problem. In an interesting twist, it's the mostly-evil corporations that truly challenge gender roles, albeit by turning it into an opportunity to sell more clothing.

The Asterion corporate representatives are, as expected, mostly the villains of the plot: flashy, hierarchical, consumerist, greedy, and exploitative. But gender among the corporations is purely a matter of public performance, one of a set of roles that you can put on and off as you choose and signal with clothing. They mostly use neopronouns, change pronouns as frequently as their clothing, and treat any question of body plumbing as intensely private. By comparison, the very 2020 attitudes of the watersheds towards gender felt oddly conservative and essentialist, and the main characters get flustered and annoyed by the ever-fluid corporate gender presentation. I wish Emrys had done more with this.

As you can tell, I have a lot of thoughts and a lot of quibbles. Another example: computer security plays an important role in the plot and was sufficiently well-described that I have serious questions about the system architecture and security model of the dandelion networks. But, as with decision-making and gender, the more important takeaway is that Emrys takes enough risks and describes enough interesting ideas that there's a lot of meat here to argue with. That, more than getting everything right, is what a good science fiction novel should do.

A Half-Built Garden is written from a very specific political stance that may make it a bit predictable or off-putting, and I thought the tail end of the book had some plot and resolution problems, but arguing with it was one of the more intellectually satisfying science fiction reading experiences I've had recently. You have to be in the right mood, but recommended for when you are.

Rating: 7 out of 10

23 May, 2023 02:46AM

May 22, 2023

hackergotchi for Adnan Hodzic

Adnan Hodzic

rpi-microk8s-bootstrap: Automate RPI device conversion into Kubernetes cluster nodes with Terraform

Considering I’ve created my own private cloud in my home as part of: wp-k8s: WordPress on privately hosted Kubernetes cluster (Raspberry Pi 4 + Synology)....

The post rpi-microk8s-bootstrap: Automate RPI device conversion into Kubernetes cluster nodes with Terraform appeared first on FoolControl: Phear the penguin.

22 May, 2023 10:44AM by Adnan Hodzic

Russ Allbery

Review: Tsalmoth

Review: Tsalmoth, by Steven Brust

Series: Vlad Taltos #16
Publisher: Tor
Copyright: 2023
ISBN: 1-4668-8970-5
Format: Kindle
Pages: 277

Tsalmoth is the sixteenth book in the Vlad Taltos series and (some fans of the series groan) yet another flashback novel to earlier in Vlad's life. It takes place between Yendi and the interludes in Dragon (or, perhaps more straightforwardly, between Yendi and Jhereg). Most of the books of this series stand alone to some extent, so you could read this book out of order and probably not be horribly confused, but I suspect it would also feel weirdly pointless outside of the context of the larger series.

We're back to Vlad running a fairly small operation as a Jhereg, who are the Dragaeran version of organized crime. A Tsalmoth who owes Vlad eight hundred imperials has rudely gotten himself murdered, thoroughly enough that he can't be revived. That's a considerable amount of money, and Vlad would like it back, so he starts poking around. As you might expect if you've read any other book in this series, things then get a bit complicated. This time, they involve Jhereg politics, Tsalmoth house politics, and necromancy (which in this universe is more about dimensional travel than it is about resurrecting the dead).

The main story is... fine. Kragar is around being unnoticeable as always, Vlad is being cocky and stubborn and bantering with everyone, and what appears to be a straightforward illegal business relationship turns out to involve Dragaeran magic and thus Vlad's highly-placed friends. As usual, they're intellectually curious about the magic and largely ambivalent to the rest of Vlad's endeavors. The most enjoyable part of the story is Vlad's insistence on getting his money back while everyone else in the story cannot believe he would be this persistent over eight hundred imperials and is certain he has some other motive. It's otherwise a fairly forgettable little adventure.

The implications for the broader series, though, are significant, although essentially none of the payoff is here. Brust has been keeping a major secret about Vlad that's finally revealed here, one that has little impact on the plot of this book (although it causes Vlad a lot of angst) but which I suspect will become very important later in the series. That was intriguing but rather unsatisfying, since it stays only a future hook with an attached justification for why we're only finding out about it now.

If one has read the rest of the series, it's also nice to see Vlad and Cawti working together, bantering with each other and playing off of each other's strengths. It's reminiscent of the best parts of Yendi. As with many of the books of this series, the chapter introductions tell a parallel story; this time, it's Vlad and Cawti's wedding.

I think previous books already mentioned that Vlad is narrating this series into some sort of recording device, and a bit about why he's doing that, but this is made quite explicit here. We get as much of the surrounding frame as we've ever seen before. There are no obvious plot consequences from this — it's still all hints and guesswork — but I suspect this will also become important by the end of the series.

If you've read this much of the series, you'll obviously want to read this one as well, but unfortunately don't get your hopes up for significant plot advancement. This is another station-keeping book, which is a bit of a disappointment. We haven't gotten major plot advancement since Hawk in 2014, and I'm getting impatient. Thankfully, Lyorn has a release date already (April 9, 2024), and assuming all goes according to the grand plan, there are only two books left after Lyorn (Chreotha and The Last Contract). I'm getting hopeful that we're going to get to see the entire series.

Meanwhile, I am very tempted to do a complete re-read of the series to date, probably in series chronological order rather than in publication order (as much as that's possible given the fractured timelines of Dragon and Tiassa) so that I can see how the pieces fit together. The constant jumping back and forth and allusions to events that have already happened but that we haven't seen yet is hard to keep track of. I'm very glad the Lyorn Records exists.

Followed by Lyorn.

Rating: 7 out of 10

22 May, 2023 02:39AM

May 21, 2023

hackergotchi for Bits from Debian

Bits from Debian

Infomaniak First Platinum Sponsor of DebConf23

We are pleased to announce that Infomaniak has committed to sponsor DebConf23 as a Platinum Sponsor.

Infomaniak is a key player in the European Cloud and the leading developer of Web technologies in Switzerland. It aims to be an independent European alternative to the web giants and is committed to an ethical and sustainable Web that respects privacy and creates local jobs. Infomaniak develops cloud solutions (IaaS, PaaS, VPS), productivity tools for online collaboration and video and radio streaming services.

The company uses only renewable electricity, offsets 200% of its CO2 emissions and extends the life of its servers up to 15 years. The company cools its infrastructure with filtered air, without air conditioning, and is building a new data centre that will fully recycle the energy it consumes to heat up to 6,000 homes.

With this commitment as Platinum Sponsor, Infomaniak is contributing to make possible our annual conference, and directly supporting the progress of Debian and Free Software, helping to strengthen the community that continues to collaborate on Debian projects throughout the rest of the year.

Thank you very much Infomaniak, for your support of DebConf23!

Become a sponsor too!

DebConf23 will take place from September 10th to 17th, 2023 in Kochi, India, and will be preceded by DebCamp, from September 3rd to 9th.

And DebConf23 is accepting sponsors! Interested companies and organizations may contact the DebConf team through sponsors@debconf.org, and visit the DebConf23 website at https://debconf23.debconf.org/sponsors/become-a-sponsor/.

21 May, 2023 12:08PM by Sahil Dhiman

May 18, 2023

Antoine Beaupré

A terrible Pixel Tablet

In a strange twist of history, Google finally woke and thought "I know what we need to do! We need to make a TABLET!".

So some time soon in 2023, Google will release "The tablet that only Google could make", the Pixel Tablet.

Having owned a Samsung Galaxy Tab S5e for a few years, I was very curious to see how this would pan out and especially whether it would be easier to flash than the Samsung. As an aside, I figured I would give that a shot, and within a few days managed to completely brick the device. Awesome. See gts4lvwifi for the painful details of that.

In any case, Google made a tablet. I own a Pixel phone and I'm moderately happy with it. It's easy to flash with CalyxOS, maybe this is the promise land of tablets?

Compared with the Samsung

But it turns out that the Pixel Tablet pales in comparison with the Samsung tablet, produced 4 years ago, in 2019:

  • it's thicker (8.1mm vs 5.5mm)
  • it's heavier (493g vs 400g)
  • it's not AMOLED (IPS LCD)
  • it doesn't have an SD card reader
  • its camera is worse (8MP vs 13MP, 1080p video instead of 4k)
  • it's more expensive (670EUR vs 410EUR)

What the Pixel tablet has going for it:

  • a slightly more powerful CPU
  • a stylus
  • more storage (128GB or 256GB vs 64GB or 128GB)
  • more RAM (8GB vs 4GB or 6GB)
  • Wifi 6

I guess I should probably wait for the actual device to come out to see reviews and how it stacks up, but so far it's kind of impressive how underwhelming this is.

Also note that we're comparing against a very old Samsung tablet here, a fairer comparison might be against the Samsung Galaxy Tab S8. There the sizes are comparable, and the Samsung is more expensive than the Pixel, but then the Pixel has absolutely zero advantages and all the other disadvantages.

The Dock

The "Dock" is also worth a little aside.

See, the tablet comes with a dock that doubles as a speaker.

You can't buy the tablet without the dock. You have to have a dock.

I shit you not, actual quote: "Can I purchase a Pixel Tablet without the Charging Speaker Dock? No, you can only purchase the Pixel Tablet with the Charging Speaker Dock."

In case you really, really like the dock, "You may purchase additional Charging Speaker Docks separately (coming soon)." And no, they can't all play together, only the dock the tablet is docked into will play audio.

The dock is not a Bluetooth speaker, it can only play audio from that one tablet that Google made, this one time.

It's also not a battery pack. It's just a charger with speakers in it.

Promising e-waste.

Again, I hope I'm wrong and that this is going to be a fine tablet. But so far, it looks like it doesn't even come close to whatever Samsung threw over the fence before the apocalypse (remember 2019? were we even born yet?).

"The tablet that only Google could make." Amazing. Hopefully no one else gets any bright ideas like this.

18 May, 2023 04:05PM

May 17, 2023

Jamie McClelland

Cranky old timers should know perl

I act like an old timer (I’ve been around linux for 25 years and I’m cranky about new tech that is not easily maintained and upgraded) yet somehow I don’t know perl. How did that happen?

I discovered this state when I decided to move from the heroically packaged yet seemingly upstream un-maintained opendmarc package to authentication_milter.

It’s written in perl. And, alas, not in debian.

How hard could this be?

The instructions for installing seemed pretty straight forward: cpanm Mail::Milter::Authentication.

Wah. I’m glad I tried this out on a test virtual machine. It took forever! It ran tests! It compiled things! And, it installed a bunch of perl modules already packaged in Debian.

I don’t think I want to add this command to my ansible playbook.

Next I spent an inordinate amount of time trying to figure out how to list the dependencies of a given CPAN module. I was looking for something like cpanm --list-dependencies Mail::Milter::Authentication, but eventually ended up writing a perl script that output perl code: a “use ” before each dependency and a semicolon and line break after it. Then I could execute that script on a clean Debian installation and see which perl modules I needed. For each error, I checked for the module in Debian (and installed it) or kept a list of modules I would have to build (and commented out the line).

Once I had a list of modules to build, I used the handy cpan2deb command. It took some creative ordering but eventually I got it right. Since I will surely forget how to do this when it’s time to upgrade, I wrote a script.
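
In case it helps anyone reproduce (or improve on) the approach, here is a rough sketch of the two pieces described above; deps.txt and the placeholder module names are made up, not the actual list I ended up with:

# 1. Turn a plain list of module names (one per line in deps.txt) into a
#    probe script and run it on a clean Debian system. perl stops at the
#    first missing module, so repeat: install the Debian package if one
#    exists, otherwise note the module for building and comment out its line.
sed -e 's/^/use /' -e 's/$/;/' deps.txt > probe.pl
perl probe.pl 2>&1 | grep "Can't locate"

# 2. Build the modules that are not in Debian with cpan2deb, in whatever
#    order satisfies their inter-dependencies (placeholder names again).
cpan2deb First::Prereq
cpan2deb Second::Prereq
cpan2deb Mail::Milter::Authentication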

In total it took me several days to figure this all out, so I once again find myself very appreciative of all the debian packagers out there - particularly the perl ones!!

And also… if I did this all wrong and there is an easier way I would love to hear about it in the comments.

17 May, 2023 12:27PM

May 15, 2023

Sven Hoexter

GCP: Private Service Connect Forwarding Rules can not be Updated

PSA for those foolish enough to use Google Cloud and try to use private service connect: If you want to change the serviceAttachment your private service connect forwarding rule points at, you must delete the forwarding rule and create a new one. Updates are not supported. I've done that in the past via terraform, but lately encountered strange errors like this:

Error updating ForwardingRule: googleapi: Error 400: Invalid value for field 'target.target':
'<https://www.googleapis.com/compute/v1/projects/mydumbproject/regions/europe-west1/serviceAttachments/
k8s1-sa-xyz-abc>'. Unexpected resource collection 'serviceAttachments'., invalid

Worked around that with the help of terraform_data and lifecycle:

resource "terraform_data" "replacement" {
    input = var.gcp_psc_data["target"]
}

resource "google_compute_forwarding_rule" "this" {
    count   = length(var.gcp_psc_data["target"]) > 0 ? 1 : 0
    name    = "${var.gcp_psc_name}-psc"
    region  = var.gcp_region
    project = var.gcp_project

    target                = var.gcp_psc_data["target"]
    load_balancing_scheme = "" # need to override EXTERNAL default when target is a service attachment
    network               = var.gcp_network
    ip_address            = google_compute_address.this.id

    lifecycle {
        replace_triggered_by = [
            terraform_data.replacement
        ]
    }
}

See also terraform data for replace_triggered_by.

15 May, 2023 07:54AM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppSimdJson 0.1.10 on CRAN: New Upstream

We are happy to share that the RcppSimdJson package has been updated to release 0.1.10.

RcppSimdJson wraps the fantastic and genuinely impressive simdjson library by Daniel Lemire and collaborators. Via very clever algorithmic engineering to obtain largely branch-free code, coupled with modern C++ and newer compiler instructions, it parses gigabytes of JSON per second, which is quite mindboggling. The best-case performance is ‘faster than CPU speed’ as use of parallel SIMD instructions and careful branch avoidance can lead to less than one CPU cycle per byte parsed; see the video of the talk by Daniel Lemire at QCon.

This release updates the underlying simdjson library to version 3.1.8 (also made today). Otherwise we only made a minor edit to the README and adjusted one tweak for code coverage.

The (very short) NEWS entry for this release follows.

Changes in version 0.1.10 (2023-05-14)

  • simdjson was upgraded to version 3.1.8 (Dirk in #85).

Courtesy of my CRANberries, there is also a diffstat report for this release. For questions, suggestions, or issues please use the issue tracker at the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

15 May, 2023 12:41AM

May 14, 2023

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

Joining files with FFmpeg

Joining video files (back-to-back) losslessly with FFmpeg is a surprisingly cumbersome operation. You can't just, like, write all the inputs on the command line or something; you need to use a special demuxer and then write all the names in a text file and override the security for that file, which is pretty crazy.

But there's one issue I had that I crashed into and which random searching around didn't help for, namely this happening sometimes on switching files (and the resulting files just having no video in that area):

[mp4 @ 0x55d4d2ed9b40] Non-monotonous DTS in output stream 0:0; previous: 162290238, current: 86263699; changing to 162290239. This may result in incorrect timestamps in the output file.
[mp4 @ 0x55d4d2ed9b40] Non-monotonous DTS in output stream 0:0; previous: 162290239, current: 86264723; changing to 162290240. This may result in incorrect timestamps in the output file.
[mp4 @ 0x55d4d2ed9b40] Non-monotonous DTS in output stream 0:0; previous: 162290240, current: 86265747; changing to 162290241. This may result in incorrect timestamps in the output file.

There are lots of hits about this online, most of them around different codecs and such, but the problem was surprisingly mundane: Some of the segments had video in stream 0 and audio in stream 1, and some the other way round, and the concat demuxer doesn't account for this.

Simplest workaround: just remux the files first. FFmpeg will put the streams in a consistent order. (Inspired by a Stack Overflow answer that suggested remuxing to MPEG-TS in order to use the concat protocol instead of the concat demuxer.)
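
In concrete terms, the workaround looks roughly like this (file names are made up, and the remux is a pure stream copy, so it is fast and lossless):

# Remux each segment so the audio and video streams end up in a consistent order.
for f in part1.mp4 part2.mp4 part3.mp4; do
    ffmpeg -i "$f" -c copy "fixed-$f"
done

# Then join the remuxed files with the concat demuxer as usual.
printf "file 'fixed-%s'\n" part1.mp4 part2.mp4 part3.mp4 > list.txt
ffmpeg -f concat -safe 0 -i list.txt -c copy joined.mp4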

14 May, 2023 09:54PM

hackergotchi for C.J. Adams-Collier

C.J. Adams-Collier

Early Access: Inserting JSON data to BigQuery from Spark on Dataproc

Hello folks!

We recently received a case letting us know that Dataproc 2.1.1 was unable to write to a BigQuery table with a column of type JSON. Although the BigQuery connector for Spark has had support for JSON columns since 0.28.0, the Dataproc images on the 2.1 line still cannot create tables with JSON columns or write to existing tables with JSON columns.

The customer has graciously granted permission to share the code we developed to allow this operation. So if you are interested in working with JSON column tables on Dataproc 2.1 please continue reading!

Use the following gcloud command to create your single-node dataproc cluster:

IMAGE_VERSION=2.1.1-debian11
REGION=us-west1
ZONE=${REGION}-a
CLUSTER_NAME=pick-a-cluster-name
gcloud dataproc clusters create ${CLUSTER_NAME} \
    --region ${REGION} \
    --zone ${ZONE} \
    --single-node \
    --master-machine-type n1-standard-4 \
    --master-boot-disk-type pd-ssd \
    --master-boot-disk-size 50 \
    --image-version ${IMAGE_VERSION} \
    --max-idle=90m \
    --enable-component-gateway \
    --scopes 'https://www.googleapis.com/auth/cloud-platform'

The following file is the Scala code used to write JSON structured data to a BigQuery table using Spark. The file following this one can be executed from your single-node Dataproc cluster.

Main.scala

import org.apache.spark.sql.functions.col
import org.apache.spark.sql.types.{Metadata, StringType, StructField, StructType}
import org.apache.spark.sql.{Row, SaveMode, SparkSession}
import org.apache.spark.sql.avro
import org.apache.avro.specific

  val env = "x"
  val my_bucket = "cjac-docker-on-yarn"
  val my_table = "dataset.testavro2"
    val spark = env match {
      case "local" =>
        SparkSession
          .builder()
          .config("temporaryGcsBucket", my_bucket)
          .master("local")
          .appName("isssue_115574")
          .getOrCreate()
      case _ =>
        SparkSession
          .builder()
          .config("temporaryGcsBucket", my_bucket)
          .appName("isssue_115574")
          .getOrCreate()
    }

  // create DF with some data
  val someData = Seq(
    Row("""{"name":"name1", "age": 10 }""", "id1"),
    Row("""{"name":"name2", "age": 20 }""", "id2")
  )
  val schema = StructType(
    Seq(
      StructField("user_age", StringType, true),
      StructField("id", StringType, true)
    )
  )

  val avroFileName = s"gs://${my_bucket}/issue_115574/someData.avro"
  
  val someDF = spark.createDataFrame(spark.sparkContext.parallelize(someData), schema)
  someDF.write.format("avro").mode("overwrite").save(avroFileName)

  val avroDF = spark.read.format("avro").load(avroFileName)
  // set metadata
  val dfJSON = avroDF
    .withColumn("user_age_no_metadata", col("user_age"))
    .withMetadata("user_age", Metadata.fromJson("""{"sqlType":"JSON"}"""))

  dfJSON.show()
  dfJSON.printSchema

  // write to BigQuery
  dfJSON.write.format("bigquery")
    .mode(SaveMode.Overwrite)
    .option("writeMethod", "indirect")
    .option("intermediateFormat", "avro")
    .option("useAvroLogicalTypes", "true")
    .option("table", my_table)
    .save()


repro.sh:

#!/bin/bash

PROJECT_ID=set-yours-here
DATASET_NAME=dataset
TABLE_NAME=testavro2

# We have to remove all of the existing spark bigquery jars from the local
# filesystem, as we will be using the symbols from the
# spark-3.3-bigquery-0.30.0.jar below.  Having existing jar files on the
# local filesystem will result in those symbols having higher precedence
# than the one loaded with the spark-shell.
sudo find /usr -name 'spark*bigquery*jar' -delete

# Remove the table from the bigquery dataset if it exists
bq rm -f -t $PROJECT_ID:$DATASET_NAME.$TABLE_NAME

# Create the table with a JSON type column
bq mk --table $PROJECT_ID:$DATASET_NAME.$TABLE_NAME \
  user_age:JSON,id:STRING,user_age_no_metadata:STRING

# Load the example Main.scala 
spark-shell -i Main.scala \
  --jars /usr/lib/spark/external/spark-avro.jar,gs://spark-lib/bigquery/spark-3.3-bigquery-0.30.0.jar

# Show the table schema when we use `bq mk --table` and then load the avro
bq query --use_legacy_sql=false \
  "SELECT ddl FROM $DATASET_NAME.INFORMATION_SCHEMA.TABLES where table_name='$TABLE_NAME'"

# Remove the table so that we can see that the table is created should it not exist
bq rm -f -t $PROJECT_ID:$DATASET_NAME.$TABLE_NAME

# Dynamically generate a DataFrame, store it to avro, load that avro,
# and write the avro to BigQuery, creating the table if it does not already exist

spark-shell -i Main.scala \
  --jars /usr/lib/spark/external/spark-avro.jar,gs://spark-lib/bigquery/spark-3.3-bigquery-0.30.0.jar

# Show that the table schema does not differ from one created with a bq mk --table
bq query --use_legacy_sql=false \
  "SELECT ddl FROM $DATASET_NAME.INFORMATION_SCHEMA.TABLES where table_name='$TABLE_NAME'"

Google BigQuery has supported JSON data since October of 2022, but until now, it has not been possible, on generally available Dataproc clusters, to interact with these columns using the Spark BigQuery Connector.

JSON column type support was introduced in spark-bigquery-connector release 0.28.0.

14 May, 2023 03:52AM by C.J. Collier

May 13, 2023

Sergio Durigan Junior

Ubuntu debuginfod and source code indexing

You might remember that in my last post about the Ubuntu debuginfod service I talked about wanting to extend it and make it index and serve source code from packages. I’m excited to announce that this is now a reality since the Ubuntu Lunar (23.04) release.

The feature should work for a lot of packages from the archive, but not all of them. Keep reading to better understand why.

The problem

While debugging a package in Ubuntu, one of the first steps you need to take is to install its source code. There are some problems with this:

  • apt-get source requires dpkg-dev to be installed, which ends up pulling in a lot of other dependencies.
  • GDB needs to be taught how to find the source code for the package being debugged. This can usually be done by using the dir command, but finding the proper path to use is usually not trivial, and you find yourself having to use more “complex” commands like set substitute-path, for example (a rough sketch of this dance follows below).
  • You have to make sure that the version of the source package is the same as the version of the binary package(s) you want to debug.
  • If you want to debug the libraries that the package links against, you will face the same problems described above for each library.

So yeah, not a trivial/pleasant task after all.
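
To make that concrete, the pre-debuginfod routine looks roughly like this; the package, version and paths below are purely illustrative:

# Fetch the matching source and point GDB at it by hand.
apt-get source coreutils
gdb /usr/bin/ls
# (gdb) directory ~/coreutils-9.1/src
# (gdb) set substitute-path /build/coreutils-AbC123 ~/coreutils-9.1

And then the same dance again for every library you want to step into.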

The solution…

Debuginfod can index source code as well as debug symbols. It is smart enough to keep a relationship between the source package and the corresponding binary’s Build-ID, which is what GDB will use when making a request for a specific source file. This means that, just like what happens for debug symbol files, the user does not need to keep track of the source package version.

While indexing source code, debuginfod will also maintain a record of the relative pathname of each source file. No more fiddling with paths inside the debugger to get things working properly.

Last, but not least, if there’s a need for a library source file and if it’s indexed by debuginfod, then it will get downloaded automatically as well.
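
In practice, the client side now boils down to something like the following sketch. The URL is the one for the Ubuntu debuginfod service; the binary name is just an example, and on recent releases the environment variable may already be set up for you:

export DEBUGINFOD_URLS="https://debuginfod.ubuntu.com"
gdb /usr/bin/inotifywait
# (gdb) start
# (gdb) list
# Debug symbols and the matching source files are fetched on demand;
# no apt-get source, no dir, no set substitute-path.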

… but not a perfect one

In order to make debuginfod happy when indexing source files, I had to patch dpkg and make it always use -fdebug-prefix-map when compiling stuff. This GCC option is used to remap pathnames inside the DWARF, which is needed because in Debian/Ubuntu we build our packages inside chroots and the build directories end up containing a bunch of random cruft (like /build/ayusd-ASDSEA/something/here). So we need to make sure the path prefix (the /build/ayusd-ASDSEA part) is uniform across all packages, and that’s where -fdebug-prefix-map helps.
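
As a small illustration of what the flag does (all paths here are made up):

# The object is compiled under a throwaway chroot path, but the DWARF
# records the stable /usr/src/... prefix instead, so every package ends
# up with the same kind of path no matter which build directory was used.
gcc -g -fdebug-prefix-map=/build/pkg-AbC123/pkg-1.0=/usr/src/pkg-1.0 \
    -c hello.c -o hello.o

# The remapped compilation directory can be checked in the DWARF:
readelf --debug-dump=info hello.o | grep comp_dir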

This means that the package must honour dpkg-buildflags during its build process, otherwise the magic flag won’t be passed and your DWARF will end up with bogus paths. This should not be a big problem, because most of our packages do honour dpkg-buildflags, and those who don’t should be fixed anyway.

… especially if you’re using LTO

Ubuntu enables LTO by default, and unfortunately we are affected by an annoying (and complex) bug that results in those bogus pathnames not being properly remapped. The bug doesn’t affect all packages, but if you see GDB having trouble finding a source file whose full path starts without /usr/src/..., that is a good indication that you’re being affected by this bug. Hopefully we should see some progress in the following weeks.

Your feedback is important to us

If you have any comments, or if you found something strange that looks like a bug in the service, please reach out. You can either send an email to my public inbox (see below) or file a bug against the ubuntu-debuginfod project on Launchpad.

13 May, 2023 08:43PM

May 12, 2023

hackergotchi for Holger Levsen

Holger Levsen

20230512-Debian-Reunion-Hamburg-2023

Small reminder for the Debian Reunion Hamburg 2023 from May 23 to 30

As in previous years there will be a rather small Debian Reunion Hamburg 2023 event taking place from May 23rd until the 30th (with the 29th being a public holiday in Germany and elsewhere).

We'll have days of hacking (inside and outside), a day trip and a small cheese & wine party, as well as daily standup meetings to learn what others are doing, and there shall also be talks and workshops. At the moment there are even still some beds on site available and the CfP is still open!

For more information on all of this: please check the above wiki page!

May the force be with you.

12 May, 2023 02:28PM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

crc32c 0.0.2 on CRAN: Build Fixes

A first follow-up to the initial announcement just days ago of the new crc32c package. The package offers cyclical checksum with parity in hardware-accelerated form on (recent enough) intel cpus as well as on arm64.

This follow-up was needed because I missed, when switching to a default static library build, that newest compilers would complain if -fPIC was not set. gcc-12 on my box was happy, gcc-13 on recent Fedora as used at CRAN was not. A second error was assuming that saying SystemRequirements: cmake would suffice. But hold on whippersnapper: macOS always has a surprise for you! As described at the end of the appropriate section in Writing R Extensions, on that OS you have to go to the basement, open four cupboards, rearrange three shelves and then you get to use it. And then in doing so (from an added configure script) I failed to realize Windows needed a fallback. Gee.
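
For the record, the kind of cmake invocation involved looks roughly like this. It is a sketch of the idea rather than the package's actual configure code, and the upstream option names for skipping tests and benchmarks are assumptions on my part:

cmake -S src/crc32c -B build \
      -DBUILD_SHARED_LIBS=OFF \
      -DCMAKE_POSITION_INDEPENDENT_CODE=ON \
      -DCRC32C_BUILD_TESTS=OFF \
      -DCRC32C_BUILD_BENCHMARKS=OFF
cmake --build build
# yields a static libcrc32c.a built with position-independent code,
# ready to be folded into the R package's shared library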

The NEWS entry for this (as well as the initial release) follows.

Changes in version 0.0.2 (2023-05-11)

  • Explicitly set cmake property for position-independent code

  • Help macOS find its cmake binary as detailed also in WRE

  • Help Windows with a non-conditional Makevars.win pointing at cmake

  • Add more badges to README.md

Changes in version 0.0.1 (2023-05-07)

  • Initial release version and CRAN upload

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

12 May, 2023 12:37AM

May 11, 2023

Simon Josefsson

Streamlined NTRU Prime sntrup761 goes to IETF

The OpenSSH project added support for a hybrid Streamlined NTRU Prime post-quantum key encapsulation method sntrup761 to strengthen their X25519-based default in their version 8.5 released on 2021-03-03. While there has been a lot of talk about post-quantum crypto generally, my impression has been that there has been a slowdown in implementing and deploying it in the past two years. Why is that? Regardless of the answer, we can try to collaboratively change things, and one effort that appears strangely missing is IETF documents for these algorithms.

Building on some earlier work that added X25519/X448 to SSH, writing a similar document was relatively straight-forward once I had spent a day reading OpenSSH and TinySSH source code to understand how it worked. While I am not perfectly happy with how the final key is derived from the sntrup761/X25519 secrets – it is a SHA512 call on the concatenated secrets – I think the construct deserves to be better documented, to pave the road for increased confidence or better designs. Also, reusing the RFC5656§4 structs makes for a worse specification (one unnecessary normative reference), but probably a simpler implementation. I have published draft-josefsson-ntruprime-ssh-00 here. Credit here goes to Jan Mojžíš of TinySSH that designed the earlier sntrup4591761x25519-sha512@tinyssh.org in 2018, Markus Friedl who added it to OpenSSH in 2019, and Damien Miller that changed it to sntrup761 in 2020. Does anyone have more to add to the history of this work?

Once I had sharpened my xml2rfc skills, preparing a document describing the hybrid construct between the sntrup761 key-encapsulation mechanism and the X25519 key agreement method in a non-SSH fashion was easy. I do not know if this work is useful, but it may serve as a reference for further study. I published draft-josefsson-ntruprime-hybrid-00 here.

Finally, how about a IETF document on the base Streamlined NTRU Prime? Explaining all the details, and especially the math behind it would be a significant effort. I started doing that, but realized it is a subjective call when to stop explaining things. If we can’t assume that the reader knows about lattice math, is a document like this the best place to teach it? I settled for the most minimal approach instead, merely giving an introduction to the algorithm, included SageMath and C reference implementations together with test vectors. The IETF audience rarely understands math, so I think it is better to focus on the bits on the wire and the algorithm interfaces. Everything here was created by the Streamlined NTRU Prime team, I merely modified it a bit hoping I didn’t break too much. I have now published draft-josefsson-ntruprime-streamlined-00 here.

I maintain the IETF documents on my ietf-ntruprime GitLab page, feel free to open merge requests or raise issues to help improve them.

To have confidence that the code was working properly, I ended up preparing a branch with sntrup761 for the GNU-project Nettle and have submitted it upstream for review. I had the misfortune of having to understand and implement NIST’s DRBG-CTR to compute the sntrup761 known-answer tests, and what a mess it is. Why does a deterministic random generator support re-seeding? Why does it support non-full entropy derivation? What’s with the key size vs block size confusion? What’s with the optional parameters? What’s with having multiple algorithm descriptions? Luckily I was able to extract a minimal but working implementation that is easy to read. I can’t locate DRBG-CTR test vectors, anyone? Does anyone have sntrup761 test vectors that don’t use DRBG-CTR? One final reflection on publishing known-answer tests for an algorithm that uses random data: are the test vectors stable over different ways to implement the algorithm? Just consider: if some optimization moved one randomness-extraction call before another, wouldn’t the output be different? Are there other ways to verify correctness of implementations?

As always, happy hacking!

11 May, 2023 10:03PM by simon

hackergotchi for Shirish Agarwal

Shirish Agarwal

India Press freedom, Profiteering, AMD issues in the wild.

India Press Freedom

Just about a week back, India again slipped in the Freedom index, this time falling to 161 out of 180 countries. The RW again made a lot of noise as they cannot fathom why this keeps happening. A recent news story gives some idea. Every year the NCRB (National Crime Records Bureau) puts out its statistics of crimes happening across the country. The report is in the public domain. Now, according to the report, around 40k women from Gujarat alone disappeared in the last five years. This is a state where the BJP has been ruling for the last 30-odd years. When this report went viral, the news was censored/blacked out in almost all national newspapers. For example, check out newindianexpress.com, and likewise TOI and other newspapers: the news has been 404’d. The only place you can get that news is in minority papers like Siasat. But the story didn’t end there. While the NCW (National Commission for Women) pointed out similar things happening in J&K, the Gujarat Police claimed they got almost 39k women back. Now ideally, that should have gone into the NCRB data as an addendum, as the report can be challenged. But as this news went viral, nobody knows what is true or false in the above. What the BJP has been doing, whenever they get questioned, is to muddy the waters like that. And most of the time such news doesn’t make it to court, so the party gets a freebie of sorts as they are not legally challenged. Even if somebody asks why the Gujarat Police didn’t do so (the NCRB report is compiled jointly with all the states, and the BJP is in power both at the Centre and in the state), they cannot give any excuse. The only excuse you see or hear is whataboutism, unfortunately 😦

Profiteering on I.T. Hardware

I was chatting with a friend yesterday who is an enthusiast like me but has been more alert about what has been happening in the CPU, motherboard and RAM world. I was simply shocked to hear the prices of motherboards that are three years old, even middling ones. For example, the last time I bought a mobo I spent about 6k, but that was for an ATX motherboard. Most ITX motherboards usually sold for around INR 4k/- or even lower. I remember Via especially, as their mobos were even cheaper, around INR 1.5-2k/-. Even before the pandemic, many motherboard manufacturers had closed down shop, leaving only a few in the market. As only a few remained, prices started going higher. The pandemic turned it into a seller’s market overnight, as most people were stuck at home and needed good rigs for work, leisure or both. The manufacturers of CPUs, motherboards, GPUs and power supplies (SMPS) named their prices and people paid them. So in 2023, high prices remained while warranty periods started coming down. Governments also upped customs and various other duties. So they are all hand in glove in this situation. So as shared before, what I have been offered is a four-year-old motherboard with a CPU of that era. I haven’t bought it, nor do I intend to in the short term, but I am extremely disappointed with the state of affairs 😦

AMD Issues

It’s just been a couple of hard weeks for AMD, apparently. The first issue has been the TPM (Trusted Platform Module) flaw that was demonstrated by a couple of security researchers. From what is known, with $200 worth of tools and some time you can apparently hack into somebody’s machine if you have physical access. Ironically, MS made a huge show about TPM and also made it sort of a requirement if a person wanted to have Windows 11. I remember Matthew Garrett sharing about TPM and issues with Lenovo laptops. While AMD has acknowledged the issue, its response has been somewhat wishy-washy. But this is not the only issue that has been plaguing AMD. There have been reports of AMD chips literally exploding, and again AMD issuing a somewhat wishy-washy response. 😦 Asus, though, made some changes, but whether that is for Zen4 or only 5 parts is not known. Most people are expecting a recession in I.T. hardware this year as well as next year due to high prices. No idea if things will change, if ever 😦

11 May, 2023 06:17AM by shirishag75

May 10, 2023

hackergotchi for Charles Plessy

Charles Plessy

Upvote to patch Firefox to render Markdown

I previously wrote that when Firefox receives a file whose media type is text/markdown, it prompts the user to download it, whereas other browsers display rendered results.

Now it is possible to upvote a proposal on connect.mozilla.org asking that Firefox renders Markdown by default.

10 May, 2023 11:43PM

May 09, 2023

hackergotchi for C.J. Adams-Collier

C.J. Adams-Collier

Instructions for installing Proxmox onto the Qotom device

These instructions are for qotom devices Q515P and Q1075GE. You can order one from Amazon or directly from Cherry Ni <export03@qotom.com>. Instructions are for those coming from Windows.

Prerequisites:

  • A USB keyboard and mouse
  • A powered HDMI monitor and an HDMI cable
  • A copy of the Proxmox VE Installer ISO
  • A USB disk from which to boot the installer
  • Software and instructions to burn the raw image to USB
  • The details of your wireless network including wireless network ID (SSID), WPA password, gateway address and network prefix length

To find your windows network details, run the following command at the command prompt:

netsh interface ip show addresses

Here’s my output:

PS C:\Users\cjcol> netsh interface ip show addresses "Wi-Fi"

Configuration for interface "Wi-Fi"
    DHCP enabled:                         Yes
    IP Address:                           172.16.79.53
    Subnet Prefix:                        172.16.79.0/24 (mask 255.255.255.0)
    Default Gateway:                      172.16.79.1
    Gateway Metric:                       0
    InterfaceMetric:                      50

Did you follow the instructions linked above in the “prerequisites” section? If not, take a moment to do so now.
Open Rufus and select the proxmox iso which you downloaded.

You may be warned that Rufus will be acting as dd.

Don’t forget to select the USB drive that you want to write the image to. In my example, the device is creatively called “NO_LABEL”.

You may be warned that re-imaging the USB disk will result in the previous data on the USB disk being lost.

Once the process is complete, the application will indicate that it is complete.

You should now have a USB disk with the Proxmox installer image on it. Place the USB disk into one of the blue, USB-3.0, USB-A slots on the Qotom device so that the system can read the installer image from it at full speed. The Proxmox installer requires a keyboard, video and mouse. Please attach these to the device along with inserting the USB disk you just created.

Press the power button on the Qotom device. Press the F11 key repeatedly until you see the AMI BIOS menu. Press F11 a couple more times. You’ll be presented with a boot menu. One of the options will launch the Proxmox installer. By trial and error, I found that the correct boot menu option was “UEFI OS”

Once you select the correct option, you will be presented with a menu that looks like this. Select the default option and install.

During the install, you will be presented with a choice of block device to install to. I think there’s only a single block device in this Celeron, but if there is more than one, I prefer the smaller one for the Proxmox OS. I also make a point of limiting the size of the root filesystem to 16G. I think it will take up the entire volume group if you don’t set a limit.

Okay, I’ll do another install and select the correct filesystem.

If you read this far and want me to add some more screenshots and better instructions, leave a comment.

09 May, 2023 11:43PM by C.J. Collier

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

crc32c 0.0.1 on CRAN: New Package

Happy to announce a new package: crc32c. This arose out of a user request to add crc32c (which is related to, but different from, crc32 without the trailing c) to my digest package. Which I did (for now in a branch), using the software-fallback version of crc32c from the reference implementation by Google at their crc32c repo.

However, the Google repo also offers hardware-accelerated versions and switches at run-time. So I pondered a little about how to offer the additional performance without placing a dependency burden on all users of digest.

Lo and behold, I think I found a solution by reusing what R offers. First off, the crc32c package wraps the Google repo cleanly and directly. We include all the repo code – but not the logging or benchmarking code. This keeps external dependencies down to just cmake. Which while still rare in the CRAN world is at least not entirely uncommon. So now each time you build the crc32c R package, the upstream hardware detection is added transparently thanks in part to cmake. We build libcrc32c.a as a static library and include it in the R package and its own shared library.

And we added exporting of three key functions, directly at the C level, along with exporting one C level function of the package that other packages can call. The distinction matters. The second step of offering a function R can call (also from other packages) is used and documented. By means of an Imports: statement to instantiate the package providing the functionality, the client package obtains access to a compiled function its R code can then call. A number of other R packages use this.

But what is less well known is that we can do the same with C level functions. Again, it takes an imports statement but once that is done we can call ‘C to C’. Which is quite nice. I am working currently on the branch in digest which motivated this, and it can import the automagically hardware-accelerated functionality on the suitable hardware. Stay tuned for a change in digest.

I also won and lost the CRAN lottery for once: the package made it through the ‘new package’ checks without any revisions. Only to then immediately fail on the CRAN machines using gcc-13 as a -fPIC was seen as missing when making the shared library. Even though both my CI and the r-universe builds were all green. Ah well. So a 0.0.2 release will be coming in a day or two.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

09 May, 2023 01:13AM

May 08, 2023

hackergotchi for Paul Tagliamonte

Paul Tagliamonte

Open to work!

I decided to leave my job (Principal Software Engineer) after 4 years. I have no idea what I want to do next, so I’ve been having loads of chats to try and work that out.

I like working in mission focused organizations, working to fix problems across the stack, from interpersonal down to the operating system. I enjoy “going where I’m rare”, places that don’t always get the most attention. At my last job, I most enjoyed working to drive engineering standards for all products across the company, mentoring engineers across all teams and seniority levels, and serving as an advisor for senior leadership as we grew the engineering team from 3 to 150 people.

If you have a role that you think I’d like to hear about, I’d love to hear about it at jobs{}pault.ag (where the {} is an @ sign).

08 May, 2023 06:19PM

May 07, 2023

Thorsten Alteholz

My Debian Activities in April 2023

FTP master

This month I accepted 103 and rejected 11 packages. The overall number of packages that got accepted was 103.

Debian LTS

This was my hundred-sixth month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian. 

This month my all in all workload has been 14h.

During that time I uploaded:

  • [DLA 3405-1] libxml2 security update for two CVEs
  • [DLA 3406-1] sniproxy security update for one CVE
  • [sniproxy] updates for Unstable + Bullseye prepared and debdiffs sent to maintainer
  • [1033759] pu-bug: duktape/bullseye uploaded and accepted
  • [1029976] pu-bug: libzen/bullseye uploaded and accepted

I also continued to work on ring in Buster and Bullseye, where some new CVEs appeared.

Debian ELTS

This month was the fifty seventh ELTS month.

Unfortunately I couldn’t use up all my allocated hours and I was only able to continue my work on openssl1.0. I plan to do an upload in May.

Debian Astro

Due to a change in numpy the planetary-system-stacker stopped working. I created a patch and uploaded a new package. Meanwhile it already arrived in testing and I could analyse some pictures of the sun again.

Other stuff

Looking at my notes, there is nothing to be reported here.

07 May, 2023 11:41AM by alteholz

hackergotchi for Norbert Preining

Norbert Preining

Debian TeX Repo Stats

After having worked about 18 years on getting Debian users a great TeX experience, things have turned sour between Debian and me. So I think it is time to look a bit at my contributions over these years, for this I have prepared repo stats of the most important TeX related repositories.

Here are the repo stats for

Sad to see how much time and energy I have invested (wasted?). What a shame. And thanks to Hilmar for continuing my work!

07 May, 2023 08:14AM by Norbert Preining

May 06, 2023

Reproducible Builds

Reproducible Builds in April 2023

Welcome to the April 2023 report from the Reproducible Builds project!

In these reports we outline the most important things that we have been up to over the past month. And, as always, if you are interested in contributing to the project, please visit our Contribute page on our website.

General news

Trisquel is a fully-free operating system building on the work of Ubuntu Linux. This month, Simon Josefsson published an article on his blog titled Trisquel is 42% Reproducible!. Simon wrote:

The absolute number may not be impressive, but what I hope is at least a useful contribution is that there actually is a number on how much of Trisquel is reproducible. Hopefully this will inspire others to help improve the actual metric.

Simon wrote another blog post this month on a new tool to ensure that updates to Linux distribution archive metadata (e.g. via apt-get update) will only use files that have been recorded in a globally immutable and tamper-resistant ledger. A similar solution exists for Arch Linux (called pacman-bintrans), which was announced in August 2021, where an archive of all issued signatures is publicly accessible.


Joachim Breitner wrote an in-depth blog post on a bootstrap-capable GHC, the primary compiler for the Haskell programming language. As a quick background to what this is trying to solve, in order to generate a fully trustworthy compile chain, trustworthy root binaries are needed… and a popular approach to address this problem is called bootstrappable builds where the core idea is to address previously-circular build dependencies by creating a new dependency path using simpler prerequisite versions of software. Joachim takes a somewhat recursive approach to the problem for Haskell, leading to the inadvertently humorous question: “Can I turn all of GHC into one module, and compile that?”

Elsewhere in the world of bootstrapping, Janneke Nieuwenhuizen and Ludovic Courtès wrote a blog post on the GNU Guix blog announcing The Full-Source Bootstrap, specifically:

[…] the third reduction of the Guix bootstrap binaries has now been merged in the main branch of Guix! If you run guix pull today, you get a package graph of more than 22,000 nodes rooted in a 357-byte program—something that had never been achieved, to our knowledge, since the birth of Unix.

More info about this change is available on the post itself, including:

The full-source bootstrap was once deemed impossible. Yet, here we are, building the foundations of a GNU/Linux distro entirely from source, a long way towards the ideal that the Guix project has been aiming for from the start.

There are still some daunting tasks ahead. For example, what about the Linux kernel? The good news is that the bootstrappable community has grown a lot, from two people six years ago there are now around 100 people in the #bootstrappable IRC channel.


Michael Ablassmeier created a script called pypidiff as they were looking for a way to track differences between packages published on PyPI. According to Michael, pypidiff “uses diffoscope to create reports on the published releases and automatically pushes them to a GitHub repository.” This can be seen on the pypi-diff GitHub page (example).


Eleuther AI, a non-profit AI research group, recently unveiled Pythia, a collection of 16 Large Language Model (LLMs) trained on public data in the same order designed specifically to facilitate scientific research. According to a post on MarkTechPost:

Pythia is the only publicly available model suite that includes models that were trained on the same data in the same order [and] all the corresponding data and tools to download and replicate the exact training process are publicly released to facilitate further research.

These properties are intended to allow researchers to understand how gender bias (etc.) can be affected by training data and model scale.


Back in February’s report we reported on a series of changes to the Sphinx documentation generator that was initiated after attempts to get the alembic Debian package to build reproducibly. Although Chris Lamb was able to identify the source problem and provided a potential patch that might fix it, James Addison has taken the issue in hand, leading to a large amount of activity resulting in a proposed pull request that is waiting to be merged.


WireGuard is a popular Virtual Private Network (VPN) service that aims to be faster, simpler and leaner than other solutions to create secure connections between computing devices. According to a post on the WireGuard developer mailing list, the WireGuard Android app can now be built reproducibly so that its contents can be publicly verified. According to the post by Jason A. Donenfeld, “the F-Droid project now does this verification by comparing their build of WireGuard to the build that the WireGuard project publishes. When they match, the new version becomes available. This is very positive news.”


Author and public speaker V. M. Brasseur published a sample chapter from her upcoming book on “corporate open source strategy”; the chapter’s topic is the Software Bill of Materials (SBOM):

A software bill of materials (SBOM) is defined as “…a nested inventory for software, a list of ingredients that make up software components.” When you receive a physical delivery of some sort, the bill of materials tells you what’s inside the box. Similarly, when you use software created outside of your organisation, the SBOM tells you what’s inside that software. The SBOM is a file that declares the software supply chain (SSC) for that specific piece of software. []


Several distributions noticed recent versions of the Linux Kernel are no longer reproducible because the BPF Type Format (BTF) metadata is not generated in a deterministic way. This was discussed on the #reproducible-builds IRC channel, but no solution appears to be in sight for now.


Community news

On our mailing list this month:

Holger Levsen gave a talk at foss-north 2023 in Gothenburg, Sweden on the topic of Reproducible Builds, the first ten years.

Lastly, there were a number of updates to our website, including:

  • Chris Lamb attempted a number of ways to try and fix literal {: .lead} appearing in the page [][][], made all the Back to who is involved links italics [], and corrected the syntax of the _data/sponsors.yml file [].

  • Holger Levsen added his recent talk [], added Simon Josefsson, Mike Perry and Seth Schoen to the contributors page [][][], reworked the People page a little [] [], as well as fixed spelling of ‘Arch Linux’ [].

Lastly, Mattia Rizzolo moved some old sponsors to a ‘former’ section [] and Simon Josefsson added Trisquel GNU/Linux. []



Debian

  • Vagrant Cascadian reported on Debian’s build-essential package set, which was “inspired by how close we are to making the Debian build-essential set reproducible and how important that set of packages are in general”. Vagrant mentioned that: “I have some progress, some hope, and I daresay, some fears…”. […]

  • Debian Developer Cyril Brulebois (kibi) filed a bug against snapshot.debian.org after they noticed that “there are many missing dinstalls” — that is to say, the snapshot service is not capturing 100% of the historical states of the Debian archive. This is relevant to reproducibility because without the availability of historical versions, it becomes impossible to repeat a build at a future date in order to correlate checksums.

  • 20 reviews of Debian packages were added, 21 were updated and 5 were removed this month adding to our knowledge about identified issues. Chris Lamb added a new build_path_in_line_annotations_added_by_ruby_ragel toolchain issue. […]

  • Mattia Rizzolo announced that the data for the stretch archive on tests.reproducible-builds.org has been archived. This matches the archival of stretch within Debian itself. This is of some historical interest, as stretch was the first Debian release regularly tested by the Reproducible Builds project.


Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:


diffoscope development

diffoscope version 241 was uploaded to Debian unstable by Chris Lamb. It included contributions already covered in previous months as well as a change by Chris Lamb to add a missing raise statement that was accidentally dropped in a previous commit. []



Testing framework

The Reproducible Builds project operates a comprehensive testing framework (available at tests.reproducible-builds.org) in order to check packages and other artifacts for reproducibility. In April, a number of changes were made, including:

  • Holger Levsen:

    • Significant work on a new Documented Jenkins Maintenance (djm) script to support logged maintenance of nodes, etc. [][][][][][]
    • Add the new APT repo url for Jenkins itself with a new signing key. [][]
    • In the Jenkins shell monitor, allow 40 GiB of files for diffoscope for the Debian experimental distribution as Debian is frozen around the release at the moment. []
    • Updated Arch Linux testing to cleanup leftover files left in /tmp/archlinux-ci/ after three days. [][][]
    • Mark a number of nodes hosted by Oregon State University Open Source Lab (OSUOSL) as online and offline. [][][]
    • Update the node health checks to detect failures to end schroot sessions. []
    • Filter out another duplicate contributor from the contributor statistics. []
  • Mattia Rizzolo:




If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

06 May, 2023 07:55PM

May 05, 2023

Dima Kogan

mrcal 2.3 released!

Today I released mrcal 2.3 (the release notes are available here). Once again, in the code there are lots of useful improvements, but nothing major. The big update in this release is the documentation. Much of it was improved and extended, especially practical guides in the how-to-calibrate page and the recipes.

Major updates are imminent. I'm about to merge the cross-projection uncertainty branch and the triangulated-points-in-the-solver branch to study chessboard-less calibrations and structure from motion. Neither of these are novel, but mrcal's improved lens models and uncertainty propagation will hopefully produce better results.

05 May, 2023 09:13PM by Dima Kogan

hackergotchi for Shirish Agarwal

Shirish Agarwal

CAT-6, AMD 5600G, Dealerships closing down, TRAI-caller and privacy.

CAT-6 patch cord & ONU

A few months back I was offered a fibre service. Most of the service offering has been using Chinese infrastructure, including the ONU (Optical Network Unit). Wikipedia doesn’t have a good page on ONUs, hence I had to rely on third-party sites. FS (a name I don’t really know) has some good basic info on the ONU and how it’s part and parcel of the whole infrastructure. I also got an ONT (Optical Network Terminal) but it seems to be very basic and mostly dumb. I used an old CAT-6 cable (a decade old) to connect them and it worked for a couple of months. When I had to change it, I first went looking to see whether a higher cable category was an option. CAT-7 is there but not backward compatible. CAT-8 is the next higher version, but apparently it’s expensive and also not easily bought. I did quite a few tests on CAT-6 and the ONU and it conks out at best 1 mbps, which is still far better than what I am used to. CAT-8 cables are either not available or simply too expensive for home applications atm. A good summary of CAT-8 and what it stands for can be found here. The networking part is hopeless, as most consumer-facing CPUs and motherboards don’t even offer 10 mbps, so asking for anything more is just overkill without any benefit. Which does bring me to the next question, something that I may do in a few months or a year down the road. Just to clarify, they may say it is 100 mbps or even 1 Gbps but that’s plain wrong.

AMD APU, Asus Motherboard & Dealerships

I had been thinking of an AMD APU; I could wait a while, but sooner or later I would have to get one. I got quoted an AMD Ryzen 3 3200G with an Asus A320 motherboard for around 14k, which kinda looked steep to me. Quite a few hardware dealers with whom I had traded and consulted over the years have simply shut down. While there are new people, it’s much harder now than before to build relationships (due to deafness). The easiest to share, which was also online, was pcpartpicker.com, which had an Indian domain that is no longer available. Quite a few offline brick-and-mortar PC businesses have also closed. There are a few new ones, but it takes time and the big guys have made more of a killing. I was shocked quite a bit. Came home, browsed a bit and was hit by this. Both the AMD and Intel PC businesses have taken a beating. AMD a bit more, as Intel still holds part of the business segment that has traditionally been theirs. There have been proofs and allegations of bribing in the past (do remember the EU antitrust case against Intel for monopoly), but Intel’s own cutting of corners with the Spectre and Meltdown flaws hasn’t helped its case, nor have the lawsuits. AMD on the other hand, under the expertise of Lisa Su, has simply grown from strength to strength. Inflation and profiteering by other big companies have made the outlook for both AMD and Intel a bit lackluster. AMD is supposed to show Zen5 chips in a few days’ time and the rumor mill has been churning.

Correction – not in a few days, but in 2025.

Personally, I would be happy with maybe a Ryzen 5600G and an Asus motherboard. My main motive whenever I buy an APU is not to go beyond a 65W TDP. It’s kinda middle of the road. From what I could read, this year and next year we could have AM4+ or updates along those lines; AM5 APUs, CPUs and boards are slated to be launched in 2025. I did see pcpricetracker and it does give an idea of various APU prices, although I have to say pcpartpicker was much more intuitive to work with than the above.

I just had my system cleaned a couple of months ago, so touch wood I should be able to use it for another couple of years or more before I have to get one of these APUs, and I do hope they are worth it. My idea is to use it not only for testing various software but also to delve a bit into VR, if that’s possible. I did read a bit about deafness and VR as well. A good summary can be found here. I am hopeful that there may be a few people in the community who will look at and respond to that. It’s crucial.

TRAI-caller, Privacy 101& Element.

While most of us in the Debian and FOSS communities do care about privacy, lots of times it’s frustrating. I’m always looking for videos that seek to share the view of why privacy is needed by individuals and why governments and other parties hate it. There are a couple of basic YouTube videos that do explain this quite practically.

Now why am I sharing the above? It isn’t that people do not value privacy and how we hold it dear. I share it because the GOI just today blocked Element. While it may be trivial for us to work around the issue, it does show what the GOI is doing. And it still acts surprised that its press ranking is going to the pits.

Even our women wrestlers have been protesting for a week just to get an FIR (First Information Report) filed. And these are women who have won medals for the country. More than half of these organizations, specifically the women’s wrestling team, don’t have POSH, which is a mandatory body supposed to be in every organization. POSH stands for Prevention of Sexual Harassment at the Workplace. The ‘gentleman’ concerned is a known rowdy/goon, hence it took almost a week of protest to do the needful 😦

I do try not to report on these things because right now, every other day, we see the Govt. curtailing our rights somewhere or the other, and most people are mute 😦

Signing out, till later 😦

05 May, 2023 02:30PM by shirishag75

hackergotchi for Jonathan Dowland

Jonathan Dowland

sidebar dividers for mutt

I wanted to start using (neo)mutt's sidebar and I wanted a way of separating groups of mail folders in the list. To achieve that I interleaved a couple of fake "divider" folder names. It looks like this:

Screenshot of neomutt with sidebar

This was spurred on by an attempt to revamp my personal organisation.

I've been using mutt for at least 20 years (these days neomutt), which, by default, does not show you a list of mail folders all the time. The default view is an index of your default mailbox, from which you can view a mail (pager view), switch to a mailbox, or do a bunch of other things, some of which involve showing a list of mailboxes. But the list is not omnipresent. That's somewhat of a feature, if you believe that you don't need to see that list until you are actually planning to pick from it.

There's an old and widespread "sidebar" patch for mutt (which neomutt ships out of the box). It reserves a portion of the left-hand side of the terminal to render a list of mailboxes. It felt superfluous to me, so I never really thought to use it, until now: I wanted to make my Inbox functional again, and to achieve that, I needed to move out mail that was serving as a placeholder for a particular Action, or as a reminder that I was Waiting on a response. What was stopping me was a feeling that I'd forget to check the other mailboxes. So, I need to have them up in my face all the time to remind me.

Key for me, to make it useful, is to control the ordering of mailboxes and to divide them up using the interleaved fake mailboxes. The key configuration is therefore

set sidebar_sort_method = 'unsorted'
mailboxes =INBOX =Action =Waiting
mailboxes '=   ~~~~~~~~' # divider
...

My groupings, for what it's worth, are: the key functional mailboxes (INBOX/Action/Waiting) come first; last is reference ('2023' is the name of my current Archive folder; the other folders listed are project-specific reference and the two mailing lists I still directly subscribe to). Sandwiched in between is currently a single mailbox for a particular project for which it makes sense to have a separate mailbox. Once that's gone, so will that middle section.

For my work mail I do something similar, but the groupings are

  1. INBOX/Action/Waiting
  2. Reference (Sent Mail, Starred Mail)
  3. More reference (internal mailing lists I need to closely monitor)
  4. Even more reference (less important mailing lists)

As with everything, these approaches are under constant review.

05 May, 2023 10:12AM