Typically when I start my PC, whether at work or at home, there are a few pages I tend to open. These are usually subreddits related to various tools I use, so I can keep up with changes, potential problems people are having, and tips and tricks.
Among the subreddits that are often open on my secondary browser window is the Fedora subreddit, because I find it fun and useful to keep a finger on the pulse of my distro of choice. Lately, however, there has been a lot of noise in said subreddit, and presumably also in other Fedora communities, about upgrade issues related to the mesa-freeworld package, which have led to people being unable to boot into their desktops or otherwise wreaked havoc on people's Linux installations.
Now, upgrade issues are a fact of life. Human error tends to find its way into both the packages and the package managers, and things break. However, what I found extremely strange was how some people were describing how they got into said trouble with mesa-freeworld in the first place.
What is mesa-freeworld?
For a bit of context: some time ago Fedora stripped hardware video encoding/decoding support from their Mesa packages for video codecs under strict patent or licensing rules, such as H.264 and H.265, which are still very commonly used on many video streaming websites. Fedora operates from the US with fairly close ties to Red Hat, so they want to avoid royalty issues with this stuff.
On some systems, such as laptops, the hardware decode functionality is fairly important due to energy efficiency concerns. So, when patent-encumbered codecs were removed from the stock Mesa packages, some people set up third-party Mesa packages on RPM Fusion which still carried encumbered codec support, and called the result mesa-freeworld.
Now, what should be clarified here is that this is the whole reason mesa-freeworld exists. It doesn't make your games run any faster or anything like that; it just enables some video codecs to be decoded or encoded on your GPU. Some people seem confused by this, so I felt it important to clarify that here.
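If you are unsure whether your system currently has hardware decode support for these codecs, one way to check (assuming the libva-utils package is installed and your GPU uses VA-API) is to query the driver directly:

```shell
# List the codec profiles the VA-API driver supports.
# H.264 shows up as VAProfileH264*, H.265 as VAProfileHEVC*.
# If neither appears, your Mesa build likely lacks encumbered codec support.
vainfo | grep -E 'H264|HEVC'
```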
So, onto the issues
Now, I didn't bother looking too deeply into the exact cause of the
issues myself, because I am not a user of
mesa-freeworld, but I
did read a number of explanations given and therefore can give a
general overview of what happened.
Since the mesa-freeworld package lives in a third-party repository, it doesn't necessarily update at the same pace as the rest of the Fedora packages. What seemingly happened is that Fedora updated its Mesa packages and package dependencies while mesa-freeworld remained at the old version. When updates were installed, dependency resolution failed to come up with a working set of packages, and various components dependent on Mesa began to fail. The most spectacular consequence was the Fedora install booting into a system failure screen on the next boot, presumably due to GDM or its components failing to start correctly.
Naturally a nasty surprise, especially for people booting their work
laptops on a Monday morning.
The upgrade issues apparently presented themselves in at least a couple of forms. The nastiest was when upgrades were applied via PackageKit using the GNOME Software or KDE Discover graphical software managers, which apparently didn't notify the user of any problem whatsoever and rolled on with a clearly faulty upgrade. If this was the case, then PackageKit should very clearly be patched to reject an upgrade with broken dependencies.
However, I also saw a number of cases of people running their upgrades via DNF in the terminal and, despite clear warnings, proceeding to install an upgrade that was marked as broken. This is something you should absolutely never do unless you know what you are doing or are being prompted to do so by someone who does.
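For DNF specifically, you can make these situations fail loudly rather than proceed with a degraded package set. A sketch, assuming a current Fedora DNF:

```shell
# By default, DNF may skip packages whose dependencies cannot be
# resolved and continue with the rest of the transaction.
# --best insists on the latest candidate versions and aborts with an
# error instead of quietly leaving a partial package set behind.
sudo dnf upgrade --refresh --best
```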
This is not the first time this has happened, and it probably won't be the last either. The most prominent case of this, which also inspired the title of this blog post, was when Linus from Linus Tech Tips force-installed a package against clear warnings, which resulted in nearly all desktop-related packages being removed, leaving the system able to boot only into a TTY environment.
As a professional Linux nerd, this is extremely frustrating to watch. It is one thing to run into unexpected foot-guns and accidentally get into trouble; it is quite another to clearly notice that something is going wrong, ignore the warnings, and then throw your hands up when things inevitably go wrong.
How to solve this problem
This kind of sudden-onset rapid distro demolition is very clearly preventable with a few simple steps. I have compiled some suggestions here.
Package managers should try their best to prevent this
On the package manager side it mostly boils down to steps that have largely already been taken: detect and warn about a broken upgrade, and prevent it from happening silently. Apparently PackageKit might still need to improve on this, and maybe DNF should be even louder about broken upgrades, but APT at least is in theory annoying enough to prevent anything short of a complete brain fart maneuver.
Naturally this area is based entirely on best and reasonable effort, because packages can break in surprising ways, and it doesn't make sense to make users' lives miserable by putting too many roadblocks in front of them.
Package managers should make updates reliable and easy to roll back
If an update goes poorly, the package manager should provide the ability to recover to the prior state easily. Optimally, it would also do so automatically upon a failed boot, unless instructed otherwise.
DNF has this functionality (APT seemingly doesn't), but I doubt people are fully aware of it. GUI software for package management to my knowledge mostly ignores rollbacks, but in this case that probably wouldn't have helped either.
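As a sketch of what this rollback functionality looks like in practice, using DNF's transaction history (the transaction ID here is a placeholder):

```shell
# List recent package transactions with their IDs.
sudo dnf history list

# Inspect what a specific transaction changed (42 is a placeholder ID).
sudo dnf history info 42

# Revert the most recent transaction, reinstalling the old versions,
# provided they are still available in the repositories.
sudo dnf history undo last
```

The caveat is that undoing only works as long as the older package versions can still be fetched, which is one reason deployment-based rollbacks on immutable distros are more robust.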
This also ties in with concepts like immutable distros, which I will touch on in a bit.
Be aware of what third-party software you install
Installation of third-party systems software shouldn't be taken as lightly as it has been so far. If you are replacing system components on your distro, you should have a clear idea of what you are replacing and why. This will help you and others to track down and fix issues faster.
This also means that the community should be responsible when suggesting installation of alternative system components. The community today seems to be far too willing to share and follow various setup guides without really communicating or understanding what the guides are about.
Especially when it comes to newbies, you should attempt to be minimally invasive when guiding them through their first-time setup. Don't try to rig up an ultra-optimized setup for them that attempts to preemptively solve every issue; instead, address only concrete issues. Similarly, don't try to preemptively solve problems for yourself either: proceed with a system modification only if it actually has utility for you.
Use containerization to limit the potential blast radius of updates
One thing that became evident in the aftermath of the mesa-freeworld debacle is that most people just needed hardware decode for their web browser. And, as it happens, you could get hardware decode support for your browser by simply installing it from Flatpak, since the Mesa libraries carried by Flathub are built with encumbered codec support.
By using tools like Flatpak and Snap, you decouple the applications from the system libraries, which allows them to update at different paces. Optimally this means that your base system needs to carry fewer modifications and thus remains slimmer and more stable. Flaws in Flatpaks should also therefore not make your base system unbootable, which makes it easier to recover from issues.
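As a sketch, installing a browser from Flathub instead of relying on the system packages (using Firefox's application ID as published on Flathub):

```shell
# Add the Flathub remote if it isn't configured already.
flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo

# Install Firefox as a Flatpak; it runs against the Flatpak runtime's
# Mesa build, which includes the encumbered codecs, independently of
# the host system's Mesa packages.
flatpak install flathub org.mozilla.firefox

# Apps and runtimes update on their own cadence, decoupled from the host.
flatpak update
```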
You can also take this further with tools like Toolbx and Distrobox to also manage a mix of CLI and GUI tooling inside Podman containers with a traditional Linux environment while also protecting your base system from broken updates. If you mess up an update in a Toolbx container, just nuke the container and rebuild it. This should be a significantly easier task than trying to recover your entire desktop environment after your system dependencies broke.
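With Toolbx, for example, a broken container environment is disposable (the container name here is arbitrary):

```shell
# Create and enter a mutable Fedora container for day-to-day tooling.
toolbox create devbox
toolbox enter devbox

# If an update inside the container goes sideways, throw the container
# away and start fresh; the host system is untouched.
toolbox rm --force devbox
toolbox create devbox
```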
Consider immutable distros
After adopting some containerization techniques for managing your apps, it's not actually that big of a leap to hop from a traditional distro to an immutable one.
The difference between a traditional and an immutable distro is in how system software updates are deployed. In a traditional distro, the package contents are used to replace files in the root filesystem in place. In an immutable distro, the changes are staged into a "deployment", which is used to construct the new root filesystem contents without modifying the current one.
Because of this, updates on an immutable distro have fewer moving parts that can go wrong (modifying files of software that is currently running is a potential source of issues) and the immutable distro can store multiple deployments simultaneously and switch between them easily.
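On an rpm-ostree-based system such as Silverblue, you can see and manage these deployments directly:

```shell
# Show the currently booted deployment and any others kept on disk.
rpm-ostree status

# Stage the previous deployment as the default for the next boot.
rpm-ostree rollback

# Pin a known-good deployment so cleanup never garbage-collects it
# (index 0 refers to the first deployment in the status listing).
sudo ostree admin pin 0
```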
I personally use Fedora Silverblue on my systems, because the immutability gives me increased confidence in the reliability of my systems. If I stumble upon a broken update, I can return to the previous version with just a quick rpm-ostree rollback && systemctl reboot.
Some people make the assumption that immutability means you cannot or should not make changes to the system software, but this isn't really the case either. In fact, tinkering is probably better on an immutable system, because at least you can recover from your mistakes more easily. I myself haven't really bothered to tinker with the base system too much, because all tinkering I do I can typically manage inside Podman containers. However, the confidence afforded by transactional and easy-to-rollback updates has meant that I have played around with beta versions of Fedora more often and adopted new distro versions quicker, because in the worst case scenario I always have the option to just go back to the previous version. Rolling back a full distro upgrade is immensely powerful.
Read what the software is telling you
Last but not least, you should make an effort to read what the software is telling you before you mindlessly proceed. If the software is warning you that something bad might happen, it's probably a good idea to hold off on pressing Enter until you have at least some idea about what is going to happen and how you might go about recovering from a failure.
There is also a responsibility here for the software developers to not desensitize their users by producing too many unnecessary warnings and errors.
Like I said earlier, watching people nuke their Linux installs is really quite frustrating to me because of how preventable all of this would be.
Now, some people have put, and probably will continue to put, some of the blame for this issue on Fedora, because technically the whole mesa-freeworld situation is only a problem due to Fedora's strictness on encumbered codec support. But similarly, I suppose we could also blame the companies that try to hold things like video codecs hostage; AV1 will hopefully solve that problem for us.
But in the end, I feel like mesa-freeworld is just a symptom of a more systemic problem. Those of us who have been using Linux for longer carry some responsibility in how we recommend others use Linux. And by now the Internet is full of really bad advice and guides on how to set up distros.
So, if nothing else, I hope I at least made you think about these issues and carry those thoughts into future discussions related to these types of issues.
Thanks for reading.