
Fedora People

Integrating the NOUS E10 ZigBee Smart CO₂, Temperature & Humidity Detector with ZHA

Posted by Brian (bex) Exelbierd on 2025-12-23 14:30:00 UTC

My friend Tomáš recently gave me a NOUS E10 ZigBee Smart CO₂, Temperature & Humidity Detector. It is a compact ZigBee device that, on paper, integrates with Home Assistant. However, as is often the case with smart home hardware, the reality is slightly more nuanced. Home Assistant offers two primary ways to integrate Zigbee devices: Zigbee2MQTT and ZHA (Zigbee Home Automation). I started out with ZHA when I first installed Home Assistant. There is no way, as far as I know, to migrate between the two without re-adding all of your devices, so, 25 (now 26) devices in, I am on team ZHA. While the NOUS E10 was already fully supported in Zigbee2MQTT, it was not functional in ZHA.

NOUS E10 ZigBee Smart CO₂, Temperature & Humidity Detector Home Assistant CO₂, Temperature & Humidity screenshot
Capturing the photo and the screenshot simultaneously without breathing on the sensor is hard; glossy surfaces are tricky to photograph, so slight value drift between the sensor and UI is expected.

The Tuya Rebrand Rabbit Hole

I did some reading and it seemed that, between what the folks who did the Zigbee2MQTT integration figured out and the fact that the device is really a rebranded Tuya device, writing the integration should be achievable with my level of skill and general coding/technical experience. Tuya is a massive OEM (Original Equipment Manufacturer) that produces a vast array of smart home devices sold under hundreds of different brand names, so while the devices vary, the overall concept is fairly well understood.

The challenge with Tuya devices is that they often use a proprietary Zigbee cluster to communicate data. Instead of using the standard Zigbee clusters for temperature or humidity, they wrap everything in their own protocol. To make these devices work in ZHA, you need a “quirk.” A quirk is essentially a Python-based translator that tells ZHA how to interpret these non-standard messages and map them to the standard Home Assistant entities.
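To make that concrete, here is a minimal, self-contained sketch of the kind of translation a quirk performs. This is not the real zhaquirks API (an actual quirk builds on the Tuya helper classes in the zha-device-handlers repository), and the data point IDs below are hypothetical, purely to illustrate the mapping:

# Illustrative sketch only; a real quirk uses the zhaquirks Tuya helpers,
# and these data point (DP) IDs are hypothetical.
from dataclasses import dataclass

# Map Tuya DP IDs to (entity name, scale divisor). Many Tuya devices
# report deciunits (21.1 °C arrives as 211, divisor 10); the NOUS E10
# reports values directly (divisor 1).
DP_MAP = {
    2: ("co2_ppm", 1),
    18: ("temperature_c", 1),
    19: ("humidity_pct", 1),
}

@dataclass
class Reading:
    entity: str
    value: float

def translate(dp_id: int, raw: int) -> Reading:
    """Turn a raw Tuya data point into a standard sensor reading."""
    entity, divisor = DP_MAP[dp_id]
    return Reading(entity, raw / divisor)

# A frame carrying DP 2 with raw value 612 surfaces as 612 ppm of CO2.
print(translate(2, 612))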

Developing the Quirk with AI

Because Tuya devices and the quirk concept are fairly well understood, this is a great use case for an LLM. I did some ideating with Google Gemini and plugged in all the values I could find from the Zigbee2MQTT source code and the device’s own signature. Using an LLM for this was surprisingly effective - it helped me scaffold the Python classes and identify which Tuya data points mapped to which sensors. Honestly, all it got wrong was guessing that values were reported as deciunits (value times 10, i.e. 21.1 is reported as 211) when, for this specific device, values are reported directly.

However, I hit multiple challenges, centered on the fact that this device never seemed to emit debug data. Usually, when you are developing a quirk, you can watch the Home Assistant logs to see the raw Zigbee frames coming in. You look for the “magic numbers” that change when you breathe on the sensor (CO₂). For some reason, the NOUS E10 was incredibly quiet. It took a lot of trial and error - and several restarts of the Zigbee network - to finally see the data flowing correctly. Eventually, I had a functional quirk that correctly reported CO₂ levels, temperature, and humidity.

Contributing to the Ecosystem

If you write a quirk, you’re encouraged to contribute it to the Zigpy ZHA Device Handlers Repository. This is the central hub for all ZHA quirks, and once a quirk is merged there, it eventually makes its way into a standard Home Assistant release. I worked on a basic test case and cleaned up my code to match the coding standards and general concepts used in similar quirks.

I have submitted this pull request and I’m waiting for feedback. I’m expecting to need to make corrections as this is my first time doing this kind of contribution. While I have validated that the code works in my own environment, “working” and “ready for contribution” are not always the same thing. There are coding standards, naming conventions, and architectural patterns that the maintainers (rightly) insist upon to keep the codebase maintainable.

How to Use the Quirk Today

If you happen to have one of these and you use ZHA in Home Assistant, you can use the quirk right now without waiting on the merge. To do this, you need to save the actual Python code in a custom quirks directory in your Home Assistant install. Typically, you would use /config/zha_quirks.
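For example, assuming the quirk file is named nous_e10.py (the filename itself is up to you), from a shell on the Home Assistant host:

mkdir -p /config/zha_quirks
cp nous_e10.py /config/zha_quirks/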

After you do that, update your configuration.yaml to add the quirk directory as follows:

zha:
  custom_quirks_path: /config/zha_quirks/

Then restart Home Assistant, pair your device, and, as a different friend would say, “Robert is your father’s brother.” It is a small but satisfying victory to take a non-working device and make it fully functional through a bit of code and community knowledge and advice.

NanoKVM: I like it

Posted by Jonathan McDowell on 2025-12-22 17:38:00 UTC

I bought a NanoKVM. I’d heard some of the stories about how terrible it was beforehand, and some I didn’t learn about until afterwards, but at £52, including VAT + P&P, that seemed like an excellent bargain for something I was planning to use in my home network environment.

Let’s cover the bad press first. apalrd did a video, entitled NanoKVM: The S stands for Security (Armen Barsegyan has a write up recommending a PiKVM instead that lists the objections raised in the video). Matej Kovačič wrote an article about the hidden microphone on a Chinese NanoKVM. Various other places have picked up both of these and still seem to be running with them, 10 months later.

Next, let me explain where I’m coming from here. I have over 2 decades of experience with terrible out-of-band access devices. I still wince when I think of the Sun Opteron servers that shipped with an iLOM that needed a 32-bit Windows browser in order to access it (IIRC some 32-bit binary JNI blob). It was a 64-bit x86 server from a company who, at the time, still had a major non-Windows OS. Sheesh. I do not assume these devices are fit for exposure to the public internet, even if they come from “reputable” vendors. Add to that the fact that the NanoKVM is very much based on a development board (the LicheeRV Nano), and I felt I knew what I was getting into here.

And, as a TL;DR, I am perfectly happy with my purchase. Sipeed have actually dealt with a bunch of apalrd’s concerns (GitHub ticket), which I consider to be an impressive level of support for this price point. Equally the microphone is explained by the fact this is a £52 device based on a development board. You’re giving it USB + HDMI access to a host on your network, if you’re worried about the microphone then you’re concentrating on the wrong bit here.

I started out by hooking the NanoKVM up to my Raspberry Pi classic, which I use as a serial console / network boot tool for working on random bits of hardware. That meant the NanoKVM had no access to the outside world (the Pi is not configured to route, or NAT, for the test network interface), and I could observe what went on. As it happens you can do an SSH port forward of port 80 with this sort of setup and it all works fine - no need for the NanoKVM to have any external access, and it copes happily with being accessed as http://localhost:8000/ (though you do need to choose MJPEG as the video mode; more forwarding or enabling HTTPS is needed for an H.264 WebRTC session).
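For anyone trying the same trick, the forward is just the usual form (addresses illustrative - substitute the NanoKVM’s address on the test network and your Pi’s hostname):

ssh -L 8000:192.168.7.2:80 user@raspberrypi

after which http://localhost:8000/ lands on the NanoKVM web interface.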

IPv6 is enabled in the kernel. My test setup doesn’t have router advertisements configured, but I could connect to the web application over the v6 link-local address that came up automatically.

My device reports:

Image version:              v1.4.1
Application version:        2.2.9

That’s fairly current, but the GitHub releases page lists 2.3.0 as newer.

Out of the box it’s listening on TCP port 80. SSH is not running, but there’s a toggle to turn it on and the web interface offers a web based shell (with no extra authentication over the normal login). On first use I was asked to set a username + password. Default access, as you’d expect from port 80, is HTTP, but there’s a toggle to enable HTTPS. It generates a self signed certificate - for me it had the CN localhost but that might have been due to my use of port forwarding. Enabling HTTPS does not disable HTTP, but HTTP just redirects to the HTTPS URL.

As others have discussed it does a bunch of DNS lookups, primarily for NTP servers but also for cdn.sipeed.com. The DNS servers are hard coded:

~ # cat /etc/resolv.conf
nameserver 192.168.0.1
nameserver 8.8.4.4
nameserver 8.8.8.8
nameserver 114.114.114.114
nameserver 119.29.29.29
nameserver 223.5.5.5

This is actually restored on boot from /boot/resolv.conf, so if you want changes to persist you can just edit that file. NTP is configured with a standard set of pool.ntp.org servers in /etc/ntp.conf (this does not get restored on reboot, so can just be edited in place). I had dnsmasq on the Pi set up to hand out DNS + NTP servers, but both were ignored (though actually udhcpc does write the DNS details to /etc/resolv.conf.dhcp).
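So, for example, pointing the device at a local resolver persistently is just a case of (address illustrative):

echo "nameserver 192.168.0.53" > /boot/resolv.conf

and it’ll be put back in place on the next boot.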

My assumption is the lookup to cdn.sipeed.com is for firmware updates (as I bought the NanoKVM cube it came fully installed, so no need for a .so download to make things work); when working DNS was provided I witnessed attempts to connect over HTTPS. I’ve not bothered digging further into this. I did go grab the latest.zip being served from the URL, which turned out to be v2.2.9, matching what I have installed, not the latest on GitHub.

I note there’s an iptables setup (with nftables underneath) that’s not fully realised - it seems to be trying to allow inbound HTTP + WebRTC, as well as outbound SSH, but everything is default accept so none of it gets hit. Setting up a default deny outbound and tweaking a little should provide a bit more reassurance it’s not going to try and connect out somewhere it shouldn’t.
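Something along these lines (an untested sketch; extend the allowed ports to taste) would be the shape of a stricter outbound policy:

table inet lockdown {
    chain output {
        type filter hook output priority 0; policy drop;
        oif "lo" accept
        ct state established,related accept
        udp dport { 53, 123 } accept comment "DNS + NTP"
    }
}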

It looks like updates focus solely on the KVM application, so I wanted to take a look at the underlying OS. This is buildroot based:

~ # cat /etc/os-release
NAME=Buildroot
VERSION=-g98d17d2c0-dirty
ID=buildroot
VERSION_ID=2023.11.2
PRETTY_NAME="Buildroot 2023.11.2"

The kernel reports itself as 5.10.4-tag-. Somewhat ancient, but actually an LTS kernel. Except we’re now up to 5.10.247, so it obviously hasn’t been updated in some time.

TBH, this is what I expect (and fear) from embedded devices. They end up with some ancient base OS revision and a kernel with a bunch of hacks that mean it’s not easily updated. I get that the margins on this stuff are tiny, but I do wish folk would spend more time upstreaming. Or at least updating to the latest LTS point release for their kernel.

The SSH client/daemon is full-fat OpenSSH:

~ # sshd -V
OpenSSH_9.6p1, OpenSSL 3.1.4 24 Oct 2023

There are a number of CVEs fixed in later OpenSSL 3.1 versions, though at present nothing that looks too concerning from the server side. Yes, the image has tcpdump + aircrack installed. I’m a little surprised at aircrack (the device has no WiFi and even though I know there’s a variant that does, it’s not a standard debug tool the way tcpdump is), but there’s a copy of GNU Chess in there too, so it’s obvious this is just a kitchen-sink image. FWIW it looks like the buildroot config is here.

Sadly the UART that I believe the bootloader/kernel are talking to is not exposed externally - the UART pin headers are for UART1 + 2, and I’d have to open up the device to get to UART0. I’ve not yet done this (but doing so would also allow access to the SD card, which would make trying to compile + test my own kernel easier).

In terms of actual functionality it did what I’d expect. 1080p HDMI capture was fine. I’d have gone for a lower resolution, but I think that would have required tweaking on the client side. It looks like the 2.3.0 release allows EDID tweaking, so I might have to investigate that. The keyboard defaults to a US layout, which caused some problems with the | symbol until I reconfigured the target machine not to expect a GB layout.

There’s also the potential to share out images via USB. I copied a Debian trixie netinst image to /data on the NanoKVM and was able to select it in the web interface and have it appear on the target machine easily. There’s also the option to fetch direct from a URL in the web interface, but I was still testing without routable network access, so didn’t try that. There’s plenty of room for images:

~ # df -h
Filesystem                Size      Used Available Use% Mounted on
/dev/mmcblk0p2            7.6G    823.3M      6.4G  11% /
devtmpfs                 77.7M         0     77.7M   0% /dev
tmpfs                    79.0M         0     79.0M   0% /dev/shm
tmpfs                    79.0M     30.2M     48.8M  38% /tmp
tmpfs                    79.0M    124.0K     78.9M   0% /run
/dev/mmcblk0p1           16.0M     11.5M      4.5M  72% /boot
/dev/mmcblk0p3           22.2G    160.0K     22.2G   0% /data

The NanoKVM also appears as an RNDIS USB network device, with udhcpd running on the interface. IP forwarding is not enabled, and there’s no masquerading rules setup, so this doesn’t give the target host access to the “management” LAN by default. I guess it could be useful for copying things over to the target host, as a more flexible approach than a virtual disk image.
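If you did want to hand the target host a route to the management LAN, the usual sketch on the NanoKVM side (interface name illustrative) would be:

sysctl -w net.ipv4.ip_forward=1
nft add table ip nat
nft 'add chain ip nat postrouting { type nat hook postrouting priority 100 ; policy accept ; }'
nft add rule ip nat postrouting oifname "eth0" masquerade

though leaving it off, as shipped, is the safer default.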

One thing to note is this makes for a bunch of devices over the composite USB interface. There are 3 HID devices (keyboard, absolute mouse, relative mouse), the RNDIS interface, and the USB mass storage. I had a few occasions where the keyboard input got stuck after I’d been playing about with big data copies over the network and using the USB mass storage emulation. There is a HID-only mode (no network/mass storage) to try and help with this, and a restart of the NanoKVM generally brought things back, but something to watch out for. Again I see that the 2.3.0 application update mentions resetting the USB hardware on a HID reset, which might well help.

As I stated at the start, I’m happy with this purchase. Would I leave it exposed to the internet without suitable firewalling? No, but then I wouldn’t do so for any KVM. I wanted a lightweight KVM suitable for use in my home network, something unlikely to see heavy use but that would save me hooking up an actual monitor + keyboard when things were misbehaving. So far everything I’ve seen says I’ve got my money’s worth from it.

EU OS: Which Linux Distribution fits Europe best?

Posted by Robert Riemann on 2025-12-21 10:23:00 UTC

Logos of Fedora, openSUSE, Ubuntu and Debian surrounding the Logo of EU OS

Please note that views expressed in this post (and this blog in general) are only my own and not of my employer.

Dear opensuse planet, dear fedora planet, dear fediverse, dear colleagues,

Soon, the EU OS project celebrates its first anniversary. I want to seize the occasion (and your attention) before the Christmas holidays to share my personal view on the choice of the Linux distribution as a basis for EU OS. EU OS is so far a community-led initiative to offer a template solution for Linux-on-the-desktop deployments in corporate settings, specifically in the public sector.

Only a few weeks ago, the EU OS collaborators together tested a fully functional Proof of Concept (PoC) with automatic provisioning of laptops and central user management. The documentation of this setup is about 90% complete and should be finalized in the coming weeks. This PoC relies on Fedora, which is the one aspect that has triggered the most attention and criticism so far.

I recall that EU OS so far has no funding and only a few contributors. Please check out the project GitLab, join the Matrix channel, or send me an email to help or discuss funding. So in my view, EU OS can currently accomplish its mission best by bringing communities and organisations together to use their existing resources more strategically than now.

In 2025, digital sovereignty was much discussed in Europe. I had many opportunities to discuss EU OS with IT experts in the public sector. I am hopeful that eventually one or several European projects will emerge to bring Linux to the (public sector) desktop more systematically than is currently the case.

I also learnt more about public sector requirements for Linux on the server, in VMs, and in Kubernetes. If the goal of EU OS is to leverage synergies with Cloud Native Computing technologies, those requirements must also be considered by the Linux distribution powering EU OS.

Linux Use in the Public Sector

Let us map out briefly the obvious use cases of Linux in a public sector organisation, such as a ministry, a court, or the administration of a city/region. The focus is on uses that are directly managed¹.

  • Linux on the desktop (rarely the case today, but that’s the ambition of the EU OS project)
  • Linux in a Virtual Machine (VM), a Docker/Podman Container, and for EU OS in a Flatpak Runtime
  • Linux on the server (including for Virtualisation/Kubernetes nodes)

Criteria for a Linux Distribution in the Public Sector

Given the exchanges I had so far, I would propose the following high-level criteria for the selection of a Linux Distribution:

Battle-tested Robustness
The public sector is very conservative and any change to the status quo requires clear, unique benefits.
Cloud Native Technology
The public sector has so far reacted very positively to the promises of bootc technology (bootc in the EU OS FAQ). It is a very recent technology, but the benefits for the management of Linux laptop fleets by teams that already know container technology are recognised.
Enterprise Support
The public sector wants commercial support for a free Linux with an easy upgrade path to a managed enterprise Linux. Existing companies, new companies, the public sector itself, or non-profit foundations could deliver such an enterprise Linux. I expect that a mix with clear task allocations would work best in practice.
Enterprise Tools
The public sector needs tools for provisioning, configuration and monitoring of servers, VMs, Docker/Podman containers, and laptops, as well as for the management of users. Those tools must scale up to some tens of thousands of laptops/users. The EU OS project proposes to rely on FreeIPA for identity management and Foreman for Linux laptop fleet management.
Third-Party Support
The public sector wants its existing, possibly proprietary or legacy, third-party hardware² or appliances (think SAP) to remain supported. This one is tricky, because it is each third party that decides what they support. Of course, any third-party vendor lock-in should eventually be avoided, but this takes time and some vendor lock-ins are less problematic than others.
Supply Chain Security and Consistency
The public sector must secure its supply chains. This generally becomes easier with fewer chains to secure. A Linux desktop based on Fedora and KDE requires about 1200 Source RPM packages³. A Linux server based on Fedora requires about 300 Source RPM packages. The flatpak runtime org.fedoraproject.KDE6Platform/x86_64/f43 requires about 100 Source RPM packages. I assume the numbers for Ubuntu/Debian/openSUSE are similar. So instead of securing all supply chains independently (possibly through outsourcing), the public sector can choose one, secure it, and cover several use cases with the same packages at no or significantly less extra effort. Updates, testing, and certifications of those packages would then benefit all use cases.
Accreditation and Certifications
Some public sector organisations require a high level of compliance with cyber security, data protection, accessibility, records keeping, interoperability, etc. The more often a (similar) Linux distribution has passed such tests, the easier it should get.
Forward-looking Sovereignty and Sustainability
The public sector wants to work with stable vendors in stable jurisdictions that minimise the likelihood of interference with the execution of its public mandate⁴. Companies can change ownership and jurisdiction. While not a bullet-proof solution, a multi-stakeholder non-profit organisation can offer more stability and alignment with public sector mandates. Such an organisation must then receive the resources to execute its mandate continuously over several years or decades. With several independent stakeholders, public tenders become more competitive and as such more meaningful (compare with the procurement of Microsoft Windows 11).

Geographical Dimension

I have the impression that some governments would like to choose a Linux distribution that (a) local IT companies can support and (b) creates jobs in their country or region. In my view, the only chance to offer such an advantage while maintaining synergies across borders is to find a Linux distribution supported by IT companies active in many countries and regions.

While the project EU OS has EU in its name, I would be in favour of not stopping at EU borders when looking for partners and synergies. It has already inspired MxOS in Mexico. Then, think of international organisations like the OSCE, Council of Europe, OECD, CERN, UN (WHO, UNICEF, WFP, ICJ), ICC, Red Cross, Doctors Without Borders (MSF), etc. Also think of NATO. Those organisations are active in the EU, in Europe, and in most other countries of the world. So if EU OS can rely on and stimulate investments in a Linux distribution that is truly an international project, international organisations would benefit likewise while upholding their mandated neutrality.

Diversity of Linux Distributions for EU OS

Douglas DeMaio (working for SUSE doing openSUSE community management) argues in his blog post from March 2025: Freedom Does Not Come From One Vendor. The motto of the European Union is ‘United in Diversity’. Diversity and decentralisation make systems more robust. However, when I see the small scale of on-going pilots, I find that, as of December 2025, it is better to unify projects and choose one single Linux distribution to start with and progress quickly. EU OS proposes to achieve immutability with bootable containers (bootc). This is a cross-distribution technology under the umbrella of the Cloud Native Computing Foundation that makes switching Linux distributions later easier. Other Linux distributions could meanwhile implement bootc, FreeIPA, and Foreman support, and set up/grow their multi-stakeholder non-profit organisations, possibly with support of public funds they apply for.

The extent to which more Linux distributions in the public sector indeed provide more security requires an in-depth study. For example, consider the xz backdoor from 2024 (CVE-2024-3094).

| Vendor | Status |
| --- | --- |
| Red Hat/Fedora/AlmaLinux | Fedora Rawhide and 40 beta affected; RHEL and AlmaLinux unaffected |
| SUSE/openSUSE | openSUSE Tumbleweed and MicroOS affected; SUSE Linux Enterprise unaffected |
| Debian | Debian testing and unstable affected |
| Kali Linux | affected |
| ArchLinux | unaffected |
| NixOS | affected and unaffected; slow to roll out updates |

Early adopters would have caught the vulnerability independently of the Linux distribution (except ArchLinux 👏). Larger distributions can possibly afford more testing. Older distributions with older build systems are more likely to offer tarball support (essential for the xz backdoor), as those build systems predate git-based workflows. To avert such supply chain attacks, implementing supply-chain hardening (e.g. SLSA Level 3) consistently is certainly important, and diversification of distributions or supply chains makes that harder at first.

Comparison of Linux Distributions

In the comparison here, I focus on Debian/Ubuntu, Fedora/RHEL/AlmaLinux and openSUSE/SUSE, because they are beyond doubt battle-tested, with many users in corporate environments already. They are also commonly supported by third parties. Note that I don’t list criteria for which all distributions perform equally.

| Criterion | Debian/Ubuntu | Fedora/RHEL | AlmaLinux | openSUSE/SUSE |
| --- | --- | --- | --- | --- |
| bootc | 🟨 not yet | ✅ yes | ✅ yes | 🟨 not yet (but Kalpa unstable and Aeon RC with snapshots) |
| Flatpak app support | ✅ yes | ✅ yes | ✅ yes | ✅ yes |
| Flatpak apps from own sources | ❌ no | ✅ yes | 🟨 not yet, but adaptable from Fedora | ❌ no |
| FreeIPA server for user management | ✅ yes | ✅ yes | ✅ yes | ❌ no⁵ |
| Proxmox server for VMs | ✅ yes | ❌ no | ❌ no | ❌ no |
| Foreman server for laptop management | ✅ yes | ✅ yes | ✅ yes | ❌ no⁶ |
| Non-profit foundation | ✅ yes (US 🇺🇸 and France 🇫🇷) | ❌ no | ✅ yes (US 🇺🇸) | ❌ no |
| 3rd-party download mirrors in the EU | ca. 150⁷ | ca. 100⁸ | ca. 200⁹ | ca. 50¹⁰ |
| 3rd-party download mirrors worldwide | ca. 350⁷ | ca. 325⁸ | ca. 350⁹ | ca. 125¹⁰ |
| GitHub topics per distribution name | ca. 17150 (6344+10803) | ca. 2500 (1943+478) | ca. 150 | ca. 550 (362+172) |
| World-wide adopted (based on mirrors) | ✅ yes | ✅ yes | ✅ yes | 🟨 not as much |
| Annual revenue of backing company | ca. 300m$ | ca. 4500m$ | only donations | ca. 700m$ |
| Employees world-wide of backing company | ca. 1k | ca. 20k¹¹ | < 500 (including CloudLinux) | ca. 2.5k¹² |
| Employees in Europe of backing company | ≤ 1k | ca. 4.5k | < 500 (including CloudLinux) | ≤ 2.5k |
| SAP-supported¹³ | ❌ no 🙄 | ✅ yes | 🟨 RHEL-compatible | ✅ yes |

I find it extremely difficult to find reliable public numbers on employees, revenues and donations. I list here what I was able to find on the Internet, because I think it helps to quantify the popularity of the enterprise Linux distributions in corporate settings. Numbers for Debian are not very expressive due to the many companies other than Ubuntu involved. Let me know if you find better numbers.

Other than company figures, the number of search queries (Google Trends) also gives an impression of the popularity of Linux distributions. Below is the graph for the community Linux distributions as of December 2025.


Google Trends for Debian, Fedora and openSUSE worldwide 2025 (Source)

Conclusions as of December 2025

Obviously, it is challenging to propose comprehensive criteria and relevant metrics to compare Linux distributions for corporate environments for their suitability as the base distribution for a project like EU OS. This blog post does not replace a more thorough study. However, it offers some interesting insights to inform possible next steps.

  1. Debian is a multi-stakeholder non-profit organisation with legal entities in several jurisdictions. Unfortunately, its bootc support is only at an early stage and it lacks support from some third-party software vendors such as SAP. For corporate environments, Debian does not offer alternatives to FreeIPA and Foreman, which work best with Fedora/RHEL/AlmaLinux, but also support Debian.
  2. Fedora is in this comparison the 2nd largest community in terms of mirrors and GitHub repositories. Fedora has no legal entity independent from its main sponsor Red Hat. However, AlmaLinux is a multi-stakeholder non-profit organisation, albeit very US-centered. With RHEL front-running enterprise Linux deployments for several years, most use cases are covered, including building Flatpak apps from Fedora sources. Fedora downstream distributions with bootc (ublue, Bazzite, Kinoite) already run on tens of thousands of systems, including in the EU public sector.
  3. openSUSE is most successful in German-speaking countries and the US (possibly driven by SAP). Internationally, it is significantly less popular. openSUSE has no legal entity independent from its main sponsor SUSE, which is registered in Luxembourg and headquartered in Germany. For corporate environments, openSUSE does not offer alternatives to FreeIPA and Foreman, which support openSUSE only as a client. While Uyuni⁶ offers infrastructure/configuration management, it remains unclear whether it can replace Foreman for managing fleets of laptops. openSUSE’s bootc support is also only at an early stage.

No Linux distribution fulfills all the criteria. Independently of the distribution, corporate environments would rely on FreeIPA, Foreman, Keycloak, Podman, systemd, etc., which Red Hat sponsors. Debian is promising, but its work to support bootc is not receiving much attention. AlmaLinux is promising, but would still need to prove its independence from politics, as it is a fairly new project (1st release in 2021), and doubts remain about its capacity to support Fedora (as Red Hat does) in the long run. Microsoft blogged this week about their increasing contributions to Fedora. Maybe European and non-European companies can step up likewise in 2026, so that Fedora can become a multi-stakeholder non-profit organisation similar to AlmaLinux today.

Community Talk at Fosdem

My 30 min talk on this topic has been accepted at the community conference FOSDEM 2026 in Brussels, Belgium! Please consider joining if you are at FOSDEM, and let me know your thoughts and questions. The organisers have not yet allocated timeslots, but I believe it will take place on Saturday, 31st January 2026.

Talk title
EU OS: learnings from 1 year advocating for a common Desktop Linux for the public sector
Track title
Building Europe’s Public Digital Infrastructure

All the best,
Robert

  1. I know that the public sector relies on vendors that ship embedded Linux on WiFi routers, traffic lights, fleets of cars, etc. If you have identified a relevant use case that is missing here, please feel free to let me know and I will consider adding it here.

  2. During testing for EU OS, I learnt that Red Hat upgraded the instruction set architecture (ISA) baseline to the x86-64-v3 microarchitecture level in RHEL 10. Consequently, my old ThinkPad X220 is no longer supported. While this may not be an issue for resourceful public sector organisations with recent laptops, it is an issue for less resourceful organisations, including many schools world-wide, but also in the EU.

  3. I counted Source RPM packages with rpm -qa --qf '%{SOURCERPM}\n' | sort -u | wc -l in each given environment.

  4. The public sector also wants to avoid vendor lock-in, which is just one specific form of ‘interference with the execution of its public mandate’.

  5. FreeIPA does not run on openSUSE, but supports openSUSE clients. Alternative software for openSUSE may be available. Community members suggest Kanidm, but it lacks features and development seems stalled.

  6. Foreman runs only on Debian/Fedora/RHEL/AlmaLinux, but supports openSUSE clients. SUSE offers Rancher, which is limited to Kubernetes clusters. Uyuni and its enterprise-supported downstream SUSE Multi-Linux Manager offer configuration and infrastructure management based on Salt.

  7. https://www.debian.org/mirror/list

  8. https://mirrormanager.fedoraproject.org/mirrors?page_size=500&page_number=1

  9. https://mirrors.almalinux.org

  10. https://mirrors.opensuse.org

  11. https://www.redhat.com/en/about/company-details

  12. https://fortune.com/2024/07/26/suse-software-ceo-championing-open-source-drives-innovation-purpose/

  13. https://pages.community.sap.com/topics/linux/supported-platforms

home infra weekly recap: pre holidays 2025

Posted by Kevin Fenzi on 2025-12-20 23:22:31 UTC
Scrye into the crystal ball

Time for another weekly recap, but since I am on vacation for the holidays already, this one is things I've done at home. 🙂

There are often things I just don't have time or energy for in my home infrastructure, so I add those things to a list and try to do them over the holidays. Of course I often don't get to them all, or even most of them, but it's a nice list to look at for things to do.

This week:

== December of docs progress

I've kept up on my 'december of docs' plan. I've done a pull request and/or processed some docs tickets every day. When we moved the infra docs to pagure a number of years ago, we opened tickets on all our standard operating procedures to review and update them, so I have been slowly working through them. So far about 30ish tickets closed and 20ish PRs (some tickets I just closed because the SOP was moot or didn't need any changes).

== iptables to nftables

I switched my home networks to use nftables directly. I just never got around to it before and kept using iptables. The conversion was pretty simple with iptables-restore-translate / ip6tables-restore-translate. I also went through all my rules and dropped a bunch of old ones I no longer needed. You might wonder why I don't just move to firewalld? I could have, but my home network is a bit complicated and firewalld just seemed like overhead/complexity. I got everything working, then the next day I happened to reboot my main home server and... my wireguard tunnels wouldn't work. I couldn't see why; the rules all looked fine. Finally I noticed that firewalld was starting and stepping all over my rules. It must have been enabled on install; iptables used to start before it, so firewalld just failed, but nftables loads later, so firewalld came up and messed things up.
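For anyone else making the jump, the conversion boils down to something like this (v6 is the same dance with ip6tables-restore-translate), plus making sure firewalld is really out of the picture:

iptables-save > ruleset.v4
iptables-restore-translate -f ruleset.v4 > ruleset.nft
nft -f ruleset.nft
systemctl disable --now firewalld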

== Framework firmware day / fixing my media pc

I have 4(!) framework motherboards. They were all behind on firmware, and my media pc (11th gen intel) had stopped working for some reason.

The 11th gen intel board was the original one I got with my first framework laptop several years ago now. When I upgraded to the 12th gen intel one, I moved this motherboard to a coolermaster external case and repurposed it for my media pc. Things worked for a while, but then I couldn't get it to boot, and because it was in the external case it was hard to tell why. So, I pulled it out and stuck it into a laptop case and it booted fine, but I realized the firmware was so old it didn't handle the "I am not in a laptop" mode very well at all. This one needed me to enable lvfs-testing and update the firmware, then download an image and use a USB stick to upgrade it again. The first update was needed in order to add support for upgrading firmware from USB.
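For reference, the lvfs-testing dance with fwupd is roughly:

fwupdmgr enable-remote lvfs-testing
fwupdmgr refresh
fwupdmgr update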

Next up was the 12th gen intel one I had gotten to replace the 11th gen. This one I also moved to a coolermaster case after upgrading to the ryzen board, and this one also wasn't booting. I swapped it into the laptop chassis, upgraded it to the latest firmware, and then left it in that chassis/laptop.

The first ryzen one, which I got to replace the 12th gen intel one, I decided to swap over to being the media center pc as it's faster/nicer. I got it updated in the laptop and swapped it into the coolermaster case, but then... it wouldn't boot. Red and blue flashing lights and no booting. Poking around on the net I found that you can get past this by pressing and holding the case open switch 10 times in a row. Indeed this worked to get it booting up. It's still a bit annoying though, because the ryzen board has a slightly different layout around the power switch and the coolermaster case doesn't work quite right on those boards like it does on the intel ones. I did manage to get it booting, but the power switch could use a bit of rework to avoid this problem. ;(

The last board is in my 'hot spare' laptop and was already up to date on firmware. Thanks lvfs!

== Some fun with 'health connect'

I played around with health connect on my GrapheneOS phone. It notified me that I could connect devices to it. I am not sure if this support is new or I just never noticed it before.

My understanding of the way this works is that you can approve specific sensors to write data and applications that have permission to read that data. Everything stays on your phone unless you approve some application that syncs it off elsewhere.

In my case I enabled the 'number of steps' sensor (which currently is the only thing I have to write data into health connect) and then enabled the android home assistant app to read it. So, I now have a nice home assistant sensor that lets me see how many steps I walked each day. Kinda nice to have the historical data in home assistant.

I'm looking into getting a CGM (continuous glucose monitor) sensor, which I could also share with home assistant to keep historical data.

I'm a bit surprised that this setup is so reasonable.

comments? additions? reactions?

As always, comment on mastodon: https://fosstodon.org/@nirik/115754591222815006

🛡️ PHP version 8.1.34, 8.2.30, 8.3.29, 8.4.16, and 8.5.1

Posted by Remi Collet on 2025-12-19 09:39:00 UTC

RPMs of PHP version 8.5.1 are available in the remi-modular repository for Fedora ≥ 41 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).

RPMs of PHP version 8.4.16 are available in the remi-modular repository for Fedora ≥ 41 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).

RPMs of PHP version 8.3.29 are available in the remi-modular repository for Fedora ≥ 41 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).

RPMs of PHP version 8.2.30 are available in the remi-modular repository for Fedora ≥ 41 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).

RPMs of PHP version 8.1.34 are available in the remi-modular repository for Fedora ≥ 41 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).

ℹ️ These versions are also available as Software Collections in the remi-safe repository.

ℹ️ The packages are available for x86_64 and aarch64.

🛡️ These versions fix 4 security bugs (CVE-2025-14177, CVE-2025-14178, CVE-2025-14180), so the update is strongly recommended.

Version announcements:

ℹ️ Installation: Use the Configuration Wizard and choose your version and installation mode.

Replacement of default PHP by version 8.5 installation (simplest):

On Enterprise Linux (dnf 4)

dnf module switch-to php:remi-8.5/common

On Fedora (dnf 5)

dnf module reset php
dnf module enable php:remi-8.5
dnf update

Parallel installation of version 8.5 as Software Collection

yum install php85

Replacement of default PHP by version 8.4 installation (simplest):

On Enterprise Linux (dnf 4)

dnf module switch-to php:remi-8.4/common

On Fedora (dnf 5)

dnf module reset php
dnf module enable php:remi-8.4
dnf update

Parallel installation of version 8.4 as Software Collection

yum install php84

Replacement of default PHP by version 8.3 installation (simplest):

On Enterprise Linux (dnf 4)

dnf module switch-to php:remi-8.3/common

On Fedora (dnf 5)

dnf module reset php
dnf module enable php:remi-8.3
dnf update

Parallel installation of version 8.3 as Software Collection

yum install php83

And soon in the official updates:

⚠️ To be noticed:

  • EL-10 RPMs are built using RHEL-10.1
  • EL-9 RPMs are built using RHEL-9.7
  • EL-8 RPMs are built using RHEL-8.10
  • intl extension now uses libicu74 (version 74.2)
  • mbstring extension (EL builds) now uses oniguruma5php (version 6.9.10, instead of the outdated system library)
  • oci8 extension now uses the RPM of Oracle Instant Client version 23.9 on x86_64 and aarch64
  • A lot of extensions are also available; see the PHP extensions RPM status (from PECL and other sources) page

ℹ️ Information:

Base packages (php)

Software Collections (php83 / php84 / php85)

Community Update – Week 51 2025

Posted by Fedora Community Blog on 2025-12-19 10:00:00 UTC

This is a report created by the CLE Team, a team of community members working in various Fedora groups, for example Infrastructure, Release Engineering, and Quality. This team also moves forward some initiatives inside the Fedora Project.

Week: 15 – 19 December 2025

Forgejo

This team is working on deployment of forge.fedoraproject.org.
Ticket tracker

Fedora Infrastructure

This team is taking care of day to day business regarding Fedora Infrastructure.
It’s responsible for services running in Fedora infrastructure.
Ticket tracker

  • DC Move (rdu-cc to rdu3/rdu3-iso) outage completed.
    • Some hosts are still having issues; see the ticket.
  • Zabbix moving forward as our main monitoring tool.
  • Updates+uptimes tool now removes hosts as they are removed from Ansible inventory.
  • Dealt with HW warranties that do/don't need renewing for next year.
  • Docs are now migrated to https://forge.fedoraproject.org/infra/docs

CentOS Infra including CentOS CI

This team is taking care of day to day business regarding CentOS Infrastructure and CentOS Stream Infrastructure.
It’s responsible for services running in CentOS Infrastructure and CentOS Stream.
CentOS ticket tracker
CentOS Stream ticket tracker

Release Engineering

This team is taking care of day to day business regarding Fedora releases.
It’s responsible for releases, retirement process of packages and package builds.
Ticket tracker

  • Fedora 41 is now END OF LIFE.

QE

This team is working on day to day business regarding Fedora CI and testing.

If you have any questions or feedback, please respond to this report or contact us in the #admin:fedoraproject.org channel on Matrix.

The post Community Update – Week 51 2025 appeared first on Fedora Community Blog.

🎲 PHP on the road to the 8.5.0 release

Posted by Remi Collet on 2025-09-26 05:04:00 UTC

Version 8.5.0 Release Candidate 1 is released. It now enters the stabilisation phase for the developers, and the test phase for the users.

RPMs are available in the php:remi-8.5 stream for Fedora ≥ 41 and Enterprise Linux ≥ 8 (RHEL, CentOS, Alma, Rocky...) and as Software Collections in the remi-safe repository (or remi for Fedora).

 

⚠️ The repository provides development versions which are not suitable for production usage.

Also read: PHP 8.5 as Software Collection

ℹ️ Installation: follow the Wizard instructions.

Replacement of default PHP by version 8.5 installation, module way (simplest way):

Using dnf 4 on Enterprise Linux

dnf module switch-to php:remi-8.5/common

Using dnf 5 on Fedora

dnf module reset php
dnf module enable php:remi-8.5
dnf update

Parallel installation of version 8.5 as Software Collection (recommended for tests):

dnf install php85

⚠️ To be noticed:

  • EL-10 RPMs are built using RHEL-10.0
  • EL-9 RPMs are built using RHEL-9.6
  • EL-8 RPMs are built using RHEL-8.10
  • A lot of extensions are also available; see the PHP extension RPM status page and the PHP version 8.5 tracker
  • Follow the comments on this page for updates until the final version
  • Proposed as a Fedora 44 change

ℹ️ Information, read:

Base packages (php)

Software Collections (php85)

⚙️ PHP version 8.3.28 and 8.4.15

Posted by Remi Collet on 2025-11-20 14:21:00 UTC

RPMs of PHP version 8.4.15 are available in the remi-modular repository for Fedora ≥ 41 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).

RPMs of PHP version 8.3.28 are available in the remi-modular repository for Fedora ≥ 41 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).

ℹ️ The packages are available for x86_64 and aarch64.

ℹ️ There is no security fix this month, so no update for versions 8.1.33 and 8.2.29.

These versions are also available as Software Collections in the remi-safe repository.

⚠️ These versions introduce a regression in MySQL connection when using an IPv6 address enclosed in square brackets. See the report #20528. A fix is under review and will be released soon.

Version announcements:

ℹ️ Installation: use the Configuration Wizard and choose your version and installation mode.

Replacement of default PHP by version 8.4 installation (simplest):

dnf module switch-to php:remi-8.4/common

Parallel installation of version 8.4 as Software Collection

yum install php84

Replacement of default PHP by version 8.3 installation (simplest):

dnf module switch-to php:remi-8.3/common

Parallel installation of version 8.3 as Software Collection

yum install php83

And soon in the official updates:

⚠️ To be noticed:

  • EL-10 RPMs are built using RHEL-10.0 (next build will use 10.1)
  • EL-9 RPMs are built using RHEL-9.6 (next build will use 9.7)
  • EL-8 RPMs are built using RHEL-8.10
  • intl extension now uses libicu74 (version 74.2)
  • mbstring extension (EL builds) now uses oniguruma5php (version 6.9.10, instead of the outdated system library)
  • oci8 extension now uses the RPM of Oracle Instant Client version 23.8 on x86_64 and aarch64
  • a lot of extensions are also available; see the PHP extensions RPM status (from PECL and other sources) page

ℹ️ Information:

Base packages (php)

Software Collections (php83 / php84)

💎 PHP version 8.5 is released!

Posted by Remi Collet on 2025-11-21 07:52:00 UTC

RC5 was GOLD, so version 8.5.0 GA was just released, on the planned date.

A great thanks to Volker Dusch, Daniel Scherzer and Pierrick Charron, our Release Managers, to all developers who have contributed to this new, long-awaited version of PHP, and to all testers of the RC versions who have allowed us to deliver a good-quality version.

RPMs are available in the php:remi-8.5 module for Fedora and Enterprise Linux ≥ 8 and as Software Collection in the remi-safe repository.

Read the PHP 8.5.0 Release Announcement and its Addendum for new features and detailed description.

For the record, this is the result of 6 months of work for me to provide these packages: starting in July for Software Collections of the alpha versions, in September for module streams of the RC versions, plus a lot of work on extensions to provide a mostly full PHP 8.5 stack.

ℹ️ Installation: read the Repository configuration and choose installation mode, or follow the Configuration Wizard instructions.

Replacement of default PHP by version 8.5 installation (simplest):

Fedora (dnf 5):

dnf install https://rpms.remirepo.net/enterprise/remi-release-$(rpm -E %fedora).rpm
dnf module reset php
dnf module enable php:remi-8.5
dnf install php

Enterprise Linux (dnf 4):

dnf install https://rpms.remirepo.net/enterprise/remi-release-$(rpm -E %rhel).rpm
dnf module switch-to php:remi-8.5/common

Parallel installation of version 8.5 as Software Collection (recommended for tests):

yum install php85

⚠️ To be noticed:

  • EL-10 RPMs are built using RHEL-10.0
  • EL-9 RPMs are built using RHEL-9.6
  • EL-8 RPMs are built using RHEL-8.10
  • This version will also be the default version in Fedora 44
  • Many extensions are already available; see the PECL extension RPM status page.

ℹ️ Information, read:

Base packages (php)

Software Collections (php85)

Blank lock screen in Hyprland

Posted by Major Hayden on 2025-12-18 00:00:00 UTC

I moved over to Hyprland as my primary desktop environment several months ago after wrestling with some other Wayland desktop environments. It does plenty of things well and finally allowed me to do screen sharing during meetings without much hassle.

A couple of small utilities, hypridle and hyprlock, handle idle time and locking the screen when I step away from my desk. However, I kept coming back after lunch and finding that both of my displays were often unresponsive, with a blank screen after unlocking.

Diagnosing the issue

I ran into this issue when I’d come back from lunch, hit the spacebar, and both monitors remained in power save mode. The power lights on both monitors were blinking, indicating that they were still in a low-power state.

If I turned off each monitor and turned it back on, the displays would come back on about 80% of the time. Power cycling the displays was annoying and it became more annoying when I found that my workspaces had migrated between the monitors. Nothing was in the right place any longer! 😭

I finally got in a situation where one monitor powered up and the other was still off! Time to run some diagnostic commands!

Digging in #

You can list the monitors in hyprland with hyprctl monitors all. Narrow that down by specifically looking for the DPMS (Display Power Management Signaling) status with this command:

$ hyprctl monitors | grep -E "(Monitor|dpms|disabled)"

Monitor DP-1 (ID 0):
	dpmsStatus: 1
	disabled: false
Monitor DP-2 (ID 1):
	dpmsStatus: 1
	disabled: false

In my case, the DPMS status for both monitors was 1, which means both monitors are on. Neither monitor is disabled. However, the monitor connected to DP-1 was still blank!

Even ddcutil said the same thing:

$ ddcutil detect

Display 1 
 I2C bus: /dev/i2c-9
 DRM_connector: card1-DP-1
 EDID synopsis:
 Mfg id: DEL - Dell Inc.
 Model: DELL U2723QE
 Product code: 17016 (0x4278)
 Serial number: 85P0F34
 Binary serial number: 1128482124 (0x4343454c)
 Manufacture year: 2024, Week: 38
 VCP version: 2.1

Display 2
 I2C bus: /dev/i2c-10
 DRM_connector: card1-DP-2
 EDID synopsis:
 Mfg id: DEL - Dell Inc.
 Model: DELL U2723QE
 Product code: 17016 (0x4278)
 Serial number: 55P0F34
 Binary serial number: 1128481356 (0x4343424c)
 Manufacture year: 2024, Week: 38
 VCP version: 2.1

Then I wondered if I could just cycle the DPMS and bring them both back:

$ hyprctl dispatch dpms off; sleep 1; hyprctl dispatch dpms on

Both monitors turned on and displayed my desktop! But why?

Could it be amdgpu?

Checking the system journal with journalctl revealed an interesting message:

kernel: amdgpu 0000:03:00.0: [drm] REG_WAIT timeout 1us * 100 tries - dcn32_program_compbuf_size line:139

This suggests there’s some kind of DRM timeout when the AMD GPU driver is trying to do something.

The Arch Linux wiki suggests that disabling AMD’s low power state, GFXOFF, might help with similar issues. You can set a kernel parameter such as amdgpu.ppfeaturemask=0xfffd7fff to disable it. I’ve had bad luck in the past with these amdgpu parameters, so I wanted a workaround for now until I could test it more.
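If I do get around to testing it, on Fedora-style systems the parameter can be added to the kernel command line with grubby (assuming grubby is available; otherwise edit GRUB_CMDLINE_LINUX in /etc/default/grub and regenerate the config):

$ sudo grubby --update-kernel=ALL --args="amdgpu.ppfeaturemask=0xfffd7fff"

followed by a reboot.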

A (sorta) elegant workaround

Hyprland has a key binding system that allows you to execute certain key combinations even when the screen is locked. I was already using some of these key bindings so that I could adjust my music even with the screen locked¹:

> grep bindl ~/.config/hypr/hyprland.conf

bindl = , XF86AudioNext, exec, playerctl next
bindl = , XF86AudioPause, exec, playerctl play-pause
bindl = , XF86AudioPlay, exec, playerctl play-pause
bindl = , XF86AudioPrev, exec, playerctl previous

The normal key bindings in hyprland use bind, but bindl works even when the screen is locked. Here’s what I added:

> grep bindl ~/.config/hypr/hyprland.conf

bindl = , XF86AudioNext, exec, playerctl next
bindl = , XF86AudioPause, exec, playerctl play-pause
bindl = , XF86AudioPlay, exec, playerctl play-pause
bindl = , XF86AudioPrev, exec, playerctl previous
bindl = $mainMod SHIFT, D, exec, hyprctl dispatch dpms off && sleep 1 && hyprctl dispatch dpms on

Now I can hold down Mod + Shift + D when I return to my desk after lunch and both monitors come back on instantly!

I’ll let you know if I get around to messing with amdgpu.ppfeaturemask to see if that resolves the underlying issue. 🤓


  1. This was a family request after I went for a run and left some slightly-too-aggressive music playing by accident. 😅

Invalid bug reports are sometimes documentation bugs

Posted by Ben Cotton on 2025-12-17 21:24:41 UTC

Most open source maintainers know the pain of dealing with invalid bugs. These are bugs that are already listed as known issues, that are intended behaviors, that aren’t reproducible, that involve unsupported versions, or any number of other explanations. They waste time on the maintainer side in triage, investigation, and response. And they waste submitter time, too. Everyone loses. While it’s frustrating to deal with invalid bug reports, almost no one files them on purpose.

Researchers (including Muhammad Laiq et al) have investigated invalid bug reports. One of the recommendations is to improve system documentation. This makes perfect sense. When there’s a difference between the expected and actual behavior of software, that’s a software bug. When there’s a difference between the user-expected behavior and the developer-expected behavior, that’s a documentation bug.

There will always be some people who don’t read the documentation. But those who do will file better bugs if your documentation is accurate, easy to find, and understandable. As you notice patterns in invalid bug reports, look for places to improve your documentation. Just like the dirt trails through a grassy area can tell you where the sidewalks should have been, the invalid bugs can show you where your documentation needs to get better. (Note that this applies to process documentation as well as software documentation.)

As with all interactions in your project, a little bit of grace goes a long way. It’s frustrating to deal with invalid bug reports, but keep in mind that the person who filed it is trying to help make your project better. And often their bug report represents a real bug — just not the one they think.

This post’s featured photo by Neringa Hünnefeld on Unsplash.

The post Invalid bug reports are sometimes documentation bugs appeared first on Duck Alignment Academy.

21 years of blogging

Posted by Jonathan McDowell on 2025-12-17 17:06:00 UTC

21 years ago today I wrote my first blog post. Did I think I’d still be writing all this time later? I’ve no idea, to be honest. I’ve always had the impression my readership is small, mostly people who know me in some manner, and I post to let them know what I’m up to in more detail than snippets of IRC conversation can capture. Or I write to make notes for myself (I frequently refer back to things I’ve documented here). I write less about my personal life than I used to, but I still occasionally feel the need to mark some event.

From a software PoV I started out with Blosxom, migrated to MovableType in 2008, ditched that, when the Open Source variant disappeared, for Jekyll in 2015 (when I also started putting it all in git). And have stuck there since. The static generator format works well for me, and I outsource comments to Disqus - I don’t get a lot, I can’t be bothered with the effort of trying to protect against spammers, and folk who don’t want to use it can easily email or poke me on the Fediverse. If I ever feel the need to move from Jekyll I’ll probably take a look at Hugo, but thankfully at present there’s no push factor to switch.

It’s interesting to look at my writing patterns over time. I obviously started keen, and peaked with 81 posts in 2006 (I’ve no idea how on earth that happened), while 2013 had only 2. Generally I write less when I’m busy, or stressed, or unhappy, so it’s kinda interesting to see how that lines up with various life events.

Blog posts over time

During that period I’ve lived in 10 different places (well, 10 different houses/flats, I think it’s only 6 different towns/cities), on 2 different continents, working at 6 different employers, as well as a period where I was doing my Masters in law. I’ve travelled around the world, made new friends, lost contact with folk, started a family. In short, I have lived, even if lots of it hasn’t made it to these pages.

At this point, do I see myself stopping? No, not really. I plan to still be around, like Flameeyes, to the end. Even if my posts are unlikely to hit the frequency from back when I started out.

Using OpenSearch data streams in syslog-ng

Posted by Peter Czanik on 2025-12-17 13:01:57 UTC

Recently, one of our power users contributed OpenSearch data streams support to syslog-ng, which reminded me to also do some minimal testing on the latest OpenSearch release with syslog-ng. TL;DR: both worked just fine.

Read more at https://www.syslog-ng.com/community/b/blog/posts/using-opensearch-data-streams-in-syslog-ng


Building Bridges: Microsoft’s Participation in the Fedora Linux Community

Posted by Brian (bex) Exelbierd on 2025-12-17 09:30:00 UTC

While I was at Flock 2025, I had the opportunity to share what Microsoft has been contributing to Fedora over the last year. I finally got a blog post written for the Microsoft Tech Community Linux and Open Source Blog.

Read the full blog over at the Microsoft Tech Community where this was originally posted.

F43 FESCo Elections: Interview with Máirín Duffy (duffy/mizmo)

Posted by Fedora Community Blog on 2025-12-17 08:06:00 UTC

This is a part of the Fedora Linux 43 FESCo Elections Interviews series. Voting is open to all Fedora contributors. The voting period starts today, Wednesday 17th December and closes promptly at 23:59:59 UTC on Wednesday, 7th January 2026.

Interview with Máirín Duffy

  • FAS ID: duffy
  • Matrix Rooms: My long-term home has been Fedora Design, but I also hang out in Podman, Fedora Marketing, and Fedora AI/ML.

Questions

Why do you want to be a member of FESCo and how do you expect to help steer the direction of Fedora?

I have used Fedora as my daily driver since 2003 and have actively contributed to Fedora since 2004. (For example, I designed the current Fedora logo and website.) I am very passionate about the open source approach to technology. I first started using Linux as a high school student (my first Linux was Red Hat 5.1) and being able to use free software tools like GIMP when I couldn’t afford Photoshop made an outsized impact on my life. (I explain my background in Linux and open source in-depth in this interview with Malcolm Gladwell: https://youtu.be/SkXgG6ksKTA?si=RMXNzyzH9Tr6AuwN )

Technology has an increasingly large impact on society. We should have agency over the technology that impacts our lives. Open source is how we provide that agency. We’re now in a time period with a new disruptive technology (generative AI) that – regardless of whether you think it is real or not – is having a real impact on computing. Fedora and other open source projects need to be able to provide the benefits of this new technology the open source way, using open source software. Small, local models that are easy for our users to deploy on their own systems using open source tooling will give them the ability to benefit from AI’s strengths without having to sacrifice the privacy of their data.

There is a lot of hype around AI, and a lot of very legitimate concerns around its usage, including the intellectual property concerns of the pre-training data, not having enough visibility into what data is part of pre-training data sets, the working conditions under which some of the data is labeled, the environmental impact of the training process, and the ethics of its usage. Open source projects in particular are getting pummeled by scraping bots hungry to feed coding models. There are folks in the tech industry who share these legitimate concerns and prefer to avoid AI, hoping that the bubble will just pop and it will go away. This strategy carries significant risks, however, and we need a more proactive approach. The technology has legitimate uses, and the hype is masking them. When the hype dies down and the real value of this new technology is more visible, it will be important for the type of community members we have in Fedora – with their commitment to open source principles and genuinely helping people – to have had a seat at the table to shape this technology.

(You can see a short video where I talk a bit more in depth about the pragmatic, privacy- and open source-focused approach I take to AI here: https://youtu.be/oZ7EflyAPUw?si=HSbNhq_3NelXoX2J)

In the past I have been quite skeptical about generative AI and worried about its implications for open source. (I continue to be skeptical and annoyed by the hype surrounding it.) I’ve spent the past couple of years looking at open source-licensed models and building open source generative AI tooling – getting hands-on, deep experience to understand it – and as a result I have seen firsthand the parts of this technology that have real value. I want FESCo to be able to make informed decisions when AI issues come up.

My background is in user experience engineering, and I am so excited about what this technology will mean for improving usability and accessibility for users of open source software. For example, we never have enough funding or interest to solve serious a11y problems; now we could generate text summaries of images & describe the screen out loud with high-quality audio from text-to-voice models for low vision users! I want open source to benefit from these and even more possibilities to reach and help more people so they can enjoy software freedom as well.

I have served in multiple governance roles in Fedora, including time on the Fedora Council, the Mindshare Committee, lead of various Fedora Outreachy rounds (I have mentored dozens of interns in Fedora), and founder / lead of the Design team over many years. More importantly, I have deep Linux OS expertise, deep expertise in user experience, and a depth of experience in AI technology to offer to FESCo. I believe my background and skills will enable FESCo to make responsible decisions in the best interest of open source and user agency, particularly around the usage of AI in Fedora and in the Fedora community. We will absolutely need to make decisions as a governing group in the AI space, and they should be informed by that specific expertise.

How do you currently contribute to Fedora? How does that contribution benefit the community?

I founded and ran the Fedora Design Team for 17 years. It was the first community-led design team at a major Linux distribution, and as a team we’ve often been asked by other distros and open source projects for help (so we expanded to call ourselves the “Community Design Team.”) Over the years I’ve designed the user experience and user interfaces for many components in Fedora including our background wallpapers, anaconda, virt-manager, the GNOME font-chooser, and a bunch of other stuff. I moved on from the Fedora Design role to lead design for Podman Desktop and to work more with the Podman team (who are also part of the Fedora community) for a couple of years, and I also led the InstructLab open source LLM fine-tuning project and the corresponding Linux product from Red Hat (RHEL AI.) For the past year or so I have returned to working on core Linux on the Red Hat Enterprise Linux Lightspeed team, and my focus is on building AI enhancements to the Linux user experience. My team is part of the Fedora AI/ML SIG and we’re working on packaging user-facing components and tooling for AI/ML for Fedora, so folks who would like to work with LLMs can do so and the libraries and tools they need will be available. This includes building and packaging the linux-mcp-server and packaging goose, a popular open source AI agent, and all of their dependencies.

My career has focused on benefiting Fedora users by improving the user experience of using open source technology, and being collaborative and inclusive while doing so.

How do you handle disagreements when working as part of a team?

Data is the best way to handle disagreements when working as part of a team. Opinions are wonderful and everyone has them, but decisions are best made with real data. Qualitative data is just as important as quantitative data, by the way. It can be gathered by talking directly to the people most impacted by the decision (not necessarily those who are loudest about it), learning their perspective, and then informing the decision at hand with that perspective.

A methodology I like to follow in the face of disagreements is “disagree and let’s see.” (This was coined by Molly Graham, a leadership expert.) A decision has to be made, so let’s treat it like an experiment. I’ll agree to run an experiment, track the results (“let’s see”), and advocate for a pivot (and quickly) if it turns out that the results point another way. Taking responsibility for tracking the decision and its outcomes and bringing it back to the table, over time, helps build trust in teams like FESCo: folks who disagree know that if the decision ended up being the wrong one, it can and will be revisited based on actual outcomes.

Another framework I like to use in disagreements is called 10-10-10, created by Suzy Welch. It involves thinking through: how will this decision matter in 10 minutes? How about 10 months? How about 10 years? This frame of thought can defuse some of the charge of a disagreement when all of the people involved realize the short- or long-term nature of the issue together at the same time.

Acknowledging legitimate concerns and facing them head on, instead of questioning or sidelining others’ lived experience and sincerely-held beliefs and perspectives, is also incredibly important. Listening, building bridges between community members with different perspectives, and aligning them to the overall project’s goals – which we all have in common as we work in this community – really helps folks look above the fray and be a little more open-minded.

What else should community members know about you or your positions?

I understand there is a campaign against my running for FESCo because a colleague and I wrote an article that walked through real, undoctored debugging sessions with a locally-hosted, open source model in order to demonstrate the linux-mcp-server project.

I want to make it clear that I believe any AI enhancements that are considered for Fedora need a simple opt-in button, and no AI-based solutions should be the default. (I’ve spoken about this before, recently on the Destination Linux Podcast: https://youtu.be/EJZkJi8qF-M?t=3020) Users of Fedora and other open source operating systems choose them in part because they want agency over the technology they use and ownership and control over their data. The privacy-focused aspects of Fedora have spanned the project’s existence, and that must be respected. We cannot ignore AI completely, but we must engage with it thoughtfully and in a way that is respectful of our contributors and user base.

To that end, should you elect to grant me the privilege of a seat on FESCo this term:

  • I intend to vote in opposition to proposals that involve bundling proprietary model weights in Fedora.
  • I intend to vote in opposition to proposals that involve sending Fedora user data to third party AI services.
  • I intend to vote in opposition to proposals to turn AI-powered features on by default in any Fedora release.
  • I intend to vote in favor of proposals to enact AI scraper mitigation strategies and to partner with other open source projects to fight this nuisance.

My core software engineering background is in user experience and usability, and I believe in the potential of small, local models to improve our experience with software without compromising our privacy and agency. I welcome ongoing community input on these principles and other boundaries you’d like to see around emerging technologies in Fedora.

The post F43 FESCo Elections: Interview with Máirín Duffy (duffy/mizmo) appeared first on Fedora Community Blog.

F43 FESCo Elections: Interview with Timothée Ravier (siosm/travier)

Posted by Fedora Community Blog on 2025-12-17 08:05:00 UTC

This is a part of the Fedora Linux 43 FESCo Elections Interviews series. Voting is open to all Fedora contributors. The voting period starts today, Wednesday 17th December and closes promptly at 23:59:59 UTC on Wednesday, 7th January 2026.

Interview with Timothée Ravier

  • FAS ID: siosm
  • Matrix Rooms: Fedora Atomic Desktops, Fedora CoreOS, Fedora bootc, Fedora KDE, Fedora Kinoite, Fedora Silverblue

Questions

Why do you want to be a member of FESCo and how do you expect to help steer the direction of Fedora?

I want to be a member of FESCo to represent the interests of users, developers and maintainers of what we call Atomic, Bootable Container, Image Based or Immutable variants of Fedora (CoreOS, Atomic Desktops, IoT, bootc, etc.).

I think that what we can build around those variants of Fedora is the best path forward for broader adoption of Fedora and Linux in the general public and not just in developer circles.

I thus want to push for better consideration of the challenges specific to Atomic systems in all parts of Fedora: change process, infrastructure, release engineering, etc.

I also want to act as a bridge with other important communities built around this ecosystem such as Flathub, downstream projects such as Universal Blue, Bazzite, Bluefin, Aurora, and other distributions such as Flatcar Linux, GNOME OS, KDE Linux, openSUSE MicroOS, Aeon or ParticleOS.

How do you currently contribute to Fedora? How does that contribution benefit the community?

I primarily contribute to Fedora as a maintainer for the Fedora Atomic Desktops and Fedora CoreOS. I am also part of the KDE SIG and involved in the Bootable Containers (bootc) initiative.

My contributions are focused on making sure that those systems become the most reliable platform for users, developers and contributors. This includes day-to-day maintenance work, development work such as enabling safe bootloader updates or automatic system updates, and coordination of changes across Fedora (switching to zstd-compressed initrds, as an example).

While my focus is on the Atomic variants of Fedora, I also make sure that the improvements I work on benefit the entire Fedora project as much as possible.

I’ve listed the Fedora Changes I contributed to on my Wiki profile: https://fedoraproject.org/wiki/User:Siosm.

How do you handle disagreements when working as part of a team?

Disagreements are a normal part of the course of a discussion. It’s important to give everyone involved the time to express their positions and share their context. Limiting the scope of a change or splitting it into multiple phases may also help.

Reaching a consensus should always be the preferred route, but sometimes this does not happen organically. Thus we have to be careful not to let disagreements linger on unresolved, and a vote is often needed to reach a final decision. Not everyone may agree with the outcome of the vote, but that’s OK: we respect it and move on.

Most decisions are not set in stone indefinitely, and it’s possible to revisit one if circumstances change. A change denied at one point may be accepted later when improved or clarified.

This is mostly how the current Fedora Change process works, and I think it’s one of the strengths of the Fedora community.

What else should community members know about you or your positions?

I’ve been a long-time Fedora user. I started contributing more around 2018 and joined Red Hat in 2020, where I’ve been working on systems such as Fedora CoreOS and RHEL CoreOS as part of OpenShift. I am also part of other open source communities such as Flathub and KDE, and I am committed to upstream-first, open source, community-decided principles.

The post F43 FESCo Elections: Interview with Timothée Ravier (siosm/travier) appeared first on Fedora Community Blog.

F43 FESCo Elections: Interview with Daniel Mellado (dmellado)

Posted by Fedora Community Blog on 2025-12-17 08:04:00 UTC

This is a part of the Fedora Linux 43 FESCo Elections Interviews series. Voting is open to all Fedora contributors. The voting period starts today, Wednesday 17th December and closes promptly at 23:59:59 UTC on Wednesday, 7th January 2026.

Interview with Daniel Mellado

  • FAS ID: dmellado
  • Matrix Rooms: #ebpf, #fedora-devel, #rust, #fedora-releng, and a lot of #fedora-* 😉

Questions

Why do you want to be a member of FESCo and how do you expect to help steer the direction of Fedora?

I accepted this nomination because I believe FESCo would benefit from fresh perspectives, and I think these new perspectives will also help lower the barriers to entry for Fedora.

Governance bodies stay healthy when they welcome new voices alongside experienced members, and I want to be part of that renewal.

Technologies like eBPF are redefining what’s possible in Linux–observability, security, networking–but they also bring packaging challenges that we haven’t fully solved, such as kernel version dependencies, CO-RE relocations, BTF requirements, and SELinux implications.

On FESCo, I want to help Fedora stay ahead of these challenges rather than merely reacting to them. I want to advocate for tooling and guidelines that will help make complex kernel-dependent software easier to package.

How do you currently contribute to Fedora? How does that contribution benefit the community?

I founded and currently lead the Fedora eBPF Special Interest Group. Our goal is to make eBPF a first-class citizen in Fedora, improving the experience for the developers who are building observability, security, and networking tools and figuring out how to package software that has deep kernel dependencies.

On the packaging side, I maintain bpfman (an eBPF program manager) and several Rust crates that support eBPF and container tooling. I’ve also learned the hard way that Rust dependency vendoring is… an adventure. 😅

Before Fedora, I spent years in the OpenStack community. I served as PTL (Project Team Lead) for the Kuryr project, the bridge between container and OpenStack networking, and was active in the Kubernetes SIG. That experience taught me a lot about running open source projects: building consensus across companies, mentoring contributors, managing release cycles, and navigating the politics of large upstream communities.

I try to bring that same upstream, community-first mindset to Fedora. My hope is that the patterns we establish in the eBPF SIG become useful templates for other packagers facing similar challenges.

How do you handle disagreements when working as part of a team?

I start by assuming good intent. If someone is in the discussion, it’s because they do also care about the outcome, even though they may have another point of view.

I also try not to speculate about why someone holds a particular view. Assigning motives derails technical conversations fast. Instead, I focus on keeping things facts-driven: what does the code actually do, what do users need, what are the real constraints? Egos don’t ship software, and sticking to concrete data keeps discussions productive.

When disagreements persist, I find it helps to identify what everyone does agree on and use that as a new starting point. You’d be surprised how often this unblocks a stalled conversation.

Also, I think that it’s important to step back. It’s tempting to want the final word, but that can drag things on forever without real progress. Miscommunication happens and not every discussion needs a winner.

What else should community members know about you or your positions?

I believe in Fedora’s Four Foundations: Freedom, Friends, Features, First. What draws me to this community is the “Friends” part: there’s a place in Fedora for anyone who wants to help, regardless of background or technical skill level. Open source is at its best when it’s genuinely welcoming, and I want FESCo to reflect that.

From my time in the OpenStack community, I learned that healthy projects focus on protecting, empowering, and promoting: protecting the open development process and the values that make the community work; empowering contributors to do great work without painful barriers; and promoting not just the software, but the people who build and use it. I try to bring that mindset to everything I do.

I also believe strongly in working upstream. The changes we make should benefit not just Fedora users, but the broader open source ecosystem. When we solve a hard problem here, that knowledge should flow back to upstream projects and other distributions.

I’ll be at FOSDEM 2026. FOSDEM embodies what I love about open source: a non-commercial space where communities meet to share knowledge freely. If you’re there, come say hi.

The post F43 FESCo Elections: Interview with Daniel Mellado (dmellado) appeared first on Fedora Community Blog.

F43 FESCo Elections: Interview with Kevin Fenzi (kevin/nirik)

Posted by Fedora Community Blog on 2025-12-17 08:03:00 UTC

This is a part of the Fedora Linux 43 FESCo Elections Interviews series. Voting is open to all Fedora contributors. The voting period starts today, Wednesday 17th December and closes promptly at 23:59:59 UTC on Wednesday, 7th January 2026.

Interview with Kevin Fenzi

  • FAS ID: kevin
  • Matrix Rooms: I’m probably most active in the following rooms. I’m available, answer notifications, and watch many other channels as well, but those 3 are the most active for me:
    • noc -> day to day infra stuff, handling alerts, talking with other infra folks
    • admin -> answering questions, helping fix issues, some team discussions
    • releng -> release engineering team discussions, answering questions, handling issues, etc.

Questions

Why do you want to be a member of FESCo and how do you expect to help steer the direction of Fedora?

I think I still provide useful historical information, as well as being able to pull on that long history to know when things are good or bad, or have been tried before and have lessons to teach us.

Based on the proposals we approve or reject, we can steer things from FESCo. I do think we should be deliberate, try to reach consensus, and accept any input we can get to come to good decisions. Sometimes things won’t work out that way, but that should really be the exception instead of the rule.

How do you currently contribute to Fedora? How does that contribution benefit the community?

I’m lucky to be paid by Red Hat to work on infrastructure, and I like to hope it’s useful to the community. In my spare time I work on packages, answer questions where I can, unblock people, do release engineering work, and moderate Matrix rooms and mailing lists.

I really hope my contributions contribute to a happier and more productive community.

How do you handle disagreements when working as part of a team?

I try to reach consensus where possible. Sometimes that means taking more time or involving more people, but if it can be reached I think it’s really the best way to go.

Sometimes, of course, you cannot reach a consensus and someone has to make a call. If that’s something I am heavily involved in or in charge of, I do so. I’m happy that we have a council as an override of last resort in case folks want to appeal some particularly acrimonious decision. Also, as part of a team you sometimes have to delegate something to someone and trust their judgement in how it’s done.

What else should community members know about you or your positions?

I think there have been a number of big debates recently and probably more to come. We need to remember we are all friends and try to see things from other people’s point of view.

My hero these days seems to be Treebeard: “Don’t be hasty.”

My matrix/email is always open for questions from anyone…

The post F43 FESCo Elections: Interview with Kevin Fenzi (kevin/nirik) appeared first on Fedora Community Blog.

F43 FESCo Elections: Interview with Fabio Alessandro Locati (fale)

Posted by Fedora Community Blog on 2025-12-17 08:02:00 UTC

This is a part of the Fedora Linux 43 FESCo Elections Interviews series. Voting is open to all Fedora contributors. The voting period starts today, Wednesday 17th December and closes promptly at 23:59:59 UTC on Wednesday, 7th January 2026.

Interview with Fabio Alessandro Locati

  • FAS ID: fale
  • Matrix Rooms: I can be easily found in #atomic-desktops:fedoraproject.org, #bootc:fedoraproject.org, #coreos:fedoraproject.org, #devel:fedoraproject.org, #epel:fedoraproject.org, #event-devconf-cz:fedoraproject.org, #fedora:fedoraproject.org, #fedora-arm:matrix.org, #fedora-forgejo:fedoraproject.org, #fosdem:fedoraproject.org, #flock:fedoraproject.org, #golang:fedoraproject.org, #iot:fedoraproject.org, #meeting:fedoraproject.org, #meeting-1:fedoraproject.org, #mobility:fedoraproject.org, #python:fedoraproject.org, #rust:fedoraproject.org, #silverblue:fedoraproject.org, #sway:fedoraproject.org, #websites:fedoraproject.org

Questions

Why do you want to be a member of FESCo and how do you expect to help steer the direction of Fedora?

I have been part of the Fedora community for many years now: my FAS account dates back to January 2010 (over 15 years ago!), and I’ve contributed in many different roles to the Fedora project. I started as an ambassador, then became a packager and packaging mentor, and joined multiple SIGs, including Golang, Sway, and Atomic Desktop. For many years, I’ve been interested in immutable Linux desktops, Mobile Linux, and packaging challenges for “new” languages (such as Go), which are also becoming more relevant in the Fedora community now. Having contributed to the Fedora Project for a long time in many different areas, and given my experience and interest in other projects, I can bring those perspectives to FESCo.

How do you currently contribute to Fedora? How does that contribution benefit the community?

Currently, many of my contributions fall in the packaging area: I keep updating the packages I administer and exploring different solutions for packaging new languages and maintaining the Sway artifacts.
My current contributions are important to keeping Fedora first, not only in terms of package versions but also in terms of best practices and ways to reach our users.

Additionally, I served for the last two cycles (F41/F42) as a FESCo member, steering the community toward engineering decisions that were sensible in both the short and the long term.

How do you handle disagreements when working as part of a team?

I think disagreements are normal in communities. I have a few beliefs that guide me in entering and during any disagreement:

  1. I always separate the person from their argument: this allows me to discuss the topic without being influenced by the person making the points.
  2. I always keep in mind during disagreements that all people involved probably have a lot of things they agree on and a few they don’t agree on (otherwise, they would not be part of the conversation in the first place): this allows me to always see the two sides of the disagreement as having way more in common than in disagreement.
  3. During a discussion, I always hold the belief that the people arguing on the opposite side of the disagreement are trying to make sure that what they believe is right becomes a reality: this allows me to always try to see if there are aspects of their point of view that I had not considered or not appropriately weighted.

Thanks to my beliefs, I always manage to keep disagreements civil and productive, which often leads to a consensus. It is not always possible to agree on everything, but it is always possible to disagree in a civil, productive way.

What else should community members know about you or your positions?

Let’s start with the fact that I’m a Red Hat employee, though what I do in my day job has nothing to do with Fedora (I’m an Ansible specialist, so I have nothing to do with RHEL either), so I have no ulterior motives for my contributions. I use Fedora on many devices (starting from my laptop) and have done so for many years. I contribute to the Fedora Project because I found in it and its community the best way to create the best operating system :).

I’ve been using Sway exclusively on my Fedora desktop since I brought it into Fedora in 2016. On the other systems, I use either Fedora Server, Fedora CoreOS, or Fedora IoT, though lately I prefer the latter for all new non-desktop systems.

I see the Fedora Community as one community within a sea of communities (upstream, downstream, similarly located ones, etc.). I think the only way for all those communities to be successful is to collaborate, creating a higher-level community where open-source communities collaborate for the greater good, which, in my opinion, would be a more open-source world.

The post F43 FESCo Elections: Interview with Fabio Alessandro Locati (fale) appeared first on Fedora Community Blog.

F43 FESCo Elections: Interview with Dave Cantrell (dcantrell)

Posted by Fedora Community Blog on 2025-12-17 08:01:00 UTC

This is a part of the Fedora Linux 43 FESCo Elections Interviews series. Voting is open to all Fedora contributors. The voting period starts today, Wednesday 17th December and closes promptly at 23:59:59 UTC on Wednesday, 7th January 2026.

Interview with Dave Cantrell

  • FAS ID: dcantrell
  • Matrix Rooms: Looking right now, it appears to be Fedora Council, FRCL, Introductions, Announcements, Fedora Meeting, and Fedora Meeting 1. I tend to join rooms that people ask me to join. I also use Matrix a lot for DMs, and people find me that way. Primarily, I rely on Matrix for our online meetings and DMs with people. Email continues to be the most reliable way to reach me and have a conversation.

Questions

Why do you want to be a member of FESCo and how do you expect to help steer the direction of Fedora?

I have been a member of FESCo for a while now and enjoy doing it. Fedora is really good at bringing in new technologies and ensuring that we minimize disruption for users. I enjoy the technical discussions and working together to ensure that changes account for everything before we bring them in. Making and having a plan is often difficult and requires a lot of coordination.

I am also interested in mentoring people interested in running for FESCo and introducing some changes to how we staff FESCo. There are discussions going on right now for that, but an important thing for me is ensuring we have a succession plan for FESCo that keeps Fedora going without burning people out. If you are interested in being on FESCo, please reach out to me!

Lastly, I feel very strongly about open source software and the licenses we have around it. I believe it has fundamentally changed our industry and made it a better place. We continue to see changes come into Fedora that bring challenges to those ideas, and I want to ensure that Fedora’s positions around open source, creator rights, and licensing are not lost or eroded.

How do you currently contribute to Fedora? How does that contribution benefit the community?

My job at Red Hat is working on the Software Management team. The two big projects on that team are dnf and rpm. But we also have a lot of dnf and rpm adjacent software. I am upstream for or contribute to numerous other projects. I also maintain a variety of packages in Fedora and EPEL as well as RHEL (and by extension CentOS Stream).

I am a sponsor for new contributors and I help mentor new developers in both the community and at Red Hat (that is, developers at Red Hat wanting to participate more in Fedora).

I am a member of the Fedora Council where I focus on engineering issues when we discuss large topics and strategy.

How do you handle disagreements when working as part of a team?

Communication has always been a challenge in our industry and community. We have language differences, cultural differences, and communication medium differences. One thing I notice a lot is that some discussions lead to people taking things personally. Often the root cause of that is people feeling like they are not being heard. A solution I have found is to suggest changing the communication medium. I am perfectly fine communicating over email, or chat, or other online methods. But talking in person can go a long way. We know the value of having in-person events and a lot of people find that their interactions with people in the community improve simply because they finally met someone in person at an event. While that is not always possible, we do have video conference capabilities these days. I do use that in Fedora and it helps quite a bit.

For everyone, if you find yourself in a frustrating situation, I recommend first stepping away and collecting your thoughts. Then remind yourself why everyone is involved in the first place. We all want to achieve the same things, so let’s try to work towards that and find common ground. And if necessary, suggest an alternate communication mechanism.

What else should community members know about you or your positions?

Most people are surprised to learn that I support protons more than electrons. I like being positive in everything I pursue. It’s ok for us to disagree. It’s ok to have a position, learn something new, and then change that position. The important thing to me is that Fedora ultimately remains a fun project.

My favorite color is orange. I use an Android mobile phone. I do not use current Apple hardware, but I am a big fan of the Apple II series and 68k Macintosh series. If you corner me, I will likely talk your ear off about the Apple IIgs or any Macintosh Quadra (particularly the various crazy and horrible operating systems Apple made for the platform).

The post F43 FESCo Elections: Interview with Dave Cantrell (dcantrell) appeared first on Fedora Community Blog.

F43 FESCo Elections: Interview with Zbigniew Jędrzejewski-Szmek (zbyszek)

Posted by Fedora Community Blog on 2025-12-17 08:00:00 UTC

This is a part of the Fedora Linux 43 FESCo Elections Interviews series. Voting is open to all Fedora contributors. The voting period starts today, Wednesday 17th December and closes promptly at 23:59:59 UTC on Wednesday, 7th January 2026.

Interview with Zbigniew Jędrzejewski-Szmek

  • FAS ID: zbyszek
  • Matrix Rooms: #devel:fedoraproject.org, #reproducible-builds:fedora.im, #jenkins-coreos:fedoraproject.org, #coreos:fedoraproject.org, #quality:fedoraproject.org

Questions

Why do you want to be a member of FESCo and how do you expect to help steer the direction of Fedora?

I think Fedora as a project is in a good place. Our core responsibility is to put out a new release every six months, and we are doing that on schedule and with high quality. But there are always new challenges and issues that need to be solved. As a member of FESCo, I take the Change process seriously, trying to work with submitters to improve their proposals before they are approved, and keeping track of what remains to be done. I do my best to move the things I’m personally working on in the right direction, and I try to help others move the things they are working on.

Most of the proposals that FESCo gets to vote on are obvious. But every once in a while there are proposals which are a mistake. The tough part of the job is to distinguish between something that is risky but will be good for the project if done correctly, and ideas that are a mistake and should be rejected. FESCo is in the position to push back, and needs to do that with enough strength and visibility to be effective.

The part of being in FESCo that I (and everybody else) like the least is the slow-as-molasses tickets that get stuck on infrastructure changes or other external constraints. FESCo should do a better job of regularly returning to those, pushing for updates, and figuring out how to finally solve the problem. I like the idea of introducing limits on consecutive terms for FESCo members to bring in new people and hopefully use this energy to tackle some long-standing issues.

How do you currently contribute to Fedora? How does that contribution benefit the community?

I maintain systemd and a bunch of other packages in the Python scientific stack, a bunch of tools related to installing Linux (mkosi, pacman, archlinux-keyring), and tooling for reproducible builds (add-determinism).

I’m active in FESCo and I contribute to the Packaging Guidelines and in various other places that need help. Over the last year I worked on build reproducibility (https://fedoraproject.org/wiki/Changes/Package_builds_are_expected_to_be_reproducible), bin-sbin unification (https://fedoraproject.org/wiki/Changes/Unify_bin_and_sbin), and helped with the introduction of nix (https://fedoraproject.org/wiki/Changes/Nix_package_tool).

How do you handle disagreements when working as part of a team?

Discuss. Evaluate. Discuss again. Reach a compromise.

Rarely, there is an issue where compromise is not possible. If the issue is worth fighting for, agree to disagree and try to convince everyone else.

What else should community members know about you or your positions?

n/a

The post F43 FESCo Elections: Interview with Zbigniew Jędrzejewski-Szmek (zbyszek) appeared first on Fedora Community Blog.

🛡️ PHP version 8.1.33, 8.2.29, 8.3.23 and 8.4.10

Posted by Remi Collet on 2025-07-04 04:49:00 UTC

RPMs of PHP version 8.4.10 are available in the remi-modular repository for Fedora ≥ 40 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).

RPMs of PHP version 8.3.23 are available in the remi-modular repository for Fedora ≥ 40 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).

RPMs of PHP version 8.2.29 are available in the remi-modular repository for Fedora ≥ 40 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).

RPMs of PHP version 8.1.33 are available in the remi-modular repository for Fedora ≥ 40 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).

ℹ️ The packages are available for x86_64 and aarch64.

⚠️ PHP version 8.0 has reached its end of life and is no longer maintained by the PHP project.

These versions are also available as Software Collections in the remi-safe repository.

🛡️ These versions fix 3 security bugs (CVE-2025-1220, CVE-2025-1735, CVE-2025-6491), so the update is strongly recommended.

Version announcements:

ℹ️ Installation: use the Configuration Wizard and choose your version and installation mode.

Replacement of default PHP by version 8.4 installation (simplest):

dnf module switch-to php:remi-8.4/common

Parallel installation of version 8.4 as Software Collection:

yum install php84

Replacement of default PHP by version 8.3 installation (simplest):

dnf module switch-to php:remi-8.3/common

Parallel installation of version 8.3 as Software Collection:

yum install php83

And soon in the official updates:

⚠️ To be noted:

  • EL-10 RPMs are built using RHEL-10.0
  • EL-9 RPMs are built using RHEL-9.6
  • EL-8 RPMs are built using RHEL-8.10
  • intl extension now uses libicu74 (version 74.2)
  • mbstring extension (EL builds) now uses oniguruma5php (version 6.9.10, instead of the outdated system library)
  • oci8 extension now uses the RPM of Oracle Instant Client version 23.8 on x86_64 and aarch64
  • a lot of extensions are also available; see the PHP extensions RPM status (from PECL and other sources) page

ℹ️ Information:

Base packages (php)

Software Collections (php83 / php84)

🎲 PHP version 8.3.29RC1, 8.4.16RC1 and 8.5.1RC1

Posted by Remi Collet on 2025-12-05 08:58:00 UTC

Release Candidate versions are available in the testing repository for Fedora and Enterprise Linux (RHEL / CentOS / Alma / Rocky and other clones) to allow more people to test them. They are available as Software Collections, for parallel installation (the perfect solution for such tests), and as base packages.

RPMs of PHP version 8.5.1RC1 are available

  • as base packages in the remi-modular-test for Fedora 41-43 and Enterprise Linux ≥ 8
  • as SCL in remi-test repository

RPMs of PHP version 8.4.16RC1 are available

  • as base packages in the remi-modular-test for Fedora 41-43 and Enterprise Linux ≥ 8
  • as SCL in remi-test repository

RPMs of PHP version 8.3.29RC1 are available

  • as base packages in the remi-modular-test for Fedora 41-43 and Enterprise Linux ≥ 8
  • as SCL in remi-test repository

ℹ️ The packages are available for x86_64 and aarch64.

ℹ️ PHP version 8.2 is now in security mode only, so no more RCs will be released.

ℹ️ Installation: follow the wizard instructions.

ℹ️ Announcements:

Parallel installation of version 8.5 as Software Collection:

yum --enablerepo=remi-test install php85

Parallel installation of version 8.4 as Software Collection:

yum --enablerepo=remi-test install php84

Parallel installation of version 8.3 as Software Collection:

yum --enablerepo=remi-test install php83

Update of system version 8.5:

dnf module switch-to php:remi-8.5
dnf --enablerepo=remi-modular-test update php\*

Update of system version 8.4:

dnf module switch-to php:remi-8.4
dnf --enablerepo=remi-modular-test update php\*

Update of system version 8.3:

dnf module switch-to php:remi-8.3
dnf --enablerepo=remi-modular-test update php\*

ℹ️ Notice:

  • version 8.5.1RC1 is in Fedora rawhide for QA
  • EL-10 packages are built using RHEL-10.1 and EPEL-10.1
  • EL-9 packages are built using RHEL-9.7 and EPEL-9
  • EL-8 packages are built using RHEL-8.10 and EPEL-8
  • oci8 extension uses the RPM of the Oracle Instant Client version 23.9 on x86_64 and aarch64
  • intl extension uses libicu 74.2
  • RC versions are usually the same as the final versions (no changes are accepted after the RC, except for security fixes).
  • versions 8.3.29, 8.4.16 and 8.5.1 are planned for December 18th, in 2 weeks.

Software Collections (php83, php84, php85)

Base packages (php)

A half-assed assessment of open source AI code review tools

Posted by Adam Williamson on 2025-12-16 20:27:19 UTC

Introduction

Hi there, blog readers! For the last week or so I've been poking into AI code review tools. Yes, this is partly because of the Red Hat "you must do AI things!" policy. But also, to be honest, because they seem to be...actually good now. I set up AI reviews for pull requests to our openQA test repo as an experiment. But especially over the last couple of months, they've got to the point where well over half of the review notes are actually useful, and the writing style isn't so awful I want to stab myself in the eyeballs. So I'd quite like to keep doing them, but in a more open source-y way. So far I've simply been cloning the pull requests to a GitHub mirror of the repo that exists solely to get AI reviews done. That repo has Gemini Code Assist enabled so the PRs are reviewed by Gemini automatically, e.g. here. It's very simple, but entirely closed source, there's no control over it, and Google could take it away at any time.

We're in the middle of migrating Fedora projects from Pagure to our new Forgejo instance, so I decided to try and get some sort of AI review system integrated with Forgejo. And I kinda succeeded! I wrote a Forgejo integration for ai-code-review, a tool I found that was written by another Red Hatter, and managed to set up a proof-of-concept Forgejo Actions workflow using it on a repo I own that's hosted at Codeberg (since Codeberg has public Forgejo Actions runners available; we don't have Actions entirely set up in the Fedora instance yet). Right now it's using Gemini as the model provider just because that was the easiest thing to set up for a PoC, but ai-code-review's design makes the LLM provider easily pluggable, so it's trivial to swap it out. Long term I hope we'll get a Fedora LLM provider set up, serving open source models, and we can make it use that. There's an Ollama backend, and adding an OpenAI API backend should be pretty easy.

Before going any further with that, though, I decided to look around and see if there are other tools out there, and if so, which might be the best one. I poked around a bit and found a few, and wrote up a very half-assed comparative assessment. I figured this might interest others, so I've prettied it up a tiny bit and put it below. I make no claims that this is comprehensive, accurate or fair; please send all complaints to the happyassassin.net HR department! The takeaway is that I'll probably keep working on the ai-code-review approach and also experiment with forking Qodo's archived open-source pr-agent project and see if I can add Forgejo support to it, to compare it against ai-code-review.

If anyone knows of any I missed, please let me know! I briefly looked at RhodeCode but discounted it because it's a whole-ass forge, not just a review tool. ReviewBoard doesn't seem to have any LLM integration as best as I could tell.

The Contenders

ai-code-review

  • Repo: https://gitlab.com/redhat/edge/ci-cd/ai-code-review
  • Author: Juanje Ojeda (Red Hat)
  • Language: Python (typed)
  • Architecture: Modular
  • Tests: Yes, LLM-generated, fairly comprehensive unit tests, very limited integration tests
  • Begun: August 2025
  • Status: Active
  • Forges: GitLab, GitHub, local changes (Forgejo support submitted)
  • Model providers: Gemini, Anthropic, Ollama
  • Output: Console or PR/MR comment
  • Deployment: Local execution, GitLab CI, GitHub Actions (one-shot deployment via container image in CI job)
  • Prompts: Here

ai-codereview

  • Repo: Red Hat internal
  • Author: Tuvya Korol (Red Hat)
  • Language: Python (untyped)
  • Architecture: Monolithic
  • Tests: No
  • Begun: June 2025
  • Status: Active
  • Forges: GitLab, local changes
  • Model providers: RH-internal Claude, Gemini, Granite
  • Output: Console or MR comment
  • Deployment: Local execution, GitLab CI (ad hoc deployment via curl/pip in CI job)
  • Prompts: Red Hat internal

kodus-ai

  • Repo: https://github.com/kodustech/kodus-ai
  • Author: Kodus
  • Language: Typescript
  • Architecture: Modular
  • Tests: Yes, handwritten, unit and integration, not sure of coverage
  • Begun: April 2025
  • Status: Active
  • Forges: GitHub, GitLab, BitBucket
  • Model providers: OpenAI, Gemini, Anthropic, Novita, OpenRouter, any OpenAI-compatible
  • Output: MR/PR comment and/or review (seems to depend on configuration)
  • Deployment: Local via yarn (indicated as for development only), or as a containerized webapp (for prod) with its own installer - looks complex
  • Prompts: Here

pr-agent

  • Repo: https://github.com/qodo-ai/pr-agent
  • Author: Qodo (formerly Codium)
  • Language: Python (untyped)
  • Architecture: Modular
  • Tests: Yes, handwritten, unit and integration, somewhat primitive, many commented out, 24% coverage (per codecov)
  • Begun: July 2023
  • Status: Archived (Nov 2025)
  • Forges: GitHub, GitLab, Gitea, Gerrit, BitBucket, AWS CodeCommit, Azure DevOps, local changes
  • Model providers: Any OpenAI-compatible (looks like some special handling for Azure), LiteLLM
  • Output: MR/PR comment and/or review, has interactive features
  • Deployment: Local execution or Forge CI. There's a custom GitHub action but it may be abandoned. Installable via pip, should be trivial to containerize for simple one-shot CI job deployment
  • Prompts: Here

ai-pr-reviewer

  • Repo: https://github.com/coderabbitai/ai-pr-reviewer
  • Author: CodeRabbit
  • Language: Typescript
  • Architecture: Modular
  • Tests: Barely any
  • Begun: Feb 2023
  • Status: Archived (Nov 2023)
  • Forges: GitHub
  • Model providers: OpenAI
  • Output: PR review/comment
  • Deployment: GitHub Action (no longer maintained). No generic or local deployment documented
  • Prompts: Here

Conclusions

ai-code-review (Juanje) and pr-agent (Qodo/Codium) seem the best options.

Of the RH-developed, greenfield projects, ai-code-review is more featureful and better architected than ai-codereview, and not tied to an RH-internal model provider.

Of the existing public projects, ai-pr-reviewer (CodeRabbit) was very tied to GitHub, has no documented standalone deployment ability, and was archived fairly early in development. Plus it's in TypeScript. Kodus is actively developed, but similarly is in TypeScript, deployment looks complex, and from what I've seen I don't love its review style. Hard to say why, but the project overall gives me a sloppy vibe. pr-agent (Qodo) had the longest development history and seems the most mature and capable at the point where it was abandoned (well, they actually seem to have done a heel turn and gone closed source / SaaS). It has a documented standalone deployment process which looks relatively simple and amenable to integration into generic CI workflows.

accounts.fedoraproject.org degraded

Posted by Fedora Infrastructure Status on 2025-12-16 12:00:00 UTC

We're getting heavy attacks on accounts.fedoraproject.org, which is impacting the ability to register new accounts. Timeouts are likely, but password resets should work afterwards.

We are investigating and trying to mitigate it.

Partitioning with Ansible Storage Role: Partitions

Posted by Vojtěch Trefný on 2025-12-15 09:13:00 UTC

The storage role has always allowed creating and managing different storage technologies like LVM, LUKS encryption or MD RAID, but one technology seemed to be missing for a long time, and surprisingly, it was the most basic one: actual partitioning. Support for partition management was always planned for the storage role, but it was never a high priority. From the start, the role could create partitions: when creating a more complex storage setup on an empty disk, for example a new LVM volume group or a new physical volume for an existing LVM setup, the role would automatically create a single partition on the disk. But that was all the role could do, just one single partition spanning the entire disk.

The reason for this limitation was simple: creating multiple partitions is something usually reserved for the OS installation process, where users need the separate partitions required by the bootloader, like /boot and /boot/efi. The more advanced “partitioning” is then delegated to more complex storage technologies like LVM, which is where most of the changes are made on an existing system and where users will usually employ Ansible to make changes later.

But the requirement for more advanced partition management was always there, and since the 1.19 release, the role can now create and manage partitions in the Ansible way.

Partition Management with Storage Role

The usage of the role for partition management is simple and follows the same logic as the other storage technologies, with the management divided into two parts: managing the storage_pools, which in the case of partitions is the underlying disk (or to be more precise, the partition table), and the volumes, which are the partitions themselves. A simple playbook to create two partitions on a disk can look like this:

  roles:
    - name: linux-system-roles.storage
      storage_pools:
        - name: sdb
          type: partition
          disks: sdb
          volumes:
            - name: sdb1
              type: partition
              size: 1 GiB
              fs_type: ext4
            - name: sdb2
              type: partition
              size: 10 GiB
              fs_type: ext4

and the partitions it creates will look like this:

NAME   MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS FSTYPE
sdb      8:16   0  20G  0 disk
├─sdb1   8:17   0   1G  0 part             ext4
└─sdb2   8:18   0  10G  0 part             ext4

Other filesystem-related properties (like mount_point or fs_label) can be specified, and these work in the same way as for any other volume type.

The only property that is specific to partitions is part_type, which allows you to choose a partition type when using the MBR/MSDOS partition table. Supported types are primary, logical and extended. If you don’t specify the partition type, the role will create the first three partitions as primary; for the fourth, it will add an extended partition and create the volume as a logical partition inside it. On GPT, which is used as the default partition table, the partition type is ignored.
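
To illustrate, here is a minimal sketch of part_type usage. It assumes the disk already carries an MBR/MSDOS partition table (selecting the partition table type itself is not covered here) and follows the naming rules described above, with logical partitions numbered from 5:

  roles:
    - name: linux-system-roles.storage
      storage_pools:
        - name: sdb
          type: partition
          disks: sdb
          volumes:
            # An explicitly primary partition
            - name: sdb1
              type: partition
              part_type: primary
              size: 1 GiB
              fs_type: ext4
            # A logical partition; per the behavior described above,
            # the role places it inside an extended partition
            - name: sdb5
              type: partition
              part_type: logical
              size: 5 GiB
              fs_type: ext4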

Encrypted partitions can be created by adding the encryption: true option for the partition and setting the passphrase:

  roles:
    - name: linux-system-roles.storage
      storage_pools:
        - name: sdb
          type: partition
          disks: sdb
          volumes:
            - name: sdb1
              type: partition
              size: 1 GiB
              fs_type: ext4
              encryption: true
              encryption_password: "aaaaaaaaa"
            - name: sdb2
              type: partition
              size: 10 GiB
              fs_type: ext4
              encryption: true
              encryption_password: "aaaaaaaaa"

Don’t forget that adding the encryption layer is a destructive operation – if you run the two playbooks above one after another, the filesystems created by the first one will be removed, and all data on them will be lost. Adding the LUKS encryption layer (so-called re-encryption) is currently not supported by the role.

Idempotency and Partition Numbers

One of the core principles of Ansible is idempotency: the ability to re-run the same playbook and, if the system is already in the state the playbook specifies, make no changes.

This is true for partitioning with the storage role as well. When running the playbook from our example above for the second time, the role will check the sdb disk and look for the two specified partitions. And if there are two partitions 1 and 10 GiB large, it won’t do anything. This is how the role works in general, but with partitions, there is a new challenge: partitions don’t have unique names and using partition numbers for idempotency can be tricky.

Did you know that partition numbers for logical partitions are not stable? If you have two logical partitions sdb5 and sdb6, removing the sdb5 partition will automatically re-number the sdb6 partition to sdb5.

Predicting the partition name is not always straightforward. For example, disks that end in a number (common with NVMe drives) require adding a p separator before the partition number (nvme0n1 becomes nvme0n1p1).

For these reasons, the role requires explicitly using the state: absent option to remove a partition, and partitions can be referred to by their numbers in the playbooks as well as their full names. So, for example, the following playbook will resize the sdb2 partition from our first example

  roles:
    - name: linux-system-roles.storage
      storage_pools:
        - name: sdb
          type: partition
          disks: sdb
          volumes:
            - name: 2
              type: partition
              size: 15 GiB
              fs_type: ext4

and the first partition won’t be removed, because it is not explicitly mentioned as absent, only omitted in the playbook:

NAME   MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS FSTYPE
sdb      8:16   0  20G  0 disk
├─sdb1   8:17   0   1G  0 part             ext4
└─sdb2   8:18   0  15G  0 part             ext4
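
For completeness, a removal sketch based on the options described above: marking a volume with state: absent is the explicit way to delete a partition, and it can be referred to by number instead of its full name. Assuming the layout from the first example, this removes the second partition (and all data on it):

  roles:
    - name: linux-system-roles.storage
      storage_pools:
        - name: sdb
          type: partition
          disks: sdb
          volumes:
            # Refer to the partition by number; state: absent deletes it
            - name: 2
              type: partition
              state: absent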

Feedback and Future Features

With this change, the storage role can now manage all basic storage technologies. We are, of course, not yet covering all the potential features, but we are always looking for more ideas from our users. If you have any features you’d like to see in the role, please don’t hesitate to let us know.

Creating Data Analysis Pipelines using DuckDB and RStudio

Posted by Fedora Magazine on 2025-12-15 08:00:00 UTC

Motivation and Vision

The core motivation behind data analysis pipelines, and the focus of this article, is the need to establish a clear path from unprocessed data to actionable insights for contributor engagement and impact. The key question is “what are we trying to measure to ensure the continuity of community work?”

As a side note, my preparation for the ADSP (Advanced Data Analysis Semi-Professional) certification in Korea utilized RStudio Desktop running on a Fedora Linux environment. I got hands-on with R’s core statistical toolkit, leveraging base functions. Among these were summary() [1] and lm() [2] as the basis for fundamental hypothesis testing and regression analysis [3]. I became more intrigued by R’s power after testing its data manipulation packages (especially the key package dplyr).

With this background in mind, the article focuses on the design of an analysis pipeline that fulfills three objectives:

  • it leverages the power of DuckDB [4] and S3 storage,
  • it redefines the workflow,
  • it ensures scalable data transformation and analysis capabilities

Establishing such a robust foundation is essential for producing reliable and validated metrics for the contributor community, which itself is subject to ongoing definition and validation.

Acknowledgement: I extend my sincere gratitude to Justin Wheeler for connecting me with the Fedora Data Working Group (FDWG), and to Michael Winters and K Naraian for their guidance, discussion, and support throughout the design and validation of this data analysis pipeline.

Scope and Tool Selection: Please note that this analysis pipeline represents a combination of tools and methods chosen from my perspective as a data analyst, particularly one with a background in the CRM (Customer Relationship Management) domain and consumer electronics industry. Therefore, other analysts or data engineers may utilize different combinations of technologies based on their own expertise or project requirements.

Why data analysts must engage with ELT [5]/ETL [6]

The role of the analyst is undergoing a fundamental transformation in 2025. We are moving beyond the traditional responsibility of performing statistical analysis and presenting visualizations on pre-cleaned data. Today, the modern analyst must evolve into a “Data Ops” (Data Operations) role. This requires a holistic understanding of the data lifecycle and bridging the gap between business context and data engineering. This expansion mandates familiarity with ELT/ETL processes in order to examine the quality and structure of the data source.

Moreover, data analysts must be adept at processing diverse data types, such as semi-structured data (for example, schema-less JSON strings or variant types), and understanding various data access methods, such as leveraging the efficiency of in-situ processing over the constraints of loading datasets into memory.

RStudio: The Unified IDE for Hybrid R and Python workflows

My ADSP examination requirements motivated my initial deep dive into RStudio. However, it is worth highlighting its utility as a tool for any data professional. The most significant benefit of using RStudio is its seamless ability to leverage the best tools from both the R and Python language ecosystems. This eliminates the need for the analyst to switch environments, which leads to dramatically higher operational efficiency. This unified approach streamlines the analysis lifecycle from code execution to final reporting.

Python for data engineering

Use Python’s libraries like Pandas for efficient ETL/ELT operations, data manipulation on large datasets, and integrating with production environments or machine learning workflows (TensorFlow/PyTorch).

R for analysis and visualization

Utilize R’s statistical packages and its superior data visualization capabilities (ggplot2, R Shiny) for data analysis modeling, beautiful reporting, and creating customized, publication-ready graphics.

RStudio Desktop: Installation Instructions7 for Fedora Linux

Install R base packages using the terminal and verify:

$ sudo dnf install R
$ R --version

Now, install RStudio from the Fedora COPR repository. Note that these COPR packages conflict with the binary package provided by Posit. Remove the existing Posit RStudio binary package if you installed it.

$ sudo dnf copr enable iucar/rstudio
$ sudo dnf install rstudio-desktop

Dependencies and development tools are required, so install them:

$ sudo dnf install @development-tools
$ sudo dnf install libxml2-devel openssl-devel curl-devel

Launch RStudio. When the > prompt appears in the RStudio Console, enter the following commands. Note that this prompt appears in the bottom-left pane of the default layout.

Install the reticulate package and execute the function reticulate::py_install() to manage Python dependencies:

install.packages("reticulate")
reticulate::py_install(packages = c("duckdb", "pandas"))

Set the global options for all code chunks within the R Markdown document:

knitr::opts_chunk$set(echo = TRUE, message = FALSE, warning = FALSE)

ragg is an indirect but critical dependency of core Tidyverse packages (such as ggplot2):

install.packages("ragg")

Install base packages for data manipulation:

install.packages("tidyverse")

DBI (database interface) is an essential R package that provides a standardized, vendor-agnostic interface for connecting to and interacting with various database systems (both local and remote):

install.packages("DBI")

Install tools for Parquet files and S3 data lake access:

install.packages("arrow")

Install R Markdown for combining R code with its execution results and explanatory text, and install Quarto for doing the same with R/Python/SQL, producing reproducible data pipelines directly within the environment. A Quarto (.qmd) file runs code chunks in R, Python, and SQL in a single document.

install.packages(c("rmarkdown","quarto"))

Load packages for ELT and EDA:

library(tidyverse)
library(arrow)
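
As a quick smoke test of this setup, here is a minimal sketch that opens a Parquet dataset lazily and pulls a few rows into a data frame. The S3 bucket and path are placeholders for illustration, not the real FDWG locations:

# Open the dataset without reading it into RAM; the URI below is illustrative.
ds <- arrow::open_dataset("s3://example-bucket/bronze/datanommer/", format = "parquet")

# Materialize only the first rows for inspection.
ds |>
  head(10) |>
  dplyr::collect()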

RStudio environment showing the contents of a Parquet file displayed in a data frame

Data architecture pipeline

The specific dataset chosen, Datanommer (Fedora Messaging Streams), aligns with the strategic objectives of the Fedora Data Working Group, where I contribute. The data is stored in the Bronze Data Layer where raw data from source systems is ingested and stored, as-is, for scalable data lake storage. The Bronze Layer allows for schema evolution without breaking downstream processes.

To provide the Working Group with transparent access and initial insight into this data, I have prepared a shared Initial Exploratory Data Analysis (EDA) Notebook. This notebook serves as the initial public view of the data quality and patterns, and it informed the subsequent architectural decisions for the scalable pipeline I am about to outline.

Given the complexity of the architecture, I will proceed with an outline of the core components, organized by their role in the ELT pipeline:

Data Architecture Diagram: Assisted by Figma ‘Infinite Canvas’

This restructured pipeline, leveraging the new Lakehouse architecture, unlocks several core benefits crucial for scaling contributor analysis and enabling future insights:

Elimination of Memory Constraints via In-Situ Processing

DuckDB acts as a high-performance analytical engine that enables In-Situ Processing. It queries data directly from storage (specifically the Parquet files) without requiring the entire dataset to be loaded into RAM. This not only solves the memory problem but also delivers rapid query execution and significantly lowers operational costs associated with large computational clusters hosted on the OpenShift/Fedora AWS infrastructure.

Quarto runs R code chunks to connect to DuckDB
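
To make this concrete, here is a minimal sketch of an in-situ query from R via DBI. The S3 path and column names are illustrative, not the actual Datanommer schema:

library(DBI)

# DuckDB runs in-process; no server or cluster to manage.
con <- dbConnect(duckdb::duckdb())

# httpfs adds the S3 reader; the bucket below is a placeholder.
dbExecute(con, "INSTALL httpfs;")
dbExecute(con, "LOAD httpfs;")

# The Parquet files are scanned in place; only the needed columns are read.
dbGetQuery(con, "
  SELECT topic, count(*) AS n
  FROM read_parquet('s3://example-bucket/bronze/datanommer/*.parquet')
  GROUP BY topic
  ORDER BY n DESC
  LIMIT 10
")

dbDisconnect(con, shutdown = TRUE)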

Future-Proofing

The shift to a Lakehouse model ensures the pipeline is ready for growth and evolving data complexity. Future integration of Apache Iceberg and Apache Polaris will provide schema evolution capabilities. This ensures the pipeline is fully future-proofed against changes in underlying data structures.

Streamlined ELT Workflow and Multi-Lingual Access

I have redefined the processing workflow from a bottlenecked ETL model to a resilient Extract-Load-Transform (ELT) pattern. Parquet files with the variant type store semi-structured data (like JSON/nested structures); loading them raw into S3 simplifies the ingestion stage. When using R, it is recommended to read Parquet files using the Apache Arrow library.

Exploratory Data Analysis (EDA) using data frames in the Tidyverse

The parsed data is then accessible by multiple analytical platforms (R Shiny, Python, BI tools) without duplication or manual preparation. This multi-lingual access maximizes the utility of the clean data layer, supporting a growing number of analytical users and more complex queries necessary for defining long-term contributor metrics.

Initial EDA Notebook

The preliminary Exploratory Data Analysis (EDA) was conducted within the Jupyter Notebook format. This allowed broad compatibility with the existing execution and review environment of the Fedora Data Working Group.

The Initial EDA Notebook is documented to ensure complete reproducibility, including all necessary steps for Python library installation and environment setup. Any standard Python script containing ELT logic can be run seamlessly within RStudio’s Python mode, by “knitting8” an R Markdown document, or by rendering a Quarto file.
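
For instance, here is a minimal sketch of driving Python from the R console through reticulate. The file name is a placeholder, and pandas needs the pyarrow engine available to read Parquet:

library(reticulate)

# pandas was installed earlier with reticulate::py_install().
pd <- import("pandas")

# Read a local Parquet file with pandas and hand it back to R as a data frame.
df <- pd$read_parquet("sample_messages.parquet")
summary(df)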

Conclusion

The establishment of this analysis pipeline represents a crucial step in transforming unprocessed Fedora data into actionable insights. By addressing the core challenges of scaling and in-memory processing through DuckDB, and enabling transparent analysis via the hybrid RStudio/Jupyter workflow, I have demonstrated viable methods for performing Exploratory Data Analysis (EDA) and Extract, Load, Transform (ELT) processes on vast community datasets. Ultimately, the purpose of this work is to foster deeper engagement across a broader community by analyzing data through a lens relevant to the Fedora Project community.

I hope this pipeline will serve as the technical foundation that activates and focuses the community discussion around the specific variables and metrics needed to define and ensure the continuity of community contributions.

AI Assistance

The ideation, structural planning, and terminology refinement of the pipelines were assisted by Gemini and Figma.

Software version

RStudio Desktop 2025.05.1 Build 513 (Fedora COPR repository)

R version 4.5.2 (2025-10-31) / Python 3.14.0

Notes

  1. summary(): When used on a data object (for example, DataFrame), it provides basic statistics (min, max, mean, median). When used on a fitted linear model object (lm), it delivers key diagnostic information like coefficient estimates and p-values. ↩
  2. lm(): Stands for Linear Model. This is the core function for fitting linear regression models in R, allowing the user to examine and model the linear relationship between variables. ↩
  3. Regression analysis examines which factors affect the other and which ones are irrelevant for statistical and business context. ↩
  4. DuckDB is an analytical database with a column-oriented architecture.
    – Direct Querying: It directly queries data from file formats such as Parquet, CSV, and JSON.
    – Local compute engine: It is widely used as a high-performance local compute engine for analytical workloads. It runs in-process, meaning it operates within your application (like a Python script or R session) without needing a separate server or cluster management.
    – Cloud Integration: It supports querying data stored in cloud storage services like AWS S3, GCS (Google Cloud Storage), and Azure Blob Storage.
    ↩
  5. ELT (Extract, Load, Transform): In a modern data environment like a Lakehouse, ELT is preferred: data is first extracted from the source and loaded raw into the cloud data lake (S3), and then transformed in place by the processing engine like DuckDB. ↩
  6. ETL (Extract, Transform, Load): transformations occur before loading the data into the final destination. ↩
  7. Key Advantages of RStudio over Jupyter Notebook for Production Workflows;

    Even with its slightly more complex initial setup compared to Jupyter Notebooks, the advantages become significant when moving from exploration (Jupyter’s strength) to reproducible, production-ready workflows (RStudio’s strength).

    – Integrated Console, Source, Environment, and Files: RStudio offers a cohesive, four-pane layout that allows for seamless navigation between writing code, running commands, inspecting variables, and managing files/plots. Jupyter requires constant shifting between code cells and external tabs.
    – Superior Debugging Tools: RStudio includes a powerful, visual debugger that allows you to set breakpoints, step through code line-by-line, and inspect variable states directly in the environment pane. Jupyter’s debugging is typically cell-based and less intuitive.
    – Native Project Management: RStudio Projects (.Rproj files) automatically manage the working directory and history. This makes it easy to switch between different analytical tasks without conflicts.
    – Integrated Environment Management (renv): RStudio integrates seamlessly with tools like renv (R Environment) to create isolated, reproducible R environments. This addresses dependency hell by ensuring the exact package versions used in development are used in production, which is crucial for data pipeline version control.
    – Quarto/R Markdown Integration: RStudio provides dedicated tools and buttons for easily compiling and rendering complex analytical documents (like your Quarto file) into HTML, PDF, or presentation slides.
    – Shiny Integration: RStudio is the native environment for developing Shiny web applications—interactive dashboards and tools that turn analysis into deployable products. Jupyter requires separate frameworks (like Dash or Streamlit) for similar deployment.
    – Focus on Scripting: RStudio’s source editor is optimized for writing clean, structured R/Python scripts, which are preferred for building robust, scheduled pipeline components (like those managed by Airflow).
    – Code Chunk Execution (Quarto): Even when using Quarto, RStudio allows for superior navigation and execution of code chunks compared to the often sequential and state-dependent nature of Jupyter Notebook cells. ↩
  8. knitr executes the code in an R Markdown (.Rmd) file in chunks or as a whole (typically by clicking the “Knit” button in RStudio or using rmarkdown::render() in R) ↩

Home Assistant, Govee Lights Local, VLANs, Oh my!

Posted by Evgeni Golov on 2025-12-14 15:48:08 UTC

We recently bought some Govee Glide Hexa Light Panels, because they have a local LAN API that is well integrated into Home Assistant. Or so we thought.

Our network is not that complicated, but there is a dedicated VLAN for IOT devices. Home Assistant runs in a container (with network=host) on a box in the basement, and that box has a NIC in the IOT VLAN so it can reach devices there easily. So far, this has never been a problem.

Enter the Govee LAN API. Or maybe its Python implementation. Not exactly sure who's to blame here.

The API involves sending JSON over multicast, which the Govee device will answer to.

No devices found on the network

After turning logging for homeassistant.components.govee_light_local to 11, erm debug, we see:

DEBUG (MainThread) [homeassistant.components.govee_light_local.config_flow] Starting discovery with IP 192.168.42.2
DEBUG (MainThread) [homeassistant.components.govee_light_local.config_flow] No devices found with IP 192.168.42.2

That's not the IP address in the IOT VLAN!

Turns out the integration recently got support for multiple NICs, but Home Assistant doesn't just use all the interfaces it sees by default.

You need to go to Settings → Network → Network adapter and deselect "Autoconfigure", which will allow you to select individual interfaces.

Once you've done that, you'll see Starting discovery with IP messages for all selected interfaces, and adding Govee Lights Local will work.

Johnnycanencrypt 0.17.0 released

Posted by Kushal Das on 2025-12-14 08:16:23 UTC

A few weeks ago I released Johnnycanencrypt 0.17.0. It is a Python module written in Rust, which provides OpenPGP functionality, including the use of Yubikey 4/5 smartcards.

Added

  • Adds verify_userpin and verify_adminpin functions. #186

Fixed

  • #176 updates kushal's public key and tests.
  • #177 uses sequoia-openpgp 1.22.0
  • #178 uses scriv for changelog
  • #181 updates pyo3 to 0.27.1
  • #42, we now have only acceptable expect calls and no unwrap calls.
  • Removes cargo clippy warnings.

The build system has now moved back to maturin. I managed to clean up CI, and we are now testing properly on all 3 platforms (Linux, Mac, Windows). Until this release I had to manually test the smartcard functionality by connecting a Yubikey on Linux/Mac systems, but that will change for future releases. More details will come out soon :)

infra weekly recap: mid december 2025

Posted by Kevin Fenzi on 2025-12-13 19:03:11 UTC
Scrye into the crystal ball

Another busy week for me and Fedora infrastructure in general, and also the last working week of the year for me. I am out on vacation for the holidays and back 2026-01-12.

Of course I will be around and checking in for outages/urgent issues and working on things in the community that I find enjoyable.

( see: https://www.scrye.com/blogs/nirik/posts/2023/12/13/time-off-when-youre-paid-to-work-in-a-community/ )

RDU2-CC to RDU3 datacenter move

This last Monday was the physical datacenter move. It had been pushed back for various reasons, but I am glad we could get it done and over with this year.

Things didn't go as smoothly as planned unfortunately.

  • There was bad weather in the area of the datacenters (snow and ice). The truck transporting things took a bit longer to arrive, and the folks doing the move had to head home before things became impassable, then took longer to get back in to finish cabling. :(

  • There was a miscommunication between planning folks and datacenter folks on the ground: we thought that everything was moving to dual 10G network (so networking can upgrade/reboot switches and we are fine). The folks doing the actual cabling thought that we were just reconnecting things the way the old datacenter was set up (a single 1G connection). So, it took a while to get 10G all connected and configured.

  • Of course there were some casualties too: One machine (our retrace server) had a broken rail. DC folks got it set up anyhow, but new rails are going to need to be installed soon. And another of our servers for some reason refuses to accept any mgmt passwords. That will need to be reset locally.

  • There's one cursed machine that has a 10G network card in it, and lspci on the server shows it, but it has no interfaces for it, and the mgmt interface doesn't show it at all. Probably the card is dead or otherwise needs vendor intervention.

Otherwise important things are back up with some reinstalling and cleanup to be done. Here's hoping for a datacenter moveless 2026!

Scraper news

I did a bunch of tweaking since last week to try and get things to a state where we don't need manual intervention for scraper handling. This included some httpd changes, some proxy changes and a bit of hardware tuning. So far, we are doing good. I haven't had to manually look at it much this week. We have still been under scraper load, but blocking the blame endpoint really helped along with the other tuning. I hope it will be a quiet holidays.

December of docs

We are just under halfway through December, and so far I have kept up working on at least one docs PR/ticket every day.

We have moved infra docs over to forge now also!

You can see activity here:

https://forge.fedoraproject.org/infra/docs/activity/monthly

Hopefully I can keep it up. We are down about 21 tickets now. Perhaps I can even do a bit more now that I am on holidays.

Happy holidays everyone

Happy holidays everyone!

comments? additions? reactions?

As always, comment on mastodon: https://fosstodon.org/@nirik/115713891385384001

Fedora 41 support

Posted by Phil Wyett on 2025-12-13 14:46:00 UTC
Fedora 41 went End Of Life (EOL) on December 10, 2025. I have removed my Virtual Machines (VMs) for Fedora 41 and will be doing no further maintenance or builds for this release.

The illusion of sovereignty: Why curl | bash destroys your investment in Cybersecurity

Posted by Rénich Bon Ćirić on 2025-12-12 16:00:00 UTC

Your company just finished Series B. You have cash to spend. You have a great product with a Go stack that compiles to a single static binary.

Your engineering team spends 25% of their time securing that stack. You invest millions in infrastructure defense: Stateful firewalls, AI-augmented scanning, Red Teams, Blue Teams, and rigorous DevSecOps pipelines. You even partner with major cloud providers to ensure your supply chain is audited.

You are serious about security. You have built a fortress.

And then, to install your product, you tell enterprise customers to run this:

curl -fsSL https://my-really-cool-company.com/install.sh | sh

This single line undermines your entire security architecture.

Why is this so dangerous?

First, consider the distribution mechanism. Teams automate releases. They push to Git, CI runs tests, and the binary is pushed to an object store or CDN. The install.sh script is just a pointer to that location.

The problem is Mutability

When a sysadmin runs that command, they are piping an unverified, unsigned script directly into a shell, often as root.

If any point in that supply chain is compromised (your CDN is hijacked, your build server is breached like in the Codecov or SolarWinds attacks, or a rogue maintainer inserts a backdoor like in the recent XZ Utils / liblzma incident), your customer downloads the malware instantly.

There is no audit trail. There is no cryptographic signature verifying the author. There is just a script that can change content between the time you audit it and the time you run it.

The Trust Fallacy

We operate in good faith. You assume the vendor is secure. But supply chain security isn't about trusting the vendor; it's about verifying the artifact.

If you are selling to Government, Defense, or Finance, "trust" is not a strategy. Sovereignty is the strategy. These clients need:

  1. Immutability: A guarantee that the binary hasn't changed.
  2. Provenance: Cryptographic proof of origin (GPG).
  3. Sovereignty: The ability to mirror the software in an air-gapped environment without reaching out to the public internet during installation.

The Solution: Native Packaging

In the Enterprise Linux ecosystem (RHEL, CentOS, Fedora, etc.), we solved this decades ago.

  • RPMs allow for offline installation.
  • GPG Signatures ensure the binary was built by you.
  • Repositories allow clients to mirror and scan the software before it touches their production servers.
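
For example, here is a minimal sketch of what the verification flow looks like on the customer's side; the key URL and package name are placeholders:

# Import the vendor's public GPG key once, out of band.
sudo rpm --import https://my-really-cool-company.com/RPM-GPG-KEY

# Verify the digests and signature of the downloaded artifact.
rpm -K my-really-cool-product-1.0-1.x86_64.rpm

# Install only after the signature checks out.
sudo dnf install ./my-really-cool-product-1.0-1.x86_64.rpm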

If you are asking enterprise clients to pipe shell scripts, you are asking them to surrender their sovereignty. It is time to treat your delivery mechanism with the same rigor as your source code.

SSH Certificates: A Complete Guide

Posted by Rénich Bon Ćirić on 2025-12-12 16:00:00 UTC

I've been wondering for a while now: "What else is there to SSH and its certificates, keys, and all the rest?" OpenSSH surely has more to offer than what we use day to day. A quick dive into its man pages confirms it.

Honestly, SSH certificates are beasts at handling authentication at scale, with automatic expiration and seriously strict policies. On Fedora 43, with SELinux watching our backs, it's even more secure.

This complete guide is your map (and mine) for mastering SSH certificates. It includes in-depth comparisons, security best practices, automation tips, and extensive examples. Whether you're securing a data center or a home lab, this will level up your SSH game.

Note

This guide assumes OpenSSH 7.0+ for full certificate support. Check your version with ssh -V. For production, use hardware security modules (HSMs) for the CA keys.

Parallels with Standard SSH

Standard SSH with user keys:

  • Workflow: Users generate key pairs; admins manually add the public keys to ~/.ssh/authorized_keys on every server.
  • Pros: Eliminates passwords; strong cryptography; works offline.
  • Cons: A scalability nightmare; adding/removing users means touching every server; no automatic expiration or restrictions; compromised keys linger until cleaned up manually; the audit trail is limited to logs.

Standard SSH with passwords:

  • Workflow: Users authenticate with passwords stored on servers (usually hashed).
  • Pros: Zero setup for users; simple to implement.
  • Cons: Susceptible to brute-force attacks; weak passwords are common; no auditing of successful logins; passwords can be phished or reused; centralized password changes are error-prone.

SSH certificates:

  • Workflow: A CA signs keys into portable, policy-rich credentials.
  • Better than user keys: Centralized issuance/revocation; certificates expire automatically; options for command/IP restrictions; easier auditing via key IDs and serials.
  • Better than passwords: Cryptographically strong; no shared secrets; supports multi-factor (e.g., with FIDO); audit-friendly.
  • Worse than both: Initial CA setup complexity; the CA private key is a single point of failure (if it's compromised, every certificate is invalid; rotate it immediately); requires OpenSSH support; not compatible with old SSH clients/servers.

In short, certificates shine for organizations that need scalable, policy-driven authentication. For personal use or small deployments, traditional keys are usually enough.

How SSH Certificates Work

An SSH certificate is a CA-signed wrapper around a public key, containing:

  • Public key: The user/host key being certified.
  • Key ID: A human-readable identifier (e.g., "juan@empresa").
  • Principals: Allowed usernames (for user certificates) or hostnames (for host certificates).
  • Options: Permissions/restrictions (for example, "no-port-forwarding").
  • Validity period: Start/end dates for automatic expiration.
  • Serial number: A unique ID for revocation.
  • CA signature: Proves authenticity.

Servers check the CA's public key to verify signatures, eliminating per-user key storage. Certificates are portable and self-contained.

Prerequisites

  • OpenSSH 7.0+ (certificates were introduced in v5.4, but the feature wasn't fully finished until v7.0). On Fedora 43 it's installed by default (openssh-10.0p1-5.fc43.x86_64 at the time of writing).
  • Access to a secure machine for CA operations (ideally offline).
  • Basic knowledge of SSH key generation.

Tip

On Fedora, SELinux can block access to CA files or certificates. Always check contexts with ls -Z and adjust them if necessary.

Generating a CA Key

The CA key is the foundation; its private half signs every certificate, so guard it fiercely.

Choose a strong key type (ed25519 is recommended for speed/security):

ssh-keygen -t ed25519 -f ca_key -C "SSH CA para ejemplo.tld"

This creates ca_key (private; never share it) and ca_key.pub (public; distribute it to your servers and clients).

For RSA (if ed25519 isn't supported):

ssh-keygen -t rsa -b 8192 -f ca_key -C "SSH CA para ejemplo.tld"

Important

Back up the private key securely.

Creating User Certificates

User certificates let users authenticate to servers without adding keys individually. Let's go step by step, starting with the basics and layering on functionality to make things more secure and flexible. That way it's easier to understand what each option does.

Step 1: A Basic Certificate

First, generate the user's key if you don't already have one:

ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519 -C "juan@ejemplo.tld"

Now, sign the public key with the CA to create a basic certificate. The -s option specifies the CA's private key, -I is the identity (a readable ID to identify the cert), and -z is a unique serial number to avoid collisions.

ssh-keygen -s ca_key -I "juan@ejemplo.tld" -z 1 ~/.ssh/id_ed25519.pub

This creates ~/.ssh/id_ed25519-cert.pub, a certificate valid for any user on any host that trusts the CA. Basic, but functional.
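
You can sanity-check what the CA actually signed (key ID, serial, principals, validity) right away; Example 6 below shows the same command in context:

ssh-keygen -L -f ~/.ssh/id_ed25519-cert.pub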

Additional Options

Limiting Principals

For more security, limit the certificate to specific users with -n (principals). This prevents the certificate from being used for unauthorized logins.

ssh-keygen -s ca_key -I "juan@ejemplo.tld" -n juan,respaldo -z 2 ~/.ssh/id_ed25519.pub

Now only "juan" or "respaldo" can use this certificate to authenticate. Useful for shared teams.

Adding Restrictions

Add options with -O to limit what the user can do. For example, no-port-forwarding blocks tunnels, and no-agent-forwarding prevents SSH agent forwarding.

ssh-keygen -s ca_key -I "juan@ejemplo.tld" -n juan,respaldo -O no-port-forwarding -O no-agent-forwarding -z 3 ~/.ssh/id_ed25519.pub

This makes the certificate safer, preventing abuse like unauthorized port forwarding.

Setting a Validity Period

Set a validity period with -V so the certificate expires automatically. Use formats like +30d (30 days from now) or absolute dates.

ssh-keygen -s ca_key -I "juan@ejemplo.tld" -n juan,respaldo -O no-port-forwarding -O no-agent-forwarding -V +30d -z 4 ~/.ssh/id_ed25519.pub

Now the certificate lasts only 30 days, forcing periodic renewal to keep things secure.

Forcing a Command

For automation (such as backups), use -O force-command to restrict the certificate to a specific command. Ideal for scripts that don't need a full shell.

ssh-keygen -s ca_key -I "respaldo@ejemplo.tld" -n respaldo -O force-command="/usr/bin/rsync --server --daemon" -V +1d -z 5 ~/.ssh/id_ed25519.pub

This certificate only allows rsync, nothing else. I changed the ID to "respaldo" to tell it apart.

Hardware-Backed Keys

For hardware-backed CA keys, use PKCS#11 tokens with -D (the path to the library):

ssh-keygen -s ca_key.pub -D /usr/lib/opensc-pkcs11.so -I "juan@ejemplo.tld" -z 6 ~/.ssh/id_ed25519.pub

Or, to hold the CA key in memory with ssh-agent, add the key and use -U:

ssh-add ca_key
ssh-keygen -Us ca_key.pub -I "juan@ejemplo.tld" -z 7 ~/.ssh/id_ed25519.pub

These are for more advanced deployments; they're not always necessary.

Creating Host Certificates

Host certificates prove a server's identity to clients, preventing MitM attacks.

Steps (run on each host):

  1. Generate host keys (if not present; specify the type to avoid clutter):

    sudo ssh-keygen -t ed25519 -f /etc/ssh/ssh_host_ed25519_key -N ""
    

    Note

    Your distribution most likely does this for you. On Fedora 43, a service called sshd-keygen runs at every boot and generates the host key if it doesn't exist. It actually generates 3 of them.

  2. Basic signing:

    ssh-keygen -s ca_key -I "web01.ejemplo.tld" -h -z 1001 /etc/ssh/ssh_host_ed25519_key.pub
    

    Output: /etc/ssh/ssh_host_ed25519_key-cert.pub.

  3. Limit the principals (hostnames and IPs, to make it more robust):

    ssh-keygen -s ca_key -I "web01.ejemplo.tld" -h -n web01.ejemplo.tld,web01,10.0.0.5 -z 1002 /etc/ssh/ssh_host_ed25519_key.pub
    
  4. Set the validity:

    ssh-keygen -s ca_key -I "web01.ejemplo.tld" -h -V 20240101:20241231 -z 1003 /etc/ssh/ssh_host_ed25519_key.pub
    

Revoking Certificates

Use Key Revocation Lists (KRLs) to invalidate compromised certificates. Always use unique serials (-z) so revocation can be done effectively.

  1. Create a KRL (from a certificate file):

    ssh-keygen -k -f revoked.krl ~/.ssh/id_ed25519-cert.pub
    
  2. Update the KRL (add another certificate):

    ssh-keygen -k -u -f revoked.krl another-cert.pub
    
  3. Revoke by serial (you don't need the certificate; note that -z sets the KRL version number, so the serial goes in a revocation spec file, together with the CA public key):

    echo "serial: 10" > revoke.spec
    ssh-keygen -k -u -f revoked.krl -s ca_key.pub revoke.spec
    
  4. Distribute the KRL to the servers (e.g., via config):

    In /etc/ssh/sshd_config:

    RevokedKeys /path/to/revoked.krl
    
  5. Check revocation:

    # Exit != 0 if it was revoked
    ssh-keygen -Q -f revoked.krl ~/.ssh/id_ed25519-cert.pub
    

Configuring Trust

Trust is configured so that servers and clients recognize the CA and can verify certificates. Without this, certificates are worthless; the server rejects logins because it doesn't trust the CA's signature. Honestly, it's like handing someone a letter of recommendation from a person they've never heard of. Let's take it piece by piece so it's crystal clear.

User certificates (server side)

Here, the centralized method is the one you want. Your users are already baked into the certificate, and you only have to distribute the CA's public key to all the nodes. This is ideal for companies or large clusters, because you avoid touching each user's files.

I list the other methods just so you know about them, in case you need them for small setups.

  • Centralized method: Use TrustedUserCAKeys in /etc/ssh/sshd_config for centralized management. This directive tells SSH to trust the CA to sign user certificates, with no need for individual authorized_keys files.

    TrustedUserCAKeys /etc/ssh/ca.pub
    

    Then put the CA pubkey in /etc/ssh/ca.pub. Distribute it to all your servers (e.g., with scp or Ansible). This avoids touching user files and simplifies revocation.

  • Per-user method: Add to ~/.ssh/authorized_keys (useful for personal servers, but it doesn't scale). The "cert-authority" line indicates that any cert signed by that CA is valid for that user.

    echo "cert-authority $(cat ca_key.pub)" >> ~/.ssh/authorized_keys
    
  • Decoupled identities: Use AuthorizedPrincipalsFile in sshd_config to map certificate principals to local users (e.g., map "juan@corp" to "centos"). Principals are like IDs in the cert that say who you are, without depending on the system username.

    AuthorizedPrincipalsFile /etc/ssh/principals/%u
    

    Create /etc/ssh/principals/centos with:

    juan@empresa
    pancho@empresa
    

    Note

    Make sure the file is owned by root:root with 600 permissions, to prevent privilege escalation. On Fedora, SELinux may require the correct contexts (e.g., restorecon -Rv /etc/ssh/principals). This enables shared accounts without per-user authorized_keys, but check that the principals match exactly.

Host certificates (server side)

Host certificates prove the server's identity to the client. Configure them by adding the certificate path to sshd_config. SSH will present it automatically during connections to prevent MitM attacks.

  • Add the certificate path to /etc/ssh/sshd_config:

    HostCertificate /etc/ssh/ssh_host_ed25519_key-cert.pub
    

Client side:

The client also needs to trust the CA in order to verify certificates.

  • For host certificates: Add the CA to ~/.ssh/known_hosts or /etc/ssh/ssh_known_hosts. The "@cert-authority" pattern with a wildcard (*.ejemplo.tld) trusts any host in that domain if it's signed by the CA, preventing spoofing.

    echo "@cert-authority *.ejemplo.tld $(cat ca_key.pub)" >> ~/.ssh/known_hosts
    
  • For user certificates: SSH loads certificates automatically if they are named appropriately: id_ed25519-cert.pub next to the private key. If you use ssh-agent, add it with ssh-add so it is available.

Restart SSH services after making changes: systemctl reload sshd. Test with ssh -v user@host to see whether the trust setup works.

Examples

Example 1: Basic User Certificate

  • Scenario: Standard user access.

    ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519
    
    ssh-keygen -s ca_key -I "pancho@empresa" -z 10 ~/.ssh/id_ed25519.pub
    
    # Distribute ca_key.pub to the servers
    # Login: ssh pancho@servidor
    

Example 2: Restricted Backup User

  • Scenario: Automated rsync with IP/command limits. (Note: source-address can be brittle in dynamic cloud environments.)

    ssh-keygen -s ca_key -I "respaldo@servidor" -n respaldo -O force-command="/usr/bin/rsync --server" -O source-address="10.0.0.0/8" -V +1d -z 11 ~/.ssh/id_ed25519.pub
    

Example 3: Host Certificate for a Cluster

  • Scenario: Securing cluster nodes. (Safe flow: bring the keys to the CA machine, sign locally, deploy the certificates.)

    for host in node1 node2; do
      # Copy the host key to the secure CA machine
      scp $host:/etc/ssh/ssh_host_ed25519_key.pub /tmp/$host.pub
    
      # Sign locally with the CA (bump -z per host to keep serials unique)
      ssh-keygen -s ca_key -I "$host.cluster" -h -z 1004 /tmp/$host.pub
    
      # Deploy the certificate back
      scp /tmp/$host-cert.pub $host:/etc/ssh/ssh_host_ed25519_key-cert.pub
    done
    
    # Clients add @cert-authority to known_hosts
    

Example 4: FIDO User Certificate

  • Scenario: Hardware-backed keys.

    ssh-keygen -t ecdsa-sk -f ~/.ssh/id_ecdsa_sk
    ssh-keygen -s ca_key -I "juan@fido" -z 12 ~/.ssh/id_ecdsa_sk.pub
    

Example 5: Certificate with Extensions

  • Scenario: Custom options.

    ssh-keygen -s ca_key -I "dev@team" -O critical:source-address="192.168.1.0/24" -O extension:permit-pty -z 13 ~/.ssh/id_ed25519.pub
    

Example 6: Inspecting Certificates

  • View the details:

    ssh-keygen -L -f ~/.ssh/id_ed25519-cert.pub
    

Example 7: Revoking a Certificate

  • Add it to the KRL and deploy:

    ssh-keygen -k -u -f revoked.krl ~/.ssh/id_ed25519-cert.pub
    scp revoked.krl server:/etc/ssh/
    

Best Practices and Security Considerations

  • CA private key security: Store the CA private key somewhere safe. Never use it on production servers. Rotate CAs annually or upon compromise; use a dual CA: add the public halves of both the old and new CA keys to TrustedUserCAKeys during the transition to avoid lockouts.

  • Validity periods: Use short lifetimes (days/weeks) for user certificates and longer ones for hosts (months/years). Automate renewal.

  • Auditing: Log certificate usage via sshd; monitor for anomalies.

  • Integration: Automate certificate distribution with Ansible (or your janky homegrown equivalent). A sample Ansible task:

    - name: Deploy the CA public key
      copy:
        content: "{{ ca_pub_key }}"
        dest: /etc/ssh/ca.pub
      notify: reload sshd
    
  • Avoid common mistakes: Don't sign certificates with weak options; test your revocation; use ed25519.

Automation and Scaling

For large deployments, avoid exposing CA private keys. Use secure tooling:

  • Manual signing with a script (only on the secure CA machine):

    #!/usr/bin/bash
    
    # ssh-signer.bash - Runs on the machine holding the CA
    user=$1
    key=$2
    serial=$( date +%s%N )  # Unique serial based on a timestamp
    
    ssh-keygen -s ca_key -I "$user" -V +7d -O no-port-forwarding -z "$serial" "$key"
    
  • Host certificates in bulk: For provisioning, use OpenTofu to inject the CA public key. To set it up manually:

    for host in $(cat hosts.txt); do
      serial=$( date +%s%N )  # Unique serial based on a timestamp, per host
      scp $host:/etc/ssh/ssh_host_ed25519_key.pub /tmp/$host.pub
      ssh-keygen -s ca_key -I "$host" -h -z $serial /tmp/$host.pub
      scp /tmp/$host-cert.pub $host:/etc/ssh/ssh_host_ed25519_key-cert.pub
    done
    
  • Monitoring: Use ssh-audit or custom scripts to verify certificates.

Conclusion

SSH certificates transform SSH from a simple protocol into a robust authentication system. By centralizing trust and enabling policies, they offer unmatched scalability and security. Start small, testing with one user/host; then scale. Remember, the CA is your crown jewel; protect it.

RTFM!

Contribute improvements at:

https://github.com/openssh/openssh-portable

Cool, now go and secure your deployments!

Fedora 42→43, more eventful than usual

Posted by Tomasz Torcz on 2025-12-12 10:39:03 UTC

I've found time for dist-upgrade of my home server, finally. As usual, there was one thing needing manual intervention: PostgreSQL update. But this time it was more complicated.

Between Fedora 42 and 43, PostgreSQL jumped from v16 to v18. And postgresql-setup --upgrade handles adjacent-version upgrades only. Fortunately, Fedora ships other version-suffixed packages for this database.

It is possible (and needed!) to use the postgresql-server17 and postgresql17-upgrade packages as an intermediate step in the upgrade. Commands are documented in bz#2411778#c1. It should be included in the F43 Common Bugs list, but it isn't. (And the list itself was moved from the Wiki into Discourse…)

Note to self: the upgrade always fails with my customised postgresql.conf. Remember to plant the default config for the duration of the PostgreSQL upgrade.
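
For future reference, a rough sketch of the intermediate hop using the package names above; the authoritative commands are in bz#2411778#c1:

# Install the intermediate v17 server and its upgrade tooling.
sudo dnf install postgresql-server17 postgresql17-upgrade

# Run the staged upgrade with the stock postgresql.conf in place,
# then restore the customised config afterwards.
sudo postgresql-setup --upgrade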

026/100 of #100DaysToOffload

4 cool new projects to try in Copr for December 2025

Posted by Fedora Magazine on 2025-12-12 08:00:00 UTC
4 packages to try from the Copr repos

This article series takes a closer look at interesting projects that recently landed in Copr.

Copr is a build-system for anyone in the Fedora community. It hosts thousands of projects with a wide variety of purposes, targeting diverse groups of users. Some of them should never be installed by anyone, some are already transitioning into the official Fedora repositories, and others fall somewhere in between. Copr allows you to install third-party software not found in the standard Fedora repositories, try nightly versions of your dependencies, use patched builds of your favourite tools to support some non-standard use-cases, and experiment freely.

If you don’t know how to enable a repository or if you are concerned about whether it is safe to use Copr, please consult the project documentation.

Vicinae

Vicinae is a fast application launcher written in C++/Qt. Inspired by Raycast, it provides instant app and file search and clipboard history. It also includes built-in utilities such as a calculator and web search, along with support for extensions written in TypeScript. It is designed to be highly responsive and native to the Wayland environment. If you like keeping your hands on the keyboard or want a customizable, extensible launcher for your desktop, Vicinae may be worth trying.

Vicinae launcher in action.

Installation instructions

The repo currently provides vicinae for Fedora 42, 43, and Fedora Rawhide. To install it, use these commands:

sudo dnf copr enable scottames/vicinae
sudo dnf install vicinae

UZDoom

UZDoom is a modern DOOM source port that builds upon the classic GZDoom engine, offering hardware-accelerated rendering, an updated scripting system, improved mod support, and high-quality audio playback. At the same time, it maintains compatibility with classic WAD files while keeping the experience smooth on current systems.

Whether you are playing the original episodes or diving into extensive mod packs, UZDoom offers a convenient way to enjoy them.

Installation instructions

The repo currently provides uzdoom for Fedora 42, 43, and Fedora Rawhide. To install it, use these commands:

sudo dnf copr enable nalika/uzdoom
sudo dnf install uzdoom

Plasma Panel Colorizer

Plasma Panel Colorizer is a widget for KDE Plasma that allows you to customize the panel’s appearance. It offers options for background tinting, blur, custom opacity levels, shadows, floating panels, and themes that differ from the stock Plasma look. It is updated for Plasma 6, making it easy to adjust your panel exactly the way you want.

Different looks you can get with the Plasma Panel Colorizer.

Installation instructions

The repo currently provides plasma-panel-colorizer  for Fedora 42, 43, and Fedora Rawhide. To install it, use these commands:

sudo dnf copr enable peridot-augustus/plasma-panel-colorizer
sudo dnf install plasma-panel-colorizer

sfizz-ui

Sfizz-ui is the graphical interface for the sfizz sampler engine, which is an open-source player for SFZ instrument libraries. The UI provides an accessible way to load SFZ instruments, adjust parameters, and integrate the sampler into your workflow. It also includes plugin support such as LV2 and VST3, making it suitable for music creation in a Linux DAW environment.

For musicians, sound designers, or anyone using SFZ sample libraries, sfizz-ui offers a polished interface.

Installation instructions

The repo currently provides sfizz-ui for Fedora 41, 42, and 43. To install it, use these commands:

sudo dnf copr enable lexridge/sfizz-ui
sudo dnf install sfizz-ui

The syslog-ng Insider 2025-12: logrotation; release RPM; nightly RPM

Posted by Peter Czanik on 2025-12-11 12:15:55 UTC

The December syslog-ng newsletter is now on-line:

  • File size-based log rotation in syslog-ng
  • Syslog-ng release packages for RHEL & Co.
  • Nightly syslog-ng RPM packages for RHEL & Co.

It is available at https://www.syslog-ng.com/community/b/blog/posts/the-syslog-ng-insider-2025-12-logrotation-release-rpm-nightly-rpm

syslog-ng logo

matrix server upgrades

Posted by Fedora Infrastructure Status on 2025-12-10 12:30:00 UTC

Element Matrix services will be upgrading our fedora.im and fedoraproject.org servers to use the new Matrix Authentication Server. This will allow clients to use the new Element X and similar clients.

During the outage matrix servers will be unavailable, but messages will be received after the outage is …

The do’s and don’ts of measuring contributions “outside of working hours”

Posted by Ben Cotton on 2025-12-10 12:00:00 UTC

LFX Insights is a handy platform from the Linux Foundation that provides a variety of data on open source projects. Among the statistics it reports is contributions outside of working hours. Some users reported errors with how this information is reported, which got me thinking about the value of this measure. The short version: there’s very little value.

Why measure outside-of-working-hours contributions?

LFX Insights includes this measure as a signal of a project’s sustainability. Projects that rely heavily on people making after-hours contributions, the thinking goes, will have a harder time attracting and retaining contributors.

As a software consumer, you don’t want your upstreams to suddenly disappear because that will present supply chain risks. It could mean vulnerabilities go unpatched. It could also mean that new features aren’t added. Either way, this puts the onus on the project’s users to carry the load.

As a project leader, you may be less concerned about whether or not a company downstream has to devote extra engineering time, but you probably do want your contributors to stick around anyway. Onboarding, mentoring, and growing contributors takes a lot of time and effort. You want to make sure people can stick around.

Why this measure fails

Despite the good intentions of measuring contributions outside working hours, the reality fails to deliver. There are some straightforward reasons for this. Not everyone’s working hours are the same. Not everyone’s working hours are consistent. Some people use a different time zone on their computer. Not everyone’s working days are the same. Holidays vary widely across countries and religions. People (hopefully) take time off.

Then there’s the cultural mismatch. Linux Foundation projects are, to a first approximation, by companies for companies. The Linux Foundation is a 501(c)(6), not a charity, so it makes sense that it would view the world through a business lens. I don’t fault them for that. LF project contributors are more likely to make contributions during the working day than contributors to a “hobbyist” project.

But that workday tendency doesn’t necessarily mean people will stick around projects longer if the project is tied to their job. As the last few years have shown, tech sector layoffs can come for anyone at any time. If someone is only working on an open source project because it’s part of their job, then when the job changes, they’ll probably stop. People who work on an open source project for non-job reasons will likely stick around through job changes.

Thus one could argue that a project with a high degree of outside-working-hours contributions is more sustainable.

What to measure instead

If measuring contributions outside of working hours isn’t helpful, what is? Focus on what you’re worried about. Worried that everyone will disappear? Measure the activity over time. Worried that when a new vulnerability is discovered the lone maintainer will be backpacking through the Alps? Measure the spread of the contributions. Worried that the project doesn’t have enough people to follow secure coding practices? Measure the security posture.

Of course, the best answer is to stop trying to measure sustainability and contribute to making the project more sustainable instead.

This post’s featured photo by Joshua Olsen on Unsplash.

The post The do’s and don’ts of measuring contributions “outside of working hours” appeared first on Duck Alignment Academy.

Talk to your Fedora system with the linux-mcp-server!

Posted by Fedora Magazine on 2025-12-10 08:00:00 UTC
Talk to your Fedora system with the Linux MCP Server!

Generative AI systems are changing the way people interact with computers. MCP (model context protocol) enables LLMs to run commands and use tools, allowing live, conversational interaction with systems. Using the new linux-mcp-server, let’s walk through how you can talk with your Fedora system to understand it and get help troubleshooting it!

Introduction

Large language models (LLMs) can be an invaluable tool when investigating an issue on a Linux system. However, this can involve a lot of copy/pasting of information from the Linux terminal into a web-based interface to an LLM.

The model context protocol (MCP) acts as a bridge, enabling LLMs to interact with external tools and data sources.  The linux-mcp-server utilizes this protocol to give LLMs the ability to interact with a Fedora Linux system.  Instead of you manually copying and pasting terminal output, the linux-mcp-server enables the LLM to directly query system information and log entries.  

By giving an LLM direct access to system information and logs, it becomes an active part of the investigation process when troubleshooting an issue. It can directly query the system state, helping to identify performance bottlenecks and spot important log entries that might be missed by a manual review.

What is the model context protocol (MCP)?

Anthropic introduced MCP in November 2024 as an open standard for LLM tool use. This provides a way for LLMs to interact with outside systems and data sources.  

Prior to MCP, there wasn’t a strong standard and ecosystem for LLM systems to call tools. LLMs were thus frequently limited to the information contained in their training; they were isolated from the outside world. For example, if you asked an LLM “what is the weather going to be next week”, the LLM would respond with a message indicating that it doesn’t know what the weather will be, as it doesn’t have access to that information. MCP helps solve this problem by providing a standardized way for an LLM to access an outside data source, such as the weather forecast.

At a high level, users can use an AI agent application, such as Goose (open source), or Claude Desktop, and specify which MCP servers they would like to use.  The AI agent application informs the LLM that there are tools available via these MCP servers that can be used to help answer the requests from the user. The LLM model can then decide when to invoke these tools.  

MCP is an open standard. You have the flexibility to use MCP servers, such as linux-mcp-server, with either open source-licensed LLM models, or hosted proprietary LLM models. 
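
Under the hood, MCP messages are JSON-RPC 2.0. As a simplified sketch (the tool name and fields here are illustrative; real servers return richer metadata), an agent asking a server which tools it offers looks roughly like this:

{"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

{"jsonrpc": "2.0", "id": 1,
 "result": {"tools": [{"name": "get_system_information",
                       "description": "Read-only overview of the host"}]}}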

What is the linux-mcp-server?

The linux-mcp-server is a project started by Red Hat’s RHEL Engineering team. It provides a number of tools that enable an LLM to query information from a Linux system, such as system information, service status and logs, process information, journald and other log files, network information, and storage and disk information. For a full list of the tools provided, refer to the project’s GitHub page.

The tools provided by linux-mcp-server focus on giving the LLM read-only access to information. In the future, we’ll be exploring expanding past these read-only use cases.

The linux-mcp-server can be used to interact with the local Fedora Linux system that it is running on. It can also be used to interact with remote Fedora Linux systems over SSH. For example, if you have SSH key authentication set up with the remote systems, you could make a request to your AI agent application such as “Determine the current memory usage on the fedora1.example.com, fedora2.example.com, and fedora3.example.com servers”.

Prerequisites

The main components needed are an AI agent application, access to LLM model inference, and the linux-mcp-server. 

There are a number of options for the AI agent application, both open source and proprietary.  An example of an open source AI agent is Goose, which provides an RPM that can be installed on Fedora.

There are several LLM model inference options, ranging from locally hosted open source models to proprietary hosted LLM models. If hosting a model locally, you generally need to have GPU/accelerator hardware available. Open source tools that can be used to locally host LLM models include RamaLama, llama.cpp, and Ollama. There are a number of open source LLM models capable of tool calling, including gpt-oss-20b and Granite 4 small.
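
For example, one way to serve a tool-calling model locally is with Ollama (assuming the model tag is available in your model library); an AI agent application can then be configured to use the local endpoint:

# Fetch the model, then run it locally.
ollama pull gpt-oss:20b
ollama run gpt-oss:20b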

Once you’ve installed your preferred AI agent application, the next step is to install the linux-mcp-server and configure your AI agent application to use it. The linux-mcp-server Github page includes installation instructions, and configuration details for Goose and Claude Desktop.

Use cases for linux-mcp-server

⚠ NOTE

The example scenarios below include LLM generated output. The output is presented here as it was generated by the AI systems cited so you can assess the quality of the output they are capable of, particularly in a non-cloud, self-hosted scenario.

The sample LLM output provided here is not intended to be run by readers without the same level of review we recommend anyone perform on AI generated content prior to use on their systems.

Dear Fedora, why are you so slow today?

Has this happened to you? Time to hop on a call and give a demo. You’re pumped. It’s your go-time aaaaand… your system starts lagging. Folks on the call say your video is choppy or they can’t understand you. OH NO.

You’ve got to assess what’s up, quickly. Run `top`, check `dmesg`, check your network connection (read the man page for `netstat` because you can’t remember the flag you want), check `journalctl`, cross-reference what you’re seeing, then start killing processes, restarting services, reloading kernel modules willy-nilly based on hastily-crafted theories, see if it improves… 

This is a case where you could actually ask the system to tell you why it’s slow and give it the ability to do the analysis via linux-mcp-tools. The LLM you connect with won’t just run the commands through the MCP server; it will understand what the numbers it gets mean in context and be able to explain them in human terms.

So let’s start working with a relatively new laptop. It’s a Thinkpad T14S with an AMD Ryzen™ AI Pro 7 (Ryzen™ AI 360) chip. Recently, one of the wifi networks it must connect to regularly was upgraded, and it now has issues with network connection stability. So let’s talk to Goose + gpt-oss:20b + linux-mcp-server about it. Here’s the initial prompt I used:

My Wifi doesn't work very well. It cuts out after a few minutes. Help me fix it. Can you find the error messages in the system logs and diagnose the problem?

It responded with the following:


🤖 LLM output 🤖

I’ll help you diagnose your WiFi connectivity issues by examining the system logs for error messages related to your network interfaces and wireless connectivity. Let me start by gathering information about your system and checking the relevant logs.

It then made a series of tool calls, including Get System Information, Get Network Interfaces, Get Journal Logs, Get Hardware Information, Get Service Status service_name:NetworkManager, Get Service Status service_name:wpa_supplicant, Read Log File. The result was this analysis, which is the actual problem I was hoping it could find!


🤖 LLM output 🤖

Problem Summary – AT‑H12K PCI Wi‑Fi Issue

⚡ Problem Summary

Your ATH12K PCI (Qualcomm 802.11ax) wireless card repeatedly reports


ath12k_pci 0000:c2:00.0: failed to enqueue rx buf: -28

every few minutes, which triggers a drop of the wireless link. The driver is trying to enqueue an RX buffer for the hardware but fails with errno ‑28.

Errno ‑28 is -ENOMEM (out‑of‑memory for the driver buffer); the kernel cannot allocate a DMA‑coherent RX descriptor.

✅ Likely Root Causes

  • Cause: Driver bug or missing patch. Evidence / typical scenario: the kernel’s ath12k driver has had a long‑running issue with RX ring management in older 5.15/5.4 series. Fix / work‑around: update the kernel or apply the latest patch from the linux tree (5.18+ or the ath12k backports).
  • Cause: Out‑of‑memory in the driver. Evidence: the error itself is “‑28”, a memory‑allocation failure inside the driver. Fix: reduce the number of RX descriptors, or enable fast‑RX disabled mode (modprobe ath12k_pci ath12k_pci_disable_fast_rx=1).
  • Cause: Firmware mismatch / corruption. Evidence: old firmware (ath12k_hwgen files) doesn’t match the driver version. Fix: reinstall firmware from your distro (sudo apt‑install --reinstall linux‑firmware) or copy the latest from kernel source.
  • Cause: System memory pressure. Evidence: if many big processes run, the DMA‑coherent memory pool can be exhausted. Fix: free RAM, close memory‑heavy apps, or add swap.
  • Cause: ACPI/PCIe power‑management quirk. Evidence: some laptops throttle the Wi‑Fi device after idle, resetting the buffer queue. Fix: add pci_bus_reset = off or set ath12k_pci_disable_ps=1 in a modprobe conf.
  • Cause: Hardware issue (e.g., faulty PCIe slot or NIC). Evidence: persistent failure after multiple driver/kernel upgrades. Fix: test on another machine/PCIe slot or replace the card.

🔎 Diagnostic Checklist

  1. Kernel & Driver Version
    
    
    uname -r
    modinfo ath12k_pci | grep ^version:
  2. Driver Parameters
    
    
    cat /sys/module/ath12k_pci/parameters/*

    – See if ath12k_pci_disable_fast_rx or similar is already set.

  3. Firmware Location
    
    
    ls /lib/firmware/ath12k/ath12k/ar9009/ | grep wlan0 | head

    – If the files are missing or older than the driver, copy newer ones from /lib/firmware of a newer kernel.
  4. Kernel Log Around the Error
    
    
    sudo dmesg --ctime | grep -A3 'ath12k_pci 0000:c2:00.0'

    – Observe if there are any other “-ENOMEM” or “resource exhaustion” messages.

  5. Memory Usage Stats
    
    
    free -h
    vmstat 1 5
  6. PCIe Latency / Link State
    
    
    sudo lspci -vvv -s c2:00.0 | grep -iE "link|latency"

🚀 Quick‑Fix Path (Try in this order)

  1. sudo apt update && sudo apt upgrade linux-firmware (refresh firmware to the latest distro version).
  2. sudo apt upgrade linux-image-$(uname -r) (apply the latest kernel with the patched ath12k driver, or switch to 5.18+ if still on 5.15).
  3. echo "ath12k_pci ath12k_pci_disable_fast_rx=1" | sudo tee /etc/modprobe.d/ath12k.conf (force the driver to use a more conservative RX scheme).
  4. Reboot and monitor: dmesg -T | grep ath12k_pci (verify the error disappears).
  5. Optional: test on another machine or use a USB‑WiFi dongle (USB 802.11ax / 802.11ac) as a temporary workaround (ensure the issue is not hardware‑specific).

**Tip:** After reboot, you can also verify driver parameters:

cat /sys/module/ath12k_pci/parameters/ath12k_pci_disable_fast_rx

📺 Advanced Debugging (if basic fixes don’t help)

  1. Enable debug logs
    sudo bash -c 'echo 1 > /sys/module/ath12k_pci/parameters/debug'
    sudo dmesg -T | grep ath12k
  2. Check PCIe device state
    sudo lspci -vv -s c2:00.0
  3. Use ethtool on the Wi‑Fi interface
    sudo ethtool -S wlp4s0
  4. Look for firmware logs
    grep -i 'ath12k' /var/log/syslog

📌 Bottom Line

The “failed to enqueue rx buf: -28” is an internal driver allocation failure that typically follows one of two paths:

  1. The driver or firmware is out of date → upgrade kernel/firmware.
  2. The driver configuration is too aggressive for your system’s memory budget → tweak module parameters or free RAM.

If you're still stuck after following the quick‑fix path, it's worth capturing more detailed logs with the above diagnostic commands and sharing them. We can then cross‑reference the exact kernel version/driver commit to see if a known patch exists.

Feel free to let me know which step you’ve tried or if you encounter any new log messages, and we’ll dig deeper!


Not too bad an analysis and set of debugging steps! I’ll follow up in the comments and let you know if the solution works.
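If you try the quick‑fix path yourself, a simple way to watch for the error recurring is to follow the kernel log (a quick manual check of my own, assuming a systemd journal; adjust the driver name to match your hardware):

# Follow kernel messages live and flag any ath12k errors as they appear
sudo journalctl -k -f | grep --line-buffered ath12k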

Where’d my disk space go, Fedora?

Trawling through `du` on the command line, converting the numbers in your head, sorting them – it can be tedious. We even have graphical tools like `baobab`, but it takes time to load the results and explore the data. And sometimes you have questions it can’t answer, like how risky it is to delete a specific directory, or how old a specific file is. It’s easier to be sloppy and just let the disk fill up. But there comes a time when that is no longer an option.
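For reference, the manual route usually looks something like this (one sketch of many; adjust the path and depth to taste):

# Ten largest directories, two levels deep, staying on one filesystem
sudo du -xh --max-depth=2 /home 2>/dev/null | sort -rh | head -n 10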


You can ask the system using the linux-mcp-server to analyze your disk usage, give you a hitlist of the top space-eating places on the file system, and give you advice with a risk analysis of how to clean it up.

Here’s my prompt:

I need help analyzing disk usage on my system. Can you:

1. Show me the current disk usage across all filesystems using get_disk_usage
2. Use list_directories_by_size to show me the 10 largest directories (this will help identify where disk space is being consumed)
3. Check if there are any large log files consuming space using get_journal_logs or read_log_file
4. Review running processes with list_processes to see if any are consuming excessive resources or creating temporary files

After gathering this information, help me identify potential space-saving opportunities and assess the safety of cleaning up specific directories. Please don't only examine top-level directories. I want you to go deep if needed to find out where the largest files are!

I ran this prompt on my Fedora 42 system using linux-mcp-server with Goose connected to locally-served gpt-oss:20b, and here’s the output I got:


🤖 LLM output 🤖

Disk‑space overview (as of Nov 24 2025 15:21)

| Filesystem | Size | Used | Avail | Use % | Mounted on |
| /dev/mapper/luks‑61cedb0e… | 952.3 GB | 548.9 GB | 399.6 GB | 57.9 % | / |
| /dev/mapper/luks‑61cedb0e… | 952.3 GB | 548.9 GB | 399.6 GB | 57.9 % | /home |
| /dev/nvme0n1p2 | 973.4 MB | 462.8 MB | 443.4 MB | 51.1 % | /boot |
| /dev/nvme0n1p1 | 598.8 MB | 19.3 MB | 579.6 MB | 3.2 % | /boot/efi |

Overall usage is comfortably below 60 % of a 952 GB volume, so immediate capacity pressure isn’t a concern. The biggest contributors to disk space today are in the ~/ hierarchy.


1. Biggest directories on the system

| Path (relative to the mount point) | Size (≈) | Notes |
| ~/ – User home (2 mounts: /home + /) | 1 TB total | 57 % used; the bulk of the space is in ~/ |

| ~/duffy/.local/share | 238 GB | Containers sub-dir, followed by Trash (7GB) – typical podman/docker container images |
| ~/duffy/.local/share/containers | 238 GB | Container layers, images, root filesystems – the prime candidate for cleanup |
| ~/duffy/.local | 266 GB | share (237 GB) dominates it |
| ~/duffy/.local/share/containers (inside share) | 238 GB | Same as above – just a different view |
| ~/duffy/.local/lib | 13 GB | Likely a virtual‑environment library set |
| ~/duffy/.local/bin | 4 GB | Helper binaries |
| ~/duffy/.local/share/Trash | 7 GB | User‑deleted items still resident on disk |
| ~/duffy/.local/share/containers (nested) | 238 GB | Same as above; redundant listings from recursive depth‑1 scans |
| ~/duffy/.local/share (top‑level) | 238 GB | containers is the absolute largest item |

Take‑away – 75 % of the 238 GB live inside three nested directories:

  • ~/duffy/.local/share/containers (238 GB) → Rooted in Podman/Container‑Store
  • ~/duffy/.local/share (238 GB) → Same data exposed at a higher level
  • ~/duffy/.local/share/Trash (7 GB)

2. Possible space savers

| Category | Recommendation | Rationale |
| Containers | podman system prune -a (or docker system prune -a if Docker is in use) | Removes unreferenced images, containers, and caches. This can reclaim 200–250 GB if most containers are outdated or no longer referenced. |
| Trash | gio trash --empty | The 7 GB in Trash is still taking physical space; simply emptying it will free that amount. |
| Large libs/binaries | Check ~/duffy/.local/lib (13 GB) and ~/duffy/.local/bin (4 GB) | If these are not actively used by a project, consider uninstalling or moving them to a secondary, less‑frequented store. |
| Ramalama & UV | 9 GB (ramalama), 8 GB (uv) | Possibly temporary files or caches from development tools. Inspecting the contents of these directories (du -h) can reveal if they are still needed. |
| Other caches | pnpm store prune, virtualenv or pip cache purge | Development tool caches often accumulate; they’re usually safe to delete once you’re done with the current development sessions. |
| Logs | The system journal shows normal activity, and syslog access is blocked by policy. | systemd journal stores are rotated automatically; no urgent action required. |
| Boot partitions | Not relevant – they’re low‑usage (/boot/efi only 3 % used). | No action needed. |

3. Disk‑IO perspective

  • Reads: 111.8 GB; Writes: 85.6 GB since boot (≈ 1.2 TB total).
  • If writes are high relative to free space, consider enabling a tmpfs overlay for temporary files or moving large data to a separate, faster SSD. But in this case, writes are not approaching the 399 GB free.

4. Suggested cleanup workflow

# 1. Empty Trash
gio trash --empty

# 2. Prune unused containers/images
podman system prune -a   # or docker system prune -a

# 3. Remove leftover caches
pnpm store prune         # if pnpm is installed
pip cache purge          # for Python
virtualenv --clean       # optional

# 4. Inspect large directories manually if you need deeper cleanup
du -h --max-depth=1 ~/duffy/.local/share/containers

Tip: After each step run df -h /home or du -sh ~/duffy/.local/* to see the impact.


5. Safety checklist

| Action | Why it’s safe (or risk) |
| Emptying Trash | Deletes only user‑deleted files; confirm you don’t need the content. |
| podman system prune -a | Deletes container images, containers, and volumes that are not referenced by any running or stopped container. Verify with podman ps -a before pruning. |
| Removing lib/bin from .local | Only safe if you’re sure those libraries/binaries aren’t part of a running application. |
| Caution with tmpfs or moving large directories | If you move a directory on‑the‑fly, ensure no running application depends on that path. |

Bottom line

  • Current free space: ≈ 400 GB on both / and /home.
  • Largest consumptions: User‑space under ~/duffy/.local/share/containers (≈ 238 GB).
  • Actionable clean‑ups: Empty Trash, prune unused container images, and review large libraries/binaries. This should reclaim over 200 GB and bring the usage from 57.9 % down to roughly 40‑45 % – plenty of headroom for the future.

Feel free to ask what next steps you’d like to dig into (e.g., detailed contents of the containers dir, specific application logs, or targeted pruning commands).
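
Before actually running the suggested prune, one manual sanity check worth doing (my addition here, not part of the model’s output) is to ask podman how much of that space it considers reclaimable:

# Summarize image/container/volume usage and reclaimable space
podman system df

# List all images, including dangling ones, before pruning
podman images -a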


Upgrade planning

So you may have noticed a little detail above – the system is a Fedora 42 system… and Fedora 43’s been out a month now! 😅 So, time to upgrade.

This example shows where we have some tools missing from the set provided in the linux-mcp-server. We’re including it for two reasons:

  • So you can see how this works manually
    You can see that even when specific tools you need are not available in the MCP server, you can have the response give you instructions on commands to run on your own, then copy/paste the command output back into your chat to get analysis alongside the data the system is able to pull via the MCP tool calls;
  • To encourage you to submit additional tools to our project!
    We would love your additions to the project! Here’s where to get started: https://github.com/rhel-lightspeed/linux-mcp-server/blob/main/docs/CONTRIBUTING.md

Here’s the prompt I started with, with the same Goose + gpt-oss:20b + linux-mcp-server combination:

You are a Linux system administrator assistant analyzing a Fedora system for upgrade readiness.

TASK: Examine this Fedora 42 system and provide a comprehensive upgrade readiness report for Fedora 43.

ANALYSIS CHECKLIST:
1. Check current Fedora version: cat /etc/fedora-release
2. Review system updates status: dnf check-update
3. Identify third-party repositories: dnf repolist
4. List installed packages from non-Fedora repos: dnf list installed | grep -v @fedora
5. Check for broken dependencies: dnf check
6. Review disk space on root partition: df -h /
7. Check for unsupported or deprecated packages
8. Identify custom kernel modules: lsmod | grep -v "^Module"
9. Review SELinux status: sestatus
10. Check for pending system updates: dnf upgrade --refresh --assumeno

REPORT FORMAT:

# Fedora 43 Upgrade Readiness Report

## Current System Status
- Fedora version: [version]
- Kernel: [kernel version]
- Updates status: [current/outdated]

## Potential Issues
[List blocking issues with HIGH/MEDIUM/LOW severity]

## Third-Party Software
[List non-Fedora packages that may need attention]

## Recommendations
[Specific pre-upgrade steps needed]

## Overall Readiness: [READY/NEEDS ATTENTION/NOT READY]
Run the necessary commands and provide this analysis. Be specific about any issues found and give actionable recommendations.

Now, right away the model came back to complain that it doesn’t have access to `dnf`, `cat`, etc. That’s expected here. What it did instead was give me a list of homework to run so it could complete the analysis… for example, `dnf check-update` and `cat /etc/fedora-release`. I had a little back and forth in Goose with the model where it would ask me to run a command and I’d copy the output into the context.
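
The homework boiled down to small snippets like these, run by hand with the output pasted back into the chat (paraphrased from the back and forth):

cat /etc/fedora-release
uname -r
dnf repolist
dnf check-update | head -n 20
df -h /

This resulted, finally, in the following report: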


🤖 LLM output 🤖

Fedora 43 Upgrade Readiness Report

Analyzed System – Fedora 42 (Kernel 6.17.7‑200.fc42.x86_64)
Date – 2025‑11‑24


1. Current System Status

| Item | Value |
| Fedora release | 42 (Adams) |
| Kernel | 6.17.7‑200.fc42.x86_64 |
| Root filesystem | 953 GB LUKS, 58 % used (411 GB free) – > 15 % free |
| /var, /usr | Same as root partition (mounted from the same LUKS device) |
| SELinux | enabled (policy targeted), currently permissive |
| System‑wide packages | No held packages (dnf versionlock list empty) |
| System modules | No modules held. |

TL;DR – The base system is clean; the kernel is up‑to‑date for 42, there is ample disk space, and there are no version‑locked packages.


2. Third‑Party Repos & Packages

| Repository | Description | Current dnf repolist status |
| rpm-fusion-free, rpm-fusion-nonfree, rpm-fusion-updates | Widely used RPM‑Fusion components | Enabled |
| google-cloud-cli | Google Cloud SDK (RPM built for EL 10) | Enabled |
| visual-studio-code | Microsoft VS Code RPMs | Enabled |
| command‑line‑assistant, google‑chrome, PyCharm, unsupported‑fedora‑packages, goose, leaktk | Various Copr builds (mostly Fedora‑42 based) | Enabled |

What we need to verify

  1. Is a Fedora 43 build available for each of the non‑Fedora repos above?
  2. Does google-cloud-cli have a Fedora 43‑compatible package? (EL 10 build may not be usable).
  3. Are there any el10‑specific dependencies that could surface during the upgrade?
  4. Any custom kernel modules that will require re‑compilation against the 43 kernel.

3. Potential Issues & Severity

| Issue | Severity | Why it matters | Immediate Remedy |
| Non‑Fedora packages (e.g., google‑cloud‑cli) | MEDIUM | May contain packages built for EL 10; could conflict with Fedora 43 libs. | Verify whether a Fedora 43 build exists or plan to remove/re‑build it. |
| Third‑party repos without 43 builds | MEDIUM | dnf system-upgrade pulls all enabled repos; lacking 43 packages causes transaction failure. | Disable such repos temporarily (dnf config-manager --set-disabled <repo>) or enable the 43 stream when available. |
| Custom kernel modules | LOW | Modules compiled against the 42 kernel won’t load on 43; services may drop. | Re‑compile or patch modules after upgrade. |
| Broken dependencies reported by dnf check | HIGH | Would block the upgrade transaction. | Run sudo dnf check now; fix any reported issues. |
| Insufficient disk space | HIGH | Upgrade scripts need > 15 % free. | Not an issue (58 % free). |

Current High‑Risk Items – None detected yet.
Next Action – Confirm each third‑party repo has 43‑compatible packages and set SELinux to enforcing.


4. Recommendations (Pre‑Upgrade Checklist)

  • Confirm 43‑stream packages: dnf list --available | grep <repo-id>
  • Verify SELinux enforcement: sestatus – should show enforcing.
  • Check for broken dependencies: sudo dnf check – no errors should appear.
  • Refresh all repos: sudo dnf upgrade --refresh – pulls the latest metadata.
  • Dry‑run upgrade: sudo dnf system-upgrade download --releasever=43 --allowerasing – detects any missing packages or conflicts.
  • Optional, disable problematic repos: sudo dnf config-manager --set-disabled <repo> – prevents an upgrade crash if a repo has no 43 build.
  • Persist SELinux config: sudo sed -i 's/SELINUX=permissive/SELINUX=enforcing/' /etc/selinux/config – makes enforcement permanent.

5. Post‑Upgrade Actions

  1. Reboot into the new kernel (dnf system-upgrade reboot).
  2. Re‑compile any custom kernel modules against the new kernel (yum install kernel-devel-$(uname -r) then rebuild modules).
  3. Re‑enable any disabled repos once the upgrade completes.
  4. Verify critical services (systemctl status <service>) and check logs.
  5. Run sudo dnf distro-sync to ensure all packages are the latest from Fedora 43.

6. Overall Readiness

| Criteria | Status | Comment |
| Disk space | Ready | 58 % free |
| Package health | Ready | No held or broken packages |
| Third‑party repos | Conditional | Verify 43‑compatibility |
| Custom modules | Pending | Need to identify modules and re‑compile |
| Final verdict | READY | After addressing the points above |

Next step – Run the three verification commands below and share the output with me:

sudo dnf check
sudo dnf repolist
dnf list installed | grep -v @fedora

With those outputs I can pinpoint any remaining blockers and give you a final, ready‑to‑proceed plan.

Happy upgrading!


Give it a try, let us know what you think!

You can see from these few examples that tool calling with LLMs is a valuable technique for troubleshooting Linux systems. We could use your help building this and making it awesome! How can you help?

  • Give it a try, let us know what you think, file any bugs you find, and let us know what tools are missing that you could use! You can respond in the comments here, file a bug, or chat with us on Fedora matrix.
  • We’re primarily working with Goose as our client, but if you have another preferred client and want to help us support it better, we’d love to work with you!
  • Get involved in the linux-mcp-server project generally; we’d love to see your PRs!
  • Let us know what you’d like to see in the future. What workflows would you like to see supported? How do you see this making your Fedora or overall Linux experience better? What larger workflows do you see this plugging into?

Join us upstream at https://github.com/rhel-lightspeed/linux-mcp-server! And come chat with us in the Fedora AI/ML SIG chat room on Matrix!

Fedora Copr outage - updating servers

Posted by Fedora Infrastructure Status on 2025-12-10 07:00:00 UTC

This outage impacts the Fedora Copr Frontend. It seems these bots were attracted by our outage announcement: Baiduspider, YandexBot, ClaudeBot, AmazonBot, Presto, https://openai.com/bot, TikTokSpider, which are currently DDoSing us and causing performance degradation.

Short Family's Ultimate Holiday Movie Marathon: 29 Holiday Films

Posted by Chris Short on 2025-12-10 05:00:00 UTC
From classic comedies to animated adventures, this is our family's top 29 holiday movies. These films bring joy, laughter, and holiday cheer to every season.

Significant Drag and Drop Vulnerability in WebKitGTK

Posted by Michael Catanzaro on 2025-12-09 15:29:11 UTC

WebKitGTK 2.50.3 contains a workaround for CVE-2025-13947, an issue that allows websites to exfiltrate files from your filesystem. If you’re using Epiphany or any other web browser based on WebKitGTK, then you should immediately update to 2.50.3.

Websites may attach file URLs to drag sources. When the drag source is dropped onto a drop target, the website can read the file data for its chosen files, without any restrictions. Oops. Suffice to say, this is not how drag and drop is supposed to work. Websites should not be able to choose for themselves which files to read from your filesystem; only the user is supposed to be able to make that choice, by dragging the file from an external application. That is, drag sources created by websites should not receive file access.

I failed to find the correct way to fix this bug in the two afternoons I allowed myself to work on this issue, so instead my overly-broad solution was to disable file access for all drags. With this workaround, the website will only receive the list of file URLs rather than the file contents.

Apple platforms are not affected by this issue.

Flock 2026 CfP open now until Feb 2nd 2026

Posted by Fedora Magazine on 2025-12-09 08:00:00 UTC


Apply now for the Flock to Fedora 2026 Call for Proposals (CfP) at cfp.fedoraproject.org. This year, the submission deadline for the Flock CfP is Monday, February 2nd, 2026.

Flock 2026 registration is open

Last month we announced that we’ll be convening again in Prague for Flock 2026 in June. Everyone interested in attending can head over to the Flock 2026 website and register today! For those of you who want to contribute to Flock by presenting your thoughts and ideas in front of your fellow contributors, we’ve got some inspiration for you in the form of updated proposal themes.

Flock 2026 proposal themes

This year’s proposal themes are inspired by Fedora’s four foundations:

  1. Freedom: The Open Frontier — This theme explores how Fedora pushes the boundaries of technological freedom. We invite proposals on FOSS approaches to Artificial Intelligence, the advancement of open hardware like RISC-V, the development of open standards, and the protection of data privacy. Sessions should focus on how our work in the Fedora Project creates a more free and collaborative technological world for everyone.
  2. Friends: Our Fedora Story — This theme celebrates the people and practices that make our community unique. We seek proposals that share stories of mentorship, successful team collaboration, and effective onboarding within Fedora. Collaboration is key to our success. Sessions about our partnerships with other FOSS communities should center on the mutual benefits and the positive impact these relationships have on the Fedora Project.
  3. Features: Engineering Fedora’s Core — As a contributor conference, this theme dives deep into the craft of building our distribution and other Fedora outputs. We welcome sessions on improvements to our infrastructure, release engineering processes, quality assurance, packaging, and community tooling. This is the place for technical talks that showcase our engineering excellence and the collaborative work that makes Fedora’s deliverables possible, from code to final artifact.
  4. First: Blueprint for the Future: Fedora Linux 45 & 46 — This theme focuses on the near-term innovations that will define the next generation of Linux. With the next few Fedora Linux releases serving as the foundation for RHEL 11 and EPEL 11, this is a critical time. We are looking for forward-looking technical talks on the changes, features, and architectural decisions in F45 and F46 that will shape the future of the operating system, from the community desktop to the core of the enterprise platforms.

These themes are here to help get you thinking about topics you’d like to present. If you have something you want to talk about that doesn’t quite fit neatly into these themes, but you feel it belongs at Flock, go ahead and submit anyway! The reviewers are open to alternative topics, and they are on the lookout for topics that Fedora contributors are interested in discussing.

Flock financial travel assistance available

Financial travel assistance applications are now open as well. When you go to register on the Flock 2026 website, you should also see links on how to apply for travel assistance if you need it. Financial assistance will be open until March 8th (several weeks after the CfP closes on February 8th). This is to give those with accepted talks an opportunity to figure out whether they’ll need travel assistance.

rdu2 to rdu3 datacenter move

Posted by Fedora Infrastructure Status on 2025-12-08 13:00:00 UTC

We will be powering off hardware in our rdu2 datacenter; it will be deracked, moved to our rdu3 datacenter, reracked, and reconfigured for the new network.

retrace/abrt/faf will be down and not accepting user reports; smtp-auth-cc-rdu01 will be down and not accepting emails; download-cc-rdu01 will be down …

Common Display Resolutions

Posted by Chris Short on 2025-12-08 05:00:00 UTC
Display resolution reference: 540p, 720p HD, 1080p FHD, 2K, 1440p QHD, 4K UHD, 5K, 8K. Quick pixel dimension lookup for monitors, TVs, video formats.

infra weekly recap: early december 2025

Posted by Kevin Fenzi on 2025-12-06 19:29:28 UTC
Scrye into the crystal ball

hey everyone, it's saturday so time for another recap of adventures in fedora infrastructure and other fedora areas.

scrapers

I started a discussion thread about the current scrapers we are dealing with. To summarize, anubis has cut out a bunch of them and really helped quite a lot. It has caused some issues with clients as well, but we have been working through those as we hear about them. The remaining scrapers are large botnets of browsers, probably running on end user machines. Those are more troublesome to deal with.

The discussion thread is at: https://discussion.fedoraproject.org/t/scrapers-and-ideas-for-how-to-deal-with-them/175760 if anyone would like to read or contribute.

We had another run-in with them earlier this morning. A great way to spend a saturday morning, but I did look more carefully this time. The main cause of issues was them hitting src.fedoraproject.org and its /history/ and /blame/ endpoints. This was causing the backend to do a somewhat expensive git blame/history call against the local repos, and since those took a while to come back, requests piled up and latency went way up. I have for now blocked those endpoints in the src.fedoraproject.org web interface. That brought everything back to normal. If you need to do those things, you can easily clone the git repo locally and do them, as in the example below.
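
For example, for a package repo, it's roughly this (with a made-up package name):

git clone https://src.fedoraproject.org/rpms/somepackage.git
cd somepackage
git log --follow -- somepackage.spec   # what the /history/ endpoint was doing
git blame somepackage.spec             # what the /blame/ endpoint was doing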

rdu2-cc to rdu3 datacenter move

This last week, I moved pagure.io (virtually) to the new datacenter. Unfortunately it didn't go as smoothly as I had hoped. All the data synced over in about 15 minutes or so, but then I tried to test it before switching it live and it just wasn't allowing me to authenticate on git pushes. Finally the light bulb went on and I realized that pagure was checking for auth, but it wasn't 'pagure.io' yet because I hadn't updated dns. ;( It's always DNS. :) After that everything went fine. There were a few loose ends I had to fix up the next day: mirroring out was not working because we didn't have outgoing ssh listed as allowed, uploading releases wasn't working due to a selinux labeling issue, and finally our s390x builders couldn't reach it because I forgot they needed to do that. Hopefully pagure.io is all happy now and I even gave it more resources in the new dc.

Monday the actual physical move happens. See: https://pagure.io/fedora-infrastructure/issue/12955 for more details. Mostly, folks shouldn't notice these machines moving. abrt submissions will be down, and download-cc-rdu01 will be down, but otherwise it should be a big nothing burger for most folks. Machines will move monday and we will work tuesday to reinstall/reconfigure things and bring it all back up.

Matrix outage on dec 10th

There is going to be a short outage of our fedora.im and fedoraproject.org matrix servers. We are migrating to the new MAS setup (Matrix Authentication Server). This will allow clients to use things like element-x, and it is also an important step we wanted to complete before moving forward on deploying our own matrix servers.

forge migration

A number of groups have already moved over to forge.fedoraproject.org from pagure.io. I was really hoping to move infrastructure, but haven't had the cycles yet. We do have the orgs created now and are planning on moving our docs over very soon. I don't know if we will move tickets before the end of the year or not, but we will see.

December of docs

So, I committed myself to doing a docs PR/issue/something every day in December, and so far I am doing so! 6 days and 6 PRs and more tickets updated. Hopefully I can keep it up.

comments? additions? reactions?

As always, comment on mastodon: https://fosstodon.org/@nirik/115674367344830186

src.fedoraproject.org access degraded

Posted by Fedora Infrastructure Status on 2025-12-06 16:30:00 UTC

There is heavy scraper activity causing high load and slow load times on https://src.fedoraproject.org.

We are investigating and trying to mitigate it.

The issue was scrapers hitting /history/ and /blame/ endpoints recursively. We have at least for now blocked those endpoints. Please git clone locally if you …

Been a while - Update

Posted by Phil Wyett on 2025-12-06 10:11:00 UTC
Have not been at my best health-wise recently, but we are enjoying life and carrying on.

Because the Fedora Project removes access levels after 12 months of inactivity, I was required to file a ticket as a returning developer of the 'packager' group. This was processed very quickly by Kevin Fenzi and all my access levels were restored. Kevin is always extremely helpful, though he is always busy ... busy.

Current activities are bringing packages from Fedora into Extra Packages for Enterprise Linux (EPEL) in order to have more science and astronomy packages available on Enterprise Linux (EL).

Java being a language I prefer to code in, I am also trying to get useful packages into, or back into, Fedora. Once done, they can be looked at for possible inclusion in EPEL.

I have another couple of projects in the early stages and more details will follow in future posts. One I am very excited about and will involve design, engineering and manufacture of the prototype.

Watch this space for this year's Christmas sweater post.

Important Update: Fedora Linux 43 Election Schedule Extended

Posted by Fedora Community Blog on 2025-12-05 15:43:00 UTC

TL;DR: The Fedora Linux 43 Election schedule has been extended. Voting will now take place from 15 December 2025 through 7 January 2026.

Due to unforeseen delays in the interview coordination process, we are adjusting the election timeline. To ensure all candidates have ample opportunity to present their platforms and the community has sufficient time to vote, the election period will now extend through the year-end holidays.

Please mark your calendars with the following new critical dates:

📅 New Election Schedule

  • Interview Submission Deadline (Extended): Now through Friday, 12 December 2025 at 23:59 UTC (Candidates: Please ensure your responses are submitted by this time.)
  • Voting Setup & Interview Publishing: Monday, 15 December 2025 (Voter guides and interviews will be published to the community on this date.)
  • Voting Period Opens: Monday, 15 December 2025
  • Voting Period Closes: Wednesday, 7 January 2026 at 23:59 UTC

Context on the Schedule Change

Transparency is an important value of the Fedora Project, and I want to provide context on why this shift was necessary. I recently returned from two weeks of bereavement leave on Wednesday, 3 December. During my absence, the coordination work required to collect and process nominee interviews for the Fedora Engineering Steering Committee (FESCo) did not occur as originally planned.

Consequently, we missed the window to launch the elections today, Friday, 5 December. Rather than rushing the process, we are opting to extend the timeline. This ensures that our candidates are properly featured and that the election remains fair and accessible to all voters, despite the holiday season overlap.

The official Fedora schedule calendar is being updated to reflect these changes shortly. Thank you for your patience and understanding.

The post Important Update: Fedora Linux 43 Election Schedule Extended appeared first on Fedora Community Blog.

Community Update – Week 49

Posted by Fedora Community Blog on 2025-12-05 14:00:00 UTC

This is a report created by the CLE Team, a team of community members who work in various Fedora groups, for example Infrastructure, Release Engineering, and Quality. This team also moves forward some initiatives inside the Fedora Project.

Week: December 1 – December 5 2025

Fedora Infrastructure

This team is taking care of day to day business regarding Fedora Infrastructure.
It’s responsible for services running in Fedora infrastructure.
Ticket tracker

  • The Pagure.io migration happened earlier in the week, with some expected disruption during the move (see https://status.fedoraproject.org for details)
  • RDU2-CC -> RDU3 DC move next week
  • OpenID finally has a date to be retired – we have a separate OpenID instance of Ipsilon that serves a warning (ticket)
  • Weblate legal issues raised by the community to the Council
  • Survived Thanksgiving without major fires 🙂
  • Ongoing work on the Keycloak migration

CentOS Infra including CentOS CI

This team is taking care of day to day business regarding CentOS Infrastructure and CentOS Stream Infrastructure.
It’s responsible for services running in CentOS Infrastructure and CentOS Stream.
CentOS ticket tracker
CentOS Stream ticket tracker

Release Engineering

This team is taking care of day to day business regarding Fedora releases.
It’s responsible for releases, retirement process of packages and package builds.
Ticket tracker

RISC-V

  • (Things are chugging along.)
  • F43 rebuild is still ongoing. The diff with the primary arch is now ~1K packages. Still ironing out some rough edges. (A bug with “debugedit” is affecting a number of packages.)
  • Jason Montleon published some board-specific F43  kernels
  • We’re working on putting together the RISC-V devroom at FOSDEM.

Forgejo

Updates of the team responsible for Fedora Forge deployment and customization.
Ticket tracker

  • Handled empty dates in Pagure milestone migration in the Forgejo upstream [Issue] [PR]
  • Initial preparation work is being carried out to deploy the Forgejo “dist-git” instance – konflux pipelines for dist-git are ready, with stable Fedora images available on quay.
  • 5+ new teams have organizations on Forge now. 
  • Forgejo runners can be scoped to global/organization/individual on staging.
  • [Docs] Starting to migrate select repositories, first one to be the Release Notes

QE

If you have any questions or feedback, please respond to this report or contact us on the #admin:fedoraproject.org channel on Matrix.

The post Community Update – Week 49 appeared first on Fedora Community Blog.