
Fedora People

infra weekly recap: First full week of October 2025

Posted by Kevin Fenzi on 2025-10-11 19:41:50 UTC
Scrye into the crystal ball

Wow, Saturday again!? At least it's fall now (which is the very best season).

The connection timeout problem

So, the connection timeout problem that we first really saw last week was back.

To recap: sometimes (a very small percentage of the time) connections from our proxies to the kojipkgs backend, and also the pkgs/src backend, just time out. This causes builds to sometimes fail with a 503 error, which is what haproxy returns to the user when a connection times out.

The problem was actually worse than I thought it was. I thought it only happened when haproxy marked all the backend servers for that service down, but since the issue can seemingly happen at any time, connections haproxy sends to a server it thinks is up can time out too.

It's very frustrating: I can't seem to find a cause, so it's very hard to come up with a solution. I did try a... lot of things with no luck. Finally, later in the week, I came up with a bandaid for it. This consists of telling haproxy to retry if it gets a retryable error, and if possible to try the other backend on the retry. This doesn't fully fix the issue, but it makes it much, much less likely to happen (at least from the end user's viewpoint).
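A bandaid along these lines can be sketched in haproxy configuration terms. This is illustrative only: the backend name, server addresses, and the exact retry policy in the production config are assumptions.

```haproxy
backend kojipkgs
    balance roundrobin
    # Retry a failed request up to 3 times when the error is one
    # haproxy considers retryable (connection failure, timeout, 503, ...).
    retries 3
    retry-on all-retryable-errors
    # On retry, allow dispatching to a different backend server
    # instead of insisting on the one that just failed.
    option redispatch 1
    server kojipkgs01 10.0.0.11:8080 check
    server kojipkgs02 10.0.0.12:8080 check
```

`option redispatch` is the piece that lets a retry land on the other backend server.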

RDU2-CC to RDU3 datacenter move

Some progress on the rdu2-cc to rdu3 move later this year:

  • I wrote up desired network acls for the new rdu3 network

  • I got the 3 power9s that we are repurposing for copr builds moved to the new vlan, and I'm working on installing them.

  • We are in the process of getting a new server that will replace two old ones. That will also allow us to migrate important things on our own schedule and with hopefully limited downtime.

  • The current target move date is now December 8th due to various factors.

Fedora Forty Three Final Freeze

Say that 5 times fast. We are in the final freeze for f43. Hopefully it will be a smooth one and we can release on time.

comments? additions? reactions?

As always, comment on mastodon: https://fosstodon.org/@nirik/115357338178933176

Infra and RelEng Update – Week 41

Posted by Fedora Community Blog on 2025-10-10 10:00:00 UTC

This is a weekly report from the I&R (Infrastructure & Release Engineering) Team. We provide both an infographic and a text version of the weekly report. If you just want a quick look at what we did, see the infographic. If you are interested in more in-depth details, look below the infographic.

Week: Oct 6 – Oct 10 2025

Infrastructure & Release Engineering

The purpose of this team is to take care of day-to-day business regarding CentOS and Fedora infrastructure and Fedora release engineering work.
It is responsible for services running in Fedora and CentOS infrastructure and for preparing things for the new Fedora release (mirrors, mass branching, new namespaces, etc.).
List of planned/in-progress issues

Fedora Infra

CentOS Infra including CentOS CI

Release Engineering

If you have any questions or feedback, please respond to this report or contact us in the #admin:fedoraproject.org channel on Matrix.

The post Infra and RelEng Update – Week 41 appeared first on Fedora Community Blog.

ICFK2025: Calligraphy festival

Posted by Rajeesh KV on 2025-10-10 06:03:37 UTC

I’m very glad to participate in the 2025 edition of the International Calligraphy Festival of Kerala, and present a talk to a great audience.

ICFK is organized by the KaChaTaThaPa Foundation, headed by master calligrapher Narayana Bhattathiri. The event usually takes place on 2–5 October in Kochi. The varied talks, workshops, demonstration sessions, exhibitions, and above all meeting and learning from exemplary calligraphers are the best parts of the event. The venue always bursts with beauty, energy, and fun, and everyone is approachable.

Reconnected with old friends and made new friends. Ashok Parab was traveling pan-India documenting scripts, which led to teaching scripts, including Malayalam, as well. Abhishek Vardhan is doing research on the Nāgarī script. Syam is doing research on Malayalam calligraphy. They promised to share their findings and public/open resources, which would be very interesting to look at. Vinoth Kumar, Michel D’Anastasio, Nikheel Aphale, Muqtar Ahammed, and Shipra Rohtagi gave me souvenirs — thank you! I had chances for interesting long chats with Uday Kumar (who asked me about Sayahna Foundation after seeing the t-shirt I wore), Achyut Palav, Sarang Kulkarni, Brody Neuenschwander, and also Shyam Patel of Kochi Biennale Foundation.

On many occasions delegates approached me and asked about the font development process, complex text shaping, and related topics. It was also too tempting not to buy fountain pens and Bhattathiri’s merchandise on sale as gifts for friends. The dinner with the ICFK team at Boulangerie Art Cafe was delicious. TM Krishna’s Carnatic music concert on Saturday evening was a heavenly experience; Krishna Seth, sitting next to me, spontaneously drew in a notebook for the entire duration of the concert.

For the last edition, I presented a talk about font development, font engineering, complex text shaping, and other such back-end tasks that designers generally find difficult. This year, I talked about the ‘Fundamentals of Typography’. I hope the talk succeeded to some extent in making everyone unhappy when they look at a badly typeset page 🙂.

The slides for the presentation are available here.

🎲 PHP version 8.3.27RC1 and 8.4.14RC1

Posted by Remi Collet on 2025-10-10 04:26:00 UTC

Release Candidate versions are available in the testing repository for Fedora and Enterprise Linux (RHEL / CentOS / Alma / Rocky and other clones) to allow more people to test them. They are available as Software Collections for parallel installation (the perfect solution for such tests) and as base packages.

RPMs of PHP version 8.4.14RC1 are available

  • as base packages in the remi-modular-test repository for Fedora 41-43 and Enterprise Linux ≥ 8
  • as SCL in remi-test repository

RPMs of PHP version 8.3.27RC1 are available

  • as base packages in the remi-modular-test repository for Fedora 41-43 and Enterprise Linux ≥ 8
  • as SCL in remi-test repository

ℹ️ The packages are available for x86_64 and aarch64.

ℹ️ PHP version 8.2 is now in security-only mode, so no more RCs will be released.

ℹ️ Installation: follow the wizard instructions.

ℹ️ Announcements:

Parallel installation of version 8.4 as Software Collection:

yum --enablerepo=remi-test install php84

Parallel installation of version 8.3 as Software Collection:

yum --enablerepo=remi-test install php83

Update of system version 8.4:

dnf module switch-to php:remi-8.4
dnf --enablerepo=remi-modular-test update php\*

Update of system version 8.3:

dnf module switch-to php:remi-8.3
dnf --enablerepo=remi-modular-test update php\*

ℹ️ Notice:

  • version 8.5.0RC2 will soon be in Fedora rawhide for QA
  • version 8.5.0RC2 is also available in the repository
  • EL-10 packages are built using RHEL-10.0 and EPEL-10.0
  • EL-9 packages are built using RHEL-9.6
  • EL-8 packages are built using RHEL-8.10
  • oci8 extension uses the RPM of the Oracle Instant Client version 23.9 on x86_64 and aarch64
  • intl extension uses libicu 74.2
  • The RC version is usually the same as the final version (no changes accepted after the RC, except for security fixes).
  • versions 8.3.27 and 8.4.14 are planned for October 23rd, in 2 weeks.

Software Collections (php83, php84)

Base packages (php)

Loadouts For Genshin Impact v0.1.11 Released

Posted by Akashdeep Dhar on 2025-10-09 18:30:33 UTC

Hello travelers!

Loadouts for Genshin Impact v0.1.11 is OUT NOW with added support for the recently released artifacts Night of the Sky's Unveiling and Silken Moon's Serenade, the recently released characters Aino, Lauma, and Flins, and the recently released weapons Blackmarrow Lantern, Bloodsoaked Ruins, Etherlight Spindlelute, Master Key, Moonweaver's Dawn, Nightweaver's Looking Glass, Prospector's Shovel, Serenity's Call, and Snare Hook from Genshin Impact Luna I (v6.0 Phase 2). Take this FREE and OPEN SOURCE application for a spin using the links below to manage the custom equipment of artifacts and weapons for the playable characters.

Resources

Installation

Besides being available as a package on PyPI and as an archived binary built with PyInstaller, Loadouts for Genshin Impact is now available as an installable package on Fedora Linux. Travelers using Fedora Linux 42 and above can install it by executing the following command.

$ sudo dnf install gi-loadouts --assumeyes --setopt=install_weak_deps=False

Installation command for Fedora Linux

Changelog

  • Update renovate config to use exact commit message by @gridhead in #396
  • Remedy renovate commit message lower casing issue by @gridhead in #397
  • Automated dependency updates for GI Loadouts by @renovate[bot] in #395
  • chore(deps): update dependency python to 3.12 || 3.13 by @renovate[bot] in #394
  • Automated dependency updates for GI Loadouts by @renovate[bot] in #399
  • chore(deps): update actions/checkout action to v5 by @renovate[bot] in #398
  • Create a comprehensive contribution guide by @sdglitched in #387
  • Automated dependency updates for GI Loadouts by @renovate[bot] in #424
  • chore(deps): update actions/setup-python action to v6 by @renovate[bot] in #425
  • Introduce the recently added artifacts Silken Moon's Serenade by @gridhead in #414
  • Introduce the recently added character Lauma to the roster by @gridhead in #413
  • Introduce the recently added artifacts Night of the Sky's Unveiling by @gridhead in #415
  • Introduce the recently added weapon Master Key by @gridhead in #417
  • Introduce the recently added weapon Serenity's Call by @gridhead in #416
  • Introduce the recently added weapon Prospector's Shovel by @gridhead in #418
  • Introduce the recently added weapon Blackmarrow Lantern by @gridhead in #419
  • Introduce the recently added weapon Etherlight Spindlelute by @gridhead in #421
  • Introduce the recently added weapon Moonweaver's Dawn by @gridhead in #427
  • Introduce the recently added weapon Nightweaver's Looking Glass by @gridhead in #423
  • Introduce the recently added character Aino to the roster by @gridhead in #431
  • Introduce the recently added weapon Snare Hook by @gridhead in #420
  • Update dependency ruff to ^0.2.0 || ^0.3.0 || ^0.6.0 || ^0.7.0 || ^0.11.0 || ^0.12.0 || ^0.13.0 by @renovate[bot] in #433
  • Automated dependency updates for GI Loadouts by @renovate[bot] in #435
  • Update dependency pytest-cov to v7 by @renovate[bot] in #434
  • Correct base stat for character Lauma by @sdglitched in #437
  • Automated dependency updates for GI Loadouts by @renovate[bot] in #439
  • Introduce the recently added character Flins to the roster by @gridhead in #412
  • Introduce the recently added weapon Bloodsoaked Ruins by @gridhead in #422
  • Create the v0.1.11 version release for Luna I (or Genshin Impact 6.0) by @gridhead in #441
  • Automated dependency updates for GI Loadouts by @renovate[bot] in #442

Artifacts

Two artifacts have debuted in this version release.

Night of the Sky's Unveiling

  • Bonus for Two Piece Equipment
    Increases Elemental Mastery by 80.
  • Bonus for Four Piece Equipment
    When nearby party members trigger Lunar Reactions, if the equipping character is on the field, gain the Gleaming Moon: Intent effect for 4s: Increases CRIT Rate by 15%/30% when the party's Moonsign is Nascent Gleam/Ascendant Gleam. All party members' Lunar Reaction DMG is increased by 10% for each different Gleaming Moon effect that party members have. Effects from Gleaming Moon cannot stack.

Silken Moon's Serenade

  • Bonus for Two Piece Equipment
    Energy Recharge +20%.
  • Bonus for Four Piece Equipment
    When dealing Elemental DMG, gain the Gleaming Moon: Devotion effect for 8s: Increases all party members' Elemental Mastery by 60/120 when the party's Moonsign is Nascent Gleam/Ascendant Gleam. The equipping character can trigger this effect while off-field. All party members' Lunar Reaction DMG is increased by 10% for each different Gleaming Moon effect that party members have. Effects from Gleaming Moon cannot stack.

Characters

Three characters have debuted in this version release.

Aino

Aino is a claymore-wielding Hydro character of four-star quality.

Lauma

Lauma is a catalyst-wielding Dendro character of five-star quality.

Flins

Flins is a polearm-wielding Electro character of five-star quality.

Weapons

Nine weapons have debuted in this version release.

Blackmarrow Lantern

Token of Covenant - Scales on Elemental Mastery.

Blackmarrow Lantern - Workspace

Bloodsoaked Ruins

Mournful Tribute - Scales on Crit Rate.

Bloodsoaked Ruins - Workspace

Etherlight Spindlelute

Last Singer - Scales on Energy Recharge.

Etherlight Spindlelute - Workspace

Master Key

Fall Into Place - Scales on Energy Recharge.

Master Key - Workspace

Moonweaver's Dawn

Secret Silver's Testament - Scales on ATK%.

Moonweaver's Dawn - Workspace

Nightweaver's Looking Glass

Millennial Hymn - Scales on Elemental Mastery.

Nightweaver's Looking Glass - Workspace

Prospector's Shovel

Swift and Sure - Scales on ATK%.

Prospector's Shovel - Workspace

Serenity's Call

Solemn Silence - Scales on Energy Recharge.

Serenity's Call - Workspace

Snare Hook

Phantom Flash - Scales on Energy Recharge.

Snare Hook - Workspace

Appeal

While allowing you to experiment with various builds and share them for later, Loadouts for Genshin Impact lets you take calculated risks by showing you the potential of your characters with certain artifacts and weapons equipped that you might not even own. Loadouts for Genshin Impact has been and always will be a free and open source software project, and we are committed to delivering a quality experience with every release we make.

Disclaimer

With an extensive suite of 1503 diverse functionality tests and impeccable 100% source code coverage, we proudly invite auditors and analysts from MiHoYo and other organizations to review our free and open source codebase. This thorough transparency underscores our unwavering commitment to maintaining the fairness and integrity of the game.

Users of this ecosystem application can have complete confidence that their accounts are safe from warnings, suspensions, or terminations when using this project. The application ensures complete compliance with the terms of service and the regulations regarding third-party software established by MiHoYo for Genshin Impact.

All rights to Genshin Impact assets used in this project are reserved by miHoYo Ltd. and Cognosphere Pte., Ltd. Other properties belong to their respective owners.

Choro das 3 / Trio Meyer Ferreira

Posted by Avi Alkalay on 2025-10-09 17:46:48 UTC

Choro is one of the most typical Brazilian musical styles. It has always been dominated by men, both in the circles of players and in the roster of composers, with consummate masters such as Pixinguinha, Bonfiglio de Oliveira, K-Ximbinho, Jacob do Bandolim, Ernesto Nazareth, Abel Ferreira, a distant Chiquinha Gonzaga, and many other giants.

And yet one of the best Choros being made today is made by women. Yes, the Meyer Ferreira sisters (Corina, Elisa, and Lia) form Choro das 3 (recently renamed Trio Meyer Ferreira, @triomeyerferreira). They play beautifully but, above all, they have incredible compositions of their own.

During the pandemic they started doing great weekly live streams on YouTube, and they continue to this day. They also have a series of truly extraordinary studio albums on the digital jukeboxes.

Go discover them: watch, listen, enjoy!

Also on my Instagram and Facebook.

The syslog-ng Insider 2025-10: Values; BastilleBSD; Debian

Posted by Peter Czanik on 2025-10-09 14:16:06 UTC

The October syslog-ng newsletter is now online:

  • The core values of syslog-ng
  • Running syslog-ng in BastilleBSD
  • Debian and Ubuntu blogs updated

It is available at https://www.syslog-ng.com/community/b/blog/posts/the-syslog-ng-insider-2025-10-values-bastillebsd-debian

syslog-ng logo

GSoC 2025: Bringing package reviews to Forgejo

Posted by Fedora Community Blog on 2025-10-09 10:00:00 UTC

Hi everyone, I’m Mayank Singh, and I’ve been working on a GSoC 2025 project aimed at simplifying the packaging process in Fedora.

I have been working on an implementation of a new package review process where everything happens in one place, centered on git. Instead of juggling Bugzilla tickets and separate hosting for the source RPM and logs, new contributors can have their packages reviewed through pull requests.

Let’s dive into a brief comparison between the current Bugzilla process and the new one.

The current Bugzilla process

If you want a package that is not available in Fedora repositories yet, the process looks something like:

  • Package Recipe: start by writing a .spec file, which declares your package sources and the instructions for building it.
  • Build: the same spec file is then used to build a Source RPM (SRPM).
  • Upload: upload the SRPM and spec file to a file host or a place like fedorapeople.
  • New Ticket: start a new Bugzilla ticket with the SRPM URL and spec URL, along with a description of the package.
  • Fedora-review-service: this service builds your package in COPR from the provided SRPM and spec file URLs, then runs fedora-review after a successful build to report compliance.
  • Review Cycle: a reviewer provides feedback; you make changes and upload again. This creates a tedious, repetitive cycle for every minor change.

This process isn’t reliable, and it forces developers to constantly juggle multiple tools and tabs, creating a fragmented and disjointed experience.

The new process

We try to solve this problem by centering the workflow on git, a tool developers already use daily. The process is made much simpler, in these steps:

  • Fork the central package review repository on forgejo.
  • Commit your .spec files and patches.
  • Start a new Pull Request.

The service takes care of the rest of the review process.

Once the Pull Request is open, the following automated actions are triggered to assist the review:

  • Automatic Build: The service clones your changes into a COPR build environment and builds the package.
  • Direct Feedback: After a successful build, the fedora-review tool runs, and its formatted output is posted directly as a comment in the pull request discussion.
  • Testing: The service runs installability and rpmlint checks using Testing Farm.
  • Commit Status Updates: The status of the COPR build and all tests is reported back directly to the pull request via commit statuses (the green checkmarks or red crosses).

When a check fails, one can navigate to the error logs through the details link. Checking the latest entry there, we can find the Testing Farm run and inspect the logs to trace the cause of failure; in this example, the checks failed due to a broken package and its dependencies.

If a change is needed, one can simply push a new commit, and the service takes care of building and testing again.

Check out the service here: review

Under the Hood

The service is based on the packit-service codebase to handle COPR builds, Testing Farm integration, and orchestration of the actions described above with forgejo support.

Repo link: packit/avant

Future Scope

Looking ahead, this workflow can be extended to transfer packages into dist-git once a package is approved.

Big thanks to my mentor and the Fedora community for guidance throughout the GSoC timeline.

Feedback is appreciated! Thank you.

The post GSoC 2025: Bringing package reviews to Forgejo appeared first on Fedora Community Blog.

Free Public Transit Across All of Brazil

Posted by Avi Alkalay on 2025-10-09 00:39:05 UTC

The Federal Government is studying free public transit for every city in the country. If it works out, it could be one of the greatest economic, urban, environmental, social, and safety advances Brazil will ever see.

Social, because many people give up going out, visiting places, doing odd jobs, or accessing healthcare, because public transit is expensive for them.

Urban and environmental, because we will have fewer cars on the streets. And fewer cars improves everything, including public transit itself. More people walking, more space and safety for other modes of transport, such as bicycles. Not to mention that buses will be electric from now on, meaning even less pollution.

Economic, because more people doing more activities and finishing their journeys on foot stimulates every kind of commerce, including local business. It makes the streets safer, too.

The way we plan cities (and their transit, housing, construction, and so on) is one of the factors that most influences the well-being of their citizens. People rarely realize this. The science that studies this is urbanism.

Some will ask, "but who is going to pay for this?" It is worth reflecting: who pays the bill for not democratizing transit, and who pays the bill for decades of prioritizing only cars?

Also on my Facebook, LinkedIn, and Instagram.

Ruby Central’s lesson in how not to do it

Posted by Ben Cotton on 2025-10-08 12:00:00 UTC

Last month, Ruby Central made significant changes to the RubyGems enterprise on GitHub, renaming the enterprise and kicking out maintainers. I’m not active in that community, so I can’t judge the reasoning provided by Ruby Central. But that doesn’t matter because even if you take Ruby Central at their word and assume they acted in good faith, the change was executed poorly.

“We want to protect Ruby against supply chain attacks, so we executed a supply chain attack,” is (paraphrased) how I saw someone describe this on social media shortly after the news broke. As an outsider, that’s how I would characterize what happened if I didn’t know better. Changing a name, adding a new owner, kicking a bunch of existing maintainers out? That’s the sort of behavior you would expect from an account takeover.

Where Ruby Central went wrong is in handling the process backwards. First they made the change, and then they explained it. No matter how good your intentions, explaining after acting will always seem more suspicious. You can’t stop bad-faith actors from having a bad-faith interpretation of your actions, but there’s no need to give others the chance to go down that road.

Community-driven projects — regardless of whether they’re backed by a corporation or foundation — run on consensus. To make major changes, you have to get buy-in from the core members, at a minimum. This doesn’t mean that everyone needs to agree, but they do have to understand. Trust is easy to lose and hard to build. Changes that destroy trust can permanently damage a community.

Unless there’s an emergency, explanation and discussion must happen before you take action. This includes being honest about the motivation, even if you’d rather not. Honesty and good-faith engagement build trust, which leads to a stronger, more robust community.

This post’s featured photo by Jason D on Unsplash.

The post Ruby Central’s lesson in how not to do it appeared first on Duck Alignment Academy.

Simplifying Package Submission Progress (15 August – 22 August) – GSoC ’25

Posted by Fedora Community Blog on 2025-10-07 10:00:00 UTC

This week in the Fedora project, we made some small changes to how details and information are reported in the service.

Small Changes

Better Review Comments

Previously, we were fetching the text version of the fedora-review report. The problem was that it had far too much text for a comment, and I had to parse the text file manually to format it into sections with collapsible tags before posting.

While testing, I found out that the tool also publishes a JSON version with proper segregation of categories, items, checks, and the documentation related to each rule. That made it much easier to parse and comment.
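As a sketch of what that parsing can look like, the snippet below renders a category-to-checks mapping into collapsible `<details>` sections for a pull-request comment. Note that the field names used here (`name`, `passed`, `text`) are assumptions for illustration, not the actual fedora-review JSON schema.

```python
import json

def render_report(path: str) -> str:
    """Render a review report (assumed schema: category -> list of
    checks with "name", "passed", "text") into collapsible <details>
    sections for a pull-request comment."""
    with open(path) as fh:
        report = json.load(fh)

    sections = []
    for category, checks in report.items():
        lines = [f"<details><summary>{category}</summary>", ""]
        for check in checks:
            # Render each check as a task-list item: [x] passed, [ ] failed.
            mark = "x" if check.get("passed") else " "
            lines.append(f"- [{mark}] {check['name']}: {check.get('text', '')}")
        lines += ["", "</details>"]
        sections.append("\n".join(lines))
    return "\n\n".join(sections)
```

Each category collapses into its own section, so even a long report stays readable in the PR discussion.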

Clearer Status Reporting

Another small change was to report the status for install-ability and rpmlint checks separately from Testing Farm. My mentor suggested this approach to make it easier for users to interpret feedback.

One can check the detailed logs of each run by going to the dashboard through the details button.

Testing

I’m still working through some challenges with unit testing. The tricky part is mocking external dependencies to properly test the integration code in the service. The aim is to catch smaller bugs earlier with better coverage in ogr, the library being used for interacting with Forgejo.

What’s Next?

We almost have a way to review packages directly within a Forgejo repository. This will allow us to set up a dedicated repo for performing package reviews with automated feedback from tools and experienced packagers.

In the future, this idea could be extended further as Fedora moves to Forgejo, even handling dist-git setup.

For now, my next tasks are:

  • Deploying the service
  • Writing setup instructions for local development
  • Setting up the bot account
  • Completing the work needed for merging the relevant code upstream

I’m grateful to my mentor, František Lachman, for his constant support and guidance.

The post Simplifying Package Submission Progress (15 August – 22 August) – GSoC ’25 appeared first on Fedora Community Blog.

Budapest Audio Expo 2025

Posted by Peter Czanik on 2025-10-07 07:45:38 UTC

Last year’s Budapest Audio Expo was the first hifi event I had truly enjoyed in years. Needless to say, I spent a day this weekend at the Audio Expo again :-) Building on last year’s experience, I chose to visit the expo on Sunday. There were fewer people and better-sounding systems.

TL;DR:

If I had to sum up the expo in one statement: Made in Hungary audio rivals the rest of the world in quality, while often being available at a much more affordable price. My top three favorite sounds came from systems with components mostly made in Hungary: Dorn / Heed, NCS Audio / Qualiton, and Popori / 72 Audio (in alphabetical order :-) ). I listened to quite a few systems costing a lot more; however, these were the systems I enjoyed listening to the most.

Dorn / Heed

Dorn / Heed Audio holds a special place in my heart. Not just there: I listen to a full Heed system at home. I was very happy to see them at the event. You could listen to their latest speaker, connected to an Elixir (amplifier) and an Abacus (DAC). I listened to the exact same setup just a few weeks ago in their showroom; here they sounded even better. As you can see in the photos, it is “just” a pair of bookshelf speakers, yet they could fill the room with clean sound. Despite their size, the bass was also detailed and loud enough (disclaimer: there was no Metallica in the playlist, which might have changed this opinion ;-) ). Probably one of the cheapest systems on display at the expo (not counting DIY), but still one of the best sounding. Natural, life-like sound, a joy to listen to. I went back there to rest when I was tired of all the artificial-sounding systems.

Audio Expo 2025: Heed / Dorn

Audio Expo 2025: Heed / Dorn

NCS Audio / PRAUDIO / Qualiton

I first wrote about NCS Audio three years ago. Last year I called the Reference One Premium the best value speaker at the expo, as it sounded equally good or sometimes even better than speakers costing an order of magnitude more. Well, nothing has changed from this point of view.

This year NCS Audio shared a room with PRAUDIO and Qualiton. I had a chance to participate in a quick demo, where we learned more about the various digital and analog sources and the speakers, and then listened to them. It was a kind of stereotypical hifi event: songs I had listened to many times in various rooms during the day. Still, it was good, as it was a wide selection of music, and they sounded just as good as I expected :-)

Audio Expo 2025: NCS Audio

Audio Expo 2025: NCS Audio

Popori / 72 Audio

While most rooms featured devices that are in production and available on the market, the room of 72 Audio was different. Everything we listened to, except for the electrostatic speakers from Popori Acoustics, was hand-built a long time ago. I was rude: I responded to a text message while sitting in the back row and listening to the music. While I was typing, the music stopped. A new song started, and suddenly I looked up, confused: for a moment I was looking for the lady singing. Of course, she was not in the room, just in the recording :-) Well, my ears are very difficult to trick, and it only works when my mind is somewhere else. This was only the third time it has happened to me.

Audio Expo 2025: Popori Acoustics / 72 Audio

Disappointments

Of course, not everything was perfect. I do not want to name names here, just share a few experiences.

I have seen ads for a pair of streaming speakers multiple times a day for the past few months. Finally I had a chance to listen to them. Well. An extreme amount of detail. An extreme amount of bass, my weakness. Still, everything sounded too processed, too artificial. Not my world.

Recently I learned about a speaker brand developed and manufactured in the city of a close friend. Of course I became curious how it sounds. Well, practically speaking, it's a high-end home theater speaker. My immediate reaction was to look around for where I could watch the film. The exact same song, which was perfectly life-like on the NCS Reference One speakers, sounded like the background music of a movie on this far more expensive system.

The hosts in most rooms were really kind, helpful, and smiling. But not everywhere. When someone blocks the exit and tries to push a catalog into my hands without much communication, that's a guarantee that I do not want to return there. Luckily this mentality was not typical at all at this event.

Others

Of course I cannot describe everything from a large expo in a single blog post. But other than my top 3 favorites, there were a few more I definitely have to mention.

  • Allegro Audio was good, as always. They also had a Made in Hungary component, the Flow amplifier.

  • Hobby Audio was playing music from tape using a self-built amplifier and a pair of speakers. They looked DIY, and they were actually DIY, but they had a much more natural sound than some of the much more expensive systems at the expo.

Audio Expo 2025: Audio Hobby

  • Natural Distortion demonstrated a prototype DAC and amplifier. Some of the features are still under development; nonetheless, what already worked sounded really nice and natural. A story definitely worth following!

  • Sound Mania had a nice-sounding pair of speakers from Odeon Audio. Well, they look kind of strange, but sound surprisingly good :-)

Audio Expo 2025: Sound Mania

Audio Expo 2025: Sound Mania

Disclaimers

I did not listen to everything. I skipped the rooms with headphones, and probably two rooms at the end of a corridor that were always full when I tried to get in. Nobody asked me to keep quiet about the stuff I did not like; I just do not like to be negative about things that are mostly subjective. Neither did anybody promise me money or any kind of audio equipment to write nice things, even though I would not mind receiving a new pair of speakers, an amplifier, and a DAC :-)

Closing words

I borrow my closing words from my blog last year: I really hope that next year we will have a similarly good Audio Expo in Budapest!

Project Membership

Posted by Brian (bex) Exelbierd on 2025-10-06 06:25:00 UTC

Membership in projects is a critical topic for collaboration and success. I co-wrote an article on this subject with Ben Cotton, which is hosted on duckalignment.academy. The article explores the nuances of project membership, including roles, responsibilities, and alignment strategies.

By understanding the dynamics of project membership, teams can better align their goals and achieve greater outcomes.

infra weekly recap: Early October 2025

Posted by Kevin Fenzi on 2025-10-04 18:40:19 UTC
Scrye into the crystal ball

Another saturday, so time for another weekly recap!

Looping mailing list bounces are not fun

We had a bit of fun in the early part of the week with our mailman server alerting that it had a lot of mail in the queue. Looking at it, I found that the queue was almost entirely bounces, but why?

Well, it was a sad confluence of events: some providers send bounce emails that are almost completely useless. I'll go ahead and name names: mimecast (used by Red Hat) and ibm (their own thing I guess?). These bounces don't include the original email, don't include headers from the original email, and don't include who the email was sent to. So, for example, say a fictional someone named Bob Smith signs up for a fedora list with bsmith@redhat.com. He then leaves the company, and emails to him bounce with a message saying "foobar@somethinginternal" bounced. You have no way to tell who it really was unless their internal and external names match up. mailman cannot process these bounce messages at all, so it just drops them.

But it gets worse. If someone in that state was an owner of a list, and they also enabled 'send bounce emails you cannot process to list owners', then... the email bounces, the bounce can't be processed, it gets sent to the list owners, where it causes another bounce, and so on. ;(

I managed to figure out the addresses causing the current issue, but it's frustrating. </end rant>

rdu2-cc to rdu3 dc move

The move of our rdu2 community cage hardware to rdu3 continues. I was working on network acls to pass to the networking folks this week. Hopefully I got everything at least to a working starter state.

Still looking like we are going to try a November move, but I am hoping we can get a new replacement machine in before that so I can actually migrate pagure.io before the move. The rest of the hosts there are not too critical and can be down, but it would be nice to avoid downtime for pagure.io.

mass update/reboot

We did another mass update/reboot cycle this week. We wanted to get everything up on the latest updates before going into final freeze next week.

To give a bit of history here, you may wonder why we do these periodic outages instead of just making everything stay up all the time? We may consider that again, but at least in the past there were problems with databases and a few other difficult-to-manage things. Of course you can definitely set up clustered databases these days, and we might try to move to that at some point, but in the past failing over and back was prone to a lot of issues. In the meantime, a few hours of downtime every few months doesn't seem like an undue burden.

some builder adjustments / additions

This last week I brought online 5 more buildhw-a64's (hardware aarch64 builders). With that done, I then adjusted our 'heavybuilder' and 'secure-boot' channels to take advantage of the new hardware.

So, I think we are in pretty good shape on x86_64 and aarch64 now. On power, our power10 buildvm's seem to be doing fine to me. We are planning some changes there in coming weeks though: we are moving from an 'entire machine is a kvm host' model to using lpars (logical partitions). This will allow us to move half of the current builders to a second power10 chassis and perhaps increase performance. On s390x, nothing much has changed. We continue to restrict ci jobs there, and I try to balance the number of builders against their specs.

DANE update

Small update for anyone who noticed or cares: I updated the ssl cert for *.fedoraproject.org last week, and finally got around to updating the DANE record for it today.

DANE is a way to tie a host's ssl cert to its dns records. Postfix and exim, at least, can automatically use that to verify things, as can a firefox extension I have that tells you whether or not the cert validates.
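The published record is essentially a hash of the certificate's public key in DNS. Here's a hedged shell sketch of computing a DANE-EE ("3 1 1") digest; a throwaway self-signed cert stands in for the real one, and the domain in the dig example is illustrative:

```shell
# Make a throwaway self-signed cert (a real TLSA record would use the
# live *.fedoraproject.org certificate instead):
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=example" \
    -keyout /tmp/dane-key.pem -out /tmp/dane-cert.pem 2>/dev/null

# TLSA "3 1 1" payload: SHA-256 digest of the DER-encoded public key
openssl x509 -in /tmp/dane-cert.pem -noout -pubkey \
    | openssl pkey -pubin -outform DER \
    | openssl dgst -sha256 -binary | od -An -tx1 | tr -d ' \n'
echo

# Compare against what is published in DNS (SMTP on port 25 shown):
#   dig +short TLSA _25._tcp.mail.example.org
```

If the hex printed from the live cert matches the digest in the TLSA record, the pairing is in place.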

slim7x laptop news

I've kept using my lenovo slim7x laptop and switched over to mainline rawhide kernels a while back. The only thing that wasn't upstreamed for me was bluetooth support, and since that was heading upstream, I got the Fedora kernel maintainers to just include the patch.

Recent merge window kernels seem to have broken something in the devicetree file for the laptop though. It boots to a blank screen with the dtb from the kernel; passing it an old one works fine.

There's still work to get the devicetree files on the live media, at which point booting from usb on these becomes just a manual step of passing the right dtb, which is a great deal better than 'build your own live media with devicetree files on it'.

I guess for now I'll just keep daily driving it, but the lack of a webcam is kinda annoying.

Radxa Orion O6

Picked up one of these the other day with a set of flimsy excuses: "I can use it to build kernels for the laptop" and "I can help test fedora rcs". It was also on sale at the time. :)

I just installed it this morning. Pretty painless overall: just switching it to 'acpi' mode from 'devicetree', and then an annoying detour of it not liking the first usb stick I plugged in. With an older one it booted right up with the f43 workstation live and was installed a few minutes later.

Will probably do a separate blog post with a review soon.

comments? additions? reactions?

As always, comment on mastodon: https://fosstodon.org/@nirik/115317546174534133

New badge: Contributor Recognition Nominee !

Posted by Fedora Badges on 2025-10-03 06:27:15 UTC
Contributor Recognition NomineeYou went above and beyond - Fedora Project would not be the same without you!

Fedora Strategy 2028 - Growth in Two Parts

Posted by Robert Wright on 2025-10-03 00:00:00 UTC

The Fedora community’s guiding north star for Strategy 2028 is “By the end of 2028, double the number of contributors active every week,” which boils down to thinking about how we attract and support people who are making an effort to improve or grow Fedora, both as a Linux distribution and as a project. Many of the teams involved in Fedora on a day-to-day basis are technologists who focus on writing code and delivering high-quality software to end users. At the same time, there is a community of contributors who focus more on enabling, promoting, and building a better space for all. In how Fedora represents these two bodies, the Mindshare committee and the FESCo (Fedora Engineering Steering Committee) both hold the task of providing guidance to the SIGs/Teams/WGs/loosely organized flocks of contributors who are working to make Fedora better.

New badge: TXLF 2025 !

Posted by Fedora Badges on 2025-10-02 17:26:28 UTC
TXLF 2025You visited the Fedora booth at TXLF 2025!

Rituals

Posted by Brian (bex) Exelbierd on 2025-10-02 12:38:00 UTC

Rituals are a funny thing. We form them without even realizing it. It’s the cafe you visit with a friend for breakfast each month - not for the food, but for the company, the setting, and the sense of comfort.

A new ritual has entered my life: dropping my daughter off at first grade. The walk to school is a novelty for me. I’ve never lived walking distance from any school I attended, so our quick three-minute stroll feels special. It’s the same path we took to her kindergarten tram last year, but now we just cross the street, and we’re there. First grade is the “start of school” year in Czech Republic, so this time the drop off has more meaning … and less parental involvement.

Inside the entry hall, our goodbye is its own small ceremony: a hug, a kiss, a few seconds of token tears about not wanting to go to school, then a sniff as she joins the stream of kids heading into the locker room. Czech kids change into slippers at school, so she has to swim upstream through the crowd to her locker. I know she leaves her coat there, but beyond that, it’s a mystery. Parents are barred from the locker room for “hygienic” reasons1 - or so the sign says. I suspect it’s really about helping the kids find their independence.

My job is to wait. At the end of the hall are two glass-paned doors, locked from my side. I wait and wonder what’s taking so long, and then she appears from another door, heading for the grand staircase to her classroom. We wave and blow kisses as she climbs. I get a wave halfway up and another from the top. Just before she disappears, she ducks to peer back through a gap between columns for one last wave, then straightens up and enters her world.

Every day, a group of parents performs the same ritual. Many have younger children with them, observing how it’s all done. One woman always has a small dog over her shoulder, facing away from the action. A few kids get caught in an eddy, taking forever to emerge, and you can see the parents glancing at their watches. Others come to the glass to wave, mouthing words and pointing, only to be met with gestures to “go upstairs.” They never seem to realize the doors aren’t soundproof; they’re just old, and we could hear them perfectly if they didn’t mouth the words like a whisper.

My part of the ritual is done. I leave with a smile on my face and return later for the reverse. I tap an NFC chip on my keyring to a reader, which tells me if she’s in the after-school program or still at lunch or otherwise occupied and not allowed to leave yet. This tap also tells the school I am here so they send her out. She appears from somewhere - I honestly don’t know where - and comes to the glass doors. Only she can open them from her side, and she makes a show of struggling with the handle until, with great effort, she’s free. Then the stories from her day come spilling out, a mix of songs she sang and things she made, all while I’m trying to get her to put on her coat.

It’s a life worth living.

A large T-shaped stairwell rises between columns to a landing about one and a half floors above ground; reflections of the space behind the photographer appear in the glass door through which the photo was taken.

  1. It’s hygienic in the same way that all restaurants that close unexpectedly for the evening do so for “technical reasons” that don’t explain that the cook is sick or whatever. You don’t need to know. 

Simplifying Package Submission Progress (7 August – 14 August) – GSoC ’25

Posted by Fedora Community Blog on 2025-10-02 10:00:00 UTC

This week in the project, we covered the changes to how we handle pull requests, to make the process more intuitive.

No config? No problem!

One of our key goals is to reduce friction for contributors as much as possible. With that in mind, my mentor suggested we align our required file structure with the standard dist-git layout, which defines a package simply with a specfile and patches for the upstream source.

Previously, our service required a packit.yaml configuration file in the repository just to trigger COPR builds and Testing Farm jobs. We have now eliminated this dependency. By parsing the specfile directly from within a pull request, our service can infer all the required context automatically. This removes the need for any boilerplate configuration.
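The idea of inferring context from the specfile alone can be sketched in a few lines of Python. This is an illustration of pulling preamble tags out of a specfile, not the actual packit-service code; the function name and field set are hypothetical:

```python
import re

def parse_spec(spec_text: str) -> dict:
    """Pull basic preamble fields out of an RPM specfile (illustrative only)."""
    fields = {}
    for tag in ("Name", "Version", "Release", "Summary"):
        # Each preamble tag sits at the start of its own line, e.g. "Name: foo"
        m = re.search(rf"^{tag}:\s*(\S.*)$", spec_text, re.MULTILINE)
        if m:
            fields[tag.lower()] = m.group(1).strip()
    return fields

spec = """\
Name:    hello
Version: 1.0
Release: 1%{?dist}
Summary: Example package
"""
print(parse_spec(spec)["name"], parse_spec(spec)["version"])  # hello 1.0
```

With fields like these available from any pull request, no packit.yaml is needed to know which package to build.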

Small troubles

As I have mentioned before, we are currently using the rawhide container images for development, since the current stable Fedora release does not have the required versions of the Python bindings for Forgejo.

The problem was that I was running a dnf update before installing anything in the container, and for some reason ca-certificates got upset, which left the service unable to communicate over the network.

With some trial and error, I discovered that removing that line and simply not updating works. That fixed it for now, and the good news is that Fedora 43 is about to enter beta and a package freeze, so there should not be any such problems going forward.

What’s Next?

With these core features now in place, my focus for the upcoming weeks will be on two main tasks:

  1. Documentation: I’ll be writing documentation for the service, covering its usage for end-users and development guidelines for future contributors.
  2. Upstreaming Code: The final step is to contribute the Forgejo integration code back to the upstream OGR and packit-service projects so the wider community can benefit from it; this is still in progress.

The post Simplifying Package Submission Progress (7 August – 14 August) – GSoC ’25 appeared first on Fedora Community Blog.

Updates and Reboots

Posted by Fedora Infrastructure Status on 2025-10-01 21:00:00 UTC

We will be updating and rebooting various servers. Services will be up or down during the outage window.

Version 4.10.1 of syslog-ng now available

Posted by Peter Czanik on 2025-10-01 13:21:53 UTC

Version 4.10.1 is a bugfix release, not needed by most users. It fixes the syslog-ng container and platform support in some less common situations.

Before you begin

I assume that most people are lazy and/or overbooked, just like me. So, if you already have syslog-ng 4.10.0 up and running, and packaged for your platform, just skip this bugfix release.

What is fixed?

  • You can now compile syslog-ng on FreeBSD 15 again.
  • The syslog-ng container has Python support working again.
  • Stackdump support compiles only on glibc Linux systems, but it also used to be enabled on other systems when libunwind was present. This affected embedded Linux systems using alternative libc implementations, such as OpenWrt, and Gentoo in some cases. It is now turned off by default and needs to be explicitly enabled.

What is next?

If you are switching to 4.10.X now, using 4.10.1 might spare you some extra debugging. Especially if you are developing for an embedded system.

syslog-ng logo

📝 Valkey version 8.1

Posted by Remi Collet on 2025-08-01 07:35:00 UTC

With version 7.4, Redis Labs chose to switch to the RSALv2 and SSPLv1 licenses, thus leaving the open-source world.

Most Linux distributions chose to drop it from their repositories. Various forks exist; Valkey seems a serious one and was chosen as the replacement.

So starting with Fedora 41 and Enterprise Linux 10 (CentOS, RHEL, AlmaLinux, RockyLinux...), redis is no longer available, but valkey is.

With version 8.0, Redis Labs chose to switch to the AGPLv3 license, and so is back as an open-source project, but many users have already switched and want to keep valkey.

RPMs of Valkey version 8.1.3 are available in the remi-modular repository for Fedora ≥ 41 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).

So you now have the choice between Redis and Valkey.

1. Installation

Packages are available in the valkey:remi-8.1 module stream.

1.1. Using dnf4 on Enterprise Linux

# dnf install https://rpms.remirepo.net/enterprise/remi-release-<ver>.rpm
# dnf module switch-to valkey:remi-8.1/common

1.2. Using dnf5 on Fedora

# dnf install https://rpms.remirepo.net/fedora/remi-release-<ver>.rpm
# dnf module enable valkey:remi-8.1

The valkey-compat-redis compatibility package is not available in this stream. If you need the Redis commands, you can install the redis package.

2. Modules

Some optional modules are also available:

These packages are weak dependencies of Valkey, so they are installed by default (if install_weak_deps is not disabled in the dnf configuration).

The modules are automatically loaded after installation and a service (re)start.

3. Future

Valkey also provides a set of modules, requiring some packaging changes already proposed for the Fedora official repository.

Redis may be proposed for reintegration and return to the Fedora official repository, by me if I find enough motivation and energy, or by someone else.

So users will have the choice and can even use both.

ℹ️ Notice: Enterprise Linux 10.0 and Fedora have valkey 8.0 in their repositories. Fedora 43 will have valkey 8.1. CentOS Stream 9 also has valkey 8.0, so it should be part of EL-9.7.

4. Statistics

valkey

The solution to deadlines is usually “cut scope”

Posted by Ben Cotton on 2025-10-01 12:00:00 UTC

Deadlines come for all of us, even in open source projects. The deadlines are often self-imposed (individually or collectively), but that doesn’t make them any less of a deadline. As the deadline approaches and you realize you’re not going to meet it, what do you do? There are a variety of approaches, but the best is usually to do less work.

As you may recall from Program Management for Open Source Projects (or innumerable other books, articles, and manifestos), a project’s cost, scope, quality, and time are all interrelated. When you’re time bound, you can increase the cost (“cost” in an open source project is usually “contributor effort”), decrease the quality, or decrease the scope. My recommendation is to cut scope. Doing less work is both effective and least likely to upset your users.

Cutting quality by doing less testing or being more rushed just leads to more work later as you fix the bugs your users found. Users typically don’t like finding bugs; they much prefer you fix them pre-release. Increasing the cost is rarely an option — contributors are probably already giving you as much time as they can. You might be able to cajole a brief burst of extra effort, but you can’t do that repeatedly.

So that leaves cutting scope. Fewer fully functional features beat more half-implemented features. The great thing is that there’s always the next release. In commercial work, the scope is somewhat fixed, too. In open source projects, you’re not contractually bound to the feature set. Users and downstream projects may be really hoping for a new feature, but you’re not obligated.

Of course, cutting scope isn’t always the answer. You can extend self-imposed deadlines and sometimes negotiate externally-imposed ones. When you’re truly aaalllmost there, an extension beats cutting work that’s mostly done. The downside is that while you’re waiting for feature A to be done, someone will try to get feature B ready for the release as well. When feature A is finished, feature B is almost done, so you hold on a little longer, and then someone will try to get feature C ready for the release. Without sticking to a deadline, you can end up never actually releasing.

This post’s featured photo by Markus Winkler on Unsplash.

The post The solution to deadlines is usually “cut scope” appeared first on Duck Alignment Academy.

Profit on Primoris Shares

Posted by Avi Alkalay on 2025-10-01 06:06:39 UTC

Today I sold shares of PSC Primoris that have gained 445% since I bought them in October 2021. What a good feeling: it means the 100 monies I allocated to this stock are now worth 545. Henrique Vasconcellos of Nord recommended selling only 47% of our PRIM position, to take profit and reduce risk, while keeping the rest because it tends to keep rising, perhaps with less momentum.

It was the biggest gain in Nord's Global portfolio, which I follow and find very good, since it has gained 85% in dollars for me since I started in 2021. Other strong performers in this portfolio include Hims&Hers (208%), Netflix (155%), TSMC (146%), and Halozyme (75%), among others.

Knowing how to operate in the stock market is knowledge I consider mandatory for everyone, even if a person doesn't want to trade. It is as basic, important, and simple as knowing how to wash dishes or how to drive. Dedicating yourself to your investments is the best thing you can do for your retirement and financial independence. Global stocks are just one facet. Real estate funds (at the right time), fixed income in Brazil (at 15% a year!!!), and cryptocurrencies are all things you should consider.

Ollama on Fedora Silverblue

Posted by Debarshi Ray on 2025-10-01 00:30:09 UTC

I found myself dealing with various rough edges and questions around running Ollama on Fedora Silverblue over the past few months. These arise from the fact that there are a few different ways of installing Ollama, /usr is a read-only mount point on Silverblue, people have different kinds of GPUs or none at all, the program that’s using Ollama might be a graphical application in a Flatpak or part of the operating system image, and so on. So, I thought I’d document a few different use-cases in one place for future reference, or maybe someone else will find it useful.

Different ways of installing Ollama

There are at least three different ways of installing Ollama on Fedora Silverblue. Each of them has its own nuances and trade-offs, which we will explore later.

First, there’s the popular single command POSIX shell script installer:

$ curl -fsSL https://ollama.com/install.sh | sh

There is a manual, step-by-step variant for those who are uncomfortable with running a script straight off the Internet. Both install Ollama in the operating system’s /usr/local, /usr, or / prefix, depending on which one comes first in the PATH environment variable, and attempt to enable and activate a systemd service unit that runs ollama serve.

Second, there’s a docker.io/ollama/ollama OCI image that can be used to put Ollama in a container. The container runs ollama serve by default.

Finally, there’s Fedora’s ollama RPM.

Surprise

Astute readers might be wondering why I mentioned the shell script installer in the context of Fedora Silverblue at all, given that /usr is a read-only mount point. Won’t that break the script? Not really; or rather, the script does break, but not in the way one might expect.

Even though /usr is read-only on Silverblue, /usr/local is not, because it’s a symbolic link to /var/usrlocal, and Fedora defaults to putting /usr/local/bin earlier in the PATH environment variable than the other prefixes that the installer attempts to use, as long as pkexec(1) isn’t being used. This happy coincidence allows the installer to place the Ollama binaries in their right places.

The script does fail eventually when attempting to create the systemd service unit to run ollama serve, because it tries to create an ollama user with /usr/share/ollama as its home directory. However, this half-baked installation works surprisingly well as long as nobody is trying to use an AMD GPU.

NVIDIA GPUs work, if the proprietary driver and nvidia-smi(1) are present in the operating system, which are provided by the kmod-nvidia and xorg-x11-drv-nvidia-cuda packages from RPM Fusion; and so does CPU fallback.

Unfortunately, the results would be the same if the shell script installer were used inside a Toolbx container. It would fail to create the systemd service unit because it can’t connect to the system-wide instance of systemd.

Using AMD GPUs with Ollama is an important use-case. So, let’s see if we can do better than trying to manually work around the hurdles faced by the script.

OCI image

The docker.io/ollama/ollama OCI image requires the user to know what processing hardware they have or want to use. To use it only with the CPU without any GPU acceleration:

$ podman run \
    --name ollama \
    --publish 11434:11434 \
    --rm \
    --security-opt label=disable \
    --volume ~/.ollama:/root/.ollama \
    docker.io/ollama/ollama:latest

This will be used as the baseline to enable different kinds of GPUs. Port 11434 is the default port on which the Ollama server listens, and ~/.ollama is the default directory where it stores its SSH keys and artificial intelligence models.

To enable NVIDIA GPUs, the proprietary driver and nvidia-smi(1) must be present on the host operating system, as provided by the kmod-nvidia and xorg-x11-drv-nvidia-cuda packages from RPM Fusion. The user space driver has to be injected into the container from the host using NVIDIA Container Toolkit, provided by the nvidia-container-toolkit package from Fedora, for Ollama to be able to use the GPUs.

The first step is to generate a Container Device Interface (or CDI) specification for the user space driver:

$ sudo nvidia-ctk cdi generate --output /etc/cdi/nvidia.yaml
…
…

Then the container needs to be run with access to the GPUs, by adding the --gpus option to the baseline command above:

$ podman run \
    --gpus all \
    --name ollama \
    --publish 11434:11434 \
    --rm \
    --security-opt label=disable \
    --volume ~/.ollama:/root/.ollama \
    docker.io/ollama/ollama:latest

AMD GPUs don’t need the driver to be injected into the container from the host, because it can be bundled with the OCI image. Therefore, instead of generating a CDI specification for them, an image that bundles the driver must be used. This is done by using the rocm tag for the docker.io/ollama/ollama image.

Then the container needs to be run with access to the GPUs. However, the --gpus option only works for NVIDIA GPUs, so the specific devices need to be spelled out by adding --device options to the baseline command above:

$ podman run \
    --device /dev/dri \
    --device /dev/kfd \
    --name ollama \
    --publish 11434:11434 \
    --rm \
    --security-opt label=disable \
    --volume ~/.ollama:/root/.ollama \
    docker.io/ollama/ollama:rocm

However, because of how AMD GPUs are programmed with ROCm, it’s possible that some decent GPUs might not be supported by the docker.io/ollama/ollama:rocm image. The ROCm compiler needs to explicitly support the GPU in question, and Ollama needs to be built with such a compiler. Unfortunately, the binaries in the image leave out support for some GPUs that would otherwise work. For example, my AMD Radeon RX 6700 XT isn’t supported.

This can be verified with nvtop(1) in a Toolbx container. If there’s no spike in the GPU and its memory, then it’s not being used.

It would be good to support as many AMD GPUs as possible with Ollama. So, let’s see if we can do better.

Fedora’s ollama RPM

Fedora offers a very capable ollama RPM, as far as AMD GPUs are concerned, because Fedora’s ROCm stack supports a lot more GPUs than other builds out there. It’s possible to check if a GPU is supported either by using the RPM and keeping an eye on nvtop(1), or by comparing the name of the GPU shown by rocminfo with those listed in the rocm-rpm-macros RPM.

For example, according to rocminfo, the name for my AMD Radeon RX 6700 XT is gfx1031, which is listed in rocm-rpm-macros:

$ rocminfo
ROCk module is loaded
=====================    
HSA System Attributes    
=====================    
Runtime Version:         1.1
Runtime Ext Version:     1.6
System Timestamp Freq.:  1000.000000MHz
Sig. Max Wait Duration:  18446744073709551615 (0xFFFFFFFFFFFFFFFF) (timestamp count)
Machine Model:           LARGE                              
System Endianness:       LITTLE                             
Mwaitx:                  DISABLED
DMAbuf Support:          YES

==========               
HSA Agents               
==========               
*******                  
Agent 1                  
*******                  
  Name:                    AMD Ryzen 7 5800X 8-Core Processor 
  Uuid:                    CPU-XX                             
  Marketing Name:          AMD Ryzen 7 5800X 8-Core Processor 
  Vendor Name:             CPU                                
  Feature:                 None specified                     
  Profile:                 FULL_PROFILE                       
  Float Round Mode:        NEAR                               
  Max Queue Number:        0(0x0)                             
  Queue Min Size:          0(0x0)                             
  Queue Max Size:          0(0x0)                             
  Queue Type:              MULTI                              
  Node:                    0                                  
  Device Type:             CPU                                
…
…
*******                  
Agent 2                  
*******                  
  Name:                    gfx1031                            
  Uuid:                    GPU-XX                             
  Marketing Name:          AMD Radeon RX 6700 XT              
  Vendor Name:             AMD                                
  Feature:                 KERNEL_DISPATCH                    
  Profile:                 BASE_PROFILE                       
  Float Round Mode:        NEAR                               
  Max Queue Number:        128(0x80)                          
  Queue Min Size:          64(0x40)                           
  Queue Max Size:          131072(0x20000)                    
  Queue Type:              MULTI                              
  Node:                    1                                  
  Device Type:             GPU
…
…
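Extracting the gfx name from that output is a one-liner. The sketch below feeds awk a captured sample so it runs without a GPU; the rpm query at the end is one assumed way to check the list shipped in rocm-rpm-macros:

```shell
# Sample rocminfo output stands in for a live `rocminfo` run:
sample='  Name:                    gfx1031
  Marketing Name:          AMD Radeon RX 6700 XT'

# Print only the gfx* agent names (CPU agents have non-gfx names):
printf '%s\n' "$sample" | awk '$1 == "Name:" && $2 ~ /^gfx/ {print $2}'
# prints: gfx1031

# On a real system, something like:
#   rocminfo | awk '$1 == "Name:" && $2 ~ /^gfx/ {print $2}'
#   rpm -ql rocm-rpm-macros | xargs grep -l gfx1031
```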

The ollama RPM can be installed inside a Toolbx container, or it can be layered on top of the base registry.fedoraproject.org/fedora image to replace the docker.io/ollama/ollama:rocm image:

FROM registry.fedoraproject.org/fedora:42
RUN dnf --assumeyes upgrade
RUN dnf --assumeyes install ollama
RUN dnf clean all
ENV OLLAMA_HOST=0.0.0.0:11434
EXPOSE 11434
ENTRYPOINT ["/usr/bin/ollama"]
CMD ["serve"]

Unfortunately, for obvious reasons, Fedora’s ollama RPM doesn’t support NVIDIA GPUs.

Conclusion

From the purist perspective of not touching the operating system’s OSTree image, and of being able to easily remove or upgrade Ollama, an OCI container is the best option for running Ollama on Fedora Silverblue. Tools like Podman offer a suite of features for managing OCI containers and images far beyond what the POSIX shell script installer can hope to offer.

It seems that the realities of GPUs from AMD and NVIDIA prevent the use of a single OCI image if we want to maximize hardware support, and force the use of slightly different Podman commands and associated set-up. We have to create our own image using Fedora’s ollama RPM for AMD, and use the docker.io/ollama/ollama:latest image with the NVIDIA Container Toolkit for NVIDIA.
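For starting the container automatically at login, Podman's Quadlet integration with systemd is one option. A minimal sketch of a user unit, assuming the rocm image and device paths from the AMD example above (the file name and location are illustrative):

```ini
# ~/.config/containers/systemd/ollama.container
[Container]
Image=docker.io/ollama/ollama:rocm
PublishPort=11434:11434
Volume=%h/.ollama:/root/.ollama
AddDevice=/dev/dri
AddDevice=/dev/kfd
SecurityLabelDisable=true

[Install]
WantedBy=default.target
```

After a `systemctl --user daemon-reload`, `systemctl --user start ollama` should bring the server up on port 11434, with systemd handling restarts and boot ordering instead of a hand-run podman command.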

Fedora 43 will ship with FOSS Meteor, Lunar and Arrow Lake MIPI camera support

Posted by Hans de Goede on 2025-09-30 18:55:09 UTC
Good news: the just-released 6.17 kernel has support for the IPU7 CSI2 receiver, and the missing USBIO drivers have recently landed in linux-next. I have backported the USBIO drivers plus a few other camera fixes to the Fedora 6.17 kernel.

I've also prepared an updated libcamera-0.5.2 Fedora package with support for IPU7 (Lunar Lake) CSI2 receivers as well as backporting a set of upstream SwStats and AGC fixes, fixing various crashes as well as the bad flicker MIPI camera users have been hitting with libcamera 0.5.2.

Together these 2 updates should make Fedora 43's FOSS MIPI camera support work on most Meteor Lake, Lunar Lake and Arrow Lake laptops!

If you want to give this a try, install / upgrade to Fedora 43 beta and install all updates. If you've installed rpmfusion's binary IPU6 stack please run:

sudo dnf remove akmod-intel-ipu6 'kmod-intel-ipu6*'

to remove it as it may interfere with the FOSS stack and finally reboot. Please first try with qcam:

sudo dnf install libcamera-qcam
qcam

which only tests libcamera and after that give apps which use the camera through pipewire a try like gnome's "Camera" app (snapshot) or video-conferencing in Firefox.

Note: snapshot on Lunar Lake triggers a bug in the LNL Vulkan code; to avoid this, start snapshot from a terminal with:

GSK_RENDERER=gl snapshot

If you have a MIPI camera which still does not work please file a bug following these instructions and drop me an email with the bugzilla link at hansg@kernel.org.


Local Voice Assistant Step 5: Remote Satellite

Posted by Jonathan McDowell on 2025-09-30 18:23:01 UTC

The last (software) piece of sorting out a local voice assistant is tying the openWakeWord piece to a local microphone + speaker, and thus back to Home Assistant. For that we use wyoming-satellite.

I’ve packaged that up - https://salsa.debian.org/noodles/wyoming-satellite - and then to run I do something like:

$ wyoming-satellite --name 'Living Room Satellite' \
    --uri 'tcp://[::]:10700' \
    --mic-command 'arecord -r 16000 -c 1 -f S16_LE -t raw -D plughw:CARD=CameraB409241,DEV=0' \
    --snd-command 'aplay -D plughw:CARD=UACDemoV10,DEV=0 -r 22050 -c 1 -f S16_LE -t raw' \
    --wake-uri tcp://[::1]:10400/ \
    --debug

That starts us listening for connections from Home Assistant on port 10700, uses the openWakeWord instance on localhost port 10400, uses aplay/arecord to talk to the local microphone and speaker, and gives us some debug output so we can see what’s going on.
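For unattended use, this could be wrapped in a systemd user unit so it starts on login. This is a hypothetical sketch (unit name and install path are assumptions; reuse your own working command line, and drop --debug once things are stable):

```ini
# ~/.config/systemd/user/wyoming-satellite.service (hypothetical)
[Unit]
Description=Wyoming satellite for Home Assistant
After=network-online.target

[Service]
ExecStart=/usr/bin/wyoming-satellite --name 'Living Room Satellite' \
    --uri 'tcp://[::]:10700' \
    --mic-command 'arecord -r 16000 -c 1 -f S16_LE -t raw -D plughw:CARD=CameraB409241,DEV=0' \
    --snd-command 'aplay -D plughw:CARD=UACDemoV10,DEV=0 -r 22050 -c 1 -f S16_LE -t raw' \
    --wake-uri tcp://[::1]:10400/
Restart=on-failure

[Install]
WantedBy=default.target
```

Then a `systemctl --user daemon-reload` followed by `systemctl --user enable --now wyoming-satellite` would keep it running.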

And it turns out we need the debug. This setup is a bit too flaky to have ended up in regular use in our household. I’ve had some problems with getting a reliable audio setup; you’ll note the Python is calling out to other tooling to grab audio, which feels a bit clunky to me but which I don’t think is the actual problem. The main audio for this host is hooked up to the TV (it’s a media box), so the setup for the voice assistant needs to be entirely separate. That means not plugging into PipeWire or similar, and instead giving wyoming-satellite direct access to the devices, and sometimes having to fiddle with the mixer manually to keep it happy and non-muted.

I’ve also had some issues with the USB microphone + speaker; I suspect a powered USB hub would help, and that’s on my list to try out.

When it does work I have sometimes found it necessary to speak more slowly, or enunciate my words more clearly. That’s probably something I could improve by switching from the base.en to small.en whisper.cpp model, but I’m waiting until I sort out the audio hardware issue before poking more.

Finally, the wake word detection is a little bit sensitive sometimes, as I mentioned in the previous post. To be honest I think it’s possible to deal with that, if I got the rest of the pieces working smoothly.

This has ended up sounding like a more negative post than I meant it to. Part of the issue in a resolution is finding enough free time to poke things (especially as it involves taking over the living room and saying “Hey Jarvis” a lot), part of it is no doubt my desire to actually hook up the pieces myself and understand what’s going on. Stay tuned and see if I ever manage to resolve it all!

File size-based log rotation in syslog-ng

Posted by Peter Czanik on 2025-09-30 11:51:19 UTC

Version 4.10 of syslog-ng introduced file size-based log rotation. Thanks to this, storage space is no longer filled up by logs, with the trade-off that you might not see older logs if the message rate is higher than expected.

Read more at https://www.syslog-ng.com/community/b/blog/posts/file-size-based-log-rotation-in-syslog-ng

syslog-ng logo

Simplifying Package Submission Progress (29 July – 7 August) – GSoC ’25

Posted by Fedora Community Blog on 2025-09-30 10:00:00 UTC

This week in the project, we added the support for status reporting of the package actions via commit statuses.

Feedback and Commit status

A key improvement this week is in status reporting. Now, whenever an action like a package build or a test succeeds or fails, the result is directly reported as a commit status on the pull request. This provides immediate clear feedback right where it’s most visible, the list of checks one can find on the bottom of the PR page.

To achieve this, I had to implement the necessary commit status logic for Forgejo within OGR, the library we use for Git forge interactions, as it was previously missing.

I noticed that Forgejo does not really support commenting under individual commits; every such comment ends up as a normal comment in the discussion, so I focused on normal discussion comments instead.

Currently, what’s left is writing unit tests for the new code, which will be done along with pushing the code into the upstream sources.

What’s Next?

With this feature in place, my next steps will be:

  • Project Demonstration: Prepare a comprehensive demo to showcase the concept and the new workflow.
  • Code Refinement: Continue with bug fixes and code refactoring to improve the maintainability of the service. Primarily, downsizing from the original packit-worker code for the parts where we don’t need the integrations of services not involved in this project.

The post Simplifying Package Submission Progress (29 July – 7 August) – GSoC ’25 appeared first on Fedora Community Blog.

Asztropapucs debut single

Posted by Peter Czanik on 2025-09-28 06:20:15 UTC

The Hungarian band Asztropapucs has a special place in my heart. I have known these musicians for a long time, some of them even before they formed the band. Like almost everyone else, they started out playing cover songs years ago. Recently, however, they started writing their own songs. I have seen them perform at various concerts. They practiced regularly, and their hard work has led to continuous improvement. This weekend, they published their first song on several streaming services: “Maja” I’ve listened to it many times, and I recommend you do the same. :-)

TIDAL: https://tidal.com/album/460285061

infra weekly recap: End of sept 2025

Posted by Kevin Fenzi on 2025-09-27 17:38:25 UTC
Scrye into the crystal ball

Another saturday, time for another weekly recap. September is almost over and time is flying by. But of course fall is the very best season, so we have that going for us.

Logging and looping apps

This week our central log server ran low on disk space again, and it was the same issue we have hit in the past: an application that processes fedora messaging messages hit one where processing caused a traceback. So, it put the message back on the queue and retried, over and over. This results in rather a lot of logs. :( We have actually discussed some fixes for this in the past, but haven't gotten around to implementing any of them. Perhaps this latest issue can revive that work.

Monitoring news

We currently have a nagios setup and a new zabbix setup we are moving to. This week we found that nagios wasn't monitoring some of the external servers it should have been. Turns out it was due to some changes we made for the datacenter move: we had 2 nagios servers and only wanted one to monitor external things, but that monitoring didn't properly get moved to the new server after the move. So, we fixed that, and then I had to fix a number of AWS security groups and firewalls for those servers to get everything working again.

I'm really looking forward to just finishing the switch to zabbix. I really hope we can turn nagios off before the end of the year. zabbix has a number of advantages and will get us to a nicer space.

Really very annoying sporadic 503 errors on kojipkgs requests

Some small percentage of the time, we have been seeing 503 errors from requests to our kojipkgs servers. This is most visible on builds, but also started to affect composes and other things.

The setup here is somewhat complex: a build or compose request to kojipkgs.fedoraproject.org goes to one of two internal proxies (proxy110/proxy101), where apache terminates ssl and proxies to a haproxy instance. That haproxy in turn has two varnish servers behind it (kojipkgs01 and kojipkgs02). varnish on those servers caches as much as it possibly can in memory, and anything it cannot cache is requested from a local apache in front of a netapp nfs server.
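As a rough sketch of that haproxy layer (purely illustrative: hostnames, ports, and check settings here are assumptions, not the real Fedora config):

```text
# Hypothetical haproxy backend for the setup described above; names,
# ports, and intervals are illustrative, not the production config.
backend kojipkgs
    option httpchk GET /
    server kojipkgs01 kojipkgs01.example:6081 check inter 2s fall 3 rise 2
    server kojipkgs02 kojipkgs02.example:6081 check inter 2s fall 3 rise 2
```

When both servers fail their health checks, haproxy has nowhere to send the request and returns a 503 to the client.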

The problem was the haproxy -> varnish step. Sometimes haproxy would see failed health checks and disable a backend, sometimes that would happen with _both_ backends and clients would get a 503.

Digging around on it I found:

  • Seemingly no particular pattern to when it happened
  • There was no change in packages or configuration on the proxy servers
  • Rebooting proxies or varnish servers caused no change
  • I could not duplicate it from any machines other than the proxy servers
  • I could duplicate it via curl on the proxies
  • When it happened, the proxy had connections showing it sent a SYN

tcpdump would have been nice, but given the massive amount of traffic it wasn't too practical.

Finally, I realized all the stuck connections were using a very high local port. When I switched curl to use local ports under 32k, the problem was gone. But why? I still don't know why... but just switching the proxies to use a local port range under 32k seems to have completely cleared the problem up. Networking folks say there have been no changes, and I could not find anything that changed on our end either.
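The local port range in play here is the kernel's ephemeral port range, which can be inspected and (hypothetically) pinned below 32k via sysctl. The values below are illustrative examples, not the ones used in Fedora infra:

```shell
# Show the ephemeral (local) port range the kernel picks source ports from
cat /proc/sys/net/ipv4/ip_local_port_range

# Illustrative: pin the range below 32k (needs root; values are examples)
# sysctl -w net.ipv4.ip_local_port_range="16384 32000"
```

curl's --local-port option is also handy for testing a specific source port range by hand.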

So, builds are back to normal, and composes (that were sometimes failing and sometimes taking 2x or more time to finish) are back to normal also.

Oncall

A number of years ago in Fedora Infrastructure we set up a weekly role called 'oncall'. This person would watch chats for people asking about problems or pinging specific team members, and instead triage their issue and guide them in filing a ticket, or decide if the problem was urgent enough to bother others on the team with.

In this week's infra meeting we talked about it, and while this was getting used back when we were on IRC, it doesn't really seem to be getting much use at all on matrix. There could of course be a number of reasons for that: people realize they should just file a ticket, there just haven't been too many urgent issues, people are unaware of it, or general answers to questions have been good enough that people don't escalate.

Anyhow, we are thinking of dropping or revamping how that works. I'm planning on starting a discussion thread on it soon...

comments? additions? reactions?

As always, comment on mastodon: https://fosstodon.org/@nirik/115277573858704803

Infra and RelEng Update – Week 39 2025

Posted by Fedora Community Blog on 2025-09-26 10:00:00 UTC

This is a weekly report from the I&R (Infrastructure & Release Engineering) Team. We provide both an infographic and a text version of the weekly report. If you just want a quick look at what we did, see the infographic. If you are interested in more in-depth details, look below the infographic.

Week: 22nd – 26th September 2025

Infrastructure & Release Engineering

The purpose of this team is to take care of day to day business regarding CentOS and Fedora Infrastructure and Fedora release engineering work.
It’s responsible for services running in Fedora and CentOS infrastructure and preparing things for the new Fedora release (mirrors, mass branching, new namespaces etc.).
List of planned/in-progress issues

Fedora Infra

CentOS Infra including CentOS CI

Release Engineering

If you have any questions or feedback, please respond to this report or contact us on #admin:fedoraproject.org channel on matrix.

The post Infra and RelEng Update – Week 39 2025 appeared first on Fedora Community Blog.

🎲 PHP 8.5 as Software Collection

Posted by Remi Collet on 2025-07-04 06:14:00 UTC

Version 8.5.0alpha1 has been released. It's still in development and will soon enter the stabilization phase for the developers, and the test phase for the users (see the schedule).

RPMs of this upcoming version of PHP 8.5 are available in the remi repository for Fedora ≥ 41 and Enterprise Linux ≥ 8 (RHEL, CentOS, Alma, Rocky...) in a fresh new Software Collection (php85), allowing its installation beside the system version.

As I (still) strongly believe in SCL's potential to provide a simple way to install various versions simultaneously, and as I think it is useful to offer this feature to allow developers to test their applications, sysadmins to prepare a migration, or anyone to use this version for some specific application, I decided to create this new SCL.

I also plan to propose this new version as a Fedora 44 change (as F43 should be released a few weeks before PHP 8.5.0).

Installation :

yum install php85

⚠️ To be noticed:

  • the SCL is independent from the system and doesn't alter it
  • this SCL is available in remi-safe repository (or remi for Fedora)
  • installation is under the /opt/remi/php85 tree, configuration under the /etc/opt/remi/php85 tree
  • the FPM service (php85-php-fpm) is available, listening on /var/opt/remi/php85/run/php-fpm/www.sock
  • the php85 command gives simple access to this new version, however, the module or scl command is still the recommended way.
  • for now, the collection provides 8.5.0-alpha1, and alpha/beta/RC versions will be released in the next weeks
  • some of the PECL extensions are already available, see the extensions status page
  • tracking issue #307 can be used to follow the work in progress on RPMS of PHP and extensions
  • the php85-syspaths package allows using it as the system's default version

ℹ️ Also, read other entries about SCL especially the description of My PHP workstation.

$ module load php85
$ php --version
PHP 8.5.0alpha1 (cli) (built: Jul  1 2025 21:58:05) (NTS gcc x86_64)
Copyright (c) The PHP Group
Built by Remi's RPM repository  #StandWithUkraine
Zend Engine v4.5.0-dev, Copyright (c) Zend Technologies
    with Zend OPcache v8.5.0alpha1, Copyright (c), by Zend Technologies

As always, your feedback is welcome on the tracking ticket.

Software Collections (php85)

🎲 PHP on the road to the 8.5.0 release

Posted by Remi Collet on 2025-09-26 05:04:00 UTC

Version 8.5.0 Release Candidate 1 has been released. It has now entered the stabilization phase for the developers, and the test phase for the users.

RPMs are available in the php:remi-8.5 stream for Fedora ≥ 41 and Enterprise Linux ≥ 8 (RHEL, CentOS, Alma, Rocky...) and as a Software Collection in the remi-safe repository (or remi for Fedora).

 

⚠️ The repository provides development versions which are not suitable for production usage.

Also read: PHP 8.5 as Software Collection

ℹ️ Installation : follow the Wizard instructions.

Replacement of default PHP by version 8.5 installation, module way (simplest way):

Using dnf 4 on Enterprise Linux

dnf module switch-to php:remi-8.5/common

Using dnf 5 on Fedora

dnf module reset php
dnf module enable php:remi-8.5
dnf update

Parallel installation of version 8.5 as Software Collection (recommended for tests):

dnf install php85

⚠️ To be noticed :

  • EL-10 RPMs are built using RHEL-10.0
  • EL-9 RPMs are built using RHEL-9.6
  • EL-8 RPMs are built using RHEL-8.10
  • a lot of extensions are also available, see the PHP extension RPM status page and the PHP version 8.5 tracker
  • follow the comments on this page for updates until the final version
  • proposed as a Fedora 44 change

ℹ️ Information, read:

Base packages (php)

Software Collections (php85)

⚙️ PHP version 8.3.26 and 8.4.13

Posted by Remi Collet on 2025-09-26 05:00:00 UTC

RPMs of PHP version 8.4.13 are available in the remi-modular repository for Fedora ≥ 41 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).

RPMs of PHP version 8.3.26 are available in the remi-modular repository for Fedora ≥ 41 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).

ℹ️ The packages are available for x86_64 and aarch64.

ℹ️ There is no security fix this month, so no update for version 8.1.33 and 8.2.29.

These versions are also available as Software Collections in the remi-safe repository.

Version announcements:

ℹ️ Installation: use the Configuration Wizard and choose your version and installation mode.

Replacement of default PHP by version 8.4 installation (simplest):

dnf module switch-to php:remi-8.4/common

Parallel installation of version 8.4 as Software Collection

yum install php84

Replacement of default PHP by version 8.3 installation (simplest):

dnf module switch-to php:remi-8.3/common

Parallel installation of version 8.3 as Software Collection

yum install php83

And soon in the official updates:

⚠️ To be noticed :

  • EL-10 RPMs are built using RHEL-10.0
  • EL-9 RPMs are built using RHEL-9.6
  • EL-8 RPMs are built using RHEL-8.10
  • intl extension now uses libicu74 (version 74.2)
  • mbstring extension (EL builds) now uses oniguruma5php (version 6.9.10, instead of the outdated system library)
  • oci8 extension now uses the RPM of Oracle Instant Client version 23.8 on x86_64 and aarch64
  • a lot of extensions are also available; see the PHP extensions RPM status (from PECL and other sources) page

ℹ️ Information:

Base packages (php)

Software Collections (php83 / php84)

⚙️ PHP version 8.3.25 and 8.4.12

Posted by Remi Collet on 2025-08-29 04:43:00 UTC

RPMs of PHP version 8.4.12 are available in the remi-modular repository for Fedora ≥ 41 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).

RPMs of PHP version 8.3.25 are available in the remi-modular repository for Fedora ≥ 41 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).

ℹ️ The packages are available for x86_64 and aarch64.

ℹ️ There is no security fix this month, so no update for version 8.1.33 and 8.2.29.

These versions are also available as Software Collections in the remi-safe repository.

Version announcements:

ℹ️ Installation: use the Configuration Wizard and choose your version and installation mode.

Replacement of default PHP by version 8.4 installation (simplest):

dnf module switch-to php:remi-8.4/common

Parallel installation of version 8.4 as Software Collection

yum install php84

Replacement of default PHP by version 8.3 installation (simplest):

dnf module switch-to php:remi-8.3/common

Parallel installation of version 8.3 as Software Collection

yum install php83

And soon in the official updates:

⚠️ To be noticed :

  • EL-10 RPMs are built using RHEL-10.0
  • EL-9 RPMs are built using RHEL-9.6
  • EL-8 RPMs are built using RHEL-8.10
  • intl extension now uses libicu74 (version 74.2)
  • mbstring extension (EL builds) now uses oniguruma5php (version 6.9.10, instead of the outdated system library)
  • oci8 extension now uses the RPM of Oracle Instant Client version 23.8 on x86_64 and aarch64
  • a lot of extensions are also available; see the PHP extensions RPM status (from PECL and other sources) page

ℹ️ Information:

Base packages (php)

Software Collections (php83 / php84)

🎲 PHP version 8.3.26RC1 and 8.4.13RC1

Posted by Remi Collet on 2025-09-12 07:04:00 UTC

Release Candidate versions are available in the testing repository for Fedora and Enterprise Linux (RHEL / CentOS / Alma / Rocky and other clones) to allow more people to test them. They are available as Software Collections, for parallel installation, the perfect solution for such tests, and as base packages.

RPMs of PHP version 8.4.13RC1 are available

  • as base packages in the remi-modular-test for Fedora 41-43 and Enterprise Linux ≥ 8
  • as SCL in remi-test repository

RPMs of PHP version 8.3.26RC1 are available

  • as base packages in the remi-modular-test for Fedora 41-43 and Enterprise Linux ≥ 8
  • as SCL in remi-test repository

ℹ️ The packages are available for x86_64 and aarch64.

ℹ️ PHP version 8.2 is now in security mode only, so no more RC will be released.

ℹ️ Installation: follow the wizard instructions.

ℹ️ Announcements:

Parallel installation of version 8.4 as Software Collection:

yum --enablerepo=remi-test install php84

Parallel installation of version 8.3 as Software Collection:

yum --enablerepo=remi-test install php83

Update of system version 8.4:

dnf module switch-to php:remi-8.4
dnf --enablerepo=remi-modular-test update php\*

Update of system version 8.3:

dnf module switch-to php:remi-8.3
dnf --enablerepo=remi-modular-test update php\*

ℹ️ Notice:

  • version 8.4.13RC1 is in Fedora rawhide for QA
  • version 8.5.0beta3 is also available in the repository, as SCL
  • EL-10 packages are built using RHEL-10.0 and EPEL-10.0
  • EL-9 packages are built using RHEL-9.6
  • EL-8 packages are built using RHEL-8.10
  • oci8 extension uses the RPM of the Oracle Instant Client version 23.9 on x86_64 and aarch64
  • intl extension uses libicu 74.2
  • the RC version is usually the same as the final version (no changes accepted after RC, except for security fixes)
  • versions 8.3.26 and 8.4.13 are planned for September 25th, in 2 weeks

Software Collections (php83, php84)

Base packages (php)

Council Policy Proposal: Policy on AI-Assisted Contributions

Posted by Fedora Community Blog on 2025-09-25 13:10:55 UTC

Artificial Intelligence is a transformative technology, and as a leader in open source, the Fedora Project needs a thoughtful position to guide innovation and protect our community’s values.

For the past year, we have been on a journey with the community to define what that position should be. This process began in the summer of 2024, when we asked for the community’s thoughts in an AI survey. The results, which we discussed openly at Flock and in Council meetings, gave us a clear message: we see the potential for AI to help us build a better platform, but we also have valid concerns about privacy, ethics, and quality.

The draft we are proposing below is our best effort to synthesize the Fedora community’s input into a set of clear, actionable guidelines. It is designed to empower our contributors to explore the positive uses of AI we identified, while creating clear guardrails to protect the project and its values from the risks we highlighted.

Next Steps

In accordance with the official Policy Change policy, we are now opening a formal two-week period for community review and feedback. We encourage you to read the full draft and share your thoughts on the discussion.fedoraproject.org post.

After the two-week feedback period, the Fedora Council will hold a formal vote on ratifying the policy via ticket voting. Thank you for your thoughtful engagement throughout this process. We look forward to hearing your feedback as we take this important next step together.


Fedora Project Policy on AI-Assisted Contributions

Our Philosophy: AI as a Tool to Advance Free Software

The Fedora Project is a community built on four foundations: Freedom, Friends, Features, and First. We envision a world where everyone benefits from free and open source software built by inclusive, open-minded communities. In the spirit of our “First” and “Features” foundations, we see Artificial Intelligence as a tool to empower our contributors to make a more positive impact on the world.

We recognize the ongoing and important global discussions about how AI models are trained. Our policy focuses on the responsible use of these tools. The responsibility for respecting the work of others and adhering to open source licenses always rests with the contributor. AI assistants, like any other tool, must be used in a way that upholds these principles.

This policy provides a framework to help our contributors innovate confidently while upholding the project’s standards for quality, security, and open collaboration. It is a living document, reflecting our commitment to learning and adapting as this technology evolves.

1. AI-Assisted Project Contributions

We encourage the use of AI assistants as an evolution of the contributor toolkit. However, human oversight remains critical. The contributor is always the author and is fully accountable for their contributions.

  • You are responsible for your contributions. AI-generated content must be treated as a suggestion, not as final code or text. It is your responsibility to review, test, and understand everything you submit. Submitting unverified or low-quality machine-generated content (sometimes called “AI slop”) creates an unfair review burden on the community and is not an acceptable contribution.
  • Be transparent about your use of AI. When a contribution has been significantly assisted by an AI tool, we encourage you to note this in your pull request description, commit message, or wherever authorship is normally indicated for the work. For instance, use a commit message trailer like Assisted-by: <name of code assistant>. This transparency helps the community develop best practices and understand the role of these new tools.
  • Fedora values Your Voice. Clear, concise, and authentic communication is our goal. Using AI tools to translate your thoughts or overcome language barriers is a welcome and encouraged practice, but keep in mind, we value your unique voice and perspective.
  • Limit AI Tools for Reviewing. As with creating code, documentation, and other contributions, reviewers may use AI tools to assist in providing feedback, but not to wholly automate the review process. Particularly, AI should not make the final determination on whether a contribution is accepted or not.

2. AI In Fedora Project Management: To avoid introducing uncontrollable bias, AI/ML tools must not be used to score or evaluate submissions for things like code of conduct matters, funding requests, conference talks, or leadership positions. This does not prohibit the use of automated tooling for tasks like spam filtering and note-taking.

3. AI Tools for Fedora Users

Our commitment is to our users’ privacy and security. AI-powered features can offer significant benefits, but they must be implemented in a way that respects user consent and control.

  • AI features MUST be opt-in. Any user-facing AI assistant, especially one that sends data to a remote service, must not be enabled by default and requires explicit, informed consent from the user.
  • We SHOULD explore AI for accessibility. We actively encourage exploring the use of AI/ML tools for accessibility improvements, such as for translation, transcription, and text-to-speech.

4. Fedora as a Platform for AI Development

One of our key goals is to make Fedora the destination for Linux platform innovation, including for AI.

  • Package AI tools and frameworks. We encourage the packaging of tools and frameworks needed for AI research and development in Fedora, provided they comply with all existing Fedora Packaging and Licensing guidelines.

5. Use of Fedora Project Data

The data generated by the Fedora Project is a valuable community asset. Its use in training AI models must respect our infrastructure and our open principles.

  • Aggressive scraping is prohibited. Scraping data in a way that causes a significant load on Fedora Infrastructure is not allowed. Please contact the Fedora Infrastructure team to arrange for efficient data access.
  • Honor our licenses. When using Fedora project data to train a model, we expect that any use of this data honors the principles of attribution and sharing inherent in those licenses.

The post Council Policy Proposal: Policy on AI-Assisted Contributions appeared first on Fedora Community Blog.

Fedora DEI Team Q1–Q3 2025 Highlights

Posted by Fedora Community Blog on 2025-09-25 10:00:00 UTC

We’re excited to share our Quarterly Reports from Q1 to Q3 2025. From January to September, we’ve welcomed new members to the DEI team, hosted the Fedora Mentor Summit (FMS), joined Outreachy, connected during Fedora Social Calls, and worked on our Event DEI Location Policy. Here’s a quick look back at what we’ve accomplished together so far this year!

Fedora Mentor Summit (FMS)

The Fedora Mentor Summit was held in person at Flock to Fedora 2025 in Prague, Czech Republic, and it was a great opportunity to connect and share. Some of the highlights included:

See more in the Past Events section

Outreachy

This year, three Fedora Outreachy interns joined us:

These projects highlight Fedora’s commitment to mentorship and building new contributor pathways.

Fedora Social Calls

In July 2025, the DEI team kicked off Fedora Social Calls, casual hangouts to connect beyond project work.

  • Held thrice so far, with plans to run monthly
  • Led by Cornelius Emase
  • We’d love to see you at the next one! Our last calls had a smaller turnout, but these gatherings are all about having fun, so don’t miss the next one this September

These calls help build community bonds and keep Fedora fun and welcoming.

See the Discussion thread here. If you’d like to participate or propose another day, kindly reply in the thread; updates about the calls will be shared there. At present, we meet on the second Friday of each month at 3 pm UTC. I hope to see you there.

Event DEI Location Policy

The Event DEI Location Policy is now an official Council policy. Thanks to Michael Scherer (misc) for helping push this forward! 

Its goal is simple: to ensure Fedora’s global events are hosted in places where all contributors, including those from marginalized communities, can participate without barriers.

This policy helps keep Fedora events inclusive and welcoming for everyone worldwide.

Read the full policy

New members in the DEI Team

In June 2025, Cornelius Emase (lochipi) was nominated and welcomed as a new member of the Fedora DEI Team. Cornelius joined as part of the June–August 2025 Outreachy cohort with the DEI team.

In August 2025, Chris Idoko (chris) was nominated to join the Fedora DEI Team, following his contributions as an Outreachy intern in 2023, representing Fedora at OSCAFEST 2025, and helping organize Week of Diversity 2024.

In September 2025, Michael Scherer (misc) was nominated and welcomed to the Fedora DEI Team. misc has been actively involved in Fedora through creating the Event DEI Location Policy, presenting “Injecting DEI into Community Event Policies” at Flock 2024, contributing to badges, community infrastructure, and more.

Looking ahead to Q4 2025

Stay tuned for updates on upcoming social calls, ongoing mentorship, and preparations for 2026 DEI activities.

The post Fedora DEI Team Q1–Q3 2025 Highlights appeared first on Fedora Community Blog.

Getting podman quadlets talking to each other

Posted by Major Hayden on 2025-09-25 00:00:00 UTC

Quadlets are a handy way to manage containers using systemd unit files. Containers running via quadlets have access to the external network by default, but they don’t automatically communicate with each other like they do in a docker-compose setup. Adding networking only requires a few extra steps.

Setting up some quadlets #

I often need a postgres server lying around on my local machine for quick tasks or testing something I’m working on. Lately, I’ve been focused on RAG databases, and that usually involves pgvector.

The pgvector extension adds vector data types and functions to PostgreSQL, which is great for storing embeddings from machine learning models. You can search via all of the usual SQL queries that you’re used to, but pgvector adds new capabilities for searching rows based on vector similarity.

Here’s the quadlet for pgvector in ~/.config/containers/systemd/pgvector.container:

[Unit]
Description=pgvector container
After=network-online.target

[Container]
Image=docker.io/pgvector/pgvector:pg17
Volume=pgvector:/var/lib/postgresql/data
Environment=POSTGRES_USER=postgres
Environment=POSTGRES_PASSWORD=secrete
Environment=POSTGRES_DB=postgres
PublishPort=5432:5432

[Service]
Restart=always

This gets a postgres server with pgvector up and running with a persistent volume. It’s listening on the default port 5432.

Sometimes I’m in a hurry and pgadmin4 is a quick way to poke around the database. It’s also a good example here since it needs to talk to the pgvector container. Here’s the quadlet for pgadmin4 in ~/.config/containers/systemd/pgadmin4.container:

[Unit]
Description=pgAdmin4 container
After=network-online.target

[Container]
Image=docker.io/dpage/pgadmin4:latest
Volume=pgadmin4:/var/lib/pgadmin
Environment=PGADMIN_DEFAULT_EMAIL=major@mhtx.net
Environment=PGADMIN_DEFAULT_PASSWORD=secrete
Environment=PGADMIN_CONFIG_SERVER_MODE=False
Environment=PGADMIN_CONFIG_MASTER_PASSWORD_REQUIRED=False
PublishPort=8080:80

[Service]
Restart=always

Awesome! Let’s reload the systemd configuration for my user account and start these containers:

systemctl --user daemon-reload
systemctl --user start pgvector
systemctl --user start pgadmin4

We can check the running containers:

> podman ps --format "table {{.ID}}\t{{.Names}}"
CONTAINER ID  NAMES
b099fdaa6b18  valkey
f8ab764c299c  systemd-pgadmin4
052c160fb45b  systemd-pgvector

Testing communication #

Let’s hop into the pgadmin4 container and see if we can connect to the pgvector database:

> podman exec -it systemd-pgadmin4 ping pgvector -c 4
ping: bad address 'pgvector'
> podman exec -it systemd-pgadmin4 ping systemd-pgvector -c 4
ping: bad address 'systemd-pgvector'

This isn’t great. There are two problems here:

  1. The containers aren’t on the same network
  2. I want to refer to the pgvector container as pgvector, not systemd-pgvector

Let’s fix that.

Fixing communication #

Open up the ~/.config/containers/systemd/pgvector.container file and make the two changes noted below with comments:

[Unit]
Description=pgvector container
After=network-online.target

[Container]
# Use a consistent name 👇
ContainerName=pgvector
Image=docker.io/pgvector/pgvector:pg17
Volume=pgvector:/var/lib/postgresql/data
Environment=POSTGRES_USER=postgres
Environment=POSTGRES_PASSWORD=secrete
Environment=POSTGRES_DB=postgres
# Add the container to a network 👇
Network=db-network
PublishPort=5432:5432

[Service]
Restart=always

Also open the ~/.config/containers/systemd/pgadmin4.container file and make the same network change:

[Unit]
Description=pgAdmin4 container
After=network-online.target

[Container]
Image=docker.io/dpage/pgadmin4:latest
Volume=pgadmin4:/var/lib/pgadmin
Environment=PGADMIN_DEFAULT_EMAIL=major@mhtx.net
Environment=PGADMIN_DEFAULT_PASSWORD=secrete
Environment=PGADMIN_CONFIG_SERVER_MODE=False
Environment=PGADMIN_CONFIG_MASTER_PASSWORD_REQUIRED=False
# Add the container to a network 👇
Network=db-network
PublishPort=8080:80

[Service]
Restart=always

Create the network:

podman network create db-network

Now, reload the systemd configuration and restart the containers:

systemctl --user daemon-reload
systemctl --user restart pgvector
systemctl --user restart pgadmin4

Testing communication again #

Now, let’s hop into the pgadmin4 container and see if we can connect to the pgvector database:

> podman exec -it systemd-pgadmin4 ping pgvector -c 4
PING pgvector (10.89.5.6): 56 data bytes
64 bytes from 10.89.5.6: seq=0 ttl=42 time=0.026 ms
64 bytes from 10.89.5.6: seq=1 ttl=42 time=0.036 ms
64 bytes from 10.89.5.6: seq=2 ttl=42 time=0.034 ms
64 bytes from 10.89.5.6: seq=3 ttl=42 time=0.088 ms

--- pgvector ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.026/0.046/0.088 ms

Perfect! 🎉 🎉 🎉

Extra credit #

If you want to deploy your system with automation and avoid the manual network creation, you can add one extra file to your ~/.config/containers/systemd/ directory. Save this as db-network.network:

[Network]
Label=app=db-network
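One thing to double-check if you go this route (this is my reading of the quadlet docs, so verify against your podman version): networks defined via quadlet get a systemd- prefixed name by default, which would not match the plain db-network name used in the container units above. The NetworkName= key pins the name explicitly:

```ini
[Network]
NetworkName=db-network
Label=app=db-network
```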

From Zero to Web Server: Building with Image mode for Fedora Linux & Caddy

Posted by Fedora Magazine on 2025-09-24 08:00:00 UTC

Image mode for Fedora Linux leverages bootable containers. This technology enables OCI containers to serve as a transport and delivery mechanism for operating system content. This article will guide you through using that technology to quickly create a web server with Caddy.

Introduction

Bootable containers leverage existing OCI container tools (like Podman and Docker) and transport protocols for operating system management. This streamlines the configuration and distribution of operating systems through Containerfiles and container registries. The Universal Blue (ublue) community has embraced this technology, offering diverse operating systems. They also provide a project template to simplify the creation of custom operating systems.

Using the image-template repository

We will begin by using the image-template GitHub repository to create our own repository. For this guide I have named it fedora-web-server.

If you don’t have a GitHub account, you can simply clone the repository to your local machine and rename it to reflect your project’s name.

$ git clone git@github.com:<username>/fedora-web-server.git
$ cd fedora-web-server

Using Fedora Bootc 42 as base image

Image-template simplifies custom OS creation by providing a pre-built structure and essential files. To begin building our web server, we need to choose a base image. While image-template defaults to the ublue Bazzite image, we’ll switch to quay.io/fedora/fedora-bootc:42. Make this change by editing the Containerfile (see immediately below) and replacing the default Base Image.

# Allow build scripts to be referenced without being copied into the final image
FROM scratch AS ctx
COPY build_files /

# Base Image
FROM quay.io/fedora/fedora-bootc:42

RUN --mount=type=bind,from=ctx,source=/,target=/ctx \
--mount=type=cache,dst=/var/cache \
--mount=type=cache,dst=/var/log \
--mount=type=tmpfs,dst=/tmp \
/ctx/build.sh

### LINTING
## Verify final image and contents are correct.
RUN bootc container lint

fedora-bootc is a minimal image designed for customized installations. For this project, I’ll install cloud-init to facilitate deployment and testing both in the cloud and locally. I’ll modify the build_files/build.sh script, which is executed from within the Containerfile, to incorporate cloud-init into our customized OS.

The build.sh file will appear as follows:

#!/bin/bash

set -ouex pipefail

### Install packages
dnf5 install -y --setopt=install_weak_deps=0 cloud-init

Since I want to keep my OS minimal, I am not installing weak dependencies.

Installing Caddy

Caddy is an open-source, modern web server known for its simplicity, security, and automatic configuration—it “just works.” For our Fedora Bootc server, we’ll run Caddy using its container image, docker.io/caddy. We’ll leverage quadlet for seamless integration with systemd, allowing our Caddy container to operate like any other systemd service. Create the following file:

$ cd build_files
$ touch caddy.container

and enter the following text into caddy.container:

[Unit]
Description=Caddy Web Server
After=network-online.target

[Container]
Image=docker.io/caddy:2-alpine
PublishPort=80:80
PublishPort=443:443
Memory=512m
Volume=/var/caddy-data/:/data:Z
Volume=/var/caddy-config/:/config:Z
Volume=/var/log/caddy/:/var/log/caddy:Z
Volume=/etc/caddy:/etc/caddy:Z
Volume=/var/www/:/var/www:Z

[Service]
Restart=always

[Install]
# Start by default on boot
WantedBy=multi-user.target

For those familiar with systemd services, the syntax and directives will be recognizable. The [Container] section declares the image to be used, the ports to be published, and the volumes to be shared between the host and the container.

We can now modify build.sh to copy that file into our custom OS, as shown here:

#!/bin/bash

set -ouex pipefail

### Install packages
dnf5 install -y --setopt=install_weak_deps=0 cloud-init

# Copy caddy.container to /etc/containers/systemd/caddy.container
cp /ctx/caddy.container /etc/containers/systemd/caddy.container

Configuring Caddy

The final step to create a working Caddy server is to add the configuration file. Let’s create a Caddyfile:

$ touch Caddyfile

and enter the text shown below:

# Caddy configuration with automatic Let's Encrypt certificates
# Replace 'your-domain.com' with your actual domain name

# For automatic HTTPS with Let's Encrypt, use your domain name instead of :80
# your-domain.com {
# root * /var/www
# file_server
# log {
# output file /var/log/caddy/access.log
# }
# }

# For local development (HTTP only)
:80 {
root * /var/www
file_server
log {
output file /var/log/caddy/access.log
}
}

We can now copy that file into our OS by editing build.sh:

#!/bin/bash

set -ouex pipefail

### Install packages
dnf5 install -y --setopt=install_weak_deps=0 cloud-init

# Copy caddy.container to /etc/containers/systemd/caddy.container
cp /ctx/caddy.container /etc/containers/systemd/caddy.container


# Create /etc/caddy directory and copy Caddyfile
mkdir -p /etc/caddy
cp /ctx/Caddyfile /etc/caddy/Caddyfile

To complete the setup, we’ll use systemd-tmpfiles to create Caddy’s necessary internal directories. This approach is essential because /var in bootable containers is mutable through overlayfs: its contents are only created at runtime and aren’t part of the container build process. systemd-tmpfiles provides a straightforward solution to this limitation. Modify your build.sh file as follows:

#!/bin/bash
set -ouex pipefail

### Install packages
dnf5 install -y --setopt=install_weak_deps=0 cloud-init

# Copy caddy.container to /etc/containers/systemd/caddy.container
cp /ctx/caddy.container /etc/containers/systemd/caddy.container

# Create /etc/caddy directory and copy Caddyfile
mkdir -p /etc/caddy
cp /ctx/Caddyfile /etc/caddy/Caddyfile

# Create tmpfiles.d configuration to set up /var directories at runtime
cat > /usr/lib/tmpfiles.d/caddy.conf << 'EOF'
# Create Caddy directories at runtime
d /var/log/caddy 0755 root root -
d /var/caddy-data 0755 root root -
d /var/caddy-config 0755 root root -
EOF

That’s it. We now have a Containerfile using fedora-bootc:42 as the base image, plus a build script that installs cloud-init, sets up Caddy via quadlet, copies the Caddy configuration, and creates Caddy’s internal directories. We can now build our custom operating system.

If you want your Caddy web server to serve a custom HTML page, you can copy the following files https://github.com/cverna/fedora-web-server/tree/main/build_files/web and edit the build.sh script as follows:

#!/bin/bash

set -ouex pipefail

### Install packages
dnf5 install -y --setopt=install_weak_deps=0 cloud-init 

# Copy caddy.container to /etc/containers/systemd/caddy.container
cp /ctx/caddy.container /etc/containers/systemd/caddy.container

# Create /etc/caddy directory and copy Caddyfile
mkdir -p /etc/caddy
cp /ctx/Caddyfile /etc/caddy/Caddyfile

# Copy web content to /usr/share for the base image
mkdir -p /usr/share/caddy/web
cp -r /ctx/web/* /usr/share/caddy/web/

# Create tmpfiles.d configuration to set up /var directories at runtime
cat > /usr/lib/tmpfiles.d/caddy.conf << 'EOF'
# Create Caddy directories at runtime
d /var/log/caddy 0755 root root -
d /var/caddy-data 0755 root root -
d /var/caddy-config 0755 root root -

# Copy web content from /usr/share to /var at runtime
d /var/www 0755 root root -
C /var/www/index.html 0644 root root - /usr/share/caddy/web/index.html
C /var/www/fedora-logo.png 0644 root root - /usr/share/caddy/web/fedora-logo.png
C /var/www/caddy-logo.svg 0644 root root - /usr/share/caddy/web/caddy-logo.svg
EOF

Building the bootable container

The image-template repository equips you with the necessary tools for local image construction. Let’s begin by verifying that all dependencies are in place.

$ sudo dnf install just git jq podman

Then we can run just to build the container. The following shows the just command and subsequent output:

$ just build fedora-web-server latest
[1/2] STEP 1/2: FROM scratch AS ctx
[1/2] STEP 2/2: COPY build_files /
--> Using cache 94ec17d1689b09a362814ab08530966be4aced972050fedcd582b65f174af3a3
--> 94ec17d1689b
[2/2] STEP 1/3: FROM quay.io/fedora/fedora-bootc:42
[2/2] STEP 2/3: RUN --mount=type=bind,from=ctx,source=/,target=/ctx     --mount=type=cache,dst=/var/cache     --mount=type=cache,dst=/var/log     --mount=type=tmpfs,dst=/tmp     /ctx/build.sh &&     ostree container commit
--> Using cache 8cac20d690bda884b91ae2a555c4239a71f162fbd4ff36a1e2ed5f35f5dfb05a
--> 8cac20d690bd
[2/2] STEP 3/3: RUN bootc container lint
--> Using cache 5cb978e3db5a2bcd437018a6e97e1029d694180002f5fa098aaf540952941dd4
[2/2] COMMIT fedora-web-server:latest
--> 5cb978e3db5a
Successfully tagged localhost/fedora-web-server:latest

Just is a useful tool for running project-specific commands, similar to Makefiles, and uses a ‘justfile’. As the image functions like any other container image, we can inspect it using Podman.

$ podman images
REPOSITORY                   TAG         IMAGE ID      CREATED      SIZE
localhost/fedora-web-server  latest      5cb978e3db5a  4 hours ago  1.95 GB
quay.io/fedora/fedora-bootc  42          586b146c456e  2 days ago   1.88 GB

Building a Disk Image

The disk image for our server is built using just as follows:

$ just build-vm localhost/fedora-web-server latest

This will use the bootc-image-builder project to build a disk image of our bootable container. Once the command has finished, we have a qcow2 image in the output directory:

$ ls output/qcow2                                                                                                                                                              
disk.qcow2

We can now use that qcow2 image to start a virtual machine and run our web server.

Running the Web Server

To run the server we can use any virtualization software. In this case I am using virt-install:

$ sudo dnf install virt-install libvirt
$ virt-install --cloud-init root-ssh-key=/path/to/ssh/public.key --connect qemu:///system --import --name fedora-web-server --memory 2048 --disk output/qcow2/disk.qcow2 --os-variant fedora41
Starting install...
Creating domain...
...
Fedora Linux 42 (Adams)
Kernel 6.16.5-200.fc42.x86_64 on x86_64 (ttyS0)

enp1s0: 192.168.100.199
fedora login:

Once the virtual machine is up and running, you can use ssh to log in as root, using the local IP address of the virtual machine.

$ ssh root@192.168.100.199
$ bootc status                                                                                                                                                    
● Booted image: localhost/fedora-web-server:latest                                                                                                                               
       Digest: sha256:a813a8da85f48d8e6609289dde87e1d45ff70a713d1a9ec3e4e667d01cb470f2 (amd64)                                                                                  
      Version: 42.20250911.0 (2025-09-12T07:36:05Z) 
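While logged in, you can also check that the Caddy quadlet came up; the caddy.service unit name is derived from the caddy.container file we created earlier:

```shell
# Quadlet generates caddy.service from caddy.container
systemctl status caddy.service

# The container itself shows up in podman
podman ps

# And Caddy should answer on port 80, per the Caddyfile
curl -sI http://localhost/
```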

Currently, the server’s update capability is limited because the images point to the localhost/fedora-web-server:latest container image. This can be resolved by building and pushing these container images to a container registry such as quay.io or ghcr.io. Detailed instructions for these steps are available in the README file of the ublue-os/image-template repository.
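As a sketch of what that workflow looks like once the image lives in a registry (the ghcr.io path below is just a placeholder for wherever you push yours):

```shell
# Push the locally built image to a registry (placeholder path)
podman push localhost/fedora-web-server:latest ghcr.io/youruser/fedora-web-server:latest

# On the running server: repoint bootc at the registry image...
sudo bootc switch ghcr.io/youruser/fedora-web-server:latest

# ...then pull and stage future updates as new images are pushed
sudo bootc upgrade
```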

To verify that the web server is running successfully, access that same IP address from your web browser and you should see the web page served by Caddy.

Summary

This guide demonstrates the power of image mode for Fedora Linux using bootc and Caddy to build a lightweight, custom web server. By leveraging container technologies for OS delivery and Caddy for simplified web serving, users can efficiently deploy and manage web applications, setting a strong foundation for future customization. Make sure to check the library of examples to get ideas on how to use bootc.

syslog-ng 4.10.0 released

Posted by Peter Czanik on 2025-09-23 13:32:10 UTC

Version 4.10.0 of syslog-ng is now available. Among other changes, it adds:

  • support for file size based logrotation
  • a filter that tests whether a value is blank
  • updated MongoDB driver support

For more details check the syslog-ng release notes at https://github.com/syslog-ng/syslog-ng/releases/tag/syslog-ng-4.10.0

This release fixes several bugs introduced in syslog-ng version 4.9.0, which is the syslog-ng version available in openSUSE Leap 16.0 and Fedora 43. It’s feature freeze (and thus package version freeze) for both distros, but do not worry: bug fixes are backported.

syslog-ng logo

A Cookiecutter template to quickstart NeuroML modelling projects

Posted by Ankur Sinha on 2025-09-23 09:44:04 UTC
Cookiecutter and NeuroML project logos

Creating new modelling projects, like other programming projects, involves a number of repeated steps. When using the NeuroML standard, one would use the libNeuroML API to:

  • create a new NeuroMLDocument (top level container class that includes all NeuroML components)
  • create a network
  • create populations
  • add cells
  • add connections
  • add stimuli
  • record various quantities (membrane potentials, for example)
  • create a LEMS simulation

One also needs to, at a minimum:

  • install the NeuroML python packages: pyNeuroML pulls in most of them
  • select the simulation engine, and install the pyNeuroML extra to pull that in

This is sufficient to create and simulate a model. Most people would tend to stop here, because this is really all they need to run their simulations and carry out their research. This is fine, but it misses out on recommended/best practices which make the modelling project:

  • easier to manage and collaborate on
  • ready for sharing/reuse from the beginning (instead of people "cleaning up" their code in a panic when they need to share it as part of a publication)

So, for NeuroML projects, I created a Cookiecutter template that generates all of this boilerplate code, and implements a number of general and NeuroML related best practices:

  • adds a license
  • creates a clean directory structure that separates code from data, model code from analysis code, and model/simulation parameters from the model generation logic
  • creates a template Python script to create and simulate the model
  • adds Typer based command line support in the script
  • sets up the project as a Git repository
  • adds requirements.txt files, linters, pre-commit hooks, and so on---things that we commonly use in software development
  • adds continuous validation of the model using the OSB Model Validation framework as a GitHub Action (model validation is another strength of NeuroML)
  • enables the use of git-annex to manage data in a separate repository

Here is what the directory structure looks like. The {{ cookiecutter.__project_slug }} and similar bits get renamed by Cookiecutter to create the required files/folders:

$ tree -a \{\{\ cookiecutter.__project_slug\ \}\}/
{{ cookiecutter.__project_slug }}/
├── code
│   ├── analysis
│   │   └── Readme.md
│   ├── .flake8
│   ├── model
│   │   ├── cells
│   │   │   └── .test.validate.omt
│   │   ├── {{ cookiecutter.__project_slug_nospace }}.py
│   │   ├── inputs
│   │   │   └── .test.validate.omt
│   │   ├── parameters
│   │   │   ├── general.json
│   │   │   └── model.json
│   │   ├── Readme.md
│   │   └── synapses
│   │       └── .test.validate.omt
│   ├── .pre-commit-config.yaml
│   ├── requirements-dev.txt
│   └── requirements.txt
├── data
│   └── Readme.md
├── .github
│   └── workflows
│       └── omv-ci.yml
├── LICENSE
└── Readme.md

Here is a video that illustrates creation of an example project using the template. One can install Cookiecutter in a virtual environment using pip or uv from PyPI, and run this command to get the template and interactively create a new project:

$ cookiecutter gh:sanjayankur31/neuroml-model-template

I am using the template myself, so it has already seen real use, and there is CI in the repository to make sure it functions correctly with the default setup. I expect it will evolve further as others use it and provide more ideas/feedback on additional features that may be useful to include.

So, please, give it a go, and let me know what you think.

Monitor system and GPU performance with Performance Co-Pilot

Posted by Major Hayden on 2025-09-23 00:00:00 UTC

I’ve used so many performance monitoring tools and systems over the years. When you need to know information right now, tools like btop and glances are great for quick overviews. Historical data is fairly easy to pick through with sysstat.

However, when you want a comprehensive view of system performance over time, especially with GPU metrics for machine learning workloads, Performance Co-Pilot (PCP) is an excellent choice. It has some handy integrations with Cockpit for web-based monitoring, but I prefer using the command line tools directly.

This post explains how to set up PCP on Fedora and enable some very basic GPU monitoring for both NVIDIA and AMD GPUs.

Installing Performance Co-Pilot #

Install the core packages and command line tools:

sudo dnf install pcp pcp-system-tools

Enable and start the PCP services:

sudo systemctl enable --now pmcd pmlogger
sudo systemctl status pmcd

These two services work together like a team:

  • pmcd (Performance Metrics Collection Daemon) gathers real-time metrics from various sources on your system when you request them.
  • pmlogger records these metrics to log files for historical analysis.

You can verify that the services are working as expected:

# Check available metrics
pminfo | head -20

# View current CPU utilization
pmval kernel.all.cpu.user

# Show memory statistics
pmstat -s 5

Adding GPU metrics collection #

I do a lot of LLM work locally and I’d like to keep track of my GPU usage over time. Fortunately, PCP supports popular GPUs through something called a PMDA (Performance Metrics Domain Agent). These are packaged in Fedora, but they have an interesting installation process.

NVIDIA GPUs #

Unverified instructions: I only have an AMD GPU, but I pulled this NVIDIA information from various places on the internet. Please let me know if you find any issues and I’ll update the post!

For NVIDIA GPUs, ensure you have the NVIDIA drivers and nvidia-ml library:

# Check if nvidia-smi works
nvidia-smi

# Install the NVIDIA management library if needed
sudo dnf install nvidia-driver-cuda-libs

Now install the NVIDIA PMDA:

cd /var/lib/pcp/pmdas/nvidia
sudo ./Install

The installer will prompt you for configuration options. Accept the defaults unless you have specific requirements.

Thanks to Will Cohen for helping me get these NVIDIA steps corrected! 👏

After installation, verify GPU metrics are available:

# List all NVIDIA metrics
pminfo nvidia

# Check GPU utilization
pmval nvidia.gpuactive

# Monitor GPU memory usage
pmval nvidia.memused

AMD GPUs #

For AMD GPUs, PCP provides the amdgpu PMDA that works with the ROCm stack:

# Ensure rocm-smi is installed and working
rocm-smi

# Install the AMD GPU PMDA package
sudo dnf install pcp-pmda-amdgpu

# Install the PMDA
cd /var/lib/pcp/pmdas/amdgpu
sudo ./Install

After installation, verify AMD GPU metrics:

# List all AMD GPU metrics
pminfo amdgpu

# Check GPU utilization
pmval amdgpu.gpu.load

# Monitor GPU memory usage
pmval amdgpu.memory.used

Querying performance data #

There are lots of handy tools for querying PCP data depending on whether you need information about something happening now or want to analyze historical trends.

Real-time monitoring with pmrep #

The pmrep tool provides formatted output perfect for dashboards or scripts. It’s great for situations where you need to see what’s happening right now. It’s much like iostat or vmstat from the sysstat package, but you get a lot more flexibility.

# System overview with 1-second updates
pmrep --space-scale=MB -t 1 kernel.all.load kernel.all.cpu.user mem.util.used

# GPU metrics for LLM monitoring (NVIDIA)
pmrep --space-scale=MB -t1 nvidia.gpuactive nvidia.memused nvidia.temperature

# GPU metrics for LLM monitoring (AMD)
pmrep --space-scale=MB -t 1 amdgpu.gpu.load amdgpu.memory.used amdgpu.gpu.temperature

Historical analysis with pmlogsummary #

If you’re used to running sar commands from the sysstat package, you’ll find pmlogsummary very familiar. Again, you can do a lot more with pmlogsummary than with sar, but the basic concepts are similar.

# Summarize yesterday's GPU utilization (NVIDIA)
pmlogsummary -S @yesterday -T @today /var/log/pcp/pmlogger/$(hostname)/$(date -d yesterday +%Y%m%d) nvidia.gpuactive

# Summarize yesterday's GPU utilization (AMD)
pmlogsummary -S @yesterday -T @today /var/log/pcp/pmlogger/$(hostname)/$(date -d yesterday +%Y%m%d) amdgpu.gpu.load

# Find peak memory usage over the last hour
pmlogsummary -S -1hour /var/log/pcp/pmlogger/$(hostname)/$(date +%Y%m%d) mem.util.used
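The archive paths in these commands follow pmlogger's default layout: one directory per host and one archive set per day, named YYYYMMDD. A small helper to build yesterday's path (assuming that default layout and GNU date) could look like:

```shell
# Build the path of yesterday's pmlogger archive for this host,
# matching the default layout used in the commands above.
host=$(hostname)
day=$(date -d yesterday +%Y%m%d)   # GNU date syntax
archive="/var/log/pcp/pmlogger/${host}/${day}"
echo "$archive"
```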

Troubleshooting tips #

If GPU metrics aren’t showing up:

# Check if the PMDA is properly installed
pminfo -f pmcd.agent | grep -E "amdgpu|nvidia"

# Restart PMCD to reload PMDAs
sudo systemctl restart pmcd

# Check PMDA logs for errors
sudo journalctl -u pmcd -n 50

# Verify GPU drivers are working
rocm-smi # for AMD
nvidia-smi # for NVIDIA

Further reading #

Hello Fedora Planet

Posted by Héctor Hugo Louzao Pozueco on 2025-09-22 11:22:00 UTC

This is my first post in Fedora Planet!

Booting Vagrant boxes with UEFI on Fedora: Permission denied

Posted by Evgeni Golov on 2025-09-22 10:37:23 UTC

If you're still using Vagrant (I am) and try to boot a box that uses UEFI (like boxen/debian-13), a simple vagrant init boxen/debian-13 and vagrant up will entertain you with a nice traceback:

% vagrant up
Bringing machine 'default' up with 'libvirt' provider...
==> default: Checking if box 'boxen/debian-13' version '2025.08.20.12' is up to date...
==> default: Creating image (snapshot of base box volume).
==> default: Creating domain with the following settings...
==> default:  -- Name:              tmp.JV8X48n30U_default
==> default:  -- Description:       Source: /tmp/tmp.JV8X48n30U/Vagrantfile
==> default:  -- Domain type:       kvm
==> default:  -- Cpus:              1
==> default:  -- Feature:           acpi
==> default:  -- Feature:           apic
==> default:  -- Feature:           pae
==> default:  -- Clock offset:      utc
==> default:  -- Memory:            2048M
==> default:  -- Loader:            /home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12/libvirt/OVMF_CODE.fd
==> default:  -- Nvram:             /home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12/libvirt/efivars.fd
==> default:  -- Base box:          boxen/debian-13
==> default:  -- Storage pool:      default
==> default:  -- Image(vda):        /home/evgeni/.local/share/libvirt/images/tmp.JV8X48n30U_default.img, virtio, 20G
==> default:  -- Disk driver opts:  cache='default'
==> default:  -- Graphics Type:     vnc
==> default:  -- Video Type:        cirrus
==> default:  -- Video VRAM:        16384
==> default:  -- Video 3D accel:    false
==> default:  -- Keymap:            en-us
==> default:  -- TPM Backend:       passthrough
==> default:  -- INPUT:             type=mouse, bus=ps2
==> default:  -- CHANNEL:             type=unix, mode=
==> default:  -- CHANNEL:             target_type=virtio, target_name=org.qemu.guest_agent.0
==> default: Creating shared folders metadata...
==> default: Starting domain.
==> default: Removing domain...
==> default: Deleting the machine folder
/usr/share/gems/gems/fog-libvirt-0.13.1/lib/fog/libvirt/requests/compute/vm_action.rb:7:in 'Libvirt::Domain#create': Call to virDomainCreate failed: internal error: process exited while connecting to monitor: 2025-09-22T10:07:55.081081Z qemu-system-x86_64: -blockdev {"driver":"file","filename":"/home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12/libvirt/OVMF_CODE.fd","node-name":"libvirt-pflash0-storage","auto-read-only":true,"discard":"unmap"}: Could not open '/home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12/libvirt/OVMF_CODE.fd': Permission denied (Libvirt::Error)
    from /usr/share/gems/gems/fog-libvirt-0.13.1/lib/fog/libvirt/requests/compute/vm_action.rb:7:in 'Fog::Libvirt::Compute::Shared#vm_action'
    from /usr/share/gems/gems/fog-libvirt-0.13.1/lib/fog/libvirt/models/compute/server.rb:81:in 'Fog::Libvirt::Compute::Server#start'
    from /usr/share/vagrant/gems/gems/vagrant-libvirt-0.11.2/lib/vagrant-libvirt/action/start_domain.rb:546:in 'VagrantPlugins::ProviderLibvirt::Action::StartDomain#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-libvirt-0.11.2/lib/vagrant-libvirt/action/set_boot_order.rb:22:in 'VagrantPlugins::ProviderLibvirt::Action::SetBootOrder#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-libvirt-0.11.2/lib/vagrant-libvirt/action/share_folders.rb:22:in 'VagrantPlugins::ProviderLibvirt::Action::ShareFolders#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-libvirt-0.11.2/lib/vagrant-libvirt/action/prepare_nfs_settings.rb:21:in 'VagrantPlugins::ProviderLibvirt::Action::PrepareNFSSettings#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/builtin/synced_folders.rb:87:in 'Vagrant::Action::Builtin::SyncedFolders#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/builtin/delayed.rb:19:in 'Vagrant::Action::Builtin::Delayed#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/builtin/synced_folder_cleanup.rb:28:in 'Vagrant::Action::Builtin::SyncedFolderCleanup#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/plugins/synced_folders/nfs/action_cleanup.rb:25:in 'VagrantPlugins::SyncedFolderNFS::ActionCleanup#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-libvirt-0.11.2/lib/vagrant-libvirt/action/prepare_nfs_valid_ids.rb:14:in 'VagrantPlugins::ProviderLibvirt::Action::PrepareNFSValidIds#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:127:in 'block in Vagrant::Action::Warden#finalize_action'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/builder.rb:180:in 'Vagrant::Action::Builder#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/runner.rb:101:in 'block in Vagrant::Action::Runner#run'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/util/busy.rb:19:in 'Vagrant::Util::Busy.busy'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/runner.rb:101:in 'Vagrant::Action::Runner#run'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/builtin/call.rb:53:in 'Vagrant::Action::Builtin::Call#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:127:in 'block in Vagrant::Action::Warden#finalize_action'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/builder.rb:180:in 'Vagrant::Action::Builder#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/runner.rb:101:in 'block in Vagrant::Action::Runner#run'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/util/busy.rb:19:in 'Vagrant::Util::Busy.busy'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/runner.rb:101:in 'Vagrant::Action::Runner#run'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/builtin/call.rb:53:in 'Vagrant::Action::Builtin::Call#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-libvirt-0.11.2/lib/vagrant-libvirt/action/create_network_interfaces.rb:197:in 'VagrantPlugins::ProviderLibvirt::Action::CreateNetworkInterfaces#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-libvirt-0.11.2/lib/vagrant-libvirt/action/create_networks.rb:40:in 'VagrantPlugins::ProviderLibvirt::Action::CreateNetworks#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-libvirt-0.11.2/lib/vagrant-libvirt/action/create_domain.rb:452:in 'VagrantPlugins::ProviderLibvirt::Action::CreateDomain#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-libvirt-0.11.2/lib/vagrant-libvirt/action/resolve_disk_settings.rb:143:in 'VagrantPlugins::ProviderLibvirt::Action::ResolveDiskSettings#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-libvirt-0.11.2/lib/vagrant-libvirt/action/create_domain_volume.rb:97:in 'VagrantPlugins::ProviderLibvirt::Action::CreateDomainVolume#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-libvirt-0.11.2/lib/vagrant-libvirt/action/handle_box_image.rb:127:in 'VagrantPlugins::ProviderLibvirt::Action::HandleBoxImage#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/builtin/handle_box.rb:56:in 'Vagrant::Action::Builtin::HandleBox#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-libvirt-0.11.2/lib/vagrant-libvirt/action/handle_storage_pool.rb:63:in 'VagrantPlugins::ProviderLibvirt::Action::HandleStoragePool#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-libvirt-0.11.2/lib/vagrant-libvirt/action/set_name_of_domain.rb:34:in 'VagrantPlugins::ProviderLibvirt::Action::SetNameOfDomain#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/builtin/provision.rb:80:in 'Vagrant::Action::Builtin::Provision#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-libvirt-0.11.2/lib/vagrant-libvirt/action/cleanup_on_failure.rb:21:in 'VagrantPlugins::ProviderLibvirt::Action::CleanupOnFailure#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:127:in 'block in Vagrant::Action::Warden#finalize_action'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/builder.rb:180:in 'Vagrant::Action::Builder#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/runner.rb:101:in 'block in Vagrant::Action::Runner#run'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/util/busy.rb:19:in 'Vagrant::Util::Busy.busy'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/runner.rb:101:in 'Vagrant::Action::Runner#run'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/builtin/call.rb:53:in 'Vagrant::Action::Builtin::Call#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/builtin/box_check_outdated.rb:93:in 'Vagrant::Action::Builtin::BoxCheckOutdated#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/builtin/config_validate.rb:25:in 'Vagrant::Action::Builtin::ConfigValidate#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/builder.rb:180:in 'Vagrant::Action::Builder#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/runner.rb:101:in 'block in Vagrant::Action::Runner#run'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/util/busy.rb:19:in 'Vagrant::Util::Busy.busy'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/runner.rb:101:in 'Vagrant::Action::Runner#run'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/machine.rb:248:in 'Vagrant::Machine#action_raw'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/machine.rb:217:in 'block in Vagrant::Machine#action'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/environment.rb:631:in 'Vagrant::Environment#lock'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/machine.rb:203:in 'Method#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/machine.rb:203:in 'Vagrant::Machine#action'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/batch_action.rb:86:in 'block (2 levels) in Vagrant::BatchAction#run'

The important part here is:

Call to virDomainCreate failed: internal error: process exited while connecting to monitor:
2025-09-22T10:07:55.081081Z qemu-system-x86_64: -blockdev {"driver":"file","filename":"/home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12/libvirt/OVMF_CODE.fd","node-name":"libvirt-pflash0-storage","auto-read-only":true,"discard":"unmap"}:
Could not open '/home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12/libvirt/OVMF_CODE.fd': Permission denied (Libvirt::Error)

Of course we checked that the file permissions on this file are correct (I'll save you the ls output), so what's next? Yes, of course, SELinux!

# ausearch -m AVC
time->Mon Sep 22 12:07:55 2025
type=AVC msg=audit(1758535675.080:1613): avc:  denied  { read } for  pid=257204 comm="qemu-system-x86" name="OVMF_CODE.fd" dev="dm-2" ino=1883946 scontext=unconfined_u:unconfined_r:svirt_t:s0:c352,c717 tcontext=unconfined_u:object_r:user_home_t:s0 tclass=file permissive=0

A process running in the svirt_t domain tries to read a file labeled user_home_t and is denied by the kernel. So far, SELinux is both working as designed and preventing us from doing our work, nice.
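For the record, the two contexts can be pulled out of an AVC line with plain shell; the line below is the one ausearch printed above:

```shell
# Extract the source (process) and target (file) contexts from the AVC line.
avc='type=AVC msg=audit(1758535675.080:1613): avc:  denied  { read } for  pid=257204 comm="qemu-system-x86" name="OVMF_CODE.fd" dev="dm-2" ino=1883946 scontext=unconfined_u:unconfined_r:svirt_t:s0:c352,c717 tcontext=unconfined_u:object_r:user_home_t:s0 tclass=file permissive=0'
scontext=$(printf '%s\n' "$avc" | grep -o 'scontext=[^ ]*' | cut -d= -f2-)
tcontext=$(printf '%s\n' "$avc" | grep -o 'tcontext=[^ ]*' | cut -d= -f2-)
echo "source: $scontext"
echo "target: $tcontext"
```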

For "normal" (non-UEFI) boxes, Vagrant uploads the image to libvirt, which stores it in ~/.local/share/libvirt/images/ and boots fine from there. For UEFI boxen, one also needs loader and nvram files, which Vagrant keeps in ~/.vagrant.d/boxes/<box_name> and that's what explodes in our face here.

As ~/.local/share/libvirt/images/ works well and is labeled svirt_home_t, let's see which other folders use that label:

# semanage fcontext -l |grep svirt_home_t
/home/[^/]+/\.cache/libvirt/qemu(/.*)?             all files          unconfined_u:object_r:svirt_home_t:s0
/home/[^/]+/\.config/libvirt/qemu(/.*)?            all files          unconfined_u:object_r:svirt_home_t:s0
/home/[^/]+/\.libvirt/qemu(/.*)?                   all files          unconfined_u:object_r:svirt_home_t:s0
/home/[^/]+/\.local/share/gnome-boxes/images(/.*)? all files          unconfined_u:object_r:svirt_home_t:s0
/home/[^/]+/\.local/share/libvirt/boot(/.*)?       all files          unconfined_u:object_r:svirt_home_t:s0
/home/[^/]+/\.local/share/libvirt/images(/.*)?     all files          unconfined_u:object_r:svirt_home_t:s0

Okay, that all makes sense, and it's just missing the Vagrant-specific folders!

# semanage fcontext -a -t svirt_home_t '/home/[^/]+/\.vagrant\.d/boxes(/.*)?'
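If you want to sanity-check a file-context regex like this before handing it to semanage, grep -E is a rough approximation of the regex flavor SELinux uses (here with both dots escaped, and the path taken from the error message above):

```shell
# Rough sanity check: does the file-context regex match the denied path?
re='^/home/[^/]+/\.vagrant\.d/boxes(/.*)?$'
path='/home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12/libvirt/OVMF_CODE.fd'
printf '%s\n' "$path" | grep -Eq "$re" && echo match
```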

Now relabel the Vagrant boxes:

% restorecon -rv ~/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13
Relabeled /home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13 from unconfined_u:object_r:user_home_t:s0 to unconfined_u:object_r:svirt_home_t:s0
Relabeled /home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/metadata_url from unconfined_u:object_r:user_home_t:s0 to unconfined_u:object_r:svirt_home_t:s0
Relabeled /home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12 from unconfined_u:object_r:user_home_t:s0 to unconfined_u:object_r:svirt_home_t:s0
Relabeled /home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12/libvirt from unconfined_u:object_r:user_home_t:s0 to unconfined_u:object_r:svirt_home_t:s0
Relabeled /home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12/libvirt/box_0.img from unconfined_u:object_r:user_home_t:s0 to unconfined_u:object_r:svirt_home_t:s0
Relabeled /home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12/libvirt/metadata.json from unconfined_u:object_r:user_home_t:s0 to unconfined_u:object_r:svirt_home_t:s0
Relabeled /home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12/libvirt/Vagrantfile from unconfined_u:object_r:user_home_t:s0 to unconfined_u:object_r:svirt_home_t:s0
Relabeled /home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12/libvirt/OVMF_CODE.fd from unconfined_u:object_r:user_home_t:s0 to unconfined_u:object_r:svirt_home_t:s0
Relabeled /home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12/libvirt/OVMF_VARS.fd from unconfined_u:object_r:user_home_t:s0 to unconfined_u:object_r:svirt_home_t:s0
Relabeled /home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12/libvirt/box_update_check from unconfined_u:object_r:user_home_t:s0 to unconfined_u:object_r:svirt_home_t:s0
Relabeled /home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12/libvirt/efivars.fd from unconfined_u:object_r:user_home_t:s0 to unconfined_u:object_r:svirt_home_t:s0

And it works!

% vagrant up
Bringing machine 'default' up with 'libvirt' provider...
==> default: Checking if box 'boxen/debian-13' version '2025.08.20.12' is up to date...
==> default: Creating image (snapshot of base box volume).
==> default: Creating domain with the following settings...
==> default:  -- Name:              tmp.JV8X48n30U_default
==> default:  -- Description:       Source: /tmp/tmp.JV8X48n30U/Vagrantfile
==> default:  -- Domain type:       kvm
==> default:  -- Cpus:              1
==> default:  -- Feature:           acpi
==> default:  -- Feature:           apic
==> default:  -- Feature:           pae
==> default:  -- Clock offset:      utc
==> default:  -- Memory:            2048M
==> default:  -- Loader:            /home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12/libvirt/OVMF_CODE.fd
==> default:  -- Nvram:             /home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12/libvirt/efivars.fd
==> default:  -- Base box:          boxen/debian-13
==> default:  -- Storage pool:      default
==> default:  -- Image(vda):        /home/evgeni/.local/share/libvirt/images/tmp.JV8X48n30U_default.img, virtio, 20G
==> default:  -- Disk driver opts:  cache='default'
==> default:  -- Graphics Type:     vnc
==> default:  -- Video Type:        cirrus
==> default:  -- Video VRAM:        16384
==> default:  -- Video 3D accel:    false
==> default:  -- Keymap:            en-us
==> default:  -- TPM Backend:       passthrough
==> default:  -- INPUT:             type=mouse, bus=ps2
==> default:  -- CHANNEL:             type=unix, mode=
==> default:  -- CHANNEL:             target_type=virtio, target_name=org.qemu.guest_agent.0
==> default: Creating shared folders metadata...
==> default: Starting domain.
==> default: Domain launching with graphics connection settings...
==> default:  -- Graphics Port:      5900
==> default:  -- Graphics IP:        127.0.0.1
==> default:  -- Graphics Password:  Not defined
==> default:  -- Graphics Websocket: 5700
==> default: Waiting for domain to get an IP address...
==> default: Waiting for machine to boot. This may take a few minutes...
    default: SSH address: 192.168.124.157:22
    default: SSH username: vagrant
    default: SSH auth method: private key
    default:
    default: Vagrant insecure key detected. Vagrant will automatically replace
    default: this with a newly generated keypair for better security.
    default:
    default: Inserting generated public key within guest...
    default: Removing insecure key from the guest if it's present...
    default: Key inserted! Disconnecting and reconnecting using new SSH key...
==> default: Machine booted and ready!

Next Open NeuroFedora meeting: 22 September 2025 1300 UTC

Posted by Ankur Sinha on 2025-09-22 09:59:02 UTC
Photo by William White on Unsplash.


Please join us at the next regular Open NeuroFedora team meeting on Monday 22 September 2025 at 1300 UTC. The meeting is a public meeting, and open for everyone to attend. You can join us in the Fedora meeting channel on chat.fedoraproject.org (our Matrix instance). Note that you can also access this channel from other Matrix home servers, so you do not have to create a Fedora account just to attend the meeting.

You can use this link to convert the meeting time to your local time, or you can run this command in a terminal:

$ date -d 'Monday, September 22, 2025 13:00 UTC'
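If you want the time in a specific zone rather than your default one, you can prefix the command with a TZ setting (the zone name here is just an example):

```shell
# Print the meeting time in a chosen timezone (example: US Eastern)
TZ='America/New_York' date -d 'Monday, September 22, 2025 13:00 UTC'
```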

The meeting will be chaired by @ankursinha. The agenda for the meeting is:

We hope to see you there!

How to rebase to Fedora Silverblue 43 Beta

Posted by Fedora Magazine on 2025-09-22 08:00:00 UTC

Silverblue is an operating system for your desktop built on Fedora Linux. It’s excellent for daily use, development, and container-based workflows. It offers numerous advantages such as being able to roll back in case of any problems. This article provides the steps to rebase to the newly released Fedora Linux 43 Beta, and how to revert if anything unforeseen happens.

NOTE: Before attempting an upgrade to the Fedora Linux 43 Beta, apply any pending upgrades.

Updating using the terminal

Because Fedora Linux 43 Beta is not available in GNOME Software, the whole process must be done through a terminal.

First, check that the 43 branch is available, which it should be by now:

$ ostree remote refs fedora

You should see the following line in the output:

fedora:fedora/43/x86_64/silverblue

If you want to pin the current deployment (this deployment will stay as an option in GRUB until you remove it), you can do it by running:

# 0 is the entry position in the output of rpm-ostree status
$ sudo ostree admin pin 0

To remove the pinned deployment, use the following command ("2" corresponds to the entry position in the output of rpm-ostree status):

$ sudo ostree admin pin --unpin 2

Next, rebase your system to the Fedora 43 branch.

$ rpm-ostree rebase fedora:fedora/43/x86_64/silverblue

Finally, restart your computer and boot into Fedora Silverblue 43 Beta.

How to revert

If anything bad happens, for instance if you can't boot into Fedora Silverblue 43 Beta at all, it's easy to go back. Pick the previous entry in the GRUB boot menu (press ESC during the boot sequence to see the GRUB menu in newer versions of Fedora Silverblue), and your system will start in its previous state. To make this change permanent, use the following command:

$ rpm-ostree rollback

That’s it. Now you know how to rebase to Fedora Silverblue 43 Beta and fall back. So why not do it today?

Known issues

FAQ

Because similar questions appear in the comments of every article about rebasing to a newer version of Silverblue, I will try to answer them in this section.

Question: Can I skip versions during rebase of Fedora Linux? For example from Fedora Silverblue 41 to Fedora Silverblue 43?

Answer: Although it is sometimes possible to skip versions during a rebase, it is not recommended. You should always upgrade one version at a time (41->42, for example) to avoid unnecessary errors.
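Purely as an illustration of "one version at a time", the intermediate rebase commands from 41 to 43 would look like this, rebooting into each new version before continuing (this loop only prints the commands, it does not run them):

```shell
# Illustrative only: print the one-step-at-a-time rebase commands from 41 to 43
for v in 42 43; do
  echo "rpm-ostree rebase fedora:fedora/${v}/x86_64/silverblue"
done
```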

Question: I have rpm-fusion layered and I got errors during rebase. How should I do the rebase?

Answer: If you have rpm-fusion layered on your Silverblue installation, run the following before the rebase:

$ rpm-ostree update --uninstall rpmfusion-free-release --uninstall rpmfusion-nonfree-release --install rpmfusion-free-release --install rpmfusion-nonfree-release

After doing this you can follow the guide in this article.

Question: Can this guide be used for other ostree editions (Fedora Atomic Desktops) as well, such as Kinoite, Sericea (Sway Atomic), Onyx (Budgie Atomic), …?

Answer: Yes, you can follow the Updating using the terminal part of this guide for every ostree edition of Fedora. Just use the corresponding branch. For example, for Kinoite use fedora:fedora/43/x86_64/kinoite