
Fedora People

Day 4: Microsoft Hackathon — Threading Failure, Recursion Repair, and Model Limits

Posted by Brian (bex) Exelbierd on 2025-09-18 22:50:00 UTC

Today’s work (really posted just after midnight) started smoothly: I pushed a candidate set of mailing‑list threads all the way through summarization to validate the next phase of the pipeline. Then it cratered - duplicate subject lines exposed that my threading logic was flawed.

Disclaimer: I work at Microsoft on upstream Linux in Azure. These are my personal notes and opinions.

Test run before failure

I wasn’t choosing “important” threads yet - just seeing what wholesale summarization output looked like so I could later score, filter, and review. That smoke test did its job by flushing out a structural bug instead of letting it lurk until the importance pass.

The threading bug

I’d tried to get a little “clever” with heuristics while lacking full headers (older emails, inconsistent client behaviors). The result: unrelated conversations collapsed together because headers like In-Reply-To and References were not always present or complete. I also rely on some supplemental external data to stitch things together when the headers fail. Deterministic structure first, cleverness later - apparently I needed to relearn yesterday’s lesson.

Recursion beats cleverness

I rewrote the logic as a plain recursive rebuild of parent/child relationships using what reliable metadata I do have. This was basically a CS 101 traversal plus extra credit for malformed inputs and odd nesting. The LLM handled the boilerplate once I spelled out the core structural invariant: each message links to at most one parent (by Message-ID via In-Reply-To / References), roots have none, and no cycles. I owned that mental model and the test cases. The model filled in syntax. That division worked. Watching multiple models miss the full shape of the bug was useful: they offered partial patches, none produced the end‑to‑end correction without guided constraints.
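
For the curious, the rebuild is roughly this shape (a minimal sketch, not the actual hackathon code; the Message fields are assumptions about how you might carry the headers around):

from dataclasses import dataclass, field

@dataclass
class Message:
    message_id: str
    in_reply_to: str | None = None            # often missing on older mail
    references: list[str] = field(default_factory=list)
    children: list["Message"] = field(default_factory=list)

def would_cycle(child_id: str, parent_id: str, parent_of: dict[str, str]) -> bool:
    # Walk up from the proposed parent; if we reach the child, linking would cycle.
    current = parent_id
    while current is not None:
        if current == child_id:
            return True
        current = parent_of.get(current)
    return False

def build_threads(messages: list[Message]) -> list[Message]:
    # Invariant: each message links to at most one parent, roots have none, no cycles.
    by_id = {m.message_id: m for m in messages}
    parent_of: dict[str, str] = {}
    for msg in messages:
        # Candidate parents: In-Reply-To first, then References newest-first.
        candidates = [msg.in_reply_to] if msg.in_reply_to else []
        candidates += reversed(msg.references)
        for cand in candidates:
            if (cand in by_id and cand != msg.message_id
                    and not would_cycle(msg.message_id, cand, parent_of)):
                parent_of[msg.message_id] = cand
                by_id[cand].children.append(msg)
                break
    # Anything without a parent is a thread root (including orphans).
    return [m for m in messages if m.message_id not in parent_of]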

On expectations (and “LLM denying friends”)

To my LLM-denying friends (you’re not skeptics and you know it!): nobody promised you could type “make Jira but with cat memes” and go to lunch. Marketing slides might imply that. Reality is still stochastic autocomplete that needs oversight and bite-sized work. It’s good at cranking out a recursive walk or refactoring a loop. It is terrible at handing you judgment, taste, or guardrails. I got cute, collapsed unrelated threads, paid the tax, rewrote it. That’s the actual human‑in‑the‑loop job: think, state the few non‑negotiable structural rules, let the model grind, verify, repeat.

What you do get: fast pattern completion over training data. “Traverse this tree and stitch orphan replies” lives there. “Subtly infer intent from half-missing mail headers and not overmerge” does not unless you spell it out. So yes, some of the debugging prompts contained profanity. That didn’t summon intelligence, it just vented mine.

What I learned (again)

  • Deterministic scaffolding first. Probabilistic layers second.
  • “Clever” heuristics without gold tests are debt.
  • LLMs are fast at churning variants. They still need a crisp structural rule set to follow or test against.
  • A blunt recursive pass is often the right baseline even if it feels unsophisticated.

Back to the pipeline

With threading fixed I’m back to grading summarization output and preparing the next human‑in‑the‑loop pass for importance ranking. Tomorrow’s goal: start scoring threads with the heuristics + enrichment stack from earlier days and see if explanations remain concise enough not to drown me.

Posting this late counts as progress anyway. Better a truthful “I broke it and fixed it” than a shiny but misleading narrative.

More tomorrow.

Fedora Signing Update, September edition

Posted by Jeremy Cline on 2025-09-18 15:34:00 UTC

It’s been a while since my last update about content signing in Fedora, so I figured I should write one before the end of this month. While I’ve not been able to devote all my time to working on Siguldry, I have made enough progress that I think it’s worth covering, and I also had the opportunity to chat with Miloslav Trmač, the original author of Sigul, about its design.

History Lesson for Sigul

In my post about various protocol tweaks I covered most of the changes I wanted to make to Sigul’s protocol and my reasons for doing so. A major theme of the changes was pushing more of the content-related “smarts” to the client. For example, in Sigul, the server receives a container file and prepares a JSON document based on it to sign with GPG. In my proposed approach, the client should handle that and simply ask the server to sign the given document.

Miloslav was able to provide me with some history on Sigul and why it pushes the responsibility of dealing with the content to the bridge and server, and I think it’s worth recording here. In the original design, signing didn’t happen automatically: a Fedora contributor would need to request a signature using the Sigul client. In this scenario, there are a lot more Sigul clients and they’re running on machines that Fedora infrastructure doesn’t manage. Thus, the bridge and server needed to make every effort to ensure clients didn’t request signatures for things they shouldn’t - this is why the bridge is able to integrate with the Fedora Account System, and why it also contacts Koji when RPM signatures are requested: it has every reason to expect the client to be malicious.

These days, there are only a few Sigul clients. Fedora’s infrastructure admins have management accounts, and the robosignatory service, which runs in Fedora’s infrastructure, also acts as a client. While we should be cautious in our design, the deployment is significantly different from Sigul’s original expectations, so we can make different trade-offs.

Siguldry Progress

Back in July, I completed the protocol implementation. Since then, I’ve worked on actually doing things on top of it.

Binding Keys to Hardware

One feature I largely ignored in Sigul because it seemed complicated was its ability to encrypt signing keys using a combination of user-provided passwords and hardware (a Yubikey or TPM, for example). This means that if you were to obtain a copy of the Sigul database and you also managed to get the user’s password to access a signing key, you still wouldn’t be able to access the private keys without getting access to the signing machine’s TPM or a Yubikey that Fedora’s infrastructure team has access to.

I have partially re-implemented this particular feature in Siguldry. I say partially because Sigul had a number of different ways to bind keys to hardware on both the server and client side, and I have opted to implement only what Fedora infrastructure currently uses. You can use any key accessible via PKCS #11 to encrypt signing keys in addition to user passwords. You can even use multiple keys, and any one key is enough to unlock things (so you can have a backup Yubikey or three).
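
Conceptually this is envelope encryption: the key material is encrypted once under a random data key, and that data key is wrapped separately for each unlock credential. A toy sketch of the “any one key unlocks” property, using Fernet keys as stand-ins for passwords and PKCS #11 tokens (illustrative only, not Siguldry’s actual scheme):

from cryptography.fernet import Fernet, InvalidToken

def seal(secret: bytes, unlock_keys: list[bytes]) -> tuple[bytes, list[bytes]]:
    # Encrypt the secret once; wrap the data key under every unlock key.
    data_key = Fernet.generate_key()
    ciphertext = Fernet(data_key).encrypt(secret)
    wrapped = [Fernet(k).encrypt(data_key) for k in unlock_keys]
    return ciphertext, wrapped

def unseal(ciphertext: bytes, wrapped: list[bytes], one_key: bytes) -> bytes:
    # Any single unlock key is enough: try it against each wrapped copy.
    f = Fernet(one_key)
    for w in wrapped:
        try:
            return Fernet(f.decrypt(w)).decrypt(ciphertext)
        except InvalidToken:
            continue
    raise InvalidToken("this key does not unlock the secret")

password_key, yubikey_a, yubikey_b = (Fernet.generate_key() for _ in range(3))
ct, wraps = seal(b"signing key material", [password_key, yubikey_a, yubikey_b])
assert unseal(ct, wraps, yubikey_b) == b"signing key material"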

On the client side, the expectation is that user passwords and certificates will be bound to the TPM via systemd-credentials.

The Server Supports PGP Signing

I’ve implemented creating PGP keys as well as signing content with them. The interface supports detached signatures, the cleartext format you see, for example, with CHECKSUM files, and inline signatures.

This covers all current PGP signature types used by Fedora. However, more work is needed on the client side to use this interface to sign RPMs. The client needs to extract the header from the RPM, have the server produce a detached signature, and then send that signature to Koji.

The Server Supports RSASSA-PKCS1-v1_5 and ECDSA Signatures

These are the signatures you get with openssl dgst and related commands, and they’re used for Secure Boot signatures, container signatures with cosign, and IMA.
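
For a feel of what these look like in code, here is a rough equivalent of openssl dgst signing using the Python cryptography library (illustrative only; this is not Siguldry’s API):

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec, padding, rsa

message = b"content to sign"

# RSASSA-PKCS1-v1_5 over SHA-256, as with `openssl dgst -sha256 -sign rsa.pem`
rsa_key = rsa.generate_private_key(public_exponent=65537, key_size=3072)
rsa_sig = rsa_key.sign(message, padding.PKCS1v15(), hashes.SHA256())

# ECDSA over SHA-256, as with the same invocation and an EC key
ec_key = ec.generate_private_key(ec.SECP256R1())
ec_sig = ec_key.sign(message, ec.ECDSA(hashes.SHA256()))

# Verification raises InvalidSignature on mismatch.
rsa_key.public_key().verify(rsa_sig, message, padding.PKCS1v15(), hashes.SHA256())
ec_key.public_key().verify(ec_sig, message, ec.ECDSA(hashes.SHA256()))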

What’s Next

Siguldry now has all the primitive interfaces to implement signing in the client for various types of content (RPM, containers, git tags, etc). It’s now time to work on implementing those, figure out what server interfaces need changing, and then add a whole lot of polish: a comprehensive end-to-end test suite for all content types, API documentation, an admin guide, and a set of tools to migrate the Sigul database to Siguldry.

Comments and Feedback

Thoughts, comments, or feedback greatly welcomed on Mastodon

Flash sale on all Pragmatic Bookshelf titles

Posted by Ben Cotton on 2025-09-18 14:00:00 UTC

With less than 100 days until Christmas, now is a great time to save 45% on titles from The Pragmatic Bookshelf! Use promo code flashsale at pragprog.com between 1400 UTC (10am Eastern) September 18 and 1400 UTC (still 10am Eastern) on September 20 to save 45% on every title (except The Pragmatic Programmers).

Not sure what you should get? If you don’t have a copy of Program Management for Open Source Projects, now’s a great time to get one. I’ve also been a technical reviewer for a few other books, and I’ve read (or have in my stack to read) several more.

With hundreds of titles to choose from, there’s something for you and the techies in your life.

This post’s featured photo by Josh Appel on Unsplash.

The post Flash sale on all Pragmatic Bookshelf titles appeared first on Duck Alignment Academy.

Announcing the Soft Launch of Fedora Forge

Posted by Fedora Community Blog on 2025-09-18 10:00:00 UTC

We are thrilled to announce the soft launch of Fedora Forge, our new home for Fedora Project subprojects and Special Interest Groups (SIGs)! This marks a significant step forward in modernizing our development and collaboration tools, providing a powerful platform built on Forgejo. For more background on why we chose Forgejo, see the previous community blog post.

A New Home for Fedora Collaboration

We designed Fedora Forge as a dedicated space for official Fedora Project work. Unlike pagure.io, which hosted personal projects alongside Fedora Project work, Fedora Forge focuses on supporting subprojects and SIGs. This structured approach streamlines our efforts and simplifies contribution to official Fedora teams.

We are migrating projects from select teams, including Release Engineering (RelEng), the Council, and the Fedora Engineering Steering Committee (FESCo). This phased approach lets us test the platform thoroughly before opening it to more subprojects and SIGs.

If you are a leader of a team or SIG and would like to request a new organization or team on Fedora Forge, please see our Requesting a New Organization and/or Team guide for detailed instructions.

Seamless Migration with Pagure Migrator

The Pagure Migrator is a key part of this launch. We developed and upstreamed this new Forgejo feature to ensure smooth transitions. This utility moves projects from Pagure-based Git forges seamlessly. It brings over historical data like pull requests, issue tickets, topics, labels, and users. As subprojects and SIGs move over, their valuable history and ongoing work come with them. This ensures continuity and a painless transition for contributors.

Get Ready for What’s Next!

This soft launch is just the beginning. As we test the waters by settling these first subprojects and SIGs on Fedora Forge, we will be preparing to open it up in the coming weeks. We are confident that Fedora Forge will become an invaluable tool for the community, providing a robust and modern platform for collaboration.

Please use the #git-forge-future tag on Fedora Discussion to communicate your feedback and the #fedora-forgejo:fedoraproject.org channel on Fedora Chat to collaborate with us.

The post Announcing the Soft Launch of Fedora Forge appeared first on Fedora Community Blog.

Day 3: Microsoft Hackathon — Thread Heuristics, Importance Signals, and Agent Editing

Posted by Brian (bex) Exelbierd on 2025-09-17 18:00:00 UTC

Today started with a plan: drop my kid at school, head to a coworking space, meet my teammate, and push the importance model forward. Reality: unexpected stuff pulled me back home first. Momentum recovered later, but the change in expectations reinforced how fragile context can be when doing iterative LLM + data work.

Disclaimer: I work at Microsoft on upstream Linux in Azure. These are my personal notes and opinions.

Thread heuristic exporter

I built a first pass “thread stats” exporter: sender count, participant diversity, tokens of note, and other structural hints. Deterministic, fast, and inspectable. This gives a baseline before letting an LLM opine. The goal: reduce the search space without prematurely deciding what’s “important.”
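
The exporter boils down to something like this (a simplified sketch; the real feature set and field names differ):

from collections import Counter

def thread_stats(thread: list[dict]) -> dict:
    # Deterministic, fast, inspectable - no LLM involved at this stage.
    senders = Counter(msg["from"] for msg in thread)
    total = len(thread)
    return {
        "message_count": total,
        "sender_count": len(senders),
        # Crude diversity: unique senders per message (1.0 = everyone distinct).
        "participant_diversity": len(senders) / total if total else 0.0,
        "top_poster_share": senders.most_common(1)[0][1] / total if total else 0.0,
    }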

Planning importance signals

With that baseline, I worked (with ChatGPT-5) on how to move beyond raw counts. What makes a thread worth surfacing? Some dimensions that matter to me:

  • Governance, policy, or consensus decisions
  • Direct relevance to Azure or adjacent cloud platform concerns
  • Presence of Debian package names (as a proxy for concrete change surface)
  • Diversity of participants vs. a back-and-forth between two people
  • Emergence of unusual or “quirky” sidebars that might signal cultural or directional shifts (these are often interesting even if not strictly impactful)

I want to avoid letting pure volume masquerade as importance. A long bikeshed is still a bikeshed.
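
To make that concrete, a first stab at combining these dimensions might look like the following - the weights are pure guesswork at this point, and exactly the dial I expect the human-in-the-loop review to turn:

def importance_score(features: dict) -> float:
    # Hypothetical weights; raw volume deliberately gets no term of its own.
    return (
        3.0 * features.get("governance_hits", 0)
        + 2.5 * features.get("azure_relevance", 0)
        + 1.5 * features.get("package_mentions", 0)
        + 2.0 * features.get("participant_diversity", 0.0)
        + 1.0 * features.get("quirky_sidebar", 0)
    )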

Data enrichment

I experimented with a regexp to match Debian package names. That worked a bit too well: Claude Sonnet was reporting far more matching threads than I could believe. A little investigation showed the regexp was capturing everything. Sonnet suggested pulling in a Debian package list to tag occurrences inside threads. That, plus spaCy-based entity/token passes, lets me convert unstructured text into a feature layer the LLM can later consume. The MVP now narrows roughly 424 August 2025 threads to ~250 candidates for deeper scoring. Not “good,” just narrower. False negatives remain a risk; I’d rather over‑include at this stage.
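
The fix is boring on purpose: check candidate tokens against the real package universe rather than trusting a pattern. A sketch (loading the package list is left out; use whatever dump you have handy):

import re

def tag_packages(text: str, known_packages: set[str]) -> set[str]:
    # Debian package names: lowercase alphanumerics plus '.', '+', '-'.
    candidates = set(re.findall(r"\b[a-z0-9][a-z0-9.+-]+\b", text))
    # The set intersection is what kills the false positives.
    return candidates & known_packages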

Why deterministic first

Locking down deterministic extraction reduces some of the noise before LLM scoring. It also provides a dial I can adjust during the human-in-the-loop review process I envision.

Next phase: human-in-the-loop LLM

Tomorrow I plan to let the model start proposing which threads look important or just interesting, then review those outputs manually - back and forth until the signals feel reliable. Goal: lightweight human-in-the-loop review, not handing over judgment. Keeping explanations terse will matter, or I (and any hypothetical readers) will drown in synthetic prose.

Agent editing workflow

While agents “compiled,”1 I created an AGENTS.md doc to formalize how I want writing edits to work. This is about editing my prose, not letting a model co‑author new ideas. Core rules I laid down include:

  • Challenge structure and assumptions when they look shaky - do not invent content
  • Preserve hedges; mark uncertainty with [UNCLEAR] instead of guessing
  • Keep my voice; I review diff output before accepting anything

Most importantly, I am not an emoji-wielding performative Thought Leader™ turned Influencer™. The new guidance has already reduced noise. I still refuse to let a model start from a blank page; critique mode is the win. Visual Studio Code diff views make trust-building easier - everything is inspectable.

Closing

Today was half heuristics, half meta-process. The importance problem still feels squishy, but the scaffolding is there. Now I’m going to stop and pick the kid up and let her redeem a grocery-store stamp card for a Smurf doll. Grocery stores are weird and since she speaks Czech she can do the talking.

  1. Obligatory xkcd. In this case, letting an LLM grind out code for review. 

Planning ahead is the most important part of code of conduct enforcement

Posted by Ben Cotton on 2025-09-17 12:00:00 UTC

The goal of a code of conduct is to explicitly define the boundaries of acceptable behavior in a community so that all members can voluntarily abide by it. Unfortunately, people will occasionally violate the code of conduct. When this happens, you have to take some kind of corrective action. This is where many communities struggle.

Too often, the community lacks a well-developed plan for responding to code of conduct violations. The first reason I’ve seen for this is the belief (hope?) that the community members will behave. This is too optimistic, but totally relatable. We all want to think the best of our communities. The other reason I see is a general reluctance to create policy before it’s needed. This makes a lot of sense in almost every other situation, but code of conduct enforcement is different. Developing processes on an as-needed basis is usually my suggested approach, but you cannot build your code of conduct enforcement on the fly. Here’s why.

Ad hoc processes seem unfair

Even if the outcome is correct, a process that’s made up on the fly will be perceived as unfair. If someone is given a timeout from the project because they were clearly harassing another community member, you won’t get much push back. On the other hand, if someone is subtly being a jerk while advancing a controversial opinion, a decision to suspend them could be seen as a punishment for their controversial opinion. People inclined to a bad-faith interpretation can always find a reason to cry foul, but most potential critics will understand if you’re following an established process.

Sometimes the on-the-fly process seems unfair because it is unfair. If two similar incidents occur a year apart, they may be handled very differently. This isn’t because of malice toward the more harshly dealt with person or favoritism toward the more leniently dealt with person. It’s because a year has passed and different people are potentially handling the cases. This is a recipe for inconsistent response.

Code of conduct response can be complicated

The other reason to plan ahead is that code of conduct response can be complicated. In minor cases, someone talks to the offending party, and everyone moves on with life. Those cases are easy. But in more severe cases, like where someone is temporarily (or permanently) suspended from the project, there are more steps to take. You may have to coordinate disabling one or more accounts (including social media accounts). If funding for travel or events is involved, you may need to pause or revoke that. If the person is a maintainer of some component, you need to ensure that someone else is available to handle those responsibilities.

Trying to figure out in the moment what needs to be done almost guarantees missing something. And the people with the ability to make the necessary changes might reasonably hesitate to do it in response to a request out of nowhere. In the case of temporary suspensions, you need to remember to re-enable access. Again, without a defined process, you might forget entirely, or at least forget to re-enable certain privileges.

With severe incidents, the situation is already distressing enough. A pre-defined process doesn’t make it easy, but it reduces the strain.

This post’s featured photo by Tingey Injury Law Firm on Unsplash.

The post Planning ahead is the most important part of code of conduct enforcement appeared first on Duck Alignment Academy.

The Couple Across the Way

Posted by Brian (bex) Exelbierd on 2025-09-17 11:50:00 UTC

From my window and balcony, I can see the balcony of a couple who live across the way. They seem to be about 10 to 15 years older than me, and I gather from their habits that they’re both retired. Several times a day, they step out onto their balcony with cups of coffee. One of them is often on the phone, while the other might sit quietly. Their voices are distinctive, so if my window is open, I always hear them.

They say you shouldn’t make up stories about people you observe and expect them to be true. It’s like assuming someone’s Instagram reflects their real life. But because I work from home, I see this couple frequently, and I think it’s lovely that they have this time, this space, and this ritual.

I often wonder what they’re saying—either to each other or to the person on the phone. My Czech isn’t strong enough to understand much, so I can’t piece together their conversations. I’ve never asked a Czech-speaking friend to listen in, either. If I did, I suspect I’d hear complaints, as I’ve been told that Czechs are prone to complaining. But one of the perks of not speaking the language fluently is that I can imagine otherwise. When I walk down the street, I assume people are talking about how beautiful the day is and how lucky we are to live in this city, rather than airing grievances.

What prompted me to write this today is that, for the first time, I noticed the woman wearing what I can only describe as a silly hat. She’s never worn a hat before, at least not that I’ve noticed. The man occasionally wears a baseball cap, but this was different. It made me wonder: is today silly hat day? If so, I hope it’s a happy one.

Nightly syslog-ng RPM packages for RHEL & Co.

Posted by Peter Czanik on 2025-09-17 09:36:10 UTC

I have been providing syslog-ng users with weekly git snapshot RPM packages for almost a decade. From now on, RHEL & Co users can use nightly packages provided by the syslog-ng team, and from a lot less obscure location. As usual, these packages are for testing, not for production.

Read more at https://www.syslog-ng.com/community/b/blog/posts/nightly-syslog-ng-rpm-packages-for-rhel-co

syslog-ng logo

Introducing complyctl for Effortless Compliance in Fedora

Posted by Fedora Magazine on 2025-09-17 08:00:00 UTC

complyctl is a powerful command-line utility implementing the principles of “ComplianceAsCode” (CaC) with high scalability and adaptability for security compliance.

In today’s rapidly evolving digital landscape, maintaining a robust security posture isn’t just a best practice – it is a necessity. For Fedora users, system administrators, and developers, ensuring that your systems meet various security and regulatory requirements can often feel like a daunting, manual task. But what if you could standardize and automate much of this process, making compliance checks faster, easier to audit, and seamlessly integrated into your workflows?

This is now a reality enabled by multiple ComplyTime projects. Each focuses on a specific task and is designed to be easily integrated. Together they form a robust, flexible, and scalable set of microservices communicating in standardized formats, which makes it much easier to adapt to compliance demands and speeds up the adoption of new technologies. There are multiple exciting projects actively and quickly evolving under the umbrella of the ComplyTime organization. In this article I would like to highlight complyctl, the ComplyTime CLI for Fedora, and the main features that make it an excellent option for easily maintaining a robust security posture on your Fedora systems.

complyctl is a powerful command-line utility available since Fedora 42. Its design uses the principles of “ComplianceAsCode” (CaC) with high scalability and adaptability. It contains a technology-agnostic core and is easily extended with plugins. This allows users to use the best of every available underlying technology with a simple and standardized user interface.

The Power of ComplianceAsCode with complyctl

At its heart, complyctl is a tool for performing compliance assessment activities, scaled by a flexible plugin system that lets users combine the best available assessment technologies.

The complyctl plugin architecture allows quick adoption and combination of different scanner technologies. The core design is technology agnostic, standardizing inputs and outputs using machine-readable formats that allow high reusability and shareability of compliance artifacts. Currently it leverages the Open Security Controls Assessment Language (OSCAL), and its anti-fragile architecture also allows smooth adoption of future standards, making it a reliable and modern solution for the long term.

This might sound technical, but the benefits are simple:

  1. Automation and Speed: Traditional compliance audits can be slow, manual, complex and prone to human error. complyctl relies on standardized machine readable formats, allowing automation without technology or vendor lock-in.
  2. Accuracy and Consistency: Machines are inherently more consistent than human reviewers. complyctl’s reliance on OSCAL provides a standardized format for expressing security controls, assessment plans, and results. This standardization is crucial for interoperability. It allows consistent processing and understanding of compliance data across different tools and systems.
  3. Scalability and Integration: complyctl simplifies the integration of compliance checks into your development and deployment pipelines. An OSCAL Assessment Plan can be created and customized once and reused across multiple systems. Ultimately, compliance checks can be implemented faster and compliance gaps are caught earlier. This prevents non-compliant configurations from reaching production environments.
  4. Extensibility with Plugins (including OpenSCAP): The plugin-based architecture of complyctl makes it incredibly versatile. An example is the complyctl-openscap-plugin, which extends complyctl’s capabilities to use OpenSCAP Scanner and the rich content provided by scap-security-guide package. This allows an immediate and smooth adoption of complyctl using a well-established assessment engine while providing a modern, OSCAL-driven workflow for managing and executing security compliance checks. It also allows a smooth and gradual transition to other scanner technologies.

By embracing complyctl, Fedora users can more easily maintain a strong security posture.

Getting Started with complyctl: A Practical Tutorial

Ready to put complyctl to work? It is likely simpler than you expect. The following is a step-by-step guide to start using complyctl on your Fedora system.

1. Installation

First, install complyctl, if necessary. It is available as an RPM package in official repositories:

sudo dnf install complyctl

2. Understanding the Workflow

complyctl follows a logical, sequential workflow:

  • list: Discover available compliance frameworks.
  • plan: Create an OSCAL Assessment Plan based on a chosen framework. This plan acts as your assessment configuration.
  • generate: Generate executable policy artifacts for each installed plugin based on the OSCAL Assessment Plan.
  • scan: Call the installed plugins to scan the system using their respective policies and finally aggregate the results in a single OSCAL Assessment Results file.

Let’s walk through these commands.

3. Step-by-Step Tutorial

Step 1: List Available Frameworks

To begin, you need to know which compliance frameworks complyctl can assess your system against. Currently the complyctl package includes the CUSP Profile out-of-the-box.

Use the list command to show the available frameworks:

complyctl list

This command will output a table, showing the available frameworks. Look for the Framework ID column, as you’ll need this for the next step.

Optionally, you can also include the --plain option for simplified output.

Step 2: Create an Assessment Plan

Once you’ve identified a Framework ID, you can create an OSCAL Assessment Plan. This plan defines what will be assessed. The plan command will generate an assessment-plan.json file in the complytime directory.

complyctl plan cusp_fedora

This command creates the user workspace in the “complytime” directory:

tree complytime
complytime/
└── assessment-plan.json

The JSON file is a machine-readable representation of your chosen compliance policy.

Step 3: Install a plugin

In this tutorial we will use OpenSCAP Scanner as the underlying technology for compliance checks. So we also want to install the OpenSCAP plugin for complyctl, as well as the OpenSCAP content delivered by the scap-security-guide package:

sudo dnf install complyctl-openscap-plugin scap-security-guide

Step 4: Generate Policy Artifacts

With your assessment-plan.json in place and the desired plugins installed, the generate command translates this declarative plan into policy artifacts for the installed plugins. These are the actual plugin-specific instructions complyctl plugins will use to perform the checks.

complyctl generate

This command prepares the assessment for execution.

tree complytime/

complytime/
├── assessment-plan.json
└── openscap
    ├── policy
    │   └── tailoring_policy.xml
    ├── remediations
    │   ├── remediation-blueprint.toml
    │   ├── remediation-playbook.yml
    │   └── remediation-script.sh
    └── results

Step 5: Execute the Compliance Scan

Finally, the scan command runs the assessment using the installed plugins. The results will appear in the assessment-results.json file by default.

complyctl scan

For human-readable output, which is useful for review and reporting, you can add the --with-md option. This will generate both assessment-results.json and assessment-results.md files.

complyctl scan --with-md

This Markdown file provides a clear, digestible summary of your system’s compliance status, making it easy to share with auditors or other stakeholders.

tree complytime/
complytime/
├── assessment-plan.json
├── assessment-results.json
├── assessment-results.md
└── openscap
    ├── policy
    │   └── tailoring_policy.xml
    ├── remediations
    │   ├── remediation-blueprint.toml
    │   ├── remediation-playbook.yml
    │   └── remediation-script.sh
    └── results
        ├── arf.xml
        └── results.xml

Final thoughts

complyctl is an open-source tool built for and by the community. We encourage you to give it a try.

  • Find us on GitHub at complyctl repository.
  • If you find an issue or have a feature request, please open an issue, propose a PR, or contact the maintainers. Your feedback will help shape the future of this tool.
  • Collaboration in the ComplianceAsCode/content community is also welcome to help us shape compliance profiles for Fedora.

Hermeto Pascoal vive!

Posted by Avi Alkalay on 2025-09-17 01:09:26 UTC

🇧🇷 Pra quem quiser conhecer um pouco da genialidade de Hermeto Pascoal (1936-2025), preparei uma playlist com algumas de suas composições interpretadas por outros músicos brilhantes do Brasil e do mundo. Prepare-se para melodias de fora deste planeta, frases de originalidade intrigante, harmonias dissonantes, tudo frequentemente sobrepostas sobre ritmos brasileiros. Hermeto foi um dos maiores compositores de nossa era. Hermeto vive para sempre em sua música.

🇬🇧 For anyone who wants to experience a bit of Hermeto Pascoal’s genius, I’ve put together a playlist with some of his compositions interpreted by other brilliant musicians from Brazil and around the world. Get ready for melodies from out of this world, phrases of intriguing originality, dissonant harmonies, all often layered over Brazilian rhythms. Hermeto was one of the greatest composers of our time. Hermeto lives forever in his music.

Also in my Facebook

Day 2: Microsoft Hackathon — Distractions, Brainstorming, and Infrastructure

Posted by Brian (bex) Exelbierd on 2025-09-16 19:50:00 UTC

Today was a mixed bag. I started with the goal of advancing the central metadata architecture, which is critical for figuring out what’s worth surfacing in the notebook. However, distractions and infrastructure challenges dominated: a hot project at work, a meeting that couldn’t be skipped, and the ever-present TPS reports (or in my case, an expense report from a recent trip). These interruptions made it hard to focus on the metadata work.

Disclaimer: I work at Microsoft on upstream Linux in Azure. These are my personal notes and opinions.

Brainstorming with ChatGPT

Despite the distractions, I spent some time brainstorming with ChatGPT on methods for surfacing important information. We explored various heuristics and LLM concepts. It was a productive session that gave me ideas to refine and potentially turn into something valuable.

One area we explored was how to determine if a thread was “important.” This included qualitative factors like the number of unique participants in a thread and, in a future with more data, the history of those participants’ involvement in the list. We also discussed keyword surfacing as a way to highlight significant topics and the potential for trend analysis to predict emerging themes over time. While we touched on some additional qualitative measures, I don’t recall all the specifics.

Infrastructure Challenges

The day ultimately became about infrastructure. Scaling issues forced me to shift to a database for the MVP, and SQLite came to the rescue. While not ideal, it’s a practical solution for now.

The motivation for SQLite was scalability. To ensure thread completion, I had to load more than one month of data into the MVP. Bonus months from testing added even more data, so it made sense to work with fresh data rather than repeatedly processing the same old files. SQLite also provided a way to query the data efficiently without having to read through tons of JSON files. While this approach works for the MVP, it’s clear that a more robust solution—like an MCP server—might be needed in the future.
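
The storage layer is nothing fancy - roughly this shape, with the schema simplified for illustration:

import sqlite3

conn = sqlite3.connect("mvp.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS messages (
    message_id TEXT PRIMARY KEY,
    thread_id  TEXT,
    sender     TEXT,
    subject    TEXT,
    sent_at    TEXT,   -- ISO 8601 keeps date queries simple
    body       TEXT
);
CREATE INDEX IF NOT EXISTS idx_messages_thread ON messages (thread_id);
""")

# What used to mean re-reading piles of JSON files is now one query:
busiest = conn.execute(
    "SELECT thread_id, COUNT(*) AS n FROM messages"
    " WHERE sent_at LIKE '2025-08%' GROUP BY thread_id ORDER BY n DESC LIMIT 20"
).fetchall()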

Final Takeaway

One thing this day reinforced is how fragile AI-driven development can feel without proper context management. Whether brainstorming with ChatGPT or coding in agent mode using Sonnet in Visual Studio Code, I’ve noticed that when tools lack memory or context, things can quickly go off the rails. For example, fragile solutions like resorting to regex to parse HTML instead of using a proper library like BeautifulSoup can emerge. This highlights the need for better agent hints or configuration files, though the idea of managing those feels cumbersome.

Tomorrow, I hope to meet with a teammate to advance the more interesting parts of the project. With the infrastructure in place, I’m optimistic about making real progress.

This remains a work in progress, but I’m hopeful that the brainstorming and infrastructure work will pay off in the coming days.

Announcing Fedora Linux 43 Beta

Posted by Fedora Magazine on 2025-09-16 14:05:00 UTC

On Tuesday, 16 September 2025, it is our pleasure to announce the availability of Fedora Linux 43 Beta! This release comes packed with the latest version upgrades of existing features, plus a few new ones too. As with every beta release, this is your opportunity to test out the upcoming Fedora Linux release and give some feedback to help us fine tune F43 final. We hope you enjoy this latest version of Fedora!

How to get the beta release

You can download F43 Beta, or our pre-release edition versions, from any of the following places:

The Fedora CoreOS “next” stream moves to the beta release one week later, but content for F43 is still available from their current branched stream to enjoy now.

You can also update an existing system to the beta using DNF system-upgrade.

The F43 Beta release content is also available for Fedora Spins and Labs, with the exception of the following:

  • MATE – not currently available on any architecture with F43 content
  • i3 – not currently available on aarch64 with F43 content

F43 Beta highlights

Installer and desktop Improvements

Anaconda WebUI for Fedora Spins by default: This creates a consistent and modern installation experience across all Fedora desktop variants. It brings us closer to eventually replacing the older GTK installer. This ensures all Fedora users can benefit from the same polished and user-friendly interface.

Switch Anaconda installer to DNF5: This change provides better support and debugging for package-based applications within Anaconda. It is a bigger step towards the eventual deprecation or removal of DNF4, which is now in maintenance mode.

Enable auto-updates by default in Fedora Kinoite: This change ensures that users are consistently running a system with the latest bug fixes and features after a simple reboot. Updates are applied automatically in the background.

Set Default Monospace Fallback Font: This change ensures that when a specified monospace font is missing, a consistent fallback font is used. Font selection also remains stable and predictable, even when the user installs new font packages. No jarring font changes should occur, as happened in previous versions.

System enhancements

GNU Toolchain Update: The updates to the GNU Toolchain ensure Fedora stays current with the latest features, improvements, and bug and security fixes from the upstream gcc, glibc, binutils, and gdb projects. They guarantee a working system compiler, assembler, static and dynamic linker, core language runtimes, and debugger.

Package-specific RPM Macros For Build Flags: This change provides a consistent, standard way for packages to add to the default list of compiler flags. It also offers a cleaner and simpler method for package maintainers to make per-package adjustments to build flags. This avoids the need to manually edit and re-export environment variables, prevents potential issues that could arise from the old manual method, and ensures the consistent application of flag adjustments.

Build Fedora CoreOS using Containerfile: This change brings the FCOS build process under a standard container image build, moving away from the custom tool, CoreOS Assembler. It also means that anyone with Podman installed can build FCOS. This simplifies the process for both individual users and automated pipelines.

Upgrades and removals

Deprecate The Gold Linker: Deprecate the binutils-gold subpackage. This change simplifies the developer experience by reducing the number of available linkers from four to three. It streamlines choices for projects, and moves towards safeguarding the project against potential issues from “bitrot” where a package’s quality can decline and become unbuildable or insecure over time.

Retire python-nose: The python-nose package will be removed in F43. This prevents the creation of new packages with a dependency on an unmaintained test runner. Developers are encouraged to migrate to actively maintained testing frameworks such as python3-pytest or python3-nose2.

Retire gtk3-rs, gtk-rs-core v0.18, and gtk4-rs v0.7: This change prevents Fedora from continuing to depend on old, unmaintained versions of these bindings. It also avoids shipping obsolete software and reduces the number of unmaintained package versions.

Python 3.14: Updating the Python stack in F43. This means that by building Fedora packages against an in-development version, critical bugs can be identified and reported before the final 3.14.0 release. This helps the entire Python ecosystem. Developers also gain access to the latest features in this release. More information is available at https://docs.python.org/3.14/whatsnew/3.14.html.

Golang 1.25: This change provides Fedora Linux 43 Beta with the latest new features in Go. These include that go build -asan now defaults to leak detection at program exit, the go doc -http option starts a documentation server, and subdirectories of a repository can now be used as a module root. Since Fedora will keep as close to upstream as possible, this means we will continue to provide a reliable development platform for the Go language and projects written in it. 

Idris 2: Users gain access to new features in Idris 2. These include Quantitative Type Theory (QTT), which enables type-safe concurrent programming and fine-grained control over resource usage. It also has a new core language, a more minimal prelude library, and a new compilation target, Chez Scheme.

More information

Details and more information on the many great changes landing in Fedora Linux 43 are available on the Change Set page.

Configuring Vim Solarized to Follow the System Dark Mode Preference

Posted by Mat Booth on 2025-09-16 10:00:00 UTC

I like to switch my system between light and dark modes as lighting conditions change, and I want my terminal windows to always respect my current system preference. This article shows how to configure vim to automatically change between light and dark variants of the Solarized colour scheme, to match the current system dark mode preference.

I usually go years and years between OS re-installs, so I’ve forgotten how to do this. Hopefully this article will be useful for future me. The first thing to do after installing Fedora on a new machine, however, is to switch the default editor back to vim, because I have no muscle memory for nano and I refuse to change. 😅

$ sudo dnf swap nano-default-editor vim-default-editor

Now onto the main business of configuring the terminal and vim to use my favourite colour palette, Solarized by Ethan Schoonover.

Solarize The Terminal

Ptyxis, the new default terminal in Fedora Workstation Edition, has an excellent set of colour palette options. From the hamburger menu drop-down, select the Follow System Style button from the three options at the top. This causes Ptyxis to switch between light and dark modes when you change the system dark mode preference instead of being in dark mode all the time. Then open the Preferences dialog to select the Solarized colour palette from the options listed there:

Screenshot of terminal preferences dialog.

Solarize Vim

This works well for normal terminal operation, but vim’s own default colours can clash terribly with the terminal colour scheme. Sometimes the foreground and background colours are either the same or extremely low contrast, which results in impossible-to-read text, as shown here after performing a search for the string “init”:

Screenshot of search result highlights in vim making text unreadable.

Fortunately Ethan provides a vim-specific implementation of the Solarized colour palette in his vim-colors-solarized repository. This can be installed by downloading the provided vim script into your .vim/colors directory:

$ mkdir -p ~/.vim/colors
$ wget https://raw.githubusercontent.com/altercation/vim-colors-solarized/refs/heads/master/colors/solarized.vim \
    -O ~/.vim/colors/solarized.vim

And then configuring the colour scheme in your .vimrc file by adding the following lines:

" Enable Solarized Theme
syntax enable
colorscheme solarized

New vim sessions will now use the correct colours, and are even able to detect whether to use the light or dark variant of Solarized.

Dark Mode Detection

However, vim is only able to detect whether to use the light or dark variant at startup. This means that if I switch the system dark mode preference while vim is open, I have to close and reopen all my open vim sessions before they will use the correct Solarized variant.

Not even re-sourcing the .vimrc using the :so command corrects the colours. We can, however, edit it so that re-sourcing does fix the colour scheme variant in use, without needing to exit and reload vim.

" Enable Solarized Theme
syntax enable
let sys_colors=system('gsettings get org.gnome.desktop.interface color-scheme')
if sys_colors =~ 'dark'
    set background=dark
else
    set background=light
endif
colorscheme solarized

Expanding upon the previous .vimrc snippet, this explicitly sets vim’s background setting depending on the output of a gsettings query for the current system dark mode preference. Now the :so ~/.vimrc command can be used to fix the colours without having to close and reopen vim.

Vimterprocess Communication

It would be even better, of course, to have vim automatically re-source the .vimrc when the system dark mode preference changes.

Vim has a kind of interprocess communication mechanism built in. If it’s started with the --servername {NAME} option, then vim can accept commands from other vim processes running on your machine. To ensure vim is always started with this option, just add this line to your .bashrc file to create a command alias:

# Always start vim as a server
# with a unique name
alias vi='vi --servername VIM-$(date +%s)'

Now when you run vim (or vi) the session will be named VIM-<SOME_NUMBER>. Commands can be sent to such named sessions using a specially crafted invocation of vim:

$ vim --servername VIM-<SOME_NUMBER> --remote-send ":so ~/.vimrc<CR>"

So all we need to do is write a small shell script to find all running vim processes, determine their session names, and execute the above command for each one. Create the script in your user’s local bin directory, e.g. ~/.local/bin/vsignal.sh and make it executable with chmod +x ~/.local/bin/vsignal.sh:

#!/bin/bash

function signal_vim() {
	# Signal all running instances of vim
	PIDS=$(pgrep -u $USER vim)
	for PID in $PIDS ; do
		VIM_ID=$(ps --no-headers -p $PID -o args | cut -d' ' -f3)
		vim --servername $VIM_ID --remote-send ":so ~/.vimrc<CR>"
	done
}

# Wait for color-scheme change
gsettings monitor org.gnome.desktop.interface color-scheme | \
while read -r COLOR_SCHEME ; do
	signal_vim
done

Piping the gsettings monitor command into read causes the script to block until the system dark mode preference changes. When it does, the script calls the signal_vim function, performs the magic, and goes back to blocking. Now, as long as the vsignal.sh script is running, all active vim sessions will immediately switch to the appropriate Solarized colour scheme variant when the system dark mode preference is changed.

A Systemd Theme Sync Service

It’s a bit inconvenient to have to start a script whenever you open a terminal, though. The best way to have this script always running is to let systemd handle it. A new, user-specific service can be created with the following command:

$ systemctl edit --user --force --full theme-sync.service

The following service definition will cause systemd to start the script when you log into your Gnome session:

[Unit]
Description=Dark Mode Sync Service

[Service]
ExecStart=%h/.local/bin/vsignal.sh

[Install]
WantedBy=gnome-session.target

And finally, enable and start the service with the following commands:

$ systemctl --user enable theme-sync.service
$ systemctl --user start theme-sync.service

Now we can switch between light and dark modes to our heart’s content, safe in the knowledge that vim will follow suit. 😌

Day 1: Microsoft Hackathon — Building a Focused Summarizer for Upstream Linux

Posted by Brian (bex) Exelbierd on 2025-09-15 18:50:00 UTC

This week is the Microsoft Hackathon, and I’m using it as a chance to prototype something I’ve been thinking about for a while: a tool that summarizes what’s happening in upstream Linux communities in a way that’s actually useful to people who don’t have time to follow them day-to-day.

Disclaimer: I work at Microsoft on upstream Linux in Azure. These are my personal opinions.

For my MVP, I’m going to try to produce a “What happened in Debian last month” summary from selected mailing lists. It’s not a full picture of the community, but it’s a solid basis for a proof of concept.

Why this project?

Part of my work at Microsoft involves helping others understand what’s going on in upstream Linux communities. That’s not always easy — the signal is buried in a lot of noise, and most people don’t have time to follow mailing lists or community threads. If this works, I’ll have a tool that can generate a newsletter-style summary that’s actually useful.

Why Debian?

For this MVP, I chose Debian. It’s a community I work with but haven’t traditionally followed as closely as Fedora, where I have deeper experience. That makes Debian a good test case — I know enough to judge the output, and I have colleagues who can help validate it. I’m focusing on August 2025 because I already know what happened that month, which gives me a baseline to evaluate the results.

Agentic coding, not vibe coding

Agentic coding, in my view, is when you rely on an LLM to do the heavy lifting — generating code, suggesting structure — but you stay in the loop. You review what’s written, check the inputs and outputs, and make sure nothing weird slips in. It’s not fire-and-forget, and it’s not vibe coding where you just hope for the best. I don’t read every line as it’s generated, but I do check the architecture and logic. One of my frequent prompt inclusions is “don’t assume, ask and challenge my assumptions where appropriate.” This helps uncover ideas as I develop, similar to an agile process.

A breakfast pivot

This morning over breakfast with a friend, I walked through the architecture I’d outlined with Copilot on Friday. Originally, I was planning to build a vector database and use retrieval-augmented generation (RAG) to power the summarization. But as we talked, it became clear that this was overkill for the MVP. What I really needed was a simpler memory model — something that could support basic knowledge scaffolding without the complexity of full semantic search.

So I pivoted. Today’s work focused on getting the initial data in place: downloading a couple of months of Debian mailing-list emails to ensure I had full threads from August, storing them locally to avoid putting any load on Debian’s infrastructure, and building scaffolding to sort and store the data so it supports both metadata generation and LLM access.
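
The download scaffolding amounts to a cache-first fetch loop, something like this sketch (the URL pattern is a placeholder, not Debian’s actual archive layout):

import pathlib
import time
import urllib.request

CACHE = pathlib.Path("archive-cache")
CACHE.mkdir(exist_ok=True)

def fetch_month(list_name: str, yyyymm: str) -> bytes:
    # Cache first: never hit the network for a month we already have.
    local = CACHE / f"{list_name}-{yyyymm}.mbox"
    if local.exists():
        return local.read_bytes()
    # Placeholder URL - substitute the real archive location.
    url = f"https://example.org/archives/{list_name}/{yyyymm}.mbox"
    with urllib.request.urlopen(url) as resp:
        data = resp.read()
    local.write_bytes(data)
    time.sleep(2)  # pause between real fetches to be polite to the server
    return data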

Could I have used a vector database or IMAP-backed mail store? Sure. But this was quick, easy, and gave me a chance to practice agentic coding in Python — something I don’t get to do much in my day-to-day product management work.

What I’m hoping to learn

This MVP is about testing whether AI-generated insights from community data are actually useful. In OSPO and community spaces, we talk a lot about gathering insights — but we don’t always ask whether those insights answer real questions. This is a chance to test that. Can we generate something that’s not just interesting, but actionable? It feels a bit like the tail wagging the dog, but sadly that’s where we seem to be.

Any surprises?

Nothing major yet, but I appreciated that the LLM caught a pagination issue I missed. I’d assumed a dataset was complete; while reconstructing threads it exposed an oddly truncated dataset. Today’s work also reminded me to be deliberate about model selection — not all LLMs are created equal, and the choice matters if you don’t arbitrarily default to the latest frontier models.

What’s on deck for tomorrow?

Thanks to how some data structures came together, I’m rearchitecting the metadata store. This lets me defer generating the basic, memory-style knowledge passed to the LLM until I’m closer to using it, which should prevent some ugly backtracking.

I keep relearning this: don’t build perfect infrastructure for an MVP - ship the smallest thing that answers the question.

About

Posted by Robert Wright on 2025-09-14 06:41:41 UTC

👋 Hey there, I’m Robert

I’m based out of Amsterdam, NL (US-BOI > US-SEA > UK-BOH > US-RAL > US-PDX > NL-AMS). I spend most of my spare time outside of work and home building and sharing in the world of Free and Open Source Software (FOSS). I’m an active contributor to the Fedora Project, where I focus on community health and growth.

💾 My professional background is rooted in PostgreSQL and data engineering—I love working with big datasets, optimizing queries, and building pipelines that actually make sense for people. Over the years, I’ve worked across GRC (governance, risk, and compliance) and analytics, but what excites me most is helping communities and teams use their data to become healthier, more transparent, and more resilient.

Misc infra bits from 2nd week of sept 2025

Posted by Kevin Fenzi on 2025-09-13 17:05:53 UTC
Scrye into the crystal ball

Welcome to saturday! Another week gone by. For some reason, for me at least, this week marked the end of the quiet of the previous few. It seemed like lots of people got back from summer vacations and wanted to discuss their new plans and ideas. That's wonderful to see, but it also makes for a busy week of replying, pondering, and discussing.

Next (small) datacenter move planning underway

We have a small number of machines in a datacenter often referred to as the rdu2 community cage. With our big datacenter move earlier this year to rdu3, we are going to move the rdu2 community cage gear over to rdu3 to consolidate it.

There are only a few machines there, but there are two services people will notice: pagure.io and download-cc-rdu01. We are currently trying to see if we can arrange things so we can just migrate pagure.io to a new machine in rdu3 and switch to it. If we can do that, downtime should be on the order of a few hours or less. If we cannot for some reason, the outage will be on the order of a day or so while the machine it's on is moved. For download-cc-rdu01, we will likely just have to have it down for a day or so, which shouldn't be too bad since there are many other mirrors to pull from.

It's looking tentatively like this might occur in November. So, look for more information as we know it. :)

communishift upgrades

Our communishift openshift cluster ( https://docs.fedoraproject.org/en-US/infra/communishift/ ) has been lagging on updates for a while. Finally we got a notice that the 4.16.x release it was on was going to drop out of support. So, I contacted the openshift dedicated folks about it, and they responded in minutes (awesome!) that the upgrade from 4.16.x required moving from SDN to OVN networking and that was a thing that customers should do. Fine with me, I just didn't know.

So, after that I:

  • Upgraded networking from SDN to OVN

  • Upgraded from 4.16.x to 4.17.x

  • Upgraded to cgroups v2

  • Upgraded to 4.18.x

  • Checked that we were not using any deprecated things

  • Upgraded to 4.19.x

So, it's all up on the latest version and back on track with regular updates now. Let us know if you see any problems on it.

anubis testing in staging

A bunch more anubis testing in staging this last week. I worked on getting things working with our proxy network and using the native fedora package. Sadly the package I was testing with had a golang compile issue and just didn't work. I lost a fair bit of time trying to figure out what I had done wrong, but it wasn't me or our setup.

Luckily there is a package with a workaround pushed now, and hopefully work to sort it out once and for all. So, if you are testing in fedora, make sure you have the latest package.

Once that was solved, things went much more smoothly. I now have koji.stg, bodhi.stg, and lists.stg all using it and everything seems to work fine. You can see a pretty big drop in bandwidth on those services too.

Early next week I will add koschei and forgejo to testing, and then after the beta is out and we are out of freeze, I am going to propose we enable all those in production along with pagure.io.

f43 beta release next tuesday

Amazingly, we managed to hit the early date again and Fedora 43 Beta will be released next Tuesday. Thanks to everyone who worked so hard on this milestone.

We did run into an issue yesterday with f43 updates, and I thought I would expand on the devel-announce posting about it.

When beta is "go" we do a bunch of prep work, make sure we have a final nightly compose that matches the RC that was declared go, and then we unfreeze updates for the f43 release. This means a ton of updates that were in updates-testing and blocked from going stable due to the beta freeze suddenly can go stable.

However, a while back we added (and made more consistent) the checks that bodhi uses when pushing updates stable. Before, it just saw that the update had a request for stable and pushed it. Now it checks all the things that should allow it to be pushed: has it passed gating checks (or have those been waived), has it spent the right amount of time in updates-testing, or has it gotten enough karma to go stable.

Also when we unfreeze beta we switch the updates policy in bodhi to require more days in testing, as we are nearing the release.

What this all meant was that if you had an update submitted recently during the freeze, it spent 3 days in testing and requested stable. But we changed that value from 3 days to 7 (or 14 for critpath), so when we tried to process those updates, bodhi kicked them out of the compose saying "sorry, not enough days in testing".

Probably what we will end up doing is changing the requirements at the start of the beta freeze instead of at the end of it. That way updates will need the new updated days in testing from the start of the freeze.

So, if you had an update kicked out, it should be able to go stable after the required number of days (or karma!) is reached.

comments? additions? reactions?

As always, comment on mastodon: https://fosstodon.org/@nirik/115198218305728577

2025: Week 37 update

Posted by Ankur Sinha on 2025-09-12 10:39:55 UTC

The last time I wrote an update was, funnily enough, about a year ago. Also week 37. Clearly a lot has happened since then, but it won't all fit in this one post. I'll limit this post to the higher level details.

Job/Research

Work has been extremely busy as always, busy enough that my backlog keeps growing and growing. There is just so much to do.

Modelling

I have been working on a number of computational models.

We are close to having the full Human L2-3 Cortical Micro-circuit converted to NeuroML. The cells have all been standardised, including the ion channels and the various morphologies and biophysics. What remains is replicating the network behaviour. That is not the hard bit; the standardisation was, and we have already done all that.

Another model I have been working on is the Golgi cell network that was previously developed in the lab (https://doi.org/10.1016/j.neuron.2021.03.027, model: https://github.com/sanjayankur31/GolgiCellNetwork). There are several predictions about how the Granule cells in the Cerebellum function, and the effect of Golgi cell inhibition on them that will be investigated using this model.

The Golgi cell network model also led to a CookieCutter template, but that merits its own post.

NeuroML

NeuroML logo
Screenshot of the NeuroML paper in eLife: https://elifesciences.org/articles/95135

Since the last update, we have published a NeuroML paper in eLife after a few rounds of review (https://doi.org/10.7554/eLife.95135.3). This was a great achievement, and it did take a lot of work.

Other than that, multiple new releases of the NeuroML libraries have been made. We continue to maintain them and push out improvements and fixes regularly on PyPI under the NeuroML PyPI organisation: https://pypi.org/org/neuroml/.
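If you want to try them, the libraries install in the usual way; for example:

pip install pyNeuroML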

Google Summer of Code

We participated in Google Summer of Code (GSoC) again this year. Our candidate, Hengye, did an excellent job of converting various bits of the Macaque auditory thalamocortical model to NeuroML. The conversion also lives on GitHub, under the Open Source Brain organisation: https://github.com/OpenSourceBrain/Macaque_auditory_thalamocortical_model_data/tree/feat-neuroml-gsoc. We are going to continue working on this to standardise the full model.

Open Source Brain

Screenshot of the Open Source Brain website at https://opensourcebrain.org

Open Source Brain (OSB), an integrated web-platform for neuroscience, continues to tick along. We have made a number of maintenance updates and bug fixes. A new release is ready to be deployed to production after more testing.

We have also worked on integrating the E-BRAINS models into OSBv2. I need to test the E-BRAINS adapter implementation to prepare the PR for merging. This was blocked by some access issues, but they have now been ironed out, so this can proceed.

A paper on OSB is in the works and should be published as a pre-print in the coming months.

Grants and funding

This has taken a lot of time since the last update. We have applied for two Wellcome Discovery Awards and two Software Sustainability Institute grants, and are going to apply for several more.

So far, we have not been lucky enough to be accepted, even though we were highly rated. Each Wellcome grant application takes about a month of dedicated work. Further, once triaged and accepted for interview, it is another two weeks of preparation. So, it has been a lot of work for no reward at all.

There just seems to be quite a limited amount of funding, and the intense competition for it means that the success rate for most grant applications has dropped. So, more time is spent writing grants for fewer rewards. It is currently an even tougher time to be a researcher/academic than it normally is.

Volunteering

I have managed to find time for volunteering, but with the increased workload, this time has shrunk.

Software carpentries

I completed my Software Carpentry Instructor Training and am now a certified instructor. The training was excellent. It pointed out lots of good practices for teaching computational skills to newcomers and novices.

I will probably teach a session or two here at UCL this year and, if time permits, do a couple of classes for the Fedora classrooms.

Fedora

Fedora logo

Package maintenance continues. So does the Fedora Join SIG.

We have shifted how we do things in NeuroFedora. We just had too many packages, and it was no longer clear if people were actually using them. So, we are now testing lots of Python packages directly from PyPI instead of packaging them. You can read more in this post: Packaging changes at NeuroFedora.

The Join SIG is doing well. Quite a few new people are now helping out with onboarding and the Welcome to Fedora process, which means we have many more hands at our disposal. There are also more discussions on how we can do better, so expect tweaks and improvements to make the "Welcome to Fedora" process even better.

OCNS

OCNS logo.

I remain an elected member of the Board of Directors for the Organization for Computational Neuroscience. I could not get a Schengen visa to attend the annual conference in Florence, though (again). That was quite disappointing, but not a lot can be done about it. Europe is really popular in the summer, and there just aren't enough appointments for visa applications.

Summary

That will do for now. There is certainly a lot happening on multiple fronts, and I have been busy---too busy to write blog posts even. I am going to try to restart regular blog writing, but that's what I said last year before dropping out. So who knows.

🎲 PHP version 8.3.26RC1 and 8.4.13RC1

Posted by Remi Collet on 2025-09-12 07:04:00 UTC

Release Candidate versions are available in the testing repository for Fedora and Enterprise Linux (RHEL / CentOS / Alma / Rocky and other clones) to allow more people to test them. They are available as Software Collections, for parallel installation, the perfect solution for such tests, and as base packages.

RPMs of PHP version 8.4.13RC1 are available

  • as base packages in the remi-modular-test for Fedora 41-43 and Enterprise Linux ≥ 8
  • as SCL in remi-test repository

RPMs of PHP version 8.3.26RC1 are available

  • as base packages in the remi-modular-test for Fedora 41-43 and Enterprise Linux ≥ 8
  • as SCL in remi-test repository

ℹ️ The packages are available for x86_64 and aarch64.

ℹ️ PHP version 8.2 is now in security mode only, so no more RC will be released.

ℹ️ Installation: follow the wizard instructions.

ℹ️ Announcements:

Parallel installation of version 8.4 as Software Collection:

yum --enablerepo=remi-test install php84

Parallel installation of version 8.3 as Software Collection:

yum --enablerepo=remi-test install php83

Update of system version 8.4:

dnf module switch-to php:remi-8.4
dnf --enablerepo=remi-modular-test update php\*

Update of system version 8.3:

dnf module switch-to php:remi-8.3
dnf --enablerepo=remi-modular-test update php\*
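After switching, you can confirm which version is active with the usual commands:

php --version
dnf module list php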

ℹ️ Notice:

  • version 8.4.13RC1 is in Fedora rawhide for QA
  • version 8.5.0beta3 is also available in the repository, as SCL
  • EL-10 packages are built using RHEL-10.0 and EPEL-10.0
  • EL-9 packages are built using RHEL-9.6
  • EL-8 packages are built using RHEL-8.10
  • oci8 extension uses the RPM of the Oracle Instant Client version 23.9 on x86_64 and aarch64
  • intl extension uses libicu 74.2
  • RC version is usually the same as the final version (no changes accepted after RC, except for security fixes).
  • versions 8.3.26 and 8.4.13 are planned for September 25th, in 2 weeks.

Software Collections (php83, php84)

Base packages (php)

Day Two - Exploring Prague

Posted by Akashdeep Dhar on 2025-09-11 18:30:54 UTC

By 09th June 2025, the majority of folks attending Flock To Fedora 2025 were checking out of the hotel to either head back home or travel forward to attend DevConf.CZ 2025. I ended up waking up as early as 0500am Central European Summer Time on that day even though I had scheduled an alarm for a couple of hours later. After some conversations with my friends and family back in India over a phone call and getting ready for the day ahead, I headed downstairs to meet with the likes of David Kirwan, Sherif Nagy, Jonathan Dieter, and Fabian Arrotin with my equipment. As they were on their way to the airport and I was planning on heading out by myself for some more exploration, we decided to leave the hotel together after I was through with a quick breakfast of barely half a glass of orange juice (much to Fabian's surprise) at around 0745am Central European Summer Time.

Bidding farewell to the four of them who boarded the Bus #191 to Vaclav Havel Airport Prague (PRG), I boarded the Tram #15 to Staromestska from the Andel tram station. I decided to proceed on by myself when I did not hear back from Vipul Siddharth and Samyak Jain for the itinerary planning. After getting off at Malostranska, I took a walk through a public garden and the rising staircase to finally make it to the Prague Castle. The climb was a steep one, but there was nothing like a brisk morning walk to freshen me up on that day when I was travelling by myself. I managed to reach the entrance by 0815am Central European Summer Time and was one of the first ones to make it there, as I was concerned about possible crowds. After making it through a brief security check, I headed into a weirdly vacant campus with most shops being closed or just about to open up.

I, therefore, took the time to explore the campus of the Prague Castle as the ticket kiosks were still not open for service. The vistas gave me a rich understanding of the prestigious cultural heritage of the place and just how robust their infrastructure had been for ages. As the campus was located at the peak of Prague, I had a breathtaking view of the entire city from there - something that I could not help but keep recording and photographing. I ended up visiting a nearby souvenir shop when I was right about done with my first circuit around the campus to purchase some goodies for my friends and family. On inquiring from the staff there, I got to know that the ticket kiosks would not open for another ninety minutes or so - which was alarming, as I also planned on visiting the Old Town Square on that day. I could not afford to waste time here while exploring Prague by myself.

After munching on a bar of Snickers because the half glass of Orange Juice from the morning was clearly not enough to energize me, I decided to go for yet another round of the campus. I lucked out this time, as at around 0930am Central European Summer Time, the ticket kiosks had opened — but that also meant that I had to deal with the surging crowd, thus nullifying any possible benefits I would have had for arriving early. I decided to go only for the main circuit exploration of Prague Castle, which included exhibits I, III, V and VIII. After purchasing the ticket, I joined the queue heading into exhibit VIII, which was the St. Vitus Cathedral. Of all the statues and structures that I got to witness there, the stained glasses happened to have captivated me in their beauty the most. I headed into exhibit V next, which was the Old Royal Palace, closely following an adjacent tour group.

After exploring the vistas of the said exhibit, I headed into exhibit III next, which was the St. George's Basilica. There were seating arrangements here for the folks entering and some interesting cave-like corridors to explore. Finally, I headed towards exhibit I, which was the Golden Lane - consisting of a small alley with colorful houses. This alley ended with a small room with some ages-old equipment stowed away to give a glimpse of what life looked like back in the day. The one thing that probably ended up intriguing me the most happened to be the Prison Tower, which I witnessed on my way out of exhibit I. There were a couple of floors of stone-walled prisons with torture equipment and imprisoning shackles. The one thing that made me feel a lot more restricted there was the narrow stairwell leading into the places - thus establishing a desolate atmosphere.

While not being an exhibit to generally take pride in, the Prison Tower drew a dramatic perspective on human beings trying their best at their worst times. After viewing this exhibit, I could say for sure that I had explored most of what Prague Castle had to offer, minus the permanent exhibitions that I purposefully decided not to visit. On my way down from the campus through the steep staircase, I witnessed a group of musicians trying to earn their livelihood there. Prague has been a lively city, and there always happened to be something around the corner to catch my intrigue. It was also time for me to head over to yet another one of the liveliest places around Prague, which was the Old Town Square. While I had budgeted four hours for exploring the Prague Castle, I was already on my way out at around 1130am Central European Summer Time, barely after three hours of visiting.

I swiftly made my way back to the Malostranska tram station, from where I caught a Tram #15 again to get off at the terminal station, Staromestska. With the Old Town Square being a walkable distance away from there, the place definitely had a different level of vitality. This was most definitely true on that day, as I had to weave through a sizeable crowd to be able to get to the Astronomical Clock. Old Town Square housed not only the said tourist attraction but a bunch of other interesting spots, including historical hotels, fancy restaurants, dramatic theatres, and goodies stores. I decided to visit one of the souvenir stores again while having a brief chat with Tomas Hrcka, and I was glad to have taken his advice on going around the place when I could have some time. I soon found myself around the Staromestska tram station at around 1230pm Central European Summer Time.

I was able to wrap up the first half of the exploration pretty quickly without having to worry about whether I had given enough time to each of these tourist spots. Aboard the Tram #15 bound towards Kotlarka, I got off at the Andel tram station for a quick no-brainer KFC takeaway for lunch. I decided to catch up on Yashwanth Rathakrishnan's travel status, and thankfully, he had reached his house in India by then while I headed back to my hotel room. After having some conversations with my friends and family over lunch, I decided to catch up on some well-deserved rest with an afternoon nap. It was a difficult feat to achieve (you read it correctly), as my biological clock had been malfunctioning severely by then, so I decided to give up on the siesta at around 0430pm Central European Summer Time, even though I initially planned on waking up about a couple of hours later.

On following the advice from Aurelien Bompard, Sherif, and Jonathan, I decided to visit the Museum Of Communism. After some nine stops from the Andel tram station, I got off at the Jindrisska using the Tram #9 bound towards Spojovaci. The destination was only a couple of blocks away from there, but I kept taking detours every now and then to take in the wonderful vistas of the Prague evening. On entering the museum with a nominal entry fee, I got to explore the history around the beginnings and the demise of the communist regime across the globe. While the exhibits were a bit heavy on the reading, there were plentiful experiences to be had with both historically significant objects and timelessly detailed footages. After getting a couple of souvenirs from the attached store, I ran into a stranger on my way out of the museum who asked me if the place was worth the visit.

That was a tricky question because while I was not much into communism (or into politics in general, for that matter) - I did appreciate the information bestowed and the impressive quality. I did provide the museum feedback on potentially improving accessibility to appeal to differently abled folks, as a ton of what was there to be witnessed involved reading. I suggested giving the Museum Of Communism a try as he was there anyway - and who knows, he might actually find something interesting there. With that, I was on my ride back from Jindrisska to Andel on the Tram #9 bound towards Sidliste Repy at around 0830pm Central European Summer Time. I decided to devote my time to packing my luggage for my departure the next day, as I had to ensure that I was carrying no more than 30 kg in my check-in luggage, before freshening up and calling it a day.

Fedora 43 Wallpaper Wrap Up

Posted by Fedora Community Blog on 2025-09-11 10:00:00 UTC

And just like that, the Fedora 43 Wallpaper has been finalized!

I recently attended Flock (for the first time 😮) at the beginning of the summer. I had the privilege of facilitating a workshop with Emma Kidney about getting started in the Fedora Design Team. You can read Emma’s workshop recap here if you didn’t get the opportunity to attend! Participants helped create inspiration for the F44 wallpaper.

Now that it’s the end of the summer, I thought it might be a good time to discuss the F43 wallpaper process.

The final F43 Day Wallpaper

The final F43 Night Wallpaper

Let’s rewind to the start

If you’re not a blog post person, then you can always see the process documented on the F43 ticket here.

The first step involves a list of people or topics in STEM that the community can vote on. We were choosing between people whose last name starts with R, since we’ve followed the alphabet for the past 18 wallpapers.

Jess Chitas wrote on the ticket, "Thinking of ones that would make really nice wallpapers- Vera Rubin was an Astronomer who worked on galaxy rotation rates. Rooting for this one myself, a galaxy wallpaper would be epic! Wilhelm Röntgen essentially discovered the X-ray. Something with the skeleton could be cool? Henry Augustus Rowland worked with diffraction gratings (lots of colour!!)"

Vera Rubin and Wilhelm Röntgen made it into the poll (which can be found here on Fedora discussions), but Sally Ride ultimately won!

Sally Ride

“Everywhere I go I meet girls and boys who want to be astronauts and explore space, or they love the ocean and want to be oceanographers, or they love animals and want to be zoologists, or they love designing things and want to be engineers. I want to see those same stars in their eyes in 10 years and know they are on their way.” - Sally Ride

Sally Ride (May 26, 1951 – July 23, 2012) was a physicist and astronaut who became the first American woman in space on June 18, 1983. The third woman ever!

After finishing her training at NASA, she served as the ground-based CapCom for the second and third Space Shuttle flights. She helped develop the Space Shuttle’s robotic arm, which helped her get a spot on the STS-7 mission in June 1983.

Two communication satellites were deployed, including the first Shuttle pallet satellite. Ride operated the robotic arm to deploy and retrieve SPAS-1, which carried ten experiments to study the formation of metal alloys in microgravity. 

Mind Map

The mind map we created together in a design team meeting.

Based on Ride’s career, we started gathering images. We looked up the books she had worked on that had exclusive photographs of her missions. Her work in education and STEM made us think of the infographic posters that usually hang in classrooms. Ride’s mission happened in the 80s, which is technically too late to classify as mid-century. However, the bright, colorful mid-century space art and retro futurism felt perfectly suited to represent her optimism and approach to learning in STEM.

Sketches

I usually do quick sketches on paper or my iPad, and came up with these three options. The first one isn't so much a wallpaper as a blueprint for an exploring spacecraft - more realistic, versus the exaggerated shapes you'd see in children's books. I was much happier with the second and third compositions. That's why we always push to come up with multiple ideas, because it's like working a muscle. The design muscle!

People commented on the ticket that we could go forward with sketch 2 or sketch 3. I brought sketch 2 into Krita and started drawing over it on a new layer. I wanted to map out how the clouds would billow out and where stars could be.

Rough and Final Drafts

At this point, I received feedback from people on the Fedora Design team. It was suggested to bring the clouds on the right down a little and perhaps add a moon in the sky. The color and arch of the sunrise and sunset were also things we discussed in our weekly team meeting (open to community members 😉), and I believe they turned out pretty great.

Post F43 Thoughts

Without the community, these wallpapers wouldn’t happen! If you’re a passionate artist or designer, you’re more than welcome to participate in this recurring project.

The post Fedora 43 Wallpaper Wrap Up appeared first on Fedora Community Blog.

Toolbx — about version numbers

Posted by Debarshi Ray on 2025-09-10 21:49:26 UTC

Those of you who follow the Toolbx project might have noticed something odd about our latest release that came out a month ago. The version number looked shorter than usual even though it only had relatively conservative and urgent bug-fixes, and no new enhancements.

If you were wondering about this, then, yes, you are right. Toolbx will continue to use these shorter version numbers from now on.

The following is a brief history of how the Toolbx version numbers evolved over time since the beginning of the project till this present moment.

Toolbx started out with a MAJOR.MINOR.MICRO versioning scheme, e.g., 0.0.1, 0.0.2, etc. Back then, the project was known as fedora-toolbox, was implemented in POSIX shell, and this versioning scheme was meant to indicate the nascent nature of the project and the ideas behind it.

To put it mildly, I had absolutely no idea what I was doing. I was so unsure that for several weeks or a few months before the first Git commit in August 2018, it was literally a single file on my laptop, implementing the fedora-toolbox(1) executable, plus a Dockerfile for the fedora-toolbox image, that I would email around to those who were interested.

A nano version was reserved for releases to address brown paper bag bugs or other critical issues, and for release candidates. E.g., several releases between 0.0.98 and 0.1.0 used it to act as an extended set of release candidates for the dot-zero 0.1.0 release. More on that later.

After two years, in version 0.0.90, Toolbx switched from the POSIX shell implementation to a Go implementation authored by Ondřej Míchal. The idea was to do a few more 0.0.9x releases to shake out as many bugs in the new code as possible, implement some of the bigger items on our list that had gotten ignored due to the Go rewrite, and follow it up with a dot-zero 0.1.0 release. That was in May 2020.

Things went according to plan until the beginning of 2021, when a combination of factors put a spanner in the works, and it became difficult to freeze development and roll out the dot-zero release. It was partly because we kept getting an endless stream of bugs and feature requests that had to be addressed; partly because real life and shifting priorities got in the way for the primary maintainers of the project; and partly because I was too tied to the sanctity of the first dot-zero release. This is how we ended up doing the extended set of release candidates with a nano version that I mentioned above.

Eventually, version 0.1.0 arrived in October 2024, and since then we have had three more releases — 0.1.1, 0.1.2 and 0.2. Today, the Toolbx project is seven years old, and some things have changed enough that it requires an update to the versioning scheme.

First, both Toolbx and the ideas that it implements are a lot more mature and widely adopted than they were at the beginning. So much so, that there are a few independent reimplementations of it. It’s time for the project to stop hiding behind a micro version.

Second, the practice of bundling and statically linking the Go dependencies sometimes makes it necessary to update the dependencies to address security bugs or other critical issues. It’s more convenient to do this as part of an upstream release than through downstream patches by distributors. So far, we have managed to avoid the need to do minimal releases targeting only specific issues for conservative downstream distributors, but the recent NVIDIAScape or CVE-2025-23266 and CVE-2025-23267 in the NVIDIA Container Toolkit gave me pause. We managed to escape this time too, but it’s clear that we need a plan to deal with these scenarios.

Hence, from now on, Toolbx releases will default to not having a micro version and use a MAJOR.MINOR versioning scheme. A micro version will be reserved for the same purposes that a nano version was reserved for until now — to address critical issues and for release candidates.

It’s easier to read and remember a shorter MAJOR.MINOR version than a longer one, and appropriately conveys the maturity of the project. When a micro version is needed, it will also be easier to read and remember than a longer one with a nano version. Being easy to read and remember is important for version numbers, because it separates them from Git commit hashes.

So, this is why the latest release is 0.2, not 0.1.3.

Bash functions for working with UV

Posted by Ankur Sinha on 2025-09-10 16:04:09 UTC

I finally made the switch from pip to uv recently. uv is much quicker than pip. I won't go into how/why here, but I will link you to this video that explains it. When working with packages such as PyNeuroML that have quite a few dependencies, the speed up begins to matter, especially when one is developing and may create and remove virtual environments quite often.

I used to use Pew to manage my Python virtual environments. It is a great tool. Unfortunately, it isn't quite maintained any more, and that means support for things like uv are probably not going to happen soon.

I went looking for a similar wrapper around uv, and there are some, but they're simple enough that I thought writing my own bash functions/aliases was probably preferable. So, here's what I've added to my bashrc. They are a number of functions/aliases to:

  • list virtual environments
  • create a new virtual environment using uv
  • activate virtual environments
  • remove virtual environments
  • install packages using uv

A simple bash shell completion function is also set up. The listing and removal functions aren't really uv specific, but I prefix them with uv for consistency.

(I don't use another shell, so please adapt these to whatever you use):

# uv helpers
export VIRTUAL_ENV_HOME="$HOME/.virtualenvs/"
uvnew () {
    # create a new virtual environment under $VIRTUAL_ENV_HOME
    if command -v uv > /dev/null 2>&1
    then
        pushd "$VIRTUAL_ENV_HOME" && uv venv "$@" && popd
    else
        echo "uv not installed"
    fi
}
uvls () {
    # list existing virtual environments
    ls "$VIRTUAL_ENV_HOME"
}
uvrm () {
    venv_name=$1
    read -p "Delete env \"$venv_name\"? Are you sure? (Yy) " -n 1 -r
    # (optional) move to a new line
    echo
    # input is stored in REPLY if no var is given to read
    if [[ $REPLY =~ ^[Yy]$ ]]
    then
        echo "Removing $VIRTUAL_ENV_HOME/$venv_name"
        # :? guards against removing / if VIRTUAL_ENV_HOME is ever unset
        rm -rf "${VIRTUAL_ENV_HOME:?}/$venv_name"
    fi
}
_venv_completions () {
    local cur
    cur="${COMP_WORDS[COMP_CWORD]}"
    # complete against the directory names under $VIRTUAL_ENV_HOME
    pushd "$VIRTUAL_ENV_HOME" > /dev/null 2>&1
    COMPREPLY=($(compgen -d -- "$cur"))
    popd > /dev/null 2>&1
}
uvactivate () {
    venv_name=$1
    if [ -e "$VIRTUAL_ENV_HOME/$venv_name/bin/activate" ]
    then
        source "$VIRTUAL_ENV_HOME/$venv_name/bin/activate"
    else
        echo "Could not find activation script"
        echo "Available venvs are"
        uvls
    fi
}
complete -F _venv_completions uvactivate
complete -F _venv_completions uvrm

alias uvpip="uv pip"
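For illustration, a typical session with these helpers looks like this (the venv name myenv is just an example):

uvnew myenv              # creates ~/.virtualenvs/myenv
uvactivate myenv         # tab completion works here
uvpip install pyneuroml  # uv-fast installs into the active venv
deactivate
uvrm myenv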

On Fedora, uv is in the repositories, so it can be installed with:

sudo dnf install uv

If you have your own wrappers for working with uv that are better than these simple bits, please drop me a word.

Fedora Copr outage

Posted by Fedora Infrastructure Status on 2025-09-10 00:00:00 UTC

Due to running out of storage and load from AI scrapers, Fedora Copr is now very slow or halted.

This outage impacts the copr-frontend and the copr-backend.

Browser wars

Posted by Vedran Miletić on 2025-09-09 12:49:02 UTC


brown fox on snow field

Photo source: Ray Hennessy (@rayhennessy) | Unsplash


Last week in Rijeka we held Science festival 2015. This is the (hopefully not unlucky) 13th instance of the festival that started in 2003. Popular science events were organized in 18 cities in Croatia.

I was invited to give a popular lecture at the University departments open day, which is a part of the festival. This is the second time in a row that I got invited to give popular lecture at the open day. In 2014 I talked about The Perfect Storm in information technology caused by the fall of economy during 2008-2012 Great Recession and the simultaneous rise of low-cost, high-value open-source solutions. Open source completely changed the landscape of information technology in just a few years.

The follow-up

Posted by Vedran Miletić on 2025-09-09 12:49:02 UTC


people watching concert

Photo source: Andre Benz (@trapnation) | Unsplash


When Linkin Park released their second album Meteora, they had a quote on their site that went along the lines of

Musicians have their entire lives to come up with a debut album, and only a very short time afterward to release a follow-up.

Open-source magic all around the world

Posted by Vedran Miletić on 2025-09-09 12:49:02 UTC


woman blowing sprinkle in her hand

Photo source: Almos Bechtold (@almosbech) | Unsplash


Last week brought us two interesting events related to open-source movement: 2015 Red Hat Summit (June 23-26, Boston, MA) and Skeptics in the pub (June 26, Rijeka, Croatia).

Joys and pains of interdisciplinary research

Posted by Vedran Miletić on 2025-09-09 12:49:02 UTC


white and black coffee maker

Photo source: Trnava University (@trnavskauni) | Unsplash


In 2012 University of Rijeka became NVIDIA GPU Education Center (back then it was called CUDA Teaching Center). For non-techies: NVIDIA is a company producing graphical processors (GPUs), the computer chips that draw 3D graphics in games and the effects in modern movies. In the last couple of years, NVIDIA and other manufacturers allowed the usage of GPUs for general computations, so one can use them to do really fast multiplication of large matrices, finding paths in graphs, and other mathematical operations.

What is the price of open-source fear, uncertainty, and doubt?

Posted by Vedran Miletić on 2025-09-09 12:49:02 UTC


turned on red open LED signage

Photo source: j (@janicetea) | Unsplash


The Journal of Physical Chemistry Letters (JPCL), published by American Chemical Society, recently put out two Viewpoints discussing open-source software:

  1. Open Source and Open Data Should Be Standard Practices by J. Daniel Gezelter, and
  2. What Is the Price of Open-Source Software? by Anna I. Krylov, John M. Herbert, Filipp Furche, Martin Head-Gordon, Peter J. Knowles, Roland Lindh, Frederick R. Manby, Peter Pulay, Chris-Kriton Skylaris, and Hans-Joachim Werner.

Viewpoints are not detailed reviews of the topic, but instead, present the author's view on the state-of-the-art of a particular field.

The first of two articles stands for open source and open data. The article describes Quantum Chemical Program Exchange (QCPE), which was used in the 1980s and 1990s for the exchange of quantum chemistry codes between researchers and is roughly equivalent to the modern-day GitHub. The second of two articles questions the open-source software development practice, advocating the usage and development of proprietary software. I will dissect and counter some of the key points from the second article below.

On having leverage and using it for pushing open-source software adoption

Posted by Vedran Miletić on 2025-09-09 12:49:02 UTC


Open 24 Hours neon signage

Photo source: Alina Grubnyak (@alinnnaaaa) | Unsplash


Back in late August and early September, I attended 4th CP2K Tutorial organized by CECAM in Zürich. I had the pleasure of meeting Joost VandeVondele's Nanoscale Simulations group at ETHZ and working with them on improving CP2K. It was both fun and productive; we overhauled the wiki homepage and introduced acronyms page, among other things. During a coffee break, there was a discussion on the JPCL viewpoint that speaks against open-source quantum chemistry software, which I countered in the previous blog post.

But there is a story from the workshop which somehow remained untold, and I wanted to tell it at some point. One of the attendants, Valérie Vaissier, told me how she used proprietary quantum chemistry software during her Ph.D.; if I recall correctly, it was Gaussian. Eventually, she decided to learn CP2K and made the switch. She liked CP2K better than the proprietary software package because it is available free of charge, the reported bugs get fixed quicker, and the group of developers behind it is very enthusiastic about their work and open to outsiders who want to join the development.

AMD and the open-source community are writing history

Posted by Vedran Miletić on 2025-09-09 12:49:02 UTC


a close up of a cpu chip on top of a motherboard

Photo source: Andrew Dawes (@andrewdawes) | Unsplash


Over the last few years, AMD has slowly been walking the path towards having fully open source drivers on Linux. AMD did not walk alone, they got help from Red Hat, SUSE, and probably others. Phoronix also mentions PathScale, but I have been told on Freenode channel #radeon this is not the case and found no trace of their involvement.

AMD finally publicly unveiled the GPUOpen initiative on the 15th of December 2015. The story was covered on AnandTech, Maximum PC, Ars Technica, Softpedia, and others. For the open-source community that follows the development of the Linux graphics and computing stack, this announcement is hardly surprising: Alex Deucher and Jammy Zhou presented plans regarding amdgpu at XDC2015 in September 2015. Regardless, a public announcement in mainstream media proves that AMD is serious about GPUOpen.

I believe GPUOpen is the best chance we will get in this decade to open up the driver and software stacks in the graphics and computing industry. I will outline the reasons for my optimism below. As for the history behind open-source drivers for ATi/AMD GPUs, I suggest the well-written reminiscence on Phoronix.

I am still not buying the new-open-source-friendly-Microsoft narrative

Posted by Vedran Miletić on 2025-09-09 12:49:02 UTC


black framed window

Photo source: Patrick Bellot (@pbellot) | Unsplash


This week Microsoft released Computational Network Toolkit (CNTK) on GitHub, after open sourcing Edge's JavaScript engine last month and a whole bunch of projects before that.

Even though the open sourcing of a bunch of their software is a very nice move from Microsoft, I am still not convinced that they have changed to the core. I am sure there are parts of the company who believe that free and open source is the way to go, but it still looks like a change just on the periphery.

All the projects they have open-sourced so far are not the core of their business. Their latest version of Windows is no more friendly to alternative operating systems than any version of Windows before it, and one could argue it is even less friendly due to more Secure Boot restrictions. Using Office still basically requires you to use Microsoft's formats and, in turn, accept their vendor lock-in.

Put simply, I think all the projects Microsoft has opened up so far are a nice start, but they still have a long way to go to gain respect from the open-source community. What follows are three steps Microsoft could take in that direction.

Free to know: Open access and open source

Posted by Vedran Miletić on 2025-09-09 12:49:02 UTC


yellow and black come in we're open sign

Photo source: Álvaro Serrano (@alvaroserrano) | Unsplash


Info: Reposted from Free to Know: Open access & open source, originally posted by STEMI education on Medium.

Q&A with Vedran Miletić

In June 2014, Elon Musk opened up all Tesla patents. In a blog post announcing this, he wrote that patents "serve merely to stifle progress, entrench the positions of giant corporations and enrich those in the legal profession, rather than the actual inventors." In other words, he joined those who believe that free knowledge is the prerequisite for a great society -- that it is the vibrancy of the educated masses that can make us capable of handling the strange problems our world is made of.

The movements that promote and cultivate this vibrancy are probably most frequently associated with terms "Open access" and "open source". In order to learn more about them, we Q&A-ed Vedran Miletić, the Rocker of Science -- researcher, developer and teacher, currently working in computational chemistry, and a free and open source software contributor and activist. You can read more of his thoughts on free software and related themes on his great blog, Nudged Elastic Band. We hope you will join him, us, and Elon Musk in promoting free knowledge, cooperation and education.

The academic and the free software community ideals

Posted by Vedran Miletić on 2025-09-09 12:49:02 UTC


book lot on black wooden shelf

Photo source: Giammarco Boscaro (@giamboscaro) | Unsplash


Today I vaguely remembered there was one occasion in 2006 or 2007 when some guy from the academia doing something with Java and Unicode posted on some mailing list related to the free and open-source software about a tool he was developing. What made it interesting was that the tool was open source, and he filed a patent on the algorithm.

Celebrating Graphics and Compute Freedom Day

Posted by Vedran Miletić on 2025-09-09 12:49:02 UTC


stack of white and brown ceramic plates

Photo source: Elena Mozhvilo (@miracleday) | Unsplash


Hobbyists, activists, geeks, designers, engineers, etc. have always tinkered with technologies for their own purposes (in early personal computing, for example). And social activists have long advocated the power of giving tools to people. An open hardware movement driven by these restless innovators is creating ingenious versions of all sorts of technologies, and freely sharing the know-how through the Internet and more recently through social media. Open-source software and more recently hardware is also encroaching upon centers of manufacturing and can empower serious business opportunities and projects.

The free software movement is cited as both an inspiration and a model for open hardware. Free software practices have transformed our culture by making it easier for people to become involved in producing things from magazines to music, movies to games, communities to services. With advances in digital fabrication making it easier to manipulate materials, some now anticipate an analogous opening up of manufacturing to mass participation.

Enabling HTTP/2, HTTPS, and going HTTPS-only on inf2

Posted by Vedran Miletić on 2025-09-09 12:49:02 UTC


an old padlock on a wooden door

Photo source: Arkadiusz Gąsiorowski (@ambuscade) | Unsplash


Inf2 is a web server at University of Rijeka Department of Informatics, hosting Sphinx-produced static HTML course materials (mirrored elsewhere), some big files, a WordPress instance (archived elsewhere), and an internal instance of Moodle.

HTTPS was enabled on inf2 for a long time, albeit using a self-signed certificate. However, with Let's Encrypt coming into public beta, we decided to join the movement to HTTPS.

Why we use reStructuredText and Sphinx static site generator for maintaining teaching materials

Posted by Vedran Miletić on 2025-09-09 12:49:02 UTC


open book lot

Photo source: Patrick Tomasso (@impatrickt) | Unsplash


Yesterday I was asked by Edvin Močibob, a friend and a former student teaching assistant of mine, the following question:

You seem to be using Sphinx for your teaching materials, right? As far as I can see, it doesn't have an online WYSIWYG editor. I would be interested in comparison of your solution with e.g. MediaWiki.

While the advantages and the disadvantages of static site generators, when compared to content management systems, have been written about and discussed already, I will outline our reasons for the choice of Sphinx below. Many of the points have probably already been presented elsewhere.

Fly away, little bird

Posted by Vedran Miletić on 2025-09-09 12:49:02 UTC


macro-photography blue, brown, and white sparrow on branch

Photo source: Vincent van Zalinge (@vincentvanzalinge) | Unsplash


The last day of July happened to be the day that Domagoj Margan, a former student teaching assistant and a great friend of mine, set up his own DigitalOcean droplet running a web server and serving his professional website on his own domain domargan.net. For a few years, I was helping him by providing space on the server I owned and maintained, and I was always glad to do so. Let me explain why.

Mirroring free and open-source software matters

Posted by Vedran Miletić on 2025-09-09 12:49:02 UTC


gold and silver steel wall decor

Photo source: Tuva Mathilde Løland (@tuvaloland) | Unsplash


Post theme song: Mirror mirror by Blind Guardian

A mirror is a local copy of a website that's used to speed up access for the users residing in the area geographically close to it and reduce the load on the original website. Content distribution networks (CDNs), which are a newer concept and perhaps more familiar to younger readers, serve the same purpose, but do it in a way that's transparent to the user; when using a mirror, the user will see explicitly which mirror is being used because the domain will be different from the original website, while, in case of CDNs, the domain will remain the same, and the DNS resolution (which is invisible to the user) will select a different server.

Free and open-source software was distributed via (FTP) mirrors, usually residing in the universities, basically since its inception. The story of Linux mentions a directory on ftp.funet.fi (FUNET is the Finnish University and Research Network) where Linus Torvalds uploaded the sources, which was soon after mirrored by Ted Ts'o on MIT's FTP server. The GNU Project's history contains an analogous process of making local copies of the software for faster downloading, which was especially important in the times of pre-broadband Internet, and it continues today.

Markdown vs reStructuredText for teaching materials

Posted by Vedran Miletić on 2025-09-09 12:49:02 UTC


blue wooden door surrounded by book covered wall

Photo source: Eugenio Mazzone (@eugi1492) | Unsplash


Back in summer 2017, I wrote an article explaining why we used Sphinx and reStructuredText to produce teaching materials and not a wiki. In addition to recommending Sphinx as the solution to use, it was general praise for generating static HTML files from Markdown or reStructuredText.

This summer I made the conversion of teaching materials from reStructuredText to Markdown. Unfortunately, the automated conversion using Pandoc didn't quite produce the result I wanted, so I ended up cooking my own Python script that converted the specific dialect of reStructuredText used for writing the contents of the group website and fixed a myriad of inconsistencies in the writing style that had accumulated over the years.
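For context, the stock conversion tried first would have been along these lines (standard Pandoc usage):

pandoc -f rst -t markdown -o page.md page.rst

That works fine for vanilla reStructuredText; it was the site-specific dialect and the accumulated inconsistencies that needed the hand-written script.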

Don't use RAR

Posted by Vedran Miletić on 2025-09-09 12:49:02 UTC


a large white tank

Photo source: Tim Mossholder (@ctimmossholder) | Unsplash


I sometimes joke with my TA Milan Petrović that his usage of RAR does not imply that he will be driving a rari. After all, he is not Devito rapping^Wsinging Uh 😤. Jokes aside, if you search for "should I use RAR" or a similar phrase on your favorite search engine, you'll see articles like 2007 Don't Use ZIP, Use RAR and 2011 Why RAR Is Better Than ZIP & The Best RAR Software Available.

Should I do a Ph.D.?

Posted by Vedran Miletić on 2025-09-09 12:49:02 UTC


a bike is parked in front of a building

Photo source: Santeri Liukkonen (@iamsanteri) | Unsplash


Tough question, and the one that has been asked and answered over and over. The simplest answer is, of course, it depends on many factors.

As I started blogging at the end of my journey as a doctoral student, the topic of how I selected the field and ultimately decided to enroll in the postgraduate studies never really came up. In the following paragraphs, I will give a personal perspective on my Ph.D. endeavor. Just like other perspectives from doctors of not that kind, it is specific to the person in the situation, but parts of it might apply more broadly.

Alumni Meeting 2023 at HITS and the reminiscence of the postdoc years

Posted by Vedran Miletić on 2025-09-09 12:49:02 UTC


a fountain in the middle of a town square

Photo source: Jahanzeb Ahsan (@jahan_photobox) | Unsplash


This month we had Alumni Meeting 2023 at the Heidelberg Institute for Theoretical Studies, or HITS for short. I was very glad to attend this whole-day event and reconnect with my former colleagues as well as researchers currently working in the area of computational biochemistry at HITS. After all, this is the place and the institution where I worked for more than half of my time as a postdoc, where I started regularly contributing code to GROMACS molecular dynamics simulator, and published some of my best papers.

My perspective after two years as a research and teaching assistant at FIDIT

Posted by Vedran Miletić on 2025-09-09 12:49:02 UTC


human statues near white building

Photo source: Darran Shen (@darranshen) | Unsplash


My employment as a research and teaching assistant at Faculty of Informatics and Digital Technologies (FIDIT for short), University of Rijeka (UniRi) ended last month with the expiration of the time-limited contract I had. This moment has marked almost two full years I spent in this institution and I think this is a good time to take a look back at everything that happened during that time. Inspired by the recent posts by the PI of my group, I decided to write my perspective on the time that I hope is just the beginning of my academic career.

Anaconda WebUI: Progress Update and Roadmap

Posted by Fedora Community Blog on 2025-09-09 10:00:00 UTC

Over the past few Fedora releases, the Anaconda team has been gradually replacing the GTK-based installer with a new web-based interface. As this work expands, we want to share a quick update on the current status, where things are headed, and how to stay involved.

Current Status

  • The WebUI was introduced in the Fedora 42 Workstation Live ISO as the default installer, supporting Live image installations only. DNF-based installs are not yet supported; this is planned for a future phase.
  • A change proposal for Fedora 43 has been approved, enabling the WebUI by default for all Fedora Spins and KDE.
  • Community testing has already taken place, and we’ve received an incredible amount of useful feedback. Thank you to everyone who participated and helped us shape the experience! We are now actively reviewing and processing that feedback to guide the next phases of development.

Below is an example of the WebUI installer in action, as currently used in one of the Fedora Spins.

Documentation Update

The existing Fedora Installation Guide is outdated and no longer reflects the current installer. Our team is open to co-owning updated installer documentation and hosting it upstream to keep it accurate and maintainable.

To reduce the documentation overhead, we’re working on autogenerated docs directly from the source code. While this is still early work, we have a small but up-to-date example live now: Web UI Installer documentation

Roadmap (Shared at Flock 2025)

Here’s a rough view of how we plan to roll out the WebUI across Fedora editions:

  • Fedora 42: Workstation Live ISO (WebUI available)
  • Fedora 43: Spins + KDE (approved)
  • Fedora 44: uBlue, Atomic Desktops, Remote Browser
  • Fedora 45: Server Edition
  • After Fedora 45: Deprecation of GTK UI

Get Involved

We’re open to contributions across all areas of the project – from design suggestions, to feature development, testing, and documentation. If you’re interested in helping out or learning more, join us in the #anaconda:fedora.im Matrix channel. We’d be happy to talk!

The post Anaconda WebUI: Progress Update and Roadmap appeared first on Fedora Community Blog.

📝 Valkey version 8.1

Posted by Remi Collet on 2025-08-01 07:35:00 UTC

With version 7.4, Redis Labs chose to switch to the RSALv2 and SSPLv1 licenses, thus leaving the open-source world.

Most Linux distributions chose to drop it from their repositories. Various forks exist; Valkey seems a serious one and was chosen as a replacement.

So, starting with Fedora 41 and Enterprise Linux 10 (CentOS, RHEL, AlmaLinux, RockyLinux...), redis is no longer available, but valkey is.

With version 8.0, Redis Labs chose to switch to the AGPLv3 license, and so is back as an open-source project, but a lot of users have already switched and want to keep Valkey.

RPMs of Valkey version 8.1.3 are available in the remi-modular repository for Fedora ≥ 41 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).

So you now have the choice between Redis and Valkey.

1. Installation

Packages are available in the valkey:remi-8.1 module stream.

1.1. Using dnf4 on Enterprise Linux

# dnf install https://rpms.remirepo.net/enterprise/remi-release-<ver>.rpm
# dnf module switch-to valkey:remi-8.1/common

1.2. Using dnf5 on Fedora

# dnf install https://rpms.remirepo.net/fedora/remi-release-<ver>.rpm
# dnf module enable valkey:remi-8.1

The valkey-compat-redis compatibility package is not available in this stream. If you need the Redis commands, you can install the redis package.
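Once installed, a quick smoke test (assuming the service unit is named valkey, as in the Fedora packaging):

# systemctl enable --now valkey
# valkey-cli ping
PONG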

2. Modules

Some optional modules are also available.

These packages are weak dependencies of Valkey, so they are installed by default (if install_weak_deps is not disabled in the dnf configuration).
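For reference, if you would rather not have weak dependencies pulled in automatically, they can be disabled globally with one dnf setting:

# /etc/dnf/dnf.conf
[main]
install_weak_deps=False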

The Modules are automatically loaded after installation and service (re)start.

3. Future

Valkey also provides a set of modules, requiring some packaging changes already proposed for the Fedora official repository.

Redis may be proposed for reintegration and return to the Fedora official repository, by me if I find enough motivation and energy, or by someone else.

So users will have the choice and can even use both.

ℹ️ Notice: Enterprise Linux 10.0 and Fedora have valkey 8.0 in their repositories. Fedora 43 will have valkey 8.1. CentOS Stream 9 also has valkey 8.0, so it should be part of EL-9.7.

4. Statistics

(chart: valkey package statistics)

Initial Minimal Fedora Image for Raspberry Pi 5

Posted by Peter Robinson on 2025-09-08 13:15:53 UTC

I know this has been much awaited, given all the queries on the various forums and directly to me, so here we are: the first Fedora image for the Raspberry Pi 5 that can run a native userspace with a Fedora kernel carrying some enablement patches.

This image is far from complete and is NOT yet suitable for desktop UXes and related usecases that require a display.

So what works, what doesn’t, and how can you get started?

The things that are working and tested:

  • The original RPi5 rev c0 SoCs: the older 4GB/8GB variants
  • Serial console
  • Late boot HDMI0 display output (i.e. once the kernel has started) via simple DRM/FB
  • The compute Subsystems (CPUs etc) of both SoC revs
  • The micro SD slot – the only supported OS disk ATM
  • Wired ethernet port
  • Wireless network interface
  • USB ports (NOT for OS disks)

The things that don’t work:

  • The new RPi5 rev d0 SoCs: the 2GB/16GB and newer 1.1 rev 4GB/8GB variants, the RPi500, and the CM5 - the kernel will crash on boot
  • Early boot display output
  • Accelerated GPU
  • Basically everything else

What might work:

  • PCIe for HATs via the add-on HATs and related products including NVME. Not currently for boot/root.

To get started you will need a serial console; at the moment, booting off anything other than the micro SD card won't work.

We will eventually support booting off USB/NVMe etc. and display output, as well as other HW, but unfortunately we're not there yet. I feel that with USB/eth/wifi support the device is now at a point where it's actually usable for a lot of Fedora users, so I decided it's time to expand this to more people than just me 😀

Note I don’t have the spare cycles to assist with debugging any issues that have been listed above as explicitly not working, I want to spend my time on making the device more usable (SORRY, but I do this in my personal spare time). I will update when any of this changes.

You can get the image from here and get started in the usual way with either dd or arm-image-installer (update the storage media name):

arm-image-installer --resizefs --target=none --media=/dev/XXX --image=rpi5-250907-fedora-43-minimal-raw-xz-aarch64.raw.xz
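If you go the dd route instead, something like the following works; as above, /dev/XXX is a placeholder, so triple-check the device name before writing:

xz -dc rpi5-250907-fedora-43-minimal-raw-xz-aarch64.raw.xz | sudo dd of=/dev/XXX bs=4M status=progress conv=fsync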

I am working to get more HW support enabled. My focus is on the d0 rev boards (2GB, 16GB, 500, CM5 and newer 4GB/8GB) and PCIe; from there I will look at what else is possible with my available time. These enhancements will land as new kernel updates in my copr repo, or as fixes in Fedora proper, and will arrive via either newly published images or kernel updates. I will provide updates when particularly useful milestones are passed and new things start to work.

Bug reports are of course welcome, but not for desktops or as RFEs for other HW enablement, especially the things I've already listed above as not working. I am working to enable more features, but I am limited in what I can do by available time, upstream work, and access to HW docs, so please be patient as I am doing this in my spare time! Of course, being downstream of Fedora, please don't file bugs there unless they are a general problem with userspace. The only thing that's not vanilla Fedora currently is the kernel. I have an original Pi 5 8GB and a Pi 5 rev d0 model I am testing with; everything else can be considered untested, so YMMV!

New badge: F45 i18n Test Day Participant !

Posted by Fedora Badges on 2025-09-08 06:55:08 UTC
F45 i18n Test Day Participant: You helped testing Fedora 45 i18n features

New badge: F44 i18n Test Day Participant !

Posted by Fedora Badges on 2025-09-08 06:54:19 UTC
F44 i18n Test Day Participant: You helped testing Fedora 44 i18n features

New badge: F43 i18n Test Day Participant !

Posted by Fedora Badges on 2025-09-08 06:52:54 UTC
F43 i18n Test Day Participant: You helped testing Fedora 43 i18n features