
Fedora People

How I organize my NAS and my self-hosted services

Posted by Guillaume Kulakowski on 2026-01-12 11:51:35 UTC

I have owned a NAS for quite a while now, and I had never taken the time to really describe my stack or the applications I host on it. This article is the occasion to fix that. References and inspiration: If there were only one essential reference to cite in the self-hosted world, it […]

The article How I organize my NAS and my self-hosted services appeared first on Guillaume Kulakowski's blog.

Loadouts For Genshin Impact v0.1.13 Released

Posted by Akashdeep Dhar on 2026-01-11 18:30:45 UTC

Hello travelers!

Loadouts for Genshin Impact v0.1.13 is OUT NOW with the addition of support for recently released characters like Durin and Jahoda, and for recently released weapons like Athame Artis, The Daybreak Chronicles and Rainbow Serpent's Rain Bow from Genshin Impact Luna III or v6.2 Phase 2. Take this FREE and OPEN SOURCE application for a spin using the links below to manage the custom equipment of artifacts and weapons for the playable characters.

Resources

Installation

Besides being available as a package on PyPI and as archived binaries built with PyInstaller, Loadouts for Genshin Impact is now available as an installable package on Fedora Linux. Travelers using Fedora Linux 42 and above can install the package on their operating system by executing the following command.

$ sudo dnf install gi-loadouts --assumeyes --setopt=install_weak_deps=False

Changelog

  • Automated dependency updates for GI Loadouts by @renovate[bot] in #468
  • Automated dependency updates for GI Loadouts by @renovate[bot] in #470
  • Update actions/checkout action to v6 by @renovate[bot] in #472
  • Automated dependency updates for GI Loadouts by @renovate[bot] in #471
  • Update dependency pytest to v9 by @renovate[bot] in #469
  • Migrate from Poetry to UV for dependency management by @gridhead in #476
  • Introduce the recently added weapon Athame Artis by @gridhead in #479
  • Introduce the recently added character Durin to the roster by @gridhead in #477
  • Introduce the recently added weapon The Daybreak Chronicles by @gridhead in #480
  • Update dependency python to 3.12 || 3.13 || 3.14 by @renovate[bot] in #448
  • Automated dependency updates for GI Loadouts by @renovate[bot] in #482
  • Introduce the recently added weapon Rainbow Serpent's Rain Bow by @gridhead in #481
  • Update actions/upload-artifact action to v6 by @renovate[bot] in #483
  • Automated dependency updates for GI Loadouts by @renovate[bot] in #484
  • Introduce the recently added character Jahoda to the roster by @gridhead in #478
  • Stage the release v0.1.13 for Genshin Impact Luna III (v6.2 Phase 2) by @gridhead in #485

Characters

Two characters have debuted in this version release.

Durin

Durin is a sword-wielding Pyro character of five-star quality.

Jahoda

Jahoda is a bow-wielding Anemo character of four-star quality.

Weapons

Three weapons have debuted in this version release.

Athame Artis

Day King's Splendor Solis - Scales on Crit Rate.

Athame Artis - Workspace

The Daybreak Chronicles

Dawning Song of Daybreak - Scales on Crit DMG.

The Daybreak Chronicles - Workspace

Rainbow Serpent's Rain Bow

Astral Whispers Beyond the Sacred Throne - Scales on Energy Recharge.

Rainbow Serpent's Rain Bow - Workspace

Appeal

While allowing you to experiment with various builds and share them for later, Loadouts for Genshin Impact lets you take calculated risks by showing you the potential of your characters with certain artifacts and weapons equipped that you might not even own. Loadouts for Genshin Impact has been and always will be a free and open source software project, and we are committed to delivering a quality experience with every release we make.

Disclaimer

With an extensive suite of over 1527 diverse functionality tests and impeccable 100% source code coverage, we proudly invite auditors and analysts from MiHoYo and other organizations to review our free and open source codebase. This thorough transparency underscores our unwavering commitment to maintaining the fairness and integrity of the game.

The users of this ecosystem application can have complete confidence that their accounts are safe from warnings, suspensions or terminations when using this project. The ecosystem application ensures complete compliance with the terms of services and the regulations regarding third-party software established by MiHoYo for Genshin Impact.

All rights to Genshin Impact assets used in this project are reserved by miHoYo Ltd. and Cognosphere Pte., Ltd. Other properties belong to their respective owners.

The Real Deal on GEMINI.md and AGENTS.md: Laying Down the Rules of the Game

Posted by Rénich Bon Ćirić on 2026-01-09 14:33:00 UTC

Seriously, did you know AI can be your best helper or your worst nightmare?

If you're hitting AI-assisted coding hard, you've surely seen the AI get stubborn or "hallucinate" badly. You ask it for a script and it gives you one for Ubuntu (yuck!) when you're pure Fedora, or it pulls in heavyweight libraries when you want something KISS and DRY.

That's where the "memory" or rules configuration files come to the rescue: ~/.gemini/GEMINI.md and ~/.config/opencode/AGENTS.md. Honestly, they're a lifesaver.

What are these files?

Basically, they are the style guide and the rules of the game that you dictate to the AI. It's your way of saying: "Look, buddy, around here we do things like this." Instead of repeating the same instructions in every prompt, you write them once and the AI takes them as law.

It's like bringing a new apprentice up to speed, except this one has a photographic memory and doesn't lose the plot if you configure it well.

The meaty part: what do I put in the file?

It's not about writing the Bible, but it is about making your "red lines" clear. Here are some examples of what I have in my GEMINI.md to give you an idea:

System preferences:

So it doesn't come at me with Debian stuff.

- The host system is Fedora 43 x86_64 with SELinux enabled.
- OS Tool Preference: Use tools available in the OS (latest Fedora) via `dnf`.
- Distro Preference: The user despises Debian/Ubuntu; never considers them.
Code philosophy:

So it doesn't get too clever with complex solutions.

- KISS (Keep It Simple, Stupid): Always prefer simple, readable code.
- DRY (Don't Repeat Yourself): Extract repeated code.
- Avoid Premature Optimization: Write clean code first.
Ops and containers:

Because around here we use Podman, my friend. None of that weird stuff.

- Containers: Prefer Podman over Docker (Docker is sub-optimal).
- Containerfile: Use `Containerfile` instead of `Dockerfile`.
- Quadlets: Use systemd quadlets (`*.container`) when practical.

The master trick: a single file

To avoid maintaining two different files that then drift out of sync, the master move is to create one "master" file and make a symbolic link (symlink). Very handy!

You create your nicely tuned ~/.gemini/GEMINI.md and then drop the symlink for OpenCode:

ln -sf ~/.gemini/GEMINI.md ~/.config/opencode/AGENTS.md

Tip

That way, you change a rule in one place and it updates everywhere. You save yourself a ton of work and keep things consistent.

How does this show up in the daily grind?

The difference is huge, my friend; it saves you a ton of time:

  1. Clean Git: If you give it Git management rules, the AI will suggest commits in Conventional Commits format all on its own and will even remind you to sign them with GPG. No more "fix bug" style messages.
  2. Infrastructure on Point: If you ask it for a deployment, it will no longer hand you a generic docker-compose.yml. It will give you a Quadlet file ready for your Fedora box (see the sketch right after this list).
  3. Security First: The AI will warn you if you're about to mess up by committing an API key. It's like having a senior dev watching your back.
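
For reference, here's a minimal sketch of what such a Quadlet file could look like — the unit name, image, and port below are invented for illustration, so check the podman-systemd.unit documentation for the full syntax:

[Unit]
Description=Demo web server

[Container]
# Placeholders: point these at your own image and port.
Image=docker.io/library/nginx:latest
PublishPort=8080:80

[Service]
Restart=always

[Install]
WantedBy=default.target

Drop it in ~/.config/containers/systemd/web.container, run systemctl --user daemon-reload, and the generated web.service starts like any other systemd unit.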

Note

The code you generate feels like yours, adapted to your workflow (EVALinux, in my case), and not a generic copy/paste from Stack Overflow. Don't you think you connect better with people that way?

Conclusion

Spending some 15 minutes optimizing your GEMINI.md is not a waste of time; it's an investment. It's the difference between fighting with the AI to make it understand you and having a copilot that already knows the way by heart.

So now you know: set up your rules, don't be lazy, and go hit the ether!

Friday Links 26-01

Posted by Christof Damian on 2026-01-09 12:25:00 UTC
New prerecorded cassettes from Queen, Bjork, Bon Jovi, De La Soul, ...
First round of links for 2026. Everything is bad, so have some links to cheer you up.
 
I really liked the article about visibility, the one about Friday deploys, and sitting alone in a café … which I can relate to.  
I am a big fan of Bill Nighy's podcast; it will certainly improve your mood. 
 
For some reason, the random section is full of cassette tape related links.  

Leadership

Team's “Wrapped 2025” to Increase Velocity - nice idea I clearly didn't implement

The Product Operating Model at Google – A Critical View - possibly a bit outdated. 

Why Federated Design Systems Keep Failing - design systems need leadership, not democracy, as do so many decisions 

You Can’t Debug a System by Blaming a Person - people need to feel safe to make good decisions and do good work 

Visibility is Velocity - this is so true, on every level of organisations 

Engineering 

The cardinal sin of software architecture - "The worst kind of accidental complexity in software is the unnecessary distribution, replication, or restructuring of state, both in space and time." 

On Friday Deploys: Sometimes that Puppy Needs Murdering (xpost) - I like this: "Deploy freezes are a hack, not a virtue" 

LLM-powered coding mass-produces technical debt - especially if you go anywhere near vibe-coding

Tackling tech debt | Meri Williams | LeadDev New York 2025  [YouTube] - another perspective on tech debt, going into the problems and some nice metrics to track them.

What is a PC compatible? - apparently nothing 

How AI is transforming work at Anthropic - some interesting data 

Environment 

Wind power slashed 4.6 billion euros off electricity bills in Spain last year claim - good for the environment, good for the wallet 

Urbanism 

Geometry, Empire & Control - the massive influence of military engineers on the history of urbanism [YouTube] - long video about how history influences how we live 

Why Europe’s night-train renaissance derailed - it's expensive and will take time, nobody has the patience 

Car Brain - opening roads in San Francisco 

Random Cassettes

*PREMIERE*: Tanith Tape Archiv 01: Cybertape II (1989) [German, YouTube] - Tanith is releasing some of his old mixes on cassette tapes. 

Why I Quit Streaming And Got Back Into Cassettes [$$$] - "tapes remind us what we’re missing when we stop taking risks." 

Streaming Music To Cassette - because we love the sound … apparently. 

Stuttgart 21: In der Bahnsteighalle werden die Gleise gefräst  [German YouTube] - I love specialised train machines!    

The Amiga's filesystem is now on Linux and Mac, thanks to an emulated driver - good old Amiga. 

The Unbearable Joy of Sitting Alone in A Café - bonus points for not using your phone or a laptop. 

Why Didn’t AI “Join the Workforce” in 2025? - because nobody wants it and it isn't ready

ill-advised by Bill Nighy [Podcast] - When I grow up, I want to be as cool and well-dressed as Bill. 

Friday Links Disclaimer
Inclusion of links does not imply that I agree with the content of linked articles or podcasts. I am just interested in all kinds of perspectives. If you follow the link posts over time, you might notice common themes, though.
More about the links in a separate post: About Friday Links.

Community Update – Week 02 2026

Posted by Fedora Community Blog on 2026-01-09 10:00:00 UTC

This is a report created by the CLE Team, a team of community members working in various Fedora groups, for example Infrastructure, Release Engineering, and Quality. This team is also moving forward some initiatives inside the Fedora project.

Week: 05 Jan – 09 Jan 2026

Fedora Infrastructure

This team is taking care of day-to-day business regarding Fedora Infrastructure.
It’s responsible for services running in Fedora infrastructure.
Ticket tracker

CentOS Infra including CentOS CI

This team is taking care of day-to-day business regarding CentOS Infrastructure and CentOS Stream Infrastructure.
It’s responsible for services running in CentOS Infrastructure and CentOS Stream.
CentOS ticket tracker
CentOS Stream ticket tracker

Release Engineering

This team is taking care of day-to-day business regarding Fedora releases.
It’s responsible for releases, retirement process of packages and package builds.
Ticket tracker

  • Preparatory steps for the next Mass Rebuild which is currently scheduled for next week, Wednesday January 14th.

If you have any questions or feedback, please respond to this report or contact us in the #admin:fedoraproject.org channel on Matrix.

The post Community Update – Week 02 2026 appeared first on Fedora Community Blog.

Introducing EktuPy

Posted by Kushal Das on 2026-01-09 06:49:21 UTC

Py (my daughter) is now 11 years old, and she spends a lot of time on Scratch, making beautiful and fun things. But she thinks she is not a programmer, as she is moving blocks and not typing code like us. For a long time I have wondered how to move this Scratch generation into programming in general via Python. EktuPy is my probable solution.

Home page

In simple words, we have an editor to write code on the left, and a canvas/stage on the right. You can do all the things you do on Scratch here, and I have a list of examples in the editor.

Hello World

We use PyScript, and thanks to Astral we have both Ruff and ty for LSP/linting support in the editor (using WebAssembly). All the code executes in the user's browser.

Drawing via keyboard

Drawing via pen

Pong

Yesterday I took part in the monthly PyScript Fun call because Nicholas reminded me; I had fun demonstrating it there and watching what others are building.

Two Sprites

The first time, Py poked around for an hour and a half and gave me 11 bugs; the next time she lasted 5 minutes and asked me for tutorials, as she did not want to read the documentation. So, for every example in the editor we have tutorials; they are not too detailed yet, but good enough to start with.

Tutorial

You can create an account and save your programs. You can share them as public from your dashboard, and then others can find them on the explore page and run the code or remix it if they want.

Space shooter

I am super nostalgic about one implementation :)

Oh, and because I think kids should not have to learn about async programming on this platform, calls like wait() or ask() or play_sound_until_done() or wait_until() are all synchronous in the editor, and then an AST transformer adds the async/await as needed.
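
As a rough illustration of that idea — this is not EktuPy's actual code, just a minimal sketch using Python's ast module — a transformer that rewrites such calls into await expressions could look like this:

import ast

# Calls that look synchronous in the editor but are async underneath.
ASYNC_CALLS = {"wait", "ask", "play_sound_until_done", "wait_until"}

class AwaitInjector(ast.NodeTransformer):
    def visit_Call(self, node: ast.Call) -> ast.AST:
        self.generic_visit(node)  # handle nested calls first
        if isinstance(node.func, ast.Name) and node.func.id in ASYNC_CALLS:
            return ast.Await(value=node)  # wait(1) -> await wait(1)
        return node

tree = AwaitInjector().visit(ast.parse("wait(1)\nask('What is your name?')"))
ast.fix_missing_locations(tree)
print(ast.unparse(tree))  # prints: await wait(1) / await ask('What is your name?')

The real platform would also need to mark the enclosing functions as async (or run the code with top-level await, which Pyodide supports).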

Feel free to try this out, and share the link with your kid or with teachers/parents you know. Let me know how to improve it. I will publish the codebase, a Django application, with a proper license, and hopefully we can make it even better together.

This project would not have been possible without all the work done before, including Scratch, CodeMirror for the editor, PyScript / Pyodide, the bigger Python community, and Claude/Opus4.5 for incredible TypeScript/JavaScript help :)

The OpenCode "Dual-Pipeline" Architecture

Posted by Rénich Bon Ćirić on 2026-01-09 05:51:00 UTC

Let's be honest: most AI coding setups are a mess. One day you are using a cheap model that can't even close a bracket, and the next you're using a premium orchestrator that burns $10 in tokens just to fix a "hello world" typo. It's frustrating as hell, right?

I got tired of the "Context Fatigue" and the bill at the end of the month, so I came up with what I call the Dual-Pipeline Architecture. It's basically a way to turn yourself into a one-man agency without going broke.

Note

This setup is production-grade. It's meant for people who actually build stuff, not just prompt-jockeys.

Why This Setup Rocks

The core idea is a Hierarchical Mixture of Experts. Instead of asking one "smart" model to do everything, we load-balance the work.

  1. Dual Orchestrators (The "Manager" Layer):

    manager-opencode (COO):

    The workhorse. Runs on cheap, fast models (like GLM). It handles 80% of the routine coding and operations.

    manager-gemini (CTO):

    The big brain. Only called for high-stakes architecture or when things get really weird. It plans, then delegates the typing to the "juniors."

  2. Specialized Departments:

    We don't use a generic assistant. We split the brains:

    The Dev Team:
    • @architect-core: Designs the systems.
    • @builder-fast: Types code at light speed.
    • @qa-hawk: Audits everything with a nasty personality to find bugs.
    The Business Team:
    • @biz-strategist: Decides if what you're building actually makes money.
    • @creative-lead: Handles the copy so it doesn't sound like a robot wrote it.
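
To make this concrete, here's a hypothetical sketch of how one of these personas could be defined. I'm assuming OpenCode's markdown-based agent files with YAML frontmatter; the model id and prompt below are invented, and the exact frontmatter fields may vary between versions, so check the OpenCode docs:

---
description: Merciless reviewer that audits every diff for bugs
mode: subagent
model: zai/glm-4.6
---
You are @qa-hawk. Tear every change apart: hunt for edge cases,
race conditions, and sloppy error handling. Be blunt, be thorough.

Save it as ~/.config/opencode/agent/qa-hawk.md and the agent becomes addressable as @qa-hawk.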

Manual Setup Instructions

If you're like me and prefer to do things by hand instead of running a magic script, here's how you get this running.

Prerequisites:
  • Node.js & NPM (obviously).
  • uv installed (it makes ast-grep way faster).
  • The OpenCode CLI (npm i -g opencode-ai).
  1. Configuration:

    First, create your config folder:

    mkdir -p ~/.config/opencode
    

    Then, set up your base config.json to handle authentication:

    {
      "$schema": "https://opencode.ai/config.json",
      "plugin": ["opencode-gemini-auth@latest"]
    }
    
  2. Shell Alias:

    Don't waste time typing long commands. Add this to your .bashrc or .zshrc:

    alias opencode='npx opencode-ai@latest'
    

Tip

Keep your context clean! I explicitly disable heavy tools like Puppeteer or SQLite globally and only enable them for the specific agents that need them. Your wallet will thank you.

Usage Examples

Routine Work:
$ opencode > @manager-opencode create a simple landing page.
Complex Architecture:
$ opencode > @manager-gemini I need to refactor the auth system. Ask @architect-core for a plan first.

What do you think? This setup changed the game for me. It’s fast, it’s organized, and it’s significantly cheaper than just throwing GPT-4 at every problem.

Improve traceability with the tmt web app

Posted by Fedora Community Blog on 2026-01-08 12:00:00 UTC

The tmt web app is a simple web application that makes it easy to explore and share test and plan metadata without needing to clone repositories or run tmt commands locally.

At the beginning, there was the following user story:

As a tester, I need to be able to link the test case(s) verifying the issue so that anyone can easily find the tests for the verification.

Traceability is an important aspect of the testing process. It is essential to have a bi-directional link between test coverage and issues covered by those tests so that we can easily:

  • identify issues covered by the given test
  • locate tests covering given issues

Link issue from test

Implementing the first direction in tmt was relatively easy: we just defined a standard way to store links with their relations. This is covered by the core link key which holds a list of relation:link pairs. Here’s an example of test metadata:

summary: Verify correct escaping of special characters
test: ./test.sh
link:
  - verifies: https://issues.redhat.com/browse/TT-206

Link test from issue

The solution for the second direction was not that straightforward. Thanks to its distributed nature, tmt does not have any central place where a Jira issue could point to. There is no server which keeps information about all tests and stores a unique id number for each which could be used in the link.

Instead of integers, we’re using the fmf id as the unique identifier. It contains the url of the git repository and the name of the test. Optionally, it can also define a ref, instead of using the default branch, and the path to the fmf tree if it’s not in the git root.
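
For illustration, an fmf id identifying a test might look like this (the repository url below is just an example):

url: https://github.com/teemtee/tmt
name: /tests/core/escaping
ref: main
path: /

The name /tests/core/escaping is the same test used in the linking examples later in this post.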

The tmt web app accepts an fmf id of the test or plan or both, clones the git repository, extracts the metadata, and returns the data in your preferred format:

  • HTML for human-readable viewing
  • JSON or YAML for programmatic access

The service is currently available at https://tmt.testing-farm.io/.

Here’s an example of what the parameters look like when requesting information about a test in the default branch of a git repository — see the sketch below.

By default, a human-readable HTML version of the output is provided to the user. Include the format parameter in order to choose your preferred format instead.

It is possible to link a test, a plan, or both a test and a plan. The last option can be useful when a single test is executed under several plans.
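
As a sketch of what such requests could look like — assuming the public instance above and query parameter names test-url, test-name, and format, which you should double-check against the tmt documentation:

https://tmt.testing-farm.io/?test-url=https://github.com/teemtee/tmt&test-name=/tests/core/escaping
https://tmt.testing-farm.io/?test-url=https://github.com/teemtee/tmt&test-name=/tests/core/escaping&format=json

The first URL returns the default HTML view; the second returns JSON for programmatic use.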

Create new tests

In order to make the linking as smooth as possible, the tmt test create command was extended to allow automated linking to Jira issues.

First make sure you have the .config/tmt/link.fmf config prepared. Check the Link Issues section for more details about the configuration.

issue-tracker:
  - type: jira
    url: https://issues.redhat.com
    tmt-web-url: https://tmt.testing-farm.io/
    token: ***

When creating a new test, use the --link option to provide the issue which is covered by the test:

tmt test create /tests/area/feature --template shell --link verifies:https://issues.redhat.com/browse/TT-206

The link will be added to both the test metadata and the Jira issue. Just note that the Jira link will only work once you push the changes to the remote repository.

Link existing objects

It’s also possible to use the tmt link command to link an issue with already existing tests or plans:

tmt link --link verifies:https://issues.redhat.com/browse/TT-206 /tests/core/escaping

If both a test and a plan should be linked to the issue, provide both names:

tmt link --link verifies:https://issues.redhat.com/browse/TT-206 /tests/core/escaping /plans/features/core

The created links then show up on the Jira issue.

Closing notes

As a proof of concept, for now there is only a single public instance of the tmt web app deployed, so be aware that it can only explore git repositories that are publicly available. In the future, we are considering creating an internal instance in order to be able to access internal repositories as well.

We are looking for early feedback. If you run into any problems or any missing features, please let us know by filing a new issue. Thanks!

The post Improve traceability with the tmt web app appeared first on Fedora Community Blog.

🪦 PHP 8.1 is retired

Posted by Remi Collet on 2026-01-08 09:06:00 UTC

Two years after PHP 8.0, and as announced, PHP version 8.1.34 was the last official release of PHP 8.1.

To keep a secure installation, the upgrade to a maintained version is strongly recommended:

  • PHP 8.2 has security-only support and will be maintained until December 2026.
  • PHP 8.3 has security-only support and will be maintained until December 2027.
  • PHP 8.4 has active support and will be maintained until December 2026 (2028 for security).
  • PHP 8.5 has active support and will be maintained until November 2027 (2029 for security).

Read:

ℹ️ However, given the very large number of downloads by users of my repository, the version is still available in the remi repository for Enterprise Linux (RHEL, CentOS, Alma, Rocky...) and Fedora, and it will include the latest security fixes.

⚠️ This is a best-effort action, depending on my spare time, without any warranty, only to give users more time to migrate. This can only be temporary, and upgrading must be the priority.

You can also watch the sources repository on GitHub.

Fedora Linux 43 (F43) election results

Posted by Fedora Community Blog on 2026-01-08 08:00:00 UTC

The Fedora Linux 43 (F43) election cycle has concluded. In this election round, there was only one election, for the Fedora Engineering Steering Committee (FESCo). Congratulations to the winning candidates. Thank you to all candidates for running in this election.

Results

FESCo

Five FESCo seats were open this election. A total of 214 ballots were cast, meaning a candidate could accumulate a maximum of 1,498 votes. More detailed information on the voting breakdown is available from the Fedora Elections app in the Results tab.

# votes  Candidate
   1013  Kevin Fenzi
    842  Zbigniew Jędrzejewski-Szmek
    784  Timothée Ravier
    756  Dave Cantrell
    706  Máirín Duffy
    685  Fabio Alessandro Locati
    603  Daniel Mellado

The post Fedora Linux 43 (F43) election results appeared first on Fedora Community Blog.

2025 in books

Posted by Christof Damian on 2026-01-07 17:21:00 UTC

Another year, another bunch of books. 

I spent most of the year bingeing through thriller series that I had already started in 2024. 

The books I really liked were the newest book from the Slough House series, the spy novel The Persian, and Fundamentals of Software Architecture (even if I didn't finish it). 

I am still using Goodreads to track these, so you can find the pretty Year in Books there.  

Currently, I am reading The John Varley Reader: Thirty Years of Short Fiction, which is great. Apparently, I have to wait until someone passes away before people tell me about them.

Non-Fiction

This year was very light on non-fiction. I just didn't have the patience for it.  

Never Search Alone: The Job Seeker’s Playbook - while searching for a new job, I used this book and the associated community to help. Some of it worked; a lot of it doesn't make sense for my roles in the current market. 

Life After Cars: Freeing Ourselves from the Tyranny of the Automobile - I love the podcast, but this was mostly about cars and not the life after them. 

Fundamentals of Software Architecture: An Engineering Approach - I quite liked this one, but I didn't finish it. I will continue at some point. 

Future Boy: Back to the Future and My Journey Through the Space-Time Continuum - Michael J. Fox's autobiography from the Back to the Future days. Many cool titbits. 

Fiction 

I binged quite a few series, so I'll keep it short. 
 
All in the same universe, pleasant quick reads, exciting and with humour.  
 
Not much to say about these. I love Mark Dawson's thrillers. The stories are maybe a bit predictable, but they have exciting moments.  
 
Detective Inspector Declan Walsh Series #2-#6  - this was nice and then jumped the shark so badly I gave up on the series. 
 
Nightshade: A Novel - a new Michael Connelly series. More cosy than the usual LA setting. 
 
The Lincoln Lawyer #8 The Proving Ground - OK, but so much better than the quite rubbish TV series. 
 
Use of Weapons - I wanted to continue with the Culture series by Iain M. Banks. It turns out this is the last one available on Kindle. 
 
Slough House #9: Clown Town - he is not writing fast enough. This is also so much better than the TV series. And the TV series is brilliant. 
 
The Persian: A Novel - another spy novel by David McCloskey; excellent, if partly depressing. With this year's news, I would also like to have anything playing in those parts of the world.  
 
Titanium Noir #1: Titanium Noir - like a Raymond Chandler story in the near future 
 
Commissario Montalbano #1: The Shape of Water - not a series I will continue. It feels so outdated. 
 
Legends & Lattes #2 Brigands & Breadknives - I was eagerly awaiting this cosy fantasy story.  It was good, but there were not enough hot beverages and biscuits. 
 
Hidden in Memories - Cosy Nordic Noir - I think this is part three in the series.  
 
The Tourney: The Adventures of Maid Marian - I got this because I like the author on Mastodon; the book is very much average. At 65 pages, you read this romantic yarn quickly. 

Comics 

Lazarus Fallen #1-#6 - This is nearing the end, and it has been great. Good action, well drawn, and Greg Rucka at his best. 

GitHub Discussions versus Discourse

Posted by Ben Cotton on 2026-01-07 12:00:00 UTC

Some projects get by just fine with only communicating in the issue tracker and commits (or pull requests). Most projects need to talk more. They need to discuss governance issues, get community feedback on feature ideas, address user questions, plan events, and more. There’s no end to the number of venues to have these conversations, but for many projects, it comes down to one choice: GitHub Discussions versus Discourse.

Both of these platforms have their advantages and disadvantages. As I wrote in the appendix to Program Management for Open Source Projects, there’s no right tool, just the right tool for your community. I have used both of these tools, and can recommend both for different needs.

GitHub Discussions

GitHub Discussions is a relatively simple forum add-on available to GitHub repositories. Project administrators can add it with a simple checkbox. As a result, it requires no additional infrastructure and participants can use their existing GitHub account. You can easily convert issues to discussion threads or discussion threads into issues — some projects even require a discussion thread as a prerequisite for issue creation.

GitHub Discussions is tightly integrated into the rest of GitHub, as you might expect. This means it’s easy to tag users, cross-reference issues/commits, watch for new activity, and so on. On the other hand, this tight integration isn’t helpful if your project isn’t tightly integrated into GitHub. Depending on the nature of your project, users who come to ask questions may not even have a GitHub account.

Discourse

Discourse (not to be confused with the chat platform Discord) is an open source discussion platform. You can self-host it or pay for hosting from Discourse or their partners. Because it’s designed to be a community communication tool, it offers a lot more flexibility. This includes both themes as well as plugins and other configuration options.

Discourse includes a concept of “trust levels” that can automatically move users up through greater privileges based on a history of prosocial behavior. Moderators and access control can be adjusted on a per-category basis, which is particularly helpful for the largest of communities.

Discourse has a mailing list mode so that users who prefer can treat it like a mailing list. It also supports private conversations so that moderators and administrators can discuss concerns candidly.

GitHub Discussions versus Discourse: pick your winner

How you decide which tool to use will depend on several factors:

  • Other tooling. If your project’s infrastructure is entirely contained on GitHub, then GitHub Discussions is probably the best choice for you. If you don’t use GitHub at all, then Discourse makes more sense. In general, the more non-GitHub tooling you have (CI systems, for example), the more Discourse makes sense on this axis.
  • Infrastructure resources and budget. GitHub Discussions has zero (financial) cost to your community, so that’s a good fit for the vast majority of open source projects. Discourse requires you to have a budget to pay for hosting or the resources and skills to self-host. In my experience, self-hosting is fairly easy — if you have people in the community who can do it.
  • Project purpose. Communities that primarily build software — and mostly have development-oriented contributors — benefit from the tight integration that GitHub Discussions offers. If the community is not software-focused (e.g. if it’s an affinity group, advocacy organization, etc), then Discourse may be a better choice.
  • Target audience. If the people who will be participating in the conversation are primarily contributors or developer-like people, then GitHub Discussions can be a good fit. If you’re expecting general computer users — who may or may not even know what GitHub is — then Discourse is probably more approachable.
  • Community size. Discourse has a lot of flexibility and power to handle thousands of users. When you have ones or dozens of users, the simplicity of GitHub Discussions can be more appealing.

Ultimately, there’s no simple answer. You have to compare the tools across the axes above (plus any other technical or philosophical requirements you have).

This post’s featured photo by Volodymyr Hryshchenko on Unsplash.

The post GitHub Discussions versus Discourse appeared first on Duck Alignment Academy.

The problem is not Venezuela

Posted by Avi Alkalay on 2026-01-07 10:48:00 UTC

There are people who say Maduro was terrible and good riddance. People who say nobody said anything when Maduro tried to take part of Guyana in 2023. And with that they justify Donald doing what he did to Venezuela, a country that truly is in tatters and whose people have now been liberated. They even celebrate, even applaud.

People who say these things: wake up. Try to look further, and toward the future.

The bigger problem is not really Venezuela.

The problem is the new world order that is opening up for the future right now.

Which country will Donald want to invade next? Denmark? Colombia? Yes, because a precedent has been opened. And if Donald invades, why wouldn't Putin invade in his own region too? And China? And Taiwan? When will it be Brazil's turn to be invaded, with our endless natural riches, sources of drinking water, sun, enormous population? Invading countries for their riches is a 15th-century thing. Not a 21st-century one, the era of the UN.

We prepared our children and our retirements for a certain world order that may now be falling apart. The future becomes tremendously more uncertain for everyone. That is the risk. That is what is bad.

As for Venezuela, there is little evidence in history of a country that was invaded for its natural riches and where the situation then improved much for its people. An invasion like this is for plundering and usurping, not for charity.

Applauding the opening of this precedent is embarrassing. Deeply embarrassing, folks. When in doubt, it's better to seize the chance to stay quiet.

Also on my LinkedIn and Facebook.

I Voted, F43 edition

Posted by Tomasz Torcz on 2026-01-06 16:44:33 UTC

I've cast my votes in the Fedora Engineering Steering Committee election. The voting closes tomorrow.

As usual, I've read the interviews with the candidates and then decided on my preferences. Red Hat employees get a minus, new faces get a plus. There are exceptions; it's not a hard rule!

That's one of the ways I contribute to Fedora. Sometimes I blog about the elections. I've also been a Fedora packager for almost 18 years!

https://badges.fedoraproject.org/pngs/ivoted-f43.png

028/100 of #100DaysToOffload

2025 blog review

Posted by Kushal Das on 2026-01-06 08:34:20 UTC

After 2005, 2025 was again a year in which I wrote only 8 blog posts. The year was difficult in many different ways. But from September things became a bit better. I could not do a lot of the things I thought I would do, or rather promised to do.

I am hoping to catch up on those promises in the coming months. That not only includes blog posts on the various things I am writing/building, but also a huge backlog of photos to work on and publish.

New badge: Fedora 47 Change Accepted !

Posted by Fedora Badges on 2026-01-06 06:19:56 UTC
Fedora 47 Change Accepted: You got a "Change" accepted into the Fedora 47 Change list

New badge: Fedora 46 Change Accepted !

Posted by Fedora Badges on 2026-01-06 06:15:56 UTC
Fedora 46 Change Accepted: You got a "Change" accepted into the Fedora 46 Change list

New badge: Fedora 45 Change Accepted !

Posted by Fedora Badges on 2026-01-06 06:14:25 UTC
Fedora 45 Change Accepted: You got a "Change" accepted into the Fedora 45 Change list

New badge: Fedora 44 Change Accepted !

Posted by Fedora Badges on 2026-01-06 06:13:21 UTC
Fedora 44 Change Accepted: You got a "Change" accepted into the Fedora 44 Change list

New badge: Fedora 43 Change Accepted !

Posted by Fedora Badges on 2026-01-06 06:12:31 UTC
Fedora 43 Change Accepted: You got a "Change" accepted into the Fedora 43 Change list

DEADLINE 2026-01-07: Fedora Linux 43 FESCo Elections

Posted by Fedora Community Blog on 2026-01-05 14:39:09 UTC

Voting is currently open for the Fedora Engineering Steering Committee (FESCo). You have approximately 2 days and 9 hours remaining to participate.

DEADLINE: 2026-01-07 at 23:59:59 UTC
VOTE HERE: https://elections.fedoraproject.org/about/f43-fesco

Please ensure your ballot for the Fedora Linux 43 FESCo Elections is cast before the cutoff.

The post DEADLINE 2026-01-07: Fedora Linux 43 FESCo Elections appeared first on Fedora Community Blog.

🎲 PHP version 8.3.30RC1, 8.4.17RC1 and 8.5.2RC1

Posted by Remi Collet on 2026-01-02 07:05:00 UTC

Release Candidate versions are available in the testing repository for Fedora and Enterprise Linux (RHEL / CentOS / Alma / Rocky and other clones) to allow more people to test them. They are available as Software Collections, for parallel installation, the perfect solution for such tests, and as base packages.

RPMs of PHP version 8.5.2RC1 are available

  • as base packages in the remi-modular-test for Fedora 41-43 and Enterprise Linux ≥ 8
  • as SCL in remi-test repository

RPMs of PHP version 8.4.17RC1 are available

  • as base packages in the remi-modular-test for Fedora 41-43 and Enterprise Linux ≥ 8
  • as SCL in remi-test repository

RPMs of PHP version 8.3.30RC1 are available

  • as base packages in the remi-modular-test for Fedora 41-43 and Enterprise Linux ≥ 8
  • as SCL in remi-test repository

ℹ️ The packages are available for x86_64 and aarch64.

ℹ️ PHP version 8.3 is now in security mode only, so no more RC will be released.

ℹ️ Installation: follow the wizard instructions.

ℹ️ Announcements:

Parallel installation of version 8.5 as Software Collection:

yum --enablerepo=remi-test install php85

Parallel installation of version 8.4 as Software Collection:

yum --enablerepo=remi-test install php84

Update of system version 8.5:

dnf module switch-to php:remi-8.5
dnf --enablerepo=remi-modular-test update php\*

Update of system version 8.4:

dnf module switch-to php:remi-8.4
dnf --enablerepo=remi-modular-test update php\*

ℹ️ Notice:

  • EL-10 packages are built using RHEL-10.1 and EPEL-10.1
  • EL-9 packages are built using RHEL-9.7 and EPEL-9
  • EL-8 packages are built using RHEL-8.10 and EPEL-8
  • oci8 extension uses the RPM of the Oracle Instant Client version 23.9 on x86_64 and aarch64
  • intl extension uses libicu 74.2
  • The RC version is usually the same as the final version (no changes accepted after RC, except for security fixes).
  • versions 8.3.30, 8.4.17 and 8.5.2 are planned for January 15th, in 2 weeks.

Software Collections (php84, php85)

Base packages (php)

Free Software Activities for 2025

Posted by Jonathan McDowell on 2026-01-05 07:57:19 UTC

Given we’ve entered a new year it’s time for my annual recap of my Free Software activities for the previous calendar year. For previous years see 2019, 2020, 2021, 2022, 2023 + 2024.

Conferences

My first conference of the year was FOSDEM. I’d submitted a talk proposal about system attestation in production environments for the attestation devroom, but they had a lot of good submissions and mine was a bit more “this is how we do it” rather than “here’s some neat Free Software that does it”. I’m still trying to work out how to make some of the bits we do more open, but the problem is a lot of the neat stuff is about taking internal knowledge about what should be running and making sure that’s the case, and what you end up with if you abstract that is a toolkit that still needs a lot of work to get something useful.

I had more luck at DebConf25 where I gave a talk (Don’t fear the TPM) trying to explain how TPMs could be useful in a Debian context. Naturally the comments section descended into a discussion about UEFI Secure Boot, which is a separate, if related, thing. DebConf also featured the usual catch-up with fellow team members, hanging out with folk I hadn’t seen in ages, and generally feeling a bit more invigorated about Debian.

Other conferences I considered, but couldn’t justify, were All Systems Go! and the Linux Plumbers Conference. I’ve no doubt both would have had a bunch of interesting and relevant talks + discussions, but not enough this year.

I’m going to have to miss FOSDEM this year, due to travel later in the month, and I’m uncertain if I’m going to make DebConf (for a variety of reasons). That means I don’t have a Free Software conference planned for 2026. Ironically FOSSY moving away from Portland makes it a less appealing option (I have Portland friends it would be good to visit). Other than potential Debian MiniConfs, anything else European I should consider?

Debian

I continue to try and keep RetroArch in shape, with 1.22.2+dfsg-1 (and, shortly after, 1.22.2+dfsg-2 - git-buildpackage in trixie seems more strict about Build-Depends existing in the outside environment, and I keep forgetting I need Build-Depends-Arch and Build-Depends-Indep to be pretty much the same with a minimal Build-Depends that just has enough for the clean target) getting uploaded in December, and 1.20.0+dfsg-1, 1.20+dfsg-2 + 1.20+dfsg-3 all being uploaded earlier in the year. retroarch-assets had 1.20.0+dfsg-1 uploaded back in April. I need to find some time to get 1.22.0 packaged. libretro-snes9x got updated to 1.63+dfsg-1.

sdcc saw 4.5.0+dfsg-1, 4.5.0+dfsg-2, 4.5.0+dfsg-3 (I love major GCC upgrades) and 4.5.0-dfsg-4 uploads. There’s an outstanding bug around a LaTeX error building the manual, but this turns out to be a bug in the 2.5 RC for LyX. Huge credit to Tobias Quathamer for engaging with this, and Pavel Sanda + Jürgen Spitzmüller from the LyX upstream for figuring out the issue + a fix.

Pulseview saw 0.4.2-4 uploaded to fix issues with the GCC 15 + CMake upgrades. I should probably chase the sigrok upstream about new releases; I think there are a bunch of devices that have gained support in git without seeing a tagged release yet.

I did an Electronics Team upload for gputils 1.5.2-2 to fix compilation with GCC 15.

While I don’t do a lot with storage devices these days if I can help it, I still pay a little bit of attention to sg3-utils. That resulted in 1.48-2 and 1.48-3 uploads in 2025.

libcli got a 1.10.7-3 upload to deal with the libcrypt-dev split out.

Finally I got more up-to-date versions of libtorrent (0.15.7-1) and rtorrent (also 0.15.7-1) uploaded to experimental. There’s a ppc64el build failure in libtorrent, but having asked on debian-powerpc this looks like a flaky test/code and I should probably go ahead and upload to unstable.

I sponsored some uploads for Michel Lind - the initial uploads of plymouth-theme-hot-dog, and the separated out pykdumpfile package.

Recognising the fact I wasn’t contributing in a useful fashion to the Data Protection Team, I set about trying to resign in an orderly fashion - see Andreas’ call for volunteers that went out in the last week. Shout out to Enrico for pointing out in the past that we should gracefully step down from things we’re not actually managing to do, to avoid the perception that it’s all fine and no one else needs to step up. It took me too long to act on it.

The Debian keyring team continues to operate smoothly, maintaining our monthly release cadence with a 3-month rotation that ensures all team members stay familiar with the process and that their setups are still operational (especially important after Debian releases). I handled the 2025.03.23, 2025.06.24, 2025.06.27, 2025.09.18, 2025.12.08 + 2025.12.26 pushes.

Linux

TPM-related fixes were the theme of my kernel contributions in 2025, all within a work context. Some were just cleanups, but several fixed real issues that were causing us problems. I’ve also tried to be more proactive about reviewing diffs in the TPM subsystem; it feels like a useful way to contribute, as well as making me pay more active attention to what’s going on there.

Personal projects

I did some work on onak, my OpenPGP keyserver. That resulted in a 0.6.4 release, mainly driven by fixes for building with more recent CMake + GCC versions in Debian. I’ve got a set of changes that should add RFC9580 (v6) support, but there’s not a lot of test keys out there at present for making sure I’m handling things properly. Equally there’s a plan to remove Berkeley DB from Debian, which I’m completely down with, but that means I need a new primary backend. I’ve got a draft of LMDB support to replace that, but I need to go back and confirm I’ve got all the important bits implemented before publishing it and committing to a DB layout. I’d also like to add sqlite support as an option, but that needs some thought about trying to take proper advantage of its features, rather than just treating it as a key-value store.

(I know everyone likes to hate on OpenPGP these days, but I continue to be interested by the whole web-of-trust piece of it, which nothing else I’m aware of offers.)

That about wraps up 2025. Nothing particularly earth-shaking in there, more a case of continuing to tread water on the various things I’m involved in. I highly doubt 2026 will be much different, but I think that’s ok. I scratch my own itches, and if that helps out other folk too then that’s lovely, but it’s not the primary goal.

Happy New War!

Posted by Avi Alkalay on 2026-01-04 13:46:00 UTC

Russia invading Ukraine was the precursor, but now, with the USA assaulting Venezuela for oil, the law of the strongest is re-established, unabashedly, as it was centuries ago. Eighty years of the UN and diplomacy between nations have gone down the drain.

The problem is what comes next with a new world order. This emergency kit was recently published for the French population by its Ministry of the Interior. You get worried when there is a battery-powered radio in the kit.

My French friend says this alert came right after similar alerts issued by the Swedish and Finnish governments, countries that are geographically closer to Putin's Russia.

The British MI6, too, says it is "operating in a zone between war and peace".

I think we can already expect the world and its geopolitics to be an unexpectedly different place within some 5 or 10 years. The sentence is deliberately incongruous to express my perplexity. And sadness.

We could even run a betting pool on the next invasion. Denmark and its Greenland is my hottest bet. And there is also Taiwan, by China, in the rumors.

Also on my Facebook and LinkedIn.

What is a PC compatible?

Posted by Matthew Garrett on 2026-01-04 03:11:36 UTC

Wikipedia says “An IBM PC compatible is any personal computer that is hardware- and software-compatible with the IBM Personal Computer (IBM PC) and its subsequent models”. But what does this actually mean? The obvious literal interpretation is for a device to be PC compatible, all software originally written for the IBM 5150 must run on it. Is this a reasonable definition? Is it one that any modern hardware can meet?

Before we dig into that, let’s go back to the early days of the x86 industry. IBM had launched the PC built almost entirely around off-the-shelf Intel components, and shipped full schematics in the IBM PC Technical Reference Manual. Anyone could buy the same parts from Intel and build a compatible board. They’d still need an operating system, but Microsoft was happy to sell MS-DOS to anyone who’d turn up with money. The only thing stopping people from cloning the entire board was the BIOS, the component that sat between the raw hardware and much of the software running on it. The concept of a BIOS originated in CP/M, an operating system originally written in the 70s for systems based on the Intel 8080. At that point in time there was no meaningful standardisation - systems might use the same CPU but otherwise have entirely different hardware, and any software that made assumptions about the underlying hardware wouldn’t run elsewhere. CP/M’s BIOS was effectively an abstraction layer, a set of code that could be modified to suit the specific underlying hardware without needing to modify the rest of the OS. As long as applications only called BIOS functions, they didn’t need to care about the underlying hardware and would run on all systems that had a working CP/M port.

By 1979, boards based on the 8086, Intel’s successor to the 8080, were hitting the market. The 8086 wasn’t machine code compatible with the 8080, but 8080 assembly code could be assembled to 8086 instructions to simplify porting old code. Despite this, the 8086 version of CP/M was taking some time to appear, and a company called Seattle Computer Products started producing a new OS closely modelled on CP/M and using the same BIOS abstraction layer concept. When IBM started looking for an OS for their upcoming 8088 (an 8086 with an 8-bit data bus rather than a 16-bit one) based PC, a complicated chain of events resulted in Microsoft paying a one-off fee to Seattle Computer Products, porting their OS to IBM’s hardware, and the rest is history.

But one key part of this was that despite what was now MS-DOS existing only to support IBM’s hardware, the BIOS abstraction remained, and the BIOS was owned by the hardware vendor - in this case, IBM. One key difference, though, was that while CP/M systems typically included the BIOS on boot media, IBM integrated it into ROM. This meant that MS-DOS floppies didn’t include all the code needed to run on a PC - you needed IBM’s BIOS. To begin with this wasn’t obviously a problem in the US market since, in a way that seems extremely odd from where we are now in history, it wasn’t clear that machine code was actually copyrightable. In 1982 Williams v. Artic determined that it could be even if fixed in ROM - this ended up having broader industry impact in Apple v. Franklin and it became clear that clone machines making use of the original vendor’s ROM code wasn’t going to fly. Anyone wanting to make hardware compatible with the PC was going to have to find another way.

And here’s where things diverge somewhat. Compaq famously performed clean-room reverse engineering of the IBM BIOS to produce a functionally equivalent implementation without violating copyright. Other vendors, well, were less fastidious - they came up with BIOS implementations that either implemented a subset of IBM’s functionality, or didn’t implement all the same behavioural quirks, and compatibility was restricted. In this era several vendors shipped customised versions of MS-DOS that supported different hardware (which you’d think wouldn’t be necessary given that’s what the BIOS was for, but still), and the set of PC software that would run on their hardware varied wildly. This was the era where vendors even shipped systems based on the Intel 80186, an improved 8086 that was both faster than the 8086 at the same clock speed and was also available at higher clock speeds. Clone vendors saw an opportunity to ship hardware that outperformed the PC, and some of them went for it.

You’d think that IBM would have immediately jumped on this as well, but no - the 80186 integrated many components that were separate chips on 8086 (and 8088) based platforms, but crucially didn’t maintain compatibility. As long as everything went via the BIOS this shouldn’t have mattered, but there were many cases where going via the BIOS introduced performance overhead or simply didn’t offer the functionality that people wanted, and since this was the era of single-user operating systems with no memory protection, there was nothing stopping developers from just hitting the hardware directly to get what they wanted. Changing the underlying hardware would break them.

And that’s what happened. IBM was the biggest player, so people targeted IBM’s platform. When BIOS interfaces weren’t sufficient they hit the hardware directly - and even if they weren’t doing that, they’d end up depending on behavioural quirks of IBM’s BIOS implementation. The market for DOS-compatible but not PC-compatible mostly vanished, although there were notable exceptions - in Japan the PC-98 platform achieved significant success, largely as a result of the Japanese market being pretty distinct from the rest of the world at that point in time, but also because it actually handled Japanese at a point where the PC platform was basically restricted to ASCII or minor variants thereof.

So, things remained fairly stable for some time. Underlying hardware changed - the 80286 introduced the ability to access more than a megabyte of address space and would promptly have broken a bunch of things except IBM came up with an utterly terrifying hack that bit me back in 2009, and which ended up sufficiently codified into Intel design that it was one mechanism for breaking the original XBox security. The first 286 PC even introduced a new keyboard controller that supported better keyboards but which remained backwards compatible with the original PC to avoid breaking software. Even when IBM launched the PS/2, the first significant rearchitecture of the PC platform with a brand new expansion bus and associated patents to prevent people cloning it without paying off IBM, they made sure that all the hardware was backwards compatible. For decades, PC compatibility meant not only supporting the officially supported interfaces, it meant supporting the underlying hardware. This is what made it possible to ship install media that was expected to work on any PC, even if you’d need some additional media for hardware-specific drivers. It’s something that still distinguishes the PC market from the ARM desktop market. But it’s not as true as it used to be, and it’s interesting to think about whether it ever was as true as people thought.

Let’s take an extreme case. If I buy a modern laptop, can I run 1981-era DOS on it? The answer is clearly no. First, modern systems largely don’t implement the legacy BIOS. The entire abstraction layer that DOS relies on isn’t there, having been replaced with UEFI. When UEFI first appeared it generally shipped with a Compatibility Support Module, a layer that would translate BIOS interrupts into UEFI calls, allowing vendors to ship hardware with more modern firmware and drivers without having to duplicate them to support older operating systems [1]. Is this system PC compatible? By the strictest of definitions, no.

Ok. But the hardware is broadly the same, right? There’s projects like CSMWrap that allow a CSM to be implemented on top of stock UEFI, so everything that hits BIOS should work just fine. And well yes, assuming they implement the BIOS interfaces fully, anything using the BIOS interfaces will be happy. But what about stuff that doesn’t? Old software is going to expect that my Sound Blaster is going to be on a limited set of IRQs and is going to assume that it’s going to be able to install its own interrupt handler and ACK those on the interrupt controller itself and that’s really not going to work when you have a PCI card that’s been mapped onto some APIC vector, and also if your keyboard is attached via USB or SPI then reading it via the CSM will work (because it’s calling into UEFI to get the actual data) but trying to read the keyboard controller directly won’t [2], so you’re still actually relying on the firmware to do the right thing but it’s not, because the average person who wants to run DOS on a modern computer owns three fursuits and some knee length socks and while you are important and vital and I love you all you’re not enough to actually convince a transglobal megacorp to flip the bit in the chipset that makes all this old stuff work.

But imagine you are, or imagine you’re the sort of person who (like me) thinks writing their own firmware for their weird Chinese Thinkpad knockoff motherboard is a good and sensible use of their time - can you make this work fully? Haha no of course not. Yes, you can probably make sure that the PCI Sound Blaster that’s plugged into a Thunderbolt dock has interrupt routing to something that is absolutely no longer an 8259 but is pretending to be so you can just handle IRQ 5 yourself, and you can probably still even write some SMM code that will make your keyboard work, but what about the corner cases? What if you’re trying to run something built with IBM Pascal 1.0? There’s a risk that it’ll assume that trying to access an address just over 1MB will give it the data stored just above 0, and now it’ll break. It’d work fine on an actual PC, and it won’t work here, so are we PC compatible?

That’s a very interesting abstract question and I’m going to entirely ignore it. Let’s talk about PC graphics [3]. The original PC shipped with two different optional graphics cards - the Monochrome Display Adapter and the Color Graphics Adapter. If you wanted to run games you were doing it on CGA, because MDA had no mechanism to address individual pixels so you could only render full characters. So, even on the original PC, there was software that would run on some hardware but not on other hardware.

Things got worse from there. CGA was, to put it mildly, shit. Even IBM knew this - in 1984 they launched the PCjr, intended to make the PC platform more attractive to home users. Along with maybe the worst keyboard ever to be associated with the IBM brand, the PCjr added some new video modes that allowed displaying more than 4 colours on screen at once4, and software that depended on those wouldn’t display correctly on an original PC. Of course, because the PCjr was a complete commercial failure, it wouldn’t display correctly on any future PCs either. This is going to become a theme.

There’s never been a properly specified PC graphics platform. BIOS support for advanced graphics modes5 ended up specified by VESA rather than IBM, and even then getting good performance involved hitting hardware directly. It wasn’t until Microsoft specced DirectX that anything was broadly usable even if you limited yourself to Microsoft platforms, and this was an OS-level API rather than a hardware one. If you stick to BIOS interfaces then CGA-era code will work fine on graphics hardware produced up until the 20-teens, but if you were trying to hit CGA hardware registers directly then you’re going to have a bad time. This isn’t even a new thing - even if we restrict ourselves to the authentic IBM PC range (and ignore the PCjr), by the time we get to the Enhanced Graphics Adapter we’re not entirely CGA compatible. Is an IBM PC/AT with EGA PC compatible? You’d likely say “yes”, but there’s software written for the original PC that won’t work there.

And, well, let’s go even more basic. The original PC had a well defined CPU frequency and a well defined CPU that would take a well defined number of cycles to execute any given instruction. People could write software that depended on that. When CPUs got faster, some software broke. This resulted in systems with a Turbo Button - a button that would drop the clock rate to something approximating the original PC so stuff would stop breaking. It’s fine, we’d later end up with Windows crashing on fast machines because hardware details will absolutely bleed through.

So, what’s a PC compatible? No modern PC will run the DOS that the original PC ran. If you try hard enough you can get it into a state where it’ll run most old software, as long as it doesn’t have assumptions about memory segmentation or your CPU or want to talk to your GPU directly. And even then it’ll potentially be unusable or crash because time is hard.

The truth is that there’s no way we can technically describe a PC Compatible now - or, honestly, ever. If you sent a modern PC back to 1981 the media would be amazed and also point out that it didn’t run Flight Simulator. “PC Compatible” is a socially defined construct, just like “Woman”. We can get hung up on the details or we can just chill.


  1. Windows 7 is entirely happy to boot on UEFI systems except that it relies on being able to use a BIOS call to set the video mode during boot, which has resulted in things like UEFISeven to make that work on modern systems that don’t provide BIOS compatibility ↩︎

  2. Back in the 90s and early 2000s operating systems didn’t necessarily have native drivers for USB input devices, so there was hardware support for trapping OS accesses to the keyboard controller and redirecting that into System Management Mode, where some software that was invisible to the OS would speak to the USB controller and then fake a response. Anyway, that’s how I made a laptop that could boot unmodified Mac OS X ↩︎

  3. (my name will not be Wolfwings Shadowflight) ↩︎

  4. Yes yes ok 8088 MPH demonstrates that if you really want to you can do better than that on CGA ↩︎

  5. and by advanced we’re still talking about the 90s, don’t get excited ↩︎

Looking for Mentors for Google Summer of Code 2026

Posted by Felipe Borges on 2026-01-02 12:39:47 UTC

It is once again that pre-GSoC time of year where I go around asking GNOME developers for project ideas they are willing to mentor during Google Summer of Code. GSoC is approaching fast, and we should aim to get a preliminary list of project ideas by the end of January.

Internships offer an opportunity for new contributors to join our community and help us build the software we love.

@Mentors, please submit new proposals in our Project Ideas GitLab repository.

Proposals will be reviewed by the GNOME Internship Committee and posted at https://gsoc.gnome.org/2026. If you have any questions, please don’t hesitate to contact us.

Open source trends for 2026

Posted by Ben Cotton on 2026-01-01 12:00:00 UTC

A new year is here and that means it’s time to clean up the confetti from last night’s party. It’s also time for my third annual trend prediction post. After a solid 2024, I did okay-ish in 2025. I am not feeling particularly confident about this year’s predictions in large part because so much depends on the direction of broader economic and political trends, which are far outside my expertise. But this makes a good segue into the first trend on my radar.

Geopolitics fracturing global cooperation

The US government proved to be an unreliable partner in a lot of ways in 2025, and I see little reason that will change in 2026. With capricious policy driven by retribution and self-interest, Europe has become more wary of American tech firms. This has led to efforts to develop a Europe-based tech stack and a greater focus on where data is stored (and what laws govern access to that data). Open source projects are somewhat insulated from this, but there are two areas where we’ll see effects.

First, US-based conferences will have an increasingly domestic attendee list. With anecdotes of foreign visitors held in detention for weeks and visa issuance contingent on not saying mean things about the president, it’s little wonder that fewer people are willing to risk travel to the United States. Global projects, like the Python Software Foundation, that have their flagship conference in the US may face financial challenges from a drop in attendance. The European versions of Linux Foundation events will become the main versions (arguably that’s already true). FOSDEM will strain the limits of its venue, even more than it already does.

The other effect we may see is a sudden prohibition against individuals or nations participating in projects. Projects with US-based backers — whether company or foundation — already have to comply with US sanctions, the Entity List, and other restrictions. It’s conceivable that a nation, company, or individual who upsets the White House will find themselves subject to some kind of ban which could force projects to restrict participation. Whether these restrictions apply to open source is unclear, but I would expect organizations with something to lose to take a cautious approach. Projects with no legal entity will likely take a “how will you stop me?” approach.

A thaw in the job market

This section feels the most precarious, since it depends almost entirely on the macroeconomic conditions and what happens with generative AI. With the latter, I think my prediction of a leveling off in 2025 was just too soon. In 2026, we’ll see more recognition of where generative AI is actually useful and where it isn’t. Companies won’t fire thousands of workers to replace them with AI agents only to discover that the AI is…suboptimal. That’s not to say that AI will disappear, but the approach will be more measured.

With interest rates dropping, companies may feel more confident in trying to grow instead of cutting costs. Supply chain issues and Cyber Resilience Act (CRA) requirements (more on those in a moment) will drive a need for open source expertise specifically. Anecdotally, I’ve seen what seems to be an upward trend in hiring for open source roles in the last part of 2025 and I think that continues in 2026. It won’t be the huge growth we saw in the early part of the decade, but it will be better than the terrible job market we’ve seen in the last year or two.

Supply chain and compliance

Oh look: “software supply chain” is on my trends list. That’s never happened before, except for every time. It won’t stop being an issue in 2026, though. Volunteer maintainers will continue to say “I am not a supplier” as companies make ever-increasing demands for information and support. September 11 marks the CRA’s first significant deadline: companies must have a mechanism for reporting actively exploited vulnerabilities. This means they’ll be pushing on their upstream projects for that information.

Although open source projects don’t have obligations under the CRA, they’ll have an increased request burden to deal with. Unfortunately, I think this means that developing a process for dealing with the request deluge may distract from efforts to improve the project’s security. It may also drive more maintainers to give up.

This post’s featured photo by Jason Coudriet on Unsplash.

The post Open source trends for 2026 appeared first on Duck Alignment Academy.

pinnwand and bpa.st in 2025

Posted by Simon de Vlieger on 2026-01-01 06:00:00 UTC
pinnwand has existed for about a decade. It started as a separate pastebin to be used by bpython on the bpaste.net domain. Eventually I rewrote it into its current shape, which is a Python package based on the Tornado web server. It is currently used by a bunch of people (the Python Discord and Rocky Linux are the first that spring to mind); if you host a public instance I’d love to know about it.

johnnycanencrypt 0.18.0 released

Posted by Kushal Das on 2025-12-31 14:15:04 UTC

A few weeks ago I released Johnnycanencrypt 0.18.0. It is a Python module written in Rust which provides OpenPGP functionality, including the use of Yubikey 4/5 devices as smartcards.

This release was in response to CVE-2025-67897 against sequoia-pgp, which forced me to update to the latest 2.1.0 release of Sequoia.
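If you want to give the new release a spin, it is on PyPI (a minimal sketch; this assumes prebuilt wheels exist for your platform, otherwise you will need a Rust toolchain to build from source):

$ python3 -m pip install --upgrade johnnycanencrypt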

Reviewing open source trends in 2025

Posted by Ben Cotton on 2025-12-31 12:00:00 UTC

It’s the end of the year, which I suppose means it’s time for the now-traditional look at my predictions.

Software supply chain

I was right that this would continue to be an area of interest in the open source world, just as it was in 2024 (and — spoiler alert! — it will be in 2026). I wrote “In 2025, I expect to see a marked split between “hobbyist” and “professional” open source projects.” That’s probably not as true as my ego would like, but I do think we’re trending that direction, in part due to the inequality I address in the next section.

It’s true that supply chain issues have not stopped in 2025. The Shai-Hulud worm spread through the NPM ecosystem in September (with a similar attack in November). Debian images on Docker Hub contained the XZ backdoor more than a year after it was discovered. Phishing attacks spoofing PyPI in July resulted in the compromise of four accounts, allowing the attackers to upload malicious packages.

But the news wasn’t all bad. GitHub rolled out an immutable releases feature that protects against attackers re-tagging previously-good releases with malicious code. crates.io (Rust), npm (Node.js), and NuGet (.NET) added support for trusted publishing. New tools and frameworks came out to help maintainers better understand and address risks, including the OSPS Baseline and Kusari Inspector (disclosure: I am a Kusari employee).

Inequity

This section had two parts. First, I wrote:

I think we’ll see a growing separation between the haves and have-nots. The projects that enterprises see as critical will get funding and effort. The other projects, whether or not they’re actually important to enterprises, will be left to the increasingly scarce efforts of volunteers.

This held true. Two big examples are the temporary pause of the Kubernetes External Secrets Operator project and Nick Wellnhofer resigning as the sole maintainer of libxml2. Both of these were due to a maintenance burden that exceeded the capacity of the maintainers. Josh Bressers found that almost half of npm packages with a million-plus monthly downloads have a single maintainer. This is likely generalizable across all ecosystems, so it’s no surprise that we’d see this. Some in the FFmpeg community took public issue with Google, suggesting the giant should provide more support or stop sending bugs.

The other part of this prediction concerned events:

Events where companies can make sales will do well. Community events will suffer from a lack of sponsorship and attendance due to lack of travel funding. I think we’ll start to see a shift from global events toward regional events in the community space.

I was wrong here, as far as I can tell. US-based events struggled somewhat, in part due to geopolitics, but European events seem to be doing well. Larger community events, from what I gathered, have done well, although the finances are not what they used to be. Smaller events, though, are struggling. DevOpsDays Detroit, as one example, didn’t accept my talk proposal because the conference was shuttered instead. Many of the local and regional events rely on a small number of committed people to keep going. Just like in software projects, these people are getting burnt out.

The general idea of the prediction seems to be holding up well enough. I’ve heard the phrase “K-shaped economy” approximately a million times in financial news this year. The open source world has seen it, too.

Artificial intelligence

I’ll admit to being wrong on this one, too:

If the bubble doesn’t burst this year, the hype at least slows way down…it will lead to a leveling off in AI-generated code and bug report “contributions” as vendors start charging more money for services.

I maintain that my wrongness is more a matter of timing than anything. Generative AI continues to lose money, but the price increases are not here. While some have expressed concerns about the circular dealing in the sector, it seems like the fallout has mostly been contained to Oracle (whose share price is down over 40% since an early-September high) for the time being. The hype may be slowing, but it’s a little hard to say that with certainty just yet. There’s definitely no indication of a slowdown in AI-generated bug reports in curl’s data.

Bar chart of Hackerone reports to the curl project by year. The “likely AI slop” count increased from 2 in 2023 to 6 in 2024 to 37 in 2025.

Vibe check

I called my 2025 predictions “a little bleak,” and I think the vibe was spot on. One thing that didn’t fit well into any of the prediction categories was the attempt by Synadia to un-contribute NATS to the CNCF. Thankfully, that went nowhere. Unfortunately, so did the careers of many in the industry as job cuts continued at companies large and small.

If 2025 was bleak for you, rest assured that it is almost over. I truly appreciate everyone who has read these posts, bought a copy of Program Management for Open Source Projects, subscribed to the DAA newsletter, or in any other way made my year a little less bleak with your presence. Here’s hoping for an improved 2026!

This post’s featured photo by Agence Olloweb on Unsplash.

The post Reviewing open source trends in 2025 appeared first on Duck Alignment Academy.

Introducing the new bootc kickstart command in Anaconda

Posted by Fedora Magazine on 2025-12-31 08:00:00 UTC

The Anaconda installer now supports installation of bootc-based bootable container images using the new bootc command. Anaconda has long supported several types of payload to populate the root file system during installation. These include RPM packages (likely the most widely used option), the tarball images you may know from Fedora Workstation, and ostree and rpm-ostree containers. The newest addition to the family, from a couple of weeks ago, is bootc-based bootable containers.

The difference is under the hood

We have added a new bootc kickstart command to Anaconda to support the new feature. It is closely modeled on the ostreecontainer command that has been present for some time, and from the user’s perspective the two behave almost identically. The main difference, however, is under the hood.

In both cases, one of the most important setup steps for a deployment is creating the requested partitioning. When the partitioning is ready, the ostreecontainer command makes Anaconda deploy the image onto the root filesystem using the ostree tool; it also executes the bootupctl tool to install and set up the bootloader. By contrast, with bootc containers installed using the bootc kickstart command, both the filesystem population and the bootloader configuration are performed by the bootc tool. This makes the deployment process even more integrated.

The content of the container images used for installation is another difference. The bootc-enabled images are somewhat more versatile. Apart from installation using Anaconda, they provide a self-installing option via the bootc command executed from within a running container.

On the other hand, both options provide you with a way to install an immutable system based on a container image. This may be useful for particular use cases where a regular installation from RPM packages is not desired, whether because of its potentially lower deployment speed or because of the inherent mutability of the resulting system.

A simple how-to

In practice, you’d likely use a custom container with pre-configured services, user accounts and other configuration bits and pieces. However, if you want to quickly try out how Anaconda’s new feature works, you just need to follow a few simple steps, starting with a Fedora Rawhide ISO:

First, take an existing container from a registry and create a minimal kickstart file instructing Anaconda to install the bootable container image:

# Beware that this kickstart file will wipe out the existing disk partitions.
# Use it only in an experimental/isolated environment or edit it accordingly!
zerombr
clearpart --all --initlabel
autopart

lang en_US.UTF-8
keyboard us

timezone America/New_York --utc
rootpw changeme

bootc --source-imgref=registry:quay.io/fedora/fedora-bootc:rawhide

As a next step, place the kickstart file in some reachable location (e.g. on an HTTP server) and point Anaconda to it by appending the following to the kernel command line:

inst.ks=http://url/to/kickstart 
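If you don’t have a web server handy, Python’s built-in one is enough for a quick experiment (a sketch; the directory, port, and kickstart file name are placeholders, and the installer must be able to reach your machine over the network):

$ cd /path/to/kickstart/directory
$ python3 -m http.server 8000

Then point inst.ks at http://YOUR_IP:8000/bootc.ks.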

Now start the installation.

Alternatively, you may use the mkksiso tool provided by the lorax package to embed the kickstart file into the installation ISO.
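A minimal sketch of that route (assuming a recent lorax where the kickstart is passed with --ks; the file names are placeholders):

$ sudo dnf install lorax
$ mkksiso --ks bootc.ks Fedora-Rawhide-netinst.iso bootc-rawhide.iso

The resulting ISO then boots straight into the embedded kickstart, with no kernel command line changes needed.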

When installation and reboot is complete, you are presented with an immutable Fedora Rawhide system. It will be running on your hardware (or VM) installed from a bootable container image.

Is there anything more about bootc in Anaconda?

You may ask if this option is limited to Fedora Rawhide container images. Technically speaking, you can use the Fedora Rawhide installation ISO to install, for instance, a CentOS Stream container image:

bootc --source-imgref=registry:quay.io/centos-bootc/centos-bootc:stream10

Nevertheless, keep in mind that for now Anaconda will treat such a case as a Fedora installation, because it runs from a Fedora Rawhide boot ISO. This may result in unforeseen problems, such as getting a btrfs-based partitioning that CentOS Stream won’t be able to boot from. This particular issue is easily overcome by explicitly telling Anaconda to use a different partitioning type, e.g. autopart --fstype=xfs, as shown in the sketch below. We would like Anaconda to eventually handle container images according to the operating system or flavour they contain. For now, one just needs to take the current behavior into consideration when using the bootc command.
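Putting the two together, a minimal sketch of a kickstart for the CentOS Stream case (with the same wipe-the-disk caveats as the example above):

# Beware: this will wipe out the existing disk partitions.
zerombr
clearpart --all --initlabel
autopart --fstype=xfs

bootc --source-imgref=registry:quay.io/centos-bootc/centos-bootc:stream10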

There are a couple more known limitations in Anaconda or bootc at this point in time. These include the lack of support for partitioning setups spanning multiple disks, for arbitrary mount points, and for installation from authenticated registries. But we hope it won’t take long to solve those shortcomings. There are also plans to make the new bootc command available even on the RHEL-10 platform.

We invite you to try out this new feature and share your experience, ideas or comments with the Installer team. We are looking forward to hearing from you in a thread on discussion.fedoraproject.org!

Arsenal Math font

Posted by Rajeesh KV on 2025-12-30 12:30:22 UTC

After my talk about the TeX syntax-highlighting font at the TUG2025 conference, the then vice-president of the TeX Users Group, Boris Veytsman, approached me with a proposal to develop a Math counterpart to the beautiful Arsenal font designed by Andrij Shevchenko.

What followed was a deep dive into the TeXbook to learn about math font parameters, the OpenType Math specification, and related documentation & resources. Fortunately, FontForge has really good support for editing Math tables; and the base font used (KpMath-Sans by Daniel Flipo) already had all the critical parameters set (which needed only slight adjustments). I started the development of Arsenal Math by integrating the glyphs for Latin letters, Arabic numerals, some symbols, etc., with proper scaling & stem-thickness corrections, for the regular, bold, italic and bolditalic variants, plus math calligraphic letters. In addition, a lot of Math kerning (known as ‘cut-ins’ in OpenType parlance) was added to improve the spacing.

Fig. 1: Arsenal Math specimen, contributed by CVR.

Being an OpenType font, Arsenal Math requires XeTeX, LuaTeX or some other Unicode math typesetting engine (e.g. MS Word). Boris did the testing and provided a lot of feedback, and Vaishnavy Murthy graciously reviewed the glyph changes I made. The CTAN admins were quite helpful in getting the font accepted into the repository. A style file and a fontspec file are supplied with the fonts to make usage easy, as sketched below. The sources are available at the RIT fonts repository.
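A minimal usage sketch with unicode-math, compiled with LuaLaTeX or XeLaTeX (the font name “Arsenal Math” and a system-wide installation are assumptions; the bundled style file may take care of this for you):

\documentclass{article}
\usepackage{unicode-math}   % OpenType math support for LuaTeX/XeTeX
\setmathfont{Arsenal Math}  % assumed installed font name; adjust if needed
\begin{document}
A test formula: \( \int_0^\infty e^{-x^2}\,dx = \frac{\sqrt{\pi}}{2} \).
\end{document}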

Boris also donated funding for the project, but he had already paid me many times over by mailing me The TeXbook autographed by Donald Knuth, so I asked the LaTeX devfund team to use the funding for another project. Karl Berry suggested writing an article about the development process; it is published in issue 46:3 of the TUGboat journal and has a lot more technical details.

Fig. 2: The TeXbook autographed by Don Knuth for me.

The learning experience of Math typesetting internals, and contributing to the TeX ecosystem, has been fulfilling spare-time work for me in 2025. Many thanks to all those involved!

Why I left Kodi for Wholphin

Posted by Guillaume Kulakowski on 2025-12-29 18:05:33 UTC

I have been using Kodi since 2012, back when the project was still called XBMC. My very first installation ran on a Raspberry Pi under Raspbian. My media was stored on my PC, then on my NAS, and shared via Samba (SMB to its friends). A simple, effective setup. With the arrival of Smart TVs, I eventually replaced the […]

The article Why I left Kodi for Wholphin appeared first on Guillaume Kulakowski's blog.

a look back at 2025

Posted by Kevin Fenzi on 2025-12-27 19:59:14 UTC
Scrye into the crystal ball

2025 is almost gone now, so I thought I would look back on the year for some interesting high and low lights. I'm sure to miss some things here, or miss mentioning someone who did a bunch of work on something, but I will do my best.

Datacenter moves

There was one gigantic Datacenter move, and one smaller one. Overall I am glad we moved and we are in a much better situation for doing so, but I really hope we can avoid moving more in 2026. It takes so much planning and energy and work and there is so much else to do that has to be put on hold.

As a reminder, some of those advantages:

  • We have lots of new machines. They are newer/faster/better in almost every way.

  • dual 25G links on all the new machines (and 10G on old ones)

  • all nvme storage in new machines

  • space to expand for things like riscv builders and such.

  • ipv6 support finally

So much of my year was datacenter moves. :(

Scrapers and outages

This year was sadly also the year of the scrapers. They are hammering pretty much everyone these days and it's quite sad. We did deploy anubis and it really helped a lot with most of the scrapers, but there's another group of them it wasn't able to stop. For those, before the holidays I just tossed enough resources at our stuff that they can scrape and we can just not care. I'm not sure what next year will look like for this, however, so we will just keep doing the best we can. I did adjust caching some, which also really helped (all the src static files are cached now).

There were also a number of outages toward the end of the year, which I really am not happy about. There were a number of reasons for them:

  • A tcp_timeout issue which turned out to be a firewall bug that was super hard to track down.

  • The scrapers causing outages.

  • I myself caused a few outages with a switching loop of power10 lpars. ;(

  • Various smaller outages.

We have discussed a bunch of things to improve outage handling and prevent outages, so hopefully next year will be happier on that front.

Power10

Speaking of power10, that was quite the saga. We got the machines, but the way we wanted to configure them didn't end up working so we had to move to a much more complex setup using a virtual hardware management console appliance and lpars and sr-iov and more. It's pretty complex, but we did get everything working in the end.

Fedora releases

We got Fedora 42 and 43 released this year, and pretty much on time too. Sadly, 43 seems to be a release with a lot of small issues, and I'm not sure why. Between the postgresql upgrades, dovecot completely changing its config format, nftables not being enabled, and matrix-synapse not being available, my home upgrades were not as smooth as usual.

Home Assistant

This was definitely a fun year of poking at home assistant, adding sensors, and tweaking things. It's a nice fun hobby and gives you real data to solve real problems around your house. Also, all your data is your own and stored locally. This has really turned my perception of iot things around. Before, I would deliberately not connect things; now I connect them if they can be made to talk only to my home assistant.

I added a weather station, a rain gauge, a new zigbee controller, a bunch of smart power plugs and temp sensors, and much more. I expect more on the way in 2026. Just when I think I have automated or instrumented everything, a new thing comes along.

AI

I'm still in the 'There are in fact use cases for LLMs' group, but I am pretty weary of all the people wedging them in where they are not in fact a good use case, or insisting you find _some_ use case no matter what.

I've found some of them useful for some things. I think this will continue to grow over time, but I think we need to be measured.

On the other side, I don't get the outrage over things like calibre adding some support for LLMs. It's there, but it does exactly nothing by default. You have to set it up with your desired provider before it will do anything. It really doesn't affect you if you don't want to use it.

laptop

I have stuck with my Lenovo slim 7x as my main laptop for most of this year. The main thing I still miss is a working webcam (but I have an external one, so it's not the end of the world). I'm looking forward to the X2 laptops coming out in the next few months. I really hope qualcomm has learned from the X1 ones and the X2 will go better, but time will tell.

Fedmsg finally retired

We finally managed to turn off our old message bus. It took far too long, but I think it went pretty smoothly overall in the end. Thanks so much to Michal Konečný for doing everything around this.

nagios (soon to be) retired

Thanks to a bunch of work from Greg Sutcliffe, we have our zabbix setup pretty much done for a phase one and have officially announced that nagios is going to be retired.

iptables finally retired

We moved all our iptables setup over to nftables. There were a few hiccups, but overall it went pretty smoothly. Thanks to James Antill for all the work on this.

Blogs

I wrote a bunch more blog posts this year, mostly weekly recaps, but also a few home assistant review posts. I find the recaps enjoyable to do, although I don't really get much in the way of comments on them, so I have no idea if anyone else cares about them. I'll probably continue doing them in 2026, but I might switch to Sunday night or Friday so I don't have to think about doing them Saturday morning.

The world

The world was very depressing in 2025 in general, and that's speaking as someone living life on the easiest difficulty level ( https://whatever.scalzi.com/2012/05/15/straight-white-male-the-lowest-difficulty-setting-there-is/ ). I really hope sanity, science and kindness can make some recovery in 2026.

I'll probably see about doing a 'looking forward to 2026' post soon.

comments? additions? reactions?

As always, comment on mastodon: https://fosstodon.org/@nirik/115794282914919436

A font with built-in TeX syntax highlighting

Posted by Rajeesh KV on 2025-12-27 07:47:38 UTC

At the TUG2025 conference, I presented a talk about the development of a new colour font that does automatic syntax highlighting for TeX documents/snippets. The idea was floated by CVR, and was inspired by the prior art of an HTML/CSS syntax-highlighting font by Heikki Lotvonen.

Syntax highlighting is usually achieved by specialized grammar files or packages in desktop applications, code editors, on the Web, and in typesetting systems like TeX. Some of these tools are heavy (e.g. prism.js or the pygmentize package). A lightweight alternative is a font that uses recent OpenType technologies to do the syntax highlighting of code snippets itself. I developed such a font for highlighting TeX code snippets.

Fig. 1: OpenType colour font doing syntax highlighting of TeX document.

There are some novelties in the developed font:

  1. It supports both COLRv0 and COLRv1 colour format specifications (separate fonts, but generated from the same source).
  2. Supports plain TeX, LaTeX2e and LaTeX3 macro names.
  3. A novel set of OpenType shaping rules for TeX syntax colouring.

The base font used is M+ Code Latin by Coji Morishita. The details of the development, use cases, and limitations can be found in issue 46:2 of the TUGboat journal. The binary fonts and sources are available at the RIT fonts repository.

Christmas Playlist

Posted by Avi Alkalay on 2025-12-26 17:36:16 UTC

This year we went to a few Christmas celebrations. And the super original idea people had at every party was to "throw on a Christmas playlist". So I heard the same English-language Christmas songs a few dozen times. Dozens of times Wham!, dozens of times the same Mariah Carey, dozens of Sias.

At the peak of my musical nausea I would intercept the speaker and switch to Tchaikovsky's The Nutcracker (the most quintessential Christmas music of all), or a nice selection of Brazilian instrumental music, or some varied playlist of Christmas classics. I think it's important for it to be instrumental music, with nobody singing, so as not to get in the way of people's conversations at the party.

Not even 4 minutes would go by before some jerk complained that the music was a downer.

And there we go again with Bublé, Mariah, Sia, Wham. On repeat.

Don't get me wrong, I really like all of these pop artists. What nauseates me is the lack of openness to hearing anything outside the hard core selected by payola.

Browser wars

Posted by Vedran Miletić on 2025-12-26 12:51:09 UTC


Photo source: Ray Hennessy (@rayhennessy) | Unsplash


Last week in Rijeka we held Science festival 2015. This is the (hopefully not unlucky) 13th instance of the festival that started in 2003. Popular science events were organized in 18 cities in Croatia.

I was invited to give a popular lecture at the University departments' open day, which is a part of the festival. This is the second year in a row that I was invited to give a popular lecture at the open day. In 2014 I talked about The Perfect Storm in information technology caused by the fall of the economy during the 2008-2012 Great Recession and the simultaneous rise of low-cost, high-value open-source solutions. Open source completely changed the landscape of information technology in just a few years.

The follow-up

Posted by Vedran Miletić on 2025-12-26 12:51:09 UTC


Photo source: Andre Benz (@trapnation) | Unsplash


When Linkin Park released their second album Meteora, they had a quote on their site that went along the lines of

Musicians have their entire lives to come up with a debut album, and only a very short time afterward to release a follow-up.

Open-source magic all around the world

Posted by Vedran Miletić on 2025-12-26 12:51:09 UTC


Photo source: Almos Bechtold (@almosbech) | Unsplash


Last week brought us two interesting events related to open-source movement: 2015 Red Hat Summit (June 23-26, Boston, MA) and Skeptics in the pub (June 26, Rijeka, Croatia).

Joys and pains of interdisciplinary research

Posted by Vedran Miletić on 2025-12-26 12:51:09 UTC


Photo source: Trnava University (@trnavskauni) | Unsplash


In 2012 the University of Rijeka became an NVIDIA GPU Education Center (back then it was called a CUDA Teaching Center). For non-techies: NVIDIA is a company producing graphics processors (GPUs), the computer chips that draw 3D graphics in games and the effects in modern movies. In the last couple of years, NVIDIA and other manufacturers have allowed the usage of GPUs for general computations, so one can use them to do really fast multiplication of large matrices, find paths in graphs, and perform other mathematical operations.

What is the price of open-source fear, uncertainty, and doubt?

Posted by Vedran Miletić on 2025-12-26 12:51:09 UTC


Photo source: j (@janicetea) | Unsplash


The Journal of Physical Chemistry Letters (JPCL), published by American Chemical Society, recently put out two Viewpoints discussing open-source software:

  1. Open Source and Open Data Should Be Standard Practices by J. Daniel Gezelter, and
  2. What Is the Price of Open-Source Software? by Anna I. Krylov, John M. Herbert, Filipp Furche, Martin Head-Gordon, Peter J. Knowles, Roland Lindh, Frederick R. Manby, Peter Pulay, Chris-Kriton Skylaris, and Hans-Joachim Werner.

Viewpoints are not detailed reviews of a topic; instead, they present the authors' view on the state of the art of a particular field.

The first of the two articles stands up for open source and open data. It describes the Quantum Chemical Program Exchange (QCPE), which was used in the 1980s and 1990s for the exchange of quantum chemistry codes between researchers and is roughly equivalent to the modern-day GitHub. The second article questions open-source software development practice, advocating the usage and development of proprietary software instead. I will dissect and counter some of the key points from the second article below.

On having leverage and using it for pushing open-source software adoption

Posted by Vedran Miletić on 2025-12-26 12:51:09 UTC


Photo source: Alina Grubnyak (@alinnnaaaa) | Unsplash


Back in late August and early September, I attended the 4th CP2K Tutorial organized by CECAM in Zürich. I had the pleasure of meeting Joost VandeVondele's Nanoscale Simulations group at ETHZ and working with them on improving CP2K. It was both fun and productive; we overhauled the wiki homepage and introduced an acronyms page, among other things. During a coffee break, there was a discussion on the JPCL viewpoint that speaks against open-source quantum chemistry software, which I countered in the previous blog post.

But there is a story from the workshop which somehow remained untold, and I wanted to tell it at some point. One of the attendants, Valérie Vaissier, told me how she used proprietary quantum chemistry software during her Ph.D.; if I recall correctly, it was Gaussian. Eventually, she decided to learn CP2K and made the switch. She liked CP2K better than the proprietary software package because it is available free of charge, the reported bugs get fixed quicker, and the group of developers behind it is very enthusiastic about their work and open to outsiders who want to join the development.

AMD and the open-source community are writing history

Posted by Vedran Miletić on 2025-12-26 12:51:09 UTC


Photo source: Andrew Dawes (@andrewdawes) | Unsplash


Over the last few years, AMD has slowly been walking the path towards having fully open source drivers on Linux. AMD did not walk alone; they got help from Red Hat, SUSE, and probably others. Phoronix also mentions PathScale, but I have been told on the Freenode channel #radeon that this is not the case, and I found no trace of their involvement.

AMD finally publicly unveiled the GPUOpen initiative on the 15th of December 2015. The story was covered on AnandTech, Maximum PC, Ars Technica, Softpedia, and others. For the open-source community that follows the development of the Linux graphics and computing stack, this announcement is hardly surprising: Alex Deucher and Jammy Zhou presented plans regarding amdgpu at XDC2015 in September 2015. Regardless, a public announcement in mainstream media proves that AMD is serious about GPUOpen.

I believe GPUOpen is the best chance we will get in this decade to open up the driver and software stacks in the graphics and computing industry. I will outline the reasons for my optimism below. As for the history behind open-source drivers for ATi/AMD GPUs, I suggest the well-written reminiscence on Phoronix.

I am still not buying the new-open-source-friendly-Microsoft narrative

Posted by Vedran Miletić on 2025-12-26 12:51:09 UTC


Photo source: Patrick Bellot (@pbellot) | Unsplash


This week Microsoft released Computational Network Toolkit (CNTK) on GitHub, after open sourcing Edge's JavaScript engine last month and a whole bunch of projects before that.

Even though the open sourcing of a bunch of their software is a very nice move from Microsoft, I am still not convinced that they have changed to the core. I am sure there are parts of the company that believe free and open source is the way to go, but it still looks like a change just on the periphery.

All the projects they have open-sourced so far are not the core of their business. Their latest version of Windows is no more friendly to alternative operating systems than any version of Windows before it, and one could argue it is even less friendly due to more Secure Boot restrictions. Using Office still basically requires you to use Microsoft's formats and, in turn, accept their vendor lock-in.

Put simply, I think all the projects Microsoft has opened up so far are a nice start, but they still have a long way to go to gain respect from the open-source community. What follows are three steps Microsoft could take in that direction.

Free to know: Open access and open source

Posted by Vedran Miletić on 2025-12-26 12:51:09 UTC


Photo source: Álvaro Serrano (@alvaroserrano) | Unsplash


Reposted from Free to Know: Open access & open source, originally posted by STEMI education on Medium.

Q&A with Vedran Miletić

In June 2014, Elon Musk opened up all Tesla patents. In a blog post announcing this, he wrote that patents "serve merely to stifle progress, entrench the positions of giant corporations and enrich those in the legal profession, rather than the actual inventors." In other words, he joined those who believe that free knowledge is the prerequisite for a great society -- that it is the vibrancy of the educated masses that can make us capable of handling the strange problems our world is made of.

The movements that promote and cultivate this vibrancy are probably most frequently associated with terms "Open access" and "open source". In order to learn more about them, we Q&A-ed Vedran Miletić, the Rocker of Science -- researcher, developer and teacher, currently working in computational chemistry, and a free and open source software contributor and activist. You can read more of his thoughts on free software and related themes on his great blog, Nudged Elastic Band. We hope you will join him, us, and Elon Musk in promoting free knowledge, cooperation and education.

The academic and the free software community ideals

Posted by Vedran Miletić on 2025-12-26 12:51:09 UTC


Photo source: Giammarco Boscaro (@giamboscaro) | Unsplash


Today I vaguely remembered there was one occasion in 2006 or 2007 when some guy from academia doing something with Java and Unicode posted on some mailing list related to free and open-source software about a tool he was developing. What made it interesting was that the tool was open source, but he had filed a patent on the algorithm.

Celebrating Graphics and Compute Freedom Day

Posted by Vedran Miletić on 2025-12-26 12:51:09 UTC


Photo source: Elena Mozhvilo (@miracleday) | Unsplash


Hobbyists, activists, geeks, designers, engineers, etc. have always tinkered with technologies for their own purposes (in early personal computing, for example). And social activists have long advocated the power of giving tools to people. An open hardware movement driven by these restless innovators is creating ingenious versions of all sorts of technologies, and freely sharing the know-how through the Internet and more recently through social media. Open-source software and more recently hardware is also encroaching upon centers of manufacturing and can empower serious business opportunities and projects.

The free software movement is cited as both an inspiration and a model for open hardware. Free software practices have transformed our culture by making it easier for people to become involved in producing things from magazines to music, movies to games, communities to services. With advances in digital fabrication making it easier to manipulate materials, some now anticipate an analogous opening up of manufacturing to mass participation.

Enabling HTTP/2, HTTPS, and going HTTPS-only on inf2

Posted by Vedran Miletić on 2025-12-26 12:51:09 UTC


Photo source: Arkadiusz Gąsiorowski (@ambuscade) | Unsplash


Inf2 is a web server at University of Rijeka Department of Informatics, hosting Sphinx-produced static HTML course materials (mirrored elsewhere), some big files, a WordPress instance (archived elsewhere), and an internal instance of Moodle.

HTTPS had been enabled on inf2 for a long time, albeit using a self-signed certificate. However, with Let's Encrypt coming into public beta, we decided to join the movement to HTTPS.
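(For reference, the modern equivalent of that switch is nearly a one-liner; a sketch assuming today's certbot client with its Apache plugin, with the domain as a placeholder. Back then the client was still called letsencrypt-auto.)

$ sudo certbot --apache -d inf2.example.org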

Why we use reStructuredText and Sphinx static site generator for maintaining teaching materials

Posted by Vedran Miletić on 2025-12-26 12:51:09 UTC


Photo source: Patrick Tomasso (@impatrickt) | Unsplash


Yesterday I was asked by Edvin Močibob, a friend and a former student teaching assistant of mine, the following question:

You seem to be using Sphinx for your teaching materials, right? As far as I can see, it doesn't have an online WYSIWYG editor. I would be interested in a comparison of your solution with e.g. MediaWiki.

While the advantages and the disadvantages of static site generators, when compared to content management systems, have been written about and discussed already, I will outline our reasons for the choice of Sphinx below. Many of the points have probably already been presented elsewhere.
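For readers who haven't used it, the basic Sphinx workflow is just a couple of commands (a sketch assuming a current Sphinx from PyPI):

$ pip install sphinx
$ sphinx-quickstart   # scaffolds conf.py, index.rst, and a Makefile
$ make html           # rebuilds the static HTML from the .rst sources

Everything stays in plain text under version control, which is a big part of why a static site generator suits teaching materials.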

Fly away, little bird

Posted by Vedran Miletić on 2025-12-26 12:51:09 UTC


Photo source: Vincent van Zalinge (@vincentvanzalinge) | Unsplash


The last day of July happened to be the day that Domagoj Margan, a former student teaching assistant and a great friend of mine, set up his own DigitalOcean droplet running a web server and serving his professional website on his own domain domargan.net. For a few years, I was helping him by providing space on the server I owned and maintained, and I was always glad to do so. Let me explain why.