
Fedora People

Browser wars

Posted by Vedran Miletić on 2026-01-15 10:10:27 UTC



brown fox on snow field

Photo source: Ray Hennessy (@rayhennessy) | Unsplash


Last week in Rijeka we held the Science Festival 2015. This is the (hopefully not unlucky) 13th edition of the festival, which started in 2003. Popular science events were organized in 18 cities across Croatia.

I was invited to give a popular lecture at the University departments open day, which is a part of the festival. This is the second time in a row that I have been invited to give a popular lecture at the open day. In 2014 I talked about The Perfect Storm in information technology, caused by the fall of the economy during the 2008-2012 Great Recession and the simultaneous rise of low-cost, high-value open-source solutions. Open source completely changed the landscape of information technology in just a few years.

The follow-up

Posted by Vedran Miletić on 2026-01-15 10:10:27 UTC



people watching concert

Photo source: Andre Benz (@trapnation) | Unsplash


When Linkin Park released their second album Meteora, they had a quote on their site that went along the lines of

Musicians have their entire lives to come up with a debut album, and only a very short time afterward to release a follow-up.

Open-source magic all around the world

Posted by Vedran Miletić on 2026-01-15 10:10:27 UTC



woman blowing sprinkle in her hand

Photo source: Almos Bechtold (@almosbech) | Unsplash


Last week brought us two interesting events related to the open-source movement: the 2015 Red Hat Summit (June 23-26, Boston, MA) and Skeptics in the Pub (June 26, Rijeka, Croatia).

Joys and pains of interdisciplinary research

Posted by Vedran Miletić on 2026-01-15 10:10:27 UTC



white and black coffee maker

Photo source: Trnava University (@trnavskauni) | Unsplash


In 2012, the University of Rijeka became an NVIDIA GPU Education Center (back then it was called a CUDA Teaching Center). For non-techies: NVIDIA is a company producing graphics processing units (GPUs), the computer chips that draw 3D graphics in games and the effects in modern movies. In the last couple of years, NVIDIA and other manufacturers have opened up GPUs for general computation, so one can use them to multiply large matrices, find paths in graphs, and perform other mathematical operations really fast.

What is the price of open-source fear, uncertainty, and doubt?

Posted by Vedran Miletić on 2026-01-15 10:10:27 UTC



turned on red open LED signage

Photo source: j (@janicetea) | Unsplash


The Journal of Physical Chemistry Letters (JPCL), published by the American Chemical Society, recently put out two Viewpoints discussing open-source software:

  1. Open Source and Open Data Should Be Standard Practices by J. Daniel Gezelter, and
  2. What Is the Price of Open-Source Software? by Anna I. Krylov, John M. Herbert, Filipp Furche, Martin Head-Gordon, Peter J. Knowles, Roland Lindh, Frederick R. Manby, Peter Pulay, Chris-Kriton Skylaris, and Hans-Joachim Werner.

Viewpoints are not detailed reviews of a topic; instead, they present the author's view on the state of the art of a particular field.

The first of the two articles advocates open source and open data. It describes the Quantum Chemical Program Exchange (QCPE), which was used in the 1980s and 1990s for the exchange of quantum chemistry codes between researchers and is roughly equivalent to the modern-day GitHub. The second article questions the open-source software development practice, advocating the usage and development of proprietary software instead. I will dissect and counter some of the key points from the second article below.

On having leverage and using it for pushing open-source software adoption

Posted by Vedran Miletić on 2026-01-15 10:10:27 UTC



Open 24 Hours neon signage

Photo source: Alina Grubnyak (@alinnnaaaa) | Unsplash


Back in late August and early September, I attended the 4th CP2K Tutorial organized by CECAM in Zürich. I had the pleasure of meeting Joost VandeVondele's Nanoscale Simulations group at ETHZ and working with them on improving CP2K. It was both fun and productive; we overhauled the wiki homepage and introduced an acronyms page, among other things. During a coffee break, there was a discussion of the JPCL Viewpoint that speaks against open-source quantum chemistry software, which I countered in the previous blog post.

But there is a story from the workshop that somehow remained untold, and I wanted to tell it at some point. One of the attendees, Valérie Vaissier, told me how she used proprietary quantum chemistry software during her Ph.D.; if I recall correctly, it was Gaussian. Eventually, she decided to learn CP2K and made the switch. She liked CP2K better than the proprietary package because it is available free of charge, reported bugs get fixed more quickly, and the group of developers behind it is very enthusiastic about their work and open to outsiders who want to join the development.

AMD and the open-source community are writing history

Posted by Vedran Miletić on 2026-01-15 10:10:27 UTC



a close up of a cpu chip on top of a motherboard

Photo source: Andrew Dawes (@andrewdawes) | Unsplash


Over the last few years, AMD has slowly been walking the path towards having fully open-source drivers on Linux. AMD did not walk alone; they got help from Red Hat, SUSE, and probably others. Phoronix also mentions PathScale, but I have been told in the #radeon channel on Freenode that this is not the case, and I found no trace of their involvement.

AMD finally publicly unveiled the GPUOpen initiative on the 15th of December 2015. The story was covered by AnandTech, Maximum PC, Ars Technica, Softpedia, and others. For the open-source community that follows the development of the Linux graphics and computing stack, this announcement is hardly surprising: Alex Deucher and Jammy Zhou presented plans regarding amdgpu at XDC2015 in September 2015. Regardless, a public announcement in mainstream media proves that AMD is serious about GPUOpen.

I believe GPUOpen is the best chance we will get in this decade to open up the driver and software stacks in the graphics and computing industry. I will outline the reasons for my optimism below. As for the history behind open-source drivers for ATi/AMD GPUs, I suggest the well-written reminiscence on Phoronix.

I am still not buying the new-open-source-friendly-Microsoft narrative

Posted by Vedran Miletić on 2026-01-15 10:10:27 UTC



black framed window

Photo source: Patrick Bellot (@pbellot) | Unsplash


This week Microsoft released the Computational Network Toolkit (CNTK) on GitHub, after open sourcing Edge's JavaScript engine last month and a whole bunch of projects before that.

Even though open sourcing a bunch of their software is a very nice move from Microsoft, I am still not convinced that they have changed to the core. I am sure there are parts of the company that believe free and open source is the way to go, but it still looks like a change just on the periphery.

None of the projects they have open-sourced so far are the core of their business. Their latest version of Windows is no more friendly to alternative operating systems than any version of Windows before it, and one could argue it is even less friendly due to more Secure Boot restrictions. Using Office still basically requires you to use Microsoft's formats and, in turn, accept their vendor lock-in.

Put simply, I think all the projects Microsoft has opened up so far are a nice start, but they still have a long way to go to gain respect from the open-source community. What follows are three steps Microsoft could take in that direction.

Free to know: Open access and open source

Posted by Vedran Miletić on 2026-01-15 10:10:27 UTC



yellow and black come in we're open sign

Photo source: Álvaro Serrano (@alvaroserrano) | Unsplash


Reposted from Free to Know: Open access & open source, originally posted by STEMI education on Medium.

Q&A with Vedran Miletić

In June 2014, Elon Musk opened up all Tesla patents. In a blog post announcing this, he wrote that patents "serve merely to stifle progress, entrench the positions of giant corporations and enrich those in the legal profession, rather than the actual inventors." In other words, he joined those who believe that free knowledge is the prerequisite for a great society -- that it is the vibrancy of the educated masses that can make us capable of handling the strange problems our world is made of.

The movements that promote and cultivate this vibrancy are probably most frequently associated with the terms "open access" and "open source". In order to learn more about them, we Q&A-ed Vedran Miletić, the Rocker of Science -- researcher, developer, and teacher, currently working in computational chemistry, and a free and open-source software contributor and activist. You can read more of his thoughts on free software and related themes on his great blog, Nudged Elastic Band. We hope you will join him, us, and Elon Musk in promoting free knowledge, cooperation, and education.

The academic and the free software community ideals

Posted by Vedran Miletić on 2026-01-15 10:10:27 UTC



book lot on black wooden shelf

Photo source: Giammarco Boscaro (@giamboscaro) | Unsplash


Today I vaguely remembered an occasion in 2006 or 2007 when some guy from academia, doing something with Java and Unicode, posted on a mailing list related to free and open-source software about a tool he was developing. What made it interesting was that the tool was open source, yet he had filed a patent on the algorithm.

Celebrating Graphics and Compute Freedom Day

Posted by Vedran Miletić on 2026-01-15 10:10:27 UTC



stack of white and brown ceramic plates

Photo source: Elena Mozhvilo (@miracleday) | Unsplash


Hobbyists, activists, geeks, designers, engineers, and others have always tinkered with technologies for their own purposes (in early personal computing, for example). And social activists have long advocated the power of giving tools to people. An open hardware movement driven by these restless innovators is creating ingenious versions of all sorts of technologies, and freely sharing the know-how through the Internet and, more recently, through social media. Open-source software and, more recently, hardware are also encroaching upon centers of manufacturing and can empower serious business opportunities and projects.

The free software movement is cited as both an inspiration and a model for open hardware. Free software practices have transformed our culture by making it easier for people to become involved in producing things from magazines to music, movies to games, communities to services. With advances in digital fabrication making it easier to manipulate materials, some now anticipate an analogous opening up of manufacturing to mass participation.

Enabling HTTP/2, HTTPS, and going HTTPS-only on inf2

Posted by Vedran Miletić on 2026-01-15 10:10:27 UTC



an old padlock on a wooden door

Photo source: Arkadiusz Gąsiorowski (@ambuscade) | Unsplash


Inf2 is a web server at the University of Rijeka Department of Informatics, hosting Sphinx-produced static HTML course materials (mirrored elsewhere), some big files, a WordPress instance (archived elsewhere), and an internal instance of Moodle.

HTTPS had been enabled on inf2 for a long time, albeit using a self-signed certificate. However, with Let's Encrypt entering public beta, we decided to join the movement to HTTPS.
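
The post predates today's tooling a bit; during the Let's Encrypt public beta the official client was still called letsencrypt-auto, later renamed to certbot. A rough sketch of the modern equivalent of the setup described, with a hypothetical placeholder domain, would be:

sudo certbot --apache -d inf2.example.org

When prompted, choosing to redirect all HTTP traffic to HTTPS corresponds to the "going HTTPS-only" part of the title.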

Why we use reStructuredText and Sphinx static site generator for maintaining teaching materials

Posted by Vedran Miletić on 2026-01-15 10:10:27 UTC



open book lot

Photo source: Patrick Tomasso (@impatrickt) | Unsplash


Yesterday I was asked by Edvin Močibob, a friend and a former student teaching assistant of mine, the following question:

You seem to be using Sphinx for your teaching materials, right? As far as I can see, it doesn't have an online WYSIWYG editor. I would be interested in comparison of your solution with e.g. MediaWiki.

While the advantages and the disadvantages of static site generators, when compared to content management systems, have been written about and discussed already, I will outline our reasons for the choice of Sphinx below. Many of the points have probably already been presented elsewhere.

Fly away, little bird

Posted by Vedran Miletić on 2026-01-15 10:10:27 UTC



macro-photography blue, brown, and white sparrow on branch

Photo source: Vincent van Zalinge (@vincentvanzalinge) | Unsplash


The last day of July happened to be the day that Domagoj Margan, a former student teaching assistant and a great friend of mine, set up his own DigitalOcean droplet running a web server and serving his professional website on his own domain domargan.net. For a few years, I was helping him by providing space on the server I owned and maintained, and I was always glad to do so. Let me explain why.

Mirroring free and open-source software matters

Posted by Vedran Miletić on 2026-01-15 10:10:27 UTC



gold and silver steel wall decor

Photo source: Tuva Mathilde Løland (@tuvaloland) | Unsplash


Post theme song: Mirror mirror by Blind Guardian

A mirror is a local copy of a website that's used to speed up access for the users residing in the area geographically close to it and reduce the load on the original website. Content distribution networks (CDNs), which are a newer concept and perhaps more familiar to younger readers, serve the same purpose, but do it in a way that's transparent to the user; when using a mirror, the user will see explicitly which mirror is being used because the domain will be different from the original website, while, in case of CDNs, the domain will remain the same, and the DNS resolution (which is invisible to the user) will select a different server.

Free and open-source software has been distributed via (FTP) mirrors, usually residing at universities, basically since its inception. The story of Linux mentions a directory on ftp.funet.fi (FUNET is the Finnish University and Research Network) where Linus Torvalds uploaded the sources, which was soon mirrored by Ted Ts'o on MIT's FTP server. The GNU Project's history contains an analogous process of making local copies of the software for faster downloading, which was especially important in the times of pre-broadband Internet, and it continues today.

Markdown vs reStructuredText for teaching materials

Posted by Vedran Miletić on 2026-01-15 10:10:27 UTC



blue wooden door surrounded by book covered wall

Photo source: Eugenio Mazzone (@eugi1492) | Unsplash


Back in the summer of 2017, I wrote an article explaining why we used Sphinx and reStructuredText to produce teaching materials rather than a wiki. In addition to recommending Sphinx as the solution to use, it was a general praise of generating static HTML files from Markdown or reStructuredText.

This summer I converted the teaching materials from reStructuredText to Markdown. Unfortunately, the automated conversion using Pandoc didn't quite produce the result I wanted, so I ended up cooking up my own Python script that converted the specific dialect of reStructuredText used for writing the contents of the group website and fixed a myriad of inconsistencies in writing style that had accumulated over the years.
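
The script itself is not shown in the post, but a minimal sketch of this kind of conversion pass might look like the following; the two substitution rules are hypothetical examples of dialect-specific fix-ups, not the actual rules used:

import re

# Hypothetical dialect-specific rules; the original script's rules are not public here.
INLINE_RULES = [
    (re.compile(r"``(.+?)``"), r"`\1`"),                    # reST inline literals to Markdown code
    (re.compile(r":doc:`(.+?) <(.+?)>`"), r"[\1](\2.md)"),  # :doc: roles with explicit targets to links
]

def convert_line(line: str) -> str:
    for pattern, replacement in INLINE_RULES:
        line = pattern.sub(replacement, line)
    return line

def convert(text: str) -> str:
    return "\n".join(convert_line(line) for line in text.splitlines())

print(convert("Run ``ls`` and see :doc:`the intro <intro>`."))
# -> Run `ls` and see [the intro](intro.md).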

Don't use RAR

Posted by Vedran Miletić on 2026-01-15 10:10:27 UTC



a large white tank

Photo source: Tim Mossholder (@ctimmossholder) | Unsplash


I sometimes joke with my TA Milan Petrović that his usage of RAR does not imply that he will be driving a rari. After all, he is not Devito rapping^Wsinging Uh 😤. Jokes aside, if you search for "should I use RAR" or a similar phrase on your favorite search engine, you'll see articles like Don't Use ZIP, Use RAR (2007) and Why RAR Is Better Than ZIP & The Best RAR Software Available (2011).

Should I do a Ph.D.?

Posted by Vedran Miletić on 2026-01-15 10:10:27 UTC



a bike is parked in front of a building

Photo source: Santeri Liukkonen (@iamsanteri) | Unsplash


Tough question, and one that has been asked and answered over and over. The simplest answer is, of course, that it depends on many factors.

As I started blogging at the end of my journey as a doctoral student, the topic of how I selected the field and ultimately decided to enroll in postgraduate studies never really came up. In the following paragraphs, I will give a personal perspective on my Ph.D. endeavor. Just like other perspectives from doctors of "not that kind", it is specific to the person in the situation, but parts of it might apply more broadly.

Alumni Meeting 2023 at HITS and the reminiscence of the postdoc years

Posted by Vedran Miletić on 2026-01-15 10:10:27 UTC



a fountain in the middle of a town square

Photo source: Jahanzeb Ahsan (@jahan_photobox) | Unsplash


This month we had Alumni Meeting 2023 at the Heidelberg Institute for Theoretical Studies, or HITS for short. I was very glad to attend this whole-day event and reconnect with my former colleagues as well as researchers currently working in the area of computational biochemistry at HITS. After all, this is the place and the institution where I worked for more than half of my time as a postdoc, where I started regularly contributing code to the GROMACS molecular dynamics simulator, and where I published some of my best papers.

My perspective after two years as a research and teaching assistant at FIDIT

Posted by Vedran Miletić on 2026-01-15 10:10:27 UTC



human statues near white building

Photo source: Darran Shen (@darranshen) | Unsplash


My employment as a research and teaching assistant at the Faculty of Informatics and Digital Technologies (FIDIT for short), University of Rijeka (UniRi), ended last month with the expiration of my time-limited contract. This marked almost two full years spent at the institution, and I think this is a good time to look back at everything that happened during that time. Inspired by recent posts by the PI of my group, I decided to write my perspective on a period that I hope is just the beginning of my academic career.

Measuring contributor affiliations is complicated

Posted by Ben Cotton on 2026-01-14 12:00:00 UTC

In a comment on the GitHub issue that sparked my post on measuring contributions outside working hours, Clark Boylan wrote:

I suspect that tracking affiliations is a better measure of who is contributing as part of their employment vs those who might be doing so on a volunteer basis.

Unfortunately, this isn’t as easy as it sounds. Let’s explore the complexity with a concrete example. When I was the Fedora Program Manager, I’d occasionally get asked some variation of “what percentage of Fedora contributors are from Red Hat?” The question usually included an implicit “and by ‘from Red Hat’, I mean ‘are working on Fedora as part of their job at Red Hat.'” I never gave a direct answer because I didn’t have one to give. Why not?

What would you say you do here?

The first complication is that there aren’t two classifications of contributor: inside Red Hat and outside Red Hat. There are actually three classifications of contributor:

  1. Red Hat employee working on Fedora as part of their job responsibilities
  2. Person who happens to be a Red Hat employee but not working on Fedora as part of their job responsibilities
  3. Person not working at Red Hat

Because Fedora is a large project with many different work streams, the first two classifications of contributor have a fuzzy boundary. The same person may fit into the first or second category, depending on what they’re doing at the moment. When I was updating the release schedule, I was clearly in the first category. When I was updating the wordgrinder package, I was probably in the second category. Even the general work wasn’t clearly delineated. When I was editing and publishing elections interviews, that was a first category activity. But editing and publishing other Community Blog posts is more akin to a second category activity.

My job responsibilities were somewhat elastic, and my manager encouraged me to jump in where I could reasonably help. This meant I was doing category-two-like activities while “on the clock”. Depending on the context and intent behind the question, the answer to “is this a Red Hat contribution?” could be different for the same work.

Identities are blurry

It’s tempting to say “differentiate activities based on account.” People will use their work account for work things and their personal account for personal things, right? That’s not my experience. By and large, people don’t want to have to switch accounts in the issue tracker, etc., so they don’t unless they have to. A lot of people used their Red Hat email address for Bugzilla, commit messages, etc., regardless of whether the work was directly in scope for their job or not. By the same token, I knew at least one person who used a personal address, even though they were primarily working on stuff Red Hat wanted them to.

In other roles, I’ve seen people use the same GitHub account for work, foundation-led and other third-party open source projects, plus their personal projects. GitHub’s mail rules make that (somewhat) workable. Of course, people can use different emails for their commits based on if they’re contributing for vocation or for avocation. They don’t always.
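
To make the ambiguity concrete, here is a minimal sketch (not from the original post) of the naive measurement this approach implies: tally commit author emails by domain and read the domain as the affiliation. Everything above explains why the resulting numbers mislead.

import subprocess
from collections import Counter

def affiliation_counts(repo_path: str) -> Counter:
    """Naively tally commit author email domains in a git repository."""
    log = subprocess.run(
        ["git", "-C", repo_path, "log", "--format=%ae"],
        capture_output=True, text=True, check=True,
    ).stdout
    domains = (line.rsplit("@", 1)[-1] for line in log.splitlines() if "@" in line)
    return Counter(domains)

# affiliation_counts(".") might report Counter({"redhat.com": 120, "gmail.com": 80}),
# but as argued above, the domain says little about whether the work was done on the clock.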

A final factor

To make it even more complicated, someone might not even have a clear conception of whether or not they were contributing as part of their job or not. And their boss might have a different view. There are some projects that I only participated in because it was part of my job. When I no longer had that job, I stopped participating in the project. Other projects I started participating in on behalf of my employer, but kept participating after I had a new role.

Is a project that’s tangential to my employer’s interests, and that I’d keep participating in even after leaving, a work contribution or a non-work contribution? Does it switch from being one to the other? As you can see, what seems simple gets complicated very quickly.

This post’s featured image by Mohamed Hassan from Pixabay.

The post Measuring contributor affiliations is complicated appeared first on Duck Alignment Academy.

🎲 PHP version 8.3.30RC1, 8.4.17RC1 and 8.5.2RC1

Posted by Remi Collet on 2026-01-02 07:05:00 UTC

Release Candidate versions are available in the testing repository for Fedora and Enterprise Linux (RHEL / CentOS / Alma / Rocky and other clones) to allow more people to test them. They are available as Software Collections, for parallel installation, the perfect solution for such tests, and as base packages.

RPMs of PHP version 8.5.2RC1 are available

  • as base packages in the remi-modular-test for Fedora 41-43 and Enterprise Linux ≥ 8
  • as SCL in remi-test repository

RPMs of PHP version 8.4.17RC1 are available

  • as base packages in the remi-modular-test for Fedora 41-43 and Enterprise Linux ≥ 8
  • as SCL in remi-test repository

RPMs of PHP version 8.3.30RC1 are available

  • as base packages in the remi-modular-test for Fedora 41-43 and Enterprise Linux ≥ 8
  • as SCL in remi-test repository

ℹ️ The packages are available for x86_64 and aarch64.

ℹ️ PHP version 8.3 is now in security mode only, so no more RC will be released.

ℹ️ Installation: follow the wizard instructions.

ℹ️ Announcements:

Parallel installation of version 8.5 as Software Collection:

yum --enablerepo=remi-test install php85

Parallel installation of version 8.4 as Software Collection:

yum --enablerepo=remi-test install php84

Update of system version 8.5:

dnf module switch-to php:remi-8.5
dnf --enablerepo=remi-modular-test update php\*

Update of system version 8.4:

dnf module switch-to php:remi-8.4
dnf --enablerepo=remi-modular-test update php\*

ℹ️ Notice:

  • EL-10 packages are built using RHEL-10.1 and EPEL-10.1
  • EL-9 packages are built using RHEL-9.7 and EPEL-9
  • EL-8 packages are built using RHEL-8.10 and EPEL-8
  • oci8 extension uses the RPM of the Oracle Instant Client version 23.9 on x86_64 and aarch64
  • intl extension uses libicu 74.2
  • RC versions are usually the same as the final version (no changes accepted after RC, except for security fixes).
  • Versions 8.3.30, 8.4.17, and 8.5.2 are planned for January 15th, in 2 weeks.

Software Collections (php84, php85)

Base packages (php)

Changes in the syslog-ng Elasticsearch destination

Posted by Peter Czanik on 2026-01-13 13:48:14 UTC

While testing the latest Elasticsearch release with syslog-ng, I realized that there was already a not fully documented elasticsearch-datastream() driver. Instead of fixing the docs, I reworked the elasticsearch-http() destination to support data streams.

So, what was the problem? The driver follows different logic in multiple places than the base elasticsearch-http() destination driver does. Some of the descriptions were too general, others were missing completely. You had to read the configuration file in the syslog-ng configuration library (SCL) to configure the destination properly.

While preparing for syslog-ng 4.11.0, the OpenSearch destination received a change that adds support for data streams. I applied these changes to the elasticsearch-http() destination, and made a small compatibility change along the way so that old configurations and samples from blogs keep working.
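
For orientation, a basic elasticsearch-http() destination looks roughly like the following; the URL and index name are placeholders, and the new data-stream-related options are described in the linked article rather than reproduced here:

destination d_elasticsearch {
  elasticsearch-http(
    url("https://my-elastic-server:9200/_bulk")
    index("syslog-ng")
  );
};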

Read more at https://www.syslog-ng.com/community/b/blog/posts/changes-in-the-syslog-ng-elasticsearch-destination

syslog-ng logo

Superheroes in my living room

Posted by Avi Alkalay on 2026-01-12 20:35:56 UTC

This is Pooja Krishnamoorthy, a hero. Adil Mirza, her coach, is one too.

Pooja and Adil stayed overnight at our home before heading off to different destinations. She will visit Rio de Janeiro and the Amazon before returning to Mumbai, and Adil flew back to India this very day.

Pooja has just finished the fabulous Brasil 135 Ultra Journey marathon, in which she ran, in 48 consecutive hours, the 217 kilometers (135 miles) between Águas da Prata, SP and Luminosa, MG. Pooja says Brazil is a sacred destination for the runners of this ultramarathon.

I didn't even know this marathon existed. I didn't even imagine the human body was capable of such a thing. Since she completed the whole thing (she is the first Indian woman to finish the marathon), she is a hero to me.

So is Adil, because he has even run the follow-up marathon, the Badwater 135 in California. Wikipedia says it is considered the hardest marathon in the world.

Since the Brasil 135 is a qualifier for the Badwater 135, Pooja will run that one too in July. But she notes that the Brasil 135 is much harder than the Badwater 135 because of the hills in that region of Minas and São Paulo. The Badwater 135 has its own aggravating factor: the desert climate of the Californian summer.

Staying at our place, they had to pay the toll of answering all my hyper-detailed questions about how such a monumental feat is accomplished.

Pooja at the Brasil 135 ultramarathon, itemized and in numbers:

  • 🏃🏽‍♀️ Distance: 217 km (135 miles) on dirt roads of Minas Gerais and São Paulo
  • ⏱ Time to finish the race: 48 h; from 2026-01-08 8:00 to 2026-01-10 8:00, through morning, afternoon, evening, and the small hours
  • 🚶🏽‍♀️ Running/walking ratio: 60%/40%
  • 🏃🏽‍♀️ Longest stretch running/walking without stopping: 90 km (more than 2 full marathons)
  • 😴 Time napping: 30 minutes in total, the sum of 3 naps of 10 minutes each
  • 🍲 Other stop: 1 light dinner of salad, lentils, and rice; a quick leg massage from her coach while she ate, then straight back to running
  • 🥪 Other meals: snacks, eaten while walking
  • 🧑‍🧑‍🧒‍🧒 Support team members: Adil in a van that follows the whole route, plus 3 Brazilians dedicated to Pooja, who provide this service professionally to Brasil 135 runners
  • ⛑ Support team's role: take turns running alongside her on some stretches, cheer Pooja on, play music, give psychological support to help Pooja's mind and willpower dominate her body, provide water, food, and medical care, and photograph and document the feat
  • 👟 Number of pairs of running shoes Pooja brought to the marathon: 5
  • 👟 Times she changed shoes: 1
  • 🏃🏽‍♀️ Number of solo runners like Pooja (there is also a relay category): ≈100
  • 🏆 Solo runners who actually finished the race, like Pooja: ≈44
  • 💰 Sponsorship and funding: 40% Pooja, 60% others
  • 🏅 List of Indian women who have completed a full ultramarathon: Pooja.

Extraordinary human beings and the incredible things they accomplish. This is the inspiration.

Also on my LinkedIn, Instagram, and Facebook.

How I organize my NAS and my self-hosted services

Posted by Guillaume Kulakowski on 2026-01-12 11:51:35 UTC

I have owned a NAS for quite a while now, and I had never taken the time to really describe my stack or the applications I host on it. This article is the occasion to fix that. References and inspiration: If only one essential reference in the self-hosted world had to be cited, this […]

The post How I organize my NAS and my self-hosted services appeared first on Guillaume Kulakowski's blog.

Loadouts For Genshin Impact v0.1.13 Released

Posted by Akashdeep Dhar on 2026-01-11 18:30:45 UTC

Hello travelers!

Loadouts for Genshin Impact v0.1.13 is OUT NOW with the addition of support for recently released characters like Durin and Jahoda, and for recently released weapons like Athame Artis, The Daybreak Chronicles and Rainbow Serpent's Rain Bow from Genshin Impact Luna III or v6.2 Phase 2. Take this FREE and OPEN SOURCE application for a spin using the links below to manage the custom equipment of artifacts and weapons for the playable characters.

Resources

Installation

Besides its availability as a repository package on PyPI and as an archived binary on PyInstaller, Loadouts for Genshin Impact is now available as an installable package on Fedora Linux. Travelers using Fedora Linux 42 and above can install the package on their operating system by executing the following command.

$ sudo dnf install gi-loadouts --assumeyes --setopt=install_weak_deps=False

Changelog

  • Automated dependency updates for GI Loadouts by @renovate[bot] in #468
  • Automated dependency updates for GI Loadouts by @renovate[bot] in #470
  • Update actions/checkout action to v6 by @renovate[bot] in #472
  • Automated dependency updates for GI Loadouts by @renovate[bot] in #471
  • Update dependency pytest to v9 by @renovate[bot] in #469
  • Migrate from Poetry to UV for dependency management by @gridhead in #476
  • Introduce the recently added weapon Athame Artis by @gridhead in #479
  • Introduce the recently added character Durin to the roster by @gridhead in #477
  • Introduce the recently added weapon The Daybreak Chronicles by @gridhead in #480
  • Update dependency python to 3.12 || 3.13 || 3.14 by @renovate[bot] in #448
  • Automated dependency updates for GI Loadouts by @renovate[bot] in #482
  • Introduce the recently added weapon Rainbow Serpent's Rain Bow by @gridhead in #481
  • Update actions/upload-artifact action to v6 by @renovate[bot] in #483
  • Automated dependency updates for GI Loadouts by @renovate[bot] in #484
  • Introduce the recently added character Jahoda to the roster by @gridhead in #478
  • Stage the release v0.1.13 for Genshin Impact Luna III (v6.2 Phase 2) by @gridhead in #485

Characters

Two characters have debuted in this version release.

Durin

Durin is a sword-wielding Pyro character of five-star quality.

Jahoda

Jahoda is a bow-wielding Anemo character of four-star quality.

Weapons

Three weapons have debuted in this version release.

Athame Artis

Day King's Splendor Solis - Scales on Crit Rate.

Athame Artis - Workspace

The Daybreak Chronicles

Dawning Song of Daybreak - Scales on Crit DMG.

The Daybreak Chronicles - Workspace

Rainbow Serpent's Rain Bow

Astral Whispers Beyond the Sacred Throne - Scales on Energy Recharge.

Rainbow Serpent's Rain Bow - Workspace

Appeal

While allowing you to experiment with various builds and share them for later, Loadouts for Genshin Impact lets you take calculated risks by showing you the potential of your characters with certain artifacts and weapons equipped that you might not even own. Loadouts for Genshin Impact has been and always will be a free and open source software project, and we are committed to delivering a quality experience with every release we make.

Disclaimer

With an extensive suite of over 1527 diverse functionality tests and impeccable 100% source code coverage, we proudly invite auditors and analysts from MiHoYo and other organizations to review our free and open source codebase. This thorough transparency underscores our unwavering commitment to maintaining the fairness and integrity of the game.

The users of this ecosystem application can have complete confidence that their accounts are safe from warnings, suspensions or terminations when using this project. The ecosystem application ensures complete compliance with the terms of services and the regulations regarding third-party software established by MiHoYo for Genshin Impact.

All rights to Genshin Impact assets used in this project are reserved by miHoYo Ltd. and Cognosphere Pte., Ltd. Other properties belong to their respective owners.

The Real Deal on GEMINI.md and AGENTS.md: Setting the Rules of the Game

Posted by Rénich Bon Ćirić on 2026-01-09 14:33:00 UTC

No kidding, did you know AI can be your best helper or your worst nightmare?

If you're going hard at AI-assisted coding, you've surely had the AI get stubborn or "hallucinate" badly. You ask it for a script and it gives you one for Ubuntu (yuck!) when you're all Fedora, or it pulls in heavyweight libraries when you want something KISS and DRY.

That's where the "memory" or rules configuration files come to the rescue: ~/.gemini/GEMINI.md and ~/.config/opencode/AGENTS.md. Honestly, they're a lifesaver.

What are these files?

Basically, they're the style manual and the rules of the game that you dictate to the AI. It's the way of telling it: "Look, buddy, around here we do things like this." Instead of repeating the same instructions in every prompt, you write them once and the AI takes them as law.

It's like getting a new apprentice up to speed, except this one has a photographic memory and doesn't lose the plot if you configure it right.

The meat: what do I put in the file?

It's not about writing the Bible, but about making your "red lines" clear. Here are some examples of what I have in my GEMINI.md to serve as a reference:

System preferences:

So it doesn't come at me with Debian stuff.

- The host system is Fedora 43 x86_64 with SELinux enabled.
- OS Tool Preference: Use tools available in the OS (latest Fedora) via `dnf`.
- Distro Preference: The user despises Debian/Ubuntu; never considers them.
Code philosophy:

So it doesn't get too clever with complex solutions.

- KISS (Keep It Simple, Stupid): Always prefer simple, readable code.
- DRY (Don't Repeat Yourself): Extract repeated code.
- Avoid Premature Optimization: Write clean code first.
Ops and containers:

Because around here we use Podman, my friend. None of that weird stuff.

- Containers: Prefer Podman over Docker (Docker is sub-optimal).
- Containerfile: Use `Containerfile` instead of `Dockerfile`.
- Quadlets: Use systemd quadlets (`*.container`) when practical.
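
Since the rules above push the AI toward Quadlets, here is a minimal sketch of what such a *.container unit looks like; the file name, image, and port are hypothetical placeholders:

# ~/.config/containers/systemd/whoami.container
[Unit]
Description=Example web service managed as a Quadlet

[Container]
Image=docker.io/traefik/whoami:latest
PublishPort=8080:80

[Service]
Restart=always

[Install]
WantedBy=default.target

After a systemctl --user daemon-reload, the generated unit can be started with systemctl --user start whoami.service.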

The trick: a single file

To avoid maintaining two different files that later drift out of sync, the master move is to create one "master" file and make a symbolic link (symlink). Very practical!

You create your well-tuned ~/.gemini/GEMINI.md and then drop the symlink for OpenCode:

ln -sf ~/.gemini/GEMINI.md ~/.config/opencode/AGENTS.md

Tip

That way, you change a rule in one place and it updates everywhere. You save yourself a ton of work and keep things consistent.

How does this show up in the daily grind?

The difference is huge, my friend; it saves you a ton of time:

  1. Clean Git: If you set Git management rules, the AI will suggest commits in Conventional Commits format on its own and even remind you to sign them with GPG. No more "fix bug" messages.
  2. Infrastructure on point: If you ask it for a deployment, it will no longer give you a generic docker-compose.yml. It will give you a Quadlet file ready for your Fedora box.
  3. Security first: The AI will warn you if you're about to slip up by committing an API key. It's like having a senior dev watching your back.

Note

The code you generate feels like yours, adapted to your workflow (EVALinux, in my case), and not a generic copy/paste from Stack Overflow. Don't you think you connect better with people that way?

Conclusion

Spending some 15 minutes optimizing your GEMINI.md is not a waste of time; it's an investment. It's the difference between fighting with the AI to make it understand you and having a copilot who already knows the way by heart.

So now you know: set up your rules, don't be lazy, and hit the ether!

Friday Links 26-01

Posted by Christof Damian on 2026-01-09 12:25:00 UTC
New prerecorded cassettes from Queen, Bjork, Bon Jovi, De La Soul, ...
First round of links for 2026. Everything is bad, so have some links to cheer you up.
 
I really liked the article about visibility, the one about Friday deploys, and sitting alone in a café … which I can relate to.  
I am a big fan of Bill Nighy's podcast, which certainly will improve your mood.
 
For some reason, the random section is full of cassette tape related links.  

Leadership

Team's “Wrapped 2025” to Increase Velocity - nice idea I clearly didn't implement

The Product Operating Model at Google – A Critical View - possibly a bit outdated. 

Why Federated Design Systems Keep Failing - design systems need leadership, not democracy, as do so many decisions

You Can’t Debug a System by Blaming a Person - people need to feel safe to make good decisions and do good work 

Visibility is Velocity - this is so true, on every level of organisations 

Engineering 

The cardinal sin of software architecture - "The worst kind of accidental complexity in software is the unnecessary distribution, replication, or restructuring of state, both in space and time." 

On Friday Deploys: Sometimes that Puppy Needs Murdering (xpost) - I like this: "Deploy freezes are a hack, not a virtue" 

LLM-powered coding mass-produces technical debt - especially if you go anywhere near vibe-coding

Tackling tech debt | Meri Williams | LeadDev New York 2025  [YouTube] - another perspective on tech debt, going into the problems and some nice metrics to track them.

What is a PC compatible? - apparently nothing 

How AI is transforming work at Anthropic - some interesting data 

Environment 

Wind power slashed 4.6 billion euros off electricity bills in Spain last year claim - good for the environment, good for the wallet 

Urbanism 

Geometry, Empire & Control - the massive influence of military engineers on the history of urbanism [YouTube] - long video about how history influences how we live

Why Europe’s night-train renaissance derailed - it's expensive and will take time, nobody has the patience 

Car Brain - opening roads in San Francisco 

Random Cassettes

*PREMIERE*: Tanith Tape Archiv 01: Cybertape II (1989) [German, YouTube] - Tanith is releasing some of his old mixes on cassette tapes. 

Why I Quit Streaming And Got Back Into Cassettes [$$$] - "tapes remind us what we’re missing when we stop taking risks." 

Streaming Music To Cassette - because we love the sound … apparently. 

Stuttgart 21: In der Bahnsteighalle werden die Gleise gefräst  [German YouTube] - I love specialised train machines!    

The Amiga's filesystem is now on Linux and Mac, thanks to an emulated driver - good old Amiga. 

The Unbearable Joy of Sitting Alone in A Café - bonus points for not using your phone or a laptop. 

Why Didn’t AI “Join the Workforce” in 2025? - because nobody wants it and it isn't ready

ill-advised by Bill Nighy [Podcast] - When I grow up, I want to be as cool and well-dressed as Bill. 

Friday Links Disclaimer
Inclusion of links does not imply that I agree with the content of linked articles or podcasts. I am just interested in all kinds of perspectives. If you follow the link posts over time, you might notice common themes, though.
More about the links in a separate post: About Friday Links.

Community Update – Week 02 2026

Posted by Fedora Community Blog on 2026-01-09 10:00:00 UTC

This is a report created by the CLE Team, a team of community members working in various Fedora groups, for example Infrastructure, Release Engineering, and Quality. This team also moves forward some initiatives inside the Fedora project.

Week: 05 Jan – 09 Jan 2026

Fedora Infrastructure

This team is taking care of the day-to-day business regarding Fedora Infrastructure.
It’s responsible for services running in the Fedora infrastructure.
Ticket tracker

CentOS Infra including CentOS CI

This team is taking care of the day-to-day business regarding CentOS Infrastructure and CentOS Stream Infrastructure.
It’s responsible for services running in CentOS Infrastructure and CentOS Stream.
CentOS ticket tracker
CentOS Stream ticket tracker

Release Engineering

This team is taking care of the day-to-day business regarding Fedora releases.
It’s responsible for releases, the retirement process of packages, and package builds.
Ticket tracker

  • Preparatory steps for the next Mass Rebuild which is currently scheduled for next week, Wednesday January 14th.

If you have any questions or feedback, please respond to this report or contact us in the #admin:fedoraproject.org channel on Matrix.

The post Community Update – Week 02 2026 appeared first on Fedora Community Blog.

Introducing EktuPy

Posted by Kushal Das on 2026-01-09 06:49:21 UTC

Py (my daughter) is now 11 years old, and she spends a lot of time on Scratch, making beautiful and fun things. But she thinks she is not a programmer, as she is moving blocks and not typing code like us. For a long time I have had questions about how to move this Scratch generation into programming in general via Python. EktuPy is my probable solution.

Home page

In simple words, we have an editor to write code on the left, and a canvas/stage on the right. You can do all the things you do on Scratch here; I have a list of examples in the editor.

Hello World

We use PyScript, and thanks to Astral we have both Ruff and ty for LSP/linting support in the editor (using WebAssembly). All the code executes in the user's browser.

Drawing via keyboard

Drawing via pen

Pong

Yesterday I took part in the monthly PyScript Fun call because Nicholas reminded me; I had fun demonstrating EktuPy there and watching what others are building.

Two Sprites

The first time, Py poked around for an hour and a half and gave me 11 bugs; the next time she lasted 5 minutes and asked me to get tutorials, as she did not want to read the documentation. So, for every example in the editor we now have tutorials, not too detailed yet, but good enough to start.

Tutorial

You can create an account and save your programs. You can share them as public from your dashboard; then others can find them on the explore page and run the code or remix it if they want.

Space shooter

I am super nostalgic about one implementation :)

Oh, and because I think kids should not have to learn about async programming on this platform, calls like wait() or ask() or play_sound_until_done() or wait_until() all look synchronous in the editor, and then an AST transformer adds the async/await as needed.
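
The transformer itself is not shown in the post, but a minimal sketch of the idea using Python's ast module could look like this; the set of call names is taken from the examples above, and in EktuPy the transformed code would then run in an async context:

import ast

# Calls that look synchronous in the editor but are awaited under the hood.
# This name list is illustrative; EktuPy's actual list is not shown here.
BLOCKING_CALLS = {"wait", "ask", "play_sound_until_done", "wait_until"}

class AwaitInserter(ast.NodeTransformer):
    """Wrap known blocking calls in await expressions."""

    def visit_Call(self, node: ast.Call) -> ast.AST:
        self.generic_visit(node)  # transform nested calls first
        if isinstance(node.func, ast.Name) and node.func.id in BLOCKING_CALLS:
            return ast.Await(value=node)
        return node

tree = ast.parse("wait(1)\nask('What is your name?')")
tree = ast.fix_missing_locations(AwaitInserter().visit(tree))
print(ast.unparse(tree))  # -> await wait(1) / await ask('What is your name?')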

Feel free to try this out, and share the link with your kid or with teachers/parents you know. Let me know how to improve it. I will publish the codebase, a Django application, with a proper license, and hopefully we can make it even better together.

This project would not have been possible without all the work done before it, including Scratch, CodeMirror for the editor, PyScript/Pyodide, the bigger Python community, and Claude/Opus 4.5 for incredible TypeScript/JavaScript help :)

The OpenCode "Dual-Pipeline" Architecture

Posted by Rénich Bon Ćirić on 2026-01-09 05:51:00 UTC

Let's be honest: most AI coding setups are a mess. One day you are using a cheap model that can't even close a bracket, and the next you're using a premium orchestrator that burns $10 in tokens just to fix a "hello world" typo. It's frustrating as hell, right?

I got tired of the "Context Fatigue" and the bill at the end of the month, so I came up with what I call the Dual-Pipeline Architecture. It's basically a way to turn yourself into a one-man agency without going broke.

Note

This setup is production-grade. It's meant for people who actually build stuff, not just prompt-jockeys.

Why This Setup Rocks

The core idea is a Hierarchical Mixture of Experts. Instead of asking one "smart" model to do everything, we load-balance the work.

  1. Dual Orchestrators (The "Manager" Layer):

    manager-opencode (COO):

    The workhorse. Runs on cheap, fast models (like GLM). It handles 80% of the routine coding and operations.

    manager-gemini (CTO):

    The big brain. Only called for high-stakes architecture or when things get really weird. It plans, then delegates the typing to the "juniors."

  2. Specialized Departments:

    We don't use a generic assistant. We split the brains (a sketch of one agent definition follows this list):

    The Dev Team:
    • @architect-core: Designs the systems.
    • @builder-fast: Types code at light speed.
    • @qa-hawk: Audits everything with a nasty personality to find bugs.
    The Business Team:
    • @biz-strategist: Decides if what you're building actually makes money.
    • @creative-lead: Handles the copy so it doesn't sound like a robot wrote it.
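
For reference, OpenCode agents can be defined as markdown files with frontmatter; the file location and the exact schema below are assumptions for illustration, not a verified configuration:

# ~/.config/opencode/agent/qa-hawk.md (assumed location)
---
description: Audits changes for bugs and security issues
mode: subagent
---

You are @qa-hawk. Review every change for bugs, missing error handling,
and security problems. Be blunt, and never approve code you have not read.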

Manual Setup Instructions

If you're like me and prefer to do things by hand instead of running a magic script, here's how you get this running.

Prerequisites:
  • Node.js & NPM (obviously).
  • uv installed (it makes ast-grep way faster).
  • The OpenCode CLI (npm i -g opencode-ai).
  1. Configuration:

    First, create your config folder:

    mkdir -p ~/.config/opencode
    

    Then, set up your base config.json to handle authentication:

    {
      "$schema": "https://opencode.ai/config.json",
      "plugin": ["opencode-gemini-auth@latest"]
    }
    
  2. Shell Alias:

    Don't waste time typing long commands. Add this to your .bashrc or .zshrc:

    alias opencode='npx opencode-ai@latest'
    

Tip

Keep your context clean! I explicitly disable heavy tools like Puppeteer or SQLite globally and only enable them for the specific agents that need them. Your wallet will thank you.

Usage Examples

Routine Work:
$ opencode > @manager-opencode create a simple landing page.
Complex Architecture:
$ opencode > @manager-gemini I need to refactor the auth system. Ask @architect-core for a plan first.

What do you think? This setup changed the game for me. It’s fast, it’s organized, and it’s significantly cheaper than just throwing GPT-4 at every problem.

Improve traceability with the tmt web app

Posted by Fedora Community Blog on 2026-01-08 12:00:00 UTC

The tmt web app is a simple web application that makes it easy to explore and share test and plan metadata without needing to clone repositories or run tmt commands locally.

At the beginning, there was the following user story:

As a tester, I need to be able to link the test case(s) verifying the issue so that anyone can easily find the tests for the verification.

Traceability is an important aspect of the testing process. It is essential to have a bi-directional link between test coverage and issues covered by those tests so that we can easily:

  • identify issues covered by the given test
  • locate tests covering given issues

Link issue from test

Implementing the first direction in tmt was relatively easy: We just defined a standard way to store links with their relations. This is covered by the core link key which holds a list of relation:link pairs. Here’s an example test metadata:

summary: Verify correct escaping of special characters
test: ./test.sh
link:
  - verifies: https://issues.redhat.com/browse/TT-206

Link test from issue

The solution for the second direction was not as straightforward. Thanks to its distributed nature, tmt does not have any central place a Jira issue could point to. There is no server that keeps information about all tests and stores a unique id number for each that could be used in the link.

Instead of integers, we're using the fmf id as the unique identifier. It contains the url of the git repository and the name of the test. Optionally, it can also define a ref instead of using the default branch, and a path to the fmf tree if it's not in the git root.
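
For illustration, an fmf id identifying a test could look like this; the repository url is a placeholder, while the test name reuses the one from the examples later in this post:

url: https://github.com/teemtee/tmt
name: /tests/core/escaping
# optional: ref: main
# optional: path: /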

The tmt web app accepts an fmf id of the test or plan or both, clones the git repository, extracts the metadata, and returns the data in your preferred format:

  • HTML for human-readable viewing
  • JSON or YAML for programmatic access

The service is currently available at https://tmt.testing-farm.io/.

Here’s an example of what the parameters would look like when requesting information about a test in the default branch of a git repository:
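
The example link itself did not survive this rendering of the post; a request of roughly the following shape illustrates the idea, with the query parameter names being an assumption based on the fmf id keys rather than documented values:

https://tmt.testing-farm.io/?test-url=https://github.com/teemtee/tmt&test-name=/tests/core/escaping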

By default, a human-readable HTML version of the output is provided to the user. Include the format parameter in order to choose your preferred format, for example by appending &format=json or &format=yaml to the request.

It is possible to link a test, a plan, or both a test and a plan. The last option can be useful when a single test is executed under several plans. Here's how the human-readable version looks:

Create new tests

In order to make the linking as smooth as possible, the tmt test create command was extended to allow automated linking to Jira issues.

First make sure you have the .config/tmt/link.fmf config prepared. Check the Link Issues section for more details about the configuration.

issue-tracker:
  - type: jira
    url: https://issues.redhat.com
    tmt-web-url: https://tmt.testing-farm.io/
    token: ***

When creating a new test, use the --link option to provide the issue which is covered by the test:

tmt test create /tests/area/feature --template shell --link verifies:https://issues.redhat.com/browse/TT-206

The link will be added to both the test metadata and the Jira issue. Just note that the Jira link will only start working once you push the changes to the remote repository.

Link existing objects

It’s also possible to use the tmt link command to link issue with already existing tests or plans:

tmt link --link verifies:https://issues.redhat.com/browse/TT-206 /tests/core/escaping

If both a test and a plan should be linked to the issue, provide both the test and plan names:

tmt link --link verifies:https://issues.redhat.com/browse/TT-206 /tests/core/escaping /plans/features/core

This is how the created links would look in Jira:

Closing notes

As a proof of concept, for now only a single public instance of the tmt web app is deployed, so be aware that it can only explore git repositories that are publicly available. In the future, we are considering creating an internal instance in order to be able to access internal repositories as well.

We are looking for early feedback. If you run into any problems or any missing features, please let us know by filing a new issue. Thanks!

The post Improve traceability with the tmt web app appeared first on Fedora Community Blog.

🪦 PHP 8.1 is retired

Posted by Remi Collet on 2026-01-08 09:06:00 UTC

Two years after PHP 8.0, and as announced, PHP version 8.1.34 was the last official release of PHP 8.1.

To keep a secure installation, the upgrade to a maintained version is strongly recommended:

  • PHP 8.2 has security-only support and will be maintained until December 2026.
  • PHP 8.3 has security-only support and will be maintained until December 2027.
  • PHP 8.4 has active support and will be maintained until December 2026 (2028 for security).
  • PHP 8.5 has active support and will be maintained until November 2027 (2029 for security).

Read:

ℹ️ However, given the very large number of downloads by users of my repository, the version is still available in the remi repository for Enterprise Linux (RHEL, CentOS, Alma, Rocky...) and Fedora and will include the latest security fixes.

⚠️ This is a best-effort action, depending on my spare time and without any warranty, intended only to give users more time to migrate. It can only be temporary, and upgrading must be the priority.

You can also watch the sources repository on GitHub.

Fedora Linux 43 (F43) election results

Posted by Fedora Community Blog on 2026-01-08 08:00:00 UTC

The Fedora Linux 43 (F43) election cycle has concluded. In this election round, there was only one election, for the Fedora Engineering Steering Committee (FESCo). Congratulations to the winning candidates. Thank you to all candidates for running in this election.

Results

FESCo

Five FESCo seats were open this election. A total of 214 ballots were cast, meaning a candidate could accumulate a maximum of 1,498 votes. More detailed information on the voting breakdown is available from the Fedora Elections app in the Results tab.

# votes  Candidate
   1013  Kevin Fenzi
    842  Zbigniew Jędrzejewski-Szmek
    784  Timothée Ravier
    756  Dave Cantrell
    706  Máirín Duffy
    685  Fabio Alessandro Locati
    603  Daniel Mellado

The post Fedora Linux 43 (F43) election results appeared first on Fedora Community Blog.

2025 in books

Posted by Christof Damian on 2026-01-07 17:21:00 UTC

Another year, another bunch of books. 

I spent most of the year binging through thriller series that I had already started in 2024.

The books I really liked were the newest book from the Slough House series, the spy novel The Persian, and Fundamentals of Software Architecture (even if I didn't finish it). 

I am still using Goodreads to track these, so you can find the pretty Year in Books there.  

Currently, I am reading The John Varley Reader: Thirty Years of Short Fiction, which is great. Apparently, I have to wait until someone passes away before people tell me about them.

Non-Fiction

This year was very light on non-fiction. I just didn't have the patience for it.  

Never Search Alone: The Job Seeker’s Playbook - while searching for a new job, I used this book and the associated community to help. Some of it worked; a lot of it doesn't make sense for my roles in the current market.

Life After Cars: Freeing Ourselves from the Tyranny of the Automobile - I love the podcast, but this was mostly about cars and not the life after them. 

Fundamentals of Software Architecture: An Engineering Approach - I quite liked this one, but I didn't finish it. I will continue at some point. 

Future Boy: Back to the Future and My Journey Through the Space-Time Continuum - Michael J. Fox autobiography from the Back to the Future days. Many cool titbits. 

Fiction 

I binged quite a few series, so I keep it short. 
 
All in the same universe, pleasant quick reads, exciting and with humour.  
 
Not much to say about these. I love Mark Dawson's thrillers. The stories are maybe a bit predictable, but they have exciting moments.
 
Detective Inspector Declan Walsh Series #2-#6  - this was nice and then jumped the shark so badly I gave up on the series. 
 
Nightshade: A Novel a new Michael Connelly series. More cosy than the usual LA setting. 
 
The Lincoln Lawyer #8 The Proving Ground - OK, but so much better than the quite rubbish TV series. 
 
Use of Weapons - I wanted to continue with the Culture series by Iain M. Banks. It turns out this is the last one available on Kindle.
 
Slough House #9: Clown Town - he is not writing fast enough. This is also so much better than the TV series. And the TV series is brilliant. 
 
The Persian: A Novel - another spy novel by David McCloskey; excellent, if partly depressing. Given this year's news, I could also do without anything set in those parts of the world. 
 
Titanium Noir #1: Titanium Noir - like a Raymond Chandler story set in the near future. 
 
Commissario Montalbano #1: The Shape of Water - not a series I will continue. It feels so outdated. 
 
Legends & Lattes #2 Brigands & Breadknives - I was eagerly awaiting this cosy fantasy story.  It was good, but there were not enough hot beverages and biscuits. 
 
Hidden in Memories - Cosy Nordic Noir - I think this is part three in the series.  
 
The Tourney: The Adventures of Maid Marian - I got this because I like the author on Mastodon; the book is very much average. At 65 pages, you read this romantic yarn quickly. 

Comics 

Lazarus Fallen #1-#6 - This is nearing the end, and it has been great. Good action, well drawn, and Greg Rucka at his best. 

GitHub Discussions versus Discourse

Posted by Ben Cotton on 2026-01-07 12:00:00 UTC

Some projects get by just fine with only communicating in the issue tracker and commits (or pull requests). Most projects need to talk more. They need to discuss governance issues, get community feedback on feature ideas, address user questions, plan events, and more. There’s no end to the number of venues to have these conversations, but for many projects, it comes down to one choice: GitHub Discussions versus Discourse.

Both of these platforms have their advantages and disadvantages. As I wrote in the appendix to Program Management for Open Source Projects, there’s no right tool, just the right tool for your community. I have used both of these tools, and can recommend both for different needs.

GitHub Discussions

GitHub Discussions is a relatively simple forum add-on available to GitHub repositories. Project administrators can add it with a simple checkbox. As a result, it requires no additional infrastructure and participants can use their existing GitHub account. You can easily convert issues to discussion threads or discussion threads into issues — some projects even require a discussion thread as a prerequisite for issue creation.
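
As an aside, that settings checkbox is also exposed through GitHub's REST API, so enabling Discussions can be scripted. Here is a minimal sketch in Python, assuming the "update a repository" endpoint accepts a has_discussions flag (my understanding of the API, so treat the field name as an assumption) and that the token has admin rights; OWNER, REPO, and TOKEN are placeholders:

    # Toggle the Discussions tab programmatically (same effect as the
    # settings checkbox). has_discussions is assumed to be the field name.
    import requests

    OWNER, REPO, TOKEN = "example-org", "example-repo", "ghp_placeholder"

    resp = requests.patch(
        f"https://api.github.com/repos/{OWNER}/{REPO}",
        headers={
            "Authorization": f"Bearer {TOKEN}",
            "Accept": "application/vnd.github+json",
        },
        json={"has_discussions": True},
    )
    resp.raise_for_status()
    print(resp.json().get("has_discussions"))  # True once enabled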

GitHub Discussions is tightly integrated into the rest of GitHub, as you might expect. This means it’s easy to tag users, cross-reference issues/commits, watch for new activity, and so on. On the other hand, this tight integration isn’t helpful if your project isn’t tightly integrated into GitHub. Depending on the nature of your project, users who come to ask questions may not even have a GitHub account.

Discourse

Discourse (not to be confused with the chat platform Discord) is an open source discussion platform. You can self-host it or pay for hosting from Discourse or their partners. Because it’s designed to be a community communication tool, it offers a lot more flexibility. This includes both themes as well as plugins and other configuration options.

Discourse includes a concept of “trust levels” that can automatically move users up through greater privileges based on a history of prosocial behavior. Moderators and access control can be adjusted on a per-category basis, which is particularly helpful for the largest of communities.

Discourse has a mailing list mode so that users who prefer can treat it like a mailing list. It also supports private conversations so that moderators and administrators can discuss concerns candidly.

GitHub Discussions versus Discourse: pick your winner

How you decide which tool to use will depend on several factors:

  • Other tooling. If your project’s infrastructure is entirely contained on GitHub, then GitHub Discussions is probably the best choice for you. If you don’t use GitHub at all, then Discourse makes more sense. In general, the more non-GitHub tooling you have (CI systems, for example), the more Discourse makes sense on this axis.
  • Infrastructure resources and budget. GitHub Discussions has zero (financial) cost to your community, so that’s a good fit for the vast majority of open source projects. Discourse requires you to have a budget to pay for hosting or the resources and skills to self-host. In my experience, self-hosting is fairly easy — if you have people in the community who can do it.
  • Project purpose. Communities that primarily build software — and mostly have development-oriented contributors — benefit from the tight integration that GitHub Discussions offers. If the community is not software-focused (e.g. if it’s an affinity group, advocacy organization, etc), then Discourse may be a better choice.
  • Target audience. If the people who will be participating in the conversation are primarily contributors or developer-like people, then GitHub Discussions can be a good fit. If you’re expecting general computer users — who may or may not even know what GitHub is — then Discourse is probably more approachable.
  • Community size. Discourse has a lot of flexibility and power to handle thousands of users. When you have ones or dozens of users, the simplicity of GitHub Discussions can be more appealing.

Ultimately, there’s no simple answer. You have to compare the tools across the axes above (plus any other technical or philosophical requirements you have).
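
One toy way to make that comparison concrete is to score each option per axis and weight the axes by how much they matter to your project. Every number below is an invented placeholder, not a recommendation; the only point is that this is a multi-axis decision rather than a single metric:

    # Toy decision sketch; all weights and scores are made-up placeholders.
    # Each axis maps to (weight, github_score, discourse_score), scores 0-5.
    axes = {
        "tooling already on GitHub": (3, 5, 2),
        "infrastructure budget":     (2, 5, 3),
        "software-focused project":  (2, 4, 3),
        "general-user audience":     (3, 2, 5),
        "large community":           (1, 3, 5),
    }

    def total(option):
        i = 0 if option == "github" else 1
        return sum(weight * scores[i] for weight, *scores in axes.values())

    print("GitHub Discussions:", total("github"))
    print("Discourse:", total("discourse"))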

This post’s featured photo by Volodymyr Hryshchenko on Unsplash.

The post GitHub Discussions versus Discourse appeared first on Duck Alignment Academy.

The problem is not Venezuela

Posted by Avi Alkalay on 2026-01-07 10:48:00 UTC

People who say Maduro was terrible and good riddance. People who say nobody said a word when Maduro tried to take part of Guyana in 2023. And with that they justify Donald doing what he did in Venezuela, a country that really is in tatters and whose people have now been liberated. They even celebrate, even applaud.

People who say these things: wake up. Try to look further ahead, into the future.

The bigger problem is not really Venezuela.

The problem is the new world order now opening up before us.

Which country will Donald want to invade next? Denmark? Colombia? Yes, because a precedent has been set. And if Donald invades, why wouldn't Putin invade in his own region too? And China? And Taiwan? When will it be Brazil's turn to be invaded, with our endless natural riches, sources of fresh water, sunshine, and enormous population? Invading countries for their riches is a 15th-century practice, not a 21st-century one, the era of the UN.

We prepared our children and our retirements for a certain world order that may now be coming apart. The future has become tremendously more uncertain for everyone. That is the risk. That is what is bad.

As for Venezuela, history offers little evidence of a country invaded for its natural riches where the situation then improved much for its people. An invasion like this is for plundering and usurping, not for charity.

Applauding the setting of this precedent is an embarrassment. A great embarrassment, folks. When in doubt, it is better to seize the chance to stay quiet.

Also on my LinkedIn and Facebook.

I Voted, F43 edition

Posted by Tomasz Torcz on 2026-01-06 16:44:33 UTC

I've cast my votes in the Fedora Engineering Steering Committee election. The voting closes tomorrow.

As usual, I read the interviews with the candidates and then decided on my preferences. Red Hat employees get a minus, new faces get a plus. There are exceptions; it's not a hard rule!

That's one of the ways I contribute to Fedora. Sometimes I blog about elections. I've also been a Fedora packager for almost 18 years!

https://badges.fedoraproject.org/pngs/ivoted-f43.png

028/100 of #100DaysToOffload

2025 blog review

Posted by Kushal Das on 2026-01-06 08:34:20 UTC

After 2005, 2025 was again a year in which I wrote only 8 blog posts. The year was difficult in many different ways, but from September things became a bit better. I could not do a lot of the things I thought I would do, or rather had promised to do.

I am hoping to catch up on those promises in the coming months. That includes not only blog posts on the various things I am writing and building, but also a huge backlog of photos to work on and publish.

New badge: Fedora 47 Change Accepted !

Posted by Fedora Badges on 2026-01-06 06:19:56 UTC
Fedora 47 Change Accepted: You got a "Change" accepted into the Fedora 47 Change list.

New badge: Fedora 46 Change Accepted !

Posted by Fedora Badges on 2026-01-06 06:15:56 UTC
Fedora 46 Change Accepted: You got a "Change" accepted into the Fedora 46 Change list.

New badge: Fedora 45 Change Accepted !

Posted by Fedora Badges on 2026-01-06 06:14:25 UTC
Fedora 45 Change Accepted: You got a "Change" accepted into the Fedora 45 Change list.

New badge: Fedora 44 Change Accepted !

Posted by Fedora Badges on 2026-01-06 06:13:21 UTC
Fedora 44 Change Accepted: You got a "Change" accepted into the Fedora 44 Change list.

New badge: Fedora 43 Change Accepted !

Posted by Fedora Badges on 2026-01-06 06:12:31 UTC
Fedora 43 Change Accepted: You got a "Change" accepted into the Fedora 43 Change list.

DEADLINE 2026-01-07: Fedora Linux 43 FESCo Elections

Posted by Fedora Community Blog on 2026-01-05 14:39:09 UTC

Voting is currently open for the Fedora Engineering Steering Committee (FESCo). You have approximately 2 days and 9 hours remaining to participate.

DEADLINE: 2026-01-07 at 23:59:59 UTC
VOTE HERE: https://elections.fedoraproject.org/about/f43-fesco

Please ensure your ballot for the Fedora Linux 43 FESCo Elections is cast before the cutoff.

The post DEADLINE 2026-01-07: Fedora Linux 43 FESCo Elections appeared first on Fedora Community Blog.

Free Software Activities for 2025

Posted by Jonathan McDowell on 2026-01-05 07:57:19 UTC

Given we’ve entered a new year it’s time for my annual recap of my Free Software activities for the previous calendar year. For previous years see 2019, 2020, 2021, 2022, 2023 + 2024.

Conferences

My first conference of the year was FOSDEM. I’d submitted a talk proposal about system attestation in production environments for the attestation devroom, but they had a lot of good submissions, and mine was a bit more “this is how we do it” rather than “here’s some neat Free Software that does it”. I’m still trying to work out how to make some of the bits we do more open, but the problem is that a lot of the neat stuff is about taking internal knowledge of what should be running and making sure that’s the case; if you abstract that away, what you end up with is a toolkit that still needs a lot of work before it produces something useful.

I had more luck at DebConf25, where I gave a talk (Don’t fear the TPM) trying to explain how TPMs could be useful in a Debian context. Naturally, the comments section descended into a discussion about UEFI Secure Boot, which is a separate, if related, thing. DebConf also featured the usual catch-up with fellow team members, hanging out with folk I hadn’t seen in ages, and generally feeling a bit more invigorated about Debian.

Other conferences I considered, but couldn’t justify, were All Systems Go! and the Linux Plumbers Conference. I’ve no doubt both would have had a bunch of interesting and relevant talks + discussions, but not enough this year.

I’m going to have to miss FOSDEM this year, due to travel later in the month, and I’m uncertain if I’m going to make DebConf (for a variety of reasons). That means I don’t have a Free Software conference planned for 2026. Ironically FOSSY moving away from Portland makes it a less appealing option (I have Portland friends it would be good to visit). Other than potential Debian MiniConfs, anything else European I should consider?

Debian

I continue trying to keep RetroArch in shape: 1.22.2+dfsg-1 (and, shortly after, 1.22.2+dfsg-2 - git-buildpackage in trixie seems stricter about Build-Depends existing in the outside environment, and I keep forgetting that I need Build-Depends-Arch and Build-Depends-Indep to be pretty much the same, with a minimal Build-Depends that has just enough for the clean target) got uploaded in December, and 1.20.0+dfsg-1, 1.20+dfsg-2 + 1.20+dfsg-3 were all uploaded earlier in the year. retroarch-assets had 1.20.0+dfsg-1 uploaded back in April. I need to find some time to get 1.22.0 packaged. libretro-snes9x got updated to 1.63+dfsg-1.
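
For readers who don't package for Debian, here is a generic illustration of the split he describes (an invented debian/control fragment with placeholder package names, not RetroArch's actual one): Build-Depends stays minimal, just enough for the clean target, while the real build dependencies live in the Arch and Indep fields.

    # Illustrative debian/control source stanza; package names are placeholders.
    Source: example
    # Minimal: just enough to run the clean target.
    Build-Depends: debhelper-compat (= 13)
    # The real build dependencies, split between build-arch and build-indep:
    Build-Depends-Arch: libsdl2-dev, libvulkan-dev
    Build-Depends-Indep: python3-docutils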

sdcc saw 4.5.0+dfsg-1, 4.5.0+dfsg-2, 4.5.0+dfsg-3 (I love major GCC upgrades) and 4.5.0+dfsg-4 uploads. There’s an outstanding bug around a LaTeX error when building the manual, but this turns out to be a bug in the 2.5 RC of LyX. Huge credit to Tobias Quathamer for engaging with this, and to Pavel Sanda + Jürgen Spitzmüller from the LyX upstream for figuring out the issue and a fix.

Pulseview saw 0.4.2-4 uploaded to fix issues with the GCC 15 + CMake upgrades. I should probably chase the sigrok upstream about new releases; I think there are a bunch of devices that have gained support in git without seeing a tagged release yet.

I did an Electronics Team upload for gputils 1.5.2-2 to fix compilation with GCC 15.

While I don’t do a lot with storage devices these days if I can help it, I still pay a little bit of attention to sg3-utils. That resulted in 1.48-2 and 1.48-3 uploads in 2025.

libcli got a 1.10.7-3 upload to deal with the libcrypt-dev split out.

Finally, I got more up-to-date versions of libtorrent (0.15.7-1) and rtorrent (also 0.15.7-1) uploaded to experimental. There’s a ppc64el build failure in libtorrent, but having asked on debian-powerpc, this looks like a flaky test or flaky code, and I should probably go ahead and upload to unstable.

I sponsored some uploads for Michel Lind - the initial uploads of plymouth-theme-hot-dog, and the separated out pykdumpfile package.

Recognising that I wasn’t contributing usefully to the Data Protection Team, I set about trying to resign in an orderly fashion - see Andreas’ call for volunteers that went out in the last week. Shout out to Enrico for pointing out in the past that we should gracefully step down from things we’re not actually managing to do, to avoid the perception that it’s all fine and no one else needs to step up. It took me too long to act on that.

The Debian keyring team continues to operate smoothly, maintaining our monthly release cadence with a 3-month rotation that keeps all team members familiar with the process and ensures their setups are still operational (especially important after Debian releases). I handled the 2025.03.23, 2025.06.24, 2025.06.27, 2025.09.18, 2025.12.08 + 2025.12.26 pushes.

Linux

TPM-related fixes were the theme of my kernel contributions in 2025, all within a work context. Some were just cleanups, but several fixed real problems that were biting us. I’ve also tried to be more proactive about reviewing diffs in the TPM subsystem; it feels like a useful way to contribute, as well as making me pay more active attention to what’s going on there.

Personal projects

I did some work on onak, my OpenPGP keyserver. That resulted in a 0.6.4 release, mainly driven by fixes for building with more recent CMake + GCC versions in Debian. I’ve got a set of changes that should add RFC9580 (v6) support, but there aren’t a lot of test keys out there at present for making sure I’m handling things properly. Equally, there’s a plan to remove Berkeley DB from Debian, which I’m completely down with, but it means I need a new primary backend. I’ve got a draft of LMDB support to replace it, but I need to go back and confirm I’ve got all the important bits implemented before publishing it and committing to a DB layout. I’d also like to add SQLite support as an option, but that needs some thought about taking proper advantage of its features, rather than just treating it as a key-value store.

(I know everyone likes to hate on OpenPGP these days, but I continue to be interested by the whole web-of-trust piece of it, which nothing else I’m aware of offers.)

That about wraps up 2025. Nothing particularly earth-shaking in there; more a case of continuing to tread water on the various things I’m involved in. I highly doubt 2026 will be much different, but I think that’s OK. I scratch my own itches, and if that helps out other folk too then that’s lovely, but it’s not the primary goal.

Happy New War!

Posted by Avi Alkalay on 2026-01-04 13:46:00 UTC

Russia invading Ukraine was the precursor, but now, with the US assaulting Venezuela for oil, the law of the strongest is openly re-established, as it was centuries ago. Eighty years of the UN and diplomacy between nations have gone down the drain.

The problem is what comes next, with a new world order. This emergency kit was recently published for the French population by its Ministry of the Interior. You worry when a battery-powered radio shows up in the kit.

My French friend tells me that this alert came right after similar alerts issued by the Swedish and Finnish governments, countries that are geographically closer to Putin's Russia.

Britain's MI6, too, says it is "operating in a zone between war and peace".

I think we can already expect the world and its geopolitics to be an unexpectedly different place within 5 or 10 years. The sentence is deliberately incongruous, to express my perplexity. And sadness.

We could even run a betting pool on the next invasion. Denmark and its Greenland is my hottest bet. And there is also Taiwan, by China, in the rumours.

Also on my Facebook and LinkedIn.

What is a PC compatible?

Posted by Matthew Garrett on 2026-01-04 03:11:36 UTC

Wikipedia says “An IBM PC compatible is any personal computer that is hardware- and software-compatible with the IBM Personal Computer (IBM PC) and its subsequent models”. But what does this actually mean? The obvious literal interpretation is that, for a device to be PC compatible, all software originally written for the IBM 5150 must run on it. Is this a reasonable definition? Is it one that any modern hardware can meet?

Before we dig into that, let’s go back to the early days of the x86 industry. IBM had launched the PC built almost entirely around off-the-shelf Intel components, and shipped full schematics in the IBM PC Technical Reference Manual. Anyone could buy the same parts from Intel and build a compatible board. They’d still need an operating system, but Microsoft was happy to sell MS-DOS to anyone who’d turn up with money. The only thing stopping people from cloning the entire board was the BIOS, the component that sat between the raw hardware and much of the software running on it. The concept of a BIOS originated in CP/M, an operating system originally written in the 70s for systems based on the Intel 8080. At that point in time there was no meaningful standardisation - systems might use the same CPU but otherwise have entirely different hardware, and any software that made assumptions about the underlying hardware wouldn’t run elsewhere. CP/M’s BIOS was effectively an abstraction layer, a set of code that could be modified to suit the specific underlying hardware without needing to modify the rest of the OS. As long as applications only called BIOS functions, they didn’t need to care about the underlying hardware and would run on all systems that had a working CP/M port.

By 1979, boards based on the 8086, Intel’s successor to the 8080, were hitting the market. The 8086 wasn’t machine code compatible with the 8080, but 8080 assembly code could be assembled to 8086 instructions to simplify porting old code. Despite this, the 8086 version of CP/M was taking some time to appear, and a company called Seattle Computer Products started producing a new OS closely modelled on CP/M and using the same BIOS abstraction layer concept. When IBM started looking for an OS for their upcoming 8088 (an 8086 with an 8-bit data bus rather than a 16-bit one) based PC, a complicated chain of events resulted in Microsoft paying a one-off fee to Seattle Computer Products, porting their OS to IBM’s hardware, and the rest is history.

But one key part of this was that despite what was now MS-DOS existing only to support IBM’s hardware, the BIOS abstraction remained, and the BIOS was owned by the hardware vendor - in this case, IBM. One key difference, though, was that while CP/M systems typically included the BIOS on boot media, IBM integrated it into ROM. This meant that MS-DOS floppies didn’t include all the code needed to run on a PC - you needed IBM’s BIOS. To begin with, this wasn’t obviously a problem in the US market since, in a way that seems extremely odd from where we are now in history, it wasn’t clear that machine code was actually copyrightable. In 1982 Williams v. Artic determined that it could be, even if fixed in ROM - this ended up having broader industry impact in Apple v. Franklin, and it became clear that clone machines making use of the original vendor’s ROM code weren’t going to fly. Anyone wanting to make hardware compatible with the PC was going to have to find another way.

And here’s where things diverge somewhat. Compaq famously performed clean-room reverse engineering of the IBM BIOS to produce a functionally equivalent implementation without violating copyright. Other vendors, well, were less fastidious - they came up with BIOS implementations that either implemented a subset of IBM’s functionality, or didn’t implement all the same behavioural quirks, and compatibility was restricted. In this era several vendors shipped customised versions of MS-DOS that supported different hardware (which you’d think wouldn’t be necessary given that’s what the BIOS was for, but still), and the set of PC software that would run on their hardware varied wildly. This was the era where vendors even shipped systems based on the Intel 80186, an improved 8086 that was both faster than the 8086 at the same clock speed and was also available at higher clock speeds. Clone vendors saw an opportunity to ship hardware that outperformed the PC, and some of them went for it.

You’d think that IBM would have immediately jumped on this as well, but no - the 80186 integrated many components that were separate chips on 8086 (and 8088) based platforms, but crucially didn’t maintain compatibility. As long as everything went via the BIOS this shouldn’t have mattered, but there were many cases where going via the BIOS introduced performance overhead or simply didn’t offer the functionality that people wanted, and since this was the era of single-user operating systems with no memory protection, there was nothing stopping developers from just hitting the hardware directly to get what they wanted. Changing the underlying hardware would break them.

And that’s what happened. IBM was the biggest player, so people targeted IBM’s platform. When BIOS interfaces weren’t sufficient they hit the hardware directly - and even if they weren’t doing that, they’d end up depending on behavioural quirks of IBM’s BIOS implementation. The market for DOS-compatible but not PC-compatible mostly vanished, although there were notable exceptions - in Japan the PC-98 platform achieved significant success, largely as a result of the Japanese market being pretty distinct from the rest of the world at that point in time, but also because it actually handled Japanese at a point where the PC platform was basically restricted to ASCII or minor variants thereof.

So, things remained fairly stable for some time. Underlying hardware changed - the 80286 introduced the ability to access more than a megabyte of address space and would promptly have broken a bunch of things except IBM came up with an utterly terrifying hack that bit me back in 2009, and which ended up sufficiently codified into Intel design that it was one mechanism for breaking the original XBox security. The first 286 PC even introduced a new keyboard controller that supported better keyboards but which remained backwards compatible with the original PC to avoid breaking software. Even when IBM launched the PS/2, the first significant rearchitecture of the PC platform with a brand new expansion bus and associated patents to prevent people cloning it without paying off IBM, they made sure that all the hardware was backwards compatible. For decades, PC compatibility meant not only supporting the officially supported interfaces, it meant supporting the underlying hardware. This is what made it possible to ship install media that was expected to work on any PC, even if you’d need some additional media for hardware-specific drivers. It’s something that still distinguishes the PC market from the ARM desktop market. But it’s not as true as it used to be, and it’s interesting to think about whether it ever was as true as people thought.

Let’s take an extreme case. If I buy a modern laptop, can I run 1981-era DOS on it? The answer is clearly no. First, modern systems largely don’t implement the legacy BIOS. The entire abstraction layer that DOS relies on isn’t there, having been replaced with UEFI. When UEFI first appeared it generally shipped with a Compatibility Support Module (CSM), a layer that would translate BIOS interrupts into UEFI calls, allowing vendors to ship hardware with more modern firmware and drivers without having to duplicate them to support older operating systems [1]. Is this system PC compatible? By the strictest of definitions, no.

Ok. But the hardware is broadly the same, right? There are projects like CSMWrap that allow a CSM to be implemented on top of stock UEFI, so everything that hits BIOS should work just fine. And well yes, assuming they implement the BIOS interfaces fully, anything using the BIOS interfaces will be happy. But what about stuff that doesn’t? Old software is going to expect that my Sound Blaster is going to be on a limited set of IRQs and is going to assume that it’s going to be able to install its own interrupt handler and ACK those on the interrupt controller itself and that’s really not going to work when you have a PCI card that’s been mapped onto some APIC vector, and also if your keyboard is attached via USB or SPI then reading it via the CSM will work (because it’s calling into UEFI to get the actual data) but trying to read the keyboard controller directly won’t [2], so you’re still actually relying on the firmware to do the right thing but it’s not, because the average person who wants to run DOS on a modern computer owns three fursuits and some knee length socks and while you are important and vital and I love you all you’re not enough to actually convince a transglobal megacorp to flip the bit in the chipset that makes all this old stuff work.

But imagine you are, or imagine you’re the sort of person who (like me) thinks writing their own firmware for their weird Chinese Thinkpad knockoff motherboard is a good and sensible use of their time - can you make this work fully? Haha no of course not. Yes, you can probably make sure that the PCI Sound Blaster that’s plugged into a Thunderbolt dock has interrupt routing to something that is absolutely no longer an 8259 but is pretending to be so you can just handle IRQ 5 yourself, and you can probably still even write some SMM code that will make your keyboard work, but what about the corner cases? What if you’re trying to run something built with IBM Pascal 1.0? There’s a risk that it’ll assume that trying to access an address just over 1MB will give it the data stored just above 0, and now it’ll break. It’d work fine on an actual PC, and it won’t work here, so are we PC compatible?
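
For the curious, the wraparound he is referring to is plain real-mode address arithmetic: a physical address is segment * 16 + offset, and the 8086's 20-bit address bus silently dropped the carry, so addresses just past 1MB wrapped around to the bottom of memory. A minimal sketch of the textbook behaviour (not code from any particular program):

    # Real-mode segment:offset arithmetic. The 8086 dropped the 21st bit,
    # so FFFF:0010 wrapped to 0; CPUs with wider buses (A20 enabled) don't.
    def phys_8086(segment, offset):
        return ((segment << 4) + offset) & 0xFFFFF  # 20-bit wraparound

    def phys_wide(segment, offset):
        return (segment << 4) + offset              # no wrap on later CPUs

    print(hex(phys_8086(0xFFFF, 0x0010)))  # 0x0: the data "just above 0"
    print(hex(phys_wide(0xFFFF, 0x0010)))  # 0x100000: wrap-reliant code breaks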

That’s a very interesting abstract question and I’m going to entirely ignore it. Let’s talk about PC graphics [3]. The original PC shipped with two different optional graphics cards - the Monochrome Display Adapter and the Color Graphics Adapter. If you wanted to run games you were doing it on CGA, because MDA had no mechanism to address individual pixels so you could only render full characters. So, even on the original PC, there was software that would run on some hardware but not on other hardware.

Things got worse from there. CGA was, to put it mildly, shit. Even IBM knew this - in 1984 they launched the PCjr, intended to make the PC platform more attractive to home users. As well as maybe the worst keyboard ever to be associated with the IBM brand, IBM added some new video modes that allowed displaying more than 4 colours on screen at once [4], and software that depended on that wouldn’t display correctly on an original PC. Of course, because the PCjr was a complete commercial failure, it wouldn’t display correctly on any future PCs either. This is going to become a theme.

There’s never been a properly specified PC graphics platform. BIOS support for advanced graphics modes [5] ended up specified by VESA rather than IBM, and even then getting good performance involved hitting hardware directly. It wasn’t until Microsoft specced DirectX that anything was broadly usable even if you limited yourself to Microsoft platforms, and this was an OS-level API rather than a hardware one. If you stick to BIOS interfaces then CGA-era code will work fine on graphics hardware produced up until the 20-teens, but if you were trying to hit CGA hardware registers directly then you’re going to have a bad time. This isn’t even a new thing - even if we restrict ourselves to the authentic IBM PC range (and ignore the PCjr), by the time we get to the Enhanced Graphics Adapter we’re not entirely CGA compatible. Is an IBM PC/AT with EGA PC compatible? You’d likely say “yes”, but there’s software written for the original PC that won’t work there.

And, well, let’s go even more basic. The original PC had a well defined CPU frequency and a well defined CPU that would take a well defined number of cycles to execute any given instruction. People could write software that depended on that. When CPUs got faster, some software broke. This resulted in systems with a Turbo Button - a button that would drop the clock rate to something approximating the original PC so stuff would stop breaking. It’s fine, we’d later end up with Windows crashing on fast machines because hardware details will absolutely bleed through.

So, what’s a PC compatible? No modern PC will run the DOS that the original PC ran. If you try hard enough you can get it into a state where it’ll run most old software, as long as it doesn’t have assumptions about memory segmentation or your CPU or want to talk to your GPU directly. And even then it’ll potentially be unusable or crash because time is hard.

The truth is that there’s no way we can technically describe a PC Compatible now - or, honestly, ever. If you sent a modern PC back to 1981 the media would be amazed and also point out that it didn’t run Flight Simulator. “PC Compatible” is a socially defined construct, just like “Woman”. We can get hung up on the details or we can just chill.


  [1] Windows 7 is entirely happy to boot on UEFI systems, except that it relies on being able to use a BIOS call to set the video mode during boot, which has resulted in things like UEFISeven to make that work on modern systems that don’t provide BIOS compatibility.

  [2] Back in the 90s and early 2000s, operating systems didn’t necessarily have native drivers for USB input devices, so there was hardware support for trapping OS accesses to the keyboard controller and redirecting them into System Management Mode, where some software invisible to the OS would speak to the USB controller and then fake a response. Anyway, that’s how I made a laptop that could boot unmodified MacOS X.

  [3] (my name will not be Wolfwings Shadowflight)

  [4] Yes, yes, OK: 8088 MPH demonstrates that if you really want to you can do better than that on CGA.

  [5] And by advanced we’re still talking about the 90s, don’t get excited.

Looking for Mentors for Google Summer of Code 2026

Posted by Felipe Borges on 2026-01-02 12:39:47 UTC

It is once again that pre-GSoC time of year when I go around asking GNOME developers for project ideas they are willing to mentor during Google Summer of Code. GSoC is approaching fast, and we should aim to have a preliminary list of project ideas by the end of January.

Internships offer an opportunity for new contributors to join our community and help us build the software we love.

@Mentors, please submit new proposals in our Project Ideas GitLab repository.

Proposals will be reviewed by the GNOME Internship Committee and posted at https://gsoc.gnome.org/2026. If you have any questions, please don’t hesitate to contact us.

Open source trends for 2026

Posted by Ben Cotton on 2026-01-01 12:00:00 UTC

A new year is here and that means it’s time to clean up the confetti from last night’s party. It’s also time for my third annual trend prediction post. After a solid 2024, I did okay-ish in 2025. I am not feeling particularly confident about this year’s predictions in large part because so much depends on the direction of broader economic and political trends, which are far outside my expertise. But this makes a good segue into the first trend on my radar.

Geopolitics fracturing global cooperation

The US government proved to be an unreliable partner in a lot of ways in 2025, and I see little reason that will change in 2026. With capricious policy driven by retribution and self-interest, Europe has become more wary of American tech firms. This has led to efforts to develop a Europe-based tech stack and a greater focus on where data is stored (and what laws govern access to that data). Open source projects are somewhat insulated from this, but there are two areas where we’ll see effects.

First, US-based conferences will have an increasingly domestic attendee list. With anecdotes of foreign visitors held in detention for weeks and visa issuance contingent on not saying mean things about the president, it’s little wonder that fewer people are willing to risk travel to the United States. Global projects, like the Python Software Foundation, that have their flagship conference in the US may face financial challenges from a drop in attendance. The European versions of Linux Foundation events will become the main versions (arguably that’s already true). FOSDEM will strain the limits of its venue, even more than it already does.

The other effect we may see is a sudden prohibition against individuals or nations participating in projects. Projects with US-based backers — whether company or foundation — already have to comply with US sanctions, the Entity List, and other restrictions. It’s conceivable that a nation, company, or individual who upsets the White House will find themselves subject to some kind of ban which could force projects to restrict participation. Whether these restrictions apply to open source is unclear, but I would expect organizations with something to lose to take a cautious approach. Projects with no legal entity will likely take a “how will you stop me?” approach.

A thaw in the job market

This section feels the most precarious, since it depends almost entirely on the macroeconomic conditions and what happens with generative AI. With the latter, I think my prediction of a leveling off in 2025 was just too soon. In 2026, we’ll see more recognition of where generative AI is actually useful and where it isn’t. Companies won’t fire thousands of workers to replace them with AI agents only to discover that the AI is…suboptimal. That’s not to say that AI will disappear, but the approach will be more measured.

With interest rates dropping, companies may feel more confident in trying to grow instead of cutting costs. Supply chain issues and Cyber Resilience Act (CRA) requirements (more on those in a moment) will drive a need for open source expertise specifically. Anecdotally, I’ve seen what seems to be an upward trend in hiring for open source roles in the last part of 2025 and I think that continues in 2026. It won’t be the huge growth we saw in the early part of the decade, but it will be better than the terrible job market we’ve seen in the last year or two.

Supply chain and compliance

Oh look: “software supply chain” is on my trends list. That’s never happened before, except for every time. It won’t stop being an issue in 2026, though. Volunteer maintainers will continue to say “I am not a supplier” as companies make ever-increasing demands for information and support. September 11 marks the first significant deadline; companies must have a mechanism for reporting actively exploited vulnerabilities. This means they’ll be pushing on their upstream projects for that information.

Although open source projects don’t have obligations under the CRA, they’ll have an increased request burden to deal with. Unfortunately, I think this means that developing a process for dealing with the request deluge may distract from efforts to improve the project’s security. It may also drive more maintainers to give up.

This post’s featured photo by Jason Coudriet on Unsplash.

The post Open source trends for 2026 appeared first on Duck Alignment Academy.