
Fedora People

Pagure.io Migration

Posted by Fedora Infrastructure Status on 2025-12-03 21:00:00 UTC

Planned Outage - pagure.io migration - 2025-12-03 21:00-23:00 UTC

We will be migrating pagure.io to a new network in our rdu3 datacenter. All services on pagure.io will be taken down, all data synced, and then services will be restored on the new server/datacenter. IP addresses for …

Useful Readline Shortcuts in Bash

Posted by Rénich Bon Ćirić on 2025-11-28 16:00:00 UTC

Today I remembered how useful the Readline shortcuts are.

I was typing out a really long command and made a mistake at the end. Instead of deleting everything, I used Ctrl + A and Ctrl + E to jump around, and Ctrl + W to delete words. Sweet! Readline is the library that makes Bash so powerful. With its shortcuts you edit lines like a pro, no mouse needed. Honestly, once you learn them, you can't live without them.

Note

Readline ships with Bash by default. If you use another shell, behavior may vary.

Basic shortcuts

Ctrl + A:
Go to the beginning of the line.
Ctrl + E:
Go to the end of the line.
Ctrl + B:
Move the cursor one character to the left.
Ctrl + F:
Move the cursor one character to the right.
Ctrl + H:
Delete the previous character (like Backspace).
Ctrl + D:
Delete the character under the cursor (like Delete).

Tip

Use Ctrl + A and Ctrl + E to jump quickly to the beginning or end of the line.

Advanced editing

Ctrl + W:
Delete the previous word.
Alt + D:
Delete the next word.
Ctrl + K:
Delete from the cursor to the end of the line.
Ctrl + U:
Delete from the beginning of the line to the cursor.
Ctrl + Y:
Paste the last thing you deleted (yank).

Warning

Ctrl + U deletes everything before the cursor, so watch out for losing long commands! The good news is you can bring it back with Ctrl + Y.

History

Ctrl + P:
Previous command in the history.
Ctrl + N:
Next command in the history.
Ctrl + R:
Reverse search through the history (type to search).
Ctrl + G:
Exit the search.

Tip

Ctrl + R is great for finding old commands. Type part of one and press Ctrl + R repeatedly to cycle through the matches.

Completion and more

Tab:
Autocomplete commands, file names, and more.
Alt + ?:
Show the possible completions.
Ctrl + L:
Clear the screen.
Ctrl + C:
Cancel the current command.

DANGER!

Ctrl + C kills the current process. Useful, but don't use it in the middle of something important without saving first.

Note

These shortcuts work in most shells that use Readline, such as Bash.
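Readline can also be customized per user. As a minimal sketch (the option choices below are illustrative suggestions, not defaults you are missing), a ~/.inputrc might look like this:

```
# ~/.inputrc -- read by Readline on startup
# Case-insensitive Tab completion
set completion-ignore-case on
# List all matches immediately when completion is ambiguous
set show-all-if-ambiguous on
# Up/Down arrows search the history for what you have already typed
"\e[A": history-search-backward
"\e[B": history-search-forward
```

Open a new shell (or press Ctrl + X followed by Ctrl + R to re-read the file) to pick up the changes.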

Conclusion

Readline makes the terminal much more efficient. Practice these shortcuts and you'll see how much they speed up your workflow. Honestly, it's a killer tool.

Tip

For more, read the readline man page or visit sites like gnu.org.

A Bitcoin Recovery (or Not)

Posted by Avi Alkalay on 2025-11-28 12:39:33 UTC

For anyone who wants to invest in cryptocurrencies, and has the stomach for it, and has time to wait, and knows how to do it, and knows how to hold for the long term, now may be a good moment to start: the price of Bitcoin has fallen considerably in recent days and seems to be resuming its growth (but nobody knows the future).

Remember that Bitcoin is a scarce asset, that it is expected to appreciate (or not), and that several countries already use it as a store of value. Exactly like gold. You may not understand it, you may think it's a useless fad, or that mining it wastes a lot of energy (gold has all of these characteristics too), but the undeniable fact is that there is a thriving worldwide market that will immediately and legally pay the price in Reais shown in the image for 1 bitcoin, if you put it up for sale. Again, exactly like gold and other precious metals and stones, with the single difference that Bitcoin only needs the Internet to exist and be transported.

Bitcoin itself will never replace money, in exactly the same way that gold is not accepted at the supermarket checkout: it has to be sold/converted before use. Other cryptocurrencies have different dynamics, proposals, and purposes, some of them very interesting. But above all you should avoid memecoins, since they have no intrinsic value and no utility whatsoever.

Collecting or hoarding watches, art, jewelry, stamps, or cryptocurrencies are very similar practices from a psychological point of view. They all trigger some different value in the human imagination: status through beauty, intellectual status through historical value, or status for status's sake. None of it puts food on the table, saves you from an apocalyptic cataclysm, or gives you peace of mind. But they are all things the human psyche values in some way. Go figure.

Also on my LinkedIn.

💎 PHP version 8.5 is released!

Posted by Remi Collet on 2025-11-21 07:52:00 UTC

RC5 was GOLD, so version 8.5.0 GA was just released, on the planned date.

A great thanks to Volker Dusch, Daniel Scherzer and Pierrick Charron, our Release Managers, to all developers who have contributed to this new, long-awaited version of PHP, and to all testers of the RC versions who have allowed us to deliver a good-quality version.

RPMs are available in the php:remi-8.5 module for Fedora and Enterprise Linux ≥ 8 and as Software Collection in the remi-safe repository.

Read the PHP 8.5.0 Release Announcement and its Addendum for new features and detailed description.

For the record, these packages represent 6 months of work on my part: starting in July with Software Collections of the alpha versions, in September with module streams of the RC versions, plus a lot of work on extensions to provide a mostly complete PHP 8.5 stack.

Installation: read the Repository configuration and choose installation mode, or follow the Configuration Wizard instructions.

Replacement of default PHP by version 8.5 installation (simplest):

Fedora (dnf 5):

dnf install https://rpms.remirepo.net/enterprise/remi-release-$(rpm -E %fedora).rpm
dnf module reset php
dnf module enable php:remi-8.5
dnf install php

Enterprise Linux (dnf 4):

dnf install https://rpms.remirepo.net/enterprise/remi-release-$(rpm -E %rhel).rpm
dnf module switch-to php:remi-8.5/common

Parallel installation of version 8.5 as Software Collection (recommended for tests):

yum install php85

To be noted:

  • EL-10 RPMs are built using RHEL-10.0
  • EL-9 RPMs are built using RHEL-9.6
  • EL-8 RPMs are built using RHEL-8.10
  • This version will also be the default version in Fedora 44
  • Many extensions are already available, see the PECL extension RPM status page.

Information, read:

Base packages (php)

Software Collections (php85)

Upgrade of Copr servers

Posted by Fedora Infrastructure Status on 2025-11-26 07:00:00 UTC

We're updating Copr servers to F43

This outage impacts the copr-frontend and the copr-backend.

Pragmatic Bookshelf half-off sale!

Posted by Ben Cotton on 2025-11-25 21:49:05 UTC

Getting a start on your holiday shopping this weekend? Give the gift of knowledge to yourself or the people you love! Use promo code save50 at pragprog.com through December 1 to save 50% on every ebook title (except The Pragmatic Programmers).

Not sure what you should get? If you don’t have a copy of Program Management for Open Source Projects, now’s a great time to get one. I’ve been a technical reviewer for a few other books as well:

I’ve read (or have in my stack to read) other books as well:

With hundreds of titles to choose from, there’s something for you and the techies in your life.

The post Pragmatic Bookshelf half-off sale! appeared first on Duck Alignment Academy.

How to Migrate from Windows to GNU/Linux and Ditch That Junk for Good

Posted by Rénich Bon Ćirić on 2025-11-25 16:00:00 UTC

Today I remembered the time a buddy asked me for help with his laptop, riddled with viruses and dead slow. Honestly, Windows is like a jealous girlfriend: it controls everything and leaves you no freedom. GNU/Linux, on the other hand, is open, free, and really damn good. Let's migrate step by step so you don't get lost along the way.

Preparation: Make a full backup and shut Windows down properly

Before starting, save everything important. Windows will leave you stranded, as usual, so don't do yourself dirty: back up your stuff.

Personal files:
Copy documents, photos, music, and videos to an external disk or the cloud. Use tools like rsync or just copy and paste. Verify everything is intact afterwards.

Warning

Watch out! If you make a mistake while partitioning, you can wipe your entire disk. If you don't have a backup, you're toast.

Disable "Fast Startup":

Windows is so sneaky that when you click "Shut down", it actually hibernates so it can boot faster. This leaves the disks "locked" and Linux won't be able to write to them.

  • Go to Control Panel > Power Options > Choose what the power buttons do > Disable fast startup.

Tip

If your computer is new, go into the BIOS/UEFI and disable Secure Boot if it gives you trouble, although Fedora usually works fine with it enabled.

Choosing a distro: Fedora and alternatives

GNU/Linux has distros for every taste. If you're a beginner, pick one that's stable and has a good community; don't overcomplicate things.

Fedora:
Modern, with regular updates and RPM. Easy to install, with great support for new hardware. I use it because it's reliable and the Fedora México community is awesome. Join us on Telegram: t.me/fedoramexico.
OpenMandriva:
RPM-based and friendly for new users. It has a simple graphical installer and good documentation.
OpenSUSE:
RPM-based; its Tumbleweed edition is a rolling release, ideal if you want stability with frequent updates. It has YaST for easy configuration.

Arch Linux and Gentoo are for masochists or very advanced users; honestly, avoid them if you're just starting out, or you'll end up hating life.

Installation: Detailed step by step

  1. Create a bootable USB: Download the Fedora ISO from fedoraproject.org. Use Rufus or Etcher to write it to a USB stick of at least 8 GB.

  2. Boot from the USB: Restart the PC, enter the BIOS/UEFI (with keys like F2, F10, or Del) and change the boot order to put the USB first.

    Tip

    Before installing, use the "Live" mode for a while. Check that WiFi, sound, and Bluetooth work. If everything looks good, go ahead and install.

  3. Install: The graphical installer guides you. Select your language and time zone. For partitioning:
    • Wipe Windows if you don't need it: assign all the space to / (root). Good riddance, Microsoft!
    • Dual boot: create separate partitions for Windows and Linux. Use at least 50 GB for Linux, more if you game.

  4. User: Create a regular user; don't use root for everyday work. Pick a strong password. Configure sudo for elevated permissions.

  5. Post-install: Update the system. Open a terminal and hit the gas:

    sudo dnf update
    

Initial setup: What nobody tells you (codecs and repos)

Fedora is very strict about free software. That's cool, but it means that out of the box you can't watch MP4 videos or listen to MP3s. Seriously, right? Let's fix it.

Enable RPM Fusion:

This community repository has all the "proprietary" stuff Fedora doesn't ship (codecs, NVIDIA drivers, Steam).

sudo dnf install https://mirrors.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm https://mirrors.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-$(rpm -E %fedora).noarch.rpm
Multimedia codecs:

So you're not left unable to watch your shows or listen to your tunes:

sudo dnf groupupdate multimedia --setop="install_weak_deps=False" --exclude=PackageKit-gstreamer-plugin
sudo dnf groupupdate sound-and-video
NVIDIA drivers:

If you have an NVIDIA graphics card, this is a must unless you want sluggish graphics:

sudo dnf install akmod-nvidia
Flatpak and Flathub:

To install Spotify, Discord, Zoom, and other proprietary apps, use Flathub. Fedora already ships with Flatpak support; just add the repo:

flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
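With the remote in place, installing and running an app is a couple of commands. As an illustrative sketch (com.spotify.Client is Spotify's application ID on Flathub; any other app works the same way):

```
# look up an app on Flathub
flatpak search spotify

# install it by application ID from the flathub remote
flatpak install -y flathub com.spotify.Client

# launch it
flatpak run com.spotify.Client
```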

Alternatives to Windows software

Look for free-software equivalents of your favorite apps.

Productivity:
Microsoft Office -> LibreOffice (opens DOCX and XLSX without trouble). Outlook -> Thunderbird or Evolution for email.
Editing:
Photoshop -> GIMP (advanced image editing). Illustrator -> Inkscape (vector graphics).
Gaming:
Steam works with Proton for Windows games. Install Steam and enable Proton in its settings. It'll run like a champ!
Compatibility:
For apps with no alternative, use Wine or virtualization with VirtualBox.

Common problems and solutions

WiFi won't connect:
Check the drivers with lspci or lsusb. Sometimes you need to plug in an Ethernet cable first to download the proprietary driver (Broadcom tends to be a pain).
Dual boot won't start Windows:
If GRUB doesn't see Windows, run sudo os-prober and then regenerate the GRUB config with sudo grub2-mkconfig -o /boot/grub2/grub.cfg.

How to get help

The GNU/Linux community is very supportive. Here are some ways to ask for help:

Forums and Reddit:
Join r/linux or r/Fedora on Reddit. Ask questions and share experiences.
IRC:
Internet Relay Chat, real-time chat. Use a client like HexChat. For Fedora, connect to libera.chat, channel #fedora.
LUG (Linux User Group):
Local groups of Linux users. They organize meetups and workshops. In Mexico, look on lug.org.mx or meetup.com.
Telegram:
Communities like Fedora México on Telegram (t.me/fedoramexico).

Conclusion: Freedom and control

Migrating takes time, but it's worth it for the stability and the freedom. GNU/Linux gives you total control over your PC. Join communities like Reddit's r/linux, IRC, Matrix, or a LUG for help. Once you migrate, you never go back. Don't you think?

End of OpenID in Fedora

Posted by Fedora Community Blog on 2025-11-25 15:10:36 UTC

It’s finally here. We have an end date for OpenID in Fedora: 1st May 2026. You can see it on the banner on https://id.fedoraproject.org/openid, and it will be shown to you every time you try to authenticate with OpenID. This should give anybody still using OpenID authentication enough time to migrate to OpenID Connect.

We started our move to OpenID Connect and away from old OpenID authentication 4 years ago. It was a long road with plenty of obstacles along the way. We first ported all the apps we own in Fedora Infrastructure to OpenID Connect. That took time, but at least we had control over these applications.

After porting all our applications, we started to look at applications using OpenID authentication outside of the Fedora ecosystem. This proved really difficult, as those clients don't need to register with the Fedora Authentication System.

After some failures to contact the projects we did at least identify as using OpenID in Fedora, we decided that the best course of action was to separate out the OpenID authentication system (the service providing it is called Ipsilon, which we want to decommission as well). I spent the last two months working on separating OpenID authentication from OpenID Connect and SAML2. You can see the result on https://id.fedoraproject.org/openid.

This will now allow us to replace Ipsilon for both OpenID Connect and SAML2, and as most of the separation logic lives in proxies, we should have no issue reusing it for the new solution. This should resolve plenty of issues we are experiencing with the current authentication system and let us remove one service from the portfolio of services we own in Fedora Infrastructure. We are looking forward to a brighter future!

The post End of OpenID in Fedora appeared first on Fedora Community Blog.

Basic Vim Usage and Useful Tips

Posted by Rénich Bon Ćirić on 2025-11-24 16:00:00 UTC

Want to learn Vim?

Vim is a seriously powerful text editor that has been around for decades. Honestly, it looks complicated at first, but once you master it, your productivity skyrockets! In this article I'll cover the basics of Vim and some useful tips to get you started.

Note

Vim is an improved version of Vi, which ships with almost every Unix-like system. If you don't have it, install it with dnf -y install vim on Fedora or CentOS Stream (as root).

Vim modes

Vim has several modes, each for something specific:

Normal mode:
The default, for navigating and running commands.
Insert mode:
For typing text; enter it with i.
Visual mode:
For selecting; enter it with v.
Command mode:
For advanced stuff; enter it with :.

Tip

Press Esc at any time to return to Normal mode. It's your lifeline!

Killer Basic Commands

Here are the essential commands to get going:

  • :q - Quit (if there are no changes).
  • :wq - Save and quit.
  • :wqa - Save all open files and quit.
  • :q! - Quit without saving.
  • i - Insert before the cursor.
  • a - Insert after the cursor.
  • dd - Delete a line.
  • yy - Copy (yank) a line.
  • p - Paste.
  • u - Undo.
  • Ctrl+r - Redo.

Warning

Careful! Vim is case-sensitive. :Q is not :q.

Useful Tips

  1. Basic config: Create a ~/.vimrc with set number for line numbers or set autoindent for automatic indentation.
  2. Plugins: Add plugins with Vundle or vim-plug for more features, like better highlighting.
  3. Search and replace: :%s/old/new/g replaces every "old" with "new".
  4. Splits: :vsplit splits the window vertically, :split horizontally. Navigate with Ctrl+w plus a direction.
  5. Macros: Record with q plus a letter, replay with @ plus the letter.
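Tip 1 can be sketched as a small starter ~/.vimrc. All of these are standard Vim options; the specific values are just a suggestion:

```vim
" ~/.vimrc -- a minimal starting point
set number          " show line numbers
set autoindent      " keep the indentation of the previous line
set tabstop=4       " display tabs as 4 columns
set shiftwidth=4    " indent/outdent by 4 columns
set expandtab       " insert spaces instead of tab characters
set incsearch       " jump to matches while typing a search
set hlsearch        " highlight all search matches
syntax on           " enable syntax highlighting
```

Vim reads this file on startup, so just open a new Vim session to try it out.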

DANGER!

Don't edit important files without a backup. Vim doesn't save automatically.

Note

Practice with vimtutor, the tutorial that ships with Vim. Just run vimtutor.

Conclusion

Vim isn't just an editor, it's a tool that molds itself to your workflow! With practice, these commands become instinct. Honestly, patience is all you need to learn it.

Tip

Join communities like the Vim subreddit or other forums to share tips.

How do we keep apps maintained on Flathub? (or building a more respectful App Store)

Posted by Timothée Ravier on 2025-11-23 23:00:00 UTC

There have been a few discussions about what Flathub should do to push developers to keep their apps on the latest versions of the published runtimes. But most of those lack important details about how this would actually happen. I will not discuss in this post the technical means already in place to help developers keep their dependencies up to date; see the Flathub Safety: A Layered Approach from Source to User blog post instead.

The main thing to have in mind is that Flathub is not a commercial entity like other app stores. Right now, developers that put their apps on Flathub are (in the vast majority) not paid to do so and most apps are under an open source license.

So any discussion that starts with “developers should update to the latest runtime or have their apps removed” directly contradicts the social contract here (which is also in the terms of most open source licenses): You get something for free so don’t go around making demands unless you want to look like a jerk. We are not going to persuade overworked and generally volunteer developers to update their apps by putting pressure on them to do more work. It’s counter productive.

With that out of the way, how do we gently push developers to keep their apps up to date and using the latest runtime? Well, we can pay them. Flathub wants to set up a way to offer payments for applications, but unfortunately it's not ready yet. So in the meantime, the best option is to donate to the projects or developers working on those applications.

And make it very easy for users to do so.

Now we are in luck: this is exactly what some folks have been working on recently. Bazaar is a Flathub-first app store that makes it really easy to donate to the apps you have installed.

But we also need to make sure that the developers actually have something set up to get donations.

And this is where the flatpak-tracker project comes in. This project looks for donation links in a collection of Flatpaks and checks whether there is one and whether the website is still up. If not, it opens issues in the repo for tracking and fixing. It also checks whether those apps are using the latest runtimes and opens issues for that as well (FreeDesktop, GNOME, KDE).

If you want to help, you can take a look at that repo for apps you use and see if things need to be fixed. Then engage and suggest fixes upstream. Some of this work doesn't require complex technical skills, so it's a really good way to start contributing. This is probably one of the most direct ways to enable developers to receive money from their users, via donations.

Updating the runtime used by an app usually requires more work and more testing, but it’s a great way to get started and to contribute to your favorite apps. And this is not just about Flathub: updating a Qt5 app to run with Qt6, or a GNOME 48 app to 49, will help everyone using the app.

We want to build an App Store that is respectful of the time developers put into developing, submitting, publishing, testing and maintaining their apps.

We don’t want to replicate the predatory model of other app stores.

Will some apps be out of date sometimes? Probably, but I would rather have a sustainable community than an exploiting one.

blogiversary: 22 years

Posted by Kevin Fenzi on 2025-11-23 18:14:07 UTC

Just a quick post to note my blogiversary.

22 years ago in 2003 I posted my first entry here.

Back then I was running a very new version of something called WordPress, then switched to WordPress MU, then back to WordPress when multi-user support came back into core, and then finally to Nikola.

It may be that blogs are out of vogue these days, but I still find them nice for longer thoughts that seem way too busy for social media.

Podman: Basics and Creating a Container with systemd on CentOS Stream 10

Posted by Rénich Bon Ćirić on 2025-11-23 18:00:00 UTC

Hey there! Want to learn about Podman?

Podman is a daemonless container engine for developing, managing, and running OCI containers on Linux. Unlike Docker, it doesn't need a running daemon, which makes it more secure and efficient! It's compatible with Docker images and integrates with Kubernetes for cloud deployments.

Note

Podman runs rootless by default, improving security compared to Docker.

Warning

While Podman supports rootless containers, some advanced features, such as systemd integration, may require root access inside the container.

Killer Features

Daemonless:
Runs containers directly as your user, with no background services.
Rootless:
Runs containers without root, for extra security!
Docker compatibility:
Similar commands, so migrating is easy.
Kubernetes integration:
Generates YAML for clusters.
Image and container management:
Builds, inspects, and manages OCI images.
systemd support:
Runs containers with systemd for persistent services.

Example: a container with systemd

Let's create a CentOS Stream 10 container with systemd enabled and PostgreSQL installed! Make sure you have Podman installed.

# podman with systemd
## start a container
podman run -di --name cs10-systemd centos:stream10

## install systemd and postgresql
podman exec cs10-systemd dnf -y install systemd postgresql-server sudo

## clean the dnf cache
podman exec cs10-systemd dnf clean all

## commit to an image
podman commit -s cs10-systemd cs10-systemd

## remove the container
podman rm -f cs10-systemd

## run with systemd as PID 1
podman run -dt -p 127.0.0.1:5432:5432 --name=cs10-systemd localhost/cs10-systemd /usr/sbin/init

## initialize postgresql
podman exec cs10-systemd postgresql-setup --initdb

## enable and start postgresql
podman exec cs10-systemd systemctl enable --now postgresql

## create a user and a database
podman exec cs10-systemd sudo -u postgres createuser -dRS --no-replication renich
podman exec cs10-systemd sudo -u postgres createdb renich
podman exec cs10-systemd sudo -u postgres psql -c "ALTER USER renich WITH PASSWORD 'MySuperPass';"

## create pg_hba.conf
cat << EOF > pg_hba.conf
local all all peer
host all renich 127.0.0.1/32 scram-sha-256
host all renich ::1/128 scram-sha-256
EOF

podman cp pg_hba.conf cs10-systemd:/var/lib/pgsql/data/pg_hba.conf
rm -f pg_hba.conf

## verify
podman exec cs10-systemd systemctl restart postgresql
podman exec cs10-systemd systemctl status postgresql

PGPASSWORD='MySuperPass' psql -h 127.0.0.1 -U renich -l

# cleanup
podman rm -f cs10-systemd
podman rmi cs10-systemd
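If you want an image like this to run as a real service that starts with your session, one option is a Quadlet unit. This is a sketch assuming a recent Podman with Quadlet support; the cs10 unit name is illustrative:

```ini
# ~/.config/containers/systemd/cs10.container
[Container]
Image=localhost/cs10-systemd
Exec=/usr/sbin/init
PublishPort=127.0.0.1:5432:5432

[Install]
# start automatically with the user session
WantedBy=default.target
```

After saving the file, systemctl --user daemon-reload followed by systemctl --user start cs10.service should bring the container up as the cs10 service.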

Tip

Always clean up containers and images after testing to save disk space.

This example shows how Podman makes advanced containers with systemd and PostgreSQL easy, perfect for development and production!

infra weekly recap: Late November 2025

Posted by Kevin Fenzi on 2025-11-22 20:48:28 UTC
Scrye into the crystal ball

Another busy week in fedora infrastructure. Here's my attempt at a recap of the more interesting items.

Inscrutable vHMC

We have a vHMC vm. This is a virtual Hardware Management Console for our power10 servers. You need one of these to do anything reasonably complex on the servers. I had initially set it up on one of our virthosts just as a qemu raw image, since that's the way the appliance ships. But that was pushing the root filesystem on that server close to full, so I moved it to a logical volume like all our other VMs. However, after I did that, it started getting high packet loss talking to the servers. Nothing at all should have changed network-wise, and indeed it was the only thing seeing this problem. The virthost and all the other VMs on it were fine. I rebooted it a bunch and tried changing things, with no luck.

Then we had our mass update/reboot outage Thursday. After rebooting that virthost, everything was back to normal with the vHMC. Very strange. I hate problems that just go away without revealing what actually caused them, but at least for now the vHMC is back to normal.

Mass update/reboot cycle

We did a mass update/reboot cycle this last week. We wanted to:

  • Update all the RHEL9 instances to 9.7 which just came out

  • Update all the RHEL10 instances to 10.1 which just came out.

  • Update all the fedora builders from f42 to f43

  • Update all our proxies from f42 to f43

  • Update a few other fedora instances from f42 to f43

This overall went pretty smoothly and everything should be updated and working now. Please do file an issue if you see anything amiss (as always).

AI Scrapers / DDoSers

The new anubis is working quite well, I think, to keep the AI scrapers at bay now. It is causing some problems for some clients, however: it's more likely to decide that a client sending no user-agent or accept header is a bot. So if you are running some client that hits our infra and you are seeing anubis challenges, adjust your client to send a user-agent and accept header and see if that gets you working again.

The last thing we are seeing that's still annoying is something I thought was AI scraping, but now I am not sure of its motivation. Here's what I am seeing:

  • LOTS of requests from a large amount of ip's

  • fetching the same files

  • all under forks/$someuser/$popularpackage/ (so forks/kevin/kernel or the like)

  • passing anubis challenges

My guess is that these may be some browser add-on/botnet where they don't care about the challenge, but why fetch the same commit 400 times? Why hit the same forked project with millions of hits over 8 or so hours?

If this is a scraper, it's a very unfit one, gathering the same content over and over and never moving on. Perhaps it's just broken and looping?

In any case, currently the fix seems to be to just block requests to those forks, but of course that means the user whose fork it is cannot access them. ;( Will try and come up with a better solution.

RDU2-CC to RDU3 move

This datacenter move is still planned to happen. :) I was waiting for a new machine to migrate things to, but it's stuck in process, so for now I repurposed an older server that we still had around. I've set up a new stg.pagure.io on it and copied all the staging data to it; it seems to be working as expected, but I haven't moved it in DNS yet.

I then set up a new pagure.io there and am copying data to it now.

The current plan if all goes well is to have an outage and move pagure.io over on december 3rd.

Then, on December 8th, the rest of our RDU2-CC hardware will be powered off and moved. The remaining items we have there shouldn't be very impactful to users and contributors: download-cc-rdu01 will be down, but we have a bunch of other download servers; some proxies will be down, but we have a bunch of other proxy servers. After things come back up on the 8th or 9th, we will bring everything back online.

US Thanksgiving

Next week is the US Thanksgiving holiday (on thursday). We get thursday and friday as holidays at Red Hat, and I am taking the rest of the week off too. So, I might be around some in community spaces, but will not be attending any meetings or doing things I don't want to.

comments? additions? reactions?

As always, comment on mastodon: https://fosstodon.org/@nirik/115595437083693195

New badge: Let's have a party (Fedora 43)!

Posted by Fedora Badges on 2025-11-21 15:38:20 UTC
You attended the F43 Virtual Release Party!

Community Update – Week 47

Posted by Fedora Community Blog on 2025-11-21 10:00:00 UTC

This is a report created by the CLE Team, a team of community members working in various Fedora groups, for example Infrastructure, Release Engineering, and Quality. This team also moves forward some initiatives inside the Fedora project.

Week: 17 November – 21 November 2025

Fedora Infrastructure

This team is taking care of day to day business regarding Fedora Infrastructure.
It’s responsible for services running in Fedora infrastructure.
Ticket tracker

  • The intermittent 503 timeout issues plaguing the infra appear to finally be resolved, kudos to Kevin and the Networking team for tracking it down. 🎉
  • The Power10 hosts which caused the outage last week are now installed and ready for use.
  • Crashlooping OCP worker caused issues with log01 disk space
  • Monitoring migration to Zabbix is moving along, with discussions of when to make it “official”.
  • AI scrapers continue to cause significant load. A change has been made to bring some of the hits to src.fpo under the Varnish cache, which may help.
  • Update/reboot cycle planned for this week.

CentOS Infra including CentOS CI

This team is taking care of day to day business regarding CentOS Infrastructure and CentOS Stream Infrastructure.
It’s responsible for services running in CentOS Infrastructure and CentOS Stream.
CentOS ticket tracker
CentOS Stream ticket tracker

Release Engineering

This team is taking care of day to day business regarding Fedora releases.
It’s responsible for releases, retirement process of packages and package builds.
Ticket tracker

RISC-V

  • F43 RISC-V rebuild status: the delta for F43 RISC-V is still about 2.5K packages compared to F43 primary. Current plan: once we hit a ~2K package delta, we’ll start focusing on the quality of the rebuild and fix whatever important stuff needs fixing. (Here is the last interim update to the community.)
  • Community highlight: David Abdurachmanov (Rivos Inc) has been doing excellent work on the Fedora 43 rebuild, doing a lot of the heavy lifting. He also provides quite a bit of personal hardware for the Koji rebuilders.

Forgejo

Updates of the team responsible for Fedora Forge deployment and customization.
Ticket tracker

List of new releases of apps maintained by I&R Team

Minor update of FMN from 3.3.0 to 3.4.0
Minor update of FASJSON from 1.6.0 to 1.7.0
Minor update of Noggin from 1.10.0 to 1.11.0

If you have any questions or feedback, please respond to this report or contact us in the #admin:fedoraproject.org channel on Matrix.

The post Community Update – Week 47 appeared first on Fedora Community Blog.

Updates and Reboots

Posted by Fedora Infrastructure Status on 2025-11-20 22:00:00 UTC

We will be updating and rebooting various servers. Services will be up or down during the outage window.

If You’re Wearing More Than One Hat, Something’s Probably Wrong

Posted by Brian (bex) Exelbierd on 2025-11-20 15:00:00 UTC

If you’re wearing more than one hat on your head, something is probably wrong. In open source, this can feel like running a haberdashery, with a focus on juggling roles and responsibilities that sometimes conflict, instead of contributing. In October, I attended the first OpenSSL Conference and got to see some amazing talks and, more importantly, meet some truly wonderful people and catch up with friends.

Disclaimer: I work at Microsoft on upstream Linux in Azure and was formerly at Red Hat. These reflections draw on roles I’ve held in various communities and at various companies. These are personal observations and opinions.

Let’s start by defining a hat. This is a situation where you are in a formalized role, often charged with representing a specific perspective, team, or entity. The formalization is critical. There is a difference between a contributor saying something, even one who is active in many areas of the project, and the founder, a maintainer, or the project leader saying it. That said, you are always you, regardless of whether you have one hat, a million hats, or none. You can’t be a jerk in a forum and then expect everyone to ignore that when you show up at a conference. Hats don’t change who you are.

During a few of the panels, several panelists were trying to represent multiple points of view. They participate or have participated in multiple ways, for example on behalf of an employer and out of personal interest. One speaker has a collection of colored berets they take with them onto the stage. Over the course of their comments they change the hat on their head to talk to different, and quite often all, sides of a question. I want to be clear, I am not calling this person out. This is the situation they feel like they are in.

I empathize with them because I have been in this same situation. I have participated in the Fedora community as an individually motivated contributor, the Fedora Community Action and Impact Coordinator (a paid role provided to the community by Red Hat), and as the representative of Red Hat discussing what Red Hat thinks. Thankfully, I never did them all at once, just two at a time. I felt like I was walking a tightrope. Stressful. I didn’t want my personal opinion to be taken as the “voice” of the project or of Red Hat.

This experience was formative and helped me navigate this the next time it came up, when I became Red Hat’s representative to the CentOS Project Board. My predecessor in the role had been a long-time individual contributor and was serving as the Red Hat representative. They struggled with the hats game. The first thing I was told was that the hat switching was tough to follow and people were often unsure whether they were hearing “the voice of Red Hat” or the “voice of the person.” I resolved not to perpetuate this. I made the decision that I would only ever speak as “the voice of Red Hat.”1 It would be clear and unambiguous.

But, you may be thinking, what if you, bex, really have something you personally want to say. It did happen and what I did was leverage the power of patience and friendship.

Patience was in the form of waiting to see how a conversation developed. I am very rarely the smartest person in the room. I often found that someone would propose the exact thing I was thinking of, sometimes even better or more nuanced than I would have.

On the rare occasions that didn’t happen I would backchannel one of my friends in the room and ask them to consider saying what I thought. The act of asking was useful for two reasons. One, it was a filter for things that may not have been useful to begin with. Two, if someone was uneasy with sharing my views, their feedback was often useful in helping me better understand the situation.

In the worst case, if I didn’t agree with their feedback, I could ask someone else. Alternatively, I could step back and examine what was motivating me so strongly. Usually that reflection revealed this was a matter of personal preference or style that wouldn’t affect the outcome in the long term. It was always possible that I’d hit an edge case where I genuinely needed a second hat.

I recognize this is not an easy choice to make. I had the privilege of not having to give up an existing role to make this decision. However, I believe that in most cases when you do have to give up one role for another, you’re better off not trying to play both parts. You’re likely blocking or impeding the person who took on the role you gave up. If you have advice, a quiet sidebar with them will go further than potentially forcing them into public conversations that don’t need to be public. Your successor may do things differently; you should be okay with that. And remember what I wrote above: you’re not being silenced.

So when do multiple hats tend to happen? Here are some common causes of hat wearing:

  1. When you’re in a project because your company wants you there and you are personally interested in the technology.
  2. You participate in the project and a fork, downstream, or upstream that it has a relationship with.
  3. You participate in multiple projects all solving the same problem, for example multiple Linux distributions.
  4. You sit on a standards body or other organization that has general purview over an area and are working on the implementation.
  5. You work on both an open source project and the product it is commercially sold as.
  6. You’re a member of a legally liable profession, such as a lawyer (in many jurisdictions), so anything you say can be held to that standard.
  7. You’re in a small project and because of bootstrapping (or community apathy) you’re filling multiple roles during a “forming” phase.

This raises the question of which hat you should wear if you feel like you have more than one option. Here’s how I decide which hat to wear:

  1. Is this really a multi-hat situation? Are you just conflicted because you have views as a member of multiple projects, or as someone who contributes in multiple ways that aren’t in alignment? If it isn’t a formalized role, you’re struggling with the right problem. Speak your mind. Share the conflict and lack of alignment. This is the meat of the conversation.
  2. Why are you here? You generally know. That is the hat you wear. If you’re at a Technical Advisory Committee meeting on behalf of your company and an issue about which you are personally passionate comes up, remember patience and friendship, because this is a company hat moment.
  3. If you are in a situation where you can truly firewall off the conversations, you can change to an alternative hat. This applies when you find yourself in a space where the provider of your other hat is essentially uninvolved. For example, if you normally work on crypto for your employer, but right now you are making documentation website CSS updates: hello, personal hat.
  4. If you’re in a 1:1 conversation and you know the person well, you can lay out all of your thoughts - just avoid the hat language. Be direct and open. If you don’t know the person well, you should probably err on the side of being conservative and think carefully about points 1 and 2 above.

Some will argue that in smaller projects or early-stage efforts the flexibility of multiple roles is a feature, not a bug, allowing for rapid adaptation before formal structures are needed. That’s fair during a “forming” phase, but it shouldn’t become permanent. As the project matures, work to clarify roles and expectations so contributors can focus on one hat at a time.

As a maintainer or project leader, when you find people wearing multiple hats, it’s a warning flag. Something isn’t going right. Figure it out before the complexity becomes unmanageable.

  1. In the case of this role, it meant I spent a lot of time not saying much, as Red Hat didn’t have opinions on many community issues, preferring to see the community make its own decisions. Honestly, I probably spent more time explaining why I wasn’t talking than actually talking. 

⚙️ PHP version 8.3.27 and 8.4.14

Posted by Remi Collet on 2025-10-24 04:51:00 UTC

RPMs of PHP version 8.4.14 are available in the remi-modular repository for Fedora ≥ 41 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).

RPMs of PHP version 8.3.27 are available in the remi-modular repository for Fedora ≥ 41 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).

ℹ️ The packages are available for x86_64 and aarch64.

ℹ️ There is no security fix this month, so no update for versions 8.1.33 and 8.2.29.

These versions are also available as Software Collections in the remi-safe repository.

Version announcements:

ℹ️ Installation: use the Configuration Wizard and choose your version and installation mode.

Replacement of default PHP by version 8.4 installation (simplest):

dnf module switch-to php:remi-8.4/common

Parallel installation of version 8.4 as Software Collection

yum install php84

Replacement of default PHP by version 8.3 installation (simplest):

dnf module switch-to php:remi-8.3/common

Parallel installation of version 8.3 as Software Collection

yum install php83

And soon in the official updates:

⚠️ To be noted:

  • EL-10 RPMs are built using RHEL-10.0
  • EL-9 RPMs are built using RHEL-9.6
  • EL-8 RPMs are built using RHEL-8.10
  • intl extension now uses libicu74 (version 74.2)
  • mbstring extension (EL builds) now uses oniguruma5php (version 6.9.10, instead of the outdated system library)
  • oci8 extension now uses the RPM of Oracle Instant Client version 23.8 on x86_64 and aarch64
  • a lot of extensions are also available; see the PHP extensions RPM status (from PECL and other sources) page

ℹ️ Information:

Base packages (php)

Software Collections (php83 / php84)

⚙️ PHP version 8.3.28 and 8.4.15

Posted by Remi Collet on 2025-11-20 14:21:00 UTC

RPMs of PHP version 8.4.15 are available in the remi-modular repository for Fedora ≥ 41 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).

RPMs of PHP version 8.3.28 are available in the remi-modular repository for Fedora ≥ 41 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).

ℹ️ The packages are available for x86_64 and aarch64.

ℹ️ There is no security fix this month, so no update for versions 8.1.33 and 8.2.29.

These versions are also available as Software Collections in the remi-safe repository.

⚠️ These versions introduce a regression in MySQL connection when using an IPv6 address enclosed in square brackets. See the report #20528. A fix is under review and will be released soon.

Version announcements:

ℹ️ Installation: use the Configuration Wizard and choose your version and installation mode.

Replacement of default PHP by version 8.4 installation (simplest):

dnf module switch-to php:remi-8.4/common

Parallel installation of version 8.4 as Software Collection

yum install php84

Replacement of default PHP by version 8.3 installation (simplest):

dnf module switch-to php:remi-8.3/common

Parallel installation of version 8.3 as Software Collection

yum install php83

And soon in the official updates:

⚠️ To be noted:

  • EL-10 RPMs are built using RHEL-10.0 (next build will use 10.1)
  • EL-9 RPMs are built using RHEL-9.6 (next build will use 9.7)
  • EL-8 RPMs are built using RHEL-8.10
  • intl extension now uses libicu74 (version 74.2)
  • mbstring extension (EL builds) now uses oniguruma5php (version 6.9.10, instead of the outdated system library)
  • oci8 extension now uses the RPM of Oracle Instant Client version 23.8 on x86_64 and aarch64
  • a lot of extensions are also available; see the PHP extensions RPM status (from PECL and other sources) page

ℹ️ Information:

Base packages (php)

Software Collections (php83 / php84)

A statement concerning the Fedora and Flathub relationship from the FPL

Posted by Fedora Community Blog on 2025-11-20 12:00:00 UTC

Hi,
I’m Jef, the Fedora Project Leader.

As FPL I believe Fedora needs to be part of a healthy flatpak ecosystem.  I’d like to share my journey in working towards that over the last few months with you all, and include some of the insights that I’ve gained. I hope by sharing this with you it will encourage those who share my belief to join with me in the journey to take us to a better future for Fedora and the entire ecosystem.

The immediate goal

First, my immediate goal is to get the Fedora ChangeProposal that was submitted to make Flathub the default remote for some of the Atomic desktops accepted on reproposal. I believe implementing the idea expressed in that ChangeProposal is the best available option for the Atomic desktops, one that helps us down the path I want to see us walking together. 

There seems to be wide agreement, from both the maintainers of specific Fedora outputs and the subset of Fedora users of those desktop outputs, that using Flathub is the best tradeoff available for the defaults. I am explicitly not in favor of shuttering the Fedora flatpaks, but I do see value in changing the default remote where it is reasonable and desirable to do so. I continue to be sensitive to the idea that Fedora Flatpaks can exist because they deliver value to a subset of users, even when they are not the default remote but still target an overlapping set of applications serving different use cases. I don’t view this as a zero-sum situation; the important discussion right now is about what the defaults should be for specific Fedora outputs.  

What I did this summer

There is a history of change proposals being tabled and then coming back in the next cycle after some of the technical concerns were addressed.  There is nothing precedent-setting in how the Fedora Engineering Steering Committee handled this situation. Part of getting to the immediate goal, from my point of view, was doing the due diligence on some of the concerns raised in the FESCo discussion leading to the decision to table the proposal in the last release. So in an effort to get things in shape for a successful outcome for the reproposal, I took it on myself to do some of the work to understand the technical concerns around the corresponding source requirements of the GPL and LGPL licenses.

I felt like we were making some good progress in the Fedora discussion forums back in July. In particular, Timothee was a great help and wrote up an entirely new document on how to get corresponding sources for applications built in Flathub’s infrastructure. That discussion and the resulting documentation output showed great progress in bringing the signal-to-noise ratio up and addressing the concerns raised in the FESCo discussion. In fact, this was a critical part of the talk I gave at GUADEC. People came up to me after that talk and said they weren’t aware of that extension that Timothee documented. We were making some really great progress out in the open and setting the stage for a successful reproposal in the next Fedora cycle.

Okay, that’s all context intended to help you, dear reader, understand where my head is at. Hopefully we can all agree my efforts were aligned with the goal leading up to late July.   The next part gets a bit harder to talk about, and involves a discussion of communication fumbles, which is not a fun topic. 

The last 3 months

Unfortunately, at GUADEC I found a different problem, one I wasn’t expecting to find. Luckily, I was able to communicate face to face with people involved and they confirmed my findings, committed on the spot to get it fixed, and we had a discussion on how to proceed. This started an embargo period where I couldn’t participate in the public narrative work in the community I lead. That embargo ended up being nearly 3 months. I don’t think any of us who spoke in person that day at GUADEC had any expectation that the embargo would last so long.

Through all of this, I was in communication with Rob McQueen, VP of the GNOME Foundation and one of the Flathub founders, checking in periodically on when it was reasonable for me to start talking publicly again. It seems that the people involved in resolving the issues took it so seriously that they not only addressed the deficiencies I found (missing files) but committed to creating significant tooling changes to help prevent it from happening again. Some characterized that work as “busting their asses.” That’s great, especially considering much of that work is probably volunteer effort. Taking the initiative to solve not just the immediate problem, but to build tooling to help prevent it, is a fantastic commitment, and in line with what I would expect from the volunteers in the Fedora community itself. We’re more aligned than we realize, I think.

What I’ve learned from this is that there’s a balance to be struck with regard to embargoes. Thinking about it, we might have been better served if we had agreed to scope the embargo at the outset and then adjusted later, with a discussion on extending the time further that also gave me visibility into why it was taking additional time. It’s one of the ideas I’d like to talk to people about to help ensure this is handled better in the future. There are opportunities to do the sensitive communications a bit better, and I hope in the weeks ahead to talk with people about some ideas on that.

Now with the embargo lifted, I’ve resumed working towards a successful change reproposal. I’ve restarted my investigation of corresponding source availability for the runtimes. We lost 3 months to the embargo, but I think there is still work to be done.  Already, in the past couple of weeks, I’ve had one face to face discussion with a FESCo member, specifically about putting a reproposal together, and got useful feedback on the approach to that.

So that’s where we are at now.  What’s next?

The future

I am still working on understanding how source availability works for the Flathub runtimes.  I think there is a documentation gap here, like there was for the flatpak-builder sources extension.  My ask to the Fedora community, particularly those motivated to find paths forward for Flathub as the default choice for bootc based Fedora desktops, is to join me in clarifying how source availability for the critical FLOSS runtimes works so we can help Flathub by contributing documentation that all Flathub users can find and make use of.

Like I said in my GUADEC talk, having a coherent (but not perfect) understanding of how Fedora users can get the flatpak corresponding sources and make local patched builds is important to me to figure out as we walk towards a future where Flathub is the default remote for Fedora. We have to get to a future where application developers can look at the entire Linux ecosystem as one target. I think this is part of what takes the Linux desktop to the next level. But we need to do it in a way that ensures that end users have access to all the necessary source code to stay in control of their FLOSS software consumption. Ensuring users have the ability to patch and build software for themselves is vital, even if it’s never something the vast majority of users will need to do. Hopefully, we’re just a couple more documents away from telling that story adequately for Flathub flatpaks.

I’ve found that some of the most contentious discussions can be with people with whom you actually have a significant amount of agreement. Back in graduate school, when my officemate and I would talk about anything we both felt well-informed about and were in high agreement on (politics, comic books, science, whatever it was), we’d get into some of the craziest, most heated arguments about our small differences of opinion, which were minor in comparison to how much we agreed on. And it was never about needing to be right at the expense of the other person. It was never about him proving me wrong or me proving him wrong. It was because we really deeply wanted to be even more closely aligned. After all, we valued each other’s opinions. It’s weird to think about how much energy we spent doing that. And I get some of the same feeling that this is what’s going on now around flatpaks. Sometimes we just need to take a second and value the alignment we do have. I think there’s a lot to value right now in the Fedora and Flathub relationship, and I’m committed to finding ways both communities can add value to each other as we walk into the future.

The post A statement concerning the Fedora and Flathub relationship from the FPL appeared first on Fedora Community Blog.

New badge: FOSDEM 2026 Attendee !

Posted by Fedora Badges on 2025-11-20 05:51:09 UTC
FOSDEM 2026 Attendee: You dropped by the Fedora booth at FOSDEM '26

curl’s zero issues

Posted by Ben Cotton on 2025-11-19 12:00:00 UTC

A few weeks ago, Daniel Stenberg highlighted that the curl project had — for a moment, at least — zero open issues in the GitHub tracker. (As of this writing, there are three open issues.) How can a project that runs on basically everything even remotely resembling a computer achieve this? The project has written up some of its basic practices, and I’ve poked around to piece together a generalizable approach.

But first: why?

Is “zero issues” a reasonable goal? Opinions differ. There’s no One Right Way™ to manage an issue tracker. No matter how you choose to do it, someone will be mad about it. In general, projects should handle issues however they want, so long as the expectations are clearly managed. If everyone involved knows what to expect (ideally with documentation), that’s all that matters.

In Producing Open Source Software, Karl Fogel wrote “an accessible bug database is one of the strongest signs that a project should be taken seriously — and the higher the number of bugs in the database, the better the project looks.” But Fogel does not say “open bugs”, nor do I think he intended to. A high number of closed bugs is probably even better than a high-ish number of open bugs.

I have argued for closing stale issues (see: Your Bug Tracker and You) on the grounds that it sends a clear signal of intent. Not everyone buys into that philosophy. To keep the number low, as the project seems to consistently do, curl has to take an approach that heavily favors closing issues quickly. Their apparent approach is more aggressive than I’d personally choose, but it works for them. If you want it to work for you, here’s how.

Achieving zero issues

If you want to reach zero issues (or at least approach it) for your project, here are some basic principles to follow.

Focus the issue tracker. curl’s issue tracker is not for questions, conversations, or wishing. Because curl uses GitHub, these vaguer interactions can happen on the repo’s built-in discussion forum. And, of course, curl has a mailing list for discussion, too.

Close issues when the reporter is unresponsive. As curl’s documentation states: “nobody responding to follow-up questions or questions asking for clarifications or for discussing possible ways to move forward [is] a strong suggestion that the bug is unimportant.” If the reporter hasn’t given you the information you need to diagnose and resolve the issue, then what are you supposed to do about it?

Document known bugs. If you’ve confirmed a bug, but no one is immediately planning to fix it, then add it to a list of known bugs. curl does this and then closes the issue. If someone decides they want to fix a particular bug, they can re-open the issue (or not).

Maintain a to-do list. You probably have more ideas than time to implement them. A to-do list will help you keep those ideas ready for anyone (including Future You) who wants to find something to work on. curl explicitly does not track these in the issue tracker and uses a web page instead.

Close invalid or unreproducible issues. If a bug can’t be reproduced, it can’t be reliably fixed. Similarly, if the bug is in an upstream library or downstream distribution that your project can do nothing about, there’s no point in keeping the issue open.

Be prepared for upset people; document accordingly. Not everyone will like your issue management practices. That’s okay. But make sure you’ve written them down so that people will know what to expect (some, of course, will not read it). As a bonus, as you grow your core contributors, everyone will have a reference for managing issues in a consistent way.

This post’s featured photo by Jeremy Perkins on Unsplash.

The post curl’s zero issues appeared first on Duck Alignment Academy.

📝 Redis version 8.4

Posted by Remi Collet on 2025-11-04 12:48:00 UTC

RPMs of Redis version 8.4 are available in the remi-modular repository for Fedora ≥ 41 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).

1. Installation

Packages are available in the redis:remi-8.4 module stream.

1.1. Using dnf4 on Enterprise Linux

# dnf install https://rpms.remirepo.net/enterprise/remi-release-<ver>.rpm
# dnf module switch-to redis:remi-8.4/common

1.2. Using dnf5 on Fedora

# dnf install https://rpms.remirepo.net/fedora/remi-release-<ver>.rpm
# dnf module reset  redis
# dnf module enable redis:remi-8.4
# dnf install redis --allowerasing

You may have to remove the valkey-compat-redis compatibility package.

2. Modules

Some optional modules are also available:

These packages are weak dependencies of Redis, so they are installed by default (if install_weak_deps is not disabled in the dnf configuration).
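If you would rather not pull the optional modules in automatically, the dnf knob mentioned above can be turned off globally. A sketch of the configuration fragment (illustrative only):

```ini
# /etc/dnf/dnf.conf -- with this set, dnf skips weak dependencies on
# install, so the optional Redis modules would not be pulled in by default.
[main]
install_weak_deps=False
```

Alternatively, weak dependencies can be skipped for a single transaction with `dnf --setopt=install_weak_deps=False install redis`.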

The modules are automatically loaded after installation and service (re)start.

The modules are not available for Enterprise Linux 8.

3. Future

Valkey also provides a similar set of modules, requiring some packaging changes already proposed for the Fedora official repository.

Redis may be proposed for unretirement and be back in the Fedora official repository, by me if I find enough motivation and energy, or by someone else.

I may also try to solve packaging issues for other modules (e.g. RediSearch). For now, the module packages are very far from the Packaging Guidelines, so obviously not ready for a review.

4. Statistics

redis

redis-bloom

redis-json

redis-timeseries

🎲 PHP version 8.3.28RC1 and 8.4.15RC1

Posted by Remi Collet on 2025-11-07 06:27:00 UTC

Release Candidate versions are available in the testing repository for Fedora and Enterprise Linux (RHEL / CentOS / Alma / Rocky and other clones) to allow more people to test them. They are available as Software Collections, for parallel installation (the perfect solution for such tests), and as base packages.

RPMs of PHP version 8.4.15RC1 are available

  • as base packages in the remi-modular-test for Fedora 41-43 and Enterprise Linux ≥ 8
  • as SCL in remi-test repository

RPMs of PHP version 8.3.28RC1 are available

  • as base packages in the remi-modular-test for Fedora 41-43 and Enterprise Linux ≥ 8
  • as SCL in remi-test repository

ℹ️ The packages are available for x86_64 and aarch64.

ℹ️ PHP version 8.2 is now in security mode only, so no more RCs will be released.

ℹ️ Installation: follow the wizard instructions.

ℹ️ Announcements:

Parallel installation of version 8.4 as Software Collection:

yum --enablerepo=remi-test install php84

Parallel installation of version 8.3 as Software Collection:

yum --enablerepo=remi-test install php83

Update of system version 8.4:

dnf module switch-to php:remi-8.4
dnf --enablerepo=remi-modular-test update php\*

Update of system version 8.3:

dnf module switch-to php:remi-8.3
dnf --enablerepo=remi-modular-test update php\*

ℹ️ Notice:

  • version 8.5.0RC4 is in Fedora rawhide for QA
  • version 8.5.0RC4 is also available in the repository
  • EL-10 packages are built using RHEL-10.0 and EPEL-10.0
  • EL-9 packages are built using RHEL-9.6
  • EL-8 packages are built using RHEL-8.10
  • oci8 extension uses the RPM of the Oracle Instant Client version 23.9 on x86_64 and aarch64
  • intl extension uses libicu 74.2
  • RC version is usually the same as the final version (no changes accepted after RC, except for security fixes).
  • versions 8.3.28 and 8.4.15 are planned for November 20th, in 2 weeks.

Software Collections (php83, php84)

Base packages (php)

infra weekly recap: Early November 2025

Posted by Kevin Fenzi on 2025-11-15 17:08:50 UTC
Scrye into the crystal ball

Well, it's been a few weeks since I made one of these recap blog posts. Last weekend I was recovering from some oral surgery; the weekend before, I had been on PTO on Friday and was trying to be 'away'.

Lots of things happened in the last few weeks though!

tcp timeout issue finally solved!

As contributors no doubt know, we have been fighting a super annoying TCP timeout issue. Basically, sometimes requests from our proxies to backend services just time out. I don't know how many hours I spent on this issue, trying everything I could think of, coming up with theories and then disproving them. Debugging was difficult because _most_ of the time everything worked as expected. Finally, after a good deal of pain, I was able to get a tcpdump showing that when it happens, the sending side sends a SYN and the receiving side sees nothing at all.
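For anyone wanting to chase a similar problem, a capture filter that matches only bare SYNs (SYN set, ACK clear) is one way to see whether a handshake ever reached the far side. This is a hedged sketch, not the actual command used; the interface name and port are assumptions:

```shell
# Hypothetical illustration: a BPF filter matching only bare SYNs.
# Running a capture with it on both the proxy and the backend, then
# comparing timestamps, shows whether each SYN arrived at the far side.
filter='tcp[tcpflags] & (tcp-syn|tcp-ack) == tcp-syn'
# Interface (eth0) and port (80) below are placeholders for illustration:
echo "tcpdump -ni eth0 \"$filter and port 80\""
```

A SYN visible on the sender but absent on the receiver points the finger at whatever sits in between, which is exactly the pattern described above.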

This all pointed to the firewall cluster in our datacenter. We don't manage that, our networking folks do. It took some prep work, but last week they were finally able to update the firmware/os in the cluster to the latest recommended version.

After that: The problem was gone!

I don't suppose we will ever know the exact bug that was happening here, but one final thing to note: when they did the upgrade, the cluster had over 1 million active connections. After the upgrade, it had about 150k. So it seems likely that it was somehow not freeing resources correctly and dropping packets, or something along those lines.

I know this problem has been annoying to contributors. It's personally been very annoying to me; my mind kept focusing on it and not anything else. It kept me awake at night. ;(

In any case, finally solved!

There is one new outstanding issue that has arisen since the upgrade: https://pagure.io/fedora-infrastructure/issue/12913. Basically, long-running koji cli watch tasks (watch-task / watch-logs) are getting a 502 error after a while. This does not affect the task in any way, just the watching of it. Hopefully we can get to the bottom of this and fix it soon.

outages outages outages

We have had a number of outages of late. They have been for different reasons, but it does make it frustrating to try to contribute.

A recap of a few of them:

  • AI scrapers continue to mess with us. Even though most of our services are behind anubis now, they find ways around that, like fetching css or js files in loops, hitting things that are not behind anubis, and generally making life sad. We continue to block things as we can. The impact here is mostly that src.fedoraproject.org is sensitive to high load, and we need to make sure to block things before it impacts commits.

  • We had two outages (Friday 2025-11-07 and later Monday 2025-11-10) that were caused by a switch loop when I brought up a power10 lpar. This was due to the somewhat weird setup on the power10 lpars, where they shouldn't be using the untagged/native vlan at all, but a build vlan. The Friday outage took a while for us to figure out what was causing it. The Monday outage was very short. All those lpars are correctly configured now and up and operating OK.

  • We had an outage on Monday (2025-11-10) where a set of crashlooping pods filled up our log server with tracebacks and generally messed with everything. The pod was fixed and the storage was cleared up.

  • We had some kojipkgs outages on Thursday (2025-11-13) and Friday (2025-11-14). These were caused by many requests for directory listings of some ostree objects directories. Those directories have ~65k files in them each, so apache has to stat ~65k files each time it gets one of those requests. But then cloudfront (which is making the request) times out after 30s and resends. So you get a load average of 1000 and very slow processing. For now, we put that behind varnish, so it only has to do the listing the first time for a dir and can then send the cached result to everyone else. If that doesn't fix it, we can look at just disabling indexes there, but I am not sure of the implications.
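If disabling indexes turns out to be the route, a minimal sketch of what that could look like in the apache configuration (the directory path here is a placeholder assumption, not the real kojipkgs layout):

```apache
# Hypothetical fragment: turn off automatic directory listings for the
# ostree objects tree so apache never has to stat every file in the dir.
# The path below is a placeholder, not the actual layout.
<Directory "/srv/ostree/repo/objects">
    Options -Indexes
</Directory>
```

With indexes off, such requests would get a 403 instead of a 65k-file listing, which is the "implications" question: anything that relies on browsing those directories would stop working.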

We had a nice discussion in the last fedora infrastructure meeting about tracking outages better and trying to do an RCA on them after the fact, to make sure we solved each one or at least tried to make it less likely to happen again.

I am really hoping for some non outage days and smooth sailing for a bit.

power10s

I think we are finally done with the power10 setup. Many thanks again to Fabian for figuring out all the bizarre and odd things we needed to do to configure the servers as close to how we want them as possible.

The fedora builder lpars are all up and operating since last week. The buildvm-ppc64les on them should have more memory and cpus than before and hopefully are faster for everyone. We have staging lpars now as well.

The only final thing to do is to get the coreos builders installed. The lpars themselves are all setup and ready to go.

rdu2-cc to rdu3 datacenter move

I haven't really been able to think about this due to the outages and the timeout issue, but things will start heating up again next week.

It seems unlikely that we will get our new machine in time to matter now, so I am moving to a new plan: repurposing another server there to migrate things to. I plan to try and get it setup next week and sync pagure.io data to a new pagure instance there. Depending on how that looks we might move to it first week of december.

There's so much more going on, but those are some highlights I recall...

comments? additions? reactions?

As always, comment on mastodon: https://fosstodon.org/@nirik/115554940851319300

Friday Links 25-26

Posted by Christof Damian on 2025-11-14 10:04:00 UTC

Skateboarder jumping off a building in NYC

Short, but with lots of good stuff. The fad of engineering management, drug policy in Spain, and Mr. TIFF are my favourites.

Have a lovely weekend!  

Leadership

"Good engineering management" is a fad - you will have to adapt and core skills are reusable

Seven Decisions - "Inspired" -- right. 

Engineering

Mr TIFF - the story of the TIFF format. As an Amiga fanboy, I like to follow anything related to IFF.

Prompt Injection in AI Browsers - nice. We will have so much fun in the future. 

Rust in Android: move fast and fix things  - Google's experience with Rust. Faster reviews are interesting. 

Urbanism

The Trammmformation of Diagonal  [YouTube] a look at the new tram lines and some history

34-Year-Old Finds Dream Job Doing The Unexpected [YouTube] - cargo bike Olympics as a business. 

Roads need to be narrower or wider to protect cyclists, says new government guidance - interesting finding. I do like smaller lanes, as this also reduces speed.

Random Skateboarding

We can't have nice fountains, part 3 [YouTube] - Some great skateboarding shots. 

Time to Migrate - Tim wants you to move to Mastodon. He is right. 

Drugs policy: Who Does It Best? [Podcast] - another special episode from The Europeans. 

Other Links

Friday Links Disclaimer
Inclusion of links does not imply that I agree with the content of linked articles or podcasts. I am just interested in all kinds of perspectives. If you follow the link posts over time, you might notice common themes, though.
More about the links in a separate post: About Friday Links.

Infra and RelEng Update – Week 46 2025

Posted by Fedora Community Blog on 2025-11-14 10:00:00 UTC

This is a weekly report from the I&R (Infrastructure & Release Engineering) Team. We provide both an infographic and a text version of the weekly report. If you just want a quick look at what we did, see the infographic. If you are interested in more in-depth details, look below the infographic.

Week: 10th – 14th November 2025

Infrastructure & Release Engineering

The purpose of this team is to take care of day to day business regarding CentOS and Fedora Infrastructure and Fedora release engineering work.
It’s responsible for services running in Fedora and CentOS infrastructure and preparing things for the new Fedora release (mirrors, mass branching, new namespaces etc.).
List of planned/in-progress issues

Fedora Infra

CentOS Infra including CentOS CI

Release Engineering

List of new releases of apps maintained by I&R Team

If you have any questions or feedback, please respond to this report or contact us in the #admin:fedoraproject.org channel on Matrix.

The post Infra and RelEng Update – Week 46 2025 appeared first on Fedora Community Blog.

Fedora at Kirinyaga University – Docs workshop

Posted by Fedora Magazine on 2025-11-14 08:00:00 UTC
Kirinyaga University students group photo

We did it again: Fedora at Kirinyaga University in Kenya. This time, we didn’t just introduce what open source is – we showed students how to participate and actually contribute in real time.

Many students had heard of open source before, but were not sure how to get started or where they could fit. We did it hands-on and began with a simple explanation of what open source is: people around the world working together to create tools, share knowledge, and support each other. Fedora is one of these communities. It is open, friendly, and built by different people with different skills.

We talked about the many ways someone can contribute, even without deep technical experience. Documentation, writing guides, design work, translation, testing software, and helping new contributors are all important roles in Fedora. Students learned that open source is not only for “experts.” It is also for learners. It is a place to grow.

Hands-on Documentation Workshop

A room full of Kirinyaga students at a workshop

After the introduction, we moved into a hands-on workshop. We opened Fedora Docs and explored how documentation is structured. Students learned how to find issues, read contribution instructions, and make changes step-by-step. We walked together through:

  • Opening or choosing an issue to work on
  • Editing documentation files
  • Making a pull request (PR)
  • Writing a clear contribution message

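The branch-edit-commit loop behind those steps can be sketched locally with git (the repository, file, and branch names below are made up for illustration; in the workshop the fork lived on the Fedora Docs forge):

```shell
# Throwaway local repo standing in for a docs fork -- no network needed.
git init -q docs-sandbox
cd docs-sandbox
git config user.email demo@example.com
git config user.name "Demo User"
echo "Fedora Docs" > index.adoc
git add index.adoc && git commit -q -m "initial page"

git checkout -q -b fix-wording             # 1. branch for your change
sed -i 's/Docs/Documentation/' index.adoc  # 2. edit the documentation file
git add index.adoc
git commit -q -m "Clarify page title"      # 3. clear contribution message
# 4. push the branch and open the pull request in the forge's web UI
```

The last step (opening the PR itself) happens in the web UI, which is exactly what the students walked through together.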
By the end of the workshop, students had created actual contributions that went to the Fedora project. This moment was important. It showed them that contributing is not something you wait to do “someday.” You can do it today.

“This weekend’s Open Source Event with Fedora, hosted by the Computer Society Of Kirinyaga, was truly inspiring! 💻

Through the guidance of Cornelius Emase, I was able to make my first pull request to the Fedora Project Docs – my first ever contribution to the open-source world. 🌍
– Student at Kirinyaga University

Thank you note

Huge appreciation to:

  • Jona Azizaj — for steady guidance and mentorship.
  • Mat H. — for backing the vision of regional community building.
  • Fedora Mindshare Team — for supporting community growth here in Kenya.
  • Computer Society of Kirinyaga — for hosting and bringing real energy into the room.

And to everyone who played a part – even if your name isn’t listed here, I see you. You made this possible.

Growing the next generation

The students showed interest, curiosity, and energy. Many asked how they can continue contributing and how to connect with the wider Fedora community. I guided them to Fedora Docs, Matrix community chat rooms, and how they can be part of the Fedora local meetups here in Kenya.

We are introducing open source step-by-step in Kenya. There is a new generation of students who want to be part of global technology work. They want to learn, collaborate, and build. Our role is to open the door and walk together (I have a Discourse post on this; you’re welcome to add your views).

A group photo of students after the workshop

What Comes Next

This event is part of a growing movement to strengthen Fedora’s presence in Kenya. More events will follow so that learning and contributing can continue.

We believe that open source becomes strong when more people are included. Fedora is a place where students in Kenya can learn, grow, share, and contribute to something global.

We already had a Discourse thread running for this event – from the first announcement, planning, and budget proposal, all the way to the final workshop. Everything happened in the open. Students who attended have already shared reflections there, and anyone who wants to keep contributing or stay connected can join the conversation.

You can check the event photos submitted here on Google Photos (sorry, that’s not FOSS :))

Cornelius Emase,
Your Friend in Open Source(Open Source Freedom Fighter)

Is Kubernetes Ready for AI? Google’s New Agent Tech | TSG Ep. 967

Posted by Chris Short on 2025-11-14 05:00:00 UTC
Alan Shimel, Mike Vizard, and Chris Short discuss the state of Kubernetes following the KubeCon + CloudNativeCon North America 2025 conference.

How We Streamed OpenAlt on Vhsky.cz

Posted by Jiri Eischmann on 2025-11-13 11:37:21 UTC

The blog post was originally published on my Czech blog.

When we launched Vhsky.cz a year ago, we did it to provide an alternative to the near-monopoly of YouTube. I believe video distribution is so important today that it’s a skill we should maintain ourselves.

To be honest, it’s bothered me for the past few years that even open-source conferences simply rely on YouTube for streaming talks, without attempting to secure a more open path. We are a community of tech enthusiasts who tinker with everything and take pride in managing things ourselves, yet we just dump our videos onto YouTube, even when we have the tools to handle it internally. Meanwhile, it’s common for conferences abroad to manage this themselves. Just look at FOSDEM or Chaos Communication Congress.

This is why, from the moment Vhsky.cz launched, my ambition was to broadcast talks from OpenAlt—a conference I care about and help organize. The first small step was uploading videos from previous years. Throughout the year, we experimented with streaming from OpenAlt meetups. We found that it worked, but a single stream isn’t quite the stress test needed to prove we could handle broadcasting an entire conference.

For several years, Michal Vašíček has been in charge of recording at OpenAlt, and he has managed to create a system where he handles recording from all rooms almost single-handedly (with assistance from session chairs in each room). All credit to him, because other conferences with a similar scope of recordings have entire teams for this. However, I don’t have insight into this part of the process, so I won’t focus on it. Michal’s job was to get the streams to our server; our job was to get them to the viewers.

OpenAlt’s AV background with running streams. Author: Michal Stanke.

Stress Test

We only got to a real stress test the weekend before the conference, when Bashy prepared a setup with seven streams at 1440p resolution. This was exactly what awaited us at OpenAlt. Vhsky.cz runs on a fairly powerful server with a 32-core i9-13900 processor and 96 GB of RAM. However, it’s not entirely dedicated to PeerTube: it has to share the server with other OSCloud services (OSCloud is a community hosting platform for open-source web services).

We hadn’t been limited by performance until then, but seven 1440p streams were truly at the edge of the server’s capabilities, and streams occasionally dropped. In reality, this meant 14 continuous transcoding processes, as we were streaming in both 1440p and 480p. Even if you don’t change the resolution, you still need to transcode the video to leverage useful distribution features, which I’ll cover later. The 480p resolution was intended for mobile devices and slow connections.

Remote Runner

We knew the Vhsky.cz server alone couldn’t handle it. Fortunately, PeerTube allows for the use of “remote runners”. The PeerTube instance sends video to these runners for transcoding, while the main instance focuses only on distributing tasks, storage, and video distribution to users. However, it’s not possible to do some tasks locally and offload others. If you switch transcoding to remote runners, they must handle all the transcoding. Therefore, we had to find enough performance somewhere to cover everything.

I reached out to several hosting providers known to be friendly to open-source activities. Adam Štrauch from Roští.cz replied almost immediately, saying they had a backup machine that they had filed a warranty claim for over the summer and hadn’t tested under load yet. I wrote back that if they wanted to see how it behaved under load, now was a great opportunity. And so we made a deal.

It was a truly powerful machine: a 48-core Ryzen with 1 TB of RAM. Nothing else was running on it, so we could use all its performance for video transcoding. After installing the runner on it, we passed the stress test. As it turned out, the server with the runner still had a large reserve. For a moment, I toyed with the idea of adding another resolution to transcode the videos into, but then I decided we’d better not tempt fate. The stress test showed us we could keep up with transcoding, but not how it would behave with all the viewers. The performance reserve could come in handy.

Load on the runner server during the stress test. Author: Adam Štrauch.

Smart Video Distribution

Once we solved the transcoding performance, it was time to look at how PeerTube would handle video distribution. Vhsky.cz has a bandwidth of 1 Gbps, which isn’t much for such a service. If we served everyone the 1440p stream, we could serve a maximum of 100 viewers. Fortunately, another excellent PeerTube feature helps with this: support for P2P sharing using HLS and WebRTC.

Thanks to this, every viewer (unless they are on a mobile device using mobile data) also becomes a peer and shares the stream with others. The more viewers watch the stream, the more they share the video among themselves, and the server load doesn’t grow at the same rate.

A two-year-old stress test conducted by the PeerTube developers themselves gave us some idea of what Vhsky could handle. They created a farm of 1,000 browsers, simulating 1,000 viewers watching the same stream or VOD. Even though they used a relatively low-performance server (quad-core i7-8700 CPU @ 3.20GHz, slow hard drive, 4 GB RAM, 1 Gbps connection), they managed to serve 1,000 viewers, primarily thanks to data sharing between them. For VOD, this saved up to 98% of the server’s bandwidth; for a live stream, it was 75%:

If we achieved a similar ratio, then even after subtracting 200 Mbps for overhead (running other services, receiving streams, data exchange with the runner), we could serve over 300 viewers at 1440p and multiples of that at 480p. Considering that OpenAlt had about 160 online viewers in total last year, this was a more than sufficient reserve.
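That estimate can be sanity-checked with a back-of-the-envelope calculation. The ~10 Mbps bitrate for 1440p is my assumption, implied by the "maximum of 100 viewers on 1 Gbps" figure above; the 75% savings is the live-stream number from the PeerTube stress test:

```python
# Rough live-stream capacity estimate for Vhsky.cz, assuming:
#   - a 1 Gbps link, minus ~200 Mbps overhead (other services,
#     stream ingest, data exchange with the runner)
#   - ~10 Mbps per 1440p viewer (assumed)
#   - 75% of traffic offloaded to peers (live-stream P2P savings)
LINK_MBPS = 1000
OVERHEAD_MBPS = 200
STREAM_MBPS = 10
P2P_SAVINGS = 0.75

usable_mbps = LINK_MBPS - OVERHEAD_MBPS
server_share_per_viewer = STREAM_MBPS * (1 - P2P_SAVINGS)  # 2.5 Mbps
max_viewers = int(usable_mbps / server_share_per_viewer)
print(max_viewers)  # → 320
```

Which lines up with the "over 300 viewers at 1440p" estimate, with 480p viewers costing a fraction of that.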

Live Operation

On Saturday, Michal fired up the streams and started sending video to Vhsky.cz via RTMP. And it worked. The streams ran smoothly and without stuttering. In the end, we had at most a few dozen concurrent online viewers this year, which posed no problem from a distribution perspective.

In practice, the server data download savings were large even with just 5 peers on a single stream and resolution.

Our solution, which PeerTube allowed us to flexibly assemble from servers in different data centers, has one disadvantage: it creates some latency. In our case, however, this meant the stream on Vhsky.cz was about 5-10 seconds behind the stream on YouTube, which I don’t think is a problem. After all, we’re not broadcasting a sports event.

Diagram of the streaming solution for OpenAlt. Labels in Czech, but quite self-explanatory.

Minor Problems

We did, however, run into minor problems and gained experience that one can only get through practice. During Saturday, for example, we found that the stream would occasionally drop from 1440p to 480p, even though the throughput should have been sufficient. This was because the player felt that the delivery of stream chunks was delayed and preemptively switched to a lower resolution. Setting a higher cache increased the stream delay slightly, but it significantly reduced the switching to the lower resolution.

Subjectively, even 480p wasn’t a problem. Most of the screen was taken up by the red frame with the OpenAlt logo and the slides. The speaker was only in a small window. The reduced resolution only caused slight blurring of the text on the slides, which I wouldn’t even have noticed as a problem if I wasn’t focusing on it. I could imagine streaming only in 480p if necessary. But it’s clear that expectations regarding resolution are different today, so we stream in 1440p when we can.

Over the whole weekend, the stream from one room dropped for about two talks. For some rooms, viewers complained that the stream was too quiet, but that was an input problem. This issue was later fixed in the recordings.

When uploading the talks as VOD (Video on Demand), we ran into the fact that PeerTube itself doesn’t support bulk uploads. However, tools exist for this, and we’d like to use them next time to make uploading faster and more convenient. Some videos also uploaded with the wrong orientation, which was likely a problem in their metadata, as PeerTube wasn’t the only player that displayed them that way. YouTube, however, managed to handle it. Re-encoding them solved the problem.

On Saturday, to save performance, we also tried transcoding the first finished talk videos on the external runner. For these, a bar was displayed with a message that the video had failed to save to external storage, even though it was clearly stored in object storage. In the end we had to re-upload them, because they were available to watch but not indexed.

A small interlude – my talk about PeerTube at this year’s OpenAlt. Streamed, of course, via PeerTube:

Thanks and Support

I think that for our very first time doing this, it turned out very well, and I’m glad we showed that the community can stream such a conference using its own resources. I would like to thank everyone who participated. From Michal, who managed to capture footage in seven lecture rooms at once, to Bashy, who helped us with the stress test, to Archos and Schmaker, who did the work on the Vhsky side, and Adam Štrauch, who lent us the machine for the external runner.

If you like what we do and appreciate that someone is making OpenAlt streams and talks available on an open platform without ads and tracking, we would be grateful if you supported us with a contribution to one of OSCloud’s accounts, under which Vhsky.cz runs. PeerTube is a great tool that allows us to operate such a service without having Google’s infrastructure, but it doesn’t run for free either.

F43 election nominations now open

Posted by Fedora Community Blog on 2025-11-12 13:18:27 UTC

Today, the Fedora Project begins the nomination period during which we accept nominations to the “steering bodies” of the following teams:

This period is open until Wednesday, 2025-11-26 at 23:59:59 UTC.

Candidates may self-nominate. If you nominate someone else, check with them first to ensure that they are willing to be nominated before submitting their name.

Nominees do not need to have an interview ready at nomination time. However, interviews are mandatory for all nominees: anyone who does not have their interview ready by the end of the interview period (2025-12-03) will be disqualified and removed from the election. Nominees will submit questionnaire answers via a private Pagure issue after the nomination period closes on Wednesday, 2025-11-26. The F43 Election Wrangler (Justin Wheeler) will publish the interviews to the Community Blog before the start of the voting period on Friday, 2025-12-05.

The elected seats on FESCo are for a two-release term (approximately twelve months). For more information about FESCo, please visit the FESCo docs.

The full schedule of the elections is available on the Elections schedule. For more information about the elections process, see the Elections docs.

The post F43 election nominations now open appeared first on Fedora Community Blog.

Managing a manual Alexa Home Assistant Skill via the Web UI

Posted by Brian (bex) Exelbierd on 2025-11-12 12:40:00 UTC

My house has a handful of Amazon Echo Dot devices that we mostly use for timers, turning lights on and off, and playing music. They work well and have been an easy solution. I also use Home Assistant for some basic home automation, and I expose nearly everything I want to control by voice to the Echo Dots from Home Assistant.

I don’t use the Nabu Casa Home Assistant Cloud Service. If you’re reading this and you want the easy route, consider it — the cloud service is convenient. One benefit of the service is that there is a UI toggle to mark which entities/devices to expose to voice assistants.

If you take the manual route, like I do, you must set up a developer account and an AWS Lambda function, and maintain a hand-coded list of entity IDs in a YAML file.

- switch.living_room
- switch.table
- light.kitchen
- sensor.temp_humid_reindeer_marshall_temperature
- sensor.living_room_temperature
- sensor.temp_humid_rubble_chase_temperature
- sensor.temp_humid_olaf_temperature
- sensor.ikea_of_sweden_vindstyrka_temperature
- light.white_lamp_bulb_1_light
- light.white_lamp_bulb_2_light
- light.white_lamp_bulb_3_light
- switch.ikea_smart_plug_2_switch
- switch.ikea_smart_plug_1_switch
- sensor.temp_humid_chase_c_temperature
- light.side_light
- switch.h619a_64c3_power_switch

A list of entity IDs to expose to Alexa.

Fun, right? Maintaining that list is tedious. I generally don’t mess with my Home Assistant installation very often, so when I need to change what is exposed to Alexa or add a new device, finding the actual entity_id is annoying. This is not helped by how good Home Assistant has gotten at showing only friendly names in most places. I decided there had to be a better way than manually maintaining YAML.

After some digging through docs and the source, I found there isn’t a built-in way to build this list by labels, categories, or friendly names. The Alexa integration supports only explicit entity IDs or glob includes/excludes.

So I worked out a way to build the list with a Home Assistant automation. It isn’t fully automatic - there’s no trigger that runs right before Home Assistant reboots - and you still need to restart Home Assistant when the list changes. But it lets me maintain the list by labeling entities rather than hand-editing YAML.

After a few experiments and some (occasionally overly imaginative) AI help, I arrived at this process. There are two parts.

Prep and staging

In your configuration.yaml enable the Alexa Smart Home Skill to use an external list of entity IDs. I store mine in /config/alexa_entities.yaml.

alexa:
  smart_home:
    locale: en-US
    endpoint: https://api.amazonalexa.com/v3/events
    client_id: !secret alexa_client_id
    client_secret: !secret alexa_client_secret
    filter:
      include_entities:
         !include alexa_entities.yaml

Add two helper shell commands:

shell_command:
  clear_alexa_entities_file: "truncate -s 0 /config/alexa_entities.yaml"
  append_alexa_entity: '/bin/sh -c "echo \"- {{ entity }}\" >> /config/alexa_entities.yaml"'

A script to find the entities

Place this script in scripts.yaml. It does three things:

  1. Clears the existing file.
  2. Finds all entities labeled with the tag you choose (I use “Alexa”).
  3. Appends each entity ID to the file.

export_alexa_entities:
  alias: Export Entities with Alexa Label
  sequence:
    # 1. Clear the file
    - service: shell_command.clear_alexa_entities_file

    # 2. Loop through each entity and append
    - repeat:
        for_each: "{{ label_entities('Alexa') }}"
        sequence:
          - service: shell_command.append_alexa_entity
            data:
              entity: "{{ repeat.item }}"
  mode: single

Why clear the file and write it line by line? I couldn’t get any file or notify integration to write to /config, and passing a YAML list to a shell command collapses whitespace into a single line. Reformatting that back into proper YAML without invoking Python was painful, so I chose to truncate and append line-by-line. It’s ugly, but it’s simple and it works.
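Stripped of the Home Assistant wrapping, the truncate-and-append pattern the two shell commands implement is just this (file name as in the config above; the entity IDs are examples):

```shell
# Rebuild the include file from scratch, one entity per line.
FILE=alexa_entities.yaml
truncate -s 0 "$FILE"                  # what clear_alexa_entities_file does
for entity in switch.living_room light.kitchen; do
  echo "- $entity" >> "$FILE"          # what append_alexa_entity does
done
cat "$FILE"
```

Each appended line is a valid YAML list item, so the finished file can be pulled in directly by the `!include` in the Alexa filter.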

The result is that I can label entities in the UI and avoid tedious bookkeeping.

Home Assistant entity details screen showing an IKEA smart plug named 'tree' with the Alexa label applied in the Labels section

Browser wars

Posted by Vedran Miletić on 2025-11-11 18:43:32 UTC

Browser wars


brown fox on snow field

Photo source: Ray Hennessy (@rayhennessy) | Unsplash


Last week in Rijeka we held Science festival 2015. This is the (hopefully not unlucky) 13th instance of the festival that started in 2003. Popular science events were organized in 18 cities in Croatia.

I was invited to give a popular lecture at the University departments open day, which is a part of the festival. This is the second time in a row that I was invited to give a popular lecture at the open day. In 2014 I talked about The Perfect Storm in information technology caused by the fall of the economy during the 2008-2012 Great Recession and the simultaneous rise of low-cost, high-value open-source solutions. Open source completely changed the landscape of information technology in just a few years.

The follow-up

Posted by Vedran Miletić on 2025-11-11 18:43:32 UTC

The follow-up


people watching concert

Photo source: Andre Benz (@trapnation) | Unsplash


When Linkin Park released their second album Meteora, they had a quote on their site that went along the lines of

Musicians have their entire lives to come up with a debut album, and only a very short time afterward to release a follow-up.

Open-source magic all around the world

Posted by Vedran Miletić on 2025-11-11 18:43:32 UTC

Open-source magic all around the world


woman blowing sprinkle in her hand

Photo source: Almos Bechtold (@almosbech) | Unsplash


Last week brought us two interesting events related to open-source movement: 2015 Red Hat Summit (June 23-26, Boston, MA) and Skeptics in the pub (June 26, Rijeka, Croatia).

Joys and pains of interdisciplinary research

Posted by Vedran Miletić on 2025-11-11 18:43:32 UTC

Joys and pains of interdisciplinary research


white and black coffee maker

Photo source: Trnava University (@trnavskauni) | Unsplash


In 2012 University of Rijeka became NVIDIA GPU Education Center (back then it was called CUDA Teaching Center). For non-techies: NVIDIA is a company producing graphical processors (GPUs), the computer chips that draw 3D graphics in games and the effects in modern movies. In the last couple of years, NVIDIA and other manufacturers allowed the usage of GPUs for general computations, so one can use them to do really fast multiplication of large matrices, finding paths in graphs, and other mathematical operations.

What is the price of open-source fear, uncertainty, and doubt?

Posted by Vedran Miletić on 2025-11-11 18:43:32 UTC

What is the price of open-source fear, uncertainty, and doubt?


turned on red open LED signage

Photo source: j (@janicetea) | Unsplash


The Journal of Physical Chemistry Letters (JPCL), published by American Chemical Society, recently put out two Viewpoints discussing open-source software:

  1. Open Source and Open Data Should Be Standard Practices by J. Daniel Gezelter, and
  2. What Is the Price of Open-Source Software? by Anna I. Krylov, John M. Herbert, Filipp Furche, Martin Head-Gordon, Peter J. Knowles, Roland Lindh, Frederick R. Manby, Peter Pulay, Chris-Kriton Skylaris, and Hans-Joachim Werner.

Viewpoints are not detailed reviews of the topic, but instead, present the author's view on the state-of-the-art of a particular field.

The first of the two articles advocates for open source and open data. The article describes the Quantum Chemical Program Exchange (QCPE), which was used in the 1980s and 1990s for the exchange of quantum chemistry codes between researchers and is roughly equivalent to the modern-day GitHub. The second article questions the open-source software development practice, advocating the usage and development of proprietary software. I will dissect and counter some of the key points from the second article below.

On having leverage and using it for pushing open-source software adoption

Posted by Vedran Miletić on 2025-11-11 18:43:32 UTC

On having leverage and using it for pushing open-source software adoption


Open 24 Hours neon signage

Photo source: Alina Grubnyak (@alinnnaaaa) | Unsplash


Back in late August and early September, I attended 4th CP2K Tutorial organized by CECAM in Zürich. I had the pleasure of meeting Joost VandeVondele's Nanoscale Simulations group at ETHZ and working with them on improving CP2K. It was both fun and productive; we overhauled the wiki homepage and introduced acronyms page, among other things. During a coffee break, there was a discussion on the JPCL viewpoint that speaks against open-source quantum chemistry software, which I countered in the previous blog post.

But there is a story from the workshop which somehow remained untold, and I wanted to tell it at some point. One of the attendants, Valérie Vaissier, told me how she used proprietary quantum chemistry software during her Ph.D.; if I recall correctly, it was Gaussian. Eventually, she decided to learn CP2K and made the switch. She liked CP2K better than the proprietary software package because it is available free of charge, the reported bugs get fixed quicker, and the group of developers behind it is very enthusiastic about their work and open to outsiders who want to join the development.

AMD and the open-source community are writing history

Posted by Vedran Miletić on 2025-11-11 18:43:32 UTC

AMD and the open-source community are writing history


a close up of a cpu chip on top of a motherboard

Photo source: Andrew Dawes (@andrewdawes) | Unsplash


Over the last few years, AMD has slowly been walking the path towards having fully open source drivers on Linux. AMD did not walk alone, they got help from Red Hat, SUSE, and probably others. Phoronix also mentions PathScale, but I have been told on Freenode channel #radeon this is not the case and found no trace of their involvement.

AMD finally publicly unveiled the GPUOpen initiative on the 15th of December 2015. The story was covered on AnandTech, Maximum PC, Ars Technica, Softpedia, and others. For the open-source community that follows the development of the Linux graphics and computing stack, this announcement comes as hardly surprising: Alex Deucher and Jammy Zhou presented plans regarding amdgpu at XDC2015 in September 2015. Regardless, a public announcement in mainstream media proves that AMD is serious about GPUOpen.

I believe GPUOpen is the best chance we will get in this decade to open up the driver and software stacks in the graphics and computing industry. I will outline the reasons for my optimism below. As for the history behind open-source drivers for ATi/AMD GPUs, I suggest the well-written reminiscence on Phoronix.

I am still not buying the new-open-source-friendly-Microsoft narrative

Posted by Vedran Miletić on 2025-11-11 18:43:32 UTC

I am still not buying the new-open-source-friendly-Microsoft narrative


black framed window

Photo source: Patrick Bellot (@pbellot) | Unsplash


This week Microsoft released Computational Network Toolkit (CNTK) on GitHub, after open sourcing Edge's JavaScript engine last month and a whole bunch of projects before that.

Even though the open sourcing of a bunch of their software is a very nice move from Microsoft, I am still not convinced that they have changed to the core. I am sure there are parts of the company who believe that free and open source is the way to go, but it still looks like a change just on the periphery.

All the projects they have open-sourced so far are not the core of their business. Their latest version of Windows is no more friendly to alternative operating systems than any version of Windows before it, and one could argue it is even less friendly due to more Secure Boot restrictions. Using Office still basically requires you to use Microsoft's formats and, in turn, accept their vendor lock-in.

Put simply, I think all the projects Microsoft has opened up so far are a nice start, but they still have a long way to go to gain respect from the open-source community. What follows are three steps Microsoft could take in that direction.

Free to know: Open access and open source

Posted by Vedran Miletić on 2025-11-11 18:43:32 UTC

Photo source: Álvaro Serrano (@alvaroserrano) | Unsplash


Info: Reposted from Free to Know: Open access & open source, originally posted by STEMI education on Medium.

Q&A with Vedran Miletić

In June 2014, Elon Musk opened up all Tesla patents. In a blog post announcing this, he wrote that patents "serve merely to stifle progress, entrench the positions of giant corporations and enrich those in the legal profession, rather than the actual inventors." In other words, he joined those who believe that free knowledge is the prerequisite for a great society -- that it is the vibrancy of the educated masses that can make us capable of handling the strange problems our world is made of.

The movements that promote and cultivate this vibrancy are probably most frequently associated with the terms "open access" and "open source". In order to learn more about them, we Q&A-ed Vedran Miletić, the Rocker of Science -- researcher, developer and teacher, currently working in computational chemistry, and a free and open source software contributor and activist. You can read more of his thoughts on free software and related themes on his great blog, Nudged Elastic Band. We hope you will join him, us, and Elon Musk in promoting free knowledge, cooperation and education.

The academic and the free software community ideals

Posted by Vedran Miletić on 2025-11-11 18:43:32 UTC

Photo source: Giammarco Boscaro (@giamboscaro) | Unsplash


Today I vaguely remembered an occasion in 2006 or 2007 when someone from academia, doing something with Java and Unicode, posted to a mailing list related to free and open-source software about a tool he was developing. What made it interesting was that the tool was open source, yet he had filed a patent on the algorithm.

Celebrating Graphics and Compute Freedom Day

Posted by Vedran Miletić on 2025-11-11 18:43:32 UTC

Photo source: Elena Mozhvilo (@miracleday) | Unsplash


Hobbyists, activists, geeks, designers, engineers, and others have always tinkered with technologies for their own purposes (in early personal computing, for example), and social activists have long advocated the power of giving tools to people. An open hardware movement driven by these restless innovators is creating ingenious versions of all sorts of technologies and freely sharing the know-how through the Internet and, more recently, through social media. Open-source software, and more recently hardware, is also encroaching upon centers of manufacturing and can empower serious business opportunities and projects.

The free software movement is cited as both an inspiration and a model for open hardware. Free software practices have transformed our culture by making it easier for people to become involved in producing things from magazines to music, movies to games, communities to services. With advances in digital fabrication making it easier to manipulate materials, some now anticipate an analogous opening up of manufacturing to mass participation.

Enabling HTTP/2, HTTPS, and going HTTPS-only on inf2

Posted by Vedran Miletić on 2025-11-11 18:43:32 UTC

Photo source: Arkadiusz Gąsiorowski (@ambuscade) | Unsplash


Inf2 is a web server at University of Rijeka Department of Informatics, hosting Sphinx-produced static HTML course materials (mirrored elsewhere), some big files, a WordPress instance (archived elsewhere), and an internal instance of Moodle.

HTTPS had been enabled on inf2 for a long time, albeit using a self-signed certificate. However, with Let's Encrypt entering public beta, we decided to join the movement to HTTPS.
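Concretely, going HTTPS-only with HTTP/2 boils down to a redirect plus a TLS-enabled server block. Here is a minimal sketch, assuming nginx and a hypothetical hostname (inf2 may well run a different web server; the certificate paths follow certbot's default layout):

```nginx
# Redirect all plain-HTTP traffic to HTTPS (the "HTTPS-only" part).
server {
    listen 80;
    server_name inf2.example.org;  # hypothetical hostname
    return 301 https://$host$request_uri;
}

# Serve the site over TLS with HTTP/2 enabled.
server {
    listen 443 ssl http2;
    server_name inf2.example.org;

    # Certificate and key as issued by Let's Encrypt's certbot.
    ssl_certificate     /etc/letsencrypt/live/inf2.example.org/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/inf2.example.org/privkey.pem;

    root /var/www/html;
}
```

The same two-step structure (redirect everything, then serve TLS with the Let's Encrypt certificate) applies regardless of the server software used.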

Why we use reStructuredText and Sphinx static site generator for maintaining teaching materials

Posted by Vedran Miletić on 2025-11-11 18:43:32 UTC

Photo source: Patrick Tomasso (@impatrickt) | Unsplash


Yesterday I was asked by Edvin Močibob, a friend and a former student teaching assistant of mine, the following question:

You seem to be using Sphinx for your teaching materials, right? As far as I can see, it doesn't have an online WYSIWYG editor. I would be interested in comparison of your solution with e.g. MediaWiki.

While the advantages and the disadvantages of static site generators, when compared to content management systems, have been written about and discussed already, I will outline our reasons for the choice of Sphinx below. Many of the points have probably already been presented elsewhere.

Fly away, little bird

Posted by Vedran Miletić on 2025-11-11 18:43:32 UTC

Photo source: Vincent van Zalinge (@vincentvanzalinge) | Unsplash


The last day of July happened to be the day that Domagoj Margan, a former student teaching assistant and a great friend of mine, set up his own DigitalOcean droplet running a web server and serving his professional website on his own domain domargan.net. For a few years, I was helping him by providing space on the server I owned and maintained, and I was always glad to do so. Let me explain why.

Mirroring free and open-source software matters

Posted by Vedran Miletić on 2025-11-11 18:43:32 UTC

Photo source: Tuva Mathilde Løland (@tuvaloland) | Unsplash


Post theme song: Mirror mirror by Blind Guardian

A mirror is a local copy of a website, used to speed up access for users geographically close to it and to reduce the load on the original website. Content distribution networks (CDNs), a newer concept and perhaps more familiar to younger readers, serve the same purpose, but do it transparently: with a mirror, the user explicitly sees which mirror is being used because its domain differs from the original website's, while with a CDN the domain remains the same and the DNS resolution (which is invisible to the user) selects a different server.

Free and open-source software has been distributed via (FTP) mirrors, usually hosted at universities, basically since its inception. The story of Linux mentions a directory on ftp.funet.fi (FUNET is the Finnish University and Research Network) where Linus Torvalds uploaded the sources, which was soon mirrored by Ted Ts'o on MIT's FTP server. The GNU Project's history contains an analogous process of making local copies of the software for faster downloading, which was especially important in the times of pre-broadband Internet, and it continues today.

Markdown vs reStructuredText for teaching materials

Posted by Vedran Miletić on 2025-11-11 18:43:32 UTC

Photo source: Eugenio Mazzone (@eugi1492) | Unsplash


Back in the summer of 2017, I wrote an article explaining why we used Sphinx and reStructuredText to produce teaching materials instead of a wiki. In addition to recommending Sphinx as the solution to use, it was general praise for generating static HTML files from Markdown or reStructuredText.

This summer I converted the teaching materials from reStructuredText to Markdown. Unfortunately, the automated conversion using Pandoc didn't quite produce the result I wanted, so I ended up cooking up my own Python script that converted the specific dialect of reStructuredText used for the contents of the group website, fixing a myriad of inconsistencies in writing style that had accumulated over the years.
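For illustration, the kind of rewriting such a script performs can be sketched as follows. This is a minimal, hypothetical example covering only section titles, inline literals, and external links, not the actual script described above:

```python
import re

def rst_to_md(text: str) -> str:
    """Convert a few common reStructuredText constructs to Markdown.

    A minimal sketch; a real conversion (or Pandoc) handles far more.
    """
    lines = text.splitlines()
    out = []
    i = 0
    while i < len(lines):
        line = lines[i]
        # Section title: a line underlined with = or - becomes an ATX heading.
        if (i + 1 < len(lines)
                and re.fullmatch(r"=+|-+", lines[i + 1])
                and len(lines[i + 1]) >= len(line) > 0):
            prefix = "#" if lines[i + 1][0] == "=" else "##"
            out.append(f"{prefix} {line}")
            i += 2
            continue
        # Inline literals: ``code`` -> `code`
        line = line.replace("``", "`")
        # External links: `text <url>`_ -> [text](url)
        line = re.sub(r"`([^`<]+?)\s+<([^>]+)>`_", r"[\1](\2)", line)
        out.append(line)
        i += 1
    return "\n".join(out)
```

For example, `rst_to_md("Title\n=====")` yields `"# Title"`. Handling a site-specific dialect mostly means accumulating more such rules, plus special cases for every inconsistency found in the source files.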

Don't use RAR

Posted by Vedran Miletić on 2025-11-11 18:43:32 UTC

Photo source: Tim Mossholder (@ctimmossholder) | Unsplash


I sometimes joke with my TA Milan Petrović that his usage of RAR does not imply that he will be driving a rari. After all, he is not Devito rapping^Wsinging Uh 😤. Jokes aside, if you search for "should I use RAR" or a similar phrase on your favorite search engine, you'll see articles like Don't Use ZIP, Use RAR (2007) and Why RAR Is Better Than ZIP & The Best RAR Software Available (2011).

Should I do a Ph.D.?

Posted by Vedran Miletić on 2025-11-11 18:43:32 UTC

Photo source: Santeri Liukkonen (@iamsanteri) | Unsplash


Tough question, and one that has been asked and answered over and over. The simplest answer is, of course, that it depends on many factors.

As I started blogging at the end of my journey as a doctoral student, the topic of how I selected the field and ultimately decided to enroll in postgraduate studies never really came up. In the following paragraphs, I will give a personal perspective on my Ph.D. endeavor. Just like other perspectives from doctors of not that kind, it is specific to the person in the situation, but parts of it might apply more broadly.