A project’s code of conduct is a vital part of creating a welcoming and healthy community. The existence of a code of conduct sets expectations for behavior within a community. It signals how the community expects its members to behave. Most people, of course, behave well with or without a code of conduct, so the mere existence of a code of conduct is not enough. Sometimes, the code of conduct has to be enforced. But who should enforce the code of conduct?
In a recent discussion on the copyleft-next project’s mailing list, Bradley Kuhn — one of the project leaders — raised this question. He said, in part:
I just don’t think it’s a good idea for various reasons for Richard [Fontana, another project leader] or me to enforce the CoC, and I note the Contributor Covenant says that “Community Leaders” are responsible for enforcement.
The case for outsourcing code of conduct enforcement
Bradley did not elaborate on why he thinks the project leadership should not enforce the code of conduct, so I’ll address some general arguments that I’ve seen.
Project leaders (generally) lack training and experience in code of conduct enforcement. Code of conduct enforcement is a skill, just like coding, issue triage, documentation writing, and any number of other activities that happen in an open source project. It’s not reasonable to expect one (or even a small number) of project leaders to be experts in every skill.
An “outsider” enforcing the code of conduct would be less biased. Someone who doesn’t work within the project on a daily basis doesn’t have history with the parties involved in a code of conduct issue. They don’t have to work with the parties in the future, either, so they avoid the temptation to make unjust decisions to “keep the peace”.
Code of conduct enforcement is emotionally-draining work. In the easiest cases, it requires navigating an emotionally-charged situation. In the hardest cases, it involves hearing about abuse or assault. Dealing with this is important, of course, but it takes a toll on project leaders and can accelerate burnout.
Why leaders should enforce the code of conduct
For every argument for outsourcing code of conduct enforcement, there’s a counterargument in favor of keeping it “in house.”
Leaders can learn new skills. Leading a project is not a technical exercise, it’s a people exercise. If a project leader doesn’t have the skills needed to handle code of conduct enforcement, that’s okay — they can learn. Given the importance of code of conduct enforcement to maintaining a healthy community, it’s worth learning that skill. Project leaders don’t need to be world-leading experts; they need to be just competent enough.
Outsiders are not invested in the success of the project the way its leaders are. An outsider may look at the case in a vacuum, whereas the project leader understands the long term implications their decisions have for the project. Project leaders also better understand the people involved in an incident and can bring that broader context to a decision.
Code of conduct enforcement is an integral part of the responsibility of project leadership. Leadership is hard sometimes. Offloading the hard parts is abdicating leadership.
Easing the enforcement burden for project leaders
Code of conduct enforcement is a core part of leading an open source project, but there are ways to ease the burden. First among those is to share the load. A committee of three to five people means that the whole responsibility doesn’t fall on one person. It also gives space for recusals if one of the committee members is directly involved or has another conflict of interest. This committee can include people with formal leadership roles, respected community members, and an outside code of conduct expert.
If your project has multiple formal leadership roles (“maintainer” counts, here), include only some of them in the code of conduct committee at any one time. This allows people to rotate out. Taking a break from code of conduct work gives people a chance to recharge and helps avoid accelerated burnout.
Code of conduct expert Marie Nordin, who contributed ideas for this post, also suggests having legal counsel available to the code of conduct committee. This is not possible for all projects, of course, but any project stewarded by a foundation should have some form of counsel available for particularly difficult situations.
Early enforcement, with an eye toward encouraging acceptable behavior instead of punishing unacceptable behavior, will also help. The longer an issue festers, the harder it will be to resolve.
Koji will be unavailable to external users during F43 mass branching. Any
Rawhide builds running at that time will be canceled; maintainers can
resubmit them after the branching.
Once Fedora Linux 43 is branched, we will re-enable builds in Koji.
OpenH264 provides an open source and fully licensed codec for the widely-used H.264 format. However, due to legal constraints, the binaries must be distributed directly by Cisco, which provides a repository of RPMs built particularly for Fedora. The question then arises: how can we use this with Fedora Flatpaks?
Flatpak (the program) includes an extra-data functionality which allows for downloading and installing artifacts (the metadata for which is embedded in the distributed flatpak) on the users’ machine during installation, rather than including the content in the flatpak itself. This is perfect for a case where we want to install something but cannot distribute the content ourselves.
Unfortunately, flatpak (the program) does not handle this properly when the flatpak (content) is downloaded from an OCI registry, as we do in Fedora. (If we could just implement that, this article might not be necessary.) However, there are ways around this, in increasing order of difficulty.
All of these options require network access (at some point), the flatpak-module-tools package, and, aside from Option 1, the fedpkg package. You only need one of these options, as they are all just different ways to install the same extension (named org.fedoraproject.Platform.Codecs.openh264).
NOTE 1: Because of the nature of these workarounds, each user on a system will need to run these steps, and will need to redo them for each Fedora Flatpak version (42, 43, etc.) and occasionally in between, when there are updates to openh264 itself.
NOTE 2: In all of the following examples, RELEASE or @RELEASE@ must be replaced with the Fedora Flatpak runtime release for which the extension should be installed (currently, f42).
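Since the release appears in several of the commands below, it can help to set it once in a shell variable and substitute it yourself (a minimal sketch; f42 is the current release named above, and the clone command is the one used later in this article):

```shell
# Set the runtime release once, then reuse it wherever RELEASE appears.
RELEASE=f42
clone_cmd="fedpkg clone -ab ${RELEASE} flatpaks/openh264"
echo "$clone_cmd"
```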
Option 1 – Download the Extension
This is the easiest option, provided that you are also able to install the openh264 RPM.
Browse the openh264-flatpak builds in Koji and select the most recent build corresponding to the version of your Fedora Flatpak runtime(s) (currently, f42).
Download the .oci.tar file corresponding to your architecture.
Option 2 – Build the Extension
If Option 1 does not work, or there is no corresponding extension build available for download, then you can build and install the very same extension yourself.
Clone the extension spec:
fedpkg clone -ab RELEASE flatpaks/openh264 && cd openh264
Build and install the extension:
flatpak-module build-container-local --install
Option 3 – Build the Package and Extension
If you are somehow unable to install the openh264 RPM on your system, then the previous methods will not work. Instead, you will need to build openh264 itself and an extension which includes it (instead of downloading it, as above).
NOTE: this option requires flatpak-module-tools 1.2.0 or later.
Create the container.yaml flatpak spec:
flatpak:
  id: org.fedoraproject.Platform.Codecs.openh264
  build-extension: true
  runtime: org.fedoraproject.Platform
  runtime-version: "@RELEASE@"
  sdk: org.fedoraproject.Sdk
  name: openh264
  component: openh264-flatpak
  branch: "@RELEASE@"
  packages:
    - openh264
    - openh264-flatpak-config
    # for cleanup-commands (part of runtime)
    - coreutils-single
  cleanup-commands: |
    mkdir -p /app/lib
    mv /usr/lib64/libopenh264.so* /app/lib/
Clone the RPM spec:
fedpkg clone -ab RELEASE rpms/openh264
Build the RPM to be included in the extension:
flatpak-module build-rpms-local ./openh264
Build and install the extension:
flatpak-module build-container-local --install
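After any of the build options, a quick sanity check can confirm the extension is visible to flatpak. This is a hedged sketch: the grep pattern assumes the extension ID given earlier, and the block degrades gracefully when flatpak is not installed.

```shell
# Count installed runtime refs matching the openh264 extension ID.
if command -v flatpak >/dev/null 2>&1; then
    found=$(flatpak list --runtime --columns=ref 2>/dev/null | grep -c openh264)
    status="openh264 refs installed: ${found}"
else
    status="flatpak not available on this system"
fi
echo "$status"
```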
Option 4 – The Fallback
If none of these options is used but you have any Flathub content installed, the Fedora Flatpak runtime uses the Flathub openh264 extension instead. However, this will no longer be an option as of the F43 release of the Fedora Flatpak runtime.
Hopefully, one day we will no longer have to deal with the artificial encumbrances of software patents, and will be able to distribute all open source software freely. Until that time, the Codecs extension space provides a flexible framework for extending the multimedia support of all Fedora flatpaks.
The man command is short for manual. It provides access to various up-to-date on-board documentation pages, helping users make better use of Linux/Unix operating systems.
What is man?
The man command is a manual pager which provides the user with documentation about specific functions, system calls, and commands. The man command uses the less pager by default. (See man less for more information.)
Note that a man page is likely to contain more up-to-date information than what can be found on the internet. It is wise to compare the man page usage and options with those found on the web.
How to use man?
To use the man command effectively, we have to know the manual page system. The manual pages are divided into eight sections, each providing documentation on a particular topic.
What are the manual page sections?
1. Programs, shell commands and daemons.
2. System calls.
3. Library calls.
4. Special files, drivers, and hardware.
5. Configuration files.
6. Games.
7. Miscellaneous commands.
8. System administration commands and daemons.
Examples
To get the printf library function documentation (section 3):
# man 3 printf
To get the printf shell builtin documentation (section 1):
# man 1 printf
You can learn more about the man command and its options:
# man man
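Besides opening a page you already know, man can also search the page index. This is a hedged sketch: man -k searches names and short descriptions (like apropos), and man -f lists the sections providing an exact page name (like whatis); the block falls back gracefully if man is not installed.

```shell
# Search the manual index for "printf" entries, like apropos(1),
# then show which sections have a page named exactly "printf".
if command -v man >/dev/null 2>&1; then
    man -k printf 2>/dev/null | head -n 5
    man -f printf 2>/dev/null
    searched="yes"
else
    searched="no (man not installed)"
fi
echo "searched: $searched"
```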
How to manage the index caches database
To update the existing database, or to create it, use the -c or --create flag with the mandb command:
# mandb --create
To do a correctness check on the documentation database use the -t or --test flag:
# mandb --test
How to export manual pages
To export a man page, use the -t flag with the man command:
man -t 5 dnf.conf > manual.ps
This will create a PostScript file with the contents of the dnf.conf man page from section 5.
Alternatively, if you want to output a PDF file, use something like this instead:
man -Tpdf 5 dnf.conf > dnf.conf.pdf
You will need the groff-perl package installed for this command to work.
Summary
We often need information about commands, daemons, shell builtins, and more to make them do what they are intended to do. The system manual lets us learn not everything, but exactly the knowledge required to reach our goal.
I love it when the algorithm delivers me a beloved song that I didn’t even remember existed, didn’t know who made it, and didn’t know the name of.
Thank you, algorithm.
Or should I thank the data scientist who organized music data into a space of about 20 to 50 dimensions and optimized and trained decision trees to infer the next song to play?
Loadouts for Genshin Impact v0.1.10 is OUT NOW with the addition of support for recently released characters like Ineffa and for recently released weapons like Fractured Halo and Flame-Forged Insight from Genshin Impact v5.8 Phase 1. Take this FREE and OPEN SOURCE application for a spin using the links below to manage the custom equipment of artifacts and weapons for the playable characters.
Automated dependency updates for GI Loadouts by @renovate[bot] in #359
Exhibit repository activity on the documentation by @gridhead in #358
Automated dependency updates for GI Loadouts by @renovate[bot] in #362
Add Claude Code GitHub Workflow by @gridhead in #370
feat: add missing return type hints by @gridhead in #363
Include additional testcases for informative assets by @gridhead in #371
Cleanup the inconsistent tabs and spaces usage by @gridhead in #372
Fix Claude workflow permissions by @gridhead in #377
Add pyinstaller to development dependencies by @cursor[bot] in #376
chore(deps): automated dependency updates for GI Loadouts by @renovate[bot] in #378
chore(deps): automated dependency updates for GI Loadouts by @renovate[bot] in #379
Mock the tests involving Tesseract for reliability by @gridhead in #381
Require GitHub Actions workflows to trigger on pull_request_target by @gridhead in #385
Add the recently added weapon Fractured Halo to the GI Loadouts roster by @sdglitched in #383
Add the recently added weapon Flame-Forged Insight to the GI Loadouts roster by @sdglitched in #388
Add the recently added character Ineffa to the GI Loadouts roster by @sdglitched in #382
Interchange artifact rarity and artifact level dropdown positions for improved UX by @sdglitched in #390
Package GI Loadouts for Fedora Linux as an RPM package by @gridhead in #361
chore(deps): automated dependency updates for GI Loadouts by @renovate[bot] in #392
Stage the release v0.1.10 for Genshin Impact v5.8 Phase 1 by @sdglitched in #391
Characters
Ineffa
Ineffa is a polearm-wielding Electro character of five-star quality.
Ineffa - Workspace and Results
Weapons
Fractured Halo
Purifying Crown - Scales on Crit DMG.
Fractured Halo - Workspace
Flame-Forged Insight
Mind in Bloom - Scales on Elemental Mastery.
Flame-Forged Insight - Workspace
Appeal
While allowing you to experiment with various builds and share them for later, Loadouts for Genshin Impact lets you take calculated risks by showing you the potential of your characters with certain artifacts and weapons equipped that you might not even own. Loadouts for Genshin Impact has been and always will be a free and open source software project, and we are committed to delivering a quality experience with every release we make.
Disclaimer
With an extensive suite of over 1465 diverse functionality tests and impeccable 100% source code coverage, we proudly invite auditors and analysts from MiHoYo and other organizations to review our free and open source codebase. This thorough transparency underscores our unwavering commitment to maintaining the fairness and integrity of the game.
The users of this ecosystem application can have complete confidence that their accounts are safe from warnings, suspensions or terminations when using this project. The ecosystem application ensures complete compliance with the terms of services and the regulations regarding third-party software established by MiHoYo for Genshin Impact.
All rights to Genshin Impact assets used in this project are reserved by miHoYo Ltd. and Cognosphere Pte., Ltd. Other properties belong to their respective owners.
The summer is just flying by. Lots of folks taking vacations (I should too)
and things have been somewhat quiet this last week.
Outage on Monday
We had a nasty outage on Monday. It affected lots and lots of our services.
At first it looked like a DDoS against the proxies in our main datacenter.
They were seeing packet loss and were unable to process much of anything.
These proxies are the gateways to a lot of services that are just in that
one main datacenter (koji, etc). On digging some more, I was seeing a number
of connections to our registry stuck in 'sending to client' in apache.
They would pile up there and take up all the connection slots, then nothing
else would be able to get through. It was unclear if this was causing the
packet loss or the packet loss was causing this. I updated and rebooted the
proxies and things came back. I'm not sure if this was really a DDoS, or
if it was a kernel or apache bug or if it was caused by something else.
I guess the takeaway is that if you can't quickly find a cause for something,
updating and rebooting (i.e., turning it off and back on again) is worth a
try.
F43 branching next week
Fedora 43 is branching off rawhide next week. So, we have started re-signing
everything in rawhide (and all new builds) with the Fedora 44 signing key.
This has resulted in some signing slowdowns, but hopefully it will make
branching more smooth next week.
F39/40 not properly moved to archives
When a release goes end of life, we move it to our archives and update
mirrormanager to point any users there. Some things went wrong doing
that with f39/40, so mirrormanager was still sending users to our normal
mirrors, many of which no longer had that content.
Of course you shouldn't run EOL releases, but this should be cleaned
up now if you have to for some weird reason.
Zodbot back on irc
I finally got around to building what I needed to bring our IRC bot
back up after the datacenter move. It definitely wasn't high on my list
but now it's back.
Some AI learning
Yesterday was a day of learning at Red Hat and I looked at a bunch of
AI related stuff. I ended up mostly playing with Claude, and it was a mixed
bag.
On the plus side:
It got the process for adding a new host to ansible very right.
Clear, spelled out what needed to be added where and had examples.
It was quick for adding some debugging I wanted to add. Faster to ask
it than it would have taken me to type it out.
On the minus side:
It couldn't figure out an issue where some role was running on a host
that it shouldn't be. At first it said it would run there, then it
apologised and said it wouldn't. (I still need to track down why
it's happening.)
I asked it to fix a more complicated problem in a python app, and
it basically just added checks to avoid the problem without actually
fixing it. Likely I wasn't prompting it correctly to allow that.
The restaurant was packed today, and at one table a couple was standing with strong spotlights, filming their dishes. They were clearly working.
Ninety minutes later, as they were leaving, they passed my table and I didn't hesitate: I called out, asking what they were filming and whether they weren't going to film me too, since I'm a celebrity. A cheap line just to break the ice and get a little smile out of them, so they would open the door to the interview I was about to conduct. Shamelessly, I asked everything I could.
They own a fairly new channel, only about 50K followers combined across Instagram and TikTok. That's why they charge restaurants little: just R$500. But they were quick to add that bigger influencers charge up to R$3000 to record and publish a review of an establishment.
They do 2 or 3 shoots a day; they are in food establishments practically every day. But lately they have started diversifying into stores for baby and newborn products, since Luana is pregnant.
And if the food is bad? I asked. They say it has never happened, but they check beforehand by ordering an anonymous delivery to judge for themselves. After all, they have a reputation to protect.
A typical published review gets around 1 million views and can get thousands of — it varies and it's organic, since they pay the platforms nothing to promote their content.
They have some 10 or 15 reviews already recorded and pending publication. After the shoot (the footage) comes the most laborious part: writing the video script and editing. Luana does the editing on her own phone, with the paid version of CapCut, which has a series of useful filters. The whole process seems to involve no computer at all, just the phone. It takes 1 to 2 weeks for a review to be published, unless the client is in a hurry.
Luana and Kaio are from Campinas, but their clients are all in São Paulo. They stay in partner hotels in the city, mainly on weekends, to work through their frantic schedule of reviews. And what about establishments in the interior, like Campinas, Americana, Ribeirão? I asked. They told me that in the interior nobody ever hires an influencer to do a review. It really is just São Paulo.
I commented that I would fear the scale if I led that life of so many restaurants. Kaio agreed and said he has already gained some 15 kg in this venture. The life of a digital influencer is not easy.
I won't promote their channel, because I don't know whether they would want that kind of behind-the-scenes exposure. And also because I'm not an influencer of influencers. I'm just nosy.
My kids didn't blink once during this whole interview, and they asked questions too. Many other interesting and fun subjects came up that evening, but meeting the digital influencers was the most notable event, according to the teens.
This is a weekly report from the I&R (Infrastructure & Release Engineering) Team. We provide you both infographic and text version of the weekly report. If you just want to quickly look at what we did, just look at the infographic. If you are interested in more in depth details look below the infographic.
Week: 04 Aug – 08 Aug 2025
Infrastructure & Release Engineering
The purpose of this team is to take care of day to day business regarding CentOS and Fedora Infrastructure and Fedora release engineering work. It’s responsible for services running in Fedora and CentOS infrastructure and preparing things for the new Fedora release (mirrors, mass branching, new namespaces etc.). List of planned/in-progress issues
Join us to test the 6.16 kernel for Fedora Linux 43 during August 10 – 16!
What is a test week?
Test weeks are organised by the Fedora QA team per release cycle and are a great way to get involved in developing the upcoming Fedora Linux release. Instructions and test cases are provided for you, plus you will also be mentioned and thanked in the ‘Heroes of Fedora‘ blog at the end of the release. For more information on how to get involved, check out the Fedora Test Days wiki, and to participate in the upcoming Kernel 6.16 test week, read on!
Kernel 6.16 Test Week
The kernel team is working on final integration for Linux kernel 6.16. This recently released kernel version will arrive soon in Fedora Linux. As a result, the Fedora Linux kernel and QA teams have organized a test week from Sunday, August 10, 2025 to Saturday, August 16, 2025.
The wiki page contains links to the test images you’ll need to participate. The results can be submitted in the test day app.
What are we looking for?
Regressions when rebasing
Issues when installing from USB on VM and/or bare metal systems (file under ‘exploratory’)
What do I need to do?
To contribute, you only need to be able to do the following things:
FOSSY ( https://2025.fossy.us/ ) is in its 3rd year now.
I always wanted to attend, but in previous years it was right
around the time that Flock was, or some release event that was
very busy (or both!). This year, happily, it was not and I was able
to attend.
FOSSY was in Portland, Oregon, which is a 2ish hour drive from my
house. No planes, no jetlag! Hurray! The first day was somewhat of
a half day: registration started in the morning, but the keynote
wasn't until the afternoon. This worked great for me, as I was able
to do a few things in the morning and then drive up and park at
my hotel and walk over to the venue.
The venue was the 3rd floor of the PSU student union building.
I had a bit of confusion at first, as the first set of doors I got
to were locked, with a sign to enter from one street direction
for events. However, the correct door to enter by _also_
has that sign on it, so I passed it, thinking the right door would
have a nice "hey, this is the right door" sign. A nice walk around the
building and I figured it out.
Registration was easy, and then into the ballroom/main room, with keynotes
and such, tables for some free software projects, as well as coffee
and snacks. I wished a few times there were some tables around to help with
juggling coffee and a snack, and perhaps to be a place to gather folks.
Somehow a table with 2 people standing at it and more room seems like
a chance to walk up and introduce yourself, whereas two people talking
seems like it would be intruding.
Some talks I attended:
Is There Really an SBOM Mandate?
The main surprise for me is that SBOM has no actual definition and
companies/communities are just winging it. I definitely love the idea
of open source projects making their complete source code and scripts
available as their SBOM.
Panel: Ongoing Things in the Kernel Community
I tossed a 'what do you think of AI contributions' question to get
things started and there was a lot of great discussion.
The Subtle Art of Lying with Statistics
Lots of fun graphs here and definitely something people should keep
in mind when they see some stat that doesn't 'seem' right.
The Future of Fixing Technology
This was noting how we should try and define the tech we get
and make sure it's repairable/configurable. I completely agree,
but I am really not sure how we get to that future.
The evening event was nice, got to catch up with several folks.
Friday we had:
keynote: Assessing and Managing threats to the Nonprofit Infrastructure of FOSS
This was an interesting discussion from the point of view of nonprofit
orgs and managing them. Today's landscape is pretty different than it was
even a few years ago.
After that they presented the Distinguished Service Award in Software Freedom
to Lance Albertson. A well deserved award! Congrats, Lance!
Starting an Open Mentorship Handbook!
This was a fun talk; it was more of a round table with the audience talking
about their experiences and areas. I'm definitely looking forward to
hearing more about the handbook they are going to put together and
I got some ideas about how we can up our game.
Raising the bar on your conference presentation
This was a great talk! Rich Bowen did a great job here. There's tons of
good advice on how to make your conference talk so much better. I found
myself after this checking off when I saw things in talks he mentioned.
Everyone speaking should give this one a watch.
I caught next "Making P2P apps with Spritely Goblins".
Seemed pretty cool, but I am not so much into web design, so I was
a bit lost.
How to Hold It Together When It All Falls Apart:
Surviving a Toxic Open Source Project Without Losing it.
This was kind of a cautionary tale and a bit harrowing, but kudos
for sharing so others can learn by it. In particular I was struck by
"being available all the time" (I do this) and "people just expect
you to do things, so no one does them if you don't" (this happens to me
as well). Definitely something to think about.
It's all about the ecosystem! was next.
This was a reminder that when you make a thing that an ecosystem
comes up around, it's that ecosystem that people find great. If you try
to make it harder for them to keep doing their ecosystem things, they
will leave.
DevOps is a Foreign Language (or Why There Are No Junior SREs)
This was a great take on new devops folks learning things like
adults learn new languages. There were a lot of interesting parallels.
More good thoughts about how we could onboard folks better.
Saturday started out with:
Q&A on SFC's lawsuit against Vizio
Some specifics I had not heard before; it was pretty interesting.
Good luck on the case, Conservancy!
DRM, security, or both? How do we decide?
Matthew Garrett went over a bunch of DRM and security measures.
Good background/info for those that don't follow that space.
Some general thoughts: It was sure nice to see a more diverse audience!
Things seemed to run pretty smoothly and Portland was a lovely place
for it. Next year, they are going to be moving to Vancouver, BC.
I think it might be interesting to see about having a fedora booth
and/or having some more distro related talks.
Most Linux distributions chose to drop it from their repositories. Various forks exist; Valkey seems to be a serious one and was chosen as a replacement.
So, starting with Fedora 41 or Enterprise Linux 10 (CentOS, RHEL, AlmaLinux, RockyLinux...), redis is no longer available, but valkey is.
With version 8.0, Redis Labs chose to switch to the AGPLv3 license, so Redis is back as an open source project, but lots of users have already switched and want to keep valkey.
RPMs of Valkey version 8.1.3 are available in the remi-modular repository for Fedora ≥ 41 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).
So you now have the choice between Redis and Valkey.
1. Installation
Packages are available in the valkey:remi-8.1 module stream.
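Enabling that stream and installing the server can be sketched as below. This is a hypothetical example based on standard dnf module usage; it only prints the commands rather than running them, since they need root and the remi-modular repository configured.

```shell
# Commands to run as root on a system with remi-modular configured.
enable_cmd="dnf module enable -y valkey:remi-8.1"
install_cmd="dnf install -y valkey"
printf '%s\n%s\n' "$enable_cmd" "$install_cmd"
```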
These packages are weak dependencies of Valkey, so they are installed by default (if install_weak_deps is not disabled in the dnf configuration).
The modules are automatically loaded after installation and service (re)start.
Some modules are not available for Enterprise Linux 8.
3. Future
Valkey also provides a set of modules, requiring some packaging changes already proposed for the Fedora official repository.
Redis may be proposed for reintegration and return to the Fedora official repository, by me if I find enough motivation and energy, or by someone else.
So users will have the choice and can even use both.
ℹ️ Notice: Enterprise Linux 10.0 and Fedora have valkey 8.0 in their repository. Fedora 43 will have valkey 8.1. CentOS Stream 9 also has valkey 8.0, so it should be part of EL-9.7.
As everyone probably knows, Rust is considered a great language for secure programming, and hence a lot of people are looking at it for everything from low level firmware to GPU drivers. In a similar vein, it can already be used to build UEFI applications.
Upstream in U-Boot we’ve been adding support for UEFI HTTP and HTTPS Boot, and it’s now stable enough that I am looking to enable this in Fedora. While I was testing the features on various bits of hardware, I wanted a small UEFI app I could pull across a network quickly and easily from a web server for testing devices.
Of course, adding in display testing as well would be nice... so enter UEFI nyan cat for a bit of fun!
Thankfully the Fedora rust toolchain already has the UEFI targets built (aarch64 and x86_64) so you don’t need to mess with third party toolchains or repos, it works the same for both targets, just substitute the one you want in the example where necessary.
I did most of this in a container:
$ podman pull fedora
$ podman run --name=nyan_cat --rm -it fedora /bin/bash
# dnf install -y git-core cargo rust-std-static-aarch64-unknown-uefi
# git clone https://github.com/diekmann/uefi_nyan_80x25.git
# cd uefi_nyan_80x25/nyan/
# cargo build --release --target aarch64-unknown-uefi
# ls target/aarch64-unknown-uefi/release/nyan.efi
target/aarch64-unknown-uefi/release/nyan.efi
From outside the container then copy the binary, and add it to the EFI boot menu:
$ podman cp nyan_cat:/uefi_nyan_80x25/nyan/target/aarch64-unknown-uefi/release/nyan.efi ~/
$ sudo mkdir /boot/efi/EFI/nyan/
$ sudo cp ~/nyan.efi /boot/efi/EFI/nyan/nyan-a64.efi
$ sudo efibootmgr --create --disk /dev/nvme0n1 --part 1 --label "nyan" --loader \\EFI\\nyan\\nyan-a64.efi
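To confirm the entry was created, efibootmgr with no arguments lists the current boot entries. A sketch that looks for the "nyan" label; it only works on a machine booted in EFI mode and falls back gracefully otherwise:

```shell
# Look for the "nyan" label among the EFI boot entries.
if command -v efibootmgr >/dev/null 2>&1 && efibootmgr >/dev/null 2>&1; then
    entry=$(efibootmgr | grep -c nyan)
    echo "nyan entries found: $entry"
else
    entry="unknown"
    echo "efibootmgr unavailable (not an EFI system?)"
fi
```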
Join us this week for the Anaconda Web UI Installer test week, where we are focusing testing on Anaconda’s brand new WebUI for the KDE and Spins live images.
What is a test week?
Test weeks are organised by the Fedora QA team per release cycle and are a great way to get involved in developing the upcoming Fedora Linux release. Instructions and test cases are provided for you, plus you will be mentioned and thanked in the ‘Heroes of Fedora‘ blog at the end of the release. More information on how to get involved is located on the Fedora Test Days wiki. To participate in the upcoming Anaconda Web UI test week, read on!
Anaconda Web UI Installer Test Week
The Anaconda WebUI became the default installer interface in the previous Fedora release for Fedora Workstation. This Test Day extends that effort to ensure the WebUI works smoothly across the broader Fedora ecosystem, especially in various Spin environments and Fedora KDE. As a result, the Anaconda and QA teams have organized a test week from Monday August 4, 2025 to Friday August 8, 2025.
The wiki page contains links to the test images you’ll need to participate. The results can be submitted in the test day app.
What are we looking for?
How partitioning is working
Cockpit Storage Editor performance
Internationalisation
Issues not mentioned in the test cases – file those findings under ‘exploratory’.
What do I need to do?
To contribute, you only need to be able to do the following things:
These packages are weak dependencies of Redis, so they are installed by default (if install_weak_deps is not disabled in the dnf configuration).
The modules are automatically loaded after installation and service (re)start.
The modules are not available for Enterprise Linux 8.
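If you would rather not have the module subpackages pulled in automatically, weak dependencies can be disabled in dnf’s configuration. A minimal sketch of the relevant setting (the option name is `install_weak_deps`, as mentioned above):

```ini
# /etc/dnf/dnf.conf
[main]
# Do not install recommended (weak) dependencies by default,
# so weak dependencies such as the Redis modules are skipped.
install_weak_deps=False
```

Alternatively, the setting can be applied to a single transaction with `dnf install --setopt=install_weak_deps=False redis`, leaving the global default untouched.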
3. Future
Valkey also provides a similar set of modules, requiring some packaging changes that have already been proposed for the official Fedora repository.
Redis may be proposed for unretirement and return to the official Fedora repository, either by me, if I find enough motivation and energy, or by someone else.
I may also try to solve packaging issues for other modules (e.g. RediSearch). For now, the module packages are very far from the Packaging Guidelines, so they are obviously not ready for a review.
July is already behind us – time really flies when you’re learning and growing! I’m happy to share what I’ve been up to this month as a Fedora DEI Outreachy intern.
To be honest, I feel like I’m learning more and more about open source every week. This month was a mix of video editing, working on event checklists, helping with fun Fedora activities, and doing prep work for community outreach.
Shoutouts
First, I want to say a big thank you to my mentor, Jona Azizaj, for her amazing support and guidance.
Also, shoutout to Mat H (the program) – thank you for always helping out and making sure I stay on track.
Being part of the Fedora DEI team has been a really positive experience. It feels awesome to be part of the team.
What I’ve been working on
Event Checklists
I reviewed the existing event checklist and started making updates to improve it. I want to make it easier for future organizers to plan Fedora events. If you haven’t checked it, you can find the draft in the Discourse post and be sure to leave your comments if you have something to add.
How I managed swags and funding
This month, I figured out how Fedora handles swag distribution, event funding, and reimbursements through ticket requests and Mindshare support – especially useful for local events like mine in June. I’m documenting the process to make it easier for future contributors to plan events smoothly.
Video editing
I worked on editing the Fedora DEI and Docs workshop video, which will be used to promote our work. I knew nothing about video editing going in, but working on this video was a great way to pick up the basics.
Fedora Fun Activities
I’ve taken over the Fedora Fun socials! We hosted a social session on July 25 at 15:00 UTC, and another one followed on Friday, August 1. These are light, 1-hour sessions to hang out with our community members.
I’ve also started working on community outreach. You can check this Discourse post for a draft of what I am looking forward to: helping prepare for future Fedora outreach within our regions, especially how to reach out to students, what topics would be covered, and more.
Chairing DEI Meetings
I got the chance to chair multiple Fedora DEI team meetings. It felt a bit scary at first, but with each meeting, I’ve become more confident running the meetings and helping keep things on track.
It helped me understand more about being inclusive when working with others – super helpful in open source communities!
What’s next
As we move into August, our main focus will be:
Community Outreach – bringing more people into open source and Fedora’s space.
Event Checklist Documentation – we’re working on getting the updated checklist into Fedora DEI Docs so it’s more accessible and useful to everyone planning an event soon.
I’ll also continue contributing to DEI docs and helping organize fun and inclusive Fedora activities.
Got anything we can do together? Hit me up on Matrix (username: lochipi).
Final thoughts
This internship continues to teach me how open source works – from collaboration and event planning to working with different Fedora teams.
If you’re curious about Fedora or want to support DEI work, join us in the Matrix room – we’d love to have you around.
The 46th annual TeX Users Group conference (TUG2025) took place in Kerala during July 18–20, 2025. I have attended and presented at a few past TUG conferences, but this time was different, as I was to present a paper and also help organize the conference. This is a personal, incomplete (and potentially hazy around the edges) reflection on my experience organizing the event, which had participants from many parts of Europe, the US, and India.
Preparations
The Indian TeX Users Group, led by CVR, had conducted TUG conferences in 2002 and 2011. We, a group of about 18 volunteers led by him, convened as soon as the conference plan was announced in September 2024, and started creating to-do lists and schedules and assigning a responsible person to each task.
The STMDocs campus has excellent conference facilities, including a large conference hall, audio/video systems, high-speed internet with fallback, redundant power supply, etc., making it an ideal choice, as it was in 2011. Yet we prioritized the convenience of the speakers and delegates, to spare them travel to and from a hotel in the city; prior experience taught us that it is best to locate the conference venue close to the accommodation. We scouted a few hotels with good conference facilities in Thiruvananthapuram city and finalized the Hyatt Regency, even though it meant taking on greater responsibility and coordination, as the hotel had no prior experience organizing a conference with requirements similar to TUG’s. Travel and visit advisories were published on the conference web site as soon as details were available.
Projector, UPS, display connectors, microphones, WiFi access points, and a lot of related hardware were procured. Conference materials such as t-shirts, mugs, notepads, pens, and tote bags were arranged. Noted political cartoonist E.P. Unny graciously drew the beloved lion sketches for the conference.
Karl Berry, from the US, orchestrated mailing lists for coordination and communication. CVR, Shan, and I assumed the responsibility of answering speaker and delegate emails. At the end of the extended deadline for submitting presentations and prerecorded talks, Karl handed over the archive for us to use with the audio/video system.
Audio/video and live streaming setup
I traveled to Thiruvananthapuram a week ahead of the conference to be present in person for the final preparations. One of my important tasks was to set up the audio/video and live streaming for the workshop and conference. The audio/video team and the volunteers in charge did a commendable job of setting up all the hardware and connectivity on the 16th evening, and we tested presentations, video playback, the projector, audio in/out, the prompt, the clicker, microphones, and live streaming. There was no prompt at the hotel, so we split the screen-out to two monitors placed on both sides of the podium — this was much appreciated by the speakers later. In addition to the A/V team’s hardware and (primary) laptop, two laptops (running Fedora 42) were used: a hefty one to run the presentations and a backup OBS setup, and another for the remote speakers’ Q&A video conferencing. The laptop used for presentations had a 4K screen. Thanks to Wayland (specifically, KWin), the connected HDMI out could be independently configured for 1080p resolution, but it failed to drive the monitors split further for the prompt. Changing the laptop’s built-in display resolution to 1080p as well fixed the issue (maybe changing the refresh rate from 120 Hz to 60 Hz would also have helped, but we didn’t fiddle any further).
Also met with Erik Nijenhuis in front of the hotel, who was hand-rolling a cigarette (which turned out to be quite in demand during and after the conference), to receive a copy of the book ‘The Stroke’ by Gerrit Noordzij he kindly bought for me — many thanks!
Workshop
The ‘Tagging PDF for accessibility’ workshop was conducted on 17th July at the STMDocs campus — the A/V systems and WiFi were set up and tested a couple of days prior. Delegates were picked up at the hotel in the morning and dropped off after the workshop. Registration of workshop attendees was done on the spot, and we collected speaker introductions to share with the session chairs. I had interesting discussions with Frank Mittelbach and Boris Veytsman during lunch.
Reception & Registration
There was a reception at the Hyatt on the 17th evening, where almost everyone got registered and collected the conference material: the program pre-print, t-shirt, mug, notepad and pen, a handwritten (by N. Bhattathiri) copy of Daiva Daśakam, and a copy of the LaTeX tutorial. All delegates introduced themselves — but I had to step out at that exact moment for a video call to prepare for the live Q&A with Norman Gray from the UK, who was presenting remotely on Saturday. There were two more remote speakers — Ross Moore from Australia and Martin J. Osborne from Canada — with whom I conducted the same exercise, albeit at inconvenient times for them. Frank Mittelbach needed to use his own laptop for his presentation, so we tested the A/V and streaming setup with that too. Doris Behrendt had a presentation with videos; its setup was also tested and arranged.
An ode to libre software & PipeWire
I tried to use a recent MacBook for the live video conference with the remote speakers, but it failed miserably to detect the A/V splitter connected via USB for audio in and out. I fell back to my old laptop running Fedora 42; the devices were detected automagically, and PipeWire (plus WirePlumber) made them instantly available for use.
With everything organized and tested for A/V & live streaming, I went back to get some sleep to wake early on the next day.
Day 1 — Friday
Woke up at 05:30, reached the hotel by 07:00, and met some attendees during breakfast. By 08:45, the live stream for day 1 had started. Boris Veytsman, the outgoing vice-president of TUG, opened TUG2025 and handed over to the incoming vice-president and session chair, Erik Nijenhuis, who then introduced Rob Schrauwen to deliver the keynote, titled ‘True but Irrelevant’, reflecting on the design of the Elsevier XML DTD for archiving scientific articles. It was quite enlightening, especially when one of the designers of a system looks back at the strengths, shortcomings, and impact of their design decisions, approached with humility and openness. Rob and I had a chat later about the motto of validating documents and its parallel with the IETF’s robustness principle.
You may see a second stream for day 1; this is entirely my fault, as I accidentally stopped streaming during the tea break and started a new one. The group photo was taken after a few exercises in cat-herding.
All the talks on day 1 were very interesting: many were about the PDF tagging project (those of Mittelbach, Fischer, and Moore); Braun covered the state of CTAN, to which I suggested that the inactive-package-maintainer process consider some Linux distributions’ procedures; Vrajarāja explained their use of XeTeX to typeset in multiple scripts; Hufflen shared his experience teaching LaTeX to students; Behrendt and Busse talked about the use of LaTeX in CrypTool; and CVR spoke about the long-running project of archiving Malayalam literary works in TEI XML format using TeX and friends. The session chairs, speakers, and audience were all punctual and kept to their allotted times, with many follow-up discussions happening during the coffee breaks, which had ample time so that the sessions did not feel rushed.
Ross Moore’s talk was prerecorded. As the video played, he joined via a video conference link. The audio in/out and video out (for projecting on screen and for live streaming) were connected to my laptop, and we could hear him through the audio system, while audience questions via microphone were relayed to him with no lag — this worked seamlessly (thanks to PipeWire). We had a small problem with pausing a video, which locked up the computer running the presentation, but we recovered quickly — after the conference, I diagnosed it as a nouveau driver issue (a GPU hang).
By the end of the day, Rahul and Abhilash were accustomed to driving the presentations and live streams, so I could hand over the reins and enjoy the talks. I decided to stay back at the hotel to avoid travel, and went to bed by 22:00, but sleep descended on this poor soul only by 04:30 or so, thanks to that cup of ristretto at breakfast!
Day 2 — Saturday
Judging by the ensuing laughs and questions, it appears not everyone was asleep during my talk. Frank and Ulrike suggested not colouring the underscore glyph in math, but instead properly colouring LaTeX3 macro names (which can have underscores and colons in addition to letters) in the font.
The sessions on the second day were also varied and interesting, in particular Novotný’s talk about static analysis of LaTeX3 macros; Vaishnavi’s fifteen-year project of researching and encoding the Tulu-Tigalari script in Unicode; and the bibliography-processing talks, separately by Gray and Osborne (both appeared via video conferencing for live Q&A, which worked like a charm); etc.
I had interesting discussions with many participants during lunch and coffee breaks. I mentioned to Ben Davies from Overleaf that many résumés I receive nowadays are done in LaTeX, even when the person has no knowledge of it — a sign of TeX going mainstream, in some sense. Ben agreed that it would make sense to make the first/default project in Overleaf a résumé template. I also rehashed the concern I shared at TUG2023, that the no-error-stop mode in Overleaf leaves much to be desired, as I often encounter documents that do not compile — corroborated by Linas Stonys from VTeX.
In the evening, all of us walked (the monsoon rain had relented) to the music and dance concerts, both of which were fantastic cultural and audio-visual experiences.
Veena music and fusion dance concerts.
Day 3 — Sunday
The morning session of the final day had a few talks: Rishi lamented the eroding typographic beauty in publishing (which Rob concurred with, and which Vrajarāja had earlier pointed out as a reason for choosing TeX); and Doris spoke about the LaTeX village at CCC, and about ‘tuwat’ (to take action). The TeX Users Group annual general body meeting, presided over by Boris, was the first session after lunch, followed by his talk on his approach to solving the editorial review process for documents in TeX. A couple more talks rounded things out: Rahul’s presentation about PDF tagging used our OpenType font for syntax highlighting (yay!), and the lexer developed by the Overleaf team was interesting. On Veeraraghavan’s presentation about challenges faced by publishers, I had a comment on the recurrent statement that “LaTeX is complex”: LaTeX is not complex, the scientific content is complex, and LaTeX is still the best tool to capture and represent such complex information.
Two Hermann Zapf fans listening to one who collaborated with Zapf [published with permission].
Calligraphy
For the final session, Narayana Bhattathiri gave us a calligraphy demonstration in four scripts — Latin, Malayalam, Devanagari, and Tamil — which was very well received, judging by the applause. I was deputed to explain what he does, and also to translate for the Q&A session. For the next half hour, he obliged the audience’s requests to write names: their own, a spouse’s or children’s, even a bär, or, as Hàn Thế Thành wanted, Nhà khủng lồ (the house of dinosaurs, the name for the family group).
Bhattathiri signing his calligraphy work for TUG2025.
Nijenhuis was also giving away swag from Xerdi, and I made the difficult choice between a pen and a pen drive, opting for the latter.
The banquet followed, where, in between enjoying delicious food, I found time to meet and speak with even more people and to say goodbyes and ‘tot ziens’.
Later, I had some discussions with Frank about generating MathML using TeX.
Many thanks
A number of people shared their appreciation of how well the conference was organized, which was heartwarming. I would like to thank the many people involved, including the TeX Users Group; the sponsors (who made it fiscally possible to run the event and supported many travels via bursaries); the STMDocs volunteers, who handled many other responsibilities of organizing; the audio/video team (who were very thoughtful in placing the headshot of speakers away from the presentation text); the unobtrusive hotel staff; and all the attendees, especially the speakers.
Thanks particularly to those who stayed at and/or visited the campus, for enjoying the spicy food and delicious fruits from the garden, and for surviving the long techno-socio-eco-political discussions. Boris seems to have taken to heart my request for a copy of The TeXbook signed by Don Knuth — I cannot express the joy and thanks in words!
The TeXbook signed by Don Knuth.
The recorded videos were handed over to Norbert Preining, who graciously agreed to make the individual lectures available after processing. The total file size was ~720 GB, so I connected the external SSD to one of the servers and made it available to a virtual machine via USB passthrough, then mounted it and made it securely available for remote copying.
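The USB-passthrough step can be sketched as a libvirt device definition, assuming the VM is managed by libvirt/virsh; the vendor and product IDs below are placeholders (find the real ones for your SSD enclosure with lsusb):

```xml
<!-- usb-ssd.xml: pass a host USB device through to the guest -->
<hostdev mode='subsystem' type='usb' managed='yes'>
  <source>
    <!-- placeholder IDs; replace with your enclosure's values from lsusb -->
    <vendor id='0x1234'/>
    <product id='0xabcd'/>
  </source>
</hostdev>
```

Attaching it to a running guest would then look like `virsh attach-device <vm-name> usb-ssd.xml --live`, after which the disk can be mounted inside the guest and exposed for remote copying over SSH (e.g. with rsync or sftp).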
Special note of thanks to CVR, and to Karl Berry — who I suspect is actually a Kubernetes cluster running hundreds of containers, each doing a separate task (with apologies to a thousand gnomes), but there are reported sightings of him, so I sent personal thanks via people who have seen him in the flesh — for leading and coordinating the conference organizing. Barbara Beeton and Karl copy-edited our article for the TUGboat conference proceedings, which is gratefully acknowledged. I had a lot of fun and a lot less stress participating in the TUG2025 conference!
Last week in Rijeka we held the Science Festival 2015. This is the (hopefully not unlucky) 13th instance of the festival, which started in 2003. Popular science events were organized in 18 cities in Croatia.
I was invited to give a popular lecture at the University departments’ open day, which is part of the festival. This is the second year in a row that I have been invited to give a popular lecture at the open day. In 2014 I talked about The Perfect Storm in information technology, caused by the fall of the economy during the 2008–2012 Great Recession and the simultaneous rise of low-cost, high-value open-source solutions. Open source completely changed the landscape of information technology in just a few years.
In 2012, the University of Rijeka became an NVIDIA GPU Education Center (back then it was called a CUDA Teaching Center). For non-techies: NVIDIA is a company producing graphical processors (GPUs), the computer chips that draw 3D graphics in games and the effects in modern movies. In the last couple of years, NVIDIA and other manufacturers have allowed the use of GPUs for general computation, so one can use them to do really fast multiplication of large matrices, find paths in graphs, and perform other mathematical operations.
Viewpoints are not detailed reviews of a topic; instead, they present the author’s view on the state of the art of a particular field.
The first of the two articles advocates for open source and open data. It describes the Quantum Chemical Program Exchange (QCPE), which was used in the 1980s and 1990s for exchanging quantum chemistry codes between researchers and is roughly equivalent to the modern-day GitHub. The second article questions the practice of open-source software development, advocating the use and development of proprietary software instead. I will dissect and counter some of the key points from the second article below.
But there is a story from the workshop that somehow remained untold, and I wanted to tell it at some point. One of the attendees, Valérie Vaissier, told me how she used proprietary quantum chemistry software during her Ph.D.; if I recall correctly, it was Gaussian. Eventually, she decided to learn CP2K and made the switch. She liked CP2K better than the proprietary software package because it is available free of charge, reported bugs get fixed more quickly, and the group of developers behind it is very enthusiastic about their work and open to outsiders who want to join the development.
comments? additions? reactions?
As always, comment on mastodon: https://fosstodon.org/@nirik/114999790880878918