If you’re a fan of real-time strategy (RTS) games and use Linux, OpenRA is a must-have. This open-source project breathes new life into classic Westwood titles like Command & Conquer: Red Alert, Tiberian Dawn, and Dune 2000, offering modern enhancements while preserving the nostalgic gameplay.
What Is OpenRA?
OpenRA is a community-driven initiative that reimagines classic RTS games for contemporary platforms. It’s not just a remake; it’s a complete overhaul that introduces:
Modernized Interfaces: Updated sidebars and controls for improved usability.
Enhanced Gameplay Mechanics: Features like fog of war, unit veterancy, and attack-move commands.
Cross-Platform Support: Runs seamlessly on Linux, Windows, macOS, and *BSD systems.
Modding Capabilities: A built-in SDK allows for the creation of custom mods and maps.
These improvements ensure that both veterans and newcomers can enjoy a refined RTS experience.
Latest Features and Updates
The March 2025 release brought significant enhancements:
New Missions: Two additional Red Alert missions with improved objectives.
Persistent Skirmish Options: Settings now carry over between matches.
Balance Tweaks: Refinements for Red Alert and Dune 2000 to ensure fair play.
Asset Support: Compatibility with The Ultimate Collection and C&C Remastered Collection.
Language Support: Progress towards multilingual capabilities.
These updates demonstrate OpenRA’s commitment to evolving and enhancing the RTS genre.
Installation on Linux
Installing OpenRA on Linux is straightforward:
Download AppImages: Visit the official download page to get the AppImage for your desired mod.
Make Executable: Right-click the AppImage, select ‘Properties,’ and enable execution permissions.
Launch: Double-click the AppImage to start the game.
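If you prefer the terminal, the same steps look roughly like this. The filename below is illustrative (it depends on the mod you download), so a placeholder file stands in for the real AppImage here:

```shell
# Terminal equivalent of the GUI steps above.
# A placeholder file stands in for the real downloaded AppImage:
touch OpenRA-Red-Alert-x86_64.AppImage      # stands in for the download
chmod +x OpenRA-Red-Alert-x86_64.AppImage   # same as enabling execute in Properties
test -x OpenRA-Red-Alert-x86_64.AppImage && echo "ready to launch"
# Then start the game with: ./OpenRA-Red-Alert-x86_64.AppImage
```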
These methods ensure that OpenRA integrates smoothly with your system.
Why Choose OpenRA?
OpenRA stands out in the Linux gaming landscape due to:
Community Engagement: Regular updates and active forums foster a vibrant player base.
Modding Scene: A robust SDK encourages creativity and customization.
Cross-Platform Play: Enjoy multiplayer matches with friends on different operating systems.
Educational Value: An in-game encyclopedia provides insights into units and strategies.
These features make OpenRA not just a game but a platform for learning and community interaction.
Other Notable Strategy Games for Linux
If you’re exploring more strategy titles, consider:
0 A.D.: A historical RTS focusing on ancient civilizations.
The Battle for Wesnoth: A turn-based strategy game with a rich fantasy setting.
Freeciv: A free Civilization-inspired game with extensive customization.
Each offers unique gameplay experiences and is well-supported on Linux platforms.
OpenRA exemplifies how classic games can be revitalized for modern audiences. Its blend of nostalgia and innovation makes it a standout choice for strategy enthusiasts on Linux.
We have arrived at the end of May. This year is going by in a blur for me.
So much going on, so much to do.
Datacenter move
The switch week is still scheduled for the week of June 30th.
We made some progress this last week on installs: we got everything set up to
install a bunch of servers. I installed a few and kept building out services.
I was mostly focused on getting things set up so I could install OpenShift
clusters in both prod and staging. That will let us move applications.
I also set up to do RHEL 10 installs and installed a test virthost. There are
still a few things missing from EPEL 10 that we need: nagios clients,
collectd (that's on me) and zabbix clients; otherwise the changes were
reasonably minor. I might try and use RHEL 10 for a few things, but I
don't want to spend a lot of time on it as we don't have much time.
Flock
Flock is next week! If you are looking for me, I will be traveling basically
all Monday and Tuesday, then in Prague from Tuesday to very early Sunday
morning, when I travel back home.
If you are going to flock and want to chat, please feel free to catch me
and/or drop me a note so we can meet up. Happy to talk!
If you aren't going to flock, I'm hoping everything is pretty quiet
infrastructure-wise. I will try and check in on any major issues, but
do try and file tickets on things instead of posting to mailing lists
or matrix.
I'd also like to remind everyone going to flock that we try and not
actually decide anything there. It's for discussion and learning and
putting a human face on your fellow contributors. Make plans, propose things,
definitely; just make sure that after flock you use our usual channels to
discuss and actually decide things. Decisions shouldn't be made offline
where those not present can't provide input.
I'm likely to do blog posts about flock days, but may be delayed until
after the event. There's likely not going to be a regular saturday post
next week from me.
Arm laptop
So, I successfully used this Lenovo slim7x all week, so I guess I am going
to try and use it for my flock travel. Hopefully it will all work out. :)
Issues I have run into in no particular order:
There are a bunch of people working on various things, and all
of that work touches the devicetree file. This makes it a nightmare to
try and have a dtb with working bluetooth, ec, webcam, sound, suspend, etc.
I really hope a bunch of this stuff lands upstream soon. For now I just
have a kernel with bluetooth and ec working and am ignoring sound and webcam.
s2idle sleep "works", but I don't trust it. I suspended the other day when
I was running some errands, and when I got home, the laptop had come on
and was super super hot (it was under a jacket to make it less of a theft target).
So, I might just shutdown most of the time traveling. There's a patch
to fix deep sleep, but see above.
I did wake up one day and it had rebooted, no idea why...
Otherwise everything is working fine and it's pretty nice and zippy.
Battery life is... ok. 7-8 hours. It's not hitting the lowest power states
yet, but that will do I think for my needs for now.
In 1978, a commemorative souvenir was published to celebrate Bahadoor, a celebrated Malayalam movie actor, reaching the milestone of acting in 400 films. Artist Namboodiri designed its cover caricature and the lettering.
Cover of Bahadoor souvenir designed by artist Namboodiri in 1978.
Based on this lettering, KH Hussain designed a traditional script Malayalam Unicode font named 'RIT Bahadur'. I worked on the engineering and production of the font to release it on Bahadoor's 25th death anniversary, 22-May-2025.
RIT Bahadur is a display typeface that comes in Bold and BoldItalic variants. It is licensed under Open Font License and can be freely downloaded from Rachana website.
When installing or managing a Linux system, one of the most debated topics is whether to use a swap partition or a swap file, or whether to use swap at all.
In this post, we'll go back to the origin of swap and explore why swap was needed, how modern systems use (or avoid) it, and the advantages and disadvantages of both swap partitions and swap files.
What is Swap?
Swap is disk space used by the operating system when physical RAM is full. It acts as an extension of RAM to allow the system to offload memory pages that are not immediately needed, keeping critical applications running smoothly.
The Origin of Swap
Swap originated in the early days of computing, when:
RAM was expensive and limited.
Storage (although slower) was more plentiful.
Systems needed a way to "extend" memory to run more processes than RAM allowed.
Unix systems implemented swap space as a way to avoid running out of memory entirely; this idea carried over to Linux.
Why You Might Still Need Swap Today
Even with modern hardware, swap still has roles:
Prevent Out of Memory (OOM) crashes: If your system runs out of RAM, swap provides a safety net.
Hibernation (suspend-to-disk): Requires swap equal to or greater than your RAM size.
Memory balancing: Swap allows the kernel to move idle pages out of RAM, freeing up space for active applications or disk cache.
Low-memory devices: On systems like Raspberry Pi or small VPS servers, swap helps compensate for limited RAM.
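How aggressively the kernel does this memory balancing is tunable via the vm.swappiness sysctl. A quick sketch (the value 10 and the drop-in filename are illustrative choices, not recommendations):

```shell
# vm.swappiness controls how eagerly the kernel moves idle pages to swap.
cat /proc/sys/vm/swappiness      # distribution default is typically 60
# Lower it (as root) if you want the kernel to prefer keeping pages in RAM:
#   sysctl vm.swappiness=10
#   echo 'vm.swappiness = 10' >> /etc/sysctl.d/99-swappiness.conf
```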
Why You Might Not Need Swap
On the other hand:
Lots of RAM: If your system rarely uses all available memory, swap may never be touched.
SSD wear concerns: Excessive swapping can reduce SSD lifespan (though this is largely exaggerated with modern SSDs).
Performance-critical applications: Swap is much slower than RAM. If you’re running performance-sensitive workloads, using swap can be a bottleneck.
Modern alternatives: Features like zram and zswap offer compressed RAM swap spaces, reducing or eliminating the need for disk-based swap.
Swap Partition
Advantages
Stability: Less prone to fragmentation.
Predictable performance: Constant location on disk can be slightly faster on spinning HDDs.
Used by default in many legacy systems.
Can be used even if root filesystem becomes read-only.
Disadvantages
Inflexible size: Hard to resize without repartitioning.
Occupies a dedicated partition: Not space-efficient, especially on SSDs.
Inconvenient for virtualized or cloud instances.
Swap File
Advantages
Flexible: Easy to resize or remove.
No need for a separate partition.
Supported by all modern Linux kernels (since 2.6).
Works well with most filesystems including ext4, XFS, Btrfs (with limitations).
Disadvantages
Can be slower on heavily fragmented file systems.
Doesn't work with hibernation on some setups.
Needs correct permissions and configuration (e.g., no copy-on-write or compression with Btrfs unless configured properly).
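The permission and setup requirements above boil down to a few commands. A minimal sketch, using a throwaway path (a real swap file would normally live at something like /swapfile on the root filesystem):

```shell
# Sketch of creating a swap file; path and size here are examples only.
fallocate -l 64M /tmp/demo-swapfile
chmod 600 /tmp/demo-swapfile       # swap space must not be readable by others
mkswap /tmp/demo-swapfile          # write the swap signature to the file
# Activation and persistence require root:
#   swapon /swapfile
#   echo '/swapfile none swap defaults 0 0' >> /etc/fstab
```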
Performance Considerations
Criteria               Swap Partition     Swap File
Resize Flexibility     Hard               Easy
Setup Complexity       Medium             Easy
Performance (HDD)      Slightly better    Slightly worse
Performance (SSD)      Similar            Similar
Works with Hibernate   Yes                Depends on setup
Dynamic Management     Manual             Resizable on-the-fly
When to Use What?
Use a Swap Partition if:
You're setting up a traditional desktop or dual-boot Linux system.
You plan to use hibernation reliably.
You prefer separating system components into strict partitions.
Use a Swap File if:
You're on a modern system with lots of RAM and an SSD.
You want to add swap easily after install.
You're using cloud or VPS environments with flexible resources.
You don't plan to use hibernation.
Bonus: zram and zswap
Modern Linux kernels support zram and zswap, which compress memory pages instead of (or before) writing them to disk:
zram creates a compressed RAM-based block device as swap.
zswap is a compressed cache for swap pages before writing to disk.
These are great for low-memory systems like Raspberry Pi or embedded devices.
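On Fedora and several other distributions, zram is typically configured through systemd's zram-generator. A minimal sketch of its config file (the values shown are illustrative):

```
# /etc/systemd/zram-generator.conf
[zram0]
# half of RAM, capped at 4096 MiB
zram-size = min(ram / 2, 4096)
compression-algorithm = zstd
```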
Conclusion
Swap is not dead; it has evolved.
Whether you choose a swap partition or a swap file depends on your needs:
Flexibility? Go for swap file.
Predictability and hibernation? Use a swap partition.
Want better performance with low RAM? Consider zram.
As always with Linux, the choice is yours, and that's the power of open systems.
TL;DR
Swap partition: Reliable, but rigid.
Swap file: Flexible and modern.
No swap: Fine if you have lots of RAM and don't use hibernation.
This is a weekly report from the I&R (Infrastructure & Release Engineering) Team. We provide both an infographic and a text version of the weekly report. If you just want a quick look at what we did, see the infographic. If you are interested in more in-depth details, look below the infographic.
The purpose of this team is to take care of day-to-day business regarding CentOS and Fedora Infrastructure and Fedora release engineering work. It's responsible for services running in Fedora and CentOS infrastructure and for preparing things for the new Fedora release (mirrors, mass branching, new namespaces etc.).
List of planned/in-progress issues
How I manage SSL certificates for my homelab with Letsencrypt and Ansible
I have a fairly sizable homelab, consisting of some Raspberry Pi 4s, some Intel Nucs, a Synology NAS with a VM running on it and a number of free VMs in Oracle cloud. All these machines run RHEL 9 or RHEL 10 and all of them are managed from an instance of Red Hat Ansible Automation Platform that runs on the VM on my NAS.
On most of these machines, I run podman containers behind caddy (which takes care of any SSL certificate management automatically). But for some services, I really needed an automated way of managing SSL certificates that didn't involve Caddy. An example for this is cockpit, which I use on some occasions. I hate those "your connection is not secure messages", so I needed real SSL certificates that my whole network would trust without the need of me having to load custom CA certificates in every single device.
I also use this method for securing my internal Postfix relay, and (in a slightly different way) for setting up certificates for containers running on my NAS.
So. Ansible to the rescue. It turns out, there is a surprisingly easy way to do this with Ansible. I found some code floating around the internet. To be honest, I forgot where I got it, it was probably a GitHub gist, but I really don't remember: I wrote this playbook months and months ago - I would love to attribute credit for this, but I simply can't :(
The point of the playbook is that it takes a list of certificates that should exist on a machine, and it makes sure those certificates exist on the target machine. Because this is for machines that are not connected to the internet, it's not possible to use the standard HTTP verification. Instead, it creates temporary DNS records to verify my ownership of the domain.
Let's break down how the playbook works. I'll link to the full playbook at the end.
Keep in mind that all tasks below are meant to be run as a playbook looping over a list of dictionaries that are structured as follows:
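The snippet showing that structure did not survive here, but judging from the fields the tasks reference, each entry presumably looks something like this (names, domain, and paths are illustrative, not from the original post):

```yaml
le_certificates:
  - common_name: host1.example.com   # DNS name the certificate is issued for
    domain: .example.com             # note the leading dot (see the closing remarks)
    email: admin@example.com         # Letsencrypt account email
    basedir: /etc/pki/letsencrypt    # where keys, CSRs and certs are stored
```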
First, we make sure a directory exists to store the certificate. We check for the existence of a Letsencrypt account key and if that does not exist, we create it and copy it over to the client:
- name: Create directory to store certificate information
  ansible.builtin.file:
    path: "{{ item.basedir }}"
    state: directory
    mode: "0710"
    owner: "{{ cert_directory_user }}"
    group: "{{ cert_directory_group }}"

- name: Check if account private key exists
  ansible.builtin.stat:
    path: "{{ item.basedir }}/account_{{ item.common_name }}.key"
  register: account_key

- name: Generate and copy over the acme account private key
  when: not account_key.stat.exists | bool
  block:
    - name: Generate private account key for letsencrypt
      community.crypto.openssl_privatekey:
        path: /tmp/account_{{ item.common_name }}.key
        type: RSA
      delegate_to: localhost
      become: false
      when: not account_key.stat.exists | bool

    - name: Copy over private account key to client
      ansible.builtin.copy:
        src: /tmp/account_{{ item.common_name }}.key
        dest: "{{ item.basedir }}/account_{{ item.common_name }}.key"
        mode: "0640"
        owner: root
        group: root
The next step is to check for the existence of a private key for the domain we are handling, and create it and copy it to the client if it doesn't exist:
- name: Check if certificate private key exists
  ansible.builtin.stat:
    path: "{{ item.basedir }}/{{ item.common_name }}.key"
  register: cert_key

- name: Generate and copy over the acme cert private key
  when: not cert_key.stat.exists | bool
  block:
    - name: Generate private acme key for letsencrypt
      community.crypto.openssl_privatekey:
        path: /tmp/{{ item.common_name }}.key
        type: RSA
      delegate_to: localhost
      become: false
      when: not cert_key.stat.exists | bool

    - name: Copy over private acme key to client
      ansible.builtin.copy:
        src: /tmp/{{ item.common_name }}.key
        dest: "{{ item.basedir }}/{{ item.common_name }}.key"
        mode: "0640"
        owner: root
        group: root
Then, we create a certificate signing request (CSR) based on the private key, and copy that to the client:
- name: Generate and copy over the csr
  block:
    - name: Grab the private key from the host
      ansible.builtin.slurp:
        src: "{{ item.basedir }}/{{ item.common_name }}.key"
      register: remote_cert_key

    - name: Generate the csr
      community.crypto.openssl_csr:
        path: /tmp/{{ item.common_name }}.csr
        privatekey_content: "{{ remote_cert_key['content'] | b64decode }}"
        common_name: "{{ item.common_name }}"
      delegate_to: localhost
      become: false

    - name: Copy over csr to client
      ansible.builtin.copy:
        src: /tmp/{{ item.common_name }}.csr
        dest: "{{ item.basedir }}/{{ item.common_name }}.csr"
        mode: "0640"
        owner: root
        group: root
Now the slightly more complicated stuff starts. This next task contacts the Letsencrypt API and requests a certificate. It specifies a dns-01 challenge, which means that Letsencrypt will respond with a challenge that we can validate by creating a special DNS record. Everything we need is in the response, which we'll store as cert_challenge.
- name: Create a challenge using an account key file.
  community.crypto.acme_certificate:
    account_key_src: "{{ item.basedir }}/account_{{ item.common_name }}.key"
    account_email: "{{ item.email }}"
    src: "{{ item.basedir }}/{{ item.common_name }}.csr"
    cert: "{{ item.basedir }}/{{ item.common_name }}.crt"
    challenge: dns-01
    acme_version: 2
    acme_directory: "{{ acme_dir }}"
    # Renew if the certificate is at least 30 days old
    remaining_days: 60
    terms_agreed: true
  register: cert_challenge
Now, I'll be using DigitalOcean's API to create the temporary DNS records, but you can use whatever DNS service you want, as long as it's publicly available for Letsencrypt to query. The following block will only run if two things are true:
1. the cert_challenge is changed, which is only the case if we need to renew the certificate. Letsencrypt certificates are valid for 90 days only. We specified remaining_days: 60, so if we run this playbook 30 or more days after its previous run, cert_challenge will be changed and the certificate will be renewed.
2. item.common_name (which is a variable that holds the requested DNS record) is part of the challenge_data structure in cert_challenge. This is to verify we actually got the correct data from the Letsencrypt API, and not just some metadata change.
The block looks like this:
- name: Actual certificate creation
  when: cert_challenge is changed and item.common_name in cert_challenge.challenge_data
  block:
    - name: Create DNS challenge record on DO
      community.digitalocean.digital_ocean_domain_record:
        state: present
        oauth_token: "{{ do_api_token }}"
        domain: "{{ item.domain[1:] }}"
        type: TXT
        ttl: 60
        name: "{{ cert_challenge.challenge_data[item.common_name]['dns-01'].record | replace(item.domain, '') }}"
        data: "{{ cert_challenge.challenge_data[item.common_name]['dns-01'].resource_value }}"
      delegate_to: localhost
      become: false

    - name: Let the challenge be validated and retrieve the cert and intermediate certificate
      community.crypto.acme_certificate:
        account_key_src: "{{ item.basedir }}/account_{{ item.common_name }}.key"
        account_email: "{{ item.email }}"
        src: "{{ item.basedir }}/{{ item.common_name }}.csr"
        cert: "{{ item.basedir }}/{{ item.common_name }}.crt"
        fullchain: "{{ item.basedir }}/{{ item.domain[1:] }}-fullchain.crt"
        chain: "{{ item.basedir }}/{{ item.domain[1:] }}-intermediate.crt"
        challenge: dns-01
        acme_version: 2
        acme_directory: "{{ acme_dir }}"
        remaining_days: 60
        terms_agreed: true
        data: "{{ cert_challenge }}"

    - name: Remove DNS challenge record on DO
      community.digitalocean.digital_ocean_domain_record:
        state: absent
        oauth_token: "{{ do_api_token }}"
        domain: "{{ item.domain[1:] }}"
        type: TXT
        name: "{{ cert_challenge.challenge_data[item.common_name]['dns-01'].record | replace(item.domain, '') }}"
        data: "{{ cert_challenge.challenge_data[item.common_name]['dns-01'].resource_value }}"
      delegate_to: localhost
      become: false
You'll notice that the TTL for this record is intentionally very low, because we don't need it for anything other than validating the challenge, and we'll remove it after verification. If you do not use DigitalOcean as a DNS provider, the first task in the block above will look different, obviously.
The second task in the block reruns the acme_certificate task, and this time we pass the contents of the cert_challenge variable as the data parameter. Upon successful validation, we can retrieve the new certificate, full chain and intermediate chain and store them on disk. Basically, at this point, we are done without having to use certbot :)
Of course, in the third task, we clean up the temporary DNS record again.
I have a slightly different playbook to manage certificates on my NAS, and some additional tasks that configure Postfix to use this certificate, too, but those are probably useful for me only.
TL;DR: if you want to create a (set of) certificate(s) for a (group of) machine(s), running this playbook from AAP every month makes that really easy.
The main playbook looks like this:
---
# file: letsencrypt.yml
- name: Configure letsencrypt certificates
  hosts: rhel_machines
  gather_facts: false
  become: true
  vars:
    debug: false
    acme_dir: https://acme-v02.api.letsencrypt.org/directory
  pre_tasks:
    - name: Gather facts subset
      ansible.builtin.setup:
        gather_subset:
          - "!all"
          - default_ipv4
          - default_ipv6
  tasks:
    - name: Include letsencrypt tasks for each certificate
      ansible.builtin.include_tasks: letsencrypt_tasks.yml
      loop: "{{ le_certificates }}"
The letsencrypt_tasks.yml file is simply all of the task blocks shown above, combined in order into a single file.
And finally, as part of host_vars, for each of my hosts a letsencrypt.yml file exists containing:
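The host_vars snippet was also lost here; based on the variables the playbook consumes, it presumably defines the per-host le_certificates list, along these lines (values are illustrative):

```yaml
le_certificates:
  - common_name: nas.example.com
    domain: .example.com
    email: admin@example.com
    basedir: /etc/pki/letsencrypt
```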
To be fair, there could probably be a lot of optimization done in that playbook, and I can't remember why I did it with .example.com (with the leading dot) and then use item.domain[1:] in so many places. But, I'm a lazy IT person, and I'm not fixing what isn't inherently broken :)
Here’s another update on the upcoming fedoraproject Datacenter move.
Summary: there have been some delays; the current target switch week to the new datacenter is now the week of 2025-06-30 (formerly 2025-05-16).
The plans we mentioned last month are all still in our plan, just moved out two weeks.
Why the delay? Well, there were some delays in getting networking set up in the new datacenter, but that's now been overcome and we are back on track, just with a delay.
Here’s a rundown of the current plan:
We now have access to all the new hardware, and its firmware has been updated and configured.
We have a small number of servers installed, and this week we are installing the OS on more servers as well as building out VMs for various services.
Next week is flock, so we will probably not make too much progress, but we might do some more installs/configuration if time permits.
The week after flock we hope to get openshift clusters all setup and configured.
The week after that we will start moving some applications that aren’t closely tied to the old datacenter. If they don’t have storage or databases, they are good candidates to move.
The next week will be for any other applications we can move.
The week before the switch will be spent getting things ready (making sure data is synced, plans are reviewed, etc.).
Finally, the switch week (week of June 30th): Fedora Project users should not notice much during this change. Mirrorlists, mirrors, docs, and other user-facing applications should continue working as always. Update pushes may be delayed a few days while the switch happens. Our goal is to keep any end-user impact to a minimum.
For Fedora contributors: on Monday and Tuesday we plan to "move" the bulk of applications and services. Contributors should avoid doing much on those days, as services may be moving around or syncing in various ways. Starting Wednesday, we will make sure everything is switched and fix problems or issues as they are found. Thursday and Friday will continue stabilization work.
The week after the switch, some newer hardware in our old datacenter will be shipped down to the new one. This hardware will be added to increase capacity (more builders, more openqa workers, etc).
This move should get us in a nicer place with faster/newer/better hardware.
I often see leaders in open source projects not wanting to promote their own work in the interest of fairness. That's a noble idea, but it's unnecessary. It's okay to be partial to, and to promote, your own work, so long as you follow the community's process.
Real world examples
What does this look like in practice? You may be a member of a steering committee that approves feature proposals. You didn't earn that spot just because you're good at meetings; you most likely earned it on sustained technical and interpersonal merit. This, in turn, means you're probably still writing new feature proposals sometimes. That doesn't mean you have to recuse yourself when one comes up for a vote. Everyone knows you wrote it, and you're a member of the committee, not an independent judge presiding over a criminal trial.
Or you might be leading a project and have a tool that would help the project meet its goals. You can propose that the project adopt your tool. Again, it’s going to be clear that you wrote it, so go ahead and make the proposal.
The need for process
As I alluded to in the opening paragraph, your community needs a process for these sorts of proposals. It doesn't have to be elaborate. Something as simple as "a majority of the steering committee must approve the proposal" counts as a process. Following the process is what keeps the decision fair, even when you have a predisposition to like what you're proposing. If your proposal gets the same treatment as everyone else's, that's all that matters.
When to recuse yourself
Of course, there are times when it's appropriate to recuse yourself. If your proposal is particularly contentious (let's say a roughly 50-50 split, not a 75-25 split in favor), it's best that you're not the deciding vote. If you can't amend your proposal in such a way that you win some more support, then it may be better not to vote.
If the community policy and processes require the author of a proposal to recuse themselves, then that's obviously a good reason to do so. "But Ben said I shouldn't!" won't win you any points, even if the policy is misguided (and it may or may not be!).
Also, if the context is a pull request, you should not vote to approve it to get it over the approval requirement threshold. That is a separate case, and one that most forges will prohibit anyway.
Authselect is a utility tool that manages PAM configurations using profiles. Starting with Fedora 36, Authselect became a hard requirement for configuring PAM. In this article, you will learn how to configure PAM using Authselect.
Introduction
Unauthorized access is a critical risk factor in computer security. Cybercriminals engage in data theft, cyber-jacking, crypto-jacking, phishing, and ransomware attacks once they gain unauthorized access. A common vulnerability exploited for unauthorized access is poorly configured authentication. Pluggable Authentication Modules (PAM) play a critical role in mitigating this vulnerability by acting as a middleware layer between your application and authentication mechanisms. For instance, you can use PAM to configure a server so that any login attempt after 6pm requires a token. PAM does not carry out authentication itself; instead, it forwards requests to the authentication module you specified in its configuration file.
This article will cover the following three topics:
PAM
Authselect, and authselect profiles
How to configure PAM
Prerequisites:
Fedora, CentOS or RHEL server. This guide uses Fedora 41 Server edition; the steps are interchangeable across Fedora, CentOS, and RHEL.
A user account with sudo privileges on the server.
Command line familiarity.
What is PAM?
Pluggable Authentication Modules (PAM) provide a modular framework for authenticating users, systems, and applications on Fedora Linux. Before PAM, file-based authentication was the prevalent authentication scheme. File-based authentication stores usernames, passwords, IDs, names, and other optional information in one file. This was simple, and everyone was happy until security requirements changed or new authentication mechanisms were adopted.
Here's an excerpt from Red Hat's PAM documentation:
Pluggable Authentication Modules (PAMs) provide a centralized authentication mechanism which system applications can use to relay authentication to a centrally configured framework. PAM is pluggable because there is a PAM module for different types of authentication sources (such as Kerberos, SSSD, NIS, or the local file system). Different authentication sources can be prioritized.
PAM acts as middleware between applications and authentication modules. It receives an authentication request, consults its configuration files, and forwards the request to the appropriate authentication module. If any module determines that the credentials do not meet the required configuration, PAM denies the request and prevents unauthorized access. PAM guarantees that every request is consistently validated before access is granted or denied.
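As an illustrative sketch (not taken from any particular system), a PAM service file under /etc/pam.d/ stacks modules per management group; the module choices and control flags below reflect a typical local-files setup:

# /etc/pam.d/example-service (illustrative)
auth      required    pam_env.so
auth      sufficient  pam_unix.so try_first_pass
auth      required    pam_deny.so
account   required    pam_unix.so
password  required    pam_unix.so sha512 shadow
session   required    pam_unix.so

Each line names a management group (auth, account, password, session), a control flag, and the module that handles the request; PAM walks the stack top to bottom.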
Why PAM?
Support for various authentication schemes using pluggable modules. These may include two-factor authentication (2FA), password authentication (local files, LDAP), tokens (OAuth, Kerberos), biometrics (fingerprint, facial recognition), hardware keys (YubiKey), and much more.
Support for stacked authentication. PAM can combine one or more authentication schemes.
Flexibility to support new or future authentication technology with minimal friction.
High performance, and stability under significant load.
Support for granular/custom configuration across users and applications. For example, PAM can deny access to an application between 5pm and 5am if an authenticated user does not hold a specific role.
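A time-window rule like the one above is typically expressed with the pam_time.so module plus /etc/security/time.conf. A minimal sketch, assuming a hypothetical service named myapp:

# /etc/pam.d/myapp: add pam_time to the account stack
account  required  pam_time.so

# /etc/security/time.conf: allow myapp logins only between 05:00 and 17:00
# format: services;ttys;users;times (Al = all days)
myapp;*;*;Al0500-1700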
Authselect replaces Authconfig
Authselect was introduced in Fedora 28 to replace Authconfig. Authconfig was removed entirely by Fedora 35, and in Fedora 36 Authselect became a hard dependency, making it the required tool for configuring PAM in subsequent Fedora versions.
This tool does not configure your applications (LDAP, AD, SSH); it is a configuration management tool designed to set up and maintain PAM. Authselect selects and applies pre-tested authentication profiles that determine which PAM modules are active and how they are configured.
Here's an excerpt from the Fedora 27 changeset which announced Authselect as a replacement for Authconfig:
Authselect is a tool to select system authentication and identity sources from a list of supported profiles.
It is designed to be a replacement for authconfig but it takes a different approach to configure the system. Instead of letting the administrator build the pam stack with a tool (which may potentially end up with a broken configuration), it would ship several tested stacks (profiles) that solve a use-case and are well tested and supported.
From the same changeset, the authors report that Authconfig was error prone, hard to maintain due to technical debt, caused system regressions after updates, and was hard to test.
Authconfig does its best to always generate a valid pam stack but it is not possible to test every combination of options and identity and authentication daemons configuration. It is also quite regression prone since those daemons are in active development on their own and independent on authconfig. When a new feature is implemented in an authentication daemon it takes some time to propagate this feature into authconfig. It also may require a drastic change to the pam stack which may easily introduce regressions since it is hard to test properly with so many possible different setups.
Authselect profiles and what they do
As mentioned above, Authselect manages PAM configuration using ready-made profiles. A profile is a set of features and functions that describe how the resulting system configuration will look. One selects a profile and Authselect applies the configuration to PAM.
In Fedora, Authselect ships with four profiles:
$ authselect list
- local Local users only
- nis Enable NIS for system authentication
- sssd Enable SSSD for system authentication (also for local users only)
- winbind Enable winbind for system authentication
For descriptions of each profile, visit Authselect's readme page for profiles, and the wiki, available on GitHub.
You can view the current profile with:
$ authselect current
Profile ID: local
Enabled features:
- with-silent-lastlog
- with-fingerprint
- with-mdns4
You can change the current profile with:
$ sudo authselect select local
Profile "local" was selected.
Scenario: You have noticed a high number of failed login attempts on your Fedora Linux server. As a preemptive action you want to configure PAM for lockouts: any user with 3 failed login attempts gets locked out of the server for 24 hours.
The pam_faillock.so module maintains a list of failed authentication attempts per user during a specified interval and locks the account in case there were more than the stipulated consecutive failed authentications.
The Authselect profile “with-faillock” feature handles failed authentication lockouts.
Step 1. Check whether the current profile on the server has with-faillock enabled:
$ authselect current
Profile ID: local
Enabled features:
- with-silent-lastlog
- with-mdns4
- with-fingerprint
As you can see, with-faillock is not enabled in this profile.
Step 2. Enable the with-faillock feature:
$ sudo authselect enable-feature with-faillock
Authselect has now configured PAM to support lockouts. Check the /etc/pam.d/system-auth and /etc/pam.d/password-auth files to see that they were updated by Authselect.
From the vimdiff image below you can see the changes authselect added to /etc/pam.d/system-auth.
Step 3. Check if the current configuration is valid.
$ authselect check
Current configuration is valid.
Step 4. Apply changes
$ sudo authselect apply-changes
Changes were successfully applied.
Step 5. Configure faillock
$ vi /etc/security/faillock.conf
Uncomment the following parameters; deny=3 locks an account after three consecutive failures, and unlock_time=86400 keeps it locked for 24 hours (86,400 seconds):
silent
audit
deny=3
unlock_time=86400
dir = /var/run/faillock
Step 6. Test PAM configuration
6.1 Attempt to log in several consecutive times with the wrong password, to trigger a lockout
6.2 Check failure records
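The faillock command-line tool can be used here to inspect and clear the records; the username below is a placeholder:

$ faillock --user bob            # show failure records for user bob
$ sudo faillock --user bob --reset   # clear the records and unlock the account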
Important: As a best practice, always back up your current Authselect profile before making any change.
Back up the current Authselect profile as follows:
$ authselect select local -b
Backup stored at /var/lib/authselect/backups/2025-05-23-22-41-33.UyM1lJ
Profile "local" was selected.
To list backed-up profiles:
$ authselect backup-list
2025-05-22-15-17-41.fe92T8 (created at Thu 22 May 2025 11:17:41 AM EDT)
2025-05-23-22-41-33.UyM1lJ (created at Fri 23 May 2025 06:41:33 PM EDT)
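If an applied change misbehaves, a stored backup can be rolled back with Authselect's backup-restore command, using a name from backup-list; for example, with the backup created above:

$ sudo authselect backup-restore 2025-05-23-22-41-33.UyM1lJ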
On Thursday, May 29 (yes, two days away!) we will host the F42 release party on Matrix.
We would love for you to join us to celebrate all things F42 in a private event room from 1300 – 1600 UTC. You will hear from our new FPL Jef Spaleta, learn about the design process for each release, and hear about some of the great new features in Fedora Workstation, Fedora KDE, and our installer. There's also a git forge update, a mentor summit update, and lots more.
You can see the schedule on the event page wiki, and how to attend is simple: please register for the event, for free, in advance. Using your Matrix ID, you will receive an invitation to a private event room where we will be streaming presentations via ReStream.
Events will be a mixture of live and pre-recorded. All will be available after the event on the Fedora YouTube channel.
Last year, syslog-ng 4.8.0 improved the wildcard-file() source on FreeBSD and MacOS. Version 4.9.0 will do the same for Linux by using inotify for file and directory monitoring, resulting in faster performance while using significantly less resources. This blog is a call for testing the new wildcard-file() source options before release.
Oh look, it's Saturday already. Another busy week here with lots
going on, so without further ado, let's discuss some things!
Datacenter Move
Due to delays in getting network to the new servers and various logistics,
we are going to be moving the switcharoo week to the week of June 30th.
It was set for June 16th, but that's just too close timing-wise, so
we are moving it out two weeks. Look for a community blog post
and devel-announce post next week on this. I realize that means
that Friday is July 4th (a holiday in the US), but we hope to do
the bulk of switching things on Monday and Tuesday of that week,
and leave only fixing things for Wednesday and Thursday.
We did finally get network for the new servers last week.
Many thanks to all the networking folks who worked hard to get
things up and running. With some network I was able to start
bootstrapping infrastructure up. We now have a bastion host,
a dhcp/tftp host and a dns server all up and managed via our
existing ansible control host like all the rest of our hosts.
Friday was a recharge day at Red Hat, and Monday is the US
Memorial Day holiday, but I should be back at deploying things
on Tuesday. Hopefully next week I will get an initial proxy setup
and can then look at doing openshift cluster installs.
Flock
The week after next is Flock! It came up so fast.
I do plan on being there (I get into Prague late morning
on the 3rd). Hope to see many folks there; happy to talk about most
anything. I'm really looking forward to the good energy that comes
from being around so many awesome open source folks!
Of course that means I may well not be online as much as normal
(when traveling, in talks, etc), so please plan accordingly if
you need my help with something.
Laptop
So, I got this Lenovo Slim 7x Snapdragon X laptop quite a long time
ago, and finally I decided I should see if I can use it day to day,
and if so, use it for the Flock trip, so I don't have to bring my
frame.work laptop.
So, I hacked up an aarch64 rawhide live with a dtb for it and was able
to do an encrypted install and then upgrade the kernel. I did have
to downgrade linux-firmware for the ath12k firmware bug, but that's
fine.
So far it's looking tenable (I am typing this blog post on it now).
I did have to add another kernel patch to get bluetooth working, but
it seems to be fine with the patch. The OLED screen on this thing is
wonderful. Battery life seems ok, although it's hard to tell without
a 'real life' test.
Known things not working: camera (there are patches, but it's really
early, so I will wait for them), and sound (there are also patches, but it
has the same issue the mac laptops had with there being no safeguards,
so you can easily destroy your speakers if you adjust too loud).
Amusing things: no discord flatpak available (the one on flathub is
x86_64 only), but the web version works fine (although amusingly
it tells you to install the app, which doesn't exist).
Also, no chrome, but there is chromium, which should be fine for
sites that firefox doesn't work with.
I'll see if I can get through the weekend and upcoming week and decide
what laptop I will take traveling.
I'm happy to share that 3 major IPU6 camera related kernel changes from linux-next have been backported to Fedora and have been available for about a week now in the Fedora kernel-6.14.6-300.fc42 (or later) package:
Support for the OV02C10 camera sensor; this should, for example, enable the camera to work out of the box on all Dell XPS 9x40 models.
Support for the OV02E10 camera sensor; this should, for example, enable the camera to work out of the box on Dell Precision 5690 laptops. When combined with item 3 below and the USBIO drivers from rpmfusion, this should also enable the camera on other laptop models, such as the Dell Latitude 7450.
Support for the special handshake GPIO used to turn on the sensor and allow sensor i2c access on various new laptop models using the Lattice MIPI aggregator FPGA / USBIO chip.
If you want to give this a test using the libcamera-softwareISP FOSS stack, run the following commands:
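As one hedged example (the package and tool names are assumptions: on Fedora, libcamera-tools and libcamera-qcam provide the cam and qcam utilities), you can check that libcamera detects the sensor and preview the stream:

$ cam --list    # list cameras detected by libcamera
$ qcam          # live preview through the libcamera software ISP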
Note that the colors being washed out and/or the image being a bit over- or under-exposed is expected behavior at the moment; this is due to the software ISP needing more work to improve the image quality. If your camera still does not work after these changes and you've not filed a bug for this camera already, please file a bug following these instructions.
See my previous blogpost on how to also test Intel's proprietary stack from rpmfusion if you also have that installed.
Since the set of rpmfusion intel-ipu6-kmod + ipu6-camera-* package updates from last February the FOSS libcamera-softwareISP and Intel's proprietary stack using the Intel hardware ISP can now co-exist on Fedora systems, sharing the mainline IPU6-CSI2 receiver driver.
Because of this it is no longer necessary to blacklist the kernel modules from the other stack. Unfortunately, when the rpmfusion packages first generated "/etc/modprobe.d/ipu6-driver-select.conf" for blacklisting, this file was not marked as "%ghost" in the specfile, and now with the February ipu6-camera-hal the file has been removed from the package. This means that if you've jumped from an old ipu6-camera-hal, where the file was not marked as "%ghost", directly to the latest, you may still have the modprobe.d conf file around causing issues. To fix this, remove the stale file:
$ sudo rm /etc/modprobe.d/ipu6-driver-select.conf
and then reboot. I'll also add this as post-install script to the ipu6-camera-hal packages, to fix systems being broken because of this.
If you want the rpmfusion packages because your system needs the USBIO drivers, but you do not want the proprietary stack, you can run the following command to disable the proprietary stack:
sudo ipu6-driver-select foss
Or if you have disabled the proprietary stack in the past and want to give it a try, run:
sudo ipu6-driver-select proprietary
To test switching between the 2 stacks in Firefox, go to Mozilla's webrtc test page and click on the "Camera" button. You should now get a camera permission dialog with 2 cameras: "Built in Front Camera" and "Intel MIPI Camera (V4L2)". The "Built in Front Camera" is the FOSS stack and the "Intel MIPI Camera (V4L2)" is the proprietary stack. Note the FOSS stack will show a strongly zoomed-in (cropped) image; this is caused by the GUM test page, and in e.g. google-meet this will not be the case.
Unfortunately switching between the 2 cameras in jitsi does not work well. The jitsi camera selector tries to show a preview of both cameras at the same time, and while one stack is streaming the other stack cannot access the camera. You should be able to switch by: 1. selecting the camera you want, 2. closing the jitsi tab, 3. waiting a few seconds for the camera to stop streaming, and 4. opening jitsi in a new tab.
Note I already mentioned most of this in my previous blog post but it was a bit buried there.
Self-hosting applications has become increasingly popular among developers, tech enthusiasts, and homelabbers. However, securely exposing internal services to the internet is often a complicated task. It involves:
Opening firewall ports
Dealing with dynamic IPs
Managing TLS certificates
Handling reverse proxies
Setting up access control
This is where DockFlare comes in.
DockFlare is a lightweight, self-hosted Cloudflare Tunnel automation tool for Docker users. It simplifies the process of publishing your internal Docker services to the public internet through Cloudflare Tunnels, while providing optional Zero Trust security, DNS record automation, and a sleek web interface for real-time management.
Objectives of DockFlare
DockFlare was created to solve three key problems:
Simplicity: Configure secure public access to your Docker containers using just labels; no reverse proxy, SSL setup, or manual DNS records needed.
Security: Protect your services behind Cloudflare’s Zero Trust Access, supporting identity-based authentication (Google, GitHub, OTP, and more).
Automation: Automatically create tunnels, subdomains, and security policies based on your Docker service metadata. No scripting. No re-deploys.
Why Use DockFlare?
Here's how DockFlare benefits its users:
Quick Setup: Set up secure tunnels and expose services in seconds with Docker labels.
Zero Trust Security: Enforce authentication for any service using Cloudflare Access.
No Public IP Required: No need to forward ports or expose your home IP; perfect for CG-NAT and mobile ISPs.
Safe by Default: TLS encryption, no open ports, and access rules built-in.
User-Friendly UI: Visualize tunnels, view logs, and manage configurations in a web dashboard.
DevOps Ready: Works seamlessly in CI/CD pipelines or home labs.
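To illustrate the label-driven workflow, a compose service might be annotated roughly like this; the label keys below are hypothetical placeholders, so consult DockFlare's documentation for the actual schema:

services:
  whoami:
    image: traefik/whoami
    labels:
      # hypothetical label names, for illustration only
      dockflare.enable: "true"
      dockflare.hostname: "whoami.example.com"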
How to Install DockFlare
Requirements
Docker and Docker Compose
A Cloudflare account
A domain connected to Cloudflare
A Cloudflare API Token with:
Zone DNS edit
Zero Trust policy management
Tunnel management
Step 1: Create Your Project Directory
mkdir dockflare && cd dockflare
Step 2: Create .env File
Create a file named .env with the following contents:
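As a hypothetical sketch of the kind of values such a file holds (the variable names are illustrative placeholders; check DockFlare's documentation for the exact ones), a Cloudflare-backed tool typically needs an API token, an account identifier, and a target domain:

# hypothetical variable names, for illustration only
CF_API_TOKEN=your-cloudflare-api-token
CF_ACCOUNT_ID=your-cloudflare-account-id
TUNNEL_DOMAIN=example.com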
DockFlare is a game-changer for developers, sysadmins, and homelabbers who want an easy, secure, and automated way to expose their applications to the web. With support for Cloudflare Tunnels, Zero Trust Access, DNS automation, and a clean UI, it's the only tool you'll need to publish your services safely.
Release Candidate versions are available in the testing repository for Fedora and Enterprise Linux (RHEL / CentOS / Alma / Rocky and other clones) to allow more people to test them. They are available as Software Collections, for parallel installation, the perfect solution for such tests, and as base packages.
RPMs of PHP version 8.4.8RC1 are available
as base packages in the remi-modular-test repository for Fedora 40-42 and Enterprise Linux ≥ 8
as SCL in remi-test repository
RPMs of PHP version 8.3.22RC1 are available
as base packages in the remi-modular-test repository for Fedora 40-42 and Enterprise Linux ≥ 8
as SCL in remi-test repository
ℹ️ The packages are available for x86_64 and aarch64.
ℹ️ PHP version 8.2 is now in security mode only, so no more RC will be released.
ℹ️ Installation: follow the wizard instructions.
Join us on Thursday, May 29 2025 for the Fedora 42 release party! Free registration is now open for the event, and you can find an early draft of the event schedule on the wiki page. We will be hosting the event in a dedicated Matrix room (registration is required to gain access) and will stream a mix of live and pre-recorded short sessions via Restream from 1300 UTC to 1600 UTC.
Read on for more information, although that intro might cover most of it.
Why a change in format, kind of?
For F40, we trialed a two-day event of live presentations, streamed via YouTube into a matrix room over Friday and Saturday. This was fine, but probably a little too long to ask people to participate in full.
For F41, we trialed ‘watch parties’ across three time zones: APAC, EMEA, and NA/LATAM. This was ok, but had a lot of production issues behind the scenes, and unfortunately some in front of the scenes too!
So, for F42, we decided to run a poll to collect some feedback on what kind of event and content people actually wanted. The majority prefer live presentations, but time zones are unfortunately still a thing, so we have decided to do a mix of live and pre-recorded sessions via Restream in a dedicated matrix room. Each presentation will be 10-15 minutes in length, a perfect amount to take in the highlights, and the event itself will be three hours in duration. Let's see how this remix goes!
What you can expect
10-15 minute sessions that highlight all the good stuff that went into our Editions this release, plus hear about our new spin – COSMIC! You can also learn a little more about the Fedora Linux design process and get an update on the git forge work that’s in progress too, plus much more!
Real Time Matrix Chat – Attendees are welcome to chat in our event room, share thoughts, ask questions, and connect with others who are passionate about Fedora. Some speakers will be available for questions, but if you have any specific ones, you can always follow up with them outside of the event.
Opening Remarks from Fedora Leadership, updating you on very exciting things happening in the project, plus an introduction from our new FPL – Jef Spaleta!
Why a registration for an event on Matrix?
Right now we have no tooling to get some metrics for the event with on matrix. Plus we want to avoid spammers as much as we possibly can. That’s why we are using a free registration system that will send an invitation to your email in advance of the event. We recommend registering in advance to avoid any last minute issues, but just in case they are unavoidable anyway, we will have someone on hand to provide the room invite to attendees who have not received it.
As always, sessions will be available on the Fedora YouTube channel after the event for folks who want to re-watch or catch up on the talks. Also a big thank you to our CommOps team for helping put together this release party! We hope you enjoy the event, and look forward to celebrating the F42 release with you all on Thursday, May 29 from 1300 – 1600 UTC on Matrix!
One Identity Cloud PAM is one of the latest security products by One Identity. It provides asset management as well as secure and monitored remote access for One Identity Cloud users to hosts on their local network. Last year, I showed you how to collect One Identity Cloud PAM Network Agent log messages on Windows and create alerts when somebody connects to a host on your local network using PAM Essentials. This time, I will show you how to work with the Linux version of the Network Agent.
comments? additions? reactions?
As always, comment on mastodon: https://fosstodon.org/@nirik/114603298176306720