
Fedora People

Rediscover Classic RTS with OpenRA on Linux

Posted by Piju 9M2PJU on 2025-05-31 23:29:27 UTC

If you’re a fan of real-time strategy (RTS) games and use Linux, OpenRA is a must-have. This open-source project breathes new life into classic Westwood titles like Command & Conquer: Red Alert, Tiberian Dawn, and Dune 2000, offering modern enhancements while preserving the nostalgic gameplay.


🛠 What Is OpenRA?

OpenRA is a community-driven initiative that reimagines classic RTS games for contemporary platforms. It’s not just a remake; it’s a complete overhaul that introduces:

  • Modernized Interfaces: Updated sidebars and controls for improved usability.
  • Enhanced Gameplay Mechanics: Features like fog of war, unit veterancy, and attack-move commands.
  • Cross-Platform Support: Runs seamlessly on Linux, Windows, macOS, and *BSD systems.
  • Modding Capabilities: A built-in SDK allows for the creation of custom mods and maps.

These improvements ensure that both veterans and newcomers can enjoy a refined RTS experience.


🚀 Latest Features and Updates

The March 2025 release brought significant enhancements:

  • New Missions: Two additional Red Alert missions with improved objectives.
  • Persistent Skirmish Options: Settings now carry over between matches.
  • Balance Tweaks: Refinements for Red Alert and Dune 2000 to ensure fair play.
  • Asset Support: Compatibility with The Ultimate Collection and C&C Remastered Collection.
  • Language Support: Progress towards multilingual capabilities.

These updates demonstrate OpenRA’s commitment to evolving and enhancing the RTS genre.


🧰 Installation on Linux

Installing OpenRA on Linux is straightforward:

  1. Download AppImages: Visit the official download page to get the AppImage for your desired mod.
  2. Make Executable: Right-click the AppImage, select ‘Properties,’ and enable execution permissions.
  3. Launch: Double-click the AppImage to start the game.
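The same steps can be done from a terminal. This sketch demonstrates the permission change on a placeholder file; the real AppImage name depends on the mod and release you downloaded:

```shell
# Placeholder file standing in for the real download; substitute the
# AppImage you actually fetched from the OpenRA download page.
touch OpenRA-Red-Alert-x86_64.AppImage
chmod +x OpenRA-Red-Alert-x86_64.AppImage    # same as enabling execution in Properties
test -x OpenRA-Red-Alert-x86_64.AppImage && echo "ready to launch"
```

Once the execute bit is set, running `./OpenRA-Red-Alert-x86_64.AppImage` (with your real filename) starts the game.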

Alternatively, you can install OpenRA via:

  • Snap: sudo snap install openra
  • Flatpak: flatpak install flathub net.openra.OpenRA

These methods ensure that OpenRA integrates smoothly with your system.


🌟 Why Choose OpenRA?

OpenRA stands out in the Linux gaming landscape due to:

  • Community Engagement: Regular updates and active forums foster a vibrant player base.
  • Modding Scene: A robust SDK encourages creativity and customization.
  • Cross-Platform Play: Enjoy multiplayer matches with friends on different operating systems.
  • Educational Value: An in-game encyclopedia provides insights into units and strategies.

These features make OpenRA not just a game but a platform for learning and community interaction.


🎥 See OpenRA in Action

For a visual overview, check out this review:


🏆 Other Notable Strategy Games for Linux

If you’re exploring more strategy titles, consider:

  • 0 A.D.: A historical RTS focusing on ancient civilizations.
  • The Battle for Wesnoth: A turn-based strategy game with a rich fantasy setting.
  • Freeciv: A free Civilization-inspired game with extensive customization.

Each offers unique gameplay experiences and is well-supported on Linux platforms.


OpenRA exemplifies how classic games can be revitalized for modern audiences. Its blend of nostalgia and innovation makes it a standout choice for strategy enthusiasts on Linux.

The post Rediscover Classic RTS with OpenRA on Linux appeared first on Hamradio.my - Amateur Radio, Tech Insights and Product Reviews by 9M2PJU.

End of May 2025 fedora infra bits

Posted by Kevin Fenzi on 2025-05-31 15:46:59 UTC
Scrye into the crystal ball

We have arrived at the end of May. This year is going by in a blur for me. So much going on, so much to do.

Datacenter move

The switch week is still scheduled for the week of June 30th. We made some progress this last week on installs: got everything set up to install a bunch of servers. I installed a few and kept building out services. I was mostly focused on getting things set up so I could install openshift clusters in both prod and staging. That will let us move applications. I also set up rhel10 installs and installed a test virthost. There's still a few things missing from epel10 that we need: nagios clients, collectd (that's on me) and zabbix clients; otherwise the changes were reasonably minor. I might try and use rhel10 for a few things, but I don't want to spend a lot of time on it as we don't have much time.

Flock

Flock is next week! If you are looking for me, I will be traveling basically all Monday and Tuesday, then in Prague from Tuesday to very early Sunday morning, when I travel back home.

If you are going to flock and want to chat, please feel free to catch me and/or drop me a note so we can try to meet up. Happy to talk!

If you aren't going to flock, I'm hoping everything is pretty quiet infrastructure-wise. I will try and check in on any major issues, but do try and file tickets on things instead of posting to mailing lists or matrix.

I'd also like to remind everyone going to flock that we try not to actually decide anything there. It's for discussion and learning and putting a human face on your fellow contributors. Make plans and propose things, definitely; just make sure that after flock you use our usual channels to discuss and actually decide things. Decisions shouldn't be made offline where those not present can't provide input.

I'm likely to do blog posts about flock days, but they may be delayed until after the event. There's likely not going to be a regular Saturday post next week from me.

Arm laptop

So, I successfully used this Lenovo slim7x all week, so I guess I am going to try and use it for my flock travel. Hopefully it will all work out. :)

Issues I have run into in no particular order:

  • There are a bunch of various people working on various things, and all of that work touches the devicetree file. This makes it a nightmare to try to have a dtb with working bluetooth, ec, webcam, sound, suspend, etc. I really hope a bunch of this stuff lands upstream soon. For now I just have a kernel with bluetooth and ec working and am ignoring sound and webcam.

  • s2idle sleep "works", but I don't trust it. I suspended the other day when I was running some errands, and when I got home, the laptop had come on and was super super hot (it was under a jacket to make it less of a theft target). So, I might just shut down most of the time traveling. There's a patch to fix deep sleep, but see above.

  • I did wake up one day and it had rebooted, no idea why...

  • Otherwise everything is working fine and it's pretty nice and zippy.

  • Battery life is... ok. 7-8 hours. It's not hitting the lowest power states yet, but I think that will do for my needs for now.

So, hopefully it will all work. :)

comments? additions? reactions?

As always, comment on mastodon: https://fosstodon.org/@nirik/114603298176306720

RIT-Bahadur Malayalam typeface

Posted by Rajeesh KV on 2025-05-31 10:03:59 UTC

In 1978, a commemorative souvenir was published to celebrate the milestone of acting in 400 films by Bahadoor, a celebrated Malayalam movie actor. Artist Namboodiri designed its cover caricature and the lettering.

Cover of Bahadoor souvenir designed by artist Namboodiri in 1978.

Based on this lettering, KH Hussain designed a traditional script Malayalam Unicode font named ‘RIT Bahadur’. I did work on the engineering and production of the font to release it on the 25th death anniversary of Bahadoor, on 22-May-2025.

National daily ‘The Hindu’ has published an article about Bahadur font.

RIT Bahadur is a display typeface that comes in Bold and BoldItalic variants. It is licensed under Open Font License and can be freely downloaded from Rachana website.

RIT Bahadur font specimen.

Swap Partition vs Swap File on Linux: Everything You Need to Know

Posted by Piju 9M2PJU on 2025-05-31 07:52:08 UTC

When installing or managing a Linux system, one of the most debated topics is whether to use a swap partition or a swap file, or whether to use swap at all.

In this post, we’ll go back to the origin of swap, explore why swap was needed, how modern systems use (or avoid) it, and the advantages and disadvantages of both swap partitions and swap files.


🔄 What is Swap?

Swap is disk space used by the operating system when physical RAM is full. It acts as an extension of RAM to allow the system to offload memory pages that are not immediately needed, keeping critical applications running smoothly.
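You can check what swap (if any) your system is currently using with two standard commands from util-linux and procps, available on virtually any distribution:

```shell
swapon --show   # lists active swap areas; empty output means no swap is active
free -h         # the "Swap:" row shows total, used, and free swap space
```

Both commands are read-only and safe to run as a regular user.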


🧓 The Origin of Swap

Swap originated in the early days of computing, when:

  • RAM was expensive and limited.
  • Storage (although slower) was more plentiful.
  • Systems needed a way to “extend” memory to run more processes than RAM allowed.

Unix systems implemented swap space as a way to avoid running out of memory entirely—this idea carried over to Linux.


🧠 Why You Might Still Need Swap Today

Even with modern hardware, swap still has roles:

  1. Prevent Out of Memory (OOM) crashes: If your system runs out of RAM, swap provides a safety net.
  2. Hibernation (suspend-to-disk): Requires swap equal to or greater than your RAM size.
  3. Memory balancing: Swap allows the kernel to move idle pages out of RAM, freeing up space for active applications or disk cache.
  4. Low-memory devices: On systems like Raspberry Pi or small VPS servers, swap helps compensate for limited RAM.

🤷 Why You Might Not Need Swap

On the other hand:

  1. Lots of RAM: If your system rarely uses all available memory, swap may never be touched.
  2. SSD wear concerns: Excessive swapping can reduce SSD lifespan (though this is largely exaggerated with modern SSDs).
  3. Performance-critical applications: Swap is much slower than RAM. If you’re running performance-sensitive workloads, using swap can be a bottleneck.
  4. Modern alternatives: Features like zram and zswap offer compressed RAM swap spaces, reducing or eliminating the need for disk-based swap.

🗃 Swap Partition

✔ Advantages

  • Stability: Less prone to fragmentation.
  • Predictable performance: Constant location on disk can be slightly faster on spinning HDDs.
  • Used by default in many legacy systems.
  • Can be used even if root filesystem becomes read-only.

❌ Disadvantages

  • Inflexible size: Hard to resize without repartitioning.
  • Occupies a dedicated partition: Not space-efficient, especially on SSDs.
  • Inconvenient for virtualized or cloud instances.

📁 Swap File

✔ Advantages

  • Flexible: Easy to resize or remove.
  • No need for a separate partition.
  • Supported by all modern Linux kernels (since 2.6).
  • Works well with most filesystems including ext4, XFS, Btrfs (with limitations).

❌ Disadvantages

  • Can be slower on heavily fragmented file systems.
  • Doesn’t work with hibernation on some setups.
  • Needs correct permissions and configuration (e.g., no copy-on-write or compression with Btrfs unless configured properly).
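As a concrete sketch, creating and enabling a 2 GiB swap file on ext4 or XFS usually looks like this (sizes and paths are examples; run as root; on Btrfs the file must be created without copy-on-write first):

```shell
fallocate -l 2G /swapfile      # allocate the file (dd also works if fallocate is unsupported)
chmod 600 /swapfile            # swap must not be readable by other users
mkswap /swapfile               # write swap metadata to the file
swapon /swapfile               # enable it immediately
echo '/swapfile none swap defaults 0 0' >> /etc/fstab   # persist across reboots
```

To undo this later, `swapoff /swapfile`, remove the fstab line, and delete the file.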

🧪 Performance Considerations

Criteria               Swap Partition       Swap File
Resize Flexibility     ❌ Hard              ✅ Easy
Setup Complexity       ⚠ Medium             ✅ Easy
Performance (HDD)      ✅ Slightly better   ⚠ Slightly worse
Performance (SSD)      ⚖ Similar            ⚖ Similar
Works with Hibernate   ✅ Yes               ⚠ Depends on setup
Dynamic Management     ❌ Manual            ✅ Resizable on-the-fly

🛠 When to Use What?

Use a Swap Partition if:

  • You’re setting up a traditional desktop or dual-boot Linux system.
  • You plan to use hibernation reliably.
  • You prefer separating system components into strict partitions.

Use a Swap File if:

  • You’re on a modern system with lots of RAM and SSD.
  • You want to add swap after install easily.
  • You’re using cloud or VPS environments with flexible resources.
  • You don’t plan to use hibernation.

💡 Bonus: zram and zswap

Modern Linux kernels support zram and zswap, which compress memory pages before swapping to disk:

  • zram creates a compressed RAM-based block device as swap.
  • zswap is a compressed cache for swap pages before writing to disk.

These are great for low-memory systems like Raspberry Pi or embedded devices.
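On Fedora and other systemd-based distributions, zram is commonly set up through the zram-generator service. A minimal configuration sketch (the size formula and algorithm below are example values, not required defaults) looks like:

```
# /etc/systemd/zram-generator.conf
[zram0]
zram-size = min(ram / 2, 4096)   # half of RAM, capped at 4096 MiB
compression-algorithm = zstd
```

After a reboot, `swapon --show` should list /dev/zram0 as a high-priority swap device.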


🧾 Conclusion

Swap is not dead—it’s evolved.

Whether you choose a swap partition or a swap file depends on your needs:

  • Flexibility? Go for swap file.
  • Predictability and hibernation? Use a swap partition.
  • Want better performance with low RAM? Consider zram.

As always with Linux, the choice is yours—and that’s the power of open systems.


✅ TL;DR

  • Swap partition: Reliable, but rigid.
  • Swap file: Flexible and modern.
  • No swap: Fine if you have lots of RAM and don’t use hibernation.
  • zram/zswap: Smart memory compression alternatives.

The post Swap Partition vs Swap File on Linux: Everything You Need to Know appeared first on Hamradio.my - Amateur Radio, Tech Insights and Product Reviews by 9M2PJU.

New badge: Fedora Mentor Summit 2025 !

Posted by Fedora Badges on 2025-05-30 15:09:59 UTC
Fedora Mentor Summit 2025: You attended the Fedora Mentor Summit 2025

Infra and RelEng Update – Week 22 2025

Posted by Fedora Community Blog on 2025-05-30 10:00:00 UTC

This is a weekly report from the I&R (Infrastructure & Release Engineering) Team. We provide both an infographic and a text version of the weekly report. If you just want a quick look at what we did, check the infographic. If you are interested in more in-depth details, look below the infographic.

Week: 26 May – 30 May 2025

Read more: Infra and RelEng Update – Week 22 2025

Infrastructure & Release Engineering

The purpose of this team is to take care of the day-to-day business of the CentOS and Fedora infrastructure and Fedora release engineering work.
It’s responsible for services running in Fedora and CentOS infrastructure and for preparing things for the new Fedora release (mirrors, mass branching, new namespaces, etc.).
List of planned/in-progress issues

Fedora Infra

CentOS Infra including CentOS CI

Release Engineering

If you have any questions or feedback, please respond to this report or contact us on #redhat-cpe channel on matrix.

The post Infra and RelEng Update – Week 22 2025 appeared first on Fedora Community Blog.

How I manage SSL certificates for my homelab with Letsencrypt and Ansible

Posted by Maxim Burgerhout on 2025-05-30 07:12:00 UTC

How I manage SSL certificates for my homelab with Letsencrypt and Ansible

I have a fairly sizable homelab, consisting of some Raspberry Pi 4s, some Intel NUCs, a Synology NAS with a VM running on it, and a number of free VMs in Oracle Cloud. All these machines run RHEL 9 or RHEL 10, and all of them are managed from an instance of Red Hat Ansible Automation Platform that runs on the VM on my NAS.

On most of these machines, I run podman containers behind Caddy (which takes care of SSL certificate management automatically). But for some services, I really needed an automated way of managing SSL certificates that didn't involve Caddy. An example of this is cockpit, which I use on some occasions. I hate those "your connection is not secure" messages, so I needed real SSL certificates that my whole network would trust, without me having to load custom CA certificates into every single device.

I also use this method for securing my internal Postfix relay, and (in a slightly different way) for setting up certificates for containers running on my NAS.

So. Ansible to the rescue. It turns out, there is a surprisingly easy way to do this with Ansible. I found some code floating around the internet. To be honest, I forgot where I got it, it was probably a GitHub gist, but I really don't remember: I wrote this playbook months and months ago - I would love to attribute credit for this, but I simply can't :(

The point of the playbook is that it takes a list of certificates that should exist on a machine, and it makes sure those certificates exist on the target machine. Because this is for machines that are not reachable from the internet, it's not possible to use the standard HTTP validation. Instead, it creates temporary DNS records to prove my ownership of the domain.

Let's break down how the playbook works. I'll link to the full playbook at the end.

Keep in mind that all tasks below are meant to be run as a playbook looping over a list of dictionaries that are structured as follows:

  le_certificates:
    - common_name: "mymachine.example.com"
      basedir: "/etc/letsencrypt"
      domain: ".example.com"
      email: security-team@example.com

First, we make sure a directory exists to store the certificate. We check for the existence of a Letsencrypt account key and if that does not exist, we create it and copy it over to the client:

  - name: Create directory to store certificate information
    ansible.builtin.file:
      path: "{{ item.basedir }}"
      state: directory
      mode: "0710"
      owner: "{{ cert_directory_user }}"
      group: "{{ cert_directory_group }}"

  - name: Check if account private key exists
    ansible.builtin.stat:
      path: "{{ item.basedir }}/account_{{ item.common_name }}.key"
    register: account_key

  - name: Generate and copy over the acme account private key
    when: not account_key.stat.exists | bool
    block:
      - name: Generate private account key for letsencrypt
        community.crypto.openssl_privatekey:
          path: /tmp/account_{{ item.common_name }}.key
          type: RSA
        delegate_to: localhost
        become: false
        when: not account_key.stat.exists | bool

      - name: Copy over private account key to client
        ansible.builtin.copy:
          src: /tmp/account_{{ item.common_name }}.key
          dest: "{{ item.basedir }}/account_{{ item.common_name }}.key"
          mode: "0640"
          owner: root
          group: root

The next step is to check for the existence of a private key for the domain we are handling, and create it and copy it to the client if it doesn't exist:

  - name: Check if certificate private key exists
    ansible.builtin.stat:
      path: "{{ item.basedir }}/{{ item.common_name }}.key"
    register: cert_key

  - name: Generate and copy over the acme cert private key
    when: not cert_key.stat.exists | bool
    block:
      - name: Generate private acme key for letsencrypt
        community.crypto.openssl_privatekey:
          path: /tmp/{{ item.common_name }}.key
          type: RSA
        delegate_to: localhost
        become: false
        when: not cert_key.stat.exists | bool

      - name: Copy over private acme key to client
        ansible.builtin.copy:
          src: /tmp/{{ item.common_name }}.key
          dest: "{{ item.basedir }}/{{ item.common_name }}.key"
          mode: "0640"
          owner: root
          group: root

Then, we create a certificate signing request (CSR) based on the private key, and copy that to the client:

  - name: Generate and copy over the csr
    block:
      - name: Grab the private key from the host
        ansible.builtin.slurp:
          src: "{{ item.basedir }}/{{ item.common_name }}.key"
        register: remote_cert_key

      - name: Generate the csr
        community.crypto.openssl_csr:
          path: /tmp/{{ item.common_name }}.csr
          privatekey_content: "{{ remote_cert_key['content'] | b64decode }}"
          common_name: "{{ item.common_name }}"
        delegate_to: localhost
        become: false

      - name: Copy over csr to client
        ansible.builtin.copy:
          src: /tmp/{{ item.common_name }}.csr
          dest: "{{ item.basedir }}/{{ item.common_name }}.csr"
          mode: "0640"
          owner: root
          group: root

Now the slightly more complicated stuff starts. This next task contacts the Letsencrypt API and requests a certificate. It specifies a dns-01 challenge, which means that Letsencrypt will respond with a challenge that we can satisfy by creating a special DNS record. Everything we need is in the response, which we'll store as cert_challenge.

  - name: Create a challenge using an account key file.
    community.crypto.acme_certificate:
      account_key_src: "{{ item.basedir }}/account_{{ item.common_name }}.key"
      account_email: "{{ item.email }}"
      src: "{{ item.basedir }}/{{ item.common_name }}.csr"
      cert: "{{ item.basedir }}/{{ item.common_name }}.crt"
      challenge: dns-01
      acme_version: 2
      acme_directory: "{{ acme_dir }}"
      # Renew if the certificate is at least 30 days old
      remaining_days: 60
      terms_agreed: true
    register: cert_challenge

Now, I'll be using DigitalOcean's API to create the temporary DNS records, but you can use whatever DNS service you want, as long as it's publicly available for Letsencrypt to query. The following block will only run if two things are true:

  1. cert_challenge is changed, which is only the case if we need to renew the certificate. Letsencrypt certificates are valid for 90 days only. We specified remaining_days: 60, so if we run this playbook 30 or more days after its previous run, cert_challenge will be changed and the certificate will be renewed.
  2. item.common_name (the variable that holds the requested DNS record) is part of the challenge_data structure in cert_challenge. This verifies that we actually got correct challenge data from the Letsencrypt API, and not just some metadata change.

The block looks like this:

  - name: Actual certificate creation
    when: cert_challenge is changed and item.common_name in cert_challenge.challenge_data
    block:
      - name: Create DNS challenge record on DO
        community.digitalocean.digital_ocean_domain_record:
          state: present
          oauth_token: "{{ do_api_token }}"
          domain: "{{ item.domain[1:] }}"
          type: TXT
          ttl: 60
          name: "{{ cert_challenge.challenge_data[item.common_name]['dns-01'].record | replace(item.domain, '') }}"
          data: "{{ cert_challenge.challenge_data[item.common_name]['dns-01'].resource_value }}"
        delegate_to: localhost
        become: false

      - name: Let the challenge be validated and retrieve the cert and intermediate certificate
        community.crypto.acme_certificate:
          account_key_src: "{{ item.basedir }}/account_{{ item.common_name }}.key"
          account_email: "{{ item.email }}"
          src: "{{ item.basedir }}/{{ item.common_name }}.csr"
          cert: "{{ item.basedir }}/{{ item.common_name }}.crt"
          fullchain: "{{ item.basedir }}/{{ item.domain[1:] }}-fullchain.crt"
          chain: "{{ item.basedir }}/{{ item.domain[1:] }}-intermediate.crt"
          challenge: dns-01
          acme_version: 2
          acme_directory: "{{ acme_dir }}"
          remaining_days: 60
          terms_agreed: true
          data: "{{ cert_challenge }}"

      - name: Remove DNS challenge record on DO
        community.digitalocean.digital_ocean_domain_record:
          state: absent
          oauth_token: "{{ do_api_token }}"
          domain: "{{ item.domain[1:] }}"
          type: TXT
          name: "{{ cert_challenge.challenge_data[item.common_name]['dns-01'].record | replace(item.domain, '') }}"
          data: "{{ cert_challenge.challenge_data[item.common_name]['dns-01'].resource_value }}"
        delegate_to: localhost
        become: false

You'll notice that the TTL for this record is intentionally very low, because we don't need it other than for validation of the challenge, and we'll remove it after verification. If you do not use DigitalOcean as a DNS provider, the first task in the block above will look different, obviously.

The second task in the block reruns the acme_certificate task, and this time we pass the contents of the cert_challenge variable as the data parameter. Upon successful validation, we can retrieve the new certificate, full chain, and intermediate chain and store them on disk. Basically, at this point, we are done, without having to use certbot :)

Of course, in the third task, we clean up the temporary DNS record again.

I have a slightly different playbook to manage certificates on my NAS, and some additional tasks that configure Postfix to use this certificate, too, but those are probably useful for me only.

TL;DR: if you want to create a (set of) certificate(s) for a (group of) machine(s), running this playbook from AAP every month makes that really easy.

The main playbook looks like this:

---
# file: letsencrypt.yml
- name: Configure letsencrypt certificates
  hosts: rhel_machines
  gather_facts: false
  become: true
  vars:
    debug: false
    acme_dir: https://acme-v02.api.letsencrypt.org/directory

  pre_tasks:
    - name: Gather facts subset
      ansible.builtin.setup:
        gather_subset:
          - "!all"
          - default_ipv4
          - default_ipv6

  tasks:
    - name: Include letsencrypt tasks for each certificate
      ansible.builtin.include_tasks: letsencrypt_tasks.yml
      loop: "{{ le_certificates }}"

The letsencrypt_tasks.yml file is all of the above tasks combined into a single playbook:

---
# file: letsencrypt_tasks.yml

- name: Create directory to store certificate information
  ansible.builtin.file:
    path: "{{ item.basedir }}"
    state: directory
    mode: "0710"
    owner: "{{ cert_directory_user }}"
    group: "{{ cert_directory_group }}"

- name: Check if account private key exists
  ansible.builtin.stat:
    path: "{{ item.basedir }}/account_{{ item.common_name }}.key"
  register: account_key

- name: Generate and copy over the acme account private key
  when: not account_key.stat.exists | bool
  block:
    - name: Generate private account key for letsencrypt
      community.crypto.openssl_privatekey:
        path: /tmp/account_{{ item.common_name }}.key
        type: RSA
      delegate_to: localhost
      become: false
      when: not account_key.stat.exists | bool

    - name: Copy over private account key to client
      ansible.builtin.copy:
        src: /tmp/account_{{ item.common_name }}.key
        dest: "{{ item.basedir }}/account_{{ item.common_name }}.key"
        mode: "0640"
        owner: root
        group: root

- name: Check if certificate private key exists
  ansible.builtin.stat:
    path: "{{ item.basedir }}/{{ item.common_name }}.key"
  register: cert_key

- name: Generate and copy over the acme cert private key
  when: not cert_key.stat.exists | bool
  block:
    - name: Generate private acme key for letsencrypt
      community.crypto.openssl_privatekey:
        path: /tmp/{{ item.common_name }}.key
        type: RSA
      delegate_to: localhost
      become: false
      when: not cert_key.stat.exists | bool

    - name: Copy over private acme key to client
      ansible.builtin.copy:
        src: /tmp/{{ item.common_name }}.key
        dest: "{{ item.basedir }}/{{ item.common_name }}.key"
        mode: "0640"
        owner: root
        group: root

- name: Generate and copy over the csr
  block:
    - name: Grab the private key from the host
      ansible.builtin.slurp:
        src: "{{ item.basedir }}/{{ item.common_name }}.key"
      register: remote_cert_key

    - name: Generate the csr
      community.crypto.openssl_csr:
        path: /tmp/{{ item.common_name }}.csr
        privatekey_content: "{{ remote_cert_key['content'] | b64decode }}"
        common_name: "{{ item.common_name }}"
      delegate_to: localhost
      become: false

    - name: Copy over csr to client
      ansible.builtin.copy:
        src: /tmp/{{ item.common_name }}.csr
        dest: "{{ item.basedir }}/{{ item.common_name }}.csr"
        mode: "0640"
        owner: root
        group: root

- name: Create a challenge using an account key file.
  community.crypto.acme_certificate:
    account_key_src: "{{ item.basedir }}/account_{{ item.common_name }}.key"
    account_email: "{{ item.email }}"
    src: "{{ item.basedir }}/{{ item.common_name }}.csr"
    cert: "{{ item.basedir }}/{{ item.common_name }}.crt"
    challenge: dns-01
    acme_version: 2
    acme_directory: "{{ acme_dir }}"
    # Renew if the certificate is at least 30 days old
    remaining_days: 60
    terms_agreed: true
  register: cert_challenge

- name: Actual certificate creation
  when: cert_challenge is changed and item.common_name in cert_challenge.challenge_data
  block:
    - name: Create DNS challenge record on DO
      community.digitalocean.digital_ocean_domain_record:
        state: present
        oauth_token: "{{ do_api_token }}"
        domain: "{{ item.domain[1:] }}"
        type: TXT
        ttl: 60
        name: "{{ cert_challenge.challenge_data[item.common_name]['dns-01'].record | replace(item.domain, '') }}"
        data: "{{ cert_challenge.challenge_data[item.common_name]['dns-01'].resource_value }}"
      delegate_to: localhost
      become: false

    - name: Let the challenge be validated and retrieve the cert and intermediate certificate
      community.crypto.acme_certificate:
        account_key_src: "{{ item.basedir }}/account_{{ item.common_name }}.key"
        account_email: "{{ item.email }}"
        src: "{{ item.basedir }}/{{ item.common_name }}.csr"
        cert: "{{ item.basedir }}/{{ item.common_name }}.crt"
        fullchain: "{{ item.basedir }}/{{ item.domain[1:] }}-fullchain.crt"
        chain: "{{ item.basedir }}/{{ item.domain[1:] }}-intermediate.crt"
        challenge: dns-01
        acme_version: 2
        acme_directory: "{{ acme_dir }}"
        remaining_days: 60
        terms_agreed: true
        data: "{{ cert_challenge }}"

    - name: Remove DNS challenge record on DO
      community.digitalocean.digital_ocean_domain_record:
        state: absent
        oauth_token: "{{ do_api_token }}"
        domain: "{{ item.domain[1:] }}"
        type: TXT
        name: "{{ cert_challenge.challenge_data[item.common_name]['dns-01'].record | replace(item.domain, '') }}"
        data: "{{ cert_challenge.challenge_data[item.common_name]['dns-01'].resource_value }}"
      delegate_to: localhost
      become: false

And finally, as part of host_vars, for each of my hosts a letsencrypt.yml file exists containing:

---
le_certificates:
  - common_name: "myhost.example.com"
    basedir: "/etc/letsencrypt"
    domain: ".example.com"
    email: security-team@example.com

To be fair, there could probably be a lot of optimization done in that playbook, and I can't remember why I did it with .example.com (with the leading dot) and then use item.domain[1:] in so many places. But, I'm a lazy IT person, and I'm not fixing what isn't inherently broken :)

Hope this helps!

M

Another update on the Fedoraproject Datacenter Move

Posted by Fedora Community Blog on 2025-05-29 11:07:41 UTC

Here’s another update on the upcoming fedoraproject Datacenter move.

Summary: there have been some delays; the current target switch week
to the new datacenter is now the week of 2025-06-30
(formerly 2025-05-16).

The plans we mentioned last month are all still in our plan, just moved out two weeks.

Why the delay? Well, there were some delays in getting networking
set up in the new datacenter, but that's now been overcome and we are
back on track, just with a delay.

Here’s a rundown of the current plan:

  • We now have access to all the new hardware; its firmware has been
    updated and configured.
  • We have a small number of servers installed, and this week we
    are installing the OS on more servers as well as building out VMs
    for various services.
  • Next week is flock, so we will probably not make too much progress,
    but we might do some more installs/configuration if time permits.
  • The week after flock we hope to get openshift clusters all setup
    and configured.
  • The week after that we will start moving some applications that
    aren’t closely tied to the old datacenter. If they don’t have storage
    or databases, they are good candidates to move.
  • The next week will be for any other applications we can move.
  • The week before the switch will be getting things ready for that
    (making sure data is synced, plans are reviewed, etc)
  • Finally the switch week (week of june 30th):
    Fedora Project users should not notice much during this change.
    Mirrorlists, mirrors, docs, and other user facing applications
    should continue working as always. Updates pushes may be delayed
    a few days while the switch happens.
    Our goal is to keep any end user impact to a minimum.
  ‱ For Fedora Contributors, Monday and Tuesday we plan to “move” the bulk of applications and services. Contributors should avoid doing much on those days as services may be moving around or syncing in various ways. Starting Wednesday, we will make sure everything is switched and fix problems or issues as they are found. Thursday and Friday will continue stabilization work.
  • The week after the switch, some newer hardware in our old datacenter
    will be shipped down to the new one. This hardware will be added
    to increase capacity (more builders, more openqa workers, etc).

This move should get us in a nicer place with faster/newer/better hardware.

The post Another update on the Fedoraproject Datacenter Move appeared first on Fedora Community Blog.

It’s okay to be partial to your work

Posted by Ben Cotton on 2025-05-28 12:00:00 UTC

I often see leaders in open source projects not wanting to promote their own work in the interest of fairness. That’s a noble idea, but it’s unnecessary. It’s okay to be partial to — and promote — your own work, so long as you follow the community’s process.

Real world examples

What does this look like in practice? You may be a member of a steering committee that approves feature proposals. You didn’t earn that spot just because you’re good at meetings; you most likely earned it through sustained technical and interpersonal merit. This, in turn, means you’re probably still writing new feature proposals sometimes. That doesn’t mean you have to recuse yourself when it comes up for a vote. Everyone knows you wrote it, and you’re a member of the committee, not an independent judge presiding over a criminal trial.

Or you might be leading a project and have a tool that would help the project meet its goals. You can propose that the project adopt your tool. Again, it’s going to be clear that you wrote it, so go ahead and make the proposal.

The need for process

As I alluded to in the opening paragraph, your community needs a process for these sorts of proposals. It doesn’t have to be elaborate. Something as simple as “a majority of the steering committee must approve of the proposal” counts as a process. Following the process is what keeps the decision fair, even when you have a predisposition to like what you’re proposing. If your proposal gets the same treatment as everyone else’s, that’s all that matters.

When to recuse yourself

Of course, there are times when it’s appropriate to recuse yourself. If your proposal is particularly contentious (let’s say a roughly 50-50 split, not a 75-25 split in favor), it’s best that you’re not the deciding vote. If you can’t amend your proposal in such a way that you win some more support, then it may be better not to vote.

If the community policy and processes require the author of a proposal to recuse themselves, then that’s obviously a good reason to do so. “But Ben said I shouldn’t!” won’t win you any points, even if the policy is misguided (and it may or may not be!).

Also, if the context is a pull request, you should not vote to approve it to get it over the approval requirement threshold. That is a separate case, and one that most forges will prohibit anyway.

This post’s featured photo by Piret Ilver on Unsplash.

The post It’s okay to be partial to your work appeared first on Duck Alignment Academy.

New badge: Rock the Boat !

Posted by Fedora Badges on 2025-05-28 09:42:09 UTC
Rock the Boat: You joined the boat party at Flock 2025 in Prague!

New badge: Flock 2025 Attendee !

Posted by Fedora Badges on 2025-05-28 09:37:22 UTC
Flock 2025 Attendee: You attended Flock 2025 in Prague, Czech Republic!

How to use Authselect to configure PAM in Fedora Linux

Posted by Fedora Magazine on 2025-05-28 08:00:00 UTC

Authselect is a utility tool that manages PAM configurations using profiles. Starting with Fedora 36, Authselect became a hard requirement for configuring PAM. In this article, you will learn how to configure PAM using Authselect.

Introduction.

Unauthorized access is a critical risk factor in computer security. Cybercriminals engage in data theft, cyber-jacking, crypto-jacking, phishing, and ransomware attacks once they gain unauthorized access. A common vulnerability exploited for unauthorized access is poorly configured authentication. Pluggable Authentication Modules (PAM) play a critical role in mitigating this vulnerability by acting as a middleware layer between your applications and authentication mechanisms. For instance, you can use PAM to configure a server to deny logins after 6pm, or to require a token for any login attempts after that time. PAM does not carry out authentication itself; instead, it forwards requests to the authentication module specified in its configuration files.

This article will cover the following three topics:

  1. PAM
  2. Authselect, and authselect profiles
  3. How to configure PAM

Prerequisites:

  ‱ A Fedora, CentOS, or RHEL server. This guide uses Fedora 41 Server edition; the steps are interchangeable across Fedora, CentOS, and RHEL.
  • A user account with sudo privileges on the server.
  • Command line familiarity.

What is PAM?

The Pluggable Authentication Modules (PAM) framework provides a modular way of authenticating users, systems, and applications on Fedora Linux. Before PAM, file-based authentication was the prevalent authentication scheme. File-based authentication stores usernames, passwords, IDs, names, and other optional information in one file. This was simple, and everyone was happy until security requirements changed or new authentication mechanisms were adopted.

Here’s an excerpt from Red Hat’s PAM documentation:

Pluggable Authentication Modules (PAMs) provide a centralized authentication mechanism which system applications can use to relay authentication to a centrally configured framework. PAM is pluggable because there is a PAM module for different types of authentication sources (such as Kerberos, SSSD, NIS, or the local file system). Different authentication sources can be prioritized.

PAM acts as a middleware between applications and authentication modules. It receives authentication requests, looks at its configuration files and forwards the request to the appropriate authentication module. If any module detects that the credentials do not meet the required configuration, PAM denies the request and prevents unauthorized access. PAM guarantees that every request is consistently validated before it denies or grants access.
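Those configuration files live under /etc/pam.d/, one file per service, each stacking modules by type and control flag. As a rough sketch only (the module choices below are generic illustrations, not what authselect generates on Fedora), a service file looks like:

```
# Generic /etc/pam.d/<service> sketch -- module choices are illustrative
# type      control      module             arguments
auth        required     pam_env.so
auth        sufficient   pam_unix.so        try_first_pass
auth        required     pam_deny.so
account     required     pam_unix.so
password    required     pam_unix.so        sha512 shadow
session     required     pam_unix.so
```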

Why PAM?

  1. Support for various authentication schemes using pluggable modules. These may include two-factor authentication (2FA), password authentication (LDAP), tokens (OAuth, Kerberos), biometrics (fingerprint, facial), hardware keys (YubiKey), and much more.
  2. Support for stacked authentication. PAM can combine one or more authentication schemes.
  3. Flexibility to support new or future authentication technology with minimal friction.
  4. High performance, and stability under significant load.
  5. Support for granular/custom configuration across users and applications. For example, PAM can disallow access to an application from 5pm to 5am if an authenticated user does not possess a given role.

Authselect replaces Authconfig

Authselect was introduced in Fedora 28 to replace Authconfig. By Fedora 35, Authconfig was removed system-wide. In Fedora 36, Authselect became a hard dependency, making it a requirement for configuring PAM in subsequent Fedora versions.

This tool does not configure your applications (LDAP, AD, SSH);  it is a configuration management tool designed to set up and maintain PAM. Authselect selects and applies pre-tested authentication profiles that determine which PAM modules are active and how they are configured.

Here’s an excerpt from the Fedora 27 changeset which announced Authselect as a replacement for Authconfig:

Authselect is a tool to select system authentication and identity sources from a list of supported profiles.

It is designed to be a replacement for authconfig but it takes a different approach to configure the system. Instead of letting the administrator build the pam stack with a tool (which may potentially end up with a broken configuration), it would ship several tested stacks (profiles) that solve a use-case and are well tested and supported.

From the same changeset, the authors report that Authconfig was error prone, hard to maintain due to technical debt, caused system regressions after updates, and was hard to test.

Authconfig does its best to always generate a valid pam stack but it is not possible to test every combination of options and identity and authentication daemons configuration. It is also quite regression prone since those daemons are in active development on their own and independent on authconfig. When a new feature is implemented in an authentication daemon it takes some time to propagate this feature into authconfig. It also may require a drastic change to the pam stack which may easily introduce regressions since it is hard to test properly with so many possible different setups.

Authselect profiles, and what they do.

As mentioned above, Authselect manages PAM configuration using ready-made profiles. A profile is a set of features and functions that describe how the resulting system configuration will look. One selects a profile and Authselect applies the configuration to PAM.

In Fedora, Authselect ships with four profiles:

$ authselect list
- local          Local users only
- nis            Enable NIS for system authentication
- sssd           Enable SSSD for system authentication (also for local users only)
- winbind        Enable winbind for system authentication

For descriptions of each profile, visit Authselect’s readme page for profiles, and the wiki, available on GitHub.

You can view the current profile with:

$ authselect current
Profile ID: local
Enabled features:
- with-silent-lastlog
- with-fingerprint
- with-mdns4

You can change the current profile with:

$ sudo authselect select local
Profile "local" was selected.

List the features in any profile with:

$ authselect list-features local
with-altfiles
with-ecryptfs
with-faillock
with-fingerprint
with-libvirt
with-mdns4
with-mdns6
with-mkhomedir
with-pam-gnome-keyring
with-pam-u2f
with-pam-u2f-2fa
with-pamaccess
with-pwhistory
with-silent-lastlog
with-systemd-homed
without-lastlog-showfailed
without-nullok
without-pam-u2f-nouserok

You can enable or disable features in a profile with:

$ sudo authselect enable-feature with-fingerprint
$ sudo authselect disable-feature with-fingerprint

Configure PAM with Authselect.

Scenario: You have noticed a high number of failed login attempts on your Fedora Linux server. As a preemptive action, you want to configure PAM for lockouts. Any user with 3 failed login attempts gets locked out of your server for 24 hours.

The pam_faillock.so module maintains a list of failed authentication attempts per user during a specified interval and locks the account in case there were more than the stipulated consecutive failed authentications.

The Authselect “with-faillock” profile feature handles failed-authentication lockouts.

Step 1. Check if the current profile on the server has with-faillock enabled:

$ authselect current
Profile ID: local
Enabled features:
- with-silent-lastlog
- with-mdns4
- with-fingerprint

As you can see, with-faillock is not enabled in this profile.

Step 2. Enable with-faillock

$ sudo authselect enable-feature with-faillock
$ authselect current
Profile ID: local
Enabled features:
- with-silent-lastlog
- with-mdns4
- with-fingerprint
- with-faillock

Authselect has now configured PAM to support lockouts. Check the /etc/pam.d/system-auth and /etc/pam.d/password-auth files to see that they have been updated by authselect.

From the vimdiff image below you can see the changes authselect added to /etc/pam.d/system-auth.

Authselect added the following lines:

auth      required   pam_faillock.so preauth silent
auth      required   pam_faillock.so authfail
account   required   pam_faillock.so

Step 3. Check if the current configuration is valid.

$ authselect check
Current configuration is valid.

Step 4. Apply changes

$ sudo authselect apply-changes
Changes were successfully applied.

Step 5. Configure faillock

$ vi /etc/security/faillock.conf

Uncomment and edit the file so that it contains the following parameters:

silent
audit
deny=3
unlock_time=86400
dir = /var/run/faillock
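As a quick sanity check on those numbers: deny=3 matches the scenario's three attempts, and unlock_time is expressed in seconds, so the 24-hour lockout works out as:

```shell
# unlock_time in faillock.conf is in seconds:
# 24 hours * 60 minutes * 60 seconds = 86400
echo $((24 * 60 * 60))
# prints 86400
```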

Step 6. Test PAM configuration

6.1 Attempt to log in several consecutive times with the wrong password to trigger a lockout

6.2 Check failure records
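The article doesn't show the commands for checking the records; on Fedora the faillock(8) utility from the pam package can list and reset them ("alice" is a placeholder username here):

```shell
# List the failure records for a user (stored under dir=, /var/run/faillock above)
sudo faillock --user alice

# Clear the records, unlocking the account before unlock_time expires
sudo faillock --user alice --reset
```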

Important: As a best practice, always back up your current Authselect profile before making any change.

Back up the current Authselect profile as follows:

$ authselect select local -b
Backup stored at /var/lib/authselect/backups/2025-05-23-22-41-33.UyM1lJ
Profile "local" was selected.

To list backed-up profiles:

$ authselect backup-list
2025-05-22-15-17-41.fe92T8 (created at Thu 22 May 2025 11:17:41 AM EDT)
2025-05-23-22-41-33.UyM1lJ (created at Fri 23 May 2025 06:41:33 PM EDT)

Restore a profile from backup with:

$ authselect backup-restore 2025-05-23-22-41-33.UyM1lJ

Don’t Panic! There’s an F42 Release Party on Thursday!

Posted by Fedora Magazine on 2025-05-27 16:24:15 UTC

On Thursday, May 29 (yes, two days away!) we will host the F42 release party on Matrix.

We would love for you to join us to celebrate all things F42 in a private event room from 1300 – 1600 UTC. You will hear from our new FPL Jef Spaleta, learn about the design process for each release, and hear about some of the great new features in Fedora Workstation, Fedora KDE, and our installer. There’s also a git forge update, a mentor summit update, and lots more.

You can see the schedule on the event page wiki, and how to attend is simple: please register for the event, for free, in advance. Using your Matrix ID, you will receive an invitation to a private event room where we will be streaming presentations via ReStream.

Events will be a mixture of live and pre-recorded. All will be available after the event on the Fedora YouTube channel.

Testing the new syslog-ng wildcard-file() source options on Linux

Posted by Peter Czanik on 2025-05-27 13:20:50 UTC

Last year, syslog-ng 4.8.0 improved the wildcard-file() source on FreeBSD and macOS. Version 4.9.0 will do the same for Linux by using inotify for file and directory monitoring, resulting in faster performance while using significantly fewer resources. This blog post is a call for testing the new wildcard-file() source options before release.

Read more at https://www.syslog-ng.com/community/b/blog/posts/testing-the-new-syslog-ng-wildcard-file-source-options-on-linux

syslog-ng logo

Andy Warhol at FAAP and a contemporary art retrospective

Posted by Avi Alkalay on 2025-05-27 11:54:58 UTC

So I went to the Andy Warhol exhibition.

It's always like this. I drag my feet about going to see contemporary art, but then, when I do go, I come out excited, thrilled, amazed, inspired, intrigued.

It was like that with the Portuguese artist Joana Vasconcelos at MaAM and her monumental, stunning Valkyrie Mumbet. It was like that with Julio Le Parc's geometric illusions at Tomie Ohtake. Or with the inventiveness of Anish Kapoor, which I saw at Corpartes. Or the essential art parks such as Inhotim, the unforgettable De Hoge Veluwe, the deCordova. Or the ICA in Boston. Richard Serra at the Guggenheim in Bilbao. And many others. I can be exhaustive with this list because I kept photos of all those visits that left a deep mark on me.

This Warhol show at FAAP is a must-see. It has hundreds of original works, many of them extremely well known, all very well curated. As a portraitist, Warhol depicted countless celebrities such as Marilyn, Michael Jackson, Joan Collins (photo, and one of the most impressive compositions in the exhibition), Sylvester Stallone, Jacqueline Kennedy, and so on. But there is also his political vein, in which he addressed the spectacularization of death, a theme I find still very relevant today.

One of the intriguing characteristics of his work is his technique. Anyone who watched Warhol work for two days with photography, silkscreen, and paint could produce something similar. I say this not to diminish him, but because it is sensational how far you can go with so little. Warhol's differentiator, I believe, was the milieu he moved in, the parties he attended, the people he associated with. And courage. A lot of courage to make art in that simple, new way, and at that scale.

I'm happy that friends of mine are also in the visual arts, including with recent exhibitions. Marcia Cymbalista, who clearly conveys the simplicity of her character into her paintings. Rogério Pasqua, who impressed me with the semi-abstract drawings he has been producing. Taly Cohen, who is already launching toward international fame. Babak Fakhamzadeh, who ventures into several kinds of artistic expression. It's an honor to have you all around.

New badge: Let's have a party (Fedora 42) !

Posted by Fedora Badges on 2025-05-27 07:48:17 UTC
Let's have a party (Fedora 42): You attended the F42 Virtual Release Party!

Third week of May 2025 fedora infra bits

Posted by Kevin Fenzi on 2025-05-24 21:01:11 UTC
Scrye into the crystal ball

Oh look, it's Saturday already. Another busy week here with lots going on, so without further ado, let's discuss some things!

Datacenter Move

Due to delays in getting networking to the new servers and various logistics, we are going to be moving the switcharoo week to the week of June 30th. It was set for June 16th, but that's just too close timing-wise, so we are moving it out two weeks. Look for a community blog post and devel-announce post next week on this. I realize that means that Friday is July 4th (a holiday in the US), but we hope to do the bulk of switching things on Monday and Tuesday of that week, and leave only fixing things for Wednesday and Thursday.

We did finally get networking for the new servers last week. Many thanks to all the networking folks who worked hard to get things up and running. With some network in place, I was able to start bootstrapping infrastructure. We now have a bastion host, a dhcp/tftp host, and a dns server all up and managed via our existing ansible control host like all the rest of our hosts.

Friday was a recharge day at Red Hat, and Monday is the US Memorial Day holiday, but I should be back at deploying things on Tuesday. Hopefully next week I will get an initial proxy setup and can then look at doing openshift cluster installs.

Flock

The week after next is flock! It came up so fast. I do plan on being there (I get into Prague late morning on the 3rd). Hope to see many folks there; happy to talk about most anything. I'm really looking forward to the good energy that comes from being around so many awesome open source folks!

Of course that means I may well not be online as much as normal (when traveling, in talks, etc), so please plan accordingly if you need my help with something.

Laptop

So, I got this Lenovo Slim 7x Snapdragon X laptop quite a long time ago, and finally I decided I should see if I can use it day to day, and if so, use it for the flock trip so I don't have to bring my frame.work laptop.

So, I hacked up an aarch64 rawhide live image with a dtb for it and was able to do an encrypted install and then upgrade the kernel. I did have to downgrade linux-firmware for the ath12k firmware bug, but that's fine.

So far it's looking tenable (I am typing this blog post on it now). I did have to add another kernel patch to get bluetooth working, but it seems to be fine with the patch. The OLED screen on this thing is wonderful. Battery life seems ok, although it's hard to tell without a 'real life' test.

Known things not working: camera (there are patches, but it's really early, so I will wait for them), sound (there are also patches, but it has the same issue the mac laptops had: there are no safeguards, so you can easily destroy your speakers if you turn the volume up too loud).

Amusing things: no discord flatpak available (the one on flathub is x86_64 only), but the web version works fine (although, amusingly, it tells you to install the app, which doesn't exist).

Also, no chrome, but there is chromium, which should be fine for sites that firefox doesn't work with.

I'll see if I can get through the weekend and upcoming week and decide what laptop I will take traveling.

comments? additions? reactions?

As always, comment on mastodon: https://fosstodon.org/@nirik/114564927864167658

IPU6 cameras with ov02c10 / ov02e10 now supported in Fedora

Posted by Hans de Goede on 2025-05-23 16:09:31 UTC
I'm happy to share that 3 major IPU6 camera related kernel changes from linux-next have been backported to Fedora and have been available for about a week now in the Fedora kernel-6.14.6-300.fc42 (or later) package:

  1. Support for the OV02C10 camera sensor, this should e.g. enable the camera to work out of the box on all Dell XPS 9x40 models.
  2. Support for the OV02E10 camera sensor, this should e.g. enable the camera to work out of the box on Dell Precision 5690 laptops. When combined with item 3. below and the USBIO drivers from rpmfusion this should also e.g. enable the camera on other laptop models like e.g. the Dell Latitude 7450.
  3. Support for the special handshake GPIO used to turn on the sensor and allow sensor i2c-access on various new laptop models using the Lattice MIPI aggregator FPGA / USBIO chip.

If you want to give this a test using the libcamera-softwareISP FOSS stack, run the following commands:

sudo rm -f /etc/modprobe.d/ipu6-driver-select.conf
sudo dnf update 'kernel*'
sudo dnf install libcamera-qcam
reboot
qcam

Note that the colors being washed out and/or the image possibly being a bit over- or under-exposed is expected behavior at the moment; this is due to the software ISP needing more work to improve the image quality. If your camera still does not work after these changes and you've not filed a bug for this camera already, please file a bug following these instructions.

See my previous blogpost on how to also test Intel's proprietary stack from rpmfusion if you also have that installed.


IPU6 FOSS and proprietary stack co-existence

Posted by Hans de Goede on 2025-05-23 15:42:26 UTC
Since the set of rpmfusion intel-ipu6-kmod and ipu6-camera-* package updates from last February, the FOSS libcamera-softwareISP stack and Intel's proprietary stack using the Intel hardware ISP can now co-exist on Fedora systems, sharing the mainline IPU6-CSI2 receiver driver.

Because of this it is no longer necessary to blacklist the kernel modules from the other stack. Unfortunately, when the rpmfusion packages first generated "/etc/modprobe.d/ipu6-driver-select.conf" for blacklisting, this file was not marked as "%ghost" in the specfile, and now with the February ipu6-camera-hal the file has been removed from the package. This means that if you've jumped from an old ipu6-camera-hal where the file was not marked as "%ghost" directly to the latest, you may still have the modprobe.d conf file around causing issues. To fix this run:

sudo rm -f /etc/modprobe.d/ipu6-driver-select.conf

and then reboot. I'll also add this as a post-install script to the ipu6-camera-hal packages, to fix systems that are broken because of this.

If you want the rpmfusion packages because your system needs the USBIO drivers, but you do not want the proprietary stack, you can run the following command to disable the proprietary stack:

sudo ipu6-driver-select foss

Or if you have disabled the proprietary stack in the past and want to give it a try, run:

sudo ipu6-driver-select proprietary

To test switching between the 2 stacks in Firefox, go to Mozilla's webrtc test page and click on the "Camera" button. You should now get a camera permission dialog with 2 cameras: "Built in Front Camera" and "Intel MIPI Camera (V4L2)". The "Built in Front Camera" is the FOSS stack and the "Intel MIPI Camera (V4L2)" is the proprietary stack. Note the FOSS stack will show a strongly zoomed in (cropped) image; this is caused by the GUM test-page, in e.g. google-meet this will not be the case.

Unfortunately, switching between the 2 cameras in jitsi does not work well. The jitsi camera selector tries to show a preview of both cameras at the same time, and while one stack is streaming the other stack cannot access the camera. You should be able to switch by: 1. selecting the camera you want, 2. closing the jitsi tab, 3. waiting a few seconds for the camera to stop streaming, and 4. opening jitsi in a new tab.

Note I already mentioned most of this in my previous blog post but it was a bit buried there.


DockFlare: Securely Expose Docker Services with Cloudflare Tunnels

Posted by Piju 9M2PJU on 2025-05-23 06:55:38 UTC

🌟 Introduction: What Is DockFlare?

Self-hosting applications has become increasingly popular among developers, tech enthusiasts, and homelabbers. However, securely exposing internal services to the internet is often a complicated task. It involves:

  • Opening firewall ports
  • Dealing with dynamic IPs
  • Managing TLS certificates
  • Handling reverse proxies
  • Setting up access control

This is where DockFlare comes in.

DockFlare is a lightweight, self-hosted Cloudflare Tunnel automation tool for Docker users. It simplifies the process of publishing your internal Docker services to the public internet through Cloudflare Tunnels, while providing optional Zero Trust security, DNS record automation, and a sleek web interface for real-time management.


🎯 Objectives of DockFlare

DockFlare was created to solve three key problems:

  1. Simplicity: Configure secure public access to your Docker containers using just labels—no reverse proxy, SSL setup, or manual DNS records needed.
  2. Security: Protect your services behind Cloudflare’s Zero Trust Access, supporting identity-based authentication (Google, GitHub, OTP, and more).
  3. Automation: Automatically create tunnels, subdomains, and security policies based on your Docker service metadata. No scripting. No re-deploys.

💡 Why Use DockFlare?

Here’s how DockFlare benefits its users:

  • 🚀 Quick Setup: Set up secure tunnels and expose services in seconds with Docker labels.
  • 🔐 Zero Trust Security: Enforce authentication for any service using Cloudflare Access.
  • 🌍 No Public IP Required: No need to forward ports or expose your home IP—perfect for CG-NAT and mobile ISPs.
  • 🛡 Safe by Default: TLS encryption, no open ports, and access rules built-in.
  ‱ 🖥 User-Friendly UI: Visualize tunnels, view logs, and manage configurations in a web dashboard.
  • 🧰 DevOps Ready: Works seamlessly in CI/CD pipelines or home labs.

🛠 How to Install DockFlare

🧾 Requirements

  • Docker and Docker Compose
  • A Cloudflare account
  • A domain connected to Cloudflare
  • A Cloudflare API Token with:
    • Zone DNS edit
    • Zero Trust policy management
    • Tunnel management

📁 Step 1: Create Your Project Directory

mkdir dockflare && cd dockflare

📝 Step 2: Create .env File

Create a file named .env with the following contents:

CLOUDFLARE_API_TOKEN=your_token_here
CLOUDFLARE_ACCOUNT_ID=your_account_id
CLOUDFLARE_ZONE_ID=your_zone_id
TZ=Asia/Kuala_Lumpur

🔒 Keep this file private!

🐳 Step 3: Create docker-compose.yml

version: '3.8'

services:
  dockflare:
    image: alplat/dockflare:stable
    container_name: dockflare
    restart: unless-stopped
    env_file:
      - .env
    ports:
      - "5000:5000"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - dockflare_data:/app/data
    labels:
      - cloudflare.tunnel.enable=true
      - cloudflare.tunnel.hostname=dockflare.yourdomain.com
      - cloudflare.tunnel.service=http://dockflare:5000

volumes:
  dockflare_data:

▶ Step 4: Deploy DockFlare

docker compose up -d

Access the UI: http://localhost:5000


🌐 Exposing a Docker Service

Here’s an example of exposing a service like myapp running on port 8080:

services:
  myapp:
    image: myapp:latest
    labels:
      cloudflare.tunnel.enable: "true"
      cloudflare.tunnel.hostname: "app.yourdomain.com"
      cloudflare.tunnel.service: "http://myapp:8080"
      cloudflare.tunnel.access.policy: "authenticate"
      cloudflare.tunnel.access.allowed_idps: "your-idp-uuid"

🔐 This will automatically:

  • Create a Cloudflare Tunnel
  • Point your subdomain to it
  • Enforce secure login

🌍 Add Non-Docker Services

Want to expose your home router or NAS?

  1. Go to DockFlare UI.
  2. Click “Add Hostname”.
  3. Enter:
    • Hostname (e.g., nas.yourdomain.com)
    • Internal IP/port
    • Access policy (bypass/authenticate)
  4. Done!

This works for any service, not just Docker.


🔐 Configuring Zero Trust Access

To secure your services:

cloudflare.tunnel.access.policy: authenticate
cloudflare.tunnel.access.allowed_idps: abc123-def456
cloudflare.tunnel.access.session_duration: 8h

🧠 Advanced Tips

  • Expose multiple hostnames:
    cloudflare.tunnel.hostname=api.domain.com,admin.domain.com
  • Customize session duration:
    cloudflare.tunnel.access.session_duration=12h
  • Monitor logs via the web UI or docker logs dockflare

📚 Resources


🏁 Conclusion

DockFlare is a game-changer for developers, sysadmins, and homelabbers who want an easy, secure, and automated way to expose their applications to the web. With support for Cloudflare Tunnels, Zero Trust Access, DNS automation, and a clean UI—it’s the only tool you’ll need to publish your services safely.

No more port forwarding. No more SSL headaches.

Just Docker + DockFlare + Cloudflare = Done. ✅

The post DockFlare: Securely Expose Docker Services with Cloudflare Tunnels appeared first on Hamradio.my - Amateur Radio, Tech Insights and Product Reviews by 9M2PJU.

🎲 PHP version 8.3.22RC1 and 8.4.8RC1

Posted by Remi Collet on 2025-05-23 04:26:00 UTC

Release Candidate versions are available in the testing repository for Fedora and Enterprise Linux (RHEL / CentOS / Alma / Rocky and other clones) to allow more people to test them. They are available as Software Collections, for parallel installation, the perfect solution for such tests, and as base packages.

RPMs of PHP version 8.4.8RC1 are available

  • as base packages in the remi-modular-test for Fedora 40-42 and Enterprise Linux ≄ 8
  • as SCL in remi-test repository

RPMs of PHP version 8.3.22RC1 are available

  • as base packages in the remi-modular-test for Fedora 40-42 and Enterprise Linux ≄ 8
  • as SCL in remi-test repository

â„č The packages are available for x86_64 and aarch64.

â„č PHP version 8.2 is now in security mode only, so no more RC will be released.

â„č Installation: follow the wizard instructions.

â„č Announcements:

Parallel installation of version 8.4 as Software Collection:

yum --enablerepo=remi-test install php84

Parallel installation of version 8.3 as Software Collection:

yum --enablerepo=remi-test install php83

Update of system version 8.4:

dnf module switch-to php:remi-8.4
dnf --enablerepo=remi-modular-test update php\*

Update of system version 8.3:

dnf module switch-to php:remi-8.3
dnf --enablerepo=remi-modular-test update php\*

â„č Notice:

  • version 8.4.8RC1 is in Fedora rawhide for QA
  • EL-10 packages are built using RHEL-10.0 and EPEL-10.0
  • EL-9 packages are built using RHEL-9.5
  • EL-8 packages are built using RHEL-8.10
  • oci8 extension uses the RPM of the Oracle Instant Client version 23.7 on x86_64 and aarch64
  • intl extension uses libicu 74.2
  • RC version is usually the same as the final version (no change accepted after RC, exception for security fix).
  ‱ versions 8.3.22 and 8.4.8 are planned for June 6th, in 2 weeks.

Software Collections (php83, php84)

Base packages (php)

Announcing the Fedora 42 Release Party – May 29, 2025

Posted by Fedora Community Blog on 2025-05-22 22:35:32 UTC

Join us on Thursday, May 29, 2025 for the Fedora 42 release party! Free registration is now open for the event, and you can find an early draft of the event schedule on the wiki page. We will be hosting the event in a dedicated Matrix room, for which registration is required to gain access, and will stream a mix of live and pre-recorded short sessions via Restream from 1300 UTC to 1600 UTC.

Read on for more information, although that intro might cover most of it 🙂

Why a change in format, kind of?

For F40, we trialled a two-day event of live presentations, streamed via YouTube into a Matrix room over Friday and Saturday. This was fine, but probably a little too long to ask people to participate in for the full duration.

For F41, we trialled ‘watch parties’ across three time zones – APAC, EMEA and NA/LATAM. This was OK, but had a lot of production issues behind the scenes, and unfortunately some in front of the scenes too!

So, for F42, we decided to run a poll to collect feedback on what kind of event and content people actually wanted. The majority prefer live presentations, but time zones are unfortunately still a thing, so we have decided to do a mix of live and pre-recorded sessions via Restream in a dedicated Matrix room. Each presentation will be 10-15 minutes in length, a perfect amount to take in the highlights, and the event itself will be three hours in duration. Let’s see how this remix goes! 🙂

What you can expect

  • 10-15 minute sessions that highlight all the good stuff that went into our Editions this release, plus hear about our new spin – COSMIC! You can also learn a little more about the Fedora Linux design process and get an update on the git forge work that’s in progress too, plus much more!
  • Real Time Matrix Chat – Attendees are welcome to chat in our event room, share thoughts, ask questions, and connect with others who are passionate about Fedora. Some speakers will be available for questions, but if you have any specific ones, you can always follow up with them outside of the event.
  • Opening Remarks from Fedora Leadership, updating you on very exciting things happening in the project, plus an introduction from our new FPL – Jef Spaleta!

Why a registration for an event on Matrix?

Right now we have no tooling on Matrix to gather metrics for the event. Plus, we want to avoid spammers as much as we possibly can. That’s why we are using a free registration system that will send an invitation to your email in advance of the event. We recommend registering in advance to avoid any last-minute issues, but just in case they are unavoidable anyway, we will have someone on hand to provide the room invite to attendees who have not received it.

As always, sessions will be available on the Fedora YouTube channel after the event for folks who want to re-watch or catch up on the talks. Also a big thank you to our CommOps team for helping put together this release party! We hope you enjoy the event, and look forward to celebrating the F42 release with you all on Thursday, May 29 from 1300 – 1600 UTC on Matrix!

The post Announcing the Fedora 42 Release Party – May 29, 2025 appeared first on Fedora Community Blog.

Updates (esp. Wiki) and Reboots

Posted by Fedora Infrastructure Status on 2025-05-22 21:00:00 UTC

We will be applying updates to all our servers and rebooting.

As part of this we will be doing a large upgrade to the Wiki, which will be down at least two hours.

The other services may be intermittently up or down during the outage window.

Working with One Identity Cloud PAM Linux agent logs in syslog-ng

Posted by Peter Czanik on 2025-05-21 13:16:25 UTC

One Identity Cloud PAM is one of the latest security products by One Identity. It provides asset management as well as secure and monitored remote access for One Identity Cloud users to hosts on their local network. Last year, I showed you how to collect One Identity Cloud PAM Network Agent log messages on Windows and create alerts when somebody connects to a host on your local network using PAM Essentials. This time, I will show you how to work with the Linux version of the Network Agent.

Read more at https://www.syslog-ng.com/community/b/blog/posts/working-with-one-identity-cloud-pam-linux-agent-logs-in-syslog-ng

syslog-ng logo