We will be updating and rebooting various servers. Services will be up or down during the outage window.
We might be doing some firmware upgrades, so when services reboot they may be down for longer than in previous "Update + Reboot" cycles.
Another week, another recap in longer form. I started to get all caught up from the holidays this week, but then sadly got derailed later in the week.
On Tuesday I migrated our https://pagure.io/fedora-infrastructure (Pagure) repo over to https://forge.fedoraproject.org/infra/tickets/ (Forgejo).
Things went mostly smoothly; the migration tool is pretty slick, and I borrowed a bunch from the checklist that the quality folks put together ( https://forge.fedoraproject.org/quality/tickets/issues/836 ). Thanks Adam and Kamil!
There are still a few outstanding things I need to do:
We need to update our docs everywhere it mentions the old url, I am working on a pull request for that.
I cannot seem to get the fedora-messaging hook working right. It might well be something I did wrong, but it is just not working.
Of course, no private issues were migrated; hopefully someday (soon!) we will be able to just migrate them over once there's support in Forgejo.
We could likely tweak the templates a bit more.
Once I sort out the fedora-messaging hook I should be able to look at moving our ansible repo over, which will be nice. forgejo's pull request reviews are much nicer, and we may be able to leverage lots of other fun features there.
Even though it started late (it was supposed to start last Wednesday, but didn't really get going until Friday morning), it finished over the weekend pretty easily. There was some cleanup and such, and then it was tagged in.
I updated my laptop and everything just kept working. I would like to shout out that openQA caught a mozjs bug landing (again) that would have broken gdm, so that got untagged and sorted and I never hit it here.
Wednesday night I noticed that one of our two 10G network links in the datacenter was topping out. I looked a bit, but chalked it up to the mass rebuild landing and causing everyone to sync all of rawhide.
Thursday morning there were more reports of issues with the master mirrors being very slow. The network was still saturated on that link (the other 10G link was only doing about 2-3Gbit/sec).
On investigation, it turned out that scrapers were now scraping our master mirrors. This was bad: the bandwidth used downloading every package ever over HTTP was saturating the link. These seemed to mostly be what I am calling "type 1" scrapers.
"Type 1" scrapers come from clouds or known network blocks. These are mostly known in anubis's lists, and it can just DENY them without too much trouble. They could also be blocked manually, but then you would have to maintain the list(s).
"Type 2" are the worse kind: browser botnets, where connections come from a vast, diverse set of consumer IPs. Since they are just using someone else's computer/browser, they don't care too much if they have to do a proof-of-work challenge. These are much harder to deal with, but if they are hitting specific areas, upping the amount of challenge anubis gives those areas helps, if only to slow them down.
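To illustrate the "type 1" case, here is a minimal sketch of matching a client IP against a maintained list of known cloud network blocks. The CIDR ranges below are documentation examples, not real scraper data, and the function name is hypothetical:

```python
import ipaddress

# Hypothetical illustration: "type 1" scrapers come from known cloud
# network blocks, so a request IP can be matched against a maintained
# CIDR blocklist. The ranges here are RFC 5737 examples, not real data.
CLOUD_BLOCKS = [ipaddress.ip_network(n) for n in ("203.0.113.0/24", "198.51.100.0/24")]

def is_type1_scraper(client_ip: str) -> bool:
    """Return True if the client IP falls inside a known cloud block."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in CLOUD_BLOCKS)

print(is_type1_scraper("203.0.113.42"))  # -> True (inside an example block)
print(is_type1_scraper("192.0.2.7"))     # -> False (not listed)
```

This is exactly the list-maintenance burden mentioned above: the blocklist only works as long as someone keeps it current, which is why handing the job to anubis is attractive.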
First order of business was to set up anubis in front of them. There's no epel9 package for anubis, so I went with the method we used for pagure (el8) and just set it up using a container. There was a bit of tweaking to get everything right, but I had it in place by mid-morning and it definitely cut the load there a great deal.
Also, at the same time, it turned out we still had prefork apache config on the download servers, which we have not used in a while. So I cleaned all that up and updated things so their apache setup could handle lots more connections.
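For readers unfamiliar with the prefork-to-event switch, this is the general shape of that kind of tuning. The values below are purely illustrative, not the ones actually deployed:

```
# Sketch only - illustrative values, not the production config.
# With mpm_event (unlike prefork), each child serves many connections
# via threads, so the same memory handles far more concurrent clients.
<IfModule mpm_event_module>
    ServerLimit         16
    ThreadsPerChild     64
    MaxRequestWorkers 1024   # must be <= ServerLimit * ThreadsPerChild
</IfModule>
```

The key constraint is that MaxRequestWorkers cannot exceed ServerLimit times ThreadsPerChild, so raising connection capacity usually means raising both.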
The bandwidth used was still high though, and a bit later I figured out why. The websites had been updated to point downloads of CHECKSUM files to the master mirrors, to make sure they were all coming from a known location. However, accidentally _all_ artifact download links were pointing to the master mirrors. Luckily we could handle the load, and also luckily there wasn't a release going on, so fewer people were downloading. Switching that back to point to mirrors made things happier.
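The split described above (checksums from a trusted host, artifacts from any mirror) works because the checksum lets the client prove a mirror's copy is intact. A minimal sketch, with hypothetical paths:

```python
import hashlib

# Sketch: the CHECKSUM file comes from a trusted master, the (large)
# artifact can come from any mirror - verifying the hash proves the
# mirror copy is intact. File paths here are hypothetical.
def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(artifact_path: str, expected_hex: str) -> bool:
    """Compare a downloaded artifact against its published checksum."""
    return sha256_of(artifact_path) == expected_hex
```

This is why pointing only the small CHECKSUM files at the master mirrors is cheap, while pointing all artifact links there saturates the link.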
So, hopefully scrapers handled again... for now.
So, as many folks may know, our Red Hat teams are all trying to use agile and scrum these days. We have various things in case anyone is interested:
We have daily standup notes from each team member in matrix. They submit with a bot and it posts to a team room. You can find them all in #cle-standups:fedora.im space on matrix. This daily is just a quick 'what did you do', 'what do you plan to do' any notes or blockers.
We have been doing retro/planning meetings, but those have been in video calls. However, there's no reason they need to be there, so I suggested, and we are going to try, just meeting on matrix for anyone interested. The first of these will be Monday in the #meeting-3:fedoraproject.org room at 15:00 UTC. We will talk about the last 2 weeks and plan what we want to try and get done in the next 2.
The forge projects boards are much nicer than the pagure boards were, and we can use them more effectively. Here's how it will work:
Right now the current sprint is in: https://forge.fedoraproject.org/infra/tickets/projects/325 and the next one is in: https://forge.fedoraproject.org/infra/tickets/projects/326
On Monday we will review the first, move everything that wasn't completed over to the second, add/tweak the second one, then close the first one, rename 'next' to 'current', and add a new 'next' one. This will allow us to track what was done in which sprint and be able to populate things for the next one.
Additionally, we are going to label tickets that come in and are just 'day-to-day' requests that we need to do and add those to the current sprint to track. That should help us get an idea of things that we are doing that we cannot plan for.
Mass update/reboot outage
=========================
Next week we are also going to be doing a mass update/reboot cycle, with an outage on Thursday. This is pretty overdue, as we haven't done one since before the holidays.
This week I enjoyed the podcast about Venezuela, the blog post about asking, and the one from Lara Hogan about employees getting stuck.
How can we make asking easier? - if you want public questions, you have to make it safe; otherwise, accept private ones.
Creating momentum when an employee is stuck - this was more helpful than I expected :-)
I’m Going to Dig a Hole - sometimes you have to let people do stupid things.
RPMs of Redis version 8.6 are available in the remi-modular repository for Fedora ≥ 42 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).
⚠️ Warning: this is a pre-release version not ready for production usage.
Packages are available in the redis:remi-8.6 module stream.
# dnf install https://rpms.remirepo.net/enterprise/remi-release-<ver>.rpm
# dnf module switch-to redis:remi-8.6/common
# dnf install https://rpms.remirepo.net/fedora/remi-release-<ver>.rpm
# dnf module reset redis
# dnf module enable redis:remi-8.6
# dnf install redis --allowerasing
You may have to remove the valkey-compat-redis compatibility package.
Some optional modules are also available:
These packages are weak dependencies of Redis, so they are installed by default (if install_weak_deps is not disabled in the dnf configuration).
The modules are automatically loaded after installation and service (re)start.
The modules are not available for Enterprise Linux 8.
Valkey also provides a similar set of modules, requiring some packaging changes already applied in Fedora official repository.
Redis may be proposed for unretirement and be back in the Fedora official repository, by me if I find enough motivation and energy, or by someone else.
I may also try to solve packaging issues for other modules (e.g. RediSearch). For now, module packages are very far from Packaging Guidelines, so obviously not ready for a review.
redis
redis-bloom
redis-json
redis-timeseries
RPMs of Redis version 8.4 are available in the remi-modular repository for Fedora ≥ 41 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).
Packages are available in the redis:remi-8.4 module stream.
# dnf install https://rpms.remirepo.net/enterprise/remi-release-<ver>.rpm
# dnf module switch-to redis:remi-8.4/common
# dnf install https://rpms.remirepo.net/fedora/remi-release-<ver>.rpm
# dnf module reset redis
# dnf module enable redis:remi-8.4
# dnf install redis --allowerasing
You may have to remove the valkey-compat-redis compatibility package.
Some optional modules are also available:
These packages are weak dependencies of Redis, so they are installed by default (if install_weak_deps is not disabled in the dnf configuration).
The modules are automatically loaded after installation and service (re)start.
The modules are not available for Enterprise Linux 8.
Valkey also provides a similar set of modules, requiring some packaging changes already proposed for Fedora official repository.
Redis may be proposed for unretirement and be back in the Fedora official repository, by me if I find enough motivation and energy, or by someone else.
I may also try to solve packaging issues for other modules (e.g. RediSearch). For now, module packages are very far from Packaging Guidelines, so obviously not ready for a review.
redis
redis-bloom
redis-json
redis-timeseries
You connected with CentOS in 2026
You attended the 2026 iteration of DevConf.IN, an annual free and open source conference in India!
Somehow this whole DevOps thing is all about generating the wildest things from some (usually equally wild) template.
And today we're gonna generate YAML from ERB, what could possibly go wrong?!
Well, actually, quite a lot, so one wants to validate the generated result before using it to break systems at scale.
The YAML we generate is a cloud-init cloud-config, and while checking that we generated a valid YAML document is easy (and we were already doing that), it would be much better if we could check that cloud-init can actually use it.
Enter cloud-init schema, or so I thought.
Turns out running cloud-init schema is rather broken without root privileges,
as it tries to load a ton of information from the running system.
This seems like a bug (or multiple), as the data should not be required for the validation of the schema itself.
I've not found a way to disable that behavior.
Luckily, I know Python.
Enter evgeni-knows-better-and-can-write-python:
#!/usr/bin/env python3
import sys

from cloudinit.config.schema import get_schema, validate_cloudconfig_file, SchemaValidationError

try:
    valid = validate_cloudconfig_file(config_path=sys.argv[1], schema=get_schema())
    if not valid:
        raise RuntimeError("Schema is not valid")
except (SchemaValidationError, RuntimeError) as e:
    print(e)
    sys.exit(1)
The canonical¹ version of this lives in the Foreman git repo, so go there if you think this will ever receive any updates.
The hardest part was understanding the validate_cloudconfig_file API,
as it will sometimes raise a SchemaValidationError,
sometimes a RuntimeError, and sometimes just return False.
No idea why.
But the above just turns it into a couple of printed lines and a non-zero exit code;
unless of course there are no problems, in which case you get peaceful silence.
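The wrapper pattern above generalizes: take an API that signals failure three different ways (one of two exception types, or a falsy return) and normalize it to a single boolean. A standalone sketch, with `flaky_validate` as a stand-in rather than a real cloud-init function:

```python
# Generic sketch of the normalization used in the script above: an API
# that may raise exception A, raise exception B, or return False gets
# wrapped into one predictable boolean. `flaky_validate` is a stand-in,
# not a real cloud-init function.
def normalize(flaky_validate, *args) -> bool:
    try:
        ok = flaky_validate(*args)
    except (ValueError, RuntimeError) as e:
        print(e)          # report the failure detail
        return False
    return bool(ok)       # treat a falsy return as failure too

print(normalize(lambda: True))   # -> True (quiet success)
print(normalize(lambda: False))  # -> False (falsy return)
```

All three failure modes collapse into `False`, which is what lets the original script reduce everything to a print and an exit code.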
"canonical", not "Canonical" ↩
There is heavy scraper activity against dl.fedoraproject.org. We are working to mitigate it.
Most modern issue trackers offer a label mechanism (sometimes called “tags” or a similar name) that allow you or your users to set metadata on issues and pull/merge requests. It’s fun to set them up and anticipate all of the cool things you’ll do. But it turns out that labels you don’t use are worse than useless. As I wrote a few years ago, “adding more labels adds cognitive overhead to creating and managing issues, so you don’t want to add complexity when you don’t have to.”
A label that you don’t use just complicates the experience and doesn’t give you useful information. A label that you’re not consistent in using will lead to unreliable analysis data. Use your labels.
Jeff Fortin Tam highlighted one benefit to using labels: after two years of regular use in GNOME, it was easy to see nearly a thousand performance improvements because of the “Performance” label. (As of this writing, the count is over 1,200.)
The problem with labels is that they’re either present or they’re not. If your process requires affirmatively adding labels, then you can’t treat the absence of a label as significant. The label might be absent because it doesn’t apply, or it might be absent because nobody remembered to apply it. By the same token, you don’t want to apply all the labels up front and then remove the ones that don’t apply. That’s a lot of extra effort.
There are two parts of having consistent label usage. The first is having a simple and well-documented label setup. Only have the labels you need. A label that only applies to a small number of issues is probably not necessary. Clearly document what each label is for and under what conditions it should be applied.
The other part of consistent label usage is to automatically apply a “needs triage” label. Many ticket systems support doing this in a template or with an automated action. When someone triages an incoming issue, they can apply the appropriate labels and then remove the “needs triage” label. Any issue that still includes a “needs triage” label should be excluded from any analysis, since you can reasonably infer that it hasn’t been appropriately labeled.
You’ll still miss a few here and there, but that will help you use your labels, and that makes the labels valuable.
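The "needs triage" rule above translates directly into an analysis filter: drop anything still carrying the label, since its other labels can't be trusted yet. A sketch with hypothetical issue dicts standing in for whatever your tracker's API returns:

```python
# Sketch: exclude issues still marked "needs triage" from label-based
# analysis - their labels haven't been reviewed yet. The issue dicts
# are hypothetical stand-ins for a real tracker API response.
issues = [
    {"id": 1, "labels": ["Performance"]},
    {"id": 2, "labels": ["needs triage"]},
    {"id": 3, "labels": ["Performance", "needs triage"]},
]

analyzable = [i for i in issues if "needs triage" not in i["labels"]]
print([i["id"] for i in analyzable])  # -> [1]
```

Note that issue 3 is excluded even though it has a "Performance" label: until triage finishes, you can't tell whether the labeling is complete.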
This post’s featured photo by Angèle Kamp on Unsplash.
The post Use your labels appeared first on Duck Alignment Academy.
The syslog-ng 4.11 release is right around the corner. Thousands of automatic tests run before each new piece of source code is merged, but nothing can replace real-world hands-on tests. So help us test Elasticsearch / OpenSearch data-streams, the Kafka source, cmake fixes and much more!
The development of syslog-ng is supported by thousands of automatic test cases. Nothing can enter the syslog-ng source code before all of these tests pass. In theory, I could ask my colleagues at any moment to make a release from the current state of the syslog-ng development branch once all tests pass. However, before my current job, I was working as a director of quality assurance, so I have a different take on testing things. Automatic test cases are indeed fantastic and help us to catch many problems during development. However, nothing can replace real-world users trying to use the latest version of your software.
Personally, I run a nightly or git snapshot build of syslog-ng on all my hosts. However, none of my machines are mission-critical, where downtime would cost $$$ with each and every passing minute. While syslog-ng snapshot builds are usually quite stable and breaking configuration changes are rare, I still do not recommend installing these builds on critical servers. On the other hand, I am a big fan of production testing on hosts where running into occasional problems is not a critical issue.
Read more at https://www.syslog-ng.com/community/b/blog/posts/call-for-testing-syslog-ng-4-11-is-coming

Photo source: Ray Hennessy (@rayhennessy) | Unsplash
Last week in Rijeka we held Science festival 2015. This is the (hopefully not unlucky) 13th instance of the festival that started in 2003. Popular science events were organized in 18 cities in Croatia.
I was invited to give a popular lecture at the University departments open day, which is a part of the festival. This is the second time in a row that I got invited to give popular lecture at the open day. In 2014 I talked about The Perfect Storm in information technology caused by the fall of economy during 2008-2012 Great Recession and the simultaneous rise of low-cost, high-value open-source solutions. Open source completely changed the landscape of information technology in just a few years.
Photo source: Andre Benz (@trapnation) | Unsplash
When Linkin Park released their second album Meteora, they had a quote on their site that went along the lines of
Musicians have their entire lives to come up with a debut album, and only a very short time afterward to release a follow-up.
Photo source: Almos Bechtold (@almosbech) | Unsplash
Last week brought us two interesting events related to open-source movement: 2015 Red Hat Summit (June 23-26, Boston, MA) and Skeptics in the pub (June 26, Rijeka, Croatia).
Photo source: Trnava University (@trnavskauni) | Unsplash
In 2012 University of Rijeka became NVIDIA GPU Education Center (back then it was called CUDA Teaching Center). For non-techies: NVIDIA is a company producing graphical processors (GPUs), the computer chips that draw 3D graphics in games and the effects in modern movies. In the last couple of years, NVIDIA and other manufacturers allowed the usage of GPUs for general computations, so one can use them to do really fast multiplication of large matrices, finding paths in graphs, and other mathematical operations.
Photo source: j (@janicetea) | Unsplash
The Journal of Physical Chemistry Letters (JPCL), published by American Chemical Society, recently put out two Viewpoints discussing open-source software:
Viewpoints are not detailed reviews of the topic, but instead, present the author's view on the state-of-the-art of a particular field.
The first of two articles stands for open source and open data. The article describes Quantum Chemical Program Exchange (QCPE), which was used in the 1980s and 1990s for the exchange of quantum chemistry codes between researchers and is roughly equivalent to the modern-day GitHub. The second of two articles questions the open-source software development practice, advocating the usage and development of proprietary software. I will dissect and counter some of the key points from the second article below.
Photo source: Alina Grubnyak (@alinnnaaaa) | Unsplash
Back in late August and early September, I attended 4th CP2K Tutorial organized by CECAM in Zürich. I had the pleasure of meeting Joost VandeVondele's Nanoscale Simulations group at ETHZ and working with them on improving CP2K. It was both fun and productive; we overhauled the wiki homepage and introduced acronyms page, among other things. During a coffee break, there was a discussion on the JPCL viewpoint that speaks against open-source quantum chemistry software, which I countered in the previous blog post.
But there is a story from the workshop which somehow remained untold, and I wanted to tell it at some point. One of the attendants, Valérie Vaissier, told me how she used proprietary quantum chemistry software during her Ph.D.; if I recall correctly, it was Gaussian. Eventually, she decided to learn CP2K and made the switch. She liked CP2K better than the proprietary software package because it is available free of charge, the reported bugs get fixed quicker, and the group of developers behind it is very enthusiastic about their work and open to outsiders who want to join the development.
Photo source: Andrew Dawes (@andrewdawes) | Unsplash
Over the last few years, AMD has slowly been walking the path towards having fully open source drivers on Linux. AMD did not walk alone, they got help from Red Hat, SUSE, and probably others. Phoronix also mentions PathScale, but I have been told on Freenode channel #radeon this is not the case and found no trace of their involvement.
AMD finally publically unveiled the GPUOpen initiative on the 15th of December 2015. The story was covered on AnandTech, Maximum PC, Ars Technica, Softpedia, and others. For the open-source community that follows the development of Linux graphics and computing stack, this announcement comes as hardly surprising: Alex Deucher and Jammy Zhou presented plans regarding amdgpu on XDC2015 in September 2015. Regardless, public announcement in mainstream media proves that AMD is serious about GPUOpen.
I believe GPUOpen is the best chance we will get in this decade to open up the driver and software stacks in the graphics and computing industry. I will outline the reasons for my optimism below. As for the history behind open-source drivers for ATi/AMD GPUs, I suggest the well-written reminiscence on Phoronix.
Photo source: Patrick Bellot (@pbellot) | Unsplash
This week Microsoft released Computational Network Toolkit (CNTK) on GitHub, after open sourcing Edge's JavaScript engine last month and a whole bunch of projects before that.
Even though the open sourcing of a bunch of their software is a very nice move from Microsoft, I am still not convinced that they have changed to the core. I am sure there are parts of the company who believe that free and open source is the way to go, but it still looks like a change just on the periphery.
All the projects they have open-sourced so far are not the core of their business. Their latest version of Windows is no more friendly to alternative operating systems than any version of Windows before it, and one could argue it is even less friendly due to more Secure Boot restrictions. Using Office still basically requires you to use Microsoft's formats and, in turn, accept their vendor lock-in.
Put simply, I think all the projects Microsoft has opened up so far are a nice start, but they still have a long way to go to gain respect from the open-source community. What follows are three steps Microsoft could take in that direction.
Photo source: Álvaro Serrano (@alvaroserrano) | Unsplash
!!! info Reposted from Free to Know: Open access & open source, originally posted by STEMI education on Medium.
In June 2014, Elon Musk opened up all Tesla patents. In a blog post announcing this, he wrote that patents "serve merely to stifle progress, entrench the positions of giant corporations and enrich those in the legal profession, rather than the actual inventors." In other words, he joined those who believe that free knowledge is the prerequisite for a great society -- that it is the vibrancy of the educated masses that can make us capable of handling the strange problems our world is made of.
The movements that promote and cultivate this vibrancy are probably most frequently associated with terms "Open access" and "open source". In order to learn more about them, we Q&A-ed Vedran Miletić, the Rocker of Science -- researcher, developer and teacher, currently working in computational chemistry, and a free and open source software contributor and activist. You can read more of his thoughts on free software and related themes on his great blog, Nudged Elastic Band. We hope you will join him, us, and Elon Musk in promoting free knowledge, cooperation and education.
Photo source: Giammarco Boscaro (@giamboscaro) | Unsplash
Today I vaguely remembered there was one occasion in 2006 or 2007 when some guy from the academia doing something with Java and Unicode posted on some mailing list related to the free and open-source software about a tool he was developing. What made it interesting was that the tool was open source, and he filed a patent on the algorithm.
Photo source: Elena Mozhvilo (@miracleday) | Unsplash
Hobbyists, activists, geeks, designers, engineers, etc have always tinkered with technologies for their purposes (in early personal computing, for example). And social activists have long advocated the power of giving tools to people. An open hardware movement driven by these restless innovators is creating ingenious versions of all sorts of technologies, and freely sharing the know-how through the Internet and more recently through social media. Open-source software and more recently hardware is also encroaching upon centers of manufacturing and can empower serious business opportunities and projects.
The free software movement is cited as both an inspiration and a model for open hardware. Free software practices have transformed our culture by making it easier for people to become involved in producing things from magazines to music, movies to games, communities to services. With advances in digital fabrication making it easier to manipulate materials, some now anticipate an analogous opening up of manufacturing to mass participation.
Photo source: Arkadiusz Gąsiorowski (@ambuscade) | Unsplash
Inf2 is a web server at University of Rijeka Department of Informatics, hosting Sphinx-produced static HTML course materials (mirrored elsewhere), some big files, a WordPress instance (archived elsewhere), and an internal instance of Moodle.
HTTPS was enabled on inf2 for a long time, albeit using a self-signed certificate. However, with Let's Encrypt coming into public beta, we decided to join the movement to HTTPS.
Photo source: Patrick Tomasso (@impatrickt) | Unsplash
Yesterday I was asked by Edvin Močibob, a friend and a former student teaching assistant of mine, the following question:
You seem to be using Sphinx for your teaching materials, right? As far as I can see, it doesn't have an online WYSIWYG editor. I would be interested in comparison of your solution with e.g. MediaWiki.
While the advantages and the disadvantages of static site generators, when compared to content management systems, have been written about and discussed already, I will outline our reasons for the choice of Sphinx below. Many of the points have probably already been presented elsewhere.
Photo source: Vincent van Zalinge (@vincentvanzalinge) | Unsplash
The last day of July happened to be the day that Domagoj Margan, a former student teaching assistant and a great friend of mine, set up his own DigitalOcean droplet running a web server and serving his professional website on his own domain domargan.net. For a few years, I was helping him by providing space on the server I owned and maintained, and I was always glad to do so. Let me explain why.
Photo source: Tuva Mathilde Løland (@tuvaloland) | Unsplash
Post theme song: Mirror mirror by Blind Guardian
A mirror is a local copy of a website that's used to speed up access for the users residing in the area geographically close to it and reduce the load on the original website. Content distribution networks (CDNs), which are a newer concept and perhaps more familiar to younger readers, serve the same purpose, but do it in a way that's transparent to the user; when using a mirror, the user will see explicitly which mirror is being used because the domain will be different from the original website, while, in case of CDNs, the domain will remain the same, and the DNS resolution (which is invisible to the user) will select a different server.
Free and open-source software was distributed via (FTP) mirrors, usually residing in the universities, basically since its inception. The story of Linux mentions a directory on ftp.funet.fi (FUNET is the Finnish University and Research Network) where Linus Torvalds uploaded the sources, which was soon after mirrored by Ted Ts'o on MIT's FTP server. The GNU Project's history contains an analogous process of making local copies of the software for faster downloading, which was especially important in the times of pre-broadband Internet, and it continues today.
Photo source: Eugenio Mazzone (@eugi1492) | Unsplash
Back in summer 2017, I wrote an article explaining why we used Sphinx and reStructuredText to produce teaching materials and not a wiki. In addition to recommending Sphinx as the solution to use, it was general praise for generating static HTML files from Markdown or reStructuredText.
This summer I made the conversion of teaching materials from reStructuredText to Markdown. Unfortunately, the automated conversion using Pandoc didn't quite produce the result I wanted so I ended up cooking my own Python script that converted the specific dialect of reStructuredText that was used for writing the contents of the group website and fixing a myriad of inconsistencies in the writing style that accumulated over the years.
Photo source: Tim Mossholder (@ctimmossholder) | Unsplash
I sometimes joke with my TA Milan Petrović that his usage of RAR does not imply that he will be driving a rari. After all, he is not Devito rapping^Wsinging Uh 😤. Jokes aside, if you search for "should I use RAR" or a similar phrase on your favorite search engine, you'll see articles like 2007 Don't Use ZIP, Use RAR and 2011 Why RAR Is Better Than ZIP & The Best RAR Software Available.
Photo source: Santeri Liukkonen (@iamsanteri) | Unsplash
Tough question, and the one that has been asked and answered over and over. The simplest answer is, of course, it depends on many factors.
As I started blogging at the end of my journey as a doctoral student, the topic of how I selected the field and ultimately decided to enroll in the postgraduate studies never really came up. In the following paragraphs, I will give a personal perspective on my Ph.D. endeavor. Just like other perspectives from doctors of not that kind, it is specific to the person in the situation, but parts of it might apply more broadly.
Photo source: Jahanzeb Ahsan (@jahan_photobox) | Unsplash
This month we had Alumni Meeting 2023 at the Heidelberg Institute for Theoretical Studies, or HITS for short. I was very glad to attend this whole-day event and reconnect with my former colleagues as well as researchers currently working in the area of computational biochemistry at HITS. After all, this is the place and the institution where I worked for more than half of my time as a postdoc, where I started regularly contributing code to GROMACS molecular dynamics simulator, and published some of my best papers.
Photo source: Darran Shen (@darranshen) | Unsplash
My employment as a research and teaching assistant at Faculty of Informatics and Digital Technologies (FIDIT for short), University of Rijeka (UniRi) ended last month with the expiration of the time-limited contract I had. This moment has marked almost two full years I spent in this institution and I think this is a good time to take a look back at everything that happened during that time. Inspired by the recent posts by the PI of my group, I decided to write my perspective on the time that I hope is just the beginning of my academic career.
Prague is calling! The deadline for the Flock 2026 CFP (Call for Proposals) is fast approaching. You have until Monday, February 2nd to submit your session ideas for Fedora’s premier contributor conference.
We are returning to the heart of Europe (June 14–16) to define the next era of our operating system. Whether you are a kernel hacker, a community organizer, or an emerging local-first AI enthusiast, Flock is where the roadmap for the next year in Fedora gets written.
If you haven’t submitted yet, here is why you should.
This year isn’t just about maintenance; it is about architecture. As we look toward Fedora Linux 45 and 46, we are also laying the upstream foundation for Enterprise Linux 11. This includes RHEL 11, CentOS Stream 11, EPEL 11, and the downstream rebuilder ecosystem around the projects. The conversations happening in Prague will play a part in the next decade of modern Linux enterprise computing.
To guide the schedule, we are looking for submissions across our Four Foundations:
Freedom (The Open Frontier): How are we pushing the boundaries of what Open Source can do? We are looking for Flock 2026 CFP submissions covering:
Friends (Our Fedora Story): Code is important, but community is critical. We need sessions that focus on the human element:
Features (Engineering Core): The “Nitty-Gritty” of the distribution. If you work on the tools that build the OS every six months, we want you on stage:
First (Blueprint for the Future): Fedora is “First.” This track is for the visionaries:
If you require financial support to attend, please remember that the Flock 2026 CFP submission is separate from the Travel Subsidy application, but speakers are prioritized. Make sure to apply for the Travel Subsidy by March 8.
Don’t let the deadline slip past you. Head over to our CFP platform and draft your proposal today.
Submit to the Flock 2026 CFP. See you in Prague!
Note: AI (Google Gemini) edited human-generated content first written by me, the author, to write this article. I, the author, edited the AI-generated output before making this Community Blog article. If you notice mistakes, please provide a correction as a reply to this topic.
The post 2 Weeks Left: The Flock 2026 CFP Ends Feb 2 appeared first on Fedora Community Blog.
Hi folks! Over the last couple of weeks, we have migrated nearly all the quality team's repositories from Pagure (the old Fedora forge) to the new, Forgejo-based Fedora Forge. As part of this, I've figured out a process for doing CI with Forgejo Actions. I also came up with a way to do automated LLM pull request reviews, for those interested in that.
For the impatient, you can just look at / copy the two workflows in python-wikitcms, but you'll at least need to read the stuff about runners below.
Forgejo Actions works very similarly to GitHub Actions, by design. You create a .forgejo/workflows directory in your project and define workflows in it. The syntax is almost entirely compatible with GitHub Actions, but with several missing features.
Some very commonly-used shared actions, like actions/checkout, are ported to Forgejo so you can use them directly. Other shared and third-party actions can be used by giving a full URL to them - e.g. uses: https://github.com/actions-ecosystem/action-remove-labels@2ce5d41b4b6aa8503e285553f75ed56e0a40bae0 # v1.3.0 - but whether a given action will work or not depends on whether it's written to assume it's running on public GitHub, and whether Forgejo has all the features it needs.
Probably the most noticeable difference with using GitHub Actions is runner availability and environment. If you have a public GitHub project you can define workflows with something like runs-on: ubuntu-latest; behind the scenes, GitHub maintains a farm of runners with various labels, of which ubuntu-latest is one, and your jobs will run on any available runner with that label. The available environments for public GitHub repos are a handful of Ubuntu, Windows and macOS versions.
The staging instance of Fedora Forge has a few universal runners you can use like this. Currently each has only one, unique, label, so you can't specify workflows with a label like fedora and have them run on any available runner; you have to just pick one of the labels, and your jobs will always run on that runner. Maybe this will get changed at some point. But the runners are available to all repos in the staging instance, so you can just define a workflow and get it run.
Currently the production instance has no universal runners like this; runners are limited to specific organizations. The releng and infra organizations have runners, and now that I've requested one, the quality organization has one too. If you want to run workflows for projects in a different organization, the first thing you'll need to do is file a ticket to request runner(s) for that organization. If you have admin access to an organization, you can see whether it has runners, and what labels they have, by visiting https://forge.fedoraproject.org/org/<organization>/settings/actions/runners.
Once your org has at least one runner, you can define workflows and they'll run, as long as you set the runs-on value to a label that at least one of the runners has.
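For instance, a minimal workflow might look like this (a sketch only; the fedora label and the file name are assumptions, so substitute a label one of your organization's runners actually advertises):

```yaml
# .forgejo/workflows/hello.yml - minimal sketch
name: Hello
on: [push]
jobs:
  hello:
    runs-on: fedora   # must match a label on one of your org's runners
    steps:
      - run: echo "running on a Forgejo runner"
```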
However, you might be surprised by the default environment: it's currently Debian Bookworm. Until that gets fixed, you may be interested in the container directive for workflows, which lets you define any arbitrary container image to be used:
container:
image: quay.io/fedora/fedora:latest
There is one little gotcha with this, though. Many GitHub actions, including checkout, are written in Node, but Fedora's stock container images don't have Node installed. So you have to install it before running checkout or anything else that uses Node.
Put it all together, and here's the workflow I've defined for doing CI on Python projects with Tox:
name: CI via Tox
on:
  pull_request:
    types: [opened, synchronize]
jobs:
  tox:
    runs-on: fedora
    container:
      image: quay.io/fedora/fedora:latest
    steps:
      - name: Install required packages
        run: dnf -y install nodejs tox git
      - uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
        with:
          fetch-depth: 0
      - name: Install Python interpreters
        run: for py in 3.6 3.9 3.10 3.11 3.12 3.13; do dnf -y install python$py; done
      - name: Test with tox
        run: tox
That runs whenever a pull request is opened or pushed (the on section). It expects a runner with the fedora label (the runs-on setting). It uses the fedora:latest container image from quay.io (the container setting). From that image, we install packages we're going to need - including nodejs (the first step). Then we run actions/checkout to check out the PR (the second step, the uses one). Then we install all the Python interpreters we need, and run tox (the final two steps). Of course, if your project isn't Python or doesn't use Tox, you'll have to tweak this a bit, but hopefully you get the general idea.
If you're security-minded, you might notice there's no permissions setting in this workflow. That's because Forgejo currently does not support fine-grained permissions in the automatically-generated workflow tokens. In Forgejo, the automatically-generated token always has full read/write privileges unless it's operating on a pull request from a fork, in which case it has only read permissions. Nothing finer-grained is possible at present. If you need something finer-grained, you have to generate a token manually, save it as a repository secret, and adjust your workflow (somehow) to use that and hide the automatically-generated token as far as is practically possible (that's outside the scope of this post).
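For what it's worth, one way that adjustment might look, as a sketch only (REPO_TOKEN is a hypothetical secret name you created from a manually-generated token, and this does not by itself hide the automatic token):

```yaml
steps:
  - uses: actions/checkout@v4
    with:
      token: ${{ secrets.REPO_TOKEN }}   # manually-generated, finer-scoped token
```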
So that's CI! What about LLM pull request review? Well, if you dislike or are not interested in that, stop reading now. If you are interested, here's a recipe:
name: AI Code Review
on:
  pull_request_target:
    types: [labeled]
jobs:
  ai-review:
    if: forgejo.event.label.name == 'ai-review-please'
    runs-on: fedora
    container:
      image: registry.gitlab.com/redhat/edge/ci-cd/ai-code-review:v2.3.0
    steps:
      - name: Run AI Review
        env:
          AI_API_KEY: ${{ secrets.GEMINI_API_KEY }}
        run: ai-code-review --platform forgejo --pr-number ${{ forgejo.event.pull_request.number }} --post
  # this has to be a separate job because the ai-code-review container does not have nodejs in it
  # also note this does not work for PRs from forks because of a forgejo bug
  # https://codeberg.org/forgejo/forgejo/issues/10733
  remove-label:
    runs-on: fedora
    steps:
      - uses: https://github.com/actions-ecosystem/action-remove-labels@2ce5d41b4b6aa8503e285553f75ed56e0a40bae0 # v1.3.0
        with:
          labels: ai-review-please
That will cause the ai-code-review tool to review the pull request and post its analysis as a comment.
Just a couple of things to note here. I decided to have the LLM review happen only when a pull request is given a special label. LLM reviews are relatively expensive, and also quite verbose; you don't necessarily want one cluttering up the ticket any time a pull request is created or edited, and you may not want to make it possible for someone to charge some LLM usage to your account as often as they like just by creating or editing a pull request.
So, to use this recipe you have to create a label called ai-review-please in your repository. You can do this by going to "Issues", then clicking "Labels", then "New label". Give it whatever color and description you like. Any time you add that label to a PR, the review process will be triggered. Before adding the label to a PR you should probably make sure the PR is well-intentioned and not attempting any kind of prompt injection to get ai-code-review to disclose a secret or mess with the repository.
The other thing is you need an AI provider API key. In this recipe we have a Gemini API key saved as a repository secret called GEMINI_API_KEY. To create repository secrets, go to repository "Settings", then "Actions", then "Secrets", and click "Add secret". In the workflow, we make the repository secret called GEMINI_API_KEY (secrets.GEMINI_API_KEY) available in the container as the environment variable AI_API_KEY; ai-code-review reads it in from there. Gemini is the default LLM provider for ai-code-review. You can also use OpenAI or Anthropic by adding an --ai-provider argument to the ai-code-review call in the workflow (obviously, then, the secret you export as AI_API_KEY must be a valid key for that provider). I'm hoping that in the not-too-distant future, we'll have an LLM model provider in Fedora infra, running open source models, that we can use for this purpose; for now, unfortunately, we have to use the hyperscaler ones.
Finally, as noted in the comment, the workflow is intended to remove the ai-review-please label when it runs (so you don't have to remove it manually, then add it again, if you want another review later), but this does not currently work for pull requests from forks due to a Forgejo bug (because we're using pull_request_target the workflow token should have write permissions even for a fork PR, but it doesn't). If you use it on a fork PR, you'll have to remove the label manually once the workflow has triggered.
You can, of course, change the on block to be the same as the CI recipe if you want to have LLM review run automatically whenever a PR is created or edited - but do make sure whoever's paying the bills for the API key is OK with that, and monitor the repo to make sure nobody starts creating hundreds of PRs to try and blow your budget...and hope/pray nobody manages a successful prompt injection attack. On the whole I'd stick with the label (only repository admins can label PRs, so a non-admin attacker can't apply the label themselves to trigger the review).
Like a lot of Europeans, I realise that the US isn't a reliable partner anymore.
I am just talking about myself here, and not my employer, or my family.
I do use a lot of services that are based in the US, and some I will probably not migrate in the near future.
But when I receive a bill, it is a good moment to consider if it is a candidate for migration.
Greetings everyone. Happy new year!
This last week was the first full week I was back at it after the holidays.
So, how did that go? It was pretty good. I did manage to finish my December of docs, missing only one day. I closed about 40 tickets and made about 30 PRs for infra docs. Good progress. I might try and resume doing them; it is kind of nice to chip away at them over time.
I got a number of home projects done that I kept not having time for:
Migrated all my bespoke firewall rules to nftables
Fixed up my email setup to block more junk
Finally set up fail2ban when the ssh, dovecot, and postfix brute-forcers were making my logs hard to read. I wasn't worried about any of them getting in, but no reason not to just block them all.
Also did a bunch of learning about solar and home backups and batteries. I'm likely to pull the trigger on a solar system before too long.
Sadly, the world in 2026 is pretty horrible, but I will do my best to do what I can to help others and hope we can all collectively return the world to some sanity.
The fesco elections completed and I was elected again. Thanks so much to everyone who voted for me. I will try and do my best (as always). If you have a concern or issue for fesco, feel free to let me know.
I think the slate of candidates was excellent, and I look forward to working with the new members to make fedora better.
I tried to keep an eye on things during the holidays, so catching up wasn't as bad as if I had ignored everything, but there was still a lot to process through. I think I have now mostly done so, so if you asked me something or sent me a message looking for a reply and haven't seen it yet, please do check with me again.
The f44 mass rebuild started, albeit a few days late. There was a large boost update and then some last-minute tools builds that had to get in, but it's finally going now since yesterday. Typically these take a few days to rebuild almost everything, then there are a few stragglers. I am not sure if it will be done by monday, but it shouldn't be too far off that. Then, we will merge things in and start getting ready for branching in a few weeks.
Next tuesday ( 2026-01-20 at 21UTC) I plan to migrate the fedora infra ticket tracker ( https://pagure.io/fedora-infrastructure ) to the new forge ( https://forge.fedoraproject.org/infra/tickets ). I sent out an announcement and will try and make things as smooth as possible.
If you do interact with infra you may want to login to forge and set your notifications as you like. Right now email notifications are not on by default, so if you want emails you need to enable them in your prefs.
I'm really looking forward to having this moved. We will likely look at moving our ansible repo after this and a few others. Our docs repo was already moved before the holidays.
I have a few things I really want to get to soon:
I want to FINALLY land the pesign changes to sign secure boot via sigul. I have a PR I made last year; I need to fix it up some, then deploy and test.
I want to look into the 502s people have been seeing, try and figure out where in the proxy stack that's happening, and hopefully fix it up.
We got to over 100 pending tickets last week. I processed/fixed a number, but we are still at 92 as of today. I'd like to fight those down again.
As always, comment on mastodon: https://fosstodon.org/@nirik/115911910269440220
Last year, I spent a few months exchanging emails and packet dumps with Zoom's support. They provide videoconferencing software which gave me connectivity problems. Specifically, the Zoom client on my work MacBook failed when connected to my home network. To make reporting harder, Zoom's webpage did not open in Chrome on that laptop either.
Perplexingly, my private Fedora laptop on the same network had no problems whatsoever. I'll spare you the details of the weeks-long investigation (no, it wasn't DNS's fault; nor Cloudflare's).
The problem source…
The macOS DHCP client ignores the Maximum Transmission Unit (option 26) sent by the DHCP server!
I couldn't believe it, but there are tons of similar reports around the net. Apparently, Apple deems MTU information untrustworthy. Well, thank you 🍏, there went weeks of my time.
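For context, this is the kind of server-side setting that gets ignored; a sketch assuming the home DHCP server is dnsmasq:

```
# dnsmasq.conf: advertise the interface MTU (DHCP option 26) to clients
dhcp-option=26,1492
```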
After my ISP was acquired, I had to deal with blast-from-the-past networking. The plain, reliable Ethernet connection was replaced with PPPoE, lowering the MTU to 1492 bytes and bringing distaste and headaches.
Most communications use the TCP/IP protocol suite, and TCP knows how to deal with a decreased MTU. Modern pages (and Zoom) try to use the QUIC protocol, which works over UDP. UDP has no mechanism for Path MTU Discovery, so packets that are too big are simply dropped. PMTUD for QUIC is still at a draft stage.
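To see why a 1492-byte MTU bites UDP-based protocols, a quick back-of-the-envelope sketch (my illustration, not from the original post):

```python
# Largest UDP payload that fits in a single IPv4 packet for a given link MTU.
IP_HEADER = 20   # IPv4 header without options
UDP_HEADER = 8   # UDP header

def max_udp_payload(mtu: int) -> int:
    return mtu - IP_HEADER - UDP_HEADER

print(max_udp_payload(1500))  # plain Ethernet -> 1472 bytes
print(max_udp_payload(1492))  # PPPoE eats 8 bytes -> 1464 bytes
```

A sender assuming the usual 1500-byte Ethernet MTU can emit datagrams up to 28 bytes too large for the PPPoE link, and without working PMTUD those datagrams silently vanish.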
Due to other circumstances, I had disabled IPv6 connectivity on my work laptop around summer last year. I didn't notice at the time, but before that, Zoom worked fine over IPv6. MTU information from Router Advertisements is good enough for Apple, whereas from DHCP it is a no-no.
Kudos to Zoom Support, for spotting the MTU problem in the end.
030/100 of #100DaysToOffload
This is a report created by the CLE Team, a team of community members working in various Fedora groups, for example Infrastructure, Release Engineering, and Quality. This team is also moving forward some initiatives inside the Fedora project.
Week: 12 Jan – 16 Jan 2026
This team is taking care of day to day business regarding Fedora Infrastructure.
It’s responsible for services running in Fedora infrastructure.
Ticket tracker
This team is taking care of day to day business regarding CentOS Infrastructure and CentOS Stream Infrastructure.
It’s responsible for services running in CentOS Infrastructure and CentOS Stream.
CentOS ticket tracker
CentOS Stream ticket tracker
This team is taking care of day to day business regarding Fedora releases.
It’s responsible for releases, retirement process of packages and package builds.
Ticket tracker
This team is working on day to day business regarding Fedora CI and testing.
This team is working on deployment of forge.fedoraproject.org.
Ticket tracker

If you have any questions or feedback, please respond to this report or contact us on #admin:fedoraproject.org channel on matrix.
The post Community Update – Week 03 2026 appeared first on Fedora Community Blog.
This week I enjoyed the podcast about Godbolt’s Rule and the one with Sir Tim Berners-Lee.
You might notice more podcasts this week. Something went wrong with my link collection system recently.
I am also moving the blog … more things will go wrong.
Stop Picking Sides: Manage the Tension Between Adaptation and Optimization - nicely put. It’s similar to the creation vs maintenance tension.
Two years after PHP 8.0, and as announced, PHP version 8.1.34 was the last official release of PHP 8.1.
To keep a secure installation, the upgrade to a maintained version is strongly recommended:
Read:
ℹ️ However, given the very large number of downloads by users of my repository, this version is still available in the remi repository for Enterprise Linux (RHEL, CentOS, Alma, Rocky...) and Fedora, and will include the latest security fixes.
⚠️ This is a best-effort action, depending on my spare time, without any warranty, only to give users more time to migrate. It can only be temporary, and upgrading must be the priority.
You can also watch the sources repository on github.
RPMs of PHP version 8.5.2 are available in the remi-modular repository for Fedora ≥ 41 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).
RPMs of PHP version 8.4.17 are available in the remi-modular repository for Fedora ≥ 41 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).
RPMs of PHP version 8.3.30 are available in the remi-modular repository for Fedora ≥ 41 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).
ℹ️ These versions are also available as Software Collections in the remi-safe repository.
ℹ️ The packages are available for x86_64 and aarch64.
ℹ️ There is no security fix this month, so no update for version 8.2.30.
Version announcements:
ℹ️ Installation: Use the Configuration Wizard and choose your version and installation mode.
Replacement of default PHP by version 8.5 installation (simplest):
On Enterprise Linux (dnf 4)
dnf module switch-to php:remi-8.5/common
On Fedora (dnf 5)
dnf module reset php
dnf module enable php:remi-8.5
dnf update
Parallel installation of version 8.5 as Software Collection
yum install php85
Replacement of default PHP by version 8.4 installation (simplest):
On Enterprise Linux (dnf 4)
dnf module switch-to php:remi-8.4/common
On Fedora (dnf 5)
dnf module reset php
dnf module enable php:remi-8.4
dnf update
Parallel installation of version 8.4 as Software Collection
yum install php84
Replacement of default PHP by version 8.3 installation (simplest):
On Enterprise Linux (dnf 4)
dnf module switch-to php:remi-8.3/common
On Fedora (dnf 5)
dnf module reset php
dnf module enable php:remi-8.3
dnf update
Parallel installation of version 8.3 as Software Collection
yum install php83
And soon in the official updates:
⚠️ To be noted:
ℹ️ Information:
Base packages (php)
Software Collections (php83 / php84 / php85)
In a comment on the GitHub issue that sparked my post on measuring contributions outside working hours, Clark Boylan wrote:
I suspect that tracking affiliations is a better measure of who is contributing as part of their employment vs those who might be doing so on a volunteer basis.
Unfortunately, this isn’t as easy as it sounds. Let’s explore the complexity with a concrete example. When I was the Fedora Program Manager, I’d occasionally get asked some variation of “what percentage of Fedora contributors are from Red Hat?” The question usually included an implicit “and by ‘from Red Hat’, I mean ‘are working on Fedora as part of their job at Red Hat.'” I never gave a direct answer because I didn’t have one to give. Why not?
The first complication is that there aren’t two classifications of contributor: inside Red Hat and outside Red Hat. There are actually three classifications of contributor:
Because Fedora is a large project with many different work streams, the first two classifications of contributor have a fuzzy boundary. The same person may fit into the first or second category, depending on what they’re doing at the moment. When I was updating the release schedule, I was clearly in the first category. When I was updating the wordgrinder package, I was probably in the second category. Even the general work wasn’t clearly delineated. When I was editing and publishing elections interviews, that was a first category activity. But editing and publishing other Community Blog posts is more akin to a second category activity.
My job responsibilities were somewhat elastic, and my manager encouraged me to jump in where I could reasonably help. This meant I was doing category-two-like activities while “on the clock”. Depending on the context and intent behind the question, the answer to “is this a Red Hat contribution?” could be different for the same work.
It’s tempting to say “differentiate activities based on account.” People will use their work account for work things and their personal account for personal things, right? That’s not my experience. By and large, people don’t want to have to switch accounts in the issue tracker, etc., so they don’t unless they have to. A lot of people used their Red Hat email address for Bugzilla, commit messages, etc., regardless of whether the work was directly in scope for their job. By the same token, I knew at least one person who used a personal address even though they were primarily working on things Red Hat wanted them to.
In other roles, I’ve seen people use the same GitHub account for work, foundation-led and other third-party open source projects, plus their personal projects. GitHub’s mail rules make that (somewhat) workable. Of course, people can use different emails for their commits based on if they’re contributing for vocation or for avocation. They don’t always.
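One partial mitigation some people use (my aside, not from the original discussion) is git's conditional includes, which switch the commit identity by checkout location; a sketch with hypothetical paths and addresses:

```ini
# ~/.gitconfig (illustrative; paths and emails are made up)
[user]
    email = jane@example.org           # default: personal identity
[includeIf "gitdir:~/work/"]
    path = ~/.gitconfig-work           # sets user.email to the work address
```

Of course, this only helps when people actually keep work and personal repositories in separate directories, which circles back to the same problem.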
To make it even more complicated, someone might not even have a clear conception of whether or not they were contributing as part of their job or not. And their boss might have a different view. There are some projects that I only participated in because it was part of my job. When I no longer had that job, I stopped participating in the project. Other projects I started participating in on behalf of my employer, but kept participating after I had a new role.
Is a project that's tangential to my employer's interests, one I'd keep participating in even after I left, a work contribution or a non-work contribution? Does it switch from being one to the other? As you can see, what seems simple gets complicated very quickly.
This post’s featured image by Mohamed Hassan from Pixabay.
The post Measuring contributor affiliations is complicated appeared first on Duck Alignment Academy.
Release Candidate versions are available in the testing repository for Fedora and Enterprise Linux (RHEL / CentOS / Alma / Rocky and other clones) to allow more people to test them. They are available as Software Collections, for parallel installation, the perfect solution for such tests, and as base packages.
RPMs of PHP version 8.5.2RC1 are available
RPMs of PHP version 8.4.17RC1 are available
RPMs of PHP version 8.3.30RC1 are available
ℹ️ The packages are available for x86_64 and aarch64.
ℹ️ PHP version 8.3 is now in security mode only, so no more RC will be released.
ℹ️ Installation: follow the wizard instructions.
ℹ️ Announcements:
Parallel installation of version 8.5 as Software Collection:
yum --enablerepo=remi-test install php85
Parallel installation of version 8.4 as Software Collection:
yum --enablerepo=remi-test install php84
Update of system version 8.5:
dnf module switch-to php:remi-8.5 dnf --enablerepo=remi-modular-test update php\*
Update of system version 8.4:
dnf module switch-to php:remi-8.4 dnf --enablerepo=remi-modular-test update php\*
ℹ️ Notice:
Software Collections (php84, php85)
Base packages (php)
While testing the latest Elasticsearch release with syslog-ng, I realized that there was already a not fully documented elasticsearch-datastream() driver. Instead of fixing the docs, I reworked the elasticsearch-http() destination to support data streams.
So, what was the problem? The driver follows different logic from the base elasticsearch-http() destination driver in multiple places. Some of the descriptions were too general, others were missing completely. You had to read the configuration file in the syslog-ng configuration library (SCL) to configure the destination properly.
While preparing for syslog-ng 4.11.0, the OpenSearch destination received a change that allows support for data streams. I applied these changes to the elasticsearch-http() destination, and did a small compatibility change along the way, so old configurations and samples from blogs work.
Read more at https://www.syslog-ng.com/community/b/blog/posts/changes-in-the-syslog-ng-elasticsearch-destination

This is Pooja Krishnamoorthy, a hero. Adil Mirza, her coach, is one too.
Pooja and Adil spent the night at our home before heading to different destinations. She will visit Rio de Janeiro and the Amazon before returning to Mumbai, and Adil already flew back to India today.

Pooja has just finished the fabulous Brasil 135 Ultra Journey marathon, in which she ran, over 48 straight hours, the 217 kilometers (135 miles) between Águas da Prata, SP and Luminosa, MG. Pooja says Brazil is a sacred destination for the runners of this ultramarathon.
I didn't even know this marathon existed. I didn't imagine the human body was capable of it. Since she completed the whole thing (she is the first Indian woman to finish the marathon), she is a hero to me.
Adil too, because he has even run the next marathon in the series, the Badwater 135 in California. Wikipedia says it is considered the hardest marathon in the world.
Since the Brasil 135 qualifies runners for the Badwater 135, in July Pooja will run that one as well. But she says the Brasil 135 is much harder than the Badwater 135 because of the hills in that region of Minas and São Paulo. The Badwater 135 has its own aggravating factor: the desert climate of the Californian summer.
Staying at our home, she had to pay the toll of answering all my hyper-detailed questions about pulling off such a monumental feat.
Pooja at the Brasil 135 ultramarathon, itemized and in numbers:
Distance: 217 km (135 miles) on dirt roads of Minas Gerais and São Paulo
Time to finish the race: 48h, from 2026-01-08 8:00 to 2026-01-10 8:00, through morning, afternoon, night and the small hours
Running/walking ratio: 60%/40%
Longest stretch running/walking without stopping: 90 km (more than 2 full marathons)
Time napping: 30 minutes total, the sum of 3 naps of 10 minutes each
Other stops: 1 light dinner of salad, lentils and rice, with a quick leg massage from her coach while she ate, then straight back to running
Other meals: snacks, eaten while walking
Support team members: Adil in a van that follows the whole route, plus 3 Brazilians dedicated to Pooja, who provide this service professionally for Brasil 135 runners
Support team's role: take turns running alongside her on some stretches, cheer Pooja on, play music, give psychological support to help Pooja's mind and willpower dominate her body, provide water, food and medical care, and photograph and document the feat
Number of pairs of running shoes Pooja brought to the marathon: 5
Times she changed shoes: 1
Number of solo runners like Pooja (there is also a relay category): ≈100
Solo runners who actually finished the race, like Pooja: ≈44
Sponsorship and funding: 40% Pooja, 60% others
List of Indian women who have completed this full ultramarathon: Pooja. Extraordinary human beings and the incredible things they accomplish. That is the inspiration.
I have had a NAS for quite a while now, and I had never taken the time to really describe my stack or the applications I host on it. This article is the occasion to fix that. References and inspiration: if only one essential reference in the self-hosted world had to be cited, this […]
The post How I organize my NAS and my self-hosted services (Comment j’organise mon NAS et mes services auto-hébergés) appeared first on Guillaume Kulakowski's blog.

Hello travelers!
Loadouts for Genshin Impact v0.1.13 is OUT NOW with the addition of support for recently released characters like Durin and Jahoda, and for recently released weapons like Athame Artis, The Daybreak Chronicles and Rainbow Serpent's Rain Bow from Genshin Impact Luna III or v6.2 Phase 2. Take this FREE and OPEN SOURCE application for a spin using the links below to manage the custom equipment of artifacts and weapons for the playable characters.
Besides its availability as a repository package on PyPI and as an archived binary on PyInstaller, Loadouts for Genshin Impact is now available as an installable package on Fedora Linux. Travelers using Fedora Linux 42 and above can install the package on their operating system by executing the following command.
$ sudo dnf install gi-loadouts --assumeyes --setopt=install_weak_deps=False
What's changed:
Poetry to UV for dependency management by @gridhead in #476
Athame Artis by @gridhead in #479
Durin to the roster by @gridhead in #477
The Daybreak Chronicles by @gridhead in #480
Rainbow Serpent's Rain Bow by @gridhead in #481
Jahoda to the roster by @gridhead in #478
Two characters have debuted in this version release.
Durin is a sword-wielding Pyro character of five-star quality.


Durin - Workspace and Results
Jahoda is a bow-wielding Anemo character of four-star quality.


Jahoda - Workspace and Results
Three weapons have debuted in this version release.
Day King's Splendor Solis - Scales on Crit Rate.

Dawning Song of Daybreak - Scales on Crit DMG.

Astral Whispers Beyond the Sacred Throne - Scales on Energy Recharge.

While allowing you to experiment with various builds and share them for later, Loadouts for Genshin Impact lets you take calculated risks by showing you the potential of your characters with certain artifacts and weapons equipped that you might not even own. Loadouts for Genshin Impact has been and always will be a free and open source software project, and we are committed to delivering a quality experience with every release we make.
With an extensive suite of over 1527 diverse functionality tests and impeccable 100% source code coverage, we proudly invite auditors and analysts from MiHoYo and other organizations to review our free and open source codebase. This thorough transparency underscores our unwavering commitment to maintaining the fairness and integrity of the game.
The users of this ecosystem application can have complete confidence that their accounts are safe from warnings, suspensions or terminations when using this project. The ecosystem application ensures complete compliance with the terms of services and the regulations regarding third-party software established by MiHoYo for Genshin Impact.
All rights to Genshin Impact assets used in this project are reserved by miHoYo Ltd. and Cognosphere Pte., Ltd. Other properties belong to their respective owners.
No kidding: did you know that AI can be your best helper or your worst nightmare?
If you're hitting AI-assisted coding hard, you've surely seen the AI get stubborn or "hallucinate" badly. You ask it for a script and it hands you one for Ubuntu (yuck!) when you're pure Fedora, or it pulls in heavyweight libraries when you want something KISS and DRY.
That's where the "memory" or rules configuration files come to the rescue: ~/.gemini/GEMINI.md and ~/.config/opencode/AGENTS.md. Honestly, they're a lifesaver.
Basically, they are the style guide and the rules of the game that you dictate to the AI. It's your way of saying: "Look, buddy, around here we do things like this." Instead of repeating the same instructions in every prompt, you write them once and the AI takes them as law.
It's like onboarding a new helper, but this one has a photographic memory and doesn't lose the plot if you configure it right.
It's not about writing the Bible, but it is about making your "red lines" clear. Here are some examples of what I have in my GEMINI.md to use as a reference:
Para que no me salga con cosas de Debian.
- The host system is Fedora 43 x86_64 with SELinux enabled.
- OS Tool Preference: Use tools available in the OS (latest Fedora) via `dnf`.
- Distro Preference: The user despises Debian/Ubuntu; never considers them.
So it doesn't get clever on me with overly complex solutions.
- KISS (Keep It Simple, Stupid): Always prefer simple, readable code.
- DRY (Don't Repeat Yourself): Extract repeated code.
- Avoid Premature Optimization: Write clean code first.
Because around here we use Podman, my friend. None of that weird stuff.
- Containers: Prefer Podman over Docker (Docker is sub-optimal).
- Containerfile: Use `Containerfile` instead of `Dockerfile`.
- Quadlets: Use systemd quadlets (`*.container`) when practical.
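To make the quadlet rule concrete, here is a minimal sketch of what a `*.container` unit might look like. The file name, description, image, and port are illustrative, not from the post:

```ini
# ~/.config/containers/systemd/web.container (user-level quadlet; names are examples)
[Unit]
Description=Example nginx container

[Container]
Image=docker.io/library/nginx:latest
PublishPort=8080:80

[Service]
Restart=always

[Install]
WantedBy=default.target
```

After a `systemctl --user daemon-reload`, Podman's quadlet generator turns this file into a `web.service` unit you can start and enable like any other systemd service.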
To avoid maintaining two different files that later drift out of sync, the master move is to create one "master" file and make a symbolic link (symlink) to it. Very handy!
Create your well-tuned ~/.gemini/GEMINI.md and then drop a symlink for OpenCode:
ln -sf ~/.gemini/GEMINI.md ~/.config/opencode/AGENTS.md
Tip
That way you change a rule in one place and it updates everywhere. You save yourself a ton of work and keep things consistent.
The difference is huge, my friend; it saves you a ton of time:
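Assuming both tools read exactly those paths, the whole setup plus a quick sanity check can be sketched in a few shell commands (the sample rule line is just a placeholder, use your own rules):

```shell
# Master rules file lives with Gemini; OpenCode reads it through a symlink.
mkdir -p ~/.gemini ~/.config/opencode

# Seed the master file only if it does not exist yet (placeholder content).
[ -f ~/.gemini/GEMINI.md ] || printf -- '- KISS: prefer simple, readable code.\n' > ~/.gemini/GEMINI.md

# One source of truth: AGENTS.md is just a link to GEMINI.md.
ln -sf ~/.gemini/GEMINI.md ~/.config/opencode/AGENTS.md

# Sanity check: the link should resolve to the master file.
readlink ~/.config/opencode/AGENTS.md
```

Because `-f` forces the link, rerunning this is safe: it simply repoints AGENTS.md at the master file.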
Note
The code you generate feels like yours, adapted to your workflow (EVALinux, in my case), not a generic copy/paste from Stack Overflow. Don't you think you connect better with people that way?
Spending about 15 minutes tuning your GEMINI.md isn't a waste of time, it's an investment. It's the difference between fighting with the AI to make it understand you and having a copilot who already knows the route by heart.
So now you know: set up your rules, don't be lazy, and go hit the ether!
Team's “Wrapped 2025” to Increase Velocity - nice idea I clearly didn't implement
This is a report created by the CLE Team, a team of community members working in various Fedora groups, for example Infrastructure, Release Engineering, and Quality. The team also moves forward some initiatives inside the Fedora Project.
Week: 05 Jan – 09 Jan 2026
This team takes care of the day-to-day business of Fedora Infrastructure.
It is responsible for the services running in Fedora infrastructure.
Ticket tracker
This team takes care of the day-to-day business of CentOS Infrastructure and CentOS Stream Infrastructure.
It is responsible for the services running in CentOS Infrastructure and CentOS Stream.
CentOS ticket tracker
CentOS Stream ticket tracker
This team takes care of the day-to-day business of Fedora releases.
It is responsible for releases and the retirement process for packages and package builds.
Ticket tracker
If you have any questions or feedback, please respond to this report or contact us in the #admin:fedoraproject.org channel on Matrix.
The post Community Update – Week 02 2026 appeared first on Fedora Community Blog.
comments? additions? reactions?
As always, comment on mastodon: https://fosstodon.org/@nirik/115951447954013009