

authors: [“Luke Rawlins”] date: 2016-12-09 draft: false title: Uncomplicated Firewall... be careful description: “UFW doesn't have any default rules to allow ssh inbound by default, if you aren't careful when turning it on you could find yourself locked out” url: /ufw-locked-out/ tags: – Linux Firewall – UFW – Ubuntu


If, like me, you enjoy the simplicity of ufw (Uncomplicated Firewall) on your Ubuntu servers, be careful when you turn it on.

ufw doesn't ship with a rule to allow ssh inbound by default, so if you aren't careful when turning it on you could find yourself locked out! If you don't have direct console access to the server, that could mean being locked out forever! Not a conversation you want to have with a client, or your boss... or tech support at your friendly cloud provider.

So before turning ufw on for the first time, here are a couple of quick tips.

The easy way

Build your allow rule first.

sudo ufw allow 22/tcp
sudo ufw show added


Then, after seeing the output that confirms the rule has been added, go ahead and enable ufw.

sudo ufw enable
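
Once it's enabled, it's worth confirming that the firewall is active and that your ssh rule made it in. A quick sanity check with standard ufw commands:

sudo ufw status verbose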


The less easy way – edit the file directly

Why would you want to use the less easy way? Well, you may need to copy this file over to a newly built server. Maybe you like to know where configuration files hide. Or maybe you just like to do things a different way. Anyway, whatever your reasons may be, here you go.

Edit the user.rules file at /usr/lib/ufw/.

sudo vim /usr/lib/ufw/user.rules


Add the following lines directly under the section that says RULES:

### RULES ###

### tuple ### allow tcp 22 0.0.0.0/0 any 0.0.0.0/0 in

-A ufw-user-input -p tcp --dport 22 -j ACCEPT
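
Note that if ufw is already enabled when you edit user.rules, the new rule won't take effect until you reload the firewall. A quick follow-up, using standard ufw commands:

sudo ufw reload
sudo ufw status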


I'd like to say that I didn't learn this the hard way, but alas, I seem to have locked myself out one too many times!

Be careful with the “easy” tools. They will bite you if you aren't paying close attention!


author: “Luke Rawlins” date: “2022-05-03” title: The Right to Privacy description: “All rights are NOT enumerated in the constitution.” url: /right-to-privacy/ tags: – Opinion


The 9th Amendment

The enumeration in the Constitution, of certain rights, shall not be construed to deny or disparage others retained by the people.

I read this as: “The Constitution doesn't list every right we have, you stupid fuck.”

Madison on the Enumeration of Rights

The Federalists contended that a bill of rights was unnecessary. They responded to those opposing ratification of the Constitution because of the lack of a declaration of fundamental rights by arguing that, inasmuch as it would be impossible to list all rights, it would be dangerous to list some and thereby lend support to the argument that government was unrestrained as to those rights not listed.

Madison adverted to this argument in presenting his proposed amendments to the House of Representatives.

“It has been objected also against a bill of rights, that, by enumerating particular exceptions to the grant of power, it would disparage those rights which were not placed in that enumeration; and it might follow by implication, that those rights which were not singled out, were intended to be assigned into the hands of the General Government, and were consequently insecure. This is one of the most plausible arguments I have ever heard against the admission of a bill of rights into this system; but, I conceive, that it may be guarded against. I have attempted it, as gentlemen may see by turning to the last clause of the fourth resolution.”

It is clear from its text and from Madison’s statement that the Amendment states but a rule of construction, making clear that a Bill of Rights might not by implication be taken to increase the powers of the national government in areas not enumerated, and that it does not contain within itself any guarantee of a right or a proscription of an infringement.

Quoted from: 9th Amendment – Cornell Law

This is all quoted because I don't know a damn thing about the law. But it would seem weird to add this amendment if the framers of the U.S. Constitution intended the Bill of Rights to be an exhaustive list of what citizens of the United States are “allowed to do”.

The very idea that the U.S. Government decides what Americans are “allowed” to do flies in the face of a very long American tradition of telling our governments to get off our lawn.

George Carlin on rights

Even more importantly than Madison, Carlin had this to say about our “rights”.



authors: [“Luke Rawlins”] date: 2020-06-06 draft: false title: Podman, SELinux, and systemd url: /podman-selinux-and-systemd/ tags: – Containers – podman – SELinux – Fedora


In my previous post about migrating this site to Podman, I laid out a rough outline of my plan to move forward with Podman. Step one was to move the database into a container.

I have a few updates on my progress, and some tips to share regarding SELinux and containers that run systemd for service control.

I've basically been starting from scratch on this – I don't have any experience with other container platforms like Docker. I have had some limited exposure to lxd on Ubuntu systems, but I've always treated those as live systems — more like a VM than a container.

Building the image

When thinking about how I wanted to build my container image, I knew I had several options. There are a number of MariaDB container images available on Docker Hub, and I considered using one of those at first. In the end, I chose to start from the standard Fedora 32 image from the Fedora registry and install MariaDB into my own custom image. I did this for two reasons.

The first reason comes down to trust. I don't have anything against Docker or Docker Hub, but I honestly don't know much about them, and I don't know much about where the container images come from, or how often they are updated. Based on that I decided that I wanted to stick to the Fedora registry.

The second reason is that installing MariaDB on Fedora is trivial and doesn't add any undue burden when creating the Containerfile. So why not keep it simple: start from a Fedora container, install MariaDB, and push in a simple service file.

Speaking of the Containerfile.

Podman Containerfile

Many of you probably already know this, but you can build a container image using a simple file that describes the end state of your image. By convention this file is usually named “Dockerfile” or “Containerfile”, but you can name it anything you want. If you do choose to call it something other than Dockerfile or Containerfile, just specify the name with podman build -t <image_name> -f <container_file_name>.
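
For example, a build using a custom file name might look like this (the image tag and file name here are just placeholders):

    podman build -t mariadb-fedora -f Containerfile.mariadb .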

My Containerfile is pretty simple and is based on a build file I found online:

    FROM registry.fedoraproject.org/fedora:32
    MAINTAINER luke at sudoedit.com
    # Install the MariaDB server and client
    RUN yum -y install mariadb-server mariadb
    # Raise the service's open file limits (file contents shown below)
    COPY mariadb-service-limits.conf /etc/systemd/system/mariadb.service.d/limits.conf
    # Start MariaDB at boot; utmp accounting isn't useful inside a container
    RUN systemctl enable mariadb
    RUN systemctl disable systemd-update-utmp.service
    # Run systemd as PID 1. With ENTRYPOINT set, a CMD of the same value
    # would only be passed to systemd as an argument, so it's omitted here.
    ENTRYPOINT ["/sbin/init"]

We'll see if I end up needing to change this at all, but basically I wanted to accomplish a few things.

  • I wanted to specify the Fedora version – I was a bit worried that if I built the container using just “fedora” I would forget what was in it when the next major version was released, and could end up with changes I'm not ready for. Probably unlikely to be an issue, but this just seemed safer.
  • Install MariaDB – and copy in a simple configuration file to allow more open files (file shown below)
  • Allow systemd to manage the MariaDB service, and have it enabled at start time.

Contents of mariadb-service-limits.conf

    [Service]
    LimitNOFILE=10000
    LimitMEMLOCK=infinity

If you don't include these parameters then MariaDB will only be allowed 1024 open files, and it will complain about it.
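
Once the container is up, you can confirm the drop-in actually took effect. A quick check using standard systemd commands inside the container:

    # Should report the raised limit from limits.conf
    systemctl show mariadb --property=LimitNOFILE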

The first roadblock – SELinux booleans

After building the image the next most obvious course of action is to run it right? So, that's what I did, and I was greeted with this interesting message:

    [luke@Fedora ~]$ podman run -i -it localhost/test:testing

    systemd v245.4-1.fc32 running in system mode. (+PAM +AUDIT +SELINUX +IMA -APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=unified)
    Detected virtualization container-other.
    Detected architecture x86-64.

    Welcome to Fedora 32 (Container Image)!

    Set hostname to <7f0982604176>.
    Initializing machine ID from random generator.
    Failed to create /init.scope control group: Permission denied
    Failed to allocate manager object: Permission denied
    [!!!!!!] Failed to allocate manager object.
    Exiting PID 1...

Reading this error message, I wasn't quite sure where I had gone wrong. The “permission denied” message didn't seem to make much sense to me because obviously Podman had started the container. Why is that obvious?

  1. I see a “Welcome to Fedora 32 (Container Image)!” message – That means the container started.
  2. If I didn't have permission to run Podman, or if I didn't have permission to use the image, I wouldn't have seen that message. Plus, I did the image build as my own user account, so it would be incredibly weird if the image built from the Containerfile was owned by someone other than me.

So, if the error is permission denied, and it's not me getting denied, I decided it had to be something with the container – and in these cases I know it makes sense to check for SELinux AVC (Access Vector Cache) denial alerts in the audit log. Among my findings was the following:

    [luke@Fedora ~]$ sudo sealert -a /var/log/audit/audit.log

    ...

    SELinux is preventing systemd from write access on the directory libpod-53808768ab5caa62b545bc57c88001fae301b3111a93deb02386d3a81bcb84e1.scope.

    *****  Plugin catchall_boolean (89.3 confidence) suggests   ******************

    If you want to allow container to manage cgroup
    Then you must tell SELinux about this by enabling the 'container_manage_cgroup' boolean.

    Do
    setsebool -P container_manage_cgroup 1

    ...


I had definitely violated some SELinux policy.

I really like how the sealert tool tells you exactly how to solve the problem – setsebool -P container_manage_cgroup 1.

I wasn't entirely certain what this boolean controlled – I know it says manage cgroup, but what does that mean anyway? So I did a little bit of Googling and stumbled upon this Red Hat blog: https://developers.redhat.com/blog/2019/04/24/how-to-run-systemd-in-a-container/ which has a great description of the issue I was facing:

 On SELinux systems, systemd attempts to write to the cgroup file system.  Containers writing to the cgroup file system are denied by default. The container_manage_cgroup boolean must be enabled for this to be allowed on an SELinux separated system.

That is where I was running into my permission denied error – or not me really, but the container process trying to write to the cgroup filesystem. By default, container processes cannot write to the cgroup file system, but they can be given permission to do so by flipping the container_manage_cgroup boolean to “true”.

Turns out that works!

After running the following command:

sudo setsebool -P container_manage_cgroup 1
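
To confirm the boolean is now set, you can query it with getsebool (part of the standard SELinux tooling):

getsebool container_manage_cgroup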

I tried to run my container image again – This time with much friendlier output:

    [luke@Fedora ~]$ podman run -i -it localhost/test:testing

    systemd v245.4-1.fc32 running in system mode. (+PAM +AUDIT +SELINUX +IMA -APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=unified)
    Detected virtualization container-other.
    Detected architecture x86-64.

    Welcome to Fedora 32 (Container Image)!

    Set hostname to <3004166ec95b>.
    Initializing machine ID from random generator.
    [  OK  ] Started Dispatch Password Requests to Console Directory Watch.
    [  OK  ] Started Forward Password Requests to Wall Directory Watch.
    [  OK  ] Reached target Local File Systems.
    [  OK  ] Reached target Paths.
    [  OK  ] Reached target Remote File Systems.
    [  OK  ] Reached target Slices.
    [  OK  ] Reached target Swap.
    [  OK  ] Listening on Process Core Dump Socket.
    [  OK  ] Listening on initctl Compatibility Named Pipe.
    [  OK  ] Listening on Journal Socket (/dev/log).
    [  OK  ] Listening on Journal Socket.
    [  OK  ] Listening on User Database Manager Socket.
             Starting Rebuild Dynamic Linker Cache...
             Starting Journal Service...
             Starting Create System Users...
    [  OK  ] Finished Create System Users.
    [  OK  ] Finished Rebuild Dynamic Linker Cache.
    [  OK  ] Started Journal Service.
             Starting Flush Journal to Persistent Storage...
    [  OK  ] Finished Flush Journal to Persistent Storage.
             Starting Create Volatile Files and Directories...
    [  OK  ] Finished Create Volatile Files and Directories.
             Starting Rebuild Journal Catalog...
             Starting Update UTMP about System Boot/Shutdown...
    [  OK  ] Finished Update UTMP about System Boot/Shutdown.
    [  OK  ] Finished Rebuild Journal Catalog.
             Starting Update is Completed...
    [  OK  ] Finished Update is Completed.
    [  OK  ] Reached target System Initialization.
    [  OK  ] Started Daily Cleanup of Temporary Directories.
    [  OK  ] Reached target Timers.
    [  OK  ] Listening on D-Bus System Message Bus Socket.
    [  OK  ] Reached target Sockets.
    [  OK  ] Reached target Basic System.
             Starting MariaDB 10.4 database server...
             Starting Home Area Manager...
             Starting Permit User Sessions...
    [  OK  ] Finished Permit User Sessions.
             Starting D-Bus System Message Bus...
    [  OK  ] Started D-Bus System Message Bus.
    [  OK  ] Started Home Area Manager.
    [  OK  ] Started MariaDB 10.4 database server.
    [  OK  ] Reached target Multi-User System.
             Starting Update UTMP about System Runlevel Changes...
    [  OK  ] Finished Update UTMP about System Runlevel Changes.

Note: It's better to run this in detached mode – I just wanted to see the output as the container came up so that I would see if there were any other hangups.

Detached mode would look like this (the :Z on the volume mount tells Podman to relabel the directory with a private SELinux context so the container can write to it):

podman run -d --rm -v /srv/sudoedit.com/data/mysql/:/var/lib/mysql/:Z localhost/test:testing

Checking the status I see:

    [luke@Fedora database]$ podman ps
    CONTAINER ID  IMAGE                    COMMAND     CREATED      STATUS          PORTS  NAMES
    bebb132a0bb2  localhost/test:testing  /sbin/init  2 hours ago  Up 2 hours ago         suspicious_beaver

Next steps

I'm fairly close to bundling this image up and putting it into production. I have a few minor details that I need to sort out and a few things to test.

  • Test restoring MariaDB from a mysqldump in the container on a writable mount. I've done some preliminary testing and it's definitely possible – I just need to break the steps down and script it.
  • How to manage updates to the container – do I want to rebuild the image on a weekly/monthly basis and push it up to my server? Or should I just build the image directly on the host and restart/rebuild as necessary? I want to find out how others handle that sort of thing.
  • How to get the webserver talking to the database? (See the sketch after this list.)
    • Use port forwarding?
    • Some way to use the local unix socket?
    • Is one way better than the other?
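
If I go the port-forwarding route, the run command would probably look something like this – a minimal sketch, where the published port and image name are assumptions carried over from the examples above:

    # Publish MariaDB's port (3306) on the host
    podman run -d --rm -p 3306:3306 \
      -v /srv/sudoedit.com/data/mysql/:/var/lib/mysql/:Z \
      localhost/test:testing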

That's about it for the database. I feel like I can work out some of these final pieces as I go. I'd like to have a patching plan in place before jumping in, but I think the automated mysqldump restore is not a show-stopping problem. I'm thinking sometime in the next week I'll be able to have the first version of the MariaDB container up and running.


authors: [“Luke Rawlins”] date: 2020-05-27 description: “How to generate a host list with hammer cli.” draft: false title: Host list with Hammer CLI url: /host-list-with-hammer-cli/ tags: – Linux – Satellite 6 – hammercli


On occasion I need to pull a host list from Satellite 6; and while the web UI is often simple enough, the hammer CLI that comes with Foreman is often faster.

Here is a quick way to get a full host list:

hammer host list

That command will print a list of all hosts registered with your Satellite server.

Filter by OS major version

Often when I'm generating this list, it's because someone has asked me something like: “How many RHEL 5 servers do we have?”

You can generate a list of all the Red Hat Enterprise Linux 5 machines you have registered to Satellite like this:

hammer host list --search os_major=5

Of course, if you wanted to see RHEL 6, 7, or 8, you would just replace the “5” with the major version of the OS you were looking for.

Check out other options with the help flag:

hammer host list --help
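
If you need the list in a machine-readable format for scripting, hammer also has a global output option – I believe recent versions support csv, json, and yaml, but check the help output on your version:

hammer --output csv host list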


authors: [“Luke Rawlins”] date: 2017-01-14 draft: false title: School District finds cost savings and flexibility with Linux description: “This is a great story about how Linux can be used by people of all ages and technical skill while still providing a low cost and secure platform for everyday operations. I'm glad to share this story and I hope it helps advance Linux as a viable option for anyone considering an alternative to proprietary Operating Systems.” url: /school-district-uses-linux-desktop/ tags: – Linux – Ubuntu


Being a big proponent of Linux on the desktop, I was excited to have the opportunity to talk with Aaron Prisk of the West Branch Area School District, who has recently helped migrate 80% of the school district's infrastructure to Linux. When I first heard about the district's move to Linux, I wanted to find out as much as I could about his experiences during and after the migration. This is a great story about how Linux can be used by people of all ages and technical skill levels while still providing a low-cost and secure platform for everyday operations. I'm glad to share this story, and I hope it helps advance Linux as a viable option for anyone considering an alternative to proprietary operating systems.

When did the idea to migrate to Linux first come under consideration and what were the driving factors behind the transition?

Aaron Prisk: “We were approached by our superintendent in early 2013 to create a long-term technology plan. At the time, we were a Windows and proprietary software dominated school with a small handful of Linux machines floating around. Districts around us were either going the Chromebook route or forgoing laptops and investing in Windows virtual labs. After meeting with our technology committee we opted to go the 1:1 (1 device per student) route. We needed an approach that was flexible, affordable, and could be managed by our two-man IT department. My colleague and I, being long time Linux users, saw a good middle ground between those two approaches with Ubuntu. We could replicate the simplicity and security of a Chromebook, but without the vendor lock-in and lack of applications.

At the same time, we had to come up with a plan to address our aging grant purchased laptops that make up the other 65% of our inventory. With Windows, long log in times and poor performance greatly hindered their use. Moving those machines to Linux greatly improved their performance, cut out log in times and allowed them to become valuable classroom tools again.”

The Linux ecosystem is huge. With so many distributions and support options, what made you choose Ubuntu as the default platform?

Aaron: “We tossed around a few popular distros during the testing phase (Ubuntu, Fedora, and OpenSUSE), but ended up going with Ubuntu. We found that Ubuntu tended to handle the laptop hardware the best. Ubuntu would also receive updates for multiple years under the LTS plan, it has a huge array of packages, lots of great PPAs, and it is also supported by the state testing diagnostics software we use in the district.”

I understand that you did a pilot program at first. How did you select the areas to use in the pilot and what lessons did you learn from the initial testing?

Aaron: “Our first pilot was a very small deployment of 10 machines in our elementary library. We wanted to see how young students would react to a different computing environment. The test showed us that young students adapted very well to the new environment and we could pursue larger deployments in our elementary.

The first large-scale pilot was the 9th grade 1:1 rollout. Our tech committee selected 9th grade as it provided the largest sample size of students. We opted to give the kids full root access to their devices and encouraged them to experiment with software. Our approach was to create an open campus where students were given a toolbox and not a single tool.

We learned early on the value of training our faculty on how best to utilize the new technology in the classroom. We saw a need to educate our faculty on the new platform, and the many education resources they could now take advantage of. We also learned the great value of our student help desk program. Throughout the process, they did an excellent job helping us image, hand out, and repair the devices.”

What were some of the biggest challenges to this type of project? Did you have any software compatibility issues to overcome?

Aaron: “One of our biggest challenges was educating our users on how to best utilize the new operating system, and our new computing approach. Getting over that “fear of the unknown”, and encouraging the users to experiment with Linux and its many applications.

We tried to push our staff in the years leading up to the migration to use cross-platform and web based applications, specifically in productivity and education based software to help mitigate the software compatibility issues. We had some push back when it came to moving away from Microsoft Office. While LibreOffice and Google Docs are great programs, students and faculty were more familiar with the Office product line. Over time, that resistance has lessened, I think due to students and teachers growing more familiar with LibreOffice and online suites.”

Since the migration have you come across any unforeseen issues, things you didn't anticipate?

Aaron: “Thankfully we haven't run into any major issues, only a few minor problems. Early on we ran into a few small issues with automatic package updates. We assumed enabling automatic security update installation would help keep our laptops updated, but often resulted in broken packages.”

You say that Windows still makes up about 18% of your infrastructure. Where are you still using it and do you have any plans to transition those areas off of Windows in the future?

Aaron: “The vast majority of our staff's machines run Windows, mostly for familiarity and some for support of proprietary software they still rely on for day to day tasks. We also have three student labs that run Windows: 2 for Computing classes and 1 for a Drafting/CAD class.

We would certainly like to migrate more machines to Linux, but it hinges on support from proprietary software that doesn't have open source equivalents quite yet.”

Tell me a little bit about your in-house spin on Xubuntu “CorvOS 1.0”.

Aaron: “In our first year of migration, we ran Edubuntu on the 600+ cart laptops. It served us well, but we ran into limitations and bugs with the GNOME Fallback Session, and the Edubuntu project seemingly died after the 14.04 release. After researching other education-oriented distros and spins, I figured I could take the feedback I got from students and faculty and tweak Xubuntu into a well-functioning replacement.

The name corvOS is a play on “corvus”, the scientific name for the crow and raven family of birds. My goal was to create an extremely easy to use, kiosk-like distro that would be easy to use for students and easy to manage for administrators. I made a handful of XFCE tweaks, installed a bunch of great educational tools, and used some clever Bash scripts that keep the desktop experience consistent. It was during this time that I saw a need for a simple way to manage the devices and came up with the idea of Lagertha.

I found that spinning my own distro was not only a ton of fun but gave me the opportunity to build something that I know would best fit the district's needs. I'd love to distribute corvOS, but I'm still unsure of the best way to do so. My goal would be to make it easy for a non-Linux savvy individual to install and use it in their district. At the very least, I may make a script that simply converts Xubuntu into a working corvOS box.”

**Going open source has allowed us to save money, expand access, and provide a more secure computing environment.**

What is Lagertha and how is it being used in the school district? How might someone get involved in contributing to the project?

Aaron: “Lagertha is a web based tool I made for simple management of our corvOS machines. We use it to manage the packages installed on our corvOS devices and make changes to them as we need. With Lagertha you can:

  • Install Packages
  • Remove Packages
  • Update Packages
  • Change Desktop Wallpapers

Like corvOS, I wanted it to be exceptionally easy to use so a Linux newcomer could use it in their business/district.

I'm not a programmer by trade, so I would love if others wanted to get involved. I have plenty of features that I'd love to add, but don't always have the time or skill to do.” (The project is located here: https://github.com/aaronprisk/lagertha)

Do you have any advice for other school districts or businesses that are considering a transition to Linux?

Aaron: “First, I would tell them that Linux isn't as scary as it used to be. Linux still has a stigma that it's difficult to use and is only suited for power users, but that's not the case anymore. I work with a user base ranging from 5 years old to 65 years old and I see them both being able to use Linux machines with ease. Going open source has allowed us to save money, expand access, and provide a more secure computing environment.

For the technology admins in particular, since moving the majority of machines to Linux, our technical issues have gone WAY down and we've been able to breathe life back into our legacy devices.”

I want to say thank you to Aaron Prisk for taking the time to share this story with me and for being so willing to answer my questions. I'm glad to see that the West Branch Area School District has had such a successful implementation of Ubuntu, due in no small part to the patience, persistence, and skill of Aaron and his colleagues. For anyone capable of and interested in helping out with Lagertha, please check out Aaron's GitHub for the project: https://github.com/aaronprisk/lagertha.


title: “CentOS Dilemma” authors: [“Luke Rawlins”] date: “2020-12-12” description: “CentOS Stream takes the place of CentOS Linux 8.” url: “/centos-dilemma” tags: – CentOS


I should state right off the bat that everything you are about to read is just an opinion. I don't have any insider information, and I didn't reach out to anyone at Red Hat for comment; there is already a ton of stuff online, and I'm sure you can find it if you're looking. This also isn't a post about CentOS Stream. I don't really know anything about CentOS Stream, as I've never used it, and I don't have much interest in it at the moment. Instead, this post takes a brief and honest look at what I think is the dilemma caused by the news that CentOS has shifted focus.

Red Hatters across the board have made their opinions known on Twitter, Reddit, blogs, etc. Their thoughts seem to range from toeing the company line to “it's not as bad as it sounds.”

In my reading, I thought Scott McCarty in particular gave the subject a fair treatment – sort of a balance between toeing the line and “everything is going to be okay.”

Killing CentOS Linux 8 is hard to justify.

CentOS 8 was released in September of 2019 and was expected to reach EOL (End of Life) sometime in 2029; as of December 8 of this year (2020), that timeline has been moved up to December of 2021.

Ten years is a long time to support an OS, and some of the arguments have suggested that it's an unreasonably long time. That's fair – 10 years is a long time to go without major upgrades... but no one forced Red Hat to commit to that timeline. CentOS 8 could've been released with a shorter life commitment, or, as others have said, they could've just waited until CentOS 9. In any event, I can't help but wonder what changed over the last 15 months that caused this sudden change of plan. To make matters worse, anyone who upgraded from CentOS 7 to CentOS 8 is now in the weird position of having an OS with a shorter life than the one they just upgraded from. What do you do then? Roll back if you can? Move to an entirely new distro? Convert to RHEL? That's a lot of possibilities to unpack in a year – especially after you thought you had already settled these issues.

This sudden change of plan is really what strikes me the most about the CentOS decision. It's obvious that this couldn't have been in the works for very long, because if it was, why was CentOS 8 ever projected to have such a long life in the first place? I don't have an answer to this question, but I think Red Hat should've taken more caution here, because it speaks to either a certain indecisiveness around CentOS or a lack of respect for the people who have grown to depend on it. Overcoming this impression will be an uphill battle, especially in an open source community that, if we are honest, tends to be a bit extreme when it comes to corporate involvement in Linux.

Red Hat is not evil.

Regarding the last bit about extremism in the open source world.

Red Hat doesn't have to be evil, and CentOS Stream doesn't have to be some unstable, beta-quality garbage for you to be angry about this decision. It's enough that they shortened the life by 8 years and replaced it with something that, while probably good enough for some, is not really comparable. If you were a CentOS 8 user, maybe start testing Stream to see if it fits your needs, or start using something else. More on that a little later – but if Red Hat gambled on expanding RHEL at the cost of CentOS, and users flee to something with more regular release cycles and a long track record of long-term support, then that is part of the risk they took with this decision, and they will have to live with it.

I just want to point out that contrary to popular opinion you can disagree with someone without demonizing them.

I agree that the untimely death of CentOS Linux 8 is a stain on the credibility of Red Hat, but I also know that Red Hat, as a business, has to make the best decisions it can for both itself and, at least in theory, for its paying customers.

Which brings me to my next point.

Using a Community project to run your business is hard to justify.

In my humble opinion you are taking an enormous risk by entrusting your valuable business infrastructure to a strictly community based project.

  1. CentOS has historically trailed Red Hat and Oracle (another Red Hat clone) in patching security issues by weeks and even months. That is something I can't live with.
  2. A “project” is not a business. It doesn't have customers, and it doesn't offer support outside of forums, which, to be honest, are often a hard place to get any information, let alone good information.
  3. I don't care how good you and your team are – unless you are Google or Facebook, you are going to need support somewhere down the line, and taking all of that risk in-house is a bad decision.

The cost of doing business often includes software licensing, and subscriptions. Part of being a responsible steward of your resources sometimes involves paying for stuff, and on the occasion when you can get something for free don't let yourself get too hooked on it.

I might get a lot of flack for this next statement but I believe it's true.

You get what you pay for.

CentOS is free, and that is its primary (and probably only) selling point. If you took a gamble on a free project and got burned, I'm sorry, I really am. But if you are looking for someone to blame, either go down to your finance office and say “I told you so”, or go look in a mirror, because those are the only two possible points of blame that I see. After that, start evaluating whatever OS you plan to migrate to. I would suggest something backed by people whose feet you can hold to the fire if they don't deliver on their promises, but that's just me.

Having said all that, I (paradoxically) would love to see RHEL go the free-to-play but pay-to-win route: drop the subscription fees for RHEL and the RHEL repositories, and sell services and support. Satellite is a great product that is worth the cost; hell, even the Red Hat knowledge base is good enough to pay a subscription to gain access to. Ubuntu has had a lot of success with this model, and I think if Red Hat is removing CentOS as it traditionally existed, they should replace it with RHEL offered for free, if for no other reason than to stop the bleeding as people move on to other distributions.

Don't put all your eggs in one basket.

This is something that I'm guilty of as well. As a prototypical lazy sysadmin, I don't want to support more systems than I have to, and I don't want to support a plethora of operating systems. Having said that, it's also clear that I may be making a mistake by placing all my trust in Red Hat's commitments. The more I reflect on it, the more I focus on something I've heard at several conferences hosted by Red Hat over the last few years. I've heard it formulated a few different ways, but it's generally something to this effect: “Red Hat doesn't want to be just the Linux company, we are a cloud company.” As a sysadmin who works mostly with on-prem stuff, I don't care about your cloud offerings. I care about Linux and that's it. Containery, cloudy thingys are mysterious and scary – I just want a solid OS that will run your cloudy containery thingys.

On that note, I plan to start evaluating Ubuntu for use in the areas where I can afford a little flexibility in the stack (I'll argue for a support contract as well – so don't send me hate mail about hypocrisy).

Why Ubuntu?

  1. They have long term support. 5 years on the LTS and you can pay for more if you have a group that is resistant to change.
  2. Say what you want about Canonical, but they have a track record of regular releases that are on time, well built, and supported as promised.
  3. It has a huge selection of packages often newer than what is found in Red Hat releases.
  4. I use what I know – I know RHEL systems better than Ubuntu, but I know Ubuntu well enough to get started. And outside of a few critical systems that must be on RHEL for application support issues, Ubuntu is likely just fine for many other common use cases.
  5. It's the most commonly requested alternative Linux OS – in my experience anyway.

Conclusion

I don't know how to conclude this except to say: if you feel burned by Red Hat, I sympathize, and I know this is going to add a ton of work to your life in the very near future. I think it was a bad decision on their part. I also think trusting a community project is a bad business decision, so I can't be too empathetic, as I wouldn't make that decision myself (though I don't often have the political will or power to stand up against those decisions either).

If you are evaluating new OSes or starting the conversion to CentOS Stream, I wish you the best of luck.


authors: [“Luke Rawlins”] date: 2017-03-05 draft: false title: Satellite 6 Duplicate Host Names with Puppet description: “How to fix duplicate host names in Satellite 6 with Puppet.” url: /satellite-6-puppet-shortname/ tags: – fqdn – Puppet – Red Hat – Satellite 6 – shortname


Satellite 6, Red Hat's one-stop shop for patch, configuration, and deployment management, is a powerful tool.

It is also a formidable and complicated piece of software. One of the big hurdles I have run into when incorporating Puppet into Satellite 6 is that many of our systems do not use an fqdn (fully qualified domain name) for their host names. This means that when I register “superawesomewebserver01” with Satellite 6, I get a host record that reflects the short name. That isn't a problem until the same host connects with Puppet: its name is then recorded based on the certificate, which is always the fqdn (i.e. “superawesomewebserver01.example.com”), and the result is duplicate host records showing up in Satellite, each as an independent object.

So, how can you fix this?

What you should not do... probably

Do not use foreman-rake katello:unify_hosts on your Satellite server if you have connected it to a compute resource like VMware.

Especially don't do this if your Satellite user has full privileges to create, modify, and delete VMs. Somewhere in the process of unifying the hosts, this script will delete the short name host record, which triggers Satellite to delete the host from VMware.

Now in my case, I should note that people on our team had the foresight not to give Satellite the ability to delete virtual machines, so I didn't end up losing any critical data or services. Instead, it simply shut down the target host, causing only a minor inconvenience for myself and the poor soul who happened to be on-call at the time.

If running this command is the best or only option you have, then I would suggest that you first disassociate all of your hosts from the compute resource they are linked to. You can do this in the GUI from the “All Hosts” section. I've been trying to find a way to do this with hammer-cli, but I haven't seen anything that looks promising at the moment. Running the foreman-rake katello:unify_hosts command on a production Satellite 6 server that has full permissions in your VM environment could be disastrous, so be cautious (i.e., don't run it just because someone from support asked you to). If you are not connected to a compute resource, then this solution should work fine after registering all your hosts with Puppet.

A safer way to solve this problem

You can avoid the entire issue of duplicate host records by changing the name of each record (note I didn't say the hostname of the actual machine) to the fqdn.

Foreman comes with a handy command line interface that will allow us to script this.

hammer host update --name <hostname> --new-name <hostname.example.com>


In my case, I gathered a list of the servers that I wanted to tack the domain name onto and put them into a file called shortnames.txt, with each entry separated by a new line, and used that list to iterate over a quick loop.

#!/bin/bash
# Rename each Satellite host record listed in shortnames.txt
# (one short hostname per line) to its fully qualified name.
while read -r h; do
  hammer host update --name "$h" --new-name "${h}.example.com"
done < shortnames.txt


This will take a bit of time depending on how many records you are going to update, but it is far safer and quite a bit easier than many of the other solutions that I have found while digging through web forums.

Avoiding this problem in the first place

If you do use shortnames in your environment, one of the things you can do to avoid this problem in the first place is to use the fqdn when you initially register the host to Satellite. The subscription-manager package has an option to register a host with any name you choose.

sudo subscription-manager register --org='<organization>' --name <hostname.example.com> --activationkey='<key>'


I put this in an activation script, and instead of hard-coding a name I use --name $(hostname --fqdn) to make sure that I register each host with its proper fully qualified domain name.
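
Putting that together, the scripted registration line looks something like this (the organization and activation key are placeholders, as above):

sudo subscription-manager register --org='<organization>' --name "$(hostname --fqdn)" --activationkey='<key>'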

This, I think, is the simplest way to avoid naming conflicts in the future. I've seen other suggestions about adding custom facts to new hosts to force subscription manager to use the fqdn, as outlined in this bugzilla report, and I'm sure that probably works just fine. I feel like this solution is a bit more flexible in that it allows you to use whichever name you want.


author: “Luke Rawlins” date: “2022-06-05” title: Install VMware Horizon on Pop!OS 22.04 description: “Install VMware Horizon client on Pop!OS or Ubuntu 22.04” url: /install-horizon-on-popos/ tags: – Tutorial – Pop!_OS – Ubuntu – Linux


Earlier today I found myself running into a problem while trying to install the VMware Horizon Client on Pop!_OS 22.04.

I didn't take a screenshot, so I can't post it here, but after running the bundle I would get a window with a totally unhelpful error message that just said “Installation was unsuccessful”.

Here's what I had to do to install the Horizon Client on Pop!_OS 22.04, though I would assume these steps are the same for Ubuntu 22.04.

Download the “bundle”

Start by downloading the VMware Horizon Client for 64-bit Linux from the VMware website.

This “bundle” file is really just a shell script, so you just need to make it executable.

cd ~/Downloads
chmod +x VMware-Horizon-Client-2203-8.5.0-19586897.x64.bundle 

Note: your bundle version may be different by the time you read this.


Install python2

For some reason the VMware Horizon Client needs Python version 2, even though it's been EOL for over two years now.

Easy enough to install though.

sudo apt install python2

Run the bundle

From your terminal run the Horizon client bundle.

sudo ./VMware-Horizon-Client-2203-8.5.0-19586897.x64.bundle

This script will prompt you with a series of questions. I answered “yes” to all of them, but you do whatever you think is best.


Open the Client

Horizon Screen Shot

At this point you should be able to search for and run the Horizon client from the app launcher.

Note

If you have enabled Wayland, you will need to disable it and use X11 instead, as the Horizon Client doesn't support Wayland at this point in time.
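
If you aren't sure which display server your session is using, this quick check works on most systemd-based desktops:

echo $XDG_SESSION_TYPE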

I assume the steps to do this on Fedora would be pretty similar, but you should keep in mind that Wayland is the default on Fedora so you may need to make additional changes to get it working.




author: “Luke Rawlins” date: “2023-10-31” title: Firefox is getting a deb package repository description: “Debian package repository for Firefox nightly, with release, and ESR in the works.” url: /firefox-debian-package/ tags: – Firefox


Yesterday Mozilla announced that Firefox nightly builds will be available in an APT repository with the Stable, ESR, and Beta versions coming after “a period of testing”.

Here's the announcement from the nightly blog.
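
Based on the announcement, enabling the repository should look roughly like this once it's live – treat the exact URLs, key path, and package name as assumptions and defer to Mozilla's own instructions:

    # Fetch Mozilla's APT signing key (path per the announcement; verify before use)
    sudo install -d -m 0755 /etc/apt/keyrings
    wget -q https://packages.mozilla.org/apt/repo-signing-key.gpg -O- | \
      sudo tee /etc/apt/keyrings/packages.mozilla.org.asc > /dev/null

    # Add the repository and install the nightly package
    echo "deb [signed-by=/etc/apt/keyrings/packages.mozilla.org.asc] https://packages.mozilla.org/apt mozilla main" | \
      sudo tee /etc/apt/sources.list.d/mozilla.list
    sudo apt update && sudo apt install firefox-nightly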

It's about time?

I'm personally thrilled to see an official traditional package of Firefox coming directly from Mozilla instead of one of those newfangled Flatpaks or Snaps.

Calm down – I don't know a damn thing about Flatpaks. I'm sure you are correct that they are the best thing since baby Jesus; I just haven't had much luck with them, and given the choice I'll stick with a traditional package every time. Even if it makes baby Jesus cry.

For as long as I can remember, all of the major Linux distributions have shipped Firefox by default, and for the same amount of time Mozilla has never really seemed to care a whole lot about making it easy to install Firefox outside of your distribution's package manager. Which really isn't that bad, except for the fact that updates would usually be a few days behind the upstream version.

It also sucks that Debian uses the ESR by default – so this will help me make a FrankenDebian! How exciting!

Now what do I do with my fancy install script?

I wrote this awesome install/update script for the Firefox tarball a little while ago, since Mozilla didn't think Linux important enough to build an official package.

Here's the script in case you want it – YMMV (I know it works on Debian 12).

#!/usr/bin/env bash
# tarball url found here https://ftp.mozilla.org/pub/firefox/releases/latest/README.txt
# Basically scripting the instructions found here:
#   https://support.mozilla.org/en-US/kb/install-firefox-linux#:~:text=1%20Download%20Firefox%20from%20the%20Firefox%20download%20page,script%20in%20the%20firefox%20folder%3A%20%22~%2Ffirefox%2Ffirefox%22%20See%20More

desktop_file_path="/home/${USER}/.local/share/applications/firefox.desktop"
get_desktop_file="https://raw.githubusercontent.com/mozilla/sumo-kb/main/install-firefox-linux/firefox.desktop"

if [[ -f firefox.tar.bz2 ]]; then
  echo "Removing old tarball"
  rm -f firefox.tar.bz2
fi

echo "Downloading latest Firefox tarball file..."
wget -qO firefox.tar.bz2 "https://download.mozilla.org/?product=firefox-latest&os=linux64&lang=en-US"

sudo tar -C /opt/ -xvf firefox.tar.bz2
if [[ ! -h /usr/local/bin/firefox ]]; then
  echo "linking firefox to /usr/local/bin/firefox"
  sudo ln -s /opt/firefox/firefox /usr/local/bin/firefox
fi

## If there is already a desktop file don't assume we want to overwrite it.
if [[ ! -f "${desktop_file_path}" ]]; then
  echo "retrieving desktop file"
  wget -qO "${desktop_file_path}" "${get_desktop_file}"
fi
rm -f firefox.tar.bz2


title: “Firefox copy and paste with Apache Guacamole” authors: [“Luke Rawlins”] date: “2020-11-20” description: “Enable the asynchronous clipboard in Firefox for easy copy and paste in Apache Guacamole.” url: “/firefox-async-clipboard” tags: – Firefox


If you're not familiar with Apache's Guacamole project, it is a clientless remote desktop gateway that allows you to access your desktop from any web browser. It's actually pretty cool software that I've been using quite a bit lately.

As a Firefox user, however, one little annoyance has been that direct copy and paste has been a problem. The Guacamole docs have a brief section that explains how to copy and paste using the Guacamole menu bar, but that is cumbersome and not very user friendly. The Guacamole developers even have a section in the FAQ that explains the issue: Guacamole uses the Asynchronous Clipboard API, which currently has only limited browser support. Basically, it only works out of the box on Google Chrome.

This means that, by default, if you want direct copy and paste from your computer's clipboard to the remote computer in Firefox, you have to use the text box built into the Guacamole menu – ugh.

However, there is a solution to this little conundrum in the Firefox advanced preferences.

Enable the Asynchronous Clipboard in Firefox

In the URL bar, type about:config and press enter. That will take you to the advanced preferences page, which will present a warning message that looks like this:

Firefox advanced menu

Go ahead and click the “Accept the Risk and Continue” button... That is, assuming you trust that some random guy on the internet isn't about to give you bad directions :fearful:

After you click the Accept button (you did click it, right? That last line was just a joke, I wouldn't steer you wrong), you will be at a mostly blank page with a search bar and yet another caution message. But remember: We are not descended from fearful men! And so we press on!

Firefox advanced search

In the search field, type “clipboard”. The page should immediately (if not sooner :unamused:) start filling with search results. The setting we are looking for is “dom.events.testing.asyncClipboard”. It will be close to the bottom, and by default its value will be “false”.

Click the toggle button in the right-hand column (or double-click the setting name) to set the value to “true”.

When it's changed it should look something like this:

Firefox clipboard settings

You might have to refresh the Guacamole page or restart the browser – but now you'll be able to copy and paste directly from your computer into the remote client while skipping the annoying text box.

Feel free to send me a message if this doesn't work – or if you know a better way.