

authors: [“Luke Rawlins”] date: 2020-05-25 description: “Web server upgraded to Fedora 32, and planning to migrate this site to Podman.” draft: false title: Upgrade to Fedora 32 – and figuring out containers with Podman! url: /upgrade-to-fedora-32-figuring-out-containers-with-podman/ tags: – Fedora 32 – podman


Fedora infinity logo

Over the weekend I decided it was time to test the in-place upgrade on my Fedora 31 web server and migrate to Fedora 32. In-place upgrades on Fedora are fairly straightforward and have been flawless (at least in my experience) over the last few years.

If you are thinking about kicking the tires on the new Fedora check out the quick tutorial over at Fedora Magazine to learn how easy it is to perform the upgrade.

Moving this site to containers! – with Podman

I'm a long time container skeptic – I don't understand exactly what problem I'm supposed to solve with containers, and I feel like they add a lot of complexity to systems that should be fairly simple. But maybe my suspicions are wrong, and I just don't understand them very well.

In this case I guess the problem I'm solving is my own ignorance around container ecosystems. I have no doubt that my real job will require some container knowledge fairly soon, and I don't want to get left behind.

It's definitely true that I don't understand containers very well. At least not at a practical level and not to the extent that I should understand them. In that regard there is no better way to learn than by doing, so I've taken the first baby steps toward moving this site into containers. I'll update the blog as I go, passing along anything I learn along the way.

I've been using a recent post from @fatherlinux, about how he migrated his WordPress installation to containers using Podman, as the rough guide for my own migration. His article on “Crunch Tools” can be found here: http://crunchtools.com/moving-linux-services-to-containers/.

My migration plan is a bit different from his, as I intend to separate the database from the web server, and I'll need to figure out how to incorporate letsencrypt into an Nginx reverse proxy. I actually host multiple websites, not all of which are mine. They will all need to be accessible on ports 80 and 443, so I don't have the luxury of choosing random ports to forward traffic to. – Luckily I personally know the people I host sites for, and they don't usually mind if I want to tinker with things.

Why Podman?

Why Podman instead of Docker, or LXC, or whatever your favorite container platform is? I chose to use Podman as my container engine or container tool or container something simply because it is the new native container tool provided by Red Hat. I have a weird neurosis where I feel that a tool provided natively with an OS always appears better than a comparable third-party tool, like Docker. I'm not saying that I'm always right about that, but it's something that I can't seem to shake. I figure if Red Hat is willing to provide it and stand behind it, then I should probably learn it. That is really all there is to it.

A rough outline of my plans as of now.

Right now I have a 9 step plan for moving this site into containers. (Because 10 step plans are boring.)

  1. Move the database for this WordPress site into a container.
  2. Hammer out the steps to build the database automagically and create a Containerfile/script to redeploy with minimal effort.
  3. Plan web server space.
  4. Set up an Nginx reverse proxy with certbot.
  5. Build templates for each website vhost to run in an Apache container.
  6. Create systemd unit files for each container.
  7. Spend some time doubting myself.
  8. Cut over – probably some major downtime.
  9. Start plotting how to move the containers into “pods” using the podman toolkit.

How's the migration going so far?

After the upgrade to Fedora 32, and doing my normal validation after an upgrade (Did the home page and the admin page load? – Yep, time to celebrate my victory!), I went ahead and installed Podman and started reading the man pages.

Right away I ran into a bit of a problem just trying to run the base fedora image from https://registry.fedoraproject.org. After building a basic image with MariaDB and trying to run the container I hit a snag with what I thought was a permissions issue. One of my goals in this project is to run the whole operation with non-root, unprivileged containers.
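For context, this is roughly the shape of what I was attempting. The image and container names below are made up for illustration, and the Containerfile is assumed to install MariaDB and boot systemd as its entrypoint – a sketch, not my exact files:

    # Assumed example: build an image from a Containerfile in the current
    # directory (one that installs mariadb-server and runs systemd), then
    # run it as an unprivileged user.
    podman pull registry.fedoraproject.org/fedora:32
    podman build -t mariadb-test .
    podman run -d --name db-test mariadb-test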

The error I was struggling with for a while was this:


    Failed to create /init.scope control group: Permission denied
    Failed to allocate manager object: Permission denied
    [!!!!!!] Failed to allocate manager object.
    Exiting PID 1…

As it turns out this is an SELinux-induced message. I found a good Red Hat KB article that pointed me to flipping an SELinux boolean called container_manage_cgroup – I don't know exactly what this boolean does yet, but it appears you have to set it to true if you want to run a container that uses systemd.

setsebool -P container_manage_cgroup on

After checking my audit logs I can verify that SELinux will in fact prevent a container from starting when this boolean is off, at least if your container has systemd installed and managing services.
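If you want to check the state of that boolean for yourself, getsebool will show it – for example:

    getsebool container_manage_cgroup
    # container_manage_cgroup --> off   (before)
    # container_manage_cgroup --> on    (after running setsebool)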

Next Steps

So, so far so good. My next step is to plan out exactly where I want to keep my database configuration – and then how I want that to be passed into the container. If you read this site over the next week or two (or more, I'm pretty lazy) don't be surprised if it suddenly goes down for a little while, while I work out different issues that will certainly arise in the process.

If all goes well with the database migration, then I'll start working on the reverse proxy and apache services.

If you have any tips or suggestions please pass them on – I need all the help I can get.


authors: [“Luke Rawlins”] date: 2015-09-03 draft: true title: Troubleshooting – Isolating the cause url: /troubleshooting-isolating-the-cause/ tags: – Linux


This post is the second in my three part series on troubleshooting. In last week's post I focused on the importance of building trust with your client by asking the right questions to solicit the valuable details you need to appropriately diagnose an IT problem. This week I will talk about the importance of isolating a specific cause.

Step 2 – Isolate a specific cause

A lot of time is wasted in IT troubleshooting due to anxiety over coming up with a fast solution. People want to impress their boss, their customer, or their colleagues so they quickly form an opinion as to what the client needs and try to implement a fix without first taking the time to isolate the problem.

Slow is smooth, smooth is fast.

Think through the symptoms, think about what system or systems they all have in common. Use that information along with other knowledge you have gained through training, education, and experience to narrow down the possible causes. Slow is smooth, smooth is fast. You isolate an issue by gathering data, coming up with a hypothesis, and then attempting to prove that hypothesis to be wrong.

Let's take what we have learned for a spin

Scenario – I don't have the internet!!!! OMG my life is over please help me!

Sorry no Internet

After asking our customer all the necessary questions that we discussed in The Art of Asking Questions we find out that the user discovered this issue when he couldn't reach his favorite news site. Further investigation shows that he cannot reach any site on the internet or the intranet, running ipconfig reveals that he is receiving a valid ip address, and he can ping the gateway by ip address.

Isolate a specific cause

Again, ask more questions. Do not assume that this is a large-scale DNS issue. It could be, but it could also be a virus that corrupted his hosts file, or maybe he messed with the DNS settings on the network adapter trying to bypass the company servers. Remember, you don't know yet – you are still agnostic.

Is anyone else in the office having this issue?

What does this question do for us? It can potentially eliminate a lot of problems that would be very expensive. If this is an office of 10 people, all of whom have workstations that are connecting to the internet, network shares, and intranet sites, then you do not have a wide-scale DNS problem. The ISP is not at fault since others are not affected, so you can eliminate the router, the ISP, and the DNS and DHCP servers as possible causes – just by asking a question.

From what we have gathered after asking all of our questions, we know that the problem is with DNS, but that it is only affecting the local computer. What components are in play? The hosts file, a proxy redirect, an incorrect DNS setting on the network adapter, maybe a virus. Start with any component you like and work your way back, verifying that each is set correctly and eliminating it as a cause until you run into one that cannot be excluded.
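On a Linux workstation the same checks might look something like this – the logic is identical on Windows, only the commands differ, and the hostname here is a placeholder:

    cat /etc/hosts                   # has the hosts file been tampered with?
    cat /etc/resolv.conf             # are the configured DNS servers what you expect?
    nslookup intranet.example.com    # does name resolution work at all?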

Conclusion

I used DNS as an example here, but the same methodology can and should be used when dealing with hardware problems, software installations, network connections, authentication etc. Ask your clients questions. If you build a strong relationship with them then the answers you get will contain reliable information that you can use to help you isolate the problem. If you ask the right questions you can eliminate many possibilities without even having to do any real technical work.

The lesson here is patience and persistence. Don't be the person who insists that they know what the problem is until you have eliminated every other possibility (or the most common ones at the very least). Skipping the isolation phase is very tempting, especially for people who are just starting out and are trying to prove themselves, don't do it! Step back, take a breath, gather your thoughts and verify, verify, verify. Analyze all the symptoms and prove every other possibility to be false before you spend time and money fixing something that isn't broken.


authors: [“Luke Rawlins”] date: 2016-06-23 draft: false title: Passwordless login with SSH Keygen url: /passwordless-login-with-ssh-keygen/ tags: – Linux – Ubuntu


What is an RSA key?

RSA keys are a public key encryption method that keeps a private key on your computer and places a public key on other machines. The two keys are generated as a mathematically related pair, and anything encrypted with the public key can only be decrypted with the private key. As long as the private key is kept confidential, use of the keys is secure.

Why use RSA keys?

RSA keys are secure

The keys are secure because the private key can be encrypted on the user's computer, protecting it from falling into the wrong hands – unlike a password printed on a sticky note and placed on your desk. RSA keys are also secure because they allow a server administrator to shut off password authentication on remote servers, making a brute force attack that utilizes password dictionaries impossible. By default the RSA key is 2048 bits, but this can be altered with the -b option.
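For example, if you want a longer key than the default, the -b option looks like this:

    ssh-keygen -b 4096    # generate a 4096-bit RSA key instead of the default 2048-bit key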

RSA keys are convenient

Keys are easy to create, and distribute. The key allows near instant authentication without stopping to type a password every time you need to jump onto a server.

From your Mac or Linux computer, open the terminal and type the following:

ssh-keygen

You will be asked a series of questions

user@testserver:~$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/user/.ssh/id_rsa):

To keep the default just press enter. This will create a private key called id_rsa and a matching public key called id_rsa.pub under /home/user/.ssh/

Next it will ask for a passphrase. You want to enter a passphrase. The passphrase is what protects your private key from being readable by others.

Enter passphrase (empty for no passphrase): 
Enter same passphrase again:

After entering the passphrase ssh-keygen will generate an rsa key and display a fingerprint like this:

Your identification has been saved in /home/user/.ssh/id_rsa.
Your public key has been saved in /home/user/.ssh/id_rsa.pub.
The key fingerprint is:
49:ee:8c:73:d2:a3:53:b9:b4:50:63:19:34:a2:47:c2 user@testserver
The key's randomart image is:
+--[ RSA 2048]----+
|       .. o.o    |
|       E+ ...    |
|      . . .o     |
|       . o=.     |
|        oSo      |
|        .=+      |
|        ++*o     |
|        .=o.     |
|         ..      |
+-----------------+

If you cd into /home/user/.ssh/ and list the files, you should see something similar to the output below.

cd .ssh/

$ ls
id_rsa  id_rsa.pub  known_hosts

We will be transferring the id_rsa.pub file to our remote server using “ssh-copy-id”

user@testserver:~$ ssh-copy-id 192.168.0.2
The authenticity of host '192.168.0.2 (192.168.0.2)' can't be established.
ECDSA key fingerprint is 10:0f:3b:dy:3d:08:5a:3c:09:c8:81:c1:53:a2:94:9c.
Are you sure you want to continue connecting (yes/no)? yes

If you haven't connected to the remote server yet you will be asked to accept the fingerprint of the server you are connecting to. Just type yes and hit enter.

/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
user@192.168.0.2's password:

Next you will be asked for the password for the user account on the remote server. Only users with a valid account can connect this way.

Now you should be able to ssh to the remote server without being prompted for a password. You may have to input the passphrase to unlock the key file. However, if you are on a Mac or Linux desktop or laptop you can have it saved in your keychain so you will not have to input the passphrase again.

ssh user@192.168.0.2
Enter passphrase for key '/home/user/.ssh/id_rsa':

Using rsa key authentication will increase the security of your server network.
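And as mentioned earlier, once you have confirmed that key-based login works you can shut off password authentication on the server entirely. A minimal sketch – the file location assumes a stock Ubuntu install:

    # On the remote server, edit /etc/ssh/sshd_config and set:
    #   PasswordAuthentication no
    # then restart the SSH daemon:
    sudo service ssh restart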


author: “Luke Rawlins” date: “2022-02-19” title: Plain Text Notes description: “Keeping plain text notes” url: /plain-text-notes/ tags: – Opinion


Digital Hoarding

For years I've been a heavy note taker, as well as a semi-regular journal writer. I try to keep fairly detailed notes while I'm learning something new, which comes in handy more frequently than you might think. Note taking is my secret weapon when it comes to keeping up with new technologies, or even old technologies that I need to pick up and remember.

For many years I used OneNote to keep notes on Linux commands, configurations, and general tech topics, as well as my journals. OneNote is a great app, but after several years I wanted to give Evernote a try, and in the transition I found the biggest weakness of OneNote and basically every other note taking app: exporting and importing notes between platforms.

Moving from OneNote to Evernote was painful: OneNote only has an export-as-PDF option, and copy/paste has a tendency to paste text as an image. For me the biggest appeal of Evernote was the ability to export notes in a more portable manner, namely HTML.

Evernote is a powerful cross-platform organization tool, but I've started to wonder if I really need it these days, given the rising cost of pretty much everything and considering my laptop's filesystem can serve the same purpose and comes with very capable document search abilities (Spotlight on macOS, and the Pop!_OS launcher's “find” keyword).

Plain text the universal format

After my experience moving from OneNote to Evernote I wasn't thrilled about the idea of transitioning to yet another notes app. So after giving it some thought and looking at my options I came across a nice little blog dedicated to writing in plain text: The Plain Text Project.

I had a moment of epiphany 🤔. A good portion of my job involves searching, editing, and maintaining plain text files in the form of configurations and code, and even this site is written purely in markdown. When I think about it, there isn't much I need a fancy note app for that I can't do in a more flexible and portable manner on my own with plain text files.

Why plain text?

  • Portability. Plain text is universal. A file written in plain text can be opened on Mac, Linux, BSD, Windows, iOS, Android... basically, anything you've got can open a plain text file.
  • Speed. Plain text is fast and clean. Without being presented with hundreds of text formatting options you can just sit down and start writing.
  • Searchability. You can search plain text documents with command line tools, gui tools, fancy indexing tools, and dozens of other tools. If you write it in plain text you can find it again later.
  • Control. You can create as simple or complex a system as you want in plain text. If you want to have an index in elasticsearch you can do that. If you just want a bunch of files dumped in a folder – you can do that too.
  • Storage. Text files are small, and quick to open.

Why not use plain text?

  • It's not as pretty. Some people enjoy the text formatting options in big office applications or in the professional note taking apps. If you really take issue with the simple formatting of plain text files then you probably won't enjoy the experience.
  • No embedded images. You can't attach photos to plain text, and this is one of the big drawbacks for me. When I'm journaling about a vacation, or an event I often like to include pictures in my journal... I haven't come up with a good system for that other than referencing the day and saying “check photos from 2022-02-19”.

How I'm using plain text

I spend a lot of time on the command line and have become so efficient with Vim that I decided right away that I was going to be using Vim for most of my writing. I do all my writing for this blog with markdown and Vim. In that vein I found an incredible Vim plugin called vimwiki as a way to keep my personal knowledge base in one place while keeping my writing searchable, portable, and shareable.

This isn't to say you have to be comfortable or even aware of the command line to use plain text – it just means you'll use it differently than I do.

Plain text in markdown means I can easily share anything I need or desire to share via my blog. It also means I can keep anything I want to stay private or have no need to share completely under my control – I can put that info in my personal iCloud storage, or off the cloud on my hard drive, whatever I want, and I can change my mind later without worrying about migrating between apps.

Tags in plain text

Over my years of keeping notes I've found tags to be the best way to manage large quantities of information. I keep tags for technologies (Linux, Podman, Vim, etc), I keep tags for places (Parks, Cities, etc), and I keep tags for my children so that I can easily search for very specific information.

When I moved to plain text files I switched from tagging with # to @ because markdown uses the # pretty heavily and # is commonly used for comments in configuration files. In order to keep track of my tags and retrieve the information in those tags I wrote this handy bash script that can search a custom directory for tags like @Linux or @myfavoriteplace.
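The core of the idea is nothing more than grep. This is not my actual script, just a sketch of the kind of search it does, assuming the notes live in a directory like ~/notes:

    grep -rn "@Linux" ~/notes    # every line tagged @Linux, with file name and line number
    grep -rl "@Linux" ~/notes    # or just the list of files containing the tag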

The notes-cmd script

I wrote this script to list my tags, if you like it feel free to use it, if you can make it better feel free to contribute something.

notes-cmd


If you found this useful please support the blog.

Fastmail

I use Fastmail to host my email for the blog. If you follow the link from this page you'll get a 10% discount and I'll get a little bit of a break on my costs as well. It's a win win.


Backblaze

Backblaze is a cloud backup solution for Mac and Windows desktops. I use it on my home computers, and if you sign up using the link on this page you get a free month of service through backblaze, and so do I. If you're looking for a good backup solution give them a try!

Thanks!

Luke


authors: [“Luke Rawlins”] date: 2022-06-18 title: I'm a Firefox fan url: firefox-still-the-best tags: – Firefox


Get Firefox

For anyone looking for a fast, modern, and privacy friendly browser I still think Firefox is an excellent choice.

Firefox! Yay!

In the interest of full disclosure, I am a Firefox fanboy. If Mozilla would let me, I would pay a subscription fee for Firefox, but as of now they will not take my money.

However, despite being the default browser on most Linux distributions, Firefox has been bleeding users over the last several years. 🤣

Apparently Linux hasn't yet become the powerhouse of desktop operating systems we've all been hoping for, and hopefully Mozilla has a plan outside of the Linux space to start growing adoption of Firefox with new users, as well as to continue being a great choice for those of us who have stuck with them over the long haul.

All jokes aside I think even a lot of long time Linux users have started moving to Chrome, Vivaldi, or even Edge.

I'm not sure what is driving people towards Chrome, Edge, and the other Chromium based browsers over Firefox. In my experience Firefox provides a great browsing experience, while being far lighter on resources than Chrome. Firefox doesn't seem to eat my laptop battery with the same insatiable hunger as Chrome, and it rarely makes my laptop fans spin up.

What's so good about Firefox?

A lot of talk on the web about the value of Firefox seems to hinge on this idea that you need to preach to people about the virtues of Open Source and the free and open web. Or that developers want to keep Firefox around because other browsers (Safari) don't implement new features quickly enough... blah, blah, blah... No one cares if developers don't like Safari, or Chrome, or whatever.

If people are going to start using Firefox it's going to boil down to the basics. It needs to be fast, it needs to have a familiar interface, and it has to have the features people want.

I think Firefox already satisfies these requirements; the tricky part will be convincing people to move off Chrome – the place they already keep their bookmarks, passwords, photos, emails, and other valuable parts of their lives. It's an uphill battle for sure, but here I'll just present a few of the features that keep me on Firefox, things I think they do better than their competitors.

Some of these things might seem trivial or irrelevant to you, but these are the features I appreciate the most and use on a daily basis.

Reader View

Firefox has a superb reader view. A single click on the icon in the url bar will strip all the unnecessary clutter from a web page and present you with a clean, easy to read page, devoid of ads, pop-ups, and other distractions. Outside of a decent ad blocker, the reader view is one of the few things that makes most modern blogs and news sites tolerable for me.

Firefox Sync

Almost every browser has a sync feature built into it and most of them work well enough that we don't often have to think about them.

Firefox at first glance might seem like just another browser that can sync bookmarks, passwords, add-ons and other information. However, Firefox takes user privacy seriously and they've put a lot of thought into the type of harm that could be caused if a person's browsing history, bookmarks, or other data were to be leaked, stolen, or otherwise compromised.

How Firefox Sync Works:

> Firefox Sync by default protects all your synced data so Mozilla can’t read it. We built Sync this way because we put user privacy first. In this post, we take a closer look at some of the technical design choices we made and why.
>
> When building a browser and implementing a sync service, we think it’s important to look at what one might call ‘Total Cost of Ownership’. Not just what users get from a feature, but what they give up in exchange for ease of use.

On top of being a highly secure and privacy focused service, Firefox Sync also works flawlessly. The hand-off from computer, to tablet, to phone just works. Add-ons, bookmarks, and browsing history are accessible everywhere I'm signed in to my Firefox account.

Best of all I'm not paying for this service by handing my data over to Google or Microsoft to be sold to the highest bidder.

Picture in Picture

The picture in picture implementation in Firefox is outstanding.

I love that I can watch a show on one of my streaming services and pop the video out of the browser page so I can continue working on other things, while the video continues to play on another monitor.

Just hover your mouse over the video, and if the streaming service supports it you'll reveal a “picture in picture” button. Click it and the video will pop right off the page. You can resize, pause, mute, rewind and fast-forward to your heart's content right from the video playback window.

Privacy

Firefox has built-in tracking protection, which blocks much of the tracking software that follows you around the web while you are shopping, working, or browsing.

These trackers aren't just annoying and intrusive; they also slow down your browsing. By blocking them Firefox speeds up your browsing experience without hogging system resources.

This tracking protection can be increased by adding the Facebook Container add-on to Firefox. This add-on relegates Facebook's tracking software and cookies into an isolated browser container so that none of your other browsing activities will be leaked to Facebook. This helps protect you from one of the internet's most notorious and untrustworthy data brokers.


Get Firefox


If you found this useful please consider supporting the blog.

Fastmail

I use Fastmail to host my email for the blog. If you follow the link from this page you'll get a 10% discount and I'll get a little bit of a break on my costs as well. It's a win win.


Backblaze

Backblaze is a cloud backup solution for Mac and Windows desktops. I use it on my home computers, and if you sign up using the link on this page you get a free month of service through backblaze, and so do I. If you're looking for a good backup solution give them a try!

Thanks!

Luke


authors: [“Luke Rawlins”] date: 2016-01-07 draft: false title: Media Server url: /media-server/ tags: – Linux – Ubuntu


Have you ever wanted to set up your own video streaming service on your home or work network? This simple guide will help you set up a media server using Ubuntu 14.04 and Plex. The setup for Plex on Ubuntu is incredibly easy and is a great way to back up your existing video, music and picture library in a way that will allow you to share the content with anyone on or off your network.

Step 1 – Download the installer

Make sure you are in your home directory.

cd ~/

We will be using wget to download the installation package from Plex. Since wget downloads files to the current working directory it is good practice to ensure that the package is downloaded in the home directory.

wget https://downloads.plex.tv/plex-media-server/0.9.14.6.1620-e0b7243/plexmediaserver_0.9.14.6.1620-e0b7243_amd64.deb

Step 2 – Install Plex

sudo dpkg -i plexmediaserver_0.9.14.6.1620-e0b7243_amd64.deb

Installing this package will create a user called “plex” on your server. You can verify that the user was created by checking the /etc/passwd file.

vim /etc/passwd

You should see a line in that file similar to the following:

plex:x:118:127::/var/lib/plexmediaserver:/bin/bash

Exit the file by typing :q! and press enter.

Step 3 – Create the directory structure

Plex reads from specific directories and wants to see your files named in a specific way in order to pull meta data about movies and music from the internet. We will create 3 directories to hold: Movies, Music, and Pictures.

sudo mkdir -p {/Plex/Movies,/Plex/Music,/Plex/Pictures}

The -p option for mkdir will create parent directories if they don't already exist. In this case the Plex directory will be created. The squiggle brackets {} group our directories so that we can create multiple folders at once.

Plex requires read and execute permissions on the directories we just created, and on any files placed in those directories.

sudo chown -R plex:plex /Plex
sudo chmod -R ug+rx /Plex
sudo chmod -R g+s /Plex

These commands change the owner of the Plex directory to the plex user and modify the permissions to allow the plex user and group to read and execute files in the directory. Later when you add files to any of these directories they should inherit the plex group so that you will not need to change permissions manually on each file.

At this point you can move any files you want to share from your media server into the appropriate directory. If you don't have any files ready then you can move them at another time.

File names

In Plex file names are very important. Movies for instance should be named as follows: “Name of Awesome Movie (year of release).mp4”. For example “My Movie (2016).mp4”

Prevent accidental deletion

I like to protect my library from accidental deletion by setting the “i” (immutable) option in the file attributes.

sudo chattr +i "/Plex/Movies/My Movie (2016).mp4"

Setting the immutable file attribute will prevent all users, even root, from deleting or modifying the file until the attribute is removed.
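If you ever need to edit or delete a protected file, you can check and clear the attribute again – for example:

    lsattr "/Plex/Movies/My Movie (2016).mp4"         # verify the immutable flag is set
    sudo chattr -i "/Plex/Movies/My Movie (2016).mp4" # remove it so the file can be changed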

Step 4 - Don't forget the firewall

Plex uses port 32400, so in Ubuntu 14.04 we will need to open that port before our media server will work.

sudo ufw allow 32400/tcp

 Step 5 – Connect to the server

Open a web browser and go to the following address: http://yourserveripaddress:32400/web

The colon in the address is to specify the port number. You should now see the Plex setup web page.


Walk through the set up screens and select your media libraries. Once you have completed the set up and created a user account (which is free) from www.plex.tv you will be able to stream Movies, Music, TV shows, and Pictures from pretty much any device on your network.


authors: [“Luke Rawlins”] date: 2019-07-07 draft: false title: Upgrade to Fedora 30 Complete! description: “Some lessons I learned while migrating WordPress from Ubuntu to Fedora 30.” url: /upgrade-to-fedora-30-complete/ tags: – fedora – migration – wordpress


In the last couple of days there has been some extended downtime on this site. That is because I've been working on migrating my blog from Ubuntu 16.04 to Fedora 30. I'm switching for lots of reasons. Some of the php packages I need for WordPress have been getting a bit out of date on Ubuntu 16.04, and I wanted the most up-to-date stable release of php without needing to add a third-party repository. Fedora 30 comes with php 7.3 by default, which is what is recommended by the good people at WordPress dot org (https://wordpress.org/about/requirements/).

Why make the switch from Ubuntu to Fedora? The biggest reason is that these days I'm more comfortable working in Red Hat space, and I like some of the features in Fedora 30, like the new module framework (https://docs.pagure.org/modularity/) that allows multiple versions of programming languages and databases to be installed on the same server without conflicting with each other. Also I still live under the delusion that, someday, I will port all or some of this site into containers, and I want to try Podman (https://podman.io).

It's also because the database server behind this site has been running on Fedora since just a few months after the Fedora 29 release, and after a flawless upgrade from 29 to 30 I decided that I wanted to have a consistent OS layer between the web server and the database.

This is not a rebuke of Ubuntu or Debian based systems. I think they are great: they make great servers, they make great desktops. I've just grown more comfortable with some of the RPM based distributions, and I like that Fedora seems to be able to walk the line between stability and keeping up with some of the newest packages available.

Fedora Infinity logo

Lessons learned

SELinux – check the audit log

Moving the site was fairly easy. I just archived my web root directory and copied it over to the new server, unpacked in the same directory on the new server and that was pretty much it.

However, if you have SELinux enabled on your web server (and I recommend that you do), then you will need to flip a few SELinux booleans to allow the web server to connect to the database and to install themes and updates.

sudo setsebool -P httpd_can_network_connect_db 1

That setting will allow you to connect to a remote database. If you are running WordPress with the database on your web server then you don't have to worry about that one. The next one is important if you want to allow WordPress to install plugins, themes, and updates:

sudo setsebool -P httpd_can_network_connect 1

I figured this out after spending way too much time trying to figure out why my site couldn't talk to the database server. I knew I could hit the database port after running an nmap scan, which meant that I didn't have any firewall issues, and I could connect to the database with the WordPress credentials. I could've figured it out much faster had I checked the audit log:

sudo sealert -a /var/log/audit/audit.log

Checking the audit log would've saved me a bunch of time because it basically tells you what you need to do:


    *****  Plugin catchall_boolean (47.5 confidence) suggests   ******************
    If you want to allow httpd to can network connect
    Then you must tell SELinux about this by enabling the 'httpd_can_network_connect' boolean.
    Do
    setsebool -P httpd_can_network_connect 1


I may end up doing a whole post on how to navigate the ins and outs of SELinux related to WordPress. These aren't the only settings you will want to change. You also need to make sure that Apache can read and write to a few directories that SELinux will block by default, at least on Fedora 30.
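For example, allowing Apache to write to the wp-content directory looks roughly like this – adjust the path to wherever your web root actually lives, this is just a sketch rather than something pulled from my config:

    sudo semanage fcontext -a -t httpd_sys_rw_content_t "/var/www/html/wp-content(/.*)?"
    sudo restorecon -Rv /var/www/html/wp-content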

Lesson learned.... read the logs first.


authors: [“Luke Rawlins”] date: 2018-05-19 draft: false title: Three reasons to start using Ansible description: “Three reasons to start using Ansbile. It's agentless, you can get started on a laptop, and it's easy to learn!” url: /why-use-ansible/ tags: – ansible – devops – Linux


A few months ago I attended a one day Ansible workshop in Columbus, Ohio with a colleague. The workshop was sponsored by Red Hat and included several labs; it's well worth your time if you have the opportunity. I wasn't sure what to expect – generally you don't walk away with much working knowledge from these short events – but I had some experience with Puppet (most of it frustrating) and I was curious to see what Ansible could do for my organization.

Honestly, my expectations were pretty low. In general I haven't been impressed by the devops trend and as you know from my previous posts I'm fairly skeptical of containers. That said, my organization has a need to automate a lot of the everyday tasks that take time away from improvement projects, and everything I had read about Ansible led me to believe that it would add value to our operations. I was not disappointed. Ansible has the perfect balance of power and ease of use; it has become my go-to tool for any repeatable task. Ansible is so easy to learn that we started putting together some playbooks for testing the day after the workshop.

So why am I so impressed with this tool? Here are a few of the reasons that I think Ansible is worth a serious look, if you are trying to automate more of your infrastructure.

Ansible does not require an agent

This alone is a great reason to at least give Ansible a trial run. Ansible performs all of its operations via ssh and python (python 2.6 at the time of writing, and no you don't have to know python to use Ansible). If you have Linux or Unix operating systems on your network that were released in the last 10 years or so, it is very likely that Ansible will be able to manage them right out of the box. Nothing to install, configure, or patch on the target hosts. This means that you can get started with testing right away. Less work makes me a happy Sysadmin and Ansible accommodates that laziness efficiency mindset very well.

Ansible does not require a control node

Or any control server whatsoever. If you have a Mac or Linux based laptop you can install Ansible right now and start using it from your workstation. I guess there is also a Microsoft Windows component, but I haven't used it yet, so I'm only mentioning it for completeness – YMMV. The fact that you can get started without buying any additional equipment or taking time to spec out and build a new server is hands down one of the best reasons to try Ansible. If, eventually, you become brainwashed convinced like me (one of us, one of us....) that Ansible is the greatest thing to happen to you since barbecue sauce on pizza, then you might want to build out a server to share playbooks with your team, or to use as a controller for scheduled tasks and what not. Or don't share and just let everyone wonder how you suddenly became so amazing.... But seriously, don't be a jerk, share your playbooks.

Ansible is easy to learn

Most IT departments should be able to get Ansible up and doing real work in a matter of weeks. Remember how I said my group started using Ansible in testing just a day after attending the workshop? That was the truth. Ansible playbooks are built from flat text files that are structured using YAML markup, which is very easy to learn. If you haven't used YAML I'll have a few examples up in a week or two, with playbooks that can help you get started. Ansible comes with modules for most common tasks built right into it. Need to patch a bunch of servers? Ansible can do that. Need to update a hundred thousand million files? Ansible can do that. Need to edit a configuration file and reload a service to meet new security compliance standards, on every server in your network? Ansible can do that.... And I'm only mentioning the low hanging fruit here.... those kinds of god awful tasks that used to take days or weeks now take just minutes. I'm not the smartest person you will ever meet (or not meet.... don't be weird about it), but I was able to get Ansible to do real work like this on production servers within just a couple of weeks. If you manage more than a half dozen servers, learning the basics will be well worth the effort and will save you time and energy in the long run.
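To give you a feel for it before those examples land, even the ad-hoc commands are approachable. A quick sketch, assuming an inventory file you've created called inventory.ini and a host group named webservers:

    ansible all -i inventory.ini -m ping                                         # verify SSH and Python on every host
    ansible webservers -i inventory.ini -b -m yum -a "name=httpd state=latest"   # update a package as root on one group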

In an upcoming post I'll explain how I got started using Ansible from an operations perspective. I'm less dev and more ops on the devops scale so don't expect a deep dive into the inner workings of ansible. I will, however, show you a few strategies that I used to get started. As well as share a couple quick and easy playbooks to give you some practical use case examples.

https://www.ansible.com/overview/how-ansible-works (overview of how it works)

http://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html#installation-guide (installation guide)

http://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html#managed-node-requirements (requirements for your target nodes)

Use cloud-init to build LXD/LXC containers

Over the last few months I've been reading and writing a lot about containers using podman on this site. I even went so far as to move this site onto the podman container platform, though I've recently de-containerized this site. Managing each container image was getting exhausting and in the long run I really didn't see the point in all the extra work, so I carefully backed out my container changes, and my workload on the web server has gone way down. I can hopefully focus more on writing now, rather than just constantly feeding the machine.

Even though I couldn't keep up with the overhead created by containers on this site, I do still occasionally use containers for a lot of things, mostly programming projects and scripting tests. However, I don't generally use podman or Docker – instead I usually end up building containers with LXD.

I like LXD containers for a lot of reasons. Primarily, I tend to think LXD is the easiest container platform to use, and that is enough of a reason by itself for me to use it over other container platforms. LXD makes it easy to spin up and manage containers from a multitude of distributions, and you don't have to worry about complex volume mapping or setting up new service unit files. LXD just takes care of it all, which means you can get to work just a little quicker. Or I can anyway – a lot of people prefer the application style containers, I'm just not one of them.

Here is how I rapidly build Ubuntu LXD containers using cloud-init when I need to create a clean environment to work on a new project.

Credit to Simos Xenitellis for doing a great write up on this a couple years ago https://blog.simos.info/how-to-preconfigure-lxd-containers-with-cloud-init/ I use quite a bit from his blog in this post while adding a few extra pieces that I felt were missing after quite a bit of trial and error on my part.

What is cloud-init?

According to the official cloud-init documentation site:

> Cloud-init is the industry standard multi-distribution method for cross-platform cloud instance initialization. It is supported across all major public cloud providers, provisioning systems for private cloud infrastructure, and bare-metal installations.

Cloud-init is a technology that allows you to define a desired initial state of whatever system you are building. In this context it's generally referring to cloud server instances, or bare-metal.

In this instance we'll see how it can be applied to LXD containers. All images (that I'm aware of) in the standard LXD repositories come with cloud-init out of the box, so you don't need to do anything special to get your images ready for cloud-init. Unless you are making your own images, in which case – why are you reading this?

What is LXD?

> LXD is a next generation system container manager. It offers a user experience similar to virtual machines but using Linux containers instead. – https://linuxcontainers.org/lxd/introduction/

LXD is a container platform that allows you to use Linux containers in a way that resembles a virtual machine. In my humble opinion LXD is the best of both worlds: it has all the advantages of containers, from infrastructure-as-code design to increased density per node when compared to a VM, while also providing a more familiar mechanism to manage each instance, since you can easily treat each container as if it were a virtual machine.

You can read more about the feature set in LXD here: https://linuxcontainers.org/lxd/introduction/#features

Install LXD

Installing LXD on Ubuntu is simple.

snap install lxd

Just run that command and follow the prompts. You can get a bit more detail from the linuxcontainers website.

LXD profiles

Instead of jumping into creating a new LXD profile I like to copy the default profile into a new one.

You can list the current profiles like this:

lxc profile list

If you have just installed LXD or have never messed with the profiles that command should output something like this:

+---------+---------+
|  NAME   | USED BY |
+---------+---------+
| default | 0       |
+---------+---------+

After the initial installation LXD comes with a default profile that any new container will inherit. You can see what is in that profile like this:

lxc profile show default

config: {}
description: Default LXD profile
devices:
  eth0:
    name: eth0
    network: lxdbr0
    type: nic
  root:
    path: /
    pool: default
    type: disk
name: default
used_by: []

Copy the default profile

I like to keep my profiles with cloud-init in my home directory so that I can version control them with git.

mkdir -p ~/lxd-profiles
cd ~/lxd-profiles

From here let's copy the default profile and cat it to a file.

lxc profile copy default dev-test
lxc profile list
+----------+---------+
|   NAME   | USED BY |
+----------+---------+
| default  | 0       |
+----------+---------+
| dev-test | 0       |
+----------+---------+

If you've been following along you should now have a new profile called dev-test that was copied from the default profile.

Now let's save that profile to a file so we can start making some changes to it.

lxc profile show dev-test >> dev-test.profile

Now take a look at our new dev-test.profile file

cat dev-test.profile

config: {}
description: Default LXD profile
devices:
  eth0:
    name: eth0
    network: lxdbr0
    type: nic
  root:
    path: /
    pool: default
    type: disk
name: dev-test
used_by: []

If you compare this to your default profile they should be the same at this point. We are going to use the default profile as a template to insert our cloud-init data and customize the default container images automatically after they are built.

Using your favorite text editor open the file dev-test.profile and add the lines highlighted below:

{{< highlight bash "hllines=2-15" >}}
config:
  user.user-data: |
    #cloud-config
    package_update: true
    package_upgrade: true
    package_reboot_if_required: true
    packages:
      - git
      - build-essential
      - awscli
      - python3-pip
    runcmd:
      - [ pip3, install, ansible ]
      - [ pip3, install, pyvmomi ]
description: development profile
devices:
  eth0:
    name: eth0
    network: lxdbr0
    type: nic
  root:
    path: /
    pool: default
    type: disk
name: dev-test
used_by: []
{{< /highlight >}}

I want to point out a couple things here. 1) The line that starts with #cloud-config is required – cloud-init won't work without it. 2) Spacing is important in YAML syntax, so watch your spaces. Generally each sub-section is indented with 2 spaces.

Here is what our changes are doing:

  1. user.user-data: | —> This tells LXD to pipe the next set of instructions to cloud-init as user-data.
  2. #cloud-config —> Cloud-init requires this line, just do it.
  3. package_update, package_upgrade, and package_reboot_if_required —> Do a full update and reboot if necessary.
  4. packages —> Install the following packages.
    1. Note that each package in the list is indented with 2 spaces and starts with a dash (-).
  5. runcmd —> Tells cloud-init to run a command.
    1. Note that each argument to the command is separated by a comma (,).
    2. Read more about it here: runcmd

Push changes into the profile

Now that we have a simple profile with cloud-init we need to load this new profile into LXD.

cat dev-test.profile | lxc profile edit dev-test

Verify the changes took effect.

lxc profile show dev-test

Launch a LXC container using the new profile

lxc launch -p dev-test ubuntu:lts test1

If this is the first time you've run an LXD container then it will take a minute for the image to be downloaded and started; otherwise the launch command should execute within just a few seconds.

Next you will want to exec into the container and check to see if our cloud-init worked.

lxc exec test1 bash

Once inside the container execute the following command:

cloud-init status

You should see one of three outputs: running, done, or error.

If it says running, you'll need to wait a bit longer for cloud-init to complete its work. This profile is pretty simple so it should complete fairly quickly.

If you want to wait for the configuration to finish, you can execute:

cloud-init status --wait

Once the status changes from “running” to “done”, let's verify we got some of our packages:

ansible --version
ansible 2.10.2
  config file = None
  configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/local/lib/python3.8/dist-packages/ansible
  executable location = /usr/local/bin/ansible
  python version = 3.8.5 (default, Jul 28 2020, 12:59:40) [GCC 9.3.0]

If everything went well you should now have a clean working environment to test Ansible playbooks, write some Python scripts, or build stuff in AWS. This template should be enough to get you started customizing your LXD containers in any way you want.

A note on cloud-init errors

Sometimes cloud-init can be a little finicky. Occasionally you might check cloud-init status and see “done”, but when you check for your packages or other configuration you will find that it's all missing. This is sometimes a symptom of your cloud-init having syntax issues; i.e. incorrect spacing, no cloud-config declaration, or a missing dependency when running a command.

Check the logs

The log files for cloud-init are kept on the container at /var/log/cloud-init-output.log.

If you find yourself with a problem, either cloud-init reporting fail, or reporting done but you are not seeing the expected results, make sure to check the output log first.
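You don't even have to attach to the container to read it. Sticking with the test1 container from the earlier example, something like this works from the host:

    lxc exec test1 -- tail -n 30 /var/log/cloud-init-output.log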

Look for log entries that look something like this:

2020-10-22 21:08:00,969 - util.py[WARNING]: Failed loading yaml blob. Invalid format at line 10
column 5: "while parsing a flow sequence
 in "<unicode string>", line 10, column 5:

Cloud-init has surprisingly good error messages. An error message like that tells you which line and which column you need to look at... well, probably close to that line anyway. If you are using the runcmd declaration make sure you are separating each command argument with a comma “,”.

After you make any needed corrections to the dev-test.profile file, make sure to reload it into LXD the same way we did earlier.

Enough to get started

This is just a sample build file that demonstrates a small subset of the declarations that cloud-init will understand and execute for you. Check the cloud-init docs for some great examples of what you can do.

I use cloud-init to create containers for modifying hugo themes, like the one I'm using now (hugo theme introduction). I also keep a profile to build containers that act as web servers to test the full site. But that is only scratching the surface.

You could also use cloud-init to build complete systems in a matter of minutes, with everything from a standard set of users and groups to complex file system structures and applications installed with their configuration pre-loaded.

There is a lot of power in combining cloud-init with LXD containers. I hope this article helped give you a better picture of how they can be combined to customize system containers.


Trust but verify

This post is about the audit daemon (auditd) that is available for most Linux systems.

Recently I’ve been looking at alternative ways to monitor sudo users on the servers I manage. Generally speaking it’s a good practice to keep an audit trail on managed systems. From a purely security perspective the more auditing you have on a system the easier any incident response should become when you need it. Your I.T. Security groups will need an easily searchable record of who ran which commands and with what privileges when trying to unravel how an exploit was used, or who used it, or both.

Outside of a security perspective you still want these controls in place to make sure that you can retrace any steps taken during changes while troubleshooting a problem. It’s all too common a scenario where a change goes wrong and somehow nobody knows what was changed. A robust audit trail can make hunting down which actions were taken much easier when figuring out what went wrong, and can go a long way towards finding a solution.

The Sudo Log

Historically we’ve always used the sudo log as a means to keep a record of all actions performed by users who need to escalate privileges to root while performing their job. One of the issues with the sudo log is that it does not keep a record of commands that are run when a user switches directly to root after logging in, at least not on a RHEL 7 system.

If a user logs in and immediately switches to root using sudo -i, sudo su - root, or sudo su -, then you won't have any log of their activities while logged in. This is an undesirable situation.

In search of a better solution

That is, a solution that doesn’t involve just telling people not to switch directly to root. With all that in mind I decided to play around with some auditd rules, and after a little trial and error (and some internet searching) I came upon a really good answer on StackExchange.

I'll let you take a look at the answer for yourself if you so choose. The meat and potatoes of that post is to add this audit rule:

sudo auditctl -a always,exit -F arch=b64 -F euid=0 -S execve

For the most part it's a really good and simple solution. There are really just two problems with it for me. The first is that while this command does do most of what I'm looking for, the end result is only temporary. It will not survive a reboot, or even a restart of the auditd service and I need something permanent. The second problem with this answer (for my purposes) is that it logs everything done as root. I don’t necessarily want to log everything that is run by system daemons as a privilege escalation. Maybe you do, maybe I do and don’t know it yet. Right now what I want is a good way to specifically tag root actions performed by actual human users.

As an aside, it also generates a lot of noise, which could make your logs unnecessarily large. Like this:

----
type=PROCTITLE msg=audit(09/05/2020 12:36:50.142:1572) : proctitle=sed -n s/device for \(.*\): usb:.*=\(TAG.*\)$/\1 \2/p
type=PATH msg=audit(09/05/2020 12:36:50.142:1572) : item=1 name=/lib64/ld-linux-x86-64.so.2 inode=2491677 dev=fd:00 mode=file,755 ouid=root ogid=root rdev=00:00 obj=system_u:object_r:ld_so_t:s0 objtype=NORMAL cap_fp=none cap_fi=none cap_fe=0 cap_fver=0
type=PATH msg=audit(09/05/2020 12:36:50.142:1572) : item=0 name=/usr/bin/sed inode=2491999 dev=fd:00 mode=file,755 ouid=root ogid=root rdev=00:00 obj=system_u:object_r:bin_t:s0 objtype=NORMAL cap_fp=none cap_fi=none cap_fe=0 cap_fver=0
type=CWD msg=audit(09/05/2020 12:36:50.142:1572) :  cwd=/
type=EXECVE msg=audit(09/05/2020 12:36:50.142:1572) : argc=3 a0=sed a1=-n a2=s/device for \(.*\): usb:.*=\(TAG.*\)$/\1 \2/p
type=SYSCALL msg=audit(09/05/2020 12:36:50.142:1572) : arch=x86_64 syscall=execve success=yes exit=0 a0=0x19072f0 a1=0x1906ed0 a2=0x1905e10 a3=0x7ffc9cd52420 items=2 ppid=4471 pid=4473 auid=unset uid=root gid=root euid=root suid=root fsuid=root egid=root sgid=root fsgid=root tty=(none) ses=unset comm=sed exe=/usr/bin/sed subj=system_u:system_r:unconfined_service_t:s0 key=root_cmd
-----

Cutting through the noise

Since I was looking for a way to audit commands run as root by real users, I needed to filter out the system noise. By default most Linux distributions reserve the first 999 UIDs for system accounts – for reference see: Linux sysadmin basics: User account management with UIDs and GIDs.

After experimenting a little bit I’ve decided to modify that rule to filter down to just what I was looking for, and make the rule permanent.

Here's how:

Add the following lines to the file /etc/audit/rules.d/audit.rules.

-a always,exit -F arch=b64 -S execve -F euid=0 -F auid>=1000 -F auid!=-1 -F key=sudo_log
-a always,exit -F arch=b32 -S execve -F euid=0 -F auid>=1000 -F auid!=-1 -F key=sudo_log


Then restart the auditd service.

service auditd restart

One odd thing to note with the auditd service is that you cannot restart it with systemctl. Instead you have to use the older service command to force a restart of the audit daemon. Why does it work that way? – I have no idea, but I noticed that Red Hat recommends using the service command to start/stop the auditd service here: Chapter 10. Auditing the system. If you know the answer to why that is the case and want to call me out as a noob, please do so on Twitter.

Another odd thing to note here is that on an up-to-date CentOS 7 or Fedora 32 machine that command string will not work if you run it with auditctl from the command line. For one reason or another auditctl doesn’t seem to like the >= operator when it’s passed in from standard input.
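Since the rules live in /etc/audit/rules.d/, you can also load them without a full service restart using augenrules – roughly:

    sudo augenrules --load   # compile /etc/audit/rules.d/*.rules and load them
    sudo auditctl -l         # confirm the rules are active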

So what do these rules do:

  1. -a → Pretty simple that means append. You are appending everything after the -a to the audit rules.
  2. always,exit → This means an audit record is always written when the rule matches, and the rule is evaluated when the syscall exits.
  3. -F → The F option is a way to separate each field. The pattern is: name, operation, value. For instance the first field in our rule is name(arch), operation(=), and value(b64)
  4. -S → The S option defines the syscall we are looking for in this case execve is the syscall we want since it is the syscall for executing a program.
    1. We want to create a log entry every time any program is run (constrained by the next set of fields).
    2. You can get a complete list of the available syscalls with ausyscall --dump
  5. The next fields euid=0, auid>=1000, and auid!=-1 allow us to constrain our rule based on the following criteria.
    1. The effective user has uid 0 (root)
    2. The original user id has a uid value of 1000 or greater. I chose uid 1000 here because that is the default starting point for non-system users on most Linux systems. You may or may not want to change this value.
    3. The originating user does not have a uid that is “unset”. Without this value you will end up with a ton of noise in your logs similar to what I shared above. I used auid!=-1, but you can also use auid!=4294967295 if your version of auditd doesn’t like the negative value, both do the same thing I just think -1 is more readable.
  6. Lastly we define a key with key=sudo_log. You could call your key anything you want, it is used to search for hits in the audit log.

The second rule is the same except that it watches for 32-bit executables.

As always you should check the man pages for any command (man auditctl in this case) you are not familiar with before just copying and pasting something from the internet. You don’t know me, I might be a bad guy, or incompetent, or both. I’d like to think I'm not a bad guy, or incompetent but read the man pages anyway, and while you’re at it check out these documents for extra credit.

Testing the audit rules

After you have those rules in place and have restarted the auditd service, either with a reboot or with the service command, it’s time to test the new rules.

Use the auditctl command to view your rules.

sudo auditctl -l

Next, run any command you want with sudo or after switching to root (sudo -i, or sudo su - root, or sudo su - if you’re really old school).

I’m going to run yum updateinfo list

Displaying the audit log

Then search your audit log with the ausearch command filtering on the key we created:

sudo ausearch -k sudo_log -i


The first couple entries will be related to your ausearch command (remember every command run as root is logged). But scrolling up you should see something that looks like the following output:

----
type=PROCTITLE msg=audit(09/05/2020 14:12:45.350:551) : proctitle=/usr/bin/python /bin/yum updateinfo list
type=PATH msg=audit(09/05/2020 14:12:45.350:551) : item=2 name=/lib64/ld-linux-x86-64.so.2 inode=2491677 dev=fd:00 mode=file,755 ouid=root ogid=root rdev=00:00 obj=system_u:object_r:ld_so_t:s0 nametype=NORMAL cap_fp=none cap_fi=none cap_fe=0 cap_fver=0 cap_frootid=0
type=PATH msg=audit(09/05/2020 14:12:45.350:551) : item=1 name=/usr/bin/python inode=2494593 dev=fd:00 mode=file,755 ouid=root ogid=root rdev=00:00 obj=system_u:object_r:bin_t:s0 nametype=NORMAL cap_fp=none cap_fi=none cap_fe=0 cap_fver=0 cap_frootid=0
type=PATH msg=audit(09/05/2020 14:12:45.350:551) : item=0 name=/bin/yum inode=2502974 dev=fd:00 mode=file,755 ouid=root ogid=root rdev=00:00 obj=system_u:object_r:rpm_exec_t:s0 nametype=NORMAL cap_fp=none cap_fi=none cap_fe=0 cap_fver=0 cap_frootid=0
type=CWD msg=audit(09/05/2020 14:12:45.350:551) : cwd=/home/luke
type=EXECVE msg=audit(09/05/2020 14:12:45.350:551) : argc=4 a0=/usr/bin/python a1=/bin/yum a2=updateinfo a3=list
type=SYSCALL msg=audit(09/05/2020 14:12:45.350:551) : arch=x86_64 syscall=execve success=yes exit=0 a0=0x56536d272268 a1=0x56536d283ef8 a2=0x56536d2a81a0 a3=0x0 items=3 ppid=3423 pid=3425 auid=luke uid=root gid=root euid=root suid=root fsuid=root egid=root sgid=root fsgid=root tty=pts0 ses=1 comm=yum exe=/usr/bin/python2.7 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 key=sudo_log
----


A couple things to notice in the log output:

  1. Each block is separated by a row of dashes. That shows you which log entries are related.
  2. The top line shows which command was run. In my case /usr/bin/python /bin/yum updateinfo list (it shows the full path of the execution – I only typed yum updateinfo list).
  3. On the last line, if you scroll to the right (I know, I hate those scroll boxes too, I just don't know how to make the lines wrap... sorry) you will see auid=luke. That shows you who ran the command, and if you scroll a little more to the right you will see euid=root (the effective user when the command was run).
  4. Each line contains a timestamp, which is useful for tracing when a command was run – in the event that you are trying to build a timeline.

A note about my environment

My servers do not allow direct login as root. If yours do, then you will need to account for that and will probably need to remove the filter based on the original uid. Also I think you should reconsider the decision to allow root to login – at least via ssh.

Reading the log directly

You can skip the ausearch command if you want and query the audit log directly at /var/log/audit/audit.log using whatever method you like. But the output will not be as friendly as it appears here: UIDs are numeric, the timestamps are shown in seconds since epoch, and you don’t get the nice dashes between entries. But it’s all there in plain text if you want it.

For further reading on audit logs see this Red Hat doc: