

authors: ["Luke Rawlins"]
description: "Interview with Red Hat, System 76, and Symantec. Does Linux need Anti-Virus?"
date: 2020-03-27
draft: false
title: Does Linux Need Antivirus?
url: /does-linux-need-antivirus/
tags:
  - Linux Anti-virus
  - System Hardening


Back in May of 2017, in the wake of the WannaCry ransomware episode, I published an article outlining the major security advantages that Linux has over other operating systems.

I stand by each argument I presented back then, but recently I started to ask myself if anything has changed over the last few years that would call for revisiting this topic.

A lot of the information that gets passed around the Linux community is really good; however, the information surrounding this topic specifically is not always of the highest quality, and it can be difficult to separate fact from fiction.

Since I stand by my previous article, and realizing that I'm really just some guy on the internet, I thought it would be best to reach out to a few experts and see what they say regarding antivirus software on Linux. I thought it was important that the information I passed on came from trusted and well-known vendors in the operating system space as well as from the antivirus makers themselves, so I will keep my own commentary to a minimum and let the experts speak for themselves.

My Methodology

Apart from running a highly scientific Twitter poll, I sent requests for comment to several major Linux distributions, including Red Hat, Canonical (Ubuntu), SUSE, and System76, the makers of Pop!_OS. My goal was to find out what the people behind each of these popular operating systems thought about the state of Linux security, and whether or not they saw a need for Linux users to regularly scan their systems for malware.

I want to thank the teams at Red Hat and System76, who were both very kind and provided invaluable information to make this article possible. At the time of writing, I have not received responses from Canonical or SUSE.

On the other side of that coin, I also reached out to a few enterprise information security vendors who have products in their lineup that are meant to run on the Linux platform. Specifically, I sent requests to Symantec's Enterprise Division and Kaspersky Labs.

The team at the Symantec Enterprise Division provided some great information that I'm glad to pass on. I had not received any response from Kaspersky at the time of writing.

I asked the following questions to each Linux distribution:

  1. Without endorsing any specific product do you generally recommend that users of your Linux distribution run any type of software for virus detection?
  2. It's a widely held belief in the Linux community that Linux is largely safe from viruses and other forms of malware due to built-in security systems such as SELinux, AppArmor, and curated package repositories. Is this a sentiment that you would agree with?
  3. Under what circumstances would you say that a user or administrator of a Linux system should consider incorporating virus scanning?

I asked the Antivirus software vendors Symantec and Kaspersky just questions 2 and 3.

Red Hat

The press office at Red Hat forwarded my questions on to Mark Thacker, principal product manager, Security Experience, at Red Hat. Here are his responses to my questions.

Without endorsing any specific product, does Red Hat generally recommend that users of Red Hat Enterprise Linux run any type of software for virus detection?

“Red Hat does not actively recommend any specific anti-virus scanning software. As documented in our support article on anti-virus software, Red Hat recommends that customers follow our security hardening best practices, keep systems up to date with the latest security patches, and avoid running applications or sessions as ‘root’ or privileged users.”

“We also understand that some clients have internal processes that mandate the use of 3rd party scanning tools as they have heterogeneous environments that may demand this. It is worth noting that several third party anti-virus scanning solutions are certified for use on Red Hat products and serve an important role in keeping non-Linux clients safe from a virus that might be hosted on Red Hat Enterprise Linux file and mail servers. Additionally, the popular Clam-AV open source virus scanner is used in these situations on Linux systems.”

It's a widely held belief in the Linux community that Linux is largely safe from viruses and other forms of malware due to built-in security systems such as SELinux, AppArmor, and curated package repositories. Is this a sentiment that you would agree with? Anything to add?

“The use of mandatory access control systems, such as SELinux, really does help to prevent exploitation and privilege escalations, which are often an attack vector used in information theft or virus applications. In fact, SELinux controls in Red Hat Enterprise Linux and Red Hat’s OpenShift product have prevented every container-based breach issue so far. Additionally, enterprise-class Linux systems provide built-in hardening during the compilation process of the OS itself through stack-smashing prevention, address space layout randomization (ASLR), position independent execution (PIE), object size checking and more techniques.”

Under what circumstances would you say that a user or administrator of a Linux system should consider incorporating virus scanning?

“System administrators responsible for hosting code, files or data that is consumed by other platforms more subject to virus attack would be wise to consider running an anti-virus scanning product.”

“In all situations, it’s best to follow security-hardening best practices as documented by your enterprise Linux vendor, minimize the amount of privileged user access to your system and always validate that the code you are running is coming from authenticated, trusted sources.”

System76

The technical team over at System76 took some time out of their schedule to provide these answers for my readers.

Without endorsing any specific product, does System76 generally recommend that users of Pop!_OS run any type of software for virus detection?

“No, we would not recommend that users of Pop!_OS run any type of software for virus detection. We're not aware of any antivirus that targets the Linux desktop. The purpose of ClamAV is to detect signatures on file shares to protect Windows systems accessing them.”

It's a widely held belief in the Linux community that Linux is largely safe from viruses and other forms of malware due to built-in security systems such as SELinux, AppArmor, and curated package repositories. Is this a sentiment that you would agree with? Anything to add?

“Yes, we would agree with this sentiment. We would also add that known exploits in critical open source software projects are usually fixed quickly enough that it wouldn't be worth the effort to create antivirus software to watch these exploits.”

Under what circumstances would you say that a user or administrator of a Linux system should consider incorporating virus scanning?

“If you're hosting a file share accessible to Windows PCs, you might want ClamAV to protect Windows systems. The use of ClamAV is also suggested if you are serving files for Mac and Windows users or if you are filtering email. We would also add that the thing to be concerned about isn't so much the viruses, but social engineering. Don't execute a script if you don't know what it does.”

Symantec

In doing my research, I reached out to the Symantec Enterprise Division (SED) for their take on this topic.

It's a widely held belief in the Linux community that Linux is largely safe from viruses and other forms of malware due to built-in security systems such as SELinux, AppArmor, and curated package repositories. Is this a sentiment that you would agree with? Anything to add?

“Linux certainly sees significantly less malware than Windows systems. But risks still exist. The explosion of IoT devices running Linux has been followed by an explosion of attacks against these devices and put a lot more Linux worms out into the wild. Ransomware attackers have begun to target Linux systems to get into organizations (see PureLocker) and cybercriminals have focused on attacking cryptocurrency wallets to make fast money.”

Author's Note: PureLocker ransomware was something I had not heard of prior to speaking with Symantec. Apparently the virus was written in the PureBasic programming language, which allows it to be cross-platform. I'm not sure what the delivery mechanism for this virus would be, but these types of attacks often come in via email. The way a Linux system works would still require user interaction to execute the virus, but it does show that Linux systems are becoming a larger target for these types of attacks.

Under what circumstances would you say that a user or administrator of a Linux system should consider incorporating virus scanning?

“We are all for hardening systems and keeping software up to date. These things can seriously reduce your risk. So can security software. So if you care about keeping yourself secure why not run both.”

Conclusion

Personally, I've always been in the camp that argues for antivirus on Linux only when the system is sharing files with other, more vulnerable computers. It would appear that both Red Hat and System76 agree with that assessment.
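If you do fall into that category, here is a minimal sketch of what that can look like with the open source ClamAV scanner mentioned by both Red Hat and System76. The package name and share path below are just examples for a Debian/Ubuntu style system; adjust both for your own setup.

# Install ClamAV and refresh its signature database (Debian/Ubuntu package name)
sudo apt install clamav
sudo freshclam

# Recursively scan a directory you share with Windows or macOS clients,
# printing only the files that match a known signature
clamscan -r -i /srv/samba/share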

However, I do wonder what impact IoT devices will have on the Linux landscape. Someday we could find ourselves in a world where everyone's refrigerator comes down with some kind of bug that locks the freezer doors till you fork over a couple of bitcoin.


authors: ["Luke Rawlins"]
date: 2022-09-07
title: Create a Bash function to open random man pages
url: random-man-page
description: Create a bash function to open random man pages.
tags:
  - Linux
  - Bash


If your idea of fun is to read man pages, then today is your lucky day!

Here is a “cool” bash function that allows you to read random man pages whenever you want.

Get ready to learn all the things!

Well, all the things there are man pages for... and in random order, with no method behind the madness...

Just add the following to the .bashrc file in your user's home directory.

# Open a random man page
function rmp() {
  # List every man page, grab just the names, shuffle, and pick one
  local random_man
  random_man=$(man -k . | awk '{print $1}' | sort -R | tail -1)
  man "${random_man}"
}

Once you've added that to your .bashrc, you can either type bash in your terminal to open a new session.

Or, you can type source ~/.bashrc to have your current session reload the .bashrc file.

Then just type: rmp to open a random man page and read to your heart's content.


If you found this useful please support the blog.

Fastmail

I use Fastmail to host my email for the blog. If you follow the link from this page you'll get a 10% discount and I'll get a little bit of a break on my costs as well. It's a win-win.


Backblaze

Backblaze is a cloud backup solution for Mac and Windows desktops. I use it on my home computers, and if you sign up using the link on this page you get a free month of service through Backblaze, and so do I. If you're looking for a good backup solution give them a try!

Thanks!

Luke


authors: ["Luke Rawlins"]
date: 2016-11-04
draft: false
title: Bruh, do you even live patch?
description: "Patching is arguably the single most important thing you can do to keep your systems secure."
url: /ubuntu-live-patching/
tags:
  - livepatch
  - Ubuntu
  - Linux


Patching is arguably the single most important thing you can do to keep your systems secure.

It's also tedious, boring work that ends with everyone's least favorite activity... rebooting some indispensable, far too important for downtime server. That often means patching takes a back seat to convenience, but no more!

Starting with Ubuntu 16.04, and continuing through the latest LTS release, Ubuntu 18.04, you can now update the kernel on a live system without a reboot.

*Note: I use vi as my text editor. If you aren't comfortable with that, replace all instances of vi or vim with nano.

Unattended Upgrades

Strictly speaking, unattended upgrades are not necessary for live kernel patching, since that is handled by a snap package. But it's not a bad idea to let the rest of your packages update periodically as well.

sudo apt update
sudo apt install unattended-upgrades

The configuration file for unattended-upgrades can be found at /etc/apt/apt.conf.d/50unattended-upgrades. By default it is configured to upgrade packages marked for security updates. You can keep that configuration or change the file as below to allow the updates channel as well.

sudo vim /etc/apt/apt.conf.d/50unattended-upgrades

// Automatically upgrade packages from these (origin:archive) pairs
Unattended-Upgrade::Allowed-Origins {
        "${distro_id}:${distro_codename}";
        "${distro_id}:${distro_codename}-security";
        "${distro_id}:${distro_codename}-updates";
//      "${distro_id}:${distro_codename}-proposed";
//      "${distro_id}:${distro_codename}-backports";
};

You can also “blacklist” packages that, for one reason or another, you do not want upgraded. The “//” marks a comment in this file, so if you never wanted to upgrade vim, simply delete the double slashes. Add any package you want to the list and it will be ignored when the system begins updating.

// List of packages to not update (regexp are supported)
Unattended-Upgrade::Package-Blacklist {
//      "vim";
//      "libc6";
//      "libc6-dev";
//      "libc6-i686";
};

Toward the bottom of this file you will notice some blasphemous talk of automatic reboots; you don't need that kind of negativity in your life... we are working towards live patching. Leave it turned off.

Now we need to update the apt configuration so that it knows when to run updates.

sudo cat /etc/apt/apt.conf.d/10periodic

This will display a file that looks like this:

APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Download-Upgradeable-Packages "0";
APT::Periodic::AutocleanInterval "0";

The number at the end of each line represents how often, in days, apt will check for, download, and clean up updates. We are going to change a few things and add a line to install updates.

sudo vim /etc/apt/apt.conf.d/10periodic

APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Download-Upgradeable-Packages "1";
APT::Periodic::AutocleanInterval "7";
APT::Periodic::Unattended-Upgrade "1";

This configuration will check for, download, and install updates at a randomized time every day. It will clean up downloaded packages once every 7 days. For more details see /etc/cron.daily/apt-compat.
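If you want to double check what unattended-upgrades would actually do before trusting it, it ships with a dry run mode. This is just a quick sanity check, not part of the setup itself:

# Simulate an unattended upgrade run without installing anything
sudo unattended-upgrade --dry-run --debug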

Live Patching


Now for the fun part: installing the Livepatch service. Canonical, the company behind Ubuntu, will allow anyone to enable live patching for free on up to 3 desktops or servers. Beyond that you will need a paid support contract.

Go to the registration portal to register for your Livepatch token. https://auth.livepatch.canonical.com/

Install the Livepatch service

sudo snap install canonical-livepatch
sudo canonical-livepatch enable [your_token_here_without_brackets]

That's it!
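If you want to confirm that kernel patches are actually being applied, the snap also includes a status command. Something like the following should report the running kernel and the patch state (exact output varies by release):

canonical-livepatch status --verbose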

Take a look at https://ubuntu.com/livepatch for more information.


authors: ["Luke Rawlins"]
date: 2022-04-09
draft: false
title: How to install vimwiki in neovim
url: /install-vimwiki-neovim
tags:
  - vim
  - Linux


As I mentioned a couple blog posts ago, I like to keep a personal knowledge base and I've been trying to move away from commercial cloud products like Evernote.

Not because there is anything wrong with Evernote or other note taking apps, but because I think plain text is a superior format due to its overall flexibility.

The best tool I've found so far to help me stay organized with my personal writing is vimwiki. It's a vim plugin that will allow you to keep a personal wiki in vim. Or in my case neovim because that is what all the cool kids are using these days.

Step 1 – Install neovim

I'm old fashioned, so I just use my package manager. Though I should note there is an AppImage version available for those of you who are too fancy pantsy for package managers 🤠.

Ubuntu based

sudo apt install neovim

Fedora

sudo dnf install neovim

Neovim has install instructions for several operating systems on their github.

Install vim-plug

vim-plug is a package manager for vim (or in this case neovim) plugins.

Installation instructions for vim-plug with neovim can be found on their github page.

The meat and potatoes of those instructions is the following (assuming you are on Linux):

sh -c 'curl -fLo "${XDG_DATA_HOME:-$HOME/.local/share}"/nvim/site/autoload/plug.vim --create-dirs \
       https://raw.githubusercontent.com/junegunn/vim-plug/master/plug.vim'

This will download vim-plug (the plug.vim file) from GitHub into your neovim autoload directory, so that neovim can manage plugins like vimwiki.

Create your neovim configuration file

If you haven't been using neovim you'll need to create a configuration file.

mkdir -p ~/.config/nvim/
nvim ~/.config/nvim/init.vim

Then add at least the following lines:

set nocompatible
filetype plugin on
syntax on

" Plugins managed by vim-plug
call plug#begin()
Plug 'vimwiki/vimwiki'
call plug#end()

" Point vimwiki at your wiki directory and use markdown files
let g:vimwiki_list = [{'path': '/path/to/my/wiki/',
                      \ 'syntax': 'markdown', 'ext': '.md'}]

The final line in this file assumes you want to write in markdown... because who isn't writing in markdown these days? But it isn't necessary; you can end the list before the 'syntax' parameter and use the default wiki format.

  • Make special note of the call plug#begin() and call plug#end() statements. You need to have those, along with the Plug 'vimwiki/vimwiki' line, in order to start using the vim-plug manager as well as using it to install vimwiki.

Install vimwiki

Open neovim: nvim

And run the following command:

:PlugInstall

vim-plug will open a split and you'll be able to see the progress of the vimwiki installation. Once it's complete, congratulations! You've installed vimwiki and can start creating your own plain text knowledge base with neovim.
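As a quick start, assuming vimwiki's default key mappings, you can open your wiki index straight from normal mode and let vimwiki create the first page for you:

" Press <Leader>ww (backslash w w with the default leader key), or run:
:VimwikiIndex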

Read the docs and start your wiki

Read the introduction documents for vimwiki https://github.com/vimwiki/vimwiki#introduction and start setting up your notes.




authors: ["Luke Rawlins"]
date: 2018-03-02
draft: false
title: LXD/LXC
url: /lxd-lxc/
tags:
  - Containers
  - LXC
  - LXD


I've been spending quite a bit of time learning about LXD/LXC containers on Ubuntu. There is a lot of really good information available about how to get started with these containers, so I'm not going to try to reproduce that content here; however, I will provide links at the bottom that I think are relevant to learn more about LXD and LXC.

Below I outline what I like about LXC; these reasons are also the driving factors behind my decision to use LXC for web hosting as opposed to other container technologies. Though I should note that LXC and Docker are not mutually exclusive. If you are comfortable using Docker you may want to consider using both of these technologies.

LXC containers are unprivileged.

An unprivileged container is a container that is not running as root on the host machine. The root account in the container is mapped to a random non-root uid on the host. According to Canonical, “Unprivileged containers are safe by design. The container uid 0 is mapped to an unprivileged user outside of the container and only has extra rights on resources that it owns itself.” This prevents access to host files that are owned by root and isolates the container in a way that isn't possible with a privileged container daemon. You can allow a user to run LXD/LXC containers without handing over access to a root account on the host.

For more information take a look at this page: https://linuxcontainers.org/lxc/security/

LXD lets me use skills I already have.

“LXD is a next generation system container manager. It offers a user experience similar to virtual machines but using Linux containers instead.” Since LXD treats LXC container guests as if they were virtual machines, the only new things I had to learn were how to launch them and how their networking is managed. After that, it is just like configuring any other Linux virtual machine. LXD also offers base host images for several popular Linux distributions including Fedora, CentOS, OpenSUSE, and of course Ubuntu. Learning to use LXD to manage containers is easy, and Canonical provides a great tool to help get started. They even offer a web based application that lets you try LXD, which you can find here: https://linuxcontainers.org/lxd/try-it/

LXC containers are fast.

LXC containers spin up fast, snapshot fast, and can be redeployed much faster than a traditional server. Once you have built a container and configured it to your satisfaction, you can easily launch other identical containers from a snapshot, either on your local host or on a remote host in a public or private cloud.
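As a rough sketch of that workflow (the container names, snapshot name, and image alias here are just examples):

# Launch a container, snapshot it once configured, then stamp out a copy
lxc launch ubuntu:16.04 web01
lxc snapshot web01 base-config
lxc copy web01/base-config web02
lxc start web02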

Live Migration

You can live migrate a container to another host machine. This makes it possible to move your containers around for zero downtime operations so that you can perform maintenance tasks like patching or application updates without interrupting users.

Persistent Storage

Storage is persistent by default on Ubuntu 16.04. With LXD you can have a storage backend using zfs, btrfs, or lvm, or for a development environment you can use simple file-based storage (though that is much slower). For more information about storage check this page: https://lxd.readthedocs.io/en/latest/storage/

If you have any interest in using LXD, I highly recommend that you read the full blog series that was written by the LXC and LXD project leader, Stéphane Graber. I have linked to it below.

https://stgraber.org/2016/03/11/lxd-2-0-blog-post-series-012/

https://help.ubuntu.com/lts/serverguide/lxd.html

https://lxd.readthedocs.io/en/latest/

https://linuxcontainers.org/lxd/getting-started-cli/#

https://linuxcontainers.org/lxd/try-it/


authors: ["Luke Rawlins"]
date: 2022-06-21
title: Systemd Automount
url: systemd-automount
tags:
  - Linux


Update 2022-06-29

While the procedure outlined below works, setting up systemd automounts is actually far easier than I realized and can be done with just the /etc/fstab file.

You can just put the following into /etc/fstab:

<ip or hostname>:/path/to/nfs/export /mnt/test nfs x-systemd.automount,x-systemd.idle-timeout=10min 0 0

Then reload systemd and start the automount unit:

systemctl daemon-reload
systemctl start mnt-test.automount

And that's it. You don't need to create the systemd unit file because systemd does it for you in the background based on what's in the fstab file.


Original Post

Systemd has the ability to manage file systems in the same way it can manage services on a Linux system.

A Systems Administrator can replace the autofs service with the systemd automounter.

Why should I replace autofs?

Having a consistent interface to manage services and mount points on a system is an advantage all on its own.

Beyond the consistency with the interface, using the systemd automounter means you get to eliminate a running service (autofs).

  • One less moving part.
    • systemd manages the autofs service, which in turn manages automount points.
    • why not cut out the middle man and just use systemd?
  • One less package to manage.
    • One less potential package to keep you up over the weekend patching for a vulnerability.
  • The autofs configuration files are cryptic and not very user friendly.
    • systemd unit files are well understood and the automounter hooks into fstab which is a universally known location for identifying mount points.

The real question is: Why are you still installing autofs?


How to set up systemd automount with NFS

For this example you'll need an NFS file system available to you, though you could also use an lvm based file system for testing. See my linux lvm guide for more information on setting up lvm if you aren't sure how.

Step 1 – Create a unit file

Using your favorite text editor create a file at /etc/systemd/system/mnt-test.automount. If vim isn't your favorite text editor, then you should feel bad about yourself... run vimtutor in your terminal someday when you have 45 minutes to burn and learn all about what you've been missing.

Your mnt-test.automount file should contain the following.

[Unit]
Description=Mount nfs export

[Install]
WantedBy=multi-user.target

[Automount]
# The device, file system type, and mount options come from the matching
# /etc/fstab entry added in Step 2; the automount unit only needs to know
# where to mount and how to behave.
Where=/mnt/test
DirectoryMode=0755
TimeoutIdleSec=60

About this file.

This is a pretty bare bones unit file. If you've used systemd to create service files this should look pretty familiar.

Description: The description can be anything you want it to be. I would include the name of the volume you are mounting and maybe the source server or application it's for.

[Automount]: Systemd uses this section to determine where to mount the file system, how to create the mount point, and how long an idle mount should sit before being unmounted.

Where: Where systemd should mount the volume. Seems pretty self explanatory.

DirectoryMode: The permissions systemd uses when it creates the mount point and any missing parent directories. If this works correctly you shouldn't have to create /mnt/test, systemd will do that for you.

TimeoutIdleSec: How many seconds of inactivity on the mounted file system systemd should wait before unmounting it.

What, Type, and Options: What you are mounting (for NFS, something like 192.168.1.38:/path/to/nfs), the file system type (xfs, nfs, ext4... whatever it is), and any mount options. These don't go in the automount unit itself; they come from the /etc/fstab entry you'll add in Step 2.

Step 2 – Update the fstab file

Once again using your favorite text editor, open /etc/fstab and add your nfs export to the file specifying x-systemd.automount where the mount options would normally go, as shown below.

<ip or hostname>:/path/to/nfs/export /mnt/test nfs x-systemd.automount 0 0

Step 3 – Refresh services

Reload the systemd daemon, then start and enable the mnt-test.automount unit.

sudo systemctl daemon-reload
sudo systemctl start mnt-test.automount
sudo systemctl enable mnt-test.automount

Otherwise you can reboot the system if you want.
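Either way, a quick sanity check (not part of the original steps) is to confirm the automount unit is armed and then trigger it by touching the path:

systemctl status mnt-test.automount
ls /mnt/test        # first access mounts the export on demand
findmnt /mnt/test   # confirm it is now mounted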

Things to Note:

From the systemd.automount man pages:

Automount units must be named after the automount directories they control. Example: the automount point /home/lennart must be configured in a unit file home-lennart.automount.

Make sure you name the unit file to match this syntax. If you wanted to mount a volume to /backups/laptop then you would name your unit file backups-laptop.automount. Otherwise you'll get errors and the mount unit will refuse to start.




author: "Luke Rawlins"
date: "2024-01-07"
title: "How to load a clicker for dog training."
description: "How to load a clicker for dog training. To mark good behaviors."
url: /load-a-clicker/
tags:
  - Dog Training


Positive training only

I'm not a certified anything trainer but I know a couple things.

If you treat an animal well, it can develop into a confident and loving companion.

If you treat an animal poorly it'll be anxious, fearful, and possibly violent. And you'll be an asshole.

If you've heard of the Alpha or Dominance theories in dogs, I would encourage you to re-evaluate what you've learned with all the latest research on positive reinforcement for training.

Dominance theory in Dogs (and Wolves) has been debunked for many years. Your dog is not trying to be the Alpha.

Debunking the “Alpha Dog” Theory

You Are Not the Boss of Your Dog

What is clicker training?

History of Clicker Training I

Using a clicker (a device that makes a “click” sound) is a highly effective way to communicate desirable behaviors with your dog.

Or any other mammal from what I've read, including cats.

The basic idea is this.

1. Dog does something you like.
2. You signal to the dog that they are going to be rewarded (i.e. the click).
3. Dog gets reward.
4. Dog will associate the behavior with the marker (click) and replicate the behavior.

The click is generally thought of as a marker. You are marking an action that will result in a reward, thus reinforcing the value of that action. Which should incentivize the dog to do that action again.

A click could be replaced with a word or any other noise, or with a visual cue if the dog is deaf, but I'll talk about why I like to use a click or a whistle a little bit later.

I don't have the expertise to advise on training deaf dogs so I'll leave that alone.

That sounds all well and good but how does the dog know that a click is going to mean a reward?

You're right dogs aren't born speaking a clicker language. They don't start out knowing that hearing a click noise means a reward is coming. There's nothing magic going on here.

Teaching a dog to understand “click = treat” is the fun part, because it involves letting the dog use its own judgement, and it doesn't really take that long for them to figure it out.

The process is called “loading the clicker”. It's probably the easiest thing you'll do all day.

How to load a clicker

I'm going to pretend that you have a clicker (use a mechanical pen if you don't have one).

1. Go get some delicious treats. For now don't get kibble or even training treats; cut up a hotdog into small pieces, or use bacon bits or leftover steak, something the dog will recognize as high value.
2. Go find your dog, if they didn't already find you while you were cutting up those awesome treats.
3. Hold the clicker behind your back, put treats in your other hand, and click.
4. Immediately after the click give them a bit of the treat.
5. Repeat this for a while. It doesn't matter what the dog is doing at this point; don't try to make them sit or lay down or perform any sort of behavior. Just click, treat, repeat over and over.
6. Watch your dog for some bit of understanding. After a probably short time (a couple minutes usually) click and wait a second or two. If your dog looks for the treat then you know you've built an association. If not, keep clicking and treating, checking every so often to see if they look like they are expecting something when they hear the click.
7. Now you have loaded the clicker and are ready to begin training.

Now that you've loaded the clicker you have a way to mark a behavior that you like and want to see more of.

One more quick rule that can never, under any circumstance be violated.

Once the dog has associated the click with a reward, you MUST reward the dog every time you click, even if it was an accident, even if 2 milliseconds after you clicked the dog did something it shouldn't do. Every click gets a treat.

Further, you should click far more often than you think you should. Most of what we will be doing is shaping behaviors, but that's beyond this post. For now don't worry about giving too much; the more rewards you give out, the more your dog will want to pay attention to you, and that is probably 90% of the problem when it comes to training most dogs.

Why a click or whistle and not a word?

I'm only speaking for myself here, and there are plenty of real dog trainers out there who might disagree. And truth be told, I do often use the word “Good!” in a high pitched and happy voice to mark behaviors; I loaded that word in the same way that I loaded the click.

Here's why I think it's important to use a clicker.

1. It's faster than you can speak. At least at first the clicks need to be precise. You'll get it wrong and it'll be frustrating, but you get better with time. A quick squeeze of the clicker will always be faster than using your voice. It's probably why they use a clicker in Jeopardy.
2. It doesn't convey emotion. Dogs respond to emotion; they watch us more often than we realize and they know when we are happy, sad, angry, etc. The clicker and the whistle sound the same no matter how we feel, and that makes it easier for the dog to understand the marker.
3. This is most important for recall. You won't get a distracted dog to come to you with a clicker; remember, the clicker marks a behavior, it's not an attention getter. But a whistle can be both, and the beauty of a whistle is that it doesn't sound panicked when the dog runs out into the street. It doesn't sound angry when the dog is digging a hole. It sounds the same every time. Using a little bit of Pavlov's classical conditioning (which is what loading the clicker is) you can load a whistle in the same way to brainwash your dog to return no matter what. Whistle means I'm about to get steak!

Counterpoint, you should ALSO load a keyword

I say you should “also” and not exclusively do this because sometimes you won't have your clicker. It'll get lost or broken, or just won't be handy sometimes.

  • If you are going to use a word, make it a short one. In English a quick loud and higher pitched “Good!” is a great way to start. Just remember when you use it you have to reward the dog so your keyword doesn't lose its power.

Have fun

Keep your training sessions short and fun. If your dog is having an off day cut it short and put the stuff away, try again later.

Think of it as building a friendship with your dog. Dog is mans best friend after all, so try to be a good friend!


authors: ["Luke Rawlins"]
date: 2022-07-24
title: System 76 Lemur Pro
url: system-76-lemur-pro
description: Reviewing the System 76 Lemur Pro
tags:
  - Opinion


I hate product reviews and I normally do not do them

Almost every product review you see or read online is coming from a YouTube personality or blogger who was able to get the thing they are reviewing for free. That, in my eyes, basically makes everything they have to say about a product completely useless.

I have little to no confidence that a person can actually give a fair minded review of something they got for free. Especially when I consider the fact that a major source of the reviewer's income likely depends on the company they are reviewing sending them another free review model of their next big product.

I bought a System 76 Lemur Pro 6(ish) months ago

System 76 didn't pay me to write this review, they didn't send me anything for free, they probably don't even know who I am.

I found myself with a little bit of spare money and a need for a new laptop at the end of January of this year, and decided to mix things up a bit. Instead of purchasing a new Macbook, I decided I wanted to check out what System 76 had to offer, and quickly decided that, of their lineup, the Lemur Pro was likely the best laptop for me.

There were a couple major reasons for my decision.

  1. The new Macbook Pro chips.
    • I know it's supposed to be super fast and the reviews have been amazing (see the first couple paragraphs) but I don't like buying first generation things, I like to wait till the early adopters get burned and the product improves.
  2. I work in the Linux server space, but I started my path down the Linux rabbit hole by using the Linux desktop back when I was a struggling Auto Mechanic and couldn't afford to buy new stuff. So I thought why not try out a Linux desktop, built by a Linux company, on purposefully selected hardware?
  3. The Lemur Pro (Lemp10) was an all Intel machine, which to me signalled good support across the Linux ecosystem, without any need for special drivers.
    • It also came with CoreBoot, which I don't know anything about but figured if I'm going all in, might as well go all in and get the open source firmware as well.
  4. It was within my laptop budget.

So what do I think?

You should keep in mind that I've been a Mac user for many years, probably for at least the last 10 years, so that is what I'm comparing the laptop to. Whether or not you think it's fair to compare System 76 with Apple doesn't matter to me. The pricing is close enough that I think it's fair.

After configuring my laptop with some additional storage and memory and including shipping the grand total of my purchase was around $1600 American dollars. By way of comparison, a similarly spec'd Macbook Pro probably would've cost me about $200 dollars more, which I was aware of and okay with at the time of my purchase.

Build Quality

The build quality is... fine, it's not bad but it's not a contender against the aluminum unibody that is the Macbook. It doesn't feel plasticy or cheap but it definitely doesn't have the same confidence inspiring feel of the Macbook.

The little rubber boot thingies on the bottom started peeling off about a month ago, and I'm sure they won't hang on much longer. I might call support just to see if it's any good, or I might just live with it... sort of depends on how much it ends up bothering me.

Keyboard

The keyboard is eleventy million billion times better than the crappy keyboard that Apple shipped on my 2016 Macbook Pro that this laptop was replacing. Not even in the same ballpark. The Lemur Pro keyboard is better than those horrible “butterfly” keyboards.

Trackpad

I don't even need to say anything about this. I'm sure they did their best, and again the end result is ok, but the Apple trackpad is far and away the best trackpad on the market. No contest.

Speakers

They suck.

Web Cam

I've never once used it.

The Screen

It's a full HD screen and it's bright enough. Again compared to the full retina display on a Macbook it's just ok.

Pop!_OS

Pop!_OS has been great.

I think Apple could learn a few things about window management from the GNOME team. The keyboard shortcuts on the Cosmic Desktop are great: “super + b” opens Firefox, “super + e” opens Thunderbird. Lots of great ways to navigate the system using just keyboard shortcuts. Great job.

Closing the laptop lid

My biggest worry about buying this laptop was that I'd close the lid and then it would never wake back up. That I'd be constantly forcing it to reboot after opening the lid. Thankfully, those worries were without merit. The laptop seems to wake up from a sleep state fairly quickly, and without any problems. The only thing I have noticed is that the battery drains more than I'd like when the lid is closed. If I close the lid on my Macbook with 80% battery and open it the next day it'll have 75% battery when I open it again. This laptop not so much, if you close the lid at 80% and pick it back up the next morning there is a good chance the battery will be closer to 10%.

I've shut the lid on a Macbook, walked away for several days and come back to find the battery hadn't been impacted much. You will not get that experience from this laptop.

I wish the launcher was better.

MacOS Spotlight search is one of its killer features in my mind. I can use it to do queries like “kind:pdf 2021 Taxes” to find PDFs, or “kind:event Mom's Birthday” to find calendar events, etc. I'd love to see something like that integrated into Pop!_OS or GNOME in general.

Would I buy another one?

I don't know – probably not unless the price was significantly lowered, or the overall quality was higher. Currently if I were going to buy a not Apple computer I would probably buy a Lenovo Thinkpad, the Dell XPS Developer Edition, or the HP Dev One, each of which come with Linux (or can be configured with it).

All things considered, I would probably install Pop!_OS on any of those laptops and be happy with it. I've been pretty happy with the OS overall and have even found some things I like more than MacOS, though I do miss the cmd button for copy/paste and new tabs. That cmd button makes things more consistent across the whole platform. I can copy out of my terminal with “cmd + c” and drop it into a note with “cmd + v”, or from a browser window into a terminal. But as we know, “ctrl + c” doesn't copy from a terminal, it kills running processes, and I don't know what “ctrl + v” does, but it doesn't paste. Instead you need to add a shift in there when you're using a terminal emulator, and little things like that do annoy me.

In closing

I'm not very hard on my laptops. I can't remember the last time I dropped one or spilled something on one, my kids are all old enough that I no longer have to worry about them pulling the keys off or smearing chocolate syrup on the screen, and because of that I expect this laptop to last me for the next couple of years. However, even with special care I would be surprised if it lasted much more than 2 or 3 years at the most. Maybe I'll do a follow up in 2024 to let you know if it's still humming along.


author: "Luke Rawlins"
date: "2023-09-10"
title: Adding a Creative Commons License
description: "Why I think it's important to license under creative commons."
url: /thinking-about-licenses/
tags:
  - Open Source
  - Creative Commons


After attending a talk by Scott McCarty over the weekend at the Ohio Linux Fest I was inspired to go through my blog and make sure that I added a Creative Commons License to all of my work.

I think it's important to let people know what they can and can't do with your work, while at the same time helping to protect a public body of knowledge and ensuring that it remains free (as in freedom).

I've decided to license all of my work under the Creative Commons Attribution-ShareAlike agreement, which in my completely non-lawyerish brain means you can do pretty much anything you want with my blog posts as long as you attribute them and, most important to me, you release that work under the same license. That allows the people you share it with to continue to modify and share it forever. I can't lock it behind DRM, and neither can anyone else.

I purposely chose a copyleft license for a few reasons:

1. On the off chance that I ever write something of value, I want people to be able to share it. Both those who re-publish it, as well as those who consume it.
2. From what I read on the interwebs, a good portion of the data used to train Large Language Models and other forms of AI is scraped from blogs like this one. If this is the case, then I like to think that one day the AI companies will find themselves in a copyright lawsuit, and something I wrote on this blog could bring a small victory in the battle to make AI open source.
3. It matches up with my most valued ethical principle: Don't be an asshole, except to people who deserve it.

One of the points that became a common theme over the entire conference was presented by Joe Brockmeier in his talk entitled “Open Source can't win”. The idea being, the Free and Open Source community have started to become complacent when it comes to defending Free and Open Source Software.

We've become comfortable with the idea that Open Source has already won, when in reality it appears to be under attack like never before.

Everyone is of course aware of Red Hat's recent decisions surrounding CentOS. Whatever your opinion on that is, what I think is even more damaging to FOSS is Hashicorp's recent license change, which in my humble opinion shows a real disdain for the entire concept of Free and Open Source Software. You can read about it here: opentf.org.

What Hashicorp has done should terrify anyone who has ever contributed code to an “Open Source” project, especially if you had to sign a CLA – Contributor License Agreement. Because now code contributed in good faith for the betterment of the project by members of the community could suddenly become proprietary with the contributors having little or likely no recourse.

It's with all this in the back, or front, of my mind that I've gone the route of adding the Creative Commons License to my work here. And if you host a blog I would encourage you to do the same.

Please feel free to use my posts, change them, modify them, or whatever. But if you share them, you pass those same rights on to whoever receives them.


author: "Luke Rawlins"
date: "2023-08-14"
title: The problem of too much choice
description: "Linux needs a good journaling app"
url: /journal-apps
tags:
  - Linux
  - Journal


For the last 3 years or so I've been journaling almost daily in a Mac and iOS app called Bear.

A few months ago, when I started trying to commit to using my Linux laptop more often than the Mac I started looking around for a good cross platform journaling application.

Finding a notetaking application is no problem at all. Finding one with a privacy policy I could live with and the same ease of use as Bear was another thing entirely. So far I haven't been able to find anything that isn't web based or Electron that I can really get into.

The problem isn't that there aren't enough choices; the problem is that there are too many choices and I can't pick one. As a result, I've neglected my journal for the last 3 or 4 months, to the point that I don't make time for it like I used to.

I'm starting to have choice paralysis. I've spent a few months stringing my journal entries between plain text, Standard Notes, LibreOffice, and a regular old notebook. Not only will I likely never be able to really keep track of those entries, now I feel defeated every time I sit down to write, since I know deep down I probably won't want to use that same app again tomorrow.

What's the point of the post? Basically, just to let you all know that I'm indecisive and it's causing a mild amount of anxiety in my life that I could easily solve by just continuing to use the app I've been happy with for years... but that would mean not using my laptop for journaling and that annoys me a little bit as well.

Damned if you do damned if you don't...