

---
author: "Luke Rawlins"
date: "2024-01-15"
title: Testing Plausible Analytics for the next month.
url: /plausible-test/
tags:
  - Analytics
---


Analytics and other evil things I'm up to

I’ve never used analytics on this blog. But lately I’ve been curious to see how many page views I get.

With that in mind, I started looking around at what options were available to me.

Cloudflare

I host this site on Cloudflare Pages, which is a free service as long as you have something like fewer than 100 domains, which I will never exceed.

Cloudflare offers some basic analytics for free but I’ve had my doubts about how accurate it is.

At one time it was showing that I get 500 visitors a day, and I’d honestly be surprised to find out that I’ve had 500 visitors in 3 years 🤣.

Privacy is at the top of my list

While looking into analytics I kept privacy at the top of my list, so Google was always out of the picture. I wanted to make sure that if I implemented analytics, it was done in a way that respects the privacy of my readers.

But also affordable

After that I needed to find something that was affordable.

If I can't do analytics with both privacy and affordability then I just won't do it.

Plausible

So with that in mind I’m giving Plausible a try since they have a 30 day free trial, and after that I’ll probably be just fine with the $9 a month option for the foreseeable future.

Right now I don’t spend much on this blog, so if it turns out to be a worthwhile service and I decide to keep it I’ll update my privacy page to include it. If you’re the curious sort you can read about how they differ from Google Analytics: Privacy focused Google Analytics alternative

One cool feature that I found with Plausible is that I can publicly share my blog’s stats.

At the time of writing I’ve had 5 unique visitors, and 2 of them are me: one from each of my laptops to check that it was working.

Take a look at my dashboard from the link below if you’re curious.

Hey look the site stats are public!

Plausible dashboard for sudoedit.com


Send me an email if you want to share any thoughts about this post.


---
author: "Luke Rawlins"
date: "2023-06-04"
title: What I've learned (so far) training my puppy
description: "Training a puppy is hard work, but absolutely worth the time and effort."
url: /training-walter/
tags:
  - Dogs
---


Back in August of last year my family decided it was time to bring a puppy into our home. We lost our first dog “Jake” back in April of 2022 at the age of 12. Jake was a great dog, and the day we lost him was one of the hardest days for my family that I can remember.

He was perfectly healthy one day; the next day he started getting sick and couldn't keep his food down. The emergency vet said there wasn't anything wrong with him and sent us home with some anti-nausea medicine.

The next morning he died in my car on the way to see his usual veterinarian. Everyone was heartbroken; that dog was an incredible companion and a wonderful member of our family, and I wish we would've had more years with him.

Getting a puppy

Towards the end of the summer my wife saw a Facebook post about a local shelter that had just rescued a bunch of dogs from all over the country that had been born in puppy mills that were shut down for one reason or another. Probably because puppy mills are disgusting places run by disgusting people… anyway.

The dog lover in me decided to sign up for their adoption event. When we got there they had the most handsome little Hound Mix I've ever seen so I had to get him.

Walter Puppy

(Little Walter!)

Walter is almost a year old now, around 80(ish) pounds…

Walter Puppy

(Big Puppy Walter!)

For the record I should note that my wife wanted a small dog – technically, there are bigger dogs out there so I guess all things being relative, Walter could be considered “small”.

His name is Walter because he vaguely resembles everyone’s Grandpa and he usually looks grumpy and tired. So he has to have someone’s Grandpa’s name.

Puppies are assholes

The first thing you need to know about puppies, is that they are assholes.

They need to be under constant surveillance, otherwise they will eat all your shit, then shit out all your shit – in your living room, while looking you in the eye, then come beg you for a treat.

Puppies are like that friend who is always gaslighting you, and then acting offended if you call them out on their bullshit.

As far as puppies go, Walter is probably less of an asshole than most, at least most of the time.

Training puppies can be a lot of fun

Even with the above-mentioned assholery, if you’ve got the time and patience to dedicate to it, training a puppy is actually a hell of a lot of fun.

In less than a year, Walter knows how to:

1. Slowly, and methodically, come when called most of the time – usually stopping once to make eye contact and let you know he’s only doing it because he wants to.
2. Sit, Lay, Stand, and walk at a heel – if you have hotdogs and cheese, and no one more interesting is around.
3. Play dead – again assuming you happen to have some cheese.
4. Jump up and lick your face when you come in the house! As long as you are in one of the following categories:
   1. A brand new person he’s never met.
   2. Really old and frail.
   3. Really young and terrified of dogs.

But really, he actually is a good dog. Once he’s out of the adolescent phase I have no doubt we’ll be able to shape these behaviors into obedience championship activities…

Positive Reinforcement

Use positive reinforcement training. If you talk to a “trainer” who talks about being the leader of the pack, or dominance then you are probably talking to an asshole who doesn’t know anything.

Getting a dog to love and respect you is easy.

1. Don’t be a dick.
2. Have what they want.
   1. Hotdogs.
   2. Cheese.
   3. Peanut Butter.
   4. Hugs, and pats.
   5. The ability to say something like “Good dog” in any language you speak – even though they don’t speak that language.
3. Be quick to reward good behavior.

That’s literally it. Follow those general rules and your dog will worship the ground you walk on.

Yes, it will be frustrating and you’ll want to give up a lot. You’ll have weeks where the puppy is learning everything faster than you would believe, and then you’ll have weeks where the puppy acts like it’s never heard the word sit.

A totally scientific and not at all anecdotal case study on the power of positive reinforcement.

One day last year I was walking Walter around my neighborhood. We assumed a leisurely pace stopping to smell the grass, the flowers, the rabbit crap, you know the finer things in life.

Along the way we passed a house that has some overgrown bushes in the front yard that butt up against the sidewalk. Walter (being a dog) sniffs around a bit, and discovers a discarded chicken sandwich.

If you asked him, he would probably tell you this was the best day of his life.

That was at least 6 months ago, maybe more. To this day, when we pass that bush Walter darts underneath it and sniffs around like a cocaine addict. The person who dropped that chicken sandwich trained my dog to check the bushes, without even trying. No alpha rolls or shushing, or e-collars required.

Get a clicker and learn how to use one.

Seriously start with a mechanical pen to mark the behaviors you want to see, and give rewards more often than you think you should. They don’t have to be big treats. You can cut a single hotdog up into at least 60 or 70 treats.

I’ll do a short blog, hopefully soon, on how to “load” a clicker.

Until then – don't be a dick. Unless it's to a human who deserves it... Like those people who think cve-2020-15778 is a big deal that needs to be fixed.


---
authors: ["Luke Rawlins"]
date: 2020-03-31
description: "A python script that finds the second Tuesday of every month. Useful for finding patch Tuesday."
draft: false
title: When is Patch Tuesday?
url: /when-is-patch-tuesday/
tags:
  - python
---


Microsoft releases updates on a predictable cadence. The second Tuesday of every month is called “Patch Tuesday”.

There aren't any Linux distributions (that I'm aware of) that have a similar release cadence. Package updates are released pretty much as soon as they are ready; theoretically, you could check for updates every day and always have something new to install.

However, if you have decided for one reason or another to just standardize your Linux patching around Microsoft's patch release dates, then you will need a good way to figure out when the next Patch Tuesday will occur.

Python Script to get the second Tuesday of the month

I spent a lot of time searching the web for ways to list out all the Patch Tuesdays for a given year, and I didn't find anything that worked well for me so I wrote this little python script to help with scheduling.

    #!/usr/bin/env python3
    import calendar
    import datetime
    import argparse

    # get the current year
    now = datetime.datetime.now().year

    parser = argparse.ArgumentParser(description='Get patch tuesday for a given year. Current year by default.')
    parser.add_argument('year', nargs='?', help='The year, example 2020', default=now, type=int)
    args = parser.parse_args()

    for month in range(1, 13):
        cal = calendar.monthcalendar(args.year, month)
        # Second Tuesday will be in the second or third week of the month
        week2 = cal[1]
        week3 = cal[2]

        # The second Tuesday always falls between the 8th and the 14th.
        # If week 2's Tuesday is in that range use it, otherwise it's in week 3.
        if 8 <= week2[calendar.TUESDAY] <= 14:
            patchday = week2[calendar.TUESDAY]
        else:
            patchday = week3[calendar.TUESDAY]
        print(calendar.month_name[month], patchday, args.year)

Example output:

When you run the script with no arguments it prints all the second Tuesdays for the current year:

    ./patch_tues.py
    January 14 2020
    February 11 2020
    March 10 2020
    April 14 2020
    May 12 2020
    June 9 2020
    July 14 2020
    August 11 2020
    September 8 2020
    October 13 2020
    November 10 2020
    December 8 2020

When you specify a year it will print the second Tuesday of each month for the specified year:

    ./patch_tues.py 2023
    January 10 2023
    February 14 2023
    March 14 2023
    April 11 2023
    May 9 2023
    June 13 2023
    July 11 2023
    August 8 2023
    September 12 2023
    October 10 2023
    November 14 2023
    December 12 2023

Note to the Pythonistas out there.

I know that this code isn't very “pythonic”. But it works. I'd love to hear any suggestions you might have on how to make it better. If you want to make any suggestions or improvements check out the git repo: https://github.com/thegreatluke/getpatchtuesday/blob/master/patch_tues.py

If you found this helpful, I'd love to hear about it. Thanks for reading.

A Bash variation

Robert Mesibov, at Datafix has also written a good Bash script to help you track down patch Tuesdays. Check it out here: https://www.datafix.com.au/BASHing/2020-03-25.html
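
For comparison, here is a minimal Bash sketch of the same idea (my own, assuming GNU date); it prints the second Tuesday of each month for the year given as the first argument, defaulting to the current year:

    #!/usr/bin/env bash
    # Print the second Tuesday of every month for a given year (GNU date assumed).
    year="${1:-$(date +%Y)}"
    for month in $(seq 1 12); do
        for day in $(seq 8 14); do                      # the second Tuesday always falls on day 8-14
            printf -v candidate "%04d-%02d-%02d" "$year" "$month" "$day"
            if [ "$(date -d "$candidate" +%u)" -eq 2 ]; then   # %u: Monday=1 ... Tuesday=2
                date -d "$candidate" "+%B %-d %Y"
                break
            fi
        done
    done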


---
authors: ["Luke Rawlins"]
date: 2016-12-03 16:43:10+00:00
draft: false
title: Find services that require a restart
description: "How to query a Linux server to find out which services require a restart."
url: /find-services-that-require-a-restart/
tags:
  - CentOS
  - Linux
  - Ubuntu
---


Ubuntu offers a live patching utility that allows kernel patches to be installed without requiring a system restart. Read more about online patching in this post about patching. That said, in many cases other services or processes on your system may need to be restarted after an upgrade.

Finding services that need to be restarted in Ubuntu

Install debian-goodies

sudo apt update
sudo apt install debian-goodies

Now just run

sudo checkrestart

This command will output a list of processes and services that need to be restarted.

Update – 12/26/2016

I just discovered that there is another Debian/Ubuntu program that will not only check for services that need a restart but also restart them for you.

sudo apt install needrestart

Running this program without options will attempt to restart all services that have been updated.

You can also run this program interactively if you want to see which services need to be restarted and choose only the ones that you want to restart.

sudo needrestart -r i
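
If you just want a report without restarting anything, needrestart also has a list-only mode (check the man page on your release, but I believe the flag is):

    # List-only mode: report affected services but do not restart anything
    sudo needrestart -r l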

Finding services that need to be restarted in CentOS 7 and RHEL 7

In CentOS 7 and RHEL 7 the utility you need to check this should be installed by default. If not, you can install the "yum-utils" package.

sudo yum -y install yum-utils

The command you will want to use is "needs-restarting", and it has to be run as root (or with sudo).

sudo needs-restarting

This will again provide a list of services that need to be restarted.
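
On newer releases of yum-utils (RHEL/CentOS 7.3 and later, if memory serves) needs-restarting can also narrow the output to systemd services and tell you whether a full reboot is required:

    # List only the affected systemd services (newer yum-utils)
    sudo needs-restarting -s

    # Exit status is non-zero when a full reboot is needed
    sudo needs-restarting -r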

OpenSUSE

When I originally posted this I neglected to mention OpenSUSE. SUSE-based distributions use a package manager called zypper, which as a side note is by far my favorite package manager. Anyway, zypper natively has the ability to find services and processes that need to be restarted.

sudo zypper ps

Actually, since SUSE Enterprise Linux is one of my favorite flavors of Linux (I use it at least as much as Ubuntu, if not more), I might do a blog early next week focused on package management with zypper.

Restart services manually

In all of the above cases, if you are using a modern version of the distribution, you can restart services using systemctl:

sudo systemctl restart <service_name>

If you are using an older version then you will need to run the service command.

sudo service <service_name> restart
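
For example, to restart Apache after a library update (apache2 is just an assumed example; substitute whichever service the tools above reported):

    sudo systemctl restart apache2      # modern, systemd-based systems
    sudo service apache2 restart        # older SysV-style releases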


---
authors: ["Luke Rawlins"]
date: 2016-12-03
draft: false
title: Free SSL Certificate with Let's Encrypt
description: "If you have ever installed an SSL certificate you know that it can be a tedious process. Let's Encrypt makes this easy. Just call the letsencrypt command from the terminal and point it at your domain."
url: /letsencrypt/
tags:
  - Apache
  - letsencrypt
  - HTTPS
  - Linux
  - Ubuntu
---


NOTE: While this may still work the information is out of date.

Please see the instructions found here eff.org for more up-to-date instructions.

Free SSL Certificate with Let’s Encrypt

If you have ever installed an SSL certificate you know that it can be a tedious process. Let’s Encrypt makes this easy. Just call the letsencrypt command from the terminal and point it at your domain.

Let's Encrypt logo

Securing your website with a valid ssl certificate from a recognized and trusted vendor shows your website visitors that information transmitted between your site and their browser is encrypted. Now thanks to “Let’s Encrypt”, and the “Internet Security Research Group (ISRG)” obtaining a certificate has never been easier or more affordable… can’t get cheaper than free.

Prerequisites

  • Ubuntu 16.04 (Previous versions may work as well but I haven’t tested it. If you try it on 14.04 and it works let me know.)
  • Apache – with a virtual host configured. See this post if you’re not sure how to set up Apache 2 with virtual hosts on Ubuntu.
  • A domain name
  • Root access to the web server

Install Let’s Encrypt Automated Tools

The best thing about Let’s Encrypt is that it provides fully automated tools that make setting up your secure site as easy as possible. No need to learn about openssl commands or obtaining CA certificates.

Install python-letsencrypt-apache.

sudo apt update
sudo apt install python-letsencrypt-apache


This will ask to install quite a few python libraries and tools, accept the installation by pressing “y” when prompted.

Configure your site for SSL

Now just call the letsencrypt command from the terminal and point it at your domain. (Replace example.com with your own domain.)

sudo letsencrypt --apache -d example.com -d www.example.com


The -d option specifies which domain or domains that you want to request an ssl certificate for.

At this point, you will be prompted to select the domain from the list of sites that are configured in the /etc/apache2/sites-enabled/ directory. If you already set up a virtual host you should see your site listed. Often letsencrypt will automatically detect your site based on the “ServerName” field in the virtual host configuration file.

Select your domain, then choose https only or both http and https. I always choose https only since I don't have any real need for http, but you have the option to use both.

Let's Encrypt security options

Your SSL Certificate will be valid for 90 days, and can be easily renewed as follows.

sudo letsencrypt renew


You can run that now and its output should tell you that you have no certificates that need to be renewed. Let’s Encrypt recommends that you renew every 60 days.

Automate SSL Renewal with Cron

Automatically renewing your SSL certificate can be done with a cron job. We will create a file in cron.weekly so that Let’s Encrypt will check once a week for sites that have a renewal available.

sudo vi /etc/cron.weekly/le-autorenew


Add the following to this file:

#!/bin/bash
#
#renew letsencrypt certificate and create log
/usr/bin/letsencrypt renew >> /var/log/ssl-renew.log
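
One assumption worth stating: run-parts only executes files in /etc/cron.weekly that are marked executable (and, on Debian-based systems, whose names contain no dots), so remember to set the permission:

    sudo chmod +x /etc/cron.weekly/le-autorenew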


If you are interested in learning more about Let’s Encrypt or some of their sponsors (which includes big names like Mozilla, Facebook, Cisco, and many more) you can visit them here: https://letsencrypt.org


---
authors: ["Luke Rawlins"]
date: 2016-12-04
draft: false
title: Backup a MySQL Database
description: "The mysqldump command outputs a file that contains SQL statements that can be used to rebuild your database, with all of its data. Which could come in handy in the event of an unrecoverable crash or even to just move the database to a new server."
url: /backup-mysql-database/
tags:
  - Linux
  - MariaDB
  - MySQL
---


If you have a MySQL database working behind the scenes on your web site or app, then creating and storing backups of that database can be vitally important to your business operations. A MySQL or MariaDB database uses the mysqldump command to create backups.

The mysqldump command outputs a file that contains SQL statements that can be used to rebuild your database with all of its data, which could come in handy in the event of an unrecoverable crash, or even just to move the database to a new server.

Following the steps in this guide should work on any distribution of Linux that is using MySQL or MariaDB.

Contents

  1. Review the command
  2. Convert to script
  3. Run as nightly cron job
  4. Restore database

Quick note:

When you see me use angle brackets <> it means you should alter the command to fit your needs. For example, <user> means use your username; so cd /home/<user> should be cd /home/spidey for a user called spidey. Secondly, all of the below commands should be run as a non-root user account.

Step 1 – Review the command

By default the mysqldump command will not build a SQL query to create or drop existing databases. So we will want to add some options to it in order to get the results we want.

mysqldump -uroot -p testDB > myDB.sql

This command will output (to the myDB.sql file) all the SQL code required to rebuild all of the tables and data within the testDB database. Notice that there is no space between -u and root. The -u option in this command stands for user and -p is password; again, note that no spaces are needed between the option and the value. The one thing we do not get from this command is the ability to actually create the database. So using this command to restore the database tables will only work if the database already exists on the server that you are restoring to.

Here's what I would use instead:

mysqldump -uroot -p --add-drop-table --databases testDB > myDB.sql

Adding --add-drop-table --databases to the command tells mysqldump to build the dump with a CREATE statement in case the database doesn't already exist, a USE statement to select the named database, and DROP TABLE statements so that any tables already in that database are dropped before the new ones are created. This gives you a clean and full backup from the original database.

Step 2 – Use mysqldump in a backup script

Theoretically you could copy and paste the above command into a script to back up the database without alteration and it would work fine. A few problems, however, will need to be resolved in order to reduce the risk of data loss, and to prevent malicious persons from getting access to your database.

Create a defaults file

The first step to securing our backup script is to pull the username and password out of the command while it's running. With the above command inserted into a cron job anyone who runs ps aux during its execution will be able to see the root database password. We can avoid that by creating a file in our /home directory that contains the user/password details and pass that file into the command instead.

Change to your home directory

cd ~/

Create the “.my.cnf” file using your favorite text editor.

It should look something like this:

[mysqldump]
user=root
password=yourdbrootpasswordhere

Change permissions on .my.cnf to read/write only for yourself

chmod 600 .my.cnf

 NOTE: 06-01-18

If your password has a “#” in it you will want to put your entire password in quotes, otherwise the # will be read as a comment and the password will be truncated.

For example: password="Thishasa#init" If you had tried to use this password without quotes it would be read by mysql as “Thishasa” which would make authentication impossible.

Update the command

With the .my.cnf file in place, some distributions (I tested with SUSE Enterprise 12 and Ubuntu) will automatically check for it, and we can remove the -u and -p options from the command. The updated command will look like this:

mysqldump --add-drop-table --databases testDB > myDB.sql

Much shorter and far more secure. If your distribution doesn't detect this file by default you can add

--defaults-file=/home/<user>/.my.cnf to the command string like this:

mysqldump --defaults-file=/home/<user>/.my.cnf --add-drop-table --databases testDB > myDB.sql

Use "date" to prevent overwrites

We can't put this in a script yet because the file name will remain the same every time we run the command. We will need to build our script with the ability to change the name based on date so that we can restore to a particular point in time.

We will use a variable to write part of the file name

mkdir scripts
mkdir backups
cd scripts
vi DBbackup.sh

Copy the following into this file to create a simple backup script that we can run every day.

#!/bin/bash
today=$(date +"%m%d%Y")
mysqldump --add-drop-table --databases testDB > /home/user/backups/myDB_$today.sql
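
If you keep daily dumps around for a while, a slightly extended variant (my own sketch, not part of the original script; the paths and 30-day retention are assumptions) compresses each dump and prunes old copies:

    #!/bin/bash
    # Nightly MySQL backup: dated, compressed, with 30-day retention (sketch)
    today=$(date +"%m%d%Y")
    backup_dir="/home/user/backups"

    mysqldump --add-drop-table --databases testDB > "$backup_dir/myDB_$today.sql"
    gzip -f "$backup_dir/myDB_$today.sql"

    # Delete compressed dumps older than 30 days
    find "$backup_dir" -name "myDB_*.sql.gz" -mtime +30 -delete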

Next we need to add this to a cron job to run every night.

Step 3 – Run database backup as a cron job

As your user, run the crontab -e command to bring up your user's cron file. Add the following to the bottom of this file.

01 00 * * * /home/<user>/scripts/DBbackup.sh

This entry will run every day at 12:01 am and will produce a file in your backups directory that contains the state of the database at the time it ran.

Step 4 – Restore the database

In the event that you need to restore this database MySQL makes this a pretty simple process. One command should be enough to recreate the database and insert all the data back into it.

mysql < myDB_<date>.sql

How long this command takes to complete depends on how large your database is; however, once it is finished your database will be up and running in the same state that it was in when the backup was taken.


---
author: "Luke Rawlins"
date: "2021-10-01"
title: Run it later with atd
description: "Run Ansible playbooks using atd."
url: /schedule-ansible-at/
tags:
  - Ansible
---


Schedule Ansible playbook to run later with atd

Just a quick note today. I used Ansible as an example for this but you can use any command really.

If you need to run an Ansible playbook at a specific time, but don't need to schedule it on a recurring basis, you can use the at daemon to schedule your playbook to run "at" a specific time or date.

Use a heredoc (the <<EOF … EOF construct) to make at schedule jobs non-interactively:

For example, if you had a playbook you wanted to run tomorrow at 6 AM, you could set up a script like this and then check the play.result file later to make sure it ran OK.

The script

#!/bin/bash
## schedule ansible playbook to run tomorrow at 6AM
/usr/bin/at 0600 tomorrow <<EOF
ansible-playbook <playbook_name>.yml -i inventory_file >> play.result
EOF
## Schedule another playbook to run on Sunday at 11PM
/usr/bin/at 2300 sunday <<EOF
ansible-playbook <playbook_name>.yml -i inventory_file >> play2.result
EOF

Why not just schedule it interactively?

Well you can and usually that’s fine. I’m just demonstrating how to use at in a script.

Where this comes in handy is if you have several “one off” changes that need to be pushed over a weekend at different times and you keep those change times in an external source that you can draw from. Something as complex as Service Now or something as simple as a csv file that has the schedule built into it.

Having a script build out the schedule ensures the time and date will be correct and it allows you to create many jobs simultaneously.
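
As a rough illustration of that idea, here is a hedged sketch (the CSV layout, file name, and inventory path are all assumptions, not anything from an existing tool) that reads one job per line and queues it with at:

    #!/bin/bash
    # schedule.csv is assumed to hold rows like:  0600,tomorrow,patch_webservers.yml
    # (the time and date fields must be in a format the at command understands)
    while IFS=, read -r when day playbook; do
        /usr/bin/at "$when" "$day" <<EOF
    ansible-playbook "$playbook" -i inventory_file >> "${playbook%.yml}.result"
    EOF
    done < schedule.csv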


If you found this useful please support the blog.

Fastmail

I use Fastmail to host my email for the blog. If you follow the link from this page you'll get a 10% discount and I'll get a little bit of a break on my costs as well. It's a win-win.


Backblaze

Backblaze is a cloud backup solution for Mac and Windows desktops. I use it on my home computers, and if you sign up using the link on this page you get a free month of service through Backblaze, and so do I. If you're looking for a good backup solution give them a try!

Thanks!

Luke


---
authors: ["Luke Rawlins"]
date: 2016-06-22
draft: false
title: Filesystem and Directory size
url: /filesystem-and-directory-size/
tags:
  - Linux
---


Just a quick look at df and du. This comes up a lot when we have filesystems that are filling up and need to find out which directories or logs are using the space.

How to find the size of mounted filesystems

From the terminal enter the df command.

    luke@testserver:~$ df -h
    Filesystem      Size  Used Avail Use% Mounted on
    udev            492M   12K  492M   1% /dev
    tmpfs           100M  780K   99M   1% /run
    /dev/xvda1       15G  3.1G   11G  22% /
    none            4.0K     0  4.0K   0% /sys/fs/cgroup
    none            5.0M     0  5.0M   0% /run/lock
    none            497M     0  497M   0% /run/shm
    none            100M     0  100M   0% /run/user

According to its man page df “displays the amount of disk space available on the file system” adding the -h argument tells df to display in human readable format.

Adding a "-T" (note the capital T) will tell df to also display the filesystem type.

    luke@testserver:~$ df -hT
    Filesystem     Type      Size  Used Avail Use% Mounted on
    udev           devtmpfs  492M   12K  492M   1% /dev
    tmpfs          tmpfs     100M  780K   99M   1% /run
    /dev/xvda1     ext4       15G  3.1G   11G  22% /
    none           tmpfs     4.0K     0  4.0K   0% /sys/fs/cgroup
    none           tmpfs     5.0M     0  5.0M   0% /run/lock
    none           tmpfs     497M     0  497M   0% /run/shm
    none           tmpfs     100M     0  100M   0% /run/user

What if you are running out of space and you are not sure which directories or files are using up your hard disk?

The "du" command will give you the size of files and directories. Here are a few quick and useful examples of how to use du to determine file size.

Finding directory sizes

    luke@testserver:~$ du -h
    4.0K    ./.local/share/applications
    4.0K    ./.local/share/sounds
    16K     ./.local/share
    20K     ./.local
    8.0K    ./.gconf/apps/gnome-terminal/profiles/Default
    12K     ./.gconf/apps/gnome-terminal/profiles
    16K     ./.gconf/apps/gnome-terminal
    20K     ./.gconf/apps
    24K     ./.gconf
    8.0K    ./.ssh
    8.0K    ./.dbus/session-bus
    12K     ./.dbus
    24K     ./.vnc
    4.0K    ./.config/ibus/bus
    8.0K    ./.config/ibus
    32K     ./.config/pulse
    12K     ./.config/dconf
    56K     ./.config
    108K    ./.cache/fontconfig
    128K    ./.cache
    4.0K    ./blog/dir1
    4.0K    ./blog/dir2
    12K     ./blog
    316K    .

du -h shows the size of files and directories in a human readable format. Running du with no path argument makes it report on the current directory only, and by default it also shows the size of each subdirectory. What if we just wanted to see one level deep in the / directory?

Use the --max-depth argument to control subdirectory depth

    luke@testserver:~$ sudo du -h --max-depth=1 /
    25M     /boot
    56K     /root
    780K    /run
    62M     /lib
    12K     /tmp
    12M     /sbin
    9.6M    /bin
    4.0K    /lib64
    4.0K    /srv
    12K     /dev
    4.0K    /media
    0       /proc
    16K     /lost+found
    8.9M    /etc
    4.0K    /mnt
    368K    /home
    0       /sys
    1.7G    /usr
    128M    /opt
    1.2G    /var
    3.0G    /

Notice you might get some errors about /proc access. That is because files in /proc represent the live system and in some cases they will be available when du starts but gone by the time it finishes.

What if I want to sort by the largest directories first?

Pipe the output to sort

    luke@testserver:~$ sudo du -h --max-depth=1 / | sort -hr
    3.0G    /
    1.7G    /usr
    1.2G    /var
    128M    /opt
    62M     /lib
    25M     /boot
    12M     /sbin
    9.6M    /bin
    8.9M    /etc
    780K    /run
    368K    /home
    56K     /root
    16K     /lost+found
    12K     /tmp
    12K     /dev
    4.0K    /srv
    4.0K    /mnt
    4.0K    /media
    4.0K    /lib64
    0       /sys
    0       /proc
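
If the full listing is more than you need, a small variation (my own habit, not from the original examples) keeps just the ten largest entries; note that the first line is the grand total for / itself:

    sudo du -h --max-depth=1 / | sort -hr | head -n 10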


---
authors: ["Luke Rawlins"]
date: 2016-12-03
draft: false
title: Change the Default Text Editor in Ubuntu
description: "By default, Ubuntu opens these files in nano which I find bothersome. If like me you would rather use vim when making these changes here's how to change the default."
url: /change-editor-in-ubuntu/
tags:
  - Linux
  - Ubuntu
  - VIM
---


Change the Default Text Editor in Ubuntu

vim

So I’m a huge advocate of Ubuntu. It has long term support releases, more packages than you would ever need, free online unattended patching, and you always have an in-place upgrade path to the next LTS version. What more could you ask for? I’d like to ask that nano lose its privileged status as the default text editor!

When making changes to sudoers, passwd, or group files you should really be using the built-in tools visudo, vipw, and vigr. These tools will check your syntax prior to committing changes to the file that could break your system. By default, Ubuntu opens these files in nano which I find bothersome. If like me you would rather use vim when making these changes here’s how to change the default.
Change Ubuntu’s default editor with update-alternatives.

Option 1) change the editor interactively.

sudo update-alternatives --config editor

There are 4 choices for the alternative editor (providing /usr/bin/editor).

      Selection    Path                 Priority   Status
    ------------------------------------------------------------
    * 0            /bin/nano             40        auto mode
      1            /bin/ed              -100       manual mode
      2            /bin/nano             40        manual mode
      3            /usr/bin/vim.basic    30        manual mode
      4            /usr/bin/vim.tiny     10        manual mode

    Press <enter> to keep the current choice[*], or type selection number:

You can see that the option we want is 3 (vim.basic), which after selecting will change the default from nano to vim.

Option 2) change the editor with a single command string.

Alternatively, and probably better if you need to script it, you can just set the default with a single command string.

sudo update-alternatives --set editor /usr/bin/vim.basic

This command will output a confirmation that the editor has been changed to vim.

You can verify that your change has been made as follows:

sudo update-alternatives --query editor

This command will output a lot of information:

    Name: editor
    Link: /usr/bin/editor
    Slaves:
     editor.1.gz /usr/share/man/man1/editor.1.gz
     editor.fr.1.gz /usr/share/man/fr/man1/editor.1.gz
     editor.it.1.gz /usr/share/man/it/man1/editor.1.gz
     editor.ja.1.gz /usr/share/man/ja/man1/editor.1.gz
     editor.pl.1.gz /usr/share/man/pl/man1/editor.1.gz
     editor.ru.1.gz /usr/share/man/ru/man1/editor.1.gz
    Status: manual
    Best: /bin/nano
    Value: /usr/bin/vim.basic

    Alternative: /bin/ed
    Priority: -100
    Slaves:
     editor.1.gz /usr/share/man/man1/ed.1.gz

    Alternative: /bin/nano
    Priority: 40
    Slaves:
     editor.1.gz /usr/share/man/man1/nano.1.gz

    Alternative: /usr/bin/vim.basic
    Priority: 30
    Slaves:
     editor.1.gz /usr/share/man/man1/vim.1.gz
     editor.fr.1.gz /usr/share/man/fr/man1/vim.1.gz
     editor.it.1.gz /usr/share/man/it/man1/vim.1.gz
     editor.ja.1.gz /usr/share/man/ja/man1/vim.1.gz
     editor.pl.1.gz /usr/share/man/pl/man1/vim.1.gz
     editor.ru.1.gz /usr/share/man/ru/man1/vim.1.gz

    Alternative: /usr/bin/vim.tiny
    Priority: 10
    Slaves:
     editor.1.gz /usr/share/man/man1/vim.1.gz
     editor.fr.1.gz /usr/share/man/fr/man1/vim.1.gz
     editor.it.1.gz /usr/share/man/it/man1/vim.1.gz
     editor.ja.1.gz /usr/share/man/ja/man1/vim.1.gz
     editor.pl.1.gz /usr/share/man/pl/man1/vim.1.gz
     editor.ru.1.gz /usr/share/man/ru/man1/vim.1.gz

Notice specifically the lines labeled:

    Status: manual
    Best: /bin/nano
    Value: /usr/bin/vim.basic

This output confirms that we switched from automatic selection to manual selection of our editor, and that instead of the "Best" (says who!?) value we chose vim.


---
author: "Luke Rawlins"
date: "2021-08-30"
title: Cloud Storage and Sanity
description: "Cloud storage and privacy. An attempt to stay sane."
url: /cloud-storage-privacy-and-sanity/
tags:
  - Opinion
---


I have a lot of Apple devices in my household; my family and I have become accustomed to the ease of use and deep integration of iCloud in iOS and MacOS devices. Apple’s recent announcement to add their child safety technology to iOS 15 and MacOS Monterey has been met with a lot of concern, not all of it unfounded. The EFF has written a few fairly compelling pieces about the dangers of this technology.

The flurry of news and opinions circulating through my normal reading lists has really started to tickle some of the more paranoid neurons in my tiny brain. I have to admit I go through cycles of digital privacy paranoia every so often, and not just for my own personal data – but also to make sure I’m not contributing to the problem of digital surveillance by stripping this site down to just the bare essentials – see my privacy policy for details.

Personally, I am of the opinion that you can’t expect absolute privacy from cloud providers. They need to protect their services in order to stay in business, and I sympathize with some of the concerns they have. For instance, I host a few sites for family members and have considered opening up hosting services for others, but any hosting on my end would come with a caveat that if you host egregious shit – I’m going to shut off your site, lock you out, and turn you over to the police. What would count as egregious shit would be completely up to me. Why would I expect anything different from Apple, Google, or Microsoft?

As far as iCloud goes, I’m actually less concerned about the photo upload hash comparison than I am about the iMessage component, but I’ll let you make your own judgments.

This post is really about the ways I think about cloud storage when I’m not bogged down in anxiety over a foreign or domestic intelligence service aggregating my data. If that's a legitimate concern for you then please disregard anything I have to say here.


tin foil hat

This post is my attempt to avoid the tin foil hat lifestyle

I think the best way for me to start would be to illustrate why I think self hosting is not the best option for most of us.

Self hosting is not an option for most individuals.

The cost alone of self hosted options makes it unreasonable for most people. Most cloud storage providers will give you 2TB of storage for around $10 a month.

To purchase a bare minimum NAS with comparable storage, you’re looking at $500 on the very low end, and if you replace it every 5 years (good luck getting 5 years out of a cheap unit) you’re spending a little over $8 a month (not including electricity) on something you have to maintain yourself.

Unless you’re tech savvy you won’t be able to access your files outside of your home network and even if you are tech savvy your storage and sync won’t be as reliable from a slow upload connection at home as it is from the cloud.

I can already hear someone screaming at me through the void “I have great upload speeds, with no problems!” Go ahead and add the cost of your fast upload speed to the cost of your NAS.

Self hosting is simply not an option for most people. It costs too much, requires too much know-how, and is only more secure and private if you know what you’re doing and have the money to spend on upkeep. A misconfiguration or software bug in your NAS, network, or endpoints could quickly kill any privacy gains.

About security on self hosted storage.

Privacy is not the be-all and end-all of security. Can your NAS survive a fire, tornado, earthquake, or volcano? Not to mention a toddler knocking it off a shelf, a stray football thrown in the basement, or a random sudden disk failure. Data protection isn’t just protection from bad people – it’s protection from all sorts of things. Do you really want to risk losing all your family photos because your teenage son and his friends were playing games too close to the rack that holds your NAS?

xkcd: The Cloud

People tend to focus their security practices on confidentiality, while forgetting about integrity, and availability. Data on a home NAS is almost unquestionably more confidential than data in a public cloud, but data integrity due to mechanical failure, fire, or some other cause of loss is just as important for data security and I don’t think a $500 investment is going to get you anything even remotely comparable to what you get for $10 a month from OneDrive, iCloud, Dropbox, or Google Drive. Often you’re still going to need some offsite, probably cloud type, backup.

Not that you don’t need a cloud backup if you’re using a sync service like those I listed, you do, but if the point is to remove a public cloud vendor from your life you can’t do it for just the cost of a cheap NAS – you still have to trust someone along the way.

You really need to ask yourself if what you are gaining in confidentiality (if anything) is worth the trade off in availability, and integrity. I doubt most people will get any benefit from a home NAS – especially since many people will want to have access to it from the internet and will unknowingly expose themselves to other attack vectors by opening access into their home network.

How long can you really expect to self host?

I recently turned 40 – if I’m lucky I have another 40 or maybe 50 years to live. Of those 40 or 50 years, probably at most 30 will be in a house large enough to justify the space for local storage, and after the age of 70 will I want to keep up with a home network? Maybe, but probably not – plus, if I were to die along the way, would the other members of my family know how to operate my self hosted storage systems? Probably not – and I’m betting most of you don’t have families who could or would be able to operate something like that either. As I get older I’d rather make it easier for family to get access to financial documents and photos, over the endless worry of government surveillance.

Assuming I’ve got another 600 months of life in me, at $10 a month for file and photo storage I’ll be spending around $6,000 on cloud storage from now till I die and I won’t have to find shelf space for it, or fiddle with fancy networking… I think that’s a not terrible deal.


Things I consider when looking at cloud storage

Assuming you are using one of the big cloud providers – Apple, Google, Dropbox, Microsoft – their security and privacy practices are probably not that much different. I haven’t read all their privacy policies and this isn’t a sponsored post, so do your own research on that end. I’m just telling you how I think about cloud storage in order to stay as sane as possible.

Integration, Interoperability, Data Ownership

Assuming relative equality of security, privacy, and capacity on the big cloud platforms, these are the next big 3 considerations for me. If you are using a Linux desktop your storage options are going to be far narrower than mine, since at the moment I only use MacOS for my desktop.

Integration: between MacOS and iOS, iCloud is seamless, so that’s a check in the pro category for iCloud. But OneDrive has similar features, at least for file storage, and is probably better if you are in a household with mixed Apple and Windows machines. The only thing that keeps me on iCloud is that I think the Photos app is far superior to anything on Office 365, and photo sharing between family members is way too convenient for me to give up.

Data ownership: in a legal sense iCloud is where it should be – Apple doesn’t own your data. From a practical perspective it’s a little more complicated and that brings me to the next point which is interoperability – if you “own” the data but it’s difficult or impossible to move it to another platform do you really own it?

I have set my main desktop up to keep a local copy of everything – with a backup to a different service just in case I was ever locked out. In iCloud you “own” your data, but if you want to make sure you can move it around you should check your settings and make sure you have enough local storage to keep a local copy – I don’t think this is a uniquely Apple issue one of the weaknesses of cloud storage is the lack of portability.

On the Data Ownership point I actually think Google is ahead of Apple and Microsoft here, for two reasons:

1. With Google Drive I can request all my data (or parts of it) to be extracted from Google’s servers and downloaded to my computer. Apple offers something similar in iCloud.
2. You can set up a trusted contact who can download data you’ve designated for them in the event that your account becomes inactive. For example, if I were to die, my wife could still get access to my photos without having to know my Google account info.
   1. On this point I suppose you can just share account info… but in some sense that’s a violation of terms of service, and the control you get with the trusted contact means you can set more than one person and control the types of data they can get.

Interoperability: When I say “interoperability” I’m thinking of two specific and different things:

1. Interoperation between different cloud providers, i.e. portability.
2. Compatibility across multiple operating systems.

On point 1, none of the cloud providers are all that eager to work with each other, so you’re unlikely to find many tools that make it easy to move your data without an intermediary. Surprisingly, in this case it looks like Apple does provide an exit route, at least for photos to be migrated to Google, using the data privacy tools they’ve created. As with Google Drive you can also use this site to download a full copy of pretty much everything Apple knows about you from iCloud. At the time of writing (August 2021) I’m not sure if Microsoft or Dropbox offers anything similar.

Ransomware Protection

OneDrive and Dropbox both offer a system of versioning your files and will help you recover in the event of a ransomware event on your home computer. As far as I know Google and Apple don’t offer anything as robust for their cloud customers which is a shame and I hope it’s rolled out sooner rather than later. I’m guessing it won’t be too long till we see a large scale ransomware attack on MacOS. I don't worry much about it at the moment because I don't download torrents – but at some point that might not be good enough.


Recommendations

  • Don’t listen to strangers on the internet.
    • Note: I’m a stranger on the internet. 😆
  • Don’t trust free services.
    • Note: This entire site is free 😆
  • Don’t download random crap from the internet.
    • Note: you downloaded this site from the internet… whether or not it’s crap is left to the reader.
  • Don’t try to replicate robust cloud storage systems with a cheap NAS. Unless you know what you are doing – and your claim to knowledge can be corroborated by at least one unbiased person.
    • Note: I don’t run a home NAS…
  • Try not to pull out the tin foil hat.
    • I haven’t yet made a hat, but sometimes I think I should.
  • Don’t eat yellow snow.
    • Note: … no comment.