Looped Network


As the title alludes to, this morning I tried updating my Pinebook Pro running Manjaro Linux through my normal method:

sudo pacman -Syu

Today, this resulted in an error message about the libibus package:

warning: could not fully load metadata for package libibus-1.5.26-2
error: failed to prepare transaction (invalid or corrupted package)

Fun. I first wanted to see if it was really just this package causing the problem or if there were other issues. Being a pacman noob, I just used the UI to mark updates to the libibus package as ignored. Once I did that, all of the other packages installed successfully. That prompted me to reboot, which I gladly did since I figured I'd see if it made any difference. Once my laptop was back up and running, though, executing pacman -Syu again still gave the same error related to libibus.

Some searches online showed that a mirror with a bad package could be the problem, so I updated my mirrors via:

sudo pacman-mirrors -f5

This didn't solve the problem, but the new mirror gave me a different error message:

error: could not open file /var/lib/pacman/local/libibus-1.5.26-2/desc

With some more searches online, I saw a few people on the Manjaro forums say that simply creating those files was enough to fix similar errors they had with other packages. Creating the file above just resulted in an error about a second file being missing, so I ultimately ended up running:

sudo touch /var/lib/pacman/local/libibus-1.5.26-2/desc
sudo touch /var/lib/pacman/local/libibus-1.5.26-2/files

Now running an update allowed things to progress a little further, but I got a slew of errors complaining about libibus header files (.h) existing on the filesystem. My next less-than-well-thought-out idea was to just remove the package and try installing it fresh. I tried running:

sudo pacman -R libibus

Fortunately, Manjaro didn't let me do this, telling me that it was a dependency for gnome-shell. Yeah, removing that would've been bad. It was back to searching online. The next tip I stumbled across was to clear the pacman cache and then install updates with:

sudo pacman -Scc
sudo pacman -Syyu

This unfortunately gave me the same error about the header files. However, the same forum thread had another recommendation to run:

sudo pacman -Syyu --overwrite '*'

Curious about exactly what this would do prior to running it, I checked out the man page for pacman:

Bypass file conflict checks and overwrite conflicting files. If the package that is about to be installed contains files that are already installed and match glob, this option will cause all those files to be overwritten. Using --overwrite will not allow overwriting a directory with a file or installing packages with conflicting files and directories. Multiple patterns can be specified by separating them with a comma. May be specified multiple times. Patterns can be negated, such that files matching them will not be overwritten, by prefixing them with an exclamation mark. Subsequent matches will override previous ones. A leading literal exclamation mark or backslash needs to be escaped.

I took this to mean that instead of complaining about the header files that already existed on the filesystem, it would simply overwrite them since my glob was just * to match anything. I ran this, and sure enough everything was fine.
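The pattern semantics from that man page excerpt (comma-separated globs, `!` for negation, later matches overriding earlier ones) can be sketched in Python with fnmatch; the `should_overwrite` helper is my own name for illustration, not part of pacman:

```python
from fnmatch import fnmatch

def should_overwrite(path, patterns):
    """Mimic the documented --overwrite pattern logic: evaluate the
    comma-separated globs in order, where a plain pattern marks a
    matching file for overwrite, a '!'-prefixed pattern un-marks it,
    and later matches override earlier ones."""
    overwrite = False
    for pattern in patterns.split(","):
        if pattern.startswith("!"):
            if fnmatch(path, pattern[1:]):
                overwrite = False
        elif fnmatch(path, pattern):
            overwrite = True
    return overwrite

# The bare '*' I used marks every conflicting file for overwrite:
print(should_overwrite("/usr/include/ibus.h", "*"))       # True
# A negated pattern could have protected the headers instead:
print(should_overwrite("/usr/include/ibus.h", "*,!*.h"))  # False
```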

I mainly run Manjaro on my Pinebook Pro because it's such a first-class citizen there with tons of support. It's now the default when new Pinebook devices ship; back when I got mine it still came with Debian, though I quickly moved it over after seeing how in love the community was with Manjaro. I do find that I run into more random issues like this on Manjaro than I do with Fedora on my other laptop or Debian on my servers, and at times it can be a little frustrating; I didn't really want to spend a chunk of my Saturday morning troubleshooting this, for example. But while there seem to be more issues with Manjaro, the documentation and community are so good that the solution can usually be found after a little time digging in. I've yet to run into any issue where the current installation was a lost cause forcing me to reinstall the operating system.

Just a few moments ago I needed to extract the audio component out of a video file into some type of standalone audio file, like .mp3. Since I've been working with Audacity to record audio, I figured maybe it had some capability for ripping it out of video.

My initial searches gave me results like this, which quickly made it clear that while this is technically possible, it requires some add-ins that I didn't really want to mess around with. However, since the add-in mentioned in that video was for FFmpeg, I realized I could just use that directly.

I didn't have ffmpeg installed, but that was easy enough to rectify on Fedora 36.

sudo dnf install ffmpeg-free

Then I needed to extract the audio. I first checked how it was encoded in the video with:

ffprobe my_video.mp4

After sifting through the output, I saw that it was encoded as aac:

Stream #0:1[0x2]: Audio: aac (mp4a / 0x6134706D), 48000 Hz, stereo, s16, 317 kb/s (default)

Rather than extracting that stream as-is, I wanted to re-encode the audio as MP3 at the same time. Another quick search showed me some great resources. Ultimately, I ended up running:

ffmpeg -i my_video.mp4 -q:a 0 -map a bourbon.mp3

As mentioned in the Stack Overflow post, the -q:a 0 parameter allows for a variable bitrate, while -map a says to ignore everything except the audio.
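If I ever script this, the same invocation could be assembled in Python; the `build_extract_cmd` helper below is hypothetical and just reproduces the flags above:

```python
def build_extract_cmd(video_path, audio_path):
    """Assemble the ffmpeg call from above: -q:a 0 requests a
    variable bitrate, and -map a selects only the audio streams."""
    return ["ffmpeg", "-i", video_path, "-q:a", "0", "-map", "a", audio_path]

cmd = build_extract_cmd("my_video.mp4", "bourbon.mp3")
print(" ".join(cmd))  # ffmpeg -i my_video.mp4 -q:a 0 -map a bourbon.mp3
# subprocess.run(cmd, check=True) would perform the actual extraction
```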

Just a few moments later, and my MP3 was successfully encoded.

I recently ran across an interesting error with my development Kubernetes cluster, and while I still have no idea what I may have done to cause it, I at least figured out how to rectify it. As is commonly the case, most of the things I end up deploying to Kubernetes simply log to standard out so that I can view logs with the kubectl logs command. While running this against a particular deployment, though, I received an error:

failed to try resolving symlinks

Looking at the details of the error message, it seemed that running a command like:

kubectl logs -f -n {namespace} {podname}

is looking for a symbolic link at the following path:

/var/log/pods/{namespace}_{pod-uuid}/{namespace}

The end file itself seems to be something extremely simple, like a number followed by a .log suffix. In my case, it was 4.log. That symbolic link then points to a file at:

/var/lib/docker/containers/{uuid}/{uuid}-json.log

where the uuid is the UUID of the container in question.

Note: The directory above isn’t even viewable without being root, so depending on your setup you may need to use sudo ls to be able to look at what’s there.

I was able to open the -json.log file and validate that it had the information I needed, so I just had to create the missing symlink. I did that with:

sudo ln -s /var/lib/docker/containers/{uuid}/{uuid}-json.log 4.log

Since my shell was already in the /var/log/pods/{namespace}_{pod-uuid}/{namespace} directory, I didn’t need to give the full path for the actual link location, just the relative name 4.log.
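The whole fix amounts to recreating a single symlink; here's the same idea as a Python sketch, run against temporary stand-in directories since the real paths under /var/lib/docker and /var/log/pods need root:

```python
import os
import tempfile

# Stand-in directories; the real ones are /var/lib/docker/containers/{uuid}
# and /var/log/pods/{namespace}_{pod-uuid}/..., which require root to touch.
root = tempfile.mkdtemp()
container_dir = os.path.join(root, "containers", "abc123")
pod_log_dir = os.path.join(root, "pods", "myns_pod-uuid")
os.makedirs(container_dir)
os.makedirs(pod_log_dir)

# The actual log file that kubectl ultimately needs to read...
target = os.path.join(container_dir, "abc123-json.log")
with open(target, "w") as f:
    f.write('{"log": "hello"}\n')

# ...and the missing symlink that `kubectl logs` resolves.
link = os.path.join(pod_log_dir, "4.log")
os.symlink(target, link)

print(os.path.islink(link))  # True
```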

Sure enough, after creating this I was able to successfully run kubectl logs against the previously broken pod.

Lately I've been working through getting WinRM connectivity working between a Linux container and a bunch of Windows servers. I'm using the venerable pywinrm library. It works great, but there was a decent bit of setup for the underlying host to make it work that I had been unfamiliar with; you can't just create a client object, plug in some credentials, and go. A big part of this for my setup was configuring krb5 to be able to speak to Active Directory appropriately.

My setup involves a container that runs an SSH server which another, external service actually SSHs into in order to execute various pieces of code. So my idea was to take the entrypoint script that configures the SSH server and have it also:

  1. Create a keytab file.
  2. Use it to get a TGT.
  3. Create a cron job to keep it refreshed.

Let's pretend the AD account I had been given to use was:

Username@sub.domain.com

In my manual testing, this worked fine after I was prompted for the password:

kinit Username@SUB.DOMAIN.COM

If you're completely new to this, note that it's actually critical that the domain (more appropriately called the “realm” in this case) is in all capital letters. If I run this manually by execing my way into a container, I get a TGT just like I'd expect. I can view it via:

klist -e

Unfortunately, things didn't go smoothly when I tried to use a keytab file. I created one in my entrypoint shell script via a function that runs:

{
    echo "addent -password -p Username@SUB.DOMAIN.COM -k 1 -e aes256-cts-hmac-sha1-96"
    sleep 1
    echo <password>
    sleep 1
    echo "wkt /file.keytab"
} | ktutil &> /dev/null
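That shell function is really just feeding ktutil a scripted stdin. For illustration, the same script could be assembled in Python (the `ktutil_script` helper is hypothetical, and I'm assuming ktutil accepts the same commands on stdin as it does interactively):

```python
def ktutil_script(principal, password, keytab_path,
                  enctype="aes256-cts-hmac-sha1-96"):
    """Build the stdin script fed to ktutil: add an entry derived
    from the password, then write the keytab out."""
    return "\n".join([
        f"addent -password -p {principal} -k 1 -e {enctype}",
        password,
        f"wkt {keytab_path}",
        "",
    ])

script = ktutil_script("Username@SUB.DOMAIN.COM", "hunter2", "/file.keytab")
print(script)
# subprocess.run(["ktutil"], input=script, text=True) would execute it
```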

The keytab file is created successfully, but as soon as I try to leverage it with...

kinit Username@SUB.DOMAIN.COM -kt /file.keytab

...I receive a Kerberos preauthentication error. After much confusion and searching around online, I finally found an article that got me on the right track.

The article discusses the fact that an assumption is being made under the hood that the salt being used to encrypt the contents of the keytab file is the realm concatenated with the user's samAccountName (aka “shortname”). So for my sample account, the salt value would be:

SUB.DOMAIN.COMUsername

The problem highlighted by the article is that when you authenticate via the UserPrincipalName format (e.g.: username@domain.com) rather than the shortname format (e.g.: domain\username), another assumption is made that the prefix of the UPN is the same as the shortname. This is very commonly not the case; in a previous life where I actually was the AD sysadmin, I had shortnames of first initial and last name while the UPNs were actually firstname dot lastname. So for example, my UPN was:

looped.network@domain.com

While my samAccountName was:

lnetwork

If this type of mismatch happens, you can use -s when running addent to specify the salt. After checking AD, I verified in my current case that the username was the same for both properties... but that in both places it was completely lowercase. I can't say why it was given to me with the first character capitalized, but after re-trying with username@SUB.DOMAIN.COM, everything was successful. This made sense to me because while AD doesn't care about the username's capitalization when it authenticates (hence why manually running kinit and typing the password worked), generating the keytab file with the capitalized name meant the wrong salt was used.
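The default salt construction is simple enough to write down as code; this `default_salt` helper is just my own illustration of the rule from the article:

```python
def default_salt(realm, sam_account_name):
    """Kerberos' default AD salt: the realm concatenated with the
    case-sensitive samAccountName."""
    return realm + sam_account_name

# My sample account from above:
print(default_salt("SUB.DOMAIN.COM", "Username"))  # SUB.DOMAIN.COMUsername

# The failure mode: a keytab built with the capitalized name uses a
# different salt than the all-lowercase name AD actually stores.
print(default_salt("SUB.DOMAIN.COM", "Username")
      == default_salt("SUB.DOMAIN.COM", "username"))  # False
```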

There’s nothing quite like being on a live call to make you realize that you’re not as savvy with Vim as you thought. I’ll probably be shifting back to Sublime for my main workflow for the foreseeable future.

I had written not very long ago about my progress on my little WriteFreely Python client that I've been working on to facilitate my ability to create posts from an SSH session to a VPS. I actually had a bit of an “Oh no!” moment just the other day when I realized that I might be able to accomplish what I'm looking to do just by going to the write.as website from a TUI browser like w3m, but a quick test let me know that JavaScript was required.

This weekend I felt like I didn't have a ton left to work out from the perspective of the CLI version of the application, at least for a first build. I wanted to round out some of the functionality with pulling back post information to then be able to get IDs for deleting posts. Based on that, I then needed to update some of the help documentation. With that implemented, though, I wanted to test it from my VPS. Out of the gate, that was a bit of a pain since I'm not feeling like things are ready to push to something like PyPI yet. So instead, I just cloned my repo, manually created the virtual environment, installed the dependencies, and then created a shell script in my $PATH named writepyly that just contained:

#!/usr/bin/env bash
/home/{username}/code/writepyly/.venv/bin/python /home/{username}/code/writepyly/src/__main__.py "$@"

In this case, {username} holds my actual username on the system. This works great and allowed me to put some of the functionality through its paces. I got to fix a few bugs with things like trying to push posts when I didn't have any configuration files, for example. I apparently like to catch errors and then not actually stop the execution flow. This post, however, is being made from my client on the VPS.

After getting the VPS side of things sorted, I went back to start building out the TUI version of the application, which I want to launch when writepyly is executed without any commands provided. In the original branch, that would simply print the help documentation. In this new version, only writepyly help will trigger that while writepyly by itself will cause the TUI to load up.

This will be an interesting learning experience for me since I have zero experience building something like this. I'm using rich as the framework for the TUI, and it honestly seems very easy to work with. I think building out everything except for creating new posts will be super easy. Creating new posts is going to involve basically having a text editor in my application, so I currently have no idea what the hell that will look like. Maybe instead of having a text editor for post creation, I'll just prompt the user from the TUI for where the file they want to use is. I don't see a ton of value in trying to recreate something like Vim, Emacs, Micro, etc. given that they'll all be better solutions for writing content than what I would put together. 🤔

I feel dumb right now, especially after my post about what I've been doing with Neovim. While working on a personal project, I kept having complaints from Neovim about my file having mixed indentation, indents and unindents not aligning, etc. This project has now been worked on with VS Code, Sublime, and Neovim. After struggling to manually rectify things one line at a time in Neovim, I eventually did the smart thing and took to the Internet where I learned that:

I can easily issue the command:

:set syntax=whitespace

To see which whitespace consists of tabs and which consists of spaces. If I've got Neovim set the way I want as far as tabs and spaces are concerned, I can then just issue:

:retab

To make everything match. I guess it's another “better later than never” scenario.
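If you're curious what :retab actually does when expandtab is set, it's essentially the same transformation as Python's str.expandtabs: every tab is replaced with enough spaces to reach the next tab stop. A sketch, assuming a tabstop of 4:

```python
# Mixed indentation of the kind Neovim was complaining about:
mixed = "\tif ready:\n\t\tgo()"

# With spaces preferred and a tab stop of 4 (i.e. tabstop=4 plus
# expandtab), :retab performs essentially this transformation:
fixed = mixed.expandtabs(4)
print(repr(fixed))  # '    if ready:\n        go()'
```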

I had written a few months ago on Medium that I was trying to switch from using VS Code as my main editor to Vim. As I mentioned in that post, I've used Vim for years now, but never as my “main” editor for when I need to get serious work done, such as with my job. I also swapped from vanilla Vim to Neovim, which I found to have a few quality-of-life improvements that I enjoyed. I just couldn't stick with it, though, because I missed how frequently VS Code saved me from myself when I did things like making stupid mistakes that I then needed to debug manually because my editor wasn't telling me about the problems in advance. Likewise, I got irritated when I kept having to manually check things like what parameters I needed to pass to a method or where I defined a particular class because I couldn't easily peek at them like I can in VS Code.

That being said, I knew this functionality was possible in Neovim (and Vim), but I just never bothered to check exactly how. During some initial homework on the matter, it seemed like parts of it were fairly simple while other parts were complicated. Ultimately, it turned out that how difficult the process is to set everything up really depends on how difficult you want to make it and how much you want to customize things. I just reproduced the steps I originally followed on my work laptop with my personal laptop to validate my notes prior to making this post, and it probably took me less than 5 minutes.

Plugins and init.vim

When I first started with Neovim, I quite literally told it to just use what I had already set up with Vim as far as configuration and plugins were concerned. I had used Pathogen for my Vim plugins and had my configuration done in ~/.vimrc. Neovim looks for configuration files in ~/.config/nvim, and they can be written in Vimscript, Lua, or a combination of the two. I initially just had my init.vim file with:

set runtimepath^=~/.vim runtimepath+=~/.vim/after
let &packpath = &runtimepath
source ~/.vimrc

This was taken straight from the documentation. It worked fine, but I wanted to keep my configs separate in this case. I started by just copying the content of my existing .vimrc file to ~/.config/nvim/init.vim.

Note: If you're curious, my full Neovim configuration is on GitLab.

Next I wanted a plugin manager. vim-plug seems to be extremely popular and was simple enough to install with the command they provide:

sh -c 'curl -fLo "${XDG_DATA_HOME:-$HOME/.local/share}"/nvim/site/autoload/plug.vim --create-dirs \
       https://raw.githubusercontent.com/junegunn/vim-plug/master/plug.vim'

Then I just updated my init.vim with the plugins I wanted to install:

call plug#begin('~/.config/plugged')
Plug 'https://github.com/joshdick/onedark.vim.git'
Plug 'https://github.com/vim-airline/vim-airline.git'
Plug 'https://github.com/tpope/vim-fugitive.git'
Plug 'https://github.com/PProvost/vim-ps1.git'
Plug 'https://github.com/wakatime/vim-wakatime.git'
Plug 'neovim/nvim-lspconfig'
Plug 'neoclide/coc.nvim', {'branch': 'release'}
call plug#end()

call plug#begin('~/.config/plugged') and call plug#end() indicate what configuration pertains to vim-plug. The path inside of call plug#begin is where plugins get installed to; I could pick whatever arbitrary location I wanted. Plugins can be installed with any valid git link. You can see above that there's a mix of full URLs and a shorthand method. I started off by just copying the links for plugins I already used with Vim (all of the full GitHub links) and then adding the others as I looked up how to do some additional configuration. More on those later.

With init.vim updated, I just needed to close and re-open Neovim for everything to apply, followed by running:

:PlugInstall

This opens a new pane and shows the progress as the indicated plugins are all installed. What's really cool about this is that I can also use :PlugUpdate to update my plugins, rather than going to my plugin folder and using git commands to check for them.

Note On Configuration

I ultimately ended up doing all of my configuration in Vimscript. I would actually prefer to use Lua, but most of the examples I found were using Vimscript. I also have a fairly lengthy function in my original Vim configuration for adding numbers to my tabs that I didn't want to have to rewrite, especially since I wholesale copied it from somewhere online. Depending on what you want to do, however, you may end up with a mix of both, especially if you find some examples in Vimscript and some in Lua. This is entirely possible. Just note there can be only one init file, either init.vim or init.lua. If you create both, which is what I initially did, you'll get a warning each time you open Neovim and only one of them will be loaded.

To use init.vim as a base and then also import some Lua configuration(s), I created a folder for Lua at:

~/.config/nvim/lua

In there, I created a file called basic.lua where I had some configuration. Then, back in init.vim, I just added the following line to tell it to check this file as well:

lua require('basic')

Error Checking

Note: I ended up not using the steps below, so if you want to follow along with exactly what I ended up using, there's no need to actually do any of the steps in this section.

This is where some options come into play. Astute readers may have noticed the second-to-last plugin in my vim-plug config was for:

Plug 'neovim/nvim-lspconfig'

This is for the LSP, or Language Server Protocol. This allows Neovim to talk to various language servers and implement whatever functionality they offer. However, it doesn't actually come with any language servers included, so I needed to get those and configure them as needed. For example, I could install pyright from some other source, like NPM:

npm i -g pyright

And then I needed additional configuration to tell Neovim about this LSP. The samples were in Lua, which is why I initially needed to use Lua configuration alongside Vimscript:

require'lspconfig'.pyright.setup{}

This actually worked for me with respect to error checking. Opening up a Python file would give me warnings and errors on the fly. However, I didn't get any code completion. I started looking at options for this, but frankly a lot of them seemed pretty involved to set up, and I wanted something relatively simple rather than having to take significant amounts of time configuring my editor any time I use a new machine or want to try out a different language.

Code Completion

Ultimately, I stumbled onto Conquer of Completion, or coc. I don't know why it took me so long to find as it seems to be insanely popular, but better later than never. One of coc's goals is to be as easy to use as doing the same thing in VS Code, and I honestly think they've nailed it. I first installed it via vim-plug in init.vim:

Plug 'neoclide/coc.nvim', {'branch': 'release'}

After restarting Neovim and running :PlugInstall, I could now install language servers straight from Neovim by running :CocInstall commands:

:CocInstall coc-json coc-css coc-html coc-htmldjango coc-pyright

After this, I fired up a Python file and saw that I had both error checking and code completion. There was just one final step.

Key Mapping

Given the wide array of key mapping options and customizations that people do, coc doesn't want to make any assumptions about what key mappings are available and which may already be in use. As a result, there are NO custom mappings by default. Instead, they need to be added to your Neovim configuration just like any other mapping changes. However, the project shares a terrific example configuration with some recommended mappings in their documentation. I legitimately just copied the sample into my existing init.vim file. This adds some extremely useful mappings like:

  • gd to take me to the declaration for what I'm hovering.
  • K to show the documentation for what I'm hovering (based on the docstring for Python, for example.)
  • ]g to go to the next error/warning and [g to go to the previous one.
  • Tab and Shift + Tab to move through the options in the code completion floating window.
  • Enter to select the first item in the code completion floating window.
  • A function to remap Ctrl + f and Ctrl + b, which are normally page down and page up, to scroll up and down in floating windows but only if one is present.

And tons of other great stuff. I initially spent about 30 minutes just playing around with some throwaway code to test all of the different options and key mappings. It honestly feels super natural and now gives me the same benefits of VS Code while allowing me to use a much leaner and more productive editor in Neovim.

In my opinion, there's nothing quite like actual projects to really help me learn how to do something. Case in point, I posted last weekend about working on a Python WriteFreely client. I mainly work on it on weekends since, when I finish a day of coding for actual work during the week, I usually don't have the motivation to work on a side project of my own.

While working on it today, I realized that a method in my Post class was actually needed outside of that: check_collection

This method takes the collection passed by the user (think of it like an individual blog, if you're unfamiliar with the API) and validates that it is legitimate. While I initially included this in my Post class, as I added functionality to retrieve a list of posts I realized I needed it in areas where I wouldn't have all of the information to instantiate the Post class.

One immediate option was to just make my Post class more generic so that it could be instantiated and used with less up-front information. However, I didn't particularly like that setup. Instead, I realized that the solution was to simply make a new class, which I called WriteFreely, to serve as a superclass. Then I made my Post class a subclass of it via:

from client import WriteFreely
class Post(WriteFreely):

In this way, my only change to the Post class was to delete the check_collection method which it will now naturally inherit from the WriteFreely parent class. I've honestly never really done anything with class inheritance before in a real-world scenario, so to me it's just further proof that I'll never get better experience with something than by simply doing it.
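A minimal sketch of that refactor (the class bodies here are illustrative stand-ins rather than the real writepyly code):

```python
class WriteFreely:
    """Parent class holding functionality shared across features."""
    def __init__(self, collections):
        self.collections = collections

    def check_collection(self, collection):
        # Validate that the user-supplied collection (blog) exists.
        return collection in self.collections


class Post(WriteFreely):
    """Post-specific behavior; check_collection is inherited."""
    def publish(self, collection, body):
        if not self.check_collection(collection):
            raise ValueError(f"unknown collection: {collection}")
        return {"collection": collection, "body": body}


client = Post(["myblog"])
print(client.check_collection("myblog"))  # True
print(client.publish("myblog", "hello")["collection"])  # myblog
```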

I recently found myself working on a project which required the GSS-NTLMSSP library. The application is going to be delivered via Kubernetes, so I needed to build a Docker image with this library. The project's build instructions are pretty clear about what the dependencies are, but given that this was going to be a Docker image, I wanted to use Alpine Linux as the base image in order to keep the image size as small as possible. The problem with this is that Alpine is just different enough to require different dependencies, publish their packages under different names, etc.

I started off by doing things manually: I fired up a local container running the Alpine base image and installed each package via apk add {package_name} to make troubleshooting easier. Once I had all of the packages from the aforementioned build documentation, I ran ./configure, looked at the errors, figured out which package was missing, installed it, and tried again. After several iterations of this process, ./configure executed successfully and it was time to attempt running make.

make ran for a minute but then would error out with:

undefined reference to 'libintl_dgettext'

This seemed odd to me because while running ./configure I had received an error that msgfmt couldn't be found, and I had installed gettext-dev in order to accommodate that. After some additional package searches, I discovered the musl-libintl library was also available. I attempted to install it but received an error that it was attempting to modify a file controlled by the gettext-dev package. I uninstalled that via apk del gettext-dev and then ran into another error that—duh—msgfmt was now missing again. I handled that by installing the vanilla gettext package, not the -dev version, and then finally everything compiled successfully.

The following is the full list of packages that I needed in order to get the build to succeed:

  • autoconf
  • automake
  • build-base
  • docbook-xsl
  • doxygen
  • findutils
  • gettext
  • git
  • krb5-dev
  • libwbclient
  • libtool
  • libxml2
  • libxslt
  • libunistring-dev
  • m4
  • musl-libintl
  • pkgconfig
  • openssl-dev
  • samba-dev
  • zlib-dev

Note that git is included just to clone the repo, and build-base is the meta package I used for compiling C software since just installing something like gcc will not include everything needed.