It's fairly straightforward to write your own custom error types in JavaScript. The most basic syntax looks something like this:

class ArgumentError extends Error {
  constructor() {
    super(`The sensor is broken. Null temperature value detected.`);
  }
}

It's important to first understand how the actual Error prototype works, along with JavaScript's prototypal inheritance.

A JavaScript class is a template for objects. It will always have a constructor method, which is automatically executed when a new instance of the class is created; if you don't include one, JavaScript adds one for you.

The extends keyword allows your class to access all of the methods and properties of the parent class.

In this context, it means that ArgumentError has access to all of the contents of Error. However, to initialize that inherited state you need to use super. Calling super invokes the parent class constructor with the arguments provided; super can also be used to call methods on the object's parent. Here, the message about the sensor being broken is passed into super, which calls the parent Error constructor with that message as an argument. The Error constructor takes a few optional parameters; providing a string sets it as the error's message.

Effectively, this means that ArgumentError is simply an Error that is passed a specific message.
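To make that concrete, here's a quick sketch of instantiating the class (restated from above) and inspecting the inherited behavior:

```javascript
// Restating the class from above so this snippet is self-contained
class ArgumentError extends Error {
  constructor() {
    super(`The sensor is broken. Null temperature value detected.`);
  }
}

const err = new ArgumentError();
err.message;          // "The sensor is broken. Null temperature value detected."
err instanceof Error; // true, thanks to prototypal inheritance
```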

You can also pass a specific argument, in a slightly more complex example:

export class OverheatingError extends Error {
  constructor(temperature) {
    super(`The temperature is ${temperature} ! Overheating !`);
    this.temperature = temperature;
  }
}

You can then perform checks against properties of the error and call functions accordingly.

try {
  // code that may throw ArgumentError or OverheatingError
} catch (error) {
  if (error instanceof ArgumentError) {
    // handle the broken sensor
  } else if (error instanceof OverheatingError && error.temperature > 650) {
    // handle extreme overheating
  }
}
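One refinement worth noting: by default, a subclass instance's name property is still "Error", so logs don't identify the custom type. A sketch of setting this.name in the constructor (the thrown values here are just illustrative):

```javascript
class OverheatingError extends Error {
  constructor(temperature) {
    super(`The temperature is ${temperature} ! Overheating !`);
    this.name = 'OverheatingError'; // otherwise error.name is "Error"
    this.temperature = temperature;
  }
}

try {
  throw new OverheatingError(700);
} catch (error) {
  console.log(`${error.name}: ${error.message}`);
  // OverheatingError: The temperature is 700 ! Overheating !
}
```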

Writing functions that are nested in a functional style in JavaScript can be tricky. For instance, consider the following code:

const composeu = (unaryFunc1, unaryFunc2) => {
    return function (arg) {
        return unaryFunc2(unaryFunc1(arg));
    };
};

For this to work properly, the nested function invocations need to be written inside out. Existing functions can then be strung together and piped in a UNIX-like fashion. A rest parameter (...) allows the number of chained functions to be variable.
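Here's a sketch of that variadic form, keeping composeu's left-to-right ordering (the function and argument names are my own):

```javascript
// A variadic compose: accepts any number of unary functions via a rest parameter
const compose = (...funcs) => {
    return function (arg) {
        // apply each function in turn, starting with the original argument
        return funcs.reduce((value, func) => func(value), arg);
    };
};

const double = x => x * 2;
const increment = x => x + 1;

compose(double, increment)(5); // double(5) = 10, then increment(10) = 11
```

Using reduce means an empty chain simply returns the argument unchanged, which is a sensible identity case.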

In the following similar example, the returned function calls two binary functions on a fixed set of arguments:

const composeb = (binFunc1, binFunc2) => {
    return function (arg1, arg2, arg3) {
        return binFunc2(binFunc1(arg1, arg2), arg3);
    };
};

You can also use closed-over variables to control function flow, such as by storing a counter in a local variable. I wasn't able to figure out the following problem initially:

// Write a `limit` function that allows a binary function to be called a limited number of times

const limit = (binFunc, count) => {
    return function (a, b) {
        if (count >= 1) {
            count -= 1;
            return binFunc(a, b);
        }
        return undefined;
    };
};
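Putting it together, a usage sketch (add is a hypothetical binary function, and limit is restated so the snippet stands alone):

```javascript
const add = (a, b) => a + b; // hypothetical binary function

const limit = (binFunc, count) => {
    return function (a, b) {
        if (count >= 1) {
            count -= 1; // the closed-over count persists between calls
            return binFunc(a, b);
        }
        return undefined;
    };
};

const addTwice = limit(add, 2);
addTwice(1, 2); // 3
addTwice(3, 4); // 7
addTwice(5, 6); // undefined (the limit is exhausted)
```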

In my line of work, I frequently end up helping customers who are running into issues with implementing HashiCorp Sentinel policies.

It's a “policy as code” product that ties in nicely with the Infrastructure as Code nature of Terraform. For more on the philosophy behind Sentinel and the advantages it confers, I recommend this post from one of HashiCorp's founders, Armon Dadgar:

Sentinel is revised rapidly and is a paid product, so finding code examples that both actually work and are current can be tricky. One of the best places to start is this repository of example Sentinel policies (and helper functions) for various cloud providers:

Though HashiCorp literature states “Sentinel is meant to be a very high-level easy to learn programming language”, it isn't easy, particularly if you aren't familiar with the general syntax of Go. The difficulty extends beyond syntax to the way troubleshooting works and the lack of IDE tooling (outside of a VSCode syntax highlighter). Since error messages are often quite opaque, debugging is chiefly a matter of using print and then running the sentinel binary with the trace flag.

For example, say you're creating a policy that is meant to check for tags, and you unexpectedly run into a situation where undefined is returned where it isn't expected. This is typically the result of unexpected provider configuration, such as the addition of aliases.

Analyzing this can require a mixture of tfplan, tfconfig, and even tfstate if data sources therein don't contain computed values. Understanding computed values is critical to writing Sentinel code effectively: many resources have values that aren't known until after an apply is performed. Because Sentinel runs occur between the plan and apply phases, a policy cannot effectively operate against such values. If your Sentinel mocks contain unknown for 'after', the value is likely computed.

If you're using the helper functions from the linked Hashicorp repository, these will often require some combination of all three imports.

At present, the only way to iterate over provider aliases is to use tfconfig.providers, which returns a JSON object containing the specified providers.

Of Closure and Currying

Recursion has always been a difficult concept for me to wrap my head around. Consequently, closure in JavaScript is also difficult for me to understand. Here's a brief series of exercises from Frontend Masters, written up here mostly to organize my thoughts and cement the concepts I've learned.

Consider the following function that takes an argument, and returns a function that returns that argument:

const identityf = arg => {
    return function () {
        return arg;
    };
};

This is possible because of closure: the context of an inner function includes the scope of the outer function, so nested functions can see the enclosing function's variables. Under the hood, this involves allocating those variables on the heap instead of the stack, which allows child functions to operate after the parent function has exited.
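To make the "operates after the parent exits" point concrete, here's a minimal counter sketch (makeCounter is my own example name):

```javascript
const makeCounter = () => {
    let count = 0; // lives on the heap, surviving after makeCounter returns
    return function () {
        count += 1;
        return count;
    };
};

const counter = makeCounter();
counter(); // 1
counter(); // 2: the closed-over count persists between calls
```

Each call to makeCounter creates a fresh count, so two counters never interfere with each other.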

Things get more complex when you return functions:

// A function that takes a binary function, and makes it callable with two invocations
// For instance, calling liftf(multiply)(5)(6) would return 30
const liftf = func => {
    return function (first) {
        return function (second) {
            return func(first, second);
        };
    };
};

The reason the multiple invocations (the (5) and (6) in the comment above) are possible is that the function itself returns functions, and subsequent invocations pass their arguments to those child functions. The multiple returns don't break the function because, again, the child functions can operate even after the parent functions exit.

The process of breaking down a function with multiple arguments into a chain of functions that each take a single argument is known as currying.

// This function takes a binary function and an argument, and returns a function that can take a second argument

const curry = (binaryFunction, arg) => {
    return function (secondArg) {
        return binaryFunction(arg, secondArg);
    };
};

curry(add,2)(7); // is equal to 9
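The two ideas connect: currying a binary function with one argument is the same as lifting it and applying the first invocation. A sketch of that equivalence (add is an assumed binary function; liftf and curry are restated from above in arrow shorthand):

```javascript
const add = (a, b) => a + b; // assumed binary function

const liftf = func => first => second => func(first, second);
const curry = (binaryFunction, arg) => secondArg => binaryFunction(arg, secondArg);

curry(add, 2)(7);  // 9
liftf(add)(2)(7);  // 9: currying add with 2 is equivalent to liftf(add)(2)
```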

I've recently switched employers, from a notoriously grindy workplace to a more people-centric one. I'm still in training, and consequently don't have any insight yet into how all of that will translate into my day-to-day work, but I've found myself somewhere I didn't foresee: as a Manjaro user.

I've dabbled in many Linux distributions over the years, and typically use either Ubuntu-based distributions (for compatibility and easy targeting) or openSUSE Tumbleweed when I want cutting-edge package versions. My new employer sent me a Dell XPS 15 with an i9 processor, which also features Qualcomm WiFi that doesn't currently have spectacular support (provided by the ath11k kernel module). None of the Ubuntu, Fedora, openSUSE, or Arch ISOs were able to detect the WiFi card out of the box (despite the presence of the ath11k module and, in some cases, a brand-new kernel version), which was a significant problem as I had no desire to compile a kernel just to use my WiFi. I also didn't particularly want to ask my employer for a different computer, or to buy a better-supported card out of pocket, which I often saw mentioned as a 'solution'.

I finally tried a Manjaro ISO out of desperation, and was pleased to find that the WiFi card worked as expected, until I installed it. I was then able to get things running by following the steps in an Arch Wiki article about a similarly afflicted XPS model.

I'm definitely still getting the hang of pacman, but I'm already enjoying the presence of the AUR. It's a very nice looking system too, and clearly a lot of effort has gone into customizing the look and feel of the GNOME/KDE spins, but I don't think I like the overtness of the Manjaro branding in my terminal (as a default part of the preconfigured powerline prompt). Additionally, the overall experience has been rough around the edges (odd but non-breaking package manager errors, and plenty of trouble with suspend, reboot, and sound), though I attribute most of this to the markedly Linux-hostile hardware.

I'll aim to update this again in the near future. Though I don't see myself switching from my traditionally utilized distros yet, I'm definitely keeping an open mind. Here's hoping things stay stable as I ramp up at work, and that better hardware drivers make their way into the kernel.