wakeohsleeper

Blood-soaked clarity

I have this article to thank for dragging me down the parser combinator rabbit hole. The result has been three separate attempts at writing a parsing library, so here are some thoughts.

Warning: Unsubstantiated claims ahead.

You might be better off without error handling

A lot of parsing crates out there tout their ability to let you configure how errors are returned from a parse call. This always mucks up the return type, forces you to have a bunch of error-related combinators, and makes the code harder to read. Dispense with error handling from the library, and let users handle errors themselves.

You don't need to be generic over input

Wouldn't it be nice if your library could accept any type of “input stream” and still work? Yes...but no. It isn't worth the pain. The input is almost always bytes. Discard the “abstract stream” concept and force your users to give you bytes.

You don't need to be generic over output

One of the draws of parser combinators is that they let you write declarative parsers in an imperative language. After all the setup and ceremony is done, it feels a little bit like magic. But the magic comes at the cost of debuggability. console.log-style debugging is out the window because the execution looks like a tree, with each node having zero context about how it got there. Even stepping through with a debugger is a pain. One way out of this mess is to always output bytes from your parsers. This forces the library users to parse things one token at a time, in an imperative manner.

Taken together, these ideas lead to designing a library whose only focus is “identifying patterns in bytes”. Bytes in, bytes out. Simple. Understandable. Still useful.
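To make that concrete, here is a minimal Zig sketch of what such an API could look like. The `tag` function and its shape are my own illustration, not taken from any particular library: no error type, plain byte slices in, an optional byte slice out.

const std = @import("std");

// Hypothetical sketch: a "parser" is just a function from bytes to
// optional bytes. No error enum, no generic input stream, no typed output.
// On success it returns the matched prefix; on failure it returns null,
// and the caller decides what that failure means.
fn tag(pattern: []const u8, input: []const u8) ?[]const u8 {
    if (std.mem.startsWith(u8, input, pattern)) {
        return input[0..pattern.len];
    }
    return null;
}

test "bytes in, bytes out" {
    const input = "GET /index.html";
    try std.testing.expect(tag("GET ", input) != null);
    try std.testing.expect(tag("POST ", input) == null);
}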

Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away.

The basic unit for CPU time is a nanosecond. This is too small a measure to develop an intuition for. But you can build intuition by comparison. In the “Time for Computers” episode of the Two's Complement podcast, Matt Godbolt tries to tackle this by comparing the CPU time scale with a human time scale. This really helped me, so I'm reproducing a hand-wavy transcription of that scale here:

The basic unit of work a CPU can do is an instruction cycle. During the cycle the CPU will fetch an instruction, decode it and execute it.

CPUs take 1 cycle to do elementary operations like addition, subtraction, XORs and such. A CPU cycle takes a third of a nanosecond. Charitably, we'll say a human doing the same kind of operation would take 1 second.

So that's our scale: 1/3 of a nanosecond for a CPU is like 1 second for a Homo sapiens.

The next type of operation a CPU does is multiplication. It takes anywhere between 4 and 6 cycles for a CPU to multiply. That works out to ~1.3 nanoseconds, which in human time is about 4 seconds. Now that's still plausible. You could imagine someone who is good at mental math taking about that long to compute 398 times 16 (...I am not good at mental math).

Next up is division. If you don't have some sort of lookup table, you will need to use pencil and paper to work it out by hand. That intuition is about right for CPUs. They can't do division that much better than humans can. Integer division is anywhere between 30 – 100 cycles (10 – 33 ns), which on our scale is anywhere between 30 seconds and a minute and a half of human time. This makes sense if you imagine the CPU needing to take out a pencil and paper.

CPUs read from memory as well. We are told that memory is slow, which is why we have CPU caches that are supposed to make things go faster.

An access to L1 cache is the fastest thing you can get. It's a tiny cache right next to the CPU, on the order of 32 KB in size. It takes 3 CPU cycles to read from L1, which is ~3 seconds in human terms. That's a bit like retrieving information from a sticky note on your desk.

L2 cache is a bigger cache that sits further away. If we were comparing L1 to a sticky note, L2 is like a set of ring binders on the shelves behind you. Accessing L2 takes about 10 cycles, which in human terms would be 10 seconds. Seems a bit quick for a human, but it's within the realm of possibility.

L3 is the final cache layer, shared between CPU cores. It takes ~40 cycles to get information out of it, which is 40 seconds in human time.

Now if we have to hit main memory, we are talking 100 – 120 nanoseconds. That is 6 minutes of human time: a trip down the elevator to the archives to get the book you need, then back up the elevator to the office to put it in the cache. In the working life of a computer whose job is adding numbers together, that's twiddling your thumbs or taking a tea break. That's why all the performance geeks hate missing their cache so much.

Now for the scary part. Reading from an SSD takes about 50 microseconds (not nano anymore!)...which is 2 whole days of human time. That's like ordering something on amazon.com whenever you ask your CPU to read a file from disk. Disgusting. I will never use disk again /s.

If you are using a spinning disk whose head is not positioned in the right place and has to seek to the right sector, we are talking 1 – 10 milliseconds...which is 1 – 12 months of human time...

At the far end of this scale is rebooting the computer. Assuming it takes 5 minutes to do so (plausible, depending on your computer), that's 32 millennia of human time. A civilization-ending event for your CPU. Think again before rebooting your computers.
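If you want to play with the scale yourself, here is a throwaway Zig sketch of the conversion. The helper name and the constant are just my restatement of the figures above: one ~1/3 ns cycle maps to one human second, so multiply nanoseconds by roughly 3 to get “human seconds”.

const std = @import("std");

// One CPU cycle (~1/3 ns) maps to one human second, so multiplying
// real time in nanoseconds by ~3 gives the hand-wavy human-scale time.
const human_seconds_per_ns: f64 = 3.0;

fn toHumanSeconds(nanoseconds: f64) f64 {
    return nanoseconds * human_seconds_per_ns;
}

test "time for computers, on the human scale" {
    // Main memory access: ~120 ns -> 360 human seconds, the "6 minutes" above.
    try std.testing.expectEqual(@as(f64, 360.0), toHumanSeconds(120.0));
    // SSD read: ~50 us = 50_000 ns -> 150_000 human seconds, roughly 2 days.
    try std.testing.expectEqual(@as(f64, 150_000.0), toHumanSeconds(50_000.0));
}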

From “Rust is hard”:

When you use Rust, it is sometimes outright preposterous how much knowledge of the language, and how much of programming ingenuity and curiosity you need in order to accomplish the most trivial things. When you feel particularly desperate, you go to rust/issues and search for a solution for your problem. Suddenly, you find an issue with an explanation that it is theoretically impossible to design your API in this way, owing to some subtle language bug. The issue is Open and dated Apr 5, 2017.

This deeply resonated with me, to the point that I shed a tear or two. The language actively prevents me from designing APIs the way I want. I ran into this constantly while working on my parser combinator crate.

From the Zig website:

Focus on debugging your application rather than debugging your programming language knowledge.

Needless to say, I have been enjoying Zig very much.

It's easy to pick up:

In the past week I have written just shy of 1k lines of Zig. Compare this to ~5k lines of Rust over the past 3 – 4 months. Lines of code is a dubious metric, but I mention it because it illustrates how much easier it is to get your thoughts into code in Zig.

It leads to a more concrete understanding of your program:

I don't have a systems programming background, so it wasn't until I started dabbling with Rust that the concepts of heap vs stack memory became relevant. But even then, Rust hides the dirty work of handling memory allocations from you. Zig pulls the curtain away by forcing you to think about where your bytes are. If I am working on a side project, I think I'd rather have “segmentation fault” runtime errors than “unconstrained lifetime parameter” compile-time errors. The latter is a made-up rule enforced by the Rust compiler; understanding why it happens and fixing it gives a tiny boost in understanding the Rust type system. The former is a fundamental operating system rule that I violated; understanding why it happens and fixing it leads to a better understanding of program memory in general, regardless of programming language.
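As a small illustration of what “think about where your bytes are” means in practice, here is a sketch (the `repeat` helper is mine, not from any library): anything that heap-allocates takes an allocator explicitly, and the testing allocator complains if you forget to free.

const std = @import("std");

// Sketch: heap allocation in Zig is always explicit. A function that needs
// heap memory asks for an allocator, so the caller always knows where the
// bytes live and who has to free them.
fn repeat(allocator: std.mem.Allocator, byte: u8, n: usize) ![]u8 {
    const buf = try allocator.alloc(u8, n);
    for (buf) |*b| b.* = byte;
    return buf;
}

test "explicit allocation" {
    const buf = try repeat(std.testing.allocator, 'z', 4);
    // Forget this `free` and the testing allocator reports a leak.
    defer std.testing.allocator.free(buf);
    try std.testing.expectEqualStrings("zzzz", buf);
}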

comptime:

I value good API design. Inferring stuff at compile time leads to APIs that are easier to use. For example, clap lets you define your command line interface by declaring a struct and adding attribute macros. This is great from the user's point of view, but the barrier to entry on the library writer's side is high; you need to walk the dark alleys of Rust's proc-macro system. Zig sidesteps this by making meta-programming part of the language with the comptime keyword. The best way I have to describe it is that it allows you to “program your own type system”. I'll leave you with a taste of Zig code to whet your appetite.

The following is a function that takes a type as an argument and returns the number of fields it has. It fails at compile time if the type is not struct-like.

// Count the number of fields in `T`
fn countFields(comptime T: type) usize {
    // If `T` is not a struct emit a compile error
    if (@typeInfo(T) != .Struct) {
        @compileError("Type " ++ @typeName(T) ++ " is not a struct");
    }

    // Loop through the struct fields, incrementing the counter
    comptime var count = 0;
    inline for (@typeInfo(T).Struct.fields) |f| {
        _ = f;
        count += 1;
    }

    return count;
}

You would use it like this:

const testing = @import("std").testing;

// Define a type called OneField. Instances of this type would
// have a field called `one` with an 8-bit value.
const OneField = struct { one: u8 };
const ThreeFields = struct { one: u8, two: u16, three: u32 };

test "countFields" {
    try testing.expect(countFields(OneField) == 1);
    try testing.expect(countFields(ThreeFields) == 3);
    // this fails to compile
    // try testing.expect(countFields(u8) == 3);
}

#ziglang

I am at a point where I feel comfortable reading and writing Rust code. It is a peculiar type of comfort though, verging on Stockholm syndrome. The emotional instability it causes me is baffling. One minute, I am prancing about because of a solution I was able to write using the language's constructs (ControlFlow, anyone?). The next, progress seems so hopeless that I am actively considering picking up another language so I can get actual work done instead of horsing around with the borrow checker. Lifetimes take my mind hostage. The language throws wrenches into my API design. But I always come back.

I would jump ship if I could find an alternative. But Rust has managed to concoct the secret je ne sais quoi sauce of programming languages. The price to pay for this sauce is hours of my time and occasional sadness.

The only language that has come close in my eyes is Nim. But why do you need Unicode operators in the language??! The quote “Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away” comes to mind here. Why are there 7 types of “routines” in the language? That's almost as many as Rust has string types ... ha ha .ha...cry. Do notation?? Why tho?? But I digress.

I was talking about Rust. The language has kidnapped me, yet I sense a bond between us.

Send help.

#rustlang