It is easy to underestimate how much work it is to keep a “usable” Obsidian vault. While the effort varies, a “properly” maintained vault (as in, I am comfortable with its level of messiness, and I know I will find the things I need to find without unreasonable friction) takes me about 10 hours of work per week.

The fact that I refer to it as an Obsidian vault is misleading. It is not about the tool—my use of Obsidian is frictionless by now. I use it without any plugins and just write and write and write simple cross-linked markdown files.

What takes effort is taking your intellectual output and shaping it into something that has lasting value.

It's something I find missing from most PKM tooling/productivity workflow discussions.

Establishing context

There is, of course, something about being comfortable with your tools. But what is really hard and time-consuming is understanding how things will get used in the future.

Capturing is one part, but perfect capture will not lead to useful knowledge. You need to synthesize all the links, citations, highlights, thoughts, diaries, atomic notes, drafts, and ramblings you wrote and stuffed into your vault and put them in context.

Uncovering context is an activity where experience plays a big role. It means understanding the landscape of your thoughts and the landscape of your output. You need to know how you think and how you produce. This is not something you can just look up or copy. No one thinks like you, and the best you can gather from all the methodologies and productivity advice is inspiration for building context, and a sense of what other people find useful.

Anybody trying to convince you there is one proper way is dead wrong. That doesn't mean there isn't value in understanding what they are doing, and even in adopting parts of it.

The shape and function of knowledge

How you think helps you figure out how to reformulate things. It creates knowledge that has shape. When knowledge has the right shape, it will assemble with other knowledge with the right shape.

You can't just take someone else's thoughts and merge them with your own.

But your knowledge also needs to have function. This is, of course, related to its shape, but it is informed by what you will do with the knowledge.

How I write and store things I want to use for study is different from how I store things I need to write work documents is different from how I store things I need to create new thoughts is different from how I store knowledge I want to use for writing.

One big shape factor is how to “cluster” notes. This is where most of the debate is, and again, I think it is deeply related to your mode of thinking and production. Tags / folders / index notes / dynamic views / lists. None of this matters if you don't know the shape of your thinking and how knowledge and smaller thoughts relate to bigger nexuses.

Once you know the shape and function of your clusters, it is just a matter of encoding them with the tools at hand.

How I create shape and function

I'm currently in a very generative phase—I have been writing a lot of notes into my analog sketchbook, I have collected hundreds of links and highlights in Reader, and I have written countless threads on mastodon and short articles on write.as.

However, none of that knowledge has found its way into my vault in a way that will make it useful in the future. What I need to do to make it useful is:

  • create Zettelkasten entries for atomic thoughts that I know can be crosslinked
  • write structure notes to establish clusters of thoughts. This is maybe the hardest work; it takes a while and a lot of writing to start discerning these clusters
  • develop a new methodology for dealing with the large number of highlights and quotes I am gathering from Readwise (it is a new tool)

One of my interests lately is the fediverse: mastodon, activitypub, social media. After 5-6 weeks of research, thinking, and writing, I am finally starting to understand what the skeleton of my Obsidian vault needs to look like to leverage that work.

For me, it is in part a purely “mechanical” workflow. I file scraps into:

  • wiki entries (pure knowledge)
  • quotes (cleaned up quote documents I can quickly search and drop into an article)
  • zettelkasten notes (clearly formulated, punchy thoughts)

I also try to make my rambling (diary, mastodon threads, freewriting, notes about articles) reasonably discoverable, usually by cutting out blog-shaped parts of them, calling them “drafts,” and referencing them in the relevant ZK and wiki entries.

But it is also establishing what I call “structure notes,” which you can think of as “book-sized” clusters. They are concepts that group many ideas, and I can probably write a medium-sized book about them.

For the fediverse thinking I have been doing, these initial structure notes are starting to take shape. They are shorthand for entire arguments I have been having with myself. The titles of these clusters encode much more context for me than can be shared with others. Think of them as package names in a codebase, for example (a hypothetical sketch of one such note follows the list below).

  • social structure of the fediverse
  • convivial technology in the fediverse
  • moderation tools in the fediverse
  • product thinking and the fediverse
  • technical resources and project ideas about the fediverse
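To make that concrete, here is a hypothetical skeleton of one of these structure notes, as a markdown file (all note names are invented for illustration):

# moderation tools in the fediverse

What moderation actually looks like on the fediverse, and which tools shape it.

- [[defederation as a moderation tool]]
- [[shared blocklists]] – see also [[social structure of the fediverse]]
- draft: [[what mastodon admins actually do all day]]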

Conclusion

This is a quick write-up because I am just starting to go through all the content I have generated in the past few weeks, and I had forgotten how much time it takes to do a good job. It is also nice to see how there are phases to all this. I can stop gardening my vault for a couple of months and return to it with no problem.

Every couple of months, some part of my knowledge management workflow (most of which happens in my Obsidian vault) crosses a threshold. Something has grown to the point of not being manageable anymore; a new tool has proven to be really useful and its use has to become more codified; something has turned out to not work as expected and needs to be readjusted or removed.

I've been using Readwise's Reader for a month now (since it came out of closed beta), especially extensively in the last week. It made me pass one of those thresholds.

Readwise Reader

For those who don't know Readwise, it is a website that collects your highlights from many different sources (ebooks, articles, webpages, ...). It combines this highlight management functionality with a spaced repetition mode where it replays a bunch of highlights to you every day. I've been doing it, and it's been fun, but I don't know how much value is in it for me (I've got bigger plans for spaced repetition in 2023).

Their Reader application is a central hub for all this highlighting. No need to annotate things in kindle, then import them into Readwise, then transfer that to Obsidian. I can now import PDFs, ebooks, webpages, and random snippets of text right there into a centralized hub. I can then highlight, annotate, and tag these documents and import the resulting highlights into Obsidian. The process is quite smooth!

Dealing with internet writings

I am an avid consumer of internet content, usually coming from a small variety of sources:

  • links I find on hacker news, lobsters, RSS feeds...
  • links I find on social media
  • links I find while deep-diving into one topic

Often, these links are interesting to me but not relevant to what I am currently working on. I used to file them in my Evernote vault and never look at them again; in late summer, I shifted to filing them into todoist so that I could go back through them properly. This “file and forget” action is very healthy, as it allows me to feel OK with moving on, despite my curiosity.

As my note-taking evolved, I realized the value of processing these resources and making the nuggets of information they contain discoverable. This is a lot of work: it requires reading the article and taking notes, extracting “reusable value” that can be crosslinked into the vault. It is also work that I find extremely pleasurable and inspiring, too inspiring in fact...

The problem with extracting value

What I would do is go to a coffee shop on the weekends, open my Todoist inbox, go through the links, and read and process articles while writing them up, either on paper index cards or in my Obsidian vault. Creating reusable value out of them for me involves:

  • creating Wiki entries with “hard facts”
  • creating Zettelkasten entries with “atomic ideas”
  • writing about it (blog posts, book chapters, even just diary entries)

There are two problems with this approach:

  • the friction of getting into “information processing” mode is quite high. I need to get into work mode, I need to have two tools open, and I need to focus. But I read a lot of content at half attention, for relaxation. This means I only do this on the weekends, in reasonably exhausting multi-hour sessions
  • I get way too excited. Once I start digesting articles, coming up with Zettelkasten entries, and writing things on index cards or in my vault, hundreds of ideas come to me, and I end up processing about 10 articles while adding about 40 more to the list

Getting way too excited doing knowledge work is a thing I love about my brain: I get a lot of value out of these sessions. I've been milking 3 sessions from August for notes about “prototyping in software.” But it really doesn't help boil down the knowledge I've encountered. I also end up bouncing the same 3 concepts around for way too long, when more perspectives on the subject would be helpful.

Do I even want to “boil down articles?”

Now, why do I even want to process all these links? Am I not happy just processing 10, getting a lot of value out of them, and moving on?

One thing currently missing from my knowledge management is quotes and references to other sources. In my current way of approaching external articles (taking notes, coming up with my own ideas, making Zettelkasten cards), I have more of a “discussion” with the document than an “extraction” process.

One downside is that often, the actual meaning of the document is not preserved, and I just write down my own biased ideas. The other is that I don't end up with good quotes, and good quotes are actually great in blog posts.

What I want is a way to expand my set of ideas, by clearly identifying where they came from, who said them, and in which context, and only then start adding my own notes.

Highlighting and extracting value

I didn't use to highlight things in books and articles, because I would have had to put so much effort into transporting the highlights into my vault. This changes quite significantly with Readwise and its Obsidian import plugin, as well as the Reader application.

It's been nice to have a place with a queue of articles, knowing that at least the fact that I read them is going to be preserved (and even synced into my vault). Because of the friction of my current process (link into todoist, read carefully and process on the weekend), I would rarely consume content productively during the week. Either I would read an article without taking notes, and whatever I got from it would mostly be lost, or I would extract “too much” value and not increase the breadth of my knowledge.

With Reader, I feel I can just mindlessly scroll through the stuff I saved, realize that I don't have that much interest in it after all, and still preserve a nice sentence or two that I can later cross-reference in Obsidian. I will often get through 20-30 articles per day that way!

Where to go from here?

I started devising a workflow to go through the imported snippets in Obsidian (there are now more than 100 in there) and to find ways to crosslink and file them as they come in. This, in a way, is the “knowledge work” part of what I used to do on weekends, where ideas are extracted and things are put in context.

Because it has now been broken off from the “consumption and highlighting” part, it feels much more approachable. It is also something that can be done on the side, because I only need to recall the snippets for the topic I am working on right now, and I can immediately crosslink digested nuggets instead of having to read original sources.

Too many books this year, but these stood out:

  • the joy of abstraction by eugenia cheng – oddly enough, I've barely read 7 chapters and it's already given me so much.
  • write useful books by rob fitzpatrick. eminently practical, super short.
  • seven sketches in compositionality, I've faked my way through 5 chapters, it is really fun and not too hard for me to get into.
  • the secrets of consulting by gerald weinberg – hilarious and wise, it really helped me reframe what I'm actually providing as I get into advisory consulting.
  • unmasking autism by devon price – amazing, deep and practical book about what it means to be autistic and how to overcome the trauma of living in a world where you don't fit.
  • intelligent embedded systems by Louis Odette – this is a book you have to program, which I didn't, but it's absolutely wild. build a VM, then a forth, then a lisp, then a prolog, then an expert system, all for concrete embedded applications. all with source code in 300 pages.
  • augmented lean by Natan Linder (disclaimer: he's a friend and I consult for his company), about what digital technologies can bring to make manufacturing and supply chains more human and empower workers to have agency in an increasingly automated and monitored world.
  • old but gold: SICP JS version, which gave a new perspective on a book that fundamentally shaped my life. I've written more about it at dev.to.
  • hacking capitalism by kris nova – you really have to chew through it to extract what she wants to get at, and it could have used more editing, but it has helped me understand a lot of things about operating in a capitalist world as a technologist. She is also building an amazing community around hachyderm.io, which I recommend looking into.
  • weinberg on writing by Gerald Weinberg – a zettelkasten approach to writing, which fits my brain very well, and explains the deep wisdom, erudition, and quirkiness of Weinberg's writing.
  • old but gold: patterns of software by Richard Gabriel, which has to be one of my favorite books on software writing, and which I hadn't read in a decade
  • old but gold: the plenitude by rich gold, which is just such an inspiring book about an artist turned technologist behind a lot of the fun stuff at Xerox PARC
  • less life-changing but fun to work through: cloud native go about building “modern” software with go, with a lot of concrete takes and examples.

I really enjoy writing SQL builders. By that I mean that instead of writing a SQL query, I write a little struct along with a builder pattern that allows me to build up the query. This gives me a proper API documenting the query's intent, as well as a way to do schema migrations in the future while keeping the power of raw SQL.

This is all the more true these days, when most of this code can be TAB-completed with GitHub copilot. I wrote the following example in about 2 minutes.

package main

import (
	"database/sql"
	"fmt"
	"strings"
)

// UserQuery accumulates the options for a query against the users table.
type UserQuery struct {
	filterEnabled bool
	filterValue   string
	getFullInfo   bool
}

func NewUserQuery() *UserQuery {
	// all fields start at their zero values
	return &UserQuery{}
}

// Filter restricts the query to users whose name matches filterValue.
func (uq *UserQuery) Filter(filterValue string) *UserQuery {
	uq.filterEnabled = true
	uq.filterValue = filterValue
	return uq
}

// FullInfo adds the phone and address columns to the result set.
func (uq *UserQuery) FullInfo() *UserQuery {
	uq.getFullInfo = true
	return uq
}

// Build renders the accumulated options into a SQL string and its bind values.
func (uq *UserQuery) Build() (string, []interface{}) {
	binds := make([]interface{}, 0)
	selects := []string{"id", "name", "email"}
	wheres := []string{"1=1"}

	if uq.filterEnabled {
		wheres = append(wheres, "name LIKE ?")
		binds = append(binds, uq.filterValue)
	}
	if uq.getFullInfo {
		selects = append(selects, "phone", "address")
	}

	return fmt.Sprintf(
		"SELECT %s FROM users WHERE %s",
		strings.Join(selects, ", "),
		strings.Join(wheres, " AND ")), binds
}

var db *sql.DB // must be initialized elsewhere, e.g. with sql.Open

func main() {
	q := NewUserQuery().Filter("Manuel%").FullInfo()
	query, binds := q.Build()
	rows, err := db.Query(query, binds...)
	if err != nil {
		panic(err)
	}
	defer rows.Close()
}

One thing I love doing with this pattern is keeping performance metrics and logs of executed queries, which is easy to do and to configure because I can add all kinds of things to the UserQuery struct itself.
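For example, here is a minimal sketch of what I mean (Run is a hypothetical method, and it assumes the log and time packages are imported alongside database/sql): the builder is a natural place to hang timing and logging.

// Run is a hypothetical convenience method: it executes the built query
// and logs the rendered SQL, its binds, and how long execution took.
func (uq *UserQuery) Run(db *sql.DB) (*sql.Rows, error) {
	query, binds := uq.Build()
	start := time.Now()
	rows, err := db.Query(query, binds...)
	log.Printf("query=%q binds=%v duration=%s err=%v",
		query, binds, time.Since(start), err)
	return rows, err
}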

I've been an avid user of GitHub Copilot since it came out, and it has transformed how I write programs. I am now able to spend quite a bit of time on nice error messages, covering all edge cases with unit tests, devising “complete” APIs, and doing proper error handling.

I use Copilot for basically two use cases:

  • generating code I have already written in my head
  • fishing for API functions I know exist but can't remember the name of. This often leads to discovering patterns I didn't yet know of

I do tend to turn copilot off when I have “real” programming to do because it tends to generate nonsense if it doesn't already have 2 or 3 base cases to latch onto.

I do sometimes use it as an idea generator, writing a little line of text and waiting for it to suggest some function names or structures. I've found good ideas that way, for example, suggested serialization structures that had more fields than I would have thought of adding, or completing accessor functions that I didn't fully think through.

ChatGPT is on another level

I had planned to use it for the day to see what I could get out of it, and I have used it to generate the following:

  • a list of concrete instances of structures that test edge cases of an import function
  • a set of terraform resources to deploy a cronjob on AWS Fargate
  • a makefile to build and push a docker image to an AWS registry
  • a PHP script to collect and zip a set of resource files, naming the zip file according to the timestamp, git hash, and branch, and pushing it to an s3 bucket
  • a set of next.js endpoints to implement ecommerce cart functionality

None of this is code that is in any way complicated to write, and I had to redirect GPT from time to time, but the fact that I was able to build all of the above in about an hour's time is absolutely mindboggling. Not only did it generate reasonable answers, but it did a stellar job of documenting what it was doing.

It means I can fully focus on intent, documentation, developer UX, unit testing, and the “complex” part of my job, without having to hunt down terraform documentation or copy-paste some ECR Makefile magic.

The discourse around ChatGPT and Copilot is weird

I think most of the discussion around ChatGPT and Copilot is disconnected from the experience of actually using them. I'm talking about:

  • the danger of them producing broken code
  • the claim that you can induce them to regurgitate complete functions verbatim
  • the worry that they will make us stupid and careless

Both ChatGPT and Copilot confidently generate a lot of wrong or nonsensical suggestions. So does my IDE auto-completion. I found that anytime I dare Copilot to generate something I don't already know how to write, or something I have already written but with different arguments, it will generate something sensible on the surface but wrong. The effort it takes me to code-review Copilot's suggestions is mentally more taxing than writing the code myself, and after a while, I started to “feel” what Copilot will do well and what it won't.

Will people get bitten by blindly accepting Copilot's or ChatGPT's suggestions? Sure. If you care about your application, you will quickly notice that things are not working. Poor programmers can write broken code entirely without machine help. Good programmers will quickly realize that even the best code, machine-generated or not, will need review, testing, and validation.

Solving problems is about more than coding a solution

More importantly, you need to already have a good understanding of the solution to your actual problem to get a usable output.

Prompts that ChatGPT, and to some extent Copilot, do well on, like:

  • “I need a next.js handler that takes a JSON structure X and stores it in the table Y in MySQL”
  • “Write a makefile to build a docker image that runs mysqldump on X and uploads the result to Y”

require a level of planning and understanding of software architecture that comes from “strategic” thinking. These smaller tasks are in themselves extremely abstract, only peripherally related to the real-world problem of “selling plants online” or even “backing up our production database.” These tools are made to work at that level.

If I were to ask ChatGPT “please back up my database,” why would I expect its answer to be any better than one of the hundreds of competent SaaS offerings out there? For its answer to be good, I need to guide it so that its answer fits well into my concrete codebase, team structure, operations, and planning. This is hard work: it requires thinking, communication, prototyping, learning new technologies, and knowing the business, the codebase, project constraints, requirements, and team topologies.

That is exactly what I enjoy doing as a principal engineer: high-level technical strategy, with very sharp decision-making when it comes to concrete problems, while giving team members the opportunity to own and shape the result, or giving a more menial AI the pleasure of filling in the blanks.

I have been dabbling with category theory for a while, but I'm actually still unsure about really “studying maths” and how serious I want to be about it.

I don't deal much with applied mathematics. I don't need algebra for geometry, calculus for modeling, or advanced statistics for data analysis. I want better abstractions for refactoring legacy code and robust systems programming.

I'm interested in the insights abstract mathematics gives me regarding my programming. Yet, I understand that to fully realize those insights, I need to study mathematics the way mathematics wants to be studied: I need to be more careful and intentional about it.

Working through books

I now have a fair number of “proper” mathematics books to study, but only a few hours per week that I want to dedicate to them (I have an open-source side-project I'm excited about, and a book and a blog to write).

The books are:

  • Aluffi's Algebra Chapter 0
  • the more popularized but still rigorous Seven Sketches in Compositionality
  • Awodey's Category Theory

These sit alongside more “informal” material:

  • Category Theory for Programmers by Bartosz Milewski, both the book and the lectures
  • Programming with Categories, the online lecture by Fong, Spivak, and Milewski
  • various functional programming and Haskell meetup recordings

I want to at least follow the proofs in the first set of books—maybe do some proof sketches. I feel that actually doing proofs only stays interesting if I can cross-check them with other people: studying rigorously needs community. I'm not really a puzzle guy—I'm way too impatient.

One way to potentially make proof work more relevant (fun!) for me might be to look into theorem provers. I am afraid of the steep learning curve of doing computer proofs in fields I am not acquainted with. I had reasonable success going through programming language theory coq tutorials; yet I would often hit a wall where I was unsure whether I understood the theory and was just struggling with the theorem prover, or whether I had the theory wrong (even though I was familiar with the field, both on a theoretical and practical level).

Keeping track of my progress and insights in blogs

A fruitful way to solidify my progress, as messy as it might be, is going to be writing about my insights after each study session on this blog. I will often be wrong, and often messy—basically, I will publicly show that I have no clue what I'm talking about. This always holds me back on my main blog, because I know it gets a fair amount of scrutiny and I am quite sensitive to criticism. There, I only want to talk about stuff I can back up confidently (sadly, I only feel confident about very few things, mainly that I am thinking my own thoughts).

In these blog posts, it will become apparent that I barely know what a set is and have to go back to definitions every couple of minutes. One thing I realized, however, is that this lack of rigor doesn't necessarily make the insights less valuable. After all, I'm not a mathematician, I'm a programmer, and overall my software seems to work well enough. So what if I use fancy words wrong while actually getting the work done? The best I can do is work at using them correctly more often.

Extracting value out of theoretical texts

I find that I get the most important ideas out of the very first lines of the introductory chapters. The motivation for a field of study is often what needs to stick because the theory will be forgotten soon enough; working through the theory in detail is however a good way to make the motivation stick. A Zettelkasten is a hack to make the insight retrievable without having to internalize it.

Mathematical texts are often devoid of motivation that I can relate to. I am starting to understand how to take abstract structures and mine my knowledge (design patterns, applications, domains, programming language theory) to see if I can find something to relate them to. Structures like monoids are easily found where they obviously relate to mathematical operations, but I am now trying to think outside the box. Can a UI be a monoid? Is there a monoid in handling database connection failures? Do errors form a category? Often, these questions bring up nothing—even then, I know I need to come back to them once I learn more theory.
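As a toy sketch of what that mining can turn up (the RetryPolicy example is entirely invented for illustration): a monoid is just an identity element plus an associative combine, and one might be hiding in how retry policies for flaky database connections get merged.

package main

import "fmt"

// RetryPolicy is a made-up example: combining two policies takes the
// stricter of each field, and the zero value acts as the identity.
type RetryPolicy struct {
	MaxAttempts int
	BackoffMs   int
}

func maxInt(a, b int) int {
	if a > b {
		return a
	}
	return b
}

// Combine is associative, and Combine(RetryPolicy{}, p) == p:
// exactly the monoid laws.
func Combine(a, b RetryPolicy) RetryPolicy {
	return RetryPolicy{
		MaxAttempts: maxInt(a.MaxAttempts, b.MaxAttempts),
		BackoffMs:   maxInt(a.BackoffMs, b.BackoffMs),
	}
}

func main() {
	defaults := RetryPolicy{MaxAttempts: 3, BackoffMs: 100}
	perQuery := RetryPolicy{MaxAttempts: 5}
	fmt.Println(Combine(RetryPolicy{}, Combine(defaults, perQuery))) // {5 100}
}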

For example, I am trying to understand what limits and colimits are about. What are they trying to tell me, why do people study them, and what use do they get out of them? Can I come up with my own?

Seven Sketches Chapter 1 – Generative Effects

A valuable insight (which I capture in Zettelkasten notes) comes right at the beginning, where they talk about generative effects. If I understand it correctly: if we have a system with a compositional structure (for example, virus transmission), it is possible to observe it partially (the observation doesn't preserve the combining operation) and end up with seemingly surprising results.

For example, we might observe that individual A had contact with individual B in one county, and that individual B had contact with individual C in another. In neither county did we observe A having had direct or indirect contact with C. But putting both county observations together, we clearly see that A had contact with B, who had contact with C.
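A tiny sketch of this (my own illustration, not from the book): each county's observations carve {A, B, C} into connected groups, and joining the two sets of observations connects A and C even though neither county saw that connection.

package main

import "fmt"

// UF is a minimal union-find: each county's observations form a partition,
// and Union joins groups together.
type UF struct{ parent map[string]string }

func NewUF() *UF { return &UF{parent: map[string]string{}} }

func (u *UF) Find(x string) string {
	if u.parent[x] == "" || u.parent[x] == x {
		u.parent[x] = x
		return x
	}
	r := u.Find(u.parent[x])
	u.parent[x] = r
	return r
}

func (u *UF) Union(a, b string) { u.parent[u.Find(a)] = u.Find(b) }

func (u *UF) Connected(a, b string) bool { return u.Find(a) == u.Find(b) }

func main() {
	county1, county2, joined := NewUF(), NewUF(), NewUF()
	county1.Union("A", "B")
	county2.Union("B", "C")
	joined.Union("A", "B")
	joined.Union("B", "C")
	fmt.Println(county1.Connected("A", "C")) // false: county 1 alone sees nothing
	fmt.Println(county2.Connected("A", "C")) // false: county 2 alone sees nothing
	fmt.Println(joined.Connected("A", "C"))  // true: the join "generates" the connection
}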

This is something that comes up over and over again in my day-to-day job, for example with partial logs. It applies literally everywhere; it's nothing special per se, but I now know that it has been studied in detail at a very high level of abstraction. This will allow me to group any “partial observation” pattern I can infer from my work into the same mental box.

Now, how is this related to the rest of the chapter? Ordering, partitions of sets, equivalence relations, monotone maps, meets, and joins? I will have to work that out. I'm sure it will shed light on parts of logging and observability I often struggle with: what IDs are assigned to what log, how and where I store individual entries, what I do with structured data and how I index it, how I fill in or indicate missing data, and how I ultimately process reports. Ad-hoc approaches often work, but they take effort, and I never feel confident about having covered my bases.

I used to have the same issues with things like concurrency, but by getting more closely acquainted with monads and monoids, I now feel that that is a “solved” problem for me.

Another area I feel better knowledge of ordered sets and monotone maps will help is the startup and termination of concurrent processes. Feel free to call me out in a couple of weeks if I don't circle back on this.

One thing often missing from “let's rewrite something with some design pattern” articles is that many different ways of writing what looks, on the surface, like the same code are all valid. If you are writing golang because that's what the team uses, you can be as extraordinary a functional programmer as you want; the if err != nil style is going to be the best, simply because that's how your team communicates. If you are on a team of Haskell programmers who know how to instantiate a Monad type class, use Monad.

The value of knowing the concept of a monad is that mentally, you can think of both ways of writing the code as the same: chain things together repeatedly, with the next step somehow related to the previous one. When I write if err != nil, which is my preferred way of doing this kind of error handling, I still think of it as a monad. If I want to refactor this method by, say, passing in an object that counts the types of errors for telemetry, I know that this is equivalent to lifting a state monad into it, at least abstractly. That I then, in practice, pass in a mutexed global object or actually use some liftM / monad transformer machinery doesn't change my thinking.
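Here is a minimal sketch of what that looks like in practice (ErrCounter and the pipeline stages are invented for illustration): the if err != nil chain keeps its shape and just threads one more piece of state along.

package main

import (
	"errors"
	"fmt"
)

// ErrCounter is the piece of state we "lift" into the error-handling chain.
type ErrCounter struct{ counts map[string]int }

// Record tallies an error under a stage name and passes it through unchanged.
func (c *ErrCounter) Record(stage string, err error) error {
	if err != nil {
		c.counts[stage]++
	}
	return err
}

// Two made-up pipeline stages.
func parse(s string) (int, error) {
	if s == "" {
		return 0, errors.New("empty input")
	}
	return len(s), nil
}

func validate(n int) error {
	if n > 10 {
		return errors.New("too long")
	}
	return nil
}

// process is the usual if err != nil chain, with the counter threaded along.
func process(c *ErrCounter, s string) error {
	n, err := parse(s)
	if err := c.Record("parse", err); err != nil {
		return err
	}
	return c.Record("validate", validate(n))
}

func main() {
	c := &ErrCounter{counts: map[string]int{}}
	fmt.Println(process(c, ""), c.counts) // empty input map[parse:1]
}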

The way I think of abstraction now is that it has two directions. One direction is about seeing the abstractions, being able to boil down a concrete instance into a more abstract concept by forgetting the details. By doing that, you can do some more interesting, powerful logic on it. You can use further concepts to uncover new properties, maybe. You can shortcircuit complex refactors by mapping them to a simple transformation in this more abstract space.

The other direction is transforming your abstraction back into something concrete. You don't have to use a language that allows you to formulate things at the new abstraction level, although it's cool if you can. But you can totally write functors in C; in fact, everybody does anyway. More importantly, when you do that transformation back from abstract-land to concrete-land, you can take a lot of shortcuts as a programmer. It's cool that you can do all kinds of fancy stuff with a pure function, but if I want to use a mutating function to encode my functor and it works in practice, I can still think of it as a functor for most cases.
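For instance, a small sketch of the boring concrete version: a slice Map is the morphism-mapping part of a functor, whether or not the language or the code ever says so.

package main

import (
	"fmt"
	"strconv"
)

// Map transforms the contents of a slice while keeping its structure intact;
// the "functor" part is that Map respects identity and composition.
func Map[A, B any](xs []A, f func(A) B) []B {
	out := make([]B, 0, len(xs))
	for _, x := range xs {
		out = append(out, f(x))
	}
	return out
}

func main() {
	fmt.Println(Map([]int{1, 2, 3}, strconv.Itoa)) // [1 2 3], now as strings
}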

One concrete example I wrote about yesterday is the concept of a product in category theory. It's a very abstract idea: if you have these morphisms and these objects, and you can do this and that, then it is called a product. The usual approach is to then say “and that's what tuples and structs are” and move on, but that falls way short of what I think this gives us. Personally, it allows me to see that if I have a function that parses a string into a date, it's fine for me to, say, store a date as a string, because it's abstractly equivalent. If this sounds obvious, it's probably because it's an abstraction you have already spent time forming and internalizing, but you probably didn't have the precise mathematical language that now allows you to mine a huge swath of mathematical literature for further cool tricks.

I'm really starting to be able to formulate the value I get out of category theory for my down-to-earth programming, and it feels great having it all come together.

One thing that I think is not well taught regarding category theory for programmers is that most of the work stays in the category of types with functions as morphisms and only illustrates the “canonical” version of the structures studied. It's always tuples for products, interpreters for algebras and catamorphisms, etc...

I think it would be much more interesting to showcase that morphisms and objects can be anything.

Since everything can pretty quickly be made into a category or be handwaved into something close enough to be helpful as a category in the context of programming, you can apply the structures studied in category theory to anything (even beyond just types and functional programming).

Communicating about abstraction is really hard in the best of times, and using terms from category theory doesn't help much in day-to-day life, as they will probably confuse people who don't know them (and we all know that it's not something you pick up with a quick google).

Seeing the product when parsing a date

But being able to see, say, a product structure when parsing a date into a year, month, and day, and what that implies in terms of refactoring, is extremely useful. It's something that most developers get to intuitively, just through programming, but what they might not see is how widely it can generalize and how useful it is as a generative pattern.

Parsing a string into a date tuple means I can keep the string, or I can keep a tuple of integers, or I can embed the date in a bigger struct, or I can actually concatenate the string with something else. All the functions on (int, int, int) will still work, because I can compose them with the parsing function to have them work on strings.
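A quick sketch of that composition (parseDate and isLeapYear are hypothetical helpers): a function written against the tuple representation also works on the string representation by pre-composing with the parser.

package main

import (
	"fmt"
	"time"
)

type Date struct{ Year, Month, Day int }

// parseDate turns the string representation into the tuple representation.
func parseDate(s string) (Date, error) {
	t, err := time.Parse("2006-01-02", s)
	if err != nil {
		return Date{}, err
	}
	return Date{t.Year(), int(t.Month()), t.Day()}, nil
}

// isLeapYear is defined on the tuple representation.
func isLeapYear(d Date) bool {
	return d.Year%4 == 0 && (d.Year%100 != 0 || d.Year%400 == 0)
}

// isLeapYearStr is the same function on strings, obtained by composition.
func isLeapYearStr(s string) (bool, error) {
	d, err := parseDate(s)
	if err != nil {
		return false, err
	}
	return isLeapYear(d), nil
}

func main() {
	fmt.Println(isLeapYearStr("2024-02-29")) // true <nil>
}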

Sounds trivial, really, but I think there is a lot of value in putting a single, simple word on it.

It's easy to forget that seeing a date stored as a string as interchangeable with the parsed, tuple version of that string is itself a simple refactoring pattern.

Many junior developers don't see that, and now I have a simple diagram to show why it works, and I can generate 27 more concrete examples of the pattern using that diagram.

I don't need to mention the word category, morphism, or product to do so; I can use it mentally to generate examples. That way, the more junior person will be able to develop the intuition through repeated exposure to concrete examples. At that point, I can jump in and formulate the theory (again, not in category theory terms, but in programming terms).

Keeping sketchbooks

In 2009, in a fit of frenzy, I decided I was going to draw so well that my notebooks would look like Da Vinci's. While I fell quite short of that goal, I established a long-lasting sketchbook habit, and I now have close to a hundred sketchbooks filled with notes, images, and diagrams. I am purposefully messy, because I use the sketchbook as a clipboard for the mind, and I need it to be quickly available.

However, I find that I almost never go back to previous sketchbooks for productive use later on. It is almost impossible to find things; despite numbering them, writing indexes on their covers, and grouping them thematically, nothing really stuck.

It is only when I started keeping a hefty amount of notes in obsidian that reuse actually started to happen. It showed me how much I had been missing this whole time.

All those minute thoughts I have throughout the day, where I think “oh, that's cool, but surely I'll think about it again when the time comes”? These are the moments when I need to write things down in a way that makes the information easily retrievable when the right time comes. The day I want to reference all my thoughts about cleaning up CSVs for a blog post, I won't have time to scour thousands of pages of messy notes. I want to search “CSV,” “clean up,” and “data engineering” and get going.

Daily logs in obsidian

My obsidian notes are as messy as ever. I keep a daily log in which I literally just freewrite and ramble, but that freewriting and rambling is augmented by cross-references to existing wiki entries, zettelkasten notes, and literature notes, and it is furthermore searchable with standard text-retrieval methods.

Having digital daily logs means that I can quickly find these rambling thoughts later on. It also means that they can be quickly refactored into much more structured information. I can select a paragraph in a daily log, move it to a new note and create a wiki entry. Finally, it means that I can rework digital entries over time. While I try not to edit past daily logs, I can easily copy (or transclude) sections from a previous daily log and add my new thoughts to it in the current entry.

I haven't been writing so much in my obsidian vault in the last couple of weeks, but I have been mining it for new articles. Seeing how much value I get out of long-passed rambles from the summer reminds me that I need to get back to centralizing my notes digitally.

I'm combing my obsidian vault for notes for an article about glazed, and I keep finding all these little snippets I wanted to write blog articles about. Turns out I've got a garbage blog right here—every time I come across one of these blog nuggets, I'll flesh it out a bit and post it here. This one is from May 2022.

It’s hard to let go after a run of creative successes.

A creative success could be making a few good songs in a row or getting a few solid programming features out of the way. The important aspect here is that it must feel like having been in a state of flow and having found gratification in your output. The quality of the output itself is actually irrelevant.

Riding that wave can be challenging because creativity needs to be replenished. It is easy to think that the next attempt will be just as good, just as flow-y as the last couple. But that is rarely the case, and the expectation of feeling as good about it as I just did often leads to frustration. The better I felt about my previous work, the stronger the frustration tends to be.

It takes actual effort to dial it back, to recognize that there is something biological going on and that recovery is needed. When I am not able to let go (not easy), I try to interleave some rote, stupid, repetitive tasks that still make me feel mildly productive. When making music, this could be sorting my samples or making sure my backups work well; it could be upgrading my plugins or even just following a paint-by-numbers tutorial on youtube. It really helps if the task doesn't call for judgment about the quality of the output.

When mojo-mnl is back, he'll sure be glad that recovery-mnl has taken the time to sort through all the junk, upgrade the plugins, learn a thing or two about new technology, and make sure the dropbox wasn't overflowing.

It's easy to forget that the final output rests not only on the shoulders of creative, inspired moments but also on the shoulders of grunt work.