mnmlmnl

@mnl@hachyderm.io

(this is another one of my articles written with heavy use of claude.ai/ChatGPT. I think it works quite well! The original prompt then got fairly heavily edited and refined. I used claude because ChatGPT was down; claude usually has a bit of trouble with newer technical content, but prompting it with the docs worked ok.)

RTK Query is a powerful tool for managing server state in Redux applications. It lets you define your API endpoints as simple objects inside the Redux store instead of making raw API calls in components.

For example, here is how we might define a categories API endpoint:

// apiSlice.js
import { createApi, fetchBaseQuery } from '@reduxjs/toolkit/query/react';

export const categoryApi = createApi({
  reducerPath: 'categoryApi',
  baseQuery: fetchBaseQuery({
    baseUrl: 'https://example.com/api/', 
  }),
  endpoints: builder => ({
    getCategory: builder.query({
      query: url => `category/${url}`
    }),
  })
})

// Provide auto-generated React hooks for each endpoint
export const {
  useGetCategoryQuery,
  usePrefetch
} = categoryApi;

By defining endpoints like getCategory, RTK Query generates React hooks like useGetCategoryQuery that components can use to fetch data. It handles all the underlying logic for caching, request deduplication, polling, and more. Multiple components can share the same cached data through these auto-generated hooks.

Prefetching with the usePrefetch Hook

One useful feature of RTK Query is the ability to prefetch data before a component needs it. This avoids delays when a user navigates to a new screen. RTK Query offers a usePrefetch hook that returns a function which triggers the actual request and populates the cache.

For an ecommerce site, prefetching category data on hover is useful to accelerate product browsing. When the user hovers over a category, we can fetch that category's products in the background.

Here is how we might prefetch a category and its products:

// assumes a getProducts endpoint defined alongside getCategory in the API slice
const prefetchCategory = usePrefetch('getCategory');
const prefetchProducts = usePrefetch('getProducts');

return (
  <Link 
    to={`/categories/${category.id}`}
    onMouseEnter={() => {
      prefetchCategory(category.id);
      prefetchProducts({ categoryId: category.id });
    }}>
    {category.name} 
  </Link>
);

Now when hovering over a category, the category details and products are fetched in anticipation of navigation. This powers a smooth browsing experience.

The same approach could be used for individual products, search suggestions, and any other data that can be preemptively cached. RTK Query handles all the underlying data subscriptions, caching, and request deduplication automatically.

In a scenario where we do not have a shared DMA buffer and async IO notification, but instead rely on a traditional read() system call, the implementation would be significantly different. This method involves more traditional, synchronous IO operations, where the userland process blocks while waiting for data to be read from the device.

Userland Implementation with read()

The userland code would be simpler, as it just involves calling read() and blocking until the data is available.

Pseudocode for Userland

function main():
    fd = open("/dev/ethernet", O_RDONLY)  // Open the Ethernet device
    if fd < 0:
        print("Failed to open device")
        return -1

    buffer = allocate_buffer(BUFFER_SIZE)  // Allocate a buffer for reading

    while true:
        bytes_read = read(fd, buffer, BUFFER_SIZE)  // Block until data is read
        if bytes_read < 0:
            print("Error reading data")
            break

        process_data(buffer, bytes_read)  // Process the received data

    close(fd)

In this setup, the read() function call blocks the process until the Ethernet card has data available. The process remains idle during this time, potentially leading to less efficient CPU usage compared to asynchronous methods.
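The blocking loop above can be made concrete with a small Python sketch. A regular file stands in for the hypothetical /dev/ethernet device (a real device read would put the process to sleep instead of hitting EOF); the helper name read_loop is mine:

```python
import os

BUFFER_SIZE = 4096

def read_loop(path, process_data):
    """Blockingly read from a file descriptor until EOF, handing each
    chunk to process_data. Returns the total number of bytes read."""
    fd = os.open(path, os.O_RDONLY)
    try:
        total = 0
        while True:
            data = os.read(fd, BUFFER_SIZE)  # blocks until data is available
            if not data:  # EOF; a real character device would block here instead
                break
            process_data(data)
            total += len(data)
        return total
    finally:
        os.close(fd)
```

Note that Python's os.read raises OSError on failure instead of returning a negative count, so the explicit error branch from the pseudocode disappears.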

Kernel Implementation with read()

On the kernel side, the implementation would involve handling the read() system call, managing the Ethernet card's interrupts, and a basic scheduler to handle multiple processes and IO operations.

Pseudocode for Kernel

Ethernet Card Interrupt Handler

function ethernet_interrupt_handler():
    data = read_data_from_ethernet_card()
    store_data_in_kernel_buffer(data)
    if there_is_a_blocked_read_operation:
        unblock_the_read_operation()

When the Ethernet card receives data, it triggers an interrupt. The kernel's interrupt handler reads this data and stores it in a kernel buffer. If a read() operation is waiting for data, it unblocks it.

Read System Call Implementation

function sys_read(fd, buffer, count):
    if fd is not associated with Ethernet card:
        return ERROR_INVALID_FD

    if no_data_available_in_kernel_buffer:
        block_current_process()  // Block the process until data is available

    data = retrieve_data_from_kernel_buffer(count)
    copy_data_to_user_space(buffer, data)
    return number_of_bytes_copied

The read() system call implementation involves checking if data is available in the kernel buffer. If not, it blocks the current process. Once data becomes available (signaled by the interrupt handler), it copies the data to the user space buffer.
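The interrupt handler and sys_read above form a classic producer/consumer pair. Here is a minimal user-space simulation in Python, a sketch only: a threading.Condition stands in for the kernel's sleep/wake machinery, and the class and method names are mine:

```python
import threading
from collections import deque

class KernelBuffer:
    """Simulates the kernel buffer shared between the interrupt
    handler (producer) and a blocked read() (consumer)."""

    def __init__(self):
        self._buf = deque()
        self._cond = threading.Condition()

    def interrupt_handler(self, data):
        # Called when the "Ethernet card" has data: store it in the
        # buffer and unblock any read operation that is waiting.
        with self._cond:
            self._buf.append(data)
            self._cond.notify()

    def sys_read(self):
        # Block the current "process" until data is available, then
        # hand it to user space.
        with self._cond:
            while not self._buf:
                self._cond.wait()
            return self._buf.popleft()
```

A consumer thread calling sys_read simply goes to sleep inside wait() until interrupt_handler delivers data, which mirrors how the kernel parks a process blocked in read().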

Basic Scheduler

function scheduler():
    while true:
        process = select_next_process_to_run()
        run_process(process)
        if process is blocked:
            move_to_blocked_queue(process)
        else if process completed:
            clean_up_process(process)
        else:
            move_to_ready_queue(process)

The scheduler is responsible for managing processes, including those blocked on read() operations. It selects the next process to run and handles transitioning processes between ready, blocked, and completed states.

Conclusion

Using read() in combination with a traditional interrupt-driven approach is a more conventional method for handling IO in userland. It's simpler in terms of userland implementation but potentially less efficient due to the blocking nature of IO operations. This approach stands in contrast to the more efficient DMA and async IO notification method, where the CPU can continue processing other tasks while waiting for IO operations to complete, but where the code is split across a callback function and a mainloop.

This post is mostly written by GPT, which is a technique I'm going to try to use more often to get technical information out there.

This is a sketch of how to do async IO with an ethernet card DMA-ing received data into a buffer shared between userland and kernel.

Sync vs. Async IO: The Basics

Synchronous IO

In synchronous IO, when a userland process requests data from a device (like reading from a disk), the process is blocked until the operation is completed. This means that to keep making use of the CPU, a separate thread of execution needs to be run. Standard functions like read() in many programming environments are examples of synchronous IO operations.

Asynchronous IO

Asynchronous IO, on the other hand, allows a process to request data and then continue with other tasks without waiting for the IO operation to complete. The process is notified asynchronously when the data is ready. This model can be highly efficient, particularly in IO-heavy applications, as it ensures better utilization of the CPU.
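To make the contrast concrete, here is a small Python sketch of readiness-based asynchronous IO using the standard selectors module. This is a user-space approximation rather than the DMA completion model this post describes: a pipe stands in for the device, and the helper name async_read is mine:

```python
import os
import selectors

def async_read(fd, on_data):
    """Register a callback to run when fd becomes readable, instead of
    blocking inside read()."""
    sel = selectors.DefaultSelector()
    sel.register(fd, selectors.EVENT_READ, on_data)
    done = False
    while not done:
        # The process could do other useful work between select() calls;
        # it only touches the fd once the kernel reports it readable.
        for key, _ in sel.select(timeout=1.0):
            data = os.read(key.fd, 4096)
            key.data(data)  # invoke the io_complete-style callback
            done = True
    sel.unregister(fd)
    sel.close()
```

The callback plays the role of io_complete below: the main flow of control is free until the kernel signals that the data is ready.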

The io_complete/async_io_notify Setup

In our specific setup, we explore an Ethernet card performing DMA. Here, the data is transferred directly into a memory buffer shared between the kernel and userland, bypassing the CPU. The userland application then uses an async_io_notify system call to register a callback function, io_complete, which is triggered when the DMA transfer is complete.

Kernel Side Implementation

On the kernel side, async_io_notify involves several key steps:

  1. Validation and Device Retrieval: The kernel validates the provided file descriptor and retrieves the corresponding device.

  2. Tracking Async Operation: A structure is created to track this async operation, which includes storing the callback and user context.

  3. Configuring DMA: The Ethernet card is configured for DMA, including setting up a mechanism to notify the kernel upon completion.

  4. Handling Completion: When DMA completes, an internal handler is triggered, which then calls the io_complete function in userland.

Pseudocode for Kernel Implementation

function async_io_notify(fd, user_callback, user_context):
    device = validate_and_get_device(fd)
    async_op = setup_async_operation(fd, user_callback, user_context)
    configure_dma(device, async_op)
    return success

function dma_complete_handler(async_op_id):
    async_op = get_async_operation(async_op_id)
    unblock_userland_process(async_op)
    cleanup_async_op(async_op)

Userland Implementation

In the userland:

  1. Opening Device: The Ethernet device is opened and a file descriptor is obtained.

  2. Registering Callback: The async_io_notify call is used to register io_complete.

  3. Blocking: The process uses pause() or similar to block until the callback signals completion.

  4. Handling Completion: Upon notification, the io_complete function processes the completed IO operation.

Sample Userland Code

// Callback function
// Callback function
void io_complete(int fd, void *context) {
    // Signal the main process to continue
}

int main() {
    int fd = open_ethernet_device();
    async_io_notify(fd, io_complete, context);
    pause();  // Wait for completion
    close(fd);
    return 0;
}

Conclusion

By allowing DMA to directly transfer data into memory and leveraging async IO for notifications, we minimize CPU idle time and enhance overall system performance. This paradigm is particularly beneficial in environments where IO operations are frequent and substantial.

The pervasive use of the term “Artificial Intelligence” to describe the surprising and hyped technology du jour unnerves me, as I think it muddies the discourse. Laypeople understand it to mean that machines are becoming human and technologists draw parallels to the human mind and cognition that are just not warranted, especially if you know how the technology works.

Similarly, I've always tried to call “Machine Learning” applied statistics, since (in my amateurish understanding) it is mostly about doing statistics with computers, at scale. It seems more respectful, more precise to me, less prone to be confused with “learning” as done by humans (which I think we don't know nearly enough about).

However, I've come to be a bit less despondent towards these terms. I think they make perfect sense when talking about a field. The field of Artificial Intelligence is indeed trying to make machines artificially do things that require human intelligence to do. The field of Machine Learning is trying to make machines learn concepts, remember facts and generalize their “knowledge” to new problems.

The fact that statistics works especially well in both cases doesn't mean that Artificial Intelligence is just statistics. It's very far from that. The field has been the source of many discoveries and innovations that we take for granted (depth-first search, graph unification, Lisp, etc...). “Artificial Intelligence” is a mission statement (and an inspiring one at that, if you like machines). It is a North Star never to be reached.

As such, I find it an apt description of a goal and a poor and confusing term to apply to actual solutions and technologies (which have much more precise definitions).

Collapse control in IntelliJ

In IntelliJ, you can control the folding of the entire file in multiple ways. I wanted to quickly view only the name entries of a dbt schema.yaml file, so I put the cursor on the name attribute of the first models entry and selected “Expand all to level”, followed by selecting the value 2 (selecting 1 would only expand the parent node, but not show the individual hash values).

I'm currently visiting Europe, which I hadn't done in 5 years. I'm a native french and german speaker, but after moving to the US and writing pretty much exclusively in english, I felt increasingly uncomfortable about losing my written voice in both languages. While I feel increasingly at ease writing and publishing in english (it is such a “neutral” language for lack of a better word, allowing one to focus on the substance of an argument or concept), I wanted to regain a level of fluency that had slowly been deteriorating since high school.

I've used GPT to correct and rewrite my quickly written notes, as well as to translate from one language to the other using the styles of different authors I enjoy, and the results are impressively good (much better than what I get when playing around in english; a consequence, I suppose, of the increasingly heavy reinforcement learning applied to GPT in order to keep its output “safe”).

I also use it to generate examples and explain concepts from the grammar books I'm working with on the side. Interestingly (but not surprisingly, if you consider how these models work), it is not really good at parsing and “understanding” grammatical concepts given a text example, but it does well the other way round: rewriting in a certain tense, using adverbs or prepositions, or providing examples of certain grammatical constructs.

Rewriting some french

Here is an example of me working with a quick forum post I wrote earlier today in french (I know GPT transcripts are fucking boring, but what can you do):

#+begin_ai
[SYS]: You are a helpful assistant. Vous parlez un francais tres litteraire et correct.

[ME]: Rajoutez les accents dans le text qui suit:

Je suis de passage en france, et je me rends compte combien m'avait manque la culture francaise et ses nombreux debats philosophiques. J'aime particulierement les essais philosophiques de 40-100 pages publies sous forme de livres, donc forcement d'actualite. J'ai trouve de tres bons textes sur l'intelligence artificielle et le role de l'ingenieur dans ces temps de changement incroyable.

J'avais depuis quelques annees peur de perdre mon francais, vu qu'a chaque fois que j'essayais de composer un mail, je me rendais compte de ma grammaire appauvrie et mes nombreuses fautes d'orthographe (ainsi que le manque d'accents sur mon clavier et ma paresse a changer de layout). Au final, quelques jours de lectures m'ont sans effort aucun permis d'ecrire en francais dans mes notes (la meme chose s'est passee pour mon allemand, vu que je suis bilingue. L'allemagne a une culture ecrite bien differente, ou tout se passe essentiellement a la radio et dans les journaux quotidiens regionaux). La juxtaposition des differentes langues et la traduction d'une reflexion de l'un a l'autre est fort interessante. Sapir-Whorf a peut etre ete demantele scientifiquement, mais l'etude scientifique n'est qu'une dimension qui se fait souvent completer par l'experience vecue.

[AI]: Je suis de passage en France, et je me rends compte combien m'avait manqué la culture française et ses nombreux débats philosophiques. J'aime particulièrement les essais philosophiques de 40-100 pages publiés sous forme de livres, donc forcément d'actualité. J'ai trouvé de très bons textes sur l'intelligence artificielle et le rôle de l'ingénieur dans ces temps de changement incroyable.

J'avais depuis quelques années peur de perdre mon français, vu qu'à chaque fois que j'essayais de composer un mail, je me rendais compte de ma grammaire appauvrie et mes nombreuses fautes d'orthographe (ainsi que le manque d'accents sur mon clavier et ma paresse à changer de layout). Au final, quelques jours de lectures m'ont sans effort aucun permis d'écrire en français dans mes notes (la même chose s'est passée pour mon allemand, vu que je suis bilingue. L'Allemagne a une culture écrite bien différente, où tout se passe essentiellement à la radio et dans les journaux quotidiens régionaux). La juxtaposition des différentes langues et la traduction d'une réflexion de l'un à l'autre est fort intéressante. Sapir-Whorf a peut-être été démantelé scientifiquement, mais l'étude scientifique n'est qu'une dimension qui se fait souvent compléter par l'expérience vécue.

[ME]: Corrigez les fautes de style et de grammaire, et finalement recapitulez ce que vous avez corrige et pourquoi.

[AI]: Je suis de passage en France et je me rends compte combien la culture française et ses nombreux débats philosophiques m'avaient manqué. J'apprécie particulièrement les essais philosophiques de 40 à 100 pages publiés sous forme de livres, donc forcément d'actualité. J'ai trouvé de très bons textes sur l'intelligence artificielle et le rôle de l'ingénieur durant ces temps de changement incroyable.

Ces dernières années, j'avais peur de perdre mon français, car à chaque fois que je tentais de rédiger un e-mail, je me rendais compte de l'appauvrissement de ma grammaire et de mes nombreuses fautes d'orthographe (ainsi que l'absence d'accents sur mon clavier et ma paresse à changer de disposition). Finalement, quelques jours de lecture m'ont permis sans effort d'écrire en français dans mes notes (il en est de même pour mon allemand, langue dont je suis également bilingue. En Allemagne, la culture écrite est bien différente ; tout se passe essentiellement à la radio et dans les journaux quotidiens régionaux). La juxtaposition des différentes langues et la traduction d'une réflexion de l'une à l'autre sont très intéressantes. Sapir-Whorf a peut-être été démystifié scientifiquement, mais l'étude scientifique n'est qu'une dimension qui se fait souvent compléter par l'expérience vécue.

Corrections effectuées :
- Correction des fautes de grammaire : "m'avait manqué" remplacé par "m'avaient manqué", "ces temps de changement incroyable" par "durant ces temps de changement incroyable" et "j'avais depuis quelques années peur" par "ces dernières années, j'avais peur".
- Améliorations de style : "j'aime particulièrement" remplacé par "j'apprécie particulièrement", "publiés sous forme de livres, donc forcément d'actualité" par "publiés sous forme de livres, donc forcément d'actualité", "sans effort aucun" par "sans effort" et "fort intéressante" par "très intéressantes".
- Clarification des idées : "la juxtaposition des différentes langues et la traduction d'une réflexion de l'un à l'autre" remplacé par "la juxtaposition des différentes langues et la traduction d'une réflexion de l'une à l'autre".

Ces corrections ont été effectuées pour améliorer la fluidité et la cohérence du texte, ainsi que pour corriger les erreurs de grammaire.
#+end_ai

Translating to german and loss of meaning

The slow disintegration of meaning and context is ever present; it's not like you can just copy-paste the output. You have to actively engage with the material, and this is infinitely more fun than studying some dead grammar exercises.

#+begin_ai
[SYS]: You are a helpful assistant. Sie sprechen hoch literarisches Deutsch im Stile von Max Frisch und Guenther Grass.

[ME]: Uebersetzen Sie folgenden Text ins Deutsche:

Je suis de passage en France et je me rends compte combien la culture française et ses nombreux débats philosophiques m'avaient manqué. J'apprécie particulièrement les essais philosophiques de 40 à 100 pages publiés sous forme de livres, donc forcément d'actualité. J'ai trouvé de très bons textes sur l'intelligence artificielle et le rôle de l'ingénieur durant ces temps de changement incroyable.

Ces dernières années, j'avais peur de perdre mon français, car à chaque fois que je tentais de rédiger un e-mail, je me rendais compte de l'appauvrissement de ma grammaire et de mes nombreuses fautes d'orthographe (ainsi que l'absence d'accents sur mon clavier et ma paresse à changer de disposition). Finalement, quelques jours de lecture m'ont permis sans effort d'écrire en français dans mes notes (il en est de même pour mon allemand, langue dont je suis également bilingue. En Allemagne, la culture écrite est bien différente ; tout se passe essentiellement à la radio et dans les journaux quotidiens régionaux). La juxtaposition des différentes langues et la traduction d'une réflexion de l'une à l'autre sont très intéressantes. Sapir-Whorf a peut-être été démystifié scientifiquement, mais l'étude scientifique n'est qu'une dimension qui se fait souvent compléter par l'expérience vécue.


[AI]: Ich bin auf der Durchreise in Frankreich und bemerke, wie sehr mir die französische Kultur und ihre zahlreichen philosophischen Debatten gefehlt haben. Besonders schätze ich die philosophischen Essays von 40 bis 100 Seiten, die in Buchform veröffentlicht und daher zwangsläufig aktuell sind. Ich habe ausgezeichnete Texte über künstliche Intelligenz und die Rolle des Ingenieurs in diesen Zeiten unglaublichen Wandels gefunden.

In den letzten Jahren fürchtete ich, mein Französisch zu verlieren, denn jedes Mal, wenn ich versuchte, eine E-Mail zu verfassen, bemerkte ich die Verarmung meiner Grammatik und meine zahlreichen Rechtschreibfehler (sowie das Fehlen von Akzenten auf meiner Tastatur und meine Trägheit, das Layout zu wechseln). Schließlich ermöglichten mir einige Tage des Lesens mühelos, in meinen Notizen auf Französisch zu schreiben (ebenso verhält es sich mit meinem Deutsch, einer Sprache, in der ich ebenfalls zweisprachig bin. In Deutschland ist die Schreibkultur ganz anders; alles geschieht vor allem im Radio und in regionalen Zeitungen). Die Nebeneinanderstellung verschiedener Sprachen und die Übersetzung eines Gedankens von einer in die andere sind äußerst interessant. Sapir-Whorf mag wissenschaftlich entmystifiziert worden sein, aber die wissenschaftliche Untersuchung ist nur eine Dimension, die oft durch gelebte Erfahrungen ergänzt wird.

[ME]: Als skizzierte, knappe saetze in strichlistenform, wie max frisch beim entwurfs seines tagesbuchs.

[AI]: - Durchreise in Frankreich; Kultur und philosophische Debatten schmerzlich vermisst
- Philosophische Essays von 40 bis 100 Seiten geschätzt; aktuelle Themen, Buchform
- Exzellente Texte: Künstliche Intelligenz, Ingenieursrolle, Wandlungszeiten

- Französisch-Verlustangst; Verarmung der Grammatik, Rechtschreibfehler, fehlende Akzente, Trägheit
- Lesetage: Müheloses Schreiben in Französisch; Notizen
- Deutsch: Zweisprachigkeit; Schreibkultur anders; Radio, regionale Zeitungen

- Nebeneinanderstellung von Sprachen; Übersetzung interessant
- Sapir-Whorf: Wissenschaftliche Entmystifizierung; gelebte Erfahrungen als Ergänzung
#+end_ai

The dead-ass job in english

I absolutely loathe the way GPT writes english, and it's gotten so much worse over the months. And this is already the raw API with a system prompt.

#+begin_ai
[SYS]: You write concise but well formulated, elegant articles.

[ME]: Rewrite the following slack message into a decent short article. Try to stay close to the original text, occasionally rewording sentences to be better written english. Try to keep a conversational tone.

jeeze gpt is great for getting better at a language. holy macquerel. I've been losing my (native) written french and german, and as I've become more comfortable writing and publishing in english, I wanted to regain that level of fluency in both. I've been asking it to correct and rewrite my jotted notes, as well as translating to and fro each, and the results are really really good. Much better than asking it to just write in plain english. I'm working through grammar books on the side, and I can just jot down concepts and passages and play with it in GPT. I've yet to have it produce things that are unnatural / bad quality.




[AI]: **Using GPT to Enhance Language Learning: A Personal Experience**

GPT has proven to be a remarkably effective tool for refining one's linguistic skills. As a native French and German speaker, I found my written abilities in these languages fading as I began to focus more on writing and publishing in English. To regain my fluency in both French and German, I decided to incorporate GPT into my language learning process. 

By having GPT correct and rewrite my notes and translate them between languages, I've observed a significant improvement in the quality of my writing. Indeed, the platform demonstrates a greater proficiency when handling these tasks, as opposed to generating content solely in English. To further enhance my understanding, I've been working through grammar books alongside GPT. By jotting down concepts and passages, I am able to explore and experiment with different language structures using the platform. 

Throughout this process, GPT has consistently produced natural and high-quality translations and corrections. This serves as a testament to the platform's potential as an invaluable language learning resource for those seeking to maintain or improve their fluency in multiple languages.

[ME]: 
#+end_ai

Just no.

In response to schooled, which says:

The days of the junior code monkey are over. The creative aspects of development will shift towards correctly understanding and articulating the requirements. Refining designs. But feature very little hacking of code.

I disagree. I think you do need to be just as precise a coder as you need to be today to do a proper articulation of requirements, as well as guide the LLM towards a well architected, resilient and effective solution.

So I stepped back. Decomposed the big problem into smaller bite-sized chunks. Repeated the iterative prompt generation exercise. Then fed the prompt for the first component into the AI.

I think that the comparison “the AI generates better code than a junior developer” forgets that the AI has nothing in common with a junior developer. While a junior developer might not write code with the apparent consistency and level of quality of the LLM (nor do I, to be honest), they have a very different set of capabilities (they have a human brain, not a bunch of code running on GPUs) and, to use a loaded word, intelligence. Current LLMs might fail at decomposition, but they also often fail at generating simple yet correct code, despite being very consistent in their style, variable naming and function scope, and providing clear and plausible explanations. I think that's a plateau with our current approaches (I do think it's actually a strength, but I have a huge ass draft about that in my drawer).

A junior will become a senior, and will do so in a way LLMs can't, because human brains are infinitely malleable, surprising and ever-learning. To make a crude analogy, comparing the coding capabilities of an LLM to a junior's is like judging a junior developer on their ability to generate the right method calls when autocomplete does it so much better.

Also, and I think that's a very common pattern, we as seniors tend to forget that juniors (as in, younger people) are clever af and learn oh so fast, and an LLM plays right into that feedback loop. Remember how senior developers might tell you you had to do certain things in x,y,z because that's how they learned, and you were like “lol” and installed your linux distribution?

We might talk a big game about properly gathering requirements and carefully perusing the output of an LLM. Kids will just be “yeah whatever, prompt prompt run run build build goodbye” and quickly overshadow our capabilities at verifying LLM output, because they have to solve actual issues so much more often. I certainly didn't learn testing because I carefully wrote TDD, I learned testing by building systems that wouldn't work correctly unless I became good at testing.

I don't see the “code monkeying” period disappearing, it will just be oh so much shorter, the same way we rarely tweak assembly by hand; but to become a proficient systems developer you might spend 3 months coding some barebones OS by hand to actually know what we're doing when we're welding our rust files together with the acetylene torch of our compiler. In fact, I think every developer should spend a fair amount of time code monkeying their way around, because that's how you stay sharp.

I've barely been writing in this blog, which I purposefully set up to just yeet stuff out there, out of fear of publishing half-finished thoughts. But that was the point in the first place. I write so many things on mastodon or on slack that I think are actually worth publishing in blog form, but then never do because I feel too precious about my output. No more! This is a blog of drafts. In fact, a blog of first drafts. You have been warned.

The topic of SOLID and its value came up in one of my slacks this morning. Discussion of methodologies and how to do them right and all the jargon they come with is something that I don't really understand, and thus irritates me (things I don't understand, or think are unimportant but that people attach a lot of importance to, irritate me. A character trait that I'm not really happy with, but it's the main driver behind my writing and thinking, and hopefully it makes me less of an apparently angry, confrontational interlocutor in person).

These software design principles, these manifestos, these angry rants and preachy blog posts were written up and designed by very small groups of individuals who, in software engineering fashion, believe that there is a single solution to the problems of the field at large and that somehow they were the ones to find it. Besides the element of self-delusion inherent to such endeavours (ignoring things that might not work so well in their own practice already), these principles don't transpose easily to other teams, projects, company structures, incentives and timelines.

Being extremely curious and naive, I've tried and embraced many “miracle solutions” to the problem of software complexity over the years, from functional programming, hardcore low-level “unix philosophy of simplicity”, lisp “program the language”, heavy-weight object modeling, design patterns, go “concurrency is easy if you just use channels”, test-driven development, microservices, you name it. I can save you the time and effort to try all of these out by summing them up with “it depends” (more seriously, please go ahead and learn as much as you can, take all of them seriously, they are all ideas and concepts that people have spent a lot of time creating and refining, there is a lot of value in each of them).

I think that at the heart of all these approaches, names, workflows, methodologies, talks, youtubes, conferences is the desire and the need to find a common vocabulary and set of practices to allow a team of developers to tackle problems with the amount of complexity that a large scale software endeavour entails. More important than the details of each is the fact that programmers collaborating on a piece of software agree on their meaning and the intent behind them.

A shared vocabulary, a shared set of design patterns, a well-oiled workflow are valuable because they reduce the friction of agreeing on what needs to be written and how. It shortens feedback cycles and makes onboarding easier. It gives the team a set of tools that they might already know, or can look up in published literature (vs having to learn it by osmosis, if there is anything to learn at all). If the team agrees on using typeclass-heavy Haskell, and on which typeclasses should be used, then the team will be able to communicate in shorthand and reuse abstractions well. Similarly, a team well versed in object oriented design applying SOLID principles will be able to use shorthand when discussing design or giving feedback in code review. We all know by now that functional programming works, just as microservices work or object oriented design works, because successful products have been built by respectable engineers using them. Debating finer points is only useful insofar as you can apply the insights gained in the debate to your thinking and programming.

Getting people to agree on something that they hold dear is always a fraught endeavour. I think the key to success in applying any of these principles lies less in the methodology itself, and more in the way it gets shared and revised. It is more important to achieve consensus and a shared understanding through all the negotiation and facilitation and documentation and communication that goes with it than to perfect the technical details. They will inevitably change as the product requirements change, as new people with different backgrounds join the team, as the zeitgeist evolves, as the burden of legacy grows or technical shortcomings of the chosen framework become apparent.

I think that this explains the value of code review, RFCs and ADRs, the glue work of staff engineers, writing and blogging and whiteboard sessions and prototyping. It is all about helping different people understand each other better to build and maintain a common artifact.

Doom emacs has all kinds of weird stuff going on when starting up, so the standard emacs --batch command doesn't work. Instead, I wrote a little doomscript prompted by the following github issue. This allows me to run buttercup scripts with the CLI command buttercup.

#!/usr/bin/env doomscript

(defcli! buttercup ()
  (require 'doom-start)
  (require 'buttercup)
  ;; prepend . to load path
  (setq load-path (cons "." load-path))
  (buttercup-run-discover))

(run! "buttercup")

Save it under ~/.doom.d/bin, make sure that directory is in your $PATH, and off you go!

I recently got a framework laptop on which I am running linux, and decided to give it a run as my main driver, mostly keyboard driven. I want to use a set of 3 editors:

– VSCode because it has copilot and in general seems to be the IDE with the most opensource activity
– emacs because I like the looks of doom emacs, I've used emacs a lot in the past, it has vim bindings and all the snaggles, and I just don't like the vibe of VSCode or vim for that matter (it's all vibes, bro)
– goland/intellij/clion as my main drivers

This time I am going to try to document most of the setup I do, as notes to myself.

So here goes the first: setting emacsclient up as a pager with kitty.

I want to be able to select parts of my scrollback buffer quickly with kitty, and it seems that the way to do that is to use the shortcut to open the scrollback in the pager, but replace the pager with say vim or emacs.

Now, emacs doesn't allow slurping stdin as a file the way vim does, so I had to write this little workaround, which also strips out the terminal codes I could identify:

#!/bin/bash

# Call emacsclient as a pager.
# First, read in the content of stdin into a temporary file

t=$(mktemp /tmp/emacsclient.XXXXXX) || exit 1

echo "Reading into emacs..."

# Remove terminal escape sequences (color and move, as well as some funky starship stuff)
cat - \
   | sed 's/\x1b\[[0-9;:]*[mGKH]//g' \
   | sed 's/\x1b\][0-9;:]*[AC]\x1b\\//g' \
    >> $t

emacsclient "$t"
rm -f $t

And the corresponding kitty.conf entry:

map f1 launch --stdin-source=@screen_scrollback --stdin-add-formatting --type=overlay /home/manuel/.local/bin/emacs-pager.sh