jan Pontaoski's Thoughts

So, if you're reading this, you probably need some background information. gRPC is a popular RPC system built on HTTP2 and Protobuf. hRPC is an RPC system we at Harmony are moving to from gRPC, built on HTTP1 and Protobuf as well. Harmony is a chat protocol that falls somewhere between Matrix and Telegram functionality-wise, with a ton of extra goodies (besides our in-progress E2EE draft, which is, state-wise, basically just a micro Matrix implemented with Protobuf instead of JSON).

(If you're reading this on the KDE planet, there's juicy Qt stuff well after we explain what the heck all this networking stuff is, don't worry :) )

gRPC: The Good

gRPC has substantial language support, and is widely available in distros. It's also extremely optimised, using substantial custom HTTP2 behaviour for minimum network transfer.

gRPC: The Bad

gRPC has a very, very big flaw for publicly facing services: streams play awfully with reverse proxies like nginx, as they're essentially HTTP2 requests that are never closed. This causes proxies to go “hmmm, this is a slow loris attack, time to yeet this stream.” For our homeserver at https://harmonyapp.io, this meant we had to configure nginx to be OK with requests taking an entire hour, and any stream would still terminate at exactly 60 minutes. To be fair to gRPC, there's dedicated work on HTTP2 streams in progress that would let reverse proxies like nginx play nice with it, but unfortunately that's not the case right now.

Besides that, gRPC's client libraries, while widely available, range from mediocre to [ censored ] awful. gRPC is a Google product that isn't Go, which means that “error handling” is not a word in its dictionary. This has really bad implications for the C++/Qt client, Challah. Essentially, if anything goes marginally wrong, the client just straight up aborts. There is no way for us to gracefully recover from any errors that originate from the gRPC library. This is terrible for the user experience, as we can't even show a “something is going wrong” page. This is one of the big reasons we're moving away from gRPC: we cannot have our only desktop client be crashing on anything slightly less than perfect network conditions.

That wouldn't be a problem, if making our own implementation of gRPC was easy. Unfortunately, it's not. Remember the part where I said it used low-level HTTP2 a lot? Yeah, that gets very complicated very fast.

Additionally, our web client, Tempest, cannot do said low-level HTTP2 stuff. This forces us to specify in the protocol documentation a place for servers to name a grpc-web proxy for web clients to use.

With all of these issues in mind (issues specific to our use case; none of them would matter for microservices, which seem to be the main reason people use gRPC), we knew that sticking with gRPC wouldn't cut it if we wanted something as polished as we hoped. And thus, we started hRPC.

The Goals

We decided quickly that hRPC should:

  • require minimal, if any, changes to our .proto files
  • be dead simple to implement
  • be web-compatible (which basically means HTTP1/WebSockets)

The Implementation

First things first, we needed to write a protoc plugin. Thankfully, that was simple. We decided on a hybrid approach: simple-to-generate languages like Go are handled with templates written using Go's text/template package, which can either be packed into the binary or loaded from external files on disk so third parties can write their own templates. Complex-to-generate languages like Qt/C++ are handled by dedicated functions in protoc-gen-hrpc. This was actually so simple that we decided to write another plugin, protoc-gen-hdocs, which generates our online reference documentation from the .proto files.

Our JS client doesn't need to make use of this, as the protobuf implementation in JavaScript is transport-agnostic: Blusk, the other lead developer of our project, simply wrote a function that takes request information plus the inputs and transforms it into the outputs. Likewise, our Rust client and SDK use their own code generation instead of the protoc plugin. That leaves our Go server/client and our C++ client as the things generated by protoc-gen-hrpc.
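
For the curious, a protoc plugin is just a program that reads a serialized CodeGeneratorRequest on stdin and writes a CodeGeneratorResponse to stdout. Here's a rough sketch of that skeleton in Go; the stub template and the .hrpc.go file name are placeholders for illustration, not what protoc-gen-hrpc actually emits.

package main

import (
    "bytes"
    "io/ioutil"
    "os"
    "text/template"

    "google.golang.org/protobuf/proto"
    "google.golang.org/protobuf/types/pluginpb"
)

// a stand-in template; the real templates are much more involved
var stub = template.Must(template.New("stub").Parse(
    "// Code generated from {{ .GetName }}. DO NOT EDIT.\n"))

func main() {
    // protoc pipes a serialized CodeGeneratorRequest into our stdin
    in, err := ioutil.ReadAll(os.Stdin)
    if err != nil {
        panic(err)
    }
    req := &pluginpb.CodeGeneratorRequest{}
    if err := proto.Unmarshal(in, req); err != nil {
        panic(err)
    }

    resp := &pluginpb.CodeGeneratorResponse{}
    for _, file := range req.GetProtoFile() {
        // render the template once per .proto file
        var buf bytes.Buffer
        if err := stub.Execute(&buf, file); err != nil {
            panic(err)
        }
        resp.File = append(resp.File, &pluginpb.CodeGeneratorResponse_File{
            Name:    proto.String(file.GetName() + ".hrpc.go"),
            Content: proto.String(buf.String()),
        })
    }

    // ...and protoc expects a serialized CodeGeneratorResponse back on stdout
    out, err := proto.Marshal(resp)
    if err != nil {
        panic(err)
    }
    os.Stdout.Write(out)
}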

The Flaws

Not everything is rosy with hRPC. Because we use HTTP1 and take the straightforward approach to networking, our implementation is nowhere near as slim on the wire as gRPC, which spends a lot of time and effort shedding bytes. Additionally, we're forgoing the existing gRPC ecosystem, requiring anyone who wants to implement the Harmony protocol to write their own codegen that works with our .protos.

Challah: gRPC

Challah, our Qt/C++ client, uses QtConcurrent approximately a heck tonne to handle sending gRPC requests and receiving data from streams without blocking the main thread. In short, every request is sent on a thread from a thread pool and, depending on its nature, either takes a callback or results in data arriving on the “main” event stream, in which case we don't take a callback. For streams, we use two tools:

  • a thread in a busy loop doing while (stream->Read())
  • Qt events

Events are amazing, and allow us to use a largely mutex-free design. What happens is that our stream-reading thread reads the event stream and translates gRPC reads into Qt events, which are posted to the parents of our object hierarchy, which then re-post the events to their children as necessary. The flow for a message looks like: the client thread reads the event, posts it to the communities model, which posts it to its child channels model, which posts it to its child messages model, which then updates its data. Some of this is working around gRPC, but it's mostly a sane concurrency and state management solution on its own, and it will stay largely intact through the port to hRPC.

Challah: hRPC

Now that I'm the one authoring the client RPC library, I get to make it as perfect as possible for Challah. That means it's written with Qt and uses its proper concurrency mechanisms. Fun stuff :). Besides shedding a runtime dependency, the port to Qt will also massively help with portability: gRPC C++ is a giant, clunky beast with a lot of vendored dependencies. Currently, our macOS build isn't working (though it compiles) due to SSL woes. Qt's networking stack doesn't have those woes, and moving to it means our macOS build will start functioning. This also opens the door to an Android build, as the protobuf library will be substantially easier to package for Android than gRPC. Codewise, this means we can move from abusing QtConcurrent thread pools to just using Qt's native networking types. All in all, that's pretty good.

I'm not sure how to end this blog post, so I'll just drop some links:

Maybe I'll write some more about how the implementation of the codegen or the C++ client went if this post turns out to be somewhat popular or if someone requests it. Tschö.

Tags: #libre

Yes, that title is too long and I know it.

If my previous blog post didn't make it clear, I don't like dealing with XML. Obtuse to write, obtuse to read. Given that I wrote a program so that I wouldn't need to write XML for an application menu protocol, it only makes sense that I would do the same for reading Wayland protocols. And thus, ReadWay and its non-web cousin ilo Welenko were born.

Parsing the XML

If you're familiar with Wayland, you're probably familiar with the XML files you can find in /usr/share/wayland and /usr/share/wayland-protocols. What you may not have noticed is the /usr/share/wayland/wayland.dtd file lurking alongside the core Wayland protocol. This is a document type definition file, which defines what a valid XML document looks like. Thankfully, this is a fairly simple DTD to write Go structures for. This DTD definition:

<!ELEMENT description (#PCDATA)>
  <!ATTLIST description summary CDATA #REQUIRED>

becomes this Go code:

type Description struct {
    Summary string `xml:"summary,attr"`
    Body    string `xml:",chardata"`
}

And this:

<!ELEMENT protocol (copyright?, description?, interface+)>
  <!ATTLIST protocol name CDATA #REQUIRED>

becomes

type Protocol struct {
    Name        string      `xml:"name,attr"`
    Copyright   string      `xml:"copyright"`
    Description Description `xml:"description"`
    Interfaces  []Interface `xml:"interface"`
}

Fairly simple, eh?

To unmarshal a protocol XML into a Go structure, you just xml.Unmarshal like this:

data, err := ioutil.ReadFile(path)
// handle error
proto := Protocol{}
err = xml.Unmarshal(data, &proto)
// handle error
// do something with proto

Templates

Of course, Go structs aren't particularly pleasant to read as documentation, even compared to XML. This is where Go's html/template package comes into play. You can throw a Protocol and a template at it like so:

<h1>{{ .Name }} <small class="text-muted">protocol</small></h1>

<p>
    {{ .Description.Body }}
</p>

{{ range $iface := .Interfaces }}
    <h2>{{ $iface.Name }} <small class="text-muted">interface version {{ $iface.Version }}</small></h2>

    <!-- finish rendering interfaces -->

{{ end }}
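
To actually get HTML out, you parse the template and execute it with the Protocol you unmarshalled earlier. Here's a minimal sketch; the file name protocol.html.tmpl is made up for illustration:

tmpl, err := template.ParseFiles("protocol.html.tmpl")
// handle error
var buf bytes.Buffer
err = tmpl.Execute(&buf, proto)
// handle error
// buf.String() now holds the rendered HTML for the protocol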

There's also the more generic text/template package, which is what ilo Welenko uses. The same concept applies:

Kirigami.Page {
    title: "{{ .Name }}"
    ColumnLayout {
        {{ range $iface := .Interfaces }}
        Kirigami.Heading {
            text: "{{ $iface.Name }} version {{ $iface.Version }}"
        }
        {{ end }}
    }
}

(And yes, I am statically generating QML code in Go and loading it instead of marshalling it into Qt data types and using model/views/repeaters.)

See Also:

  • ReadWay hosted: ReadWay hosted on the internet. The “special thing that might happen when you drag an XML file onto [the] paragraph” is a Wayland protocol being rendered in your browser using WASM. The future is now. And it don't need no cookies.
  • ReadWay source: The static generator for ReadWay.
  • ilo Welenko: The desktop counterpart to ReadWay that renders into QML rather than HTML. At the time of this post, it's very incomplete compared to the web version.

Contact Me

Have any thoughts/comments/concerns about this post, or want to tell me that I shouldn't statically render QML? Here's how you can contact me:

  • Telegram: @pontaoski
  • Discord: pontaoski blackquill 🏳🌈#8758
  • Matrix: pontaoski@tchnics.de
  • IRC: appadeia_
  • Email: uhhadd@gmail.com

Tags: #libre

Go is one of the best languages for writing parsers and tools that need some form of parsing. This is mainly due to:

  • Great string and regexp functions in the stdlib for parsing
  • Easy and safe introspection for blank interfaces (Go's equivalent of a QVariant or a void pointer)
  • Labels. You have both gotos and the ability to break and continue out of deeply nested loops, which is great for handwritten parsers (see the short sketch after this list).
  • Fast compilation makes for fast iteration.
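
Here's the labelled-break sketch promised above. The grid and the terminator character are made up; it's purely to show the mechanism:

package main

import "fmt"

func main() {
    grid := [][]rune{[]rune("abc"), []rune("d;f"), []rune("ghi")}
search:
    for _, row := range grid {
        for _, ch := range row {
            if ch == ';' {
                fmt.Println("found the terminator, bailing out of both loops")
                break search // one jump out of the nested loops, no flag variable needed
            }
        }
    }
}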

Screw XML

XML is unwieldy to write and obtuse to read. Unfortunately, things like Wayland use it for protocol descriptions. Fortunately, Go can be used to author tools that generate XML from a more human-readable format.

Introducing Participle

Participle is a Go library that makes parsing data into ASTs extremely easy. I'll demonstrate a simple use of it by authoring a better Wayland protocol syntax that can transpile to XML.

One: Designing a syntax

This is mostly a matter of taste: I like the aesthetic of Go, so I went with a very Go-like syntax:

protocol appmenu

interface zxdg_appmenu_v1 {
    version 1

    request set_address(service_name string, object_path string)
}

Simple, yet descriptive.

Two: Building trees

Participle by default uses the tokens that form the Go language itself, which is important to know. A grammar has to play by Go rules if you stick with the default tokens.

Let's start by defining a simple protocol struct:

type Protocol struct {
}

It's empty, which isn't very useful. Let's give it a name field, since we want to be able to name our protocol.

type Protocol struct {
    Name string
}

This looks like a nice start to our tree, but how does the parsing work? We add some metadata.

type Protocol struct {
    Name string `"protocol" @Ident`
}

This will tell Participle two things:

  1. It should look for the string protocol in our protocol grammar
  2. It should grab the next Identifier token and put it into the field

Now, we probably want to add a hook for an interface, as a protocol without interfaces is useless. Let's write that in:

type Protocol struct {
    Name       string      `"protocol" @Ident`
    Interfaces []Interface `{ @@ }`
}

The { @@ } will instruct the parser to capture as many Interfaces as it can and stuff them into the slice.

Now let's write a description for what we want an interface to look like, starting with a name.

type Interface struct {
    Name   string     `"interface" @Ident "{"`
    // Put the goodies here!
    Ending struct{}   `"}"`
}

The purpose of the Ending field is to make sure that our interfaces end with a closing bracket.

An interface is composed of requests. Let's take a closer look at what our design looked like:

request set_address(service_name string, object_path string)
^
|
| always "request"
request set_address(service_name string, object_path string)
        ^
        |
        | Must be a valid identifier
request set_address(service_name string, object_path string)
                    ^^^^^^^^^^^^^^^^^^^
                    |
                    | One unit with two parts: identifier and type
request set_address(service_name string, object_path string)
                   ^                                       ^
                   |                                       |
                   | these surround our arguments          |
request set_address(service_name string, object_path string)
                                       ^
                                       |
          this separates our arguments |

Describing this will roughly look like this:

"request" @Ident "(" argument, argument ")"

Let's put that into a struct:

type Request struct {
    Name      string     `"request" @Ident "("`
    Arguments []Argument `{ @@ [","] } ")"`
}

{ @@ [","] } is a fancy way of making the Arguments field say “capture as many of me as possible, and we might have a comma separating us.”

Now let's write an Argument struct.

type Argument struct {
    Name string `@Ident`
    Type string `@Ident`
}

Since this is basically just a tuple of identifiers, that's exactly what we made this struct.

Because an interface can have multiple requests, we add the following field to our Interface struct: Requests []Request `{ @@ }`. Like above, { @@ } will try to capture as many Requests as possible.

Put together, all our structs look like this:

type Interface struct {
    Name     string       `"interface" @Ident "{"`
    Requests []Request    `{ @@ }`
    Ending   struct{}     `"}"`
}
type Protocol struct {
    Name       string      `"protocol" @Ident`
    Interfaces []Interface `{ @@ }`
}
type Request struct {
    Name      string     `"request" @Ident "("`
    Arguments []Argument `{ @@ [","] } ")"`
}
type Argument struct {
    Name string `@Ident`
    Type string `@Ident`
}

Three: parsing trees

Now that we have our AST designed, let's hook it up to Participle.

parser := participle.MustBuild(&Protocol{})

protocol := Protocol{}
err := parser.Parse(os.Stdin, &protocol)
// handle error

That's easy, eh? Since building XML output is fairly straightforward (build structs that correspond to the XML output, copy the AST across, and marshal those structs into XML), I won't be covering it in detail here.
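
Still, for a rough idea, a sketch of that last step might look something like this, assuming the structs from above. These output structs are simplified and aren't what blankInterface actually uses:

type xmlArg struct {
    Name string `xml:"name,attr"`
    Type string `xml:"type,attr"`
}
type xmlRequest struct {
    Name string   `xml:"name,attr"`
    Args []xmlArg `xml:"arg"`
}
type xmlInterface struct {
    Name     string       `xml:"name,attr"`
    Requests []xmlRequest `xml:"request"`
}
type xmlProtocol struct {
    XMLName    xml.Name       `xml:"protocol"`
    Name       string         `xml:"name,attr"`
    Interfaces []xmlInterface `xml:"interface"`
}

out := xmlProtocol{Name: protocol.Name}
for _, iface := range protocol.Interfaces {
    xi := xmlInterface{Name: iface.Name}
    for _, req := range iface.Requests {
        xr := xmlRequest{Name: req.Name}
        for _, arg := range req.Arguments {
            xr.Args = append(xr.Args, xmlArg{Name: arg.Name, Type: arg.Type})
        }
        xi.Requests = append(xi.Requests, xr)
    }
    out.Interfaces = append(out.Interfaces, xi)
}
data, err := xml.MarshalIndent(out, "", "    ")
// handle error, then write data wherever you like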

From Here

Some links you may find useful:

  • blankInterface: A more complete Wayland protocol parser and XML generator.
  • Participle: The parser library used.
  • encoding/xml: XML library in Go's stdlib.

Tags: #libre

Note: “acceptable” here is from the perspective of a Tetris fanatic who regularly uses jargon like SRS, lock delay, DAS, ARR, etc. For the casual player, these games are perfectly fine. That said, I would recommend Quadrapassel over KBlocks to casuals because of its better rotation.

Errata: I mention that KBlocks can only rotate in one direction. It can actually rotate in both directions; it just breaks the norm with its default keybindings, which confused me.

the heck is a “DAS”?

  • DAS: delayed auto shift: how long it takes for a piece to start flying to the wall
  • ARR: auto repeat rate: how fast a piece flies to the wall
  • SRS: super rotation system: the guidelines defining how pieces rotate.
  • lock delay: how long you have to move a piece before it locks after it touches a surface.

Why other open source implementations suck

Quadrapassel

The board is the wrong size. That's all you need to know to avoid this one.

Besides the incorrect size, Quadrapassel is barely SRS-conformant (though its rotation handling is much better than that of KBlocks, which I'll get to in a bit).

Timing is also way off, with no lock delay, too much DAS, and not enough ARR.

KBlocks

The board is the correct size, but somehow the rotation handling is even worse than Quadrapassel, because pieces rotate around the center of their occupied region and not around the center of the pieces themselves.

There is only one correct rotation method:

Chart of rotations

Additionally, you can only rotate in one direction.

Like Quadrapassel, timing is off: no lock delay, too much DAS, not enough ARR.

What Nullpomino does right

Nullpomino offers one thing hardcore Tetris fans love: absurd fine-tuning. Each and every aspect can be configured, from DAS to ARR to lock delay and beyond.

Additionally, there are a ton of game modes that exercise every skill a Tetris player can have. From plain single-player Tetris to all sorts of specialty training modes to multiplayer, Nullpomino has it all.

Also, Nullpomino is the fan game that you see in Tetris communities.

You can tell that it was made by Tetris fans for other Tetris fans.

Tags: #libre

rust is quite a neat language, isn't it? gigantic library ecosystem, memory safety, tons of developer-friendly tools in it. for Ikona, I decided to utilise this language, and instead of relying on binding generators that hide half the magic away from you, I wrote all bindings by hand.

rust -> C++ by hand: how?

obviously, rust and C++ are different programming languages and neither of them have language-level interop with each other. what they do both have is C. C—the lingua franca of the computing world. unfortunately, C is a very bad lingua franca. something as basic as passing arrays between programming languages becomes boilerplate hell fast. however, it is possible and once you set up a standardised method of passing arrays, it becomes far easier.

rust to C

so, in order to start going from rust to C++, you need to stop at C first. for Ikona, I put the C API bindings in a separate crate in the same workspace. you have a few best friends when bridging rust to C here:

  • #[no_mangle]: keeps rustc from mangling your symbols, so they're callable from plain C
  • unsafe: because C is ridiculously unsafe and Rust hates unsafety unless you tell it that you know what you're doing
  • extern "C": makes rust expose a C ABI that can be eaten by the C++ half
  • #[repr(C)]: tells rust to lay out the memory of a thing like C does
  • Box: pointer management
  • CString: char* management

memory management

Box and CString are your friends for memory management when talking to C. the general cycle looks like this:

pub unsafe extern "C" new_thing() -> *mut Type {
    Box::into_raw(thing) // for non-rustaceans, the lack of a semicolon means this is returned
}
pub unsafe extern "C" free_thing(ptr: *mut Type) {
    assert!(!ptr.is_null());
    Box::from_raw(ptr);
}

into_raw tells rust to let C have fun with the pointer for a while, so it won't free the memory. when C is done playing with the pointer, it returns it to Rust so it can from_raw the pointer to free the memory.

structs

for Ikona, I didn't bother attempting to convert Rust structs into C structs, instead opting for opaque pointers, as they're a lot easier to deal with on the Rust side.

an average function for accessing a struct value in Ikona looks like this:

#[no_mangle]
pub unsafe extern "C" fn ikona_theme_get_root_path(ptr: *const IconTheme) -> *mut c_char {
    assert!(!ptr.is_null()); // make sure we don't have a null pointer

    let theme = &*ptr; // grab a reference to the Rust value the pointer represents

    CString::new(theme.root_path.clone()).expect("Failed to create CString").into_raw() // return a char* from the field being accessed
}

this is very similar to how calling methods on structs is bridged to C in Ikona.

#[no_mangle]
pub unsafe extern "C" fn ikona_icon_extract_subicon_by_id(
    ptr: *mut Icon,
    id: *mut c_char,
    target_size: i32,
) -> *mut Icon {
    assert!(!ptr.is_null()); // gotta make sure our Icon isn't null
    assert!(!id.is_null()); // making sure our string isn't null

    let id_string = CStr::from_ptr(id).to_str().unwrap(); // convert the C string into a Rust string, and explicitly crash instead of having undefined behaviour if something goes wrong

    let icon = &*ptr; // grab a reference to the Rust object from the pointer

    // now let's call the method C wanted to call
    let proc = match icon.extract_subicon_by_id(id_string, target_size) {
        Ok(icon) => icon,
        Err(_) => return ptr::null_mut::<Icon>(),
    };

    // make a new Box for the icon
    let boxed: Box<Icon> = Box::new(proc);

    // let C have fun with the pointer
    Box::into_raw(boxed)
}

enums

enums are very simple to bridge, given they aren't the fat enums Rust has. just declare them like this:

#[repr(C)]
pub enum IkonaDirectoryType {
    Scalable,
    Threshold,
    Fixed,
    None
}

and treat them as normal. no memory management shenanigans to be had here.

ABI? what about API?

C has header files, and we need to describe the C API for human usage.

structs

since Ikona operates on opaque pointers, C just needs to be told that the type for a struct is a pointer.

typedef void* IkonaIcon;

enums

enums are ridiculously easy.

#[repr(C)]
pub enum IkonaDirectoryType {
    Scalable,
    Threshold,
    Fixed,
    None
}

becomes

typedef enum {
  ScalableType,
  ThresholdType,
  FixedType,
  NoType,
} IkonaDirectoryType;

not much to it, eh?

methods

methods are the most boilerplate-y part of writing the header, but they're fairly easy. it's just keeping track of which rust thing corresponds to which C thing.

this declaration

pub unsafe extern "C" fn ikona_icon_new_from_path(in_path: *mut c_char) -> *mut Icon {

becomes

IkonaIcon ikona_icon_new_from_path(const char* in_path);

C to C++

once the C API is written, you can consume it from C++. you can either write a wrapper class to hide the ugly C or consume it directly. here in the KDE world, where the wild Qt runs free, you can use smart pointers and simple conversion methods to wrangle the C types.

advantages

the big advantage for Ikona here is the Rust library ecosystem. librsvg and resvg are both Rust SVG projects that Ikona can utilise, and both are better in many ways than the simplistic SVG machinery available from Qt. heck, resvg approaches browser-grade SVG handling, with a huge array of things it can do to SVGs as well as broad compatibility. Ikona barely taps into the potential of the Rust world right now, but future updates will leverage the boilerplate laid down in 1.0 to implement new features that take advantage of the vibrant array of fast, high-quality Rust libraries out there.

what I would have done differently

writing a bunch of rust-to-C boilerplate isn't fun, especially with arrays. since glib-rs is already in Ikona's dependency chain, I should have utilised GList instead of writing my own list implementation.

tags: #libre

welp, looks like it's finally time to write this :D

so, ikona 1.0 is here and ready to take on the world (of helping icon designers).

some firsts

so, this is a personal first for me. it's the first time I've released a GUI application that I feel is actually thoroughly polished.

I believe this is also the first KDE application to be released that's predominantly programmed in Rust—I'm aware of rust-qt-binding-generator, but I haven't seen any KDE apps consume it.

the application itself

it would be heretical to write a blog post about the 1.0 release of Ikona without talking about what it actually is, ja?

Ikona is a companion application to a vector editor like Inkscape, providing utilities for wrangling with icons and an icon preview.

Ikona's home screen

Ikona opens up to a fairly unassuming screen, giving users two options: the colour palette or the icon view.

before we get to the meat of Ikona, let's look at the colour palette.

Ikona's colour palette.

Ikona's colour palette is fairly simple—it shows a bunch of colours, and clicking them copies the hex code. the colour palette was designed to offer icon designers a vibrant and large array of colours that fit into the Breeze style.

Ikona's preview screen, light and dark

this is where Ikona's meat lies—the application icon view. it displays application icons at a pixel-perfect size in an environment similar to a Plasma desktop.

by default, it just shows Ikona's icon. the real meat is when you press “Create Icon.” this exports a special type of SVG with the suffix .ikona.app.svg.

the .ikona.app.svg is a special type of input SVG that ikona knows how to process. normally, multiple sizes of an icon are stored as different files, making managing all of them cumbersome. however, the .ikona.app.svg combines all sizes of an application's icon into a single file, making it easier to cross-reference elements shared between sizes in the same file. this also allows Ikona to intelligently split and place icons in the correct locations on export.

Ikona also supports regular SVG files; however, only one size of icon can be previewed at a time, and Ikona cannot export optimized icons from this format.

saving the icon will cause Ikona to instantly update its preview of the icon.

once you're done designing your icon, you use the export screen to export your icon.

Export screen

you can select which sizes to export, and how to export the icon (to one folder with different names, or to per-size folders with the same name).

you can also take montages of your icon using Ikona. for ease of sharing, the montages are copied directly into your clipboard for pasting into your favourite chat application.

that's it for the GUI application, but not for Ikona.

ikona-cli

Ikona isn't just a GUI application—there's also a fully independent command line interface to its functionality.

 ➜ ikona-cli
ikona-cli 1.0
Carson Black <uhhadd@gmail.com>
Command-line interface to Ikona

USAGE:
    ikona-cli [SUBCOMMAND]

SUBCOMMANDS:
    class       Class your icon
    convert     Convert your icon from light <-> dark
    extract     Extract icons from an Ikona template file
    optimize    Optimize your icon

There are four subcommands:

  • class — Injects stylesheets and replaces colours with stylesheet colours.
  • convert — Converts light icons to dark and dark icons to light.
  • extract — Allows splitting .ikona.app.svg icons into multiple files on the command line.
  • optimize — Optimizes your icon with a variety of methods. Unlike more commonly used SVG optimizers, Ikona is able to optimize for ease of rendering, reducing the work SVG libraries have to do to render an icon. This translates to faster rendering and better performance, despite a slightly larger file size.

for the next release

for the next release, two features are planned:

  • wrangling with icon themes: icon themes are a pain to deal with, and a tool like Ikona can be scaled to wrangle with hundreds or thousands of icons instead of just the few being designed.
  • monochromatic icon preview: stylesheet injection and classing are perfect for dealing with monochromatic icons, and Ikona will be able to preview them.

for the packagers

yes, rust sucks to deal with.

if your distro mandates that you aren't allowed to bundle dependencies, most of Ikona's dependencies are dependencies of librsvg, a package that most distros should have. this means only a few new packages are needed if they're not already used by other applications.

if your distro is fine with you bundling dependencies, then you're in for happy days. just rename the cargo vendor tarball to ikona.cargo.vendor.tar.xz and plonk it in the source root alongside CMakeLists.txt. CMake will take care of the rest of the job for you.

Tags: #libre