“Day 23: Separate Cookie Jars.”

I recently installed Firefox's Multi-Account Containers add-on.

In the past I've used Sandboxie – a Windows tool for creating sandboxed areas. But abstracting at the OS layer means having multiple windows running, and most of the time I've not bothered.

I moved my browsing from Chrome back to Firefox a few months ago, and needed some privacy add-ons. I'm currently running:

  • Privacy Badger: I put this in first
  • uBlock: Added afterwards; it stops a lot of trackers before they ever reach Privacy Badger
  • ClearURLs: For where the tracking is in the URL itself
  • Cookie AutoDelete: So cookies are only stored for sites I choose to whitelist

Most of the time this gives an acceptable balance between privacy and convenience. If a page comes out malformed I'll temporarily disable them one by one until I've found the problem and either put in an exception or decide not to use that site.

Firefox's Multi-Account Containers add another layer of abstraction: the add-on overlays a set of “containers” that you create (it comes out-of-the-box with some templates: personal, work, shopping, banking). Each of these containers can be considered a separate “cookie jar”, giving both protection from tracking and convenience.

It's particularly convenient if you have multiple accounts on a particular site. For example, I have a personal AWS account where I store backups, digital photos, and run experimental machines. I also have a work account for AWS lab work. Being able to have separate sets of cookies stored for these two accounts is really useful.

Conversely, if I follow a link someone has posted on Facebook then that link opens in the same container as my Facebook account – so the destination site can't harvest cookies and history from my normal activity in the other containers.

I now have an MO for web browsing and web-enabled services with which I'm happy.

This post is day 23 of my #100DaysToOffload challenge. Visit https://100daystooffload.com to get more info, or to get involved.

“Day 22: Very Big Mangles.”

Checking drawings into git

This post was going to be an extension of the “git or not” subject. My question “What drawing packages are there where you can commit your work to git?” has two answers: one from now and one from the 1980s.

  • The answer from now is to use draw.io and unselect “compressed” under the preferences menu. That way you get good descriptive XML that, for a lot of editing, allows diff to do its thang. You have most of the advantages of modern drawing (e.g. connectors attached to objects) with a readable descriptive file format.
  • The answer from the 1980s is xfig. It has its own descriptive file format ... and as soon as I looked at it the scene dissolved into a flashback.

The rest of this post is a flashback

Flat Metal rolling in the 1980s

The year is 1988 and I'm three years out of university, working at GEC in Rugby. With a degree in Engineering Science (with a third year option of Electrical Engineering) I've landed in Britain's largest electrical engineering company in its heyday. Having left university heading for a career in electrical machinery and power electronics, I've swerved into mathematical modelling and control for steel rolling.

Hot rolling of flat steel (sheet) is an industry with a massive financial throughput: in the 1980s our rule of thumb was that stopping a rolling mill caused losses of tens of thousands of pounds per hour. The upside of this financial throughput is that quite big investments can be justified for small improvements.

In the mid to late 1980s we were just getting to the point where we could model how thick the metal would be when rolled (using metallurgical properties and temperature measurements to calculate the set points for the very big mangles). Minimising the length of material that was out of tolerance would give enough financial gain to justify investments. Until this point the set points were applied using lookup tables, and the race into active control featured the automation engineering giants of the day: General Electric; GEC; Asea; Alsthom; Siemens; MDS; Hitachi; Toshiba.

At the point where I hit this scene, attention was moving on from controlling rolled metal thickness alone to controlling profile and flatness as well: flat metal isn't really flat, it's thicker in the middle than at the edge (this is known as crown or profile), a consequence of the rolls used to flatten it bending under the force. Not only do we have to consider the crown when rolling it, we also have to ensure that the thickness reduction from rolling is proportional across the width: if we roll it too thin at the edges then they'll be too long and will be wavy like a pie crust. Fixing wavy edges means more downstream processing, which costs money.

A World Conference

Everyone was racing towards controlling crown and wavy edges at the same time, and we in GEC had our own story to tell. A conference was convened in 1988 in Pittsburgh for those working on the problem, both in the engineering companies and the steel producers, to share their findings and experience.

This sort of conference in the 1980s was pretty formal. People wore suits and presented finished material. In the days before PowerPoint, this meant that you put your presentation onto 35mm slides (which is why they are still called slides in PowerPoint and its equivalents) which would get loaded into a slide carousel for projection. Those of us whose presentations were a little more “immediate” could get away with using overhead projection, and took A4 or letter size acetates with us to the conference.

I was nominated to represent GEC at this conference. As a 25-year-old, three years out of university, I was going to America for the first time, to a learned conference. And I had to write a paper to present.

In which we finally talk about (x)fig

We submitted a written paper – probably now lost forever. I think I wrote it using LaTeX and, in the pre-PDF days, we printed it and put the printout in an envelope to post to the conference organisers, who published it in a proceedings journal.

For the accompanying presentation we decided we'd talk about curve fitting: the bending of the rolls in a mill is neither linear nor parabolic, and we/I had experimented with quartic fits to our models of deformation. What I wanted to show at the conference was how we overlaid modelled curves on both the simulated results and real life measurements from a traversing thickness gauge.

The mechanisms needed to show this were surprisingly difficult. There was no Matlab to make plots; this was all to be done on an overhead projector.

Reader, I used fig (before it was renamed xfig). I was able to make fig diagrams which each had alignment crosses for overlaying on an overhead projector. The results of the simulation or measurements or fit could be spat out of a FORTRAN program as a series of points to be pasted into a .fig file (using emacs), giving a faithfully aligned set of curves that could be overlaid to show the refinements in fitting. For the time, we were pushing the boundaries of presentation technology to bring the curve fitting process to life. And as I sat down, the session chairman said to me off microphone “Wow, that was amazing” – our presentation had hit the mark.


While writing this piece, I did a quick search for our paper: I'd assumed it was lost to posterity, lying somewhere in a vault in Pittsburgh. Happily I found, via a search of the public internet, a paper in which we are cited as a reference. Like the snail that looked back after crawling across a beautiful piece of polished marble, I can proudly say “I left my mark”.

A sincere thank-you to the original developers of fig.

This post is day 22 of my #100DaysToOffload challenge. Visit https://100daystooffload.com to get more info, or to get involved.

“Day 21: Studying.”

Been a few days since my last entry. There are things I want to write about, but I've been up to my neck in studying for an AWS exam.

It's a recertification: my AWS Solutions Architect Associate ticket runs out in July, and it is to my employer's benefit if I renew it.

I'm wary of recerts: a few years ago my RHCE came up for renewal and I failed it. Passed it again the following year, but it's expired again and I've not renewed it ... yet. Got a stack of Splunk certs to renew as well, but for the moment my focus is on AWS.

The AWS SA-Assoc covers a lot of ground: authorisation and permissions; compute; auto-scaling; networking; storage; databases and then a long tail of familiarisation with other services.

Exam's on Friday afternoon. We'll see how it goes.

This post is day 21 of my #100DaysToOffload challenge. Visit https://100daystooffload.com to get more info, or to get involved.

“Day 20: Taking a lead from Television for goldfish.”

One of the more annoying things is watching television programmes that keep telling you what happened earlier and what's going to happen later. Often done to allow for advert breaks, it seems like the laziest form of padding.

With this in mind, I thought I'd share the notes I made for topics I might write about later in this series.

Who am I?

One of the fates I fear, but must be ready for later in life, is dementia. I'm 57 now, so hope that if it arrives it won't be for decades. But a good carer will often ask of a dementia patient “what is this person like”, with an understanding of their pre-dementia life informing their day-to-day interactions.

Maybe, among other things, this #100DaysToOffload series will help in answering that question.

After the break

My sh*t list of topics falls into two sets: technical and non-technical. There are far fewer than 80, but it's a shelf I'll steadily draw from. Some may never get properly written up.

Non Technical

  • Big Brother – how we've unwittingly arrived at Orwell's dystopia and consented to it. “He loved Big Brother”
  • NHS as the national religion – I'd earmarked this as a topic before weekly prayers (aka clap for carers) was instituted
  • Taiwan as a non-state – the peculiarly wrong international status of Taiwan
  • The Church in China – how the Chinese Patriotic Catholic Association represents the biggest schism in today's Catholic church
  • Air Travel – the peculiarly damaging yet privileged position of the aviation industry
  • Kodak and Fujifilm – a tale of two companies whose biggest market disappeared and their different fates
  • Nothing to hide – my own take on the great fallacy of our times
  • Social Media – another angle on the advent of Big Brother via consent

Technical

  • Power consumption – how the greening of our power system is predominantly viewed from the supply side, and how demand side “greening” could be just as potent
  • OpenSCAD – tales of constructive solid geometry
  • git or not-git [posted at Day 19] – how applications and tools store your work
  • Engraving music with Lilypond – why computer-engraved sheet music is so much more beautiful than “music notation” software

In it for the long haul

My posts so far have been far from daily, and are likely to stay so – Kev's revised terms of reference (100 posts within 365 days) are more likely to be met. It's a set of subjects that's all over the place, but that reflects the state of my mind and views and opinions and experiences.

This post is day 20 of my #100DaysToOffload challenge. Visit https://100daystooffload.com to get more info, or to get involved.

“Day 19: There are two sorts of applications – those where you can check your work into git and those where you can't.”

Before I start this post, a note for any non-technical readers: git is a toolkit for managing code. It allows you to track versions, make branches, tag releases and more. Many pieces of free/libre software have robust names, and git is one of them.

File Formats and Wire Protocols

Back in the twentieth century, before REST or XML or JSON was a thing, I remember a colleague had a .sig line “Look after the wire protocols and the file formats and the application will look after itself”. At the time everyone was “going OOT” which in the worst cases meant buying a C++ compiler to compile your C code. But the emphasis was on methods of writing software, functional decomposition, and only a little thought was given to APIs and file formats.

Some years ago I read a piece written by someone who'd worked at both Google and Amazon. He laid out what, as a developer, made Amazon different from the ground up: APIs were (and I suspect still are) EVERYTHING. The discipline around APIs comes directly from Jeff Bezos: if you write any component to go anywhere into the technology stack at Amazon, the API must be fixed and documented and published, usable both within Amazon and by partners. And the availability and stability and usability of services within the Amazon technology stack makes it scalable and modifiable. But that's not the primary subject of this post.

If it ain't plain it ain't text

This post was going to be about OpenSCAD. The break since my last post is because I've been going through the larval stage with OpenSCAD, designing a case for my Pi-zero/inky-pHAT room temperature display. I've tried various 3D modelling tools, and have plumped for OpenSCAD.

In OpenSCAD you don't draw the item you're modelling: you write a text file that describes it in terms of solid shapes and their relationship to each other. This means that if, for example, you add an opening in a shape, the change is simple enough to be recognised by diff.

So as I've been learning and using OpenSCAD, I've been adding and committing my work to git.

Git friendly file formats

Lots of pieces of software have file formats that are notionally an ASCII representation. Hell, a Microsoft Word document saved as .docx is a zipfile of a mess of XML files. My criterion for “git friendly” is this:

If you make a small change, running a diff should show just that change, ideally in one place.
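
To make that concrete, here's a minimal sketch in Python using the standard library's difflib, with two hypothetical OpenSCAD-style snippets (the shapes and dimensions are invented for illustration):

```python
import difflib

# Two versions of a hypothetical OpenSCAD-style description:
# the second adds a cylindrical opening to the solid.
before = """\
difference() {
    cube([60, 40, 20]);
}
"""

after = """\
difference() {
    cube([60, 40, 20]);
    cylinder(h = 25, r = 5);
}
"""

# In a git-friendly format, a small change shows up as just
# that change: one added line here, nothing else.
diff = list(difflib.unified_diff(
    before.splitlines(), after.splitlines(), lineterm=""))
added = [l for l in diff if l.startswith("+") and not l.startswith("+++")]
print(added)  # ['+    cylinder(h = 25, r = 5);']
```

An XML-heavy format fails this test when one logical change reflows attributes and ids across many lines, drowning the diff.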

Git friendly software

Here are a few pieces of software I use that are git friendly:

  • OpenSCAD [3d modelling with geometric shapes]
  • Lilypond [Music engraving]
  • LaTeX [Document preparation]
  • Splunk [Data analytics – everything in the GUI is written to a .conf file]
  • Ghostwriter [Hey, it's markdown!]

Here are a few pieces of software that are disappointingly git unfriendly:

  • Inkscape – A simple file with a few rectangles in it has a lot of elements

I'll update these lists as further thoughts come to me, marking the updated items. If anyone can suggest a git-friendly diagramming tool, that would be really appreciated.

This post is day 19 of my #100DaysToOffload challenge. Visit https://100daystooffload.com to get more info, or to get involved.

“Day 18: You get what you pay for.”

My ISP isn't terribly expensive, but doesn't get recommended on uSwitch or CompareTheMarket. But I moved to them when one of the major providers left me with a flaky service for two months.

It wasn't their fault. There was an intermittent fault in the cabinet that serves our village. I'm not au fait with the equipment that underpins domestic broadband but, in datacentre terms, there was a flaky line card in the cabinet. It was affecting the whole village: I was having to travel 25 miles each way to my office because I couldn't reliably work at home; the vicar was leaving emails from parishioners unanswered because he couldn't read them. Maybe one in three houses was having broadband outages.

Now, as consumers we have no way to interface with BT Openreach, who operate the exchanges and cabinets – our sole relationship is with our ISP, and the ISPs lodge fault reports with BTOR. So we had the farce of house after house being visited by BTOR, who checked the drop cables, declared that wasn't the fault – and charged the ISP £75 each time. My then ISP executed a KillSession on my connection and, when it went again, said “we've done it once, we now have to leave it while BTOR investigate”. I ended up writing to my MP: either BTOR were so useless that they couldn't identify an obvious pattern in their fault reports or, worse, they were leaving the fault unfixed so they could fill their boots with call-out charges to the ISPs – the fact that customers were left with no service wasn't a consideration. Eventually, one Sunday morning, everyone's broadband started working again and there were no more problems: they'd finally fixed the root cause after two months.

So I switched ISPs. The bunch I'd been with had run an IPv6 trial a couple of years previously, but hadn't taken it forward and had instead invested in carrier-grade NAT. If I wanted IPv6 I was going to have to go elsewhere and, after a little market research, I opted for Andrews and Arnold (www.aa.net.uk).

They give me what I want from an ISP. They:

  • Let me talk to someone knowledgeable when I need to talk (and answer emails for background queries)
  • Present their costs fairly and openly, with a choice of paying one-off fees up front and no lock-in or bundling them and choosing to be locked in
  • Let me see the exact condition of my connection

This last is unique: their router performs a layer-2 ping on my router every second and, from this and other metrics, I can see in real time and with up to 30 days' history:

  • The upstream and downstream bandwidth usage minute by minute
  • The rate of packet loss (if it is regularly more than zero for more than an instant I know, and they know, that something is wrong)
  • The average and maximum ping latency at any moment: this should be constant unless I'm really spanking the line

They also let me set some connection parameters: for instance, my router can be set to use 95% of the available bandwidth, meaning that I don't cause line congestion resulting in latency and packet loss (more important than ever in these times of VOIP and video conferencing). While other ISPs have varying levels of line contention in the back-end capacity, A&A's position is that contention rates aren't important, congestion is. And they endeavour never to be the cause of congestion to customers – if they regularly see it somewhere in the network they fix it.

I also like transparency when something does go wrong: a couple of months ago there was an unplanned loss of connectivity to some areas, lasting half an hour in the afternoon. As a customer I was able to see the trouble ticket and the updates as they worked on it – the next day I looked again, and the fault had been properly closed out (a route table poisoning issue that would no longer be possible under a firmware update already queued up for implementation).

They're no technical slouches either. As well as offering IPv6, I've been able to point Firefox to their DNS over HTTPS service, taking out the plain text transmission of queries. And privacy is taken seriously: I've not inspected it myself, but I believe them when they say “we don't log DNS queries”. All part of ambient privacy: “The details of your day to day life should not be a matter of record”.

And finally, there's some proper nerdery behind them as well: there is a record of their Friday afternoon experiment to see what sort of ADSL bandwidth could be got from a length of wet string – it turns out the answer is over 1Mb/s.

You can keep your BT Internet or Plusnet or Talk Talk or Sky – whatever the price comparison sites may tell me, it's A&A for me.

This post is day 18 of my #100DaysToOffload challenge. Visit https://100daystooffload.com to get more info, or to get involved.

“Day 17: Text input using a pointer only.”

Dasher isn't new. I first came across it over a decade ago, and it has progressed since then. But there hasn't been a new version since 2016 when the primary developer of version 5 died.

Dasher is a text entry system that uses a pointing device only. You steer through a wall of letters picking out words as you do so. It is predictive, so the “gates” for each letter are different sizes, making common words easier to navigate through.
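
The principle behind those differently-sized gates can be sketched in a few lines of Python. This is my own toy illustration with made-up probabilities, not Dasher's actual language model (which is far more sophisticated):

```python
def letter_gates(probs):
    """Partition [0, 1) into one interval per letter, with each
    interval's width proportional to the letter's predicted
    probability - likelier letters become bigger targets."""
    total = sum(probs.values())
    gates, cursor = {}, 0.0
    for letter, p in sorted(probs.items()):
        width = p / total
        gates[letter] = (cursor, cursor + width)
        cursor += width
    return gates

# After typing "th", a model might rate "e" as far more likely
# than "z" (illustrative numbers only).
gates = letter_gates({"e": 0.6, "a": 0.3, "z": 0.1})
```

Steering the pointer into the wide “e” interval takes far less precision than hitting the narrow “z” one, which is what makes common words quick to write.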

It's really hard to describe in words, so pop over to its home page at http://www.inference.org.uk/dasher/ and then come back for the rest of this post.

While it's an interesting diversion for those of us who can use a keyboard, it's those whose movement is severely impaired who can benefit most. With just a joystick, you are able to write text. At its most ingenious, it can be linked to a gaze camera so that someone totally paralysed and unable to speak can still communicate, provided they can control their eye movement.

I love free software. Instead of dying with its creators, it becomes part of their legacy.

This post is day 17 of my #100DaysToOffload challenge. Visit https://100daystooffload.com to get more info, or to get involved.

“Day 16: Wrongdoing as well as failure.”

Disclaimer: this post is written in a personal capacity and represents my own views.

I emailed my MP. I email him from time to time, and he always replies unless I clearly write “no reply required”.

I wrote this email because the Horizon affair is an embarrassment to government as well as to the Post Office and Fujitsu. I'm sure it will be kicked down the road and end up as an item in a catalogue of government IT failures. But I want him to know that the wool has not been pulled over my eyes: individuals in both the Post Office and Fujitsu knowingly gave false reports that resulted in convictions of innocent people.


I've seen the written statement from Paul Scully about the review into the débâcle with the Horizon system. ( https://www.parliament.uk/business/publications/written-questions-answers-statements/written-statement/Commons/2020-06-10/HCWS280/ )

I am disappointed in the terms of reference. They are written to find the corporate failures that resulted in these miscarriages of justice – which there doubtless were. Terrible, terrible failures with a faulty system being made live, and no independent reconciliation of transactions to verify they were true. A failure at the most basic level to ensure probity in a financial system.

But it's not OK to classify everything as corporate failure when there was also wrongdoing, wrongdoing where individuals knowingly presented false evidence that resulted in wrongful convictions. Those who explicitly perjured themselves have already been referred to the Director of Public Prosecutions (and please use whatever influence you have to ensure these referrals result in prosecution), but there was more widespread wrongdoing in both the Post Office and in Fujitsu.

The Horizon review needs to be an inquiry, and the terms of reference need to include the finding of wrongdoing. The resulting prosecutions would be complex, but are important.


This post is day 16 of my #100DaysToOffload challenge. Visit https://100daystooffload.com to get more info, or to get involved.

“Day 15: Peelian Principles Etc.”

In Britain we are not policed by the government. We are not policed by the Home Office, nor by a devolved government, nor by the council. We police ourselves.

While we are all responsible for maintaining law and order, most of the work is done on our behalf by professional police. They answer to a police authority, whose composition is set down in law. But the police authority is not answerable to government.

This is different to the army: the army is directly answerable to the crown (meaning the executive branch of government) and acts on its behalf. This is why the deployment of troops in Northern Ireland for decades was so significant: it represented that the UK government of the time could no longer rely on the population of the province to keep their own law and order. The police cannot impose order on wider society against its will.

Our policing is not perfect and, by that token, it reflects our society. There are occasional bent coppers, as there are crooks in our society. There are coppers who are “uniform fillers” – as there are underachievers in society. But the vast majority are decent people, who give their best in a difficult job. When you and I are running away from trouble, they are running towards it ... on our behalf.

The system works pretty well. In the memes of “Heaven is a place where...” they usually say “...where the police are British”. It's not perfect, just as wider society isn't perfect, but as long as they're running into trouble spots on my behalf, I take my hat off to them.

Thank-you for doing my job of keeping law and order on my behalf.

This post is day 15 of my #100DaysToOffload challenge. Visit https://100daystooffload.com to get more info, or to get involved.

“Day 14: A bit of Python, a bit of Splunk.”

I've been tinkering for a while with a Splunk development instance. It sits in AWS on a t3a.medium spot instance on a 50GB volume, and costs me about $0.55 per day. It's a bit of a kitchen sink where I try stuff out, and every few months I have to remember to renew my Splunk development licence.

I've also been tinkering with a Raspberry Pi 3B. Among other things, I attached a GPIO sensor from Pimoroni, which measures air temperature, pressure, relative humidity and gas resistance (which is an indicator of air quality).

I've got the RasPi forwarding the measurements to Splunk. Previously I'd installed a Universal Forwarder, but for these regular measurements it made more sense (and gave a learning opportunity) to send them using the HTTP Event Collector (HEC). I found a Python module to toss events to HEC, though from what I now know it would be just as simple to generate a payload for the requests module. So I set this up to take readings every 5 seconds and forward them as events to HEC – I was then able to produce a pretty little dashboard.
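
For anyone curious what the requests-based version might look like: this is a sketch of my understanding of the HEC events endpoint (POST to /services/collector/event with an “Authorization: Splunk <token>” header), not the code actually running on my Pi – the URL, token and sourcetype are placeholders.

```python
import json
import time

def hec_event_payload(readings, sourcetype="room_sensor"):
    """Wrap a dict of sensor readings in the HEC event envelope."""
    return {
        "time": time.time(),      # epoch timestamp for the event
        "sourcetype": sourcetype,
        "event": readings,        # the readings become the event body
    }

def send_event(session, url, token, readings):
    """POST one event to HEC, e.g. url =
    'https://splunk.example:8088/services/collector/event'.
    `session` is anything with requests.Session's post() shape."""
    return session.post(
        url,
        headers={"Authorization": f"Splunk {token}"},
        data=json.dumps(hec_event_payload(readings)),
        timeout=10,
    )
```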

Next I wanted to send them to Splunk as metrics rather than events: metrics were introduced in Splunk 7 and are typically 2-3 orders of magnitude faster for numerical data. After a bit of fnurgling I was able to lob the readings at HEC in a format suitable for a metrics index. My code for this end is not pretty: it's all at a single level with parameters hard-coded at the top of the file. The script also hangs every few days – I think it stalls when reading from the device, as the data stream to Splunk just stops.
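
The payload shape for a metrics index is slightly different from an event's. As I understand the single-metric format that came with Splunk 7, the envelope carries event: “metric” and puts the name and value into fields. Again a hedged sketch rather than my actual script, with the names invented for illustration:

```python
import time

def hec_metric_payload(name, value, dims=None):
    """Build the HEC envelope for one numeric reading, in the
    single-metric format that arrived with Splunk 7. `dims` are
    optional dimensions (extra fields you can split by)."""
    fields = {"metric_name": name, "_value": value}
    fields.update(dims or {})
    return {
        "time": time.time(),
        "event": "metric",   # marks the payload as a metric, not an event
        "fields": fields,
    }

# One payload per reading; the target index must be a metrics index.
payload = hec_metric_payload("air.temperature", 21.5, {"room": "study"})
```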

So I need to do some work on this code. It needs to be properly structured, and I'm going to add a periodic request to Dead Man's Snitch, so I get notified when it hangs.

I've also been tinkering with an eInk display on a separate Pi Zero. This now makes a request to Splunk's REST API (authorised with a token) and displays values on the eInk. This code is more recently written, so is somewhat better structured and has its parameters in a config file – it is single pass, so a cron job runs it every 5 minutes to update the eInk.
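
For the curious, the Pi Zero end amounts to running a search over Splunk's REST interface and picking the values out of the response. The sketch below reflects my understanding of the /services/search/jobs/export endpoint, which streams back newline-delimited JSON; the endpoint details and field names here are assumptions, not lifted from my actual script:

```python
import json

def parse_export_results(body):
    """Parse the newline-delimited JSON streamed back by
    /services/search/jobs/export (output_mode=json), keeping
    only final (non-preview) result rows."""
    rows = []
    for line in body.splitlines():
        if not line.strip():
            continue
        record = json.loads(line)
        if not record.get("preview") and "result" in record:
            rows.append(record["result"])
    return rows

# The request itself would look something like (illustrative):
#   POST https://splunk.example:8089/services/search/jobs/export
#   Authorization: Bearer <token>
#   data: search=<SPL query>, output_mode=json
sample = '{"preview": false, "result": {"avg_temp": "21.4"}}\n'
print(parse_export_results(sample))  # [{'avg_temp': '21.4'}]
```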

I've set up a Github repository and some of the code is in there. My intention is that I make it all presentable enough that I can offer it as a three component portfolio piece:

  1. The code to read the sensor and forward it to Splunk
  2. A Splunk app where all the relevant Knowledge Objects are held
  3. The code to run the display

I'll then also need to create an overarching README to show how it all hangs together, plus a README for each component describing how it works, and documentation strings once my code is properly structured.

While the problem space is relatively trivial, getting it from “Oh look, it works” to “Here's something I'm prepared to show people” is going to need quite some time. Hope to have that done before my 100 Days/Posts are up!

This post is day 14 of my #100DaysToOffload challenge. Visit https://100daystooffload.com to get more info, or to get involved.