hammertoe

A coffee-loving software developer.

So, last night was the culmination of about eight weeks of work on a side project called “Choirless”, part of an IBM-led competition called Call for Code. You might have seen me mention it a few times on Twitter.

Call for Code

Call for Code is about getting teams of developers together to tackle some of the major problems we face on this planet. For the last three years it has been about climate change. This year, as you'd imagine, there is a track for COVID-19. Not trying to 'cure' COVID-19 (IBM are doing work elsewhere to help with that), but tackling the effects of isolation on us: remote education, community cohesion, crisis communication.

In fact, I had even started writing a draft post about Choirless back at the start, but never got a chance to finish it as I was so busy working on the development!

This is the only bit from that post from the depths of my drafts folder that makes sense to salvage. It is about a kick-off event in the UK for Call for Code:

I had an inkling of an idea before the event started: a tool to enable “virtual choirs” — inspired by my 9 year old daughter. I originally wrote the idea up a few weeks prior and floated it by some colleagues to get their input, which was overall pretty positive. So the day before the UK labs challenge I started asking about on the various internal Slack channels at IBM if anyone wanted to join me. I was originally thinking that no-one would and I'd be building it myself... but then up popped Glynn and Sean. I know Sean as I've recently joined the London City Developer Advocacy team at IBM. Glynn I'd never met, but had heard his name a lot from colleagues. And so the night before, a team of three of us was suddenly formed.

We actually won that internal UK Labs Call for Code competition, and set our sights on the global competition.

What is Choirless?

Choirless is a musical collaboration platform built to enter Call for Code 2020. It allows music groups to create a video wall recording of a piece of music, where all of the individual submissions are captured separately on the performers' phone, tablet or laptop.

Choirless came about during the 2020 COVID-19 pandemic, as countries went into lockdown and social distancing prevented choirs, bands and other musical ensembles from meeting and performing in person. Video meeting platforms such as Zoom and WebEx proved useless for live collaboration because of network latency and audio optimised for speech.

Choirless aims to make it very easy for choir leaders to create songs made up of several parts (e.g. alto, tenor, soprano) and to organise choir members to provide renditions of a part. All of the contributed videos are stitched together into a video wall with no special equipment and without employing costly and time-consuming video editing software.

Alpha Release and Competition Submission

Fast forward eight weeks and I can announce that, on the day before the competition deadline, we just launched the first “alpha” release of it. That is, we did a private test in which we invited IBM colleagues and friends to take part in a virtual choir to sing “Yellow Submarine”.

Below is the video we created and submitted as part of our entry to the competition.

The Performance

It was the penultimate day before we had to submit our entry for the competition. We suddenly had a bold, crazy idea... let's do an actual performance. We've just about got the system working, so let's test it with actual people all the way through.

So far we'd tested various parts of the system, and we knew they worked. But we'd yet to test the entire thing end-to-end: letting real users loose on the site to record their own renditions and submit them, and having the servers convert, process, synchronise and stitch the pieces together with no human interaction at all.

Anyone who has worked on this sort of thing before knows this is where the problems lie. You think it all works and then someone comes along with a different web browser, or a different computer, or does something totally unexpected and it all fails.

If it worked, we'd have some up to date material for our submission video (above), and if it failed we'd no doubt learn a load of things and still have a fallback for the submission.

Sure enough, that evening we discovered a bug. For some reason audio was being heavily compressed and sounding awful, and we had no idea why all of a sudden. Sean, who knows more about video recording in web browsers than anyone should, stayed up working on it that evening. Just before we were all about to go to bed, “I think I've got it!” he wrote. And posted a video at 11pm of him on his ukulele in glorious full-quality sound.

So we were back in action again. We agreed to wipe the database clean from all the testing data that night, and first thing in the morning Glynn was going to get his guitar and record a new reference track for us to sing to. I had an interview in the morning with a copywriter who was writing a piece on Choirless.

The morning of the final day came.

Glynn uploaded his guitar pieces at 9am, and his vocal part. I recorded mine once I'd finished the interview, and Sean uploaded his. Just the three of us...

We announced everything on Slack at work, and tried to cajole as many people in as we could. I'd set up a notification system that would post updates into the Slack channel as pieces were uploaded.

Nothing.

No-one contributed.

Oh crap :( Why not? Have we misjudged this?

Then at exactly 4pm...

Our colleague Angela had a go. And contrary to her own assessment, her singing was perfect. She even added in a dolphin on a stick part way through. In just two minutes the system had taken Angela's contribution and added it to the mix.

Then 10 minutes later, not just one, but two came in at the same time. Daniel and Steve.

Then another. And another. And suddenly someone with a recorder. Then a clarinet. Someone working from their shed. Someone got their wife and kids involved. Then someone played the drums...

It. Was. Working.

The Slack status monitor was showing each new part as it came in live. And three minutes later we'd get a new rendering. This was a living performance evolving in front of our eyes.

Absolutely elated, Sean, Glynn and I were polishing off the last bits of the submission. Tidying up the codebase on Github, making sure we had README files for everything. We had to create a roadmap document showing where we were going. I edited the performance from that day into our submission video. Even whilst people were adding more to the performance!

And finally that evening, after checking things over for a third time, I hit submit.

This truly has been a project that has been more than the sum of its parts. Having, almost randomly, found Sean and Glynn, I can honestly say that this project would not be anywhere near as good without the contributions from all three of us.

So, Onwards! The submission is done. Our fingers are crossed we win. But development continues. We hope to have a public beta out for people to try themselves in the next month.

If you want to read more about the technical details of Choirless, and the technology behind it (including some great services from IBM Cloud), you can do so on our development mini-site:

https://choirless.github.io/

And for Coil subscribers, below is the full video of the performance.


In a fairly light-hearted conversation today about Bitcoin speed someone made this comparison:

to which I replied:

They were not happy with that response.

But the point is that they were actually right: we didn't stick with 56k modems. We did move on to newer technology.

Now let's get slightly technical for a bit to understand why the Bitcoin-modem analogy is both so good and also so bad (for Bitcoin).

The Open Systems Interconnection model (OSI model) defines networks as 7 'layers'. Each layer builds on top of the one below it. So for example the bottom layer, layer 1, is the physical medium: fibre-optic cable, copper wire, radio waves, carrier pigeons. Building on top of that we then have additional layers concerned with the way information is transmitted. Layer 3 is where IP sits, the protocol that computers on the internet use to address each other and exchange data. Right at the top is layer 7, which is where things like HTTP, the protocol used specifically by the World Wide Web, sit.

So what?

Well, the point being that each layer only needs to know about the layer immediately below it and no others. This gives a level of abstraction which means that you can replace a lower layer and the layers above it stay the same. This is why the internet (IP, layer 3) continues to work the same even as the physical medium (layer 1) changed from copper wires to fibre-optic cables to radio waves. And why layer 2 could change from 56K modems to ADSL to Ethernet.

Right now my web browser is sending data to Coil over the internet and has no idea whether it is going over wifi, or fibre, or copper. In fact, on the journey from my laptop to Coil's servers it will no doubt go over all three of those physical media.

So we can change the lower layers, and the upper layers remain working the same. That is the whole point. As technology improves to allow us to move data faster we can upgrade the infrastructure of the network and the applications and the users at the top are completely unchanged and unaware.
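To make that abstraction concrete, here is a toy Python sketch. The class and function names are purely illustrative (mine, not from any networking library): the 'application' layer talks to a transport interface, and either medium can be slotted in underneath without the application changing.

```python
from abc import ABC, abstractmethod

class Transport(ABC):
    """The layer below: how bytes move. The application never looks inside."""
    @abstractmethod
    def send(self, data: bytes) -> bytes: ...

class CopperModem(Transport):
    def send(self, data: bytes) -> bytes:
        return data  # slow, but the bytes arrive intact

class FibreOptic(Transport):
    def send(self, data: bytes) -> bytes:
        return data  # faster, same contract

def http_get(transport: Transport, path: str) -> str:
    """The layer above: speaks HTTP, knows nothing about the medium."""
    reply = transport.send(f"GET {path}".encode())
    return reply.decode()

# The application code is identical whichever medium carries it
assert http_get(CopperModem(), "/index.html") == http_get(FibreOptic(), "/index.html")
```

Swap the transport, and `http_get` neither knows nor cares: that is the whole trick the internet has pulled off for thirty years.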

What about Bitcoin?!

You may have heard Bitcoin referred to as a “Layer 1” solution. And things like Lightning Network or Liquid referred to as “Layer 2”. Now, these are not the same layers as above, but it is a similar model. Layer 3 might perhaps be DeFi (as a concept), and Layer 4 might be a specific application, e.g. a loans service or trading exchange.

So what happens when we need our DeFi network to run faster? We swap out the lower layer for something newer or faster, right? Oh.... layer 1 is Bitcoin. So what do we swap that with? We could put something like XRP in its place. It achieves the same thing (moving value from A to B without a central authority) and our loans service could work the same.

So, yes, we can improve things and advance similarly to the way we did with the internet... but that actually involves replacing the lower layer (Bitcoin) with something else.

Adding Layers

But doesn't Lightning Network solve this? Well, not really. Lightning Network adds additional functionality on top of the lower (Bitcoin) layer... but you still have the lower layer. You still have Bitcoin, plodding along at 7 transactions per second and 10 minutes per block. So the only way in which Lightning can improve things is if you somehow sacrifice or compromise on some of the functionality of layer 1. In the case of Lightning, it does this by creating temporary IOUs that you pass about and 'settle up' later on. Well, that might be great, but you still need to settle up at some point. Otherwise all you are doing is using a system of IOUs and trusting the other parties. Which is against the whole ethos of what cryptocurrencies are about.

Now, you might be fine with that. You might say that trusting someone or something for a short period of time is a compromise you are willing to make such that you can pay for your coffee in due time. But again, if you are going to do that, why bother with Bitcoin at all?

At some point you are going to want to settle up, and that involves a transaction on layer 1. And this is the problem. If Lightning Network takes off and becomes widely used, then everyone using it will still need to open and close channels on the underlying network. And if they can only do that at 7 transactions per second, then you are going to be in for a long wait. And your coffee is still getting cold.
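To put a number on that wait, here is a back-of-envelope calculation. The user count is a pure assumption of mine; only the 7 transactions per second figure comes from above.

```python
TPS = 7                        # Bitcoin's rough on-chain throughput
users = 1_000_000_000          # hypothetical global user base (an assumption)
opens_per_user = 1             # just one channel-open transaction each, ignoring closes

seconds = users * opens_per_user / TPS
years = seconds / (365 * 24 * 3600)
print(f"~{years:.1f} years just to open the channels")  # ~4.5 years
```

Even under these generous assumptions, that is roughly four and a half years of the chain doing nothing else at all.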

But... but... I'll have already opened a channel, and will keep it open and never ever close it. Well, great, what you are saying is that you will just rely on a series of IOUs and never settle them. You'll never record those transactions in a global, immutable, decentralised ledger.

So, again, why bother building it on Bitcoin in the first place?

Header image: John Barkiple on Unsplash

21st June 2020

So DEV is now Web Monetization-enabled! DEV is a site dedicated to software developers and developer advocates. Think of it a bit like the Coil blogs, but dedicated entirely to software development.

Great!

We are used to Web Monetization here on Coil, but how does it actually work and where did it all come from?

Let's try and do a bit of a recap of the history leading up to the current point in time. I feel we are about to “cross the chasm” with Web Monetization, but it has been the result of a long journey of technologies, people and standards. And forgive me if any of this is not quite right, corrections welcome ;)


So you might have just seen the news over the past day about PayID – an initiative to create a universal payment identifier. It is actually a very simple concept and technology: a standard by which a payment service or digital wallet can look up destination payment information from a human-readable handle. So in my case it maps matt$quernus.co.uk to my XRP wallet address X75nEw5QD8Ej8jWt7EkJXHoVAV9YCtjuUSJppADpNtPKdim. Think of it like a distributed address book.

But being so simple means it can be very easily adopted by websites and wallet providers. Any entity, individual or company that controls a domain name can implement a PayID pointer. For example, I wrote up a simple howto for more technical people who want to get started straight away.

But what about less-technical people? Well the pieces are all in place for 3rd parties to start implementing PayIDs for you. And with social media this is where it starts to get interesting...

You already have an identity on Twitter, for example. My id on there is @hammertoe. So it would be easy for Twitter to map hammertoe$twitter.com to some payment address information I specify. What if I could put in my Twitter settings a payment destination? Then anyone who knows my Twitter handle, automatically knows my address to pay me.

But what address would I put in there? I could put in my XRP wallet address as above. I could put in my SWIFT bank routing details if you wanted to pay me from a bank account. I could put in multiple addresses of different types, as PayID supports this.

But... there is something even more interesting... the PayID standard also supports ILP (Interledger) addresses. And ILP is a means to actually pay people on different networks (as opposed to PayID which is just about finding someone's payment address).

So what if sites knew an ILP payment pointer for me and could put that in my PayID? Well some sites already do know that. This very website you are reading on, Coil, knows my ILP Payment Pointer as that is how they pay me. So they already have the information to set up hammertoe$coil.com as a PayID. Similarly DEV just recently implemented Web Monetisation and so you can put your ILP payment pointer in your settings there. So they have all the information to set up a PayID for hammertoe$dev.to.

A colleague of mine said last night, I can see this being useful on Github. Yes! Indeed. Github is a collaborative site for software development. Pretty much any software developer has a profile and account on there. What if you commissioned some work and needed to then pay the developer for it? Easy! Just send the payment to me at hammertoe$github.com.

So what is enabled by a very simple “phone book” could really be quite remarkable.

To the future!

[ Cover Photo Brooke Cagle on Unsplash ]

What is PayID?

So you heard the great news about PayID? It is a universal payment identifier to be used with both traditional banking and cryptocurrency accounts, providing a simple, easy-to-use ID for payments.

It was launched today, and there has been quite a lot of coverage about it:

Want to pay me? My PayID is matt$quernus.co.uk. Is that a bank? XRP? Bitcoin? Who knows? Who cares! But you can pay me by sending a payment to that ID. And the great thing is that it is backed by a whole load of companies.

Setting up PayID

There are some great instructions for setting up a PayID server here: https://docs.payid.org/

But! What if you don't want to run a server to do it? What if you just want to statically configure an entry or two and you happen to run your own server and have your own domain?

Here is how to set up a PayID on an Apache server to serve up a simple static PayID file. In this case directing to an XRP wallet.

The convention for the lookup is that matt$quernus.co.uk is rewritten to an HTTPS request to https://quernus.co.uk/matt, i.e. the local part before the $ is put on the end as the path.
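That convention is trivial to express in code. A minimal Python sketch (simplified: real PayID parsing has a few more rules, so treat this as illustrative only):

```python
def payid_to_url(payid: str) -> str:
    """Map a PayID such as matt$quernus.co.uk to its HTTPS lookup URL."""
    user, domain = payid.split("$", 1)  # local part before the $, domain after
    return f"https://{domain}/{user}"

print(payid_to_url("matt$quernus.co.uk"))  # https://quernus.co.uk/matt
```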

First: You need to configure Apache. I'll include the full virtualhost directive below for completeness, but the PayID bit is just the last 3 lines below the comment.

https://gist.github.com/hammertoe/a2af750262b7c210f536a0bc9481fc23

The first line turns Apache's rewrite engine on. You may already have this. The second line is a rewrite condition that means it will only catch requests with a specific Accept HTTP request header. The last line is the actual rewrite rule and will fetch the requested file with .json on the end.
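For readers who can't follow the gist link, the three lines described above look something like this. Treat it as a sketch: the exact Accept value (application/payid+json here) and paths are from memory of the PayID docs, so check them against the gist and the spec.

```apache
# PayID: serve /.pay/<user>.json when a PayID client asks for <user>
RewriteEngine On
RewriteCond %{HTTP_ACCEPT} application/payid\+json
RewriteRule ^/?(.*)$ /.pay/$1.json [L]
```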

So I then have a file /var/www/htdocs/www.quernus.co.uk/.pay/matt.json that contains:

https://gist.github.com/hammertoe/db2279254c2a6d08ad0419d797632227

You can look up the full format of this file in the PayID docs site above, but in short the above details an XRP account on the mainnet of the XRP Ledger.
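From memory of the PayID schema, the matt.json file would look roughly like this (the field names are as I recall them from the PayID docs, so verify against the spec; the address is my real XRP address from above):

```json
{
  "addresses": [
    {
      "paymentNetwork": "XRPL",
      "environment": "MAINNET",
      "addressDetailsType": "CryptoAddressDetails",
      "addressDetails": {
        "address": "X75nEw5QD8Ej8jWt7EkJXHoVAV9YCtjuUSJppADpNtPKdim"
      }
    }
  ]
}
```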

Demo of PayID in Xumm

Below is a demo of this excellent wallet Xumm, one of the first to support PayID, looking up the PayID setup above.

In agile software development there is a concept called the “definition of done” (DoD): “when all conditions, or acceptance criteria, that a software product must satisfy are met and ready to be accepted by a user, customer, team, or consuming system.”

DoD is something that is agreed upon at the start of a project by everyone. For example, something is not done just when the code is written, but when the code is written, documented and the tests to ensure it works are complete.

I recently stumbled across a cybersecurity researcher and blogger called Lisa Forte, who has just started a new podcast called Rebooting.

Her first episode she interviewed climber and adventurer Kenton Cool:

I started watching it when it first came out a couple of weeks ago, and remarked on the hilarious line from Lisa: “Do you remember your first ever Everest climb?”... 😂😂😂 I mean, the guy has climbed Everest 14 times, so I guess it is valid, but it did make me chuckle with the ordinariness of the question.

For some reason I didn't manage to watch it all the way through at the time; I got distracted with helping out my daughter with something. But yesterday I remembered it and went back to finish the episode.

One of the key things that hit me was part way through in which Lisa says:

...that's the kind of weird thing about mountaineering, isn't it? That for a lot of sports you know the finish line is the main goal that you think about. You dream about the finish line which is the end of the race or the end of whatever, right? But with mountaineering you're thinking about a summit and really you're thinking about the halfway point

to which Kenton replies:

...but then it comes out back to goal setting. So how do you set the goal? How do you set out the vision? How do you communicate that? If everybody on the team is on board and invested in in where the finish line actually is — the finish line is through the front door. At home.

This highlights the differences in perceptions. To most people outside the climbing world, yes, reaching the summit is the obvious goal. But it isn't to those involved. Getting back down is. Getting back through that front door to see family and loved ones again. That is what Kenton's job as a guide is.

It is a great interview, and some great insights and stories there. I'm looking forward to catching up on Lisa's other episodes she has done.

So remember. What is your goal? What does “done” look like? What does “success” look like?


This is a write up of a live coding session from my show “ML for Everyone” broadcast on the IBM Developer live streaming Twitch channel every Tuesday.

This session was an attempt to train a neural network to detect the sentiment of tweets. Specifically I wanted it to be able to detect joyful tweets for a hackathon project.

This is a follow-on from the previous session, in which I used an existing sentiment analysis service, IBM Watson Tone Analyzer, to detect sentiment. Using that service was nice and quick to get going, but it only allowed me to send one tweet at a time, which resulted in the service being quite slow, or me hitting its rate limits. So this is the beginnings of creating my own simpler version of that service.

The full video of the streaming session is below:

Session recap

In this session I used IBM Watson Studio to analyse the content of around 800,000 tweets I downloaded from Twitter. Each tweet contained one of the words: joy, anger, angry, happy, sad.

The goal was to create and train a neural network using Keras, a high-level Python API, to learn what a 'joyful' tweet might look like.

The basic steps of the process were:

  1. Download a selection of tweets, about 800,000 in total from Twitter's API
  2. Categorise those tweets as either 'joyful' or 'angry'. I used a pretty crude regular expression match for this.
  3. Tokenise the tweets, using a tokeniser in the Keras preprocessing package that splits the words up and lowercases them
  4. Download pre-trained “word vectors” that represent words in tweets as 100-dimensional vectors.
  5. Create a neural network consisting of two LSTM layers (ideal for learning word sequences) with dropout layers to prevent overfitting.
  6. Load the word vector from above into the embedding layer of the network
  7. Train the network on the processed tweets
  8. Evaluate the network performance with a few real world examples
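As an illustration of step 2, here is a minimal Python sketch of that crude keyword categorisation. The patterns are hypothetical; the exact expressions I used on stream may have differed.

```python
import re
from typing import Optional

# Hypothetical patterns; the real ones from the session may differ
JOY = re.compile(r"\b(joy|happy)\b", re.IGNORECASE)
ANGER = re.compile(r"\b(anger|angry|sad)\b", re.IGNORECASE)

def categorise(tweet: str) -> Optional[str]:
    """Crudely label a tweet as 'joyful' or 'angry' by keyword match."""
    if JOY.search(tweet):
        return "joyful"
    if ANGER.search(tweet):
        return "angry"
    return None  # tweets matching neither are discarded

print(categorise("So happy today!"))  # joyful
```

Crude labelling like this inevitably mislabels some tweets (sarcasm, negation), which is part of why the trained model's confidence scores are so soft.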

Conclusion

Well, it seemed to work. Looking at the examples we tested on we got:

“I love the world”: 53% joy; 47% anger

“I hate the world”: 22% joy; 78% anger

“I'm not happy about riots”: 45% joy; 55% anger

“I like ice cream”: 63% joy; 37% anger

The next steps will be to take this trained model and deploy it as a service such that we can then query it from the Joyful Tweets application.

The full python notebook is below:

I hope you enjoyed the video. If you want to catch them live, I stream each week at 2pm UK time on the IBM Developer Twitch channel:

https://developer.ibm.com/livestream/

15th June 2020

One of the services offered by IBM Cloud is the IBM Watson Tone Analyzer service. This is a service that allows you to send a document of text to it and it will analyse the text and return the various tones in the text. It can detect the following tones in the text: joy, fear, sadness, anger, analytical, confident and tentative.

As an idea to have a play around with this, and also for something to submit to the Grant for the Web / DEV hackathon, I decided to see if I could create a simple website that analyzes your tweets for tone.

The video below is longer than my usual shows, at nearly 3 hours, as I went right from start to end, building the entire site to analyse and display the tweets.

https://www.cinnamon.video/watch?v=333183211095983753

The code for this session was written in Node.js (JavaScript) and built using the following technologies:

Using the Tone Analyzer API

The following code shows the main function that connects to Twitter, fetches the tweets and analyses them, comments on the process are inline:

https://gist.github.com/hammertoe/4b1dc236a9db1a2208bb778268efbe19

The full code for this service can be found in the JoyfulTweets Github repository:

https://github.com/hammertoe/JoyfulTweets

Issues in Development

Towards the end of the show I started to hit a few issues. Firstly I was using a free instance of the Tone Analyzer and that only allows a certain number of calls per month, and with all the testing I managed to hit that. I then moved it to a paid tier to avoid that, but then started to hit internal rate limits in the service as I was making the calls for each tweet in parallel. So the service was getting 30 requests at once, and returning HTTP 429 Too Many Requests. I attempted to use a rate limiter extension to Axios, but it still seemed to be hitting it.
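The show's code was Node.js with Axios, but the usual fix for HTTP 429 is the same in any language: retry with exponential backoff. Here is a hedged Python sketch of the pattern (the function names are mine, purely illustrative):

```python
import time

def call_with_backoff(fn, max_retries=5, base_delay=1.0):
    """Retry fn() with exponential backoff while it reports HTTP 429."""
    for attempt in range(max_retries):
        status, body = fn()  # fn returns (http_status, payload)
        if status != 429:
            return body
        time.sleep(base_delay * 2 ** attempt)  # wait 1s, 2s, 4s, ...
    raise RuntimeError("still rate-limited after retries")
```

In practice you would also honour a Retry-After header if the service sends one, rather than guessing the delay.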

In the next show I created my own version of a tone analyser by creating and training a neural network on a sample set of tweets. Then in the show after that I deployed it as a service to use instead.

End result

I got a basic version working by the end of the show, but then afterwards spent a bit of time adding some more styling to the page to make it look a bit nicer and getting the tweets to lay out nicely.

I hope you enjoyed the video. If you want to catch them live, I stream each week at 2pm UK time on the IBM Developer Twitch channel:

https://developer.ibm.com/livestream/

So, one of the teams I'm on at work started doing this thing called “Five for Friday” in which you have to list 5 things on a topic. So last week was food, etc. This week was “work”. I went a bit overboard and did 8 items. But thought it might be interesting to publish wider:

  1. As a young teenager, living in Virginia, I tried to start a business as a babysitter. God knows why. But I went around and stuck flyers in everyone's letterbox. The next day I got a visit from the police. Apparently that is illegal to do in the US. How the hell should I know?!

  2. I went to quite a posh boarding school in the UK (my parents living overseas). And I started brewing beer/wine in my dorm. I used to take jam, honey and juice from the dining hall and ferment it. I built a brewery in my room, hidden behind a panel in the wall. One day I got busted as I had sold some to some kid and he got very drunk and ratted me out. I got a technical suspension for 2 weeks. My final year report from my housemaster said “I don't know what Matthew is going to go on and do in life, but he will find a way to make money somehow”.

  3. I lived in Albany, NY between (high)school and university. Was just starting out with the internet. 9600 baud modem and a floppy with Trumpet Winsock on. The small ISP in the area, AlbanyNet, moved office to across the road from me. They mentioned they were looking for support staff. One day I cooked a big batch of chicken wings and wandered over and said “Hey guys, welcome to the neighbourhood, want some wings? Oh and if I do a few hours tech support for you in the evenings will you waive my internet bill?”. They said yes. I then learned about the internet, introduced to FreeBSD, SunOS, Cisco routers, modems etc.

  4. From above, I then told the company I was working for as a general office dogsbody (an aviation company that chartered aircraft around the former Soviet Union) that I could save them money on their $10,000/month email (Sprintmail) bill. I proceeded to work out how to tunnel TCP/IP over PPP over X.25, and set up a Cisco router and our own 56K leased line. I switched them from Sprintmail to Eudora and from X.400 to SMTP/POP3, their bill dropped to $2,000/month, and they could still dial up to a local POP anywhere from London to Yuzhno-Sakhalinsk.

  5. A chance encounter in a lift (elevator) with a fellow student at university resulted in me joining him to form “Netsight”. We grew from two guys in a bedroom to 16 people over 15 years. From being a domain name reseller to building and developing intranets for corporations. We ran our own datacentre in Bristol as well. We were one of the longest running “new media” businesses in Bristol.

  6. At a conference I met a guy who convinced me to leave Netsight and join him at a Los Angeles-based startup, building a health, nutrition and fitness tracking system. The guy who hired me turned out to be actually crazy – he had been keeping the devs away from the CEO (“she is a very important woman, you must never disturb her”) and her away from us (“The devs are very busy, don't ever disturb them”). One night he went out in LA, blew $60k on the company AMEX and was fired. I then took over and was in charge of the mobile development team. Alas, the company ran out of money a couple of years later and suddenly pulled the plug.

  7. I learned how to make cryptocurrency trading bots and was going to be rich from that, then the market crashed.

  8. I saw a tweet from someone I'd met at a Python developers conference 10 years prior, looking for developers. He worked for a startup that got taken over by IBM. Which is where I still am, 2 years later.

[Photo by Tony Hand on Unsplash]

And an extra 9th item for Coil subscribers:


So I had an idea for the DEV / Grant for the Web hackathon that is running until the end of this week:

There are a lot of pretty bad things going on in the world right now. Whilst not wanting to ignore or minimise them, sometimes it would be nice to just take a break for our own mental health. Wouldn't it be great if we could look at just the “joyful” tweets in our Twitter timeline?

So that is what I built! :)

Full details of the build, along with a link to a quite long (2.5 hour) live coding session are over here:

https://dev.to/hammertoe/joyful-tweets-3jpf

The site is monetized via Coil, and is just a demonstration of how easy it is to add web monetisation to a website.

It doesn't always get it right... in fact it can sometimes get it spectacularly hilariously wrong:

Give it a go yourself!

Head on over to the site at:

https://joyfultweets-daring-wildebeest-eq.eu-gb.mybluemix.net/login

And it will redirect you to log in via Twitter and then bring you back and show you a filtered version of your tweets... hopefully a bit more joyous than before.
