whydoitweet

I'm a Lecturer in Computer Science at Brunel University London. I teach games and research simulations. Views are my own :).

Last Wednesday I fully ruptured my Achilles tendon, which has changed a lot of things in my life. In this post I'll say a short bit about it (I don't want to make a meal out of it).

The Issue

It happened while I was playing in goal at football: I walked out of the goal and then it struck, like a whip snapping just above my left heel. Initially I hoped it was a partial rupture of my Achilles tendon, but two days later I found out it was a complete rupture. I suppose it looks like this inside, except that mine is fully ruptured:

Courtesy of BruceBlaus, Wikimedia Commons.

The Cause

The cause is very unclear. Apparently I'm in a risk category, being 38 and an active goalkeeper, but it has been very hard to locate the precise trigger. It could be something I did earlier in that football practice that I have forgotten about, or it could be that I had cycled for a few days with a saddle that was too low. Or it could be something else altogether.

The Treatment

I have a so-called equinus cast, which I can't walk on and which points my foot somewhat downward. The idea is that the tendon ends have some time to reattach, after which I will get a cast at a slightly different angle, and eventually a boot. And at some point I will get quite extensive recovery therapy.

The Future

Well... if the lockdown forced me to take things a notch down, I can assure you that this injury forces me to take things another five notches down.

Aside from many types of exercise, there are only a few things that I cannot do at all, such as clipping my left toenails ;). However, a lot of the things I can do go slower, are riskier and cost more energy. So I try to pick my battles, doing only a subset of the tasks I would normally do, but trying to do them as well as I normally would.

Showering in particular has turned into a limbo-esque experience. Will the cast get wet? Can you keep your grip while you wash your hair? Do you remember that the bathroom floor is still wet when you get out? Anyway, so far so good after one attempt ;).

In terms of recovery, I will just have to wait and see. At the moment I'm mainly trying to figure out whether I'm getting the right treatment, and both how I can support my own recovery further and be of use to the household and workplace.

What if you get it?

If you ever happen to get this injury, make sure to have a look at AchillesBlog.com.


Two days from now, it will be the first time I teach an in-person lab session since Covid-19 hit the UK.

Now there is a lot of discussion about whether in-person teaching is a good thing or not, and here in the UK the unions are already making a move to get rid of it while the pandemic rages on and test and trace isn't fixed. But frankly it's not up to me to decide whether to teach in person or not, as those decisions are normally made by the Upper Echelons of University Management, who often go by excessively prefixed and Latinized job titles, because a simple title like “Director” simply isn't popular enough.

In this post, I just want to summarise the thoughts and opinions I have been having about this all, and share those with all of you for your consideration. I am happy to see my views corrected or trashed, and Twitter or Mg.Social would be the place of choice to do that :).

Will teaching in person worsen the Covid-19 spread?

After reading around this topic for several months now, I think the answer to that question is a clear YES. However, how much it worsens the spread depends on the precautions taken during the teaching:

  • Social distancing helps, if the room you're in has decent ventilation or you're outdoors.

  • Masks help, if you actually wear them properly, covering the nose as well, and talk and cough into them (am I the only one who frequently sees people talking into their phones with their masks pulled down?)

Do visors help?

Many people and universities clearly think they do, but a recent article using a supercomputer calculation claims that they're largely ineffective at preventing spread.

In my opinion, any kind of barrier should help reduce the spray of stuff when you speak (as an aside, house walls work particularly well, but laying down bricks and mortar halfway down the pub table might not be the most charming way of infection prevention). But of course a visor doesn't quite seal off the face as well as a medical-grade mask does. So relying on a visor alone probably isn't a good idea?

If you wear a visor, then I think you'd need to pair it with a mask :). Source: Wikimedia Commons.

How does one debug code while social distancing?

In England we have to keep a social distance of 2 meters. But the main purpose of having Computer Science lab sessions at all is to help students debug their broken programs.

And that debugging involves operating a keyboard and a mouse, whilst looking at the student's screen. An alternative could be to remotely control a student's computer, but that opens up a whole new can of (cybersecurity-related) worms.

Come to think of it, perhaps we should all have our own keyboard and mouse at least, and just plug them into workstations when we help students out? A good screen I can view from a 2 meter distance, but I lack the implements to operate a mouse over 2 meters ;).

Reminiscing about the childhood cartoons I posted about a while ago, Inspector Gadget surely would be an asset for socially distanced investigations... Source: *pixy.org*.

My closing thoughts

I'm not fundamentally afraid of running an in-person lab session, but I do prefer to be on the safe side of things, because I want to be sure I've done all I reasonably could to prevent any possible latent infection on my side from propagating to the students. Can you imagine what it would feel like to realise you were the asymptomatic superspreader at the source of some massive Covid-19 outbreak?

I have also recommended students to wear a mask, but I fully understand it can't be made mandatory, as some people are physically unable to breathe well through a mask, or have psychological difficulties wearing one.

I also do see the fundamental benefit of running a physical programming lab session in a very select set of circumstances: when people cannot do their work otherwise, or when they are stuck with a problem so intricate that it can't be resolved online.

In the end, there are two things that really matter to me in this context:

  • I don't want students to be robbed of in-depth support when they need it.
  • And I don't want to expose students to any unnecessary risk of contracting Covid-19, given that about 0.5–1% of those infected do not survive the disease, and a much larger percentage ends up with long-term complications.

Lastly, for the teachers among us, I found this quite useful Conversation article on how to effectively teach with a mask.


This is a short introduction to the Covid-19 simulation code we've been working on for the last 6 months. This post is actually a mirror of a blog post published on the Software Sustainability Institute website, so that there's an introduction available here too :).

The Covid-19 outbreak has greatly changed our lives, and authorities worldwide are using public health measures, such as lockdowns and enforcing mask use, to try to mitigate the pandemic. In recent months, these measures have increasingly been enacted in local areas, to limit the negative side effects in communities where the spread is limited. Many of these decisions are guided by the outcomes of observations, as well as so-called Susceptible-Exposed-Infectious-Recovered (SEIR) models that operate on a national level.

At Brunel and within the HiDALGO project, we developed the Flu And Coronavirus Simulator (FACS) to try and support the NHS and others in understanding the viral spread. FACS is an open source simulation tool that models the spread in local areas, to supplement e.g. CovidSim [http://covidsim.eu/] and many other codes that work on a more national level.

*Example of locations as we extracted them from OpenStreetMaps for Facs (London Brent Borough) (source).*

The Flu And Coronavirus Simulator Software

We started working on FACS during the March lockdown, loosely deriving it from concepts we used in the Flee migration modelling code. FACS has been developed in the open from the start, and we frequently provide justifications for algorithmic changes either directly in the code or as part of raised GitHub Issues. The code is young and rapidly developed, which has both advantages (for what it covers it's relatively small, simple and easy to modify) and disadvantages (it's more prone to have mistakes and is less polished than many older codes).

FACS uses OpenStreetMap data to extract buildings and residential areas within the region, places agents in households throughout the borough, and adds other locations such as supermarkets, leisure facilities, hospitals and schools (see image). It then uses an agent-based modelling algorithm where each person is represented as an agent with a home in the borough, and with certain needs. For instance, an agent may want to shop at a supermarket for one hour per week or to reside in the office for 30 hours per week.

Every day, the code then randomly books visits for these agents to nearby locations, based on their needs. Once all the visits for a given day are booked, FACS then uses an equation to determine how likely it is that infectious visitors in that location are going to infect susceptible ones. A first version of the equation is in the paper, but we actually managed to discover a better equation during a recent scientific group discussion.
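To make the daily loop above a bit more concrete, here is a rough sketch of one simulated day. This is my own simplified illustration, not the actual FACS code: all names (`simulate_day`, `base_infection_rate`, the agent/location dictionaries) and the infection formula are placeholder assumptions.

```python
import random

# Hypothetical sketch of one simulated day, loosely following the text
# above. Names and the infection formula are simplified placeholders,
# not the actual FACS implementation.

def simulate_day(agents, locations, base_infection_rate=0.05):
    # 1. Book visits: each agent picks a location matching one of its needs.
    for agent in agents:
        need = random.choice(agent["needs"])    # e.g. "supermarket"
        venue = random.choice(locations[need])  # a nearby candidate
        venue["visitors"].append(agent)

    # 2. Resolve infections per location, then reset for the next day.
    for venues in locations.values():
        for venue in venues:
            visitors = venue["visitors"]
            infectious = sum(1 for a in visitors if a["state"] == "infectious")
            for a in visitors:
                if a["state"] == "susceptible":
                    # toy probability: scales with the share of infectious visitors
                    p = base_infection_rate * infectious / max(len(visitors), 1)
                    if random.random() < p:
                        a["state"] = "exposed"
            visitors.clear()
```

The real model books visits according to need hours and uses a location-specific infection equation (see the paper), but the overall structure, booking first and then resolving infections per location, is the same.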

Any good simulation study of Covid-19 spread should include runs with differing assumptions and possible scenarios. To facilitate this for FACS, we developed a plugin for the FabSim3 tool, named FabCovid19. We used FabCovid19 to easily run different scenarios for different boroughs with one-liner bash commands, and we are also currently using it to investigate how sensitive our forecasts are to some of our main assumptions.

Insights

Using FACS, we can estimate the spread of infections and hospital arrivals for different scenarios, such as when authorities decide to close schools locally or when they require people to wear masks. At the time of writing we have trialled the model for eight different boroughs in London, and though not all models are realistic yet, we do see an overarching trend: recent versions of the code produce a second wave of Covid-19 infections which hits somewhat more gradually than the first one (assuming no additional public health interventions are imposed).

*Illustrative estimate of number of infectious people across time in three London boroughs without any lockdown measures (left) and a more realistic scenario with lockdown measures enacted and maintained (right). The x axis is the simulation time in days ranging from March 1st (left) to August 28th (right). Also, note the different labels on the y-axis. FACS predicted a relatively early second wave at the time (we ran this in June), hence we labelled our code as “pessimistic” upon its initial release. (source)*

Reuse

We made FACS available under a BSD 3-clause license, and at the time of writing we are reusing the code to model Covid-19 spread in Madrid, while a research group at NUST in Pakistan is reusing it to build a model for Islamabad.

The code is rather simple to install and may take anywhere between 10 minutes and a few hours to run for most cases. If you would like to try out the code for yourself, you can either run some of the examples we provided (see here), or follow our guide for preparing your own simulations.

Closing thoughts

I realised that these code introduction blog posts are quite useful as a general quick reference, so I decided to make this part of a new series called Code Introductions.

Credits

Header image also comes from the paper :).

Just a short post here. After 3 weeks of holiday we find ourselves in 2 weeks of quarantine with the kids (aged 2 and 5), in accordance with UK law.

It's the second time we self-isolate: the first time was in mid-April when my wife had clear Covid-19 symptoms.

Apparently, even back then (early May in this instance), adherence to the self-quarantining policy was staggeringly low, with only about 20% of households supposedly sticking to it.

The second quarantine is actually a very different experience from the first. I think we're much less anxious about it, but a bit more annoyed and frustrated, because around us society is functioning pretty close to normal while we're klutzing around to get our groceries delivered correctly. Also, none of us has Covid symptoms currently (although we do appear to have some very minor other throat issue going on in the household).

Of course, the paper above about the 20% compliance isn't giving a great feeling either, and a lead government advisor setting a bad example further feeds the monster of frustration.

A few days ago I was called about my opinion on a possible second wave in London. You can read my view in a Guardian article that came out today. The TLDR version of my opinion? As far as we can tell from our simulation tests it is bound to happen, but it is going to be less sudden and steep (and therefore less severe, if people respond appropriately) than the first wave.

Later today the new coronavirus numbers were published. Cases in the UK are up from 1,735 to 2,988 in a single day, after weeks of steady or slowly ascending numbers. I mainly hope it's just a blip, and secondly that it will indeed progress in a less sudden fashion than the first time around.

But irrespective of how things evolve later, these recent increases sure show it's a good time to stick to those self-quarantine rules.

Credits

Image courtesy of pikist.com.


In the office we commonly face...tasks. Think of things such as writing a report, responding to a complicated e-mail, developing a simple program or organising an event.

Now, not all tasks are equal, and different tasks are best done in different ways. Here I will present nine different ways in which one can do a particular task, based on my personal experience over the years.

To keep it generic and simple, I assume that any task you have will have a description (in other words, you know roughly what you need to do) and a deadline (the date/time by which the task needs to be completed).

Anyway, let's move on to the list :).

#1: By The Book

Aim: Finish a task as intended, by a reasonable deadline.

The simplest way to do a task is to do it as specified of course. You refer to the task description (or your manager's expectation) and do whatever effort is needed (within reason) to get the task done by the deadline.

If a task description is not clear, you try to clarify it, e.g. by improving the description or asking for clarifications from colleagues or your manager. And if the deadline is too tight, you ask for an extension. Simple as that.

#2: Deadline Rush

Aim: Finish a task as intended, by a not-so-reasonable deadline, dropping other priorities.

Of course, some deadlines cannot be moved (e.g. a deadline for a funding proposal), leading to tasks that need to be done quickly.

In Deadline Rush mode, you know that the deadline is too tight for practical purposes, and you temporarily drop other priorities to complete the task. So you dedicate all your working time to this main task, and perhaps even choose to work overtime.

Many employers expect their employees to do Deadline Rushes from time to time, but there are a few important things to keep in mind:

  • A Deadline Rush costs a lot of energy, and you will need a recovery period of reduced effort after it to avoid long-term mental or physical health damage.
  • For some employees, working overtime might actually be illegal.
  • A Deadline Rush can result in more errors than tasks done by the book, so it is worth it to try and avoid it by anticipating tasks better.
  • A Deadline Rush can result in other tasks failing, due to lack of time.
  • Because of the four reasons above, tasks that are not absolutely crucial are not worth a Deadline Rush.

While a Deadline Rush can occasionally be good, I try to avoid it because of the risk for errors and the recovery period needed afterwards.

*Going The Extra Mile: when you don't want to show up with that pre-baked cake from the gas station... (Source)*

#3: Go The Extra Mile

Aim: Finish a task as well as you reasonably can.

Sometimes there are tasks where it can really help to do better than expected. In academia it can be things like flagship papers, large funding proposals, or even tutorials that are central for the learning of students. And sometimes you find yourself in a new collaboration and just want to do the best job you can.

If there is a clear benefit to do a little extra, and your workload is not too high otherwise, you can choose to Go The Extra Mile.

Going The Extra Mile usually means:

  1. Do the task by the book (#1), but well before the deadline.
  2. Carefully examine what you produced, and think about what you could do to improve it.
  3. Make the improvements you had in mind, provided they don't delay the task too much.
  4. Deliver the task, ideally a bit before the deadline.

But Going The Extra Mile isn't always the best option. When the workload is high, it can simply take too much effort, and doing work like this could also set unreasonable expectations with new collaborators.

#4: (Person) Time-limited

Aim: Spend X hours of your time to finish a task.

Speaking of high workload, sometimes it's simply not feasible to finish all tasks to the standard you'd like, but the job still needs to be done.

One way to cope with such a situation is to simply limit the amount of time you personally dedicate to each task, and then declare it finished once the time has been spent. For example, you decide to spend no more than 4 hours to compose a two-page abstract. This approach is common in contracting work, and it really helps you stay on top of your schedule.

In some cases, doing a job with a time limit may cause you to finish the task with the same quality in less time, because you end up working more efficiently. However, be mindful of the following downsides:

  • Time-limited work may lead you to deliver an incomplete result, or an unpolished result. (I normally aim to get the work complete first, before trying to polish anything).
  • Especially if doing time-limited work makes you more efficient, it can cost you more energy than doing the work by the book.

#5: Urgent Mode

Aim: finish a job to normal standard as soon as possible.

When Covid-19 first hit Britain, and I started to develop the Flu And Coronavirus Simulator, I worked a lot in Urgent Mode, because there were no hard deadlines, but it was blatantly obvious that getting tasks completed sooner would help me to get feedback earlier and progress faster.

In Urgent Mode, contrary to #4, it's a bit more important to deliver fast than to deliver exceptionally: you start the task straight away and complete it as soon as possible, dropping other tasks that are not downright essential for your work, but not (necessarily) compromising on work-life balance.

That last bit is very important, because where a Deadline Rush has a set end time (the deadline), these kinds of urgent situations often have an unknown duration. So if you use up too much slack early on in the urgent period, you may be too worn out to do a decent job later in it. So, in Urgent Mode I recommend working hard, but not so hard that you exhaust yourself.

Nevertheless, getting that balance of work and rest right can be difficult when you work in Urgent Mode for a long time, so you should normally expect to need some recovery at the end of it at any rate.

https://www.youtube.com/watch?v=rVlhMGQgDkY

Like robots, people working in Autonomous Mode are far from perfectly efficient, but are bound to learn a lot from the experience.

#6: Autonomous Mode

Aim: Finish a task as intended, by a reasonable deadline, without consulting your line manager/tutor.

Sometimes you're given a task where the task itself actually serves to either develop your skills (e.g. in education) or to assess your performance. In these cases, you can adopt what I call Autonomous Mode. Autonomous Mode can be combined with many of the other ways you can do a task, by the way.

In Autonomous Mode you try to finish a task by the book (#1), but you avoid consulting your line manager or tutor. In the case of self-education, you could also choose to avoid consulting other resources, such as existing code fragments on the web when you're learning how to code, although I am not entirely sure how helpful that is.

The good side of working in Autonomous Mode is that it helps you become a more independent worker, and that it fosters the development of new skills. It also makes it easier to assess how you perform as an individual.

However, in Autonomous Mode you are much more prone to get stuck on difficult tasks, trying to solve a particular problem for a long time without making any progress. When that happens you will face a dilemma: do you break the autonomy, or do you accept that your Autonomous Mode led to a substandard result (and wasted time because of getting stuck)?

As a Lecturer, I normally recommend students to work in Autonomous Mode, but to consult me once they have been stuck for a certain period of time (e.g., 2 hours). I believe that helps them develop skills without wasting too much time being stuck.

*Backfill mode: making the most of that interstitial (waiting) time... (Source)*

#7: Backfill Mode

Aim: Finish a task as intended, by a reasonable deadline, with a limited attention capacity.

Sometimes you end up with a short period of idle time because a colleague is late for an appointment with you, or because you are participating in a teleconference where your attention is only required for small amounts of time. In these cases, you can choose to multitask a bit, and complete one of your tasks while waiting, or while attending to something else. But beware: in backfill mode you will usually be less attentive, and have less working memory at your disposal for the task.

Tasks that work well for backfill mode are normally ones that you can start and stop easily, such as checking a report for writing quality or debugging a simple script. Reading reference material (but not in-depth publications) also works rather well.

#8: In Iterations

Aim: Finish a task, then improve it, then improve it, and so on...

Sometimes the quality you need to deliver is beyond your comfort zone, and even “simply” Going The Extra Mile (#3) isn't going to be good enough. In such cases, it's best to just decline the task if there is time pressure; but if you do have loads of time to spare, then working In Iterations can be a good approach.

When working in iterations, you go through the same four steps described in #3, but you keep repeating steps 2 and 3 until the result is strong enough.

PhD students very often use this approach, as they will want to improve their skills, and a supervisor is (hopefully!) often on hand to give concise and targeted feedback.

A Token Effort if you're a pipe builder, but a masterpiece if you're an artist. Courtesy of René Magritte (1898–1967).

#9: With A Token Effort

Aim: Do enough to “keep the ball rolling” during a busy period.

Normally, it's best to just decline a task when you are too busy to get around to it. That reduces your workload, and keeps your brain free to focus on the other tasks.

However, sometimes not doing a task can mean a missed opportunity or a withering collaboration. In these cases, people sometimes resort to a token effort. A token effort means that you're not aiming to complete the task immediately, but to put in just enough work to “have something new”. It can consist of making a simple outline of something, making one or two changes to a code to show some new results, or even just finding a few new literature sources.

Token efforts can be both good and bad: they can help keep collaborations alive in difficult periods, but the approach is annoying when one side is putting in a token effort while the other is putting in a full effort. So, when working in this mode it is essential that you let the others know about your work constraints.

Closing thoughts

So, in summary, there are nine ways I can think of in which you can do tasks. The default is to do them by the book (#1). If you have limited amounts of time, you can opt for the deadline rush (#2), the person time-limited approach (#4) or even the token effort (#9). If you want to hone the quality, perhaps consider going the extra mile (#3) or doing it in iterations (#8). If you wish to hone your skills over time, consider autonomous mode (#6), and if you have time to kill, consider using backfill mode (#7). And lastly, during a crisis situation it's often best to resort to urgent mode (#5).

Indeed, some modes give a better result than others, but for each of them there is an appropriate situation to use it :).

Next time, in what is probably a slightly less serious instalment, I will talk about the different ways in which you can choose NOT to do a task! ;)

If you feel I have overlooked anything important, or made a silly mistake, please do let me know on Twitter (@whydoitweet)!

Credits

Header image courtesy of pikist.com.


After 1 year and 3 days, it's time for a brief look back from different perspectives.

The Timeline

Before I joined Coil, I had not blogged for quite a while. I think the last blog I ran was on a now-defunct website called notaboutme.nl, about 8 years ago, where I spent most of my effort writing music reviews and roleplaying-game-related content. I stopped blogging mainly because life took over, and because I always struggled to blog at a consistent pace and to build up a community.

But one year ago I changed my mind, and made a very careful start with this blog. It's a choice I don't regret at all, but initially I was rather hesitant to start a personal blog, primarily due to fear of confusing the boundaries between professional and personal lives.

In the early months, I gained traction quite quickly, and especially the Small Sims series attracted a lot of attention. The posts, which were partially inspired by my work, actually fed back into work as well, as I sometimes pointed students to the posts to help them a bit with getting to grips with code. My favorite one is #7 on elections by the way, which I guess could even be relevant for the upcoming US election?

The Game World series has been the one that I enjoyed writing the most, as it gave me a new way to share the experience I built up through about two decades of world development (both in writing and in code). I do believe it can be much better than it currently is, but there is ample time to polish and supplement it in the coming years :).

In 2020, the output was more variable, and more in line with how I normally produce things (very much in waves, instead of having a constant stream). A post about appraising your assumptions from February is worth mentioning, but what I ended up getting the most traction with was my voluntary reviews of Covid-19 simulation codes. It led to me being invited to write two pieces for the Conversation, and I ended up participating in a whole range of efforts to help mitigate the pandemic.

When the UK came out of lockdown in late spring, I ran out of gas in many areas, but fortunately it manifested itself more as a loss of physical productivity than as a loss of mental well-being. And then, over the summer, I had a large wave of posts in June/July before taking it a bit easier again this month :).

The Numbers

Here you go: a list of all posts by category, along with the number of upvotes for each installment:

My general-interest, tech and politics posts are popular while Small Sims, Boffins Office and the Fork are not so popular. And lastly, almost nobody cared about my “Special Week in December” coverage, so I suppose that week was not so special after all...

My top post got only 13 upvotes, but a large number of my posts end up with 4 upvotes or more. This leads me to conclude that among Coil subscribers, I'm a bit of a niche hit with a narrow but dedicated readership.

I also shared a Coil Blog Survey a while ago, which you can fill in whenever you like. That survey got four responses in total, with two people especially appreciating Blast from the Past and Coil posts, and one person specifically enjoying the diversity in posts :).

Of course, the Coil Boost also provides further numerical input, but in terms of analysis it is not so directly useful. This is because I suspect there is a large but unknown delay between when posts are made, and when they are processed for Coil Boost calculations (unless my August month with three posts really was four times as awesome as my July month with 10+ posts ;)).

The Current View

I am very happy to have gone down this path, and I am grateful that Coil established this platform, and that the XRP community signposted me to it (I think it was Hodor actually who caused me to find it initially...). Many of the Coil posts have been useful to share and discuss with existing friends, as I could easily signpost them to my opinions by referring to the posts.

But moreover, I made a bunch of new online friends via Coil, I got inspired by many of the topics that I decided to write about, and I realise that many of my creative outbursts that would otherwise have been lost are now nicely preserved on this blog. So I have no regrets about having gone down this path, and I look forward to seeing how this will grow in the second year :).

Image courtesy of *publicdomainpictures.net*.

The Future

As for that second year, I already shared some thoughts on that in my previous post. But on the whole, expect me to post all across the board, at somewhat irregular intervals, and to spin off a new project or two in due time.

A part of me is keen to move this blog to an external website (still hooked up with Coil), but I think that it still may be too early to do that now, as any time spent moving is probably not really worth it at this stage :).

But what I do expect to do is to build more and more automated scripts. Not only for tiles, but perhaps also for images adorning my blog posts, for example...

Credits

Header image courtesy of pixabay.


Strap in for a Rocky Ride! This post is a follow-up to GameWorld 12, where we looked at dithering and colour noise algorithms for our 2D map tiles.

Today, we're taking the next step, and generating whole rock formations without manually drawing a single pixel! Because things are getting more complex now, I will link to individual repl.it files throughout the blog post, in addition to giving the link to the whole program at the end. I think that will make it easier for you to track down the relevant bits of code :).

But yes, to get there, we need a whole load of steps to build up. So, time to get started!

Preparation: Matrix data structure and a random pixel picker

In general, to get to this more advanced level, we need two new tools. First, we need a simpler way to generate shapes, so instead of using colour values, I will use a simple matrix of 0 and 1 values to develop the shapes of the stones.

The matrix has a size of 48 by 48, essentially the same dimensions as the pixels in the tile.

I put all the matrix-related code in matrix.py, and a new matrix is basically made like this:

import numpy as np

def new():
    return np.zeros((48, 48), dtype=int)

Now, to grow stones, I want to be able to randomly pick pixels in a tile, but in such a way that I don't pick the same pixel twice when I run the algorithm 48x48 times. To do that, I made a pixel picker which works in two steps:

  1. It generates a list of all the pixel coordinates in the tile. So from (0,0) to (0,47), then (1,0) to (1,47) etc. until we reach (47,47).
  2. It then creates a second list, which is made by taking out randomly chosen elements from the first list. The second list therefore is a random shuffle of the first list.

In code, this looks like this in tile_patterns.py:

import random

def random_pixels():
    # Build a list with all the pixel coordinates.
    pixels = []
    for x in range(48):
        for y in range(48):
            pixels.append((x, y))

    # Randomly take elements from the first list to build a second one.
    pixels2 = []
    while len(pixels) > 0:
        i = random.randrange(len(pixels))
        pixels2.append(pixels[i])
        del pixels[i]

    return pixels2
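As an aside, the standard library can do this two-step shuffle in one go; an equivalent sketch (the `_alt` name is my own):

```python
import random

def random_pixels_alt():
    # Equivalent, shorter version of random_pixels(): build the coordinate
    # list and shuffle it in place with the standard library.
    pixels = [(x, y) for x in range(48) for y in range(48)]
    random.shuffle(pixels)
    return pixels
```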

Growing stones

To grow stones, we first have to place them in different locations (I place 15 in a tile), and after that we will choose random pixels to grow out from. I do this as follows:

def make_stones():
    tile = matrix.new()
    pixels = random_pixels()

    # Seed 15 stones, each with its own ID (1 to 15).
    for i in range(15):
        tile[pixels[i][0], pixels[i][1]] = i + 1

    # Grow the stones from 100,000 randomly chosen pixels.
    for i in range(100000):
        x = random.randrange(48)
        y = random.randrange(48)
        grow(tile, x, y)

    tile = matrix.make_binary(tile)
    return tile

The 100,000 value means that I will pick random pixels 100,000 times to let the stones grow. A smaller value may leave bigger cracks between the stones, while a bigger value takes rather long in terms of calculation time.
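make_stones() also calls matrix.make_binary(), which isn't listed in the post; a minimal sketch, assuming it simply collapses the per-stone IDs (1 to 15) into a 0/1 mask:

```python
import numpy as np

# Hypothetical sketch of matrix.make_binary(): turn every stone ID (> 0)
# into a 1, leaving the cracks (0) untouched.
def make_binary(m):
    return (m > 0).astype(int)
```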

Now, to grow the stones we do the following:

  1. We pick a pixel.
  2. If the pixel is a “stone”, then we pick a random neighbour.
  3. If that neighbour is not a stone, and all of its neighbours are either not a stone or part of the same stone found in step 2, then we turn that neighbour into a stone.

In code, this looks like this:

def grow(tile, x, y):
    neigh = get_neighbour_list(x, y)
    val = tile[x, y]
    if val > 0:
        # Pick one of the 8 neighbours at random.
        i = random.randrange(8)
        if tile[neigh[i][0], neigh[i][1]] == 0:
            # Only grow if the neighbour doesn't touch a different stone.
            if not has_neighbour(tile, neigh[i][0], neigh[i][1], val):
                tile[neigh[i][0], neigh[i][1]] = val
                return True
    return False

The condition in step 3 is placed in the has_neighbour() function, which looks like this:

def has_neighbour(tile, x, y, mask_value):
    if tile[x, y] == 1:
        return False

    neigh = get_neighbour_list(x, y)
    count = 0
    for a in neigh:
        # Count neighbours that belong to a *different* stone.
        if tile[a[0], a[1]] > 0 and tile[a[0], a[1]] != mask_value:
            count += 1

    if count > 0:
        return True
    return False
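Both functions above rely on get_neighbour_list(), which the post doesn't show; a minimal sketch, assuming 8-connectivity with wrap-around at the 48-pixel tile edges (the wrap-around mirrors the % 48 used elsewhere, but is my assumption here):

```python
# Hypothetical sketch of get_neighbour_list(): the 8 coordinates around
# (x, y), wrapping at the 48-pixel tile edges.
def get_neighbour_list(x, y):
    neighbours = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            if dx == 0 and dy == 0:
                continue
            neighbours.append(((x + dx) % 48, (y + dy) % 48))
    return neighbours
```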

All this is enough to generate a stone tile roughly like this:

This looks okay, but we lack a bit of depth...

Adding shades and highlights

To add shading, I developed a simple algorithm. Essentially it crawls pixel by pixel from bottom to top, and adds a shade pixel as soon as it exits one of the cracks. In code it looks like this:

def bottom_edge(matrix):
    shades = new()
    for x in range(matrix.shape[0]):
        shade = False
        # Crawl from the bottom row up (with wrap-around at row 0).
        for y in range(matrix.shape[1] + 1):
            y2 = (matrix.shape[1] - y) % 48
            if matrix[x, y2] == 0:
                # We are inside a crack.
                shade = True
            elif shade and matrix[x, y2] == 1:
                # First stone pixel after a crack: mark it as shaded.
                shades[x, y2] = 1
                shade = False
    return shades

I also use a variation of this, crawling from right to left, to add shading to the right side.

And to do highlighting, I use two more variations, one from left to right to add highlights on the left side, and one from top to bottom to add highlights to the top side.

You can see the code for all four of them here in matrix.py :).
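For a flavour of what those variations look like, here is my own reconstruction of a left-to-right crawler, mirroring bottom_edge(); the actual repl.it version may differ:

```python
import numpy as np

# Hypothetical reconstruction of the left-to-right highlight crawler:
# per row, mark the first stone pixel reached after crossing a crack.
def left_edge(matrix):
    highlights = np.zeros((48, 48), dtype=int)
    for y in range(matrix.shape[1]):
        in_crack = False
        for x in range(matrix.shape[0]):
            if matrix[x, y] == 0:
                in_crack = True
            elif in_crack and matrix[x, y] == 1:
                highlights[x, y] = 1
                in_crack = False
    return highlights
```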

By using these “pixel crawlers”, our tile will then look like this:

I think this gives enough depth for floor tiles, but perhaps not so much for rounded mountain rocks (a problem I still have to solve!).

Adding “cliff” effects

To add rudimentary cliff effects, we essentially need a filter that makes a rock wall a bit darker at the bottom, and a bit lighter at the top. To do this, I defined two functions called drop_shadow() and top_highlight(). Here's what drop_shadow() looks like:

def drop_shadow(t, i, intensity=0.5):
    d, x, y, tw, th = t.get_properties(i)
    th2 = int(th / 2)
    # Darken the bottom half, more strongly towards the bottom row.
    for yi in range(th2, th + 1):
        for xi in range(0, tw + 1):
            gradient = (yi - th2) / (th - th2)
            color = shade(tp.get_rgb(t, i, xi, yi), intensity * gradient)
            draw.pixel(t, i, color, xi, yi)

It essentially loops, row by row, over the pixels in the bottom half of a tile, and darkens the color of each pixel. The gradient is a variable that starts at 0.0 halfway down the tile, and gradually increases to 1.0 at the very bottom of the tile. In this way, each row of pixels in the bottom half gets darkened, and to a greater extent if those rows are closer to the bottom. When we apply this filter to our rocky road, this is what it looks like:
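The gradient arithmetic can be checked in isolation; for a 48-pixel-high tile (assuming th = 47, which is a guess at what get_properties() returns):

```python
# Check the drop_shadow() gradient: 0.0 at the halfway row, rising
# linearly to 1.0 at the very bottom row.
th = 47            # assumed tile height index for a 48-pixel tile
th2 = int(th / 2)  # 23
gradients = [(yi - th2) / (th - th2) for yi in range(th2, th + 1)]
```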

And here's what top_highlight() looks like in code:

def top_highlight(t, i, intensity=0.5):
    d, x, y, tw, th = t.get_properties(i)
    th2 = int(th / 2)
    # Lighten the top half, more strongly towards the top row.
    for yi in range(0, th2 + 1):
        for xi in range(0, tw + 1):
            gradient = 1.0 - (yi / th2)
            color = highlight(tp.get_rgb(t, i, xi, yi), intensity * gradient)
            draw.pixel(t, i, color, xi, yi)

This is almost the same as drop_shadow(), but instead it works on the top half of the tile, and adds more highlighting towards the top border. As a result, we get something that looks like this:
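Both filters depend on shade() and highlight(), which aren't listed in the post; minimal sketches, assuming they darken toward black and lighten toward white respectively (amount 0.0 = unchanged, 1.0 = fully black/white):

```python
# Hypothetical sketches: scale each channel down (shade) or push it
# toward 255 (highlight), passing any alpha channel through unchanged.
def shade(rgb, amount):
    return tuple(int(c * (1.0 - amount)) for c in rgb[:3]) + tuple(rgb[3:])

def highlight(rgb, amount):
    return tuple(int(c + (255 - c) * amount) for c in rgb[:3]) + tuple(rgb[3:])
```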

And when we combine all three, our rudimentary cliff looks as follows:

It is an interesting result, because while it does look rocky, it does seem that we somehow need to add more depth to the individual rocks to get a true cliff-like effect going :).

Closing thoughts

Well, that must have been a rocky ride for most of you, and honestly it took me quite a long time to get this right. But at least we now have a basic form of rocks without having to manually draw them :).

One thing that I didn't mention is that good tiles have to connect with each other correctly at the edges. With this algorithm the edges will match, but only if you place the exact same rock pattern in all adjacent tiles.

I may be able to solve that problem someday, but that one might end up becoming a little bit more compute-intensive ;).

Instead, for this series, I will probably look at entirely different tile types in the next instalment, or even cover a different topic altogether.

Lastly, you can find the full repl.it for this tutorial here. Instead of making a new one for every blog post on tiles, I will keep enhancing this version, so this link will always point to the newest tile generator.


Time for the next level in tile-generating :).

Last time we looked at generating a range of simple shapes, like balls, trees and bricks. This time around, I am adding a new range of filters to the tile maker, so that we can get rid of that cartoony look, at least a little bit.

It is but a single step on my enormous quest to generate nice-looking tiles without having to resort to pixel-painting ;).

Dither

This first one is simple. You pick a color, and then sprinkle x% of the tile with that color at random.

It's a very useful technique, and in retro RPGs you will often find dithering in grass tiles, water tiles or sand tiles.

Dithering as it was used in Mystic Quest. Courtesy of *Hardcoregamer.com*.

To do dithering we first have to have a randomly shuffled list of all pixels in a given tile. You can do that using this function:

import random

def random_pixels():
    # Build a list with all the pixel coordinates.
    pixels = []
    for x in range(48):
        for y in range(48):
            pixels.append((x, y))

    # Randomly take elements from the first list to build a second one.
    pixels2 = []
    while len(pixels) > 0:
        i = random.randrange(len(pixels))
        pixels2.append(pixels[i])
        del pixels[i]

    return pixels2

Now, dithering becomes quite simple, because we just take a bunch of elements from our shuffled list of pixels. To make using the function easy, I define an intensity parameter, which can be set between 0.0 (0% dithering) and 1.0 (100% dithering).

Now using the building block above, and my previous code, I can make a dither function like this:

def dither(t, i, color, intensity):
    d, x, y, tw, th = t.get_properties(i)
    coords = tp.random_pixels()
    # Recolour the first (intensity * 100)% of the shuffled pixel list.
    threshold = int(len(coords) * intensity)
    for j in range(threshold):
        xj = coords[j][0]
        yj = coords[j][1]
        tp.pixel(t, i, color, xj, yj)

Here is an example of a simple dithered tile:

And you use the following code to make it:

tp.draw_plain(t, <tile_index>, "#999")
te.dither(t, <tile_index>, "#222", 0.1)
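As a quick sanity check on the intensity parameter, the threshold arithmetic for the example above works out as follows:

```python
# Dither threshold for a 48x48 tile at intensity 0.1: int() truncates,
# so just under a tenth of the 2,304 pixels get recoloured.
num_coords = 48 * 48
threshold = int(num_coords * 0.1)
```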

Noise

A noise filter distorts colors: for example, it can turn a blue square into a square with varying shades of blue. In the most extreme case, you get tiles with randomly colored pixels everywhere.

To distort the colors of a single pixel, we can use the following:

import random

def color_noise(rgb, intensity):
    intensity *= 255
    # Random per-channel offsets in the range [-intensity/2, intensity/2].
    distortions = (int((random.random() - 0.5) * intensity),
                   int((random.random() - 0.5) * intensity),
                   int((random.random() - 0.5) * intensity))
    return constrain(rgb[0] + distortions[0],
                     rgb[1] + distortions[1],
                     rgb[2] + distortions[2],
                     rgb[3])

Once again, an intensity of 0.0 is no distortion, and 1.0 almost complete distortion.
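color_noise() hands the distorted channels to constrain(), which isn't shown in the post; a minimal sketch, assuming it simply clamps each channel back into the 0-255 range:

```python
# Hypothetical sketch of constrain(): clamp r, g and b into [0, 255],
# and pass the alpha channel through unchanged.
def constrain(r, g, b, a):
    def clamp(v):
        return max(0, min(255, v))
    return (clamp(r), clamp(g), clamp(b), a)
```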

I then made an apply_color_filter() function that simply applies a filter function like this to a full tile:

def apply_color_filter(t, i, filter_func, intensity):
    d, x, y, tw, th = t.get_properties(i)
    for xi in range(0, tw + 1):
        for yi in range(0, th + 1):
            color_hex = tp.rgb_to_hex(filter_func(tp.get_rgb(t, i, xi, yi), intensity))
            tp.pixel(t, i, color_hex, xi, yi)

Right now I only use it once, but I figured I may be needing something like this in the future ;).

Anyway, this then makes the noise function quite easy to write. Here it is:

def noise(t, i, intensity):
    apply_color_filter(t, i, color_noise, intensity)

Noisy tiles can then look like this:

Now that could work reasonably well for a sand tile, right?

It could also look like this, if you go full random, by the way ;):

Closing thoughts

But wait! At the top of the page you can see even more tiles, right? Indeed, I also developed an effect to add a drop shadow, a top highlight, and even a generator to make random stony patterns (that one took AAAAAAAGES by the way!).

But adding those here would make this post a bit bulky, so I will be sharing those somewhere in the next few days :).

Until then, please do let me know if I could explain any part of this tutorial more clearly, as I'm happy to make changes.

Lastly, I have a working repl.it of the tile generator (including early versions of the additions explained in the next blog post!) here.


In a week from now, it will have been a year since I started this blog on Coil. And in a day from now, I will be starting a holiday period of three weeks, which is about thrice as long as most of my holiday periods have been in the past ;).

In my opinion, holidays in lockdown conditions with young kids are a little pointless, so during periods such as Easter and the like we just bumbled along under mid-pace working conditions. But this time the lockdown measures are much lighter, and we decided to take a few weeks of proper holiday.

As for this blog, I suppose this entry marks the end of an unforeseen four-week hiatus. But rest assured, nothing is fundamentally wrong on my side, and work-wise we even made it into a few major newspapers this week :). So why did it happen?

The month prior to my unexpected hiatus I posted quite a lot, and I found myself not so much losing inspiration as running into practical roadblocks on various sides, and ending up a notch below the motivational posting threshold on some other/older topics. Of course, a summer holiday is a good moment to sort those things out, so I do expect to get a clearer idea of which dead ends to cut off, and where to go more in depth in Year 2.

Indeed, as the title says, blogging is hard, at least it is for me. It is hard to keep a consistent rhythm, and even harder to find that sweet spot of putting together things that energise me when I write them, yet are also enjoyable, interesting or useful to read. I didn't get it entirely right in the first year, and I don't actually need to get it right in the second year either, because blogging is a hobby project for me to begin with.

But chances are I will stroll down some different directions in the coming weeks, just because it'll be more fun that way :).

Credits

Header image from needpix.
