Tuesday, 11 October 2016

Mario T-Shirt

From an original idea by David Gavilan Ruiz

Sunday, 18 September 2016

Mortality Statistics

The Office for National Statistics collects mortality data for the United Kingdom. There's lots of information hidden within the data, but some of it is quite difficult to extract and/or visualise.

The underlying data for questions of life expectancy and mortality rates would seem to be a statistic named "Qx" (raw data). According to their guide, this is defined by the ONS as:
The mortality rate between age x and (x +1); that is, the probability that a person aged x exactly will die before reaching age (x +1).
It is sometimes expressed as a probability multiplied by 100,000. You can plot this for males as a surface with one axis being the age, "x", and the other being the year of observation of "Qx":
The noise towards the "back" of the graph is due to the fact that concrete data beyond the age of 100 is very sparse, so probabilities are difficult to calculate accurately. The next derived value is "Lx" which the ONS defines as:
The number of survivors to exact age x of 100,000 live births of the same sex who are assumed to be subject throughout their lives to the mortality rates experienced in the 3-year period to which the national life table relates.
I've approximated that as a normalised probability:

L[0] = 1
L[i] = L[i-1] * (1 - Q[i-1])

This gives a monotonically decreasing graph that reminds me a bit of a zero curve from fixed-income finance.
You can already see that the shape of the "Lx" curve has changed appreciably between 1850 and 2000. But the graphs (which I prepared using the excellent plotly) become easier to interpret if we add more colour levels:
You can start to see that there are strange "striations" in the surface. We now look at "Dx", defined as:
The number dying between exact age x and (x +1) described similarly to Lx.
I used the following formula:

D[i] = L[i] - L[i+1]

This gives a surface that has a few interesting features clearly visible:

  1. The infant mortality rate (defined here as the number of deaths between 0 and 1 years, i.e. D[0]) has dropped from over 15% in 1850 to less than 0.5% after 2000.
  2. The adult life expectancy has improved, particularly after about 1950. This is indicated by the increase in height of the hump near the "front" of the surface, along with a gradual shift of the highest point towards greater ages. I suggest this is due to the introduction of the National Health Service.
  3. There is a curious small green hump in the plateau near Year=1918, Age=20.
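A minimal JavaScript sketch of the Lx and Dx calculations above (the Q values are invented for illustration, not real ONS data):

```javascript
// Illustrative mortality rates Q[x] for ages 0..6 (made-up data).
const Q = [0.005, 0.001, 0.001, 0.002, 0.003, 0.005, 0.008];

// L[0] = 1; L[i] = L[i-1] * (1 - Q[i-1])
// L[i] is the fraction of the cohort surviving to exact age i.
const L = [1];
for (let i = 1; i <= Q.length; i++) {
  L.push(L[i - 1] * (1 - Q[i - 1]));
}

// D[i] = L[i] - L[i+1]
// D[i] is the fraction dying between exact ages i and (i+1).
const D = [];
for (let i = 0; i < Q.length; i++) {
  D.push(L[i] - L[i + 1]);
}
```

Note that D[i] also equals L[i] * Q[i], and the D values telescope to 1 - L[n]: everyone either dies within some age interval or survives to the end of the table.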
If, instead of graphing "Dx", we graph "log10(Dx)", we see even more features:
We can see that there is a second hump a couple of decades after the first. Obviously these are due to the two World Wars (1914-1918 and 1939-1945 for Britain). However, the humps are present (though to a lesser extent) in the female graph too:
I was somewhat confused by this but it appears that some UK "life tables" (as actuaries optimistically call them) do not include deaths of service men and women on active duty. So perhaps the humps account for only civilian casualties: e.g. bombing victims and deaths due to general privations caused by the wars.

Another feature I found quite striking was the "male late-teen spike" that is seen better in this view:
There is a relatively sudden increase in male deaths in the late teens. This is a well-known phenomenon that is nowhere near as pronounced in the female data:
In fact, after World War II, for females older than one year, log10(Dx) becomes amazingly linear until it peaks at 88 years old (for 2013 data).

One final feature, that I originally thought was a data or plotting bug, is the diagonal cleft you can just see in the green peak to the far right of the male (and female) Dx surfaces:
The diagonal cleft is aligned at exactly forty-five degrees:
Year=2000, Age=81 approx
Year=1950, Age=31 approx
Year=1940, Age=21 approx
Year=1930, Age=11 approx
Year=1920, Age=1 approx
My first theory is that the cleft is due to deaths in World War II. If a large percentage of young people are killed in the space of a few years, then this age range will be "under-represented" in the death statistics in subsequent years.

My second theory involves not the death rate, but the live birth rate. There was a noticeable dip in live births during World War I:

This dip wasn't seen during World War II. Essentially, if fewer babies were born in 1917-1919, there will be relatively fewer eighty-one-year-olds in the year 2000.

So which theory is correct? I honestly don't know. Perhaps neither; perhaps both. It may just be an accident of history that the drop in birth rate during World War I coincides with the tragic loss of men and women in their early twenties during World War II.

Interactive graphs here. Data courtesy of ONS.

Sunday, 11 September 2016

Lexicon 1

For a long time, I've been thinking about implementing a word puzzle dictionary to help solve (or cheat at, if you prefer) crosswords, codewords, Scrabble, and the rest. You know the sort of thing:
"One across, nine letters: C-blank-O-blank-S-blank-blank-R-blank" *
I've always assumed that running this on the client-side in JavaScript would be far too slow. It turns out I was very wrong; if you're careful with your coding, you can get it running in a browser on a low-end tablet or phone with memory and CPU cycles to spare.

My first problem was getting hold of a good lexicon or word list. The obvious avenue to explore was the "official" Scrabble word list used in competitions, but it quickly became apparent that there are at least two such lists: TWL06 (used in North America) and SOWPODS (used internationally).

These are simply lists of words that are considered "valid"; the former is over 178,000 words and the latter over 267,000 words. They contain neither definitions nor any metadata about the words, such as frequency.

In contrast, SCOWL (Spell Checker Oriented Word Lists) contains over 423,000 words with information such as relative frequency and regional variations (American/Canadian/British). Some of the words are suspect (domain-specific abbreviations and bad optical character recognition are two likely sources of suspicious "words") but the frequency information makes it easy to cull outliers.

Instead of choosing just one word list, I decided to go for all three and divided the superset of words into five levels of "rarity" based on the SCOWL frequency metadata. It turns out that SOWPODS is a superset of TWL06, but contains some words not found in SCOWL.

After a lot of curation (in Excel, of all tools!) I ended up with a list of 432,852 words split into five levels of increasing "rarity". Each word is implicitly tagged with the word lists in which it is found. I limited the words to those between two and fifteen letters (inclusive) where each "letter" is one of the twenty-six Roman letters "A" to "Z" (no digits, apostrophes, hyphens or accents are allowed).

I could have stored the lexicon as a simple text file or as a JSON array of strings, but this would have taken up several megabytes, so I decided to have a go at trivial compression. I know, I know ... most HTTP servers will serve up compressed files out of the box and there exist any number of JavaScript libraries that do client-side decompression of ZIP files, but I felt like re-inventing a wheel. Give me a break!

The main constraint I gave myself was to try to get the lexicon to the client-side in a usable form as quickly as possible. This meant a trade-off between file size and time taken to decompress and store in a suitable data structure. One thing you don't want to do is sort half a million words into alphabetical order on the client. That means the lexicon (or parts thereof) needs to be pre-sorted.

Sorted word lists have a lot of redundancy at the beginning of each entry. For example, imagine a list starting like this:
  • AARDVARK
  • AARDVARKS
  • AARDWOLF
  • AARDWOLVES
  • ...
If we replace the beginning of each word with a digit ('0' to '9') specifying how many letters from the previous word we should replicate, the list becomes:
  • AARDVARK
  • 8S
  • 4WOLF
  • 7VES
  • ...
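Decoding this digit-prefix ("front coding") scheme takes only a few lines. The sketch below is illustrative rather than the actual lexicond.js code, which also applies the letter-sequence substitutions described next; `decodeFrontCoded` is a name invented here:

```javascript
// Decode a front-coded word list: a leading digit says how many
// characters to copy from the previous word. An entry with no
// leading digit (e.g. the first word) is taken verbatim.
function decodeFrontCoded(entries) {
  const words = [];
  let previous = "";
  for (const entry of entries) {
    const c = entry.charCodeAt(0);
    const isDigit = c >= 48 && c <= 57;  // '0'..'9'
    const copy = isDigit ? c - 48 : 0;
    const word = previous.slice(0, copy) + (isDigit ? entry.slice(1) : entry);
    words.push(word);
    previous = word;
  }
  return words;
}

decodeFrontCoded(["AARDVARK", "8S", "4WOLF", "7VES"]);
// -> ["AARDVARK", "AARDVARKS", "AARDWOLF", "AARDWOLVES"]
```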
Another technique is to substitute common letter sequences with ASCII characters outside the ranges '0' to '9' and 'A' to 'Z'. It turns out that the most common four-letter sequences near the end of the words are:
For three-letter sequences, the list is:
While the most common near-end two-letter sequences are:
Using these two simple techniques, the 432,852 words can be compressed into a single ASCII string of just over 1.5 million characters. This string can be efficiently converted into a high-performance in-memory data structure within a second on a typical PC.

The full demo can be found here, whilst the JavaScript file that decompresses the word list is lexicond.js.

* "CLOISTERS" obviously

Saturday, 20 August 2016

Modulo Arithmetic and Days of the Week

Yesterday, I noticed something curious about the Wikipedia entries for days of the year. Here's an example from January 1:

The text that caught my eye is:
This date is slightly more likely to fall on a Tuesday, Friday or Sunday (58 in 400 years each) than on Wednesday or Thursday (57), and slightly less likely to occur on a Monday or Saturday (56).
Huh? At first I thought this was an obvious mistake; surely the days of the week are evenly distributed. After all, there are 365 or 366 days in a year and 7 days in a week, and these numbers are relatively prime.

Here's my (erroneous) line of reasoning:

  1. Neither 365 nor 366 are divisible by 7. [True]
  2. Therefore, January 1 can be any day of the week. [True]
  3. The "leap year" cycle repeats after 400 years. [True]
  4. But 400 is not divisible by 7. [True]
  5. Therefore, the "leap year and January 1 weekday" cycle repeats every 2800 years. [False]
My mistake was trying to divide 7 (the number of weekdays) into 400 (the number of years in a "leap year" cycle). What I should have done is try to divide 7 into the number of days in a "leap year" cycle.

So, how many days are there in 400 full years? Well, there are (100 - 4 + 1) = 97 leap days (one hundred years divisible by four, minus the four century years, plus the single year divisible by 400), so there are (365 * 400 + 97) = 146097 days.

Amazingly, 146097 is divisible by 7, so the "leap year and January 1 weekday" cycle repeats every 400 years, not every 2800 years. As we know, 400 is not divisible by 7, so this cycle cannot possibly have a uniform distribution of weekdays for January 1.

Indeed, even leap days (February 29) are not immune from this lack of uniformity because 97 is not divisible by 7. In any 400 year period, there are:
  • Thirteen each of Sundays, Tuesdays and Thursdays,
  • Fourteen each of Fridays and Saturdays, and
  • Fifteen each of Mondays and Wednesdays.
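Both distributions are easy to verify with a brute-force loop over one full 400-year cycle. Any 400 consecutive Gregorian years give the same answer, since the cycle is a whole number of weeks; for example, in JavaScript:

```javascript
// Count the weekdays of January 1 and February 29 over one full
// 400-year Gregorian cycle (146097 days = exactly 20871 weeks).
const DAYS = ["Sun", "Mon", "Tue", "Wed", "Thu", "Fri", "Sat"];
const jan1 = {}, feb29 = {};
DAYS.forEach(d => { jan1[d] = 0; feb29[d] = 0; });

const isLeap = y => y % 4 === 0 && (y % 100 !== 0 || y % 400 === 0);

for (let y = 2000; y < 2400; y++) {
  jan1[DAYS[new Date(y, 0, 1).getDay()]]++;      // 1 January
  if (isLeap(y)) {
    feb29[DAYS[new Date(y, 1, 29).getDay()]]++;  // 29 February
  }
}

console.log(jan1);   // Tue, Fri, Sun: 58; Wed, Thu: 57; Mon, Sat: 56
console.log(feb29);  // the three 13s, two 14s and two 15s
```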

At this point in the discussion, someone usually witters on about calendar reform.

So why not...

Imagine reforming the calendar such that January 1 is always a Saturday but without forsaking the 7-day weekly cycle. One scheme would be to have a 52-week standard year (364 days) with periodic "leap weeks" (371 days). If we define a leap year to be any year that is divisible by 6 or 62, we end up with a 186-year leap cycle with 153 standard years and 33 leap years, making a total of (153 * 364 + 33 * 371) = 67935 days. That implies a mean year length of (67935 / 186) = 365.2419355 days, which is closer to the solar year of roughly 365.242181 days than the current Gregorian calendar's mean of 365.2425 days.
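A quick sanity check of that arithmetic in JavaScript:

```javascript
// Check the arithmetic of the leap-week scheme: a leap year (371
// days) is any year divisible by 6 or by 62; all other years are
// standard (364 days). The cycle length is 186 years.
let standard = 0, leap = 0;
for (let y = 1; y <= 186; y++) {
  if (y % 6 === 0 || y % 62 === 0) leap++; else standard++;
}
const totalDays = standard * 364 + leap * 371;
console.log(standard, leap);   // 153 33
console.log(totalDays);        // 67935
console.log(totalDays % 7);    // 0 -- a whole number of weeks
console.log(totalDays / 186);  // 365.2419354...
```

Because the cycle is a whole number of weeks, January 1 falls on the same weekday in every year of the cycle.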

So here's the new scheme:

  • A standard year is 364 days, divided into 52 weeks of seven days.
  • A month is defined as 4 weeks or 28 days. There are thirteen months in a year.
  • A leap year consists of 371 days with the extra "special" week of festivities tacked on to the end.
  • A year is a leap year if it is divisible by 6 or 62.
  • All years, months and weeks begin on Saturday.
Strange, but this seems vaguely familiar...

Saturday, 23 July 2016

Saturday, 16 July 2016

Blok Clok

When I was a boy, I was given a curious present: a perpetual desk calendar made from wooden cubes:
You can still buy them, so there must be some enduring appeal. The calendar interested me because of a strange quirk that isn't immediately obvious.

There are four blocks: one for the day of the week (with seven labels), two for the day of the month (1 to 31) and one for the month (with twelve labels). The "weekday" and "month" blocks are easy to label. For example, two months on each face fills the "month" block and, providing the container covers up the lower half of the block, all is peachy.

The two date blocks are a little more involved. Dates run from "01" to "31":


By inspection, we see that the only digits that appear twice on a single date are "1" and "2" (in "11" and "22"). However, if you sit down with a pencil and paper, it quickly becomes apparent that you'll also need two zeroes: every date from "01" to "09" pairs a zero with one of nine different digits, which is more than the six faces of a single companion cube can hold, so the zero must appear on both cubes. That means that the following digits are required:


But, hang on! That's thirteen digits. Two cubes have only twelve faces between them. How do they fit? The clever "quirk" is that "6" and "9" occupy the same face; you simply flip the block around 180 degrees. This gives you the following faces for the two blocks:


This arrangement allows you to represent all integers from "00" to "32" inclusive; just enough! It's deceptively clever. And that's probably why I like my wooden perpetual calendar so much.
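For the curious, here's a small JavaScript check of the two-cube arrangement (assuming the classic faces "0" to "5" on one cube and "0", "1", "2", "6", "7", "8" on the other, with the "6"/"9" flip):

```javascript
// Check that two cubes can show every two-digit value "00" to "32".
const cubeA = ["0", "1", "2", "3", "4", "5"];
const cubeB = ["0", "1", "2", "6", "7", "8"];

// A cube shows a digit if it carries it, or its upside-down 6/9 flip.
const shows = (cube, d) =>
  cube.includes(d) ||
  (d === "6" && cube.includes("9")) ||
  (d === "9" && cube.includes("6"));

let ok = true;
for (let n = 0; n <= 32; n++) {
  const [tens, units] = String(n).padStart(2, "0");
  // Either cube may sit on the left.
  if (!((shows(cubeA, tens) && shows(cubeB, units)) ||
        (shows(cubeB, tens) && shows(cubeA, units)))) ok = false;
}
console.log(ok); // true
```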

Fast forward several years, if not decades, and my current fascination with clocks led me to consider whether you could tell the time with a similar arrangement. I knew representing the hours would not be a problem: as we've seen, two blocks can represent all integers from "00" to "23" inclusive for a twenty-four-hour clock. But representing minutes is harder.

It is fairly obvious that you need at least three blocks to represent all integers from "00" to "59" inclusive. But are three sufficient? It turns out that they are; you don't even need to use the upside-down "9" trick. The solution uses an additional cube to display a colon between the hours and minutes digits. For example:


The "tens minutes" digit takes up a single block: "0" to "5" inclusive. So the problem simplifies to trying to represent the "units minutes" digit ("0" to "9") and a colon with two blocks. The following labelling is one solution:


Using three blocks to represent minutes from ":00" to ":59" inclusive can be repeated for the seconds. This gives us eight blocks:

  hours-a    0,1,2,3,4,5
  hours-b    0,1,2,6,7,8
minutes-a    :,0,1,2,3,4
minutes-b    0,1,2,3,4,5
minutes-c    :,5,6,7,8,9
seconds-a    :,0,1,2,3,4
seconds-b    0,1,2,3,4,5
seconds-c    :,5,6,7,8,9

Not only do the blocks need to rotate to any of their six faces (and in the case of "hours-b", the "6" sometimes has to be flipped to become a "9") but pairs of blocks also have to be swapped periodically:

hours-a <=> hours-b
minutes-a <=> minutes-c
seconds-a <=> seconds-c
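As a sanity check, a brute-force search over the six ways of assigning the three minute blocks to the three display positions (colon, tens, units) confirms that every minute from ":00" to ":59" can be shown:

```javascript
// Brute-force check that the three minute blocks can always show
// a colon, a tens digit and a units digit for minutes 00 to 59.
const blocks = {
  a: [":", "0", "1", "2", "3", "4"],  // minutes-a
  b: ["0", "1", "2", "3", "4", "5"],  // minutes-b
  c: [":", "5", "6", "7", "8", "9"],  // minutes-c
};

// All six assignments of blocks to positions (colon, tens, units).
const perms = [["a","b","c"], ["a","c","b"], ["b","a","c"],
               ["b","c","a"], ["c","a","b"], ["c","b","a"]];

let allShown = true;
for (let m = 0; m < 60; m++) {
  const wanted = [":", ...String(m).padStart(2, "0")];
  const possible = perms.some(p =>
    p.every((name, i) => blocks[name].includes(wanted[i])));
  if (!possible) allShown = false;
}
console.log(allShown); // true -- no 6/9 flip needed for the minutes
```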

Some extra fettling is needed to make the rotations aesthetically pleasing, but the result is a twenty-four-hour clock that twists and turns at a fair old lick.

Minimal Rotor Clock

The Rotor Clock I posted about last month is all very well, but there are a heck of a lot of moving parts: thirty-two rotors, each with ten faces. You could physically build it, but things would be a lot easier if there were fewer components. So I set about trying to reduce the number of rotors for a twelve-hour clock that displays the time as words.

The extreme (degenerate) case is a single rotor with 12 * 60 = 1440 faces. This would produce a working clock, but the rotor would need to be enormous (or the text very small). The next idea I had was a clock with two rotors: one for hours and one for minutes. You could repeat the twelve hours five times and make both rotors have 60 faces. This would still require very large rotors. Also, telling the time in a "natural" British English manner is quite tricky. Consider:


The solution I came up with (which may not be optimal, but is possibly close) is to use four rotors, each with at most twelve words or phrases plus a blank face:

    Rotor A      Rotor B      Rotor C         Rotor D
    =======      =======      =======         =======
    THIRTEEN     ONE          O'CLOCK         ONE
    FOURTEEN     TWO          MINUTE PAST     TWO
    EIGHTEEN     FIVE         HALF PAST       FIVE
    NINETEEN     SIX          QUARTER TO      SIX
    TWENTY       SEVEN        MINUTES TO      SEVEN
                 EIGHT        MINUTE TO       EIGHT
                 NINE                         NINE
                 TEN                          TEN
                 ELEVEN                       ELEVEN
                 TWELVE                       TWELVE

Note that rotor B is identical to rotor D.

This produces a clock that can tell the time as hoped for, but with the insertion of another blank face in rotor A and a bit of duplication in rotor C, I was able to reduce the number of large rotations quite considerably. This produces a series of relatively smooth transitions. The final rotors are organised thus:

    Rotor A      Rotor B      Rotor C         Rotor D
    =======      =======      =======         =======
    (blank)      (blank)      (blank)         (blank)
    THIRTEEN     ONE          O'CLOCK         ONE
    FOURTEEN     TWO          MINUTE PAST     TWO
    (blank)      THREE        MINUTES PAST    THREE
    EIGHTEEN     SIX          HALF PAST       SIX
    TWENTY       EIGHT        QUARTER TO      EIGHT
    (blank)      NINE         MINUTES TO      NINE
    (blank)      TEN          MINUTE TO       TEN
    (blank)      ELEVEN       (blank)         ELEVEN
    (blank)      TWELVE       (blank)         TWELVE

If you look at the JavaScript code, you'll see that control of the rotors is via two data tables. This scheme lends itself to efficient compression. Indeed, a slightly cut-down variation of the clock can be compressed into a single 1017-byte HTML page (providing you're using a highly-compliant browser such as Google Chrome or Mozilla Firefox).