From Lockdown Sceptics
Stay sane. Protect the economy. Save livelihoods.

An experienced senior software engineer, Sue Denim, has written a devastating review of Dr. Neil Ferguson’s Imperial College epidemiological model, the one that set the world on its current lockdown course of action.

She appears quite qualified.

My background. I wrote software for 30 years. I worked at Google between 2006 and 2014, where I was a senior software engineer working on Maps, Gmail and account security. I spent the last five years at a US/UK firm where I designed the company’s database product, amongst other jobs and projects.

She explains that the code she reviewed isn’t actually Ferguson’s original, but a modified version produced by a team trying to clean it up as a face-saving measure.

The code. It isn’t the code Ferguson ran to produce his famous Report 9. What’s been released on GitHub is a heavily modified derivative of it, after having been upgraded for over a month by a team from Microsoft and others. This revised codebase is split into multiple files for legibility and written in C++, whereas the original program was “a single 15,000 line file that had been worked on for a decade” (this is considered extremely poor practice).

She then discusses a fascinating aspect of this model. You never know what you’ll get!

Non-deterministic outputs. Due to bugs, the code can produce very different results given identical inputs. They routinely act as if this is unimportant.

This problem makes the code unusable for scientific purposes, given that a key part of the scientific method is the ability to replicate results. Without replication, the findings might not be real at all – as the field of psychology has been finding out to its cost. Even if their original code was released, it’s apparent that the same numbers as in Report 9 might not come out of it.
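To make the non-determinism concrete: one common way a program can produce different results from identical inputs is a data race in multithreaded code. The C++ sketch below is purely illustrative and is not taken from the Imperial codebase; it simply shows how easily this class of bug yields a different answer on every run, even with nothing random in the inputs.

```cpp
// Hypothetical illustration only -- not code from the Imperial model.
// A shared counter updated from several threads without synchronisation
// is a data race: identical inputs can yield different totals each run.
#include <iostream>
#include <thread>
#include <vector>

int main() {
    long long total = 0;                    // shared, unsynchronised state
    auto work = [&total] {
        for (int i = 0; i < 1'000'000; ++i)
            total += 1;                     // racy read-modify-write
    };

    std::vector<std::thread> threads;
    for (int t = 0; t < 4; ++t)
        threads.emplace_back(work);
    for (auto& th : threads)
        th.join();

    // Expected 4,000,000, but lost updates make the result vary run to run.
    std::cout << "total = " << total << '\n';
    return 0;
}
```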

Ms. Denim elaborates on this “feature” at some length. It’s quite hilarious when you read the complete article.

Imperial are trying to have their cake and eat it.  Reports of random results are dismissed with responses like “that’s not a problem, just run it a lot of times and take the average”, but at the same time, they’re fixing such bugs when they find them. They know their code can’t withstand scrutiny, so they hid it until professionals had a chance to fix it, but the damage from over a decade of amateur hobby programming is so extensive that even Microsoft were unable to make it run right.

Readers may be familiar with the averaging of climate model outputs in climate science, where it’s known as the ensemble mean. Or with those cases where it’s assumed that errors all average out, as in certain temperature records.
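For readers who haven’t seen this done, the “just run it a lot of times and take the average” defence amounts to computing an ensemble mean over repeated runs. The hypothetical C++ sketch below illustrates the idea with a stand-in model function; nothing in it comes from the Imperial code.

```cpp
// Hypothetical sketch of the "run it many times and average" approach.
// runModel() is a stand-in for any stochastic simulation; nothing here
// comes from the Imperial codebase.
#include <cmath>
#include <iostream>
#include <random>
#include <vector>

double runModel(std::mt19937& rng) {
    // Placeholder: a "prediction" with run-to-run noise.
    std::normal_distribution<double> noise(500000.0, 80000.0);
    return noise(rng);
}

int main() {
    std::mt19937 rng(std::random_device{}());
    const int runs = 100;

    std::vector<double> results;
    for (int i = 0; i < runs; ++i)
        results.push_back(runModel(rng));

    double mean = 0.0;
    for (double r : results) mean += r;
    mean /= runs;

    double var = 0.0;
    for (double r : results) var += (r - mean) * (r - mean);
    double sd = std::sqrt(var / (runs - 1));

    // The ensemble mean gives one headline number; the standard deviation
    // shows how much individual runs disagree.
    std::cout << "ensemble mean = " << mean << ", run-to-run sd = " << sd << '\n';
    return 0;
}
```

Of course, the average by itself says nothing about whether the spread between runs comes from intended randomness or from bugs, which is the distinction Denim is drawing.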

Denim goes on to describe a lack of regression testing (or indeed any testing), undocumented equations, and the ongoing addition of new features to bug-infested code.
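For context, a regression test in this setting is simply a check that a fixed-seed run still reproduces a previously recorded output. Here is a minimal, hypothetical sketch; the simulation function and file name are invented for illustration and do not reflect the actual model.

```cpp
// Hypothetical regression-test sketch: run the model with a fixed seed
// and compare against a recorded baseline. Names are invented.
#include <fstream>
#include <iostream>
#include <random>
#include <sstream>
#include <string>

// Stand-in for the real simulation entry point; a deterministic model
// should produce identical output every time for the same seed.
std::string runSimulation(unsigned seed) {
    std::mt19937 rng(seed);
    std::ostringstream out;
    for (int i = 0; i < 5; ++i)
        out << std::uniform_int_distribution<int>(0, 999)(rng) << '\n';
    return out.str();
}

int main() {
    const unsigned seed = 12345;
    const std::string output = runSimulation(seed);
    const std::string refPath = "reference_output.txt";

    std::ifstream ref(refPath);
    if (!ref) {
        // First run: record the baseline for future comparisons.
        std::ofstream(refPath) << output;
        std::cout << "Recorded baseline output\n";
        return 0;
    }

    std::stringstream expected;
    expected << ref.rdbuf();
    if (output != expected.str()) {
        std::cerr << "REGRESSION: output no longer matches the baseline\n";
        return 1;
    }
    std::cout << "OK: output matches the baseline\n";
    return 0;
}
```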

Denim’s final conclusions are devastating.

Conclusions. All papers based on this code should be retracted immediately. Imperial’s modelling efforts should be reset with a new team that isn’t under Professor Ferguson, and which has a commitment to replicable results with published code from day one. 

On a personal level, I’d go further and suggest that all academic epidemiology be defunded. This sort of work is best done by the insurance sector. Insurers employ modellers and data scientists, but also employ managers whose job is to decide whether a model is accurate enough for real world usage and professional software engineers to ensure model software is properly tested, understandable and so on. Academic efforts don’t have these people, and the results speak for themselves.

Full article here.



