The acid test for any model is whether it can predict successfully out of sample. No evidence has been offered of this model’s ability to forecast. However, we do have a natural experiment to fall back on.
In “Intervention strategies against COVID-19 and their estimated impact on Swedish healthcare capacity,” the authors re-implemented the Imperial College model and applied it to Sweden. An examination of the model documentation and the model source code (also written in C and hosted on GitHub) shows that it is the same model. The Swedish version made clear short-term predictions of the number of fatalities that would occur in Sweden if it followed its announced laissez-faire policy of social distancing, and of how much those fatalities would be reduced under various other policies similar to those employed in the US and the UK. In chart A of figure 4 of the paper, the model predicts that by May 9 (when this review was written) Sweden would see about 100,000 deaths under its announced policy and about 25,000 deaths under the most stringent social-distancing policies. On May 9, the actual number of fatalities in Sweden was 3,175.
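The scale of the discrepancy can be checked with simple arithmetic. A minimal sketch, using only the figures quoted above (the predicted counts are approximate values read off chart A of figure 4):

```python
# Figures from the text: model predictions for Sweden by May 9
# versus the actual reported fatalities on that date.
predicted_announced_policy = 100_000  # model, announced laissez-faire policy
predicted_most_stringent = 25_000     # model, most stringent distancing
actual_may_9 = 3_175                  # reported Swedish fatalities

# Even the most stringent-policy prediction exceeds the actual count
# by roughly a factor of 8; the announced-policy prediction by over 30.
ratio_announced = predicted_announced_policy / actual_may_9
ratio_stringent = predicted_most_stringent / actual_may_9
print(f"Announced policy: over-predicted by {ratio_announced:.1f}x")
print(f"Most stringent:  over-predicted by {ratio_stringent:.1f}x")
```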
Thus, the model massively over-predicted fatalities: even the prediction for the most stringent policy exceeded the actual count by roughly a factor of eight, and the announced-policy prediction by more than a factor of thirty. The zombie assumption is the likely problem.
Overall conclusion: this model cannot be relied on to guide coronavirus policy. Even if the documentation, coding, and testing problems were fixed, the model logic is fatally flawed, as evidenced by its poor forecasting performance.