Suppose the number of people infected grows exponentially with time as $e^{rt}$. Consider two cases: in the first, the true rate is less than $r$ by $100\delta$ percent, i.e., equals $r(1-\delta)$; in the second, it is greater than $r$ by the same percentage, $r(1+\delta)$. The absolute value of the error in the estimated number of infections is $e^{rt} - e^{r(1-\delta)t}$ in the former case and $e^{r(1+\delta)t} - e^{rt}$ in the latter. The ratio of the latter to the former is simply

$$\frac{e^{r(1+\delta)t} - e^{rt}}{e^{rt} - e^{r(1-\delta)t}} = \frac{e^{r\delta t} - 1}{1 - e^{-r\delta t}} = e^{r\delta t}.$$
As $t$ gets large, this ratio, $e^{r\delta t}$, increases, in essence, exponentially. Of course, one could argue that the real-world infection curve would soon reach its inflection point and become concave; if 100% eventually got infected, it would come to look, normalized by population size, like a distribution function. During the exponential growth phase, however, $e^{r\delta t}$ is a strictly increasing function of $t$ and is greater than one for any positive $t$.
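A quick numerical check of this (just a sketch; $r = 0.2$ and $\delta = 0.1$ are arbitrary illustrative values, not anything from the debate):

```python
import math

r, delta = 0.2, 0.1  # arbitrary illustrative values: estimated rate, relative error

for t in (10, 20, 30):
    over_err = math.exp(r * t) - math.exp(r * (1 - delta) * t)   # true rate below r
    under_err = math.exp(r * (1 + delta) * t) - math.exp(r * t)  # true rate above r
    print(f"t={t}: under/over = {under_err / over_err:.4f}, "
          f"e^(r*delta*t) = {math.exp(r * delta * t):.4f}")
```

The two printed columns agree, and the ratio roughly doubles every ten days with these parameters.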
This ratio would still be (strictly) above one for any (strictly) convex function $f$, as follows from Jensen's inequality: since $rt$ is the midpoint of $r(1-\delta)t$ and $r(1+\delta)t$,

$$f(rt) < \frac{f(r(1-\delta)t) + f(r(1+\delta)t)}{2},$$

and rearranging gives $f(r(1+\delta)t) - f(rt) > f(rt) - f(r(1-\delta)t)$.
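The inequality is easy to eyeball numerically for a few strictly convex functions (again an illustrative sketch; the functions and parameters are my arbitrary picks):

```python
import math

r, delta, t = 0.2, 0.1, 10.0          # arbitrary illustrative parameters
x_lo, x_mid, x_hi = r * (1 - delta) * t, r * t, r * (1 + delta) * t

for name, f in [("x^2", lambda x: x * x),
                ("e^x", math.exp),
                ("x^4", lambda x: x ** 4)]:
    over_err = f(x_mid) - f(x_lo)     # error when the true rate is below r
    under_err = f(x_hi) - f(x_mid)    # error when the true rate is above r
    print(f"{name}: under/over = {under_err / over_err:.4f} (> 1 by convexity)")
```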
Things get more complicated with functions that look like the logistic curve, but this asymmetry, striking for fast-growing convex functions, is still worth bearing in mind. This is just my two cents on the Taleb–Ioannidis debate, or should I call it Taleb's criticism of Ioannidis and others: underestimating the spread rate by a certain amount or percentage can lead to a much greater error in the estimated number of sick people than overestimating it by the same amount or percentage.
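For what it's worth, here is a small sketch of how the asymmetry behaves for a logistic curve (the parametrization $1/(1 + e^{c - rt})$ and all the numbers are arbitrary choices of mine): the ratio stays above one before the inflection point at $t = c/r$, equals one exactly at it, and flips below one after it.

```python
import math

def logistic(rate, t, c=5.0):
    # normalized infection curve 1 / (1 + e^(c - rate*t)); inflection at t = c / rate
    return 1.0 / (1.0 + math.exp(c - rate * t))

r, delta = 0.2, 0.1  # arbitrary values; the inflection point is then at t = 25
for t in (5, 15, 25, 35, 45):
    over_err = logistic(r, t) - logistic(r * (1 - delta), t)
    under_err = logistic(r * (1 + delta), t) - logistic(r, t)
    print(f"t={t}: under/over = {under_err / over_err:.3f}")
```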