Q&A: Statistics and Mistaken Conclusions
Question
Lately, ever since the coronavirus pandemic burst into our lives, we have been hearing many statisticians present completely different models and forecasts; most of them were wrong, at least in the Israeli arena. I want to ask a somewhat ambitious question and set the numbers aside for a moment. Since statistics is (as close as possible to) an exact science, one might ostensibly have expected a high degree of agreement among the opinions, or at least an ability to predict how things would develop. Did their failure lie in the pure statistical calculation, or in the assumptions they used to build their models? The question can be put more generally: is statistics a limited tool because it relies on many assumptions the researcher brings to it, or is it the researcher who struggles to carry out this complicated task wisely?
Answer
Statistics is not mathematics; that is a common misconception. Mathematical tools are used, but such tools are used in many fields. The essence of a scientific field lies not in the mathematics but in the assumptions under which the mathematical tools are applied. There are many such assumptions, and the results of course depend on them very strongly.
Incidentally, and as a separate point, I don't know whether you're right that the statisticians were so mistaken.
Discussion on Answer
There were a lot of mistakes, for example… Bibi said at the beginning of the coronavirus outbreak (March 12) that mortality worldwide was 3–4%, although the website https://www.worldometers.info/coronavirus/ published different figures. By my calculation, it depends on whether one computes the percentage of deaths relative to those who recovered (i.e., relative to closed cases) or relative to the total number of sick people. In my opinion it is obvious that the calculation should be made relative to the recovered, contrary to Bibi.
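To make the two calculations concrete, here is a minimal sketch of the difference between them. The figures are purely illustrative, not actual data from the website mentioned above; the point is only that the two denominators can produce very different fatality rates in the middle of an epidemic.

```python
# Two ways to estimate a case fatality rate (CFR) mid-epidemic.
# All numbers below are invented for illustration only.
deaths = 500
recovered = 9_500
active = 40_000                      # cases with no outcome yet
confirmed = deaths + recovered + active

# Naive CFR: deaths over all confirmed cases. This tends to understate
# the rate mid-epidemic, because active cases have not yet resolved.
naive_cfr = deaths / confirmed

# Closed-case CFR: deaths over resolved cases only (deaths + recoveries).
# This can overstate the rate if deaths are reported faster than recoveries.
closed_cfr = deaths / (deaths + recovered)

print(f"naive CFR:       {naive_cfr:.1%}")   # 1.0%
print(f"closed-case CFR: {closed_cfr:.1%}")  # 5.0%
```

With the same raw counts, the two conventions differ by a factor of five here, which is the kind of gap the discussion above is pointing at.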
I didn't understand what follows from the fact that statistics is not mathematics. In any case, it depends on the variables the researcher chooses to examine and to which he attributes importance. Logical failures in the assumptions, and in the analogies the researcher tries to draw between his findings and the issue at hand, can overshadow even a precise statistical inference. The multiplicity of opinions in statistics (which is supposed to be fairly precise) reveals gaps in logic and in the perception of reality. Isn't that so?
https://www.ynet.co.il/articles/0,7340,L-5724717,00.html
An article that shows (at least in general terms) the large gaps between the grim forecasts in Israel, the very different reality, and the unclear effectiveness of the lockdown… That is, assuming the lockdown isn't the decisive factor here, the forecasts that spoke of 10,000 deaths in Israel missed by a wide margin.
What mistake did you see in the forecasts?
The forecasts were that 0.2%–0.5% of those infected would die. Thanks to the lockdowns, not many people were infected, so not many died.
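The arithmetic behind this reply can be sketched briefly. A fatality-rate forecast only translates into a death toll once you multiply it by the number of infections, so a low death count with few infections does not by itself contradict the forecast. The population figure and infection shares below are illustrative assumptions, not data from the article.

```python
# Expected deaths = number infected x infection fatality rate (IFR).
# All scenario numbers here are illustrative assumptions.
population = 9_000_000          # roughly Israel's population
ifr = 0.003                     # mid-range of the 0.2%-0.5% forecast

# Scenario A: widespread infection (no effective lockdown), say half infected.
infected_no_lockdown = int(population * 0.5)
deaths_no_lockdown = infected_no_lockdown * ifr

# Scenario B: infections suppressed to a small number by lockdown.
infected_lockdown = 17_000
deaths_lockdown = infected_lockdown * ifr

print(f"no lockdown:   ~{deaths_no_lockdown:,.0f} expected deaths")
print(f"with lockdown: ~{deaths_lockdown:,.0f} expected deaths")
```

The same 0.3% rate yields thousands of deaths in one scenario and a few dozen in the other; the forecast and the outcome differ in the infection count, not necessarily in the rate.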