Why Experts Doubt Mortality Rates in Recent COVID-19 Study
There has been a lot of talk lately about several COVID-19 studies that use antibody testing to estimate the prevalence of infection in a population. In theory, antibody tests reveal whether a person has already had and recovered from COVID-19. Given the shortage of tests available while people are still experiencing symptoms (and the fact that many people have only mild symptoms, or none at all), this could be a useful tool for assessing the full extent of the pandemic.
In particular, a study of prevalence rates in Santa Clara County, California, attracted attention with its finding that the prevalence of people with antibodies (that is, people who had become infected and recovered) was much higher than official estimates.
Antibody research is of interest because it can help us assess who has immunity against COVID-19, which can be useful as states seek to reopen, especially as we move toward potential herd immunity. But putting too much faith in research that suggests low mortality rates, especially research that does not account for errors and biases in its data, can lead people to think that the disease is less dangerous than it really is.
In the Santa Clara study, 3,330 people were recruited through Facebook ads and tested for antibodies against the virus that causes COVID-19. Of these 3,330 people, 50 tested positive. Extrapolating from this number, the authors estimated that the prevalence of past infection in Santa Clara County was 50 to 85 times the official figure. Using this estimate, they put the mortality rate from COVID-19 in the range of 0.12 to 0.20 percent, compared with other estimates that usually fall in the range of 1 to 3 percent. It is difficult to estimate the mortality rate of this disease, given its novelty and the lack of widespread testing. But in general, in scenarios where the testing rate was higher, the mortality rate appears to be somewhere in the 1 to 3 percent range.
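The mechanics of this kind of extrapolation can be sketched in a few lines. This is a back-of-the-envelope illustration, not the study's actual model: the sample figures come from the article, but the county population and death count below are rough, assumed values for illustration only.

```python
# Back-of-the-envelope version of the study's style of extrapolation.
positives, sample_size = 50, 3330
raw_rate = positives / sample_size            # ~1.5% of the sample tested positive

county_population = 1_900_000                 # assumption: rough Santa Clara County population
estimated_infections = raw_rate * county_population

# If the death count stays fixed while the estimated number of
# infections grows 50-85x over official case counts, the implied
# fatality rate shrinks by the same factor.
deaths = 100                                  # hypothetical death count, for illustration
ifr = deaths / estimated_infections

print(f"raw positive rate: {raw_rate:.2%}")
print(f"estimated infections: {estimated_infections:,.0f}")
print(f"implied fatality rate: {ifr:.2%}")
```

The key point is that the fatality rate is a ratio: holding deaths constant, every multiple added to the estimated infection count divides the implied fatality rate by the same multiple, so any error in the prevalence estimate flows straight into the mortality estimate.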
The findings from the Santa Clara study were striking, but they did not stand up to the scrutiny that followed the first encouraging headlines. Since the preprint was released, one data problem after another has surfaced. And the Santa Clara study isn't the only antibody study that has sparked controversy, although it got the biggest notice last week.
When it comes to analyzing these studies to determine how much stock we can put in them, scientists consider the following factors:
How were the participants recruited?
For the Santa Clara study, participants were recruited through Facebook ads. While the researchers did try to correct for the oversampling of some demographics, this sampling technique introduces biases, not least because many participants may have been motivated by a desire to know whether a previous illness had been COVID-19. This factor could have significantly skewed the results, especially since the study's findings rested on only 50 positive results out of 3,330.
Then came the problematic discovery that some of the participants in the Santa Clara study were recruited by the wife of one of the authors, who sent an email to a high school email list suggesting that testing would determine whether recipients could "get back to work without fear," along with the false claim that the test had been approved by the FDA.
An ideal sample would be drawn at random. In the real world, that may not always be possible, but it is important to note what sampling method a study followed and how it corrected for any possible bias.
How accurate is the test?
COVID-19 antibody tests are still in development, and we still have a lot to learn about their accuracy. This is all complicated by the fact that the FDA has relaxed its rules, allowing companies to sell tests that have not been vetted by the government.
When it comes to test accuracy, you should consider the rate of false negatives (when results say you do not have antibodies to COVID-19 even though you actually do) as well as the rate of false positives (when results say you have antibodies even though you actually do not).
In an analysis of 14 different antibody tests conducted by a team of more than 50 scientists, only three gave consistently reliable results. And of these 14 different tests, only one did not give false positives. False positive results are especially problematic because they can give subjects a false sense of security – the mistaken belief that a person is immune when in fact the test results were erroneous. (Of course, it’s important to note that we don’t yet know to what extent antibodies confer immunity to the virus that causes COVID-19.)
It is the false positive rate that is particularly troubling in the Santa Clara study. The authors' conclusions do not account for any false positives, which is especially problematic given that the test they used has a false positive rate with a confidence interval of 0.1% to 1.7%, while only 1.5% of their samples tested positive. In theory, every one of those positive samples could be a false positive, which the authors did not account for. Even if the actual false positive rate is much lower, with such a small number of positives, small differences can significantly skew the results.
Are the results plausible?
The Santa Clara study uses its findings to estimate a mortality rate of 0.12 to 0.20 percent. However, as Undark points out, if the death rate were only 0.12 to 0.20 percent, then the number of deaths from COVID-19 in New York City would imply that 12.5 million people had been infected, while the population of the city is only 8.3 million.
Given that New York is still registering new cases every day and has not seen an influx of an additional 4 million people in the past month, this suggests that the death rate is not really in the 0.12 to 0.20 percent range.
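This sanity check can be reproduced in a couple of lines. The death toll below is not stated in the article; it is back-derived from the article's own 12.5 million figure (12.5 million infections at a 0.12 percent fatality rate implies roughly 15,000 deaths), so treat it as an assumed input:

```python
# Sanity check: if the fatality rate were really 0.12%, New York
# City's death toll would imply more infections than residents.
nyc_deaths = 15_000         # assumption: back-derived from the article's 12.5M figure
nyc_population = 8_300_000  # figure from the article
fatality_rate = 0.0012      # 0.12%, the low end of the study's estimate

implied_infections = nyc_deaths / fatality_rate
print(f"implied infections: {implied_infections:,.0f}")
print(f"city population:    {nyc_population:,}")
```

Since the implied number of infections (about 12.5 million) exceeds the entire population of the city, the assumed fatality rate fails the most basic plausibility test.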
Has this study been peer reviewed?
Life in a pandemic changes every day, and new research results are published every hour. On the whole, the ongoing research is a very good thing: there is still a lot we don't know. However, given the speed at which all of this is happening, and our desperate hunger for more information, more and more preprints (studies that have not yet been peer reviewed) are being reported as news.
It is doubly important to scrutinize how a study is covered when it is new and not peer reviewed. What do other scientists say about the methodology and results? What do they say about the study's limitations? No study is perfect. If the news coverage does not include a variety of voices, including analysis from experts who were not involved in the study as well as experts who can speak to its limitations, that is a sign you may want to wait before deciding whether you can rely on the results.