What the study shows is that when disease prevalence is low, even a test with good specificity is nearly useless. The raw prevalence was 1.5%. The authors then decided the data were skewed by where and from whom responses came, so they mathed their way to higher numbers. Additionally, the study drew from volunteers, likely people concerned they may have been infected. To top that off, the test makers estimate 98.3 to 100% specificity. If their test were 99% specific, roughly 40% of the positives would be expected to be false; at the 98.3% lower bound, over half would be.
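As a sketch of that arithmetic, here is the expected share of false positives among all positives, using the study's raw 1.5% prevalence and the kit's stated specificity range. Assuming 100% sensitivity for simplicity (the real kit's sensitivity is lower, which would only make the picture worse):

```python
def false_positive_share(prevalence, specificity, sensitivity=1.0):
    """Expected fraction of positive results that are false positives."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    return false_pos / (true_pos + false_pos)

# At 99% specificity, ~40% of positives are false...
print(round(false_positive_share(0.015, 0.99), 3))   # 0.396

# ...and at the 98.3% lower bound, more than half are.
print(round(false_positive_share(0.015, 0.983), 3))  # 0.527
```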
Santa Clara is also part of the epicenter of cases in Northern California -- it's #4 in case count among California counties. So yes, outside of NYC, NOLA, etc., I'd expect few places to have their number of cases.
This is also a really good demonstration of the difficulty we're going to face in using antibody testing to determine who can go back to work.
Suppose you gave a typical person the following hypothetical: "I have an antibody test that has 99% specificity and 100% sensitivity. At this point, we have good evidence the true prevalence of this disease is 1%. I've tested 10,000 people and you were one of the positives. How likely are you to actually have protective antibodies?"
Most people would answer "99%." That answer is very understandable -- the test is only wrong 1 time in 100, so a positive result seems 99% reliable. The answer is also wrong.
If you test 10,000 people among whom the disease has a 1% prevalence, you will generate 100 true positives -- people who test positive and actually have the disease. Unfortunately, given the 99% specificity, you will also generate about 100 false positives (1% of the 9,900 uninfected) -- people who test positive but have no COVID-19 antibodies. So if you are one of those 10,000 and you have just been handed a positive result, it's actually even odds that you are not in fact protected from the disease at all: roughly 100 people in the sample test positive and have protective antibodies, and roughly 100 test positive and have none.
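Counting out the hypothetical above makes the "even odds" concrete -- 10,000 people, 1% true prevalence, 100% sensitivity, 99% specificity:

```python
n = 10_000
prevalence, sensitivity, specificity = 0.01, 1.00, 0.99

infected = n * prevalence                              # 100 people
true_positives = infected * sensitivity                # 100
false_positives = (n - infected) * (1 - specificity)   # 9,900 * 1% = 99

# Positive predictive value: chance a positive result is real.
ppv = true_positives / (true_positives + false_positives)
print(f"total positives: {true_positives + false_positives:.0f}")  # 199
print(f"chance a positive is real: {ppv:.1%}")                     # 50.3%
```

The 99% specificity is a property of the test; the ~50% reliability of a positive result is a property of the test *and* the population it's used on.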
This is the reason we use confirmatory screening -- but the confirmatory test must not replicate the cause of the original false positive, or it adds no value. It will take time to get the testing right even once we have tools we are reasonably confident have high specificity and high sensitivity. And that will require both patience and enough faith in the public health community to accept that 99% specificity alone isn't something you ought to bet your health on.