What do you make of this?


Amity Island

Hi Everyone, can anybody verify what is being said in this video? It's something to do with taking the false positives from the total number of tests done, rather than just from the number of positive tests done, which gives a very different result.

Thanks

 
It would be easier to find the latest "More or Less" on BBC Sounds and have a listen; somebody far more articulate than me explains it there. Maybe the episode before last.

Essentially there are two ways of defining the false positive rate, and unless you say which definition you are using, whatever you say is rubbish, as demonstrated by Hancock and that crazy interviewer who thought she knew what she was talking about but clearly showed she did not.
 
Ah, well, see, it's that Applied Mathematics: you have to understand a lot more about the subject, and the problem you're applying it to, to grasp the maths.
 
Morning Doc B,

Having watched it again, is this what he is saying?

Correct me if I misunderstand him, but in a nutshell he's saying that the government are, for example, publishing that they have had say 11,000 positive tests and then allowing for about 1% of the 11,000 (just 110 of them) to be false positives, thus predicting that 10,890 of the 11,000 positive tests are true positives.

The chap in the video is saying this is completely the wrong method: you don't deduct 1% of just the positive test results, you take 1% of the total number of tests done and subtract that figure from the number of positive tests. In the same example, the government's 11,000 positives need to be adjusted by the 1% of false positives taken from the entire number of tests.

So in the example where you have done:

1,000,000 (1 million) total tests, and a minimum of 1% of those are false positives.

1% of 1,000,000 is 10,000.

So from say 11,000 positive tests, 10,000 are false positives, leaving in actual fact only 1,000 true positives, i.e. in this example about 9 out of 10 positive tests are false.

Changing the positive test results from 10,890 nearer to 1,000.
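The two methods being contrasted can be put side by side in a few lines (a sketch using the illustrative numbers above, not real data):

```python
# Contrast the two ways of applying a 1% false positive figure,
# using the illustrative numbers from the example above.
total_tests = 1_000_000
positive_tests = 11_000
fp_rate = 0.01

# Method attributed to the government: 1% of the POSITIVE tests are false.
false_positives_a = fp_rate * positive_tests       # 110
true_positives_a = positive_tests - false_positives_a

# Method the video advocates: 1% of ALL tests are false positives.
false_positives_b = fp_rate * total_tests          # 10,000
true_positives_b = positive_tests - false_positives_b

print(true_positives_a, true_positives_b)
```

The gap between 10,890 and 1,000 "true" positives is exactly the disagreement being discussed in this thread.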
 
That's silly. A test that's not positive obviously can't be a false positive, because it's not a positive at all.

The 1% (actually thought to be a bit less than that, and the ONS weekly survey have to do extra checking to reduce theirs further) is based on what people actually mean by false positive which is the proportion of the positive tests which are actually of people who do not have the infection. (If you wanted the proportion of all tests that incorrectly test positive you'd need a much smaller number which would also have to vary depending on the infection rate.)
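Why the "proportion of all tests" version would have to vary with the infection rate can be shown with a small Bayes-style sketch (the error rates here are made-up illustrations, not official figures):

```python
# Share of positive tests that are false, as a function of prevalence.
# Assumed, illustrative inputs:
#   fp_per_uninfected: chance an UNINFECTED person tests positive
#   sensitivity: chance an INFECTED person tests positive
def false_share_of_positives(prevalence, fp_per_uninfected=0.005, sensitivity=0.8):
    false_pos = (1 - prevalence) * fp_per_uninfected
    true_pos = prevalence * sensitivity
    return false_pos / (false_pos + true_pos)

# At very low prevalence most positives are false; at higher prevalence
# the same per-test error rate leaves most positives genuine.
low = false_share_of_positives(0.001)   # roughly 0.86
high = false_share_of_positives(0.05)   # roughly 0.11
```

The per-test error rate stays fixed, yet the share of positives that are false swings enormously with prevalence, which is the point being made about the two definitions.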
 
How many of the negatives are false negatives?
 
I'll have a go @Amity Island, but it's the thick end of 30 years since I looked at this sort of stuff professionally, so bear with me.

Not sure what he was trying to say, except that as usual he was trying to put the best gloss possible on whatever point was being made. The trouble is, he did not really understand what he was talking about.

The underlying issue is not too difficult. No test is perfect. There are always errors and it is wise to have an assessment of the extent of those errors. Conventionally you would express that statistically but if you don't understand statistics then you look for something that you think you do understand. That's where the false positive thing comes in. By the way, there are also false negatives but they are not mentioned. They then get things really twisted when they talk about the false positive RATE. If you do that you have got to define what rate is being referred to.

So, you evaluate your testing procedure, which will measure something. Make the measurement a number of times on what you think is the same sample and you will get a measure of the reproducibility of the result. You then set a value at which you deem the test to be positive and from your measurements calculate a statistical probability of your test being in error, i.e. giving a false positive or a false negative. Normally this is expressed as something like a 95/95 confidence level, that is, you are 95% confident that 95% of the results fall into the category you have assigned them to.

Give that to a Hancock or a mouthy journalist and they, probably like you, get totally lost. What they want to know is how many tests are wrong, and they want a simple answer. Well, there isn't one. Rather than say they are not clever enough to understand it, they finish up with rather silly concepts of false negatives and false positives and the even sillier concept of a false positive (or negative) rate. Sounds impressive, but it is all actually a bit meaningless.

They do very silly things like taking 95/95 confidence intervals and saying, quite erroneously, that it means 5% of the tests give false results. Some then move that on and say that 5% of the positive results are false. Others use it to say that 5% of all results are false. Quite clearly you cannot have 5% of positive results being false and 5% of all results being false at the same time. Neither is right, and all you end up with is silly conclusions of the sort you mention in your post.
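The incompatibility of "5% of positives are false" and "5% of all tests are false positives" is obvious once you attach numbers (made up purely for illustration):

```python
# With 1,000 tests of which 100 come back positive, the two readings
# of "5% are false" name very different numbers of tests.
tests = 1_000
positives = 100

five_pct_of_positives = 0.05 * positives   # 5 tests
five_pct_of_all_tests = 0.05 * tests       # 50 tests

# Both figures cannot describe the same set of false positives,
# so a quoted "5%" is meaningless until you say 5% of what.
```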

Personally I would ban the use of percentages by politicians and journalists unless they have a recognised qualification in statistics and statistical analysis. Even then I would not trust what they are saying.
 
How many of the negatives are false negatives?

A false negative is a negative test for someone who is, in fact, infected (so someone who ought to test positive). So the number of the negatives that are false negatives depends a lot on the real infection rate, and the infection rate isn't that high.

It's thought to be fairly small: most actually infected people will get a positive test so you don't end up with many in the testing-negative but really infected set.

I think this is the relevant episode of More or Less: https://www.bbc.co.uk/programmes/p08s7b5d
 
My understanding was: with 1 million tests you get 1% that will give an incorrect positive result, or 10,000 incorrect tests.

Using the 1 million tests on 1 million UNINFECTED people, it would appear that there are 10,000 infected, when actually nobody is infected.

Now assume 10,000 people ARE infected. Using those million tests, this would show that out of a million tests, 20,000 people are infected, which would indicate a 2% infection rate and be wrong by 50%!
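The arithmetic above can be checked directly. The one refinement is that the 1% of false positives strictly comes from the 990,000 uninfected people, giving 9,900 rather than 10,000 (this sketch also assumes, as the post does, that every genuinely infected person tests positive):

```python
tests = 1_000_000
fp_rate = 0.01  # assumed: 1% of uninfected people test positive

# Case 1: nobody is infected, so every positive is false.
apparent_cases_1 = tests * fp_rate                 # 10,000 apparent cases

# Case 2: 10,000 people really are infected.
really_infected = 10_000
apparent_cases_2 = really_infected + (tests - really_infected) * fp_rate
apparent_rate = apparent_cases_2 / tests           # just under 2%
```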
 
That's silly. A test that's not positive obviously can't be a false positive, because it's not a positive at all.

The 1% (actually thought to be a bit less than that, and the ONS weekly survey have to do extra checking to reduce theirs further) is based on what people actually mean by false positive which is the proportion of the positive tests which are actually of people who do not have the infection. (If you wanted the proportion of all tests that incorrectly test positive you'd need a much smaller number which would also have to vary depending on the infection rate.)
Hi Bruce,

It seems to make sense to me lol 🙂. If you've tested 1,000,000 people, apparently you will get 10,000 false positives. Bruce, is this bit correct, if using a 1% false positive rate?

Bruce second bit:

If in that same 1,000,000 tests you get 11,000 positives, you'd then have to deduct the 10,000 false positives from it, leaving just 1,000 true positive test results.

What do you reckon?
 
Using the 1 million tests on 1 million UNINFECTED people, it would appear that there are 10,000 infected, when actually nobody is infected.

Yes, which is obviously a concern for the ONS survey. And we can be sure that whatever they're doing gives a false positive rate much lower than 1% since their estimates for the infection rate have suggested that 0.02% or so of the population are infected (with a large error margin, but even so).

That kind of false positive rate is presumably not practical for the regular PCR tests. But since we're getting O(7000) positives (the "confirmed cases") from testing O(80000) people a day, the positive rate seems large enough that false positives don't look that important to me. (We might do 200,000 tests a day, but the weekly report shows that's on 80,000 or so people.)
 
It seems to make sense to me lol 🙂. If you've tested 1,000,000 people, apparently you will get 10,000 false positives. Bruce, is this bit correct, if using a 1% false positive rate?

Yes. Fortunately(?) we test less than 100,000 people a day and we're getting over 5,000 positive tests.

If in that same 1,000,000 tests you get 11,000 positives, you'd then have to deduct the 10,000 false positives from it, leaving just 1,000 true positive test results.

No, I don't think any statistician would support doing that. Rather, you'd just have to say that your test isn't good enough to say anything beyond that there might not be any infection at all (though there might be).
 
Hi Bruce,

So in simple terms, is what the chap in the video saying correct or not?

Thanks
 
Looking at the trend of hospital admissions would be a better metric for the actual increase or decrease of infections over time, although testing numbers give a good indication of the immediate level of exponential spread, which in turn indicates future hospital admissions approx. 2-3 weeks later.
 
So in simple terms, is what the chap in the video saying correct or not?

He's wrong. (He claims Carl Heneghan supports his idea of what false positive means, which I find very hard to believe. I suspect a misunderstanding, and I think More or Less conclude the same.)

(Lots of Heneghan's critiques seem perfectly reasonable to me: PCR isn't really a yes/no test so knowing the iteration number would surely be useful; it seems plausible that some positive tests are showing people who aren't infectious, and maybe we shouldn't worry about them if we could measure that. I'd like it if he took seriously the possibility that this infection might have an unusual rate and severity of long-term complications; I think whenever he's talked about that he's dismissed it by saying that other viruses can also (rarely) cause long-term complications.)


Suppose there's some condition (or other property) of a person, X (which most people don't have), and a test for it which just returns positive or negative.

There are two relevant errors for this test
  1. if I get a positive test, what's the chance that I don't in fact have X?
  2. if I have X, what's the chance that the test returns a negative result?
To estimate the first one you obviously take some people who've tested positive and check how many of them in reality don't have X. To estimate the second one you need to find some people who have X, test them, and see how many test negative. Neither involves the total number of people you tested.

(For our coronavirus tests we don't have a great idea of what the two error rates are.)
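The two errors can be sketched as estimates from a confusion matrix; note that the total number of people tested never enters either formula (the counts here are invented for illustration):

```python
# Estimate the two error rates from labelled test outcomes.
#   tp: positive test, really has X     fp: positive test, doesn't have X
#   fn: negative test, really has X     tn: negative test, doesn't have X
def error_rates(tp, fp, fn, tn):
    share_of_positives_wrong = fp / (tp + fp)   # error 1 above
    share_of_x_missed = fn / (tp + fn)          # error 2 above
    # tn (and hence the grand total tested) is not needed for either.
    return share_of_positives_wrong, share_of_x_missed

e1, e2 = error_rates(tp=90, fp=10, fn=20, tn=880)
```

The unused `tn` argument is the point: applying a false positive percentage to the total number of tests done, as the video does, mixes the grand total into a quantity that is defined without it.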
 
Thanks Bruce.
 
Hi Bruce,

So using the official figures, this is what he's saying (watch the video at 2:50).

Total covid19 tests done to date
19,583,360

Total covid19 positives found to date
423,236 (which hasn't been adjusted properly for false positives; that is his grievance)

Using a false positive average of 2.3% of the total number of tests done, not just 2.3% of the positive tests (Government's 0.8% low and 4.3% high averaged to 2.3%)

2.3% of 19,583,360 = 450,417 false positives

A sum which is greater than the official 423,236, which outweighs any possible true positives.

As he said, even with a conservative 0.8% (156,666 false positives), this still brings the true positive figure down to 266,570, well over a third less than the official figure.
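For reference, the video's arithmetic as quoted above works out like this. This reproduces the disputed method, not an endorsement of it; the figures quoted in the post are truncated to whole tests, hence one-or-two-test differences:

```python
# Reproducing the video's method: apply the false positive percentage
# to ALL tests done, not just to the positive tests.
total_tests = 19_583_360
official_positives = 423_236

fp_averaged = 0.023 * total_tests        # about 450,417: exceeds all positives
fp_conservative = 0.008 * total_tests    # about 156,667
adjusted_positives = official_positives - fp_conservative   # about 266,569
```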
 
To my mind, if you were randomly testing the population then this false positive percentage might be correct. But we are mostly testing people who are symptomatic, or have had contact with people who are symptomatic, and who are therefore much more likely to test positive than a random sample, so those false positive percentages can no longer validly be applied.

As you clearly demonstrate, you end up with a net negative number of people testing positive for the virus, yet we know from hospital admissions that cases are rising, so clearly there is a problem with this guy's maths/logic. He is doing the country a disservice in spreading his views, because it encourages people to ignore the government guidance.
That is my take on it anyway.
 
Hi Rebrascora,

Thanks for joining this discussion, it's really nice to hear everybody's take on it 🙂. It's interesting that the number of positives being found is similar to the number of false positives one would expect for the total tests done. What if nobody had it? We'd still get all these covid cases. He was saying we need to base the lockdowns on deaths and hospital admissions, which are very low.

Regarding his views, I don't think he necessarily sees it as "his views"; he sees it more as presenting the facts, particularly about the false positive rates.
 
Yes, but the "facts" he is presenting don't make sense. You can't have a negative number of positives. You also have to consider that there will be false negatives too, which may well offset any false positives. I think his mistake is in applying the false positive rate from random testing to the actual high-risk sample being tested.

We know that Covid cases are rising: that is a fact, because it is shown in hospital admissions, and those admissions are just the critical cases, i.e. the tip of an iceberg that represents the much larger number of infected people still within the general populace (I assume you can accept that not all people who have the virus are hospitalised) but hopefully mostly self-isolating.
 