I've spoken to a number of pharma companies on this very issue, and while most of what you suggest is probably true, the ISO guideline of +/-20% from a lab test only has to hold 95% of the time: 5 tests out of a hundred could fall outside that. Apparently the manufacturing process cannot guarantee that exactly the same amount of enzyme/reactive agent is sprayed onto, absorbed by, and remains active on each strip, even within the same pot. I don't know why this is, given the level of manufacturing precision available in other areas, but I have been told it more than once by different companies.
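(If anyone wants to see what that criterion means in actual numbers, here's a quick Python sketch - my own reading of the 2003 edition's criteria, i.e. +/-20% at or above 4.2 mmol/L and +/-0.83 mmol/L below, so treat it as illustrative rather than gospel.)

    # Sketch of the ISO 15197:2003 per-sample accuracy check (illustrative).
    def within_iso_2003(meter, reference):
        """True if a meter reading is 'accurate enough' vs a lab reference."""
        if reference >= 4.2:
            return abs(meter - reference) <= 0.20 * reference
        return abs(meter - reference) <= 0.83

    # (meter reading, lab reference) pairs, mmol/L
    samples = [(5.0, 6.0), (7.0, 6.0), (7.3, 6.0)]
    passes = sum(within_iso_2003(m, r) for m, r in samples)
    print(f"{passes}/{len(samples)} within spec")  # 2/3: 7.3 vs 6.0 is >20% out
    # The standard only demands that 95% of samples pass, hence the odd outlier.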
For my own part, I believe the +/-1.7mmol/L 'window' often used for a basal test is at least in part designed to allow for strip-to-strip variability. Certainly anything within, say, 2mmol/L tends to be considered 'pretty much the same'.
You only have to test your own fingers a few moments apart to see some sort of variation. And of course I suppose blood itself will not be entirely homogeneous, and the blood in your feet might have a slightly different makeup to that in your fingertips.
I'd love more accuracy, but at least we can cope with what we've got for the most part - and the PIL recommendations always caveat BG readings with "if you don't feel like the number you got, then retest" statements!
Mike. My earlier post was in reply to DeusXM's bald comment that "meters have a 20% error margin". As you know, the issue of "errors" in scientific measurement is not quite as straightforward as it might first appear, and really conflates a number of different sources of uncertainty in the results that we record. Volumes have been written about this and we could easily go completely off-topic very quickly.
Leaving aside for a moment the issue of hypos, where we are looking for absolute accuracy (i.e. is my blood sugar above or below 4, because I have to decide whether to take extra carbs?), I would argue that we practicing diabetics should be most interested in the reproducibility of the results obtained from our meters. Thus, if the "true" blood sugar of my finger-prick sample is 6.0 and my meter records a value of 5.0 (or 7.0 for that matter, i.e. within a 20% margin of error), I don't really care, so long as it tells me 5.0 (or 7.0) or thereabouts every time I test when the "true" value is 6.0. Of course it's the 'thereabouts' that really matters, and that was what I was trying to address in my earlier post, and what was at the heart of the OP and DeusXM's comment.
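A toy simulation (entirely my own made-up numbers, nothing to do with any real meter) shows why a consistent bias is liveable with:

    import random

    TRUE_BG = 6.0   # "true" finger-prick value, mmol/L
    BIAS = -1.0     # hypothetical systematic offset: this meter reads low
    CV = 0.018      # repeatability, the figure Roche quote for the Aviva

    random.seed(1)
    readings = [(TRUE_BG + BIAS) * random.gauss(1.0, CV) for _ in range(10)]
    print([round(r, 2) for r in readings])
    # Every reading is ~17% below the true 6.0, but they all cluster tightly
    # around 5.0 - so decisions based on *relative* change still work.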
The Accu-Chek Aviva stuffer states:
"Reproducibility (day-to-day imprecision): The mean imprecision is <1.9%. In a typical series of tests, a coefficient of variation of 1.8% was obtained."
Like you, I have tried the experiment of repeat testing within a few minutes and found small variations, but not usually more than plus or minus 0.1 or 0.2 mmol/L, which is consistent with the reproducibility figure quoted by Roche.
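The arithmetic stacks up, too: CV is just the standard deviation divided by the mean, so 1.8% at a reading of around 6 mmol/L is a standard deviation of roughly 0.11 mmol/L. Something like this (illustrative numbers, not real test data):

    from statistics import mean, stdev

    # Repeat tests a few minutes apart on the same finger (made-up figures)
    readings = [6.0, 6.1, 5.9, 6.0, 6.2, 5.9]
    cv = stdev(readings) / mean(readings)
    print(f"CV = {cv:.1%}")  # ~1.9% - same ballpark as Roche's quoted 1.8%
    # At ~6 mmol/L that's a standard deviation of ~0.11, so repeats landing
    # within +/-0.2 of each other is exactly what you'd expect.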
I haven't had the benefit of looking at ISO 15197, which I understand is the relevant standard, because the robbing b's at the ISO want to charge me 134 Swiss Francs to download a copy 😡, but is the 95% figure that you mention referring to the number of outliers that would be tolerated? In the Accu-Chek stuffer, Roche refer to "System accuracy according to ISO 15197: 198 out of 200 samples (99.0%) are within the minimum acceptable performance criteria." (95% of 200 would be 190, so 198 comfortably exceeds the minimum.) I've always thought that the possibility of an outlier was behind the caveat to ignore a meter result if you really feel it's bonkers, although I would re-test or try to find a reason why it doesn't make sense (like I'd just dipped my finger in a bowl of sugar before testing).
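Out of curiosity I ran the numbers: if every strip really did pass with probability 0.95, seeing 198 or more passes out of 200 would be quite unlikely, so that Aviva batch did rather better than the bare minimum. (Back-of-envelope binomial, my own working, not anything from the standard:)

    from math import comb

    n, p = 200, 0.95  # 200 samples, 95% per-sample pass rate (the ISO minimum)
    # Probability of 198+ passes if each strip passes independently with p=0.95
    prob = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(198, n + 1))
    print(f"P(>=198/200) = {prob:.4f}")  # ~0.0023 - well above bare-minimum form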
This is a very interesting topic. Perhaps you could send me a PM if you want to discuss it further, so that we don't bore everyone else.