Hi
@Essex. I used to be a proper scientist, and one of the first things you should do when measuring something is find out how reproducible your measurement system is. So what I decided to do was ten tests in quick succession, to see what the average was and to get a standard deviation, the standard deviation being a measure of the spread of the results. Rather than pepper one finger, I spread the agony by doing one test on each finger and both thumbs.
What I found was that the readings varied from 4.2 to 5.6. The mean value was 4.9 and the standard deviation was 0.43.
What does this mean? It means that if the true value of your blood glucose were 4.9, you would expect 95% of your test readings to fall in the range 4.9 +/- 0.86, that is somewhere between 4 and 6. It means that quoting blood glucose readings from a hand-held monitor to 1 decimal place is really pushing it. It means that differences between readings have to be much greater than 2 before you can even begin to think they are statistically significant.
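For anyone who wants to play with the numbers, the arithmetic above can be sketched in a few lines of Python. The ten readings below are made up to illustrate the method and roughly match the figures I quoted (mean about 4.9, SD about 0.4); they are not my actual test results.

```python
import statistics

# Illustrative readings only, not my real data
readings = [4.2, 4.5, 4.6, 4.8, 4.9, 4.9, 5.0, 5.2, 5.3, 5.6]

mean = statistics.mean(readings)
sd = statistics.stdev(readings)  # sample standard deviation

# Roughly 95% of readings are expected within 2 standard deviations
low, high = mean - 2 * sd, mean + 2 * sd

print(f"mean = {mean:.1f}, sd = {sd:.2f}")
print(f"expect 95% of readings between {low:.1f} and {high:.1f}")
```

Run it with your own ten readings and you get your own meter-plus-finger repeatability figure.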
Where does this variability come from? Two sources.
The first is the "accuracy" of the testing machine itself. I don't know what this is; you would need to do a number of tests in quick succession on a test solution to find out, but I suspect it would be pretty good, far better than the +/- 10 or 15% required by the specification. You might get a bigger variation between different batches of test strips.
The second, and I suspect the more important, is sampling error. Blood is not homogeneous, and blood glucose levels will vary quite a lot as blood travels round the body. What you are doing is taking a tiny drop from a whole body full, so it would be quite amazing if the readings from each drop were identical. Always testing in the same place, at a finger end, is just about the only thing you can do to control this variable.
I'll end by suggesting that you should not get too hung up on small variations in blood glucose readings, and by small I mean up to 3 whole units, unless there is a clear and reproducible pattern. That is a non-technical way of saying you need enough readings to show statistical significance before worrying. Above all, I would not worry about the accuracy of the meter. In my view, the whole thing is limited by sampling error, and the saving grace is that the system is well capable of detecting the sorts of changes that should be of concern.
PS Usual caveat about T1s. You do far more testing and monitoring than the rest of us and have more experience in interpreting readings. As such, my remarks might be of more interest than practical significance.