> All they know is that insulin production is low and something has caused that.
Which is not T2.
> Yup, that's how virtually all medical trials are conducted - the same people run both control group and intervention group.
It's one of the reasons for blinding and for cross-over trial design. Neither of these were possible for the DiRECT trial.
> It's more a hypothetical statement than a claim: "but they do suggest that other factors may conspire to damage the pancreas"
I think some parts of the low-carb world have a very insulin-resistance-focused view of things, together with a belief that cutting carbs cures all T2D-related metabolic ills, more or less. A pancreas with duff beta cells doesn't fit into that story - cutting carbs isn't going to fix beta cells, and they obviously aren't associated with insulin resistance - so for them it either doesn't happen, or if it does, it's not T2D.
I have no idea why there is so much objection to this statement. It's fairly well known that the pancreas has been damaged in some way in T2 diabetics: scans show a decreased mass, tests show a blunted insulin response, and markers show cell apoptosis. And since some people lose weight and don't see improvements, there are other factors at play - I know someone who has had T2 since their 30s and has never been overweight, with no antibodies or any markers for T1. All they know is that insulin production is low and something has caused that.
> A pancreas with duff beta cells doesn't fit into that story - cutting carbs isn't going to fix beta cells and they obviously aren't associated with insulin resistance - so for them it either doesn't happen, or if it does, it's not T2D.
Indeed - problems with insulin under-production are not T2; that's the whole problem with current classifications.
> Indeed - problems with insulin under-production are not T2; that's the whole problem with current classifications.
People can obviously define terms however they want, but on the other hand, every major medical organisation defines T2D as a combination of insulin resistance and insulin insufficiency. So it doesn't seem very sensible to adopt a non-standard definition. Perhaps it would be more useful to define a new term to cover "insulin resistance but no insulin insufficiency".
> It's one of the reasons for blinding and for cross-over trial design. Neither of these were possible for the DiRECT trial.
Your other comments re "control" don't seem very relevant. The prime effect being studied was weight loss; the total meal replacement shakes were a way of achieving weight loss, but I don't think they were crucial for the study.
> every major medical organisation defines T2D as a combination of insulin resistance and insulin insufficiency
Unless you think that T2 is a condition of hyperinsulinaemia, in which case "insulin insufficiency" is not the issue.
> I think that sort of reinforces the importance of the Taylor/Lean suggestion that perhaps revisiting the classifications of diabetes with more nuance and separation is timely.
Timely? It's ancient history. When I was dxed in 1992 it was commonplace to hear that T2 was probably half a dozen conditions sharing the main symptoms.
> Nobody has suggested they are 'inherently incompetent'. They are useful in certain contexts. You're missing the point - the results have to be analysed in a certain way, with the clusters as the unit of comparison, not the individuals within them.
This is an area I'm interested in knowing more about, so I dug into it a bit.
As I expected, Spiegelhalter is certainly talking about incorrectly handling within-cluster correlation. There is no suggestion that it is somehow inherently invalid to analyse a cluster RCT at the individual level. The DiRECT trial was certainly not "incompetent". I assume the assertion arises from some Internet garbling of reality.
The main issue is that with a cluster RCT you need to have a bigger trial than with an individual randomisation, to overcome the effects of within-cluster correlation. See eg https://pubmed.ncbi.nlm.nih.gov/15209195/
Background: Primary care research often involves clustered samples in which subjects are randomized at a group level but analyzed at an individual level. Analyses that do not take this clustering into account may report significance where none exists. This article explores the causes, consequences, and implications of cluster data.
Methods: Using a case study with accompanying equations, we show that clustered samples are not as statistically efficient as simple random samples.
Results: Similarity among subjects within preexisting groups or clusters reduces the variability of responses in a clustered sample, which erodes the power to detect true differences between study arms. This similarity is expressed by the intracluster correlation coefficient, ρ (rho), which compares the within-group variance with the between-group variance. Rho is used in equations along with the cluster size and the number of clusters to calculate the effective sample size (ESS) in a clustered design. The ESS should be used to calculate power in the design phase of a clustered study. Appropriate accounting for similarities among subjects in a cluster almost always results in a net loss of power, requiring increased total subject recruitment. Increasing the number of clusters enhances power more efficiently than does increasing the number of subjects within a cluster.
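The ESS arithmetic the abstract describes is the standard design effect, DEFF = 1 + (m − 1)ρ, with ESS = N / DEFF. A minimal sketch with made-up illustrative numbers (nothing to do with DiRECT's actual parameters), which also shows the abstract's last point - adding clusters beats enlarging them:

```python
# Effective sample size (ESS) for a cluster-randomised design:
# design effect DEFF = 1 + (m - 1) * rho, then ESS = N / DEFF,
# where m is the cluster size and rho the intracluster correlation.

def effective_sample_size(n_clusters: int, cluster_size: int, rho: float) -> float:
    n_total = n_clusters * cluster_size
    deff = 1 + (cluster_size - 1) * rho  # design effect
    return n_total / deff

# 20 practices x 15 patients, ICC = 0.05:
# 300 recruited subjects behave like ~176 independent ones.
print(round(effective_sample_size(20, 15, 0.05)))   # 176

# Doubling the number of clusters recovers more power than doubling cluster size:
print(round(effective_sample_size(40, 15, 0.05)))   # 353
print(round(effective_sample_size(20, 30, 0.05)))   # 245
```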
The source of the Spiegelhalter garbling may have been a comment in his book, where he lists as a common mistake analysing cluster RCTs as if they were individually randomised. In other words, without taking into account the intracluster correlation.
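That mistake is easy to demonstrate with a simulation - my own illustrative sketch, not a model of DiRECT's analysis. Under the null (no treatment effect), a naive individual-level test applied to cluster-randomised data with a modest ICC rejects far more often than the nominal 5%:

```python
# Monte Carlo demo: analysing a cluster RCT as if individually randomised
# inflates the false-positive rate. Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def naive_test_rejects(n_clusters=10, cluster_size=50, icc=0.1):
    # Outcomes share a cluster-level component; total variance = 1, no true effect.
    cluster_means = rng.normal(0.0, np.sqrt(icc), n_clusters)
    y = cluster_means[:, None] + rng.normal(0.0, np.sqrt(1 - icc),
                                            (n_clusters, cluster_size))
    in_arm_a = rng.permutation(n_clusters) < n_clusters // 2  # randomise whole clusters
    a, b = y[in_arm_a].ravel(), y[~in_arm_a].ravel()
    # Naive individual-level z-test, ignoring the clustering
    se = np.sqrt(a.var(ddof=1) / a.size + b.var(ddof=1) / b.size)
    return abs(a.mean() - b.mean()) / se > 1.96

false_positive_rate = np.mean([naive_test_rejects() for _ in range(2000)])
print(f"false-positive rate at nominal 5%: {false_positive_rate:.0%}")
```

With these settings the naive test "finds" an effect in a large fraction of null trials, which is exactly the "significance where none exists" the abstract above warns about.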
Here's a Spiegelhalter paper https://pubmed.ncbi.nlm.nih.gov/16279132/ looking at ways of better estimating an appropriate intracluster correlation coefficient, for this kind of study, using Bayesian methods. It works through a real world cholesterol treatment example, randomised at practice level and looking for individual effects.
> I emailed him for his comments on this, so hopefully we can put this to bed. His response is below.
> "Cluster-randomised trials need to allow for the design when making comparisons between groups, and this appears to have been appropriately done in this study, by including a 'practice effect' in the model."
Nice. "Practice effect" = "intracluster correlation coefficient" - same thing.
> This is an area I'm interested in knowing more about, so I dug into it a bit. [...]
Eddie, lad, all that stuff is about cluster trials in general; it's not about Taylor's own trial. I've got Spiegelhalter's book on Kindle, which won't let me cut and paste the relevant passages, so I will have to copy them out and then write them up - just to disabuse you of the insulting notion that it's 'internet garbling'.
> Eddie, lad, all that stuff is about cluster trials in general; it's not about Taylor's own trial. [...]
Nope.
> Nope.
"this appears to have been appropriately done in this study" - 'appears' is doing some heavy lifting there, old friend.
You're dead in the water.
He has made his comment.
> "this appears to have been appropriately done in this study" - 'appears' is doing some heavy lifting there, old friend.
'appears' is standard medical/healthcare language.
> 'appears' is standard medical/healthcare language.
As in, masks "appear" to work in preventing transmission of covid19.
> As in, masks "appear" to work in preventing transmission of covid19.
Please stop trying to make everything about COVID 🙄
> Please stop trying to make everything about COVID 🙄
To me, it appears "appears" is the problem, though. It either has been done or not. If one is not sure, or doesn't have the experience or training to know whether something has been done, then say "we are not sure" or "we don't have the expertise to say", rather than being non-committal and saying it appears to have been done or it appears to work.