Mike Lean & Roy Taylor on improving T2D classification

Status
Not open for further replies.
Yup, that's how virtually all medical trials are conducted - the same people run both control group and intervention group.
It's one of the reasons for blinding and for cross-over trial design. Neither of these were possible for the DiRECT trial.

Your other comments re "control" don't seem very relevant. The prime effect being studied was weight loss; the total meal replacement shakes were a way of achieving weight loss, but I don't think they were crucial for the study.
 
Which is not T2.

I think that sort of reinforces the importance of the Taylor/Lean suggestion that perhaps revisiting the classifications of diabetes with more nuance and separation is timely.

I saw a conference presentation that suggested there may be dozens of subtypes of “T2”, which perhaps explains some of the confusion we get on the forum. When people are discussing T2, they are perhaps sometimes talking about very different things (eg the long-running carbohydrate intolerance that some members have experienced).

Unfortunately T2 seems to be a broad container for “some other sort of diabetes” which many people get assigned, until they have checks to reveal they have something more niche and specific.

The DiRECT trial seems to have identified a specific subset of those diagnosed with T2 who can potentially put their condition into remission through weight loss.

Though personally I am extremely uncomfortable with the phrase ‘over-fat induced’ which feels all sorts of stigmatising wrong!
 
It's more a hypothetical statement than a claim:

"but they do suggest that other factors may conspire to damage the pancreas"

I have no idea why there is so much objection to this statement. It's fairly well known that the pancreas has been damaged in some way in T2 diabetics, from scans that show a decreased mass, tests that show a blunted insulin response, and markers that show cell apoptosis. As some people don't see improvements in response to weight loss, there must be other factors at play. I know someone who has had T2 since their 30s and has never been overweight. No antibodies or any markers for T1. All they know is that insulin production is low and something has caused that.
I think some parts of the low-carb world have a very insulin-resistance focused view of things, together with a belief that cutting carbs cures all T2D-related metabolic ills, more or less. A pancreas with duff beta cells doesn't fit into that story - cutting carbs isn't going to fix beta cells and they obviously aren't associated with insulin resistance - so for them it either doesn't happen, or if it does, it's not T2D.
 
A pancreas with duff beta cells doesn't fit into that story - cutting carbs isn't going to fix beta cells and they obviously aren't associated with insulin resistance - so for them it either doesn't happen, or if it does, it's not T2D.
Indeed, problems with insulin under-production are not T2; that's the whole problem with current classifications.
 
Indeed, problems with insulin under-production are not T2; that's the whole problem with current classifications.
People can obviously define terms however they want, but on the other hand, every major medical organisation defines T2D as a combination of insulin resistance and insulin insufficiency. So it doesn't seem very helpful to adopt a non-standard definition. Perhaps it would be more useful to define a new term to cover "insulin resistance but no insulin insufficiency".
 
'It's one of the reasons for blinding and for cross-over trial design. Neither of these were possible for the DiRECT trial.'

And yet the 'impossible' is exactly what Taylor is claiming when he said he 'randomised 302 participants', and exactly what you are trying to defend. In his Lancet article in 2018 Taylor said he randomised 50 or 60 medical practices; now he's claiming he randomised 302 participants. On which occasion, 2018 or now, was he speaking the truth?

'Your other comments re "control" don't seem very relevant.'
Whaaaaaaat, control isn't relevant in a controlled study?! What on earth are you talking about? It's the raison d'être of such a study. If you're not controlling the diet in a diet study you cannot draw any conclusions from the result. And the shakes and bars weren't 'crucial' to the Newcastle Diet? Give the glad tidings to Taylor's devoted followers.
 
every major medical organisation defines T2D as a combination of insulin resistance and insulin insufficiency
Unless you think that T2 is a condition of hyperinsulinaemia, in which case "insulin insufficiency" is not the issue.

A simple test would be a c-peptide at or before final diagnosis.
 
I think that sort of reinforces the importance of the Taylor/Lean suggestion that perhaps revisiting the classifications of diabetes with more nuance and separation is timely.
Timely? It's ancient history. When I was dxed in 1992 it was commonplace to hear that T2 was probably half a dozen conditions sharing the main symptoms.
We had the Amylin controversy in the mid-to-late 1990s, where it was contended that overproduction of amylin was involved in T2 and that a proportion of T2s would need to be re-dxed as 'Amylinotics'. That idea died down; if they can't afford to look after newbie T2s properly, they certainly can't afford to test every newbie for amylin levels.
Then we had the news in the early 2000s that up to 20%-30% of T2s had insulin lacking the tethers to attach itself to the insulin receptor port on the cell wall. The insulin could put the glucose in the glucose receptor port (a clathrin-coated pit) but couldn't tether itself to the insulin receptor port to signal a delivery of glucose. So the GLUTs in the nucleus slumber on without swarming up to get the glucose. And of course they can't afford to examine every newbie's insulin under a scanning electron microscope, so tough luck.
Then along comes Taylor with his one-size-fits-all theory that fat in the pancreas and liver causes T2.
With this latest piece from him he seems to be trying to back out gracefully from his earlier dogmatic assertions (without much actual proof) that he alone knew the answer.
 
Nobody has suggested they are 'inherently incompetent'. They are useful in certain contexts. You're missing the point - the results have to be analysed in a certain way, with the clusters as the unit of comparison, not the individuals within them.
This is an area I'm interested in knowing more about, so I dug into it a bit.

As I expected, Spiegelhalter is certainly talking about incorrectly handling within-cluster correlation. There is no suggestion that it is somehow inherently invalid to analyse a cluster RCT at the individual level. The DiRECT trial was certainly not "incompetent". I assume the assertion arises from some Internet garbling of reality.

The main issue is that with a cluster RCT you need to have a bigger trial than with an individual randomisation, to overcome the effects of within-cluster correlation. See eg https://pubmed.ncbi.nlm.nih.gov/15209195/

Background: Primary care research often involves clustered samples in which subjects are randomized at a group level but analyzed at an individual level. Analyses that do not take this clustering into account may report significance where none exists. This article explores the causes, consequences, and implications of cluster data.

Methods: Using a case study with accompanying equations, we show that clustered samples are not as statistically efficient as simple random samples.

Results: Similarity among subjects within preexisting groups or clusters reduces the variability of responses in a clustered sample, which erodes the power to detect true differences between study arms. This similarity is expressed by the intracluster correlation coefficient, or ρ (rho), which compares the within-group variance with the between-group variance. Rho is used in equations along with the cluster size and the number of clusters to calculate the effective sample size (ESS) in a clustered design. The ESS should be used to calculate power in the design phase of a clustered study. Appropriate accounting for similarities among subjects in a cluster almost always results in a net loss of power, requiring increased total subject recruitment. Increasing the number of clusters enhances power more efficiently than does increasing the number of subjects within a cluster.
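The effective-sample-size arithmetic in that abstract is easy to sketch. Here's a minimal illustration (the function name and the example numbers are mine, purely illustrative, not taken from the paper or from DiRECT):

```python
def effective_sample_size(n_clusters, cluster_size, icc):
    """Effective sample size of a cluster-randomised design.

    The design effect DEFF = 1 + (m - 1) * rho inflates the variance,
    so n_total recruited individuals only 'count' as n_total / DEFF.
    """
    n_total = n_clusters * cluster_size
    deff = 1 + (cluster_size - 1) * icc
    return n_total / deff

# 49 clusters of 6 people each, with a modest ICC of 0.05:
print(round(effective_sample_size(49, 6, 0.05), 1))  # 235.2 'effective' subjects

# Doubling the number of clusters beats doubling the cluster size,
# as the abstract says:
print(effective_sample_size(98, 6, 0.05) > effective_sample_size(49, 12, 0.05))  # True
```

Note how even a small ICC shrinks 294 recruits to roughly 235 effective subjects, which is exactly why a cluster RCT has to be bigger than an individually randomised one.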


The source of the Spiegelhalter garbling may have been a comment in his book, where he lists as a common mistake analysing cluster RCTs as if they were individually randomised. In other words, without taking into account the intracluster correlation.

Here's a Spiegelhalter paper https://pubmed.ncbi.nlm.nih.gov/16279132/ looking at ways of better estimating an appropriate intracluster correlation coefficient, for this kind of study, using Bayesian methods. It works through a real world cholesterol treatment example, randomised at practice level and looking for individual effects.
 
This is an area I'm interested in knowing more about, so I dug into it a bit. […]

I emailed him for his comments on this, so hopefully we can put this to bed.
His response is below.

"Cluster-randomised trials need to allow for the design when making comparisons between groups, and this appears to have been appropriately done in this study, by including a 'practice effect' in the model."
 
I emailed him for his comments on this, so hopefully we can put this to bed.
His response is below.

"Cluster-randomised trials need to allow for the design when making comparisons between groups, and this appears to have been appropriately done in this study, by including a 'practice effect' in the model."
Nice. "Practice effect" = "intracluster correlation coefficient", same thing.
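To see why the practice effect has to be in the model at all, here's a toy simulation (entirely my own construction, not from any of the papers): with no real treatment effect, a shared per-cluster effect makes the naive individual-level standard error far too small.

```python
import random
import statistics

random.seed(42)

def null_trial(n_clusters=20, cluster_size=10, practice_sd=1.0, noise_sd=1.0):
    """One simulated trial with NO real treatment effect, where everyone
    in a cluster shares a random 'practice effect'."""
    arms = {0: [], 1: []}
    for c in range(n_clusters):
        arm = c % 2                      # stand-in for cluster randomisation
        practice = random.gauss(0, practice_sd)
        for _ in range(cluster_size):
            arms[arm].append(practice + random.gauss(0, noise_sd))
    return statistics.mean(arms[1]) - statistics.mean(arms[0])

# How much the arm difference really varies under the null:
observed_sd = statistics.stdev([null_trial() for _ in range(500)])

# The naive SE pretends all 200 subjects are independent:
# per-subject variance = practice_sd**2 + noise_sd**2 = 2.0, so the
# SE of a difference of two 100-subject means = sqrt(2 * 2.0 / 100) = 0.2
naive_se = (2 * 2.0 / 100) ** 0.5

print(observed_sd > naive_se)  # True: ignoring clustering understates the SE
```

With these made-up numbers the naive calculation understates the real variability by more than half, which is precisely the "common mistake" the book flags; modelling the practice effect puts that variability back in.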
 
This is an area I'm interested in knowing more about, so I dug into it a bit. […]
Eddie, lad, all that stuff is about cluster trials in general; it's not about Taylor's own trial. I've got Spiegelhalter's book on Kindle, which won't let me cut and paste the relevant passages. So I will have to copy them out and then write them up, just to disabuse you of the insulting notion that it's 'internet garbling'.
 
Eddie, lad, all that stuff is about cluster trials in general; it's not about Taylor's own trial. I've got Spiegelhalter's book on Kindle, which won't let me cut and paste the relevant passages. So I will have to copy them out and then write them up, just to disabuse you of the insulting notion that it's 'internet garbling'.
Nope.
You're dead in the water.
He has made his comment.
 
Nope.
You're dead in the water.
He has made his comment.
"This appears to have been appropriately done in this study" - 'appears' is doing some heavy lifting there, old friend.
 
"This appears to have been appropriately done in this study" - 'appears' is doing some heavy lifting there, old friend.

You can email him your opinion of his verdict then.
I'm sure he'd be interested in your critique of him.
 
"This appears to have been appropriately done in this study" - 'appears' is doing some heavy lifting there, old friend.
'Appears' is standard medical/healthcare language.
 
Please stop trying to make everything about COVID 🙄
To me, it appears "appears" is the problem though. It either has been done or not. If one is not sure, or doesn't have the experience or training to know whether something has been done, then say "we are not sure" or "we don't have the expertise to say", rather than being non-committal and saying it appears to have been done or it appears to work.

After the past 3 years, I'd like to see more honesty and candor in what we are told and recommended.
 