One of the biggest obstacles to understanding the effects of yoga on our health, and on our minds and bodies more broadly, is that a lot of yoga research is, frankly, a bit shit. You can read more about why I think so here.
So it was great to see the publication of this guidance last week on how yoga studies should be designed and reported. It tackles some really important issues, including recommending:
- A clear definition of what yoga intervention was studied – this is one of the biggest problems in yoga research, as ‘yoga’ is not a clearly defined concept: one study might test the effects of breathing and meditation, while another might test a fairly dynamic and athletic practice. This makes it impossible to compare studies or to untangle what is actually causing the effect.
- A clear description of how the yoga intervention was administered – online vs in-person, its duration and frequency, and whether any home practice was included
- A clear description of who the yoga intervention was tested on – age group, sex, etc.
- Using the word ‘yoga’ in the title or abstract of the paper so that it can be easily found
Will this help? Maybe. But it doesn’t feel big enough, and it doesn’t go far enough – it doesn’t even address simple things like the lack of a control group. It’s all a little lukewarm.
In fact, what I had hoped for when I saw the guidance publicised by the Minded Institute on social media was a radical rethinking of how the research is done, like the megastudy example from behavioural science last year.
What the hell is a megastudy?
Basically, researchers who are interested in a particular problem come together, and each brings to the table an intervention they want to test to address it. Each team then tests its intervention on a slice of one huge shared sample. All the data are collated and analysed to find out which intervention was most effective.
So, for instance, in this megastudy (summary here and here), a group of independent researchers wanted to find out which intervention is most effective at getting people to exercise. One team brought monetary incentives to the table, another brought text reminders, and so on. Each research team then tested their intervention using participants from the same US gym chain, to keep the sample as consistent as possible – there were over 60,000 participants in total. In case you are interested, the top-performing intervention was a cash reward for returning to the gym after a missed workout, although it’s worth noting that most of the interventions significantly increased gym attendance.
Likewise, in another megastudy, a group of independent researchers tested interventions to find out which ones are most effective at getting people to take the flu vaccine, this time recruiting through Walmart with an enormous sample (over 600,000 participants). In this case, they found that the most effective way to get people to have the flu vaccine was to send them multiple text messages, days apart, saying that their vaccine was ‘waiting for them’.
Could we run a yoga research megastudy?
For instance, say you wanted to measure what might help people sleep better. You could contact Virgin Active or David Lloyd (or insert other gym chain) to collaborate. You could invite people to take part, take a baseline measurement (ideally something non-invasive like a questionnaire), and then ask them to take part in only one activity for a month. Some people will only do vinyasa flow, others will only do pilates, others will only use the gym. Maybe you run a special and introduce yoga nidra classes and invite some people to do only that. Then at the end of the month, you use the same standardised questionnaire to measure sleep. Would this work?
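Just to make the idea concrete, here is a minimal sketch of what the analysis step might look like. Everything in it is hypothetical: the activity names, the group sizes, and the simulated questionnaire scores are all made up, and a real megastudy would need proper statistics (significance tests, adjustment for baseline differences), not just mean changes.

```python
import random
import statistics

def analyse_sleep_megastudy(records):
    """Group (activity, baseline, end-of-month) sleep-questionnaire
    scores by activity and return the mean change (post - pre) per arm."""
    by_activity = {}
    for activity, pre, post in records:
        by_activity.setdefault(activity, []).append(post - pre)
    return {activity: statistics.mean(changes)
            for activity, changes in by_activity.items()}

# Simulated data only: lower scores mean better sleep (as on the PSQI),
# so a negative mean change would suggest improvement.
random.seed(42)
activities = ["vinyasa flow", "pilates", "gym only", "yoga nidra"]
records = []
for activity in activities:
    for _ in range(100):  # 100 made-up participants per arm
        pre = random.randint(5, 15)
        post = pre - random.randint(0, 4)
        records.append((activity, pre, post))

results = analyse_sleep_megastudy(records)
for activity, mean_change in sorted(results.items(), key=lambda kv: kv[1]):
    print(f"{activity:>14}: mean change {mean_change:+.2f}")
```

The point of the sketch is only that once every arm uses the same standardised questionnaire, comparing arms becomes a one-liner – which is exactly the consistency that current yoga studies, with their incompatible definitions and outcome measures, make impossible.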
Maybe it wouldn’t; maybe the megastudy works better in behavioural science because it measures different things or is easier to control. But what I love about the megastudy is that it is imaginative and tries to tackle the problems with study design.
Maybe all we need is a different type of megastudy, something like a multi-centre clinical trial, where, say, 10 research groups run the exact same protocol in similar populations and collate their data.