This page explains the difference between explanatory and pragmatic research, a distinction that is relevant to many of our investigations.
Published studies and reviews usually address questions about internal validity (“Was the observed effect caused by the intervention, or by other factors?”), but they sometimes neglect questions about external validity (“Will the intervention work in other populations and contexts?”) and patient relevance (“Does this intervention work in ways that matter to the ‘patients,’ i.e. the targets of the intervention?”).
Research with high external validity and high patient relevance (ideally, in addition to high internal validity) is sometimes called “pragmatic” research. Pragmatic research is contrasted with “explanatory” research, which asks whether an intervention works under artificial or tightly controlled conditions (often, for the purpose of understanding how an intervention works).[1]
Explanatory and pragmatic studies both have their uses. Explanatory trials can give us insight into how an intervention works, and they can be a relatively inexpensive way to figure out which interventions are promising enough that we should test their broad effectiveness in pragmatic trials, which are more expensive. But when the distinction between explanatory and pragmatic research is neglected, interventions can be promoted as having been proven effective in randomized controlled trials, even though these trials were not pragmatic in design. In some cases, this can lead to the wide adoption of an expensive intervention that is thought to be effective (on the basis of explanatory RCTs), but which is later shown to be ineffective when it is finally tested in a well-conducted pragmatic RCT.[2]
To illustrate the difference between explanatory and pragmatic research, here’s a (partial) description of a well-designed explanatory trial on low-carb vs. low-fat diets:
We selectively restricted dietary carbohydrate versus fat for 6 days following a 5-day baseline diet in 19 adults with obesity confined to a metabolic ward where they exercised daily. Subjects received both isocaloric diets in random order during each of two inpatient stays…[3]
In real life, people are not “confined to a metabolic ward” where they have no access to food and drink beyond what researchers give them. They do not usually exercise “on a treadmill for 1 hr each day at a clamped pace and incline,” as the subjects in this study did. They do not necessarily consume the same number of calories on either of two very different diets (that’s what “isocaloric” means). This trial might teach us something about the biological mechanisms of metabolic regulation, but its setup is so removed from normal life that it probably can’t tell us much about which diets best help people lose weight under the conditions of normal life, in which they have continuous access to any food and drink they like, they exercise as much as they like in whatever way they like, and their calorie intake may vary greatly depending on which foods and drinks they consume. (In other words, this study has low external validity.)
Moreover, most of the study’s outcome measures — e.g. “24-hr respiration quotients” and “net macronutrient oxidation rates” — aren’t things that patients care about. Even the body weight outcome, which patients do care about, is of limited pragmatic importance, because it was only measured shortly after the interventions were delivered. Patients care some about immediate weight loss, but they care even more about sustainable, long-term weight loss. (In other words, this study has limited patient relevance.)
And that’s fine. This trial is clearly designed as an explanatory trial. But it would be a mistake to conclude much of anything about what diet advice works in the real world from a study like this.
The diet trials that were included in the systematic reviews I summarized in my carbs-obesity report were quite different in design. Nearly all of these trials were conducted on free-living adults, with relatively little monitoring or interference by researchers, and with participants who presumably often failed to strictly adhere to the diet guidelines they had been randomly assigned to follow (as would also be true of diet advice given in the real world). Many of them measured body weight and other patient-relevant outcomes weeks or months after the intervention. In other words, they were relatively pragmatic trials.
Of course, there isn’t a sharp and absolute distinction between explanatory and pragmatic research. Instead, the difference is better understood as a multidimensional continuum. For example, the PRECIS-2 tool allows trial designers or review authors to rate a trial design from 1 to 5 — from very explanatory to very pragmatic — along 9 dimensions:
- Eligibility: To what extent are the participants in the trial similar to those who would receive this intervention if it was part of usual care?
- Recruitment: How much extra effort is made to recruit participants over and above what would be used in the usual care setting to engage with patients?
- Setting: How different are the settings of the trial from the usual care setting?
- Organization: How different are the resources, provider expertise, and the organization of care delivery in the intervention arm of the trial from those available in usual care?
- Flexibility (delivery): How different is the flexibility in how the intervention is delivered from the flexibility anticipated in usual care?
- Flexibility (adherence): How different is the flexibility in how participants are monitored and encouraged to adhere to the intervention from the flexibility anticipated in usual care?
- Follow-up: How different is the intensity of measurement and follow-up of participants in the trial from the typical follow-up in usual care?
- Primary outcome: To what extent is the trial’s primary outcome directly relevant to the participants? (E.g. rate of falls matters to the elderly more than their bone density does.)
- Primary analysis: To what extent are all the data included in the analysis of the primary outcome? (E.g. a particularly pragmatic data analysis would make no special allowance for non-adherence.)
Though designed for evaluating biomedical trials, these questions illustrate several dimensions along which studies in many fields can be more or less pragmatic in their design.
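To make the scoring scheme concrete, here’s a minimal Python sketch of how one might record a trial’s nine PRECIS-2 ratings. The domain names and the 1-to-5 scale come from the list above; the single summary number and the example scores for a metabolic-ward-style trial are illustrative assumptions only, since PRECIS-2 itself presents the nine ratings on a “wheel” diagram rather than collapsing them into one number.

```python
# A minimal sketch of recording PRECIS-2 ratings (assumptions noted below).

# The nine PRECIS-2 domains listed above, each scored from
# 1 (very explanatory) to 5 (very pragmatic).
PRECIS2_DOMAINS = [
    "eligibility",
    "recruitment",
    "setting",
    "organization",
    "flexibility_delivery",
    "flexibility_adherence",
    "follow_up",
    "primary_outcome",
    "primary_analysis",
]


def summarize_precis2(scores: dict) -> str:
    """Check a full set of domain scores and report a simple summary.

    NOTE: PRECIS-2 itself presents the nine ratings on a "wheel" diagram;
    the single mean used here is only an illustrative simplification.
    """
    missing = [d for d in PRECIS2_DOMAINS if d not in scores]
    if missing:
        raise ValueError(f"missing domain scores: {missing}")
    for domain, score in scores.items():
        if score not in range(1, 6):
            raise ValueError(f"{domain}: score {score} is outside 1-5")
    mean = sum(scores[d] for d in PRECIS2_DOMAINS) / len(PRECIS2_DOMAINS)
    most_explanatory = min(PRECIS2_DOMAINS, key=lambda d: scores[d])
    return (f"mean pragmatism {mean:.1f}/5; "
            f"most explanatory domain: {most_explanatory} "
            f"({scores[most_explanatory]})")


# Hypothetical ratings for a tightly controlled metabolic-ward trial like
# the one described above (the numbers are made up for illustration).
metabolic_ward_trial = {
    "eligibility": 2,
    "recruitment": 2,
    "setting": 1,
    "organization": 1,
    "flexibility_delivery": 1,
    "flexibility_adherence": 1,
    "follow_up": 1,
    "primary_outcome": 2,
    "primary_analysis": 3,
}
print(summarize_precis2(metabolic_ward_trial))
```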
Sources
| DOCUMENT | SOURCE |
|---|---|
| Gaglio et al. (2014) | Source (archive) |
| Guyenet (08/13/2015) | Source (archive) |
| Guyenet (08/18/2015) | Source (archive) |
| Hall et al. (2015) | Source (archive) |
| Loudon et al. (2015) | Source (archive) |
| Prasad & Cifu (2015) | Source (archive) |
| PRECIS-2 | Source |
| Royal College of Emergency Medicine (2015) | Source (archive) |
| Schwartz & Lellouch (1967) | Source (archive) |
Footnotes
1. More confusingly, explanatory and pragmatic studies are sometimes called “efficacy” and “effectiveness” studies, respectively. The article which introduced the explanatory-pragmatic terminology is Schwartz & Lellouch (1967). More recent overviews of the concept include Loudon et al. (2015), Gaglio et al. (2014), and Royal College of Emergency Medicine (2015). This latter source defines pragmatic research the way I do: “Pragmatic research asks whether an intervention works under real-life conditions and whether it works in terms that matter to the patient… Explanatory research asks whether an intervention works under ideal or selected conditions. It is more concerned with how and why an intervention works.” The other two recent sources define pragmatic research mostly in terms of external validity, but it is clear from the questions they ask that they consider patient relevance to be an important part of research pragmaticness. For example, question 8 of the PRECIS-2 tool described in Loudon et al. (2015) is explicitly about patient relevance.
2. Prasad & Cifu (2015) provide some examples.
3. Hall et al. (2015). For a relatively accessible explanation of the study, see Stephan Guyenet’s two posts (1, 2) on the paper.