Doctors, Placebos and Damned Statistics!

Earlier this week there was a big media hoo-haa over a paper on placebos published in PLoS One.

The BBC led with “’Most family doctors’ have given a patient a placebo drug” while the Daily Mail ran with “Nearly ALL doctors have given patients a placebo – either to keep them happy or reassure them”, and both mentioned the attention-grabbing result that 97% of doctors have knowingly prescribed a placebo. Both the BBC and the Daily Mail specifically gave antibiotics prescribed for viral infections as an example of a placebo.

The study itself was titled ‘Placebo Use in the United Kingdom: Results from a National Survey of Primary Care Practitioners’ and, as PLoS One is an Open Access journal, is free for anyone to read. The results, as reported by the mainstream press, certainly make interesting reading, but do these figures hold up to slightly closer scrutiny?

The survey gained responses from 783 general practitioners via a web-based questionnaire. This is said to be representative, with respondents randomly sampled from Doctors.net.uk registrations. According to the 2012 NHS Workforce Survey, there are 40,265 GPs in the UK, which makes the 783 GPs sampled around 2% of the available population, drawn from a pool covering just 71% of available GPs.
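
As a rough sanity check (mine, not the paper’s), the sketch below works out that sampling fraction, plus the statistical margin you would attach to the headline 97% figure if the respondents really were a simple random sample of UK GPs – which a self-selecting web panel is not:

```python
# Back-of-envelope check on the sampling figures quoted above.
# These numbers come from the post, not from the paper's own analysis.
from math import sqrt

n_respondents = 783    # GPs who completed the web survey
n_gps_uk = 40_265      # GPs in the UK (2012 NHS Workforce Survey)

sampling_fraction = n_respondents / n_gps_uk
print(f"Sampling fraction: {sampling_fraction:.1%}")  # ~1.9%, i.e. 'around 2%'

# Rough 95% margin of error for the headline 97% proportion, assuming
# simple random sampling (which a self-selecting web panel is not).
p = 0.97
margin = 1.96 * sqrt(p * (1 - p) / n_respondents)
print(f"97% ± {margin:.1%}")  # roughly ± 1.2 percentage points
```

The real uncertainty is, of course, dominated by who chooses to register with Doctors.net.uk and who chooses to answer, not by that arithmetic.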

The really interesting detail in this paper, though, is the classification of placebo.

Here, placebo is divided into ‘pure’ and ‘impure’. A pure placebo, according to the paper, is a sugar pill or saline injection. There is a rather impressive list of what could pass as an impure placebo:

  • Positive suggestions
  • Nutritional supplements for conditions unlikely to benefit from this therapy (such as vitamin C for cancer)
  • Probiotics for diarrhea
  • Peppermint pills for pharyngitis
  • Antibiotics for suspected viral infections
  • Sub-clinical doses of otherwise effective therapies
  • Off-label uses of potentially effective therapies
  • Complementary and Alternative medicine (CAM) whose effectiveness is not evidence-based
  • Conventional medicine whose effectiveness is not evidence-based
  • Diagnostic practices based on the patient’s request or to calm the patient, such as:
      ○ Non-essential physical examinations
      ○ Non-essential technical examinations of the patient (blood tests, X-rays)

A few interesting ones there, especially ‘Positive suggestions’ and ‘Complementary and Alternative medicine whose effectiveness is not evidence-based’ (pretty much all of it, then). It’s safe to say that a number of the interventions on this list would not normally be classified as placebos.

In adding positive suggestions to this list of placebos, they have essentially included almost every GP consultation in this group. If a GP tells you that a treatment ‘will make you feel better’, or similar, they are using positive suggestion. If they neglect to do this, they could be accused of having a poor ‘bedside manner’. Even if a GP gave a sugar pill and told the patient it would make them feel better, this would count as an impure placebo.

The only feasible way that a pure placebo could be registered here is if a GP gave a patient a sugar pill/saline injection without any guidance – ‘Here’s some pills. There you go…’

Unsurprisingly, the 97% figure is for impure placebo use, whereas the pure placebo result was reported at 12%. This sounds incredibly high for the pure placebo group, until you take into account that the numbers are also broken down by frequency.

The 12% pure placebo figure is an ‘at least once in their career’ statistic. When we look at the ‘frequently’ numbers, this drops dramatically to just 0.9%. In practice, this type of placebo prescribing is very rarely used.

So, looking again at the impure placebos and concentrating on those used frequently (daily or weekly), the overall incidence drops to 77% – quite a fall from the headline-grabbing 97%. Within this 77%, the most frequently used are ‘non-essential physical examinations’ and ‘positive reassurance’.
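
To make those percentages a little more concrete, here is a minimal sketch converting them into approximate headcounts, assuming each figure applies to all 783 respondents (an assumption on my part; the paper’s per-question denominators may differ slightly):

```python
# Convert the quoted percentages into approximate numbers of GPs,
# assuming each applies to the full 783 respondents (an assumption;
# the paper's per-question denominators may differ slightly).
n = 783
figures = {
    "pure placebo, at least once in career": 0.12,
    "pure placebo, frequently":              0.009,
    "impure placebo, at least once":         0.97,
    "impure placebo, frequently":            0.77,
}
for label, proportion in figures.items():
    print(f"{label}: {proportion:.1%}  (~{round(n * proportion)} GPs)")
```

Put that way, frequent pure placebo use works out at roughly seven GPs in the entire sample.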

The ‘non-essential’ exams sound a little disconcerting, but don’t forget that a doctor often has to carry out a number of different tests to reach a diagnosis. Does a test that comes back negative count as unnecessary?

When we strip out the more obscure definitions of impure placebo, three stand out: ‘Off-label uses of potentially effective therapies’, ‘Antibiotics for suspected viral infections’ and ‘Conventional medicine whose effectiveness is not evidence-based’.

I’m going to let off-label use slide. Off-label prescribing isn’t necessarily a bad thing, and many drugs are used for multiple ailments. For example, the tricyclic antidepressant amitriptyline is prescribed off-label for neuropathic pain. The GMC understands the need for, and use of, off-label prescribing, and although it can be beset with problems, including it here with the placebos is at best questionable.

The figure for antibiotics prescribed for suspected viral infections is 25.2%. This, it has to be said, is pretty high. I suspect it has more to do with doctors giving patients what they expect and demand than with doctors knowingly giving a placebo. If that is the case, it’s pretty poor practice.

So finally we reach ‘conventional medicine whose effectiveness is not evidence-based’. This group of ‘impure placebo’ is frequently prescribed by 26.2% of the GPs surveyed. Interesting? Well, that depends on the context in which you look at the study.

This study was part-funded by the Southampton Complementary Medical Research Trust and co-authored by George Lewith of the Southampton Complementary and Integrated Medicine Research Unit.

Lewith is particularly involved in research into homeopathy and acupuncture (as well as other CAM modalities) and has published research backing the idea that the homeopathic consultation, not the sugar pill, is responsible for any effect seen (i.e. homeopathy is placebo). He has been criticised for continuing to prescribe homeopathy despite this.

Could it be that this study – partially funded by a CAM group and partially designed and carried out by a CAM practitioner/researcher to show that placebo use is widespread – will in turn be cited as an argument for CAM use? After all, if CAM medications are placebos, is it better to prescribe them for ailments than, for example, antibiotics for viral infections? If medicines that have no evidence base are prescribed regularly, then why shouldn’t evidence-free CAM be an option?

That’s certainly one way of looking at the results as presented, and especially as reported by the press. But dig a little deeper, and be a little more careful about what counts as a placebo, and this survey actually shows that placebo use is much less widespread, and that CAM really isn’t an alternative to anything. The real story here is how this paper has been presented to and by the mainstream press, and what fallout there may be from that.

The press release from the University of Southampton contains only the 97% and 12% figures for impure and pure placebos. The vital frequency information has been stripped out completely. What has happened here (and the researchers will have known it would happen) is that the press release has been churned into articles without any of the journalists covering it having digested the original PLoS One study.

Running this through Churnalism.com shows that the Daily Mail and the Independent have taken 51% and 63% of the press release verbatim respectively, unsurprisingly leaving out the details of funding and not bothering to go into the paper itself. This is not only a clear display of poor journalistic standards, but an example of dishonesty in the presentation of research results. Moreover, the press release itself has been built around cherry-picked figures designed to grab headlines rather than to represent the data.

What is already a badly designed study, one that could only ever show a high proportion of placebo use, has been deliberately misrepresented to gain maximum exposure. All of this shows everyone involved in a bad light, starting with the researchers, passing through PLoS One and ending with the journalists.

The annoying thing is that this could have been so different. The study could have chosen its differentiation of placebo types better, possibly splitting them into three or more categories: pure (as before), reinforced (placebo with psychological reinforcement) and active (off-label prescribing, etc.), to start with. There could have been a larger sample size to reduce noise. More importantly, though, the results of the survey could have been presented more honestly, without the spin and the headline-grabbing tactics. Sure, this wouldn’t have caught the media’s attention quite so much, but it would have been a more valid representation of placebo use and more useful for future research. Unless, of course, that was the intention all along?

——————————————————

Scott Gavura has written a similar piece on the Science-Based Medicine blog. It’s worth a read.

11 Comments

  • By Acleron, March 24, 2013 @ 10:56 am

    It’s time for universities to stop advertising scientific results through PR departments. The mangling by both the PR dept and then by a journalist destroys any meaningful information.

  • By ayse, March 25, 2013 @ 3:22 am

    unfair to get PLoS involved in this. the investigators are responsible for the bad design and sensation seeking or uneducated reporters are responsible for reporting conclusions without understanding the problems of the design. i find bad quality of research in peer review journals as well. problem is diminishing use and value of critical thinking and jumping to conclusion to get some exciting news out there at ever faster speed.

  • By David, March 25, 2013 @ 8:53 am

    I don’t think PLoS One have been unfairly mentioned. The journal itself has to be mentioned if for no other reason than so people can look at the study themselves.

    The criticism, as you rightly point out, should be and is focused at the mainstream media and Southampton University’s PR department. PLoS One did nothing particularly unusual and as you point out, there are often poor studies published in other journals that are sensationalised by the press. This is just one example.

  • By ayse, March 25, 2013 @ 2:20 pm

    to David: well-said. yes, of course the journal where it was published should be cited, but i was just referring to the part of the piece that insinuates as if its being published through plos is one of the qualifying characters of the publication’s shortcomings.

    i am not sure about the procedures involved in reporting through pr depts: do they write a summary themselves and advertise it without having reviewed by a study author or the authors write the summary? if the academic institutions could just put the abstract of their studies on their website to inform public about where their tax $s are going, i don’t really see any problem with it.

    in my view, we should stop blaming externalities and start training good scientists and reporters, and most importantly encourage collaboration among scientists so they can discuss their research for feedback before embarking upon futile and arduous effort.

  • By David, March 26, 2013 @ 9:16 am

    I think that’s the biggest point to take away from this. Many papers have cut science journalists and journalism, leaving only journalists who are not fully qualified to analyse and critically assess science stories. But that is only half the problem.

    Because this is the case, scientists and researchers need to become better at communicating with the press. I’ve heard it said that when a science story is poorly reported, journalists will hear from industry and pseudo-science proponents well before they hear from scientists.

    It would be unfair for me to speculate on the peer-review policies at PLoS. But I will say that because PLoS One will publish any and all studies that they deem “technically sound”, there is room for poor studies to make it onto the site.

  • By ayse, March 26, 2013 @ 3:33 pm

    ditto!

    after a stanford study showed no nutritional benefit of organic food compared to non-organic, and the media jumped on it as if nutrition were the only reason people consume organic products and made unfounded inferences that the paper did not even address, i suggested to my epidemiology department that we should implement a course teaching graduate students how to communicate their research. however, even if the authors explained their findings as best as they could, the media would still find a way to sensationalize it. it is possible that publicizing at the research institution’s website was initially designed to tackle this problem by explaining the studies’ intentions and findings in lay terms before media does from the original article. obviously it did not reach intended aim.

    aside all, i believe it is difficult for general public to understand the limitations of observational studies and that is a challenge we may never be able to resolve.

    PLos issue is a separate one. i support its premise because it facilitates the publication of negative results which would not get a chance in major journals. yes, there are dangers to not getting peer-review but there is also a bigger benefit of having access to all the research conducted in the field otherwise unknown. if we had responsible research and publishing by scientists and reporting by the media, we would have no problem with non-peer review journals. the readers and users of these publications should also start taking responsibility evaluating the quality of a study. to me PLoS is a start and we will learn and problem solve as we go. i know many people, who believe everything bmj, jama, nejm etc report because they think these studies were reviewed by some experts in the field. i have seen many crap research published in these journals especially recently! maybe good ones are going to PLoS? :)

  • By Barbara, April 2, 2013 @ 1:25 am

    About prescribing antibiotics for suspected viral infections . . . my father had COPD. He was at serious risk of death from pneumonia (and did eventually die of it, or rather of the pneumonia / COPD interaction). When my father had a cold, it tended to lead to bronchitis or pneumonia. His doctor began to prescribe an antibiotic whenever my father (who was rather stoic) developed a bad enough cold that he called for an appointment to see the doctor. I don’t know how common this is, but it seemed a reasonable precaution to me. And yet it was prescribing an antibiotic for what was thought to be a cold at the time the prescription was written.

  • By David, April 2, 2013 @ 8:51 am

    Hi Barbara.
    I’m sorry to hear that you lost your father to the effects of COPD (chronic obstructive pulmonary disease). As it sounds like you know, antibiotics are not indicated for COPD or for viral infections. It strikes me, and this is purely personal speculation, that your father’s doctor was taking a precautionary step to prevent any opportunistic infections which might have weakened him further while he was trying to overcome colds or other viral infections. As you’re probably aware, COPD often gets progressively worse over time, and this may have been an attempt to help maintain your father’s general health in the knowledge that he wasn’t going to improve overall. It really is impossible to know for certain, but I can understand why they would do this in his case.

Other Links to this Post

  1. Skewing Data – and Suppressing the Facts | Potato Skin Belt — March 24, 2013 @ 12:52 pm

  2. Science-Based Medicine » Behold the spin! What a new survey of placebo prescribing really tells us — March 28, 2013 @ 1:02 pm

  3. I’ve got your missing links right here (30 March 2013) – Phenomena: Not Exactly Rocket Science — March 30, 2013 @ 6:10 pm
