
I am very glad the Cochrane Collaboration exists. However, it is important to consider how its activities might be harmful and to make every effort to mitigate this risk.

I periodically re-read Joel Lexchin’s review of outcomes in industry-supported versus non-industry-supported studies to remind myself of the powerful and pervasive impact that conflict of interest has on the evidence base. Given that most randomised controlled trials are industry funded, there is a risk that Cochrane Reviews lend this biased evidence base legitimacy.

Of course, much of what the collaboration already does limits the chances of this bias being fed through into the findings of their reviews.

The wrong choice of comparator drug or outcome shouldn’t feed through if inclusion criteria are robust.

Similarly, good inclusion criteria should limit the effects of poor study design. However, in Lexchin’s review, study quality did not explain the bias seen.

Publication bias is more problematic. Freedom of information requests have been needed in recent years to gain access to unpublished negative studies and get a true picture of treatment effect. Not all Cochrane reviewers go to these lengths.

I support recent calls, on the Cochrane blog and in the literature, for meta-analysts to report the funding sources and author conflicts of interest of included studies.

However, I propose the Cochrane Collaboration go further. If all meta-analyses included separate analyses of independent and industry-funded studies, the most perverse distortions of the evidence base might become visible.

Perhaps more importantly, routinely demanding such analyses from Cochrane reviewers would make it easy for the collaboration to assess periodically how effective their procedures are at preventing funder bias from being fed through into the conclusions of their reviews.
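
As a concrete sketch of the kind of stratified analysis I have in mind – with invented study effects and standard errors, pooled by simple inverse-variance (fixed-effect) weighting – the two funding subgroups might be compared like this:

```python
# Minimal sketch of a funding-stratified meta-analysis: inverse-variance
# (fixed-effect) pooling of log odds ratios within each funding subgroup.
# The study effects and standard errors below are invented for illustration.
import math

# (log odds ratio, standard error, funding source)
studies = [
    (-0.35, 0.20, "industry"),
    (-0.40, 0.25, "industry"),
    (-0.10, 0.30, "independent"),
    (-0.05, 0.22, "independent"),
]

def pool(subset):
    """Inverse-variance weighted mean of log odds ratios, with its SE."""
    weights = [1 / se ** 2 for _, se, _ in subset]
    total = sum(weights)
    mean = sum(w * lor for w, (lor, _, _) in zip(weights, subset)) / total
    return mean, math.sqrt(1 / total)

for source in ("industry", "independent"):
    mean, se = pool([s for s in studies if s[2] == source])
    lo, hi = mean - 1.96 * se, mean + 1.96 * se
    print(f"{source}: pooled OR {math.exp(mean):.2f} "
          f"(95% CI {math.exp(lo):.2f} to {math.exp(hi):.2f})")
```

If the two pooled estimates diverge sharply, the divergence is itself informative, both for readers and for the collaboration’s own auditing.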

This blog was originally posted at BMJ.com.

Most doctors and nurses will have a deep well of patient stories – examples of great fortitude and its converse. It is clear to any clinician that some patients feel their symptoms, or at least report them, more than others.

This is well recognised in parts of the medical literature with, for example, the distinction made between irritable bowel syndrome patients and ‘non-patients’ – people with the same symptoms as those who seek care but who never present to services.

I wonder, however, whether stoicism has been neglected in the epidemiological literature. It seems likely to be related to many exposures and many diseases. Association with exposure and with disease are two of the key features of a confounding variable.

Stoicism is particularly likely to be associated with diseases where diagnosis relies on self-reported symptoms. Many diseases fall into this category.

I feel stoicism is likely to be related to exposures such as cycling to work every day, ambient household temperature, and so on. Indeed, it’s not hard to imagine that many health behaviours might be influenced by how stoical people are.

Confounding is well illustrated using a classic example, coffee and cancer. People who drink coffee are more likely to get cancer. They are also more likely to be smokers. The well-established association between smoking and cancer largely explains the association between coffee and cancer. Smoking, in this example, is a confounder.

Epidemiologists try to control for confounders to avoid misleading results.
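
To make the coffee example concrete, here is a toy simulation, with all probabilities invented, in which smoking causes cancer and smokers drink more coffee. Coffee looks harmful in the crude comparison, and harmless once you stratify by smoking:

```python
# Toy demonstration of confounding: smoking raises cancer risk and smokers
# drink more coffee, so coffee looks harmful until we stratify by smoking.
# All probabilities are invented for illustration.
import random

random.seed(1)
people = []
for _ in range(100_000):
    smoker = random.random() < 0.3
    coffee = random.random() < (0.8 if smoker else 0.4)   # smokers drink more coffee
    cancer = random.random() < (0.15 if smoker else 0.03) # smoking, not coffee, causes cancer
    people.append((smoker, coffee, cancer))

def risk(subset):
    return sum(cancer for _, _, cancer in subset) / len(subset)

drinkers = [p for p in people if p[1]]
abstainers = [p for p in people if not p[1]]
print(f"Crude: coffee {risk(drinkers):.3f} vs no coffee {risk(abstainers):.3f}")

for smoker in (False, True):
    d = [p for p in drinkers if p[0] == smoker]
    a = [p for p in abstainers if p[0] == smoker]
    print(f"Smoker={smoker}: coffee {risk(d):.3f} vs no coffee {risk(a):.3f}")
```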

A PubMed search for ‘stoic*’ AND ‘confound*’ reveals 24 papers, only three of which are even vaguely relevant. None are epidemiological, and I can access only one. By failing to control for stoicism in epidemiological studies, do we risk throwing up reams of spurious associations – a warmer home environment is associated with fatigue, winter cycling is associated with less chronic low back pain, etc.? Will this lead us to spend money trialling useless interventions?

In some cases it may be possible to adjust – person by person – for reported symptoms at another time point. However, to control for confounders, you usually need to be able to measure them.

It is entertaining to think how one might measure stoicism. For example: ‘When you last had a sore throat, did you (a) get on with it (b) have a day off work or (c) go to see your GP?’ We might apply more objective tests – time to first complaint when the waiting room is kept at ten degrees, perhaps? We could devise a scale.

A version of this blog was originally posted at BMJ.com.

Dear sir, I have completely failed to understand a simple criticism of our work, please tell everyone, yours, BBCnews 

Tweet by @bengoldacre, 4 November 2011

The misuse of epidemiology is everywhere. Let’s look at two prominent culprits.

“‘Three-fold variation’ in UK bowel cancer death rates” splashed a recent BBC News headline.

Had someone not thought to check the numbers, we’d all have gone away grumbling about postcode lotteries. And we’d all have been wrong. What was going on?

All things vary. If a coin is flipped several times, chance dictates it will rarely land heads exactly half the time. But the more times you flip the coin, the closer the proportion of heads gets to a half.

In the year in question, there were only six deaths from bowel cancer in the Shetland Islands, twelve in Antrim, fourteen in Watford, and so on. Even with a uniform rate of bowel cancer deaths across the UK, might there only be two deaths in Shetland the following year? Would that merit a headline or could it just be statistical noise?
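
A quick sketch, with an invented uniform rate and invented populations, shows how much apparent variation chance alone produces in small areas:

```python
# Sketch: even with one uniform underlying death rate, small areas show
# large year-to-year swings in observed rates purely by chance.
# The rate and populations below are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
rate = 0.0003  # uniform annual death rate: expect ~6 deaths per 20,000 people

for name, pop in [("small area", 20_000),
                  ("medium area", 200_000),
                  ("large area", 2_000_000)]:
    deaths = rng.poisson(pop * rate, size=5)        # five years of observed deaths
    per_100k = np.round(deaths / pop * 100_000, 1)  # observed rate each year
    print(f"{name}: deaths {deaths}, rate per 100k {per_100k}")
```

An area expecting six deaths a year can easily record two or twelve; a city expecting six hundred barely moves. Multi-fold ‘variation’ between small areas is exactly what a uniform rate predicts.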

Thankfully, as reported by Ben Goldacre, someone properly analysed the data. The variation in death rates seen was actually less than would be expected by chance. Embarrassingly, the BBC failed to understand the criticism.

Whilst the poor quality of science journalism has been well documented by Ben and others over the years, ineptitude is less concerning than the deliberate misuse of data.

The debate surrounding the recent UN High-Level Meeting on Non-Communicable Diseases (NCDs) was mired in epidemiological naughtiness. Among the most prominent culprits was the NCD Alliance, a grouping of NGOs backed by the pharmaceutical industry, which unfortunately – given its funding – emerged as the leading advocacy organisation.

“NCDs were responsible for 63% of all global deaths in 2008. This is not just a statistic, it is the deaths of 36 million people. With the incidence of NCDs predicted to rise by 17% over the next ten years worldwide, we must work together to ensure it is world leaders who attend the Summit and agree to a concrete set of commitments that will result in sustained action.”

Professor Jean Claude Mbanya, President of the International Diabetes Federation, quoted in a press release (pdf) from the NCD Alliance.

Given that immortality is not currently achievable and that we must all die of something, presenting mortality data as counts is thoroughly misleading. Counts that are not age standardised fail to capture the difference between a death from a respiratory tract infection at age five and a death from ischaemic heart disease at age ninety-five.
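
A toy example of direct age standardisation, with invented rates and populations, makes the point: two countries with identical age-specific death rates record very different raw counts simply because one population is older.

```python
# Sketch of direct age standardisation (all numbers invented): both countries
# share identical age-specific death rates, but the older population records
# far more deaths, so raw counts mislead while standardised rates agree.
import numpy as np

rates = np.array([0.0005, 0.002, 0.02])          # deaths/person-year: ages 0-29, 30-59, 60+
younger = np.array([500_000, 300_000, 100_000])  # younger age structure
older = np.array([100_000, 300_000, 500_000])    # older age structure
standard = np.array([0.35, 0.40, 0.25])          # reference population weights

for name, pop in [("younger country", younger), ("older country", older)]:
    crude_deaths = (rates * pop).sum()
    std_rate = (rates * standard).sum() * 100_000  # per 100,000 of the standard population
    print(f"{name}: {crude_deaths:,.0f} deaths, standardised rate {std_rate:.0f} per 100k")
```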

With huge expertise within the alliance, it is hard to believe that presenting the data in this manner was anything other than a deliberate choice to spin the figures and inflate the apparent contribution of NCDs. Whilst counts are easier to understand than Disability-Adjusted Life Years lost or age-adjusted rates, the alliance could have presented numbers of premature deaths – less misleading and totally comprehensible.

I don’t dispute the assertion that NCDs are emerging as a major problem in low and middle income countries, and I’m gutted the summit achieved so little. However, the data is compelling when presented honestly. It doesn’t need to be spun.

It is very easy to misrepresent epidemiological data and for the unscrupulous to spin data for the press. Given the volume of coverage epidemiological research generates, groups such as the Science Media Centre have their work cut out.

I don’t know what the solution is. Ben Goldacre’s suggestion that we need more science editors and fewer science reporters seems sensible. I’d be interested in people’s thoughts.

A version of this blog was originally posted at BMJ.com.

“Your chances of surviving from cancer, in America, if you are diagnosed with cancer, is better than in the UK.”

Mark Littlewood, BBC Question Time, 13 October 2011.

I was recently an audience member at a recording of BBC Question Time in East London – a deeply frustrating experience.

I’ll save discussion of panellist Mark Littlewood’s assertion that increased choice in the NHS would lead to reduced health inequalities for another time – what data there are suggest that less educated people reliant on public transport are less likely to exercise their choices. I’ll also leave for now Andrew Lansley’s claim that the Health Bill is not about increasing competition.

I will, instead, examine Mark Littlewood’s claim about cancer survival.

Those seeking support for the Health Bill have made a concerted effort to rubbish NHS cancer care over recent months. The most outrageous example of this was David Cameron’s assertion, in the run-up to the last election, that “our death rate from cancer is actually worse than Bulgaria’s.”

Stephen Henderson commented at the time, “I’m reluctant usually to rely on anecdotal evidence, I‘ve been in a Bulgarian hospital and I’m pretty confident that their low cancer mortality has nothing to do with standards of care.” 

It is instructive to review once again why Cameron and Littlewood may be right, and why that tells us little about the state of NHS cancer care. This is not new ground, as it has been covered in recent months by Rob Aldridge, Ben Goldacre, and Cancer Research UK. However, I think the ideas are worth reiterating.

Mortality and survival rates are not the same. Exposure to risk factors has a major effect on incidence and therefore on mortality from cancer even where survival remains unchanged. Cancer mortality is falling rapidly in the UK.
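
Some crude arithmetic, with invented numbers, makes the distinction plain: deaths roughly track incidence multiplied by the proportion who do not survive, so two countries with identical survival can have very different mortality.

```python
# Invented numbers: identical survival, different incidence, different mortality.
survival = 0.60  # five-year survival, the same in both countries
for country, incidence_per_100k in [("country A", 50), ("country B", 100)]:
    deaths_per_100k = incidence_per_100k * (1 - survival)
    print(f"{country}: incidence {incidence_per_100k}/100k -> "
          f"~{deaths_per_100k:.0f} deaths/100k, despite identical survival")
```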

The data most widely cited in this debate describes cancers diagnosed before the NHS Cancer Plan was implemented.

There is variation in the proportion of cancer cases included in national registries. The UK data is fairly complete. In the US, only a quarter of cancers are included in the main cancer registry. Historically, disadvantaged groups have been under-represented in the sample, a problem given poverty is correlated with poor survival.

Finally, cancer survival depends on when disease is diagnosed. For example, the PSA blood test is more widely used in the US, meaning many men are diagnosed very early with prostate cancer. Whilst this inflates survival rates, most men with prostate cancer die with, rather than of, their disease. Many American men will have been exposed to unnecessary anxiety and potentially unnecessary treatment. A similar argument can be made regarding UK-Europe comparisons of breast cancer survival rates.
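
A toy illustration of this lead-time effect, with invented numbers: shift diagnosis two years earlier without changing anyone’s date of death, and the measured five-year survival rate still rises.

```python
# Lead-time bias sketch (all numbers invented): screening moves diagnosis
# two years earlier but leaves the date of death unchanged, yet measured
# five-year survival improves for the very same cohort.
import numpy as np

rng = np.random.default_rng(1)
death_after_onset = rng.uniform(4, 14, 100_000)  # years from disease onset to death

for label, dx_year in [("clinical diagnosis (year 4)", 4.0),
                       ("screen-detected (year 2)", 2.0)]:
    five_year = (death_after_onset - dx_year >= 5).mean()
    print(f"{label}: five-year survival {five_year:.0%}")
```

Nobody in the simulated cohort lives a day longer, yet the screen-detected group’s five-year survival is substantially higher.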

I have no idea how familiar Mark Littlewood is with the literature on cancer survival but, given how much is at stake in these NHS reforms, it would be helpful if he could qualify the remarks he made on cancer survival in a public forum.

A version of this blog was originally posted at BMJ.com.