Research evidence in diabetes

The aim of this section is to provide an introduction to research evidence in diabetes: where to find it, how to interpret it, and problems with the evidence base. It is not about how to do research. It will be useful to readers who cannot find what they want in other sections, perhaps because not all other sections have yet been written, or perhaps because the evidence base needs updating. They will then need to search for studies and interpret what they find.

Evidence based medicine: what is it?

Evidence based medicine (EBM) is the use of current best evidence in making decisions about the care of individual patients, but also about the provision of new forms of health care.

However, evidence alone is not enough.

As the Centre for Evidence-Based Medicine in Oxford states: “Good doctors and health professionals use both individual clinical expertise and the best available external evidence, and neither alone is enough. Without clinical expertise, practice risks becoming tyrannised by evidence, for even excellent external evidence may be inapplicable to or inappropriate for an individual patient. Without current best evidence, practice risks becoming rapidly out of date, to the detriment of patients. Evidence-based medicine is not restricted to randomised trials and meta-analyses. It involves tracking down the best external evidence with which to answer our clinical questions.”

Types and hierarchies of evidence

The type of evidence required varies with the question, which may be about:

  • treatment
  • diagnosis
  • prognosis
  • causes of disease
  • quality of life

So the first step in finding evidence is deciding on a focused question.


Questions on treatment include:

  • Does this treatment work, where “work” means doing more good than harm? Both effectiveness and adverse effects therefore need to be considered.
  • Is it better than the best current treatment?
  • If there is no current treatment, is this treatment better than nothing?
  • Is it cost-effective?

The best known hierarchy of evidence is for questions about treatment. The version below is taken from the SIGN website and comes from SIGN guideline 50, a guide for guideline developers.

1++ High quality meta-analyses, systematic reviews of RCTs, or RCTs with a very low risk of bias
1+ Well conducted meta-analyses, systematic reviews, or RCTs with a low risk of bias
1− Meta-analyses, systematic reviews, or RCTs with a high risk of bias
2++ High quality systematic reviews of case control or cohort studies; high quality case control or cohort studies with a very low risk of confounding or bias and a high probability that the relationship is causal
2+ Well conducted case control or cohort studies with a low risk of confounding or bias and a moderate probability that the relationship is causal
2− Case control or cohort studies with a high risk of confounding or bias and a significant risk that the relationship is not causal
3 Non-analytic studies, e.g. case reports, case series
4 Expert opinion

Note that the lowest level, “expert opinion”, implies that the experts do not have access to any good evidence, and are having to give the best advice they can based on experience.

Note also that this hierarchy is for evidence on effectiveness. Evidence on adverse effects of treatment often comes not from RCTs, but from larger longer term observational studies such as case series. This applies particularly to uncommon adverse effects, or to adverse effects seen only in some subgroups, such as the elderly, or those with co-morbidities. RCTs are often very good for internal validity but less so for generalisability.

Reference: Sarah Wild and colleagues compared recruits to several large trials of treatment in type 2 diabetes with people on a national diabetes register, and found that some large trials recruited so selectively that most people with type 2 diabetes could not be recruited. This reduces the generalisability of the results to people with type 2 diabetes. Diabetic Medicine 2012;30:300-308.


Evidence on diagnostic tests

This section is based partly on the interim NICE guidance on the evaluation of diagnostic tests.

Diagnostics involves a variety of tests and measurements that can be used to determine what conditions a patient may have or be likely to develop. These tests can be used for suggesting or confirming a suspected diagnosis, for screening, for monitoring chronic conditions, for ruling out possible diagnoses, for monitoring patients after treatment for side effects or recurrence, and for predicting future events.

The types of evidence in diagnostics are often different from those used in evaluating treatment – fewer RCTs, and more observational studies. The most important difference is that diagnostic tests have few direct outcomes, that is, outcomes that matter to the patient and come directly from the test itself. Most outcomes of interest follow from treatments that are either initiated or not initiated on the basis of the tests. The second important difference is that tests are frequently done in conjunction with other tests or measurements, and it is the data from the series of tests that is used in decision making.

Figure 1: Determinants of the clinical effectiveness of a diagnostic technology (Medical Services Advisory Committee (MSAC), 2005)

Fryback and Thornbury provide a useful framework for thinking about evaluation of diagnostic tests.

Evaluation of a diagnostic technology may go through the following stages (Fryback and Thornbury, 1991):

  1. Technical quality of test information – does it measure what it says it does?
  2. Diagnostic accuracy – how accurate is the test?
  3. Impact on diagnosis – how much does it affect the physician’s diagnosis?
  4. Therapeutic impact – does the result change patient management?
  5. Effect on outcomes – does the patient do better because the test was done?
  6. Societal impact – what is the change in costs and benefits?
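Stage 2, diagnostic accuracy, is conventionally summarised with sensitivity, specificity and predictive values derived from a 2×2 table of test results against true disease status. A minimal sketch (the counts are invented for illustration, not taken from any real test):

```python
def accuracy_metrics(tp, fp, fn, tn):
    """Standard diagnostic accuracy measures from a 2x2 table
    of test result against true disease status."""
    sensitivity = tp / (tp + fn)  # diseased people correctly detected
    specificity = tn / (tn + fp)  # healthy people correctly excluded
    ppv = tp / (tp + fp)          # positive predictive value
    npv = tn / (tn + fn)          # negative predictive value
    return sensitivity, specificity, ppv, npv

# Invented counts for a hypothetical screening test in 1000 people
sens, spec, ppv, npv = accuracy_metrics(tp=90, fp=30, fn=10, tn=870)
# sens = 0.90, spec ≈ 0.97, ppv = 0.75, npv ≈ 0.99
```

Note that the predictive values, unlike sensitivity and specificity, depend on prevalence: the same test applied in a lower-prevalence population would have a lower PPV.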

Types of evidence for different questions

  • Treatment
  • Causes
  • Prognosis
  • Diagnosis

Types of evidence

  • Hierarchies of evidence
  • Systematic reviews (what are they, how they differ from narrative reviews, why do systematic reviews on the same topic sometimes come to different conclusions)
  • RCTs (what is randomisation, why is it done, what is adequate allocation concealment, blinding, individual vs cluster randomisation, external vs internal validity, why results from RCTs can differ from those from observational studies)
  • Cohort Studies
  • Case control
  • Case studies
  • Qualitative
  • Adverse events – sources of evidence

Problems with evidence

  • Biases in evidence (e.g. publication bias, language bias, selective outcome reporting, spin in reporting, inappropriate comparators, conflicts of interest)
  • Underpowered trials
  • Outcomes measured – problems, e.g. surrogate outcomes, short-term outcomes, outcomes not relevant to patients
  • Conflicting evidence
  • Indirect comparisons, network meta-analyses (why and when are they done – e.g. ranibizumab in diabetes)
  • Lack of evidence for some problems (bias towards pharma-funded studies)
  • Generalisability (how patients in trials are not representative of ‘real world’ patients), resourcing of trials vs routine care

Biases in reporting of evidence

Reporting bias – classification of different types

“Selective reporting can occur in various ways in randomised clinical trials (RCTs), for example: omission of all data for a measured outcome, reporting of data for only some of the time points measured, reporting only per-protocol analyses and reporting only subgroup analyses or sub-scale data. Additional problems are the lack of reporting of measures of variation and exact P values for non-significant outcomes and upgrading a pre-specified secondary outcome to primary in the publication, or vice versa.”

1. Publication bias

We know that many studies are never published, or not published in full. So the published literature does not provide all the data that exist, and some studies are more likely to be published than others. This is known as publication bias.

One feature of publication bias is that studies with positive results are:

  • more likely to be published (publication bias)
  • more likely to be published rapidly (time lag bias)
  • more likely to be published in English (language bias)
  • more likely to be published more than once (multiple publication bias)
  • more likely to be cited by others (citation bias)

So systematic reviews of published studies are likely to be biased towards the positive.

Definition: “Publication bias refers to the greater likelihood that studies with positive results will be published”[1]

An alternative definition: Publication bias is the selective or multiple publication or suppression of trial results so that the scientific record is distorted.

Publication bias causes harm by giving a false representation of the evidence base, which can mislead clinicians, policy-makers and patients. It is a form of scientific and research misconduct. Failure to report research findings is not only unscientific but also unethical, because the contributions of patients and other volunteers who have participated in trials are discarded.


From: Dickersin K, Chalmers I (2010). Recognising, investigating and dealing with incomplete and biased reporting of clinical research: from Francis Bacon to the World Health Organisation. James Lind Library.

Managing publication bias

The Cochrane Collaboration provides advice on how to deal with publication bias. The Review Manager software (which is free to download) provides a funnel plot facility which can be used to try to quantify publication bias.
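The idea behind a funnel plot can also be checked numerically. The sketch below is a minimal, illustrative implementation of Egger's regression test (not Review Manager's method, and all numbers are made up): it regresses the standardised effect on precision, and a clearly non-zero intercept suggests funnel-plot asymmetry.

```python
def egger_intercept(effects, ses):
    """Egger's test regresses the standardised effect (effect/SE) on
    precision (1/SE); the intercept measures funnel-plot asymmetry."""
    y = [e / s for e, s in zip(effects, ses)]   # standardised effects
    x = [1 / s for s in ses]                    # precisions
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return my - slope * mx                      # regression intercept

# Sanity check: the same true effect at every precision is perfectly
# symmetric, so the intercept is (near) zero
symmetric = egger_intercept(effects=[0.5, 0.5, 0.5], ses=[0.1, 0.2, 0.4])
```

A funnel plot of effect against standard error gives the same information visually; note that asymmetry can also arise from causes other than publication bias, such as genuine small-study effects.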

The Supercourse provides a set of slides from Kamran Abbasi, Deputy Editor, BMJ.

Fujian Song and colleagues provide an overview of publication and related biases:

Song F, Parekh S, Hooper L, Loke YK, Ryder J, Sutton AJ, et al. Dissemination and publication of research findings: an updated review of related biases. Health Technol Assess. 2010;14(8):iii, ix-iii,193.

RESULTS: Studies with significant or positive results were more likely to be published than those with non-significant or negative results. There was convincing evidence that outcome reporting bias exists and has an impact on the pooled summary in systematic reviews. Studies with significant results tended to be published earlier than studies with non-significant results, and empirical evidence suggests that published studies tended to report a greater treatment effect than those from the grey literature. In a few cases, publication and related biases had a potentially detrimental impact on patients or resource use. Publication bias can be prevented before a literature review (e.g. by prospective registration of trials), or detected during a literature review (e.g. by locating unpublished studies, funnel plot and related tests, sensitivity analysis modelling).

Dwan and colleagues reported that: “There is strong evidence of an association between significant results and publication; studies that report positive or significant results are more likely to be published and outcomes that are statistically significant have higher odds of being fully reported.”

From: Dwan K, Altman DG, Arnaiz JA, Bloom J, Chan AW, Cronin E, et al. Systematic review of the empirical evidence of study publication bias and outcome reporting bias. Plos One. 2008;3(8):e3081.

Hopewell and colleagues also reported that trials with positive findings are more likely to be published, and that trials with positive findings were published more quickly.[2]

2. Biased reporting

Outcome reporting bias has been defined as the selection for publication of a subset of the original recorded outcome variables based on the results.

The ORBIT Group examined a group of Cochrane reviews, and concluded that:

“a fifth of decisions may have been incorrect as a result of this problem, and a quarter of reviews may have seriously overestimated the benefits of treatment”

Kirkham and colleagues came to similar conclusions:

“A third of Cochrane reviews (96/283 (34%)) contained at least one trial with high suspicion of outcome reporting bias for the review primary outcome. Of the 42 meta-analyses with a statistically significant result only, eight (19%) became non-significant after adjustment for outcome reporting bias and 11 (26%) would have overestimated the treatment effect by 20% or more.”

Kirkham JJ, Dwan KM, Altman DG, Gamble C, Dodd S, Smyth R, Williamson PR. The impact of outcome reporting bias in randomised controlled trials on a cohort of systematic reviews. BMJ 2010;340:c356.[3]

Further reading: McGauran N, Wieseler B, Kreis J, Schuler YB, Kolsch H, Kaiser T. Reporting bias in medical research – a narrative review. Trials. 2010;11:37.

Spin in reporting

Spin has been defined as:

“In the context of a trial with statistically nonsignificant primary outcomes, spin was defined as use of specific reporting strategies, from whatever motive, to highlight that the experimental treatment is beneficial, despite a statistically nonsignificant difference for the primary outcome, or to distract the reader from statistically nonsignificant results.” (Boutron et al 2010)

Boutron I, Dutton S, Ravaud P, Altman DG. Reporting and interpretation of randomized controlled trials with statistically nonsignificant results for primary outcomes. JAMA 2010;303(20):2058.

Boutron and colleagues examined 616 publications from randomised controlled trials and concluded that:

“the reporting and interpretation of findings was frequently inconsistent with the results.”

Spin in medical reporting

Half of medical reporting 'is subject to spin'

Yavchitz et al looked at press releases reporting RCTs and found that:

"Spin, defined as specific reporting strategies (intentional or unintentional) emphasizing the beneficial effect of the experimental treatment, was identified in 28 (40%) scientific article abstract conclusions and in 33 (47%) press releases.”

Yavchitz A, Boutron I, Bafeta A, Marroun I, Charles P, Mantz J, et al. Misrepresentation of randomized controlled trials in press releases and news coverage: a cohort study. PLoS Med. 2012;9(9):e1001308.

Interpreting and evaluating evidence

  • Appraisal checklists for different study designs, e.g. CASP, SIGN, CEBM
  • How to read a paper (e.g. Trish Greenhalgh’s papers)
  • Ways of reporting evidence, e.g. forest plots, NNT, odds ratios, relative risk (ref. Bandolier articles)
  • Reporting quality – guidelines, e.g. PRISMA (systematic reviews), CONSORT (RCTs), STROBE (observational epidemiological studies), STARD (diagnostic accuracy studies); the EQUATOR Network collects these and other reporting guidelines
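The effect measures mentioned above (relative risk, odds ratio, number needed to treat) can all be derived from the 2×2 table of a trial's results. A minimal sketch with invented counts, not taken from any real trial:

```python
def effect_measures(events_t, n_t, events_c, n_c):
    """Relative risk, odds ratio and number needed to treat from
    event counts in the treatment and control arms of a trial."""
    risk_t = events_t / n_t
    risk_c = events_c / n_c
    rr = risk_t / risk_c                 # relative risk
    odds_t = events_t / (n_t - events_t)
    odds_c = events_c / (n_c - events_c)
    or_ = odds_t / odds_c                # odds ratio
    arr = risk_c - risk_t                # absolute risk reduction
    nnt = 1 / arr                        # number needed to treat
    return rr, or_, nnt

# Illustrative: 10/100 events on treatment vs 20/100 on control
rr, or_, nnt = effect_measures(10, 100, 20, 100)
# rr = 0.5, or_ ≈ 0.44, nnt = 10
```

The example shows why relative measures alone can mislead: a relative risk of 0.5 sounds dramatic, but the NNT of 10 reflects the absolute benefit, which shrinks as baseline risk falls.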

Publications and peer review

  • Peer reviewed journals (what are they, what is peer review, why does it matter, hierarchy of journals (e.g. impact factors of diabetes journals), access to journals, how to find open access journals in diabetes)
  • Meeting abstracts – where to find diabetes meeting abstracts, percentage that go unpublished, peer reviewed?

Finding the Evidence

Focusing your question

The first step in your search for information is to identify your terms to fit the PICO format, where PICO stands for:

  • Patient or Problem
  • Intervention (a cause, prognostic factor, treatment, etc.)
  • Comparison (the main alternative, if appropriate)
  • Outcome (the effect being measured or tested)

This process will require you to consider all the search terms you will need to construct your search strategy, and will help you focus your search question in a precise and answerable format.
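As an illustration of how PICO terms translate into a search, the toy function below ORs synonyms within each element and ANDs the elements together. The terms and the simplified syntax are illustrative only; a real PubMed search would also use MeSH headings and field tags.

```python
def pico_query(patient, intervention, comparison, outcome):
    """Combine PICO term lists into a simple Boolean search string:
    synonyms are OR-ed within each element, elements are AND-ed."""
    groups = [patient, intervention, comparison, outcome]
    # skip elements with no terms (e.g. no comparator specified)
    clauses = ["(" + " OR ".join(terms) + ")" for terms in groups if terms]
    return " AND ".join(clauses)

query = pico_query(
    patient=["type 2 diabetes", "T2DM"],
    intervention=["metformin"],
    comparison=[],                       # no comparator in this question
    outcome=["HbA1c", "glycaemic control"],
)
# → (type 2 diabetes OR T2DM) AND (metformin) AND (HbA1c OR glycaemic control)
```

Listing synonyms explicitly for each PICO element before searching is what keeps the search both sensitive (the OR groups) and focused (the AND joins).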

After defining your PICO, the next step in the search for information is generally to access secondary sources: that is, information collected, analysed and synthesized from primary studies in an objective structured manner.

Searching Databases

The Cochrane Database of Systematic Reviews is a good starting point as it contains high quality reviews. Its reviews are mostly of interventions, but it is currently expanding to include diagnostics reviews. However, you should be aware that a high proportion of studies have been excluded because they do not meet the eligibility criteria. This is because Cochrane reviews are mostly limited to RCTs as evidence of efficacy. Another source is the Centre for Reviews and Dissemination (CRD) databases, funded by the UK government and freely accessible.

Information concerning all ongoing or completed registered clinical trials can be obtained from the WHO International Clinical Trials Registry Portal or from the US government site.

Data concerning preliminary results from ongoing studies can be obtained by accessing abstracts of oral or poster presentations at scientific meetings, although these have not undergone full peer review. This section offers guidance as to how to access information from any of these sources.

If you want to search more widely for literature on diabetes, including primary studies, then PubMed is the ideal database due to its wide coverage of diabetes journals. It is freely available and has many guides and search tools to help you focus your searches.

Health economics

  • What is it, why is it done, rationing, how is health economics used in diabetes, who is doing it

Health Technology Assessment

  • What is it, how is it used in diabetes, who is doing it, signpost to websites


  1. ^ Olson CM et al. Publication bias in editorial decision making. JAMA. 2002 Jun 5;287(21):2825-8.

  2. ^ Hopewell S et al. Publication bias in clinical trials due to statistical significance or direction of trial results. Cochrane Database Syst Rev 2009;MR000006.

  3. ^ McGauran N, Wieseler B, Kreis J, Schuler YB, Kolsch H, Kaiser T. Reporting bias in medical research – a narrative review. Trials. 2010;11:37.

