Understanding the strengths and weaknesses of clinical research in cancer

Published on 09/04/2015 by admin

Filed under Hematology, Oncology and Palliative Medicine

Last modified 09/04/2015



What are the important elements when assessing reports of trials?

By far the most important elements are the design and conduct of the trial. The methods of analysis, while still important, are generally less likely to lead to inappropriate conclusions than flawed design and management of a trial. This chapter is based on the methods recommended by various evidence-based medicine (EBM) groups (Box 26.1).

Because such a huge volume of medical literature is published each year, it is important to take a systematic approach to deciding what you appraise thoroughly (Box 26.2). The first step in appraisal is to screen the paper to see if it is worthy of careful reading. It may be possible to answer these screening questions from the title and abstract alone.

Step 1 – Screening questions

Step 2 – Appraising a paper reporting a trial

There are a number of crucial questions that need to be answered (Box 26.2):

1 Concurrent or historical controls?

When a new therapy is being tested, some investigators will give the new treatment to all the patients and will then compare the outcomes with those obtained from the records of similar patients treated in the past – so-called historical controls. Any such comparison is fraught with danger, as factors other than the new treatment may have changed over time.

2 Was the study based on a pre-specified protocol?

A protocol written before the trial starts is a prerequisite for good research. Some journals now recommend that the original protocol be submitted with the final paper, both to ensure that such a protocol existed and to identify any deviations from the original design. Any deviation from the original study design can result in a skewed estimate of clinical benefit.

When reading a paper bear in mind the following elements of trial design:

Subgroup analysis

Subgroup analyses are fairly common in trials; they use the data from a study to compare the treatment effect on an endpoint across different subgroups of patients. There are three major problems with this approach:

Where a subgroup is found to behave differently, you should consider whether there is a plausible biological mechanism for this and whether other trials have reported a similar finding. Where a statistical comparison is made, it should be done as a formal test of interaction. Strong empirical evidence suggests that post hoc subgroup analyses often lead to false positive results (Example Box 26.2).
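A formal test of interaction asks whether the treatment effect genuinely differs between subgroups, rather than testing each subgroup separately. A minimal sketch of one common approach is shown below: comparing the log odds ratios of two subgroups with a z-test. All counts are invented for illustration; they are not from any real trial.

```python
# Hedged sketch: formal test of treatment-by-subgroup interaction,
# comparing log odds ratios between two subgroups. Counts are invented.
from math import log, sqrt, erf

def log_or_and_se(events_t, n_t, events_c, n_c):
    """Log odds ratio (treatment vs control) and its standard error."""
    a, b = events_t, n_t - events_t   # treatment arm: events / non-events
    c, d = events_c, n_c - events_c   # control arm: events / non-events
    log_or = log((a * d) / (b * c))
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return log_or, se

def interaction_test(sub1, sub2):
    """Two-sided z-test for the difference between two subgroup log odds
    ratios; a small p-value suggests a genuine subgroup difference."""
    lor1, se1 = log_or_and_se(*sub1)
    lor2, se2 = log_or_and_se(*sub2)
    z = (lor1 - lor2) / sqrt(se1 ** 2 + se2 ** 2)
    # Normal approximation to the two-sided p-value.
    p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p

# Hypothetical counts: (events on treatment, n treatment,
#                       events on control, n control) per subgroup.
z, p = interaction_test((30, 200, 45, 200), (35, 200, 40, 200))
```

With these invented counts the two subgroup effects look different at first glance, but the interaction test is far from significant: testing the difference directly, rather than eyeballing the two subgroup p-values, is the point.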

Example Box 26.2
International study of infarct survival

Sleight P. Subgroup analyses in clinical trials: fun to look at – but don’t believe them! Curr Control Trials Cardiovasc Med 2000;1(1):25–27.

Analysis of subgroup results in a clinical trial is surprisingly unreliable, even in a large trial. This is due to a combination of reduced statistical power, increased variance and the play of chance. Reliance on such analyses is likely to be erroneous. Plausible explanations can usually be found for effects that are, in reality, simply due to the play of chance. When clinicians believe such subgroup analyses, there is a real danger of harm to the individual patient.

In order to study the effect of examining subgroups, the investigators of the ISIS-2 trial, testing the value of aspirin and streptokinase after MI, analysed the results by astrological star sign. All of the patients had their date of birth entered as an important ‘identifier’, and the population was divided into 12 subgroups by star sign. Even in a highly positive trial such as ISIS-2, in which the overall statistical benefit for aspirin over placebo was extreme (p < 0.00001), division into only 12 subgroups threw up two (Gemini and Libra) for which aspirin had a non-significantly adverse effect (9% ± 13%).

ISIS-2 was carried out in 16 countries. For the streptokinase randomization, two countries had non-significantly negative results, and a single (different) country was non-significantly negative for aspirin.

There is no plausible explanation for such findings except for the entirely expected operation of the statistical play of chance. It is very important to realize that lack of a statistically significant effect is not evidence of lack of a real effect.
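The star-sign phenomenon is easy to reproduce by simulation. The sketch below (not the actual ISIS-2 data; event rates and sample sizes are invented) simulates a genuinely positive trial, splits it into 12 arbitrary "star sign" subgroups, and counts how often at least one subgroup appears to show an adverse treatment effect purely by chance.

```python
# Hedged sketch: simulate a positive trial, then split it into 12
# arbitrary subgroups and see how often chance produces an apparently
# adverse subgroup. All parameters are invented for illustration.
import random

random.seed(1)

def simulate_trial(n_per_arm=6000, p_control=0.12, p_treat=0.10):
    """One trial; True means the patient died. Treatment genuinely helps."""
    control = [random.random() < p_control for _ in range(n_per_arm)]
    treated = [random.random() < p_treat for _ in range(n_per_arm)]
    return control, treated

def adverse_subgroups(control, treated, k=12):
    """Assign patients round-robin to k arbitrary subgroups ('star signs')
    and count subgroups where the treated arm looks WORSE than control."""
    adverse = 0
    for g in range(k):
        c = control[g::k]
        t = treated[g::k]
        if sum(t) / len(t) > sum(c) / len(c):
            adverse += 1
    return adverse

# Of 100 simulated trials, count how many contain at least one subgroup
# in which the (genuinely beneficial) treatment appears harmful.
trials_with_adverse = sum(
    adverse_subgroups(*simulate_trial()) >= 1 for _ in range(100)
)
```

Under these assumed parameters, the large majority of simulated trials contain at least one apparently adverse subgroup, even though the treatment helps every patient group by construction: exactly the play of chance the ISIS-2 investigators demonstrated.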