The Cochrane Handbook for Systematic Reviews of Interventions is the official guide that describes in detail the process of preparing and maintaining Cochrane systematic reviews on the effects of healthcare interventions. All authors should consult the Handbook for guidance on the methods used in Cochrane systematic reviews.
The Handbook includes guidance on the standard methods applicable to every review (planning a review, searching for and selecting studies, data collection, risk of bias assessment, statistical analysis, GRADE and interpreting results), as well as more specialised topics (non-randomized studies, adverse effects, complex interventions, equity, economics, patient-reported outcomes, individual patient data, prospective meta-analysis, and qualitative research).
A list of all author contributors to the Handbook is available online. The methods described provide core standards that are generally expected of Cochrane reviews. For further information and for any Handbook enquiries, please contact: support cochrane.
Non-protocol interventions may be identified through the expert knowledge of members of the review group, via reviews of the literature, and through discussions with health professionals. As described in Section 8, in RoB 2 the only deviations from the intended intervention that are addressed in relation to the effect of assignment to intervention are those that arise because of the experimental context. For example, in an unblinded study participants may feel unlucky to have been assigned to the comparator group and therefore seek the experimental intervention, or other interventions that improve their prognosis.
Similarly, monitoring patients randomized to a novel intervention more frequently than those randomized to standard care would increase the risk of bias, unless such monitoring was an intended part of the novel intervention. To examine the effect of adhering to the interventions as specified in the trial protocol, it is important to specify what types of deviations from the intended intervention will be examined.
These will be one or more of: contamination, switches to non-protocol interventions, or non-adherence by trial participants. Bias due to deviations from intended interventions can sometimes be reduced or avoided by implementing mechanisms that ensure the participants, carers and trial personnel (i.e. the people delivering the interventions) are unaware of the intervention assignments. Blinding, if successful, should prevent knowledge of the intervention assignment from influencing contamination (application of one of the interventions in participants intended to receive the other), switches to non-protocol interventions, or non-adherence by trial participants.
The term 'double blind' makes it difficult to know who was blinded (Schulz et al). A review of methods used for blinding highlights the variety of methods used in practice (Boutron et al). Blinding during a trial can be difficult or impossible in some contexts, for example in a trial comparing a surgical with a non-surgical intervention.
Lack of blinding of participants, carers or people delivering the interventions may cause bias if it leads to deviations from intended interventions. For example, low expectations of improvement among participants in the comparator group may lead them to seek and receive the experimental intervention.
Such deviations from intended intervention that arise due to the experimental context can lead to bias in the estimated effects of both assignment to intervention and adherence to intervention. An attempt to blind participants, carers and people delivering the interventions to intervention group does not ensure successful blinding in practice. For many blinded drug trials, the side effects of the drugs allow the possible detection of the intervention being received for some participants, unless the study compares similar interventions (for example, drugs with similar side effects) or uses an active placebo (Boutron et al, Bello et al, Jensen et al). Deducing the intervention received, for example among participants experiencing side effects that are specific to the experimental intervention, does not in itself lead to a risk of bias.
As discussed, cessation of a drug intervention because of toxicity will usually not be considered a deviation from intended intervention. See the elaborations that accompany the signalling questions in the full guidance at www.
Risk of bias in this domain may differ between outcomes, even if the same people were aware of intervention assignments during the trial. For example, knowledge of the assigned intervention may affect behaviour (such as number of clinic visits), while not having an important impact on physiology (including risk of mortality).
For the effect of assignment to intervention, an appropriate analysis should follow the principles of ITT see Section 8. Instrumental variable approaches can be used in some circumstances to estimate the effect of intervention among participants who received the assigned intervention.
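As a hedged illustration of the instrumental-variable approach mentioned above, the simplest version (the Wald estimator, with randomization as the instrument, under the standard IV assumptions) divides the ITT effect by the between-arm difference in the proportion receiving the intervention. All numbers below are hypothetical, as is the function name:

```python
def wald_iv_estimate(mean_treat_arm, mean_control_arm,
                     prop_received_treat_arm, prop_received_control_arm):
    """ITT effect scaled by the between-arm difference in treatment receipt."""
    itt_effect = mean_treat_arm - mean_control_arm
    receipt_diff = prop_received_treat_arm - prop_received_control_arm
    return itt_effect / receipt_diff

# Hypothetical trial: mean outcomes 12.0 vs 10.0, 80% received the
# intervention in the experimental arm, 10% contamination in the comparator.
effect = wald_iv_estimate(12.0, 10.0, 0.8, 0.1)
print(round(effect, 3))  # prints 2.857: the ITT effect of 2.0, scaled up
```

This sketch assumes a continuous outcome; it is not a substitute for a full complier-average causal effect analysis, but it shows why the estimated effect among those who received the intervention exceeds the ITT effect when adherence is imperfect.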
Missing measurements of the outcome may lead to bias in the intervention effect estimate. Possible reasons for missing outcome data are described by the National Research Council. This domain addresses risk of bias due to missing outcome data, including biases introduced by procedures used to impute, or otherwise account for, the missing outcome data.
Some participants may be excluded from an analysis for reasons other than missing outcome data. The ITT principle of measuring outcome data on all participants (see Section 8) is frequently difficult or impossible to achieve in practice. Therefore, it can often only be followed by making assumptions about the missing outcome values. For this reason, assessments of risk of bias due to missing outcome data should be based on the issues addressed in the signalling questions for this domain, and not on the way that trial authors described the analysis.
To understand when missing outcome data lead to bias in such analyses, we need to consider the mechanism by which the data came to be missing. Whether missing outcome data lead to bias in complete case analyses depends on whether the missingness mechanism is related to the true value of the outcome. Equivalently, we can consider whether the measured (non-missing) outcomes differ systematically from the missing outcomes (the true values in participants with missing outcome data).
For example, consider a trial of cognitive behavioural therapy compared with usual care for depression. If participants who are more depressed are less likely to return for follow-up, then whether a measurement of depression is missing depends on its true value, which implies that the measured depression outcomes will differ systematically from the true values of the missing depression outcomes.
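The depression example can be made concrete with a small simulation. This is an illustrative sketch with assumed numbers (a mean score of 20, dropout probability increasing with the true score), not an analysis of any real trial:

```python
import random

random.seed(0)

# Hedged simulation (all numbers assumed): true depression scores, where
# more depressed participants are less likely to return for follow-up.
true_scores = [random.gauss(20, 5) for _ in range(10_000)]

# Missingness depends on the true value: the probability of dropout rises
# with the score, so data are "missing not at random".
observed = [s for s in true_scores if random.random() > min(0.9, s / 40)]

true_mean = sum(true_scores) / len(true_scores)
complete_case_mean = sum(observed) / len(observed)

# The complete-case mean systematically understates depression severity.
print(round(true_mean, 1), round(complete_case_mean, 1))
```

Because more depressed participants drop out more often, the mean of the observed scores falls below the mean of the true scores, which is exactly the bias described above.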
The specific situations in which a complete case analysis suffers from bias when there are missing data are discussed in detail in the full guidance for the RoB 2 tool at www. It is tempting to classify risk of bias according to the proportion of participants with missing outcome data. In situations where missing outcome data lead to bias, the extent of bias will increase as the amount of missing outcome data increases. However, the potential impact of missing data on estimated intervention effects depends on the proportion of participants with missing data, the type of outcome and (for dichotomous outcomes) the risk of the event.
It is not possible to examine directly whether the chance that the outcome is missing depends on its true value: judgements of risk of bias will depend on the circumstances of the trial. Therefore, we can only be sure that there is no bias due to missing outcome data when: (1) the outcome is measured in all participants; (2) the proportion of missing outcome data is sufficiently low that any bias is too small to be of importance; or (3) sensitivity analyses (conducted by either the trial authors or the review authors) confirm that plausible values of the missing outcome data could make no important difference to the estimated intervention effect.
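The sensitivity-analysis idea above can be sketched in code. The following hedged example, with hypothetical counts, computes a complete-case risk difference for a dichotomous outcome and then re-computes it under two extreme assumptions about the participants with missing data:

```python
# Hedged sketch of a best-/worst-case sensitivity analysis for a
# dichotomous outcome with missing data (all counts are hypothetical).

def risk_difference(events_a, total_a, events_b, total_b):
    return events_a / total_a - events_b / total_b

# Hypothetical arm counts: observed events, observed N, missing N.
exp_events, exp_obs, exp_missing = 30, 90, 10
ctl_events, ctl_obs, ctl_missing = 45, 95, 5

# Complete-case estimate.
cc = risk_difference(exp_events, exp_obs, ctl_events, ctl_obs)

# Extreme scenario 1: every missing experimental participant had the event,
# no missing comparator participant did.
worst = risk_difference(exp_events + exp_missing, exp_obs + exp_missing,
                        ctl_events, ctl_obs + ctl_missing)

# Extreme scenario 2: the reverse assumption.
best = risk_difference(exp_events, exp_obs + exp_missing,
                       ctl_events + ctl_missing, ctl_obs + ctl_missing)

print(round(cc, 3), round(best, 3), round(worst, 3))
```

If the trial's conclusion would change between these extremes, the missing data could make an important difference; narrower, more plausible assumptions about the missing values can be explored in the same way.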
Indirect evidence that missing outcome data are likely to cause bias can come from examining: (1) differences between the proportions of missing outcome data in the experimental and comparator intervention groups; and (2) the reasons that outcome data are missing. If the effects of the experimental and comparator interventions on the outcome are different, and missingness in the outcome depends on its true value, then the proportion of participants with missing data is likely to differ between the intervention groups.
Therefore, differing proportions of missing outcome data in the experimental and comparator intervention groups provide evidence of potential bias. Trial reports may provide reasons why participants have missing data. It is likely that some of these reasons are related to the true value of the outcome. Such reasons increase the risk of bias if the effects of the experimental and comparator interventions differ, or if the reasons are related to the intervention group itself.
In practice, our ability to assess risk of bias will be limited by the extent to which trial authors collected and reported reasons that outcome data were missing. The situation most likely to lead to bias is when reasons for missing outcome data differ between the intervention groups: for example if participants who became seriously unwell withdrew from the comparator group while participants who recovered withdrew from the experimental intervention group.
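The first kind of indirect evidence described above, differing proportions of missing data between arms, can be checked numerically. This is a hedged sketch with hypothetical counts, using a standard two-proportion z-test:

```python
from math import sqrt

# Hedged sketch: two-proportion z-test comparing the proportion of missing
# outcome data between arms (all counts are hypothetical).

def missingness_z(miss_a, n_a, miss_b, n_b):
    """z statistic for the difference in missingness proportions."""
    p_a, p_b = miss_a / n_a, miss_b / n_b
    pooled = (miss_a + miss_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical trial: 30/150 missing in the experimental arm,
# 12/150 missing in the comparator arm.
z = missingness_z(30, 150, 12, 150)
print(round(z, 2))  # a large |z| flags differential missingness
```

A large absolute z value suggests the missingness differs between arms, which, per the text above, is evidence of potential bias but not proof of it; similar proportions do not rule bias out either.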
Trial authors may present statistical analyses, in addition to or instead of complete case analyses, that attempt to address the potential for bias caused by missing outcome data.
Approaches include single imputation (e.g. assuming the participant had no event) and multiple imputation. Imputation methods are unlikely to remove or reduce the bias that occurs when missingness in the outcome depends on its true value, unless they use information additional to intervention group assignment to predict the missing values. Review authors may attempt to address missing data using sensitivity analyses, as discussed in Chapter 10.

Errors in measurement of outcomes can bias intervention effect estimates.
Measurement errors may be differential (related to intervention assignment) or non-differential (unrelated to intervention assignment). This domain relates primarily to differential errors; non-differential measurement errors are not addressed in detail.
Whether the method of measuring the outcome is appropriate. Outcomes in randomized trials should be assessed using appropriate outcome measures. For example, portable blood glucose machines used by trial participants may not reliably measure below 3. Such a measurement would be inappropriate for this outcome. Whether measurement or ascertainment of the outcome differs, or could differ, between intervention groups.
The methods used to measure or ascertain outcomes should be the same across intervention groups. This is usually the case for pre-specified outcomes, but problems may arise with passive collection of outcome data, as is often the case for unexpected adverse effects.
For example, in a placebo-controlled trial, severe headaches occur more frequently in participants assigned to a new drug than those assigned to placebo. These lead to more MRI scans being done in the experimental intervention group, and therefore to more diagnoses of symptomless brain tumours, even though the drug does not increase the incidence of brain tumours.
Even for a pre-specified outcome measure, the nature of the intervention may lead to methods of measuring the outcome that are not comparable across intervention groups. For example, an intervention involving additional visits to a healthcare provider may lead to additional opportunities for outcome events to be identified, compared with the comparator intervention. Whether the outcome assessor is blinded to intervention assignment. Blinding of outcome assessors is often possible even when blinding of participants and personnel during the trial is not feasible.
However, it is particularly difficult for participant-reported outcomes: for example, in a trial comparing surgery with medical management when the outcome is pain at 3 months. The potential for bias cannot be ignored even if the outcome assessor cannot be blinded. Whether the assessment of outcome is likely to be influenced by knowledge of intervention received.
For trials in which outcome assessors were not blinded, the risk of bias will depend on whether the outcome assessment involves judgement, which depends on the type of outcome. We describe most situations in Table 8, which summarizes the implications for risk of bias when the outcome assessor is aware of the intervention assignment.

Participant-reported outcomes are reports coming directly from participants about how they function or feel in relation to a health condition or intervention, without interpretation by anyone else. They include any evaluation obtained directly from participants through interviews, self-completed questionnaires or hand-held devices. The outcome assessor is the participant, even if a blinded interviewer is questioning the participant and completing a questionnaire on their behalf.

For outcomes reported by an external observer that do not involve judgement (for example, all-cause mortality), the assessment of outcome is usually not likely to be influenced by knowledge of intervention received.

For observer-reported outcomes involving judgement, such as assessment of an X-ray or other image, clinical examination, and clinical events other than death, review authors will need to judge whether it is likely that assessment of the outcome was influenced by knowledge of intervention received, in which case risk of bias is considered high.

Some outcomes reflect decisions made by the intervention provider, where recording of the decisions does not involve any judgement but the decision itself can be influenced by knowledge of intervention received. Examples include hospitalization, stopping treatment, referral to a different ward, performing a caesarean section, stopping ventilation and discharge of the participant. Assessment of such outcomes is usually likely to be influenced by knowledge of intervention received, if the care provider is aware of it. This is particularly important when preferences or expectations regarding the effect of the experimental intervention are strong.

Composite outcomes combine multiple endpoints into a single outcome. Typically, participants who have experienced any of a specified set of endpoints are considered to have experienced the composite outcome.
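As an illustrative sketch of how a dichotomous composite outcome is derived from its components (the endpoint names and data layout here are hypothetical, not from any particular trial):

```python
# Hedged sketch: building a dichotomous composite outcome from component
# endpoints. Field names below are hypothetical.

def composite_outcome(participant):
    """A participant experiences the composite if any component occurred."""
    components = ("death", "myocardial_infarction", "stroke")
    return any(participant.get(c, False) for c in components)

participants = [
    {"death": False, "myocardial_infarction": True, "stroke": False},
    {"death": False, "myocardial_infarction": False, "stroke": False},
    {"death": True},
]
events = sum(composite_outcome(p) for p in participants)
print(events)  # prints 2: two of three participants had the composite
```

Assessing risk of bias for such a composite then hinges on which components contribute most of the events, as noted below.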
Composite endpoints can also be constructed from continuous outcome measures. Assessment of risk of bias for composite outcomes should take into account the frequency or contribution of each component and the risk of bias due to the most influential components.

This domain addresses bias that arises because the reported result is selected, based on its direction, magnitude or statistical significance, from among multiple intervention effect estimates that were calculated by the trial authors.
Consideration of risk of bias requires a distinction between selection of a particular result from among multiple measurements or analyses of an outcome, and selective non-reporting of outcomes. This domain does not address bias due to selective non-reporting or incomplete reporting of outcome domains that were measured and analysed by the trial authors (Kirkham et al). For example, deaths of trial participants may be recorded by the trialists, but the reports of the trial might contain no data for deaths, or state only that the effect estimate for mortality was not statistically significant.
Such bias puts the result of a synthesis at risk because results are omitted based on their direction, magnitude or statistical significance. It should therefore be addressed at the review level, as part of an integrated assessment of the risk of reporting bias (Page and Higgins); for further guidance, see Chapter 7 and Chapter. Bias in selection of the reported result typically arises from a desire for findings to support vested interests or to be sufficiently noteworthy to merit publication.
It can arise for both harms and benefits, although the motivations may differ. For example, in trials comparing an experimental intervention with placebo, trialists who have a preconception or vested interest in showing that the experimental intervention is beneficial and safe may be inclined to be selective in reporting efficacy estimates that are statistically significant and favourable to the experimental intervention, along with harm estimates that are not significantly different between groups.
In contrast, other trialists may selectively report harm estimates that are statistically significant and unfavourable to the experimental intervention if they believe that publicizing the existence of a harm will increase their chances of publishing in a high impact journal.
Whether the trial was analysed in accordance with a pre-specified plan that was finalized before unblinded outcome data were available for analysis. We strongly encourage review authors to attempt to retrieve the pre-specified analysis intentions for each trial see Chapter 7, Section 7. Doing so allows for the identification of any outcome measures or analyses that have been omitted from, or added to, the results report, post hoc. Review authors should ideally ask the study authors to supply the study protocol and full statistical analysis plan if these are not publicly available.
In addition, if outcome measures and analyses mentioned in an article, protocol or trial registration record are not reported, study authors could be asked to clarify whether those outcome measures were in fact analysed and, if so, to supply the data. Trial protocols should describe how unexpected adverse outcomes that potentially reflect unanticipated harms will be collected and analysed.
However, results based on spontaneously reported adverse outcomes may lead to concerns that these were selected based on the finding being noteworthy. For some trials, the analysis intentions will not be readily available. It is still possible to assess the risk of bias in selection of the reported result.
For example, outcome measures and analyses listed in the methods section of an article can be compared with those reported. Furthermore, outcome measures and analyses should be compared across different papers describing the trial. One form of such bias is selective reporting of a particular outcome measurement, chosen on the basis of the results, from among estimates for multiple measurements assessed within an outcome domain.