The elements of outcomes used in the various systematic reviews addressing a given condition should be comparable

In our study, the largely incomplete pre-specification of outcomes in protocols restricted our ability to assess comparability of outcome elements across protocols. Where the various elements were specified, however, we observed variation in specific metrics and methods of aggregation. For example, one protocol pre-specified that the outcome domain of visual acuity would be measured as mean change in visual acuity from baseline to one year, while another pre-specified that visual acuity would be measured as the percentage of participants with an improvement in visual acuity of at least three letters at one year. Although both protocols specified the same outcome domain at the same time-point, the differences in specific metric and method of aggregation would preclude a direct comparison of the visual acuity results.

Efforts to promote comparability of outcomes across related clinical trials have led to the creation of core outcome measures within research fields. One such effort is the Core Outcome Measures in Effectiveness Trials (COMET) Initiative, whose investigators have produced guidance on methods for identifying core outcome sets. Because the issue of comparability of outcomes across systematic reviews is complex, we recommend that researchers within a field, together with patients, consider developing comparable outcomes across systematic reviews, adding to a core list over time as appropriate.

There are pros and cons to establishing comparability of outcomes across reviews, however. Increased comparability would likely facilitate formal comparisons across systematic reviews and the development of clinical practice guidelines. In addition, decision makers would be better able to compare the effectiveness of treatment options directly. For example, hundreds of measurement scales have been used to assess mental status and quality of life in schizophrenia, making comparability across clinical trials very challenging.
Finally, use of comparable outcomes could discourage authors from ‘cherry-picking’ outcomes for their studies. On the other hand, comparability across reviews is not always possible or desirable. Limiting outcomes to those used by previous researchers risks excluding an outcome that is in fact important, or authors may feel compelled to include an outcome that they do not consider important. Additionally, it might not be possible to identify a priori all relevant outcomes and outcome elements in a rapidly evolving field or in a field with a large number of relevant outcomes.

For three of the completed Cochrane reviews, we could not find the protocols and relied instead on the Methods sections of the completed reviews. This poses a concern for investigators conducting methodological research on systematic reviews, and for users of systematic reviews generally. Although we do not believe that this reliance is likely to have influenced our findings, we believe that all protocols and previous versions of completed systematic reviews should be made available to researchers. Furthermore, an updated protocol was published for only one of the protocols we examined. The Cochrane Collaboration should consider keeping all protocols up to date by publishing updated versions of protocols, or protocol amendments, for all its reviews. In this way, Cochrane review protocols would be formally amended in the same way that clinical trial protocols are amended and made available, providing an accessible audit trail. This practice would also facilitate Cochrane’s contribution of its protocols and updates to PROSPERO, an international database of prospectively registered systematic reviews. Our focus on Cochrane reviews is both a strength and a limitation.