Editorial peer review is the gateway to scientific publication. It was established to ensure that research papers are vetted by independent experts before publication. Despite the importance of this process, its performance is still considered suboptimal and in need of improvement. Improving it requires appropriate outcome measures, in particular a validated tool that clearly defines the quality of peer review reports. The aim of the present PhD project was to develop and validate a new tool for assessing the quality of peer review reports in biomedical research.
As the starting point for the development of a new tool, we performed a systematic review aimed at identifying and describing the existing tools used to assess peer review report quality in biomedical research. We identified a total of 24 tools: 23 scales and 1 checklist. None of the tools reported a definition of ‘quality’. Only one described how the scale was developed, and 10 provided measures of validity and reliability. We classified the quality components of the 18 tools with more than one item into 9 main quality domains and 11 subdomains.
Secondly, we formed a steering committee composed of five members with diverse expertise, which drafted a definition of peer review report quality. We then conducted an online survey of biomedical editors and authors to 1) determine whether participants endorsed the proposed definition of peer review report quality; 2) identify the most important items to include in the tool; and 3) identify any missing items. Based on the participants’ qualitative and quantitative answers, the steering committee modified the initially proposed definition of peer review report quality, reviewed all items, and ultimately drafted and refined the final version of the tool.
The result was the ARCADIA (Assessment of Review reports with a Checklist Available to eDItors and Authors) tool. The tool is a checklist of 14 items grouped into 5 domains. Each item should be ticked as ‘Yes’ or ‘No’; however, an item can also be rated ‘Not applicable’ (NA) depending on the reviewer’s expertise, the type of study, the type of biomedical journal, and the availability of the study data, materials, and protocol.
Finally, we tested the tool and evaluated its acceptability, reliability, and validity. ARCADIA was validated by a heterogeneous sample of biomedical editors and authors using peer review reports from two different biomedical journals (i.e., The BMJ and BMJ Open). Field-testing showed that the psychometric properties of ARCADIA are not entirely satisfactory. The results of the validation study should inform a new version of the ARCADIA tool, which should also be validated in a real editorial setting using peer review reports associated with manuscripts with different study designs and from different types of journals.
This thesis reports the development and validation of ARCADIA, a new tool for assessing the quality of peer review reports in biomedical research. ARCADIA is the first such tool to have been developed systematically, and its validation is based on a large and diverse sample of biomedical editors and authors. The tool could be used routinely by editors to evaluate reviewers' work, and also as an outcome measure when evaluating interventions to improve the peer review process.