Animal Research Approval Is Based on Confidence Rather than on Proof of Scientific Rigour
A growing body of evidence raises concerns about the scientific validity and reproducibility of published research findings, owing to the substantial risk of bias in preclinical animal studies. Systematic reviews have found poor reporting rates of bias-prevention measures (such as randomization, blinding, and sample size calculation) in the published literature, as well as a link between these low reporting rates and exaggerated treatment effects. Because most animal research is subject to ethical or peer review, such review might offer an opportunity to identify risks of bias earlier, before the research is conducted. In Switzerland, for instance, animal studies are authorised on the basis of a harm-benefit analysis and a full description of the study protocol. We therefore examined the rates at which seven basic measures against bias (allocation concealment, blinding, randomization, sample size calculation, inclusion/exclusion criteria, primary outcome variable, and statistical analysis plan) were described in applications for animal experiments submitted to Swiss authorities (n = 1,277), and compared them with the reporting rates of the same measures in a representative sub-sample of resulting publications (n = 50). Measures against bias were disclosed at very low rates: in applications, on average from 2.4% for the statistical analysis plan to 19% for the primary outcome variable; in publications from these experiments, from 0.0% for the sample size calculation to 34% for the statistical analysis plan. For each application and publication, we calculated an internal validity score (IVS) as the proportion of the seven measures against bias that it described. We found a weak positive correlation between the IVS of publications and that of the corresponding applications (Spearman’s rho = 0.34, p = 0.014), indicating that the rates at which these measures are described in applications partially predict their reporting rates in publications.
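The internal validity score described above is simply the proportion of the seven bias-prevention measures that a given application or publication reports. A minimal sketch of that scoring rule is shown below; the example inputs are entirely synthetic and illustrative, not data from the study.

```python
# Illustrative sketch of the internal validity score (IVS):
# the proportion of the seven bias-prevention measures reported.
# Only the scoring rule follows the text; the example data are synthetic.

MEASURES = frozenset({
    "allocation_concealment",
    "blinding",
    "randomization",
    "sample_size_calculation",
    "inclusion_exclusion_criteria",
    "primary_outcome_variable",
    "statistical_analysis_plan",
})

def ivs(reported: set) -> float:
    """Return the IVS: (number of the seven measures reported) / 7."""
    return len(MEASURES & reported) / len(MEASURES)

# A hypothetical application describing two of the seven measures:
print(round(ivs({"randomization", "blinding"}), 3))  # -> 0.286
```

In the study, such per-document scores for matched application–publication pairs are what the Spearman rank correlation (rho = 0.34) was computed over.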
These findings suggest that key information about experimental design, which establishes the scientific validity of the findings, is not available to the authorities licensing animal experiments. This information may be crucial for the weight given to the research’s expected benefit in the harm-benefit analysis. Much as articles are accepted for publication despite poor reporting of measures against bias, applications for animal experiments may often be approved on the basis of implicit confidence rather than explicit evidence of scientific rigour. Our results cast considerable doubt on both the peer-review process for scientific publications and the current authorization process for animal experiments, which over time may erode the validity of research. One viable way to reform the system would be to transition from the authorization processes currently in place in many countries to a preregistration system for animal research. This would improve the scientific quality of evidence from animal experiments and help prevent needless harm to animals in the name of fruitless research.