***post authored by Lisa Matthias***
The overarching goal of science is to deepen our understanding of the world we live in, and then to use this understanding, for example, to address social or medical problems. However, to pursue those goals effectively and efficiently, the scientific findings we base our actions on have to be credible. But how can we assess the credibility of research? Is going through peer review enough, or being published? What if the study made it into a “high-ranking” journal? Is that enough to deem its findings credible?
At best, these are proxy indicators; at worst, they are entirely false and misleading. The key to making an informed assessment of how trustworthy a study is lies in sufficient transparency in reporting the findings. This enables independent researchers to closely scrutinize the study design and underlying data, as well as to conduct replication attempts. To determine credibility, scientific studies need to be assessed along (at least) the following three dimensions: (1) method and data transparency, (2) analytic reproducibility and robustness, and (3) effect replicability. If independent researchers are then unable to identify fatal study design flaws, data processing or statistical errors, result fragilities, or replicability issues, the study can be seen as credible - or “not-yet-proven-wrong”.
To simplify this process, and to enable researchers to receive credit for transparent reporting, Dr. Etienne LeBel founded Curate Science, a web platform that tracks and quantifies the trustworthiness of empirical research. The interactive platform provides direct access to transparently reported study components:
***Method and data transparency:*** The availability of study design details, analytic choices, and underlying data. This is displayed as open practice badges that link to (1) preregistered study protocols, (2) open materials, (3) open data, (4) open code, and (5) compliance with relevant reporting standards at the article and study levels.

***Effect replicability:*** The ability to consistently observe an effect of similar magnitude as originally reported in new samples, using similar methodologies and conditions as the original study. The platform gathers and showcases replications of published effects conducted transparently by independent researchers.

***Analytic reproducibility and robustness:*** The ability of reported results to be reproduced by repeating the same data processing and statistical analyses on the original data, and to be robust to different data processing and analytic decisions. Embedded computational containers allow for reproducing the results and re-analyzing their robustness directly within the browser.
By making it easier to discover and access transparently reported studies, and by ensuring continuous community engagement, the platform has great potential not just to raise awareness of open practices, but also to increase their uptake and accelerate the transition to Open Science.
Curate Science is committed to rewarding positive scientific behaviors, but it does not punish questionable research practices or lower levels of transparency. Researchers are encouraged to adopt open and transparent practices, but to do so at their own pace. LeBel explains, “we make it easy to get the most credit possible for conducting and reporting one's research only a little bit more transparently. That is, as transparent as you currently have time for and/or are comfortable with. For example, if you're uncomfortable publicly posting your data for an article, you could still earn credit by publicly posting your code and linking to it on Curate Science” (only one transparency component is required for a study to be added to the database).