Authors
Sayash Kapoor, Emily Cantrell, Kenny Peng, Thanh Hien Pham, Christopher A Bail, Odd Erik Gundersen, Jake M Hofman, Jessica Hullman, Michael A Lones, Momin M Malik, Priyanka Nanayakkara, Russell A Poldrack, Inioluwa Deborah Raji, Michael Roberts, Matthew J Salganik, Marta Serra-Garcia, Brandon M Stewart, Gilles Vandewiele, Arvind Narayanan
Publication date
2023/8/15
Journal
arXiv preprint arXiv:2308.07832
Description
Machine learning (ML) methods are proliferating in scientific research. However, the adoption of these methods has been accompanied by failures of validity, reproducibility, and generalizability. These failures can hinder scientific progress, lead to false consensus around invalid claims, and undermine the credibility of ML-based science. ML methods are often applied and fail in similar ways across disciplines. Motivated by this observation, our goal is to provide clear reporting standards for ML-based science. Drawing from an extensive review of past literature, we present the REFORMS checklist (Reporting Standards For Machine Learning Based Science). It consists of 32 questions and a paired set of guidelines. REFORMS was developed based on a consensus of 19 researchers across computer science, data science, mathematics, social sciences, and biomedical sciences. REFORMS can serve as a resource for researchers when designing and implementing a study, for referees when reviewing papers, and for journals when enforcing standards for transparency and reproducibility.
Total citations
Scholar articles
S Kapoor, E Cantrell, K Peng, TH Pham, CA Bail… - arXiv preprint arXiv:2308.07832, 2023