Transparency (behavior)
Reproducibility
Reliability (semiconductor)
Quality (concept)
Data quality
Computer science
Data science
Statistics
Engineering
Mathematics
Operations management
Metric (unit)
Physics
Quantum mechanics
Power (physics)
Philosophy
Computer security
Epistemology
Authors
Meryem A. Yücel,Robert Luke,Rickson C. Mesquita,Alexander von Lühmann,David M. A. Mehler,Michael Lührs,Jessica Gemignani,Androu Abdalmalak,Franziska Albrecht,Iara de Almeida Ivo,Christina Artemenko,Kira Ashton,Paweł Augustynowicz,Aahana Bajracharya,Élise Bannier,Beatrix Barth,Laurie Bayet,Jacqueline Behrendt,Hadi Borjkhani,Lenaic Borot
Identifier
DOI:10.1038/s42003-025-08412-1
Abstract
As data analysis pipelines grow more complex in brain imaging research, understanding how methodological choices affect results is essential for ensuring reproducibility and transparency. This is especially relevant for functional Near-Infrared Spectroscopy (fNIRS), a rapidly growing technique for assessing brain function in naturalistic settings and across the lifespan, yet one that still lacks standardized analysis approaches. In the fNIRS Reproducibility Study Hub (FRESH) initiative, we asked 38 research teams worldwide to independently analyze the same two fNIRS datasets. Despite using different pipelines, nearly 80% of teams agreed on group-level results, particularly when hypotheses were strongly supported by literature. Teams with higher self-reported analysis confidence, which correlated with years of fNIRS experience, showed greater agreement. At the individual level, agreement was lower but improved with better data quality. The main sources of variability were related to how poor-quality data were handled, how responses were modeled, and how statistical analyses were conducted. These findings suggest that while flexible analytical tools are valuable, clearer methodological and reporting standards could greatly enhance reproducibility. By identifying key drivers of variability, this study highlights current challenges and offers direction for improving transparency and reliability in fNIRS research.