Keywords
Lipidology
Computer science
Variation (astronomy)
Data science
Profiling (computer programming)
Sample size determination
Data quality
Population
Data mining
Risk analysis (engineering)
Bioinformatics
Biology
Medicine
Engineering
Statistics
Mathematics
Environmental health
Operating systems
Physics
Metric system (units)
Astrophysics
Operations management
Authors
Gavriel Olshansky,Corey Giles,Agus Salim,Peter J. Meikle
Identifier
DOI:10.1016/j.plipres.2022.101177
Abstract
Large 'omics studies are of particular interest to population and clinical research as they allow elucidation of biological pathways that are often out of reach of other methodologies. Typically, these information-rich datasets are produced from multiple coordinated profiling studies that may include lipidomics, metabolomics, proteomics or other strategies to generate high-dimensional data. In lipidomics, the generation of such data presents a series of unique technological and logistical challenges: to maximize the power (number of samples) and coverage (number of analytes) of the dataset while minimizing the sources of unwanted variation. Technological advances in analytical platforms, as well as computational approaches, have led to improvements in data quality, especially with regard to instrumental variation. At small scales, it is possible to control systematic bias from beginning to end. However, as the size and complexity of datasets grow, it is inevitable that unwanted variation arises from multiple sources, some potentially unknown and beyond the investigators' control. Increases in cohort size and complexity have led to new challenges in sample collection, handling, storage, and preparation. If not considered and dealt with appropriately, this unwanted variation may undermine the quality of the data and the reliability of any subsequent analysis. Here we review the various experimental phases where unwanted variation may be introduced and review general strategies and approaches to handle this variation, specifically addressing issues relevant to lipidomics studies.
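The computational handling of unwanted variation mentioned above can take many forms; one widely used approach in large lipidomics studies (not necessarily the specific method of this review) is to correct between-batch shifts using pooled quality-control (QC) samples measured in every batch. The sketch below is a hypothetical minimal example, assuming a single analyte measured across batches with QC samples flagged; the function name and data layout are illustrative, not drawn from the paper.

```python
import numpy as np

def qc_median_center(values, batch, is_qc):
    """Align batches by centering each batch's pooled-QC median on the
    overall QC median. A minimal sketch of QC-based batch correction;
    real pipelines typically work per-lipid and may use smoothing
    (e.g. LOESS) rather than a single offset per batch."""
    values = np.asarray(values, dtype=float)
    batch = np.asarray(batch)
    is_qc = np.asarray(is_qc, dtype=bool)
    corrected = values.copy()
    grand_qc_median = np.median(values[is_qc])   # target level across the study
    for b in np.unique(batch):
        mask = batch == b
        batch_qc_median = np.median(values[mask & is_qc])
        # shift the whole batch so its QC median matches the study-wide QC median
        corrected[mask] += grand_qc_median - batch_qc_median
    return corrected
```

For example, two batches whose QC samples read 1.0 and 11.0 would be shifted toward a common level, removing the 10-unit batch offset while preserving within-batch differences between samples.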