Keywords
Sound (geography)
Quality (concept)
Sound quality
Speech recognition
Acoustics
Aesthetics
Computer science
Art
Philosophy
Epistemology
Physics
Authors
Andros Tjandra, Yi-Chiao Wu, Baishan Guo, John P. Hoffman, Brian E. Ellis, Apoorv Vyas, Bowen Shi, Sanyuan Chen, Matt Le, Nick Zacharov, Carleigh Wood, Ann Lee, Wei-Ning Hsu
Source
Journal: Cornell University - arXiv
Date: 2025-02-07
Identifier
DOI: 10.48550/arXiv.2502.05139
Abstract
The quantification of audio aesthetics remains a complex challenge in audio processing, primarily due to its subjective nature, which is influenced by human perception and cultural context. Traditional methods often depend on human listeners for evaluation, leading to inconsistencies and high resource demands. This paper addresses the growing need for automated systems capable of predicting audio aesthetics without human intervention. Such systems are crucial for applications like data filtering, pseudo-labeling large datasets, and evaluating generative audio models, especially as these models become more sophisticated. In this work, we introduce a novel approach to audio aesthetic evaluation by proposing new annotation guidelines that decompose human listening perspectives into four distinct axes. We develop and train no-reference, per-item prediction models that offer a more nuanced assessment of audio quality. Our models are evaluated against human mean opinion scores (MOS) and existing methods, demonstrating comparable or superior performance. This research not only advances the field of audio aesthetics but also provides open-source models and datasets to facilitate future work and benchmarking. We release our code and pre-trained model at: https://github.com/facebookresearch/audiobox-aesthetics
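To make the "no-reference, per-item" idea concrete, the sketch below shows what such a predictor looks like in PyTorch: it maps a raw waveform to one scalar score per listening axis, with no clean reference signal required. This is an illustrative sketch only, not the authors' released architecture; the four-axis output size comes from the abstract, while the log-mel front end, the encoder shape, and the names AestheticPredictor and AXES are assumptions. The actual models and weights are in the linked repository.

    # Minimal sketch of a no-reference, per-item aesthetic-score regressor.
    # NOT the released audiobox-aesthetics implementation; architecture and
    # hyperparameters are illustrative assumptions. See
    # https://github.com/facebookresearch/audiobox-aesthetics for the real models.
    import torch
    import torch.nn as nn
    import torchaudio

    AXES = 4  # the paper scores each item along four listening axes

    class AestheticPredictor(nn.Module):
        def __init__(self, n_mels: int = 80, hidden: int = 256):
            super().__init__()
            # Log-mel front end: waveform (B, samples) -> (B, n_mels, frames)
            self.melspec = torchaudio.transforms.MelSpectrogram(
                sample_rate=16_000, n_mels=n_mels
            )
            self.to_db = torchaudio.transforms.AmplitudeToDB()
            # Small convolutional frame encoder, mean-pooled over time,
            # followed by one linear regression output per aesthetic axis.
            self.encoder = nn.Sequential(
                nn.Conv1d(n_mels, hidden, kernel_size=5, padding=2),
                nn.ReLU(),
                nn.Conv1d(hidden, hidden, kernel_size=5, padding=2),
                nn.ReLU(),
            )
            self.heads = nn.Linear(hidden, AXES)

        def forward(self, waveform: torch.Tensor) -> torch.Tensor:
            # waveform: (B, samples) -> per-item axis scores: (B, AXES).
            # No reference signal is consumed anywhere in this path.
            feats = self.to_db(self.melspec(waveform))  # (B, n_mels, T)
            pooled = self.encoder(feats).mean(dim=-1)   # pool over time
            return self.heads(pooled)

    if __name__ == "__main__":
        model = AestheticPredictor()
        audio = torch.randn(2, 16_000)  # two dummy 1-second clips at 16 kHz
        print(model(audio).shape)       # torch.Size([2, 4])

In use, a model like this would be trained with a regression loss against per-axis human ratings, then run standalone for data filtering or pseudo-labeling, exactly the applications the abstract motivates.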