Tags: Transformer, computer science, de facto, language model, machine learning, artificial intelligence, engineering, electrical engineering, political science, voltage, law
Authors
Yi Tay, Mostafa Dehghani, Jai Prakash Gupta, Dara Bahri, Vamsi Aribandi, Zhen Qin, Donald Metzler
Source
Venue: Cornell University - arXiv
Date: 2021-01-01
Citations: 16
Identifier
DOI: 10.48550/arxiv.2105.03322
Abstract
In the era of pre-trained language models, Transformers are the de facto choice of model architecture. While recent research has shown promise in entirely convolutional (CNN) architectures, they have not been explored under the pre-train-fine-tune paradigm. In the context of language models, are convolutional models competitive with Transformers when pre-trained? This paper investigates this research question and presents several interesting findings. Across an extensive set of experiments on 8 datasets/tasks, we find that CNN-based pre-trained models are competitive and outperform their Transformer counterparts in certain scenarios, albeit with caveats. Overall, the findings outlined in this paper suggest that conflating pre-training and architectural advances is misguided and that both advances should be considered independently. We believe our research paves the way for a healthy amount of optimism in alternative architectures.