Keywords
Normalization, Computer Science, Transformer, Residual, Artificial Intelligence, Generalizability Theory, Scaling, Natural Language Processing, Machine Translation, Word Error Rate, Machine Learning, Speech Recognition, Algorithm, Statistics, Mathematics, Voltage, Sociology, Anthropology, Physics, Geometry, Quantum Mechanics
Authors
Toan Nguyen, Julián Salazar
Source
Venue: arXiv (Cornell University)
Date: 2019
Citations: 133
Identifier
DOI: 10.48550/arXiv.1910.05895
Abstract
We evaluate three simple, normalization-centric changes to improve Transformer training. First, we show that pre-norm residual connections (PreNorm) and smaller initializations enable warmup-free, validation-based training with large learning rates. Second, we propose $\ell_2$ normalization with a single scale parameter (ScaleNorm) for faster training and better performance. Finally, we reaffirm the effectiveness of normalizing word embeddings to a fixed length (FixNorm). On five low-resource translation pairs from TED Talks-based corpora, these changes always converge, giving an average +1.1 BLEU over state-of-the-art bilingual baselines and a new 32.8 BLEU on IWSLT'15 English-Vietnamese. We observe sharper performance curves, more consistent gradient norms, and a linear relationship between activation scaling and decoder depth. Surprisingly, in the high-resource setting (WMT'14 English-German), ScaleNorm and FixNorm remain competitive but PreNorm degrades performance.
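As a rough illustration of the three changes named in the abstract, the following PyTorch sketch implements ScaleNorm (ℓ2 normalization with a single learned scale g), FixNorm (word embeddings rescaled to a fixed length), and a pre-norm residual wrapper. The class names, the `eps` clamp, and the initialization values are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn


class ScaleNorm(nn.Module):
    """ScaleNorm sketch: l2-normalize the last dimension, then rescale
    by one learned scalar g (initial value passed in as an assumption)."""

    def __init__(self, init_scale: float, eps: float = 1e-5):
        super().__init__()
        self.g = nn.Parameter(torch.tensor(init_scale))
        self.eps = eps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # g * x / max(||x||_2, eps) over the feature dimension
        norm = x.norm(dim=-1, keepdim=True).clamp(min=self.eps)
        return self.g * x / norm


def fix_norm(embeddings: torch.Tensor, length: float = 1.0) -> torch.Tensor:
    """FixNorm sketch: project word embeddings onto a sphere of fixed radius."""
    norm = embeddings.norm(dim=-1, keepdim=True).clamp(min=1e-5)
    return length * embeddings / norm


class PreNormBlock(nn.Module):
    """Pre-norm residual connection: x + sublayer(norm(x)).
    `sublayer` stands in for self-attention or the feed-forward layer."""

    def __init__(self, sublayer: nn.Module, init_scale: float):
        super().__init__()
        self.norm = ScaleNorm(init_scale)  # a LayerNorm could be used instead
        self.sublayer = sublayer

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.sublayer(self.norm(x))
```

For example, `PreNormBlock(nn.Linear(512, 512), init_scale=512 ** 0.5)` applied to a `(batch, seq, 512)` tensor normalizes the input before the sublayer and adds the residual afterwards, which is the warmup-friendly ordering the abstract contrasts with post-norm.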