Problems addressed:
Encoder layer fusion (EncoderFusion) is a technique that fuses all encoder layers (instead of only the uppermost one) for sequence-to-sequence (Seq2Seq) models, and it has proven effective on various NLP tasks.
Recent studies reveal that fusing the intermediate encoder layers is beneficial for Seq2Seq models; proposed EncoderFusion approaches include layer attention, layer aggregation, and layer-wise coordination.
The authors observe that the uppermost decoder layer pays more attention to the encoder embedding layer. Masking the encoder embedding layer significantly degrades model performance, producing hallucinatory (i.e., fluent but unfaithful to the source) predictions. This suggests that the encoded representation of standard Seq2Seq models (i.e., without fusing encoder layers) may not have enough capacity to model both semantic and surface features (especially those of the encoder embedding layer). The authors call this problem the source representation bottleneck.
Contributions of the paper:
1. A fine-grained layer attention method is used to qualitatively and quantitatively evaluate the contribution of individual encoder layers.
2. A simple EncoderFusion approach: connecting the encoder embedding layer directly to the softmax layer (SurfaceFusion). This shortens the path between source and target embeddings, which helps learn better bilingual embeddings through direct interactions.
3. The comparative experiments cover three application scenarios (machine translation, text summarization, and grammatical error correction) and are thorough and detailed; a drawback is that not even a basic diagram of the network architecture is provided.
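The layer attention in item 1 can be sketched as a softmax-normalized weighted sum over the stacked encoder layer outputs. This is a minimal NumPy sketch, not the paper's exact formulation (the paper's version is fine-grained, e.g. with per-query weights); the shapes and function names here are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def layer_attention(layer_outputs, layer_scores):
    """Fuse encoder layers by a learned, softmax-normalized weighted sum.

    layer_outputs: (L, T, d) -- outputs of L encoder layers
                   (including the embedding layer) for T source tokens
    layer_scores:  (L,)      -- learnable scalar score per layer
    returns:       (T, d)    -- fused representation fed to the decoder
    """
    w = softmax(layer_scores)                # weights over layers, sum to 1
    return np.tensordot(w, layer_outputs, axes=1)
```

Inspecting the learned weights `w` is what lets the authors quantify how much each layer (including the embedding layer) contributes.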
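The SurfaceFusion idea in item 2 can be sketched as follows: a surface-level context is read from the encoder embedding layer by attention and injected right before the output softmax. This is a hedged sketch under assumed shapes and an assumed interpolation scheme (`alpha`); it is not the paper's exact parameterization.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def surface_fusion_logits(dec_state, src_embeds, tgt_embed_matrix, alpha=0.5):
    """Sketch of fusing the encoder embedding layer into the softmax layer.

    dec_state:        (d,)   -- uppermost decoder hidden state
    src_embeds:       (S, d) -- encoder embedding-layer outputs (surface features)
    tgt_embed_matrix: (V, d) -- output (target) embedding matrix
    alpha:            interpolation weight for the surface signal (assumed)
    returns:          (V,)   -- output logits
    """
    attn = softmax(src_embeds @ dec_state)     # attend over source embeddings
    surface = attn @ src_embeds                # (d,) surface-level context
    fused = (1 - alpha) * dec_state + alpha * surface
    return tgt_embed_matrix @ fused            # shortened source->target path
```

With `alpha = 0` this reduces to the standard output projection, which makes the added surface path easy to ablate.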