[Paper Reading] Show, Attend and Tell: Neural Image Caption Generation with Visual Attention

Paper: https://arxiv.org/pdf/1502.03044.pdf

Code: https://github.com/kelvinxu/arctic-captions & https://github.com/yunjey/show-attend-and-tell & https://github.com/jazzsaxmafia/show_attend_and_tell.tensorflow


Main Contributions

In this paper, the authors bring the attention mechanism, originally developed for neural machine translation, into neural image captioning, and propose two different attention models: a 'soft' deterministic attention mechanism and a 'hard' stochastic attention mechanism. The figure below shows the overall framework of the Show, Attend and Tell model.

[Figure: overall framework of the Show, Attend and Tell model]

  • Stochastic 'Hard' Attention
  • Deterministic 'Soft' Attention
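
Below is a minimal NumPy sketch of the two mechanisms listed above, assuming a simple MLP scoring function; the parameter names (W_a, W_h, w_e) and the toy sizes are illustrative, not the paper's exact parameterization. Soft attention takes the expectation of the annotation vectors under the attention weights, while hard attention samples a single location from the same distribution (the paper trains the latter with a REINFORCE-style estimator).

```python
import numpy as np

def softmax(x):
    x = x - x.max()          # shift for numerical stability
    e = np.exp(x)
    return e / e.sum()

def soft_attention(features, h_prev, W_a, W_h, w_e):
    """Deterministic 'soft' attention: the context vector is the expectation
    of the annotation vectors a_i under the attention weights alpha_i.

    features : (L, D) annotation vectors from the CNN feature map
    h_prev   : (H,)   previous LSTM hidden state
    W_a, W_h, w_e : attention MLP parameters (assumed shapes)
    """
    # e_i = w_e^T tanh(W_a a_i + W_h h_{t-1})  -> one scalar score per location
    scores = np.tanh(features @ W_a + h_prev @ W_h) @ w_e   # (L,)
    alpha = softmax(scores)                                  # weights, sum to 1
    z = alpha @ features                                     # (D,) expected context
    return z, alpha

def hard_attention(features, alpha, rng):
    """Stochastic 'hard' attention: sample one location s ~ Multinoulli(alpha)
    and use its annotation vector as the context."""
    s = rng.choice(len(alpha), p=alpha)
    return features[s], s

# Toy usage with made-up sizes (L=196 locations, i.e. a 14x14 feature map; D=512; H=256).
L, D, H = 196, 512, 256
rng = np.random.default_rng(0)
features = rng.standard_normal((L, D))
h_prev = rng.standard_normal(H)
W_a = rng.standard_normal((D, D)) * 0.01
W_h = rng.standard_normal((H, D)) * 0.01
w_e = rng.standard_normal(D) * 0.01

z_soft, alpha = soft_attention(features, h_prev, W_a, W_h, w_e)
z_hard, s = hard_attention(features, alpha, rng)
```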

Experimental Details

 
