Contrastive Learning & Transformer Paper List

Fiiinally on break = =
Winter break means I can finally read some papers I'm actually interested in. Contrastive learning blew up this year, and those papers, along with a batch of Transformer-related ones, have been sitting in my to-read pile untouched.
So here's a paper list to work through slowly whenever I have time.

Contrastive Learning

Surveys

  • A Survey on Contrastive Self-supervised Learning【20.11】

Methods

  • A Simple Framework for Contrastive Learning of Visual Representations(SimCLR)【ICML2020/20.02】(see the loss sketch after this list)
  • Big Self-Supervised Models are Strong Semi-Supervised Learners(SimCLRv2)【NIPS2020/20.06】
  • Momentum Contrast for Unsupervised Visual Representation Learning(MoCo)【CVPR2020/19.11】
  • Improved Baselines with Momentum Contrastive Learning(MoCov2)【20.03】
  • Contrastive Multiview Coding(CMC)【ECCV2020/19.06】
  • Representation Learning with Contrastive Predictive Coding(CPC)【18.07】
  • Exploring Simple Siamese Representation Learning(SimSiam)【20.11】
  • Bootstrap your own latent: A new approach to self-supervised Learning(BYOL)【20.06】
  • Unsupervised Learning of Visual Features by Contrasting Cluster Assignments(SwAV)【NIPS2020/20.06】
  • Unsupervised Feature Learning via Non-Parametric Instance Discrimination【CVPR2018/18.05】
  • Data-Efficient Image Recognition with Contrastive Predictive Coding【ICML2020/19.05】
  • Learning Deep Representations by Mutual Information Estimation and Maximization【ICLR2019/18.08】
  • Propagate Yourself: Exploring Pixel-Level Consistency for Unsupervised Visual Representation Learning【20.11】
  • A Theoretical Analysis of Contrastive Unsupervised Representation Learning【ICML2019/19.02】
  • Contrastive Transformation for Self-supervised Correspondence Learning【AAAI2021/20.12】
  • Supervised Contrastive Learning【20.04】
  • Dimensionality Reduction by Learning an Invariant Mapping【CVPR2006】
  • Adversarial Self-Supervised Contrastive Learning【NIPS2020/20.06】
  • Intriguing Properties of Contrastive Losses【20.11】
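
Most of the methods above optimize some variant of an InfoNCE-style objective. As a concrete reference point, here is a minimal NumPy sketch of the NT-Xent loss that SimCLR uses; the function name, batch shapes, and temperature value are illustrative choices of mine, not taken from any of the papers.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent (normalized temperature-scaled cross entropy), as in SimCLR.

    z1, z2: (N, d) projections of two augmented views of the same N images;
    row i of z1 and row i of z2 form a positive pair.
    """
    z = np.concatenate([z1, z2], axis=0)                # (2N, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)    # unit norm -> cosine similarity
    logits = z @ z.T / temperature                      # (2N, 2N) similarity logits
    np.fill_diagonal(logits, -np.inf)                   # a view never contrasts with itself
    n = len(z1)
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])  # each row's positive index
    log_softmax = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_softmax[np.arange(2 * n), pos].mean()

# toy usage: batch of 8 images, 16-dim projections
rng = np.random.default_rng(0)
z1, z2 = rng.normal(size=(8, 16)), rng.normal(size=(8, 16))
print(nt_xent_loss(z1, z2))
```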

Applications

  • Contrastive Learning for Image Captioning【NIPS2017/17.10】
  • Contrastive Learning of Structured World Models【ICLR2020/19.11】
  • Cross-Modal Contrastive Learning for Text-to-Image Generation【21.01】

Transformer

Surveys

  • Efficient Transformers: A Survey

NLP

  • Attention Is All You Need(Transformer)(see the attention sketch after this list)
  • Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context
  • Reformer: The Efficient Transformer
  • BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
  • XLNet: Generalized Autoregressive Pretraining for Language Understanding
  • RoBERTa: A Robustly Optimized BERT Pretraining Approach
  • ALBERT: A Lite BERT for Self-supervised Learning of Language Representations
  • FastBERT: a Self-distilling BERT with Adaptive Inference Time
  • TinyBERT: Distilling BERT for Natural Language Understanding
  • Improving Language Understanding by Generative Pre-Training(GPT)
  • Language Models are Unsupervised Multitask Learners(GPT-2)
  • Language Models are Few-Shot Learners(GPT-3)
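
Nearly every paper in this list and the next one builds on the scaled dot-product attention introduced in "Attention Is All You Need". Here is a minimal single-head NumPy sketch (no masking, no multi-head split; the names are mine):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """softmax(Q K^T / sqrt(d_k)) V -- single head, no mask.

    Q: (n_q, d_k), K: (n_k, d_k), V: (n_k, d_v)
    """
    scores = Q @ K.T / np.sqrt(Q.shape[-1])                  # (n_q, n_k) attention logits
    scores = scores - scores.max(axis=-1, keepdims=True)     # numerical stability
    weights = np.exp(scores)
    weights = weights / weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V                                       # (n_q, d_v)

rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(4, 8)), rng.normal(size=(6, 8)), rng.normal(size=(6, 8))
print(scaled_dot_product_attention(Q, K, V).shape)           # (4, 8)
```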

CV

  • ViLBERT: Pretraining Task-Agnostic Visiolinguistic Representations for Vision-and-Language Tasks
  • On The Relationship Between Self-attention and Convolutional Layers
  • An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale(ViT)(see the patch-tokenization sketch after this list)
  • Exploring Self-Attention for Image Recognition
  • End-to-End Object Detection with Transformers(DETR)
  • Deformable DETR: Deformable Transformers for End-to-End Object Detection
  • End-to-End Object Detection with Adaptive Clustering Transformer(ACT)
  • Image Transformer
  • Generating Long Sequences with Sparse Transformers
  • Generative Pretraining from Pixels
  • Hamming OCR: A Locality Sensitive Hashing Neural Network for Scene Text Recognition
  • ActBERT: Learning Global-Local Video-Text Representations
  • MaX-DeepLab: End-to-End Panoptic Segmentation with Mask Transformers
  • End-to-End Dense Video Captioning with Masked Transformer
  • End-to-End Video Instance Segmentation with Transformers
  • Foley Music: Learning to Generate Music from Videos
  • End-to-End Lane Shape Prediction with Transformers
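
Several of the CV papers above, ViT in particular, feed images to a standard Transformer by cutting them into fixed-size patches and flattening each patch into a token. A minimal NumPy sketch of just that tokenization step (the linear projection and position embeddings that follow it are omitted; the function name is mine):

```python
import numpy as np

def image_to_patch_tokens(img, patch=16):
    """Cut an (H, W, C) image into non-overlapping patch x patch squares and
    flatten each into a token vector, as in ViT's input pipeline.

    Requires H and W to be divisible by `patch`.
    Returns: (num_patches, patch * patch * C)
    """
    H, W, C = img.shape
    grid = img.reshape(H // patch, patch, W // patch, patch, C)
    grid = grid.transpose(0, 2, 1, 3, 4)             # (H/p, W/p, p, p, C)
    return grid.reshape(-1, patch * patch * C)

# a 224x224 RGB image becomes 14*14 = 196 tokens of length 16*16*3 = 768
img = np.zeros((224, 224, 3))
print(image_to_patch_tokens(img).shape)              # (196, 768)
```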