A Collection of Implementations for 22 NIPS 2016 Papers

http://blog.csdn.net/jiandanjinxin/article/details/54087592

Recently, LightOn CEO and co-founder Igor Carron posted on his blog the implementations of NIPS 2016 papers he has collected (22 in total). He writes: "On Reddit, peterkuharvarduk decided to compile all the available implementations from NIPS 2016. I'm glad he used the word 'implementation', because it let me search for these projects quickly." Besides peterkuharvarduk's picks, the list also includes some newly released implementations added by other Reddit users and by Carron himself. He also highlighted GitXiv: http://www.gitxiv.com. In addition, a list of Synced (机器之心) articles on NIPS 2016 is attached at the end of this article; don't miss it.

1. Using Fast Weights to Attend to the Recent Past

Paper: https://arxiv.org/abs/1610.06258

GitHub: https://github.com/ajarai/fast-weights
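The core idea of this paper is to add a "fast" associative memory to an RNN: a fast weight matrix A is updated with a Hebbian rule and then used to refine the hidden state over a few inner steps. Below is a rough NumPy sketch of a single time step under that reading; the function and parameter names are made up here, and layer normalization and the exact update ordering used in the paper and repo are omitted.

```python
import numpy as np

def fast_weights_step(x_t, h_prev, A_prev, W, C, lam=0.95, eta=0.5, inner_steps=1):
    """One (simplified) time step of a fast-weights RNN.

    The fast weight matrix A decays and gets a Hebbian update,
        A(t) = lam * A(t-1) + eta * h h^T,
    and the new hidden state is refined over a few inner steps,
        h_{s+1} = relu(W @ h_prev + C @ x_t + A @ h_s).
    """
    relu = lambda z: np.maximum(z, 0.0)
    slow_part = W @ h_prev + C @ x_t                    # contribution of the slow weights
    A = lam * A_prev + eta * np.outer(h_prev, h_prev)   # decay + Hebbian update
    h = relu(slow_part)                                 # initial proposal for h(t)
    for _ in range(inner_steps):
        h = relu(slow_part + A @ h)                     # "attend to the recent past"
    return h, A
```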

2. Learning to learn by gradient descent by gradient descent

Paper: https://arxiv.org/abs/1606.04474

GitHub: https://github.com/deepmind/learning-to-learn

3. R-FCN: Object Detection via Region-based Fully Convolutional Networks

Paper: https://arxiv.org/abs/1605.06409

GitHub: https://github.com/Orpine/py-R-FCN

4. Fast and Provably Good Seedings for k-Means

Paper: https://las.inf.ethz.ch/files/bachem16fast.pdf

GitHub: https://github.com/obachem/kmc2
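For context, this paper speeds up k-means++-style seeding with an MCMC approximation so that good initial centers can be drawn without a full pass over the data for every center (the kmc2 repo above packages that method). As a reference point only, here is a minimal NumPy sketch of standard k-means++ (D² sampling), the baseline procedure the paper approximates; it is not the paper's algorithm itself.

```python
import numpy as np

def kmeanspp_seeding(X, k, seed=0):
    """Standard k-means++ seeding: each new center is sampled with
    probability proportional to its squared distance to the nearest
    center chosen so far (D^2 sampling)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    centers = [X[rng.integers(n)]]                        # first center: uniform
    for _ in range(k - 1):
        C = np.asarray(centers)
        d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1).min(axis=1)
        centers.append(X[rng.choice(n, p=d2 / d2.sum())])  # D^2-weighted draw
    return np.asarray(centers)
```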

5. How to Train a GAN

GitHub: https://github.com/soumith/ganhacks

6. Phased LSTM: Accelerating Recurrent Network Training for Long or Event-based Sequences

Paper: https://arxiv.org/abs/1610.09513

GitHub: https://github.com/dannyneil/public_plstm

7. Generative Adversarial Imitation Learning

Paper: https://arxiv.org/abs/1606.03476

GitHub: https://github.com/openai/imitation

8. Adversarial Multiclass Classification: A Risk Minimization Perspective

Paper: https://www.cs.uic.edu/~rfathony/pdf/fathony2016adversarial.pdf

GitHub: https://github.com/rizalzaf/adversarial-multiclass

9. Unsupervised Learning for Physical Interaction through Video Prediction

Paper: https://arxiv.org/abs/1605.07157

GitHub: https://github.com/tensorflow/models/tree/master/video_prediction

10. Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks

Paper: https://arxiv.org/abs/1602.07868

GitHub: https://github.com/openai/weightnorm
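The reparameterization in this paper is simple enough to state in a couple of lines: every weight vector w is rewritten as w = g · v / ‖v‖, so that the scale g and the direction v are trained as separate parameters. Below is a minimal NumPy sketch of the forward pass of a weight-normalized dense layer, for illustration only; the repo above provides the framework-specific implementations.

```python
import numpy as np

def weightnorm_dense(x, v, g, b):
    """Dense layer whose weights are reparameterized as w = g * v / ||v||.

    x: (batch, in_dim)   inputs
    v: (in_dim, out_dim) direction parameters
    g: (out_dim,)        per-output-unit scale parameters
    b: (out_dim,)        biases
    """
    w = g * v / np.linalg.norm(v, axis=0)   # ||v|| taken per output unit
    return x @ w + b
```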

11. Full-Capacity Unitary Recurrent Neural Networks

Paper: https://arxiv.org/abs/1611.00035

GitHub: https://github.com/stwisdom/urnn

12. Sequential Neural Models with Stochastic Layers

Paper: https://arxiv.org/pdf/1605.07571.pdf

GitHub: https://github.com/marcofraccaro/srnn

13. Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering

Paper: https://arxiv.org/abs/1606.09375

GitHub: https://github.com/mdeff/cnn_graph
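The key operation in this paper is spectral filtering on a graph with Chebyshev polynomials of the rescaled Laplacian, which keeps filters K-localized and avoids an explicit eigendecomposition. Here is a minimal NumPy sketch of that filtering step under the usual formulation y = Σ_k θ_k T_k(L̃) x; the variable names are my own.

```python
import numpy as np

def chebyshev_filter(L_scaled, x, theta):
    """Apply a spectral graph filter y = sum_k theta[k] * T_k(L_scaled) @ x,
    using the Chebyshev recurrence T_k = 2 * L_scaled @ T_{k-1} - T_{k-2}.

    L_scaled: (n, n) rescaled Laplacian, e.g. 2 * L / lambda_max - I
    x:        (n,)   signal on the graph nodes
    theta:    (K+1,) filter coefficients (the learned parameters)
    """
    t_prev, t_curr = x, L_scaled @ x                 # T_0 x and T_1 x
    y = theta[0] * t_prev
    if len(theta) > 1:
        y = y + theta[1] * t_curr
    for k in range(2, len(theta)):
        t_next = 2 * (L_scaled @ t_curr) - t_prev    # Chebyshev recurrence
        y = y + theta[k] * t_next
        t_prev, t_curr = t_curr, t_next
    return y
```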

14. Interpretable Distribution Features with Maximum Testing Power

Paper: https://papers.nips.cc/paper/6148-interpretable-distribution-features-with-maximum-testing-power.pdf

GitHub: https://github.com/wittawatj/interpretable-test/

15. Composing graphical models with neural networks for structured representations and fast inference

Paper: https://arxiv.org/abs/1603.06277

GitHub: https://github.com/mattjj/svae

16. Supervised Learning with Tensor Networks

Paper: https://arxiv.org/abs/1605.05775

GitHub: https://github.com/emstoudenmire/TNML

17. Fast ε-free Inference of Simulation Models with Bayesian Conditional Density Estimation

Paper: https://arxiv.org/abs/1605.06376

GitHub: https://github.com/gpapamak/epsilon_free_inference

18. Bayesian Optimization for Probabilistic Programs

Paper: http://www.robots.ox.ac.uk/~twgr/assets/pdf/rainforth2016BOPP.pdf

GitHub: https://github.com/probprog/bopp

19. PVANet: Lightweight Deep Neural Networks for Real-time Object Detection

Paper: https://arxiv.org/abs/1611.08588

GitHub: https://github.com/sanghoon/pva-faster-rcnn

20. Data Programming: Creating Large Training Sets Quickly

Paper: https://arxiv.org/abs/1605.07723

Code: snorkel.stanford.edu

21. Convolutional Neural Fabrics for Architecture Learning

Paper: https://arxiv.org/pdf/1606.02492.pdf

GitHub: https://github.com/shreyassaxena/convolutional-neural-fabrics

22. Value Iteration Networks

Paper: https://arxiv.org/abs/1602.02867

TensorFlow implementation: https://github.com/TheAbhiKumar/tensorflow-value-iteration-networks

Original authors' Theano implementation: https://github.com/avivt/VIN
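Value Iteration Networks embed a differentiable approximation of classical value iteration (a convolution for the expected-value term followed by a channel-wise max) inside the network, so that planning can be learned end to end. As a reference point, here is a minimal NumPy sketch of plain tabular value iteration, the computation the VIN module mimics; it is not the network itself.

```python
import numpy as np

def value_iteration(P, R, gamma=0.99, iters=100):
    """Tabular value iteration.

    P: (A, S, S) transition probabilities, P[a, s, s']
    R: (S, A)    rewards
    Repeatedly applies V[s] <- max_a ( R[s, a] + gamma * sum_s' P[a, s, s'] * V[s'] ).
    """
    V = np.zeros(R.shape[0])
    for _ in range(iters):
        Q = R + gamma * np.einsum('ast,t->sa', P, V)   # Q[s, a], expectation over s'
        V = Q.max(axis=1)                              # greedy backup over actions
    return V, Q
```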
