What is it like to contribute code to Microsoft's samples-for-ai?


Follow SomedayWill for the full story of contributing code to a Microsoft project.

Remember Microsoft's handy samples-for-ai? It's not just for installing frameworks: it is an open-source library of AI samples, packaged as Visual Studio projects, that gives Microsoft users a real head start in learning to build AI applications. Now, code from Someday and friends has been accepted into samples-for-ai! We are the first project on GitHub to implement the Progressive Growing of GANs model in Keras. The model comes from NVIDIA research; the original paper will be published at ICLR 2018, and it is one of the most advanced generative adversarial network models in the world today.

The samples-for-ai project is currently in its final stretch, and more samples will continue to be added in the near future.

The main contributors to the Keras-progressive_growing_of_gans code are naykun, Somedaywilldo, and Leext, with WJQ and WJJ also participating. (Somedaywilldo is, of course, Someday's GitHub ID.)

Without further ado, here are the project links.

The Microsoft samples-for-ai project:

https://github.com/Microsoft/samples-for-ai

Our contribution is located under examples/keras/Progressive growing of GANs:

https://github.com/Microsoft/samples-for-ai/tree/master/examples/keras/Progressive%20growing%20of%20GANs

And here is our project's original repository (home base):

https://github.com/Somedaywilldo/Keras-progressive_growing_of_gans

(GitHub stars are much appreciated; it would be even better if you star both our repo and samples-for-ai!)

Here is our README, written by Someday, probably the most formal README Someday has written so far:

Keras-progressive_growing_of_gans

Introduction

Keras implementation of Progressive Growing of GANs for Improved Quality, Stability, and Variation.

Developed by BUAA Microsoft Student Club.

Lead developers: Kun Yan, Yihang Yin, Xutong Li

Developers: Jiaqi Wang, Junjie Wu

Requirements

  1. Python3

  2. keras 2.1.2 (TensorFlow backend)

  3. CelebA Dataset

How to run

1. Clone the repository

2. Prepare the dataset

First, download the CelebA dataset.

Run h5tool.py to create an HDF5-format dataset. The default settings of h5tool.py will crop the images to 128×128 and create a channel-last h5 file.

  $ python3 h5tool.py create_celeba_channel_last <h5 file name> <CelebA directory>
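
If you want to confirm the conversion worked, a quick sanity check with h5py (not part of the original instructions; 'celeba_128.h5' below is only a placeholder for your own <h5 file name>) is to list the datasets in the generated file and their shapes, which should show a channel-last layout:

  # Optional sanity check: list the datasets inside the generated h5 file and
  # their shapes. 'celeba_128.h5' is only a placeholder for <h5 file name>.
  import h5py

  with h5py.File('celeba_128.h5', 'r') as f:
      f.visititems(lambda name, obj: print(name, getattr(obj, 'shape', '')))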

Modify config.py with your own settings.

  # In config.py:
data_dir = 'datasets'
result_dir = 'results'

dataset = dict(h5_path=<h5 file name>, resolution=128, max_labels=0, mirror_augment=True)
# Note: "data_dir" should be set to the direcory of your h5 file.

We only support the CelebA dataset for now; you may need to modify the code in dataset.py and h5tool.py if you want to switch to another dataset.
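
For reference, below is a rough sketch of how one might pack a custom image folder into a channel-last HDF5 file similar to what h5tool.py produces for CelebA. The dataset key 'data' and the single-resolution layout are assumptions on our part; check dataset.py and h5tool.py for the names and structure the loader actually expects.

  # Hypothetical sketch only: pack a folder of JPEGs into a channel-last
  # (N, H, W, 3) HDF5 file. The dataset key 'data' is an assumption; the real
  # loader in dataset.py may expect different names and multiple resolutions.
  import glob
  import os

  import h5py
  import numpy as np
  from PIL import Image

  def folder_to_h5(image_dir, out_path, resolution=128):
      paths = sorted(glob.glob(os.path.join(image_dir, '*.jpg')))
      with h5py.File(out_path, 'w') as f:
          dset = f.create_dataset('data',
                                  shape=(len(paths), resolution, resolution, 3),
                                  dtype=np.uint8)
          for i, p in enumerate(paths):
              img = Image.open(p).convert('RGB').resize((resolution, resolution))
              dset[i] = np.asarray(img, dtype=np.uint8)

  # Example: folder_to_h5('my_images', 'datasets/my_dataset.h5')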

3. Begin training!

  $ python3 train.py

In train.py:

  # In train.py:
speed_factor = 20
# set it to 1 if you don't need it.

"speed_factor" parameter will speed up the transition procedure of progressive growing of gans(switch resolution), at the price of reducing images' vividness, this parameter is aimed for speed up the validation progress in our development, however it is useful to see the progressive growing procedure more quickly, set it to "1" if you don't need it.

If your settings are correct, you should now see running information like our running_log_example.

4. Save and resume training weights

Parameters in train.py determine how frequently a snapshot of the training result is saved. If you want to resume from a previous result, just modify train.py:

  # In train.py:
image_grid_type         = 'default',
# modify the line below
# resume_network         = None,
# to:
resume_network         = <weights snapshot directory>,
resume_kimg             = <previous trained images in thousands>,

5. Using main.py (optional)

We provide main.py for remote training for Visual Studio or Visual Studio Code users, so you can start the training process directly from the command line, which is convenient for remote job submission.

  $ python3 main.py --data_dir=<dataset h5 file directory> \
                    --resume_dir=<weights snapshot directory> \
                    --resume_kimg=<previous trained images in thousands>

Contact us

For any bug reports or advice, please contact us:

Kun Yan (naykun) : yankun1138283845@foxmail.com

Yihang Yin (Somedaywilldo) : somedaywilldo@foxmail.com

Reference

  1. Progressive Growing of GANs for Improved Quality, Stability, and Variation. Tero Karras (NVIDIA), Timo Aila (NVIDIA), Samuli Laine (NVIDIA), Jaakko Lehtinen (NVIDIA and Aalto University). NVIDIA Research paper.

  2. tkarras/progressive_growing_of_gans (https://github.com/tkarras/progressive_growing_of_gans)

  3. github-pengge/PyTorch-progressive_growing_of_gans (https://github.com/github-pengge/PyTorch-progressive_growing_of_gans)

License

Our code is under the MIT license. See LICENSE.

This project wrapped last Saturday. The code was contributed mainly by naykun and Someday, and the reproduction process was extremely hard; more than once along the way we wanted to give up. First of all, since this is a very recent top-conference paper, just reading it and understanding how the model works took a huge amount of time (naykun even hand-wrote a translation of the original paper on a train, hats off). Someday, for his part, had to hack through all of the original authors' code before the training data and outputs came out right, because different frameworks put the color channel in different positions. Daily life on the project was Someday and naykun taking turns debugging until 1 a.m., then both lying awake until 2 a.m. Last week this project was served up together with the OO elevator assignment, the Fengru Cup department review, and the Lanqiao Cup, which made for a thoroughly spicy combination.

Still, through this difficult process, Someday came to deeply appreciate the rigor and skill of the researchers; from the code alone you can tell that Someday is still a good ten years behind.

Over the next two weeks, Someday and naykun will document the blood, sweat, and tears of this contribution in a series of posts. These posts will cover the principles of GANs (generative adversarial networks; Someday has translated the seminal GAN paper), the principles of Progressive Growing of GANs, and the full journey from not understanding the model code at all to gradually reproducing the model after several strategic adjustments. It should be a rewarding read; if you are interested in deep learning and GANs, please support us and share!


(High-resolution celebrity faces generated by the original authors)

For now we only support the CelebA dataset and the functionality is still fairly limited; we will add more datasets and features going forward, so stay tuned!

Follow SomedayWill for the full story of contributing code to a Microsoft project.
