Author: 墨理三生
Continued from the previous post: StyleMapGAN | Test Experiment Log [I]
Note: following the official README steps, this post is a concise record of the test experiments, for reference only. It was compiled carefully; please do not repost.
StyleMapGAN | celeba_hq style transfer and image editing tests | Test Experiment Log [II]
StyleMapGAN is an improvement built on StyleGAN2.
Paper title
Exploiting Spatial Dimensions of Latent in GAN for Real-time Image Editing
Code used + paper
This post records the results of the pretrained StyleMapGAN models on the celeba_hq test data.
- For environment setup, see the previous post.
Preparing the celeba_hq test data and pretrained models
The authors have already collected all the download links and extraction logic in download.sh, which is really nice.
Just copy the commands below and run them; everything downloads automatically (depending on your connection, about half an hour).
# Download raw images and create LMDB datasets using them
# Additional files are also downloaded for local editing
bash download.sh create-lmdb-dataset celeba_hq
# Download the pretrained network (256x256)
bash download.sh download-pretrained-network-256 celeba_hq
# Download the pretrained network (1024x1024 image / 16x16 stylemap / Light version of Generator)
bash download.sh download-pretrained-network-1024 ffhq_16x16
The whole project, plus the data downloaded and extracted by the commands above, takes only about 20 GB of disk:
du -sh
20G .
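The ~20 GB figure can also be double-checked without `du`. Below is a small stdlib sketch (the function names are mine, not the repo's) that sums file sizes the way `du -s` does and formats the result like the `-h` flag:

```python
import os

def dir_size_bytes(root):
    """Sum of all file sizes under root, similar in spirit to `du -s`."""
    total = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.isfile(path):
                total += os.path.getsize(path)
    return total

def human_readable(num_bytes):
    """Format a byte count roughly the way `du -sh` does (powers of 1024)."""
    for unit in ("B", "K", "M", "G", "T"):
        if num_bytes < 1024 or unit == "T":
            return f"{num_bytes:.1f}{unit}"
        num_bytes /= 1024

print(human_readable(20 * 1024**3))  # -> 20.0G
```

Running `human_readable(dir_size_bytes("."))` from the project root should report something close to the 20G shown above.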
Directory structure of the project's data folders
Generating test images on the celeba_hq dataset
Reconstruction
Results are saved to expr/reconstruction.
# CelebA-HQ
python generate.py --ckpt expr/checkpoints/celeba_hq_256_8x8.pt --mixing_type reconstruction --test_lmdb data/celeba_hq/LMDB_test
Single-GPU memory usage: 11073 MiB
W interpolation
Results are saved to expr/w_interpolation.
# CelebA-HQ
python generate.py --ckpt expr/checkpoints/celeba_hq_256_8x8.pt --mixing_type w_interpolation --test_lmdb data/celeba_hq/LMDB_test
Single-GPU memory usage: 8769 MiB
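The w_interpolation mode morphs between two real images by linearly interpolating their stylemaps. Per element that is just a lerp; a minimal sketch, assuming flattened stylemaps as plain Python lists (function and variable names are mine):

```python
def lerp(w_a, w_b, t):
    """Per-element linear interpolation between two (flattened) stylemaps."""
    return [(1.0 - t) * a + t * b for a, b in zip(w_a, w_b)]

w_a = [0.0, 1.0, 2.0]  # stylemap of image A (toy values)
w_b = [4.0, 1.0, 0.0]  # stylemap of image B

# t sweeps from source (t=0) to target (t=1); intermediate t gives the morphs.
print(lerp(w_a, w_b, 0.5))  # -> [2.0, 1.0, 1.0]
```

Decoding each interpolated stylemap yields the in-between faces shown in expr/w_interpolation.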
Local editing
Results are saved to expr/local_editing. Images are paired using target semantic mask similarity. For details, follow preprocessor/README.md.
# Using GroundTruth(GT) segmentation masks for CelebA-HQ dataset.
python generate.py --ckpt expr/checkpoints/celeba_hq_256_8x8.pt --mixing_type local_editing --test_lmdb data/celeba_hq/LMDB_test --local_editing_part nose
Single-GPU memory usage: 8793 MiB
The reconstructed nose regions
The synthesized_image results (generated noses) are shown below; there are also a few failure cases.
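Local editing is where the spatial stylemap pays off: inside the masked region (here the nose) the reference image's stylemap is used, elsewhere the original's. Per spatial position this is a mask blend; a minimal sketch with flattened stylemaps as plain lists (names are mine, not the project's API):

```python
def local_edit(w_src, w_ref, mask):
    """Blend two stylemaps spatially: take w_ref where mask==1, w_src elsewhere."""
    return [r if m else s for s, r, m in zip(w_src, w_ref, mask)]

w_src = [1, 1, 1, 1]   # original image's stylemap (flattened, toy values)
w_ref = [9, 9, 9, 9]   # reference image's stylemap
mask  = [0, 1, 1, 0]   # 1 = positions covered by the target part (e.g. nose)

print(local_edit(w_src, w_ref, mask))  # -> [1, 9, 9, 1]
```

Decoding the blended stylemap transplants only the masked part while leaving the rest of the face untouched, which is why --local_editing_part selects a single semantic class.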
Random Generation
Results are saved to expr/random_generation. It shows random generation examples.
python generate.py --mixing_type random_generation --ckpt expr/checkpoints/celeba_hq_256_8x8.pt
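random_generation needs no test LMDB because it samples latent vectors z from a standard Gaussian and pushes them through the mapping network into stylemaps before decoding. The sampling step itself is simple; a stdlib sketch (the mapping network is only described in a comment, and `sample_z` is my name):

```python
import random

def sample_z(dim, seed=None):
    """Draw a latent vector z ~ N(0, I) using the stdlib RNG."""
    rng = random.Random(seed)
    return [rng.gauss(0.0, 1.0) for _ in range(dim)]

z = sample_z(64, seed=0)
# In StyleMapGAN this z would go through the mapping network and be turned
# into a spatial stylemap (64x8x8 for this checkpoint, per the log below)
# before the generator decodes it into a face.
print(len(z))  # -> 64
```

Seeding makes a run reproducible; omitting the seed gives fresh random faces each time.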
Style Mixing
Results are saved to expr/stylemixing. It shows style mixing examples.
python generate.py --mixing_type stylemixing --ckpt expr/checkpoints/celeba_hq_256_8x8.pt --test_lmdb data/celeba_hq/LMDB_test
Single-GPU memory usage: 8769 MiB
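Style mixing feeds one image's styles to the coarse generator layers and another's to the fine layers, so large-scale structure comes from one source and texture from the other. Per layer that is a selection with a crossover point; a minimal sketch (the per-layer list and the cutoff are my illustration, not the repo's API):

```python
def style_mix(styles_a, styles_b, crossover):
    """Use styles_a for layers before `crossover`, styles_b from there on."""
    return styles_a[:crossover] + styles_b[crossover:]

# One entry per generator layer; strings stand in for per-layer stylemaps.
styles_a = ["a0", "a1", "a2", "a3"]
styles_b = ["b0", "b1", "b2", "b3"]

print(style_mix(styles_a, styles_b, 2))  # -> ['a0', 'a1', 'b2', 'b3']
```

A low crossover keeps only pose/shape from A; a high crossover keeps almost everything from A and borrows only fine texture from B.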
Semantic Manipulation
Results are saved to expr/semantic_manipulation. It shows local semantic manipulation examples.
# CelebA-HQ
python semantic_manipulation.py --ckpt expr/checkpoints/celeba_hq_256_8x8.pt --LMDB data/celeba_hq/LMDB --svm_train_iter 10000
Single-GPU memory usage: 6455 MiB
The generated makeup effect is shown below.
Run output (takes about 5 minutes):
latent_code_shape (64, 8, 8)
positive_train: 5867, negative_train:3134, positive_val:651, negative_val:348
Training boundary. 2021-07-09 10:36:17.187714
/home/墨理/anaconda3/envs/torch15/lib/python3.7/site-packages/sklearn/svm/_base.py:258: ConvergenceWarning: Solver terminated early (max_iter=10000). Consider pre-processing your data with StandardScaler or MinMaxScaler.
% self.max_iter, ConvergenceWarning)
Finish training. 2021-07-09 10:37:23.516691
validate boundary.
Accuracy for validation set: 914 / 999 = 0.914915
classifier.coef_.shape (1, 4096)
boundary.shape (64, 8, 8)
30000 images, 30000 latent_codes
Heavy_Makeup 18
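As the log shows, semantic_manipulation.py trains a linear SVM on latent codes labeled by an attribute (here Heavy_Makeup, attribute index 18); the classifier's coef_ (1, 4096) is reshaped into the (64, 8, 8) boundary, and moving a latent code along that normal strengthens or weakens the attribute. The core edit step, sketched with flattened codes as plain lists (names are mine):

```python
def manipulate(latent, boundary, alpha):
    """Shift a (flattened) latent code along the SVM boundary normal."""
    return [w + alpha * n for w, n in zip(latent, boundary)]

latent   = [1.0, 0.0, 2.0]   # toy flattened latent code
boundary = [1.0, 0.0, -1.0]  # toy boundary normal from the trained SVM

# Positive alpha pushes toward the attribute (more makeup), negative away.
print(manipulate(latent, boundary, 2.0))  # -> [3.0, 0.0, 0.0]
```

Decoding a series of codes with increasing alpha produces the gradual makeup-strength sweep seen in expr/semantic_manipulation.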