Context Encoders: Feature Learning by Inpainting

1. Encoder-decoder pipeline

 

Figure: Context encoder trained with joint reconstruction and adversarial loss for semantic inpainting.

Figure: Context encoder trained with reconstruction loss for feature learning by filling in arbitrary region dropouts in the input.

The algorithm proposed in the paper adopts an encoder-decoder pipeline: the image with the missing region serves as the network's input, and the completed image is the network's output.

  • Encoding: the image to be inpainted is encoded by a stack of convolutional layers;
  • Decoding: deconvolution (transposed convolution) layers upsample the features back to the size of the original image;
  • Channel-wise fully-connected layer: this reduces the number of parameters. If the input feature maps are of size $m \times n \times n$ and the output has the same size, a standard fully-connected layer would need $m^2 n^4$ parameters, whereas the channel-wise fully-connected layer needs only $m n^4$ (see the sketch after this list).
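
A minimal PyTorch sketch of such a channel-wise fully-connected layer is given below; the module name, the initialization, and the 256 x 6 x 6 bottleneck used in the shape check are illustrative assumptions, not the authors' exact code.

```python
import torch
import torch.nn as nn

class ChannelWiseFC(nn.Module):
    """Channel-wise fully-connected layer: each of the m feature maps is mapped
    to itself through its own (n*n x n*n) weight matrix, so the parameter count
    is m*n^4 instead of the m^2*n^4 of a full fully-connected layer."""
    def __init__(self, channels, spatial):  # feature maps: channels x spatial x spatial
        super().__init__()
        self.weight = nn.Parameter(0.01 * torch.randn(channels, spatial * spatial, spatial * spatial))
        self.bias = nn.Parameter(torch.zeros(channels, spatial * spatial))

    def forward(self, x):                    # x: (B, m, n, n)
        b, c, h, w = x.shape
        flat = x.view(b, c, h * w)           # (B, m, n*n)
        # Per-channel matrix multiply: out[b, c, :] = flat[b, c, :] @ weight[c]
        out = torch.einsum('bcd,cde->bce', flat, self.weight) + self.bias
        return out.view(b, c, h, w)

# Shape check on a hypothetical 256 x 6 x 6 bottleneck.
layer = ChannelWiseFC(channels=256, spatial=6)
print(layer(torch.randn(2, 256, 6, 6)).shape)   # torch.Size([2, 256, 6, 6])
```

In the paper the channel-wise layer is followed by a stride-1 convolution so that information can still propagate across channels.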

2. Loss function

  • Reconstruction loss: the squared L2 norm of the difference between the completed missing region and the ground truth,

    $\mathcal{L}_{rec}(x) = \left\| \hat{M} \odot \big( x - F((1-\hat{M}) \odot x) \big) \right\|_2^2$

    where $\hat{M}$ is the binary mask (1 in the dropped region, 0 in the context), $F$ is the context encoder, and $\odot$ denotes element-wise multiplication.

  • Adversarial loss: the standard GAN loss,

    $\mathcal{L}_{adv} = \max_{D} \; \mathbb{E}_{x \in \mathcal{X}} \big[ \log D(x) + \log \big( 1 - D(F((1-\hat{M}) \odot x)) \big) \big]$

  • Final loss: a weighted combination of the two terms (a sketch of assembling it follows below),

    $\mathcal{L} = \lambda_{rec}\, \mathcal{L}_{rec} + \lambda_{adv}\, \mathcal{L}_{adv}$
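
Below is a minimal PyTorch sketch of how this joint objective could be assembled; the generator, discriminator, mask convention, and the heavily reconstruction-biased default weights are illustrative assumptions rather than the authors' training code.

```python
import torch
import torch.nn.functional as F

def joint_loss(x, mask, generator, discriminator,
               lambda_rec=0.999, lambda_adv=0.001):
    """x: original images (B, C, H, W); mask: 1 inside the dropped region, 0 in the context."""
    corrupted = (1 - mask) * x            # context with the region removed
    pred = generator(corrupted)           # F((1 - M) * x) from the equations above

    # Reconstruction term: squared L2 restricted to the missing region.
    rec = ((mask * (x - pred)) ** 2).mean()

    # Generator side of the adversarial term (non-saturating form);
    # the discriminator is updated separately on real x vs. pred.
    d_fake = discriminator(pred)
    adv = F.binary_cross_entropy(d_fake, torch.ones_like(d_fake))

    return lambda_rec * rec + lambda_adv * adv
```

In practice the reconstruction term is weighted far more heavily than the adversarial term, which mainly serves to sharpen the otherwise blurry L2 prediction.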

3. Results and discussion

Judging from the experimental results, the method was quite good for its time, but some local details are clearly not restored well, which the paper leaves as an open problem.

In my view, the paper makes two main contributions:

  • It uses an encoder-decoder architecture for image inpainting and replaces the fully-connected bottleneck with a channel-wise fully-connected layer, reducing the number of model parameters;
  • Its loss function has two parts: an image reconstruction loss and a GAN adversarial loss.

Source code: https://github.com/pathak22/context-encoder (Torch version)

https://github.com/BoyuanJiang/context_encoder_pytorch (PyTorch version)

4. References

[1] Pathak, Deepak, et al. "Context encoders: Feature learning by inpainting." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2016.

[2] 王老头. "Context Encoder论文及代码解读". https://www.cnblogs.com/wmr95/p/10636804.html. 2019.

[3] scut_少东. "Context Encoder 论文及lua 代码解读". https://blog.****.net/qq_33594380/article/details/85317922. 2018.