
[Paper] [My minimal PyTorch implementation]
What’s different?
- Moved the model to PyTorch
- Used the high-level API (tf.layers / tf.contrib)
- Added batch normalisation after the dilated convolutions

- This model runs on 64x64 images
- The model structure has been changed a little to suit the smaller image size
- Added a sigmoid activation to the last layer of the Completion network
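A minimal sketch of how the last two changes might look in PyTorch. The layer widths, kernel sizes, and class name here are illustrative assumptions, not the exact architecture:

```python
import torch
import torch.nn as nn

class CompletionTail(nn.Module):
    """Illustrative fragment of a Completion network: dilated convolutions
    each followed by batch normalisation, with a sigmoid on the last layer
    so outputs land in [0, 1]. Sizes are assumptions for the sketch."""

    def __init__(self, channels=64):
        super().__init__()
        self.dilated = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=2, dilation=2),
            nn.BatchNorm2d(channels),   # BN added right after the dilated conv
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=4, dilation=4),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )
        # Final layer maps back to RGB; sigmoid keeps pixel values in [0, 1]
        self.out = nn.Sequential(
            nn.Conv2d(channels, 3, 3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return self.out(self.dilated(x))
```

With `padding = dilation` on the 3x3 dilated convs, spatial size is preserved, so a 64x64 feature map stays 64x64 through the block.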
Paper Structure
Paper optimization
The structure of this model

The result after only 20 steps (5 minutes)
<- Yeah... I mean 5 minutes on a CPU
The generator has already started to produce something close to what we want. The reason may be the reconstruction loss, which lets us train it as a decoder.
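A rough sketch of that idea: pretraining the completion network on a plain MSE reconstruction loss alone, so it behaves like an autoencoder's decoder before any adversarial training. The function and argument names are illustrative assumptions:

```python
import torch
import torch.nn as nn

def reconstruction_step(completion_net, optimizer, image, mask):
    """One pretraining step using only the reconstruction (MSE) loss.
    `mask` is 1 where pixels were removed; the network sees the masked
    image and is pushed to reproduce the original (assumed setup)."""
    masked = image * (1 - mask)          # hide the region to be completed
    output = completion_net(masked)      # network fills in the image
    loss = nn.functional.mse_loss(output, image)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because this objective gives a dense pixel-wise gradient everywhere, plausible outputs can appear within a handful of steps, long before a discriminator contributes anything.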

And, for comparison, the result reported in the paper
Still wondering
Which weight initialization scheme would work best for this model?
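One way to experiment with this question, sketched with PyTorch's built-in initializers. Xavier and Kaiming are two common candidates; which suits this model better is exactly the open question, and the helper name here is my own:

```python
import torch.nn as nn

def init_weights(module, scheme="kaiming"):
    """Initialize conv layers with a chosen scheme; meant to be
    applied across a network via net.apply(...)."""
    if isinstance(module, nn.Conv2d):
        if scheme == "xavier":
            nn.init.xavier_normal_(module.weight)
        else:
            # Kaiming/He init, often preferred with ReLU activations
            nn.init.kaiming_normal_(module.weight, nonlinearity="relu")
        if module.bias is not None:
            nn.init.zeros_(module.bias)

# Usage: net.apply(lambda m: init_weights(m, scheme="xavier"))
```

Swapping the `scheme` argument makes it easy to train the same model twice and compare convergence curves.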