[Paper Reading] SimCLR: Google's New Self-Supervised Method (Feb. 2020)
Title: A Simple Framework for Contrastive Learning of Visual Representations
Author: Ting Chen, Geoffrey Hinton... (Google Research)
Reference: "A major work from Hinton's group: best unsupervised performance on ImageNet improved by 7% in one step, rivaling supervised learning"
Network Architecture
Data augmentation
1. Composition of data augmentation operations is crucial for learning good representations.
2. No single transformation suffices to learn good representations.
3. It is critical to compose random cropping with color distortion (a composed pipeline is sketched after this list).
4. Data augmentation that does not yield accuracy benefits for supervised learning can still help considerably with contrastive learning.
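Findings 1–3 boil down to composing random cropping with strong color distortion. Below is a minimal sketch of such a composed pipeline, assuming PyTorch/torchvision; the function name simclr_augment is mine, and the jitter strengths roughly follow the settings reported in the paper's appendix:

```python
from torchvision import transforms

def simclr_augment(size=224, s=1.0):
    """Random crop composed with color distortion of strength s (a sketch)."""
    color_jitter = transforms.ColorJitter(0.8 * s, 0.8 * s, 0.8 * s, 0.2 * s)
    return transforms.Compose([
        transforms.RandomResizedCrop(size),              # random crop + resize back
        transforms.RandomHorizontalFlip(p=0.5),
        transforms.RandomApply([color_jitter], p=0.8),   # color distortion
        transforms.RandomGrayscale(p=0.2),
        # (the paper additionally applies Gaussian blur on ImageNet)
        transforms.ToTensor(),
    ])

# Each image is augmented twice to produce the two correlated views:
# aug = simclr_augment(); x_i, x_j = aug(img), aug(img)
```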
Architectures for Encoder and Head
1. Unsupervised contrastive learning benefits more from bigger models than supervised learning does.
2. g(·): a nonlinear projection head is better than a linear projection (a sketch follows this list).
3. Contrastive learning benefits more from larger batch sizes and longer training than supervised learning does (see the loss sketch below for why batch size matters).
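On finding 2: the paper's g(·) is a small MLP (one hidden layer with ReLU) applied on top of the encoder output h, and the contrastive loss is computed on z = g(h). A minimal PyTorch sketch; the class name and the 2048/128 dimensions (ResNet-50 feature size and the paper's projection size) are filled in for illustration:

```python
import torch.nn as nn

class ProjectionHead(nn.Module):
    """Nonlinear projection head g(.): Linear -> ReLU -> Linear (a sketch)."""
    def __init__(self, in_dim=2048, hidden_dim=2048, out_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.ReLU(inplace=True),
            nn.Linear(hidden_dim, out_dim),
        )

    def forward(self, h):
        # z = g(h); the contrastive loss is applied on z,
        # while h (before the head) is used for downstream tasks.
        return self.net(h)
```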
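On finding 3: the loss is the paper's NT-Xent (normalized temperature-scaled cross entropy) over a batch of 2N augmented views, so a larger batch directly provides more negatives. A minimal sketch, assuming PyTorch; the function name nt_xent_loss and the temperature value here are illustrative:

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z_i, z_j, temperature=0.5):
    """NT-Xent over two batches of projected views z_i, z_j of shape (N, d) (a sketch)."""
    N = z_i.size(0)
    z = F.normalize(torch.cat([z_i, z_j], dim=0), dim=1)   # (2N, d), unit norm
    sim = torch.mm(z, z.t()) / temperature                 # pairwise cosine similarity
    mask = torch.eye(2 * N, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float('-inf'))             # drop self-similarity
    # The positive for row k is its other view: k+N for the first half, k-N for the second.
    targets = torch.cat([torch.arange(N, 2 * N), torch.arange(0, N)]).to(z.device)
    return F.cross_entropy(sim, targets)
```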