[cs231n] Lecture 12 | Visualizing and Understanding
Visualizing
Maximally Activating Patches
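One way to see what an individual neuron responds to: pick a channel in some conv layer, run many images through the network, and display the image patches (the neuron's receptive fields) that produced the highest activations. A minimal PyTorch sketch, where the layer index, channel, and random stand-in images are all illustrative choices:

```python
import torch
import torchvision

# Record activations of one conv channel with a forward hook, then rank
# images by that channel's maximum response.
model = torchvision.models.vgg16(pretrained=True).eval()
layer, channel = model.features[10], 42  # hypothetical layer/channel

acts = []
hook = layer.register_forward_hook(
    lambda mod, inp, out: acts.append(out[:, channel].detach()))

images = torch.randn(64, 3, 224, 224)  # stand-in for a real dataset
with torch.no_grad():
    model(images)
hook.remove()

# Max response per image; the top entries point at the images (and, via
# the receptive field, the patches) that most excite this neuron.
scores = acts[0].flatten(1).max(dim=1).values
top_idx = scores.topk(5).indices
```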
Saliency Maps
Saliency Maps: Segmentation without supervision
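A saliency map asks which pixels matter for the classification: compute the gradient of the unnormalized class score with respect to the input image, take the absolute value, and max over the color channels. Thresholding the resulting map can seed a segmentation method such as GrabCut, with no segmentation labels at all. A minimal sketch, where the class index and image are stand-ins:

```python
import torch
import torchvision

# Saliency map: gradient of the correct-class score w.r.t. input pixels,
# absolute value, max over color channels.
model = torchvision.models.vgg16(pretrained=True).eval()
for p in model.parameters():
    p.requires_grad_(False)

x = torch.randn(1, 3, 224, 224, requires_grad=True)  # stand-in image
label = 281  # hypothetical target class index

score = model(x)[0, label]  # unnormalized class score, not softmax prob
score.backward()
saliency = x.grad.abs().max(dim=1).values.squeeze(0)  # (224, 224) map
```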
Intermediate Features via (guided) backprop
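Guided backprop computes the same kind of image gradient but changes the ReLU backward rule: gradients flow back only where they are positive and only through units that were active on the forward pass, which gives much cleaner visualizations. A rough PyTorch sketch using backward hooks (the standard ReLU backward already masks units that were off; the hook adds the clamp on the upstream gradient):

```python
import torch
import torch.nn as nn
import torchvision

# Guided backprop sketch: at every ReLU, pass back only positive gradients.
model = torchvision.models.vgg16(pretrained=True).eval()

def clamp_grad(module, grad_in, grad_out):
    return (grad_in[0].clamp(min=0),)

for m in model.modules():
    if isinstance(m, nn.ReLU):
        m.inplace = False  # full backward hooks need out-of-place ReLU
        m.register_full_backward_hook(clamp_grad)

x = torch.randn(1, 3, 224, 224, requires_grad=True)  # stand-in image
model(x)[0, 281].backward()  # 281: hypothetical class index
guided_grads = x.grad  # visualize e.g. as abs().max over color channels
```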
Visualizing CNN features: Gradient Ascent
Adding “multi-faceted” visualization (plus more careful regularization and a center bias) gives even nicer results.
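Instead of asking which pixels matter for a fixed image, gradient ascent synthesizes one: repeatedly compute a class score, backprop to the pixels, and step uphill, with a regularizer to keep the image interpretable (plain L2 here; the nicer results also use blurring and clipping). A minimal sketch with illustrative hyperparameters:

```python
import torch
import torchvision

# Gradient-ascent class visualization: maximize a class score minus an
# L2 penalty on the image.
model = torchvision.models.vgg16(pretrained=True).eval()
for p in model.parameters():
    p.requires_grad_(False)

img = torch.zeros(1, 3, 224, 224, requires_grad=True)
target, lr, l2_reg = 76, 25.0, 1e-3  # hypothetical class and hyperparams

for _ in range(100):
    score = model(img)[0, target]
    objective = score - l2_reg * (img ** 2).sum()
    objective.backward()
    with torch.no_grad():
        img += lr * img.grad  # ascend the objective
        img.grad.zero_()
```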
Fooling Images / Adversarial Examples
(1) Start from an arbitrary image
(2) Pick an arbitrary class
(3) Modify the image to maximize the score of that class
(4) Repeat until the network is fooled (sketched below)
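The same gradient machinery fools the network: drop the regularizer, start from a correctly classified image, and ascend the score of a wrong class until it wins; the final image typically looks unchanged to a human. A minimal sketch, where the class index, step size, and image are stand-ins:

```python
import torch
import torchvision

# Fooling-image sketch: gradient ascent on a real image toward a wrong
# class, stopping once that class wins. The perturbation stays tiny.
model = torchvision.models.vgg16(pretrained=True).eval()
for p in model.parameters():
    p.requires_grad_(False)

img = torch.randn(1, 3, 224, 224)  # stand-in for a correctly classified image
fool = img.clone().requires_grad_(True)
target, lr = 113, 1.0  # hypothetical wrong class and step size

for _ in range(100):
    scores = model(fool)
    if scores.argmax(dim=1).item() == target:
        break  # network is fooled
    scores[0, target].backward()
    with torch.no_grad():
        # normalized gradient step keeps each update small
        fool += lr * fool.grad / fool.grad.norm()
        fool.grad.zero_()
```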
DeepDream: Amplify existing features
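Rather than maximizing one class score, DeepDream maximizes the squared L2 norm of the activations at a chosen layer, which is equivalent to setting the layer's gradient equal to its own activation, so whatever the layer already detects gets amplified. A minimal sketch; the layer choice and step size are illustrative:

```python
import torch
import torchvision

# DeepDream sketch: amplify existing activations at a chosen layer by
# maximizing 0.5 * ||a||^2 (whose gradient at the layer is a itself).
model = torchvision.models.vgg16(pretrained=True).features[:16].eval()
for p in model.parameters():
    p.requires_grad_(False)

img = torch.randn(1, 3, 224, 224, requires_grad=True)  # stand-in image
lr = 0.1

for _ in range(20):
    act = model(img)
    (act ** 2).sum().mul(0.5).backward()
    with torch.no_grad():
        img += lr * img.grad / img.grad.abs().mean()  # normalized step
        img.grad.zero_()
```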
Feature Inversion
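Feature inversion shows what information a layer retains: given target features φ(x₀) of an image, optimize a new image x to minimize ‖φ(x) − φ(x₀)‖² plus a total-variation regularizer that favors smooth, natural-looking images. A minimal sketch with illustrative layer and hyperparameters:

```python
import torch
import torchvision

# Feature inversion: find an image whose features at one layer match a
# target image's features, with a total-variation regularizer.
phi = torchvision.models.vgg16(pretrained=True).features[:9].eval()
for p in phi.parameters():
    p.requires_grad_(False)

def tv(x):  # total variation: encourages spatial smoothness
    return ((x[..., 1:, :] - x[..., :-1, :]) ** 2).sum() + \
           ((x[..., :, 1:] - x[..., :, :-1]) ** 2).sum()

x0 = torch.randn(1, 3, 224, 224)  # stand-in target image
target_feats = phi(x0).detach()

x = torch.randn(1, 3, 224, 224, requires_grad=True)
opt = torch.optim.Adam([x], lr=0.05)
for _ in range(200):
    opt.zero_grad()
    loss = (phi(x) - target_feats).pow(2).sum() + 1e-2 * tv(x)
    loss.backward()
    opt.step()
```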
Texture Synthesis
Neural Texture Synthesis
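Neural texture synthesis (Gatys et al.) describes a texture with Gram matrices: reshape a layer's features to C × HW, and G = F Fᵀ records which pairs of features co-occur anywhere in the image while discarding spatial layout; a new image is then optimized so its Gram matrices match the texture's at several layers. A sketch of the Gram matrix and the per-layer loss:

```python
import torch

# Gram matrix for features of shape (N, C, H, W): flatten space and take
# F @ F^T, giving a (N, C, C) tensor of feature co-occurrences.
def gram(features, normalize=True):
    n, c, h, w = features.shape
    f = features.view(n, c, h * w)
    g = f @ f.transpose(1, 2)
    if normalize:
        g = g / (c * h * w)
    return g

# Texture loss at one layer: squared distance between Gram matrices of
# the generated image and the source texture.
def texture_loss(gen_feats, tex_feats):
    return (gram(gen_feats) - gram(tex_feats)).pow(2).sum()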
Neural Style Transfer
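Style transfer combines the two previous ideas: match content features of one image (feature inversion) and Gram matrices of another (texture synthesis), and run gradient descent on the pixels. A condensed sketch; the layer indices and loss weights are illustrative:

```python
import torch
import torchvision

# Style-transfer sketch: optimize pixels to match content features at one
# layer and style Gram matrices at several layers.
cnn = torchvision.models.vgg16(pretrained=True).features.eval()
for p in cnn.parameters():
    p.requires_grad_(False)

def gram(f):
    n, c, h, w = f.shape
    f = f.view(n, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def feats_at(x, layers):
    out, feats = x, {}
    for i, m in enumerate(cnn):
        out = m(out)
        if i in layers:
            feats[i] = out
    return feats

content_img = torch.randn(1, 3, 224, 224)  # stand-in images
style_img = torch.randn(1, 3, 224, 224)
content_layer, style_layers = 15, [3, 8, 15, 22]

c_target = feats_at(content_img, [content_layer])[content_layer].detach()
s_targets = {i: gram(f).detach()
             for i, f in feats_at(style_img, style_layers).items()}

x = content_img.clone().requires_grad_(True)
opt = torch.optim.Adam([x], lr=0.05)
for _ in range(200):
    opt.zero_grad()
    f = feats_at(x, [content_layer] + style_layers)
    loss = (f[content_layer] - c_target).pow(2).sum()
    loss += 1e3 * sum((gram(f[i]) - s_targets[i]).pow(2).sum()
                      for i in style_layers)
    loss.backward()
    opt.step()
```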
Fast Style Transfer
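Per-image optimization is slow, so fast style transfer trains a feedforward transform network once per style against the same content + style losses; at test time stylization is a single forward pass. A schematic sketch in which both the tiny network and `perceptual_loss` are placeholders for the real architecture and the losses sketched above:

```python
import torch
import torch.nn as nn

# Fast style transfer: a feedforward transform net trained once per style;
# inference needs no optimization loop.
transform_net = nn.Sequential(  # stand-in for the real architecture
    nn.Conv2d(3, 32, 9, padding=4), nn.ReLU(),
    nn.Conv2d(32, 3, 9, padding=4),
)
opt = torch.optim.Adam(transform_net.parameters(), lr=1e-3)

def perceptual_loss(stylized, content):
    # placeholder: the real loss is the content + style loss above
    return (stylized - content).pow(2).mean()

for _ in range(10):  # training loop over content images
    content = torch.randn(4, 3, 256, 256)  # stand-in batch
    opt.zero_grad()
    perceptual_loss(transform_net(content), content).backward()
    opt.step()

# Test time: one forward pass per image.
with torch.no_grad():
    stylized = transform_net(torch.randn(1, 3, 256, 256))
```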
Summary
Many methods for understanding CNN representations
- Activations: nearest neighbors, dimensionality reduction, maximally activating patches, occlusion
- Gradients: saliency maps, class visualization, fooling images, feature inversion
- Fun: DeepDream, Style Transfer.