Inception v1 layers visualized on a map
A joint work by Google and OpenAI:
- Take 1M random images;
- Feed them to a CNN and collect spatial activations at a chosen layer;
- Produce a corresponding idealized image that would elicit such an activation;
- Plot the activations in 2D (via UMAP), overlay a grid, and average the activations within each cell.
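The steps above can be sketched on synthetic data. This is a minimal illustration, not the paper's implementation: random vectors stand in for the ~1M real CNN activations, and a PCA projection stands in for UMAP; the final feature-inversion rendering step is only indicated in a comment.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_features = 5000, 64
# Stand-in for spatial activations collected from a CNN layer.
activations = rng.normal(size=(n_samples, n_features))

# Project to 2D. (PCA via SVD here; the actual atlas uses UMAP.)
centered = activations - activations.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
coords = centered @ vt[:2].T  # shape (n_samples, 2)

# Overlay a regular grid on the 2D layout and average the
# activations whose points fall into each cell.
grid = 20
x_edges = np.linspace(coords[:, 0].min(), coords[:, 0].max(), grid + 1)[1:-1]
y_edges = np.linspace(coords[:, 1].min(), coords[:, 1].max(), grid + 1)[1:-1]
x_bins = np.digitize(coords[:, 0], x_edges)
y_bins = np.digitize(coords[:, 1], y_edges)

cell_means = np.zeros((grid, grid, n_features))
counts = np.zeros((grid, grid))
for a, x, y in zip(activations, x_bins, y_bins):
    cell_means[x, y] += a
    counts[x, y] += 1
nonempty = counts > 0
cell_means[nonempty] /= counts[nonempty][:, None]

# Each non-empty cell's mean activation would then be passed to
# feature inversion to render the idealized image for that cell.
print(cell_means.shape, int(nonempty.sum()))
```

Each grid cell ends up with one averaged activation vector; in the real atlas that vector is what gets rendered into the tile you see on the map.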
By using feature inversion to visualize millions of activations from an image classification network, we create an explorable activation atlas of features the network has learned and what concepts it typically represents.