Understanding trained CNNs by indexing neuron selectivity

The impressive performance of Convolutional Neural Networks (CNNs) on a wide range of vision problems is shadowed by their black-box nature and our consequent lack of understanding of the representations they build and how these representations are organized. To shed light on these issues, we propose to describe the activity of individual neurons by their Neuron Feature visualization and to quantify their inherent selectivity with two specific properties. We explore selectivity indexes for an image feature (color) and for an image label (class membership). Our contribution is a framework to seek out or classify neurons by indexing on these selectivity properties. Indexing neurons by selectivity makes it possible to statistically characterize how features and classes are represented across layers, at a moment when the size of trained networks keeps growing and automatic tools to index neurons are increasingly helpful.


  • A framework to analyze and visualize a trained CNN by dissecting individual neurons.
  • A color selectivity index of a neuron, extensible to other low-level attributes.
  • A class selectivity index of a neuron, extensible to other high-level attributes.
  • As a global contribution, the framework sheds some light on the black-box nature of trained CNNs.
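One plausible way to instantiate the color selectivity index is sketched below; this is an illustration under stated assumptions, not necessarily the exact formulation used in the paper. The idea: compare a neuron's activations on its top-scoring image crops against its activations on grayscale versions of the same crops. A neuron whose response collapses when color is removed is strongly color selective.

```python
import numpy as np

def color_selectivity(act_color, act_gray):
    """Sketch of a color selectivity index in [0, 1].

    act_color: activations of a neuron on its top-N image crops.
    act_gray:  activations of the same neuron on grayscale versions
               of those crops.
    Returns 1 - (gray response / color response): 0 means the neuron
    ignores color entirely, values near 1 mean its response collapses
    without color information.
    """
    act_color = np.asarray(act_color, dtype=float)
    act_gray = np.asarray(act_gray, dtype=float)
    total = act_color.sum()
    if total == 0:
        return 0.0
    return float(np.clip(1.0 - act_gray.sum() / total, 0.0, 1.0))

# A strongly color-selective neuron: response nearly vanishes on
# grayscale input (hypothetical activation values).
print(color_selectivity([5.0, 4.0, 1.0], [0.5, 0.4, 0.1]))  # → 0.9
```

The same activation-drop scheme extends to other low-level attributes: replace the grayscale transform with any attribute-removing transform (e.g. texture scrambling) and measure the relative loss in response.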


Figure a): Visualization of the Neuron Feature for a neuron of VGG-M, with its corresponding 100 cropped images.
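The figure suggests that the NF summarizes the image crops that most strongly activate a neuron. A minimal sketch of one such visualization, assuming the NF is the activation-weighted mean of the top crops (the exact weighting is an assumption here):

```python
import numpy as np

def neuron_feature(crops, activations):
    """Sketch of a Neuron Feature (NF) visualization.

    crops:       array of shape (N, H, W, 3) holding the N image patches
                 that most strongly activate the neuron (e.g. N = 100).
    activations: array of shape (N,), the neuron's response to each crop.
    Returns the activation-weighted mean patch, an (H, W, 3) image that
    averages out crop-specific detail and keeps the shared structure.
    """
    crops = np.asarray(crops, dtype=float)
    w = np.asarray(activations, dtype=float)
    w = w / w.sum()                        # normalize weights to sum to 1
    return np.tensordot(w, crops, axes=1)  # weighted average over the N axis
```

Crops with higher activations contribute more to the average, so the NF emphasizes the visual pattern the neuron actually responds to rather than incidental background content.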

Figure b): Examples of NFs for each convolutional layer of the VGG-M network. Although the sizes of the NFs increase with layer depth, we scale them to the same size. The complexity of the concepts encoded increases through the network, from basic colors and shapes to classes.

Figure c): Neurons with different color selectivity indexes. For each neuron, the NF (top) and its cropped images (bottom).

Figure d): Neurons with different class selectivity indexes. For each neuron, the NF, its cropped images, and a tag cloud of classes from the WordNet ontology.
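A plausible instantiation of the class selectivity index, again a sketch rather than the paper's exact definition: weight the class label of each top crop by the neuron's activation and take the largest weighted class frequency. A neuron whose top crops all share one class scores 1; a neuron whose response is spread evenly over many classes scores near 1/num_classes.

```python
from collections import defaultdict

def class_selectivity(labels, activations):
    """Sketch of a class selectivity index in [0, 1].

    labels:      class label (e.g. a WordNet synset name) of each of the
                 neuron's top-N image crops.
    activations: the neuron's response to each crop, used as weights.
    Returns the largest activation-weighted class frequency.
    """
    weight = defaultdict(float)
    total = 0.0
    for label, act in zip(labels, activations):
        weight[label] += act
        total += act
    return max(weight.values()) / total

# Most activation mass comes from one class (hypothetical labels/values).
print(class_selectivity(["ladybug", "ladybug", "ladybug", "fly"],
                        [4.0, 3.0, 2.0, 1.0]))  # → 0.9
```

Because the labels are just keys, the same scheme extends to other high-level attributes: replace the class label with any semantic tag (e.g. a parent synset in the WordNet hierarchy, as the tag clouds in the figure suggest).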