Geoffrey Hinton on the importance of “equivariance” over “invariance” in convnets
Geoffrey Hinton’s interesting point of view on how to train convnets with less data.
I don’t post very often on my blog, and I rarely share conference videos here.
But I was really captivated by this talk by Geoffrey Hinton on why pooling is not the right routing mechanism for convolutional networks. What Hinton explains nicely in this video is that while convnets can recognize objects in a variety of situations, they need a very large amount of data to do so. He argues that pooling is what incentivizes neural networks to work this way: pooling aims for “invariance”, where the neural activity for a given label stays the same as the input changes. Hinton’s goal is instead “equivariance”, where the neural activity changes along with what happens in the visual field. For Hinton, invariance throws information away, while equivariance lets us reach much higher accuracy.

Classic convolutional networks end up learning different models for different viewpoints, and according to Hinton, that is what leads to convnets needing so much training data. In this video Hinton also introduces the concept of “capsules”, which he proposes for high-level perception.
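The invariance-versus-equivariance distinction can be illustrated with a minimal sketch (not from the talk itself, just an assumption of mine using NumPy and 1-D non-overlapping max pooling): when a feature shifts within a pooling window, the pooled output does not change at all, so position information is discarded; the raw activation map, by contrast, shifts along with the input.

```python
import numpy as np

def max_pool(x, size=4):
    # Non-overlapping 1-D max pooling: keep only the strongest
    # activation in each window of `size` elements.
    return x.reshape(-1, size).max(axis=1)

# A 1-D "activation map" with a single strong feature at position 5.
x = np.zeros(16)
x[5] = 1.0

# Shift the feature by one position; it stays inside the same window.
x_shifted = np.roll(x, 1)

# Invariance: pooling gives an identical output for both inputs,
# so the exact position of the feature within the window is lost.
print(np.array_equal(max_pool(x), max_pool(x_shifted)))  # True

# Equivariance: the unpooled activation map itself moves with the
# input, so position information is preserved.
print(np.argmax(x), np.argmax(x_shifted))  # 5 6
```

This is the information loss Hinton objects to: once pooled, the two inputs are indistinguishable, even though the feature’s precise position might matter for perception.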
PhD Candidate at The University of Tokyo