##### [Review] ImageNet Classification with Deep Convolutional Neural Networks

Keywords: LeNet, Convolutional Neural Networks, Handwritten Digit Recognition. All the figures in this post come from 'Learning algorithms for classification: a comparison on handwritten digit recognition'. Raw accuracy, training time, recognition time, and memory requirements should all be considered in classification. From experiments and comparison, the results can illuminate which one is...

The committee gives equal weight to every model's prediction, which yields little improvement over a single model. Boosting was developed to address this problem: it is a technique for combining multiple 'base' classifiers to produce a form of committee that
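As a sketch of the idea, here is a minimal AdaBoost-style loop with decision stumps as the base classifiers. The toy data, the exhaustive threshold search, and the number of rounds are all illustrative assumptions, not the excerpted post's own setup:

```python
import numpy as np

# A minimal AdaBoost sketch with decision stumps as the 'base' classifiers.
# The toy data and exhaustive threshold search are illustrative assumptions.
X = np.array([0., 1., 2., 3., 4., 5.])
y = np.array([1, 1, -1, -1, 1, 1])       # not separable by any single stump
w = np.ones(len(X)) / len(X)             # start with uniform example weights

stumps, alphas = [], []
for _ in range(10):
    # Find the stump (threshold t, sign s) with the smallest weighted error.
    best = None
    for t in np.arange(-0.5, 6.0, 1.0):
        for s in (1, -1):
            err = w[np.where(X > t, s, -s) != y].sum()
            if best is None or err < best[0]:
                best = (err, t, s)
    err, t, s = best
    alpha = 0.5 * np.log((1 - err) / max(err, 1e-10))  # classifier weight
    pred = np.where(X > t, s, -s)
    w *= np.exp(-alpha * y * pred)       # up-weight the misclassified points
    w /= w.sum()
    stumps.append((t, s))
    alphas.append(alpha)

# Weighted committee vote: sign of the combined score.
F = sum(a * np.where(X > t, s, -s) for a, (t, s) in zip(alphas, stumps))
print((np.sign(F) == y).all())
```

Unlike the equal-weight committee, each stump here gets its own weight `alpha`, and later stumps are trained on a reweighted version of the data.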

The committee is a natural inspiration for how to combine several models (or, rather, how to combine the outputs of several models). For example, we can combine all the models by
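The equal-weight combination described above can be sketched in a few lines. The base "models" below are hypothetical predictors (the true function plus a fixed bias), chosen only to make the averaging effect visible:

```python
import numpy as np

x = np.linspace(0, 1, 50)
true_y = np.sin(2 * np.pi * x)

# Hypothetical base models: each is the true function plus its own fixed bias
# (purely illustrative; any set of imperfect predictors would do).
biases = np.random.default_rng(0).normal(0.0, 0.3, size=10)
predictions = np.stack([true_y + b for b in biases])   # one row per model

# Equal-weight committee: y_COM(x) = (1/M) * sum_m y_m(x)
committee = predictions.mean(axis=0)

avg_member_err = np.mean((predictions - true_y) ** 2)  # average member error
committee_err = np.mean((committee - true_y) ** 2)     # committee error
print(committee_err <= avg_member_err)
```

By Jensen's inequality the committee's squared error never exceeds the average member's, which is the guarantee behind simple model averaging.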

Bayesian model averaging (BMA) is another widely used method that looks very much like a combined model. However, the difference between BMA and model combination is significant.

The mixture of Gaussians was discussed in the post 'Mixtures of Gaussians'. It can not only be used to introduce the EM algorithm but also contains a strategy to improve model performance.

Maximum likelihood cannot be applied to the Gaussian mixture model directly because of the severe defects we came across in 'Maximum Likelihood of Gaussian Mixtures'. Inspired by K-means, a two-step algorithm was developed.
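The two-step algorithm alternates between computing responsibilities (E step) and re-estimating parameters (M step). A minimal sketch for a 1-D, two-component mixture — where the synthetic data and starting values are illustrative assumptions — looks like this:

```python
import numpy as np

# EM sketch for a 1-D, two-component Gaussian mixture; the synthetic
# data and the starting values below are illustrative assumptions.
rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(-2, 1, 200), rng.normal(3, 1, 200)])

pi = np.array([0.5, 0.5])     # initial mixing coefficients
mu = np.array([-1.0, 1.0])    # initial means
var = np.array([1.0, 1.0])    # initial variances

def gauss(x, m, v):
    return np.exp(-(x - m) ** 2 / (2 * v)) / np.sqrt(2 * np.pi * v)

for _ in range(50):
    # E step: responsibilities gamma_nk proportional to pi_k N(x_n|mu_k, var_k).
    weighted = pi * gauss(data[:, None], mu, var)
    gamma = weighted / weighted.sum(axis=1, keepdims=True)
    # M step: re-estimate the parameters with the responsibilities held fixed.
    Nk = gamma.sum(axis=0)
    mu = (gamma * data[:, None]).sum(axis=0) / Nk
    var = (gamma * (data[:, None] - mu) ** 2).sum(axis=0) / Nk
    pi = Nk / len(data)

print(np.sort(mu))  # the means move toward the true centres -2 and 3
```

The E step plays the role of K-means' cluster assignment, but with soft (probabilistic) responsibilities instead of hard labels.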

Gaussian mixtures were discussed in 'Mixtures of Gaussians'. Once we have training data and a certain hypothesis, what we should do next is estimate the parameters of the model. Both kinds of parameters of a mixture of Gaussians

We introduced mixture distributions in the post 'An Introduction to Mixture Models', where the example was just a two-component Gaussian mixture. In this post, we would like to talk about Gaussian mixtures formally; they also serve to motivate the expectation-maximization (EM) algorithm.
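A two-component Gaussian mixture is easy to work with via ancestral sampling: first pick a component, then draw from that component's Gaussian. The component parameters below are illustrative assumptions:

```python
import numpy as np

# Ancestral sampling from p(x) = pi_1 N(x|mu_1, s1^2) + pi_2 N(x|mu_2, s2^2)
# (the mixing coefficients, means, and scales are illustrative assumptions).
rng = np.random.default_rng(0)
pi = np.array([0.3, 0.7])
mu = np.array([-2.0, 3.0])
sigma = np.array([1.0, 0.5])

# First choose a component z ~ Categorical(pi) for each sample,
# then draw x from that component's Gaussian.
z = rng.choice(2, size=10000, p=pi)
x = rng.normal(mu[z], sigma[z])

print(x.mean())  # close to the mixture mean pi_1*mu_1 + pi_2*mu_2 = 1.5
```

The latent indicator `z` is exactly the hidden variable that EM later infers responsibilities for.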

The original form of the K-means algorithm might be one of the most accessible algorithms in machine learning, and many books and courses start with it. However, if we convert the task K-means deals with into a more mathematical form, more interesting aspects come to us.
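The mathematical form in question is minimizing the distortion J = Σₙ ‖xₙ − μ_{k(n)}‖² by alternating assignment and update steps. A bare-bones sketch on toy 1-D data — where the data and the simple min/max initialisation are illustrative assumptions — is:

```python
import numpy as np

# Bare-bones K-means on toy 1-D data (the data and the min/max
# initialisation are illustrative assumptions, not a general recipe).
rng = np.random.default_rng(0)
X = np.concatenate([rng.normal(0, 0.5, 100), rng.normal(5, 0.5, 100)])[:, None]

K = 2
centers = np.array([[X.min()], [X.max()]])  # crude but deterministic start

for _ in range(20):
    # Assignment step: send each point to its nearest centre, which
    # minimises J for the current centres.
    labels = np.argmin(np.abs(X - centers.T), axis=1)
    # Update step: move each centre to the mean of its assigned points,
    # which minimises J for the current assignments.
    new_centers = np.array([X[labels == k].mean(axis=0) for k in range(K)])
    if np.allclose(new_centers, centers):
        break
    centers = new_centers

print(np.sort(centers.ravel()))  # close to the true cluster centres 0 and 5
```

Each of the two steps can only decrease J, which is why the alternation converges.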