[Review] Learning algorithms for classification-a comparison on handwritten digit recognition

Keywords: LeNet, Convolutional Neural Networks, Handwritten Digit Recognition. All the figures in this post come from 'Learning algorithms for classification-a comparison on handwritten digit recognition'. Basic works: convolutional networks. Inspiration: raw accuracy, training time, recognition time, and memory requirements should all be considered in classification. From the experiments and comparisons, the results can illuminate which one is...

[Neural Networks] Drawbacks of Backpropagation

The BP algorithm has been described in 'An Introduction to Backpropagation and Multilayer Perceptrons', and its implementation has been recorded in 'The Backpropagation Algorithm (Part I)' and 'The Backpropagation Algorithm (Part II)'. BP has worked in many applications, but the process has serious drawbacks. The basic BP algorithm is too slow for most practical applications; training might take days or even weeks. The following four posts investigate ways to make the BP algorithm more practical and to speed it up.

[Neural Networks] The Backpropagation Algorithm (Part I)

We have seen that a three-layer network is flexible in approximating functions. With more than three layers, a network can approximate any function as closely as we want. However, another problem arises: how to train these networks. This problem almost killed neural networks in the 1970s, until the backpropagation (BP for short) algorithm was found to be an efficient way to train multilayer networks.
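To make the idea concrete, here is a minimal sketch of the BP updates for a small 1-5-1 network with sigmoid hidden units and a linear output, fitted to a toy target. The network size, learning rate, and target function are assumptions chosen for illustration, not the post's own example.

```python
import numpy as np

# Minimal backpropagation sketch (an illustrative assumption, not the post's code):
# a 1-5-1 network with sigmoid hidden units fitting a sine-like target.
rng = np.random.default_rng(0)
x = np.linspace(-2, 2, 40).reshape(1, -1)           # inputs, shape (1, N)
t = np.sin(np.pi * x / 2)                           # targets

W1, b1 = rng.normal(size=(5, 1)), np.zeros((5, 1))  # hidden-layer weights/biases
W2, b2 = rng.normal(size=(1, 5)), np.zeros((1, 1))  # output-layer weights/biases
lr = 0.05

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for epoch in range(5000):
    # forward pass
    a1 = sigmoid(W1 @ x + b1)            # hidden activations
    a2 = W2 @ a1 + b2                    # linear output
    e = a2 - t                           # error

    # backward pass: propagate sensitivities from output to hidden layer
    s2 = 2 * e                           # derivative of squared error w.r.t. output
    s1 = (W2.T @ s2) * a1 * (1 - a1)     # chain rule through the sigmoid

    # gradient-descent weight updates (averaged over the batch)
    n = x.shape[1]
    W2 -= lr * s2 @ a1.T / n; b2 -= lr * s2.mean(axis=1, keepdims=True)
    W1 -= lr * s1 @ x.T / n;  b1 -= lr * s1.mean(axis=1, keepdims=True)

print("final MSE:", float(np.mean((W2 @ sigmoid(W1 @ x + b1) + b2 - t) ** 2)))
```

The key step is that each layer's sensitivity is computed from the layer above it, which is what makes BP efficient compared with estimating gradients one weight at a time.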

[Neural Networks] An Introduction to Backpropagation and Multilayer Perceptrons

The LMS algorithm was introduced before; it is a kind of 'performance learning'. We had studied several learning rules (algorithms), such as the 'Perceptron learning rule' and 'Supervised Hebbian learning', which were based on the physical mechanisms of biological neural networks. Then performance learning was presented. From that point on, we move further and further away from natural intelligence.

[Neural Networks] Widrow-Hoff Learning(Part II)

Keywords: Widrow-Hoff learning, LMS. LMS is short for least mean square, and it is the algorithm for searching for the minimum of the performance index. When $\boldsymbol{h}$ and $R$ are known and stationary, the minimum point can be found directly as $\boldsymbol{x}^\ast = R^{-1}\boldsymbol{h}$. If $R^{-1}$ is impossible to calculate, we can use the 'steepest descent algorithm'. However, the...
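As an illustration, here is a minimal sketch of the stochastic LMS (Widrow-Hoff) update applied to a small ADALINE; the "true" plant parameters and step size below are assumed values for the example, not the post's own numbers.

```python
import numpy as np

# Minimal LMS (Widrow-Hoff) sketch under assumed values: an ADALINE a = W p + b
# trained with the stochastic update W <- W + 2*alpha*e*p^T, b <- b + 2*alpha*e.
rng = np.random.default_rng(0)
true_W, true_b = np.array([[2.0, -1.0]]), np.array([[0.5]])  # assumed target system

W, b = np.zeros((1, 2)), np.zeros((1, 1))
alpha = 0.02                                   # step size (assumed)

for _ in range(2000):
    p = rng.normal(size=(2, 1))                # random input vector
    t = true_W @ p + true_b                    # target from the assumed linear system
    e = t - (W @ p + b)                        # instantaneous error
    W += 2 * alpha * e @ p.T                   # Widrow-Hoff update
    b += 2 * alpha * e

print("learned W:", W, "learned b:", b)        # should approach true_W, true_b
```

The point of the update is that it approximates steepest descent on the mean squared error using only the current sample, so neither $\boldsymbol{h}$ nor $R$ (nor $R^{-1}$) has to be computed explicitly.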

[Neural Networks] Widrow-Hoff Learning(Part I)

Performance learning was discussed in 'Performance Surfaces and Optimum Points', but we have not yet used it in any neural network. In this post, we talk about an important application of performance learning. This new neural network was invented by Bernard Widrow and his graduate student Marcian Hoff in 1960, at almost the same time as the Perceptron, which was discussed in 'Perceptron Learning Rule'. It is called Widrow-Hoff learning.

[Neural Networks] Conjugate Gradient

We have learned the 'steepest descent method' and 'Newton's method'. Each has advantages and limitations. The main advantage of Newton's method is its speed: it converges quickly. The main advantage of the steepest descent method is that it is guaranteed to converge to a local minimum.
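For reference, here is a minimal sketch of the linear conjugate gradient method applied to a quadratic performance index; the matrix, right-hand side, and stopping tolerance are assumptions chosen for the example, not the post's own values.

```python
import numpy as np

# Minimal conjugate gradient sketch (illustrative only): minimize
# F(x) = 1/2 x^T A x - b^T x for a symmetric positive-definite A,
# i.e. solve A x = b, in at most len(b) exact-arithmetic steps.
def conjugate_gradient(A, b, x0, tol=1e-10, max_iter=None):
    x = x0.astype(float)
    r = b - A @ x                  # residual = negative gradient of F at x
    p = r.copy()                   # first search direction = steepest descent
    max_iter = max_iter or len(b)
    for _ in range(max_iter):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)           # exact line-search step along p
        x += alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol:
            break
        beta = (r_new @ r_new) / (r @ r)     # Fletcher-Reeves-style coefficient
        p = r_new + beta * p                 # new direction, A-conjugate to the old ones
        r = r_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])       # assumed SPD matrix
b = np.array([1.0, 2.0])
print(conjugate_gradient(A, b, np.zeros(2))) # approximate solution of A x = b
```

This sits between the two earlier methods: each step only needs gradient-like information (no Hessian inverse, unlike Newton's method), yet the conjugate directions converge much faster than plain steepest descent on quadratics.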