[Neural Networks] An Introduction to Backpropagation and Multilayer Perceptrons

The LMS algorithm was introduced earlier; it is a kind of 'performance learning'. We have studied several learning rules (algorithms), such as the 'perceptron learning rule' and 'supervised Hebbian learning', which were based on the physical mechanisms of biological neural networks. Then performance learning was introduced, and from that point on we move further and further away from natural intelligence.

[Neural Networks] Widrow-Hoff Learning(Part II)

Keywords: Widrow-Hoff learning, LMS. LMS is short for least mean square; it is an algorithm for searching for the minimum of the performance index. When $\boldsymbol{h}$ and $R$ are known, the stationary point can be found directly. If $R^{-1}$ is impossible to calculate, we can use the 'steepest descent algorithm'. However, the...
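
For reference, a sketch of that claim, assuming the quadratic mean-square-error index $F(\boldsymbol{x}) = c - 2\boldsymbol{x}^T\boldsymbol{h} + \boldsymbol{x}^T R\boldsymbol{x}$ used in these Widrow-Hoff posts: setting the gradient to zero gives the stationary point in closed form, which is why the direct solution needs $R^{-1}$.

$$
\nabla F(\boldsymbol{x}) = -2\boldsymbol{h} + 2R\boldsymbol{x} = \boldsymbol{0}
\quad\Rightarrow\quad
\boldsymbol{x}^{\star} = R^{-1}\boldsymbol{h}
$$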

[Neural Networks] Widrow-Hoff Learning(Part I)

Performance learning was discussed in 'Performance Surfaces and Optimum Points', but we have not yet used it in any neural network. In this post, we look at an important application of performance learning. This new neural network was invented by Bernard Widrow and his graduate student Marcian Hoff in 1960, at almost the same time as the perceptron, which was discussed in 'Perceptron Learning Rule'. It is called Widrow-Hoff learning.

[Neural Networks] Conjugate Gradient

We have learned the 'steepest descent method' and 'Newton's method'. Each has advantages and limitations. The main advantage of Newton's method is its speed: it converges quickly. The main advantage of the steepest descent method is that it is guaranteed to converge to a local minimum.
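
A minimal sketch of the conjugate gradient idea for a quadratic index $F(\boldsymbol{x}) = \tfrac{1}{2}\boldsymbol{x}^T A\boldsymbol{x} + \boldsymbol{d}^T\boldsymbol{x} + c$; the symbols `A` and `d`, the exact line search, and the Fletcher-Reeves choice of $\beta_k$ are assumptions for illustration, not taken from the post.

```python
import numpy as np

def conjugate_gradient(A, d, x0, tol=1e-10):
    """Minimize F(x) = 0.5*x^T A x + d^T x + c for a symmetric positive-definite A.

    Sketch only: quadratic case, exact line search, Fletcher-Reeves beta.
    """
    x = np.asarray(x0, dtype=float)
    g = A @ x + d               # gradient of F at x
    p = -g                      # first search direction: steepest descent
    for _ in range(len(x)):     # at most n steps for an n-dimensional quadratic
        if np.linalg.norm(g) < tol:
            break
        alpha = -(g @ p) / (p @ A @ p)    # exact minimizer of F along p
        x = x + alpha * p
        g_new = A @ x + d
        beta = (g_new @ g_new) / (g @ g)  # Fletcher-Reeves
        p = -g_new + beta * p             # next direction, conjugate to the previous ones
        g = g_new
    return x

# Example: the minimum of F is at x* = -inv(A) @ d
A = np.array([[2.0, 1.0], [1.0, 2.0]])
d = np.array([-1.0, 0.0])
print(conjugate_gradient(A, d, x0=np.zeros(2)))   # ≈ [0.667, -0.333]
```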

[Neural Networks] Newton’s Method

The Taylor series gives us the conditions for minimum points based on both the first-order and the second-order terms. The first-order approximation of a performance index function produced a powerful algorithm for locating minimum points in the whole parameter space, which we call the 'steepest descent algorithm'.
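
For reference, a sketch in the notation assumed here (gradient $\boldsymbol{g}_k = \nabla F(\boldsymbol{x}_k)$ and Hessian $A_k = \nabla^2 F(\boldsymbol{x}_k)$): minimizing the second-order Taylor approximation around $\boldsymbol{x}_k$ yields Newton's update.

$$
F(\boldsymbol{x}_k + \Delta\boldsymbol{x}_k) \approx F(\boldsymbol{x}_k) + \boldsymbol{g}_k^T \Delta\boldsymbol{x}_k + \tfrac{1}{2}\Delta\boldsymbol{x}_k^T A_k \Delta\boldsymbol{x}_k
\quad\Rightarrow\quad
\boldsymbol{x}_{k+1} = \boldsymbol{x}_k - A_k^{-1}\boldsymbol{g}_k
$$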

[Neural Networks] Steepest Descent Method

'An Introduction to Performance Optimization' gave us a brief introduction to optimization algorithms. This post describes a typical search-direction ($\boldsymbol{p}_k$) based algorithm, and a variation of it that is a step-length ($\alpha_k$) based algorithm.
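
A minimal sketch of the direction-based form, assuming a fixed step length; the names `grad_F`, `alpha`, and the example function are illustrative placeholders, not from the post.

```python
import numpy as np

def steepest_descent(grad_F, x0, alpha=0.1, n_iter=100):
    """Iterate x_{k+1} = x_k + alpha * p_k with p_k = -grad F(x_k)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        p = -grad_F(x)        # search direction: negative gradient
        x = x + alpha * p     # fixed step length alpha along that direction
    return x

# Example on F(x) = x1^2 + 2*x2^2, whose minimum is at the origin
x_min = steepest_descent(lambda x: np.array([2 * x[0], 4 * x[1]]), x0=[2.0, 1.0])
print(x_min)   # close to [0, 0]
```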

[Neural Networks] An Introduction to Performance Optimization

The Taylor series was used to analyze the performance surface in 'Performance Surfaces and Optimum Points'. We now use it again to locate the optimum points of a given performance index. This short post is a brief introduction to performance optimization and to the three categories of optimization algorithms.
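
All three categories share the same iterative scheme; in the step-length $\alpha_k$ and search-direction $\boldsymbol{p}_k$ notation of the posts above, each algorithm differs only in how it chooses $\boldsymbol{p}_k$ and $\alpha_k$:

$$
\boldsymbol{x}_{k+1} = \boldsymbol{x}_k + \alpha_k \boldsymbol{p}_k
\qquad\text{or equivalently}\qquad
\Delta\boldsymbol{x}_k = \boldsymbol{x}_{k+1} - \boldsymbol{x}_k = \alpha_k \boldsymbol{p}_k
$$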