Keywords: performance optimization

Performance Optimization[1]

Taylor series were used to analyze the performance surface in 'An Introduction to Performance Optimization'. Here we use them again to locate the optimum points of a given performance index. This short post is a brief introduction to performance optimization and to three categories of optimization algorithms:

  1. "Steepest Descent"
  2. "Newton's Method"
  3. "Conjugate Gradient"

After investigating the performance surface, the next problem is to locate the optimum points of a given performance index. Recall that the performance index is a function of the parameters of the model, and the resulting equations mostly cannot be solved analytically. To find an approximate solution, searching the whole solution space is a straightforward strategy. However, no algorithm or computer can exhaustively search an entire parameter space, even one with only a single dimension, to locate the minimum points of the surface. So the search needs a map.

The algorithms discussed here were developed hundreds of years ago. The basic principles of optimization were discovered during the $17^{\text{th}}$ century, and some of them were put forward by Kepler, Fermat, Newton, and Leibniz. However, computers are far more powerful calculators than paper and pencil, so these optimization algorithms have been rediscovered and have become a major branch of mathematics. Thanks to the brilliant scientists for their contributions to humanity.

All the algorithms we are going to discuss are iterative. Their general framework is as follows (a minimal code sketch of this loop appears after the list):

  1. start from some initial guess $\boldsymbol{x}_0$
  2. update our guess in stages:
  • $\boldsymbol{x}_{k+1}=\boldsymbol{x}_k+\alpha_k \boldsymbol{p}_k$
  • or $\Delta \boldsymbol{x}_k=\boldsymbol{x}_{k+1}-\boldsymbol{x}_k=\alpha_k \boldsymbol{p}_k$
  3. check the termination condition and decide whether to go back to step 2 or to terminate the algorithm.
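
To make the framework concrete, here is a minimal sketch in Python (the function names, the fixed step length $\alpha_k$, and the termination test on $\|\Delta \boldsymbol{x}_k\|$ are illustrative assumptions, not part of the book's presentation):

```python
import numpy as np

def iterative_optimize(grad, x0, direction_fn, step_length=0.1,
                       tol=1e-6, max_iter=1000):
    """Generic iterative optimization loop.

    grad         -- gradient of the performance index, a function of x
    direction_fn -- rule producing the search direction p_k from (grad, x)
    step_length  -- alpha_k, taken here as a fixed constant for simplicity
    """
    x = np.asarray(x0, dtype=float)              # step 1: initial guess x_0
    for _ in range(max_iter):
        p = direction_fn(grad, x)                # choose the search direction p_k
        x_new = x + step_length * p              # step 2: x_{k+1} = x_k + alpha_k * p_k
        if np.linalg.norm(x_new - x) < tol:      # step 3: termination condition
            return x_new
        x = x_new
    return x
```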

In these algorithms, $\boldsymbol{p}_k$ is the search direction, which works like the compass of Captain Jack Sparrow: it leads you to what you want. $\alpha_k$ is the step length, which determines how far we should go along the current direction $\boldsymbol{p}_k$. $\boldsymbol{x}_0$ is the guessed initial position of the algorithm, and it matters as well: if we can initialize our starting position close to the destination, we can get there in a short time.

These three elements are the basis of the three categories of optimization algorithms. How to decide the direction to search, how far to go along that direction, and how to initialize the first point are the main research questions.
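
As a quick illustration of how a specific choice of these three elements produces an algorithm, the sketch below (continuing the code above) plugs the steepest-descent direction $\boldsymbol{p}_k = -\nabla F(\boldsymbol{x}_k)$ into the loop; the quadratic performance index $F(\boldsymbol{x}) = x_1^2 + 2x_2^2$ and the starting point are invented for the example.

```python
# Illustrative quadratic performance index F(x) = x1^2 + 2*x2^2,
# whose gradient is [2*x1, 4*x2] and whose minimum is at the origin.
def grad_F(x):
    return np.array([2.0 * x[0], 4.0 * x[1]])

# Steepest descent chooses p_k as the negative gradient at x_k.
def steepest_descent_direction(grad, x):
    return -grad(x)

x_min = iterative_optimize(grad_F, x0=[2.0, -1.0],
                           direction_fn=steepest_descent_direction,
                           step_length=0.1)
print(x_min)   # converges toward [0, 0], the minimum of F
```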

 

References


[1] Demuth, H.B., Beale, M.H., De Jesús, O. and Hagan, M.T., 2014. Neural Network Design. Martin Hagan.
Last modified: March 24, 2020