Haycroft, Rebecca Jane
Languages: English
Types: Doctoral thesis
Subjects: QA
The most common class of methods for solving quadratic optimisation problems is the class of gradient algorithms, the most famous of which is the Steepest Descent algorithm. The development of a particular gradient algorithm, the Barzilai-Borwein algorithm, has sparked considerable research in the area in recent years, and many algorithms now exist with faster rates of convergence than that of the Steepest Descent algorithm. The tools available for analysing and comparing the asymptotic rates of convergence of gradient algorithms are, however, limited, so it is somewhat unclear from the literature which algorithms possess the faster rates of convergence. In this thesis, methodology is developed to enable better analysis of the asymptotic rates of convergence of gradient algorithms applied to quadratic optimisation problems. This methodology stems from a link with the theory of optimal experimental design. It is established that gradient algorithms can be related to algorithms for constructing optimal experimental designs for linear regression models. Furthermore, the asymptotic rates of convergence of these gradient algorithms can be expressed through the asymptotic behaviour of multiplicative algorithms for constructing optimal experimental designs. The described connection to optimal experimental design has also been used to influence the creation of several new gradient algorithms which would not otherwise have been intuitively thought of. The asymptotic rates of convergence of these algorithms are studied extensively, and insight is given into how some gradient algorithms are able to converge faster than others. It is demonstrated that the worst rates are obtained when the corresponding multiplicative procedure for updating the designs converges to the optimal design. Simulations reveal that the asymptotic rates of convergence of some of these new algorithms compare favourably with those of existing gradient-type algorithms such as the Barzilai-Borwein algorithm.
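A minimal sketch, not taken from the thesis, of the comparison the abstract draws: exact-line-search Steepest Descent against the Barzilai-Borwein (BB1) step-size rule on a randomly generated strictly convex quadratic f(x) = ½xᵀAx − bᵀx. The dimension, spectrum of A, stopping tolerance and iteration limit below are illustrative choices.

```python
# Illustrative comparison of Steepest Descent and Barzilai-Borwein on a quadratic.
# Not code from the thesis; the problem instance is an arbitrary choice.
import numpy as np

def steepest_descent(A, b, x0, iters=200):
    """Exact-line-search steepest descent on f(x) = 0.5 x^T A x - b^T x."""
    x = x0.copy()
    errs = []
    for _ in range(iters):
        g = A @ x - b                    # gradient of the quadratic
        errs.append(np.linalg.norm(g))
        if errs[-1] < 1e-12:             # stop once the gradient is at rounding level
            break
        step = (g @ g) / (g @ (A @ g))   # exact minimiser along -g
        x = x - step * g
    return x, errs

def barzilai_borwein(A, b, x0, iters=200):
    """BB1 step: step_k = s^T s / s^T y, with s = x_k - x_{k-1}, y = g_k - g_{k-1}."""
    x_prev = x0.copy()
    g_prev = A @ x_prev - b
    # Take one exact-line-search step to generate the first (s, y) pair.
    x = x_prev - (g_prev @ g_prev) / (g_prev @ (A @ g_prev)) * g_prev
    errs = []
    for _ in range(iters):
        g = A @ x - b
        errs.append(np.linalg.norm(g))
        if errs[-1] < 1e-12:
            break
        s, y = x - x_prev, g - g_prev
        step = (s @ s) / (s @ y)         # s^T y = s^T A s > 0 for positive definite A
        x_prev, g_prev = x, g
        x = x - step * g
    return x, errs

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d = 20
    eigvals = np.linspace(1.0, 100.0, d)            # condition number 100
    Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
    A = Q @ np.diag(eigvals) @ Q.T                  # symmetric positive definite
    b = rng.standard_normal(d)
    x0 = np.zeros(d)

    _, sd_err = steepest_descent(A, b, x0)
    _, bb_err = barzilai_borwein(A, b, x0)
    print(f"Steepest Descent : {len(sd_err)} iterations, final ||g|| = {sd_err[-1]:.2e}")
    print(f"Barzilai-Borwein : {len(bb_err)} iterations, final ||g|| = {bb_err[-1]:.2e}")
```

On ill-conditioned quadratics such as this one, the Barzilai-Borwein iteration typically reaches a given gradient norm in far fewer steps than Steepest Descent, which is the gap in asymptotic convergence rates the abstract refers to.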

Table of contents:

    • 1 Introduction
      1.1 Convex Optimisation
        1.1.1 Convexity and Concavity
        1.1.2 Quadratic Optimisation
        1.1.3 Iterative Methods
      1.2 Steepest Descent
      1.3 The Conjugate Gradient Method
      1.4 Iterative Methods for Solving Linear Systems
        1.4.1 Stationary Iterative Methods
        1.4.2 Krylov Subspace Methods
      1.5 Recent Successful Gradient Algorithms
        1.5.1 The Barzilai-Borwein Algorithm
        1.5.2 Generalisations of the Barzilai-Borwein Algorithm
        1.5.3 Other New Gradient Methods
      1.6 Motivation of Thesis
    • 2 Gradient Algorithms and their Relation to Optimal Design Theory
      2.1 Renormalised Versions of Gradient Algorithms
        2.1.1 Renormalisation of the Steepest Descent Algorithm
        2.1.2 Renormalisation of a General Gradient Algorithm
      2.2 Asymptotic Behaviour of the Steepest Descent Algorithm
      2.3 Relation to Optimal Experimental Design
        2.3.1 Optimal Experimental Design Terminology
        2.3.2 Constructing Gradient Algorithms which Correspond to Given Optimality Criteria
        2.3.3 Rate of Convergence of Gradient Algorithms Corresponding to Optimal Designs
    • 3 The γ-Steepest Descent Algorithm
      3.1 The Renormalised γ-Steepest Descent Algorithm
        3.1.1 Concavity of the Optimality Criterion
        3.1.2 Convergence to an Optimal Design
        3.1.3 Speed of Convergence to the Optimum Design
        3.1.4 Behaviour of the Sequence {Φ(ξ^(k))}
      3.2 Asymptotic Rate of Convergence
        3.2.1 Dependence on γ
        3.2.2 Dependence on p
        3.2.3 Dependence on d
        3.2.4 Behaviour of r^(k)
      3.3 γ-Steepest Descent Summary
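Chapters 2 and 3 above relate gradient algorithms to multiplicative algorithms for constructing optimal experimental designs. The sketch below, also not taken from the thesis, shows one such design-construction procedure: the classical multiplicative weight update for D-optimal design on a finite candidate set, applied here to a quadratic regression model on a grid over [-1, 1] (an illustrative assumption).

```python
# Classical multiplicative algorithm for D-optimal design weights on a finite
# candidate set, the kind of design-construction procedure the abstract links
# to gradient algorithms.  Not code from the thesis; the regression model
# f(x) = (1, x, x^2) on a grid over [-1, 1] is an illustrative assumption.
import numpy as np

def multiplicative_d_optimal(F, iters=500):
    """F: (n, m) matrix whose rows are the regression vectors f(x_i).

    Update: w_i <- w_i * d_i(w) / m, where d_i(w) = f_i^T M(w)^{-1} f_i and
    M(w) = sum_i w_i f_i f_i^T.  Since sum_i w_i d_i(w) = m, the weights remain
    a probability vector, and for the D-criterion det M(w) increases monotonically.
    """
    n, m = F.shape
    w = np.full(n, 1.0 / n)                       # start from the uniform design
    for _ in range(iters):
        M = F.T @ (w[:, None] * F)                # information matrix M(w)
        d = np.einsum("ij,jk,ik->i", F, np.linalg.inv(M), F)   # variances d_i(w)
        w *= d / m                                # multiplicative update
        w /= w.sum()                              # guard against round-off drift
    return w

if __name__ == "__main__":
    xs = np.linspace(-1.0, 1.0, 21)                       # candidate design points
    F = np.vstack([np.ones_like(xs), xs, xs**2]).T        # quadratic regression model
    w = multiplicative_d_optimal(F)
    for x, wi in zip(xs, w):
        if wi > 1e-3:
            print(f"x = {x:+.2f}   weight = {wi:.3f}")
```

For quadratic regression on [-1, 1] the D-optimal design is known to put weight 1/3 on each of the points -1, 0 and 1, so after enough iterations the printed weights should concentrate on those three points.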
