Optimal Newton-type methods for nonconvex optimization

               C. Cartis, N. I. M. Gould and Ph. L. Toint

                        Report NAXYS-17-2011

We consider a general class of second-order iterations for unconstrained
optimization that includes regularization and trust-region variants of
Newton's method.  For each method in this class, we exhibit a smooth,
bounded-below objective function, whose gradient is globally Lipschitz
continuous within an open convex set containing all iterates encountered and
whose Hessian is alpha-Hölder continuous (for a given alpha in [0,1]) on
the path of the iterates, for which the method in question takes at least 
floor[epsilon^{-(2+alpha)/(1+alpha)}] function evaluations to
generate a first iterate whose gradient is smaller than epsilon in norm.
This provides a lower bound on the evaluation complexity of second-order
methods in our class when applied to smooth problems satisfying our
assumptions.  Furthermore, for alpha=1, this lower bound is of the same
order in epsilon as the upper bound on the evaluation complexity of cubic
regularization, thus implying that cubic regularization has optimal worst-case
evaluation complexity within our class of second-order methods.
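
As a concrete check of this optimality claim, substituting alpha = 1 into
the exponent of the lower bound gives

    (2 + alpha)/(1 + alpha) = (2 + 1)/(1 + 1) = 3/2,

so the bound reads floor[epsilon^{-3/2}], which is of the same order in
epsilon as the O(epsilon^{-3/2}) worst-case upper bound established for
cubic regularization.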