Adaptive Regularization Minimization Algorithms
                  with Non-Smooth Norms

            S. Gratton  and  Ph. L. Toint
                   May 2021


Abstract.  An adaptive regularization algorithm (AR1pGN) for unconstrained
nonlinear minimization is considered, which uses a model consisting of a
Taylor expansion of arbitrary degree $p$ and a regularization term involving a
possibly non-smooth norm (such as the $\ell_1$ or $\ell_\infty$ norm).  It is
shown that the non-smoothness of the norm does not affect the
$O(\epsilon_1^{-(p+1)/p})$ upper bound on evaluation complexity for finding
first-order $\epsilon_1$-approximate minimizers using $p$ derivatives, and
that this result does not hinge on the equivalence of norms in $\Re^n$.  It is
also shown that, if $p=2$, the bound of $O(\epsilon_2^{-3})$ evaluations for
finding second-order $\epsilon_2$-approximate minimizers still holds for a
variant of AR1pGN named AR2GN, despite the possibly non-smooth
nature of the regularization term.  Moreover, adapting the existing theory to
handle the non-smoothness results in an interesting modification of the
subproblem termination rules, leading to an even more compact complexity
analysis. In particular, it is shown when the Newton step is acceptable for an
adaptive regularization method. The approximate minimization
of quadratic polynomials regularized with non-smooth norms is then discussed,
and a new approximate second-order necessary optimality condition is derived
for this case.  A specialized algorithm is then proposed to enforce first-
and second-order conditions that are strong enough to ensure the existence of
a suitable step in AR1pGN (when $p=2$) and in AR2GN, and its iteration
complexity is analyzed.  A final section discusses how practical approximate
curvature measures may lead to weaker second-order optimality guarantees.
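
For context, a minimal sketch of the regularized model that such adaptive
regularization (ARp) methods minimize at iteration $k$; the notation
($T_{f,p}$, $\sigma_k$) follows common ARp conventions and is an assumption
here, not a quotation from the paper:
\[
  m_k(s) \;=\; T_{f,p}(x_k,s) + \frac{\sigma_k}{p+1}\,\|s\|^{p+1},
  \qquad
  T_{f,p}(x_k,s) \;=\; \sum_{j=0}^{p} \frac{1}{j!}\,\nabla^j f(x_k)[s]^j,
\]
where $\nabla^j f(x_k)[s]^j$ denotes the $j$-th derivative tensor of $f$
applied to $j$ copies of $s$, $\sigma_k>0$ is the adaptive regularization
parameter, and $\|\cdot\|$ may be a non-smooth norm such as $\ell_1$ or
$\ell_\infty$.  Note that for $p=2$ the exponent $(p+1)/p$ equals $3/2$,
recovering the familiar $O(\epsilon_1^{-3/2})$ first-order bound.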