From 37f1d3b8c96ebaf643ba8a200b2410339de7f597 Mon Sep 17 00:00:00 2001 From: "Documenter.jl" Date: Sat, 2 Nov 2024 16:16:13 +0000 Subject: [PATCH] build based on 2f52220 --- dev/fine-tuneFPS/index.html | 2 +- dev/index.html | 2 +- dev/reference/index.html | 12 ++++++------ dev/search/index.html | 2 +- dev/tutorial/index.html | 8 ++++---- 5 files changed, 13 insertions(+), 13 deletions(-) diff --git a/dev/fine-tuneFPS/index.html b/dev/fine-tuneFPS/index.html index 9bba2dd..cd9c754 100644 --- a/dev/fine-tuneFPS/index.html +++ b/dev/fine-tuneFPS/index.html @@ -18,4 +18,4 @@ stp = NLPStopping(nlp) data = FPSSSolver(stp) stats = GenericExecutionStats(nlp) -stats = SolverCore.solve!(data, stp, stats)
"Execution stats: first-order stationary"

List of possible options

Find below a list of the main options of fps_solve.

Tolerances on the problem

We use Stopping.jl to control the algorithmic flow; we refer to Stopping.jl and https://solverstoppingjulia.github.io for tutorials and documentation. By default, we use the function Fletcher_penalty_optimality_check as the optimality check, and the default tol_check is rtol * [1 + c(x₀); 1 + ∇f(x₀)] with rtol = 1e-7.

Additional parameters used in stopping the algorithm are defined in the following table.

Parameters | Type | Default | Description
lagrange_bound | Real | 1 / sqrt(eps(T)) | bound on the estimated Lagrange multipliers.
subsolver_max_iter | Real | 20000 | maximum number of iterations for the subproblem solver.

Algorithmic parameters

The metadata is defined in an AlgoData structure at the initialization of the FPSSSolver.

Parameters | Type | Default | Description
σ_0 | Real | 1e3 | initial value of the subproblem parameter σ
σ_max | Real | 1 / √eps(T) | maximum value of the subproblem parameter σ
σ_update | Real | T(2) | update value for the subproblem parameter σ
ρ_0 | Real | T(2) | initial value of the subproblem parameter ρ
ρ_max | Real | 1 / √eps(T) | maximum value of the subproblem parameter ρ
ρ_update | Real | T(2) | update value for the subproblem parameter ρ
δ_0 | Real | √eps(T) | initial value of the subproblem parameter δ
δ_max | Real | 1 / √eps(T) | maximum value of the subproblem parameter δ
δ_update | Real | T(10) | update value for the subproblem parameter δ
η_1 | Real | zero(T) | initial value of the subproblem parameter η
η_update | Real | one(T) | update value for the subproblem parameter η
yM | Real | typemax(T) | bound on the Lagrange multipliers
Δ | Real | T(0.95) | expected decrease in feasibility between two iterations
subproblem_solver | Function | KNITRO.has_knitro() ? NLPModelsKnitro.knitro : ipopt | solver used for the subproblem, see also JSOSolvers.jl
subpb_unbounded_threshold | Real | 1 / √eps(T) | the subproblem is declared unbounded when its objective falls below the negative of this value
atol_sub | Function | atol -> atol | absolute tolerance for the subproblem as a function of atol
rtol_sub | Function | rtol -> rtol | relative tolerance for the subproblem as a function of rtol
hessian_approx | Val(1) or Val(2) | Val(2) | selects the Hessian approximation
convex_subproblem | Bool | false | true if the subproblem is convex; used to set the convex option in Knitro
qds_solver | Symbol | :ldlt | initializes the QDSolver used to solve quasi-definite systems, either :ldlt or :iterative

Feasibility step

The metadata for the feasibility procedure is defined in a GNSolver structure at the initialization of the FPSSSolver.

Parameters | Type | Default | Description
η₁ | Real | 1e-3 | Feasibility step: decrease the trust-region radius when Ared/Pred < η₁.
η₂ | Real | 0.66 | Feasibility step: increase the trust-region radius when Ared/Pred > η₂.
σ₁ | Real | 0.25 | Feasibility step: decrease coefficient of the trust-region radius.
σ₂ | Real | 2.0 | Feasibility step: increase coefficient of the trust-region radius.
Δ₀ | Real | 1.0 | Feasibility step: initial radius.
feas_expected_decrease | Real | 0.95 | Feasibility step: a step is bad when ‖c(z)‖ / ‖c(x)‖ > feas_expected_decrease.
bad_steps_lim | Integer | 3 | Feasibility step: number of consecutive bad steps before using a second-order step.
TR_compute_step | KrylovSolver | LsmrSolver | Compute the direction in the feasibility step.
aggressive_step | KrylovSolver | CgSolver | Compute the (aggressive) direction in the feasibility step.
+stats = SolverCore.solve!(data, stp, stats)
"Execution stats: first-order stationary"

List of possible options

Find below a list of the main options of fps_solve.

Tolerances on the problem

We use Stopping.jl to control the algorithmic flow; we refer to Stopping.jl and https://solverstoppingjulia.github.io for tutorials and documentation. By default, we use the function Fletcher_penalty_optimality_check as the optimality check, and the default tol_check is rtol * [1 + c(x₀); 1 + ∇f(x₀)] with rtol = 1e-7.

Additional parameters used in stopping the algorithm are defined in the following table.

Parameters | Type | Default | Description
lagrange_bound | Real | 1 / sqrt(eps(T)) | bound on the estimated Lagrange multipliers.
subsolver_max_iter | Real | 20000 | maximum number of iterations for the subproblem solver.
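
For example, these tolerances and limits can be passed as keyword arguments to fps_solve, which forwards them to the underlying NLPStopping. This is only a sketch; the keywords atol, rtol, max_iter, and max_time follow Stopping.jl's conventions and may need adjusting to your installed versions.

using ADNLPModels, FletcherPenaltySolver
# Rosenbrock objective with one equality constraint (illustrative test problem)
nlp = ADNLPModel(x -> 100 * (x[2] - x[1]^2)^2 + (x[1] - 1)^2, [-1.2; 1.0],
                 x -> [x[1] * x[2] - 1], [0.0], [0.0])
# atol/rtol tighten the optimality check; max_iter/max_time bound the run
stats = fps_solve(nlp, atol = 1e-8, rtol = 1e-8, max_iter = 100, max_time = 30.0)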

Algorithmic parameters

The metadata is defined in an AlgoData structure at the initialization of the FPSSSolver.

Parameters | Type | Default | Description
σ_0 | Real | 1e3 | initial value of the subproblem parameter σ
σ_max | Real | 1 / √eps(T) | maximum value of the subproblem parameter σ
σ_update | Real | T(2) | update value for the subproblem parameter σ
ρ_0 | Real | T(2) | initial value of the subproblem parameter ρ
ρ_max | Real | 1 / √eps(T) | maximum value of the subproblem parameter ρ
ρ_update | Real | T(2) | update value for the subproblem parameter ρ
δ_0 | Real | √eps(T) | initial value of the subproblem parameter δ
δ_max | Real | 1 / √eps(T) | maximum value of the subproblem parameter δ
δ_update | Real | T(10) | update value for the subproblem parameter δ
η_1 | Real | zero(T) | initial value of the subproblem parameter η
η_update | Real | one(T) | update value for the subproblem parameter η
yM | Real | typemax(T) | bound on the Lagrange multipliers
Δ | Real | T(0.95) | expected decrease in feasibility between two iterations
subproblem_solver | Function | KNITRO.has_knitro() ? NLPModelsKnitro.knitro : ipopt | solver used for the subproblem, see also JSOSolvers.jl
subpb_unbounded_threshold | Real | 1 / √eps(T) | the subproblem is declared unbounded when its objective falls below the negative of this value
atol_sub | Function | atol -> atol | absolute tolerance for the subproblem as a function of atol
rtol_sub | Function | rtol -> rtol | relative tolerance for the subproblem as a function of rtol
hessian_approx | Val(1) or Val(2) | Val(2) | selects the Hessian approximation
convex_subproblem | Bool | false | true if the subproblem is convex; used to set the convex option in Knitro
qds_solver | Symbol | :ldlt | initializes the QDSolver used to solve quasi-definite systems, either :ldlt or :iterative
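
As a sketch of how these parameters are typically adjusted, the tutorial passes them as keyword arguments to fps_solve; treating σ_0 as an accepted keyword here is an assumption based on the AlgoData constructor.

using ADNLPModels, FletcherPenaltySolver, JSOSolvers
nlp = ADNLPModel(x -> (x[1] - 1)^2 + 4 * (x[2] - x[1])^2, [-1.2; 1.0],
                 x -> [x[1]^2 + x[2]^2 - 1], [0.0], [0.0])
stats = fps_solve(nlp,
  σ_0 = 1e2,                 # smaller initial penalty parameter (assumed keyword)
  hessian_approx = Val(1),   # switch the Hessian approximation
  subproblem_solver = tron,  # matrix-free subproblem solver from JSOSolvers.jl
  qds_solver = :iterative,   # iterative quasi-definite solver instead of :ldlt
)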

Feasibility step

The metadata for the feasibility procedure is defined in a GNSolver structure at the initialization of the FPSSSolver.

Parameters | Type | Default | Description
η₁ | Real | 1e-3 | Feasibility step: decrease the trust-region radius when Ared/Pred < η₁.
η₂ | Real | 0.66 | Feasibility step: increase the trust-region radius when Ared/Pred > η₂.
σ₁ | Real | 0.25 | Feasibility step: decrease coefficient of the trust-region radius.
σ₂ | Real | 2.0 | Feasibility step: increase coefficient of the trust-region radius.
Δ₀ | Real | 1.0 | Feasibility step: initial radius.
feas_expected_decrease | Real | 0.95 | Feasibility step: a step is bad when ‖c(z)‖ / ‖c(x)‖ > feas_expected_decrease.
bad_steps_lim | Integer | 3 | Feasibility step: number of consecutive bad steps before using a second-order step.
TR_compute_step | KrylovSolver | LsmrSolver | Compute the direction in the feasibility step.
aggressive_step | KrylovSolver | CgSolver | Compute the (aggressive) direction in the feasibility step.
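
A minimal sketch of building this structure with custom values; the keyword names follow the GNSolver documentation, and passing the result to the solver through the feasibility_solver keyword of FPSSSolver is an assumption.

using FletcherPenaltySolver
x0 = [-1.2; 1.0]
y0 = [0.0]  # one equality constraint, hence one Lagrange multiplier
gn = FletcherPenaltySolver.GNSolver(x0, y0; η₁ = 1e-2, η₂ = 0.9, Δ₀ = 0.5)
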
diff --git a/dev/index.html b/dev/index.html index 4617a3d..2ff0312 100644 --- a/dev/index.html +++ b/dev/index.html @@ -15,4 +15,4 @@ [0.0], [1.0], ) -stats = fps_solve(nlp)
"Execution stats: first-order stationary"

Bug reports and discussions

If you think you found a bug, feel free to open an issue. Focused suggestions and requests can also be opened as issues. Before opening a pull request, please start an issue or a discussion on the topic.

If you want to ask a question not suited for a bug report, feel free to start a discussion here. This forum is for general discussion about this repository and the JuliaSmoothOptimizers organization, so questions about any of our packages are welcome.

+stats = fps_solve(nlp)
"Execution stats: first-order stationary"

Bug reports and discussions

If you think you found a bug, feel free to open an issue. Focused suggestions and requests can also be opened as issues. Before opening a pull request, please start an issue or a discussion on the topic.

If you want to ask a question not suited for a bug report, feel free to start a discussion here. This forum is for general discussion about this repository and the JuliaSmoothOptimizers organization, so questions about any of our packages are welcome.

diff --git a/dev/reference/index.html b/dev/reference/index.html index 0e8c7f2..18d749e 100644 --- a/dev/reference/index.html +++ b/dev/reference/index.html @@ -1,19 +1,19 @@ Reference · FletcherPenaltySolver

Reference

Contents

Index

FletcherPenaltySolver.AlgoDataType
AlgoData(; kwargs...) 
-AlgoData(T::DataType; kwargs...)

Structure containing all the parameters used in the fps_solve call. T is the datatype used in the algorithm; by default it is Float64. Returns an AlgoData structure.

Arguments

The keyword arguments may include:

  • σ_0::Real = T(1e3): Initialize subproblem's parameter σ;
  • σ_max::Real = 1 / √eps(T): Maximum value for subproblem's parameter σ;
  • σ_update::Real = T(2): Update subproblem's parameter σ;
  • ρ_0::Real = one(T): Initialize subproblem's parameter ρ;
  • ρ_max::Real = 1 / √eps(T): Maximum value for subproblem's parameter ρ;
  • ρ_update::Real = T(2): Update subproblem's parameter ρ;
  • δ_0::Real = √eps(T): Initialize subproblem's parameter δ;
  • δ_max::Real = 1 / √eps(T): Maximum value for subproblem's parameter δ;
  • δ_update::Real = T(10): Update subproblem's parameter δ;
  • η_1::Real = zero(T): Initialize subproblem's parameter η;
  • η_update::Real = one(T): Update subproblem's parameter η;
  • yM::Real = typemax(T): bound on the Lagrange multipliers;
  • Δ::Real = T(0.95): expected decrease in feasibility between two iterations;
  • subproblem_solver::Function = ipopt: solver used for the subproblem;
  • subpb_unbounded_threshold::Real = 1 / √eps(T): the subproblem is declared unbounded when its objective falls below the negative of this value;
  • subsolver_max_iter::Int = 20000: maximum number of iterations for the subproblem solver;
  • atol_sub::Function = atol -> atol: absolute tolerance for the subproblem as a function of atol;
  • rtol_sub::Function = rtol -> rtol: relative tolerance for the subproblem as a function of rtol;
  • hessian_approx = Val(2): either Val(1) or Val(2); selects the Hessian approximation;
  • convex_subproblem::Bool = false: true if the subproblem is convex; used to set the convex option in Knitro;
  • lagrange_bound::T = 1 / sqrt(eps(T)): upper bound on the Lagrange multipliers.

For more details, we refer to the package documentation fine-tuneFPS.md.

source
FletcherPenaltySolver.FPSSSolverType
FPSSSolver(nlp::AbstractNLPModel [, x0 = nlp.meta.x0]; kwargs...)
-FPSSSolver(stp::NLPStopping; kwargs...)

Structure regrouping all the structures used during the fps_solve call. It returns an FPSSSolver structure.

Arguments

The keyword arguments may include:

  • stp::NLPStopping: Stopping structure for this algorithm workflow;
  • meta::AlgoData{T}: see AlgoData;
  • workspace: allocated space for the solver itself;
  • qdsolver: solver structure for the linear algebra part; it contains the allocations for that part. By default an LDLtSolver, but an alternative is IterativeSolver;
  • subproblem_solver::AbstractOptimizationSolver: by default a subproblem_solver_correspondence[Symbol(meta.subproblem_solver)];
  • sub_stats::GenericExecutionStats: stats structure for the result of subproblem_solver;
  • feasibility_solver: by default a GNSolver, see GNSolver;
  • model::FletcherPenaltyNLP: subproblem;
  • sub_stp::NLPStopping: Stopping structure for the subproblem.

Note:

  • subproblem_solver is accessible from the subproblem_solver_correspondence::Dict.
  • the qdsolver is accessible from the dictionary qdsolver_correspondence.
source
FletcherPenaltySolver.FletcherPenaltyNLPType
FletcherPenaltyNLP(nlp, σ, hessian_approx, [x0 = nlp.meta.x0]; qds = LDLtSolver(nlp, S(0)))
+AlgoData(T::DataType; kwargs...)

Structure containing all the parameters used in the fps_solve call. T is the datatype used in the algorithm; by default it is Float64. Returns an AlgoData structure.

Arguments

The keyword arguments may include:

  • σ_0::Real = T(1e3): Initialize subproblem's parameter σ;
  • σ_max::Real = 1 / √eps(T): Maximum value for subproblem's parameter σ;
  • σ_update::Real = T(2): Update subproblem's parameter σ;
  • ρ_0::Real = one(T): Initialize subproblem's parameter ρ;
  • ρ_max::Real = 1 / √eps(T): Maximum value for subproblem's parameter ρ;
  • ρ_update::Real = T(2): Update subproblem's parameter ρ;
  • δ_0::Real = √eps(T): Initialize subproblem's parameter δ;
  • δ_max::Real = 1 / √eps(T): Maximum value for subproblem's parameter δ;
  • δ_update::Real = T(10): Update subproblem's parameter δ;
  • η_1::Real = zero(T): Initialize subproblem's parameter η;
  • η_update::Real = one(T): Update subproblem's parameter η;
  • yM::Real = typemax(T): bound on the Lagrange multipliers;
  • Δ::Real = T(0.95): expected decrease in feasibility between two iterations;
  • subproblem_solver::Function = ipopt: solver used for the subproblem;
  • subpb_unbounded_threshold::Real = 1 / √eps(T): the subproblem is declared unbounded when its objective falls below the negative of this value;
  • subsolver_max_iter::Int = 20000: maximum number of iterations for the subproblem solver;
  • atol_sub::Function = atol -> atol: absolute tolerance for the subproblem as a function of atol;
  • rtol_sub::Function = rtol -> rtol: relative tolerance for the subproblem as a function of rtol;
  • hessian_approx = Val(2): either Val(1) or Val(2); selects the Hessian approximation;
  • convex_subproblem::Bool = false: true if the subproblem is convex; used to set the convex option in Knitro;
  • lagrange_bound::T = 1 / sqrt(eps(T)): upper bound on the Lagrange multipliers.

For more details, we refer to the package documentation fine-tuneFPS.md.

source
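
A small sketch of building the parameter structure directly with both documented constructors; the keyword values shown are arbitrary illustrations.

using FletcherPenaltySolver
meta64 = FletcherPenaltySolver.AlgoData(hessian_approx = Val(1))   # T defaults to Float64
meta32 = FletcherPenaltySolver.AlgoData(Float32; σ_0 = 1.0f2)      # single-precision parameters
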
FletcherPenaltySolver.FPSSSolverType
FPSSSolver(nlp::AbstractNLPModel [, x0 = nlp.meta.x0]; kwargs...)
+FPSSSolver(stp::NLPStopping; kwargs...)

Structure regrouping all the structures used during the fps_solve call. It returns an FPSSSolver structure.

Arguments

The keyword arguments may include:

  • stp::NLPStopping: Stopping structure for this algorithm workflow;
  • meta::AlgoData{T}: see AlgoData;
  • workspace: allocated space for the solver itself;
  • qdsolver: solver structure for the linear algebra part; it contains the allocations for that part. By default an LDLtSolver, but an alternative is IterativeSolver;
  • subproblem_solver::AbstractOptimizationSolver: by default a subproblem_solver_correspondence[Symbol(meta.subproblem_solver)];
  • sub_stats::GenericExecutionStats: stats structure for the result of subproblem_solver;
  • feasibility_solver: by default a GNSolver, see GNSolver;
  • model::FletcherPenaltyNLP: subproblem;
  • sub_stp::NLPStopping: Stopping structure for the subproblem.

Note:

  • subproblem_solver is accessible from the subproblem_solver_correspondence::Dict.
  • the qdsolver is accessible from the dictionary qdsolver_correspondence.
source
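
A sketch of the advanced workflow, following the same sequence as in the fine-tuneFPS page: build the Stopping, the solver workspace, and the stats structure, then call SolverCore.solve!.

using ADNLPModels, FletcherPenaltySolver, SolverCore, Stopping
nlp = ADNLPModel(x -> 100 * (x[2] - x[1]^2)^2 + (x[1] - 1)^2, [-1.2; 1.0],
                 x -> [x[1] * x[2] - 1], [0.0], [0.0])
stp = NLPStopping(nlp)              # stopping structure driving the algorithm
data = FPSSSolver(stp)              # solver workspace built from the Stopping
stats = GenericExecutionStats(nlp)  # output structure from SolverCore.jl
stats = SolverCore.solve!(data, stp, stats)
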
FletcherPenaltySolver.FletcherPenaltyNLPType
FletcherPenaltyNLP(nlp, σ, hessian_approx, [x0 = nlp.meta.x0]; qds = LDLtSolver(nlp, S(0)))
 FletcherPenaltyNLP(nlp, σ, ρ, δ, hessian_approx, [x0 = nlp.meta.x0]; qds = LDLtSolver(nlp, S(0)))
 FletcherPenaltyNLP(nlp; σ_0 = 1, ρ_0 = 0, δ_0 = 0, hessian_approx = Val(2), x0 = nlp.meta.x0, qds = LDLtSolver(nlp, S(0)))

We consider here the implementation of Fletcher's exact penalty method for the minimization problem:

\[\min_x \ f(x) \quad \text{s.t.} \quad c(x) = \ell, \quad l \leq x \leq u\]

using Fletcher penalty function:

\[\min_x \ f(x) - (c(x) - \ell)^T y_s(x) + \tfrac{\rho}{2} \|c(x) - \ell\|_2^2 \quad \text{s.t.} \quad l \leq x \leq u\]

where

\[y_s(x) \in \arg\min_y \ \tfrac{1}{2} \|A(x) y - g(x)\|_2^2 + \sigma (c(x) - \ell)^T y + \tfrac{1}{2} \|\delta y\|_2^2\]

Arguments

  • nlp::AbstractNLPModel: the model solved, see NLPModels.jl;
  • x::S: Initial guess. If x is not specified, then nlp.meta.x0 is used;
  • σ, ρ, δ parameters of the subproblem;
  • hessian_approx: either Val(1) or Val(2) for the Hessian approximation;
  • qds: solver structure for the linear algebra computations, see LDLtSolver or IterativeSolver.

Notes:

  • The obj, grad, and objgrad functions evaluate functions from the original nlp. These values are stored in fx, cx, gx.
  • The value of the penalty vector ys is also stored.
  • The Hessian structure is dense.

Examples

julia> using FletcherPenaltySolver, ADNLPModels
 julia> nlp = ADNLPModel(x -> 100 * (x[2] - x[1]^2)^2 + (x[1] - 1)^2, [-1.2; 1.0])
-julia> fp_sos  = FletcherPenaltyNLP(nlp)
source
FletcherPenaltySolver.GNSolverType
GNSolver(x, y; kwargs...)

Structure containing all the parameters used in the feasibility step. x is an initial guess, and y is an initial guess for the Lagrange multiplier. Returns a GNSolver structure.

Arguments

The keyword arguments may include:

  • η₁::T=T(1e-3): Feasibility step: decrease the trust-region radius when Ared/Pred < η₁.
  • η₂::T=T(0.66): Feasibility step: increase the trust-region radius when Ared/Pred > η₂.
  • σ₁::T=T(0.25): Feasibility step: decrease coefficient of the trust-region radius.
  • σ₂::T=T(2.0): Feasibility step: increase coefficient of the trust-region radius.
  • Δ₀::T=one(T): Feasibility step: initial radius.
  • bad_steps_lim::Integer=3: Feasibility step: consecutive bad steps before using a second order step.
  • feas_expected_decrease::T=T(0.95): Feasibility step: a step is bad when ‖c(z)‖ / ‖c(x)‖ > feas_expected_decrease.
  • TR_compute_step = LsmrSolver(length(y0), length(x0), S): Compute the direction in the feasibility step.
  • aggressive_step = CgSolver(length(x0), length(x0), S): Compute the direction in the feasibility step in aggressive mode.
source
FletcherPenaltySolver.Fletcher_penalty_optimality_checkMethod
Fletcher_penalty_optimality_check(pb::AbstractNLPModel, state::NLPAtX)

Optimality function used by default in the algorithm. An alternative is to use the function KKT from Stopping.jl.

The function returns a vector of length ncon + nvar containing:

  • |c(x) - lcon| / |x|₂
  • res / |λ|₂, or x - max(min(x - res, uvar), lvar) if the problem has bounds

The fields x, cx, and res need to be filled. If state.lambda is nothing, then we take |λ|₂ = 1.

source
FletcherPenaltySolver.TR_lsmrMethod
TR_lsmr(solver, cz, Jz, ctol, Δ, normcz, Jd)

Compute a direction d such that

\[\begin{aligned} +julia> fp_sos = FletcherPenaltyNLP(nlp)

source
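
Since the penalty model is itself an AbstractNLPModel, the standard NLPModels API evaluates Fletcher's penalty function; a minimal sketch, assuming an equality-constrained model:

using ADNLPModels, FletcherPenaltySolver, NLPModels
nlp = ADNLPModel(x -> 100 * (x[2] - x[1]^2)^2 + (x[1] - 1)^2, [-1.2; 1.0],
                 x -> [x[1] * x[2] - 1], [0.0], [0.0])
fp = FletcherPenaltyNLP(nlp)   # default σ_0, ρ_0, δ_0 and hessian_approx = Val(2)
fx = obj(fp, nlp.meta.x0)      # value of the penalty function at x0
gx = grad(fp, nlp.meta.x0)     # gradient of the penalty function at x0
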
FletcherPenaltySolver.GNSolverType
GNSolver(x, y; kwargs...)

Structure containing all the parameters used in the feasibility step. x is an initial guess, and y is an initial guess for the Lagrange multiplier. Returns a GNSolver structure.

Arguments

The keyword arguments may include:

  • η₁::T=T(1e-3): Feasibility step: decrease the trust-region radius when Ared/Pred < η₁.
  • η₂::T=T(0.66): Feasibility step: increase the trust-region radius when Ared/Pred > η₂.
  • σ₁::T=T(0.25): Feasibility step: decrease coefficient of the trust-region radius.
  • σ₂::T=T(2.0): Feasibility step: increase coefficient of the trust-region radius.
  • Δ₀::T=one(T): Feasibility step: initial radius.
  • bad_steps_lim::Integer=3: Feasibility step: consecutive bad steps before using a second order step.
  • feas_expected_decrease::T=T(0.95): Feasibility step: a step is bad when ‖c(z)‖ / ‖c(x)‖ > feas_expected_decrease.
  • TR_compute_step = LsmrSolver(length(y0), length(x0), S): Compute the direction in the feasibility step.
  • aggressive_step = CgSolver(length(x0), length(x0), S): Compute the direction in the feasibility step in aggressive mode.
source
FletcherPenaltySolver.Fletcher_penalty_optimality_checkMethod
Fletcher_penalty_optimality_check(pb::AbstractNLPModel, state::NLPAtX)

Optimality function used by default in the algorithm. An alternative is to use the function KKT from Stopping.jl.

The function returns a vector of length ncon + nvar containing:

  • |c(x) - lcon| / |x|₂
  • res / |λ|₂, or x - max(min(x - res, uvar), lvar) if the problem has bounds

The fields x, cx, and res need to be filled. If state.lambda is nothing, then we take |λ|₂ = 1.

source
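
A hedged sketch of swapping the optimality measure: KKT is provided by Stopping.jl, and passing it through the optimality_check keyword of NLPStopping is assumed to follow the usual Stopping.jl interface.

using ADNLPModels, FletcherPenaltySolver, Stopping
nlp = ADNLPModel(x -> (1 - x[1])^2, [-1.2; 1.0], x -> [10 * (x[2] - x[1]^2)], [0.0], [0.0])
stp = NLPStopping(nlp, optimality_check = KKT)  # KKT residual instead of the default check
stats = fps_solve(stp)
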
FletcherPenaltySolver.TR_lsmrMethod
TR_lsmr(solver, cz, Jz, ctol, Δ, normcz, Jd)

Compute a direction d such that

\[\begin{aligned} \min_{d} \quad & \|c + Jz' d \| \\ \text{s.t.} \quad & \|d\| \leq \Delta, -\end{aligned}\]

using the lsmr method from Krylov.jl.

Output

  • d: solution
  • Jd: product of the solution with J.
  • infeasible: true if the problem is infeasible.
  • solved: true if the problem has been successfully solved.
source
FletcherPenaltySolver.cons_norhs!Method
cons_norhs!(nlp::FletcherPenaltyNLP, x, cx)

Redefine the NLPModel function cons! to account for a non-zero right-hand side in the equality constraints. It returns cons(nlp, x) - nlp.meta.lcon.

source
FletcherPenaltySolver.feasibility_stepMethod
feasibility_step(feasibility_solver, nlp, x, cx, normcx, Jx, ρ, ctol; kwargs...)

Approximately solves min ‖c(x) - l‖, where l is nlp.meta.lcon, using a trust-region Levenberg-Marquardt method.

Arguments

  • η₁::AbstractFloat = feasibility_solver.feas_η₁: decrease the trust-region radius when Ared/Pred < η₁.
  • η₂::AbstractFloat = feasibility_solver.feas_η₂: increase the trust-region radius when Ared/Pred > η₂.
  • σ₁::AbstractFloat = feasibility_solver.feas_σ₁: decrease coefficient of the trust-region radius.
  • σ₂::AbstractFloat = feasibility_solver.feas_σ₂: increase coefficient of the trust-region radius.
  • Δ₀::T = feasibility_solver.feas_Δ₀: initial trust-region radius.
  • bad_steps_lim::Integer = feasibility_solver.bad_steps_lim: consecutive bad steps before using a second order step.
  • expected_decrease::T = feasibility_solver.feas_expected_decrease: a step is bad when ‖c(z)‖ / ‖c(x)‖ > feas_expected_decrease.
  • max_eval::Int = 1_000: maximum evaluations.
  • max_time::AbstractFloat = 60.0: maximum time.
  • max_feas_iter::Int = typemax(Int64): maximum number of iterations.

Output

  • z, cz, normcz, Jz: the new iterate, and updated evaluations.
  • status: Computation status. Possible outcomes are: :success, :max_eval, :max_time, :max_iter, :unknown_tired, :infeasible, :unknown.
source
FletcherPenaltySolver.fps_solveMethod
fps_solve(nlp::AbstractNLPModel{T, S}, x0::S = nlp.meta.x0; subsolver_verbose::Int = 0, kwargs...)

Compute a local minimum of a bound- and equality-constrained optimization problem using Fletcher's penalty function and the implementation described in

Estrin, R., Friedlander, M. P., Orban, D., & Saunders, M. A. (2020).
+\end{aligned}\]

using the lsmr method from Krylov.jl.

Output

  • d: solution
  • Jd: product of the solution with J.
  • infeasible: true if the problem is infeasible.
  • solved: true if the problem has been successfully solved.
source
FletcherPenaltySolver.cons_norhs!Method
cons_norhs!(nlp::FletcherPenaltyNLP, x, cx)

Redefine the NLPModel function cons! to account for a non-zero right-hand side in the equality constraints. It returns cons(nlp, x) - nlp.meta.lcon.

source
FletcherPenaltySolver.feasibility_stepMethod
feasibility_step(feasibility_solver, nlp, x, cx, normcx, Jx, ρ, ctol; kwargs...)

Approximately solves min ‖c(x) - l‖, where l is nlp.meta.lcon, using a trust-region Levenberg-Marquardt method.

Arguments

  • η₁::AbstractFloat = feasibility_solver.feas_η₁: decrease the trust-region radius when Ared/Pred < η₁.
  • η₂::AbstractFloat = feasibility_solver.feas_η₂: increase the trust-region radius when Ared/Pred > η₂.
  • σ₁::AbstractFloat = feasibility_solver.feas_σ₁: decrease coefficient of the trust-region radius.
  • σ₂::AbstractFloat = feasibility_solver.feas_σ₂: increase coefficient of the trust-region radius.
  • Δ₀::T = feasibility_solver.feas_Δ₀: initial trust-region radius.
  • bad_steps_lim::Integer = feasibility_solver.bad_steps_lim: consecutive bad steps before using a second order step.
  • expected_decrease::T = feasibility_solver.feas_expected_decrease: a step is bad when ‖c(z)‖ / ‖c(x)‖ > feas_expected_decrease.
  • max_eval::Int = 1_000: maximum evaluations.
  • max_time::AbstractFloat = 60.0: maximum time.
  • max_feas_iter::Int = typemax(Int64): maximum number of iterations.

Output

  • z, cz, normcz, Jz: the new iterate, and updated evaluations.
  • status: Computation status. Possible outcomes are: :success, :max_eval, :max_time, :max_iter, :unknown_tired, :infeasible, :unknown.
source
FletcherPenaltySolver.fps_solveMethod
fps_solve(nlp::AbstractNLPModel{T, S}, x0::S = nlp.meta.x0; subsolver_verbose::Int = 0, kwargs...)

Compute a local minimum of a bound- and equality-constrained optimization problem using Fletcher's penalty function and the implementation described in

Estrin, R., Friedlander, M. P., Orban, D., & Saunders, M. A. (2020).
 Implementing a smooth exact penalty function for equality-constrained nonlinear optimization.
 SIAM Journal on Scientific Computing, 42(3), A1809-A1835.
 https://doi.org/10.1137/19M1238265

For advanced usage, the principal call to the solver uses an NLPStopping; see Stopping.jl.

fps_solve(stp::NLPStopping, fpssolver::FPSSSolver{T, QDS, US}; subsolver_verbose::Int = 0)
 fps_solve(stp::NLPStopping; subsolver_verbose::Int = 0, kwargs...)

Arguments

  • nlp::AbstractNLPModel: the model solved, see NLPModels.jl;
  • x: Initial guess. If x is not specified, then nlp.meta.x0 is used.

Keyword arguments

  • fpssolver: see FPSSSolver;
  • verbose::Int = 0: if > 0, display iteration information of the solver;
  • subsolver_verbose::Int = 0: if > 0, display iteration information of the subsolver;

All the information regarding stopping criteria can be set in the NLPStopping object. Additional kwargs are given to the NLPStopping. By default, the optimality condition used to declare optimality is Fletcher_penalty_optimality_check.

Output

The returned value is a GenericExecutionStats, see SolverCore.jl.

If one defines a Stopping before calling fps_solve, it is possible to access all the information computed by the algorithm.

Notes

  • If the problem has inequalities, we use slack variables to get only equalities and bounds via NLPModelsModifiers.jl.
  • stp.current_state.res contains the gradient of Fletcher's penalty function.
  • subproblem_solver must take an NLPStopping as input, see StoppingInterface.jl.
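
For instance, a minimal sketch of this advanced usage, reusing the problem from the tutorial:

using ADNLPModels, FletcherPenaltySolver, Stopping
nlp = ADNLPModel(x -> (1 - x[1])^2, [-1.2; 1.0], x -> [10 * (x[2] - x[1]^2)], [0.0], [0.0])
stp = NLPStopping(nlp)
stats = fps_solve(stp)
xsol = stp.current_state.x    # final iterate kept in the Stopping state
res  = stp.current_state.res  # gradient of Fletcher's penalty function (see Notes)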

Callback

The callback is called at each iteration. The expected signature of the callback is callback(nlp, solver, stats), and its output is ignored. Changing any of the input arguments will affect the subsequent iterations. In particular, setting stats.status = :user will stop the algorithm. All relevant information should be available in nlp and solver. Notably, you can access, and modify, the following:

  • solver: see FPSSSolver
  • stats: structure holding the output of the algorithm (GenericExecutionStats), which contains, among other things:
    • stats.dual_feas: norm of current gradient of the Lagrangian;
    • stats.primal_feas: norm of current feasibility;
    • stats.iter: current iteration counter;
    • stats.objective: current objective function value;
    • stats.solution: current iterate;
    • stats.multipliers: current Lagrange multipliers estimate;
    • stats.multipliers_L and stats.multipliers_U: current Lagrange multipliers estimate for the lower and upper bounds respectively;
    • stats.status: current status of the algorithm. Should be :unknown unless the algorithm has attained a stopping criterion. Changing this to anything will stop the algorithm, but you should use :user to properly indicate the intention.
    • stats.elapsed_time: elapsed time in seconds.
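
A minimal sketch of such a callback, stopping the run after a fixed number of iterations; passing it through a callback keyword is an assumption based on the usual JSO solver interface, and only the documented fields stats.iter and stats.status are used.

using ADNLPModels, FletcherPenaltySolver
nlp = ADNLPModel(x -> 100 * (x[2] - x[1]^2)^2 + (x[1] - 1)^2, [-1.2; 1.0],
                 x -> [x[1] * x[2] - 1], [0.0], [0.0])
function stop_after_5(nlp, solver, stats)
  if stats.iter >= 5
    stats.status = :user  # documented way to request an early stop
  end
end
stats = fps_solve(nlp, callback = stop_after_5)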

Examples

julia> using FletcherPenaltySolver, ADNLPModels
 julia> nlp = ADNLPModel(x -> 100 * (x[2] - x[1]^2)^2 + (x[1] - 1)^2, [-1.2; 1.0]);
 julia> stats = fps_solve(nlp)
-"Execution stats: first-order stationary"
source
FletcherPenaltySolver.ghjvprod_nln!Method
ghjvprod_nln!(nlp::FletcherPenaltyNLP, x, y, v, Hv; obj_weight = one(S)) where {S}

Redefine the NLPModel function ghjvprod to account for a Lagrange multiplier of size < ncon.

source
FletcherPenaltySolver.hess_nlnMethod
hess_nln(nlp::FletcherPenaltyNLP, x, y; obj_weight = one(S)) where {S}

Redefine the NLPModel function hprod to account for a Lagrange multiplier of size < ncon.

source
FletcherPenaltySolver.hess_nln_coord!Method
hess_nln_nln!(nlp::FletcherPenaltyNLP, x, y, vals; obj_weight = one(S)) where {S}

Redefine the NLPModel function hprod to account for a Lagrange multiplier of size < ncon.

source
FletcherPenaltySolver.hprod_nln!Method
hprod_nln!(nlp::FletcherPenaltyNLP, x, y, v, Hv; obj_weight = one(S)) where {S}

Redefine the NLPModel function hprod to account for a Lagrange multiplier of size < ncon.

source
FletcherPenaltySolver.solve_two_extrasFunction
invJtJJv, invJtJSsv = solve_two_extras(nlp, x, rhs1, rhs2)

The IterativeSolver variant successively solves a regularized least-squares problem, see solve_least_square, and a regularized minres. It returns only a warning if the method failed.

The LDLtSolver variant successively uses a regularized cgls and a regularized minres.

source
FletcherPenaltySolver.solve_two_least_squaresFunction
p1, q1, p2, q2 = solve_two_least_squares(nlp, x, rhs1, rhs2)

Solve successively two least-squares problems regularized by √nlp.δ:

\[\min_q \ \|\nabla c^T q - \text{rhs}\| + \delta \|q\|^2\]

rhs1 and rhs2 are both of size nlp.meta.nvar.

The IterativeSolver variant uses two calls to a Krylov.jl method, see solve_least_square. Note that nlp.Aop is not re-evaluated in this case. It returns only a warning if the method failed.

The LDLtSolver variant uses an LDLt factorization to solve the large system.

source
FletcherPenaltySolver.solve_two_mixedFunction
p1, q1, p2, q2 = solve_two_mixed(nlp, x, rhs1, rhs2)

Solve successively a least-squares problem regularized by √nlp.δ:

\[\min_q \ \|\nabla c^T q - \text{rhs}\| + \delta \|q\|^2\]

and a least-norm problem.

rhs1 is of size nlp.meta.nvar, and rhs2 is of size nlp.meta.ncon.

The IterativeSolver variant uses two calls to a Krylov.jl method, see solve_least_square and solve_least_norm. It returns only a warning if the method failed.

The LDLtSolver variant uses an LDLt factorization to solve the large system.

source
+"Execution stats: first-order stationary"source
FletcherPenaltySolver.ghjvprod_nln!Method
ghjvprod_nln!(nlp::FletcherPenaltyNLP, x, y, v, Hv; obj_weight = one(S)) where {S}

Redefine the NLPModel function ghjvprod to account for a Lagrange multiplier of size < ncon.

source
FletcherPenaltySolver.go_logMethod
go_log(stp, sub_stp, fx, ncx, mess::String, verbose)

Logging shortcut.

source
FletcherPenaltySolver.hess_nlnMethod
hess_nln(nlp::FletcherPenaltyNLP, x, y; obj_weight = one(S)) where {S}

Redefine the NLPModel function hprod to account for a Lagrange multiplier of size < ncon.

source
FletcherPenaltySolver.hess_nln_coord!Method
hess_nln_nln!(nlp::FletcherPenaltyNLP, x, y, vals; obj_weight = one(S)) where {S}

Redefine the NLPModel function hprod to account for a Lagrange multiplier of size < ncon.

source
FletcherPenaltySolver.hprod_nln!Method
hprod_nln!(nlp::FletcherPenaltyNLP, x, y, v, Hv; obj_weight = one(S)) where {S}

Redefine the NLPModel function hprod to account for a Lagrange multiplier of size < ncon.

source
FletcherPenaltySolver.linear_system2Method
p1, q1, p2, q2 = linear_system2(nlp, x)

Call to solve_two_mixed(nlp, x, nlp.gx, nlp.cx), see solve_two_mixed.

source
FletcherPenaltySolver.random_restoration!Method
random_restoration!(meta, stp, sub_stp)

Add a random perturbation to the current iterate.

source
FletcherPenaltySolver.restoration_feasibility!Method
restoration_feasibility!(feasibility_solver, meta, stp, sub_stp, feas_tol, ncx, verbose)

Try to find a feasible point, see feasibility_step.

source
FletcherPenaltySolver.solve_least_normMethod
solve_least_norm(qdsolver::IterativeSolver, A, b, λ)

Solve least squares problem with regularization δ.

source
FletcherPenaltySolver.solve_least_squareMethod
solve_least_square(qdsolver::IterativeSolver, A, b, λ)

Solve least squares problem with regularization λ.

source
FletcherPenaltySolver.solve_two_extrasFunction
invJtJJv, invJtJSsv = solve_two_extras(nlp, x, rhs1, rhs2)

The IterativeSolver variant successively solves a regularized least-squares problem, see solve_least_square, and a regularized minres. It returns only a warning if the method failed.

The LDLtSolver variant successively uses a regularized cgls and a regularized minres.

source
FletcherPenaltySolver.solve_two_least_squaresFunction
p1, q1, p2, q2 = solve_two_least_squares(nlp, x, rhs1, rhs2)

Solve successively two least-squares problems regularized by √nlp.δ:

\[\min_q \ \|\nabla c^T q - \text{rhs}\| + \delta \|q\|^2\]

rhs1 and rhs2 are both of size nlp.meta.nvar.

The IterativeSolver variant uses two calls to a Krylov.jl method, see solve_least_square. Note that nlp.Aop is not re-evaluated in this case. It returns only a warning if the method failed.

The LDLtSolver variant uses an LDLt factorization to solve the large system.

source
FletcherPenaltySolver.solve_two_mixedFunction
p1, q1, p2, q2 = solve_two_mixed(nlp, x, rhs1, rhs2)

Solve successively a least-squares problem regularized by √nlp.δ:

\[\min_q \ \|\nabla c^T q - \text{rhs}\| + \delta \|q\|^2\]

and a least-norm problem.

rhs1 is of size nlp.meta.nvar, and rhs2 is of size nlp.meta.ncon.

The IterativeSolver variant uses two calls to a Krylov.jl method, see solve_least_square and solve_least_norm. It returns only a warning if the method failed.

The LDLtSolver variant uses an LDLt factorization to solve the large system.

source
FletcherPenaltySolver.update_parameters!Method
update_parameters!(meta, sub_stp, feas)

Update σ and, depending on the current iterate, also update ρ.

source
FletcherPenaltySolver.update_parameters_unbdd!Method
update_parameters_unbdd!(meta, sub_stp, feas)

Start or update δ, then call update_parameters!(meta, sub_stp, feas)

source
diff --git a/dev/search/index.html b/dev/search/index.html index 6dd7426..4144d71 100644 --- a/dev/search/index.html +++ b/dev/search/index.html @@ -1,2 +1,2 @@ -Search · FletcherPenaltySolver


    +Search · FletcherPenaltySolver


      diff --git a/dev/tutorial/index.html b/dev/tutorial/index.html index baac5f0..3c3857d 100644 --- a/dev/tutorial/index.html +++ b/dev/tutorial/index.html @@ -3,10 +3,10 @@ T = Float32 nlp = ADNLPModel(x -> (1 - x[1])^2, T[-1.2; 1.0], x -> [10 * (x[2] - x[1]^2)], T[0.0], T[0.0]) stats = fps_solve(nlp, hessian_approx = Val(2), subproblem_solver = tron, rtol = T(1e-4), verbose = 1) -(stats.dual_feas, stats.primal_feas, stats.status, typeof(stats.solution))
      (4.327324f-5, 5.9604645f-7, :first_order, Vector{Float32})

      A factorization-free solver

The main advantage of fps_solve is that it can rely on Hessian-vector and Jacobian-vector products only, whenever one uses a subproblem solver with the same property. So, it is not necessary to compute and store those matrices explicitly. In the following example, we choose a problem with equality constraints from OptimizationProblems.jl.

      using ADNLPModels, FletcherPenaltySolver, JSOSolvers, OptimizationProblems
      +(stats.dual_feas, stats.primal_feas, stats.status, typeof(stats.solution))
      (1.0967264f-6, 0.0f0, :first_order, Vector{Float32})

      A factorization-free solver

The main advantage of fps_solve is that it can rely on Hessian-vector and Jacobian-vector products only, whenever one uses a subproblem solver with the same property. So, it is not necessary to compute and store those matrices explicitly. In the following example, we choose a problem with equality constraints from OptimizationProblems.jl.

      using ADNLPModels, FletcherPenaltySolver, JSOSolvers, OptimizationProblems
       nlp = OptimizationProblems.ADNLPProblems.hs28()
       stats = fps_solve(nlp, subproblem_solver = tron, qds_solver = :iterative)
      -(stats.dual_feas, stats.primal_feas, stats.status, stats.elapsed_time)
      (4.669280834994944e-14, 1.6431300764452317e-14, :first_order, 5.339898109436035)

Exploring nlp's counters, we can see that no Hessian or Jacobian matrix has been evaluated.

      nlp.counters
        Counters:
      +(stats.dual_feas, stats.primal_feas, stats.status, stats.elapsed_time)
      (4.669280834994944e-14, 1.6431300764452317e-14, :first_order, 5.760019063949585)

Exploring nlp's counters, we can see that no Hessian or Jacobian matrix has been evaluated.

      nlp.counters
        Counters:
                    obj: █⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 4                 grad: █⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 5                 cons: █⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 5     
               cons_lin: █⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 5             cons_nln: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0                 jcon: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0     
                  jgrad: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0                  jac: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0              jac_lin: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0     
      @@ -17,7 +17,7 @@
       

We can compare this result with ipopt, which uses the Jacobian and Hessian matrices.

      using NLPModels, NLPModelsIpopt
       reset!(nlp);
       stats = fps_solve(nlp, subproblem_solver = ipopt, qds_solver = :iterative)
      -(stats.dual_feas, stats.primal_feas, stats.status, stats.elapsed_time)
      (1.1603416640225207e-13, 2.220446049250313e-16, :first_order, 1.2107939720153809)
      nlp.counters
        Counters:
      +(stats.dual_feas, stats.primal_feas, stats.status, stats.elapsed_time)
      (1.1603416640225207e-13, 2.220446049250313e-16, :first_order, 1.284782886505127)
      nlp.counters
        Counters:
                    obj: █████⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 2                 grad: ████████⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 3                 cons: ████████⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 3     
               cons_lin: ████████⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 3             cons_nln: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0                 jcon: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0     
                  jgrad: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0                  jac: ███⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 1              jac_lin: ███⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 1     
      @@ -40,4 +40,4 @@
        0.0

      Another possibility is to reuse the Stopping for another solve.

      new_x0 = 4 * ones(2)
       reinit!(stp, rstate = true, x = new_x0)
       Stopping.reset!(stp.pb)
      -stats = fps_solve(stp)
      "Execution stats: first-order stationary"

      We refer to Stopping.jl and https://solverstoppingjulia.github.io for tutorials and documentation.

      +stats = fps_solve(stp)
      "Execution stats: first-order stationary"

      We refer to Stopping.jl and https://solverstoppingjulia.github.io for tutorials and documentation.