From e279bad5fe5addedfcceb6646bd109cb3b268a15 Mon Sep 17 00:00:00 2001 From: "Documenter.jl" Date: Tue, 3 Sep 2024 16:51:27 +0000 Subject: [PATCH] build based on a2a1a00 --- dev/.documenter-siteinfo.json | 2 +- dev/api/index.html | 78 ++++++++++++------------ dev/assets/documenter.js | 53 ++++++++-------- dev/block_krylov/index.html | 6 +- dev/block_processes/index.html | 12 ++-- dev/callbacks/index.html | 2 +- dev/examples/bicgstab/index.html | 2 +- dev/examples/block_gmres/index.html | 4 +- dev/examples/car/index.html | 4 +- dev/examples/cg/index.html | 4 +- dev/examples/cg_lanczos_shift/index.html | 4 +- dev/examples/cgls/index.html | 4 +- dev/examples/cgne/index.html | 4 +- dev/examples/craig/index.html | 22 +++---- dev/examples/craigmr/index.html | 4 +- dev/examples/crls/index.html | 4 +- dev/examples/crmr/index.html | 4 +- dev/examples/dqgmres/index.html | 2 +- dev/examples/lsmr/index.html | 4 +- dev/examples/lsqr/index.html | 4 +- dev/examples/minares/index.html | 4 +- dev/examples/minres_qlp/index.html | 2 +- dev/examples/symmlq/index.html | 2 +- dev/examples/tricg/index.html | 14 ++--- dev/examples/trimr/index.html | 14 ++--- dev/factorization-free/index.html | 6 +- dev/gpu/index.html | 2 +- dev/index.html | 2 +- dev/inplace/index.html | 4 +- dev/preconditioners/index.html | 2 +- dev/processes/index.html | 12 ++-- dev/reference/index.html | 2 +- dev/solvers/as/index.html | 10 +-- dev/solvers/gsp/index.html | 6 +- dev/solvers/ln/index.html | 16 ++--- dev/solvers/ls/index.html | 16 ++--- dev/solvers/sid/index.html | 18 +++--- dev/solvers/sp_sqd/index.html | 10 +-- dev/solvers/spd/index.html | 20 +++--- dev/solvers/unsymmetric/index.html | 38 ++++++------ dev/storage/index.html | 2 +- dev/tips/index.html | 2 +- dev/warm-start/index.html | 2 +- 43 files changed, 217 insertions(+), 212 deletions(-) diff --git a/dev/.documenter-siteinfo.json b/dev/.documenter-siteinfo.json index 7feca7742..4c96c29e3 100644 --- a/dev/.documenter-siteinfo.json +++ b/dev/.documenter-siteinfo.json @@ -1 +1 @@ -{"documenter":{"julia_version":"1.10.4","generation_timestamp":"2024-08-06T20:53:45","documenter_version":"1.5.0"}} \ No newline at end of file +{"documenter":{"julia_version":"1.10.5","generation_timestamp":"2024-09-03T16:51:20","documenter_version":"1.6.0"}} \ No newline at end of file diff --git a/dev/api/index.html b/dev/api/index.html index 09f0e10e9..90f9f30b6 100644 --- a/dev/api/index.html +++ b/dev/api/index.html @@ -1,40 +1,40 @@ -API · Krylov.jl

Stats Types

Krylov.SimpleStatsType

Type for statistics returned by the majority of Krylov solvers. The attributes are:

  • niter
  • solved
  • inconsistent
  • residuals
  • Aresiduals
  • Acond
  • timer
  • status
source
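For example, the fields of a SimpleStats object can be inspected after a solve. A minimal sketch (the tridiagonal test matrix is only an illustrative choice):

using Krylov, LinearAlgebra

n = 100
A = SymTridiagonal(2 * ones(n), -ones(n - 1))  # SPD test matrix
b = ones(n)

x, stats = cg(A, b, history=true)
println(stats.niter)      # number of iterations
println(stats.solved)     # whether the stopping criterion was met
println(stats.residuals)  # residual history, filled because history=true
println(stats.status)     # human-readable termination message
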
Krylov.LanczosStatsType

Type for statistics returned by CG-LANCZOS. The attributes are:

  • niter
  • solved
  • residuals
  • indefinite
  • Anorm
  • Acond
  • timer
  • status
source
Krylov.LanczosShiftStatsType

Type for statistics returned by CG-LANCZOS with shifts. The attributes are:

  • niter
  • solved
  • residuals
  • indefinite
  • Anorm
  • Acond
  • timer
  • status
source
Krylov.SymmlqStatsType

Type for statistics returned by SYMMLQ. The attributes are:

  • niter
  • solved
  • residuals
  • residualscg
  • errors
  • errorscg
  • Anorm
  • Acond
  • timer
  • status
source
Krylov.AdjointStatsType

Type for statistics returned by the adjoint-system solvers BiLQR and TriLQR. The attributes are:

  • niter
  • solved_primal
  • solved_dual
  • residuals_primal
  • residuals_dual
  • timer
  • status
source
Krylov.LNLQStatsType

Type for statistics returned by the LNLQ method. The attributes are:

  • niter
  • solved
  • residuals
  • errorwithbnd
  • errorbndx
  • errorbndy
  • timer
  • status
source
Krylov.LSLQStatsType

Type for statistics returned by the LSLQ method. The attributes are:

  • niter
  • solved
  • inconsistent
  • residuals
  • Aresiduals
  • err_lbnds
  • errorwithbnd
  • errubndslq
  • errubndscg
  • timer
  • status
source
Krylov.LsmrStatsType

Type for statistics returned by LSMR. The attributes are:

  • niter
  • solved
  • inconsistent
  • residuals
  • Aresiduals
  • Acond
  • Anorm
  • xNorm
  • timer
  • status
source

Solver Types

Krylov.MinresSolverType

Type for storing the vectors required by the in-place version of MINRES.

The outer constructors

solver = MinresSolver(m, n, S; window :: Int=5)
solver = MinresSolver(A, b; window :: Int=5)

may be used in order to create these vectors.

source
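A typical in-place workflow, sketched below with an illustrative symmetric test matrix, allocates the workspace once and reuses it across solves:

using Krylov, LinearAlgebra

n = 50
A = SymTridiagonal(2 * ones(n), -ones(n - 1))  # symmetric test matrix
b = rand(n)

solver = MinresSolver(A, b)  # allocate the workspace once
minres!(solver, A, b)        # solve in place
x, stats = solver.x, solver.stats

minres!(solver, A, 2 .* b)   # reuse the same workspace for a new right-hand side
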
Krylov.MinaresSolverType

Type for storing the vectors required by the in-place version of MINARES.

The outer constructors

solver = MinaresSolver(m, n, S)
solver = MinaresSolver(A, b)

may be used in order to create these vectors.

source
Krylov.CgSolverType

Type for storing the vectors required by the in-place version of CG.

The outer constructors

solver = CgSolver(m, n, S)
solver = CgSolver(A, b)

may be used in order to create these vectors.

source
Krylov.CrSolverType

Type for storing the vectors required by the in-place version of CR.

The outer constructors

solver = CrSolver(m, n, S)
solver = CrSolver(A, b)

may be used in order to create these vectors.

source
Krylov.CarSolverType

Type for storing the vectors required by the in-place version of CAR.

The outer constructors

solver = CarSolver(m, n, S)
solver = CarSolver(A, b)

may be used in order to create these vectors.

source
Krylov.SymmlqSolverType

Type for storing the vectors required by the in-place version of SYMMLQ.

The outer constructors

solver = SymmlqSolver(m, n, S)
solver = SymmlqSolver(A, b)

may be used in order to create these vectors.

source
Krylov.CgLanczosSolverType

Type for storing the vectors required by the in-place version of CG-LANCZOS.

The outer constructors

solver = CgLanczosSolver(m, n, S)
solver = CgLanczosSolver(A, b)

may be used in order to create these vectors.

source
Krylov.CgLanczosShiftSolverType

Type for storing the vectors required by the in-place version of CG-LANCZOS-SHIFT.

The outer constructors

solver = CgLanczosShiftSolver(m, n, nshifts, S)
solver = CgLanczosShiftSolver(A, b, nshifts)

may be used in order to create these vectors.

source
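A hedged sketch of the matching in-place call cg_lanczos_shift!, which solves the shifted systems (A + σᵢI)x = b simultaneously; the diagonal matrix and the shifts are illustrative:

using Krylov, LinearAlgebra

n = 50
A = Diagonal(1.0:n)       # SPD test matrix
b = ones(n)
shifts = [1.0, 2.0, 4.0]

solver = CgLanczosShiftSolver(A, b, length(shifts))
cg_lanczos_shift!(solver, A, b, shifts)
solver.x                  # one solution vector per shift
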
Krylov.MinresQlpSolverType

Type for storing the vectors required by the in-place version of MINRES-QLP.

The outer constructors

solver = MinresQlpSolver(m, n, S)
solver = MinresQlpSolver(A, b)

may be used in order to create these vectors.

source
Krylov.DiomSolverType

Type for storing the vectors required by the in-place version of DIOM.

The outer constructors

solver = DiomSolver(m, n, memory, S)
solver = DiomSolver(A, b, memory = 20)

may be used in order to create these vectors. memory is set to n if the value given is larger than n.

source
Krylov.FomSolverType

Type for storing the vectors required by the in-place version of FOM.

The outer constructors

solver = FomSolver(m, n, memory, S)
solver = FomSolver(A, b, memory = 20)

may be used in order to create these vectors. memory is set to n if the value given is larger than n.

source
Krylov.DqgmresSolverType

Type for storing the vectors required by the in-place version of DQGMRES.

The outer constructors

solver = DqgmresSolver(m, n, memory, S)
solver = DqgmresSolver(A, b, memory = 20)

may be used in order to create these vectors. memory is set to n if the value given is larger than n.

source
Krylov.GmresSolverType

Type for storing the vectors required by the in-place version of GMRES.

The outer constructors

solver = GmresSolver(m, n, memory, S)
solver = GmresSolver(A, b, memory = 20)

may be used in order to create these vectors. memory is set to n if the value given is larger than n.

source
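A sketch showing how memory acts as a storage hint for the in-place version (the nonsymmetric test matrix is illustrative):

using Krylov, LinearAlgebra

n = 100
A = rand(n, n) + 10I            # well-conditioned nonsymmetric test matrix
b = rand(n)

solver = GmresSolver(A, b, 20)  # storage hint: about 20 iterations expected
gmres!(solver, A, b)
println(solver.stats.niter)     # if this exceeds 20, extra storage was allocated
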
Krylov.UsymlqSolverType

Type for storing the vectors required by the in-place version of USYMLQ.

The outer constructors

solver = UsymlqSolver(m, n, S)
solver = UsymlqSolver(A, b)

may be used in order to create these vectors.

source
Krylov.UsymqrSolverType

Type for storing the vectors required by the in-place version of USYMQR.

The outer constructors

solver = UsymqrSolver(m, n, S)
solver = UsymqrSolver(A, b)

may be used in order to create these vectors.

source
Krylov.TricgSolverType

Type for storing the vectors required by the in-place version of TRICG.

The outer constructors

solver = TricgSolver(m, n, S)
solver = TricgSolver(A, b)

may be used in order to create these vectors.

source
Krylov.TrimrSolverType

Type for storing the vectors required by the in-place version of TRIMR.

The outer constructors

solver = TrimrSolver(m, n, S)
solver = TrimrSolver(A, b)

may be used in order to create these vectors.

source
Krylov.TrilqrSolverType

Type for storing the vectors required by the in-place version of TRILQR.

The outer constructors

solver = TrilqrSolver(m, n, S)
solver = TrilqrSolver(A, b)

may be used in order to create these vectors.

source
Krylov.CgsSolverType

Type for storing the vectors required by the in-place version of CGS.

The outer constructors

solver = CgsSolver(m, n, S)
solver = CgsSolver(A, b)

may be used in order to create these vectors.

source
Krylov.BicgstabSolverType

Type for storing the vectors required by the in-place version of BICGSTAB.

The outer constructors

solver = BicgstabSolver(m, n, S)
solver = BicgstabSolver(A, b)

may be used in order to create these vectors.

source
Krylov.BilqSolverType

Type for storing the vectors required by the in-place version of BILQ.

The outer constructors

solver = BilqSolver(m, n, S)
solver = BilqSolver(A, b)

may be used in order to create these vectors.

source
Krylov.QmrSolverType

Type for storing the vectors required by the in-place version of QMR.

The outer constructors

solver = QmrSolver(m, n, S)
solver = QmrSolver(A, b)

may be used in order to create these vectors.

source
Krylov.BilqrSolverType

Type for storing the vectors required by the in-place version of BILQR.

The outer constructors

solver = BilqrSolver(m, n, S)
solver = BilqrSolver(A, b)

may be used in order to create these vectors.

source
Krylov.CglsSolverType

Type for storing the vectors required by the in-place version of CGLS.

The outer constructors

solver = CglsSolver(m, n, S)
solver = CglsSolver(A, b)

may be used in order to create these vectors.

source
Krylov.CrlsSolverType

Type for storing the vectors required by the in-place version of CRLS.

The outer constructors

solver = CrlsSolver(m, n, S)
solver = CrlsSolver(A, b)

may be used in order to create these vectors.

source
Krylov.CgneSolverType

Type for storing the vectors required by the in-place version of CGNE.

The outer constructors

solver = CgneSolver(m, n, S)
solver = CgneSolver(A, b)

may be used in order to create these vectors.

source
Krylov.CrmrSolverType

Type for storing the vectors required by the in-place version of CRMR.

The outer constructors

solver = CrmrSolver(m, n, S)
solver = CrmrSolver(A, b)

may be used in order to create these vectors.

source
Krylov.LslqSolverType

Type for storing the vectors required by the in-place version of LSLQ.

The outer constructors

solver = LslqSolver(m, n, S)
solver = LslqSolver(A, b)

may be used in order to create these vectors.

source
Krylov.LsqrSolverType

Type for storing the vectors required by the in-place version of LSQR.

The outer constructors

solver = LsqrSolver(m, n, S)
solver = LsqrSolver(A, b)

may be used in order to create these vectors.

source
Krylov.LsmrSolverType

Type for storing the vectors required by the in-place version of LSMR.

The outer constructors

solver = LsmrSolver(m, n, S)
solver = LsmrSolver(A, b)

may be used in order to create these vectors.

source
Krylov.LnlqSolverType

Type for storing the vectors required by the in-place version of LNLQ.

The outer constructors

solver = LnlqSolver(m, n, S)
solver = LnlqSolver(A, b)

may be used in order to create these vectors.

source
Krylov.CraigSolverType

Type for storing the vectors required by the in-place version of CRAIG.

The outer constructors

solver = CraigSolver(m, n, S)
solver = CraigSolver(A, b)

may be used in order to create these vectors.

source
Krylov.CraigmrSolverType

Type for storing the vectors required by the in-place version of CRAIGMR.

The outer constructors

solver = CraigmrSolver(m, n, S)
solver = CraigmrSolver(A, b)

may be used in order to create these vectors.

source
Krylov.GpmrSolverType

Type for storing the vectors required by the in-place version of GPMR.

The outer constructors

solver = GpmrSolver(m, n, memory, S)
solver = GpmrSolver(A, b, memory = 20)

may be used in order to create these vectors. memory is set to n + m if the value given is larger than n + m.

source
Krylov.FgmresSolverType

Type for storing the vectors required by the in-place version of FGMRES.

The outer constructors

solver = FgmresSolver(m, n, memory, S)
solver = FgmresSolver(A, b, memory = 20)

may be used in order to create these vectors. memory is set to n if the value given is larger than n.

source
Krylov.BlockGmresSolverType

Type for storing the vectors required by the in-place version of BLOCK-GMRES.

The outer constructors

solver = BlockGmresSolver(m, n, p, memory, SV, SM)
solver = BlockGmresSolver(A, B; memory=5)

may be used in order to create these vectors. memory is set to div(n,p) if the value given is larger than div(n,p).

source

Utilities

Krylov.roots_quadraticFunction
roots = roots_quadratic(q₂, q₁, q₀; nitref)

Find the real roots of the quadratic

q(x) = q₂ x² + q₁ x + q₀,

where q₂, q₁ and q₀ are real. Care is taken to avoid numerical cancellation. Optionally, nitref steps of iterative refinement may be performed to improve accuracy. By default, nitref=1.

source
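For example, q(x) = x² - 3x + 2 = (x - 1)(x - 2) should yield the roots 1 and 2. A small check, assuming the utility is accessed through the Krylov module (qualify or import as needed):

using Krylov

roots = Krylov.roots_quadratic(1.0, -3.0, 2.0)
println(roots)  # expected: the real roots 1.0 and 2.0
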
Krylov.sym_givensFunction
(c, s, ρ) = sym_givens(a, b)

Numerically stable symmetric Givens reflection. Given real a and b, return (c, s, ρ) such that

[ c  s ] [ a ] = [ ρ ]
[ s -c ] [ b ] = [ 0 ].
source

Numerically stable symmetric Givens reflection. Given complex a and b, return (c, s, ρ) with c real and s, ρ complex such that

[ c   s ] [ a ] = [ ρ ]
[ s̅  -c ] [ b ] = [ 0 ].
source
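A quick numerical check of both rows of the real reflection (module-qualified call assumed):

using Krylov

a, b = 3.0, 4.0
(c, s, ρ) = Krylov.sym_givens(a, b)
println(c * a + s * b - ρ)  # ≈ 0, first row:  c*a + s*b = ρ
println(s * a - c * b)      # ≈ 0, second row: s*a - c*b = 0
println(ρ)                  # magnitude hypot(a, b) = 5
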
Krylov.to_boundaryFunction
roots = to_boundary(n, x, d, radius; flip, xNorm2, dNorm2)

Given a trust-region radius radius, a vector x lying inside the trust region and a direction d, return σ1 and σ2 such that

‖x + σi d‖ = radius, i = 1, 2

in the Euclidean norm. n is the length of vectors x and d. If known, ‖x‖² and ‖d‖² may be supplied with xNorm2 and dNorm2.

If flip is set to true, σ1 and σ2 are computed such that

‖x - σi d‖ = radius, i = 1, 2.
source
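For instance, with x = (1, 0), d = (0, 1) and radius √2, the equation ‖x + σd‖² = 1 + σ² = 2 gives σ = ±1. A sketch, module-qualified and following the signature above:

using Krylov

x = [1.0, 0.0]
d = [0.0, 1.0]
roots = Krylov.to_boundary(2, x, d, sqrt(2.0))
println(roots)  # expected: σ₁ and σ₂ equal to 1 and -1, in some order
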
Krylov.vec2strFunction
s = vec2str(x; ndisp)

Display an array in the form

[ -3.0e-01 -5.1e-01  1.9e-01 ... -2.3e-01 -4.4e-01  2.4e-01 ]

with (ndisp - 1)/2 elements on each side.

source
Krylov.ktypeofFunction
S = ktypeof(v)

Return the most relevant storage type S based on the type of v.

source
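For example (sketch; note that views are mapped to the storage type of their parent):

using Krylov

v = rand(Float32, 10)
println(Krylov.ktypeof(v))             # Vector{Float32}
println(Krylov.ktypeof(view(v, 1:5)))  # also Vector{Float32}
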
Krylov.kzerosFunction
v = kzeros(S, n)

Create a vector of storage type S of length n composed only of zeros.

source
Krylov.konesFunction
v = kones(S, n)

Create a vector of storage type S of length n composed only of ones.

source
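A small illustration of both helpers:

using Krylov

S = Vector{Float64}
println(Krylov.kzeros(S, 3))  # [0.0, 0.0, 0.0]
println(Krylov.kones(S, 3))   # [1.0, 1.0, 1.0]
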
Krylov.vector_to_matrixFunction
M = vector_to_matrix(S)

Return the dense matrix storage type M related to the dense vector storage type S.

source
Krylov.matrix_to_vectorFunction
S = matrix_to_vector(M)

Return the dense vector storage type S related to the dense matrix storage type M.

source
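On dense storage types the two helpers are inverses of each other, e.g.:

using Krylov

println(Krylov.vector_to_matrix(Vector{Float64}))  # Matrix{Float64}
println(Krylov.matrix_to_vector(Matrix{Float32}))  # Vector{Float32}
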
diff --git a/dev/assets/documenter.js b/dev/assets/documenter.js index b2bdd43ee..82252a11d 100644 --- a/dev/assets/documenter.js +++ b/dev/assets/documenter.js @@ -77,30 +77,35 @@ require(['jquery'], function($) { let timer = 0; var isExpanded = true; -$(document).on("click", ".docstring header", function () { - let articleToggleTitle = "Expand docstring"; - - debounce(() => { - if ($(this).siblings("section").is(":visible")) { - $(this) - .find(".docstring-article-toggle-button") - .removeClass("fa-chevron-down") - .addClass("fa-chevron-right"); - } else { - $(this) - .find(".docstring-article-toggle-button") - .removeClass("fa-chevron-right") - .addClass("fa-chevron-down"); +$(document).on( + "click", + ".docstring .docstring-article-toggle-button", + function () { + let articleToggleTitle = "Expand docstring"; + const parent = $(this).parent(); + + debounce(() => { + if (parent.siblings("section").is(":visible")) { + parent + .find("a.docstring-article-toggle-button") + .removeClass("fa-chevron-down") + .addClass("fa-chevron-right"); + } else { + parent + .find("a.docstring-article-toggle-button") + .removeClass("fa-chevron-right") + .addClass("fa-chevron-down"); - articleToggleTitle = "Collapse docstring"; - } + articleToggleTitle = "Collapse docstring"; + } - $(this) - .find(".docstring-article-toggle-button") - .prop("title", articleToggleTitle); - $(this).siblings("section").slideToggle(); - }); -}); + parent + .children(".docstring-article-toggle-button") + .prop("title", articleToggleTitle); + parent.siblings("section").slideToggle(); + }); + } +); $(document).on("click", ".docs-article-toggle-button", function (event) { let articleToggleTitle = "Expand docstring"; @@ -110,7 +115,7 @@ $(document).on("click", ".docs-article-toggle-button", function (event) { debounce(() => { if (isExpanded) { $(this).removeClass("fa-chevron-up").addClass("fa-chevron-down"); - $(".docstring-article-toggle-button") + $("a.docstring-article-toggle-button") .removeClass("fa-chevron-down") .addClass("fa-chevron-right"); @@ -119,7 +124,7 @@ $(document).on("click", ".docs-article-toggle-button", function (event) { $(".docstring section").slideUp(animationSpeed); } else { $(this).removeClass("fa-chevron-down").addClass("fa-chevron-up"); - $(".docstring-article-toggle-button") + $("a.docstring-article-toggle-button") .removeClass("fa-chevron-right") .addClass("fa-chevron-down"); diff --git a/dev/block_krylov/index.html b/dev/block_krylov/index.html index bb780c9a8..a84f1f70b 100644 --- a/dev/block_krylov/index.html +++ b/dev/block_krylov/index.html @@ -13,10 +13,10 @@ ndrange = (k, k) copy_triangle_kernel!(backend)(R, Q; ndrange=ndrange) KernelAbstractions.synchronize(backend) -end
Krylov.block_gmresFunction
(X, stats) = block_gmres(A, B::AbstractMatrix{FC};
                          memory::Int=20, M=I, N=I, ldiv::Bool=false,
                          restart::Bool=false, reorthogonalization::Bool=false,
                          atol::T=√eps(T), rtol::T=√eps(T), itmax::Int=0,
                          timemax::Float64=Inf, verbose::Int=0, history::Bool=false,
                         callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

(X, stats) = block_gmres(A, B, X0::AbstractMatrix; kwargs...)

Block-GMRES can be warm-started from an initial guess X0 where kwargs are the same keyword arguments as above.

Solve the linear system AX = B of size n with p right-hand sides using block-GMRES.

Input arguments

  • A: a linear operator that models a matrix of dimension n;
  • B: a matrix of size n × p.

Optional argument

  • X0: a matrix of size n × p that represents an initial guess of the solution X.

Keyword arguments

  • memory: if restart = true, the restarted version block-GMRES(k) is used with k = memory. If restart = false, the parameter memory should be used as a hint of the number of iterations to limit dynamic memory allocations. Additional storage will be allocated if the number of iterations exceeds memory;
  • M: linear operator that models a nonsingular matrix of size n used for left preconditioning;
  • N: linear operator that models a nonsingular matrix of size n used for right preconditioning;
  • ldiv: define whether the preconditioners use ldiv! or mul!;
  • restart: restart the method after memory iterations;
  • reorthogonalization: reorthogonalize the new matrices of the block-Krylov basis against all previous matrices;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to 2 * div(n,p);
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms;
  • callback: function or functor called as callback(solver) that returns true if the block-Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • X: a dense matrix of size n × p;
  • stats: statistics collected on the run in a SimpleStats structure.
source
Krylov.block_gmres!Function
solver = block_gmres!(solver::BlockGmresSolver, B; kwargs...)
solver = block_gmres!(solver::BlockGmresSolver, B, X0; kwargs...)

where kwargs are keyword arguments of block_gmres.

See BlockGmresSolver for more details about the solver.

source
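A minimal sketch of both entry points on a dense system with several right-hand sides (the well-conditioned test matrix is an illustrative assumption):

using Krylov, LinearAlgebra

n, p = 100, 4
A = rand(n, n) + 10I                # nonsymmetric test matrix
B = rand(n, p)

X, stats = block_gmres(A, B)        # out-of-place call
println(norm(A * X - B) / norm(B))  # relative residual

solver = BlockGmresSolver(A, B)     # workspace for the in-place call
block_gmres!(solver, A, B)
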
diff --git a/dev/block_processes/index.html b/dev/block_processes/index.html index d9c820bd8..ac576c9c6 100644 --- a/dev/block_processes/index.html +++ b/dev/block_processes/index.html @@ -9,7 +9,7 @@ & \ddots & \ddots & \Psi_k^H \\ & & \Psi_k & \Omega_k \\ & & & \Psi_{k+1} -\end{bmatrix}.\]

The function hermitian_lanczos returns $V_{k+1}$, $\Psi_1$ and $T_{k+1,k}$.

Krylov.hermitian_lanczosMethod
V, Ψ, T = hermitian_lanczos(A, B, k; algo="householder")

Input arguments

  • A: a linear operator that models a Hermitian matrix of dimension n;
  • B: a matrix of size n × p;
  • k: the number of iterations of the block Hermitian Lanczos process.

Keyword arguments

  • algo: the algorithm to perform reduced QR factorizations ("gs", "mgs", "givens" or "householder").

Output arguments

  • V: a dense n × p(k+1) matrix;
  • Ψ: a dense p × p upper triangular matrix such that V₁Ψ = B;
  • T: a sparse p(k+1) × pk block tridiagonal matrix with a bandwidth p.
source
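A sketch that checks the relations V₁Ψ = B and A Vₖ = Vₖ₊₁ Tₖ₊₁,ₖ on a small Hermitian matrix; the sizes are illustrative:

using Krylov, LinearAlgebra

n, p, k = 60, 3, 8
A = Matrix(Diagonal(1.0:n))             # Hermitian test matrix
B = rand(n, p)

V, Ψ, T = hermitian_lanczos(A, B, k)
println(norm(V[:, 1:p] * Ψ - B))        # ≈ 0 : V₁Ψ = B
println(norm(A * V[:, 1:p*k] - V * T))  # ≈ 0 : A Vₖ = Vₖ₊₁ Tₖ₊₁,ₖ
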

Block Non-Hermitian Lanczos

If the vectors $b$ and $c$ in the non-Hermitian Lanczos process are replaced by matrices $B$ and $C$, both with $p$ columns, we can derive the block non-Hermitian Lanczos process.

block_nonhermitian_lanczos

After $k$ iterations of the block non-Hermitian Lanczos process, the situation may be summarized as

\[\begin{align*}
A V_k &= V_{k+1} T_{k+1,k}, \\
A^H U_k &= U_{k+1} T_{k,k+1}^H, \\
V_k^H U_k &= U_k^H V_k = I_{pk},
\end{align*}\]

The function nonhermitian_lanczos returns $V_{k+1}$, $\Psi_1$, $T_{k+1,k}$, $U_{k+1}$, $\Phi_1^H$ and $T_{k,k+1}^H$.

Krylov.nonhermitian_lanczosMethod
V, Ψ, T, U, Φᴴ, Tᴴ = nonhermitian_lanczos(A, B, C, k)

Input arguments

  • A: a linear operator that models a square matrix of dimension n;
  • B: a matrix of size n × p;
  • C: a matrix of size n × p;
  • k: the number of iterations of the block non-Hermitian Lanczos process.

Output arguments

  • V: a dense n × p(k+1) matrix;
  • Ψ: a dense p × p upper triangular matrix such that V₁Ψ = B;
  • T: a sparse p(k+1) × pk block tridiagonal matrix with a bandwidth p;
  • U: a dense n × p(k+1) matrix;
  • Φᴴ: a dense p × p upper triangular matrix such that U₁Φᴴ = C;
  • Tᴴ: a sparse p(k+1) × pk block tridiagonal matrix with a bandwidth p.
source

Block Arnoldi

If the vector $b$ in the Arnoldi process is replaced by a matrix $B$ with $p$ columns, we can derive the block Arnoldi process.

block_arnoldi

After $k$ iterations of the block Arnoldi process, the situation may be summarized as

\[\begin{align*} A V_k &= V_{k+1} H_{k+1,k}, \\ V_k^H V_k &= I_{pk}, \end{align*}\]

where $V_k$ is an orthonormal basis of the block Krylov subspace $\mathcal{K}_k^{\square}(A,B)$,

\[H_{k+1,k} =
\begin{bmatrix}
\Psi_{1,1} & \Psi_{1,2} & \ldots & \Psi_{1,k} \\
\Psi_{2,1} & \ddots~ & \ddots & \vdots~~ \\
& \ddots~ & \ddots & \Psi_{k-1,k} \\
& & \Psi_{k,k-1} & \Psi_{k,k} \\
& & & \Psi_{k+1,k}
\end{bmatrix}.\]

The function arnoldi returns $V_{k+1}$, $\Gamma$, and $H_{k+1,k}$.

Related method: BLOCK-GMRES.

Krylov.arnoldiMethod
V, Γ, H = arnoldi(A, B, k; algo="householder", reorthogonalization=false)

Input arguments

  • A: a linear operator that models a square matrix of dimension n;
  • B: a matrix of size n × p;
  • k: the number of iterations of the block Arnoldi process.

Keyword arguments

  • algo: the algorithm to perform reduced QR factorizations ("gs", "mgs", "givens" or "householder").
  • reorthogonalization: reorthogonalize the new matrices of the block Krylov basis against all previous matrices.

Output arguments

  • V: a dense n × p(k+1) matrix;
  • Γ: a dense p × p upper triangular matrix such that V₁Γ = B;
  • H: a dense p(k+1) × pk block upper Hessenberg matrix with a lower bandwidth p.
source
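A sketch verifying the orthonormality of the basis and the relation A Vₖ = Vₖ₊₁ Hₖ₊₁,ₖ (sizes are illustrative):

using Krylov, LinearAlgebra

n, p, k = 60, 3, 8
A = rand(n, n)
B = rand(n, p)

V, Γ, H = arnoldi(A, B, k)
println(norm(V' * V - I))               # ≈ 0 : Vₖ₊₁ has orthonormal columns
println(norm(A * V[:, 1:p*k] - V * H))  # ≈ 0 : A Vₖ = Vₖ₊₁ Hₖ₊₁,ₖ
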

Block Golub-Kahan

If the vector $b$ in the Golub-Kahan process is replaced by a matrix $B$ with $p$ columns, we can derive the block Golub-Kahan process.

block_golub_kahan

After $k$ iterations of the block Golub-Kahan process, the situation may be summarized as

\[\begin{align*}
A V_k &= U_{k+1} B_k, \\
A^H U_{k+1} &= V_{k+1} L_{k+1}^H, \\
V_k^H V_k &= U_k^H U_k = I_{pk},
\end{align*}\]

The function golub_kahan returns $V_{k+1}$, $U_{k+1}$, $\Psi_1$ and $L_{k+1}$.

Krylov.golub_kahanMethod
V, U, Ψ, L = golub_kahan(A, B, k; algo="householder")

Input arguments

  • A: a linear operator that models a matrix of dimension m × n;
  • B: a matrix of size m × p;
  • k: the number of iterations of the block Golub-Kahan process.

Keyword argument

  • algo: the algorithm to perform reduced QR factorizations ("gs", "mgs", "givens" or "householder").

Output arguments

  • V: a dense n × p(k+1) matrix;
  • U: a dense m × p(k+1) matrix;
  • Ψ: a dense p × p upper triangular matrix such that U₁Ψ = B;
  • L: a sparse p(k+1) × p(k+1) block lower bidiagonal matrix with a lower bandwidth p.
source
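A sketch checking U₁Ψ = B and the adjoint relation Aᴴ Uₖ₊₁ = Vₖ₊₁ Lₖ₊₁ᴴ (sizes are illustrative):

using Krylov, LinearAlgebra

m, n, p, k = 80, 60, 3, 8
A = rand(m, n)
B = rand(m, p)

V, U, Ψ, L = golub_kahan(A, B, k)
println(norm(U[:, 1:p] * Ψ - B))  # ≈ 0 : U₁Ψ = B
println(norm(A' * U - V * L'))    # ≈ 0 : Aᴴ Uₖ₊₁ = Vₖ₊₁ Lₖ₊₁ᴴ
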

Block Saunders-Simon-Yip

If the vectors $b$ and $c$ in the Saunders-Simon-Yip process are replaced by matrices $B$ and $C$, both with $p$ columns, we can derive the block Saunders-Simon-Yip process.

block_saunders_simon_yip

After $k$ iterations of the block Saunders-Simon-Yip process, the situation may be summarized as

\[\begin{align*}
A U_k &= V_{k+1} T_{k+1,k}, \\
A^H V_k &= U_{k+1} T_{k,k+1}^H, \\
V_k^H V_k &= U_k^H U_k = I_{pk},
\end{align*}\]

The function saunders_simon_yip returns $V_{k+1}$, $\Psi_1$, $T_{k+1,k}$, $U_{k+1}$, $\Phi_1^H$ and $T_{k,k+1}^H$.

Krylov.saunders_simon_yipMethod
V, Ψ, T, U, Φᴴ, Tᴴ = saunders_simon_yip(A, B, C, k; algo="householder")

Input arguments

  • A: a linear operator that models a matrix of dimension m × n;
  • B: a matrix of size m × p;
  • C: a matrix of size n × p;
  • k: the number of iterations of the block Saunders-Simon-Yip process.

Keyword argument

  • algo: the algorithm to perform reduced QR factorizations ("gs", "mgs", "givens" or "householder").

Output arguments

  • V: a dense m × p(k+1) matrix;
  • Ψ: a dense p × p upper triangular matrix such that V₁Ψ = B;
  • T: a sparse p(k+1) × pk block tridiagonal matrix with a bandwidth p;
  • U: a dense n × p(k+1) matrix;
  • Φᴴ: a dense p × p upper triangular matrix such that U₁Φᴴ = C;
  • Tᴴ: a sparse p(k+1) × pk block tridiagonal matrix with a bandwidth p.
source

Block Montoison-Orban

If the vectors $b$ and $c$ in the Montoison-Orban process are replaced by matrices $D$ and $C$, both with $p$ columns, we can derive the block Montoison-Orban process.

block_montoison_orban

After $k$ iterations of the block Montoison-Orban process, the situation may be summarized as

\[\begin{align*}
A U_k &= V_{k+1} H_{k+1,k}, \\
B V_k &= U_{k+1} F_{k+1,k}, \\
V_k^H V_k &= U_k^H U_k = I_{pk},
\end{align*}\]

The function montoison_orban returns $V_{k+1}$, $\Gamma$, $H_{k+1,k}$, $U_{k+1}$, $\Lambda$, and $F_{k+1,k}$.

Krylov.montoison_orbanMethod
V, Γ, H, U, Λ, F = montoison_orban(A, B, D, C, k; algo="householder", reorthogonalization=false)

Input arguments

  • A: a linear operator that models a matrix of dimension m × n;
  • B: a linear operator that models a matrix of dimension n × m;
  • D: a matrix of size m × p;
  • C: a matrix of size n × p;
  • k: the number of iterations of the block Montoison-Orban process.

Keyword arguments

  • algo: the algorithm to perform reduced QR factorizations ("gs", "mgs", "givens" or "householder").
  • reorthogonalization: reorthogonalize the new matrices of the block Krylov basis against all previous matrices.

Output arguments

  • V: a dense m × p(k+1) matrix;
  • Γ: a dense p × p upper triangular matrix such that V₁Γ = D;
  • H: a dense p(k+1) × pk block upper Hessenberg matrix with a lower bandwidth p;
  • U: a dense n × p(k+1) matrix;
  • Λ: a dense p × p upper triangular matrix such that U₁Λ = C;
  • F: a dense p(k+1) × pk block upper Hessenberg matrix with a lower bandwidth p.
source
diff --git a/dev/callbacks/index.html b/dev/callbacks/index.html index 31873584a..139e76b21 100644 --- a/dev/callbacks/index.html +++ b/dev/callbacks/index.html @@ -50,4 +50,4 @@ return false # We don't want to add new stopping conditions end -(x, stats) = gmres(A, b, callback = gmres_callback) +(x, stats) = gmres(A, b, callback = gmres_callback) diff --git a/dev/examples/bicgstab/index.html b/dev/examples/bicgstab/index.html index b46d36377..92a1b1dd0 100644 --- a/dev/examples/bicgstab/index.html +++ b/dev/examples/bicgstab/index.html @@ -51,4 +51,4 @@ [Left preconditioning] Residual norm: 3.2e-07 [Left preconditioning] Number of iterations: 9 [Right preconditioning] Residual norm: 7.8e-06 -[Right preconditioning] Number of iterations: 8 +[Right preconditioning] Number of iterations: 8 diff --git a/dev/examples/block_gmres/index.html b/dev/examples/block_gmres/index.html index e9906ae71..743318490 100644 --- a/dev/examples/block_gmres/index.html +++ b/dev/examples/block_gmres/index.html @@ -23,6 +23,6 @@ residuals: [] Aresiduals: [] κ₂(A): [] - timer: 247.78ms + timer: 253.79ms status: solution good enough given atol and rtol -Relative residual: 6.8e-09 +Relative residual: 6.8e-09 diff --git a/dev/examples/car/index.html b/dev/examples/car/index.html index 73cf90b25..007177c7a 100644 --- a/dev/examples/car/index.html +++ b/dev/examples/car/index.html @@ -23,6 +23,6 @@ residuals: [] Aresiduals: [] κ₂(A): [] - timer: 18.35ms + timer: 18.74ms status: solution good enough given atol and rtol -Relative residual: 1.5e-08 +Relative residual: 1.5e-08 diff --git a/dev/examples/cg/index.html b/dev/examples/cg/index.html index 936693c11..0c859e243 100644 --- a/dev/examples/cg/index.html +++ b/dev/examples/cg/index.html @@ -23,6 +23,6 @@ residuals: [] Aresiduals: [] κ₂(A): [] - timer: 2.68ms + timer: 2.72ms status: solution good enough given atol and rtol -Relative residual: 1.2e-08 +Relative residual: 1.2e-08 diff --git a/dev/examples/cg_lanczos_shift/index.html b/dev/examples/cg_lanczos_shift/index.html index e26174a35..a1a306711 100644 --- a/dev/examples/cg_lanczos_shift/index.html +++ b/dev/examples/cg_lanczos_shift/index.html @@ -33,7 +33,7 @@ indefinite: Bool[0, 1, 1, 1] ‖A‖F: NaN κ₂(A): NaN - timer: 5.03ms + timer: 5.07ms status: solution good enough given atol and rtol Relative residuals with shifts: - 1.4e-08 1.4e-08 1.3e-08 1.3e-08 + 1.4e-08 1.4e-08 1.3e-08 1.3e-08 diff --git a/dev/examples/cgls/index.html b/dev/examples/cgls/index.html index 4924b3cc7..c4a08dbcc 100644 --- a/dev/examples/cgls/index.html +++ b/dev/examples/cgls/index.html @@ -28,7 +28,7 @@ residuals: [] Aresiduals: [] κ₂(A): [] - timer: 989.25μs + timer: 994.23μs status: solution good enough given atol and rtol CGLS: Relative residual: 1.8e-08 -CGLS: ‖x‖: 8.4e+03 +CGLS: ‖x‖: 8.4e+03 diff --git a/dev/examples/cgne/index.html b/dev/examples/cgne/index.html index 2027cef84..0ae3d37fd 100644 --- a/dev/examples/cgne/index.html +++ b/dev/examples/cgne/index.html @@ -27,7 +27,7 @@ residuals: [] Aresiduals: [] κ₂(A): [] - timer: 977.42μs + timer: 988.18μs status: solution good enough given atol and rtol CGNE: Relative residual: 1.5e-08 -CGNE: ‖x - x*‖₂: 3.0e-07 +CGNE: ‖x - x*‖₂: 3.0e-07 diff --git a/dev/examples/craig/index.html b/dev/examples/craig/index.html index c77802326..23c3e1a1b 100644 --- a/dev/examples/craig/index.html +++ b/dev/examples/craig/index.html @@ -21,12 +21,12 @@ @printf("Error in y: %7.1e\n", norm(λ * y - xy_exact[n+1:n+m]) / norm(xy_exact[n+1:n+m])) end
CRAIG: system of 5 equations in 8 variables
     k       ‖r‖       ‖x‖       ‖A‖      κ(A)         α        β  timer
     0  9.37e+00  0.00e+00  0.00e+00  0.00e+00   ✗ ✗ ✗ ✗  ✗ ✗ ✗ ✗  0.37s
     1  2.16e-01  2.76e+00  3.39e+00  3.39e+00   3.4e+00  7.8e-02  0.77s
     2  6.43e-02  2.78e+00  3.49e+00  4.94e+00   8.0e-01  2.4e-01  0.94s
     3  1.72e-02  2.78e+00  3.58e+00  6.25e+00   7.6e-01  2.0e-01  0.94s
     4  2.42e-02  2.78e+00  3.64e+00  7.36e+00   3.9e-01  5.5e-01  0.94s
     5  3.51e-10  2.78e+00  3.70e+00  8.94e+00   6.2e-01  9.0e-09  0.94s
 
 SimpleStats
  niter: 5
  residuals: []
  Aresiduals: []
  κ₂(A): []
 timer: 942.84ms
  status: solution good enough for the tolerances given
Primal feasibility: 3.7e-11
Dual   feasibility: 2.6e-16
Error in x: 3.7e-11
Error in y: 3.2e-11
+Primal feasibility: 3.1e-10 +Dual feasibility: 1.5e-16 +Error in x: 3.1e-10 +Error in y: 1.5e-10 diff --git a/dev/examples/craigmr/index.html b/dev/examples/craigmr/index.html index a4a3f2358..6ae2304f8 100644 --- a/dev/examples/craigmr/index.html +++ b/dev/examples/craigmr/index.html @@ -28,8 +28,8 @@ residuals: [] Aresiduals: [] κ₂(A): [] - timer: 580.55μs + timer: 583.81μs status: found approximate minimum-norm solution CRAIGMR: Relative residual: 1.4e-08 CRAIGMR: ‖x - x*‖₂: 1.5e-05 -CRAIGMR: 0 iterations +CRAIGMR: 0 iterations diff --git a/dev/examples/crls/index.html b/dev/examples/crls/index.html index 69f5c8913..aa356d666 100644 --- a/dev/examples/crls/index.html +++ b/dev/examples/crls/index.html @@ -28,7 +28,7 @@ residuals: [] Aresiduals: [] κ₂(A): [] - timer: 4.02ms + timer: 4.76ms status: solution good enough given atol and rtol CRLS: Relative residual: 2.1e-08 -CRLS: ‖x‖: 1.0e+04 +CRLS: ‖x‖: 1.0e+04 diff --git a/dev/examples/crmr/index.html b/dev/examples/crmr/index.html index 522bbdc07..f11b3ba02 100644 --- a/dev/examples/crmr/index.html +++ b/dev/examples/crmr/index.html @@ -29,7 +29,7 @@ residuals: [] Aresiduals: [] κ₂(A): [] - timer: 939.08μs + timer: 925.28μs status: system probably inconsistent but least squares/norm solution found CRMR: Relative residual: 4.0e-06 -CRMR: ‖x - x*‖₂: 6.3e-03 +CRMR: ‖x - x*‖₂: 6.3e-03 diff --git a/dev/examples/dqgmres/index.html b/dev/examples/dqgmres/index.html index 8d8ec7aed..5844393cb 100644 --- a/dev/examples/dqgmres/index.html +++ b/dev/examples/dqgmres/index.html @@ -51,4 +51,4 @@ [Left preconditioning] Residual norm: 7.6e-08 [Left preconditioning] Number of iterations: 34 [Right preconditioning] Residual norm: 3.4e-07 -[Right preconditioning] Number of iterations: 33 +[Right preconditioning] Number of iterations: 33 diff --git a/dev/examples/lsmr/index.html b/dev/examples/lsmr/index.html index 241e6dbdb..2a0661239 100644 --- a/dev/examples/lsmr/index.html +++ b/dev/examples/lsmr/index.html @@ -32,7 +32,7 @@ κ₂(A): 86.71320866451842 ‖A‖F: 43.722077889715436 xNorm: 16032.795019150253 - timer: 15.95ms + timer: 15.92ms status: truncated forward error small enough LSMR: Relative residual: 2.4e-03 -LSMR: ‖x‖: 1.6e+04 +LSMR: ‖x‖: 1.6e+04 diff --git a/dev/examples/lsqr/index.html b/dev/examples/lsqr/index.html index 65a0c59b8..2e65454f0 100644 --- a/dev/examples/lsqr/index.html +++ b/dev/examples/lsqr/index.html @@ -28,7 +28,7 @@ residuals: [] Aresiduals: [] κ₂(A): [] - timer: 10.89ms + timer: 10.78ms status: maximum number of iterations exceeded LSQR: Relative residual: 1.4e-03 -LSQR: ‖x‖: 9.4e+03 +LSQR: ‖x‖: 9.4e+03 diff --git a/dev/examples/minares/index.html b/dev/examples/minares/index.html index a94d3ded3..925f4e43c 100644 --- a/dev/examples/minares/index.html +++ b/dev/examples/minares/index.html @@ -23,6 +23,6 @@ residuals: [] Aresiduals: [] κ₂(A): [] - timer: 928.79μs + timer: 937.35μs status: solution good enough given atol, rtol and Artol -Relative A-residual: 7.8e-09 +Relative A-residual: 7.8e-09 diff --git a/dev/examples/minres_qlp/index.html b/dev/examples/minres_qlp/index.html index e5104835b..205c68872 100644 --- a/dev/examples/minres_qlp/index.html +++ b/dev/examples/minres_qlp/index.html @@ -17,4 +17,4 @@ @printf("Minimum-norm solution? %s\n", x ≈ [1.0; 1.0; 1.0; 0.0])
Residual r: [-4.9e-15  2.4e-15  4.4e-16  4.0e+00 ]
 Relative residual norm ‖r‖:  7.3e-01
 Solution x: [ 1.0e+00  1.0e+00  1.0e+00 -2.0e-15 ]
-Minimum-norm solution? true
+Minimum-norm solution? true
diff --git a/dev/examples/symmlq/index.html b/dev/examples/symmlq/index.html
index b3611afe6..4dfde5d72 100644
--- a/dev/examples/symmlq/index.html
+++ b/dev/examples/symmlq/index.html
@@ -17,4 +17,4 @@
 @printf("Minimum-norm solution? %s\n", x ≈ [1.0; 1.0; 1.0; 0.0])
Residual r: [ 0.0e+00  4.4e-16  0.0e+00  0.0e+00 ]
 Relative residual norm ‖r‖:  1.2e-16
 Solution x: [ 1.0e+00  1.0e+00  1.0e+00  0.0e+00 ]
-Minimum-norm solution? true
+Minimum-norm solution? true
diff --git a/dev/examples/tricg/index.html b/dev/examples/tricg/index.html
index b9f01bccc..9ec115f24 100644
--- a/dev/examples/tricg/index.html
+++ b/dev/examples/tricg/index.html
@@ -74,11 +74,11 @@
 TriCG: Relative residual: 3.9e-10
 TriCG: system of 10 equations in 10 variables
     k     ‖rₖ‖     βₖ₊₁     γₖ₊₁  timer
-    0  1.1e+01  6.7e+00  8.7e+00  0.23s
-    1  5.8e+00  2.6e+02  3.0e+02  0.33s
-    2  2.5e+00  2.9e+02  2.2e+02  0.33s
-    3  1.2e+00  2.4e+01  5.5e+01  0.33s
-    4  1.9e-01  3.4e-01  8.1e-01  0.33s
-    5  1.2e-08  6.7e-08  7.7e-08  0.33s
+    0  1.1e+01  6.7e+00  8.7e+00  0.24s
+    1  5.8e+00  2.6e+02  3.0e+02  0.35s
+    2  2.5e+00  2.9e+02  2.2e+02  0.35s
+    3  1.2e+00  2.4e+01  5.5e+01  0.35s
+    4  1.9e-01  3.4e-01  8.1e-01  0.35s
+    5  1.2e-08  6.7e-08  7.7e-08  0.35s
 
-TriCG: Relative residual: 1.2e-08
+TriCG: Relative residual: 1.2e-08
diff --git a/dev/examples/trimr/index.html b/dev/examples/trimr/index.html
index 94d589226..235f3b1e4 100644
--- a/dev/examples/trimr/index.html
+++ b/dev/examples/trimr/index.html
@@ -58,11 +58,11 @@
 TriMR: Relative residual: 2.3e-11
 TriMR: system of 10 equations in 10 variables
     k     ‖rₖ‖     βₖ₊₁     γₖ₊₁  timer
-    0  1.1e+00  8.7e-01  6.8e-01  0.19s
-    1  7.3e-01  4.0e+00  8.8e-01  0.19s
-    2  4.4e-01  2.2e+00  2.0e+00  0.19s
-    3  3.2e-01  9.3e-02  1.4e+00  0.19s
-    4  2.2e-03  1.0e-02  7.0e-03  0.19s
-    5  1.2e-13  3.5e-10  1.9e-11  0.19s
+    0  1.1e+00  8.7e-01  6.8e-01  0.20s
+    1  7.3e-01  4.0e+00  8.8e-01  0.20s
+    2  4.4e-01  2.2e+00  2.0e+00  0.20s
+    3  3.2e-01  9.3e-02  1.4e+00  0.20s
+    4  2.2e-03  1.0e-02  7.0e-03  0.20s
+    5  1.2e-13  3.5e-10  1.9e-11  0.20s
 
-TriMR: Relative residual: 1.2e-13
+TriMR: Relative residual: 1.2e-13
diff --git a/dev/factorization-free/index.html b/dev/factorization-free/index.html
index ee430fbd3..cbec358ff 100644
--- a/dev/factorization-free/index.html
+++ b/dev/factorization-free/index.html
@@ -52,7 +52,7 @@
  residuals: []
  Aresiduals: []
  κ₂(A): []
- timer: 464.76ms
+ timer: 537.42ms
  status: solution good enough given atol and rtol
)

Example 2: The Gauss-Newton Method for Nonlinear Least Squares

At each iteration of the Gauss-Newton method applied to a nonlinear least-squares objective $f(x) = \tfrac{1}{2}\| F(x)\|^2$ where $F : \mathbb{R}^n \rightarrow \mathbb{R}^m$ is $\mathcal{C}^1$, we solve the subproblem:

\[\min_{d \in \mathbb{R}^n}~~\tfrac{1}{2}~\|J(x_k) d + F(x_k)\|^2,\]

where $J(x)$ is the Jacobian of $F$ at $x$.

An appropriate iterative method to solve the above linear least-squares problems is LSMR. We could pass the explicit Jacobian to LSMR as follows:

using ForwardDiff, Krylov
 
@@ -85,6 +85,6 @@
  κ₂(A): 1.2777264193293687
  ‖A‖F: 5.328138145631234
  xNorm: 0.5623144027914095
- timer: 541.69ms
+ timer: 584.82ms
  status: found approximate minimum least-squares solution
-)

Note that preconditioners can also be implemented as abstract operators. For instance, we could compute the Cholesky factorization of $M$ and $N$ and create linear operators that perform the forward and backsolves.

Krylov methods combined with factorization-free operators can reduce computation time and memory requirements considerably by avoiding the construction and storage of the system matrix. In the field of partial differential equations, the implementation of high-performance factorization-free operators and assembly-free preconditioning is a subject of active research.

+)
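For reference, here is a minimal sketch of the full Gauss-Newton loop built on this idea; the function name gauss_newton, the residual F and the simple stationarity test are illustrative and not part of the original example.

using ForwardDiff, Krylov, LinearAlgebra

function gauss_newton(F, x₀; itmax = 200, tol = 1e-8)
    x = copy(x₀)
    Fx = F(x)
    iter = 0
    while iter < itmax
        J = ForwardDiff.jacobian(F, x)  # explicit Jacobian at the current iterate
        norm(J' * Fx) ≤ tol && break    # first-order stationarity of ½‖F(x)‖²
        d, stats = lsmr(J, -Fx)         # Gauss-Newton step: argmin ‖J(xₖ)d + F(xₖ)‖₂
        x .+= d
        Fx = F(x)
        iter += 1
    end
    return x
end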

Note that preconditioners can also be implemented as abstract operators. For instance, we could compute the Cholesky factorization of $M$ and $N$ and create linear operators that perform the forward and backsolves, as in the sketch below.
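A hedged sketch of this idea; the matrices, the operator name opM⁻¹ and the use of LinearOperators.jl are illustrative, not part of the original example.

using LinearAlgebra, LinearOperators, Krylov

n = 100
A = rand(n, n); A = A' * A + n * I   # an SPD test matrix
b = rand(n)
B = rand(n, n); M = B' * B + n * I   # an SPD preconditioner, for illustration
C = cholesky(Symmetric(M))           # M = LLᴴ
opM⁻¹ = LinearOperator(Float64, n, n, true, true, (y, v) -> ldiv!(y, C, v))
x, stats = cg(A, b, M=opM⁻¹)         # each product with opM⁻¹ is a forward and a backsolve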

Krylov methods combined with factorization-free operators can reduce computation time and memory requirements considerably by avoiding the construction and storage of the system matrix. In the field of partial differential equations, the implementation of high-performance factorization-free operators and assembly-free preconditioning is a subject of active research.

diff --git a/dev/gpu/index.html b/dev/gpu/index.html
index a0eedb6b7..a60515556 100644
--- a/dev/gpu/index.html
+++ b/dev/gpu/index.html
@@ -180,4 +180,4 @@
 b_gpu = MtlVector(b_cpu)
 
 # Solve a dense least-norm problem on an Apple M1 GPU
-x, stats = craig(A_gpu, b_gpu)
Warning

Metal.jl is under heavy development and is considered experimental for now.

+x, stats = craig(A_gpu, b_gpu)
Warning

Metal.jl is under heavy development and is considered experimental for now.

diff --git a/dev/index.html b/dev/index.html
index d631b0f1d..e34a787ef 100644
--- a/dev/index.html
+++ b/dev/index.html
@@ -1,4 +1,4 @@
Home · Krylov.jl

Krylov.jl documentation

This package provides implementations of some of the most useful Krylov methods for a variety of problems:

1 - Square or rectangular full-rank systems

\[ Ax = b\]

should be solved when b lies in the range space of A. This situation occurs when

  • A is square and nonsingular,
  • A is tall and has full column rank and b lies in the range of A.

2 - Linear least-squares problems

\[ \min \|b - Ax\|\]

should be solved when b is not in the range of A (inconsistent systems), regardless of the shape and rank of A. This situation mainly occurs when

  • A is square and singular,
  • A is tall and thin.

Underdetermined systems are less common but also occur.

If there are infinitely many such x (because A is column rank-deficient), one with minimum norm is identified

\[ \min \|x\| \quad \text{subject to} \quad x \in \argmin \|b - Ax\|.\]

3 - Linear least-norm problems

\[ \min \|x\| \quad \text{subject to} \quad Ax = b\]

should be solved when A is column rank-deficient but b is in the range of A (consistent systems), regardless of the shape of A. This situation mainly occurs when

  • A is square and singular,
  • A is short and wide.

Overdetermined systems are less common but also occur.

4 - Adjoint systems

\[ Ax = b \quad \text{and} \quad A^H y = c\]

where A can have any shape.

5 - Saddle-point and Hermitian quasi-definite systems

\[ \begin{bmatrix} M & \phantom{-}A \\ A^H & -N \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \left(\begin{bmatrix} b \\ 0 \end{bmatrix},\begin{bmatrix} 0 \\ c \end{bmatrix},\begin{bmatrix} b \\ c \end{bmatrix}\right)\]

where A can have any shape.

6 - Generalized saddle-point and non-Hermitian partitioned systems

\[ \begin{bmatrix} M & A \\ B & N \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} b \\ c \end{bmatrix}\]

where A can have any shape and B has the shape of Aᴴ. A, B, b and c must all be nonzero.

Krylov solvers are particularly appropriate in situations where such problems must be solved but a factorization is not possible, because:

  • A is not available explicitly,
  • A would be dense or would consume an excessive amount of memory if it were materialized,
  • factors would consume an excessive amount of memory.

Iterative methods are recommended in any of the following situations:

  • the problem is sufficiently large that a factorization is not feasible or would be slow,
  • an effective preconditioner is known in cases where the problem has unfavorable spectral structure,
  • the operator can be represented efficiently as a sparse matrix,
  • the operator is fast, i.e., can be applied with better complexity than if it were materialized as a matrix. Certain fast operators would materialize as dense matrices.

Features

All solvers in Krylov.jl have an in-place version, are compatible with GPUs and work in any floating-point data type.
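For instance, a small illustrative run in single precision (this snippet is not from the original text):

using Krylov, LinearAlgebra

A = rand(Float32, 100, 100); A = A' * A + I   # SPD matrix stored in Float32
b = rand(Float32, 100)
x, stats = cg(A, b)                           # the whole method runs in Float32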

How to Install

Krylov can be installed and tested through the Julia package manager:

julia> ]
 pkg> add Krylov
-pkg> test Krylov

Bug reports and discussions

If you think you found a bug, feel free to open an issue. Focused suggestions and requests can also be opened as issues. Please start an issue or a discussion on the topic before opening a pull request.

If you want to ask a question not suited for a bug report, feel free to start a discussion here. This forum is for general discussion about this repository and the JuliaSmoothOptimizers organization, so questions about any of our packages are welcome.

+pkg> test Krylov

Bug reports and discussions

If you think you found a bug, feel free to open an issue. Focused suggestions and requests can also be opened as issues. Please start an issue or a discussion on the topic before opening a pull request.

If you want to ask a question not suited for a bug report, feel free to start a discussion here. This forum is for general discussion about this repository and the JuliaSmoothOptimizers organization, so questions about any of our packages are welcome.

diff --git a/dev/inplace/index.html b/dev/inplace/index.html
index 661a72f25..2f2db0a44 100644
--- a/dev/inplace/index.html
+++ b/dev/inplace/index.html
@@ -7,7 +7,7 @@
 dqgmres!(dqgmres_solver, A2, b2)
 
 lsqr_solver = LsqrSolver(m, n, CuVector{Float32})
-lsqr!(lsqr_solver, A3, b3)

A generic function solve! is also available and dispatches to the appropriate Krylov method.

Krylov.solve!Function
solve!(solver, args...; kwargs...)

Use the in-place Krylov method associated to solver.

source

In-place methods return an updated solver workspace. Solutions and statistics can be recovered via solver.x, solver.y and solver.stats. The functions solution and statistics can also be used.

Krylov.nsolutionFunction
nsolution(solver)

Return the number of outputs of solution(solver).

source
Krylov.solutionFunction
solution(solver)

Return the solution(s) stored in the solver. Optionally you can specify which solution you want to recover: solution(solver, 1) returns x and solution(solver, 2) returns y.

source
Krylov.statisticsFunction
statistics(solver)

Return the statistics stored in the solver.

source
Krylov.issolvedFunction
issolved(solver)

Return a boolean that indicates whether the Krylov method associated to solver succeeded.

source

Examples

We illustrate the use of in-place Krylov solvers with two well-known optimization methods. The details of the optimization methods are described in the section about Factorization-free operators.

Example 1: Newton's method for convex optimization without linesearch

using Krylov
+lsqr!(lsqr_solver, A3, b3)

A generic function solve! is also available and dispatches to the appropriate Krylov method.

Krylov.solve!Function
solve!(solver, args...; kwargs...)

Use the in-place Krylov method associated to solver.

source

In-place methods return an updated solver workspace. Solutions and statistics can be recovered via solver.x, solver.y and solver.stats. The functions solution and statistics can also be used, as in the sketch below.
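For example, a minimal sketch with a CG workspace; the problem data is illustrative and the workspace is built like the solvers shown above.

using Krylov, LinearAlgebra

n = 100
A = rand(n, n); A = A' * A + I             # SPD matrix
b = rand(n)
solver = CgSolver(n, n, Vector{Float64})   # allocate the workspace once
solve!(solver, A, b)                       # dispatches to the in-place method cg!
x = solution(solver)                       # same as solver.x
stats = statistics(solver)                 # same as solver.stats
issolved(solver)                           # true if a stopping criterion was met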

Krylov.nsolutionFunction
nsolution(solver)

Return the number of outputs of solution(solver).

source
Krylov.solutionFunction
solution(solver)

Return the solution(s) stored in the solver. Optionally you can specify which solution you want to recover: solution(solver, 1) returns x and solution(solver, 2) returns y.

source
Krylov.statisticsFunction
statistics(solver)

Return the statistics stored in the solver.

source
Krylov.issolvedFunction
issolved(solver)

Return a boolean that indicates whether the Krylov method associated to solver succeeded.

source

Examples

We illustrate the use of in-place Krylov solvers with two well-known optimization methods. The details of the optimization methods are described in the section about Factorization-free operators.

Example 1: Newton's method for convex optimization without linesearch

using Krylov
 
 function newton(∇f, ∇²f, x₀; itmax = 200, tol = 1e-8)
 
@@ -65,4 +65,4 @@
         tired = iter ≥ itmax
     end
     return x
-end
+end
diff --git a/dev/preconditioners/index.html b/dev/preconditioners/index.html
index 4f2332c66..3d56c427c 100644
--- a/dev/preconditioners/index.html
+++ b/dev/preconditioners/index.html
@@ -60,4 +60,4 @@
 (y, v) -> (y .= v ; BasicLU.solve!(B, y, 'T')),
 (y, v) -> (y .= v ; BasicLU.solve!(B, y, 'T')))
 
-x, y, stats = craigmr(B⁻¹ * opA, B⁻¹ * b) # min ‖x‖₂ s.t. B⁻¹Ax = B⁻¹b
+x, y, stats = craigmr(B⁻¹ * opA, B⁻¹ * b) # min ‖x‖₂ s.t. B⁻¹Ax = B⁻¹b
diff --git a/dev/processes/index.html b/dev/processes/index.html
index 77f4d4e3d..818148972 100644
--- a/dev/processes/index.html
+++ b/dev/processes/index.html
@@ -41,7 +41,7 @@
\begin{bmatrix}
T_{k} \\
\beta_{k+1} e_{k}^T
-\end{bmatrix}.\]

Note that depending on how we normalize the vectors that compose $V_k$, $T_{k+1,k}$ can be a real tridiagonal matrix even if $A$ is a complex matrix.

The function hermitian_lanczos returns $V_{k+1}$, $\beta_1$ and $T_{k+1,k}$.

Related methods: SYMMLQ, CG, CR, CAR, MINRES, MINRES-QLP, MINARES, CGLS, CRLS, CGNE, CRMR, CG-LANCZOS and CG-LANCZOS-SHIFT.

Krylov.hermitian_lanczosMethod
V, β, T = hermitian_lanczos(A, b, k)

Input arguments

  • A: a linear operator that models an Hermitian matrix of dimension n;
  • b: a vector of length n;
  • k: the number of iterations of the Hermitian Lanczos process.

Output arguments

  • V: a dense n × (k+1) matrix;
  • β: a coefficient such that βv₁ = b;
  • T: a sparse (k+1) × k tridiagonal matrix.

Reference

source

Non-Hermitian Lanczos

nonhermitian_lanczos

After $k$ iterations of the non-Hermitian Lanczos process (also named the Lanczos biorthogonalization process), the situation may be summarized as

\[\begin{align*}
+\end{bmatrix}.\]

Note that depending on how we normalize the vectors that compose $V_k$, $T_{k+1,k}$ can be a real tridiagonal matrix even if $A$ is a complex matrix.

The function hermitian_lanczos returns $V_{k+1}$, $\beta_1$ and $T_{k+1,k}$.

Related methods: SYMMLQ, CG, CR, CAR, MINRES, MINRES-QLP, MINARES, CGLS, CRLS, CGNE, CRMR, CG-LANCZOS and CG-LANCZOS-SHIFT.

Krylov.hermitian_lanczosMethod
V, β, T = hermitian_lanczos(A, b, k)

Input arguments

  • A: a linear operator that models an Hermitian matrix of dimension n;
  • b: a vector of length n;
  • k: the number of iterations of the Hermitian Lanczos process.

Output arguments

  • V: a dense n × (k+1) matrix;
  • β: a coefficient such that βv₁ = b;
  • T: a sparse (k+1) × k tridiagonal matrix.

Reference

source
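As an illustration, a hedged sketch with random data that checks the relation $A V_k = V_{k+1} T_{k+1,k}$ numerically:

using Krylov, LinearAlgebra

n, k = 100, 10
A = rand(n, n); A = A + A'      # a Hermitian matrix
b = rand(n)
V, β, T = hermitian_lanczos(A, b, k)
norm(A * V[:, 1:k] - V * T)     # ≈ 0 up to rounding
norm(V' * V - I)                # the basis is orthonormal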

Non-Hermitian Lanczos

nonhermitian_lanczos

After $k$ iterations of the non-Hermitian Lanczos process (also named the Lanczos biorthogonalization process), the situation may be summarized as

\[\begin{align*}
A V_k &= V_k T_k + \beta_{k+1} v_{k+1} e_k^T = V_{k+1} T_{k+1,k}, \\
A^H U_k &= U_k T_k^H + \bar{\gamma}_{k+1} u_{k+1} e_k^T = U_{k+1} T_{k,k+1}^H, \\
V_k^H U_k &= U_k^H V_k = I_k,
@@ -62,7 +62,7 @@
T_{k,k+1} =
\begin{bmatrix}
T_{k} & \gamma_{k+1} e_k
-\end{bmatrix}.\]

The function nonhermitian_lanczos returns $V_{k+1}$, $\beta_1$, $T_{k+1,k}$, $U_{k+1}$, $\bar{\gamma}_1$ and $T_{k,k+1}^H$.

Related methods: BiLQ, QMR, BiLQR, CGS and BICGSTAB.

Note

The scaling factors used in our implementation are $\beta_k = |u_k^H v_k|^{\tfrac{1}{2}}$ and $\gamma_k = (u_k^H v_k) / \beta_k$. With these scaling factors, the non-Hermitian Lanczos process coincides with the Hermitian Lanczos process when $A = A^H$ and $b = c$.

Krylov.nonhermitian_lanczosMethod
V, β, T, U, γᴴ, Tᴴ = nonhermitian_lanczos(A, b, c, k)

Input arguments

  • A: a linear operator that models a square matrix of dimension n;
  • b: a vector of length n;
  • c: a vector of length n;
  • k: the number of iterations of the non-Hermitian Lanczos process.

Output arguments

  • V: a dense n × (k+1) matrix;
  • β: a coefficient such that βv₁ = b;
  • T: a sparse (k+1) × k tridiagonal matrix;
  • U: a dense n × (k+1) matrix;
  • γᴴ: a coefficient such that γᴴu₁ = c;
  • Tᴴ: a sparse (k+1) × k tridiagonal matrix.

Reference

source

The non-Hermitian Lanczos process can also be implemented without $A^H$ (transpose-free variant). To derive it, we can observe that $\beta_{k+1} v_{k+1} = P_k(A) b~~$ and $\bar{\gamma}_{k+1} u_{k+1} = Q_k(A^H) c~~$ where $P_k$ and $Q_k$ are polynomials of degree $k$. The polynomials are defined from the recursions:

\[\begin{align*}
+\end{bmatrix}.\]

The function nonhermitian_lanczos returns $V_{k+1}$, $\beta_1$, $T_{k+1,k}$, $U_{k+1}$, $\bar{\gamma}_1$ and $T_{k,k+1}^H$.

Related methods: BiLQ, QMR, BiLQR, CGS and BICGSTAB.

Note

The scaling factors used in our implementation are $\beta_k = |u_k^H v_k|^{\tfrac{1}{2}}$ and $\gamma_k = (u_k^H v_k) / \beta_k$. With these scaling factors, the non-Hermitian Lanczos process coincides with the Hermitian Lanczos process when $A = A^H$ and $b = c$.

Krylov.nonhermitian_lanczosMethod
V, β, T, U, γᴴ, Tᴴ = nonhermitian_lanczos(A, b, c, k)

Input arguments

  • A: a linear operator that models a square matrix of dimension n;
  • b: a vector of length n;
  • c: a vector of length n;
  • k: the number of iterations of the non-Hermitian Lanczos process.

Output arguments

  • V: a dense n × (k+1) matrix;
  • β: a coefficient such that βv₁ = b;
  • T: a sparse (k+1) × k tridiagonal matrix;
  • U: a dense n × (k+1) matrix;
  • γᴴ: a coefficient such that γᴴu₁ = c;
  • Tᴴ: a sparse (k+1) × k tridiagonal matrix.

Reference

source
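As an illustration, a hedged sketch with random data (breakdowns are possible but unlikely) that checks the recurrences and the biorthogonality of the bases:

using Krylov, LinearAlgebra

n, k = 100, 10
A = rand(n, n)                     # a square non-Hermitian matrix
b = rand(n); c = rand(n)
V, β, T, U, γᴴ, Tᴴ = nonhermitian_lanczos(A, b, c, k)
norm(A * V[:, 1:k] - V * T)        # A Vₖ = Vₖ₊₁ Tₖ₊₁,ₖ
norm(A' * U[:, 1:k] - U * Tᴴ)      # Aᴴ Uₖ = Uₖ₊₁ Tₖ,ₖ₊₁ᴴ
norm(V[:, 1:k]' * U[:, 1:k] - I)   # biorthogonality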

The non-Hermitian Lanczos process can also be implemented without $A^H$ (transpose-free variant). To derive it, we can observe that $\beta_{k+1} v_{k+1} = P_k(A) b~~$ and $\bar{\gamma}_{k+1} u_{k+1} = Q_k(A^H) c~~$ where $P_k$ and $Q_k$ are polynomials of degree $k$. The polynomials are defined from the recursions:

\[\begin{align*}
P_0(A) &= I_n, \\
P_1(A) &= \left(\dfrac{A - \alpha_1 I_n}{\beta_1}\right) P_0(A), \\
P_k(A) &= \left(\dfrac{A - \alpha_k I_n}{\beta_k}\right) P_{k-1}(A) - \dfrac{\gamma_k}{\beta_{k-1}} P_{k-2}(A), \quad k \ge 2, \\
@@ -91,7 +91,7 @@
\begin{bmatrix}
H_{k} \\
h_{k+1,k} e_{k}^T
-\end{bmatrix}.\]

The function arnoldi returns $V_{k+1}$, $\beta$ and $H_{k+1,k}$.

Related methods: DIOM, FOM, DQGMRES, GMRES and FGMRES.

Note

The Arnoldi process coincides with the Hermitian Lanczos process when $A$ is Hermitian.

Krylov.arnoldiMethod
V, β, H = arnoldi(A, b, k; reorthogonalization=false)

Input arguments

  • A: a linear operator that models a square matrix of dimension n;
  • b: a vector of length n;
  • k: the number of iterations of the Arnoldi process.

Keyword argument

  • reorthogonalization: reorthogonalize the new vectors of the Krylov basis against all previous vectors.

Output arguments

  • V: a dense n × (k+1) matrix;
  • β: a coefficient such that βv₁ = b;
  • H: a dense (k+1) × k upper Hessenberg matrix.

Reference

source

Golub-Kahan

golub_kahan

After $k$ iterations of the Golub-Kahan bidiagonalization process, the situation may be summarized as

\[\begin{align*}
+\end{bmatrix}.\]

The function arnoldi returns $V_{k+1}$, $\beta$ and $H_{k+1,k}$.

Related methods: DIOM, FOM, DQGMRES, GMRES and FGMRES.

Note

The Arnoldi process coincides with the Hermitian Lanczos process when $A$ is Hermitian.

Krylov.arnoldiMethod
V, β, H = arnoldi(A, b, k; reorthogonalization=false)

Input arguments

  • A: a linear operator that models a square matrix of dimension n;
  • b: a vector of length n;
  • k: the number of iterations of the Arnoldi process.

Keyword argument

  • reorthogonalization: reorthogonalize the new vectors of the Krylov basis against all previous vectors.

Output arguments

  • V: a dense n × (k+1) matrix;
  • β: a coefficient such that βv₁ = b;
  • H: a dense (k+1) × k upper Hessenberg matrix.

Reference

source
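As an illustration, a hedged sketch with random data that checks the relation $A V_k = V_{k+1} H_{k+1,k}$:

using Krylov, LinearAlgebra

n, k = 100, 10
A = rand(n, n)
b = rand(n)
V, β, H = arnoldi(A, b, k; reorthogonalization=true)
norm(A * V[:, 1:k] - V * H)   # ≈ 0 up to rounding
norm(V' * V - I)              # orthonormal basis, helped by reorthogonalization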

Golub-Kahan

golub_kahan

After $k$ iterations of the Golub-Kahan bidiagonalization process, the situation may be summarized as

\[\begin{align*}
A V_k &= U_{k+1} B_k, \\
A^H U_{k+1} &= V_k B_k^H + \bar{\alpha}_{k+1} v_{k+1} e_{k+1}^T = V_{k+1} L_{k+1}^H, \\
V_k^H V_k &= U_k^H U_k = I_k,
@@ -115,7 +115,7 @@
\begin{bmatrix}
L_{k} \\
\beta_{k+1} e_{k}^T
-\end{bmatrix}.\]

Note that depending on how we normalize the vectors that compose $V_k$ and $U_k$, $L_k$ can be a real bidiagonal matrix even if $A$ is a complex matrix.

The function golub_kahan returns $V_{k+1}$, $U_{k+1}$, $\beta_1$ and $L_{k+1}$.

Related methods: LNLQ, CRAIG, CRAIGMR, LSLQ, LSQR and LSMR.

Note

The Golub-Kahan process coincides with the Hermitian Lanczos process applied to the normal equations $A^HA x = A^Hb$ and $AA^H x = b$. It is also related to the Hermitian Lanczos process applied to $\begin{bmatrix} 0 & A \\ A^H & 0 \end{bmatrix}$ with initial vector $\begin{bmatrix} b \\ 0 \end{bmatrix}$.

Krylov.golub_kahanMethod
V, U, β, L = golub_kahan(A, b, k)

Input arguments

  • A: a linear operator that models a matrix of dimension m × n;
  • b: a vector of length m;
  • k: the number of iterations of the Golub-Kahan process.

Output arguments

  • V: a dense n × (k+1) matrix;
  • U: a dense m × (k+1) matrix;
  • β: a coefficient such that βu₁ = b;
  • L: a sparse (k+1) × (k+1) lower bidiagonal matrix.

References

source

Saunders-Simon-Yip

saunders_simon_yip

After $k$ iterations of the Saunders-Simon-Yip process (also named the orthogonal tridiagonalization process), the situation may be summarized as

\[\begin{align*}
+\end{bmatrix}.\]

Note that depending on how we normalize the vectors that compose $V_k$ and $U_k$, $L_k$ can be a real bidiagonal matrix even if $A$ is a complex matrix.

The function golub_kahan returns $V_{k+1}$, $U_{k+1}$, $\beta_1$ and $L_{k+1}$.

Related methods: LNLQ, CRAIG, CRAIGMR, LSLQ, LSQR and LSMR.

Note

The Golub-Kahan process coincides with the Hermitian Lanczos process applied to the normal equations $A^HA x = A^Hb$ and $AA^H x = b$. It is also related to the Hermitian Lanczos process applied to $\begin{bmatrix} 0 & A \\ A^H & 0 \end{bmatrix}$ with initial vector $\begin{bmatrix} b \\ 0 \end{bmatrix}$.

Krylov.golub_kahanMethod
V, U, β, L = golub_kahan(A, b, k)

Input arguments

  • A: a linear operator that models a matrix of dimension m × n;
  • b: a vector of length m;
  • k: the number of iterations of the Golub-Kahan process.

Output arguments

  • V: a dense n × (k+1) matrix;
  • U: a dense m × (k+1) matrix;
  • β: a coefficient such that βu₁ = b;
  • L: a sparse (k+1) × (k+1) lower bidiagonal matrix.

References

source
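As an illustration, a hedged sketch with random data; the first $k$ columns of $L_{k+1}$ form the bidiagonal $B_k$:

using Krylov, LinearAlgebra

m, n, k = 120, 80, 10
A = rand(m, n)
b = rand(m)
V, U, β, L = golub_kahan(A, b, k)
norm(A * V[:, 1:k] - U * L[:, 1:k])   # A Vₖ = Uₖ₊₁ Bₖ
norm(A' * U - V * L')                 # Aᴴ Uₖ₊₁ = Vₖ₊₁ Lₖ₊₁ᴴ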

Saunders-Simon-Yip

saunders_simon_yip

After $k$ iterations of the Saunders-Simon-Yip process (also named the orthogonal tridiagonalization process), the situation may be summarized as

\[\begin{align*}
A U_k &= V_k T_k + \beta_{k+1} v_{k+1} e_k^T = V_{k+1} T_{k+1,k}, \\
A^H V_k &= U_k T_k^H + \bar{\gamma}_{k+1} u_{k+1} e_k^T = U_{k+1} T_{k,k+1}^H, \\
V_k^H V_k &= U_k^H U_k = I_k,
@@ -136,7 +136,7 @@
T_{k,k+1} =
\begin{bmatrix}
T_{k} & \gamma_{k+1} e_{k}
-\end{bmatrix}.\]

The function saunders_simon_yip returns $V_{k+1}$, $\beta_1$, $T_{k+1,k}$, $U_{k+1}$, $\bar{\gamma}_1$ and $T_{k,k+1}^H$.

Related methods: USYMLQ, USYMQR, TriLQR, TriCG and TriMR.

Note

The Saunders-Simon-Yip process is equivalent to the block-Lanczos process applied to $\begin{bmatrix} 0 & A \\ A^H & 0 \end{bmatrix}$ with initial matrix $\begin{bmatrix} b & 0 \\ 0 & c \end{bmatrix}$.

Krylov.saunders_simon_yipMethod
V, β, T, U, γᴴ, Tᴴ = saunders_simon_yip(A, b, c, k)

Input arguments

  • A: a linear operator that models a matrix of dimension m × n;
  • b: a vector of length m;
  • c: a vector of length n;
  • k: the number of iterations of the Saunders-Simon-Yip process.

Output arguments

  • V: a dense m × (k+1) matrix;
  • β: a coefficient such that βv₁ = b;
  • T: a sparse (k+1) × k tridiagonal matrix;
  • U: a dense n × (k+1) matrix;
  • γᴴ: a coefficient such that γᴴu₁ = c;
  • Tᴴ: a sparse (k+1) × k tridiagonal matrix.

Reference

source

Montoison-Orban

montoison_orban

After $k$ iterations of the Montoison-Orban process (also named the orthogonal Hessenberg reduction process), the situation may be summarized as

\[\begin{align*}
+\end{bmatrix}.\]

The function saunders_simon_yip returns $V_{k+1}$, $\beta_1$, $T_{k+1,k}$, $U_{k+1}$, $\bar{\gamma}_1$ and $T_{k,k+1}^H$.

Related methods: USYMLQ, USYMQR, TriLQR, TriCG and TriMR.

Note

The Saunders-Simon-Yip process is equivalent to the block-Lanczos process applied to $\begin{bmatrix} 0 & A \\ A^H & 0 \end{bmatrix}$ with initial matrix $\begin{bmatrix} b & 0 \\ 0 & c \end{bmatrix}$.

Krylov.saunders_simon_yipMethod
V, β, T, U, γᴴ, Tᴴ = saunders_simon_yip(A, b, c, k)

Input arguments

  • A: a linear operator that models a matrix of dimension m × n;
  • b: a vector of length m;
  • c: a vector of length n;
  • k: the number of iterations of the Saunders-Simon-Yip process.

Output arguments

  • V: a dense m × (k+1) matrix;
  • β: a coefficient such that βv₁ = b;
  • T: a sparse (k+1) × k tridiagonal matrix;
  • U: a dense n × (k+1) matrix;
  • γᴴ: a coefficient such that γᴴu₁ = c;
  • Tᴴ: a sparse (k+1) × k tridiagonal matrix.

Reference

source
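As an illustration, a hedged sketch with a rectangular matrix and random data:

using Krylov, LinearAlgebra

m, n, k = 120, 80, 10
A = rand(m, n)
b = rand(m); c = rand(n)
V, β, T, U, γᴴ, Tᴴ = saunders_simon_yip(A, b, c, k)
norm(A * U[:, 1:k] - V * T)     # A Uₖ = Vₖ₊₁ Tₖ₊₁,ₖ
norm(A' * V[:, 1:k] - U * Tᴴ)   # Aᴴ Vₖ = Uₖ₊₁ Tₖ,ₖ₊₁ᴴ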

Montoison-Orban

montoison_orban

After $k$ iterations of the Montoison-Orban process (also named the orthogonal Hessenberg reduction process), the situation may be summarized as

\[\begin{align*}
A U_k &= V_k H_k + h_{k+1,k} v_{k+1} e_k^T = V_{k+1} H_{k+1,k}, \\
B V_k &= U_k F_k + f_{k+1,k} u_{k+1} e_k^T = U_{k+1} F_{k+1,k}, \\
V_k^H V_k &= U_k^H U_k = I_k,
@@ -164,4 +164,4 @@
\begin{bmatrix}
F_{k} \\
f_{k+1,k} e_{k}^T
-\end{bmatrix}.\]

The function montoison_orban returns $V_{k+1}$, $\beta$, $H_{k+1,k}$, $U_{k+1}$, $\gamma$ and $F_{k+1,k}$.

Related methods: GPMR.

Note

The Montoison-Orban process is equivalent to the block-Arnoldi process applied to $\begin{bmatrix} 0 & A \\ B & 0 \end{bmatrix}$ with initial matrix $\begin{bmatrix} b & 0 \\ 0 & c \end{bmatrix}$. It also coincides with the Saunders-Simon-Yip process when $B = A^H$.

Krylov.montoison_orbanMethod
V, β, H, U, γ, F = montoison_orban(A, B, b, c, k; reorthogonalization=false)

Input arguments

  • A: a linear operator that models a matrix of dimension m × n;
  • B: a linear operator that models a matrix of dimension n × m;
  • b: a vector of length m;
  • c: a vector of length n;
  • k: the number of iterations of the Montoison-Orban process.

Keyword argument

  • reorthogonalization: reorthogonalize the new vectors of the Krylov basis against all previous vectors.

Output arguments

  • V: a dense m × (k+1) matrix;
  • β: a coefficient such that βv₁ = b;
  • H: a dense (k+1) × k upper Hessenberg matrix;
  • U: a dense n × (k+1) matrix;
  • γ: a coefficient such that γu₁ = c;
  • F: a dense (k+1) × k upper Hessenberg matrix.

Reference

source
+\end{bmatrix}.\]

The function montoison_orban returns $V_{k+1}$, $\beta$, $H_{k+1,k}$, $U_{k+1}$, $\gamma$ and $F_{k+1,k}$.

Related methods: GPMR.

Note

The Montoison-Orban process is equivalent to the block-Arnoldi process applied to $\begin{bmatrix} 0 & A \\ B & 0 \end{bmatrix}$ with initial matrix $\begin{bmatrix} b & 0 \\ 0 & c \end{bmatrix}$. It also coincides with the Saunders-Simon-Yip process when $B = A^H$.

Krylov.montoison_orbanMethod
V, β, H, U, γ, F = montoison_orban(A, B, b, c, k; reorthogonalization=false)

Input arguments

  • A: a linear operator that models a matrix of dimension m × n;
  • B: a linear operator that models a matrix of dimension n × m;
  • b: a vector of length m;
  • c: a vector of length n;
  • k: the number of iterations of the Montoison-Orban process.

Keyword argument

  • reorthogonalization: reorthogonalize the new vectors of the Krylov basis against all previous vectors.

Output arguments

  • V: a dense m × (k+1) matrix;
  • β: a coefficient such that βv₁ = b;
  • H: a dense (k+1) × k upper Hessenberg matrix;
  • U: a dense n × (k+1) matrix;
  • γ: a coefficient such that γu₁ = c;
  • F: a dense (k+1) × k upper Hessenberg matrix.

Reference

source
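As an illustration, a hedged sketch with two rectangular matrices and random data:

using Krylov, LinearAlgebra

m, n, k = 120, 80, 10
A = rand(m, n); B = rand(n, m)
b = rand(m); c = rand(n)
V, β, H, U, γ, F = montoison_orban(A, B, b, c, k)
norm(A * U[:, 1:k] - V * H)   # A Uₖ = Vₖ₊₁ Hₖ₊₁,ₖ
norm(B * V[:, 1:k] - U * F)   # B Vₖ = Uₖ₊₁ Fₖ₊₁,ₖ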
diff --git a/dev/reference/index.html b/dev/reference/index.html
index 0be90b02a..0b89b9f7a 100644
--- a/dev/reference/index.html
+++ b/dev/reference/index.html
@@ -1,2 +1,2 @@
-Reference · Krylov.jl

Reference

Index

Krylov.niterationsFunction
niterations(solver)

Return the number of iterations performed by the Krylov method associated to solver.

source
Krylov.AprodFunction
Aprod(solver)

Return the number of operator-vector products with A performed by the Krylov method associated to solver.

source
Krylov.AtprodFunction
Atprod(solver)

Return the number of operator-vector products with A' performed by the Krylov method associated to solver.

source
Krylov.extract_parametersFunction
arguments = extract_parameters(ex::Expr)

Extract the arguments of an expression that is a keyword parameter tuple. Implementation suggested by Mitchell J. O'Sullivan (@mosullivan93).

source
Base.showFunction
show(io, solver; show_stats=true)

Statistics of solver are displayed if show_stats is set to true.

source
+Reference · Krylov.jl

Reference

Index

Krylov.niterationsFunction
niterations(solver)

Return the number of iterations performed by the Krylov method associated to solver.

source
Krylov.AprodFunction
Aprod(solver)

Return the number of operator-vector products with A performed by the Krylov method associated to solver.

source
Krylov.AtprodFunction
Atprod(solver)

Return the number of operator-vector products with A' performed by the Krylov method associated to solver.

source
Krylov.extract_parametersFunction
arguments = extract_parameters(ex::Expr)

Extract the arguments of an expression that is a keyword parameter tuple. Implementation suggested by Mitchell J. O'Sullivan (@mosullivan93).

source
Base.showFunction
show(io, solver; show_stats=true)

Statistics of solver are displayed if show_stats is set to true.

source
diff --git a/dev/solvers/as/index.html b/dev/solvers/as/index.html
index c3066edf5..600747566 100644
--- a/dev/solvers/as/index.html
+++ b/dev/solvers/as/index.html
@@ -1,14 +1,14 @@
-Adjoint systems · Krylov.jl

BiLQR

Krylov.bilqrFunction
(x, y, stats) = bilqr(A, b::AbstractVector{FC}, c::AbstractVector{FC};
+Adjoint systems · Krylov.jl

BiLQR

Krylov.bilqrFunction
(x, y, stats) = bilqr(A, b::AbstractVector{FC}, c::AbstractVector{FC};
                       transfer_to_bicg::Bool=true, atol::T=√eps(T),
                       rtol::T=√eps(T), itmax::Int=0,
                       timemax::Float64=Inf, verbose::Int=0, history::Bool=false,
                       callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

(x, y, stats) = bilqr(A, b, c, x0::AbstractVector, y0::AbstractVector; kwargs...)

BiLQR can be warm-started from initial guesses x0 and y0 where kwargs are the same keyword arguments as above.

Combine BiLQ and QMR to solve adjoint systems.

[0  A] [y] = [b]
-[Aᴴ 0] [x]   [c]

The relation bᴴc ≠ 0 must be satisfied. BiLQ is used for solving primal system Ax = b of size n. QMR is used for solving dual system Aᴴy = c of size n.

Input arguments

  • A: a linear operator that models a matrix of dimension n;
  • b: a vector of length n;
  • c: a vector of length n.

Optional arguments

  • x0: a vector of length n that represents an initial guess of the solution x;
  • y0: a vector of length n that represents an initial guess of the solution y.

Keyword arguments

  • transfer_to_bicg: transfer from the BiLQ point to the BiCG point, when it exists. The transfer is based on the residual norm;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to 2n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length n;
  • y: a dense vector of length n;
  • stats: statistics collected on the run in an AdjointStats structure.

Reference

source
Krylov.bilqr!Function
solver = bilqr!(solver::BilqrSolver, A, b, c; kwargs...)
-solver = bilqr!(solver::BilqrSolver, A, b, c, x0, y0; kwargs...)

where kwargs are keyword arguments of bilqr.

See BilqrSolver for more details about the solver.

source

TriLQR

Krylov.trilqrFunction
(x, y, stats) = trilqr(A, b::AbstractVector{FC}, c::AbstractVector{FC};
+[Aᴴ 0] [x]   [c]

The relation bᴴc ≠ 0 must be satisfied. BiLQ is used for solving primal system Ax = b of size n. QMR is used for solving dual system Aᴴy = c of size n.

Input arguments

  • A: a linear operator that models a matrix of dimension n;
  • b: a vector of length n;
  • c: a vector of length n.

Optional arguments

  • x0: a vector of length n that represents an initial guess of the solution x;
  • y0: a vector of length n that represents an initial guess of the solution y.

Keyword arguments

  • transfer_to_bicg: transfer from the BiLQ point to the BiCG point, when it exists. The transfer is based on the residual norm;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to 2n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length n;
  • y: a dense vector of length n;
  • stats: statistics collected on the run in an AdjointStats structure.

Reference

source
Krylov.bilqr!Function
solver = bilqr!(solver::BilqrSolver, A, b, c; kwargs...)
+solver = bilqr!(solver::BilqrSolver, A, b, c, x0, y0; kwargs...)

where kwargs are keyword arguments of bilqr.

See BilqrSolver for more details about the solver.

source
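As an illustration, a hedged sketch with a random well-conditioned matrix; random b and c satisfy bᴴc ≠ 0 with probability one:

using Krylov, LinearAlgebra

n = 100
A = rand(n, n) + n * I
b = rand(n); c = rand(n)
x, y, stats = bilqr(A, b, c)
norm(A * x - b)    # primal residual, handled by BiLQ
norm(A' * y - c)   # dual residual, handled by QMR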

TriLQR

Krylov.trilqrFunction
(x, y, stats) = trilqr(A, b::AbstractVector{FC}, c::AbstractVector{FC};
                        transfer_to_usymcg::Bool=true, atol::T=√eps(T),
                        rtol::T=√eps(T), itmax::Int=0,
                        timemax::Float64=Inf, verbose::Int=0, history::Bool=false,
                        callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

(x, y, stats) = trilqr(A, b, c, x0::AbstractVector, y0::AbstractVector; kwargs...)

TriLQR can be warm-started from initial guesses x0 and y0 where kwargs are the same keyword arguments as above.

Combine USYMLQ and USYMQR to solve adjoint systems.

[0  A] [y] = [b]
-[Aᴴ 0] [x]   [c]

USYMLQ is used for solving primal system Ax = b of size m × n. USYMQR is used for solving dual system Aᴴy = c of size n × m.

Input arguments

  • A: a linear operator that models a matrix of dimension m × n;
  • b: a vector of length m;
  • c: a vector of length n.

Optional arguments

  • x0: a vector of length n that represents an initial guess of the solution x;
  • y0: a vector of length m that represents an initial guess of the solution y.

Keyword arguments

  • transfer_to_usymcg: transfer from the USYMLQ point to the USYMCG point, when it exists. The transfer is based on the residual norm;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to m+n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length n;
  • y: a dense vector of length m;
  • stats: statistics collected on the run in an AdjointStats structure.

Reference

source
Krylov.trilqr!Function
solver = trilqr!(solver::TrilqrSolver, A, b, c; kwargs...)
-solver = trilqr!(solver::TrilqrSolver, A, b, c, x0, y0; kwargs...)

where kwargs are keyword arguments of trilqr.

See TrilqrSolver for more details about the solver.

source
+[Aᴴ 0] [x]   [c]

USYMLQ is used for solving primal system Ax = b of size m × n. USYMQR is used for solving dual system Aᴴy = c of size n × m.

Input arguments

  • A: a linear operator that models a matrix of dimension m × n;
  • b: a vector of length m;
  • c: a vector of length n.

Optional arguments

  • x0: a vector of length n that represents an initial guess of the solution x;
  • y0: a vector of length m that represents an initial guess of the solution y.

Keyword arguments

  • transfer_to_usymcg: transfer from the USYMLQ point to the USYMCG point, when it exists. The transfer is based on the residual norm;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to m+n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length n;
  • y: a dense vector of length m;
  • stats: statistics collected on the run in an AdjointStats structure.

Reference

source
Krylov.trilqr!Function
solver = trilqr!(solver::TrilqrSolver, A, b, c; kwargs...)
+solver = trilqr!(solver::TrilqrSolver, A, b, c, x0, y0; kwargs...)

where kwargs are keyword arguments of trilqr.

See TrilqrSolver for more details about the solver.

source
diff --git a/dev/solvers/gsp/index.html b/dev/solvers/gsp/index.html
index b44c86a8d..f341861a9 100644
--- a/dev/solvers/gsp/index.html
+++ b/dev/solvers/gsp/index.html
@@ -1,5 +1,5 @@
-Generalized saddle-point and non-Hermitian partitioned systems · Krylov.jl

GPMR

Krylov.gpmrFunction
(x, y, stats) = gpmr(A, B, b::AbstractVector{FC}, c::AbstractVector{FC};
+Generalized saddle-point and non-Hermitian partitioned systems · Krylov.jl

GPMR

Krylov.gpmrFunction
(x, y, stats) = gpmr(A, B, b::AbstractVector{FC}, c::AbstractVector{FC};
                      memory::Int=20, C=I, D=I, E=I, F=I,
                      ldiv::Bool=false, gsp::Bool=false,
                      λ::FC=one(FC), μ::FC=one(FC),
@@ -9,5 +9,5 @@
                      callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

(x, y, stats) = gpmr(A, B, b, c, x0::AbstractVector, y0::AbstractVector; kwargs...)

GPMR can be warm-started from initial guesses x0 and y0 where kwargs are the same keyword arguments as above.

Given matrices A of dimension m × n and B of dimension n × m, GPMR solves the non-Hermitian partitioned linear system

[ λIₘ   A  ] [ x ] = [ b ]
 [  B   μIₙ ] [ y ]   [ c ],

of size (n+m) × (n+m) where λ and μ are real or complex numbers. A can have any shape and B has the shape of Aᴴ. A, B, b and c must all be nonzero.

This implementation allows left and right block diagonal preconditioners

[ C    ] [ λM   A ] [ E    ] [ E⁻¹x ] = [ Cb ]
 [    D ] [  B  μN ] [    F ] [ F⁻¹y ]   [ Dc ],

and can solve

[ λM   A ] [ x ] = [ b ]
-[  B  μN ] [ y ]   [ c ]

when CE = M⁻¹ and DF = N⁻¹.

By default, GPMR solves unsymmetric linear systems with λ = 1 and μ = 1.

GPMR is based on the orthogonal Hessenberg reduction process and its relations with the block-Arnoldi process. The residual norm ‖rₖ‖ is monotonically decreasing in GPMR.

GPMR stops when itmax iterations are reached or when ‖rₖ‖ ≤ atol + ‖r₀‖ * rtol. atol is an absolute tolerance and rtol is a relative tolerance.

Input arguments

  • A: a linear operator that models a matrix of dimension m × n;
  • B: a linear operator that models a matrix of dimension n × m;
  • b: a vector of length m;
  • c: a vector of length n.

Optional arguments

  • x0: a vector of length m that represents an initial guess of the solution x;
  • y0: a vector of length n that represents an initial guess of the solution y.

Keyword arguments

  • memory: if restart = true, the restarted version GPMR(k) is used with k = memory. If restart = false, the parameter memory should be used as a hint of the number of iterations to limit dynamic memory allocations. Additional storage will be allocated if the number of iterations exceeds memory;
  • C: linear operator that models a nonsingular matrix of size m, and represents the first term of the block-diagonal left preconditioner;
  • D: linear operator that models a nonsingular matrix of size n, and represents the second term of the block-diagonal left preconditioner;
  • E: linear operator that models a nonsingular matrix of size m, and represents the first term of the block-diagonal right preconditioner;
  • F: linear operator that models a nonsingular matrix of size n, and represents the second term of the block-diagonal right preconditioner;
  • ldiv: define whether the preconditioners use ldiv! or mul!;
  • gsp: if true, set λ = 1 and μ = 0 for generalized saddle-point systems;
  • λ and μ: diagonal scaling factors of the partitioned linear system;
  • reorthogonalization: reorthogonalize the new vectors of the Krylov basis against all previous vectors;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to m+n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length m;
  • y: a dense vector of length n;
  • stats: statistics collected on the run in a SimpleStats structure.

Reference

source
Krylov.gpmr!Function
solver = gpmr!(solver::GpmrSolver, A, B, b, c; kwargs...)
-solver = gpmr!(solver::GpmrSolver, A, B, b, c, x0, y0; kwargs...)

where kwargs are keyword arguments of gpmr.

Note that the memory keyword argument is the only exception. It's required to create a GpmrSolver and can't be changed later.

See GpmrSolver for more details about the solver.

source
+[  B  μN ] [ y ]   [ c ]

when CE = M⁻¹ and DF = N⁻¹.

By default, GPMR solves unsymmetric linear systems with λ = 1 and μ = 1.

GPMR is based on the orthogonal Hessenberg reduction process and its relations with the block-Arnoldi process. The residual norm ‖rₖ‖ is monotonically decreasing in GPMR.

GPMR stops when itmax iterations are reached or when ‖rₖ‖ ≤ atol + ‖r₀‖ * rtol. atol is an absolute tolerance and rtol is a relative tolerance.

Input arguments

  • A: a linear operator that models a matrix of dimension m × n;
  • B: a linear operator that models a matrix of dimension n × m;
  • b: a vector of length m;
  • c: a vector of length n.

Optional arguments

  • x0: a vector of length m that represents an initial guess of the solution x;
  • y0: a vector of length n that represents an initial guess of the solution y.

Keyword arguments

  • memory: if restart = true, the restarted version GPMR(k) is used with k = memory. If restart = false, the parameter memory should be used as a hint of the number of iterations to limit dynamic memory allocations. Additional storage will be allocated if the number of iterations exceeds memory;
  • C: linear operator that models a nonsingular matrix of size m, and represents the first term of the block-diagonal left preconditioner;
  • D: linear operator that models a nonsingular matrix of size n, and represents the second term of the block-diagonal left preconditioner;
  • E: linear operator that models a nonsingular matrix of size m, and represents the first term of the block-diagonal right preconditioner;
  • F: linear operator that models a nonsingular matrix of size n, and represents the second term of the block-diagonal right preconditioner;
  • ldiv: define whether the preconditioners use ldiv! or mul!;
  • gsp: if true, set λ = 1 and μ = 0 for generalized saddle-point systems;
  • λ and μ: diagonal scaling factors of the partitioned linear system;
  • reorthogonalization: reorthogonalize the new vectors of the Krylov basis against all previous vectors;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to m+n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length m;
  • y: a dense vector of length n;
  • stats: statistics collected on the run in a SimpleStats structure.

Reference

source
Krylov.gpmr!Function
solver = gpmr!(solver::GpmrSolver, A, B, b, c; kwargs...)
+solver = gpmr!(solver::GpmrSolver, A, B, b, c, x0, y0; kwargs...)

where kwargs are keyword arguments of gpmr.

Note that the memory keyword argument is the only exception. It's required to create a GpmrSolver and can't be changed later.

See GpmrSolver for more details about the solver.

source
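As an illustration, a hedged sketch with random data for the default case λ = μ = 1:

using Krylov, LinearAlgebra

m, n = 120, 80
A = rand(m, n); B = rand(n, m)
b = rand(m); c = rand(n)
x, y, stats = gpmr(A, B, b, c)   # solves [Iₘ A; B Iₙ] [x; y] = [b; c]
norm(b - (x + A * y))            # first block of the residual
norm(c - (B * x + y))            # second block of the residual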
diff --git a/dev/solvers/ln/index.html b/dev/solvers/ln/index.html
index 06e00fc2e..bc1fd5768 100644
--- a/dev/solvers/ln/index.html
+++ b/dev/solvers/ln/index.html
@@ -1,15 +1,15 @@
-Least-norm problems · Krylov.jl

CGNE

Krylov.cgneFunction
(x, stats) = cgne(A, b::AbstractVector{FC};
+Least-norm problems · Krylov.jl

CGNE

Krylov.cgneFunction
(x, stats) = cgne(A, b::AbstractVector{FC};
                   N=I, ldiv::Bool=false,
                   λ::T=zero(T), atol::T=√eps(T),
                   rtol::T=√eps(T), itmax::Int=0,
                   timemax::Float64=Inf, verbose::Int=0, history::Bool=false,
-                  callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

Solve the consistent linear system

Ax + √λs = b

of size m × n using the Conjugate Gradient (CG) method, where λ ≥ 0 is a regularization parameter. This method is equivalent to applying CG to the normal equations of the second kind

(AAᴴ + λI) y = b

but is more stable. When λ = 0, this method solves the minimum-norm problem

min ‖x‖₂ s.t. Ax = b.

When λ > 0, it solves the problem

min ‖(x,s)‖₂  s.t. Ax + √λs = b.

CGNE produces monotonic errors ‖x-x*‖₂ but not residuals ‖r‖₂. It is formally equivalent to CRAIG: it can be slightly less accurate, but it is simpler to implement. Only the x-part of the solution is returned.

Input arguments

  • A: a linear operator that models a matrix of dimension m × n;
  • b: a vector of length m.

Keyword arguments

  • N: linear operator that models a Hermitian positive-definite matrix of size n used for preconditioning;
  • ldiv: define whether the preconditioner uses ldiv! or mul!;
  • λ: regularization parameter;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to m+n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length n;
  • stats: statistics collected on the run in a SimpleStats structure.

References

  • J. E. Craig, The N-step iteration procedures, Journal of Mathematics and Physics, 34(1), pp. 64–73, 1955.
  • J. E. Craig, Iterations Procedures for Simultaneous Equations, Ph.D. Thesis, Department of Electrical Engineering, MIT, 1954.
source
Krylov.cgne!Function
solver = cgne!(solver::CgneSolver, A, b; kwargs...)

where kwargs are keyword arguments of cgne.

See CgneSolver for more details about the solver.

source

CRMR

Krylov.crmrFunction
(x, stats) = crmr(A, b::AbstractVector{FC};
+                  callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

Solve the consistent linear system

Ax + √λs = b

of size m × n using the Conjugate Gradient (CG) method, where λ ≥ 0 is a regularization parameter. This method is equivalent to applying CG to the normal equations of the second kind

(AAᴴ + λI) y = b

but is more stable. When λ = 0, this method solves the minimum-norm problem

min ‖x‖₂ s.t. Ax = b.

When λ > 0, it solves the problem

min ‖(x,s)‖₂  s.t. Ax + √λs = b.

CGNE produces monotonic errors ‖x-x*‖₂ but not residuals ‖r‖₂. It is formally equivalent to CRAIG: it can be slightly less accurate, but it is simpler to implement. Only the x-part of the solution is returned.

Input arguments

  • A: a linear operator that models a matrix of dimension m × n;
  • b: a vector of length m.

Keyword arguments

  • N: linear operator that models a Hermitian positive-definite matrix of size n used for preconditioning;
  • ldiv: define whether the preconditioner uses ldiv! or mul!;
  • λ: regularization parameter;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to m+n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length n;
  • stats: statistics collected on the run in a SimpleStats structure.

References

  • J. E. Craig, The N-step iteration procedures, Journal of Mathematics and Physics, 34(1), pp. 64–73, 1955.
  • J. E. Craig, Iterations Procedures for Simultaneous Equations, Ph.D. Thesis, Department of Electrical Engineering, MIT, 1954.
source
Krylov.cgne!Function
solver = cgne!(solver::CgneSolver, A, b; kwargs...)

where kwargs are keyword arguments of cgne.

See CgneSolver for more details about the solver.

source
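As an illustration, a hedged sketch on a consistent underdetermined system built with random data:

using Krylov, LinearAlgebra

m, n = 80, 120            # a short and wide matrix
A = rand(m, n)
b = A * ones(n)           # b lies in the range of A by construction
x, stats = cgne(A, b)     # minimum-norm solution of Ax = b
norm(A * x - b)           # ≈ 0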

CRMR

Krylov.crmrFunction
(x, stats) = crmr(A, b::AbstractVector{FC};
                   N=I, ldiv::Bool=false,
                   λ::T=zero(T), atol::T=√eps(T),
                   rtol::T=√eps(T), itmax::Int=0,
                   timemax::Float64=Inf, verbose::Int=0, history::Bool=false,
-                  callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

Solve the consistent linear system

Ax + √λs = b

of size m × n using the Conjugate Residual (CR) method, where λ ≥ 0 is a regularization parameter. This method is equivalent to applying CR to the normal equations of the second kind

(AAᴴ + λI) y = b

but is more stable. When λ = 0, this method solves the minimum-norm problem

min ‖x‖₂  s.t.  x ∈ argmin ‖Ax - b‖₂.

When λ > 0, this method solves the problem

min ‖(x,s)‖₂  s.t. Ax + √λs = b.

CRMR produces monotonic residuals ‖r‖₂. It is formally equivalent to CRAIG-MR: it can be slightly less accurate, but it is simpler to implement. Only the x-part of the solution is returned.

Input arguments

  • A: a linear operator that models a matrix of dimension m × n;
  • b: a vector of length m.

Keyword arguments

  • N: linear operator that models a Hermitian positive-definite matrix of size n used for preconditioning;
  • ldiv: define whether the preconditioner uses ldiv! or mul!;
  • λ: regularization parameter;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to m+n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length n;
  • stats: statistics collected on the run in a SimpleStats structure.

References

source
Krylov.crmr!Function
solver = crmr!(solver::CrmrSolver, A, b; kwargs...)

where kwargs are keyword arguments of crmr.

See CrmrSolver for more details about the solver.

source

LNLQ

Krylov.lnlqFunction
(x, y, stats) = lnlq(A, b::AbstractVector{FC};
+                  callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

Solve the consistent linear system

Ax + √λs = b

of size m × n using the Conjugate Residual (CR) method, where λ ≥ 0 is a regularization parameter. This method is equivalent to applying CR to the normal equations of the second kind

(AAᴴ + λI) y = b

but is more stable. When λ = 0, this method solves the minimum-norm problem

min ‖x‖₂  s.t.  x ∈ argmin ‖Ax - b‖₂.

When λ > 0, this method solves the problem

min ‖(x,s)‖₂  s.t. Ax + √λs = b.

CRMR produces monotonic residuals ‖r‖₂. It is formally equivalent to CRAIG-MR: it can be slightly less accurate, but it is simpler to implement. Only the x-part of the solution is returned.

Input arguments

  • A: a linear operator that models a matrix of dimension m × n;
  • b: a vector of length m.

Keyword arguments

  • N: linear operator that models a Hermitian positive-definite matrix of size n used for preconditioning;
  • ldiv: define whether the preconditioner uses ldiv! or mul!;
  • λ: regularization parameter;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to m+n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length n;
  • stats: statistics collected on the run in a SimpleStats structure.

References

source
Krylov.crmr!Function
solver = crmr!(solver::CrmrSolver, A, b; kwargs...)

where kwargs are keyword arguments of crmr.

See CrmrSolver for more details about the solver.

source

LNLQ

Krylov.lnlqFunction
(x, y, stats) = lnlq(A, b::AbstractVector{FC};
                      M=I, N=I, ldiv::Bool=false,
                      transfer_to_craig::Bool=true,
                      sqd::Bool=false, λ::T=zero(T),
                      σ::T=zero(T), utolx::T=√eps(T), utoly::T=√eps(T),
                      atol::T=√eps(T), rtol::T=√eps(T), itmax::Int=0,
                      timemax::Float64=Inf, verbose::Int=0, history::Bool=false,
                      callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

Find the least-norm solution of the consistent linear system

Ax + λ²y = b

of size m × n using the LNLQ method, where λ ≥ 0 is a regularization parameter.

For a system in the form Ax = b, the LNLQ method is equivalent to applying SYMMLQ to AAᴴy = b and recovering x = Aᴴy, but it is more stable. Note that y is the vector of Lagrange multipliers of the least-norm problem

minimize ‖x‖  s.t.  Ax = b.

If λ > 0, LNLQ solves the symmetric and quasi-definite system

[ -F    Aᴴ ] [ x ]   [ 0 ]
[  A   λ²E ] [ y ] = [ b ],

where E and F are symmetric and positive definite. Preconditioners M = E⁻¹ ≻ 0 and N = F⁻¹ ≻ 0 may be provided in the form of linear operators. If sqd=true, λ is set to the common value 1.

The system above represents the optimality conditions of

min ‖x‖²_F + λ²‖y‖²_E  s.t.  Ax + λ²Ey = b.

For a symmetric and positive definite matrix K, the K-norm of a vector x is ‖x‖²_K = xᴴKx. LNLQ is then equivalent to applying SYMMLQ to (AF⁻¹Aᴴ + λ²E)y = b with Fx = Aᴴy.

If λ = 0, LNLQ solves the symmetric and indefinite system

[ -F   Aᴴ ] [ x ]   [ 0 ]
[  A   0  ] [ y ] = [ b ].

The system above represents the optimality conditions of

minimize ‖x‖²_F  s.t.  Ax = b.

In this case, M can still be specified and indicates the weighted norm in which residuals are measured.

In this implementation, both the x and y-parts of the solution are returned.

utolx and utoly are tolerances on the upper bounds of the distances to the solution, ‖x - x*‖ and ‖y - y*‖ respectively. The bounds are valid if λ > 0 or σ > 0, where σ should be strictly smaller than the smallest positive singular value, e.g., σ := (1-10⁻⁷)σₘᵢₙ.

Input arguments

  • A: a linear operator that models a matrix of dimension m × n;
  • b: a vector of length m.

Keyword arguments

  • M: linear operator that models a Hermitian positive-definite matrix of size m used for centered preconditioning of the augmented system;
  • N: linear operator that models a Hermitian positive-definite matrix of size n used for centered preconditioning of the augmented system;
  • ldiv: define whether the preconditioners use ldiv! or mul!;
  • transfer_to_craig: transfer from the LNLQ point to the CRAIG point, when it exists. The transfer is based on the residual norm;
  • sqd: if true, set λ=1 for Hermitian quasi-definite systems;
  • λ: regularization parameter;
  • σ: strict lower bound on the smallest positive singular value σₘᵢₙ such as σ = (1-10⁻⁷)σₘᵢₙ;
  • utolx: tolerance on the upper bound on the distance to the solution ‖x-x*‖;
  • utoly: tolerance on the upper bound on the distance to the solution ‖y-y*‖;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to m+n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length n;
  • y: a dense vector of length m;
  • stats: statistics collected on the run in a LNLQStats structure.

Reference

source
Krylov.lnlq!Function
solver = lnlq!(solver::LnlqSolver, A, b; kwargs...)

where kwargs are keyword arguments of lnlq.

See LnlqSolver for more details about the solver.

source
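
A minimal sketch of lnlq on a randomly generated consistent system (illustrative data); it also checks the relation x = Aᴴy recovered at the least-norm solution:

using Krylov, LinearAlgebra

A = rand(3, 5)
b = A * ones(5)                # consistent system
x, y, stats = lnlq(A, b)       # both the x- and y-parts are returned
norm(x - A' * y)               # ≈ 0 up to the stopping tolerances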

CRAIG

Krylov.craigFunction
(x, y, stats) = craig(A, b::AbstractVector{FC};
                       M=I, N=I, ldiv::Bool=false,
                       transfer_to_lsqr::Bool=false, sqd::Bool=false,
                       λ::T=zero(T), btol::T=√eps(T),
                       conlim::T=1/√eps(T), atol::T=√eps(T),
                       rtol::T=√eps(T), itmax::Int=0,
                       timemax::Float64=Inf, verbose::Int=0, history::Bool=false,
                       callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

Find the least-norm solution of the consistent linear system

Ax + λ²y = b

of size m × n using the Golub-Kahan implementation of Craig's method, where λ ≥ 0 is a regularization parameter. This method is equivalent to CGNE but is more stable.

For a system in the form Ax = b, Craig's method is equivalent to applying CG to AAᴴy = b and recovering x = Aᴴy. Note that y is the vector of Lagrange multipliers of the least-norm problem

minimize ‖x‖  s.t.  Ax = b.

If λ > 0, CRAIG solves the symmetric and quasi-definite system

[ -F     Aᴴ ] [ x ]   [ 0 ]
[  A   λ²E  ] [ y ] = [ b ],

where E and F are symmetric and positive definite. Preconditioners M = E⁻¹ ≻ 0 and N = F⁻¹ ≻ 0 may be provided in the form of linear operators. If sqd=true, λ is set to the common value 1.

The system above represents the optimality conditions of

min ‖x‖²_F + λ²‖y‖²_E  s.t.  Ax + λ²Ey = b.

For a symmetric and positive definite matrix K, the K-norm of a vector x is ‖x‖²_K = xᴴKx. CRAIG is then equivalent to applying CG to (AF⁻¹Aᴴ + λ²E)y = b with Fx = Aᴴy.

If λ = 0, CRAIG solves the symmetric and indefinite system

[ -F   Aᴴ ] [ x ]   [ 0 ]
[  A   0  ] [ y ] = [ b ].

The system above represents the optimality conditions of

minimize ‖x‖²_F  s.t.  Ax = b.

In this case, M can still be specified and indicates the weighted norm in which residuals are measured.

In this implementation, both the x and y-parts of the solution are returned.

Input arguments

  • A: a linear operator that models a matrix of dimension m × n;
  • b: a vector of length m.

Keyword arguments

  • M: linear operator that models a Hermitian positive-definite matrix of size m used for centered preconditioning of the augmented system;
  • N: linear operator that models a Hermitian positive-definite matrix of size n used for centered preconditioning of the augmented system;
  • ldiv: define whether the preconditioners use ldiv! or mul!;
  • transfer_to_lsqr: transfer from the LSLQ point to the LSQR point, when it exists. The transfer is based on the residual norm;
  • sqd: if true, set λ=1 for Hermitian quasi-definite systems;
  • λ: regularization parameter;
  • btol: stopping tolerance used to detect zero-residual problems;
  • conlim: limit on the estimated condition number of A beyond which the solution will be abandoned;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to m+n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length n;
  • y: a dense vector of length m;
  • stats: statistics collected on the run in a SimpleStats structure.

References

source
Krylov.craig!Function
solver = craig!(solver::CraigSolver, A, b; kwargs...)

where kwargs are keyword arguments of craig.

See CraigSolver for more details about the solver.

source
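
A minimal sketch of craig with sqd=true on illustrative random data; it checks the second block row of the quasi-definite system above with E = I and λ = 1:

using Krylov, LinearAlgebra

A = rand(4, 6)
b = rand(4)
x, y, stats = craig(A, b, sqd=true)   # E = F = I and λ = 1
norm(A * x + y - b)                   # ≈ 0: the block row Ax + λ²Ey = b holds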

CRAIGMR

Krylov.craigmrFunction
(x, y, stats) = craigmr(A, b::AbstractVector{FC};
                         M=I, N=I, ldiv::Bool=false,
                         sqd::Bool=false, λ::T=zero(T), atol::T=√eps(T),
                         rtol::T=√eps(T), itmax::Int=0,
                         timemax::Float64=Inf, verbose::Int=0, history::Bool=false,
                         callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

Solve the consistent linear system

Ax + λ²y = b

of size m × n using the CRAIGMR method, where λ ≥ 0 is a regularization parameter. This method is equivalent to applying the Conjugate Residuals method to the normal equations of the second kind

(AAᴴ + λ²I) y = b

but is more stable. When λ = 0, this method solves the minimum-norm problem

min ‖x‖  s.t.  x ∈ argmin ‖Ax - b‖.

If λ > 0, CRAIGMR solves the symmetric and quasi-definite system

[ -F    Aᴴ ] [ x ]   [ 0 ]
[  A   λ²E ] [ y ] = [ b ],

where E and F are symmetric and positive definite. Preconditioners M = E⁻¹ ≻ 0 and N = F⁻¹ ≻ 0 may be provided in the form of linear operators. If sqd=true, λ is set to the common value 1.

The system above represents the optimality conditions of

min ‖x‖²_F + λ²‖y‖²_E  s.t.  Ax + λ²Ey = b.

For a symmetric and positive definite matrix K, the K-norm of a vector x is ‖x‖²_K = xᴴKx. CRAIGMR is then equivalent to applying MINRES to (AF⁻¹Aᴴ + λ²E)y = b with Fx = Aᴴy.

If λ = 0, CRAIGMR solves the symmetric and indefinite system

[ -F   Aᴴ ] [ x ]   [ 0 ]
[  A   0  ] [ y ] = [ b ].

The system above represents the optimality conditions of

min ‖x‖²_F  s.t.  Ax = b.

In this case, M can still be specified and indicates the weighted norm in which residuals are measured.

CRAIGMR produces monotonic residuals ‖r‖₂. It is formally equivalent to CRMR, though it can be slightly more accurate, but it is more intricate to implement. Both the x- and y-parts of the solution are returned.

Input arguments

  • A: a linear operator that models a matrix of dimension m × n;
  • b: a vector of length m.

Keyword arguments

  • M: linear operator that models a Hermitian positive-definite matrix of size m used for centered preconditioning of the augmented system;
  • N: linear operator that models a Hermitian positive-definite matrix of size n used for centered preconditioning of the augmented system;
  • ldiv: define whether the preconditioners use ldiv! or mul!;
  • sqd: if true, set λ=1 for Hermitian quasi-definite systems;
  • λ: regularization parameter;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to m+n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length n;
  • y: a dense vector of length m;
  • stats: statistics collected on the run in a SimpleStats structure.

References

source
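
A minimal sketch of craigmr on illustrative random data; with history=true, the recorded residual norms can be inspected:

using Krylov

A = rand(3, 5)
b = A * ones(5)
x, y, stats = craigmr(A, b, history=true)
stats.residuals                # the recorded ‖rₖ‖₂ decrease monotonically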

USYMLQ

Krylov.usymlqFunction
(x, stats) = usymlq(A, b::AbstractVector{FC}, c::AbstractVector{FC};
                     transfer_to_usymcg::Bool=true, atol::T=√eps(T),
                     rtol::T=√eps(T), itmax::Int=0,
                     timemax::Float64=Inf, verbose::Int=0, history::Bool=false,
                     callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

(x, stats) = usymlq(A, b, c, x0::AbstractVector; kwargs...)

USYMLQ can be warm-started from an initial guess x0 where kwargs are the same keyword arguments as above.

USYMLQ determines the least-norm solution of the consistent linear system Ax = b of size m × n.

USYMLQ is based on the orthogonal tridiagonalization process and requires two initial nonzero vectors b and c. The vector c is only used to initialize the process and a default value can be b or Aᴴb depending on the shape of A. The error norm ‖x - x*‖ decreases monotonically in USYMLQ. When A is Hermitian and b = c, USYMLQ is equivalent to SYMMLQ. USYMLQ can be seen as a generalization of SYMMLQ.

It can also be applied to under-determined and over-determined problems. In all cases, problems must be consistent.

Input arguments

  • A: a linear operator that models a matrix of dimension m × n;
  • b: a vector of length m;
  • c: a vector of length n.

Optional argument

  • x0: a vector of length n that represents an initial guess of the solution x.

Keyword arguments

  • transfer_to_usymcg: transfer from the USYMLQ point to the USYMCG point, when it exists. The transfer is based on the residual norm;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to m+n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length n;
  • stats: statistics collected on the run in a SimpleStats structure.

References

source
Krylov.usymlq!Function
solver = usymlq!(solver::UsymlqSolver, A, b, c; kwargs...)
solver = usymlq!(solver::UsymlqSolver, A, b, c, x0; kwargs...)

where kwargs are keyword arguments of usymlq.

See UsymlqSolver for more details about the solver.

source
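
A minimal sketch of usymlq on illustrative random data; the choice c = Aᴴb follows the remark above on default values:

using Krylov, LinearAlgebra

A = rand(4, 6)
b = A * ones(6)          # the system must be consistent
c = A' * b               # c only initializes the process; Aᴴb is one natural choice
x, stats = usymlq(A, b, c)
norm(A * x - b)          # ≈ 0 up to the stopping tolerances
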
Least-squares problems · Krylov.jl

CGLS

Krylov.cglsFunction
(x, stats) = cgls(A, b::AbstractVector{FC};
                   M=I, ldiv::Bool=false, radius::T=zero(T),
                   λ::T=zero(T), atol::T=√eps(T), rtol::T=√eps(T),
                   itmax::Int=0, timemax::Float64=Inf,
                   verbose::Int=0, history::Bool=false,
                   callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

Solve the regularized linear least-squares problem

minimize ‖b - Ax‖₂² + λ‖x‖₂²

of size m × n using the Conjugate Gradient (CG) method, where λ ≥ 0 is a regularization parameter. This method is equivalent to applying CG to the normal equations

(AᴴA + λI) x = Aᴴb

but is more stable.

CGLS produces monotonic residuals ‖r‖₂ but not optimality residuals ‖Aᴴr‖₂. It is formally equivalent to LSQR, though it can be slightly less accurate, but it is simpler to implement.

Input arguments

  • A: a linear operator that models a matrix of dimension m × n;
  • b: a vector of length m.

Keyword arguments

  • M: linear operator that models a Hermitian positive-definite matrix of size n used for preconditioning;
  • ldiv: define whether the preconditioner uses ldiv! or mul!;
  • radius: add the trust-region constraint ‖x‖ ≤ radius if radius > 0. Useful to compute a step in a trust-region method for optimization;
  • λ: regularization parameter;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to m+n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length n;
  • stats: statistics collected on the run in a SimpleStats structure.

References

source
Krylov.cgls!Function
solver = cgls!(solver::CglsSolver, A, b; kwargs...)

where kwargs are keyword arguments of cgls.

See CglsSolver for more details about the solver.

source
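
A minimal sketch of cgls on an illustrative overdetermined problem with λ = 0:

using Krylov, LinearAlgebra

A = rand(10, 4)          # overdetermined least-squares problem
b = rand(10)
x, stats = cgls(A, b)
norm(A' * (b - A * x))   # ≈ 0: the normal equations hold at a solution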

CRLS

Krylov.crlsFunction
(x, stats) = crls(A, b::AbstractVector{FC};
                   M=I, ldiv::Bool=false, radius::T=zero(T),
                   λ::T=zero(T), atol::T=√eps(T), rtol::T=√eps(T),
                   itmax::Int=0, timemax::Float64=Inf,
                   verbose::Int=0, history::Bool=false,
                   callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

Solve the linear least-squares problem

minimize ‖b - Ax‖₂² + λ‖x‖₂²

of size m × n using the Conjugate Residuals (CR) method. This method is equivalent to applying MINRES to the normal equations

(AᴴA + λI) x = Aᴴb.

This implementation recurs the residual r := b - Ax.

CRLS produces monotonic residuals ‖r‖₂ and optimality residuals ‖Aᴴr‖₂. It is formally equivalent to LSMR, though it can be substantially less accurate, but it is simpler to implement.

Input arguments

  • A: a linear operator that models a matrix of dimension m × n;
  • b: a vector of length m.

Keyword arguments

  • M: linear operator that models a Hermitian positive-definite matrix of size n used for preconditioning;
  • ldiv: define whether the preconditioner uses ldiv! or mul!;
  • radius: add the trust-region constraint ‖x‖ ≤ radius if radius > 0. Useful to compute a step in a trust-region method for optimization;
  • λ: regularization parameter;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to m+n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length n;
  • stats: statistics collected on the run in a SimpleStats structure.

Reference

  • D. C.-L. Fong, Minimum-Residual Methods for Sparse Least-Squares using Golub-Kahan Bidiagonalization, Ph.D. Thesis, Stanford University, 2011.
source
Krylov.crls!Function
solver = crls!(solver::CrlsSolver, A, b; kwargs...)

where kwargs are keyword arguments of crls.

See CrlsSolver for more details about the solver.

source
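
A minimal sketch of crls with the trust-region keyword radius (the value 1.0 is arbitrary):

using Krylov, LinearAlgebra

A = rand(10, 4)
b = rand(10)
x, stats = crls(A, b, radius=1.0)   # trust-region variant
norm(x) ≤ 1.0 + √eps()              # the step satisfies ‖x‖ ≤ radius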

LSLQ

Krylov.lslqFunction
(x, stats) = lslq(A, b::AbstractVector{FC};
                   M=I, N=I, ldiv::Bool=false,
                   window::Int=5, transfer_to_lsqr::Bool=false,
                   sqd::Bool=false, λ::T=zero(T),
                   σ::T=zero(T), etol::T=√eps(T),
                   utol::T=√eps(T), btol::T=√eps(T),
                   conlim::T=1/√eps(T), atol::T=√eps(T),
                   rtol::T=√eps(T), itmax::Int=0,
                   timemax::Float64=Inf, verbose::Int=0, history::Bool=false,
                   callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

Solve the regularized linear least-squares problem

minimize ‖b - Ax‖₂² + λ²‖x‖₂²

of size m × n using the LSLQ method, where λ ≥ 0 is a regularization parameter. LSLQ is formally equivalent to applying SYMMLQ to the normal equations

(AᴴA + λ²I) x = Aᴴb

but is more stable.

If λ > 0, we solve the symmetric and quasi-definite system

[ E      A ] [ r ]   [ b ]
[ Aᴴ  -λ²F ] [ x ] = [ 0 ],

where E and F are symmetric and positive definite. Preconditioners M = E⁻¹ ≻ 0 and N = F⁻¹ ≻ 0 may be provided in the form of linear operators. If sqd=true, λ is set to the common value 1.

The system above represents the optimality conditions of

minimize ‖b - Ax‖²_E⁻¹ + λ²‖x‖²_F.

For a symmetric and positive definite matrix K, the K-norm of a vector x is ‖x‖²_K = xᴴKx. LSLQ is then equivalent to applying SYMMLQ to (AᴴE⁻¹A + λ²F)x = AᴴE⁻¹b with r = E⁻¹(b - Ax).

If λ = 0, we solve the symmetric and indefinite system

[ E    A ] [ r ]   [ b ]
[ Aᴴ   0 ] [ x ] = [ 0 ].

The system above represents the optimality conditions of

minimize ‖b - Ax‖²_E⁻¹.

In this case, N can still be specified and indicates the weighted norm in which x and Aᴴr should be measured. r can be recovered by computing E⁻¹(b - Ax).

Main features

  • the solution estimate is updated along orthogonal directions
  • the norm of the solution estimate ‖xᴸₖ‖₂ is increasing
  • the error ‖eₖ‖₂ := ‖xᴸₖ - x*‖₂ is decreasing
  • it is possible to transition cheaply from the LSLQ iterate to the LSQR iterate if there is an advantage (there always is in terms of error)
  • if A is rank deficient, it identifies the minimum-norm least-squares solution

Input arguments

  • A: a linear operator that models a matrix of dimension m × n;
  • b: a vector of length m.

Keyword arguments

  • M: linear operator that models a Hermitian positive-definite matrix of size m used for centered preconditioning of the augmented system;
  • N: linear operator that models a Hermitian positive-definite matrix of size n used for centered preconditioning of the augmented system;
  • ldiv: define whether the preconditioners use ldiv! or mul!;
  • window: number of iterations used to accumulate a lower bound on the error;
  • transfer_to_lsqr: transfer from the LSLQ point to the LSQR point, when it exists. The transfer is based on the residual norm;
  • sqd: if true, set λ=1 for Hermitian quasi-definite systems;
  • λ: regularization parameter;
  • σ: strict lower bound on the smallest positive singular value σₘᵢₙ such as σ = (1-10⁻⁷)σₘᵢₙ;
  • etol: stopping tolerance based on the lower bound on the error;
  • utol: stopping tolerance based on the upper bound on the error;
  • btol: stopping tolerance used to detect zero-residual problems;
  • conlim: limit on the estimated condition number of A beyond which the solution will be abandoned;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to m+n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length n;

  • stats: statistics collected on the run in a LSLQStats structure.

  • stats.err_lbnds is a vector of lower bounds on the LQ error; the vector is empty if window is set to zero;

  • stats.err_ubnds_lq is a vector of upper bounds on the LQ error; the vector is empty if σ is left at zero;

  • stats.err_ubnds_cg is a vector of upper bounds on the CG error; the vector is empty if σ is left at zero;

  • stats.error_with_bnd is a boolean indicating whether there was an error in the computation of the upper bounds (cancellation errors, σ too large, ...).

Stopping conditions

The iterations stop as soon as one of the following conditions holds true:

  • the optimality residual is sufficiently small (stats.status = "found approximate minimum least-squares solution") in the sense that either
    • ‖Aᴴr‖ / (‖A‖ ‖r‖) ≤ atol, or
    • 1 + ‖Aᴴr‖ / (‖A‖ ‖r‖) ≤ 1
  • an approximate zero-residual solution has been found (stats.status = "found approximate zero-residual solution") in the sense that either
    • ‖r‖ / ‖b‖ ≤ btol + atol ‖A‖ * ‖xᴸ‖ / ‖b‖, or
    • 1 + ‖r‖ / ‖b‖ ≤ 1
  • the estimated condition number of A is too large in the sense that either
    • 1/cond(A) ≤ 1/conlim (stats.status = "condition number exceeds tolerance"), or
    • 1 + 1/cond(A) ≤ 1 (stats.status = "condition number seems too large for this machine")
  • the lower bound on the LQ forward error is less than etol * ‖xᴸ‖
  • the upper bound on the CG forward error is less than utol * ‖xᶜ‖

References

source
Krylov.lslq!Function
solver = lslq!(solver::LslqSolver, A, b; kwargs...)

where kwargs are keyword arguments of lslq.

See LslqSolver for more details about the solver.

source
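
A minimal sketch of lslq on illustrative random data, activating the upper error bounds with a σ chosen as suggested above:

using Krylov, LinearAlgebra

A = rand(10, 4)
b = rand(10)
σmin = minimum(svdvals(A))
x, stats = lslq(A, b, σ=(1 - 1e-7) * σmin, history=true)
stats.err_ubnds_lq        # upper bounds on the LQ error, nonempty since σ > 0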

LSQR

Krylov.lsqrFunction
(x, stats) = lsqr(A, b::AbstractVector{FC};
                   M=I, N=I, ldiv::Bool=false,
                   window::Int=5, sqd::Bool=false, λ::T=zero(T),
                   radius::T=zero(T), etol::T=√eps(T),
                   axtol::T=√eps(T), btol::T=√eps(T),
                   conlim::T=1/√eps(T), atol::T=zero(T),
                   rtol::T=zero(T), itmax::Int=0,
                   timemax::Float64=Inf, verbose::Int=0, history::Bool=false,
                   callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

Solve the regularized linear least-squares problem

minimize ‖b - Ax‖₂² + λ²‖x‖₂²

of size m × n using the LSQR method, where λ ≥ 0 is a regularization parameter. LSQR is formally equivalent to applying CG to the normal equations

(AᴴA + λ²I) x = Aᴴb

(and therefore to CGLS) but is more stable.

LSQR produces monotonic residuals ‖r‖₂ but not optimality residuals ‖Aᴴr‖₂. It is formally equivalent to CGLS, though it can be slightly more accurate.

If λ > 0, LSQR solves the symmetric and quasi-definite system

[ E      A ] [ r ]   [ b ]
[ Aᴴ  -λ²F ] [ x ] = [ 0 ],

where E and F are symmetric and positive definite. Preconditioners M = E⁻¹ ≻ 0 and N = F⁻¹ ≻ 0 may be provided in the form of linear operators. If sqd=true, λ is set to the common value 1.

The system above represents the optimality conditions of

minimize ‖b - Ax‖²_E⁻¹ + λ²‖x‖²_F.

For a symmetric and positive definite matrix K, the K-norm of a vector x is ‖x‖²_K = xᴴKx. LSQR is then equivalent to applying CG to (AᴴE⁻¹A + λ²F)x = AᴴE⁻¹b with r = E⁻¹(b - Ax).

If λ = 0, we solve the symmetric and indefinite system

[ E    A ] [ r ]   [ b ]
[ Aᴴ   0 ] [ x ] = [ 0 ].

The system above represents the optimality conditions of

minimize ‖b - Ax‖²_E⁻¹.

In this case, N can still be specified and indicates the weighted norm in which x and Aᴴr should be measured. r can be recovered by computing E⁻¹(b - Ax).

Input arguments

  • A: a linear operator that models a matrix of dimension m × n;
  • b: a vector of length m.

Keyword arguments

  • M: linear operator that models a Hermitian positive-definite matrix of size m used for centered preconditioning of the augmented system;
  • N: linear operator that models a Hermitian positive-definite matrix of size n used for centered preconditioning of the augmented system;
  • ldiv: define whether the preconditioners use ldiv! or mul!;
  • window: number of iterations used to accumulate a lower bound on the error;
  • sqd: if true, set λ=1 for Hermitian quasi-definite systems;
  • λ: regularization parameter;
  • radius: add the trust-region constraint ‖x‖ ≤ radius if radius > 0. Useful to compute a step in a trust-region method for optimization;
  • etol: stopping tolerance based on the lower bound on the error;
  • axtol: tolerance on the backward error;
  • btol: stopping tolerance used to detect zero-residual problems;
  • conlim: limit on the estimated condition number of A beyond which the solution will be abandoned;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to m+n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length n;
  • stats: statistics collected on the run in a SimpleStats structure.

Reference

source
Krylov.lsqr!Function
solver = lsqr!(solver::LsqrSolver, A, b; kwargs...)

where kwargs are keyword arguments of lsqr.

See LsqrSolver for more details about the solver.

source
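
A minimal sketch of lsqr with a regularization parameter λ (the value is arbitrary), checking the optimality conditions of the damped problem:

using Krylov, LinearAlgebra

A = rand(10, 4)
b = rand(10)
λ = 1.0e-2
x, stats = lsqr(A, b, λ=λ)
norm(A' * (b - A * x) - λ^2 * x)   # ≈ 0: optimality of min ‖b - Ax‖² + λ²‖x‖²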

LSMR

Krylov.lsmrFunction
(x, stats) = lsmr(A, b::AbstractVector{FC};
                   M=I, N=I, ldiv::Bool=false,
                   window::Int=5, sqd::Bool=false, λ::T=zero(T),
                   radius::T=zero(T), etol::T=√eps(T),
                   axtol::T=√eps(T), btol::T=√eps(T),
                   conlim::T=1/√eps(T), atol::T=zero(T),
                   rtol::T=zero(T), itmax::Int=0,
                   timemax::Float64=Inf, verbose::Int=0, history::Bool=false,
                   callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

Solve the regularized linear least-squares problem

minimize ‖b - Ax‖₂² + λ²‖x‖₂²

of size m × n using the LSMR method, where λ ≥ 0 is a regularization parameter. LSMR is formally equivalent to applying MINRES to the normal equations

(AᴴA + λ²I) x = Aᴴb

(and therefore to CRLS) but is more stable.

LSMR produces monotonic residuals ‖r‖₂ and optimality residuals ‖Aᴴr‖₂. It is formally equivalent to CRLS, though it can be substantially more accurate.

LSMR can also be used to find a null vector of a singular matrix A by solving the problem min ‖Aᴴx - b‖ with any nonzero vector b. At a minimizer, the residual vector r = b - Aᴴx will satisfy Ar = 0.

If λ > 0, we solve the symmetric and quasi-definite system

[ E      A ] [ r ]   [ b ]
[ Aᴴ  -λ²F ] [ x ] = [ 0 ],

where E and F are symmetric and positive definite. Preconditioners M = E⁻¹ ≻ 0 and N = F⁻¹ ≻ 0 may be provided in the form of linear operators. If sqd=true, λ is set to the common value 1.

The system above represents the optimality conditions of

minimize ‖b - Ax‖²_E⁻¹ + λ²‖x‖²_F.

For a symmetric and positive definite matrix K, the K-norm of a vector x is ‖x‖²_K = xᴴKx. LSMR is then equivalent to applying MINRES to (AᴴE⁻¹A + λ²F)x = AᴴE⁻¹b with r = E⁻¹(b - Ax).

If λ = 0, we solve the symmetric and indefinite system

[ E    A ] [ r ]   [ b ]
[ Aᴴ   0 ] [ x ] = [ 0 ].

The system above represents the optimality conditions of

minimize ‖b - Ax‖²_E⁻¹.

In this case, N can still be specified and indicates the weighted norm in which x and Aᴴr should be measured. r can be recovered by computing E⁻¹(b - Ax).

Input arguments

  • A: a linear operator that models a matrix of dimension m × n;
  • b: a vector of length m.

Keyword arguments

  • M: linear operator that models a Hermitian positive-definite matrix of size m used for centered preconditioning of the augmented system;
  • N: linear operator that models a Hermitian positive-definite matrix of size n used for centered preconditioning of the augmented system;
  • ldiv: define whether the preconditioners use ldiv! or mul!;
  • window: number of iterations used to accumulate a lower bound on the error;
  • sqd: if true, set λ=1 for Hermitian quasi-definite systems;
  • λ: regularization parameter;
  • radius: add the trust-region constraint ‖x‖ ≤ radius if radius > 0. Useful to compute a step in a trust-region method for optimization;
  • etol: stopping tolerance based on the lower bound on the error;
  • axtol: tolerance on the backward error;
  • btol: stopping tolerance used to detect zero-residual problems;
  • conlim: limit on the estimated condition number of A beyond which the solution will be abandoned;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to m+n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length n;
  • stats: statistics collected on the run in a LsmrStats structure.

Reference

source
Krylov.lsmr!Function
solver = lsmr!(solver::LsmrSolver, A, b; kwargs...)

where kwargs are keyword arguments of lsmr.

See LsmrSolver for more details about the solver.

source
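
A minimal sketch of the null-vector technique described above, on a deliberately singular matrix built from random data:

using Krylov, LinearAlgebra

B = rand(5, 3)
A = B * B'                # a singular 5×5 matrix (rank ≤ 3)
b = rand(5)
x, stats = lsmr(A', b)    # min ‖Aᴴx - b‖
r = b - A' * x
norm(A * r)               # ≈ 0: r is a null vector of A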

USYMQR

Krylov.usymqrFunction
(x, stats) = usymqr(A, b::AbstractVector{FC}, c::AbstractVector{FC};
                     atol::T=√eps(T), rtol::T=√eps(T), itmax::Int=0,
                     timemax::Float64=Inf, verbose::Int=0, history::Bool=false,
                    callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

(x, stats) = usymqr(A, b, c, x0::AbstractVector; kwargs...)

USYMQR can be warm-started from an initial guess x0 where kwargs are the same keyword arguments as above.

USYMQR solves the linear least-squares problem min ‖b - Ax‖² of size m × n. USYMQR solves Ax = b if it is consistent.

USYMQR is based on the orthogonal tridiagonalization process and requires two initial nonzero vectors b and c. The vector c is only used to initialize the process and a default value can be b or Aᴴb depending on the shape of A. The residual norm ‖b - Ax‖ decreases monotonically in USYMQR. When A is Hermitian and b = c, USYMQR is equivalent to MINRES, and USYMQR can be regarded as a generalization of MINRES.

It can also be applied to under-determined and over-determined problems. USYMQR finds the minimum-norm solution if problems are inconsistent.

Input arguments

  • A: a linear operator that models a matrix of dimension m × n;
  • b: a vector of length m;
  • c: a vector of length n.

Optional argument

  • x0: a vector of length n that represents an initial guess of the solution x.

Keyword arguments

  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to m+n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length n;
  • stats: statistics collected on the run in a SimpleStats structure.

References

source
Krylov.usymqr!Function
solver = usymqr!(solver::UsymqrSolver, A, b, c; kwargs...)
solver = usymqr!(solver::UsymqrSolver, A, b, c, x0; kwargs...)

where kwargs are keyword arguments of usymqr.

See UsymqrSolver for more details about the solver.

source
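
A minimal usage sketch with made-up data (not from the original docstring); the auxiliary vector c only seeds the orthogonal tridiagonalization process:

using Krylov

m, n = 80, 50
A = randn(m, n)
b = randn(m)
c = randn(n)                # nonzero vector that initializes the process
x, stats = usymqr(A, b, c)  # least-squares solution of min ‖b - Ax‖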

Hermitian indefinite linear systems · Krylov.jl

SYMMLQ

Krylov.symmlqFunction
(x, stats) = symmlq(A, b::AbstractVector{FC};
                     M=I, ldiv::Bool=false, window::Int=5,
                     transfer_to_cg::Bool=true, λ::T=zero(T),
                     λest::T=zero(T), etol::T=√eps(T),
                     conlim::T=1/√eps(T), atol::T=√eps(T),
                     rtol::T=√eps(T), itmax::Int=0,
                     timemax::Float64=Inf, verbose::Int=0, history::Bool=false,
                    callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

(x, stats) = symmlq(A, b, x0::AbstractVector; kwargs...)

SYMMLQ can be warm-started from an initial guess x0 where kwargs are the same keyword arguments as above.

Solve the shifted linear system

(A + λI) x = b

of size n using the SYMMLQ method, where λ is a shift parameter, and A is Hermitian.

SYMMLQ produces monotonic errors ‖x* - x‖₂.

Input arguments

  • A: a linear operator that models a Hermitian matrix of dimension n;
  • b: a vector of length n.

Optional argument

  • x0: a vector of length n that represents an initial guess of the solution x.

Keyword arguments

  • M: linear operator that models a Hermitian positive-definite matrix of size n used for centered preconditioning;
  • ldiv: define whether the preconditioner uses ldiv! or mul!;
  • window: number of iterations used to accumulate a lower bound on the error;
  • transfer_to_cg: transfer from the SYMMLQ point to the CG point, when it exists. The transfer is based on the residual norm;
  • λ: regularization parameter;
  • λest: positive strict lower bound on the smallest eigenvalue λₘᵢₙ when solving a positive-definite system, such as λest = (1-10⁻⁷)λₘᵢₙ;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • etol: stopping tolerance based on the lower bound on the error;
  • conlim: limit on the estimated condition number of A beyond which the solution will be abandoned;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to 2n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length n;
  • stats: statistics collected on the run in a SymmlqStats structure.

Reference

source
Krylov.symmlq!Function
solver = symmlq!(solver::SymmlqSolver, A, b; kwargs...)
solver = symmlq!(solver::SymmlqSolver, A, b, x0; kwargs...)

where kwargs are keyword arguments of symmlq.

See SymmlqSolver for more details about the solver.

source
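
For illustration only, a sketch of a symmlq call on a random Hermitian (generally indefinite) system; the problem data is arbitrary:

using Krylov

n = 100
C = randn(n, n)
A = C + C'              # Hermitian, generally indefinite
b = randn(n)
x, stats = symmlq(A, b)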

MINRES

Krylov.minresFunction
(x, stats) = minres(A, b::AbstractVector{FC};
                     M=I, ldiv::Bool=false, window::Int=5,
                     λ::T=zero(T), atol::T=√eps(T),
                     rtol::T=√eps(T), etol::T=√eps(T),
                     conlim::T=1/√eps(T), itmax::Int=0,
                     timemax::Float64=Inf, verbose::Int=0, history::Bool=false,
                    callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

(x, stats) = minres(A, b, x0::AbstractVector; kwargs...)

MINRES can be warm-started from an initial guess x0 where kwargs are the same keyword arguments as above.

Solve the shifted linear least-squares problem

minimize ‖b - (A + λI)x‖₂²

or the shifted linear system

(A + λI) x = b

of size n using the MINRES method, where λ ≥ 0 is a shift parameter and A is Hermitian.

MINRES is formally equivalent to applying CR to Ax=b when A is positive definite, but is typically more stable and also applies to the case where A is indefinite.

MINRES produces monotonic residuals ‖r‖₂ and optimality residuals ‖Aᴴr‖₂.

Input arguments

  • A: a linear operator that models a Hermitian matrix of dimension n;
  • b: a vector of length n.

Optional argument

  • x0: a vector of length n that represents an initial guess of the solution x.

Keyword arguments

  • M: linear operator that models a Hermitian positive-definite matrix of size n used for centered preconditioning;
  • ldiv: define whether the preconditioner uses ldiv! or mul!;
  • window: number of iterations used to accumulate a lower bound on the error;
  • λ: regularization parameter;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • etol: stopping tolerance based on the lower bound on the error;
  • conlim: limit on the estimated condition number of A beyond which the solution will be abandoned;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to 2n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length n;
  • stats: statistics collected on the run in a SimpleStats structure.

Reference

source
Krylov.minres!Function
solver = minres!(solver::MinresSolver, A, b; kwargs...)
solver = minres!(solver::MinresSolver, A, b, x0; kwargs...)

where kwargs are keyword arguments of minres.

See MinresSolver for more details about the solver.

source
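
The warm-start form can reuse a previous solution as an initial guess. A small sketch with random, illustrative data:

using Krylov

n = 100
C = randn(n, n)
A = C + C'                  # Hermitian, generally indefinite
b = randn(n)
x, stats = minres(A, b)
x, stats = minres(A, b, x)  # warm start from the previous solution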

MINRES-QLP

Krylov.minres_qlpFunction
(x, stats) = minres_qlp(A, b::AbstractVector{FC};
                         M=I, ldiv::Bool=false, Artol::T=√eps(T),
                         λ::T=zero(T), atol::T=√eps(T),
                         rtol::T=√eps(T), itmax::Int=0,
                         timemax::Float64=Inf, verbose::Int=0, history::Bool=false,
                        callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

(x, stats) = minres_qlp(A, b, x0::AbstractVector; kwargs...)

MINRES-QLP can be warm-started from an initial guess x0 where kwargs are the same keyword arguments as above.

MINRES-QLP is the only method based on the Lanczos process that returns the minimum-norm solution on singular inconsistent systems (A + λI)x = b of size n, where λ is a shift parameter. It is significantly more complex but can be more reliable than MINRES when A is ill-conditioned.

M also indicates the weighted norm in which residuals are measured.

Input arguments

  • A: a linear operator that models a Hermitian matrix of dimension n;
  • b: a vector of length n.

Optional argument

  • x0: a vector of length n that represents an initial guess of the solution x.

Keyword arguments

  • M: linear operator that models a Hermitian positive-definite matrix of size n used for centered preconditioning;
  • ldiv: define whether the preconditioner uses ldiv! or mul!;
  • λ: regularization parameter;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • Artol: relative stopping tolerance based on the Aᴴ-residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to 2n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length n;
  • stats: statistics collected on the run in a SimpleStats structure.

References

source
Krylov.minres_qlp!Function
solver = minres_qlp!(solver::MinresQlpSolver, A, b; kwargs...)
solver = minres_qlp!(solver::MinresQlpSolver, A, b, x0; kwargs...)

where kwargs are keyword arguments of minres_qlp.

See MinresQlpSolver for more details about the solver.

source
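
A sketch on a deliberately singular Hermitian system, where minres_qlp returns the minimum-norm least-squares solution; the rank-deficient matrix below is purely illustrative:

using Krylov

n = 50
G = randn(n, 5)
A = G * G'                   # Hermitian and singular (rank ≤ 5)
b = randn(n)                 # generally inconsistent right-hand side
x, stats = minres_qlp(A, b)  # minimum-norm least-squares solution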

MINARES

Krylov.minaresFunction
(x, stats) = minares(A, b::AbstractVector{FC};
                      M=I, ldiv::Bool=false,
                      λ::T = zero(T), atol::T=√eps(T),
                      rtol::T=√eps(T), Artol::T = √eps(T),
                      itmax::Int=0, timemax::Float64=Inf,
                      verbose::Int=0, history::Bool=false,
                     callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

(x, stats) = minares(A, b, x0::AbstractVector; kwargs...)

MINARES can be warm-started from an initial guess x0 where kwargs are the same keyword arguments as above.

MINARES solves the Hermitian linear system Ax = b of size n. MINARES minimizes ‖Arₖ‖₂ when M = Iₙ and ‖AMrₖ‖_M otherwise. The estimates computed every iteration are ‖Mrₖ‖₂ and ‖AMrₖ‖_M.

Input arguments

  • A: a linear operator that models a Hermitian positive definite matrix of dimension n;
  • b: a vector of length n.

Optional argument

  • x0: a vector of length n that represents an initial guess of the solution x.

Keyword arguments

  • M: linear operator that models a Hermitian positive-definite matrix of size n used for centered preconditioning;
  • ldiv: define whether the preconditioner uses ldiv! or mul!;
  • λ: regularization parameter;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • Artol: relative stopping tolerance based on the Aᴴ-residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to 2n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length n;
  • stats: statistics collected on the run in a SimpleStats structure.

Reference

source
Krylov.minares!Function
solver = minares!(solver::MinaresSolver, A, b; kwargs...)
solver = minares!(solver::MinaresSolver, A, b, x0; kwargs...)

where kwargs are keyword arguments of minares.

See MinaresSolver for more details about the solver.

source
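
A minimal sketch with arbitrary Hermitian data (not part of the original docstring):

using Krylov

n = 100
C = randn(n, n)
A = C + C'                # Hermitian
b = randn(n)
x, stats = minares(A, b)  # minimizes ‖Arₖ‖₂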

Saddle-point and Hermitian quasi-definite systems · Krylov.jl

TriCG

Krylov.tricgFunction
(x, y, stats) = tricg(A, b::AbstractVector{FC}, c::AbstractVector{FC};
                       M=I, N=I, ldiv::Bool=false,
                       spd::Bool=false, snd::Bool=false,
                       flip::Bool=false, τ::T=one(T),
                       ν::T=-one(T), atol::T=√eps(T),
                       rtol::T=√eps(T), itmax::Int=0,
                       timemax::Float64=Inf, verbose::Int=0, history::Bool=false,
                       callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

(x, y, stats) = tricg(A, b, c, x0::AbstractVector, y0::AbstractVector; kwargs...)

TriCG can be warm-started from initial guesses x0 and y0 where kwargs are the same keyword arguments as above.

Given a matrix A of dimension m × n, TriCG solves the Hermitian linear system

[ τE   A ] [ x ] = [ b ]
[ Aᴴ  νF ] [ y ]   [ c ],

of size (n+m) × (n+m) where τ and ν are real numbers, E = M⁻¹ ≻ 0 and F = N⁻¹ ≻ 0. b and c must both be nonzero. TriCG could break down if τ = 0 or ν = 0. It's recommended to use TriMR in these cases.

By default, TriCG solves Hermitian and quasi-definite linear systems with τ = 1 and ν = -1.

TriCG is based on the preconditioned orthogonal tridiagonalization process and its relation with the preconditioned block-Lanczos process.

[ M   0 ]
[ 0   N ]

indicates the weighted norm in which residuals are measured. It's the Euclidean norm when M and N are identity operators.

TriCG stops when itmax iterations are reached or when ‖rₖ‖ ≤ atol + ‖r₀‖ * rtol. atol is an absolute tolerance and rtol is a relative tolerance.

Input arguments

  • A: a linear operator that models a matrix of dimension m × n;
  • b: a vector of length m;
  • c: a vector of length n.

Optional arguments

  • x0: a vector of length m that represents an initial guess of the solution x;
  • y0: a vector of length n that represents an initial guess of the solution y.

Keyword arguments

  • M: linear operator that models a Hermitian positive-definite matrix of size m used for centered preconditioning of the partitioned system;
  • N: linear operator that models a Hermitian positive-definite matrix of size n used for centered preconditioning of the partitioned system;
  • ldiv: define whether the preconditioners use ldiv! or mul!;
  • spd: if true, set τ = 1 and ν = 1 for Hermitian and positive-definite linear systems;
  • snd: if true, set τ = -1 and ν = -1 for Hermitian and negative-definite linear systems;
  • flip: if true, set τ = -1 and ν = 1 for another known variant of Hermitian quasi-definite systems;
  • τ and ν: diagonal scaling factors of the partitioned Hermitian linear system;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to m+n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length m;
  • y: a dense vector of length n;
  • stats: statistics collected on the run in a SimpleStats structure.

Reference

source
Krylov.tricg!Function
solver = tricg!(solver::TricgSolver, A, b, c; kwargs...)
solver = tricg!(solver::TricgSolver, A, b, c, x0, y0; kwargs...)

where kwargs are keyword arguments of tricg.

See TricgSolver for more details about the solver.

source
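
With the default τ = 1 and ν = -1 and identity preconditioners, tricg solves the quasi-definite system [ I  A; Aᴴ  -I ] [ x; y ] = [ b; c ]. A sketch with random, illustrative data:

using Krylov

m, n = 60, 40
A = randn(m, n)
b = randn(m)
c = randn(n)
x, y, stats = tricg(A, b, c)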

TriMR

Krylov.trimrFunction
(x, y, stats) = trimr(A, b::AbstractVector{FC}, c::AbstractVector{FC};
                       M=I, N=I, ldiv::Bool=false,
                       spd::Bool=false, snd::Bool=false,
                       flip::Bool=false, sp::Bool=false,
                       τ::T=one(T), ν::T=-one(T), atol::T=√eps(T),
                       rtol::T=√eps(T), itmax::Int=0,
                       timemax::Float64=Inf, verbose::Int=0, history::Bool=false,
                       callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

(x, y, stats) = trimr(A, b, c, x0::AbstractVector, y0::AbstractVector; kwargs...)

TriMR can be warm-started from initial guesses x0 and y0 where kwargs are the same keyword arguments as above.

Given a matrix A of dimension m × n, TriMR solves the Hermitian linear system

[ τE   A ] [ x ] = [ b ]
[ Aᴴ  νF ] [ y ]   [ c ],

of size (n+m) × (n+m) where τ and ν are real numbers, E = M⁻¹ ≻ 0, F = N⁻¹ ≻ 0. b and c must both be nonzero. TriMR handles saddle-point systems (τ = 0 or ν = 0) and adjoint systems (τ = 0 and ν = 0) without any risk of breakdown.

By default, TriMR solves Hermitian and quasi-definite linear systems with τ = 1 and ν = -1.

TriMR is based on the preconditioned orthogonal tridiagonalization process and its relation with the preconditioned block-Lanczos process.

[ M   0 ]
[ 0   N ]

indicates the weighted norm in which residuals are measured. It's the Euclidean norm when M and N are identity operators.

TriMR stops when itmax iterations are reached or when ‖rₖ‖ ≤ atol + ‖r₀‖ * rtol. atol is an absolute tolerance and rtol is a relative tolerance.

Input arguments

  • A: a linear operator that models a matrix of dimension m × n;
  • b: a vector of length m;
  • c: a vector of length n.

Optional arguments

  • x0: a vector of length m that represents an initial guess of the solution x;
  • y0: a vector of length n that represents an initial guess of the solution y.

Keyword arguments

  • M: linear operator that models a Hermitian positive-definite matrix of size m used for centered preconditioning of the partitioned system;
  • N: linear operator that models a Hermitian positive-definite matrix of size n used for centered preconditioning of the partitioned system;
  • ldiv: define whether the preconditioners use ldiv! or mul!;
  • spd: if true, set τ = 1 and ν = 1 for Hermitian and positive-definite linear systems;
  • snd: if true, set τ = -1 and ν = -1 for Hermitian and negative-definite linear systems;
  • flip: if true, set τ = -1 and ν = 1 for another known variant of Hermitian quasi-definite systems;
  • sp: if true, set τ = 1 and ν = 0 for saddle-point systems;
  • τ and ν: diagonal scaling factors of the partitioned Hermitian linear system;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to m+n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length m;
  • y: a dense vector of length n;
  • stats: statistics collected on the run in a SimpleStats structure.

Reference

source
Krylov.trimr!Function
solver = trimr!(solver::TrimrSolver, A, b, c; kwargs...)
solver = trimr!(solver::TrimrSolver, A, b, c, x0, y0; kwargs...)

where kwargs are keyword arguments of trimr.

See TrimrSolver for more details about the solver.

source
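
Setting sp=true selects the saddle-point variant τ = 1, ν = 0. A sketch with random, illustrative data:

using Krylov

m, n = 60, 40
A = randn(m, n)
b = randn(m)
c = randn(n)
x, y, stats = trimr(A, b, c, sp=true)  # [ I  A; Aᴴ  0 ] [ x; y ] = [ b; c ]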

Hermitian positive definite linear systems · Krylov.jl

CG

Krylov.cgFunction
(x, stats) = cg(A, b::AbstractVector{FC};
                 M=I, ldiv::Bool=false, radius::T=zero(T),
                 linesearch::Bool=false, atol::T=√eps(T),
                 rtol::T=√eps(T), itmax::Int=0,
                 timemax::Float64=Inf, verbose::Int=0, history::Bool=false,
                callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

(x, stats) = cg(A, b, x0::AbstractVector; kwargs...)

CG can be warm-started from an initial guess x0 where kwargs are the same keyword arguments as above.

The conjugate gradient method to solve the Hermitian linear system Ax = b of size n.

The method does not abort if A is not definite. M also indicates the weighted norm in which residuals are measured.

Input arguments

  • A: a linear operator that models a Hermitian positive definite matrix of dimension n;
  • b: a vector of length n.

Optional argument

  • x0: a vector of length n that represents an initial guess of the solution x.

Keyword arguments

  • M: linear operator that models a Hermitian positive-definite matrix of size n used for centered preconditioning;
  • ldiv: define whether the preconditioner uses ldiv! or mul!;
  • radius: add the trust-region constraint ‖x‖ ≤ radius if radius > 0. Useful to compute a step in a trust-region method for optimization;
  • linesearch: if true, indicate that the solution is to be used in an inexact Newton method with linesearch. If negative curvature is detected at iteration k > 0, the solution of iteration k-1 is returned. If negative curvature is detected at iteration 0, the right-hand side is returned (i.e., the negative gradient);
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to 2n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length n;
  • stats: statistics collected on the run in a SimpleStats structure.

Reference

source
Krylov.cg!Function
solver = cg!(solver::CgSolver, A, b; kwargs...)
solver = cg!(solver::CgSolver, A, b, x0; kwargs...)

where kwargs are keyword arguments of cg.

See CgSolver for more details about the solver.

source
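
A sketch of an out-of-place call followed by the in-place variant with a preallocated CgSolver workspace; the sparse positive-definite matrix below is made up for illustration:

using Krylov, LinearAlgebra, SparseArrays

n = 100
A = sprandn(n, n, 0.05)
A = A * A' + I           # Hermitian positive definite
b = randn(n)
x, stats = cg(A, b)

solver = CgSolver(A, b)  # preallocated workspace for repeated solves
cg!(solver, A, b)
solver.x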

CR

Krylov.crFunction
(x, stats) = cr(A, b::AbstractVector{FC};
                 M=I, ldiv::Bool=false, radius::T=zero(T),
                 linesearch::Bool=false, γ::T=√eps(T),
                 atol::T=√eps(T), rtol::T=√eps(T), itmax::Int=0,
                 timemax::Float64=Inf, verbose::Int=0, history::Bool=false,
                callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

(x, stats) = cr(A, b, x0::AbstractVector; kwargs...)

CR can be warm-started from an initial guess x0 where kwargs are the same keyword arguments as above.

A truncated version of Stiefel’s Conjugate Residual method to solve the Hermitian linear system Ax = b of size n or the least-squares problem min ‖b - Ax‖ if A is singular. The matrix A must be Hermitian semi-definite. M also indicates the weighted norm in which residuals are measured.

Input arguments

  • A: a linear operator that models a Hermitian positive definite matrix of dimension n;
  • b: a vector of length n.

Optional argument

  • x0: a vector of length n that represents an initial guess of the solution x.

Keyword arguments

  • M: linear operator that models a Hermitian positive-definite matrix of size n used for centered preconditioning;
  • ldiv: define whether the preconditioner uses ldiv! or mul!;
  • radius: add the trust-region constraint ‖x‖ ≤ radius if radius > 0. Useful to compute a step in a trust-region method for optimization;
  • linesearch: if true, indicate that the solution is to be used in an inexact Newton method with linesearch. If negative curvature is detected at iteration k > 0, the solution of iteration k-1 is returned. If negative curvature is detected at iteration 0, the right-hand side is returned (i.e., the negative gradient);
  • γ: tolerance to determine that the curvature of the quadratic model is nonpositive;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to 2n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length n;
  • stats: statistics collected on the run in a SimpleStats structure.

References

source
Krylov.cr!Function
solver = cr!(solver::CrSolver, A, b; kwargs...)
solver = cr!(solver::CrSolver, A, b, x0; kwargs...)

where kwargs are keyword arguments of cr.

See CrSolver for more details about the solver.

source
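
A sketch with an arbitrary positive-definite matrix and an illustrative trust-region radius:

using Krylov, LinearAlgebra

n = 100
C = randn(n, n)
A = C' * C + I                    # Hermitian positive definite
b = randn(n)
x, stats = cr(A, b, radius=10.0)  # enforce ‖x‖ ≤ 10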

CAR

Krylov.carFunction
(x, stats) = car(A, b::AbstractVector{FC};
                  M=I, ldiv::Bool=false,
                  atol::T=√eps(T), rtol::T=√eps(T),
                  itmax::Int=0, timemax::Float64=Inf,
                  verbose::Int=0, history::Bool=false,
                 callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

(x, stats) = car(A, b, x0::AbstractVector; kwargs...)

CAR can be warm-started from an initial guess x0 where kwargs are the same keyword arguments as above.

CAR solves the Hermitian and positive definite linear system Ax = b of size n. CAR minimizes ‖Arₖ‖₂ when M = Iₙ and ‖AMrₖ‖_M otherwise. The estimates computed every iteration are ‖Mrₖ‖₂ and ‖AMrₖ‖_M.

Input arguments

  • A: a linear operator that models a Hermitian positive definite matrix of dimension n;
  • b: a vector of length n.

Optional argument

  • x0: a vector of length n that represents an initial guess of the solution x.

Keyword arguments

  • M: linear operator that models a Hermitian positive-definite matrix of size n used for centered preconditioning;
  • ldiv: define whether the preconditioner uses ldiv! or mul!;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to 2n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length n;
  • stats: statistics collected on the run in a SimpleStats structure.

Reference

source
Krylov.car!Function
solver = car!(solver::CarSolver, A, b; kwargs...)
solver = car!(solver::CarSolver, A, b, x0; kwargs...)

where kwargs are keyword arguments of car.

See CarSolver for more details about the solver.

source
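
A minimal sketch on an arbitrary Hermitian positive-definite system (illustration only):

using Krylov, LinearAlgebra

n = 100
C = randn(n, n)
A = C' * C + I        # Hermitian positive definite
b = randn(n)
x, stats = car(A, b)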

CG-LANCZOS

Krylov.cg_lanczosFunction
(x, stats) = cg_lanczos(A, b::AbstractVector{FC};
                         M=I, ldiv::Bool=false,
                         check_curvature::Bool=false, atol::T=√eps(T),
                         rtol::T=√eps(T), itmax::Int=0,
                         timemax::Float64=Inf, verbose::Int=0, history::Bool=false,
                        callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

(x, stats) = cg_lanczos(A, b, x0::AbstractVector; kwargs...)

CG-LANCZOS can be warm-started from an initial guess x0 where kwargs are the same keyword arguments as above.

The Lanczos version of the conjugate gradient method to solve the Hermitian linear system Ax = b of size n.

The method does not abort if A is not definite.

Input arguments

  • A: a linear operator that models a Hermitian matrix of dimension n;
  • b: a vector of length n.

Optional argument

  • x0: a vector of length n that represents an initial guess of the solution x.

Keyword arguments

  • M: linear operator that models a Hermitian positive-definite matrix of size n used for centered preconditioning;
  • ldiv: define whether the preconditioner uses ldiv! or mul!;
  • check_curvature: if true, check that the curvature of the quadratic along the search direction is positive, and abort if not, unless linesearch is also true;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to 2n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length n;
  • stats: statistics collected on the run in a LanczosStats structure.

References

source
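
A hedged sketch of check_curvature on an indefinite matrix (the diagonal test matrix below is illustrative):

using Krylov, LinearAlgebra

n = 10
A = diagm([-1.0; ones(n - 1)])  # Hermitian but indefinite
b = ones(n)

x, stats = cg_lanczos(A, b, check_curvature=true)
stats.indefinite  # set to true once negative curvature is detected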
Krylov.cg_lanczos!Function
solver = cg_lanczos!(solver::CgLanczosSolver, A, b; kwargs...)
solver = cg_lanczos!(solver::CgLanczosSolver, A, b, x0; kwargs...)

where kwargs are keyword arguments of cg_lanczos.

See CgLanczosSolver for more details about the solver.

source

CG-LANCZOS-SHIFT

Krylov.cg_lanczos_shiftFunction
(x, stats) = cg_lanczos_shift(A, b::AbstractVector{FC}, shifts::AbstractVector{T};
                               M=I, ldiv::Bool=false,
                               check_curvature::Bool=false, atol::T=√eps(T),
                               rtol::T=√eps(T), itmax::Int=0,
                               timemax::Float64=Inf, verbose::Int=0, history::Bool=false,
                               callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

The Lanczos version of the conjugate gradient method to solve a family of shifted systems

(A + αI) x = b  (α = α₁, ..., αₚ)

of size n. The method does not abort if A + αI is not definite.

Input arguments

  • A: a linear operator that models a Hermitian matrix of dimension n;
  • b: a vector of length n;
  • shifts: a vector of length p.

Keyword arguments

  • M: linear operator that models a Hermitian positive-definite matrix of size n used for centered preconditioning;
  • ldiv: define whether the preconditioner uses ldiv! or mul!;
  • check_curvature: if true, check that the curvature of the quadratic along the search direction is positive, and abort if not, unless linesearch is also true;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to 2n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a vector of p dense vectors, each one of length n;
  • stats: statistics collected on the run in a LanczosShiftStats structure.

References

source
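
An illustrative one-call solve of a shifted family (the diagonal matrix and shifts are assumptions):

using Krylov, LinearAlgebra

n = 50
A = diagm(1.0:n)           # Hermitian positive definite
b = ones(n)
shifts = [0.0, 1.0, 10.0]  # solve (A + αI)x = b for each α

x, stats = cg_lanczos_shift(A, b, shifts)
norm(b - (A + shifts[3] * I) * x[3])  # residual of the third shifted system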
diff --git a/dev/solvers/unsymmetric/index.html b/dev/solvers/unsymmetric/index.html
index 9cf2d2dce..8c96f0f2d 100644
--- a/dev/solvers/unsymmetric/index.html
+++ b/dev/solvers/unsymmetric/index.html
@@ -1,54 +1,54 @@

Non-Hermitian square linear systems · Krylov.jl

BiLQ

Krylov.bilqFunction
(x, stats) = bilq(A, b::AbstractVector{FC};
                   c::AbstractVector{FC}=b, transfer_to_bicg::Bool=true,
                   M=I, N=I, ldiv::Bool=false, atol::T=√eps(T),
                   rtol::T=√eps(T), itmax::Int=0, timemax::Float64=Inf,
                   verbose::Int=0, history::Bool=false,
                   callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

(x, stats) = bilq(A, b, x0::AbstractVector; kwargs...)

BiLQ can be warm-started from an initial guess x0 where kwargs are the same keyword arguments as above.

Solve the square linear system Ax = b of size n using BiLQ. BiLQ is based on the Lanczos biorthogonalization process and requires two initial vectors b and c. The relation bᴴc ≠ 0 must be satisfied and by default c = b. When A is Hermitian and b = c, BiLQ is equivalent to SYMMLQ. BiLQ requires support for adjoint(M) and adjoint(N) if preconditioners are provided.

Input arguments

  • A: a linear operator that models a matrix of dimension n;
  • b: a vector of length n.

Optional argument

  • x0: a vector of length n that represents an initial guess of the solution x.

Keyword arguments

  • c: the second initial vector of length n required by the Lanczos biorthogonalization process;
  • transfer_to_bicg: transfer from the BiLQ point to the BiCG point, when it exists. The transfer is based on the residual norm;
  • M: linear operator that models a nonsingular matrix of size n used for left preconditioning;
  • N: linear operator that models a nonsingular matrix of size n used for right preconditioning;
  • ldiv: define whether the preconditioners use ldiv! or mul!;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to 2n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length n;
  • stats: statistics collected on the run in a SimpleStats structure.

References

source
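
A hedged sketch (the random sparse test matrix is an assumption; the shift 10I keeps it well conditioned):

using Krylov, SparseArrays, LinearAlgebra

n = 100
A = sprandn(n, n, 0.05) + 10I  # non-Hermitian square matrix
b = randn(n)

x, stats = bilq(A, b)              # default c = b
x, stats = bilq(A, b, c=randn(n))  # custom second initial vector; bᴴc ≠ 0 must hold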
Krylov.bilq!Function
solver = bilq!(solver::BilqSolver, A, b; kwargs...)
solver = bilq!(solver::BilqSolver, A, b, x0; kwargs...)

where kwargs are keyword arguments of bilq.

See BilqSolver for more details about the solver.

source

QMR

Krylov.qmrFunction
(x, stats) = qmr(A, b::AbstractVector{FC};
                  c::AbstractVector{FC}=b, M=I, N=I, ldiv::Bool=false, atol::T=√eps(T),
                  rtol::T=√eps(T), itmax::Int=0, timemax::Float64=Inf, verbose::Int=0,
                  history::Bool=false, callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

(x, stats) = qmr(A, b, x0::AbstractVector; kwargs...)

QMR can be warm-started from an initial guess x0 where kwargs are the same keyword arguments as above.

Solve the square linear system Ax = b of size n using QMR.

QMR is based on the Lanczos biorthogonalization process and requires two initial vectors b and c. The relation bᴴc ≠ 0 must be satisfied and by default c = b. When A is Hermitian and b = c, QMR is equivalent to MINRES. QMR requires support for adjoint(M) and adjoint(N) if preconditioners are provided.

Input arguments

  • A: a linear operator that models a matrix of dimension n;
  • b: a vector of length n.

Optional argument

  • x0: a vector of length n that represents an initial guess of the solution x.

Keyword arguments

  • c: the second initial vector of length n required by the Lanczos biorthogonalization process;
  • M: linear operator that models a nonsingular matrix of size n used for left preconditioning;
  • N: linear operator that models a nonsingular matrix of size n used for right preconditioning;
  • ldiv: define whether the preconditioners use ldiv! or mul!;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to 2n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length n;
  • stats: statistics collected on the run in a SimpleStats structure.

References

source
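
A sketch of warm-starting QMR from a crude initial guess (the setup is illustrative):

using Krylov, SparseArrays, LinearAlgebra

n = 100
A = sprandn(n, n, 0.05) + 10I
b = randn(n)

x0, _ = bicgstab(A, b, itmax=5)  # cheap approximate solution
x, stats = qmr(A, b, x0)         # warm-started QMR
stats.niter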
Krylov.qmr!Function
solver = qmr!(solver::QmrSolver, A, b; kwargs...)
solver = qmr!(solver::QmrSolver, A, b, x0; kwargs...)

where kwargs are keyword arguments of qmr.

See QmrSolver for more details about the solver.

source

CGS

Krylov.cgsFunction
(x, stats) = cgs(A, b::AbstractVector{FC};
                  c::AbstractVector{FC}=b, M=I, N=I,
                  ldiv::Bool=false, atol::T=√eps(T),
                  rtol::T=√eps(T), itmax::Int=0,
                  timemax::Float64=Inf, verbose::Int=0, history::Bool=false,
                  callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

(x, stats) = cgs(A, b, x0::AbstractVector; kwargs...)

CGS can be warm-started from an initial guess x0 where kwargs are the same keyword arguments as above.

Solve the consistent linear system Ax = b of size n using CGS. CGS requires two initial vectors b and c. The relation bᴴc ≠ 0 must be satisfied and by default c = b.

From "Iterative Methods for Sparse Linear Systems (Y. Saad)" :

«The method is based on a polynomial variant of the conjugate gradients algorithm. Although related to the so-called bi-conjugate gradients (BCG) algorithm, it does not involve adjoint matrix-vector multiplications, and the expected convergence rate is about twice that of the BCG algorithm.

The Conjugate Gradient Squared algorithm works quite well in many cases. However, one difficulty is that, since the polynomials are squared, rounding errors tend to be more damaging than in the standard BCG algorithm. In particular, very high variations of the residual vectors often cause the residual norms computed to become inaccurate.

TFQMR and BICGSTAB were developed to remedy this difficulty.»

Input arguments

  • A: a linear operator that models a matrix of dimension n;
  • b: a vector of length n.

Optional argument

  • x0: a vector of length n that represents an initial guess of the solution x.

Keyword arguments

  • c: the second initial vector of length n required by the Lanczos biorthogonalization process;
  • M: linear operator that models a nonsingular matrix of size n used for left preconditioning;
  • N: linear operator that models a nonsingular matrix of size n used for right preconditioning;
  • ldiv: define whether the preconditioners use ldiv! or mul!;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to 2n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length n;
  • stats: statistics collected on the run in a SimpleStats structure.

Reference

source
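
Given the remark above about inaccurate residual norms, a sketch that records the residual history (the test problem is illustrative):

using Krylov, SparseArrays, LinearAlgebra

n = 100
A = sprandn(n, n, 0.05) + 10I
b = randn(n)

x, stats = cgs(A, b, history=true)
stats.residuals  # per-iteration residual norms; CGS residuals may oscillate strongly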
Krylov.cgs!Function
solver = cgs!(solver::CgsSolver, A, b; kwargs...)
solver = cgs!(solver::CgsSolver, A, b, x0; kwargs...)

where kwargs are keyword arguments of cgs.

See CgsSolver for more details about the solver.

source

BiCGSTAB

Krylov.bicgstabFunction
(x, stats) = bicgstab(A, b::AbstractVector{FC};
                       c::AbstractVector{FC}=b, M=I, N=I,
                       ldiv::Bool=false, atol::T=√eps(T),
                       rtol::T=√eps(T), itmax::Int=0,
                       timemax::Float64=Inf, verbose::Int=0, history::Bool=false,
                       callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

(x, stats) = bicgstab(A, b, x0::AbstractVector; kwargs...)

BICGSTAB can be warm-started from an initial guess x0 where kwargs are the same keyword arguments as above.

Solve the square linear system Ax = b of size n using BICGSTAB. BICGSTAB requires two initial vectors b and c. The relation bᴴc ≠ 0 must be satisfied and by default c = b.

The Biconjugate Gradient Stabilized method is a variant of BiCG, like CGS, but it uses different updates for the Aᴴ-sequence in order to obtain smoother convergence than CGS.

If BICGSTAB stagnates, we recommend DQGMRES and BiLQ as alternative methods for unsymmetric square systems.

BICGSTAB stops when itmax iterations are reached or when ‖rₖ‖ ≤ atol + ‖b‖ * rtol.

Input arguments

  • A: a linear operator that models a matrix of dimension n;
  • b: a vector of length n.

Optional argument

  • x0: a vector of length n that represents an initial guess of the solution x.

Keyword arguments

  • c: the second initial vector of length n required by the Lanczos biorthogonalization process;
  • M: linear operator that models a nonsingular matrix of size n used for left preconditioning;
  • N: linear operator that models a nonsingular matrix of size n used for right preconditioning;
  • ldiv: define whether the preconditioners use ldiv! or mul!;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to 2n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length n;
  • stats: statistics collected on the run in a SimpleStats structure.

References

source
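
A sketch of the stopping rule stated above (the tolerances and test matrix are illustrative):

using Krylov, SparseArrays, LinearAlgebra

n = 200
A = sprandn(n, n, 0.02) + 5I
b = randn(n)

# terminates once ‖rₖ‖ ≤ atol + ‖b‖ * rtol, or after itmax iterations
x, stats = bicgstab(A, b, atol=0.0, rtol=1e-8)
stats.status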
Krylov.bicgstab!Function
solver = bicgstab!(solver::BicgstabSolver, A, b; kwargs...)
solver = bicgstab!(solver::BicgstabSolver, A, b, x0; kwargs...)

where kwargs are keyword arguments of bicgstab.

See BicgstabSolver for more details about the solver.

source

DIOM

Krylov.diomFunction
(x, stats) = diom(A, b::AbstractVector{FC};
                   memory::Int=20, M=I, N=I, ldiv::Bool=false,
                   reorthogonalization::Bool=false, atol::T=√eps(T),
                   rtol::T=√eps(T), itmax::Int=0,
                   timemax::Float64=Inf, verbose::Int=0, history::Bool=false,
                   callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

(x, stats) = diom(A, b, x0::AbstractVector; kwargs...)

DIOM can be warm-started from an initial guess x0 where kwargs are the same keyword arguments as above.

Solve the consistent linear system Ax = b of size n using DIOM.

DIOM only orthogonalizes the new vectors of the Krylov basis against the memory most recent vectors. If CG is well defined on Ax = b and memory = 2, DIOM is theoretically equivalent to CG. If k ≤ memory where k is the number of iterations, DIOM is theoretically equivalent to FOM. Otherwise, DIOM interpolates between CG and FOM and is similar to CG with partial reorthogonalization.

An advantage of DIOM is that a single algorithm can handle linear systems that are non-Hermitian, Hermitian indefinite, or both non-Hermitian and indefinite.

Input arguments

  • A: a linear operator that models a matrix of dimension n;
  • b: a vector of length n.

Optional argument

  • x0: a vector of length n that represents an initial guess of the solution x.

Keyword arguments

  • memory: the number of most recent vectors of the Krylov basis against which to orthogonalize a new vector;
  • M: linear operator that models a nonsingular matrix of size n used for left preconditioning;
  • N: linear operator that models a nonsingular matrix of size n used for right preconditioning;
  • ldiv: define whether the preconditioners use ldiv! or mul!;
  • reorthogonalization: reorthogonalize the new vectors of the Krylov basis against the memory most recent vectors;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to 2n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length n;
  • stats: statistics collected on the run in a SimpleStats structure.

Reference

source
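
A sketch of the memory trade-off (the value 10 and the test problem are arbitrary choices):

using Krylov, SparseArrays, LinearAlgebra

n = 100
A = sprandn(n, n, 0.05) + 10I
b = randn(n)

x, stats = diom(A, b, memory=10)  # orthogonalize only against the 10 most recent basis vectors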
Krylov.diom!Function
solver = diom!(solver::DiomSolver, A, b; kwargs...)
solver = diom!(solver::DiomSolver, A, b, x0; kwargs...)

where kwargs are keyword arguments of diom.

Note that the memory keyword argument is the only exception. It's required to create a DiomSolver and can't be changed later.

See DiomSolver for more details about the solver.

source

FOM

Krylov.fomFunction
(x, stats) = fom(A, b::AbstractVector{FC};
                  memory::Int=20, M=I, N=I, ldiv::Bool=false,
                  restart::Bool=false, reorthogonalization::Bool=false,
                  atol::T=√eps(T), rtol::T=√eps(T), itmax::Int=0,
                  timemax::Float64=Inf, verbose::Int=0, history::Bool=false,
                  callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

(x, stats) = fom(A, b, x0::AbstractVector; kwargs...)

FOM can be warm-started from an initial guess x0 where kwargs are the same keyword arguments as above.

Solve the linear system Ax = b of size n using FOM.

The FOM algorithm is based on the Arnoldi process and a Galerkin condition.

Input arguments

  • A: a linear operator that models a matrix of dimension n;
  • b: a vector of length n.

Optional argument

  • x0: a vector of length n that represents an initial guess of the solution x.

Keyword arguments

  • memory: if restart = true, the restarted version FOM(k) is used with k = memory. If restart = false, the parameter memory should be used as a hint of the number of iterations to limit dynamic memory allocations. Additional storage will be allocated if the number of iterations exceeds memory;
  • M: linear operator that models a nonsingular matrix of size n used for left preconditioning;
  • N: linear operator that models a nonsingular matrix of size n used for right preconditioning;
  • ldiv: define whether the preconditioners use ldiv! or mul!;
  • restart: restart the method after memory iterations;
  • reorthogonalization: reorthogonalize the new vectors of the Krylov basis against all previous vectors;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to 2n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length n;
  • stats: statistics collected on the run in a SimpleStats structure.

Reference

source
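
A sketch contrasting full and restarted FOM (the matrix and memory value are illustrative):

using Krylov, SparseArrays, LinearAlgebra

n = 100
A = sprandn(n, n, 0.05) + 10I
b = randn(n)

x, stats = fom(A, b)                           # full FOM; memory is only a storage hint
x, stats = fom(A, b, memory=20, restart=true)  # restarted FOM(20) with fixed storage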
Krylov.fom!Function
solver = fom!(solver::FomSolver, A, b; kwargs...)
solver = fom!(solver::FomSolver, A, b, x0; kwargs...)

where kwargs are keyword arguments of fom.

Note that the memory keyword argument is the only exception. It's required to create a FomSolver and can't be changed later.

See FomSolver for more details about the solver.

source

DQGMRES

Krylov.dqgmresFunction
(x, stats) = dqgmres(A, b::AbstractVector{FC};
                      memory::Int=20, M=I, N=I, ldiv::Bool=false,
                      reorthogonalization::Bool=false, atol::T=√eps(T),
                      rtol::T=√eps(T), itmax::Int=0,
                      timemax::Float64=Inf, verbose::Int=0, history::Bool=false,
                      callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

(x, stats) = dqgmres(A, b, x0::AbstractVector; kwargs...)

DQGMRES can be warm-started from an initial guess x0 where kwargs are the same keyword arguments as above.

Solve the consistent linear system Ax = b of size n using DQGMRES.

The DQGMRES algorithm is based on the incomplete Arnoldi orthogonalization process and computes a sequence of approximate solutions with the quasi-minimal residual property.

DQGMRES only orthogonalizes the new vectors of the Krylov basis against the memory most recent vectors. If MINRES is well defined on Ax = b and memory = 2, DQGMRES is theoretically equivalent to MINRES. If k ≤ memory where k is the number of iterations, DQGMRES is theoretically equivalent to GMRES. Otherwise, DQGMRES interpolates between MINRES and GMRES and is similar to MINRES with partial reorthogonalization.

Input arguments

  • A: a linear operator that models a matrix of dimension n;
  • b: a vector of length n.

Optional argument

  • x0: a vector of length n that represents an initial guess of the solution x.

Keyword arguments

  • memory: the number of most recent vectors of the Krylov basis against which to orthogonalize a new vector;
  • M: linear operator that models a nonsingular matrix of size n used for left preconditioning;
  • N: linear operator that models a nonsingular matrix of size n used for right preconditioning;
  • reorthogonalization: reorthogonalize the new vectors of the Krylov basis against the memory most recent vectors;
  • ldiv: define whether the preconditioners use ldiv! or mul!;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to 2n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length n;
  • stats: statistics collected on the run in a SimpleStats structure.

Reference

source
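
A sketch with a short memory plus reorthogonalization (both settings and the test problem are illustrative):

using Krylov, SparseArrays, LinearAlgebra

n = 100
A = sprandn(n, n, 0.05) + 10I
b = randn(n)

x, stats = dqgmres(A, b, memory=10, reorthogonalization=true)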
Krylov.dqgmres!Function
solver = dqgmres!(solver::DqgmresSolver, A, b; kwargs...)
solver = dqgmres!(solver::DqgmresSolver, A, b, x0; kwargs...)

where kwargs are keyword arguments of dqgmres.

Note that the memory keyword argument is the only exception. It's required to create a DqgmresSolver and can't be changed later.

See DqgmresSolver for more details about the solver.

source

GMRES

Krylov.gmresFunction
(x, stats) = gmres(A, b::AbstractVector{FC};
                    memory::Int=20, M=I, N=I, ldiv::Bool=false,
                    restart::Bool=false, reorthogonalization::Bool=false,
                    atol::T=√eps(T), rtol::T=√eps(T), itmax::Int=0,
                    timemax::Float64=Inf, verbose::Int=0, history::Bool=false,
                    callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

(x, stats) = gmres(A, b, x0::AbstractVector; kwargs...)

GMRES can be warm-started from an initial guess x0 where kwargs are the same keyword arguments as above.

Solve the linear system Ax = b of size n using GMRES.

The GMRES algorithm is based on the Arnoldi process and computes a sequence of approximate solutions with minimum residual norm.

Input arguments

  • A: a linear operator that models a matrix of dimension n;
  • b: a vector of length n.

Optional argument

  • x0: a vector of length n that represents an initial guess of the solution x.

Keyword arguments

  • memory: if restart = true, the restarted version GMRES(k) is used with k = memory. If restart = false, the parameter memory should be used as a hint of the number of iterations to limit dynamic memory allocations. Additional storage will be allocated if the number of iterations exceeds memory;
  • M: linear operator that models a nonsingular matrix of size n used for left preconditioning;
  • N: linear operator that models a nonsingular matrix of size n used for right preconditioning;
  • ldiv: define whether the preconditioners use ldiv! or mul!;
  • restart: restart the method after memory iterations;
  • reorthogonalization: reorthogonalize the new vectors of the Krylov basis against all previous vectors;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to 2n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length n;
  • stats: statistics collected on the run in a SimpleStats structure.

Reference

source
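
A sketch contrasting full GMRES with restarted GMRES(20) (the test problem is an assumption):

using Krylov, SparseArrays, LinearAlgebra

n = 100
A = sprandn(n, n, 0.05) + 10I
b = randn(n)

x, stats = gmres(A, b)                           # full GMRES; storage grows past memory if needed
x, stats = gmres(A, b, memory=20, restart=true)  # restarted GMRES(20) with fixed storage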
Krylov.gmres!Function
solver = gmres!(solver::GmresSolver, A, b; kwargs...)
solver = gmres!(solver::GmresSolver, A, b, x0; kwargs...)

where kwargs are keyword arguments of gmres.

Note that the memory keyword argument is the only exception. It's required to create a GmresSolver and can't be changed later.

See GmresSolver for more details about the solver.

source
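
A hedged sketch of workspace reuse with the in-place variant, assuming the GmresSolver(A, b, memory) constructor documented with the solver type:

using Krylov, SparseArrays, LinearAlgebra

n = 100
A = sprandn(n, n, 0.05) + 10I
b1 = randn(n)
b2 = randn(n)

solver = GmresSolver(A, b1, 20)  # memory is fixed at construction
gmres!(solver, A, b1)            # first solve
gmres!(solver, A, b2)            # second solve reuses the same workspace
solver.x, solver.stats           # solution and statistics of the last solve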

FGMRES

Krylov.fgmresFunction
(x, stats) = fgmres(A, b::AbstractVector{FC};
                     memory::Int=20, M=I, N=I, ldiv::Bool=false,
                     restart::Bool=false, reorthogonalization::Bool=false,
                     atol::T=√eps(T), rtol::T=√eps(T), itmax::Int=0,
                     timemax::Float64=Inf, verbose::Int=0, history::Bool=false,
                     callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

(x, stats) = fgmres(A, b, x0::AbstractVector; kwargs...)

FGMRES can be warm-started from an initial guess x0 where kwargs are the same keyword arguments as above.

Solve the linear system Ax = b of size n using FGMRES.

FGMRES computes a sequence of approximate solutions with minimum residual. FGMRES is a variant of GMRES that allows changes in the right preconditioner at each iteration.

This implementation allows a left preconditioner M and a flexible right preconditioner N. A situation in which the preconditioner is "not constant" is when a relaxation-type method, a Chebyshev iteration or another Krylov subspace method is used as a preconditioner. Compared to GMRES, there is no additional cost incurred in the arithmetic but the memory requirement almost doubles. Thus, GMRES is recommended if the right preconditioner N is constant.

Input arguments

  • A: a linear operator that models a matrix of dimension n;
  • b: a vector of length n.

Optional argument

  • x0: a vector of length n that represents an initial guess of the solution x.

Keyword arguments

  • memory: if restart = true, the restarted version FGMRES(k) is used with k = memory. If restart = false, the parameter memory should be used as a hint of the number of iterations to limit dynamic memory allocations. Additional storage will be allocated if the number of iterations exceeds memory;
  • M: linear operator that models a nonsingular matrix of size n used for left preconditioning;
  • N: linear operator that models a nonsingular matrix of size n used for right preconditioning;
  • ldiv: define whether the preconditioners use ldiv! or mul!;
  • restart: restart the method after memory iterations;
  • reorthogonalization: reorthogonalize the new vectors of the Krylov basis against all previous vectors;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to 2n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length n;
  • stats: statistics collected on the run in a SimpleStats structure.

Reference

source
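
As a hedged illustration of a "not constant" right preconditioner, the wrapper below (an assumption for this sketch, not part of Krylov.jl) applies a few inner GMRES iterations each time it is invoked, so its action differs from one outer iteration to the next:

using Krylov, SparseArrays, LinearAlgebra
import LinearAlgebra: mul!

struct InnerGmres{TA}
    A::TA
end

function mul!(y::AbstractVector, P::InnerGmres, x::AbstractVector)
    z, _ = gmres(P.A, x, itmax=5)  # inexact, iteration-dependent application
    copyto!(y, z)
    return y
end

n = 100
A = sprandn(n, n, 0.05) + 10I
b = randn(n)

x, stats = fgmres(A, b, N=InnerGmres(A))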
Krylov.fgmres!Function
solver = fgmres!(solver::FgmresSolver, A, b; kwargs...)
solver = fgmres!(solver::FgmresSolver, A, b, x0; kwargs...)

where kwargs are keyword arguments of fgmres.

Note that the memory keyword argument is the only exception. It's required to create a FgmresSolver and can't be changed later.

See FgmresSolver for more details about the solver.

source
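
Below is a matching in-place sketch under the same assumptions, using the FgmresSolver(A, b, memory) constructor; the memory value (20 here) is fixed when the workspace is built and cannot be changed afterwards.

using Krylov, SparseArrays, LinearAlgebra

n = 100
A = sprandn(n, n, 0.05) + 10I    # hypothetical test matrix
b = rand(n)

solver = FgmresSolver(A, b, 20)  # memory = 20 is fixed at construction
fgmres!(solver, A, b, rtol=1e-8)
solver.x                         # solution; statistics are in solver.stats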
diff --git a/dev/storage/index.html b/dev/storage/index.html index fb8ca285a..54f2529ed 100644 --- a/dev/storage/index.html +++ b/dev/storage/index.html @@ -52,4 +52,4 @@ └──────────────────┴──────────────────┴────────────────────┘

If we want the total number of bytes used by the solver, we can call nbytes = sizeof(solver).

nbytes = sizeof(solver)
560121

Thereafter, we can use Base.format_bytes(nbytes) to recover the human-readable form displayed in the REPL.

Base.format_bytes(nbytes)
"546.993 KiB"

To verify that we match the theoretical results, we just need to multiply the storage requirement of a method by the number of bytes associated to the precision of the linear problem. For instance, we need 4 bytes for the precision Float32, 8 bytes for precisions Float64 and ComplexF32, and 16 bytes for the precision ComplexF64.

FC = Float64                            # precision of the least-squares problem
ncoefs_lsmr = 5*n + 2*m                 # number of coefficients
nbytes_lsmr = sizeof(FC) * ncoefs_lsmr  # number of bytes
560000

Therefore, you can check whether you have enough free RAM to allocate a KrylovSolver.

free_nbytes = Sys.free_memory()
Base.format_bytes(free_nbytes)  # Total free memory in RAM in bytes.
"13.442 GiB"
Note
  • Beyond having faster operations, using low precisions, such as single precision, allows storing more coefficients in RAM and solving larger linear problems.
  • In the file test_allocations.jl, we use the macro @allocated to test that we match the expected storage requirement of each method with a tolerance of 2%.
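
As a worked illustration of that check, here is a hedged sketch that compares the LSMR storage estimate above with the available memory before building the workspace. The LsmrSolver(m, n, Vector{FC}) constructor call and the 10% safety margin are assumptions made for this example.

using Krylov

m, n = 10_000, 20_000
FC = Float64
nbytes_lsmr = sizeof(FC) * (5*n + 2*m)    # LSMR storage estimate from above

if nbytes_lsmr < 0.9 * Sys.free_memory()  # keep a 10% safety margin (arbitrary)
  solver = LsmrSolver(m, n, Vector{FC})   # assumed workspace constructor
else
  @warn "not enough free RAM to allocate the LSMR workspace"
end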
diff --git a/dev/tips/index.html b/dev/tips/index.html index 605116384..74e80aed3 100644 --- a/dev/tips/index.html +++ b/dev/tips/index.html @@ -23,4 +23,4 @@
julia -t N  # alternative: --threads N
JULIA_NUM_THREADS=N julia

Thereafter, you can verify the number of threads available to Julia:

using Base.Threads
nthreads()

The following benchmarks illustrate the time required in seconds to compute 1000 sparse matrix-vector products with symmetric matrices from the SuiteSparse Matrix Collection. The computer used for the benchmarks has 2 physical cores and Julia was launched with JULIA_NUM_THREADS=2.

[benchmarks figure]
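
To reproduce this kind of measurement, a minimal sketch follows; it uses a hypothetical random symmetric matrix rather than a SuiteSparse one, and how much the products benefit from JULIA_NUM_THREADS depends on the sparse backend in use.

using SparseArrays, LinearAlgebra

n = 10_000
A = sprandn(n, n, 1e-3)
A = A + A'             # symmetrize the matrix
x = rand(n)
y = similar(x)

@time for _ in 1:1000  # time 1000 sparse matrix-vector products
  mul!(y, A, x)
end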

diff --git a/dev/warm-start/index.html b/dev/warm-start/index.html index 00febd816..2b5b2481a 100644 --- a/dev/warm-start/index.html +++ b/dev/warm-start/index.html @@ -23,4 +23,4 @@
Δx, Δy, stats = trimr(A, b₀, c₀, M=M, N=N)
x = x₀ + Δx
y = y₀ + Δy
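
To make the pattern concrete, here is a hedged sketch of the same warm-start idea for a single square system, with a hypothetical test matrix and gmres standing in for the solver: the correction Δx is computed from the residual b - A * x₀ and added back to the initial guess.

using Krylov, SparseArrays, LinearAlgebra

n = 50
A = sprandn(n, n, 0.1) + 5I   # hypothetical test matrix
b = rand(n)
x₀ = rand(n)                  # initial guess, e.g. from a previous solve

b₀ = b - A * x₀               # residual of the initial guess
Δx, stats = gmres(A, b₀)      # solve A Δx = b₀
x = x₀ + Δx                   # corrected solution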