Commit

Merge branch 'pacs-course:master' into merge
lformaggia authored Sep 21, 2024
2 parents f08487f + 832ee31 commit 8c08016
Showing 1 changed file with 3 additions and 3 deletions.
6 changes: 3 additions & 3 deletions Examples/src/FixedPointSolver/README.md
@@ -26,8 +26,8 @@ make VERBOSE=yes
## Accelerators implemented ##
- `NoAccelerator`: the simplest accelerator; it does nothing.
- `ASecant`: a two-level Anderson acceleration, which may be regarded as the multidimensional extension of the secant method for finding the zero of `f(x)=x-\phi(x)`. A good reference for the technique is *H. Fang, Y. Saad, Two classes of multisecant methods for nonlinear acceleration, Numerical Linear Algebra with Applications 16 (2009) 197–221.*
- `Anderson`: implements Anderson acceleration. A good reference is *H.F. Walker, P. Ni, Anderson Acceleration for Fixed-Point Iterations, SIAM J. Numer. Anal., 49(4), 1715–1735, 2011*. The algorithm is also described in the previously cited paper by H. Fang and Y. Saad. Anderson acceleration requires specifying the number of previous iterates used for the acceleration. The default value is 10, but you may change it by passing a different value to the constructor of the accelerator. The choice of this parameter is critical: larger values increase the computational cost of each iteration and may, but do not always, reduce the number of iterations needed to converge, so you may need to experiment with different values. A minimal sketch of an Anderson-type update is shown after this list.
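
As a rough illustration of how such an acceleration works, here is a minimal Anderson-type fixed-point loop written with Eigen. This is not the code of this example: the function name, signature, default depth `m`, and stopping test are assumptions made only for this sketch. Each step combines the last `m` differences of iterates and residuals through a small least-squares problem.

```cpp
// Illustrative sketch only: a basic Anderson acceleration loop for the
// fixed-point problem x = phi(x), written with Eigen. Names, the default
// depth m and the stopping test are assumptions, not the code of this example.
#include <Eigen/Dense>
#include <deque>
#include <functional>

using Vector = Eigen::VectorXd;
using Matrix = Eigen::MatrixXd;

Vector
andersonFixedPoint(const std::function<Vector(const Vector &)> &phi, Vector x,
                   std::size_t m = 10, double tol = 1e-8, unsigned maxIter = 100)
{
  std::deque<Vector> dX; // differences of iterates   x_{k+1} - x_k
  std::deque<Vector> dF; // differences of residuals  f_{k+1} - f_k, with f = phi(x) - x
  Vector f = phi(x) - x;
  for(unsigned k = 0; k < maxIter && f.norm() > tol; ++k)
    {
      Vector xNew;
      if(dX.empty())
        {
          xNew = x + f; // no history yet: plain Picard step
        }
      else
        {
          // Collect the stored differences column by column and solve the
          // small least-squares problem  min_gamma || F*gamma - f ||.
          Matrix X(x.size(), dX.size());
          Matrix F(x.size(), dF.size());
          for(std::size_t j = 0; j < dX.size(); ++j)
            {
              X.col(j) = dX[j];
              F.col(j) = dF[j];
            }
          Vector gamma = F.colPivHouseholderQr().solve(f);
          // Anderson update: mix previous iterates and residuals.
          xNew = x + f - (X + F) * gamma;
        }
      Vector fNew = phi(xNew) - xNew;
      dX.push_back(xNew - x);
      dF.push_back(fNew - f);
      if(dX.size() > m) // keep only the last m differences
        {
          dX.pop_front();
          dF.pop_front();
        }
      x = xNew;
      f = fNew;
    }
  return x;
}
```

For instance, with `auto phi = [](const Vector &x) -> Vector { return 0.5 * x + Vector::Ones(x.size()); };` the call `andersonFixedPoint(phi, Vector::Zero(3))` converges to the fixed point with all components equal to 2.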
# What do I learn from this example? #
@@ -38,5 +38,5 @@ make VERBOSE=yes
- The use of some standard algorithms
- The use of traits to centralize the declaration of types used throughout the code and to enable the selection among different possible choices.
- The use of policies (the accelerators) to implement part of an algorithm with a set of classes that can be developed separately. It is an example of the Bridge design pattern (here implemented via templates).
- The use of user-defined traits to check whether an enumerator has a certain value. This is useful to check that we are using Eigen vectors as the argument of the iterator function, since the given implementation of the Anderson acceleration works only in that case (an illustrative sketch of this kind of design follows the list).
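
To make the last few points more concrete, here is a small, purely illustrative sketch of an accelerator passed as a template policy together with a compile-time check that restricts a policy to Eigen vectors. All class and trait names below are invented for this note and are not the ones used in the example; in particular, the example performs the check through an enumerator-based trait, while the sketch uses a simple type trait.

```cpp
// Illustrative sketch: an accelerator as a template policy (Bridge-like
// design) and a user-defined trait used to accept only Eigen vectors.
// All names here are invented for illustration.
#include <Eigen/Dense>
#include <type_traits>

// Trait that is true only for Eigen dense column vectors (Cols == 1).
template <class V>
struct is_eigen_vector : std::false_type
{};

template <class Scalar, int Rows, int Options, int MaxRows>
struct is_eigen_vector<Eigen::Matrix<Scalar, Rows, 1, Options, MaxRows, 1>>
  : std::true_type
{};

// The simplest policy: returns the new iterate unchanged.
template <class Vector>
struct NoAccelerator
{
  Vector operator()(const Vector &x) const { return x; }
};

// A policy that requires Eigen vectors can state it at compile time.
template <class Vector>
struct EigenOnlyAccelerator
{
  static_assert(is_eigen_vector<Vector>::value,
                "this accelerator works only with Eigen vectors");
  Vector operator()(const Vector &x) const { return x; /* acceleration omitted */ }
};

// The solver takes the accelerator as a template template parameter, so the
// iteration logic and the acceleration strategy can be developed separately.
template <class Vector, template <class> class Accelerator = NoAccelerator>
class FixedPointSolver
{
public:
  template <class Phi>
  Vector
  solve(Phi &&phi, Vector x, double tol = 1e-8, unsigned maxIter = 100) const
  {
    Accelerator<Vector> accelerate;
    for(unsigned k = 0; k < maxIter; ++k)
      {
        Vector xNew = accelerate(phi(x));
        if((xNew - x).norm() < tol)
          return xNew;
        x = xNew;
      }
    return x;
  }
};
```

With these definitions, `FixedPointSolver<Eigen::VectorXd, EigenOnlyAccelerator>` compiles, while instantiating `EigenOnlyAccelerator<std::vector<double>>` would trigger the `static_assert`.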
