| |
| namespace Eigen { |
| |
| /** \page TutorialAdvancedLinearAlgebra Tutorial 3/3 - Advanced linear algebra |
| \ingroup Tutorial |
| |
| <div class="eimainmenu">\ref index "Overview" |
| | \ref TutorialCore "Core features" |
| | \ref TutorialGeometry "Geometry" |
| | \b Advanced \b linear \b algebra |
| </div> |
| |
| \b Table \b of \b contents |
| - \ref TutorialAdvSolvers |
| - \ref TutorialAdvLU |
| - \ref TutorialAdvCholesky |
| - \ref TutorialAdvQR |
| - \ref TutorialAdvEigenProblems |
| |
| \section TutorialAdvSolvers Solving linear problems |
| |
| This part of the tutorial focuses on solving linear problems of the form \f$ A \mathbf{x} = \mathbf{b} \f$, |
| where both \f$ A \f$ and \f$ \mathbf{b} \f$ are known, and \f$ \mathbf{x} \f$ is the unknown. Moreover, \f$ A \f$ |
| is assumed to be a square matrix. Of course, the methods described here can be used whenever an expression |
| involves the product of an inverse matrix with a vector or another matrix: \f$ A^{-1} B \f$. |
| Eigen offers various algorithms for this problem, and the right choice mainly depends on the nature of |
| the matrix \f$ A \f$, such as its shape, size and numerical properties. |
| |
| \subsection TutorialAdvSolvers_Triangular Triangular solver |
| If the matrix \f$ A \f$ is triangular (upper or lower) and invertible (all its diagonal coefficients |
| are non-zero), then the problem can be solved directly using MatrixBase::solveTriangular(), or better, |
| MatrixBase::solveTriangularInPlace(). Here is an example: |
| <table class="tutorial_code"><tr><td> |
| \include MatrixBase_marked.cpp |
| </td> |
| <td> |
| output: |
| \include MatrixBase_marked.out |
| </td></tr></table> |
| |
| See MatrixBase::solveTriangular() for more details. |
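| |
| For reference, here is a minimal inline sketch of calling the in-place triangular solver; the |
| marked<UpperTriangular>() call is an assumption about how the triangular part of \f$ A \f$ is selected, and is |
| not confirmed by this page: |
| \code |
| #include <Eigen/Core> |
| Matrix3f A = Matrix3f::Random(); |
| Vector3f b = Vector3f::Random(); |
| // interpret only the upper triangular part of A and solve A x = b in place; |
| // on return, b holds the solution x |
| A.marked<UpperTriangular>().solveTriangularInPlace(b); |
| \endcode |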
| |
| |
| \subsection TutorialAdvSolvers_Inverse Direct inversion (for small matrices) |
| If the matrix \f$ A \f$ is small (of size \f$ \leq 4 \f$) and invertible, then a good approach is to directly compute |
| the inverse of the matrix \f$ A \f$, and then obtain the solution \f$ x \f$ by \f$ \mathbf{x} = A^{-1} b \f$. With Eigen, |
| this can be implemented like this: |
| |
| \code |
| #include <Eigen/LU> |
| Matrix4f A = Matrix4f::Random(); |
| Vector4f b = Vector4f::Random(); |
| Vector4f x = A.inverse() * b; |
| \endcode |
| |
| Note that the function inverse() is defined in the LU module. |
| See MatrixBase::inverse() for more details. |
| |
| |
| \subsection TutorialAdvSolvers_Symmetric Cholesky (for positive definite matrices) |
| If the matrix \f$ A \f$ is \b positive \b definite, then |
| the best method is to use a Cholesky decomposition. |
| Such positive definite matrices often arise when solving overdetermined problems in a least-squares sense (see below). |
| Eigen offers two different Cholesky decompositions: an \f$ LL^T \f$ decomposition where L is a lower triangular matrix, |
| and an \f$ LDL^T \f$ decomposition where L is lower triangular with unit diagonal and D is a diagonal matrix. |
| The latter avoids square roots and is therefore slightly more stable than the former. |
| \code |
| #include <Eigen/Cholesky> |
| MatrixXf D = MatrixXf::Random(8,4); |
| MatrixXf A = D.transpose() * D; |
| VectorXf b = D.transpose() * VectorXf::Random(8); |
| VectorXf x; |
| A.llt().solve(b,&x); // using a LLT factorization |
| A.ldlt().solve(b,&x); // using a LDLT factorization |
| \endcode |
| When the value of the right-hand side \f$ b \f$ is not needed anymore, it is faster to use |
| the \em in \em place API, e.g.: |
| \code |
| A.llt().solveInPlace(b); |
| \endcode |
| In that case the value of \f$ b \f$ is replaced by the result \f$ x \f$. |
| |
| If the linear problem has to be solved for several right-hand sides \f$ b_i \f$ but the same matrix \f$ A \f$, |
| then it is highly recommended to perform the decomposition of \f$ A \f$ only once, e.g.: |
| \code |
| // ... |
| LLT<MatrixXf> lltOfA(A); |
| lltOfA.solveInPlace(b0); |
| lltOfA.solveInPlace(b1); |
| // ... |
| \endcode |
| |
| \sa Cholesky_Module, LLT::solve(), LLT::solveInPlace(), LDLT::solve(), LDLT::solveInPlace(), class LLT, class LDLT. |
| |
| |
| \subsection TutorialAdvSolvers_LU LU decomposition (for most cases) |
| If the matrix \f$ A \f$ does not fit in any of the previous categories, or if you are unsure about the numerical |
| stability of your problem, then you can use the LU solver based on a decomposition of the same name: see the section \ref TutorialAdvLU below. |
| Actually, Eigen's LU module does not implement a standard LU decomposition, but rather a rank-revealing LU decomposition |
| with full pivoting, which has much better numerical stability. |
| The API of the LU solver is the same as the Cholesky one, except that there is no \em in \em place variant: |
| \code |
| #include <Eigen/LU> |
| MatrixXf A = MatrixXf::Random(20,20); |
| VectorXf b = VectorXf::Random(20); |
| VectorXf x; |
| A.lu().solve(b, &x); |
| \endcode |
| |
| Again, the LU decomposition can be stored to be reused, or to perform other operations such as computing the rank or the kernel of \f$ A \f$: |
| \code |
| // ... |
| LU<MatrixXf> luOfA(A); |
| luOfA.solve(b, &x); |
| // ... |
| \endcode |
| |
| See the section \ref TutorialAdvLU below. |
| |
| \sa class LU, LU::solve(), LU_Module |
| |
| |
| \subsection TutorialAdvSolvers_SVD SVD solver (for singular matrices and special cases) |
| Finally, Eigen also offers a solver based on a singular value decomposition (SVD). Again, the API is the |
| same as with Cholesky or LU: |
| \code |
| #include <Eigen/SVD> |
| MatrixXf A = MatrixXf::Random(20,20); |
| VectorXf b = VectorXf::Random(20); |
| VectorXf x; |
| A.svd().solve(b, &x); |
| SVD<MatrixXf> svdOfA(A); |
| svdOfA.solve(b, &x); |
| \endcode |
| |
| \sa class SVD, SVD::solve(), SVD_Module |
| |
| |
| |
| |
| <a href="#" class="top">top</a>\section TutorialAdvLU LU |
| |
| Eigen provides a rank-revealing LU decomposition with full pivoting, which has very good numerical stability. |
| |
| You can obtain the LU decomposition of a matrix by calling \link MatrixBase::lu() lu() \endlink, which is the easiest way if you're going to use the LU decomposition only once, as in |
| \code |
| #include <Eigen/LU> |
| MatrixXf A = MatrixXf::Random(20,20); |
| VectorXf b = VectorXf::Random(20); |
| VectorXf x; |
| A.lu().solve(b, &x); |
| \endcode |
| |
| Alternatively, you can construct a named LU decomposition, which allows you to reuse it for more than one operation: |
| \code |
| #include <Eigen/LU> |
| MatrixXf A = MatrixXf::Random(20,20); |
| LU<MatrixXf> lu(A); |
| cout << "The rank of A is " << lu.rank() << endl; |
| if(lu.isInvertible()) { |
| cout << "A is invertible, its inverse is:" << endl << lu.inverse() << endl; |
| } |
| else { |
| cout << "Here's a matrix whose columns form a basis of the kernel a.k.a. nullspace of A:" |
| << endl << lu.kernel() << endl; |
| } |
| \endcode |
| |
| \sa LU_Module, LU::solve(), class LU |
| |
| <a href="#" class="top">top</a>\section TutorialAdvCholesky Cholesky |
| todo |
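| |
| In the meantime, here is a minimal sketch of typical usage, reusing the solve() API shown above; the matrixL() |
| accessor for retrieving the factor \f$ L \f$ is an assumption and is not confirmed by this page: |
| \code |
| #include <Eigen/Cholesky> |
| MatrixXf D = MatrixXf::Random(8,4); |
| MatrixXf A = D.transpose() * D;     // A is positive definite |
| LLT<MatrixXf> lltOfA(A);            // compute the factorization A = L L^T once |
| MatrixXf L = lltOfA.matrixL();      // assumed accessor for the triangular factor L |
| VectorXf b = VectorXf::Random(4); |
| VectorXf x; |
| lltOfA.solve(b, &x);                // solve A x = b using the stored factorization |
| \endcode |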
| |
| \sa Cholesky_Module, LLT::solve(), LLT::solveInPlace(), LDLT::solve(), LDLT::solveInPlace(), class LLT, class LDLT |
| |
| <a href="#" class="top">top</a>\section TutorialAdvQR QR |
| todo |
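| |
| In the meantime, here is a minimal sketch of typical usage; the matrixQ() and matrixR() accessors are assumptions |
| about class QR and are not confirmed by this page: |
| \code |
| #include <Eigen/QR> |
| MatrixXf A = MatrixXf::Random(5,5); |
| QR<MatrixXf> qrOfA(A);              // compute the factorization A = Q R once |
| MatrixXf Q = qrOfA.matrixQ();       // assumed accessor for the orthogonal factor Q |
| MatrixXf R = qrOfA.matrixR();       // assumed accessor for the triangular factor R |
| \endcode |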
| |
| \sa QR_Module, class QR |
| |
| <a href="#" class="top">top</a>\section TutorialAdvEigenProblems Eigen value problems |
| todo |
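| |
| In the meantime, here is a minimal sketch for a selfadjoint (symmetric real) matrix; that SelfAdjointEigenSolver |
| is provided by the QR module is an assumption and is not confirmed by this page: |
| \code |
| #include <Eigen/QR> |
| MatrixXf D = MatrixXf::Random(4,4); |
| MatrixXf A = D + D.transpose();                  // A is symmetric, hence selfadjoint |
| SelfAdjointEigenSolver<MatrixXf> eigensolver(A); // compute eigenvalues and eigenvectors of A |
| cout << "The eigenvalues of A are:" << endl << eigensolver.eigenvalues() << endl; |
| cout << "The eigenvectors of A are:" << endl << eigensolver.eigenvectors() << endl; |
| \endcode |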
| |
| \sa class SelfAdjointEigenSolver, class EigenSolver |
| |
| */ |
| |
| } |