path: root/unsupported/Eigen/NonLinearOptimization
Diffstat (limited to 'unsupported/Eigen/NonLinearOptimization')
-rw-r--r--  unsupported/Eigen/NonLinearOptimization  | 44
1 file changed, 25 insertions(+), 19 deletions(-)
diff --git a/unsupported/Eigen/NonLinearOptimization b/unsupported/Eigen/NonLinearOptimization
index 600ab4c12..961f192b5 100644
--- a/unsupported/Eigen/NonLinearOptimization
+++ b/unsupported/Eigen/NonLinearOptimization
@@ -12,10 +12,10 @@
#include <vector>
-#include <Eigen/Core>
-#include <Eigen/Jacobi>
-#include <Eigen/QR>
-#include <unsupported/Eigen/NumericalDiff>
+#include "../../Eigen/Core"
+#include "../../Eigen/Jacobi"
+#include "../../Eigen/QR"
+#include "NumericalDiff"
/**
* \defgroup NonLinearOptimization_Module Non linear optimization module
@@ -30,12 +30,12 @@
 * actually linear. But if this is so, you should probably use other
 * methods better suited to this special case.
*
- * One algorithm allows to find an extremum of such a system (Levenberg
- * Marquardt algorithm) and the second one is used to find
+ * One algorithm finds a least-squares solution of such a system
+ * (the Levenberg-Marquardt algorithm) and the second one is used to find
* a zero for the system (Powell hybrid "dogleg" method).
*
* This code is a port of minpack (http://en.wikipedia.org/wiki/MINPACK).
- * Minpack is a very famous, old, robust and well-reknown package, written in
+ * Minpack is a very famous, old, robust and well-renowned package, written in
* fortran. Those implementations have been carefully tuned, tested, and used
* for several decades.
*
@@ -58,35 +58,41 @@
 * There are two kinds of tests: those that come from examples bundled with cminpack.
 * They guarantee we get the same results as the original algorithms (value for 'x',
* for the number of evaluations of the function, and for the number of evaluations
- * of the jacobian if ever).
+ * of the Jacobian if ever).
*
* Other tests were added by myself at the very beginning of the
- * process and check the results for levenberg-marquardt using the reference data
+ * process and check the results for Levenberg-Marquardt using the reference data
 * on http://www.itl.nist.gov/div898/strd/nls/nls_main.shtml. Since then I've
- * carefully checked that the same results were obtained when modifiying the
+ * carefully checked that the same results were obtained when modifying the
* code. Please note that we do not always get the exact same decimals as they do,
 * but this is OK: they use 128-bit floats, and we do the tests using the C type 'double',
* which is 64 bits on most platforms (x86 and amd64, at least).
- * I've performed those tests on several other implementations of levenberg-marquardt, and
+ * I've performed those tests on several other implementations of Levenberg-Marquardt, and
* (c)minpack performs VERY well compared to those, both in accuracy and speed.
*
* The documentation for running the tests is on the wiki
* http://eigen.tuxfamily.org/index.php?title=Tests
*
- * \section API API : overview of methods
+ * \section API API: overview of methods
*
- * Both algorithms can use either the jacobian (provided by the user) or compute
- * an approximation by themselves (actually using Eigen \ref NumericalDiff_Module).
- * The part of API referring to the latter use 'NumericalDiff' in the method names
- * (exemple: LevenbergMarquardt.minimizeNumericalDiff() )
+ * Both algorithms need a functor computing the Jacobian. It can be computed by
+ * hand, using auto-differentiation (see \ref AutoDiff_Module), or using numerical
+ * differences (see \ref NumericalDiff_Module). For instance:
+ *\code
+ * MyFunc func;
+ * NumericalDiff<MyFunc> func_with_num_diff(func);
+ * LevenbergMarquardt<NumericalDiff<MyFunc> > lm(func_with_num_diff);
+ * \endcode
+ * For HybridNonLinearSolver, the method solveNumericalDiff() does the above wrapping for
+ * you.
*
* The methods LevenbergMarquardt.lmder1()/lmdif1()/lmstr1() and
* HybridNonLinearSolver.hybrj1()/hybrd1() are specific methods from the original
* minpack package that you probably should NOT use until you are porting a code that
- * was previously using minpack. They just define a 'simple' API with default values
+ * was previously using minpack. They just define a 'simple' API with default values
* for some parameters.
*
- * All algorithms are provided using Two APIs :
+ * All algorithms are provided using two APIs:
 * - one where the user inits the algorithm, and calls '*OneStep()' as often as needed:
 *   this way the caller has control over the steps
 * - one where the user just calls a method (minimize() or solve()) which will
@@ -94,7 +100,7 @@
* convenience.
*
* As an example, the method LevenbergMarquardt::minimize() is
- * implemented as follow :
+ * implemented as follows:
* \code
* Status LevenbergMarquardt<FunctorType,Scalar>::minimize(FVectorType &x, const int mode)
* {