The '''Gauss–Newton algorithm''' is a method used to solve [[non-linear least squares]] problems. It can be seen as a modification of [[newton's method in optimization|Newton's method]] for finding a [[maxima and minima|minimum]] of a [[function (mathematics)|function]]. Unlike Newton's method, the Gauss–Newton algorithm can ''only'' be used to minimize a sum of squared function values, but it has the advantage that second derivatives, which can be challenging to compute, are not required.  


Non-linear least squares problems arise, for instance, in [[non-linear regression]], where parameters in a model are sought such that the model is in good agreement with the available observations.


The method is named after the mathematicians [[Carl Friedrich Gauss]] and [[Isaac Newton]].
== Description ==
Given ''m'' functions '''r''' = (''r''<sub>1</sub>, …, ''r''<sub>''m''</sub>) of ''n'' variables '''''&beta;'''''&nbsp;=&nbsp;(''&beta;''<sub>1</sub>, …, ''&beta;''<sub>''n''</sub>), with ''m''&nbsp;≥&nbsp;''n'', the Gauss–Newton algorithm [[iterative method|iteratively]] finds the minimum of the sum of squares<ref name=ab>Björck (1996)</ref>
:<math> S(\boldsymbol \beta)= \sum_{i=1}^m r_i^2(\boldsymbol \beta).</math>
Starting with an initial guess <math>\boldsymbol \beta^{(0)}</math> for the minimum, the method proceeds by the iterations
:<math> \boldsymbol \beta^{(s+1)} = \boldsymbol \beta^{(s)} - \left(\mathbf{J_r}^\top \mathbf{J_r} \right)^{-1} \mathbf{ J_r} ^\top \mathbf{r}(\boldsymbol \beta^{(s)}) </math>
where
:<math> \left(\mathbf{J_r}\right)_{ij} = \frac{\partial r_i (\boldsymbol \beta^{(s)})}{\partial \beta_j}</math>


are the entries of the [[Jacobian matrix]] <math>\mathbf{J_r}</math> of '''r''' at <math>\boldsymbol \beta^{(s)}</math>, and the symbol <math>^\top</math> denotes the [[matrix transpose]].
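
The basic iteration can be written in a few lines of code. The following is a minimal illustrative sketch (not a reference implementation), assuming NumPy and user-supplied callables <code>r</code> and <code>jac</code> that return the residual vector and the Jacobian <math>\mathbf{J_r}</math>; the name <code>gauss_newton</code> is arbitrary.

<syntaxhighlight lang="python">
import numpy as np

def gauss_newton(r, jac, beta0, n_iter=20):
    """Iterate beta <- beta - (J^T J)^{-1} J^T r(beta).

    r     : callable returning the residual vector r(beta), shape (m,)
    jac   : callable returning the Jacobian J_r at beta, shape (m, n)
    beta0 : initial guess, shape (n,)
    """
    beta = np.asarray(beta0, dtype=float)
    for _ in range(n_iter):
        J = jac(beta)
        # Solve the normal equations (J^T J) delta = -J^T r for the increment.
        delta = np.linalg.solve(J.T @ J, -J.T @ r(beta))
        beta = beta + delta
    return beta
</syntaxhighlight>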


If ''m''&nbsp;=&nbsp;''n'', the iteration simplifies to


:<math> \boldsymbol \beta^{(s+1)} = \boldsymbol \beta^{(s)} - \left( \mathbf{J_r} \right)^{-1} \mathbf{r}(\boldsymbol \beta^{(s)}) </math>
which is a direct generalization of [[Newton's method]] in one dimension.


In data fitting, where the goal is to find the parameters '''''&beta;''''' such that  a given model function ''y''&nbsp;=&nbsp;''f''(''x'', '''''&beta;''''') best fits some data points (''x''<sub>''i''</sub>, ''y''<sub>''i''</sub>), the functions ''r''<sub>''i''</sub> are the [[residual (statistics)|residuals]]


: <math>r_i(\boldsymbol \beta)= y_i - f(x_i, \boldsymbol \beta).</math>


Then, the Gauss–Newton method can be expressed in terms of the Jacobian <math>\mathbf{J_f}</math> of the function ''f'' as


:<math> \boldsymbol \beta^{(s+1)} = \boldsymbol \beta^{(s)} + \left(\mathbf{J_f}^\top \mathbf{J_f} \right)^{-1} \mathbf{ J_f} ^\top \mathbf{r}(\boldsymbol \beta^{(s)}). </math>
Note the sign change relative to the update above: since <math>r_i = y_i - f(x_i, \boldsymbol \beta)</math>, the Jacobians are related by <math>\mathbf{J_r} = -\mathbf{J_f}</math>.
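
For concreteness, the residual vector and its Jacobian for a data-fitting problem can be assembled from the model function and its partial derivatives. The helper below is an illustrative sketch (the name <code>make_residuals</code> and its arguments are not standard); its outputs can be passed to a generic Gauss–Newton routine such as the sketch above.

<syntaxhighlight lang="python">
import numpy as np

def make_residuals(f, grad_f, x, y):
    """Build r(beta) and J_r(beta) for residuals r_i = y_i - f(x_i, beta).

    f      : model function f(x, beta), vectorized over the data points x
    grad_f : partial derivatives of f with respect to beta, shape (m, n)
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)

    def r(beta):
        return y - f(x, beta)

    def jac(beta):
        # J_r = -J_f, because each residual is y_i minus the model value.
        return -grad_f(x, beta)

    return r, jac
</syntaxhighlight>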
==Notes==
 
The assumption ''m''&nbsp;≥&nbsp;''n'' in the algorithm statement is necessary, as otherwise the matrix '''J<sub>r</sub>'''<sup>T</sup>'''J<sub>r</sub>''' is not invertible and the normal equations cannot be solved (at least uniquely).
 
The Gauss–Newton algorithm can be derived by [[linear approximation|linearly approximating]] the vector of functions ''r''<sub>''i''</sub>. Using [[Taylor's theorem]], we can write at every iteration:
 
: <math>\mathbf{r}(\boldsymbol \beta)\approx \mathbf{r}(\boldsymbol \beta^s)+\mathbf{J_r}(\boldsymbol \beta^s)\Delta</math>
 
with <math>\Delta=\boldsymbol \beta-\boldsymbol \beta^s.</math> The task of finding &Delta; minimizing the sum of squares of the right-hand side, i.e.,
: <math>\min_{\Delta}\left\|\mathbf{r}(\boldsymbol \beta^s)+\mathbf{J_r}(\boldsymbol \beta^s)\Delta\right\|_2^2</math>,
is a [[linear least squares (mathematics)|linear least squares]] problem, which can be solved explicitly, yielding the normal equations in the algorithm.
 
The normal equations are ''n'' simultaneous linear equations in the unknown increments &Delta;. They may be solved in one step, using [[Cholesky decomposition]], or, better, the [[QR factorization]] of '''J'''<sub>'''r'''</sub>. For large systems, an [[iterative method]], such as the [[conjugate gradient]] method, may be more efficient. If there is a linear dependence between columns of '''J'''<sub>'''r'''</sub>, the iterations will fail, as '''J<sub>r</sub>'''<sup>T</sup>'''J<sub>r</sub>''' becomes singular.
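
Rather than forming <math>\mathbf{J_r}^\top\mathbf{J_r}</math> explicitly, the increment can also be obtained by solving the linearized least-squares problem directly, which is numerically preferable when <math>\mathbf{J_r}</math> is poorly conditioned. Below is a sketch using NumPy's least-squares solver (SVD-based, used here as a stand-in for the QR approach mentioned above; the name <code>gn_step</code> is arbitrary).

<syntaxhighlight lang="python">
import numpy as np

def gn_step(J, r):
    """Gauss-Newton increment: the minimizer over delta of || r + J delta ||_2."""
    # np.linalg.lstsq solves the linear least-squares problem through an
    # orthogonal (SVD-based) factorization instead of the normal equations.
    delta, *_ = np.linalg.lstsq(J, -r, rcond=None)
    return delta
</syntaxhighlight>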
 
==Example==
[[File:Gauss Newton illustration.png|thumb|right|280px|Calculated curve obtained with <math>\hat\beta_1=0.362</math> and <math>\hat\beta_2=0.556</math> (in blue) versus the observed data (in red).]]
In this example, the Gauss–Newton algorithm will be used to fit a model to some data by minimizing the sum of squares of errors between the data and model's predictions.
 
In a biology experiment studying the relation between substrate concentration [''S''] and reaction rate in an enzyme-mediated reaction, the data in the following table were obtained.  
:{|class="wikitable" style="text-align: center;"
|''i'' || 1 || 2 || 3 || 4 || 5 || 6 || 7
|-
| [S] || 0.038 || 0.194 || 0.425 || 0.626 || 1.253 ||  2.500 || 3.740
|-
| rate || 0.050 || 0.127 || 0.094  ||  0.2122 ||  0.2729 ||  0.2665 || 0.3317
|}
 
It is desired to find a curve (model function) of the form
 
:<math>\text{rate}=\frac{V_\text{max}[S]}{K_M+[S]}</math>


that best fits the data in the least squares sense, with the parameters <math>V_\text{max}</math> and <math>K_M</math> to be determined.


Denote by <math>x_i</math> and <math>y_i</math> the values of <math>[S]</math> and the rate from the table, <math>i=1, \dots, 7.</math> Let <math>\beta_1=V_\text{max}</math> and <math>\beta_2=K_M.</math> We will find <math>\beta_1</math> and <math>\beta_2</math> such that the sum of squares of the residuals
: <math>r_i = y_i - \frac{\beta_1x_i}{\beta_2+x_i}</math>  &nbsp; (<math>i=1,\dots, 7</math>)
is minimized.


The Jacobian <math>\mathbf{J_r}</math> of the vector of residuals <math>r_i</math> with respect to the unknowns <math>\beta_j</math> is a <math>7\times 2</math> matrix with the <math>i</math>-th row having the entries


:<math>\frac{\partial r_i}{\partial \beta_1}= -\frac{x_i}{\beta_2+x_i},\  \frac{\partial r_i}{\partial \beta_2}= \frac{\beta_1x_i}{\left(\beta_2+x_i\right)^2}.</math>


Starting with the initial estimates <math>\beta_1=0.9</math> and <math>\beta_2=0.2</math>, after five iterations of the Gauss–Newton algorithm the optimal values <math>\hat\beta_1=0.362</math> and <math>\hat\beta_2=0.556</math> are obtained. The sum of squares of residuals decreased from the initial value of 1.445 to 0.00784 after the fifth iteration. The plot in the figure on the right shows the curve determined by the model for the optimal parameters versus the observed data.
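
A short script along the following lines (an illustrative sketch, not the code used to produce the figure) should reproduce approximately the fitted values quoted above.

<syntaxhighlight lang="python">
import numpy as np

# Substrate concentration [S] and observed rate, from the table above.
x = np.array([0.038, 0.194, 0.425, 0.626, 1.253, 2.500, 3.740])
y = np.array([0.050, 0.127, 0.094, 0.2122, 0.2729, 0.2665, 0.3317])

beta = np.array([0.9, 0.2])  # initial estimates for (V_max, K_M)
for _ in range(5):
    r = y - beta[0] * x / (beta[1] + x)                    # residuals
    J = np.column_stack([-x / (beta[1] + x),               # dr_i/dbeta_1
                         beta[0] * x / (beta[1] + x)**2])  # dr_i/dbeta_2
    beta = beta + np.linalg.solve(J.T @ J, -J.T @ r)       # Gauss-Newton step

print(beta)  # expected to be close to (0.362, 0.556)
</syntaxhighlight>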


==Convergence properties==


It can be shown<ref>Björck (1996)  p260</ref> that the increment &Delta; is a [[descent direction]] for ''S'', and, if the algorithm converges, then the limit is a [[stationary point]] of ''S''. However, convergence is not guaranteed, not even [[local convergence]] as in [[Newton's method in optimization|Newton's method]].


The rate of convergence of the Gauss–Newton algorithm can approach [[rate of convergence|quadratic]].<ref>Björck (1996)  p341, 342</ref> The algorithm may converge slowly or not at all  if the initial guess is far from the minimum or  the matrix <math>\mathbf{J_r^T  J_r}</math> is [[ill-conditioned]].  For example, consider the problem with <math>m=2</math> equations and <math>n=1</math> variable, given by
:<math> \begin{align}
r_1(\beta) &= \beta + 1 \\
r_2(\beta) &= \lambda \beta^2 + \beta - 1.
\end{align} </math>
The optimum is at <math>\beta = 0</math>. If <math>\lambda = 0</math> then the problem is in fact linear and the method finds the optimum in one iteration. If |λ| < 1 then the method converges linearly and the error decreases asymptotically with a factor |λ| at every iteration. However, if |λ| > 1, then the method does not even converge locally.<ref>Fletcher (1987) p.113</ref>
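
The behaviour described here can be checked with a small numerical experiment (an illustrative sketch; the helper name is arbitrary), which iterates the Gauss–Newton update for this one-variable problem.

<syntaxhighlight lang="python">
def gn_lambda(lam, beta=0.01, n_iter=25):
    """Gauss-Newton iterations for r1 = beta + 1, r2 = lam*beta**2 + beta - 1."""
    for _ in range(n_iter):
        r1, r2 = beta + 1.0, lam * beta**2 + beta - 1.0
        j1, j2 = 1.0, 2.0 * lam * beta + 1.0   # dr1/dbeta, dr2/dbeta
        # One-variable Gauss-Newton step: beta - (J^T r) / (J^T J)
        beta -= (j1 * r1 + j2 * r2) / (j1 * j1 + j2 * j2)
    return beta

# Starting near beta = 0: lam = 0 reaches it in one step, lam = 0.5
# approaches it linearly (the error shrinks by roughly a factor 0.5 per
# step), while lam = 2 moves away from it.
for lam in (0.0, 0.5, 2.0):
    print(lam, gn_lambda(lam))
</syntaxhighlight>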


== Derivation from Newton's method ==
In what follows, the Gauss–Newton algorithm will be derived from [[Newton's method in optimization|Newton's method]] for function optimization via an approximation. As a consequence, the rate of convergence of the Gauss–Newton algorithm is at most quadratic. 
 
The recurrence relation for Newton's method for minimizing a function ''S'' of parameters, '''''&beta;''''', is
:<math> \boldsymbol\beta^{(s+1)} = \boldsymbol\beta^{(s)} - \mathbf H^{-1} \mathbf g \, </math>
where '''g''' denotes the [[gradient|gradient vector]] of ''S'' and '''H''' denotes the [[Hessian matrix]] of ''S''.
Since <math> S = \sum_{i=1}^m r_i^2</math>, the gradient is given by
:<math>g_j=2\sum_{i=1}^m r_i\frac{\partial r_i}{\partial \beta_j}.</math>
Elements of the Hessian are calculated by differentiating the gradient elements, <math>g_j</math>, with respect to <math>\beta_k</math>
:<math>H_{jk}=2\sum_{i=1}^m \left(\frac{\partial r_i}{\partial \beta_j}\frac{\partial r_i}{\partial \beta_k}+r_i\frac{\partial^2 r_i}{\partial \beta_j \partial \beta_k} \right).</math>
 
The Gauss–Newton method is obtained by ignoring the second-order derivative terms (the second term in this expression). That is, the Hessian is approximated by
 
:<math>H_{jk}\approx 2\sum_{i=1}^m J_{ij}J_{ik}</math>
 
where <math>J_{ij}=\frac{\partial r_i}{\partial \beta_j}</math> are entries of the Jacobian '''J<sub>r</sub>'''. The gradient and the approximate Hessian can be written in matrix notation as
 
:<math>\mathbf g=2\mathbf{J_r}^\top \mathbf{r}, \quad \mathbf{H} \approx 2 \mathbf{J_r}^\top \mathbf{J_r}.\,</math>
 
These expressions are substituted into the recurrence relation above to obtain the operational equations
 
:<math> \boldsymbol{\beta}^{(s+1)} = \boldsymbol\beta^{(s)}+\Delta;\quad \Delta = -\left( \mathbf{J_r}^\top \mathbf{J_r} \right)^{-1} \mathbf{J_r}^\top \mathbf{r}. </math>
 
Convergence of the Gauss–Newton method is not guaranteed in all instances. The approximation
:<math>\left|r_i\frac{\partial^2 r_i}{\partial \beta_j \partial \beta_k}\right| \ll \left|\frac{\partial r_i}{\partial \beta_j}\frac{\partial r_i}{\partial \beta_k}\right|</math>
that needs to hold to be able to ignore the second-order derivative terms may be valid in two cases, for which convergence is to be expected.<ref>Nocedal (1997) {{Page needed|date=December 2010}}</ref>
#The function values <math>r_i</math> are small in magnitude, at least around the minimum.
#The functions are only "mildly" non-linear, so that <math>\frac{\partial^2 r_i}{\partial \beta_j \partial \beta_k}</math> is relatively small in magnitude.
 
== Improved versions ==
 
With the Gauss–Newton method the sum of squares ''S'' may not decrease at every iteration. However, since &Delta; is a descent direction, unless <math>S(\boldsymbol \beta^s)</math> is a stationary point, it holds that <math>S(\boldsymbol \beta^s+\alpha\Delta) < S(\boldsymbol \beta^s)</math> for all sufficiently small <math>\alpha>0</math>. Thus, if divergence occurs, one solution is to employ a fraction <math>\alpha</math> of the increment vector &Delta; in the updating formula
:<math> \boldsymbol \beta^{s+1} = \boldsymbol \beta^s+\alpha\  \Delta</math>.
In other words, the increment vector is too long, but it points "downhill", so going just a part of the way will decrease the objective function ''S''. An optimal value for <math>\alpha</math> can be found by using a [[line search]] algorithm, that is, the magnitude of <math>\alpha</math> is determined by finding the value that minimizes ''S'', usually using a [[line search|direct search method]] in the interval <math>0<\alpha<1</math>.
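
A common way to choose the fraction is simple backtracking: start with <math>\alpha=1</math> and halve it until ''S'' decreases. The following is an illustrative sketch (not a full line-search implementation; the function name and the tolerance are arbitrary).

<syntaxhighlight lang="python">
import numpy as np

def damped_gauss_newton_step(r, jac, beta):
    """One Gauss-Newton step with a backtracking choice of the fraction alpha."""
    J, res = jac(beta), r(beta)
    delta = np.linalg.solve(J.T @ J, -J.T @ res)  # full Gauss-Newton increment
    S0 = float(res @ res)                         # current sum of squares
    alpha = 1.0
    while alpha > 1e-8:
        trial = beta + alpha * delta
        if float(r(trial) @ r(trial)) < S0:       # accept once S decreases
            return trial
        alpha *= 0.5                              # otherwise shorten the step
    return beta                                   # no decrease found
</syntaxhighlight>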
 
In cases where the direction of the shift vector is such that the optimal fraction, <math> \alpha </math>, is close to zero, an alternative method for handling divergence is the use of the [[Levenberg–Marquardt algorithm]], also known as the "[[trust region]] method".<ref name="ab"/> The normal equations are modified in such a way that the increment vector is rotated towards the direction of [[steepest descent]],   
:<math>\left(\mathbf{J^TJ+\lambda D}\right)\Delta=-\mathbf{J}^T \mathbf{r}</math>,
where '''D''' is a positive diagonal matrix. Note that when '''D''' is the identity matrix and <math>\lambda\to+\infty</math>, then <math>\lambda\Delta\to -\mathbf{J}^T \mathbf{r}</math>, and therefore the [[Direction (geometry, geography)|direction]] of &Delta; approaches the direction of the negative gradient <math>-\mathbf{J}^T \mathbf{r}</math>, that is, the direction of steepest descent.
 
The so-called Marquardt parameter, <math>\lambda</math>, may also be optimized by a line search, but this is inefficient, as the shift vector must be recalculated every time <math>\lambda</math> is changed. A more efficient strategy is the following: when divergence occurs, increase the Marquardt parameter until there is a decrease in ''S''; then retain the value from one iteration to the next, but decrease it when possible, until a cut-off value is reached below which the Marquardt parameter can be set to zero. The minimization of ''S'' then becomes a standard Gauss–Newton minimization.
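
The strategy just described can be sketched as follows, with '''D''' taken to be the identity matrix (an illustrative sketch; the function name, the initial <math>\lambda</math>, the cut-off, and the scaling factors are arbitrary choices, and production implementations differ in detail).

<syntaxhighlight lang="python">
import numpy as np

def levenberg_marquardt(r, jac, beta, lam=1e-3, n_iter=50):
    """Gauss-Newton damped by a Marquardt parameter lam, with D = identity."""
    beta = np.asarray(beta, dtype=float)
    S = float(r(beta) @ r(beta))
    for _ in range(n_iter):
        J, res = jac(beta), r(beta)
        while lam < 1e12:
            # Modified normal equations: (J^T J + lam*I) delta = -J^T r.
            delta = np.linalg.solve(J.T @ J + lam * np.eye(beta.size), -J.T @ res)
            S_new = float(r(beta + delta) @ r(beta + delta))
            if S_new < S:                       # decrease in S: accept the step
                beta, S = beta + delta, S_new
                # relax the parameter, setting it to zero below a cut-off
                lam = 0.0 if lam < 1e-12 else lam * 0.1
                break
            # divergence: increase the Marquardt parameter and retry
            lam = 1e-3 if lam == 0.0 else lam * 10.0
        else:
            return beta                         # no acceptable step was found
    return beta
</syntaxhighlight>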
 
== Related algorithms ==
 
In a [[quasi-Newton method]], such as that due to [[Davidon-Fletcher-Powell (DFP) formula|Davidon, Fletcher and Powell]] or Broyden–Fletcher–Goldfarb–Shanno ([[BFGS method]]), an estimate of the full Hessian, <math>\frac{\partial^2 S}{\partial \beta_j \partial\beta_k}</math>, is built up numerically using only first derivatives <math>\frac{\partial r_i}{\partial\beta_j}</math>, so that after ''n'' refinement cycles the method closely approximates Newton's method in performance. Note that quasi-Newton methods can minimize general real-valued functions, whereas Gauss–Newton, Levenberg–Marquardt, etc. apply only to non-linear least-squares problems.
 
Another method for solving minimization problems using only first derivatives is [[gradient descent]]. However, this method does not take into account the second derivatives even approximately. Consequently, it is highly inefficient for many functions, especially if the parameters have strong interactions.


==Notes==
<references />
==References==
*{{cite book
| last      = Björck | first      = A.
| title      = Numerical methods for least squares problems
| publisher  = SIAM, Philadelphia  | year      = 1996  | isbn      = 0-89871-360-9 }}
* {{Cite book | last1=Fletcher | first1=Roger | title=Practical methods of optimization | publisher=[[John Wiley & Sons]] | location=New York | edition=2nd | isbn=978-0-471-91547-8 | year=1987 }}.
*{{cite book
| last      = Nocedal  | first      = Jorge  | coauthors  = Wright, Stephen
| title      = Numerical optimization
| publisher  = New York: Springer  | year      = 1999  | isbn      = 0-387-98793-2 }}


{{Least Squares and Regression Analysis}}
{{Optimization algorithms}}


{{DEFAULTSORT:Gauss-Newton algorithm}}
[[Category:Optimization algorithms and methods]]
[[Category:Least squares]]
[[Category:Statistical algorithms]]
