In [[statistics]], the '''explained sum of squares (ESS)''', also known as the '''model sum of squares''' or '''sum of squares due to regression''' ('''SSR''', not to be confused with the [[residual sum of squares]], which shares the same abbreviation), is a quantity used to describe how well a model, often a [[regression analysis|regression model]], represents the data being modelled. The explained sum of squares measures the variation in the modelled values; it is compared with the [[total sum of squares]], which measures the variation in the observed data, and with the [[residual sum of squares]], which measures the variation in the modelling errors.

==Definition==
The '''explained sum of squares (ESS)''' is the sum of the squared deviations of the predicted values from the mean value of the response variable in a standard [[regression model]], for example {{nowrap|1=''y''<sub>''i''</sub> = ''a'' + ''b''<sub>1</sub>''x''<sub>1''i''</sub> + ''b''<sub>2</sub>''x''<sub>2''i''</sub> + ... + ''ε''<sub>''i''</sub>}}, where ''y''<sub>''i''</sub> is the ''i''<sup>th</sup> observation of the [[response variable]], ''x''<sub>''ji''</sub> is the ''i''<sup>th</sup> observation of the ''j''<sup>th</sup> [[explanatory variable]], ''a'' and ''b''<sub>''j''</sub> are [[coefficient]]s, ''i'' indexes the observations from 1 to ''n'', and ''ε''<sub>''i''</sub> is the ''i''<sup>th</sup> value of the [[error term]]. In general, the greater the ESS, the better the estimated model fits the data.

If <math>\hat{a}</math> and <math>\hat{b}_j</math> are the estimated [[coefficient]]s, then

:<math>\hat{y}_i=\hat{a}+\hat{b}_1 x_{1i} + \hat{b}_2 x_{2i} + \cdots</math>

is the ''i''<sup>th</sup> predicted value of the response variable. The ESS is the sum of the squared differences between the predicted values and the mean of the response variable:

:<math>\text{ESS} = \sum_{i=1}^n \left(\hat{y}_i - \bar{y}\right)^2.</math>

In general: [[total sum of squares]] = '''explained sum of squares''' + [[residual sum of squares]].
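
This decomposition can be checked numerically. The following is a minimal sketch in Python with NumPy, using made-up illustrative data: it fits a simple linear regression and verifies that TSS = ESS + RSS up to floating-point rounding.

<syntaxhighlight lang="python">
import numpy as np

# Made-up data: y depends linearly on x, plus noise.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 50)
y = 2.0 + 1.5 * x + rng.normal(scale=1.0, size=x.size)

# Fit y = a + b*x by ordinary least squares.
b, a = np.polyfit(x, y, deg=1)  # polyfit returns the highest-degree coefficient first
y_hat = a + b * x

tss = np.sum((y - y.mean()) ** 2)      # total sum of squares
ess = np.sum((y_hat - y.mean()) ** 2)  # explained sum of squares
rss = np.sum((y - y_hat) ** 2)         # residual sum of squares

# For a regression fitted with an intercept, TSS = ESS + RSS.
assert np.isclose(tss, ess + rss)
</syntaxhighlight>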

==Partitioning in simple linear regression==
The following equality, stating that the total sum of squares equals the residual sum of squares plus the explained sum of squares, holds in [[simple linear regression]] fitted with an intercept:

:<math>\sum_{i=1}^n \left(y_i - \bar{y}\right)^2 = \sum_{i=1}^n \left(y_i - \hat{y}_i\right)^2 + \sum_{i=1}^n \left(\hat{y}_i - \bar{y}\right)^2.</math>

===Simple derivation===

Start from the identity

:<math>y_i - \bar{y} = (y_i-\hat{y}_i)+(\hat{y}_i - \bar{y}).</math>

Square both sides and sum over all ''i'':

:<math>\sum_{i=1}^n (y_i-\bar{y})^2=\sum_{i=1}^n (y_i - \hat{y}_i)^2+\sum_{i=1}^n (\hat{y}_i - \bar{y})^2 + \sum_{i=1}^n 2(\hat{y}_i-\bar{y})(y_i - \hat{y}_i).</math>

[[Simple linear regression]] with an intercept gives <math>\hat{a}=\bar{y}-\hat{b}\bar{x}</math>.<ref name=Mendenhall>{{cite book |last=Mendenhall |first=William |title=Introduction to Probability and Statistics |publisher=Brooks/Cole |year=2009 |location=Belmont, CA |page=507 |edition=13th |isbn=9780495389538}}</ref> The rest of the derivation depends on this fact, since substituting <math>\hat{y}_i = \hat{a}+\hat{b}x_i = \bar{y}-\hat{b}\bar{x}+\hat{b}x_i</math> allows the cross term to be rewritten:

:<math>
\begin{align}
\sum_{i=1}^n 2(\hat{y}_i-\bar{y})(y_i-\hat{y}_i) &= \sum_{i=1}^n 2((\bar{y}-\hat{b}\bar{x}+\hat{b}x_i)-\bar{y})(y_i-\hat{y}_i) \\
&= \sum_{i=1}^n 2((\bar{y}+\hat{b}(x_i-\bar{x}))-\bar{y})(y_i-\hat{y}_i) \\
&= \sum_{i=1}^n 2\hat{b}(x_i-\bar{x})(y_i-\hat{y}_i) \\
&= \sum_{i=1}^n 2\hat{b}(x_i-\bar{x})(y_i-(\bar{y}+\hat{b}(x_i-\bar{x}))) \\
&= \sum_{i=1}^n 2\hat{b}\left((y_i-\bar{y})(x_i-\bar{x})-\hat{b}(x_i-\bar{x})^2\right).
\end{align}
</math>

Again, [[simple linear regression]] gives<ref name=Mendenhall/>

:<math>\hat{b}=\left(\sum_{i=1}^n (x_i-\bar{x})(y_i-\bar{y})\right)/\left(\sum_{i=1}^n (x_i-\bar{x})^2\right),</math>

so that <math>\hat{b}\sum_{i=1}^n (x_i-\bar{x})^2 = \sum_{i=1}^n (x_i-\bar{x})(y_i-\bar{y})</math>, and therefore

:<math>
\begin{align}
\sum_{i=1}^n 2(\hat{y}_i-\bar{y})(y_i-\hat{y}_i)
&= 2\hat{b}\left(\sum_{i=1}^n (y_i-\bar{y})(x_i-\bar{x})-\hat{b}\sum_{i=1}^n (x_i-\bar{x})^2\right) \\
&= 2\hat{b}\left(\sum_{i=1}^n (y_i-\bar{y})(x_i-\bar{x})-\sum_{i=1}^n (y_i-\bar{y})(x_i-\bar{x})\right) \\
&= 2\hat{b}\cdot 0 = 0.
\end{align}
</math>

The cross term therefore vanishes, which completes the proof of the partition.
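
A quick numerical sanity check of this result, again a sketch with made-up data:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 5.0, size=30)
y = 4.0 - 0.7 * x + rng.normal(scale=0.5, size=x.size)

b, a = np.polyfit(x, y, deg=1)
y_hat = a + b * x

# The cross term from the derivation above; it vanishes up to rounding.
cross = np.sum(2.0 * (y_hat - y.mean()) * (y - y_hat))
assert abs(cross) < 1e-8
</syntaxhighlight>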

==Partitioning in the general OLS model==

The general regression model with ''n'' observations and ''k'' explanators, the first of which is a constant unit vector whose coefficient is the regression intercept, is

:<math>y = X \beta + e,</math>

where ''y'' is an ''n'' × 1 vector of dependent-variable observations, each column of the ''n'' × ''k'' matrix ''X'' is a vector of observations on one of the ''k'' explanators, <math>\beta</math> is a ''k'' × 1 vector of true coefficients, and ''e'' is an ''n'' × 1 vector of the true underlying errors. The [[ordinary least squares]] estimator for <math>\beta</math> is

:<math>\hat \beta = (X^T X)^{-1}X^T y.</math>

The residual vector <math>\hat e</math> is <math>y - X \hat \beta = y - X (X^T X)^{-1}X^T y</math>, so the residual sum of squares <math>\hat e^T \hat e</math> is, after simplification,

:<math>RSS = y^T y - y^T X(X^T X)^{-1} X^T y.</math>
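
These closed-form expressions translate directly into code. The following minimal sketch, with a made-up design matrix and NumPy's least-squares solver standing in for the explicit inverse, computes <math>\hat\beta</math> and checks the two forms of RSS against each other:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(2)
n, k = 100, 3
# First column of X is the constant unit vector carrying the intercept.
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
y = X @ np.array([1.0, 2.0, -0.5]) + rng.normal(scale=0.3, size=n)

# OLS estimator beta_hat = (X^T X)^{-1} X^T y, computed with a
# least-squares solver rather than an explicit matrix inverse.
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

e_hat = y - X @ beta_hat                  # residual vector
rss_direct = e_hat @ e_hat                # e_hat^T e_hat
rss_formula = y @ y - y @ (X @ beta_hat)  # y^T y - y^T X (X^T X)^{-1} X^T y
assert np.isclose(rss_direct, rss_formula)
</syntaxhighlight>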

Denote by <math>\bar y</math> the constant vector all of whose elements are the sample mean <math>y_m</math> of the dependent-variable values in the vector ''y''. Then the total sum of squares is

:<math>TSS = (y - \bar y)^T(y - \bar y) = y^T y - 2y^T \bar y + \bar y^T \bar y.</math>

The explained sum of squares, defined as the sum of squared deviations of the predicted values from the observed mean of ''y'', is

:<math>ESS = (\hat y - \bar y)^T(\hat y - \bar y) = \hat y^T \hat y - 2\hat y^T \bar y + \bar y^T \bar y.</math>

Using <math>\hat y = X \hat \beta</math> in this, and simplifying to obtain <math>\hat y^T \hat y = y^T X(X^T X)^{-1}X^T y</math>, gives the result that ''TSS'' = ''ESS'' + ''RSS'' if and only if <math>y^T \bar y = \hat y^T \bar y</math>. The left side of this condition is <math>y_m</math> times the sum of the elements of ''y'', and the right side is <math>y_m</math> times the sum of the elements of <math>\hat y</math>, so the condition is that the sum of the elements of ''y'' equals the sum of the elements of <math>\hat y</math>, or equivalently that the sum of the prediction errors (residuals) <math>y_i - \hat y_i</math> is zero. This follows from the well-known OLS property that the ''k'' × 1 vector <math>X^T \hat e = X^T [I - X(X^T X)^{-1}X^T]y = 0</math>: since the first column of ''X'' is a vector of ones, the first element of <math>X^T \hat e</math> is the sum of the residuals, and this element equals zero. The condition therefore holds, and ''TSS'' = ''ESS'' + ''RSS'' follows.
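
The orthogonality property <math>X^T \hat e = 0</math> is easy to confirm numerically. A short sketch, reusing the same kind of made-up data as above:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(2)
n, k = 100, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
y = X @ np.array([1.0, 2.0, -0.5]) + rng.normal(scale=0.3, size=n)

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
e_hat = y - X @ beta_hat

# X^T e_hat is the zero vector; its first element is the sum of the
# residuals, which vanishes because X contains an intercept column.
assert np.allclose(X.T @ e_hat, 0.0)

# Hence TSS = ESS + RSS holds for this fit.
y_hat = X @ beta_hat
tss = (y - y.mean()) @ (y - y.mean())
ess = (y_hat - y.mean()) @ (y_hat - y.mean())
rss = e_hat @ e_hat
assert np.isclose(tss, ess + rss)
</syntaxhighlight>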

In linear algebra terms, we have <math>RSS = \|y - \hat y\|_2^2</math>, <math>TSS = \|y - \bar y\|_2^2</math> and <math>ESS = \|\hat y - \bar y\|_2^2</math>. The proof can be simplified by noting that <math>y^T \hat y = \hat y^T \hat y</math>, since

:<math>\hat y^T \hat y = y^T X (X^T X)^{-1} X^T X (X^T X)^{-1} X^T y = y^T X (X^T X)^{-1} X^T y = y^T \hat y.</math>

Thus,

:<math>
\begin{align}
TSS = \|y - \bar y\|_2^2 &= \|y - \hat y + \hat y - \bar y\|_2^2 \\
&= \|y - \hat y\|_2^2 + \|\hat y - \bar y\|_2^2 + 2\langle y - \hat y,\, \hat y - \bar y\rangle \\
&= RSS + ESS + 2 y^T \hat y - 2 \hat y^T \hat y - 2 y^T \bar y + 2 \hat y^T \bar y \\
&= RSS + ESS - 2 y^T \bar y + 2 \hat y^T \bar y,
\end{align}
</math>

which again gives the result that ''TSS'' = ''ESS'' + ''RSS'' if and only if <math>y^T \bar y = \hat y^T \bar y</math>.
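
The key simplification <math>y^T \hat y = \hat y^T \hat y</math> can likewise be confirmed in a few lines, under the same made-up setup:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(2)
n, k = 100, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
y = X @ np.array([1.0, 2.0, -0.5]) + rng.normal(scale=0.3, size=n)

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
y_hat = X @ beta_hat

# y^T y_hat equals y_hat^T y_hat because y_hat is the orthogonal
# projection of y onto the column space of X.
assert np.isclose(y @ y_hat, y_hat @ y_hat)
</syntaxhighlight>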

==See also==
*[[Sum of squares (statistics)]]
*[[Lack-of-fit sum of squares]]

==Notes==
{{Reflist}}

{{DEFAULTSORT:Explained Sum Of Squares}}
[[Category:Regression analysis]]
[[Category:Least squares]]