<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://en.formulasearchengine.com/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=128.12.187.216</id>
	<title>formulasearchengine - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://en.formulasearchengine.com/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=128.12.187.216"/>
	<link rel="alternate" type="text/html" href="https://en.formulasearchengine.com/wiki/Special:Contributions/128.12.187.216"/>
	<updated>2026-05-02T02:07:45Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.43.0-wmf.28</generator>
	<entry>
		<id>https://en.formulasearchengine.com/index.php?title=Skorokhod%27s_embedding_theorem&amp;diff=15977</id>
		<title>Skorokhod&#039;s embedding theorem</title>
		<link rel="alternate" type="text/html" href="https://en.formulasearchengine.com/index.php?title=Skorokhod%27s_embedding_theorem&amp;diff=15977"/>
		<updated>2013-06-27T03:32:06Z</updated>

		<summary type="html">&lt;p&gt;128.12.187.216: /* Skorokhod&amp;#039;s second embedding theorem */ corrected somebody&amp;#039;s poor english&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;: &#039;&#039;Not to be confused with [[Kernel principal component analysis]].&#039;&#039;&lt;br /&gt;
&#039;&#039;&#039;Kernel regression&#039;&#039;&#039; is a [[non-parametric]] technique in statistics for estimating the [[conditional expectation]] of a [[random variable]]. The objective is to find a possibly non-linear relation between a pair of random variables &#039;&#039;&#039;&#039;&#039;X&#039;&#039;&#039;&#039;&#039; and &#039;&#039;&#039;&#039;&#039;Y&#039;&#039;&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
In any [[nonparametric regression]], the [[conditional expectation]] of a variable &amp;lt;math&amp;gt;Y&amp;lt;/math&amp;gt; relative to a variable &amp;lt;math&amp;gt;X&amp;lt;/math&amp;gt; may be written:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\operatorname{E}(Y | X) = m(X)&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;m&amp;lt;/math&amp;gt; is an unknown function.&lt;br /&gt;
&lt;br /&gt;
== Nadaraya-Watson kernel regression ==&lt;br /&gt;
{{harvnb|Nadaraya|1964}} and {{harvnb|Watson|1964}} proposed to estimate &amp;lt;math&amp;gt;m&amp;lt;/math&amp;gt; as a locally weighted average, using a [[kernel (statistics)|kernel]] as a weighting function. The Nadaraya-Watson estimator is:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt; \widehat{m}_h(x)=\frac{\sum_{i=1}^n K_h(x-X_i) Y_i}{\sum_{i=1}^nK_h(x-X_i)}  &lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;K_h(t) = \frac{1}{h} K\left(\frac{t}{h}\right)&amp;lt;/math&amp;gt; for a kernel &amp;lt;math&amp;gt;K&amp;lt;/math&amp;gt; with bandwidth &amp;lt;math&amp;gt;h&amp;lt;/math&amp;gt;. The denominator normalizes the kernel weights so that they sum to 1.&lt;br /&gt;
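&lt;br /&gt;
The estimator is a ratio of two kernel-weighted sums, which makes it straightforward to implement directly. The following is a minimal Python/NumPy sketch of the formula above (the article's own worked example further down uses R instead); the function name and the choice of a Gaussian kernel are illustrative assumptions.&lt;br /&gt;

```python
import numpy as np

def nadaraya_watson(x_grid, X, Y, h):
    """Nadaraya-Watson estimate of m(x) at each point of x_grid.

    Uses a Gaussian kernel; the kernel's normalizing constant cancels
    between the numerator and the denominator of the estimator.
    """
    d = x_grid[:, None] - X[None, :]          # pairwise x - X_i
    w = np.exp(-0.5 * (d / h) ** 2)           # kernel weights K_h(x - X_i)
    return (w * Y[None, :]).sum(axis=1) / w.sum(axis=1)

# Toy data: Y = sin(X) + noise, estimated at a single point
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 2.0 * np.pi, 200)
Y = np.sin(X) + 0.1 * rng.normal(size=200)
m_hat = nadaraya_watson(np.array([np.pi / 2]), X, Y, h=0.3)
```

The bandwidth h controls the bias-variance trade-off; in practice it is chosen by a data-driven method such as cross-validation, as the np package used in the example below does.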
&lt;br /&gt;
=== Derivation ===&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\operatorname{E}(Y | X) = \int y f(y|x) dy = \int y \frac{f(x,y)}{f(x)} dy&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Using [[kernel density estimation]] for the joint density &#039;&#039;f(x,y)&#039;&#039; and the marginal density &#039;&#039;f(x)&#039;&#039; with a kernel &#039;&#039;&#039;&#039;&#039;K&#039;&#039;&#039;&#039;&#039;,&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\hat{f}(x,y) = n^{-1} h^{-2} \sum_{i=1}^{n} K\left(\frac{x-x_i}{h}\right) K\left(\frac{y-y_i}{h}\right)&lt;br /&gt;
&amp;lt;/math&amp;gt;,&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\hat{f}(x) = n^{-1} h^{-1} \sum_{i=1}^{n} K\left(\frac{x-x_i}{h}\right)&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
we obtain the Nadaraya-Watson estimator.&lt;br /&gt;
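&lt;br /&gt;
Explicitly: for a symmetric kernel with unit mass and zero mean, &amp;lt;math&amp;gt;\int h^{-1} K\left(\frac{y-y_i}{h}\right) dy = 1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\int y \, h^{-1} K\left(\frac{y-y_i}{h}\right) dy = y_i&amp;lt;/math&amp;gt;, so&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\int y \hat{f}(x,y) \, dy = n^{-1} h^{-1} \sum_{i=1}^{n} K\left(\frac{x-x_i}{h}\right) y_i,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and dividing this by &amp;lt;math&amp;gt;\hat{f}(x)&amp;lt;/math&amp;gt; yields &amp;lt;math&amp;gt;\widehat{m}_h(x)&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;K_h(t) = h^{-1} K(t/h)&amp;lt;/math&amp;gt;.&lt;br /&gt;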
&lt;br /&gt;
== Priestley-Chao kernel estimator ==&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\widehat{m}_{PC}(x) = h^{-1} \sum_{i=1}^n (x_i - x_{i-1}) K\left(\frac{x-x_i}{h}\right) y_i&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Gasser-Müller kernel estimator ==&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\widehat{m}_{GM}(x) = h^{-1} \sum_{i=1}^n \left[\int_{s_{i-1}}^{s_i} K\left(\frac{x-u}{h}\right) du\right] y_i&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;s_i = \frac{x_{i-1} + x_i}{2}&amp;lt;/math&amp;gt;.&lt;br /&gt;
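&lt;br /&gt;
Both of these estimators assume a fixed design with ordered points. Below is a Python/NumPy sketch of the two formulas above (not from the original article): the function names are illustrative, the Gasser-Müller kernel integral is evaluated numerically, and the boundary values s_0 and s_n are taken to be the first and last design points, an assumption the text does not state.&lt;br /&gt;

```python
import numpy as np

def gauss(t):
    # Gaussian kernel K(t)
    return np.exp(-0.5 * t ** 2) / np.sqrt(2.0 * np.pi)

def priestley_chao(x, xs, ys, h):
    """Priestley-Chao estimate at x for an ordered fixed design xs."""
    gaps = np.diff(xs, prepend=xs[0])         # x_i - x_{i-1}; first gap is 0
    return np.sum(gaps * gauss((x - xs) / h) * ys) / h

def gasser_muller(x, xs, ys, h):
    """Gasser-Muller estimate at x; s_i are midpoints of consecutive
    design points, with s_0 and s_n taken as the endpoints (assumption)."""
    s = np.concatenate(([xs[0]], (xs[:-1] + xs[1:]) / 2.0, [xs[-1]]))
    weights = np.empty(len(xs))
    for i in range(len(xs)):
        # numerically integrate K((x - u)/h) over [s_{i-1}, s_i] (trapezoid rule)
        u = np.linspace(s[i], s[i + 1], 50)
        vals = gauss((x - u) / h)
        weights[i] = np.sum((vals[:-1] + vals[1:]) * 0.5) * (u[1] - u[0])
    return np.sum(weights * ys) / h

# Example: recover sin(x) at x = pi/2 from a noiseless fixed design
xs = np.linspace(0.0, np.pi, 60)
ys = np.sin(xs)
m_pc = priestley_chao(np.pi / 2, xs, ys, h=0.2)
m_gm = gasser_muller(np.pi / 2, xs, ys, h=0.2)
```

Both estimates come out close to 1; unlike the Nadaraya-Watson estimator, these weight each observation by the local spacing of the design points rather than by a normalized sum of kernel weights.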
&lt;br /&gt;
== Example ==&lt;br /&gt;
&lt;br /&gt;
This example is based upon Canadian cross-section wage data consisting&lt;br /&gt;
of a random sample taken from the 1971 Canadian Census Public Use&lt;br /&gt;
Tapes for male individuals having common education (grade 13). There&lt;br /&gt;
are 205 observations in total.&lt;br /&gt;
&lt;br /&gt;
We consider estimating the unknown regression function using&lt;br /&gt;
Nadaraya-Watson kernel regression via the&lt;br /&gt;
[http://cran.r-project.org/web/packages/np/index.html R np package]&lt;br /&gt;
that uses automatic (data-driven) bandwidth selection; see the [http://cran.r-project.org/web/packages/np/vignettes/np.pdf np vignette] for an introduction to the np package.&lt;br /&gt;
&lt;br /&gt;
The figure below shows the estimated regression function using a&lt;br /&gt;
second-order Gaussian kernel along with asymptotic variability bounds.&lt;br /&gt;
&lt;br /&gt;
[[File:cps71 lc mean.png|center|360px]] &lt;br /&gt;
&amp;lt;center&amp;gt;Estimated Regression Function.&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Script for example ===&lt;br /&gt;
&lt;br /&gt;
The following commands of the [[R programming language]] use the&lt;br /&gt;
&amp;lt;tt&amp;gt;npreg()&amp;lt;/tt&amp;gt; function to deliver optimal smoothing and to create&lt;br /&gt;
the figure given above. These commands can be entered at the command&lt;br /&gt;
prompt via copy and paste.&lt;br /&gt;
&lt;br /&gt;
 library(np) # nonparametric kernel regression library&lt;br /&gt;
 data(cps71)&lt;br /&gt;
 attach(cps71)&lt;br /&gt;
 &lt;br /&gt;
 m &amp;lt;- npreg(logwage~age)&lt;br /&gt;
 &lt;br /&gt;
 plot(m,plot.errors.method=&amp;quot;asymptotic&amp;quot;,&lt;br /&gt;
      plot.errors.style=&amp;quot;band&amp;quot;,&lt;br /&gt;
      ylim=c(11,15.2))&lt;br /&gt;
 &lt;br /&gt;
 points(age,logwage,cex=.25)&lt;br /&gt;
&lt;br /&gt;
== Related ==&lt;br /&gt;
According to {{harvnb|Salsburg|2002|pp=290–1}}, the algorithms used in kernel regression were independently developed and used in [[fuzzy system]]s: &amp;quot;Coming up with almost exactly the same computer algorithm, fuzzy systems and kernel density-based regressions appear to have been developed completely independently of one another.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
{{Reflist}}&lt;br /&gt;
&lt;br /&gt;
*{{cite journal&lt;br /&gt;
  | last = Nadaraya&lt;br /&gt;
  | first = E. A. &lt;br /&gt;
  | title = On Estimating Regression&lt;br /&gt;
  | journal = Theory of Probability and its Applications&lt;br /&gt;
  | volume = 9&lt;br /&gt;
  | issue = 1&lt;br /&gt;
  | pages = 141–2&lt;br /&gt;
  | year = 1964&lt;br /&gt;
  | doi = 10.1137/1109020 | ref=harv&lt;br /&gt;
  }}&lt;br /&gt;
&lt;br /&gt;
*{{cite book&lt;br /&gt;
  | last = Li&lt;br /&gt;
  | first = Qi&lt;br /&gt;
  | coauthors = Racine, Jeffrey S.&lt;br /&gt;
  | title = Nonparametric Econometrics: Theory and Practice&lt;br /&gt;
  | publisher = Princeton University Press&lt;br /&gt;
  | year = 2007&lt;br /&gt;
  | isbn =  0-691-12161-3}}&lt;br /&gt;
&lt;br /&gt;
*{{cite book&lt;br /&gt;
  | last = Simonoff&lt;br /&gt;
  | first = Jeffrey S.&lt;br /&gt;
  | title = Smoothing Methods in Statistics&lt;br /&gt;
  | publisher = Springer&lt;br /&gt;
  | year = 1996&lt;br /&gt;
  | isbn = 0-387-94716-7}}&lt;br /&gt;
&lt;br /&gt;
*{{cite book |last=Salsburg |first=D. |title=The Lady Tasting Tea: How Statistics Revolutionized Science in the Twentieth Century |publisher=W.H. Freeman |year=2002 |isbn=0-8050-7134-2 |ref=harv}}&lt;br /&gt;
&lt;br /&gt;
*{{cite journal |author=Richard, C.; Bermudez, J.-C. M.; Honeine, P. |title=Online prediction of time series data with kernels |journal=IEEE Transactions on Signal Processing |volume=57 |issue=3 |pages=1058–67 |date=March 2009 |doi=10.1109/TSP.2008.2009895 |url=http://www.cedric-richard.fr/Articles/richard2009online.pdf|format=PDF}}&lt;br /&gt;
&lt;br /&gt;
*{{cite journal |author=Parreira, W.; Bermudez, J.-C. M.; Richard, C.; Tourneret, J.-Y. |title=Stochastic behavior analysis of the Gaussian kernel-least-mean-square algorithm. |journal=IEEE Transactions on Signal Processing |volume=60 |issue=5 |pages=2208–2222 |date=May 2012 |doi=10.1109/TSP.2012.2186132 |url=http://www.cedric-richard.fr/Articles/parreira2012stochastic.pdf|format=PDF}}&lt;br /&gt;
&lt;br /&gt;
*{{cite journal |author=Richard, C.; Bermudez, J.-C. M. |title=Closed-form conditions for convergence of the Gaussian kernel-least-mean-square algorithm. |journal=Proc. of Asilomar&#039;12 |pages=1797–1801 |date=November 2012 |doi=10.1109/ACSSC.2012.6489344 |url=http://www.cedric-richard.fr/Articles/richard2012closed.pdf|format=PDF}}&lt;br /&gt;
&lt;br /&gt;
*{{cite journal |first=G. S. |last=Watson |authorlink=Geoffrey Watson |title=Smooth regression analysis |journal=Sankhyā: The Indian Journal of Statistics, Series A |volume=26 |issue=4 |pages=359–372 |year=1964 |jstor=25049340 |ref=harv}}&lt;br /&gt;
&lt;br /&gt;
==Statistical implementation==&lt;br /&gt;
* [[Stata]] [http://ideas.repec.org/c/boc/bocode/s372601.html kernreg2]&lt;br /&gt;
&amp;lt;pre&amp;gt; kernreg2 y x, bwidth(.5) kercode(3) npoint(500) gen(kernelprediction gridofpoints)&amp;lt;/pre&amp;gt;&lt;br /&gt;
* [[R (programming language)|R]]: [http://cran.r-project.org/web/packages/np/index.html npreg  (package &#039;&#039;np&#039;&#039;)]&lt;br /&gt;
* [[GNU Octave]]&lt;br /&gt;
&lt;br /&gt;
==External links==&lt;br /&gt;
* [http://www.cs.tut.fi/~lasip Scale-adaptive kernel regression] (with Matlab software).&lt;br /&gt;
* [http://people.revoledu.com/kardi/tutorial/Regression/KernelRegression/index.html Tutorial of Kernel regression using spreadsheet] (with Microsoft Excel).&lt;br /&gt;
* [http://pcarvalho.com/things/kernelregressor/ An online kernel regression demonstration] (requires .NET 3.0 or later).&lt;br /&gt;
* [http://cran.r-project.org/web/packages/np/index.html The np package] An [[R (programming language)|R]] package that provides a variety of nonparametric and semiparametric kernel methods that seamlessly handle a mix of continuous, unordered, and ordered factor data types.&lt;br /&gt;
&lt;br /&gt;
[[Category:Non-parametric statistics]]&lt;/div&gt;</summary>
		<author><name>128.12.187.216</name></author>
	</entry>
	<entry>
		<id>https://en.formulasearchengine.com/index.php?title=Classical_Wiener_space&amp;diff=14924</id>
		<title>Classical Wiener space</title>
		<link rel="alternate" type="text/html" href="https://en.formulasearchengine.com/index.php?title=Classical_Wiener_space&amp;diff=14924"/>
		<updated>2013-06-27T03:17:05Z</updated>

		<summary type="html">&lt;p&gt;128.12.187.216: /* Properties of classical Wiener space */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{unreferenced|date=August 2009}}&lt;br /&gt;
{{Expert-subject|Mathematics|date=February 2009}}&lt;br /&gt;
&lt;br /&gt;
In [[mathematics]], the &#039;&#039;&#039;Malliavin derivative&#039;&#039;&#039; is a notion of [[derivative]] in the [[Malliavin calculus]]. Intuitively, it is the notion of derivative appropriate to paths in [[classical Wiener space]], which are &amp;quot;usually&amp;quot; not differentiable in the usual sense. {{Citation Needed|date=August 2011}}&lt;br /&gt;
&lt;br /&gt;
==Definition==&lt;br /&gt;
Let &amp;lt;math&amp;gt;H&amp;lt;/math&amp;gt; be the [[Cameron-Martin space]], and &amp;lt;math&amp;gt;C_{0}&amp;lt;/math&amp;gt; denote [[classical Wiener space]]:&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;H := \{ f \in W^{1,2} ([0, T]; \mathbb{R}^{n}) \;|\; f(0) = 0 \} := \{ \text{paths starting at 0 with first derivative in } L^{2} \}&amp;lt;/math&amp;gt;;&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;C_{0} := C_{0} ([0, T]; \mathbb{R}^{n}) := \{ \text{continuous paths starting at 0} \}&amp;lt;/math&amp;gt;;&lt;br /&gt;
&lt;br /&gt;
By the [[Sobolev_inequality#Sobolev_embedding_theorem|Sobolev embedding theorem]], &amp;lt;math&amp;gt;H \subset C_0&amp;lt;/math&amp;gt;. Let&lt;br /&gt;
:&amp;lt;math&amp;gt;i : H \to C_{0}&amp;lt;/math&amp;gt;&lt;br /&gt;
denote the [[inclusion map]].&lt;br /&gt;
&lt;br /&gt;
Suppose that &amp;lt;math&amp;gt;F : C_{0} \to \mathbb{R}&amp;lt;/math&amp;gt; is [[Fréchet derivative|Fréchet differentiable]]. Then the [[Fréchet derivative]] is a map&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\mathrm{D} F : C_{0} \to \mathrm{Lin} (C_{0}; \mathbb{R})&amp;lt;/math&amp;gt;;&lt;br /&gt;
&lt;br /&gt;
i.e., for paths &amp;lt;math&amp;gt;\sigma \in C_{0}&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\mathrm{D} F (\sigma)\;&amp;lt;/math&amp;gt; is an element of &amp;lt;math&amp;gt;C_{0}^{*}&amp;lt;/math&amp;gt;, the [[dual space]] to &amp;lt;math&amp;gt;C_{0}\;&amp;lt;/math&amp;gt;. Denote by &amp;lt;math&amp;gt;\mathrm{D}_{H} F(\sigma)\;&amp;lt;/math&amp;gt; the [[continuous function|continuous]] [[linear map]] &amp;lt;math&amp;gt;H \to \mathbb{R}&amp;lt;/math&amp;gt; defined by&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\mathrm{D}_{H} F (\sigma) := \mathrm{D} F (\sigma) \circ i : H \to \mathbb{R}, &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
sometimes known as the [[H-derivative|&#039;&#039;H&#039;&#039;-derivative]]. Now define &amp;lt;math&amp;gt;\nabla_{H} F : C_{0} \to H&amp;lt;/math&amp;gt; to be the [[adjoint]]{{dn|date=December 2013}} of &amp;lt;math&amp;gt;\mathrm{D}_{H} F\;&amp;lt;/math&amp;gt; in the sense that&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\int_0^T \left(\partial_t \nabla_H F(\sigma)\right) \cdot \partial_t h \, \mathrm{d} t := \langle \nabla_{H} F (\sigma), h \rangle_{H} = \left( \mathrm{D}_{H} F \right) (\sigma) (h) = \lim_{t \to 0} \frac{F (\sigma + t i(h)) - F(\sigma)}{t}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Then the &#039;&#039;&#039;Malliavin derivative&#039;&#039;&#039; &amp;lt;math&amp;gt;\mathrm{D}_{t}&amp;lt;/math&amp;gt; is defined by&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\left( \mathrm{D}_{t} F \right) (\sigma) := \frac{\partial}{\partial t} \left( \left( \nabla_{H} F \right) (\sigma) \right).&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The [[domain (mathematics)|domain]] of &amp;lt;math&amp;gt;\mathrm{D}_{t}&amp;lt;/math&amp;gt; is the set &amp;lt;math&amp;gt;\mathbf{F}&amp;lt;/math&amp;gt; of all Fréchet differentiable real-valued functions on &amp;lt;math&amp;gt;C_{0}\;&amp;lt;/math&amp;gt;; the [[codomain]] is &amp;lt;math&amp;gt;L^{2} ([0, T]; \mathbb{R}^{n})&amp;lt;/math&amp;gt;.&lt;br /&gt;
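&lt;br /&gt;
As a simple example (taking &amp;lt;math&amp;gt;n = 1&amp;lt;/math&amp;gt;), consider the evaluation functional &amp;lt;math&amp;gt;F(\sigma) := \sigma(t_{0})&amp;lt;/math&amp;gt; for fixed &amp;lt;math&amp;gt;t_{0} \in [0, T]&amp;lt;/math&amp;gt;. Being linear and continuous, &amp;lt;math&amp;gt;F&amp;lt;/math&amp;gt; is Fréchet differentiable with &amp;lt;math&amp;gt;\mathrm{D} F (\sigma) = F&amp;lt;/math&amp;gt;, and for &amp;lt;math&amp;gt;h \in H&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\mathrm{D}_{H} F (\sigma) (h) = h(t_{0}) = \int_{0}^{T} \mathbf{1}_{[0, t_{0}]}(t) \, \partial_{t} h \, \mathrm{d} t,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
so &amp;lt;math&amp;gt;\partial_{t} \left( \nabla_{H} F \right) (\sigma) = \mathbf{1}_{[0, t_{0}]}&amp;lt;/math&amp;gt; and therefore &amp;lt;math&amp;gt;\left( \mathrm{D}_{t} F \right) (\sigma) = \mathbf{1}_{[0, t_{0}]}(t)&amp;lt;/math&amp;gt;, independently of the path &amp;lt;math&amp;gt;\sigma&amp;lt;/math&amp;gt;.&lt;br /&gt;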
&lt;br /&gt;
The &#039;&#039;&#039;Skorokhod integral&#039;&#039;&#039; &amp;lt;math&amp;gt;\delta\;&amp;lt;/math&amp;gt; is defined to be the [[adjoint]]{{dn|date=December 2013}} of the Malliavin derivative:&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\delta := \left( \mathrm{D}_{t} \right)^{*} : \operatorname{image} \left( \mathrm{D}_{t} \right) \subseteq L^{2} ([0, T]; \mathbb{R}^{n}) \to \mathbf{F}^{*} = \mathrm{Lin} (\mathbf{F}; \mathbb{R}).&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==See also==&lt;br /&gt;
*[[H-derivative]]&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
{{reflist}}&lt;br /&gt;
&lt;br /&gt;
[[Category:Generalizations of the derivative]]&lt;br /&gt;
[[Category:Stochastic calculus]]&lt;/div&gt;</summary>
		<author><name>128.12.187.216</name></author>
	</entry>
</feed>