<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://en.formulasearchengine.com/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=157.181.98.186</id>
	<title>formulasearchengine - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://en.formulasearchengine.com/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=157.181.98.186"/>
	<link rel="alternate" type="text/html" href="https://en.formulasearchengine.com/wiki/Special:Contributions/157.181.98.186"/>
	<updated>2026-05-02T03:00:16Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.43.0-wmf.28</generator>
	<entry>
		<id>https://en.formulasearchengine.com/index.php?title=List_of_mathematical_symbols&amp;diff=2009</id>
		<title>List of mathematical symbols</title>
		<link rel="alternate" type="text/html" href="https://en.formulasearchengine.com/index.php?title=List_of_mathematical_symbols&amp;diff=2009"/>
		<updated>2014-02-01T02:32:47Z</updated>

		<summary type="html">&lt;p&gt;157.181.98.186: do not confuse &amp;quot;strict&amp;quot; with &amp;quot;significant&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{more footnotes|date=March 2013}}&lt;br /&gt;
A &#039;&#039;&#039;binary symmetric channel&#039;&#039;&#039; (or BSC) is a common [[communications channel]] model used in [[coding theory]] and [[information theory]]. In this model, a transmitter wishes to send a [[bit]] (a zero or a one), and the receiver receives a bit. It is assumed that the bit is &#039;&#039;usually&#039;&#039; [[data transmission|transmitted]] correctly, but that it will be &amp;quot;flipped&amp;quot; with a small [[probability]] (the &amp;quot;crossover probability&amp;quot;). This channel is used frequently in information theory because it is one of the simplest channels to analyze.&lt;br /&gt;
&lt;br /&gt;
== Description ==&lt;br /&gt;
[[Image:Binary symmetric channel (en).svg|600px|center]]&lt;br /&gt;
The BSC is a &#039;&#039;binary channel&#039;&#039;; that is, it can transmit only one of two symbols (usually called 0 and 1). (A non-binary channel would be capable of transmitting more than 2 symbols, possibly even an infinite number of choices.) The transmission is not perfect, and occasionally the receiver gets the wrong bit.&lt;br /&gt;
&lt;br /&gt;
This channel is often used by theorists because it is one of the simplest [[Signal noise|noisy]] channels to analyze. Many problems in [[communication theory]] can be [[Reduction (complexity)|reduced]] to a BSC. Conversely, being able to transmit effectively over the BSC can give rise to solutions for more complicated channels.&lt;br /&gt;
&lt;br /&gt;
== Definition ==&lt;br /&gt;
A &#039;&#039;&#039;binary symmetric channel with crossover probability &#039;&#039;p&#039;&#039;&#039;&#039;&#039;, denoted by &amp;lt;math&amp;gt;BSC_{p}&amp;lt;/math&amp;gt;, is a channel with binary input and binary output and probability of error &#039;&#039;p&#039;&#039;; that is, if &#039;&#039;X&#039;&#039; is the transmitted [[random variable]] and &#039;&#039;Y&#039;&#039; the received variable, then the channel is characterized by the [[conditional probability|conditional probabilities]]&lt;br /&gt;
: Pr( &#039;&#039;Y&#039;&#039; = 0 | &#039;&#039;X&#039;&#039; = 0 ) = 1 &amp;amp;minus; &#039;&#039;p&#039;&#039;&lt;br /&gt;
: Pr( &#039;&#039;Y&#039;&#039; = 0 | &#039;&#039;X&#039;&#039; = 1 ) = &#039;&#039;p&#039;&#039;&lt;br /&gt;
: Pr( &#039;&#039;Y&#039;&#039; = 1 | &#039;&#039;X&#039;&#039; = 0 ) = &#039;&#039;p&#039;&#039;&lt;br /&gt;
: Pr( &#039;&#039;Y&#039;&#039; = 1 | &#039;&#039;X&#039;&#039; = 1 ) = 1 &amp;amp;minus; &#039;&#039;p&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
It is assumed that 0 ≤ &#039;&#039;p&#039;&#039; ≤ 1/2. If &#039;&#039;p&#039;&#039;&amp;amp;nbsp;&amp;gt;&amp;amp;nbsp;1/2, then the receiver can swap the output (interpret 1 when it sees 0, and vice versa) and obtain an equivalent channel with crossover probability 1&amp;amp;nbsp;&amp;amp;minus;&amp;amp;nbsp;&#039;&#039;p&#039;&#039;&amp;amp;nbsp;≤&amp;amp;nbsp;1/2.&lt;br /&gt;
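A minimal simulation sketch of the channel just defined (the helper name and the fixed seed are illustrative choices, not part of the model): each transmitted bit is flipped independently with probability &#039;&#039;p&#039;&#039;.&lt;br /&gt;

```python
import random

def bsc(bits, p, seed=0):
    """Pass a sequence of bits through a binary symmetric channel:
    each bit is flipped independently with crossover probability p."""
    rng = random.Random(seed)  # fixed seed keeps the illustration reproducible
    return [b ^ (rng.random() < p) for b in bits]

sent = [0, 1, 1, 0, 1, 0, 0, 1] * 1000
received = bsc(sent, p=0.1)
flips = sum(s != r for s, r in zip(sent, received))
print(flips / len(sent))  # empirical flip fraction, close to p = 0.1
```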
&lt;br /&gt;
=== Capacity of BSC&amp;lt;sub&amp;gt;&#039;&#039;p&#039;&#039;&amp;lt;/sub&amp;gt; ===&lt;br /&gt;
The [[channel capacity|capacity of the channel]] is 1&amp;amp;nbsp;&amp;amp;minus;&amp;amp;nbsp;&#039;&#039;H&#039;&#039;(&#039;&#039;p&#039;&#039;), where &#039;&#039;H&#039;&#039;(&#039;&#039;p&#039;&#039;) is the [[binary entropy function]].&lt;br /&gt;
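The capacity formula can be evaluated directly; a short sketch (the function names are illustrative):&lt;br /&gt;

```python
from math import log2

def binary_entropy(p):
    """H(p) = -p*log2(p) - (1-p)*log2(1-p), with H(0) = H(1) = 0."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def bsc_capacity(p):
    """Capacity of BSC_p: 1 - H(p)."""
    return 1 - binary_entropy(p)

print(bsc_capacity(0.0))   # noiseless channel: capacity 1
print(bsc_capacity(0.5))   # pure noise: capacity 0
print(bsc_capacity(0.11))  # roughly 1/2
```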
&lt;br /&gt;
The converse can be shown by a [[sphere packing]] argument. Given a codeword, there are roughly 2&amp;lt;sup&amp;gt;&#039;&#039;n&#039;&#039; H(&#039;&#039;p&#039;&#039;)&amp;lt;/sup&amp;gt; [[Typical set|typical]] output sequences. There are 2&amp;lt;sup&amp;gt;&#039;&#039;n&#039;&#039;&amp;lt;/sup&amp;gt; total possible outputs, and the input chooses from a [[codebook]] of size 2&amp;lt;sup&amp;gt;&#039;&#039;nR&#039;&#039;&amp;lt;/sup&amp;gt;. Therefore, the receiver would choose to [[Partition of a set|partition]] the space into &amp;quot;spheres&amp;quot; with 2&amp;lt;sup&amp;gt;&#039;&#039;n&#039;&#039;&amp;lt;/sup&amp;gt; / 2&amp;lt;sup&amp;gt;&#039;&#039;nR&#039;&#039;&amp;lt;/sup&amp;gt; = 2&amp;lt;sup&amp;gt;&#039;&#039;n&#039;&#039;(1&amp;amp;nbsp;&amp;amp;minus;&amp;amp;nbsp;&#039;&#039;R&#039;&#039;)&amp;lt;/sup&amp;gt; potential outputs each. If &#039;&#039;R&#039;&#039;&amp;amp;nbsp;&amp;gt;&amp;amp;nbsp;1&amp;amp;nbsp;&amp;amp;minus;&amp;amp;nbsp;&#039;&#039;H&#039;&#039;(&#039;&#039;p&#039;&#039;), then the spheres will be packed too tightly [[Asymptotic analysis|asymptotically]], and the receiver will not be able to reliably identify the correct codeword.&lt;br /&gt;
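The counting in the sphere-packing argument can be made concrete with hypothetical numbers (the values of n, p, and R below are made up, with R chosen above capacity):&lt;br /&gt;

```python
from math import log2

def binary_entropy(p):
    return -p * log2(p) - (1 - p) * log2(1 - p)

# Hypothetical values: rate R = 0.6 exceeds the capacity 1 - H(0.1), about 0.53.
n, p, R = 1000, 0.1, 0.6
log2_sphere = n * (1 - R)             # each decoding "sphere" covers 2^{n(1-R)} outputs
log2_typical = n * binary_entropy(p)  # each codeword induces ~2^{nH(p)} typical outputs
print(log2_sphere, log2_typical)      # the typical output set overflows the sphere
```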
&lt;br /&gt;
== Shannon&#039;s channel capacity theorem for BSC&amp;lt;sub&amp;gt;&#039;&#039;p&#039;&#039;&amp;lt;/sub&amp;gt; ==&lt;br /&gt;
Shannon&#039;s noisy-channel coding theorem applies to channels in general. Here we consider its special case for a binary symmetric channel with crossover probability &#039;&#039;p&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
=== Noisy coding theorem for BSC&amp;lt;sub&amp;gt;&#039;&#039;p&#039;&#039;&amp;lt;/sub&amp;gt; ===&lt;br /&gt;
We denote (and will continue to denote) noise &amp;lt;math&amp;gt;e&amp;lt;/math&amp;gt; from &amp;lt;math&amp;gt;BSC_{p}&amp;lt;/math&amp;gt; by &amp;quot;&amp;lt;math&amp;gt;e \in BSC_{p}&amp;lt;/math&amp;gt;&amp;quot;. The noise &amp;lt;math&amp;gt;e&amp;lt;/math&amp;gt; is essentially a [[random variable]], with each of its components independently being a &amp;lt;math&amp;gt;1&amp;lt;/math&amp;gt; with probability &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt; and a &amp;lt;math&amp;gt;0&amp;lt;/math&amp;gt; with probability &amp;lt;math&amp;gt;1-p&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Theorem 1&#039;&#039;&#039;&lt;br /&gt;
For all &amp;lt;math&amp;gt;p &amp;lt; \frac{1}{2}&amp;lt;/math&amp;gt; and all &amp;lt;math&amp;gt;\epsilon&amp;lt;/math&amp;gt; such that &amp;lt;math&amp;gt;0 &amp;lt; \epsilon &amp;lt; \frac{1}{2} - p&amp;lt;/math&amp;gt;, there exists a pair of [[Code|encoding]] and [[Code|decoding]] functions &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;\{0,1\}^k \rightarrow \{0,1\}^n&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;D&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;\{0,1\}^n \rightarrow \{0,1\}^{k}&amp;lt;/math&amp;gt; respectively, with &amp;lt;math&amp;gt;k \leq \lfloor (1 - H(p + \epsilon))n \rfloor&amp;lt;/math&amp;gt;, such that every message &amp;lt;math&amp;gt;m \in \{0,1\}^{k}&amp;lt;/math&amp;gt; has the following property: &amp;lt;math&amp;gt;\Pr_{e \in BSC_p}[D(E(m) + e) \neq m] \leq 2^{-{\delta}n}&amp;lt;/math&amp;gt; for some &amp;lt;math&amp;gt;\delta &amp;gt; 0&amp;lt;/math&amp;gt; and a sufficiently large integer &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
What this theorem says is that if a message is picked from &amp;lt;math&amp;gt;\{0,1\}^k&amp;lt;/math&amp;gt;, encoded with a random encoding function &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt;, and sent across a noisy &amp;lt;math&amp;gt;BSC_{p}&amp;lt;/math&amp;gt;, then there is a very high probability of recovering the original message by decoding, provided &amp;lt;math&amp;gt;k&amp;lt;/math&amp;gt;, and in effect the rate of the code, is bounded by the quantity stated in the theorem. The decoding error probability is exponentially small in the block length.&lt;br /&gt;
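To make the rate bound of Theorem 1 concrete, a small computation with hypothetical parameter values (n = 1000, p = 0.1, and the slack 0.05 are made up for illustration):&lt;br /&gt;

```python
from math import floor, log2

def binary_entropy(p):
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

def max_message_length(n, p, eps):
    """Largest k allowed by Theorem 1: k <= floor((1 - H(p + eps)) * n)."""
    return floor((1 - binary_entropy(p + eps)) * n)

k = max_message_length(n=1000, p=0.1, eps=0.05)
print(k, k / 1000)  # the message length and the corresponding rate k/n
```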
&lt;br /&gt;
We shall now prove Theorem 1.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Proof&#039;&#039;&#039;&lt;br /&gt;
We shall first describe the encoding function &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt; and the decoding function &amp;lt;math&amp;gt;D&amp;lt;/math&amp;gt; used in the theorem. We will use the [[probabilistic method]] to prove this theorem; Shannon&#039;s theorem was one of the earliest applications of this method.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[[Code|Encoding function]] &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt;:&#039;&#039;&#039; The encoding function &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;\{0,1\}^k \rightarrow \{0,1\}^n&amp;lt;/math&amp;gt; is selected at random. This means, given any message &amp;lt;math&amp;gt;m \in \{0,1\}^k&amp;lt;/math&amp;gt;, we choose &amp;lt;math&amp;gt;E(m) \in \{0,1\}^n&amp;lt;/math&amp;gt; uniformly and independently at random.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[[Code|Decoding function]] &amp;lt;math&amp;gt;D&amp;lt;/math&amp;gt;:&#039;&#039;&#039; Given any received word &amp;lt;math&amp;gt;y \in \{0,1\}^n&amp;lt;/math&amp;gt;, the decoding function &amp;lt;math&amp;gt;D&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;\{0,1\}^n \rightarrow \{0,1\}^k&amp;lt;/math&amp;gt; outputs the message &amp;lt;math&amp;gt;m \in \{0,1\}^{k}&amp;lt;/math&amp;gt; for which the [[Hamming distance]] &amp;lt;math&amp;gt;\Delta(y, E(m))&amp;lt;/math&amp;gt; is minimum (with ties broken arbitrarily). This kind of decoding function is called a &#039;&#039;[[maximum likelihood decoding (MLD)]]&#039;&#039; function.&lt;br /&gt;
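A brute-force sketch of such a minimum-Hamming-distance decoder; the 3-bit repetition codebook below is a toy example, not the random code used in the proof:&lt;br /&gt;

```python
def hamming_distance(a, b):
    return sum(x != y for x, y in zip(a, b))

def mld(y, codebook):
    """Return the message m whose codeword codebook[m] is closest to y
    in Hamming distance (ties broken by iteration order)."""
    return min(codebook, key=lambda m: hamming_distance(y, codebook[m]))

# Toy codebook: the 3-bit repetition code, E(0) = 000 and E(1) = 111.
codebook = {0: (0, 0, 0), 1: (1, 1, 1)}
print(mld((0, 1, 0), codebook))  # single flip: decodes to 0
print(mld((1, 1, 0), codebook))  # single flip: decodes to 1
```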
&lt;br /&gt;
Now first we show, for a fixed &amp;lt;math&amp;gt;m \in \{0,1\}^{k}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt; chosen randomly, the probability of failure over &amp;lt;math&amp;gt;BSC_p&amp;lt;/math&amp;gt; noise is exponentially small. At this point, the proof works for a fixed message &amp;lt;math&amp;gt;m&amp;lt;/math&amp;gt;.  Next we extend this result to work for &#039;&#039;all&#039;&#039; &amp;lt;math&amp;gt;m&amp;lt;/math&amp;gt;. We achieve this by eliminating half of the codewords from the code with the argument that the proof for the decoding error probability holds for at least half of the codewords. The latter method is called expurgation. This gives the total process the name &#039;&#039;random coding with expurgation&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;A high-level proof:&#039;&#039;&#039; Given a fixed message &amp;lt;math&amp;gt;m \in \{0,1\}^{k}&amp;lt;/math&amp;gt;, we need to estimate the [[expected value]] of the [[probability]] that the received word, i.e. the transmitted codeword corrupted by noise, does not decode back to &amp;lt;math&amp;gt;m&amp;lt;/math&amp;gt;. That is to say, we need to estimate &amp;lt;math&amp;gt;\mathbb{E}_{E}[\Pr_{e \in BSC_p}[D(E(m) + e) \neq m]]&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Let &amp;lt;math&amp;gt;y&amp;lt;/math&amp;gt; be the received codeword. In order for the decoded codeword &amp;lt;math&amp;gt;D(y)&amp;lt;/math&amp;gt; not to be equal to the message &amp;lt;math&amp;gt;m&amp;lt;/math&amp;gt;, one of the following events must occur:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;math&amp;gt;y&amp;lt;/math&amp;gt; does not lie within the Hamming ball of radius &amp;lt;math&amp;gt;(p+\epsilon)n&amp;lt;/math&amp;gt;, for some fixed &amp;lt;math&amp;gt;\epsilon &amp;gt; 0&amp;lt;/math&amp;gt;, centered at &amp;lt;math&amp;gt;E(m)&amp;lt;/math&amp;gt;. This condition is mainly used to make the calculations easier.&lt;br /&gt;
&lt;br /&gt;
* There is another message &amp;lt;math&amp;gt;m^{\prime} \in \{0,1\}^k&amp;lt;/math&amp;gt; such that &amp;lt;math&amp;gt;\Delta(y, E(m^{\prime})) \leq \Delta(y, E(m))&amp;lt;/math&amp;gt;. In other words the errors due to noise take the transmitted codeword closer to another encoded message.&lt;br /&gt;
&lt;br /&gt;
We can apply the [[Chernoff bound]] to bound the probability of the first event. By the Chernoff bound we have &amp;lt;math&amp;gt;\Pr_{e \in BSC_p}[\Delta(y, E(m)) &amp;gt; (p+\epsilon)n] \leq 2^{-{\epsilon^2}n}&amp;lt;/math&amp;gt;. Thus, for any fixed &amp;lt;math&amp;gt;\epsilon &amp;gt; 0&amp;lt;/math&amp;gt;, this probability is exponentially small in &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt;.&lt;br /&gt;
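A Monte Carlo sanity check of this step (illustrative only; the block length and trial count below are arbitrary choices): the empirical probability that the noise flips more than a &amp;lt;math&amp;gt;(p+\epsilon)&amp;lt;/math&amp;gt; fraction of positions stays below the stated bound.&lt;br /&gt;

```python
import random

def flip_fraction(n, p, rng):
    """Fraction of positions flipped by BSC_p noise on a block of length n."""
    return sum(rng.random() < p for _ in range(n)) / n

rng = random.Random(42)
n, p, eps = 2000, 0.1, 0.05
trials = 2000
exceed = sum(flip_fraction(n, p, rng) > p + eps for _ in range(trials))
bound = 2 ** (-eps ** 2 * n)
print(exceed / trials, bound)  # empirical tail vs. the Chernoff bound
```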
&lt;br /&gt;
As for the second event, we note that the probability that &amp;lt;math&amp;gt;E(m^{\prime}) \in B(y,(p+\epsilon)n)&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;Vol(B(y,(p+\epsilon)n))/2^n&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;B(x, r)&amp;lt;/math&amp;gt; is the Hamming ball of radius &amp;lt;math&amp;gt;r&amp;lt;/math&amp;gt; centered at vector &amp;lt;math&amp;gt;x&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;Vol(B(x, r))&amp;lt;/math&amp;gt; is its volume. Using the standard estimate for the number of words in a Hamming ball, we have &amp;lt;math&amp;gt;Vol(B(y,(p+\epsilon)n)) \approx 2^{H(p+\epsilon)n}&amp;lt;/math&amp;gt;. Hence the above probability amounts to &amp;lt;math&amp;gt;2^{H(p+\epsilon)n}/2^n = 2^{H(p+\epsilon)n-n}&amp;lt;/math&amp;gt;. Now using the [[union bound]], we can upper bound the probability that such an &amp;lt;math&amp;gt;m^{\prime} \in \{0,1\}^k&amp;lt;/math&amp;gt; exists by &amp;lt;math&amp;gt;2^{k +H(p+\epsilon)n-n}&amp;lt;/math&amp;gt;, which is &amp;lt;math&amp;gt;2^{-\Omega(n)}&amp;lt;/math&amp;gt; by the choice of &amp;lt;math&amp;gt;k&amp;lt;/math&amp;gt;, as desired.&lt;br /&gt;
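The volume estimate used here can be checked numerically (an illustrative sketch; the exact volume is a sum of binomial coefficients, and the agreement is only up to lower-order terms):&lt;br /&gt;

```python
from math import comb, log2

def binary_entropy(p):
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

def log2_ball_volume(n, radius):
    """log2 of the number of points in a Hamming ball of the given radius."""
    return log2(sum(comb(n, i) for i in range(round(radius) + 1)))

n, p, eps = 1000, 0.1, 0.05
exact = log2_ball_volume(n, (p + eps) * n)
approx = binary_entropy(p + eps) * n
print(exact, approx)  # the two agree to within O(log n) bits
```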
&lt;br /&gt;
&#039;&#039;&#039;A detailed proof:&#039;&#039;&#039; From the above analysis, we calculate the probability of the event that the decoded codeword plus the channel noise is not the same as the original message sent. We shall introduce some symbols here. Let &amp;lt;math&amp;gt;p(y|E(m))&amp;lt;/math&amp;gt; denote the probability of receiving codeword &amp;lt;math&amp;gt;y&amp;lt;/math&amp;gt; given that codeword &amp;lt;math&amp;gt;E(m)&amp;lt;/math&amp;gt; was sent.  Denote &amp;lt;math&amp;gt;B(E(m),(p+\epsilon)n)&amp;lt;/math&amp;gt; by &amp;lt;math&amp;gt;\text{Ball}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&amp;lt;math&amp;gt;\Pr_{e \in BSC_p}[D(E(m) + e) \neq m] = \sum_{y \in \{0,1\}^{n}} p(y|E(m))\cdot 1_{D(y)\neq m} \leq \sum_{y \notin \text{Ball}} p(y|E(m)) \cdot 1_{D(y)\neq m} + \sum_{y \in \text{Ball}} p(y|E(m))\cdot 1_{D(y)\neq m} \leq 2^{-{\epsilon^2}n} + \sum_{y \in \text{Ball}} p(y|E(m)) \cdot 1_{D(y)\neq m}.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We get the last inequality by our analysis using the [[Chernoff bound]] above. Now taking expectation on both sides we have, &lt;br /&gt;
&amp;lt;math&amp;gt;\mathbb{E}_E[\Pr_{e \in BSC_p}[D(E(m) + e) \neq m]] \leq 2^{-{\epsilon^2}n} + \sum_{y \in \text{Ball}} p(y|E(m)) \cdot \mathbb{E}[1_{D(y)\neq m}].&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now we have &amp;lt;math&amp;gt;\sum_{y \in \text{Ball}} p(y|E(m)) \leq 1&amp;lt;/math&amp;gt;, and, from the analysis in the high-level proof above, &amp;lt;math&amp;gt;\mathbb{E}[1_{D(y)\neq m}] \leq 2^{k +H(p + \epsilon)n-n}&amp;lt;/math&amp;gt;. Hence, putting everything together, we have&lt;br /&gt;
&amp;lt;math&amp;gt;\mathbb{E}_{E}[\Pr_{e \in BSC_p}[D(E(m) + e) \neq m]] \leq 2^{-{\epsilon^2}n} + 2^{k +H(p + \epsilon)n-n} \leq 2^{-\delta n}&amp;lt;/math&amp;gt; for an appropriately chosen value of &amp;lt;math&amp;gt;\delta &amp;gt; 0&amp;lt;/math&amp;gt;. Since the above bound holds for &#039;&#039;&#039;each&#039;&#039;&#039; message, we have &amp;lt;math&amp;gt;\mathbb{E}_m[\mathbb{E}_E[\Pr_{e \in BSC_p}[D(E(m) + e) \neq m]]] \leq 2^{-\delta n}&amp;lt;/math&amp;gt;. Now we can exchange the order of the expectations over the message and over the choice of the encoding function &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt;, giving &amp;lt;math&amp;gt;\mathbb{E}_E[\mathbb{E}_m [\Pr_{e \in BSC_p}[D(E(m) + e) \neq m]]] \leq 2^{-\delta n}&amp;lt;/math&amp;gt;. In conclusion, by the probabilistic method, there exist some encoding function &amp;lt;math&amp;gt;E^{*}&amp;lt;/math&amp;gt; and a corresponding decoding function &amp;lt;math&amp;gt;D^{*}&amp;lt;/math&amp;gt; such that &amp;lt;math&amp;gt;\mathbb{E}_m[\Pr_{e \in BSC_p}[D^{*}(E^{*}(m) + e)\neq m]] \leq 2^{-\delta n}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
At this point, the bound is on the error probability averaged over the messages, but we need it to hold for &#039;&#039;&#039;all&#039;&#039;&#039; the messages &amp;lt;math&amp;gt;m&amp;lt;/math&amp;gt; &#039;&#039;&#039;simultaneously&#039;&#039;&#039;. For that, let us sort the &amp;lt;math&amp;gt;2^k&amp;lt;/math&amp;gt; messages by their decoding error probabilities. By [[Markov&#039;s inequality]], the decoding error probability of each of the first &amp;lt;math&amp;gt;2^{k-1}&amp;lt;/math&amp;gt; messages is at most &amp;lt;math&amp;gt;2 \cdot 2^{-\delta n}&amp;lt;/math&amp;gt;. Thus, to ensure that the bound holds for &#039;&#039;&#039;every&#039;&#039;&#039; remaining message &amp;lt;math&amp;gt;m&amp;lt;/math&amp;gt;, we simply discard the last &amp;lt;math&amp;gt;2^{k-1}&amp;lt;/math&amp;gt; messages in the sorted order. This gives another encoding function &amp;lt;math&amp;gt;E^{\prime}&amp;lt;/math&amp;gt; with a corresponding decoding function &amp;lt;math&amp;gt;D^{\prime}&amp;lt;/math&amp;gt; whose decoding error probability is at most &amp;lt;math&amp;gt;2^{-\delta n + 1}&amp;lt;/math&amp;gt;, at essentially the same rate. Taking &amp;lt;math&amp;gt;\delta^{\prime}&amp;lt;/math&amp;gt; equal to &amp;lt;math&amp;gt;\delta - \frac{1}{n}&amp;lt;/math&amp;gt; bounds the decoding error probability by &amp;lt;math&amp;gt;2^{-\delta^{\prime}n}&amp;lt;/math&amp;gt;. This expurgation process completes the proof of Theorem 1.&lt;br /&gt;
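The expurgation step can be traced on a toy instance (the eight per-message error probabilities below are made-up numbers, purely for illustration):&lt;br /&gt;

```python
# Hypothetical decoding error probabilities for eight messages.
errs = {m: p for m, p in enumerate([0.001, 0.004, 0.002, 0.010, 0.003, 0.020, 0.002, 0.005])}
avg = sum(errs.values()) / len(errs)

# Keep the better half of the messages, sorted by error probability.
kept = sorted(errs, key=errs.get)[: len(errs) // 2]

# By Markov's inequality, every kept message errs at most twice the average.
print(kept, max(errs[m] for m in kept), 2 * avg)
```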
&lt;br /&gt;
== Converse of Shannon&#039;s capacity theorem ==&lt;br /&gt;
&lt;br /&gt;
The converse of the capacity theorem essentially states that &amp;lt;math&amp;gt;1 - H(p)&amp;lt;/math&amp;gt; is the best rate one can achieve over a binary symmetric channel. Formally the theorem states:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Theorem 2&#039;&#039;&#039;&lt;br /&gt;
If &amp;lt;math&amp;gt;k \geq \lceil (1 - H(p + \epsilon))n \rceil&amp;lt;/math&amp;gt;, then the following is true for every pair of [[Code|encoding]] and [[Code|decoding]] functions &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;\{0,1\}^k \rightarrow \{0,1\}^n&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;D&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;\{0,1\}^{n} \rightarrow \{0,1\}^{k}&amp;lt;/math&amp;gt; respectively: there exists a message &amp;lt;math&amp;gt;m \in \{0,1\}^{k}&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;\Pr_{e \in BSC_p}[D(E(m) + e) \neq m] \geq \frac{1}{2}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
For a detailed proof of this theorem, the reader is referred to the bibliography. The intuition behind the proof, however, is that the number of errors grows rapidly once the rate grows beyond the channel capacity. The idea is that the sender generates messages of dimension &amp;lt;math&amp;gt;k&amp;lt;/math&amp;gt;, while the channel &amp;lt;math&amp;gt;BSC_p&amp;lt;/math&amp;gt; introduces transmission errors. For a code of block length &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt;, the number of typical error patterns is about &amp;lt;math&amp;gt;2^{H(p + \epsilon)n}&amp;lt;/math&amp;gt;. The maximum number of messages is &amp;lt;math&amp;gt;2^{k}&amp;lt;/math&amp;gt;, while the output of the channel has &amp;lt;math&amp;gt;2^{n}&amp;lt;/math&amp;gt; possible values. If there is to be no confusion between any two messages, we need &amp;lt;math&amp;gt;2^{k}2^{H(p + \epsilon)n} \le 2^{n}&amp;lt;/math&amp;gt;. Hence &amp;lt;math&amp;gt;k \geq \lceil (1 - H(p + \epsilon))n \rceil&amp;lt;/math&amp;gt; is the case we would like to avoid in order to keep the decoding error probability exponentially small.&lt;br /&gt;
&lt;br /&gt;
== Codes for BSC&amp;lt;sub&amp;gt;&#039;&#039;p&#039;&#039;&amp;lt;/sub&amp;gt; ==&lt;br /&gt;
Recently, a lot of work has been done, and is still being done, to design explicit error-correcting codes that achieve the capacities of several standard communication channels. The motivation behind designing such codes is to relate the rate of the code to the fraction of errors that it can correct.&lt;br /&gt;
&lt;br /&gt;
The approach behind the design of codes meeting the channel capacities of the &amp;lt;math&amp;gt;BSC&amp;lt;/math&amp;gt; and the &amp;lt;math&amp;gt;BEC&amp;lt;/math&amp;gt; has been to correct a smaller number of errors with high probability while achieving the highest possible rate. Shannon&#039;s theorem gives us the best rate that can be achieved over a &amp;lt;math&amp;gt;BSC_{p}&amp;lt;/math&amp;gt;, but it does not exhibit any explicit codes that achieve this rate. In fact, such codes are typically constructed to correct only a small fraction of errors with high probability, but they achieve a very good rate. The first such code was due to George D. Forney in 1966. The code is a [[concatenated code]], obtained by composing two different kinds of codes. We shall briefly discuss the construction of Forney&#039;s code for the binary symmetric channel and analyze its rate and decoding error probability here. Various explicit codes for achieving the capacity of the [[binary erasure channel]] have also been developed recently.&lt;br /&gt;
&lt;br /&gt;
== Forney&#039;s code for BSC&amp;lt;sub&amp;gt;&#039;&#039;p&#039;&#039;&amp;lt;/sub&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
Forney constructed a [[concatenated code]] &amp;lt;math&amp;gt;C^{*}&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;=&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;C_\text{out} \circ C_\text{in}&amp;lt;/math&amp;gt; to achieve the capacity of Theorem 1 for &amp;lt;math&amp;gt;BSC_p&amp;lt;/math&amp;gt;. In his code,&lt;br /&gt;
&lt;br /&gt;
* The outer code &amp;lt;math&amp;gt;C_\text{out}&amp;lt;/math&amp;gt; is a code of block length &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt; and rate &amp;lt;math&amp;gt;1-\frac{\epsilon}{2}&amp;lt;/math&amp;gt; over the field &amp;lt;math&amp;gt;F_{2^k}&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;k = O(log N)&amp;lt;/math&amp;gt;. Additionally, we have a [[Code|decoding]] algorithm &amp;lt;math&amp;gt;D_\text{out}&amp;lt;/math&amp;gt; for &amp;lt;math&amp;gt;C_\text{out}&amp;lt;/math&amp;gt; which can correct up to &amp;lt;math&amp;gt;\gamma&amp;lt;/math&amp;gt; fraction of worst case errors and runs in &amp;lt;math&amp;gt;t_\text{out}(N)&amp;lt;/math&amp;gt; time.&lt;br /&gt;
* The inner code &amp;lt;math&amp;gt;C_\text{in}&amp;lt;/math&amp;gt; is a code of block length &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt;, dimension &amp;lt;math&amp;gt;k&amp;lt;/math&amp;gt;, and a rate of &amp;lt;math&amp;gt;1 - H(p) - \frac{\epsilon}{2}&amp;lt;/math&amp;gt;.   Additionally, we have a decoding algorithm &amp;lt;math&amp;gt;D_\text{in}&amp;lt;/math&amp;gt; for &amp;lt;math&amp;gt;C_\text{in}&amp;lt;/math&amp;gt; with a [[Code|decoding]] error probability of at most &amp;lt;math&amp;gt;\frac{\gamma}{2}&amp;lt;/math&amp;gt; over &amp;lt;math&amp;gt;BSC_p&amp;lt;/math&amp;gt; and runs in &amp;lt;math&amp;gt;t_\text{in}(N)&amp;lt;/math&amp;gt; time. &lt;br /&gt;
 &lt;br /&gt;
For the outer code &amp;lt;math&amp;gt;C_\text{out}&amp;lt;/math&amp;gt;, a Reed-Solomon code would be the first code to come to mind. However, it turns out that the construction of such a code cannot be done in polynomial time. This is why a [[binary linear code]] is used for &amp;lt;math&amp;gt;C_\text{out}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
For the inner code &amp;lt;math&amp;gt;C_\text{in}&amp;lt;/math&amp;gt;, we find a [[linear code]] by exhaustively searching over [[linear code]]s of block length &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; and dimension &amp;lt;math&amp;gt;k&amp;lt;/math&amp;gt; whose rate meets the capacity of &amp;lt;math&amp;gt;BSC_p&amp;lt;/math&amp;gt;, as given by Theorem 1.&lt;br /&gt;
&lt;br /&gt;
The rate &amp;lt;math&amp;gt;R(C^{*}) = R(C_\text{in}) \times R(C_\text{out}) = (1-\frac{\epsilon}{2}) ( 1 - H(p) - \frac{\epsilon}{2} ) \geq 1 - H(p) - \epsilon&amp;lt;/math&amp;gt;, which almost meets the &amp;lt;math&amp;gt;BSC_p&amp;lt;/math&amp;gt; capacity. We further note that the encoding and decoding of &amp;lt;math&amp;gt;C^{*}&amp;lt;/math&amp;gt; can be done in polynomial time with respect to &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt;. As a matter of fact, encoding &amp;lt;math&amp;gt;C^{*}&amp;lt;/math&amp;gt; takes time &amp;lt;math&amp;gt;O(N^{2})+O(Nk^{2}) = O(N^{2})&amp;lt;/math&amp;gt;. Further, the decoding algorithm described below takes time &amp;lt;math&amp;gt;Nt_\text{in}(k) + t_\text{out}(N) = N^{O(1)}&amp;lt;/math&amp;gt; as long as &amp;lt;math&amp;gt;t_\text{out}(N) = N^{O(1)}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;t_\text{in}(k) = 2^{O(k)}&amp;lt;/math&amp;gt;.&lt;br /&gt;
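The rate computation can be checked directly (the values p = 0.1 and epsilon = 0.1 below are hypothetical choices):&lt;br /&gt;

```python
from math import log2

def binary_entropy(p):
    return -p * log2(p) - (1 - p) * log2(1 - p)

def concatenated_rate(p, eps):
    """Rate of C* = C_out o C_in: the product of the outer and inner rates."""
    outer = 1 - eps / 2                       # rate of C_out
    inner = 1 - binary_entropy(p) - eps / 2   # rate of C_in
    return outer * inner

p, eps = 0.1, 0.1
rate = concatenated_rate(p, eps)
print(rate, 1 - binary_entropy(p) - eps)  # the rate is at least 1 - H(p) - eps
```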
&lt;br /&gt;
=== Decoding error probability for &#039;&#039;C&#039;&#039;&amp;lt;sup&amp;gt;*&amp;lt;/sup&amp;gt; ===&lt;br /&gt;
&lt;br /&gt;
A natural decoding algorithm for  &amp;lt;math&amp;gt;C^{*}&amp;lt;/math&amp;gt; is to:&lt;br /&gt;
&lt;br /&gt;
* Compute &amp;lt;math&amp;gt;y_{i}^{\prime} = D_\text{in}(y_i), \quad i \in \{1, \ldots, N\}&amp;lt;/math&amp;gt;&lt;br /&gt;
* Execute &amp;lt;math&amp;gt;D_\text{out}&amp;lt;/math&amp;gt; on &amp;lt;math&amp;gt;y^{\prime} = (y_1^{\prime} \ldots y_N^{\prime})&amp;lt;/math&amp;gt;&lt;br /&gt;
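The two steps above can be traced on a toy instance (the repetition inner and outer codes below are hypothetical stand-ins for the actual codes, chosen only so the example fits in a few lines):&lt;br /&gt;

```python
def decode_inner(block):
    """Toy D_in: majority vote on a 3-bit repetition codeword."""
    return 1 if sum(block) >= 2 else 0

def decode_outer(symbols):
    """Toy D_out: majority vote over the N outer symbols."""
    return 1 if 2 * sum(symbols) > len(symbols) else 0

# Received word: N = 5 inner blocks, some corrupted by BSC noise.
received = [(1, 1, 1), (1, 0, 1), (1, 1, 0), (0, 0, 1), (1, 1, 1)]
inner_decoded = [decode_inner(block) for block in received]  # step 1
message = decode_outer(inner_decoded)                        # step 2
print(inner_decoded, message)
```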
&lt;br /&gt;
Note that each block of the code for &amp;lt;math&amp;gt;C_\text{in}&amp;lt;/math&amp;gt; is treated as a symbol for &amp;lt;math&amp;gt;C_\text{out}&amp;lt;/math&amp;gt;. Now, since the probability of error at any index &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt; for &amp;lt;math&amp;gt;D_\text{in}&amp;lt;/math&amp;gt; is at most &amp;lt;math&amp;gt;\frac{\gamma}{2}&amp;lt;/math&amp;gt; and the errors in &amp;lt;math&amp;gt;BSC_p&amp;lt;/math&amp;gt; are independent, the expected number of errors for &amp;lt;math&amp;gt;D_\text{in}&amp;lt;/math&amp;gt; is at most &amp;lt;math&amp;gt;\frac{\gamma N}{2}&amp;lt;/math&amp;gt; by linearity of expectation. Now applying the [[Chernoff bound]], we can bound the probability of more than &amp;lt;math&amp;gt;\gamma N&amp;lt;/math&amp;gt; errors occurring by &amp;lt;math&amp;gt;e^\frac{-\gamma N}{6}&amp;lt;/math&amp;gt;. Since the outer code &amp;lt;math&amp;gt;C_\text{out}&amp;lt;/math&amp;gt; can correct up to &amp;lt;math&amp;gt;\gamma N&amp;lt;/math&amp;gt; errors, this is the [[Code|decoding]] error probability of &amp;lt;math&amp;gt;C^{*}&amp;lt;/math&amp;gt;. Expressed in asymptotic terms, this gives an error probability of &amp;lt;math&amp;gt;2^{-\Omega(\gamma N)}&amp;lt;/math&amp;gt;. Thus the achieved decoding error probability of &amp;lt;math&amp;gt;C^{*}&amp;lt;/math&amp;gt; is exponentially small, as in Theorem 1.&lt;br /&gt;
&lt;br /&gt;
We have given a general technique to construct &amp;lt;math&amp;gt;C^{*}&amp;lt;/math&amp;gt;. For more detailed descriptions of &amp;lt;math&amp;gt;C_\text{in}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;C_\text{out}&amp;lt;/math&amp;gt;, please see the references below. Recently, a few other codes have also been constructed for achieving the capacities. [[LDPC]] codes have been considered for this purpose for their faster decoding time.&amp;lt;ref&amp;gt;Richardson and Urbanke&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
* [[Z-channel (information theory)|Z channel]]&lt;br /&gt;
&lt;br /&gt;
==Notes==&lt;br /&gt;
{{reflist}}&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
* David J. C. MacKay. &#039;&#039;[http://www.inference.phy.cam.ac.uk/mackay/itila/book.html Information Theory, Inference, and Learning Algorithms]&#039;&#039; Cambridge: Cambridge University Press, 2003. ISBN 0-521-64298-1&lt;br /&gt;
* Thomas M. Cover, Joy A. Thomas. &#039;&#039;Elements of information theory&#039;&#039;, 1st Edition.  New York: Wiley-Interscience, 1991. ISBN 0-471-06259-6.&lt;br /&gt;
* Atri Rudra&#039;s course on Error Correcting Codes: Combinatorics, Algorithms, and Applications (Fall 2007), Lectures [http://www.cse.buffalo.edu/~atri/courses/coding-theory/lectures/lect9.pdf  9],  [http://www.cse.buffalo.edu/~atri/courses/coding-theory/lectures/lect10.pdf 10], [http://www.cse.buffalo.edu/~atri/courses/coding-theory/lectures/lect29.pdf 29], and [http://www.cse.buffalo.edu/~atri/courses/coding-theory/lectures/lect30.pdf 30].&lt;br /&gt;
* Madhu Sudan&#039;s course on Algorithmic Introduction to Coding Theory (Fall 2001), Lecture [http://people.csail.mit.edu/madhu/FT01/scribe/lect1.ps 1] and [http://people.csail.mit.edu/madhu/FT01/scribe/lect2.ps 2]. &lt;br /&gt;
* G. David Forney. [http://dspace.mit.edu/handle/1721.1/4303 Concatenated Codes]. MIT Press, Cambridge, MA, 1966.&lt;br /&gt;
* Venkat Guruswamy&#039;s course on [http://www.cs.washington.edu/education/courses/533/06au/ Error-Correcting Codes: Constructions and Algorithms], Autumn 2006.&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=584093 A mathematical theory of communication] C. E Shannon, ACM SIGMOBILE Mobile Computing and Communications Review.&lt;br /&gt;
* [http://assets.cambridge.org/97805218/52296/copyright/9780521852296_copyright_info.pdf Modern Coding Theory] by Tom Richardson and Rudiger Urbanke., Cambridge University Press&lt;br /&gt;
&lt;br /&gt;
== External links ==&lt;br /&gt;
* [http://oscar.iitb.ac.in/availableProposalsAction1.do?type=av&amp;amp;id=534&amp;amp;language=english A Java applet implementing Binary Symmetric Channel]&lt;br /&gt;
&lt;br /&gt;
[[Category:Coding theory]]&lt;/div&gt;</summary>
		<author><name>157.181.98.186</name></author>
	</entry>
	<entry>
		<id>https://en.formulasearchengine.com/index.php?title=Sigma_additivity&amp;diff=7471</id>
		<title>Sigma additivity</title>
		<link rel="alternate" type="text/html" href="https://en.formulasearchengine.com/index.php?title=Sigma_additivity&amp;diff=7471"/>
		<updated>2013-12-22T17:50:31Z</updated>

		<summary type="html">&lt;p&gt;157.181.98.186: fix math fuckups&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{About|the water well test|the performance testing of  oil  wells|well test (oil and gas)}}&lt;br /&gt;
&lt;br /&gt;
A &#039;&#039;&#039;well test&#039;&#039;&#039; is conducted to evaluate the amount of water that can be pumped from a particular water well.  More specifically, a well test will allow prediction of the maximum rate at which water can be pumped from a well, and the distance that the water level in the well will fall for a given pumping rate and duration of pumping. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Well testing&#039;&#039;&#039; differs from [[aquifer test]]ing in that the behaviour of the well is primarily of concern in the former, while the characteristics of the [[aquifer]] (the geological formation or unit that supplies water to the well) are quantified in the latter.&lt;br /&gt;
&lt;br /&gt;
When water is pumped from a well the water level in the well falls.  This fall is called [[drawdown (hydrology)|drawdown]].  The amount of water that can be pumped is limited by the drawdown produced.  Typically, drawdown also increases with the length of time that the pumping continues.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- comment out link to image, until suitable image is found&lt;br /&gt;
[[Image:SteppedRec.gif|thumb|right|250px|The blue line shows the water level record in a well measured during a stepped-rate well test; the purple line shows a simulation calculated from the derived well equation.  There are three consecutive steps of increasing pumping rate, and then the final (right side) section of the graph records the rise in water level when pumping stops.]] &lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Well losses vs. aquifer losses ==&lt;br /&gt;
The components of observed [[drawdown (hydrology)|drawdown]] in a pumping [[water well|well]] were first described by Jacob (1947), and the test was refined independently by [[Mahdi S. Hantush|Hantush]] (1964) and Bierschenk (1963) as consisting of two related components,&lt;br /&gt;
:&amp;lt;math&amp;gt; s = BQ + CQ^2 &amp;lt;/math&amp;gt;,&lt;br /&gt;
&lt;br /&gt;
where s is drawdown (units of length e.g., m), &amp;lt;math&amp;gt;Q&amp;lt;/math&amp;gt; is the pumping rate (units of volume flowrate e.g., m³/day), &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; is the aquifer loss coefficient (which increases with time &amp;amp;mdash; as predicted by the [[Aquifer test#Theis solution|Theis solution]]) and &amp;lt;math&amp;gt;C&amp;lt;/math&amp;gt; is the well loss coefficient (which is constant for a given flow rate).&lt;br /&gt;
&lt;br /&gt;
The first term of the equation (&amp;lt;math&amp;gt;BQ&amp;lt;/math&amp;gt;) describes the linear component of the drawdown; i.e., the part in which doubling the pumping rate doubles the drawdown.&lt;br /&gt;
&lt;br /&gt;
The second term (&amp;lt;math&amp;gt;CQ^2&amp;lt;/math&amp;gt;) describes what is often called the &#039;well losses&#039;; the non-linear component of the drawdown.  To quantify this it is necessary to pump the well at several different flow rates (commonly called &#039;&#039;steps&#039;&#039;).  Rorabaugh (1953) added to this analysis by making the exponent an arbitrary power (usually between 1.5 and 3.5). &lt;br /&gt;
&lt;br /&gt;
To analyze this equation, both sides are divided by the discharge rate (&amp;lt;math&amp;gt;Q&amp;lt;/math&amp;gt;), leaving &amp;lt;math&amp;gt;s/Q&amp;lt;/math&amp;gt; on the left-hand side, which is commonly referred to as &#039;&#039;specific drawdown&#039;&#039;. The right-hand side of the equation then has the form of a straight line. Plotting the specific drawdown measured a set amount of time (&amp;lt;math&amp;gt;\Delta t&amp;lt;/math&amp;gt;) after the beginning of each step of the test (since drawdown continues to increase with time) against the pumping rate should therefore produce a straight line:&lt;br /&gt;
:&amp;lt;math&amp;gt; \frac{s}{Q} = B + CQ &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When a straight line is fitted through the observed data, the slope of the best-fit line is &amp;lt;math&amp;gt;C&amp;lt;/math&amp;gt; (well losses) and its intercept with &amp;lt;math&amp;gt;Q=0&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; (aquifer losses).  This process fits an idealized model to real-world data to find the parameter values that make the model match reality best; those fitted parameters are then assumed to represent reality well (provided the assumptions built into the model hold).&lt;br /&gt;
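The straight-line fit described above can be sketched numerically. The step-test observations below are hypothetical (not from this article) and serve only to illustrate recovering the coefficients from an ordinary least-squares line through specific drawdown versus pumping rate.

```python
# Hantush-Bierschenk step-drawdown analysis: fit s/Q = B + C*Q.
# B approximates the aquifer loss coefficient, C the well loss coefficient.
import numpy as np

# Hypothetical step-test data: one (Q, s) pair per step, with s read a
# fixed time Delta-t after each step begins.
Q = np.array([500.0, 1000.0, 1500.0, 2000.0])   # pumping rate, m^3/day
s = np.array([1.2, 2.9, 5.1, 7.8])              # drawdown, m

specific_drawdown = s / Q                        # s/Q, day/m^2

# s/Q is linear in Q, so a degree-1 least-squares fit returns the
# slope (C) and intercept at Q = 0 (B).
C, B = np.polyfit(Q, specific_drawdown, 1)

# Reconstruct drawdown from the fitted well equation s = BQ + CQ^2.
s_fit = B * Q + C * Q**2
print(f"B = {B:.6f} day/m^2, C = {C:.2e} day^2/m^5")
```

With these illustrative numbers the data happen to lie exactly on a line, so the reconstructed drawdowns match the observations; field data would scatter around the fitted line instead.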
&lt;br /&gt;
The relationship above is for fully penetrating wells in confined aquifers (the same assumptions used in the [[Aquifer test#Theis solution|Theis solution]] for determining aquifer characteristics in an [[aquifer test]]).&lt;br /&gt;
&lt;br /&gt;
== Well efficiency ==&lt;br /&gt;
Often the &#039;&#039;well efficiency&#039;&#039; is determined from this sort of test; this is a percentage indicating the fraction of the total observed drawdown in a pumping well that is due to aquifer losses (as opposed to flow through the well screen and inside the borehole). A perfectly efficient well, with a perfect well screen and frictionless flow of water inside the well, would have 100% efficiency. Unfortunately, well efficiency is hard to compare between wells because it also depends on the characteristics of the aquifer (the same well losses in a more transmissive aquifer would give a lower efficiency).&lt;br /&gt;
&lt;br /&gt;
==Specific capacity==&lt;br /&gt;
&#039;&#039;&#039;Specific capacity&#039;&#039;&#039; is the rate at which a [[water well]] can produce water per unit of drawdown. It is normally obtained from a step-drawdown test. Specific capacity is expressed as:&lt;br /&gt;
:&amp;lt;math&amp;gt;S_c=\frac{Q}{h_0 - h}&amp;lt;/math&amp;gt;&lt;br /&gt;
where&lt;br /&gt;
:&amp;lt;math&amp;gt;S_c&amp;lt;/math&amp;gt; is the specific capacity ([L&amp;lt;sup&amp;gt;2&amp;lt;/sup&amp;gt;T&amp;lt;sup&amp;gt;−1&amp;lt;/sup&amp;gt;]; m²/day or USgal/day/ft)&lt;br /&gt;
:&amp;lt;math&amp;gt;Q&amp;lt;/math&amp;gt; is the pumping rate ([L&amp;lt;sup&amp;gt;3&amp;lt;/sup&amp;gt;T&amp;lt;sup&amp;gt;−1&amp;lt;/sup&amp;gt;]; m³/day or USgal/day), and&lt;br /&gt;
:&amp;lt;math&amp;gt;h_0 - h&amp;lt;/math&amp;gt; is the drawdown ([L]; m or ft)&lt;br /&gt;
The specific capacity of a well is also a function of the pumping rate at which it is determined.  Due to non-linear well losses, the specific capacity decreases at higher pumping rates.  This complication makes the absolute value of specific capacity of little use on its own, though it remains useful for comparing the efficiency of the same well through time (e.g., to see whether the well requires rehabilitation).&lt;br /&gt;
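The decline of specific capacity with pumping rate follows directly from the well equation above. A minimal sketch, using hypothetical loss coefficients (the B and C values below are illustrative, not from the article):

```python
# Specific capacity S_c = Q / (h0 - h), with drawdown from s = BQ + CQ^2.
# The CQ^2 well-loss term grows faster than Q, so S_c falls as Q rises.
B = 0.0019   # hypothetical aquifer loss coefficient, day/m^2
C = 1.0e-6   # hypothetical well loss coefficient, day^2/m^5

caps = []
for Q in (500.0, 1000.0, 2000.0):      # pumping rates, m^3/day
    drawdown = B * Q + C * Q**2        # s = BQ + CQ^2, m
    S_c = Q / drawdown                 # specific capacity, m^2/day
    caps.append(S_c)
    print(f"Q = {Q:6.0f} m^3/day  ->  S_c = {S_c:.1f} m^2/day")
```

Doubling the rate from 500 to 1000 m³/day (and again to 2000) produces successively smaller specific capacities, which is why the quantity is only comparable between tests run at the same rate.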
&lt;br /&gt;
== References ==&lt;br /&gt;
* Bierschenk, William H., 1963. Determining well efficiency by multiple step-drawdown tests. &#039;&#039;International Association of Scientific Hydrology&#039;&#039;, 64:493-507.&lt;br /&gt;
* Hantush, Mahdi S., 1964. &#039;&#039;Advances in Hydroscience&#039;&#039;, chapter &#039;&#039;Hydraulics of Wells&#039;&#039;, pp 281–442. Academic Press.&lt;br /&gt;
* Jacob, C.E., 1947. Drawdown test to determine effective radius of artesian well. &#039;&#039;Transactions, American Society of Civil Engineers&#039;&#039;, 112(2312):1047-1070.&lt;br /&gt;
* Rorabaugh, M.I., 1953. Graphical and theoretical analysis of step-drawdown test of artesian wells. &#039;&#039;Transactions, American Society of Civil Engineers&#039;&#039;, 79(separate 362):1-23.&lt;br /&gt;
&lt;br /&gt;
Additional references on pumping test analysis methods other than the one described above (typically referred to as the Hantush-Bierschenk method) can be found in the general references on [[Aquifer test#Additional reading|aquifer tests]] and [[Hydrogeology#Further reading|hydrogeology]].&lt;br /&gt;
&lt;br /&gt;
== See also==&lt;br /&gt;
* [[Hydrogeology]]&lt;br /&gt;
* [[Aquifer]] and [[Groundwater]]&lt;br /&gt;
&lt;br /&gt;
{{Aquiferproperties}}&lt;br /&gt;
&lt;br /&gt;
[[Category:Aquifers]]&lt;br /&gt;
[[Category:Hydrology]]&lt;br /&gt;
[[Category:Hydraulic engineering]]&lt;br /&gt;
[[Category:Water wells]]&lt;/div&gt;</summary>
		<author><name>157.181.98.186</name></author>
	</entry>
</feed>