'''Stochastic computing''' is a collection of techniques that represent continuous values by streams of random bits. Complex computations can then be performed by simple bit-wise operations on the streams.


Despite the similarity in their names, stochastic computing is distinct from the study of [[randomized algorithm]]s.
 
== Motivation and a simple example ==
 
Suppose that <math>p,q \in [0,1]</math> are given, and we wish to compute <math>p \times q</math>. Stochastic computing performs this operation using probability instead of arithmetic.
 
Specifically, suppose that there are two random, independent bit streams called ''stochastic numbers'' (i.e. [[Bernoulli process]]es), where the probability of a one in the first stream is <math>p</math> and the probability of a one in the second stream is <math>q</math>. We can take the [[Logical_and#Applications_in_computer_programming|logical AND]] of the two streams.
 
{| class="wikitable" style="text-align:left;"
! <math>a_i</math>
| 1 || 0 || 1 || 1 || 0 || 1 || ...
|-
! <math>b_i</math>
| 1 || 1 || 0 || 1 || 1 || 0 || ...
|-
! <math>a_i \land b_i</math>
| 1 || 0 || 0 || 1 || 0 || 0 || ...
|}
The probability of a one in the output stream is <math>pq</math>.  By observing enough output bits and measuring the frequency of ones, it is possible to estimate <math>pq</math> to arbitrary accuracy.
 
The operation above converts a fairly complicated computation (multiplication of <math>p</math> and <math>q</math>) into a series of very simple operations (evaluation of <math>a_i \land b_i</math>) on random bits.
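The scheme above can be sketched in a few lines of Python (an illustrative sketch, not from the original article; the values of <math>p</math>, <math>q</math>, and the stream length are arbitrary):

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

def stochastic_stream(p, n):
    """Generate n independent bits, each 1 with probability p (a Bernoulli process)."""
    return [1 if random.random() < p else 0 for _ in range(n)]

p, q, n = 0.5, 0.4, 100_000
a = stochastic_stream(p, n)
b = stochastic_stream(q, n)

# Bitwise AND of the two independent streams.
product_stream = [ai & bi for ai, bi in zip(a, b)]

# Reconstruct the product by measuring the frequency of ones.
estimate = sum(product_stream) / n  # close to p * q = 0.2
```

Note that the multiplier itself is a single gate per bit pair; all of the cost has been moved into generating and observing the streams.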
 
More generally speaking, stochastic computing represents numbers as streams of random bits and reconstructs numbers by calculating frequencies.  The computations are performed on the streams and translate complicated operations on <math>p</math> and <math>q</math> into simple operations on their stream representations.  (Because of the method of reconstruction, devices that perform these operations are sometimes called stochastic averaging processors.)  In modern terms, stochastic computing can be viewed as an interpretation of calculations in probabilistic terms, which are then evaluated with a [[Gibbs sampling|Gibbs sampler]]. It can also be interpreted as a hybrid [[Analog computer|analog]]/[[Computer|digital]] computer.
 
== History ==
 
[[Image:RASCEL stochastic computer 1969.png|thumb|alt=A photograph of the RASCEL stochastic computer.|The RASCEL stochastic computer, circa 1969]]
 
Stochastic computing was first introduced in a pioneering paper by [[John von Neumann]] in 1953.<ref>{{cite conference |
last = von Neumann | first = J.  | title = Probabilistic logics and the synthesis of reliable organisms from unreliable components |
booktitle = The Collected Works of John von Neumann | publisher =
Macmillan | year = 1963 | isbn = 978-0-393-05169-8}}</ref> However, the
theory could not be fully developed until advances in computing of the 1960s,<ref>{{cite conference
| last1 = Petrovic | first1= R. | last2=Siljak | first2=D. | title=
Multiplication by means of coincidence | year = 1962 | booktitle =
ACTES Proc. of 3rd Int. Analog Comp. Meeting}}</ref>
<ref>
{{citation
| last=Afuso
| first=C.
| title=Quart. Tech. Prog. Rept.
| location=Department of Computer Science, University of Illinois, Urbana, Illinois
| year=1964
}}</ref>
mostly through a series of simultaneous and parallel efforts in the US<ref>
{{cite journal
| last1=Poppelbaum
| first1=W.
| last2=Afuso
| first2=C.
| last3=Esch
| first3=J.
| title=Stochastic computing elements and systems
| journal=AFIPS FJCC
| volume=31
| pages=635–644
| year=1967
}}</ref>
and the UK.<ref>
{{cite journal
| last=Gaines
| first=B.
| title=Stochastic Computing
| journal=AFIPS SJCC
| year=1967
| volume=30
| pages=149–156
}}</ref>
By the late 1960s, attention turned to the design of
special-purpose hardware to perform stochastic computation.  A host<ref>
{{cite book
| last1=Mars
| first1=P.
| last2=Poppelbaum
| first2=W.
| title=Stochastic and deterministic averaging processors
| year=1981
}}
</ref>
of these machines were constructed between 1969 and 1974; RASCEL<ref>
{{cite thesis
| last=Esch
| first=J.
| title=RASCEL, a programmable analog computer based on a regular array of stochastic computing element logic
| year=1969
| location=University of Illinois, Urbana, Illinois
}}
</ref>
is pictured in this article.
 
Despite the intense interest in the 1960s and 1970s, stochastic
computing ultimately failed to compete with more traditional digital
logic, for reasons outlined below.  The first (and last)
International Symposium on Stochastic Computing<ref>
{{cite conference
| title=Proceedings of the first International Symposium on Stochastic Computing and its Applications
| location= Toulouse, France
| year=1978
}}
</ref>
took place in 1978; active research in the area dwindled over the next
few years.
 
Although stochastic computing declined as a general method of
computing, it has shown promise in several applications.  Research has
traditionally focused on certain tasks in machine learning and
control.<ref>
{{cite conference
| booktitle=Advances in Information Systems Science
| title=Stochastic Computing Systems
| last=Gaines
| first= B. R.
| editor-last=Tou
| editor-first=Julius
| publisher=Plenum Press
| volume=2
| year=1969
}}
</ref>
<ref>
{{cite conference
| booktitle=FPGAs for Custom Computing Machines, Proceedings IEEE, NAPA
| title=A stochastic neural architecture that exploits dynamically reconfigurable FPGAs
| last=van Daalen, M. R. et al
| year=1993
}}
</ref>
Somewhat recently, interest has turned towards stochastic
decoding, which applies stochastic computing to the decoding of error
correcting codes.<ref>
{{cite journal
| title=Iterative decoding  using stochastic computation
| last1=Gaudet
| first1=Vincent
| last2=Rapley
| first2=Anthony
| journal=Electronics Letters
| volume=39
| number=3
| pages=299–301
|date=February 2003
}}
</ref> More recently, stochastic circuits have been successfully used in [[image processing]] tasks such as [[edge detection]].
<ref>{{Cite doi|10.1145/2463209.2488901}}</ref>
 
== Strengths and weaknesses ==
 
Although stochastic computing was a historical failure, it may still be relevant for
solving certain problems.  To understand where, it is useful to
compare stochastic computing with more traditional methods of digital computing.
 
=== Strengths ===
Suppose we wish to multiply
two numbers each with <math>n</math> bits of precision.
Using the typical [[Multiplication_algorithm#Long_multiplication|long
multiplication]] method, we need to perform
<math>n^2</math> operations.  With stochastic computing, we can
AND together any number of bits and the expected value will always
be correct.  (However, with a small number of samples the variance will
render the actual result highly inaccurate).
 
Moreover, the underlying operations in a digital multiplier are
[[Adder_(electronics)#Full_adder|full adders]], whereas a stochastic
computer only requires an [[And gate|AND gate]].  Additionally,
a digital multiplier would naively require <math>2n</math> input wires,
whereas a stochastic multiplier would only require 2 input wires.
(If the digital multiplier serialized its output, however, it would also
require only 2 input wires.)
 
Additionally, stochastic computing is robust against noise; if a few
bits in a stream are flipped, those errors will have no significant impact
on the solution.
 
Finally, stochastic computing provides an estimate of the solution
that grows more accurate as we extend the bit stream.  In particular,
it provides a rough estimate very rapidly.  This property is usually
referred to as ''progressive precision'', which suggests that the precision
of stochastic numbers (bit streams) increases as computation proceeds.
<ref>{{Cite doi|10.1145/2465787.2465794}}</ref>
It is as if the [[most significant bit]]s of the number
arrive before its [[least significant bit]]s; unlike the
conventional [[Arithmetic logic unit|arithmetic circuits]] where the most
significant bits usually arrive last. In some
iterative systems the partial solutions obtained through progressive precision can provide faster feedback
than through traditional computing methods, leading to faster
convergence.
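Progressive precision can be seen in a small simulation (a sketch with arbitrarily chosen values): the running frequency of ones is already a rough estimate of <math>pq</math> after a hundred bits, and it sharpens as the stream lengthens.

```python
import random

random.seed(4)  # fixed seed for reproducibility

p, q = 0.8, 0.5
ones = 0
checkpoints = {}

# The running estimate is usable almost immediately and improves
# as more stream bits arrive -- the "progressive precision" property.
for i in range(1, 100_001):
    ones += (random.random() < p) & (random.random() < q)
    if i in (100, 10_000, 100_000):
        checkpoints[i] = ones / i
```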
 
=== Weaknesses ===
 
Stochastic computing is, by its very nature, random.  When we examine
a random bit stream and try to reconstruct the underlying value, the effective precision
can be measured by the variance of our sample.  In the example above, the digital multiplier
computes a number to <math> 2n </math> bits of accuracy, so the
precision is <math> 2^{-2n}</math>.  If we are using a random bit
stream to estimate a number and want the standard deviation of our
estimate of the solution to be at most <math> 2^{-2n}</math>, we
would need <math> O(2^{4n}) </math> samples.  This represents an
exponential increase in work. In certain applications, however, the
progressive precision property of stochastic computing can be exploited
to compensate for this exponential loss.
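The quadratic cost of extra precision is easy to check empirically. In the sketch below (values and trial counts chosen arbitrarily), quadrupling the stream length only halves the standard deviation of the estimate:

```python
import random
import statistics

random.seed(1)  # fixed seed for reproducibility

def estimate_product(p, q, n):
    """One stream-based estimate of p*q from n AND-ed bit pairs."""
    ones = sum((random.random() < p) & (random.random() < q) for _ in range(n))
    return ones / n

def estimate_stddev(p, q, n, trials=500):
    """Empirical standard deviation of the estimate over many independent runs."""
    return statistics.pstdev(estimate_product(p, q, n) for _ in range(trials))

# Quadrupling the stream length only halves the standard deviation,
# so each extra bit of precision costs four times as many samples.
sd_1k = estimate_stddev(0.5, 0.5, 1_000)
sd_4k = estimate_stddev(0.5, 0.5, 4_000)
```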
 
Second, stochastic computing requires a method of generating random
biased bit streams.  In practice, these streams are generated with
[[PRNG|pseudo-random number generators]].  Unfortunately, generating
(pseudo-)random bits is fairly costly (compared to the expense of,
e.g., a full adder).  Therefore, the gate-level advantage of
stochastic computing is typically lost.
 
Third, the analysis of stochastic computing assumes that the bit
streams are independent (uncorrelated).  If this assumption does not
hold, stochastic computing can fail dramatically.  For instance, if we
try to compute <math>p^2</math> by multiplying a bit stream for
<math>p</math> by itself, the process fails: since <math>a_i \land
a_i=a_i</math>, the stochastic computation would yield <math> p \times p
= p </math>, which is not generally true (unless <math>p=</math>0 or 1).
In systems with feedback, correlation problems can manifest in
more complicated ways.  Systems of stochastic processors are prone to
''latching'', where feedback between different components can achieve
a deadlocked state.<ref>
{{cite journal
| last1=Winstead
| first1=C.
| last2=Rapley
| first2=A.
| last3=Gaudet
| first3=V.
| last4=Schlegel
| first4=C.
| title=Stochastic iterative decoders
| journal=IEEE International Symposium on Information Theory
| location=Adelaide Australia
|date=September 2005
}}
</ref>
A great deal of effort must be spent decorrelating the system to
remediate latching.
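The correlation failure described above is easy to reproduce (a sketch with arbitrary values): AND-ing a stream with an independent copy estimates <math>p^2</math>, while AND-ing it with itself simply returns <math>p</math>.

```python
import random

random.seed(2)  # fixed seed for reproducibility

p, n = 0.3, 50_000
a = [1 if random.random() < p else 0 for _ in range(n)]
b = [1 if random.random() < p else 0 for _ in range(n)]

# With an independent second stream, the AND correctly estimates p*p = 0.09 ...
independent = sum(ai & bi for ai, bi in zip(a, b)) / n

# ... but with the same stream, a_i AND a_i = a_i, so we just recover p = 0.3.
correlated = sum(ai & ai for ai in a) / n
```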
 
Fourth, although some digital functions have very simple stochastic
counterparts (such as the translation between multiplication and the
AND gate), many do not.  Trying to express these functions stochastically
may cause various pathologies.  For instance, stochastic decoding requires
the computation of the function <math>f(p,q)\rightarrow pq/(pq + (1-p)(1-q))</math>.
There is no single bit operation that can compute this function; the usual solution
involves producing correlated output bits, which, as we have seen above, can cause
a host of problems.  Still other functions (such as the
averaging operator <math>f(p,q)\rightarrow (p+q)/2</math>) require
stream decimation.  Since decimation discards information, it leads to the problem
of ''attenuation''.
 
== Stochastic decoding ==
 
Although stochastic computing has a number of defects when considered
as a method of general computation, there are certain applications
that highlight its strengths.  One notable case occurs in the
decoding of certain error correcting codes.
 
In developments unrelated to stochastic computing, highly effective
methods of decoding [[Low-density parity-check code|LDPC codes]] using
the [[belief propagation]] algorithm were
developed.  Belief propagation in this context involves iteratively
reestimating certain parameters using two basic operations
(essentially, a probabilistic XOR operation and an averaging
operation).
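The parity half of this update maps naturally onto stochastic streams: an XOR gate applied to two independent streams with probabilities <math>p</math> and <math>q</math> produces a stream whose probability of a one is <math>p(1-q)+(1-p)q</math>, which is the standard probability-domain check-node update of belief propagation. A sketch with arbitrary values:

```python
import random

random.seed(3)  # fixed seed for reproducibility

p, q, n = 0.7, 0.6, 100_000
a = [random.random() < p for _ in range(n)]
b = [random.random() < q for _ in range(n)]

# XOR of independent streams yields ones with probability p(1-q) + (1-p)q:
# the probability that exactly one of the two input bits is a one.
xor_estimate = sum(ai ^ bi for ai, bi in zip(a, b)) / n
expected = p * (1 - q) + (1 - p) * q  # 0.46
```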
 
In 2003, researchers realized that these two operations could be
modeled very simply with stochastic computing.<ref>
{{cite journal
| title=Iterative decoding  using stochastic computation
| last1=Gaudet
| first1=Vincent
| last2=Rapley
| first2=Anthony
| journal=Electronics Letters
| volume=39
| number=3
| pages=299–301
|date=February 2003
}}
</ref>
Moreover, since the
belief propagation algorithm is iterative, stochastic computing provides partial
solutions that may lead to faster convergence.
Hardware implementations of stochastic decoders have been built on [[Field-programmable gate array|FPGAs]].
<ref>
{{cite conference
| title=Stochastic implementation of LDPC decoders
| last1=Gross
| first1=W.
| last2=Gaudet
| first2=V.
| last3=Milner
| first3=A.
| booktitle=Conference Record of the Thirty-Ninth Asilomar Conference on Signals, Systems and Computers
| year=2006
}}
</ref>
The proponents of these methods argue that the performance of stochastic decoding is
competitive with digital alternatives.
 
== Variants of stochastic computing ==
 
There are a number of variants of the basic stochastic computing
paradigm.  Further information can be found in the referenced book by
Mars and Poppelbaum.
 
''Bundle Processing'' involves sending a fixed number of
bits instead of a stream.  One of the advantages of this approach is
that the precision is improved.  To see why, suppose we transmit
<math>s</math> bits.  In regular stochastic computing, the variance of the
estimate limits the effective precision to roughly <math>O(1/\sqrt{s})</math>.
In bundle processing, the precision is exactly <math>1/s</math>.
However, bundle processing retains the same robustness to error as
regular stochastic processing.
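A bundle is simply a fixed-length block containing a deterministic count of ones, so decoding by counting is exact to within <math>1/s</math>. A minimal sketch (placing all the ones first is an arbitrary convention chosen here for illustration):

```python
def encode_bundle(value, s):
    """Encode value in [0, 1] as a bundle of s bits containing round(value*s) ones."""
    k = round(value * s)
    return [1] * k + [0] * (s - k)

def decode_bundle(bits):
    """Recover the value by counting ones; exact to within 1/len(bits)."""
    return sum(bits) / len(bits)

bundle = encode_bundle(0.37, 100)
decoded = decode_bundle(bundle)  # 0.37, with no sampling variance
```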
 
''Ergodic Processing'' involves sending a stream of bundles, which
captures the benefits of regular stochastic and bundle processing.
 
''Burst Processing'' encodes a number as a nondecreasing stream of
higher-base digits.  For instance, we would encode 4.3 with ten decimal digits as
::: 4444444555
since the average value of the preceding stream is 4.3.  This
representation offers various advantages: there is no randomization
since the numbers appear in increasing order,
so the PRNG issues are avoided, but many of the advantages of
stochastic computing are retained (such as partial estimates of the
solution).  Additionally, it retains the linear precision of bundle
and ergodic processing.
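The burst encoding above can be sketched as follows (the helper name is ours, and the sketch assumes the value times the stream length rounds to an integer):

```python
def encode_burst(value, length):
    """Encode value as `length` nondecreasing digits whose average equals value.

    Assumes value * length rounds to an integer total.
    """
    total = round(value * length)
    low = total // length            # the smaller of the two digit values used
    n_high = total - low * length    # how many digits must be one larger
    return [low] * (length - n_high) + [low + 1] * n_high

burst = encode_burst(4.3, 10)  # [4, 4, 4, 4, 4, 4, 4, 5, 5, 5]
```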
 
==References==
{{reflist}}
 
==Further reading==
* {{cite journal|url=http://pages.cpsc.ucalgary.ca/~gaines/reports/COMP/IdentSC/IdentSC.pdf|title=Techniques of Identification with the Stochastic Computer|last=Gaines|first=Brian R. |journal=Proceedings IFAC Symposium on "The Problems of Identification in Automatic Control Systems", Section 6 Special Identification Instruments, Prague June 12–19, 1967|year=1967|accessdate=2013-11-11}}
 
* {{cite journal|url=http://web.eecs.umich.edu/~alaghi/ACM_TECS_2013.pdf|title=Survey of Stochastic Computing|last1=Alaghi|first1=Armin|last2=Hayes|first2=John P.|journal=ACM Transactions on Embedded Computing Systems|year=2013|accessdate=2013-11-11}}
 
[[Category:History of computing hardware]]
[[Category:Models of computation]]
[[Category:Stochastic algorithms]]
