{{lowercase|μ operator}}
In [[recursion theory|computability theory]], the '''μ operator''', '''minimization operator''', or '''unbounded search operator''' searches for the least [[natural number]] with a given property. Adding the μ-operator to the five [[primitive recursive]] operators makes it possible to define all [[computable function]]s (given that the [[Church-Turing thesis]] is true).
 
== Definition ==
Suppose that R( y, x<sub>1</sub>, . . ., x<sub>k</sub> ) is a fixed (''k''+1)-ary relation on the natural numbers. The mu operator "μy", in either the unbounded or bounded form, is a "number-theoretic function" defined from the natural numbers { 0, 1, 2, . . . } to the natural numbers. However, "μy" contains a ''[[Predicate (mathematics)|predicate]]'' over the natural numbers, which delivers ''true'' when the predicate is satisfied and ''false'' when it is not.
 
The ''bounded'' mu operator appears earlier in Kleene (1952) ''Chapter IX Primitive Recursive Functions, §45 Predicates, prime factor representation'', as:
:"<math>\mu y_{y<z} R(y). \ \ \mbox{The least} \ y<z \ \mbox{such that} \ R(y), \ \mbox{if} \ (\exists y)_{y<z} R(y); \ \mbox{otherwise}, \ z.</math>" (p. 225)
 
[[Stephen Kleene]] notes that any of the six inequality restrictions on the range of the variable y is permitted, i.e. "y < z", "y ≤ z", "w < y < z", "w < y ≤ z", "w ≤ y < z", "w ≤ y ≤ z". "When the indicated range contains no y such that R(y) [is "true"], the value of the "μy" expression is the cardinal number of the range"(p.&nbsp;226); this is why the default "z" appears in the definition above. As shown below, the bounded mu operator "μy<sub>y<z</sub>" is defined in terms of two primitive recursive functions called the finite sum Σ and finite product Π, a predicate function that "does the test" and a [[representing function]] that converts { t, f } to { 0, 1 }.
 
In Chapter XI §57 General Recursive Functions, Kleene defines the ''unbounded'' μ-operator over the variable y in the following manner,
:"<math>(\exists y) \mu y R(y) = \{ \mbox{the least (natural number)}\ y \ \mbox{such that} \ R(y)\}</math>" (p. 279, where "<math>(\exists y)</math>" means "there exists a ''y'' such that ...")
 
In this instance R itself, or its [[representing function]], delivers 0 when it is satisfied (i.e. delivers ''true''); the function then delivers the number y. No upper bound exists on y, hence no inequality expressions appear in its definition.
 
For a given R(y) the unbounded mu operator μyR(y) (note there is no requirement that "(Ey)" hold) is a [[partial function]]. Kleene instead makes it a [[total function]] (cf. p.&nbsp;317):
:εyR(x, y) =
::* the least y such that R(x,y) [is true], if (Ey)R(x,y)
::* 0, otherwise.
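A minimal Python sketch (an illustration, not Kleene's notation) of the two forms just defined, with a predicate R given as a Boolean-valued function. The bounded form defaults to the bound z; the unbounded form is a partial function that simply never returns when no witness exists:

```python
# Illustrative sketch (not from Kleene's text): the bounded and the unbounded
# mu operator over a Python predicate R, where R(y) returns True or False.

def mu_bounded(R, z):
    """Least y < z such that R(y); defaults to the bound z when no such y exists."""
    for y in range(z):
        if R(y):
            return y
    return z

def mu_unbounded(R):
    """Least y such that R(y); loops forever when no such y exists (a partial function)."""
    y = 0
    while not R(y):
        y += 1
    return y

print(mu_bounded(lambda y: y * y > 10, 8))   # → 4
print(mu_bounded(lambda y: y > 99, 8))       # → 8, the default value z
print(mu_unbounded(lambda y: y % 7 == 3))    # → 3
```

The second call shows why Kleene's default value z matters: it is what keeps the bounded operator total even when the search fails.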
 
== Properties ==
 
(i) In the context of the [[primitive recursive functions]], where the search variable y of the μ-operator is bounded, e.g. y<z in the formula below: if the predicate R is primitive recursive (Kleene Proof #E p.&nbsp;228), then
: μy<sub>y<z</sub> R( y, x<sub>1</sub>,..., x<sub>n</sub> ) is a primitive recursive function.
 
(ii) In the context of the (total) [[total recursive function|recursive functions]]: where the search variable y is ''unbounded'' but guaranteed to exist for ''all'' values of the total recursive predicate R's parameters x<sub>1</sub>, ..., x<sub>n</sub>,
:(x<sub>1</sub>), ..., (x<sub>n</sub>) (Ey) R( y, x<sub>1</sub>, ..., x<sub>n</sub> ) implies that μyR(y, x<sub>1</sub>, ..., x<sub>n</sub>) is a [[total recursive function]].
:: Here (x<sub>i</sub>) means "for all x<sub>i</sub>" and (Ey) means "there exists at least one value of y such that ..." (cf. Kleene (1952) p. 279).

Then the five primitive recursive operators plus the unbounded-but-total μ-operator give rise to what Kleene called the "general" recursive functions (i.e. total functions defined by the six recursion operators).
 
(iii) In the context of the [[partial recursive function]]s: Suppose that the relation ''R'' holds if and only if a partial recursive function converges to zero. Suppose also that this partial recursive function converges (to something, not necessarily zero) whenever <math>\mu y R(y,x_1,\ldots,x_k)</math> is defined and ''y'' is <math>\mu y R(y,x_1,\ldots,x_k)</math> or smaller. Then the function <math>\mu y R(y,x_1,\ldots,x_k)</math> is also a partial recursive function.
 
<!-- If for every <math>(y_1,\ldots,y_k)</math> there is some ''x'' such that  <math>R(x,y_1,\ldots,y_k)</math>, then the function <math>\mu x R(x,y_1,\ldots,y_k)</math> is total. If it is a total function and also a partial recursive function then it is a [[total recursive function]].-->
The μ operator is used in the characterization of the computable functions as the  [[Mu-recursive function|μ recursive function]]s.
 
In [[constructive mathematics]], the unbounded search operator is related to [[Markov's principle]].
 
== Examples ==
 
=== Example #1: The bounded μ-operator is a primitive recursive function ===
 
:''In the following, to save space, the bold-face x, i.e. '''x''', will represent the string x<sub>1</sub>, . . ., x<sub>n</sub>.''
 
The ''bounded'' μ-operator  can be expressed rather simply in terms of two primitive recursive functions (hereafter "prf") that also are used to define the CASE function—the product-of-terms Π and the sum-of-terms Σ (cf Kleene #B page 224). (As needed, any boundary for the variable such as s≤t or t<z, or 5<x<17 etc. is appropriate). For example: 
:*Π<sub>s≤t</sub> f<sub>s</sub> ('''x''', s) = f<sub>0</sub>('''x''', 0) * f<sub>1</sub>('''x''', 1) * . . . * f<sub>t</sub>('''x''', t)
:*Σ<sub>t<z</sub> g<sub>t</sub> ( '''x''', t ) = g<sub>0</sub>( '''x''', 0 ) + g<sub>1</sub>('''x''', 1 ) + . . . + g<sub>z-1</sub>('''x''', z-1 )
 
Before we proceed we need to introduce a function ψ called "the [[representing function]]" of predicate R. Function ψ is defined from inputs ( t= "truth", f="falsity" ) to outputs ( 0, 1 ) (''Observe the order!''). In this case the input to ψ i.e. { t, f } is coming from the output of R:
:* ψ( R = t ) = 0
:* ψ( R = f ) = 1
 
Kleene demonstrates that μy<sub>y<z</sub> R(y) is defined as follows; we see the product function Π is acting like a Boolean OR operator, and the sum Σ is acting somewhat like a Boolean AND but is producing { Σ≠0, Σ=0 } rather than just { 1, 0 }:
:  μy <sub>y<z</sub> R(y) = Σ<sub>t<z</sub> Π<sub>s≤t</sub> ψ( R( '''x''', s )) =
:* [ ψ( '''x''', 0 ) ] '''+'''
:* [ ψ( '''x''', 0 ) * ψ( '''x''', 1 ) ] '''+'''
:* [ ψ( '''x''', 0 ) * ψ( '''x''', 1 ) * ψ( '''x''', 2 ) ] '''+'''
:*  . . . . . . '''+'''
:* [ ψ( '''x''', 0 ) * ψ( '''x''', 1 ) * . . . * ψ( '''x''', z-1 ) ]
 
:''Σ is actually a primitive recursion with the base Σ('''x''', 0) = 0 and the induction step Σ('''x''', y+1 )  = Σ( '''x''', y ) + Π( '''x''', y ). The product Π is also a primitive recursion, with base step Π( '''x''', 0 ) = ψ( '''x''', 0 ) and induction step Π( '''x''', y+1 ) = Π( '''x''', y )*ψ( '''x''', y+1 ).''
 
The equation is easier to understand when observed with an example, as given by Kleene. He simply made up the entries for the representing function ψ(R(y)), designating it χ(y) rather than ψ( '''x''', y ):
{| class="wikitable" style="text-align: center; vertical-align: bottom;"
|- 
| width="209"| y
| width="24"| 0
| width="24"| 1
| width="24"| 2
|style="background-color:#C0C0C0;" width="24"| '''3'''
| width="24"| 4
| width="24"| 5
| width="24"| 6
| width="25.2"| 7=z
|- 
| χ(y)
| 1
| 1
| 1
|style="background-color:#C0C0C0;font-weight:bold"| 0
| 1
| 0
| 0
|
|- 
| π(y) = Π<sub>s≤y</sub> χ(s)
| 1
| 1
| 1
|style="background-color:#C0C0C0;font-weight:bold"| 0
| 0
| 0
| 0
| 0
|- 
| σ(y) = Σ<sub>t≤y</sub> π(t)
| 1
| 2
| 3
|style="background-color:#C0C0C0;"| '''3'''
| 3
| 3
| 3
| 3
|-
| least y<z such that R(y) is "true": φ(y) = μy <sub>y<z</sub> R(y)
|
|
|
|style="background-color:#C0C0C0;font-weight:bold" | 3
|
|
|
|
|}
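The table above can be reproduced mechanically. A short Python sketch (an illustration only; variable names are ours) computes the bounded μ purely from the finite product Π and finite sum Σ, using Kleene's made-up χ values:

```python
# Kleene's made-up values for the representing function chi (0 = "true", 1 = "false").
chi = [1, 1, 1, 0, 1, 0, 0]
z = 7

def pi(y):
    """pi(y) = product of chi(s) for s <= y; drops to 0 once chi hits 0."""
    result = 1
    for s in range(y + 1):
        result *= chi[s]
    return result

def sigma(bound):
    """sigma = sum of pi(t) for t < bound; counts the leading run of chi = 1."""
    return sum(pi(t) for t in range(bound))

print([pi(y) for y in range(z)])  # → [1, 1, 1, 0, 0, 0, 0], the pi row of the table
print(sigma(z))                   # → 3, the least y < z with chi(y) = 0
```

The sum counts how many leading products are nonzero, which is exactly the index of the first y where χ(y) = 0; this is why no explicit search loop is needed.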
 
=== Example #2: The unbounded μ-operator is not primitive-recursive ===
The unbounded μ operator, the function μy, is the one commonly defined in the texts. But the reader may wonder why (the modern texts do not state the reason) the unbounded μ-operator searches for the function R('''x''', y) to yield ''zero'' rather than some other natural number.
:''In a footnote Minsky does allow his operator to terminate when the function inside produces a match to the parameter "k"; this example is also useful because it shows another author's format:''
::" For μ<sub>t</sub>[ φ(t) = k ] "(p. 210)
 
The reason for ''zero'' is that the unbounded operator μy will be defined in terms of the function "product" Π with its index y allowed to "grow" as the μ operator searches. As noted in the example above, the product Π<sub>s≤y</sub> of a string of numbers ψ('''x''', 0) * . . . * ψ('''x''', y) yields zero whenever one of its members ψ('''x''', i) is zero:
: Π<sub>s≤y</sub> ψ('''x''', s) = ψ('''x''', 0) * . . . * ψ('''x''', y) = 0

if any ψ('''x''', i) = 0, where 0 ≤ i ≤ y. Thus Π acts like a Boolean AND.
The function μy produces as "output" a single natural number y ∈ { 0, 1, 2, 3, ... }. However, inside the operator one of two "situations" can appear: (a) a "number-theoretic function" χ that produces a single natural number, or (b) a "predicate" R that produces either { t = true, f = false }. (And, in the context of ''partial'' recursive functions, Kleene later admits a third outcome: "μ = undecided", pp.&nbsp;332ff.)
 
Kleene splits his definition of the unbounded μ operator to handle the two situations (a) and (b). For situation (b), before the predicate R('''x''', y) can serve in an arithmetic capacity in the product Π, its output { t, f } must first be "operated on" by its ''representing function χ'' to yield { 0, 1 }. And for situation (a), if one definition is to be used then the ''number-theoretic function χ'' must produce zero to "satisfy" the μ operator. With this matter settled, he demonstrates with a single "Proof III" that either type (a) or (b) together with the five primitive recursive operators yields the (total) [[total recursive function|recursive function]]s, with this proviso for a [[total function]]:
: ''That for all parameters '''x''', a demonstration must be provided to show that a y exists that satisfies (a) μy ψ('''x''', y) or (b) μy R('''x''', y).''
Kleene also admits a third situation (c) that does not require the demonstration that "for all '''x''' a y exists such that ψ('''x''', y)." He uses this in his proof that more total recursive functions exist than can be enumerated; cf. footnote [[#Total function demonstration|Total function demonstration]].
Kleene's proof is informal and uses an example similar to the first example. But first he casts the μ-operator into a different form that uses the "product-of-terms" Π operating on a function χ that yields a natural number n, where n can be any natural number, and 0 in the instance when the μ operator's test is "satisfied".
 
:The definition recast with the Π-function:
:μy <sub>y<z</sub>χ(y) =
:*(i):  π('''x''', y) = Π<sub>s<y</sub> χ( '''x''', s)
::*(ii): φ('''x''') = τ( π('''x''', y), π( '''x''', y' ), y)
::*(iii): τ(z', 0, y) = y ;τ( u, v, w ) is undefined for u = 0 or v > 0.
 
This is subtle. At first glance the equations seem to be using primitive recursion. But Kleene has not provided us with a base step and an induction step of the general form:
:* base step: φ( 0, '''x''' ) = φ( '''x''' )
:* induction step: φ( y', '''x''' ) = ψ( y, φ( y, '''x''' ), '''x''' )
 
What is going on? First, we have to remind ourselves that we have assigned a parameter (a natural number) to every variable x<sub>i</sub>. Second, we do see a successor-operator at work iterating y (i.e. the y'). And third, we see that the function μy <sub>y<z</sub>χ(y, '''x''') is just producing instances of χ(y,'''x'''), i.e. χ(0,'''x'''), χ(1,'''x'''), ... until an instance yields 0. Fourth, when an instance χ(n,'''x''') yields 0, it causes the middle term of τ, i.e. v = π( '''x''', y' ), to yield 0. Finally, when the middle term v = 0, μy <sub>y<z</sub>χ(y) executes line (iii) and "exits". Kleene's presentation of equations (ii) and (iii) has been exchanged here to make the point that line (iii) represents an ''exit'', taken only when the search successfully finds a y to satisfy χ(y) and the middle product-term π('''x''', y' ) is 0; the operator then terminates its search with τ(z', 0, y) = y.
: τ( π('''x''', y), π( '''x''', y' ), y), i.e.:
:* τ( π('''x''', 0), π( '''x''', 1 ), 0),
:* τ( π('''x''', 1), π( '''x''', 2 ), 1)
:* τ( π('''x''', 2), π( '''x''', 3 ), 2)
:* τ( π('''x''', 3), π( '''x''', 4 ), 3)
:* . . . . . until a match occurs at y=n and then:
:* τ(z', 0, y) = τ(z', 0, n) = n and the μ operator's search is done.
 
For the example, Kleene "...consider[s] any fixed values of x<sub>1</sub>, ..., x<sub>n</sub> and write[s] simply 'χ(y)' for 'χ(x<sub>1</sub>, ..., x<sub>n</sub>, y)'":
{| class="wikitable" style="text-align: center; vertical-align: bottom;"
|- 
| width="208.8"| y
| width="24"| 0
| width="24"| 1
| width="24"| 2
|style="background-color:#C0C0C0;" width="24"| '''3'''
| width="24"| 4
| width="24"| 5
| width="24"| 6
| width="25.2"| 7
| width="25.2"| etc.
|- 
| χ(y)
| 3
| 1
| 2
|style="background-color:#C0C0C0;"| '''0'''
| 9
| 0
| 1
| 5
|style="font-weight:bold"|  . . .
|- 
| π(y) = Π<sub>s<y</sub> χ(s)
| 1
| 3
| 3
|style="background-color:#C0C0C0;"| '''6'''
| 0
| 0
| 0
| 0
|style="font-weight:bold"|  . . .
|-
|
|
|
|
|style="background-color:#C0C0C0"| '''↑'''
|
|
|
|
|
|-
| least y<z such that R(y) is "true": φ(y) = μy<sub>y<z</sub> R(y)
|
|
|
|style="background-color:#C0C0C0;font-weight:bold"| 3
|
|
|
|
|
|}
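The exit mechanism of the recast definition can be traced directly. A Python sketch (illustration only, not Kleene's formalism): π(y) is the running product of χ-values with π(0) = 1, and τ "exits" with y as soon as the next product π(y') vanishes. The χ values follow the example table above:

```python
# Sketch of Kleene's recast definition: the search advances y via the
# successor until tau( pi(y), pi(y'), y ) becomes defined.

chi_values = [3, 1, 2, 0, 9, 0, 1, 5]

def chi(y):
    return chi_values[y]

def pi(y):
    """pi(y) = product of chi(s) for s < y (empty product = 1)."""
    result = 1
    for s in range(y):
        result *= chi(s)
    return result

def tau(u, v, y):
    """tau(z', 0, y) = y; "undefined" (None) for u = 0 or v > 0."""
    return y if u != 0 and v == 0 else None

y = 0
while tau(pi(y), pi(y + 1), y) is None:   # iterate y via the successor
    y += 1
print(y)   # → 3: chi(3) = 0 is what first drives the product to zero
```

Note that the loop never inspects χ(y) directly; it only watches the product collapse from nonzero to zero, which is exactly the role of the middle argument of τ.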
<!-- === Example #3: The CASE operator ===
 
The CASE operator is interesting in its own right because it appears in computer programming in the form of CASE (the "switch" instruction of [[C]]) or more often in the simplest form of an IF-THEN-ELSE (cf below and [[McCarthy Formalism]]). But it appears here because Kleene offers two definitions, the first of which looks very much like the μ-operator (defined in terms of the prf series-of-products Π and series-of-sums Σ), the second definition of which is explicitly in terms of the μ-operator.
 
:''A related discussion also appears in Minsky (1967) and Boolos-Burgess-Jeffrey (2002).''
 
* #F:  Definition by cases: The function defined thus, where Q<sub>1</sub>, ..., Q<sub>m</sub> are mutually exclusive ''predicates'' (or "ψ('''x''') shall have the value given by the first clause which applies), is primitive recursive in φ<sub>1</sub>, ..., Q<sub>1</sub>, ... Q<sub>m</sub>:
::  φ('''x''') =
::* φ<sub>1</sub>('''x''') if Q<sub>1</sub>('''x''') is true,
::* .  .  . .  .  .  .  .  .  .  .  .  .  .  .  .  .  .  . 
::* φ<sub>m</sub>('''x''') if Q<sub>m</sub>('''x''') is true
::* φ<sub>m+1</sub>('''x''') otherwise
 
Observ that the Q<sub>i</sub> are ''predicates'' -- they are defined (usually -- they could be logical operators) from the natural numbers to a single output from the truth-set { t, f }. As in the first two examples, before we can build the CASE operator from simpler primitive recursive functions we first need to apply a representing function ψ<sub>i</sub> to each Q<sub>i</sub>:
: ψ<sub>i</sub>( Q<sub>i</sub> = t = "true" ) = 0
: ψ<sub>i</sub>( Q<sub>i</sub> = f = "false" ) = 1
 
Then to be able to use the prf product Π and sum Σ functions in the demonstration below we will need to "flip the sense" of ψ<sub>i</sub> to produce "1" when Q<sub>i</sub> is "true"; we use the prf ~sg(x) to do this -- it produces 1 if its input x is 0 and vice versa. Applying ~sg to ψ then returns "positive logic" "Q<sub>i</sub> = t => 1" and "Q<sub>i</sub> = f => 0":
:* ~sg( ψ<sub>i</sub>( Q<sub>i</sub> = t ) = 1
:* ~sg( ψ<sub>i</sub>( Q<sub>i</sub> = f ) = 0
     
Given these tools, Kleene then provides us with two equivalences for the CASE function
:(1) CASE φ('''x''') = ~sg(ψ<sub>1</sub> ) * φ<sub>1</sub> + . . . + ~sg(ψ<sub>m</sub> ) * φ<sub>m</sub> +  '''[''' ψ<sub>1</sub> * ... * ψ<sub>m</sub> * φ<sub>m+1</sub> ''']'''
:(2) CASE φ = μy<sub>y≤φ1+...+φm</sub> ( Q<sub>1</sub> & y=φ<sub>1</sub> ) V ... V ( Q<sub>m</sub> & y=φ<sub>m</sub> ) V '''[''' ( ~Q<sub>1</sub> &  ... & ~Q<sub>m</sub> ) & y=φ<sub>m+1</sub> ''']'''
 
On the right-hand side of both equations the terms in brackets '''[''' ''']''' are the conditions that must be met for the default or "otherwise" case. Definition (1) is just using * as a form of logical AND, and + as a kind of arithmetic OR. Definition (2) is actually using AND and OR( V ) but something ''subtle'' has appeared.
 
In the second definition using the μ-operator: y must range over a (potentially-) large number that is the ''sum'' of the various φ<sub>i</sub>, i.e. φ<sub>1</sub> + . . . + φ<sub>m</sub>. To see why this is the case, we design an example with only two cases and a default case. Substitute constants for the various φ<sub>i</sub>, say φ<sub>1</sub> = 27, φ<sub>2</sub> = 19, φ<sub>m+1</sub> = 31:
:  φ(x) =
:* φ<sub>1</sub>(x) = 27 if "x = 1" is true,
:* φ<sub>2</sub>(x) = 19 if "x = 2" is true,
:* φ<sub>3</sub>(x) = 31 otherwise
 
Unlike definition (1), the definition (2) contains "Boolean" logic-function terms that look like this: Q<sub>1</sub> & y=φ<sub>1</sub>. Here, before the equivalence "y=φ<sub>i</sub>" can be tested, y must begin at 0 and then range successively "up" to φ<sub>i</sub>, e.g. 27. If this fails because Q<sub>1</sub> = false, then y must restart with
:* Q<sub>2</sub> & y=φ<sub>2</sub>
 
and proceed all the way up to φ<sub>2</sub> = 19 before it can test to see if "x = 2".
Failing this, y must restart again and proceed to 31 before it can satisfy the right-most term and declare that μyR(y) = 31. So the worst case number of ''steps'' that the μ-operator must proceed through will be (at the very worst) the ''sum'' of the steps for each case:     
:φ<sub>1</sub>(x) + φ<sub>2</sub>(x) + φ<sub>3</sub>(x) = 27 + 19 + 31 -->
 
<!-- This is assuming the definition of CASE uses only the most primitive of the 5 primitive recursive operators, in particular (1) Constant 0, (2) Successor, etc.; i.e. excepting ~sg( ), no "more advanced" functions created from the operators are being used. The comparable case, if one were to do the equivalent with a [[Turing-equivalent]] [[counter machine]], would be to use the "successor" version with only the following instructions available to work on various registers r<sub>j</sub> :
: { CLeaR (r<sub>j</sub>), INCrement (r<sub>j</sub>), and Jump_if_r<sub>j</sub> = r<sub>k</sub>_to_instruction I<sub>z</sub> } -->
 
=== Example #3: Definition of the unbounded μ operator in terms of an abstract machine ===
 
Both Minsky (1967) p.&nbsp;210 and Boolos-Burgess-Jeffrey (2002) pp.&nbsp;60–61 provide definitions of the μ operator as an abstract machine; see footnote [[#Alternative abstract machine models of the unbounded μ operator from Minsky (1967) and Boolos-Burgess-Jeffrey (2002)|Alternative definitions of μ]].
 
The following demonstration follows Minsky without "the peculiarity" mentioned in the footnote. The demonstration will use a "successor" [[counter machine]] model closely related to the [[Peano Axioms]] and the [[primitive recursive function]]s. The model consists of (i) a finite state machine with a TABLE of instructions and a so-called 'state register' that we will rename "the Instruction Register" (IR), (ii) a few "registers" each of which can contain only a single natural number, and (iii) an instruction set of four "commands" described in the following table:
 
:''In the following, the symbolism " [ r ] "  means "the contents of", and " →r " indicates an action with respect to register r.''
 
{|class="wikitable"
|-
! Instruction
! Mnemonic
! Action on register(s) "r"
! Action on Instruction Register, IR
|-
| CLeaR register
|style="text-align:left;"| CLR ( r )
|style="text-align:center;"| 0 → r
|style="text-align:center;"| [ IR ] +1 → IR
|-
| INCrement register
| INC ( r )
|style="text-align:center;"| [ r ] +1 → r
|style="text-align:center;"| [ IR ] +1 → IR
|-
| Jump if Equal
| JE (r<sub>1</sub>, r<sub>2</sub>, z)
|style="text-align:center;"| none
|style="text-align:center;"| IF [ r1 ] = [ r2 ] THEN z → IR <br />ELSE [ IR ] +1 → IR 
|-
| Halt
| H
|style="text-align:center;"| none
|style="text-align:center;"| [ IR ]  → IR
|}
 
The algorithm for the minimization operator μy [φ( '''x''', y )] will, in essence, create a sequence of instances of the function φ( '''x''', y ) as the value of parameter y (a natural number) increases; the process will continue (see Note † below) until a match occurs between the output of function φ( '''x''', y ) and some pre-established number (usually 0). Thus the evaluation of φ('''x''', y) requires, at the outset, assignment of a natural number to each of its variables '''x''' and an assignment of a "match-number" (usually 0) to a register "w", and a number (usually 0) to register y.
 
:''Note †: The ''unbounded'' μ operator will continue this attempt-to-match process ad infinitum or until a match occurs. Thus the "y" register must be unbounded -- it must be able to "hold" a number of arbitrary size. Unlike a "real" computer model, abstract machine models allow this. In the case of a ''bounded'' μ operator, a lower-bounded μ operator would start with the contents of y set to a number other than zero. An upper-bounded μ operator would require an additional register "ub" to contain the number that represents the upper bound plus an additional comparison operation; an algorithm could provide for both lower- and upper bounds.''
 
In the following we assume that the Instruction Register (IR) encounters the μy "routine" at instruction number "n". Its first action will be to establish a number in a dedicated "w" register: the number that function φ( '''x''', y ) must produce before the algorithm can terminate (classically this is the number zero, but see the footnote about the use of numbers other than zero). The algorithm's next action, at instruction "n+1", will be to clear the "y" register; "y" will act as an "up-counter" that starts from 0. Then at instruction "n+2" the algorithm evaluates its function φ( '''x''', y ) (we assume this takes j instructions to accomplish), and at the end of its evaluation φ( '''x''', y ) deposits its output in register "φ". At the n+j+3rd instruction the algorithm compares the number in the "w" register (e.g. 0) to the number in the "φ" register; if they are the same, the algorithm has succeeded and it escapes through ''exit''; otherwise it increments the contents of the "y" register and ''loops'' back with this new y-value to test function φ( '''x''', y ) again.
 
{| class="wikitable" style="vertical-align: bottom;"
|- 
! IR
!
! Instruction
! Action on register
! Action on Instruction Register IR
|-
| n
| μy[ φ( '''x''', y ) ]:
| CLR ( w )
|style="text-align:center;"| 0 → w
| [ IR ] +1 → IR
|-
| n+1
|
| CLR ( y )
|style="text-align:center;"| 0 → y
| [ IR ] +1 → IR
|-
| n+2
| ''loop:''
| φ ( '''x''', y )
|style="text-align:center;"| φ( ['''x'''], [y] ) → φ
| [ IR ] +j+1 → IR
|-
| n+j+3
|
| JE (φ, w, exit)
|style="text-align:center;"| none
| CASE: { IF [φ]=[w] THEN ''exit'' → IR <br />ELSE [IR] +1 → IR }
|-
| n+j+4
|
| INC ( y )
|style="text-align:center;"| [ y ] +1 → y
| [ IR ] +1 → IR
|-
| n+j+5
|
| JE (0, 0, loop)
|style="text-align:center;"| Unconditional jump
| CASE: { IF [r0] =[r0] THEN ''loop'' → IR <br />ELSE ''loop'' → IR }
|-
| n+j+6
| ''exit:''
| ''etc.''
|
|
|}
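The routine in the table above can be simulated directly. A hedged Python sketch (not from the source; the sample function φ(x, y) = (x + y) mod 5 and the `fuel` guard are our inventions, the latter only so the sketch cannot run forever):

```python
# Illustrative simulation of the counter-machine mu routine above:
# clear w and y, evaluate phi, compare, increment, loop.

def mu_machine(phi, x, match=0, fuel=10_000):
    w = match                      # n:     CLR(w); any match number k is allowed
    y = 0                          # n+1:   CLR(y)
    for _ in range(fuel):          # fuel guards this sketch against divergence
        phi_reg = phi(x, y)        # n+2 :  evaluate phi into register "phi"
        if phi_reg == w:           # n+j+3: JE(phi, w, exit)
            return y               # exit with the least such y
        y += 1                     # n+j+4: INC(y)
                                   # n+j+5: JE(0, 0, loop), the unconditional jump
    raise RuntimeError("no match found; the unbounded mu may be undefined here")

print(mu_machine(lambda x, y: (x + y) % 5, 3))   # → 2, since (3 + 2) % 5 == 0
```

A true unbounded μ machine has no fuel bound; removing it recovers the partial function, which simply never halts when no y matches.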
 
== Footnotes ==
 
=== Total function demonstration ===
 
What is ''mandatory'' if the function is to be a [[total function]] is a demonstration ''by some other method'' (e.g. [[mathematical induction|induction]]) that for each and every combination of values of its parameters x<sub>i</sub> some natural number y will satisfy the μ-operator so that the algorithm that represents the calculation can terminate:
:"...we must always hesitate to assume that a system of equations really defines a general-recursive [i.e. total] function. We normally require auxiliary evidence for this, e.g. in the form of an inductive proof that, for each argument value, the computation terminates with a unique value." (Minsky (1967) p. 186)
 
:"In other words, we should not claim that a function is effectively calculable on the ground that it has been shown to be general [i.e. total] recursive, unless the demonstration that it is general recursive is effective."(Kleene (1952) p. 319)
 
For an example of what this means in practice see the examples at [[mu recursive function]]s—even the simplest ("improper") subtraction algorithm "x - y = d" can yield, for the undefined cases when x < y, (1) no termination, (2) no numbers (i.e. something wrong with the format so the yield is not considered a natural number), or (3) deceit: wrong numbers in the correct format. The "proper" subtraction algorithm requires careful attention to all the "cases":
:(x, y) = { (0, 0), (a, 0), (0, b), (a≥b, b), (a=b, b), (a<b, b) }.
 
But even when the algorithm has been shown to produce the expected output in the instances { (0, 0), (1, 0), (0, 1), (2, 1), (1, 1), (1, 2) }, we are left with an uneasy feeling until we can devise a "convincing demonstration" that the cases (x, y) = (n, m) ''all'' yield the expected results. To Kleene's point: is our "demonstration" (i.e. the algorithm that is our demonstration) convincing enough to be considered ''effective''?
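The subtraction example can be made concrete with a Python sketch (function names are ours, not the source's): defining "x − y" by unbounded search for a d with y + d = x diverges whenever x < y, while the guarded "proper" subtraction (monus) is total:

```python
def mu_subtract(x, y, fuel=1000):
    """Least d with y + d == x, found by unbounded search; the search never
    terminates when x < y (here a fuel bound raises instead of looping forever)."""
    d = 0
    while y + d != x:
        d += 1
        if d > fuel:
            raise RuntimeError("search diverges: x < y")
    return d

def proper_subtract(x, y):
    """Total 'monus': x - y when x >= y, else 0."""
    return x - y if x >= y else 0

print(mu_subtract(7, 3))       # → 4
print(proper_subtract(3, 7))   # → 0 (monus makes the undefined case total)
```

Checking a handful of instances of `mu_subtract` proves nothing about the cases x < y; only an argument covering all (x, y), such as the case analysis above, does.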
 
=== Alternative abstract machine models of the unbounded μ operator from Minsky (1967) and Boolos-Burgess-Jeffrey (2002)===
 
The unbounded μ operator is defined by Minsky (1967) p.&nbsp;210 but with a peculiar flaw: the operator will not yield t = 0 when its predicate (the IF-THEN-ELSE test) is satisfied; rather, it yields t=2. In Minsky's version the counter is "t", and the function φ( t, '''x''' ) deposits its number in register φ. In the usual μ definition register w will contain 0, but Minsky observes that it can contain any number k. Minsky's instruction set is equivalent to the following where "JNE" = Jump to z if Not Equal:
:{ CLR (r), INC (r), JNE (r<sub>j</sub>, r<sub>k</sub>, z) }
 
{| class="wikitable" style="text-align:center; "
|-
! IR
!
! Instruction
! Action on register
! Action on Instruction Register, IR
|-
| n
| '''μy φ( x ):'''
|style="text-align:left;"|CLR ( w )
| 0 → w
| [ IR ] +1 → IR
|-
| n+1
|
|style="text-align:left;"| CLR ( t )
| 0 → t
| [ IR ] +1 → IR
|-
| n+2
| ''loop:''
|style="text-align:left;"| φ ( t, x )
| φ( [ t ], [ x ] ) → φ
| [ IR ] +j+1 → IR
|-
| n+j+3
|
|style="text-align:left;"| INC ( t )
| [ t ] +1 → t
| [ IR ] +1 → IR
|-
| n+j+4
|
|style="text-align:left;"| JNE (φ, w, loop)
| none
| CASE: {  IF [φ] ≠ [w] THEN ''loop'' → IR <br /> ELSE [IR] +1 → IR }
|-
| n+j+5
|
|style="text-align:left;"| INC ( t )
| [ t ] +1 → t
| [ IR ] +1 → IR
|-
| n+j+6
| ''exit:''
| ''etc.''.
|
|
|}
 
The unbounded μ operator is also defined by Boolos-Burgess-Jeffrey (2002) p.&nbsp;60-61 for a counter machine with an instruction set equivalent to the following:
: { CLR (r), INC (r), DEC (r), JZ (r, z), H }
 
In this version the counter "y" is called "r2", and the function f( '''x''', r2 ) deposits its number in register "r3". Perhaps the reason Boolos-Burgess-Jeffrey clear r3 is to facilitate an unconditional jump to ''loop''; this is often done by use of a dedicated register "0" that contains "0":
 
{|class="wikitable" style="text-align:center; vertical-align:bottom;"
|-
! IR
!
! Instruction
! Action on register
! Action on Instruction Register, IR
|-
| n
| '''μ<sub>r2</sub>[f(x, r2)]:'''
|style="text-align:left;"| CLR ( r2 )
| 0 → r2
| [ IR ] +1 → IR
|-
|  n+1
| loop:
|style="text-align:left;"| f( x, r2 )
| f( [ x ], [ r2 ] ) → r3
| [ IR ] +j+1 → IR
|-
| n+2
|
|style="text-align:left;"| JZ ( r3, exit )
| none
| IF [ r3 ] = 0 THEN exit → IR <br />ELSE [ IR ] +1 → IR 
|-
| n+j+3
|
|style="text-align:left;"| CLR ( r3 )
| 0 → r3
| [ IR ] +1 → IR
|-
| n+j+4
|
|style="text-align:left;"| INC ( r2 )
| [ r2  ] +1 → r2
| [ IR ] +1 → IR
|-
| n+j+5
|
|style="text-align:left;"| JZ ( r3, loop)
|
| CASE: {  IF [ r3 ] =0 THEN loop → IR <br />ELSE [IR] +1 → IR }
|-
| n+j+6
| ''exit:''
| ''etc.''.
|
|
|}
 
== References ==
*[[Stephen Kleene]] (1952) ''Introduction to Metamathematics'', North-Holland Publishing Company, New York, 11th reprint 1971: (2nd edition notes added on 6th reprint).
 
*[[Marvin L. Minsky]] (1967), ''Computation: Finite and Infinite Machines'', Prentice-Hall, Inc. Englewood Cliffs, N.J.
:On pages 210-215 Minsky shows how to create the μ-operator using the [[register machine]] model, thus demonstrating its equivalence to the general recursive functions.
 
*[[George Boolos]], [[John P. Burgess|John Burgess]], [[Richard Jeffrey]] (2002), ''Computability and Logic: Fourth Edition'', Cambridge University Press, Cambridge, UK. Cf pp.&nbsp;70–71.
 
[[Category:Computability theory|Mu operator]]
