Unicity distance

 
In [[cryptography]], '''unicity distance''' is the length of an original [[ciphertext]] needed to break the cipher by reducing the number of possible '''spurious keys''' to zero in a [[brute force attack]]. That is, after trying every possible [[key (cryptography)|key]], only one decipherment should make sense: the unicity distance is the expected amount of ciphertext needed to determine the key completely, assuming the underlying message has redundancy.
 
Consider an attack on the ciphertext string "WNAIW" encrypted using a [[Vigenère cipher]] with a five-letter key. Conceivably, this string could be deciphered into any other five-letter string; RIVER and WATER are both possibilities for certain keys. This illustrates a general rule of [[cryptanalysis]]: with no additional information, it is impossible to decode this message.
 
Of course, even in this case, only a certain number of five letter keys will result in English words. Trying all possible keys we will not only get RIVER and WATER, but SXOOS and KHDOP as well. The number of "working" keys will likely be very much smaller than the set of all possible keys. The problem is knowing which of these "working" keys is the right one; the rest are spurious.
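This can be checked with a minimal Python sketch; the helper function below is illustrative (not from any particular library), and the two keys shown are easy to verify by hand:

```python
def vigenere_decrypt(ciphertext, key):
    # Subtract each key letter from the matching ciphertext letter, mod 26.
    return "".join(
        chr((ord(c) - ord(k)) % 26 + ord("A"))
        for c, k in zip(ciphertext, key)
    )

# Two of the many "working" five-letter keys for the ciphertext WNAIW:
print(vigenere_decrypt("WNAIW", "FFFEF"))  # RIVER
print(vigenere_decrypt("WNAIW", "ANHEF"))  # WATER
```

Other keys, such as most random five-letter strings, yield gibberish like SXOOS; the attacker's problem is distinguishing the one correct key among the readable candidates.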
 
==Relation with key size and possible plaintexts==
In general, given any particular assumptions about the size of the key and the number of possible messages, there is an average ciphertext length where there is only one key (on average) that will generate a readable message. In the example above we see only [[upper case]] [[Latin alphabet|Roman]] characters, so if we assume this is the input then there are 26 possible letters for each position in the string. Likewise if we assume five-character upper case keys, there are K=26<sup>5</sup> possible keys, of which the majority will not "work".
 
A tremendous number of possible messages, N, can be generated using even this limited set of characters: N = 26<sup>L</sup>, where L is the length of the message. However only a smaller set of them is readable [[plaintext]] due to the rules of the language, perhaps M of them, where M is likely to be very much smaller than N. Moreover M has a one-to-one relationship with the number of keys that work, so given K possible keys, only K &times; (M/N) of them will "work". One of these is the correct key, the rest are spurious.
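The counting argument can be sketched numerically. The value of M below is purely hypothetical, since the actual number of readable five-character strings is not given here:

```python
K = 26 ** 5      # possible five-character keys
L = 5            # message length in characters
N = 26 ** L      # all possible five-character strings
M = 10_000       # assumed number of readable plaintexts (hypothetical figure)

# Expected number of "working" keys, K * (M/N); one is correct, the rest spurious.
working = K * M / N
print(int(working))  # 10000
```

Because the key here is as long as the message (K equals N), every readable string is reachable and the number of working keys equals M; a shorter key relative to the message shrinks the fraction of keys that work.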
 
Since N is dependent on the length of the message L, whereas M is dependent on the number of keys, K, there is some L where the number of spurious keys is zero. This L is the unicity distance.
 
==Relation with key entropy and plaintext redundancy==
The unicity distance can also be defined as the minimum amount of ciphertext required to permit a computationally unlimited adversary to recover the unique encryption key.
 
The expected unicity distance is accordingly:
 
: <math>U = H(k) / D</math>
 
where ''U'' is the unicity distance, ''H''(''k'') is the entropy of the key space (e.g. 128 for 2<sup>128</sup> equiprobable keys, rather less if the key is a memorized pass-phrase), and ''D'' is the plaintext redundancy in bits per character.
 
Now an alphabet of 32 characters can carry 5 bits of information per character (as 32 =&nbsp;2<sup>5</sup>). In general the number of bits of information is lg&nbsp;''N'', where ''N'' is the number of characters in the alphabet. So for English each character can convey {{math|log<sub>2</sub>(26) {{=}} 4.7}} bits of information. See [[Binary logarithm]] for details on {{math|log<sub>2</sub>}}.
 
However, the average amount of actual information carried per character in meaningful English text is only about 1.5 bits per character{{Citation needed|date=June 2013}}. So the plaintext redundancy is ''D'' =&nbsp;4.7&nbsp;&minus;&nbsp;1.5 =&nbsp;3.2.
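These figures are easy to reproduce; the 128-bit key in the sketch below is just an assumed example value for ''H''(''k''):

```python
import math

bits_per_char = math.log2(26)      # ~4.7 bits an English letter could carry
info_per_char = 1.5                # approximate actual information per character
D = bits_per_char - info_per_char  # plaintext redundancy, ~3.2 bits/char

U = 128 / D                        # unicity distance for an assumed 128-bit key
print(round(D, 1), round(U))       # 3.2 40
```

So roughly 40 characters of English plaintext would, in principle, pin down a 128-bit key for a computationally unbounded attacker.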
 
In general, the larger the unicity distance, the better. For a [[one-time pad]], given the unbounded entropy of the key space, we have <math>U = \infty</math>, which is consistent with the one-time pad being theoretically unbreakable.
 
=== Unicity distance of substitution cipher ===
For a simple [[substitution cipher]], the number of possible keys is {{math|26! {{=}} 4.0329 × 10<sup>26</sup> {{=}} 2<sup>88.4</sup>}}, the number of ways in which the alphabet can be permuted. Assuming all keys are equally likely, {{math|''H''(''k'') {{=}} log<sub>2</sub>(26!) {{=}} 88.4}} bits. For English text {{math|''D'' {{=}} 3.2}}, thus {{math|''U'' {{=}} 88.4/3.2 {{=}} 28}}.
 
So given 28 characters of ciphertext it should be theoretically possible to work out an English plaintext and hence the key.
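The same arithmetic for the substitution cipher, as a brief sketch:

```python
import math

key_entropy = math.log2(math.factorial(26))  # log2(26!) ~ 88.4 bits
D = 3.2                                      # redundancy of English, bits/char
U = key_entropy / D                          # unicity distance in characters
print(round(key_entropy, 1), round(U))       # 88.4 28
```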
 
==Practical application==
Unicity distance is a useful theoretical measure, but it does not say much about the security of a block cipher when attacked by an adversary with real-world (limited) resources. Consider a block cipher with a unicity distance of three ciphertext blocks. Although there is clearly enough information for a computationally unbounded adversary to find the right key by simple exhaustive search, doing so may be computationally infeasible in practice.
 
The unicity distance can be increased by reducing the plaintext redundancy. One way to do this is to deploy data compression techniques prior to encryption, for example by removing redundant vowels while retaining readability. This is a good idea anyway, as it reduces the amount of data to be encrypted.
 
Another way to increase the unicity distance is to increase the number of possible valid sequences in the files as they are read. If, for at least the first several blocks, any bit pattern can effectively be part of a valid message, then the unicity distance has not been reached. This is possible for long files when certain bijective string sorting permutations are used, such as the many variants of bijective [[Burrows%E2%80%93Wheeler_transform|Burrows–Wheeler transforms]].
 
Ciphertexts greater than the unicity distance can be assumed to have only one meaningful decryption. Ciphertexts shorter than the unicity distance may have multiple plausible decryptions. Unicity distance is not a measure of how much ciphertext is required for cryptanalysis, but how much ciphertext is required for there to be only one reasonable solution for cryptanalysis.
 
==External links==
*[[Bruce Schneier]]: [http://www.schneier.com/crypto-gram-9812.html#plaintext How to Recognize Plaintext] (Crypto-Gram Newsletter December 15, 1998)
*[http://www.practicalcryptography.com/cryptanalysis/text-characterisation/statistics/#unicity-distance Unicity Distance computed for common ciphers]
 
[[Category:Cryptography]]
[[Category:Information theory]]

