<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://en.formulasearchengine.com/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=209.147.130.100</id>
	<title>formulasearchengine - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://en.formulasearchengine.com/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=209.147.130.100"/>
	<link rel="alternate" type="text/html" href="https://en.formulasearchengine.com/wiki/Special:Contributions/209.147.130.100"/>
	<updated>2026-05-02T03:19:31Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.43.0-wmf.28</generator>
	<entry>
		<id>https://en.formulasearchengine.com/index.php?title=Probabilistic_method&amp;diff=2633</id>
		<title>Probabilistic method</title>
		<link rel="alternate" type="text/html" href="https://en.formulasearchengine.com/index.php?title=Probabilistic_method&amp;diff=2633"/>
		<updated>2013-05-10T00:08:37Z</updated>

		<summary type="html">&lt;p&gt;209.147.130.100: Information theory uses probabilistic proofs in the &amp;quot;direct&amp;quot; part of the coding theorem.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;:&#039;&#039;This article is about the transpose of a matrix. For other uses, see [[Transposition (disambiguation)|Transposition]]&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;Note that this article assumes that matrices are taken over a commutative ring. These results may not hold in the non-commutative case.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[[File:Matrix transpose.gif|thumb|200px|right|The transpose &#039;&#039;&#039;A&#039;&#039;&#039;&amp;lt;sup&amp;gt;T&amp;lt;/sup&amp;gt; of a matrix &#039;&#039;&#039;A&#039;&#039;&#039; can be obtained by reflecting the elements along its main diagonal. Repeating the process on the transposed matrix returns the elements to their original position.]]&lt;br /&gt;
&lt;br /&gt;
In [[linear algebra]], the &#039;&#039;&#039;transpose&#039;&#039;&#039; of a [[matrix (mathematics)|matrix]] &#039;&#039;&#039;A&#039;&#039;&#039; is another matrix &#039;&#039;&#039;A&#039;&#039;&#039;&amp;lt;sup&amp;gt;T&amp;lt;/sup&amp;gt; (also written &#039;&#039;&#039;A&#039;&#039;&#039;′, &#039;&#039;&#039;A&#039;&#039;&#039;&amp;lt;sup&amp;gt;tr&amp;lt;/sup&amp;gt;, &amp;lt;sup&amp;gt;t&amp;lt;/sup&amp;gt;&#039;&#039;&#039;A&#039;&#039;&#039; or &#039;&#039;&#039;A&#039;&#039;&#039;&amp;lt;sup&amp;gt;t&amp;lt;/sup&amp;gt;) created by any one of the following equivalent actions:&lt;br /&gt;
* reflect &#039;&#039;&#039;A&#039;&#039;&#039; over its [[main diagonal]] (which runs from top-left to bottom-right) to obtain &#039;&#039;&#039;A&#039;&#039;&#039;&amp;lt;sup&amp;gt;T&amp;lt;/sup&amp;gt;&lt;br /&gt;
* write the rows of &#039;&#039;&#039;A&#039;&#039;&#039; as the columns of &#039;&#039;&#039;A&#039;&#039;&#039;&amp;lt;sup&amp;gt;T&amp;lt;/sup&amp;gt;&lt;br /&gt;
* write the columns of &#039;&#039;&#039;A&#039;&#039;&#039; as the rows of &#039;&#039;&#039;A&#039;&#039;&#039;&amp;lt;sup&amp;gt;T&amp;lt;/sup&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Formally, the &#039;&#039;i&#039;&#039; th row, &#039;&#039;j&#039;&#039; th column element of &#039;&#039;&#039;A&#039;&#039;&#039;&amp;lt;sup&amp;gt;T&amp;lt;/sup&amp;gt; is the &#039;&#039;j&#039;&#039; th row, &#039;&#039;i&#039;&#039; th column element of &#039;&#039;&#039;A&#039;&#039;&#039;: &lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;[\mathbf{A}^\mathrm{T}]_{ij} = [\mathbf{A}]_{ji}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If &#039;&#039;&#039;A&#039;&#039;&#039; is an {{nowrap|&#039;&#039;m&#039;&#039; × &#039;&#039;n&#039;&#039;}} matrix then &#039;&#039;&#039;A&#039;&#039;&#039;&amp;lt;sup&amp;gt;T&amp;lt;/sup&amp;gt; is an {{nowrap|&#039;&#039;n&#039;&#039; × &#039;&#039;m&#039;&#039;}} matrix.&lt;br /&gt;
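&lt;br /&gt;
The entrywise rule above translates directly into code.  The following is a minimal, illustrative Python sketch using plain nested lists (the function name is ours, not a standard library routine):&lt;br /&gt;

```python
# Defining rule: entry (i, j) of the transpose is entry (j, i) of A.
def transpose(A):
    m = len(A)       # number of rows of A
    n = len(A[0])    # number of columns of A
    # the result is an n x m matrix
    return [[A[j][i] for j in range(m)] for i in range(n)]

A = [[1, 2],
     [3, 4],
     [5, 6]]                      # a 3 x 2 matrix
print(transpose(A))               # [[1, 3, 5], [2, 4, 6]], a 2 x 3 matrix
```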
&lt;br /&gt;
The transpose of a matrix was introduced in 1858 by the British mathematician [[Arthur Cayley]].&amp;lt;ref&amp;gt;Arthur Cayley (1858) [http://books.google.com/books?id=flFFAAAAcAAJ&amp;amp;pg=PA31#v=onepage&amp;amp;q&amp;amp;f=false &amp;quot;A memoir on the theory of matrices,&amp;quot;] &#039;&#039;Philosophical Transactions of the Royal Society of London&#039;&#039;, &#039;&#039;&#039;148&#039;&#039;&#039;: 17–37.  The transpose (or &amp;quot;transposition&amp;quot;) is defined on page 31.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Examples ==&lt;br /&gt;
*&amp;lt;math&amp;gt;\begin{bmatrix}&lt;br /&gt;
1 &amp;amp; 2  \end{bmatrix}^{\mathrm{T}}&lt;br /&gt;
= \,&lt;br /&gt;
\begin{bmatrix}&lt;br /&gt;
1   \\&lt;br /&gt;
2  \end{bmatrix}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;\begin{bmatrix}&lt;br /&gt;
1 &amp;amp; 2  \\&lt;br /&gt;
3 &amp;amp; 4 \end{bmatrix}^{\mathrm{T}}&lt;br /&gt;
=&lt;br /&gt;
\begin{bmatrix}&lt;br /&gt;
1 &amp;amp; 3  \\&lt;br /&gt;
2 &amp;amp; 4 \end{bmatrix}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;math&amp;gt;&lt;br /&gt;
\begin{bmatrix}&lt;br /&gt;
1 &amp;amp; 2 \\&lt;br /&gt;
3 &amp;amp; 4 \\&lt;br /&gt;
5 &amp;amp; 6 \end{bmatrix}^{\mathrm{T}}&lt;br /&gt;
=&lt;br /&gt;
\begin{bmatrix}&lt;br /&gt;
1 &amp;amp; 3 &amp;amp; 5\\&lt;br /&gt;
2 &amp;amp; 4 &amp;amp; 6 \end{bmatrix}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Properties ==&lt;br /&gt;
For matrices &#039;&#039;&#039;A&#039;&#039;&#039;, &#039;&#039;&#039;B&#039;&#039;&#039; and a scalar &#039;&#039;c&#039;&#039;, the transpose has the following properties:&lt;br /&gt;
&lt;br /&gt;
{{ordered list&lt;br /&gt;
|1= &amp;lt;math&amp;gt;( \mathbf{A}^\mathrm{T} ) ^\mathrm{T} = \mathbf{A} \quad \,&amp;lt;/math&amp;gt;&lt;br /&gt;
:The operation of taking the transpose is an [[Involution (mathematics)|involution]] (self-[[Inverse matrix|inverse]]).&lt;br /&gt;
|2= &amp;lt;math&amp;gt;(\mathbf{A}+\mathbf{B}) ^\mathrm{T} = \mathbf{A}^\mathrm{T} + \mathbf{B}^\mathrm{T} \,&amp;lt;/math&amp;gt; &lt;br /&gt;
:The transpose respects addition.&lt;br /&gt;
|3= &amp;lt;math&amp;gt;\left( \mathbf{A B} \right) ^\mathrm{T} = \mathbf{B}^\mathrm{T} \mathbf{A}^\mathrm{T} \,&amp;lt;/math&amp;gt;&lt;br /&gt;
:Note that the order of the factors reverses. From this one can deduce that a [[square matrix]] &#039;&#039;&#039;A&#039;&#039;&#039; is [[Invertible matrix|invertible]] if and only if &#039;&#039;&#039;A&#039;&#039;&#039;&amp;lt;sup&amp;gt;T&amp;lt;/sup&amp;gt; is invertible, and in this case we have (&#039;&#039;&#039;A&#039;&#039;&#039;&amp;lt;sup&amp;gt;−1&amp;lt;/sup&amp;gt;)&amp;lt;sup&amp;gt;T&amp;lt;/sup&amp;gt; = (&#039;&#039;&#039;A&#039;&#039;&#039;&amp;lt;sup&amp;gt;T&amp;lt;/sup&amp;gt;)&amp;lt;sup&amp;gt;−1&amp;lt;/sup&amp;gt;.  By induction this result extends to the general case of multiple matrices, where we find that (&#039;&#039;&#039;A&#039;&#039;&#039;&amp;lt;sub&amp;gt;1&amp;lt;/sub&amp;gt;&#039;&#039;&#039;A&#039;&#039;&#039;&amp;lt;sub&amp;gt;2&amp;lt;/sub&amp;gt;...&#039;&#039;&#039;A&#039;&#039;&#039;&amp;lt;sub&amp;gt;&#039;&#039;k&#039;&#039;−1&amp;lt;/sub&amp;gt;&#039;&#039;&#039;A&#039;&#039;&#039;&amp;lt;sub&amp;gt;&#039;&#039;k&#039;&#039;&amp;lt;/sub&amp;gt;)&amp;lt;sup&amp;gt;T&amp;lt;/sup&amp;gt;&amp;amp;nbsp;=&amp;amp;nbsp;&#039;&#039;&#039;A&#039;&#039;&#039;&amp;lt;sub&amp;gt;&#039;&#039;k&#039;&#039;&amp;lt;/sub&amp;gt;&amp;lt;sup&amp;gt;T&amp;lt;/sup&amp;gt;&#039;&#039;&#039;A&#039;&#039;&#039;&amp;lt;sub&amp;gt;&#039;&#039;k&#039;&#039;−1&amp;lt;/sub&amp;gt;&amp;lt;sup&amp;gt;T&amp;lt;/sup&amp;gt;...&#039;&#039;&#039;A&#039;&#039;&#039;&amp;lt;sub&amp;gt;2&amp;lt;/sub&amp;gt;&amp;lt;sup&amp;gt;T&amp;lt;/sup&amp;gt;&#039;&#039;&#039;A&#039;&#039;&#039;&amp;lt;sub&amp;gt;1&amp;lt;/sub&amp;gt;&amp;lt;sup&amp;gt;T&amp;lt;/sup&amp;gt;.&lt;br /&gt;
|4= &amp;lt;math&amp;gt;(c \mathbf{A})^\mathrm{T} = c \mathbf{A}^\mathrm{T} \,&amp;lt;/math&amp;gt;&lt;br /&gt;
:The transpose of a [[Scalar (mathematics)|scalar]] is the same scalar. Together with (2), this states that the transpose is a [[linear map]] from the [[Vector space|space]] of {{nowrap|&#039;&#039;m&#039;&#039; × &#039;&#039;n&#039;&#039;}} matrices to the space of all {{nowrap|&#039;&#039;n&#039;&#039; × &#039;&#039;m&#039;&#039;}} matrices.&lt;br /&gt;
|5= &amp;lt;math&amp;gt;\det(\mathbf{A}^\mathrm{T}) = \det(\mathbf{A}) \,&amp;lt;/math&amp;gt;&lt;br /&gt;
:The [[determinant]] of a square matrix is the same as that of its transpose.&lt;br /&gt;
|6= The [[dot product]] of two column [[vector space|vector]]s &#039;&#039;&#039;a&#039;&#039;&#039; and &#039;&#039;&#039;b&#039;&#039;&#039; can be computed as&lt;br /&gt;
:&amp;lt;math&amp;gt; \mathbf{a} \cdot \mathbf{b} = \mathbf{a}^{\mathrm{T}} \mathbf{b},&amp;lt;/math&amp;gt;&lt;br /&gt;
which is written as &#039;&#039;&#039;a&#039;&#039;&#039;&amp;lt;sub&amp;gt;&#039;&#039;i&#039;&#039;&amp;lt;/sub&amp;gt;&amp;amp;thinsp;&#039;&#039;&#039;b&#039;&#039;&#039;&amp;lt;sup&amp;gt;&#039;&#039;i&#039;&#039;&amp;lt;/sup&amp;gt; in [[Einstein notation]].&lt;br /&gt;
|7= If &#039;&#039;&#039;A&#039;&#039;&#039; has only real entries, then &#039;&#039;&#039;A&#039;&#039;&#039;&amp;lt;sup&amp;gt;T&amp;lt;/sup&amp;gt;&#039;&#039;&#039;A&#039;&#039;&#039; is a [[positive-semidefinite matrix]].&lt;br /&gt;
|8= &amp;lt;math&amp;gt;(\mathbf{A}^\mathrm{T})^{-1} = (\mathbf{A}^{-1})^\mathrm{T} \,&amp;lt;/math&amp;gt;&lt;br /&gt;
: The transpose of an invertible matrix is also invertible, and its inverse is the transpose of the inverse of the original matrix.  The notation &#039;&#039;&#039;A&#039;&#039;&#039;&amp;lt;sup&amp;gt;−T&amp;lt;/sup&amp;gt; is sometimes used to represent either of these equivalent expressions.&lt;br /&gt;
|9= If &#039;&#039;&#039;A&#039;&#039;&#039; is a square matrix, then its [[Eigenvalue, eigenvector and eigenspace|eigenvalues]] are equal to the eigenvalues of its transpose, since they share the same [[characteristic polynomial]].&lt;br /&gt;
}}&lt;br /&gt;
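&lt;br /&gt;
Properties (1), (3) and (5) can be spot-checked numerically.  A small, illustrative Python sketch on 2 × 2 integer matrices (the helper names are ours):&lt;br /&gt;

```python
def transpose(M):
    return [[M[j][i] for j in range(len(M))] for i in range(len(M[0]))]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def det2(M):                          # determinant of a 2 x 2 matrix
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]

assert transpose(transpose(A)) == A                                   # (1)
assert transpose(matmul(A, B)) == matmul(transpose(B), transpose(A))  # (3)
assert det2(transpose(A)) == det2(A)                                  # (5)
```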
&lt;br /&gt;
== Special transpose matrices ==&lt;br /&gt;
A square matrix whose transpose is equal to itself is called a [[symmetric matrix]]; that is, &#039;&#039;&#039;A&#039;&#039;&#039; is symmetric if&lt;br /&gt;
:&amp;lt;math&amp;gt;\mathbf{A}^{\mathrm{T}} = \mathbf{A} .&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A square matrix whose transpose is equal to its negative is called a [[skew-symmetric matrix]]; that is, &#039;&#039;&#039;A&#039;&#039;&#039; is skew-symmetric if&lt;br /&gt;
:&amp;lt;math&amp;gt;\mathbf{A}^{\mathrm{T}} = -\mathbf{A} .&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A square [[complex number|complex]] matrix whose transpose is equal to the matrix with every entry replaced by its [[complex conjugate]] is called a [[Hermitian matrix]] (equivalent to the matrix being equal to its [[conjugate transpose]]); that is, &#039;&#039;&#039;A&#039;&#039;&#039; is Hermitian if&lt;br /&gt;
:&amp;lt;math&amp;gt;\mathbf{A}^{\mathrm{T}} = \overline{\mathbf{A}} .&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A square [[complex number|complex]] matrix whose transpose is equal to the negation of its complex conjugate is called a [[skew-Hermitian matrix]]; that is, &#039;&#039;&#039;A&#039;&#039;&#039; is skew-Hermitian if&lt;br /&gt;
:&amp;lt;math&amp;gt;\mathbf{A}^{\mathrm{T}} = -\overline{\mathbf{A}} .&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A square matrix whose transpose is equal to its inverse is called an [[orthogonal matrix]]; that is, &#039;&#039;&#039;A&#039;&#039;&#039; is orthogonal if&lt;br /&gt;
:&amp;lt;math&amp;gt;\mathbf{A}^{\mathrm{T}} = \mathbf{A}^{-1} .&amp;lt;/math&amp;gt;&lt;br /&gt;
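&lt;br /&gt;
These definitions are easy to check mechanically.  An illustrative Python sketch, using integer examples so that all arithmetic is exact:&lt;br /&gt;

```python
def transpose(M):
    return [[M[j][i] for j in range(len(M))] for i in range(len(M[0]))]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def is_symmetric(M):
    return transpose(M) == M

def is_skew_symmetric(M):
    return transpose(M) == [[-x for x in row] for row in M]

assert is_symmetric([[1, 7], [7, 2]])
assert is_skew_symmetric([[0, 3], [-3, 0]])

# A permutation matrix is orthogonal: P^T P is the identity matrix.
P = [[0, 1], [1, 0]]
I = [[1, 0], [0, 1]]
assert matmul(transpose(P), P) == I
```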
&lt;br /&gt;
== Transpose of a linear map ==&lt;br /&gt;
{{see also|Dual space#Transpose of a linear map|l1=Dual space&#039;&#039; (section &#039;&#039;Transpose of a linear map&#039;&#039;)&#039;&#039;}}&lt;br /&gt;
&lt;br /&gt;
The transpose may be defined using a [[coordinate-free]] approach:&lt;br /&gt;
&lt;br /&gt;
If {{nowrap|1=&#039;&#039;f&#039;&#039; : &#039;&#039;V&#039;&#039; → &#039;&#039;W&#039;&#039;}} is a [[linear operator|linear map]] between [[vector space]]s &#039;&#039;V&#039;&#039; and &#039;&#039;W&#039;&#039; with respective [[dual space]]s &#039;&#039;V&#039;&#039;&amp;lt;sup&amp;gt;∗&amp;lt;/sup&amp;gt; and &#039;&#039;W&#039;&#039;&amp;lt;sup&amp;gt;∗&amp;lt;/sup&amp;gt;, the &#039;&#039;transpose&#039;&#039; of &#039;&#039;f&#039;&#039; is the linear map {{nowrap|1=&amp;lt;sup&amp;gt;t&amp;lt;/sup&amp;gt;&#039;&#039;f&#039;&#039; : &#039;&#039;W&#039;&#039;&amp;lt;sup&amp;gt;∗&amp;lt;/sup&amp;gt; → &#039;&#039;V&#039;&#039;&amp;lt;sup&amp;gt;∗&amp;lt;/sup&amp;gt;}} that satisfies&lt;br /&gt;
:&amp;lt;math&amp;gt; {}^\mathrm{t} f (\phi ) = \phi \circ f \quad \forall \phi \in W^* .&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The definition of the transpose may be seen to be independent of any bilinear form on the vector spaces, unlike the adjoint ([[#Adjoint of a bilinear map|below]]).&lt;br /&gt;
&lt;br /&gt;
If the matrix &#039;&#039;A&#039;&#039; describes a linear map with respect to [[basis (linear algebra)|bases]] of &#039;&#039;V&#039;&#039; and &#039;&#039;W&#039;&#039;, then the matrix &#039;&#039;A&#039;&#039;&amp;lt;sup&amp;gt;T&amp;lt;/sup&amp;gt;  describes the transpose of that linear map with respect to the dual bases.&lt;br /&gt;
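&lt;br /&gt;
The statement above can be checked on a small example.  The following Python sketch identifies functionals with their coordinate rows in the dual bases (all names are illustrative):&lt;br /&gt;

```python
def transpose(M):
    return [[M[j][i] for j in range(len(M))] for i in range(len(M[0]))]

def apply(M, v):                       # matrix times column vector
    return [sum(M[i][k] * v[k] for k in range(len(v)))
            for i in range(len(M))]

A = [[1, 2], [3, 4], [5, 6]]           # f maps V = R^2 to W = R^3
phi = [7, 8, 9]                        # a functional on W, in dual coordinates
v = [10, 11]

# phi(f(v)), computed directly ...
lhs = sum(p * y for p, y in zip(phi, apply(A, v)))
# ... equals the functional with dual coordinates A^T phi, applied to v
rhs = sum(q * x for q, x in zip(apply(transpose(A), phi), v))
assert lhs == rhs
```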
&lt;br /&gt;
=== Transpose of a bilinear form ===&lt;br /&gt;
{{main|Bilinear form}}&lt;br /&gt;
&lt;br /&gt;
Every linear map to the dual space {{nowrap|1=&#039;&#039;f&#039;&#039; : &#039;&#039;V&#039;&#039; → &#039;&#039;V&#039;&#039;&amp;lt;sup&amp;gt;∗&amp;lt;/sup&amp;gt;}} defines a bilinear form {{nowrap|1=&#039;&#039;B&#039;&#039; : &#039;&#039;V&#039;&#039; × &#039;&#039;V&#039;&#039; → &#039;&#039;F&#039;&#039;}}, with the relation {{nowrap|1=&#039;&#039;B&#039;&#039;(&#039;&#039;&#039;v&#039;&#039;&#039;, &#039;&#039;&#039;w&#039;&#039;&#039;) = &#039;&#039;f&#039;&#039;(&#039;&#039;&#039;v&#039;&#039;&#039;)(&#039;&#039;&#039;w&#039;&#039;&#039;)}}.  Defining the transpose of this bilinear form as the bilinear form &amp;lt;sup&amp;gt;t&amp;lt;/sup&amp;gt;&#039;&#039;B&#039;&#039; induced by the transpose {{nowrap|1=&amp;lt;sup&amp;gt;t&amp;lt;/sup&amp;gt;&#039;&#039;f&#039;&#039; : &#039;&#039;V&#039;&#039;&amp;lt;sup&amp;gt;∗∗&amp;lt;/sup&amp;gt; → &#039;&#039;V&#039;&#039;&amp;lt;sup&amp;gt;∗&amp;lt;/sup&amp;gt;}}, i.e. {{nowrap|1=&amp;lt;sup&amp;gt;t&amp;lt;/sup&amp;gt;&#039;&#039;B&#039;&#039;(&#039;&#039;&#039;w&#039;&#039;&#039;, &#039;&#039;&#039;v&#039;&#039;&#039;) = &amp;lt;sup&amp;gt;t&amp;lt;/sup&amp;gt;&#039;&#039;f&#039;&#039;(&#039;&#039;&#039;w&#039;&#039;&#039;)(&#039;&#039;&#039;v&#039;&#039;&#039;)}}, we find that {{nowrap|1=&#039;&#039;B&#039;&#039;(&#039;&#039;&#039;v&#039;&#039;&#039;,&#039;&#039;&#039;w&#039;&#039;&#039;) = &amp;lt;sup&amp;gt;t&amp;lt;/sup&amp;gt;&#039;&#039;B&#039;&#039;(&#039;&#039;&#039;w&#039;&#039;&#039;,&#039;&#039;&#039;v&#039;&#039;&#039;)}}.&lt;br /&gt;
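&lt;br /&gt;
Concretely, if the form is computed from a square matrix of coefficients, its transpose form is computed from the transposed matrix.  An illustrative Python check:&lt;br /&gt;

```python
# If B(v, w) is computed from a coefficient matrix M as the sum over
# i, j of v[i] * M[i][j] * w[j], then the transposed form uses M^T.
def bform(M, v, w):
    return sum(v[i] * M[i][j] * w[j]
               for i in range(len(M)) for j in range(len(M[0])))

M  = [[1, 2], [3, 4]]
MT = [[1, 3], [2, 4]]                  # the transpose of M
v, w = [5, 6], [7, 8]
assert bform(M, v, w) == bform(MT, w, v)   # B(v, w) equals tB(w, v)
```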
&lt;br /&gt;
=== Adjoint ===&lt;br /&gt;
{{distinguish|Hermitian adjoint}}&lt;br /&gt;
&lt;br /&gt;
If the vector spaces &#039;&#039;V&#039;&#039; and &#039;&#039;W&#039;&#039; have respective [[nondegenerate form|nondegenerate]] [[bilinear form]]s &#039;&#039;B&#039;&#039;&amp;lt;sub&amp;gt;&#039;&#039;V&#039;&#039;&amp;lt;/sub&amp;gt; and &#039;&#039;B&#039;&#039;&amp;lt;sub&amp;gt;&#039;&#039;W&#039;&#039;&amp;lt;/sub&amp;gt;, a concept closely related to the transpose – the &#039;&#039;adjoint&#039;&#039; – may be defined:&lt;br /&gt;
&lt;br /&gt;
If {{nowrap|1=&#039;&#039;f&#039;&#039; : &#039;&#039;V&#039;&#039; → &#039;&#039;W&#039;&#039;}} is a [[linear map]] between [[vector space]]s &#039;&#039;V&#039;&#039; and &#039;&#039;W&#039;&#039;, we define &#039;&#039;g&#039;&#039; as the &#039;&#039;adjoint&#039;&#039; of &#039;&#039;f&#039;&#039; if {{nowrap|1=&#039;&#039;g&#039;&#039; : &#039;&#039;W&#039;&#039; → &#039;&#039;V&#039;&#039;}} satisfies&lt;br /&gt;
:&amp;lt;math&amp;gt;B_V(v, g(w)) = B_W(f(v),w) \quad \forall\ v \in V, w \in W .&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
These bilinear forms define an [[isomorphism]] between &#039;&#039;V&#039;&#039; and &#039;&#039;V&#039;&#039;&amp;lt;sup&amp;gt;∗&amp;lt;/sup&amp;gt;, and between &#039;&#039;W&#039;&#039; and &#039;&#039;W&#039;&#039;&amp;lt;sup&amp;gt;∗&amp;lt;/sup&amp;gt;, resulting in an isomorphism between the transpose and adjoint of &#039;&#039;f&#039;&#039;.  The matrix of the adjoint of a map is the transposed matrix only if the [[basis (linear algebra)|bases]] are orthonormal with respect to their bilinear forms.  In this context, many authors use the term transpose to refer to the adjoint as defined here.&lt;br /&gt;
&lt;br /&gt;
The adjoint allows us to consider whether {{nowrap|1=&#039;&#039;g&#039;&#039; : &#039;&#039;W&#039;&#039; → &#039;&#039;V&#039;&#039;}} is equal to {{nowrap|1=&#039;&#039;f&#039;&#039;&amp;lt;sup&amp;gt;&amp;amp;thinsp;−1&amp;lt;/sup&amp;gt; : &#039;&#039;W&#039;&#039; → &#039;&#039;V&#039;&#039;}}.  In particular, this allows the [[orthogonal group]] over a vector space &#039;&#039;V&#039;&#039; with a quadratic form to be defined without reference to matrices (nor the components thereof) as the set of all linear maps {{nowrap|&#039;&#039;V&#039;&#039; → &#039;&#039;V&#039;&#039;}} for which the adjoint equals the inverse.&lt;br /&gt;
&lt;br /&gt;
Over a complex vector space, one often works with [[sesquilinear form]]s (conjugate-linear in one argument) instead of bilinear forms.  The [[Hermitian adjoint]] of a map between such spaces is defined similarly, and the matrix of the Hermitian adjoint is given by the conjugate transpose matrix if the bases are orthonormal.&lt;br /&gt;
&lt;br /&gt;
==Implementation of matrix transposition on computers==&lt;br /&gt;
&lt;br /&gt;
On a [[computer]], one can often avoid explicitly transposing a matrix in [[Random access memory|memory]] by simply accessing the same data in a different order.  For example, [[software libraries]] for [[linear algebra]], such as [[BLAS]], typically provide options to specify that certain matrices are to be interpreted in transposed order to avoid the necessity of data movement.&lt;br /&gt;
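&lt;br /&gt;
For example, a matrix–vector product with &#039;&#039;&#039;A&#039;&#039;&#039;&amp;lt;sup&amp;gt;T&amp;lt;/sup&amp;gt; can be computed by swapping the roles of the two indices, leaving the stored data untouched.  An illustrative Python sketch of this idea (the flag is only a conceptual stand-in, not the BLAS interface itself):&lt;br /&gt;

```python
# To apply A^T to a vector, swap the index roles instead of moving data.
def matvec(A, x, trans=False):
    m, n = len(A), len(A[0])
    if trans:                          # A^T x, reading A exactly as stored
        return [sum(A[i][j] * x[i] for i in range(m)) for j in range(n)]
    return [sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]

A = [[1, 2],
     [3, 4],
     [5, 6]]
assert matvec(A, [1, 1, 1], trans=True) == [9, 12]   # A^T x without copying A
```

In the real BLAS interface this role is played by the TRANS argument of routines such as &#039;&#039;gemv&#039;&#039; and &#039;&#039;gemm&#039;&#039;.&lt;br /&gt;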
&lt;br /&gt;
However, there remain a number of circumstances in which it is necessary or desirable to physically reorder a matrix in memory to its transposed ordering.  For example, with a matrix stored in [[row-major order]], the rows of the matrix are contiguous in memory and the columns are discontiguous.  If repeated operations need to be performed on the columns, for example in a [[fast Fourier transform]] algorithm, transposing the matrix in memory (to make the columns contiguous) may improve performance by increasing [[memory locality]].&lt;br /&gt;
&lt;br /&gt;
{{Main|In-place matrix transposition}}&lt;br /&gt;
&lt;br /&gt;
Ideally, one might hope to transpose a matrix with minimal additional storage.  This leads to the problem of transposing an &#039;&#039;n&#039;&#039;&amp;amp;nbsp;&amp;amp;times;&amp;amp;nbsp;&#039;&#039;m&#039;&#039; matrix [[in-place]], with [[Big O notation|O(1)]] additional storage, or at least with additional storage much less than &#039;&#039;mn&#039;&#039;.  For &#039;&#039;n&#039;&#039;&amp;amp;nbsp;≠&amp;amp;nbsp;&#039;&#039;m&#039;&#039;, this involves a complicated [[permutation]] of the data elements that is non-trivial to implement in-place.  Therefore, efficient [[in-place matrix transposition]] has been the subject of numerous research publications in [[computer science]], starting in the late 1950s, and several algorithms have been developed.&lt;br /&gt;
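&lt;br /&gt;
For the square case {{nowrap|1=&#039;&#039;n&#039;&#039; = &#039;&#039;m&#039;&#039;}}, the permutation decomposes into pairwise swaps across the main diagonal.  An illustrative Python sketch for a matrix stored flat in row-major order (the rectangular case, which requires following permutation cycles, is not shown):&lt;br /&gt;

```python
# Square in-place transpose: swap each above-diagonal entry with its
# mirror below the diagonal, using O(1) extra storage.
def transpose_square_inplace(a, n):
    # a is an n x n matrix stored flat in row-major order: a[i*n + j]
    for i in range(n):
        for j in range(i + 1, n):
            a[i*n + j], a[j*n + i] = a[j*n + i], a[i*n + j]

a = [1, 2, 3,
     4, 5, 6,
     7, 8, 9]
transpose_square_inplace(a, 3)
# a is now 1, 4, 7, 2, 5, 8, 3, 6, 9
```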
&lt;br /&gt;
==See also==&lt;br /&gt;
*[[Invertible matrix]]&lt;br /&gt;
*[[Moore–Penrose pseudoinverse]]&lt;br /&gt;
*[[Projection (linear algebra)]]&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&lt;br /&gt;
{{reflist}}&lt;br /&gt;
&lt;br /&gt;
==External links==&lt;br /&gt;
*[http://ocw.mit.edu/OcwWeb/Mathematics/18-06Spring-2005/VideoLectures/detail/lecture05.htm MIT Linear Algebra Lecture on Matrix Transposes]&lt;br /&gt;
*[http://mathworld.wolfram.com/Transpose.html Transpose], mathworld.wolfram.com&lt;br /&gt;
*[http://planetmath.org/encyclopedia/Transpose.html Transpose], planetmath.org&lt;br /&gt;
*[http://khanexercises.appspot.com/video?v=2t0003_sxtU Khan Academy introduction to matrix transposes]&lt;br /&gt;
&lt;br /&gt;
{{linear algebra}}&lt;br /&gt;
&lt;br /&gt;
[[Category:Matrices]]&lt;br /&gt;
[[Category:Abstract algebra]]&lt;br /&gt;
[[Category:Linear algebra]]&lt;br /&gt;
&lt;br /&gt;
[[de:Matrix (Mathematik)#Die transponierte Matrix]]&lt;/div&gt;</summary>
		<author><name>209.147.130.100</name></author>
	</entry>
</feed>