Unitary Spaces

\( \newcommand{\ket}[1]{\mbox{$|#1\rangle$}} \newcommand{\bra}[1]{\mbox{$\langle #1|$}} \newcommand{\cn}{\mbox{$\mathbb{C}$}} \newcommand{\rn}{\mbox{$\mathbb{R}$}} \newcommand{\knull}{\mbox{$|{\mathbf 0}\rangle $}} \newcommand{\bnull}{\mbox{$\langle {\mathbf 0}|$}} \newcommand{\sprod}[2]{\mbox{$\langle #1 | #2 \rangle $}} \newcommand{\ev}[3]{\mbox{$\langle #1 | #2 | #3 \rangle $}} \newcommand{\av}[1]{\langle #1 \rangle} \newcommand{\norm}[1]{\mbox{$\Vert #1 \Vert $}} \newcommand{\diad}[2]{\mbox{$|#1\rangle \langle #2 | $}} \newcommand{\tr}[1]{\mbox{$Tr\{#1\}$}} \newcommand{\sx}{\mbox{$\hat{\sigma}_x$}} \newcommand{\sy}{\mbox{$\hat{\sigma}_y$}} \newcommand{\sz}{\mbox{$\hat{\sigma}_z$}} \newcommand{\smx}{\mbox{$\left( \begin{array}{cc} 0 & 1 \\ 1 & 0\end{array} \right)$}} \newcommand{\smy}{\mbox{$\left( \begin{array}{cc} 0 & -i \\ i & 0\end{array} \right)$}} \newcommand{\smz}{\mbox{$\left( \begin{array}{cc} 1 & 0 \\ 0 & -1\end{array} \right)$}} \newcommand{\mat}[4]{\mbox{$\left( \begin{array}{cc} #1 & #2 \\ #3 & #4 \end{array} \right)$}} \newcommand{\col}[2]{\mbox{$\left( \begin{array}{c} #1 \\ #2 \end{array} \right)$}} \)

The quantum mechanical state of a system (to be explained in more detail later) belongs to a Hilbert space. A full treatment of Hilbert spaces is beyond the scope of these lectures, so the special case of unitary spaces will be considered instead. Unitary spaces are finite-dimensional Hilbert spaces and as such provide a useful stepping stone towards the general theory. Moreover, most of the spaces encountered when treating spin 1/2 systems are finite dimensional, i.e. unitary spaces, so they will suffice for our needs.

A note on notation: the Dirac bra-ket notation, in which a vector is denoted by \(\ket{a}\), will be used throughout these notes for consistency. As the text progresses it will become apparent why this notation is so useful. Operators will consistently be denoted with a hat over their symbol, for example \(\hat{A}\).

Unitary Spaces

A unitary space \(U\) is a vector space over the field of complex numbers with an inner product. For completeness and as a reminder to the reader we give the full definition of a complex vector space. For every two vectors \(\ket{u}\) and \(\ket{v}\) in the space \(U\) (\(\ket{u},\ket{v}\in U\)) and every two complex numbers \(c\) and \(d\) (\(c,d\in \cn \)) the following statements have to be satisfied:
  1. \(\ket{u}+\ket{v}\in U\) (the vector space is closed under vector addition).
  2. \(c\cdot \ket{u}\in U\) (the vector space is closed under scalar multiplication).
  3. \(\ket{u}+\ket{v}=\ket{v}+\ket{u}\) and \(\ket{u}+(\ket{v}+\ket{w})=(\ket{u}+\ket{v})+\ket{w}\) (commutativity and associativity).
  4. there exists a neutral element for vector addition, the null vector \(\knull\), such that \(\knull+\ket{u}=\ket{u}\).
  5. there exists an additive inverse, \(-\ket{u}\), such that \(\ket{u}+(-\ket{u})=\knull\).
  6. \(c\cdot (\ket{u}+\ket{v})=c\cdot \ket{u}+c\cdot\ket{v}\) and \(c\cdot (d\cdot \ket{v})=(cd)\cdot \ket{v}\) (distributivity and associativity).
  7. \(1\cdot \ket{v}=\ket{v}\).
What distinguishes a unitary space from a plain complex vector space is the inner product, defined as a map \(\langle\;|\;\rangle :U\times U\rightarrow \cn\) that maps a pair of vectors into a scalar and satisfies the following conditions for all \(\ket{u},\ket{v},\ket{w} \in U\) and \(c \in \cn\):
  1. \(\sprod{u}{v}=\sprod{v}{u}^*\) (conjugate symmetry).
  2. \(\bra{u}(c\ket{v}+\ket{w})=c\sprod{u}{v}+\sprod{u}{w}\) (linearity).
  3. \(\sprod{u}{u}\in \rn\), \(\sprod{u}{u}\geq 0\), and \(\sprod{u}{u}=0\) only for \(\ket{u}=\knull\) (positive definiteness).
The scalar product equips the space with a geometry since it enables us to define the norm (length) of a vector as \(\norm{u}=\sqrt{\sprod{u}{u}}\) and the distance between two vectors \(\ket{u}\) and \(\ket{v}\) as \(\norm{u-v}\). A subspace of a unitary space is any subset of vectors that itself forms a unitary space (it is closed under vector addition, contains the null vector, etc.). The span of a set of vectors \(\{\ket{a_i}|i=1,\ldots,M\}\) is the subspace of all their linear combinations, \(\mathrm{span}\{\ket{a_i}\}=\{\sum_{i=1}^M c^i\ket{a_i}\,|\,c^i\in \cn \}\). The span operation thus constructs subspaces by taking all possible linear combinations of a given set of vectors.
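As an illustrative sketch (not part of the formal development), the inner-product axioms and the induced norm and distance can be checked numerically. The vectors below are arbitrary choices in \(\cn^3\); NumPy's `np.vdot` conjugates its first argument, matching the convention used here.

```python
import numpy as np

# Two arbitrary vectors in C^3 (coordinate columns) and a scalar
u = np.array([1 + 1j, 0, 2j])
v = np.array([3, -1j, 1])
w = np.array([1j, 1, 0])
c = 2 - 1j

def inner(a, b):
    """Inner product on C^n: conjugate the first argument."""
    return np.vdot(a, b)

# Conjugate symmetry: <u|v> = <v|u>*
assert np.isclose(inner(u, v), np.conj(inner(v, u)))

# Linearity in the second argument: <u|c v + w> = c<u|v> + <u|w>
assert np.isclose(inner(u, c * v + w), c * inner(u, v) + inner(u, w))

# Positive definiteness: <u|u> is real and non-negative
assert inner(u, u).imag == 0 and inner(u, u).real >= 0

# Norm of a vector and distance between two vectors
norm = lambda a: np.sqrt(inner(a, a).real)
length_u = norm(u)
dist_uv = norm(u - v)
```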

Orthonormal bases of a unitary space

For each unitary space there exists a set of \(n\) linearly independent vectors \(\ket{e_i}\), a basis, such that every vector \(\ket{u}\) in that space can be expressed as a linear combination of those vectors, \begin{equation} \ket{u}=\sum_{i=1}^n u^i\ket{e_i}, \label{basis} \end{equation} i.e. they span the whole space. Equivalently, a basis is a minimal set of vectors that spans the space. While a basis is not uniquely defined (there is an infinite number of sets of \(n\) linearly independent vectors that span the whole space), the number of vectors in a basis is the same for all of them and is a characteristic of the space. This number, \(n\), defines the dimension of the space, and from here on we will denote \(n\)-dimensional unitary spaces by \(U_n\). One can always add vectors to a basis set; the expanded set will still span the whole space, but the expansion coefficients \(u^i\) will no longer be unique. Such sets are called overcomplete. A very important class of bases are the orthonormal bases, for which \(\sprod{e_i}{e_j}=\delta_{ij}\), where the Kronecker delta is defined as \(\delta_{ij}=0\) for \(i\neq j\) and \(\delta_{ii}=1\). Orthonormal bases are important since they reflect the geometry of the space: each vector in an orthonormal basis has unit length and is orthogonal to every other vector in the basis. In these lectures we will only use orthonormal bases. The coefficients in the expansion \ref{basis} are easily calculated for an orthonormal basis since \begin{equation} \sprod{e_j}{u}=\sum_{i=1}^n u^i\sprod{e_j}{e_i}=\sum_{i=1}^n u^i\delta_{ji}=u^j. \label{represent} \end{equation} The coefficients \(u^j\) are the magnitudes of the projections of the vector onto the basis vectors. Expression \ref{represent} holds only for orthonormal bases. It is very important to keep in mind that the choice of basis is not unique and the same vector will have different coordinates in different bases.
The coordinates of a vector expressed in a given basis can be written as a column: \begin{equation} \left( \begin{array}{c} u^1\\ u^2\\ \vdots \\ u^n \end{array} \right). \end{equation} It can be shown that every unitary space of dimension \(n\) is isomorphic (through the above coordinate correspondence) to the space of \(n\)-tuple columns of complex numbers, \(\cn^n\), with the scalar product \(\sprod{u}{v}\) defined as \begin{equation} \sprod{u}{v}=\sum_{i=1}^n (u^i)^*v^i=((u^1)^* \; (u^2)^* \ldots (u^n)^*) \mbox{$\left( \begin{array}{c} v^1\\ v^2\\ \vdots \\ v^n \end{array} \right)$} .\label{scalarprod} \end{equation} It is said that the columns form a representation of the abstract unitary space. The representation is basis dependent - if we change the basis vectors, the coefficients in the expansion \ref{basis} of the same abstract vector will change. Every orthonormal basis is, by definition, represented by the so called standard basis in \(\cn^n\) \begin{equation} \mbox{$\left( \begin{array}{c} 1\\ 0\\ \vdots \\ 0 \end{array} \right)$}, \mbox{$\left( \begin{array}{c} 0\\ 1\\ \vdots \\ 0 \end{array} \right)$}, \ldots , \mbox{$\left( \begin{array}{c} 0\\ 0\\ \vdots \\ 1 \end{array} \right)$}. \end{equation} We will often work with the representation of a given unitary space, i.e. the columns of expansion coefficients \(u^i\). For unitary spaces it is easy to lose sight of the distinction between the abstract space and its representation, but this distinction is very important and non-trivial in Hilbert spaces.
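To make the projection formula \ref{represent} and the column representation concrete, here is a minimal numerical sketch; the particular orthonormal basis of \(\cn^2\) and the vector are arbitrary illustrative choices, not taken from the text.

```python
import numpy as np

# An orthonormal basis of C^2 (not the standard one)
e1 = np.array([1, 1j]) / np.sqrt(2)
e2 = np.array([1, -1j]) / np.sqrt(2)

# Orthonormality check: <e_i|e_j> = delta_ij
for a in (e1, e2):
    for b in (e1, e2):
        expected = 1.0 if a is b else 0.0
        assert np.isclose(np.vdot(a, b), expected)

u = np.array([2 + 1j, 3j])

# Coordinates by projection: u^j = <e_j|u>
u1, u2 = np.vdot(e1, u), np.vdot(e2, u)

# The expansion sum_j u^j |e_j> reconstructs |u>
assert np.allclose(u1 * e1 + u2 * e2, u)
```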

The Dual (Adjoint) Space

It is obvious that the rows \((u_1 \; u_2 \ldots u_n)\) also form an \(n\)-dimensional unitary space. This is the representation of the space dual to \(U_n\), which can be thought of as the space of scalar products on \(U_n\). The dual space is usually defined as the space of linear mappings from some space to scalars. For unitary spaces the dual has a nice form and a geometric meaning stemming from the scalar product. Vectors in the dual space are denoted by bras, \(\bra{u}\). If we fix the vector \(\ket{u}\) (the row), equation \ref{scalarprod} defines a linear map from the vectors of \(U_n\) to complex numbers. This linear mapping, denoted \(\bra{u}\), is the vector dual to the ket \(\ket{u}\) - it is the "scalar product" corresponding to the vector \(\ket{u}\). We now see the meaning of the Dirac notation: bras represent the dual vectors while kets represent the vectors from the original unitary space. If straight lines in the notation come together, \(\bra{u}\ket{v}\), a scalar product is implied, \(\sprod{u}{v}\), and the lines join. This rule relates to the so called Einstein notation in which a doubled index is assumed to be summed over, so if in a given representation we write the expression \(u_i v^i\), with \(u_i=(u^i)^*\), a sum is assumed over the index \(i\), resulting in the scalar product of the kets \(\ket{u}\) and \(\ket{v}\). The coordinates of kets and bras are called contravariant and covariant vectors, respectively. This nomenclature is due to their different transformation properties under basis changes, derived in the next section. Covariant and contravariant vectors are distinguished by a lowered, \(u_i\), or a raised, \(u^i\), index.
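The correspondence between a ket column and its dual bra row can be sketched numerically: in the coordinate representation the bra is the conjugated row, and "joining the lines" is the contraction \(u_i v^i\). The vectors below are arbitrary illustrative choices.

```python
import numpy as np

u = np.array([1j, 2, 1 - 1j])   # ket |u> as a column of coordinates u^i
v = np.array([1, 1j, 0])        # ket |v>

# The bra <u| is represented by the conjugated row: u_i = (u^i)*
bra_u = u.conj()

# The contraction u_i v^i (row times column) is the scalar product <u|v>
assert np.isclose(bra_u @ v, np.vdot(u, v))
```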

Transformation properties of representations of bras and kets

Let us select a basis \(\{\ket{e_i}|i=1,\ldots,n\}\) in the space \(U_n\). The vectors dual to these basis vectors form a basis in the dual space, \(\{\bra{e_i}|i=1,\ldots,n\}\), called the biorthogonal basis since \(\sprod{e_i}{e_j}=\delta_{ij}\). We can change to a different basis \(\{ \ket{\overline{e}_i}|i=1,\ldots,n \}\) by a linear transformation \begin{eqnarray} \ket{\overline{e}_i}&=&\sum_{j=1}^n R_i^{\;j}\ket{e_j} \nonumber \\ \ket{e_i}&=&\sum_{j=1}^n (R^{-1})_i^{\; j} \ket{\overline{e}_j}. \end{eqnarray} The transformation to the new biorthogonal basis, \begin{eqnarray} \bra{\overline{e}_i}&=&\sum_{j=1}^n S_i^{\; j} \bra{e_j} \nonumber \\ \bra{e_i}&=&\sum_{j=1}^n (S^{-1})_i^{\; j} \bra{\overline{e}_j}, \end{eqnarray} can be obtained from the following identities: \begin{eqnarray} \delta_{ij}=\sprod{\overline{e}_i}{\overline{e}_j}=\sum_{k,l=1}^n S_i^{\; k} R_j^{\; l}\sprod{e_k}{e_l}=\sum_{k,l=1}^n S_i^{\; k} R_j^{\; l}\delta_{kl}=\sum_{k=1}^n S_i^{\; k} R_j^{\; k}\Rightarrow S=(R^{-1})^T. \label{contra} \end{eqnarray} The representation of a vector in the original space then changes as \begin{eqnarray} \overline{u}^i&=&\sprod{\overline{e}_i}{u}=\bra{\overline{e}_i}\sum_{j=1}^n u^j\ket{e_j}=\bra{\overline{e}_i}\sum_{j=1}^n u^j\sum_{k=1}^n (R^{-1})_j^{\; k} \ket{\overline{e}_k}\nonumber \\ &=&\sum_{j=1}^n u^j\sum_{k=1}^n (R^{-1})_j^{\; k} \sprod{\overline{e}_i}{\overline{e}_k}=\sum_{j=1}^n u^j\sum_{k=1}^n (R^{-1})_j^{\; k}\delta_{ik}=\sum_{j=1}^n ((R^{-1})^T)^{i}_{\;j} u^j. \end{eqnarray} We see that the vector representation changes with the matrix \(S=(R^{-1})^T\) and we call this rule of transformation a contravariant change (opposite to the basis change).
Using \ref{contra} we can show in a similar way that for a dual vector \(\bra{v}=\sum_{i=1}^n v_i\bra{e_i}\) \begin{eqnarray} \overline{v}_i&=&\sprod{v}{\overline{e}_i}=\sum_{j=1}^n v_j\sprod{e_j}{\overline{e}_i}=\sum_{j=1}^n v_j\sum_{k=1}^n (S^{-1})_j^{\; k} \sprod{\overline{e}_k}{\overline{e}_i}=\sum_{j=1}^n ((S^{-1})^T)_i^{\;j} v_j=\sum_{j=1}^n R_i^{\;j} v_j. \end{eqnarray} The representations of dual vectors are called covariant vectors since they transform "along" with the basis in the original space. We see that even though the dual space is isomorphic to the original space (they have the same dimension), the two can be distinguished by their transformation properties.
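These two transformation rules can be sketched numerically. The change-of-basis matrix \(R\) below is a randomly generated invertible matrix (an assumption made purely for illustration); the check shows that with \(S=(R^{-1})^T\) the contraction of covariant with contravariant coordinates is basis independent.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3

# A generic invertible change-of-basis matrix R (rows define the new basis)
R = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
S = np.linalg.inv(R).T          # S = (R^{-1})^T

u = rng.normal(size=n) + 1j * rng.normal(size=n)  # contravariant ket coordinates u^i
v = rng.normal(size=n) + 1j * rng.normal(size=n)  # covariant bra coordinates v_i

u_bar = S @ u                   # contravariant change (opposite the basis change)
v_bar = R @ v                   # covariant change (along with the basis change)

# The contraction v_i u^i is invariant under the basis change
assert np.isclose(v_bar @ u_bar, v @ u)
```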

Example

Jumping ahead a bit, let us examine what happens to the representations of vectors and their duals in \(U_2\), the unitary space of a spin 1/2 particle. We pick the basis to be the two eigenvectors of the z component of the spin operator, \(\{\ket{+}_z,\ket{-}_z\}\). In this basis, the vector \begin{equation} \ket{u}=\frac{1}{\sqrt{2}}[i\ket{+}_z+\ket{-}_z] \end{equation} and its dual are represented by the following column and row, respectively, \begin{equation} \ket{u}\rightarrow_z \frac{1}{\sqrt{2}}\col{i}{1} \;\;\;\;\;, \;\;\;\;\; \bra{u}\rightarrow_z \frac{1}{\sqrt{2}}(-i \;\; 1), \end{equation} where the dual vector is represented in the biorthogonal basis \(\{\bra{+}_z,\bra{-}_z\}\). We did not use the equal sign since the column on the right hand side changes with the representation used; the basis of the representation is indicated by the subscript under the arrow (here \(z\)). Let us now change to a new basis \(b\) defined by \begin{eqnarray} \ket{+}_b&=&\frac{1}{\sqrt{2}}(-i\ket{+}_z+i\ket{-}_z) \nonumber \\ \ket{-}_b&=&\frac{1}{\sqrt{2}}(\ket{+}_z+\ket{-}_z), \end{eqnarray} so the \(R\) and \(S\) matrices are \begin{equation} R=\frac{1}{\sqrt{2}}\mat{-i}{i}{1}{1}, \;\;\;\; S=(R^{-1})^T=\frac{1}{\sqrt{2}}\mat{i}{-i}{1}{1}. \end{equation} After acting with \(S\) and \(R\) on their respective coordinates, \(\ket{u}\) and \(\bra{u}\) are represented in the new basis by \begin{eqnarray} \ket{u}\rightarrow_b \frac{1}{2}\col{-1-i}{1+i} \;\;\;\;\;, \;\;\;\;\; \bra{u}\rightarrow_b \frac{1}{2}(-1+i \;\; 1-i)=\frac{1}{2}\col{-1-i}{1+i}^\dagger. \end{eqnarray}
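The arithmetic of this example can be double-checked with a short numerical sketch, representing \(\ket{+}_z\) and \(\ket{-}_z\) by the standard basis columns of \(\cn^2\).

```python
import numpy as np

# |u> in the z basis {|+>_z, |->_z}
u = np.array([1j, 1]) / np.sqrt(2)
bra_u = u.conj()

# Basis change matrix R from the text, and S = (R^{-1})^T
R = np.array([[-1j, 1j], [1, 1]]) / np.sqrt(2)
S = np.linalg.inv(R).T

# S agrees with the closed form given in the text
assert np.allclose(S, np.array([[1j, -1j], [1, 1]]) / np.sqrt(2))

# Representations in the new basis b
u_b = S @ u                      # contravariant: (1/2)(-1-i, 1+i)
bra_u_b = R @ bra_u              # covariant:     (1/2)(-1+i, 1-i)

# The transformed bra is still the conjugate of the transformed ket
assert np.allclose(bra_u_b, u_b.conj())
```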