## Statistical Ensembles

In classical mechanics, statistics is not a fundamental requirement of the theory. One could take a single measurement and compare it to the classical prediction (for example, a single measurement of the position of a golf ball at a certain time could be compared with the prediction for its trajectory). Often this gives a poor estimate of the measured quantity, but that is not a consequence of the predictions of classical mechanics; rather, it follows from the fact that there is no such thing as a noiseless measurement. In quantum mechanics things are quite different. Quantum mechanics only predicts probabilities, so the requirement for statistics is inherent to the theory rather than a consequence of the imperfection of the measurement process: even a noiseless quantum measurement would produce a random outcome (although there are cases of quantum measurements that give sharp values). A statistical ensemble is a set of identical objects whose number is sufficiently large for statistics to be applicable. A spatial ensemble is a set of many identical objects simultaneously being measured in a certain volume of space. In a temporal ensemble the same object (or the same kind of object) is repeatedly measured under the same conditions at different times. In practice a mix of the two, a spatio-temporal ensemble, is most often used: a number of identical objects is simultaneously and repeatedly measured under identical conditions.

## The State of the System

In classical mechanics one usually works with either the configuration space or the phase space of a given system. For a single point particle the state is given by a pair \(\{x,\dot{x}\}\) in the Cartesian product of the position and velocity spaces. Analogously, one can use phase space, consisting of position-momentum pairs \(\{x,p\}\). Knowledge of the pair of *observable* quantities \(\{x,p\}\) is all we need, or can know, about the state of the system. It is also assumed that in principle (ignoring noise) one could measure these quantities perfectly to establish the state of the system: the position and momentum "exist". The situation is quite different in quantum mechanics. The state of the system is described by a ray in a Hilbert space specific to the system. Each ray is usually represented by one normalized state vector \(\ket{\psi}\), \(\sprod{\psi}{\psi}=1\). All vectors that differ only by a phase factor, \(e^{i\phi}\ket{\psi}\), represent the same state, since they define the same ray and have the same norm. This state vector is not analogous to the classical state described by a pair of *observables* independent of any other mathematical object. An example of a quantum mechanical state space is \(U_2\), the two-dimensional state space of a spin 1/2 system, a proton for example. An arbitrary state in this space can be written in terms of the two orthogonal basis vectors \(\ket{+},\ket{-}\) as \(\ket{\psi}=\alpha \ket{+}+\beta \ket{-}\), with \(|\alpha|^2+|\beta|^2=1\).
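The normalization condition and the irrelevance of a global phase can be illustrated numerically. Below is a minimal sketch using numpy, representing \(\ket{+}\) and \(\ket{-}\) as the standard basis vectors of a two-dimensional complex space (an illustrative choice of representation, not part of the formalism):

```python
import numpy as np

# Represent |+> and |-> as the standard basis of a 2D complex space.
plus = np.array([1.0, 0.0], dtype=complex)
minus = np.array([0.0, 1.0], dtype=complex)

# An arbitrary normalized state |psi> = alpha|+> + beta|->.
alpha, beta = 0.6, 0.8j
psi = alpha * plus + beta * minus

# Normalization: <psi|psi> = |alpha|^2 + |beta|^2 = 1.
print(np.vdot(psi, psi).real)  # 1.0

# A global phase factor e^{i phi} defines the same ray: all outcome
# probabilities |<+|psi>|^2 etc. are unchanged.
psi_phase = np.exp(1j * 0.7) * psi
print(abs(np.vdot(plus, psi))**2, abs(np.vdot(plus, psi_phase))**2)  # equal
```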

## Observables

Experimentally measurable quantities, observables, are described by Hermitian operators in quantum mechanics. Quantum mechanics postulates that the only possible outcome values for a measurement of an observable are the eigenvalues of the corresponding operator. Hermitian operators have real eigenvalues, so these can be related to measurement outcomes. For example, the z-component of the spin operator for a spin 1/2 system is: \begin{equation} \hat{s}_z=\frac{1}{2}\diad{+}{+}-\frac{1}{2}\diad{-}{-},\; \mbox{or in matrix representation}, \; \frac{1}{2}\left( \begin{array}{cc} 1 & 0 \\ 0 & -1 \end{array} \right). \end{equation} This means that the outcome of a measurement of the z-component of the spin will always be either 1/2 or -1/2 (as in the famous Stern-Gerlach experiment).

## Example: Spin 1/2 y-component of the spin

In the basis of the eigenstates of the z-component of the spin, \(\{ \ket{+}, \ket{-}\}\), the y-component of the spin is represented by the matrix \begin{equation} \hat{s}_y=\frac{1}{2}\smy. \end{equation} This is a Hermitian matrix since \begin{equation} \hat{s}_y=\frac{1}{2}\smy=\hat{s}^\dagger_y. \end{equation} Its eigenvalues are +1/2 and -1/2 since \begin{equation} \det (\lambda \hat{I}-\hat{s}_y)=\frac{1}{4}\det {\mbox{$\left( \begin{array}{cc}2 \lambda & i \\ -i & 2 \lambda\end{array} \right)$}}=\frac{1}{4}(4\lambda^2-1)=0\Rightarrow \lambda=\pm \frac{1}{2}. \end{equation} Its normalized eigenvectors are \begin{equation} \frac{1}{\sqrt{2}}(\ket{+}-i\ket{-})\longrightarrow\frac{1}{\sqrt{2}}\col{1}{-i} \;\;(\lambda=-\tfrac{1}{2}), \qquad \frac{1}{\sqrt{2}}(\ket{+}+i\ket{-})\longrightarrow\frac{1}{\sqrt{2}}\col{1}{i} \;\;(\lambda=+\tfrac{1}{2}). \end{equation}

## Measurement Outcome Probabilities
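Before turning to outcome probabilities, the eigen-decomposition of \(\hat{s}_y\) worked out above can be checked numerically. A minimal sketch using numpy, with \(\hat{s}_y\) written in the \(\{\ket{+},\ket{-}\}\) matrix representation given above:

```python
import numpy as np

# s_y in the {|+>, |->} basis: (1/2) * [[0, -i], [i, 0]].
s_y = 0.5 * np.array([[0, -1j], [1j, 0]])

# Hermiticity check: s_y equals its conjugate transpose.
assert np.allclose(s_y, s_y.conj().T)

# np.linalg.eigh returns eigenvalues in ascending order: -1/2, +1/2.
vals, vecs = np.linalg.eigh(s_y)
print(vals.round(3))  # [-0.5  0.5]

# The numerical eigenvector for lambda = -1/2 matches (|+> - i|->)/sqrt(2)
# up to a phase: the overlap with the analytic vector has modulus 1.
v_minus = np.array([1, -1j]) / np.sqrt(2)
print(abs(np.vdot(v_minus, vecs[:, 0])).round(3))  # 1.0
```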

Quantum mechanics predicts the *probability* of possible measurement outcomes. This is one of the crucial differences between classical and quantum mechanics. While we know that we always measure probability distributions (the outcome of an experiment is never exactly the same), in classical mechanics we still expect that behind these statistics there is some sharply defined deterministic value. While one is aware that measurements are needed to compare classical mechanics to reality, this requirement is not built into the theory. It is simply assumed that we can measure the coordinates of a system, and nothing special is needed to compare those measurements to the predictions of classical mechanics. One of the most striking differences in quantum mechanics is that measurement is built into the theory, even though it is never precisely defined, and this issue is still a topic of debate. The probability, \(p(\lambda,\psi,\hat{A})\), that for a system in the state \(\ket{\psi}\) a measurement of the observable \(\hat{A}\) will yield as a result the eigenvalue \(\lambda\) of \(\hat{A}\) is \begin{equation} p(\lambda,\psi,\hat{A})=\norm{\hat{P}_\lambda\ket{\psi}}^2, \label{outcome_probability} \end{equation} where \(\hat{P}_\lambda\) is the projection operator onto the eigenspace corresponding to the eigenvalue \(\lambda\). A more familiar form of this equation can be easily derived if we assume that the eigenvalue in question is non-degenerate and corresponds to a single vector \(\ket{\phi}\Rightarrow \hat{P}_\lambda=\diad{\phi}{\phi}\): \begin{equation} p(\lambda,\psi,\hat{A})=\norm{\diad{\phi}{\phi}\ket{\psi}}^2=\ev{\psi}{\diad{\phi}{\phi}}{\psi}=|\sprod{\phi}{\psi}|^2. \label{transition} \end{equation} This is the well known formula for the "transition" probability from the state \(\ket{\psi}\) to the state \(\ket{\phi}\). The word "transition" can be misleading: Equation \ref{transition} predicts the *probability* that we will measure a certain outcome. We only know that, given the state \(\ket{\psi}\), the probability that in repeated measurements we obtain the value \(\lambda\) for \(\hat{A}\) is given by \ref{transition}.

Let us, for example, consider a spatial ensemble of \(N\) spins, with \(N\) large, prepared in the state \(\ket{\psi}=\alpha \ket{+}+\beta \ket{-}\). If we measure the z component of each spin, \(|\alpha|^2N\) of them will have a z component of \(1/2\) while \(|\beta|^2N\) of them will have a z component of \(-1/2\), since \(|\sprod{+}{\psi}|^2=|\alpha|^2\) and \(|\sprod{-}{\psi}|^2=|\beta|^2\). We do not know which spins will have which value, but we know how often, on average, we will measure \(1/2\) or \(-1/2\). Equivalently, we could prepare a single spin \(N\) times in the same state \(\ket{\psi}\) and immediately measure the z component of the spin. Of those measurements, \(|\alpha|^2N\) would result in a z component of \(1/2\) and \(|\beta|^2N\) would give a z component of \(-1/2\).

Often, instead of measuring individual outcomes for each identical system, we simply measure the average of some observable over the whole ensemble. For example, we measure the magnetization of a sample, which is the average of the spin magnetic moment over a large set of spins; we do not measure each individual spin and then take the average. Let us examine how such averages are calculated in quantum mechanics. Given an observable \begin{equation} \hat{H}=\sum_{i=1}^m h_i \hat{P}_i \end{equation} and a system prepared in the state \(\ket{\psi}\), the probability for \(\hat{H}\) to have the value \(h_i\) is \begin{equation} p(h_i,\psi,\hat{H})=\norm{\hat{P}_i\ket{\psi}}^2=\ev{\psi}{\hat{P}_i^\dagger \hat{P}_i}{\psi}=\ev{\psi}{\hat{P}_i^2}{\psi}=\ev{\psi}{\hat{P}_i}{\psi}. \label{expectation} \end{equation} Averaging over all possible values of \(\hat{H}\), the average (expectation) value of \(\hat{H}\) is \begin{equation} \av{\hat{H}}=\sum_{i=1}^m h_i p(h_i,\psi,\hat{H})=\sum_{i=1}^m h_i \ev{\psi}{\hat{P}_i}{\psi}=\langle \psi | \sum_{i=1}^m h_i \hat{P}_i |\psi \rangle, \end{equation} which leads us to the well known expectation value formula \begin{equation} \av{\hat{H}}=\ev{\psi}{\hat{H}}{\psi}. \label{expectation_value} \end{equation}
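Equations \ref{transition} and \ref{expectation_value} can be illustrated numerically for the z-component of a spin 1/2. The sketch below uses numpy; the sampling step at the end is an illustrative simulation of a temporal ensemble (the specific \(\alpha,\beta\) values and the random seed are arbitrary choices):

```python
import numpy as np

# s_z in the {|+>, |->} basis and the basis vectors themselves.
s_z = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)
plus = np.array([1, 0], dtype=complex)
minus = np.array([0, 1], dtype=complex)

# A normalized state |psi> = alpha|+> + beta|->.
alpha, beta = 0.6, 0.8
psi = alpha * plus + beta * minus

# Outcome probabilities from |<phi|psi>|^2 (non-degenerate eigenvalues).
p_up = abs(np.vdot(plus, psi))**2    # |alpha|^2 = 0.36
p_down = abs(np.vdot(minus, psi))**2 # |beta|^2  = 0.64

# Expectation value two ways: the probability-weighted average of the
# outcomes, and <psi|s_z|psi> directly. They agree.
avg_from_probs = 0.5 * p_up + (-0.5) * p_down
avg_direct = np.vdot(psi, s_z @ psi).real
print(avg_from_probs, avg_direct)  # both -0.14 (up to rounding)

# A simulated temporal ensemble of N preparations: the measured
# frequencies approach |alpha|^2 N and |beta|^2 N for large N.
rng = np.random.default_rng(0)
outcomes = rng.choice([0.5, -0.5], size=100_000, p=[p_up, p_down])
print(outcomes.mean())  # close to -0.14
```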