- The **scalar product of two** (complex) **functions** $f$ and $g$ of a real variable $x\in[a,b]$ is usually defined as $$\braket fg=\int_a^b\conj{f(x)}\,g(x)\,\d x.$$ This definition is consistent with our earlier idea of the scalar product of *vectors*, as it is linear in the second argument, $g$, and conjugate-linear in the first, $f$. Moreover, like the scalar product of vectors, it satisfies $$\braket ff=\int_a^b\abs{f(x)}^2\d x\ge0,$$ with equality holding if and only if $f(x)=0$ for all $x$.
- Note that if the range of integration is *infinite*, $f$ and $g$ must tend to zero sufficiently quickly as $\abs x\to\infty$ for the scalar products $\braket ff$, $\braket gg$, $\braket fg$ to exist. In this course, functions of this kind are said to be **square integrable**.
- Elsewhere, this set of functions may be referred to as $\mathcal{L}^2(\set R)$, where $\set R$ is the domain of the functions, i.e. the set of values of $x$ for which they are defined; $\mathcal{L}$ is a warning sign that the theory of Lebesgue integration *may* be needed to give meaning to the integrals.
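
This definition is easy to explore numerically. The sketch below (Python with NumPy; not part of the original notes, and the Gaussian test functions are arbitrary choices) approximates $\braket fg$ by the trapezoidal rule on a wide grid:

```python
import numpy as np

def braket(f, g, x):
    """Approximate <f|g> = integral of conj(f(x)) g(x) dx by the trapezoidal rule."""
    y = np.conj(f) * g
    return np.sum((y[:-1] + y[1:]) * np.diff(x)) / 2

# A wide grid: both test functions decay fast enough to be square integrable.
x = np.linspace(-10.0, 10.0, 20001)
f = np.exp(-x**2)            # an even, square-integrable test function
g = x * np.exp(-x**2)        # an odd, square-integrable test function

print(braket(f, f, x))   # integral of e^{-2x^2} dx = sqrt(pi/2) ≈ 1.2533
print(braket(f, g, x))   # ≈ 0: the integrand is odd, so f and g are orthogonal
```

The second result illustrates orthogonality (defined in the next bullet): here it follows simply because the product of an even and an odd function integrates to zero.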

- As for vectors, **orthogonality** of functions $f$ and $g$ is defined by their scalar product being zero, $\braket fg=0$.

**Examples**

- The wave functions of the quantum harmonic oscillator should be fairly well known to you. They take the form $u_n(x)=A_nH_n(x)e^{-x^2/2}$, where: $n=0$, $1$, $2$, ...; $H_n(x)$ is the $n$th Hermite polynomial; and the coefficient $A_n$ is chosen so that the functions are normalized to unity, $\braket{u_n}{u_n}=1$. You will be aware that they satisfy the orthogonality relation $$\braket{u_m}{u_n}= \intii\conj{u_m(x)}\,u_n(x)\,\d x =0 \quad\text{for}\quad m\ne n.$$
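
These relations can be checked by quadrature. A numerical sketch (assuming NumPy and SciPy are available; `eval_hermite` gives the physicists' Hermite polynomials, for which $A_n=(2^n\,n!\,\sqrt\pi)^{-1/2}$):

```python
import numpy as np
from math import factorial, pi, sqrt
from scipy.special import eval_hermite

def u(n, x):
    """Normalized oscillator eigenfunction u_n(x) = A_n H_n(x) e^{-x^2/2},
    with A_n = (2^n n! sqrt(pi))^{-1/2} for physicists' Hermite polynomials."""
    A = 1.0 / sqrt(2.0**n * factorial(n) * sqrt(pi))
    return A * eval_hermite(n, x) * np.exp(-x**2 / 2)

def braket(f, g, x):
    """Trapezoidal approximation to <f|g> for sampled functions f, g."""
    y = np.conj(f) * g
    return np.sum((y[:-1] + y[1:]) * np.diff(x)) / 2

x = np.linspace(-15.0, 15.0, 40001)
print(braket(u(3, x), u(3, x), x))   # ≈ 1: normalization <u_3|u_3> = 1
print(braket(u(2, x), u(4, x), x))   # ≈ 0: orthogonality for m ≠ n
```
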
- From the theory of Fourier series, you will also know that the complex exponentials $u_n(x)=e^{inx}/\sqrt{2\pi}$, where $n\in\set Z$, are an orthonormal set of functions that are periodic on the interval $[-\pi,\pi]$. That is, the functions satisfy a periodic boundary condition $u_n(-\pi)=u_n(\pi)$ and satisfy the orthogonality relation $$\braket{u_m}{u_n}=\frac1{2\pi}\int_{-\pi}^\pi e^{-imx}\,e^{inx}\,\d x=\delta_{mn}.$$
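
The same kind of numerical check works for the complex exponentials (a Python/NumPy sketch; the particular indices $m=2$, $n=5$ are arbitrary):

```python
import numpy as np

def u(n, x):
    """Orthonormal complex exponentials e^{inx}/sqrt(2 pi) on [-pi, pi]."""
    return np.exp(1j * n * x) / np.sqrt(2 * np.pi)

def braket(f, g, x):
    """Trapezoidal approximation to <f|g> for sampled functions f, g."""
    y = np.conj(f) * g
    return np.sum((y[:-1] + y[1:]) * np.diff(x)) / 2

x = np.linspace(-np.pi, np.pi, 20001)
print(braket(u(2, x), u(2, x), x))   # ≈ 1 (normalization)
print(braket(u(2, x), u(5, x), x))   # ≈ 0 (orthogonality: delta_{mn})
```
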

- For each of those examples, we are familiar with the idea (see note at the end of this section) that, for a function $f(x)$ defined on the same interval of $x$ and satisfying the same boundary conditions as the **basis functions** $u_m(x)$, we can make an expansion $$f(x)=\sum_mf_m\,u_m(x),$$ where $$f_m = \braket{u_m}f=\int\conj{u_m(x)}\,f(x)\,\d x.$$ These two equations are analogous to the expansion of a vector $\ket f\in\set V^N$ in terms of the vectors of an orthonormal basis, $\basis{u_m}$, for $\set V^N$. Just as in that case, one can show that $$\braket fg = \sum_m\conj{f_m}\,g_m=\sum_m\conj{\braket{u_m}f}\,\braket{u_m}g=\sum_m\braket f{u_m}\braket{u_m}g.$$ This last expression shows (at least formally) that inserting $\sum_m\proj{u_m}$ between $\bra f$ and $\ket g$ has no effect on $\braket fg$; i.e., we have $$\sum_m\proj{u_m}=\op1,$$ which has the same form as the **completeness relations** we have used several times to represent the operator $\op1$ on $\set V^N$. The most important difference compared with our earlier applications is that the number of terms in the sum is now **infinite**: the space of functions has an *infinite* number of dimensions. **Footnote.** Proof that the expansions converge to $f(x)$ is actually not very difficult for the two sets of functions mentioned. If interested, see Mathews & Walker, *Mathematical Methods of Physics*, pp. 238–240, for a variational argument that covers these two cases and many others of relevance to physics.
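
The expansion and its coefficients can be demonstrated concretely. The sketch below (Python/NumPy; the smooth, $2\pi$-periodic test function $e^{\cos x}$ is an arbitrary choice) computes $f_m=\braket{u_m}f$ for the Fourier basis with $\abs m\le10$ and checks that the partial sum reconstructs $f(x)$:

```python
import numpy as np

def braket(f, g, x):
    """Trapezoidal approximation to <f|g> for sampled functions f, g."""
    y = np.conj(f) * g
    return np.sum((y[:-1] + y[1:]) * np.diff(x)) / 2

x = np.linspace(-np.pi, np.pi, 20001)
f = np.exp(np.cos(x))                          # a smooth 2pi-periodic test function
u = lambda m: np.exp(1j * m * x) / np.sqrt(2 * np.pi)

# Expansion coefficients f_m = <u_m|f> for |m| <= 10 ...
fm = {m: braket(u(m), f, x) for m in range((-10), 11)}
# ... and the partial sum over m of f_m u_m(x), which should reconstruct f(x).
f_rec = sum(fm[m] * u(m) for m in fm)
print(np.max(np.abs(f_rec - f)))               # very small: the expansion converges
```

The coefficients of this particular $f$ decay extremely rapidly with $\abs m$, so even this short partial sum reproduces the function to high accuracy.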

- The appearance of the variable $x$ in the expansion $$f(x)=\sum_mf_m\,u_m(x)$$ slightly spoils the analogy with $$\ket f=\sum_mf_m\ket{u_m}$$ for vectors in $\set V^N$. We can improve the analogy by projecting $\ket f$ in the last equation onto the vectors $\basis{e_j}$ of a second orthonormal basis for $\set V^N$. Then $$\braket{e_j}f=\sum_mf_m\braket{e_j}{u_m}.$$ The $N$ numbers $\braket{e_j}f$ specify $\ket f$ completely, just as the infinite number of function values $f(x)$ specify the function $f$. To complete the analogy, we just have to identify the function values $f(x)$ with the expansion coefficients, $f(x)\equiv\braket xf$, of an abstract vector $\ket f$ with respect to a set of basis kets, $\ket x$. Then $$\braket xf=\sum_mf_m\braket x{u_m}\quad\text{corresponds closely to} \quad\braket{e_j}f=\sum_mf_m\braket{e_j}{u_m}.$$ Using this notation, the scalar product of functions $f$ and $g$ can be written as $$\braket fg = \int\conj{f(x)}\,g(x)\,\d x \equiv\int\conj{\braket xf}\,\braket xg\,\d x =\int\braket fx\braket xg\,\d x.$$ From this, we can read off a representation of the unit operator, $$\int\proj x\,\d x=\op1,$$ which is analogous to our earlier expression $$\sum_m\proj{u_m} = \op1,$$ except that an integral with respect to $x$ occurs, rather than a sum over $m$. This difference is necessary, because the variable $x$ varies continuously, whereas the variable $m$ is discrete.
- Using the kets $\ket x$ as a basis and regarding the function
values $f(x)$ as the expansion coefficients is known (in
physics) as
**the $x$ representation**. In the $x$ representation, the abstract vector $\ket f$ corresponding to the function $f$ has the expansion $$\ket f=\op1\ket f=\int\ket x\braket xf\,\d x\equiv\int\ket x\,f(x)\,\d x.$$ Projecting both sides onto $\ket{x'}$ gives $$ f(x')=\braket{x'}f=\int\braket{x'}x\,f(x)\,\d x,$$ so that, for consistency between the left- and right-hand sides, we must have $$\braket{x'}x=\delta(x-x').$$ For $x'\ne x$, the delta function is zero, which (quite reasonably) expresses the orthogonality of $\ket x$ and $\ket{x'}$. However, for $x'=x$ we find $\braket xx=\delta(0)=\infty$, so the ket $\ket x$ has infinite norm. Therefore, the basis kets of the $x$ representation cannot be represented by square-integrable (i.e., *normalizable*) functions. This inconsistency is the price we have to pay for a uniform notation.
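
One way to make the relation $f(x')=\int\braket{x'}x\,f(x)\,\d x$ concrete is to replace $\delta(x-x')$ by a narrow normalized Gaussian $\delta_\epsilon$ and watch the sifting property emerge as $\epsilon\to0$ (a numerical sketch; the test function $f(x)=\cos x$ and the point $x'=0.7$ are arbitrary choices):

```python
import numpy as np

def braket(f, g, x):
    """Trapezoidal approximation to <f|g> for sampled functions f, g."""
    y = np.conj(f) * g
    return np.sum((y[:-1] + y[1:]) * np.diff(x)) / 2

def delta_eps(x, eps):
    """Normalized Gaussian of width eps; tends to delta(x) as eps -> 0."""
    return np.exp(-x**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))

x = np.linspace(-10.0, 10.0, 200001)
f = np.cos(x)                 # an arbitrary smooth test function
xp = 0.7                      # the point x' being "sifted out"
for eps in (0.5, 0.1, 0.02):
    approx = braket(delta_eps(x - xp, eps), f, x)
    print(eps, approx)        # approaches f(x') = cos(0.7) ≈ 0.7648 as eps shrinks
```
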

- Perhaps the simplest example is an operator $\op X$ which,
when applied to $\ket f$, gives a vector whose
$x$-representative is the function $g(x)=xf(x)$. That is,
$$\ket g=\op X\ket f\longrightarrow g(x)=xf(x).$$ We have
$$g(x)=\braket xg=\bra x\op X\ket f=x\braket xf.$$ If we insert
a resolution of unity between $\op X$ and $\ket f$ we obtain
$$g(x)=\bra x\op X\op1\ket f=\int\bra x\op
X\ket{x'}\braket{x'}f\,\d x' =\int\bra x\op X\ket{x'}\,f(x')\,\d
x'.$$ If the last integral is to give $xf(x)$, we must have
$$\bra x\op X\ket{x'}=x\delta(x-x')=x'\delta(x-x')\,;$$ either
expression will do. The operator $\op X$ is
therefore
*diagonal* in the $x$ representation: its matrix elements vanish for $x\ne x'$. To put this another way, the ket $\ket x$ must be an eigenket of $\op X$ with eigenvalue $x$; in fact, we can show this explicitly, since $$\op X\ket x=\op1\op X\ket x = \int\proj{x'}\op X\ket x\,\d x' = \int\ket{x'}x'\delta(x'-x)\,\d x' = x\ket x.$$ **Aside:** A similar argument leads also to $$\op X=\op1\op X\op1 =\int\!\!\int\proj{x'}\op X\proj x\,\d x\,\d x' = \int\ket xx\bra x\,\d x,$$ which is the spectral representation of the operator $\op X$. **Exercise:** Show that $\op X$ is self-adjoint, $\op X=\op X^\dagger$. You need to show that $\bra f\op X\ket g=\conj{\bra g\op X\ket f}$ for arbitrary $f$ and $g$.
- Another simple example of a linear operator corresponds to
differentiation with respect to $x$. That is, we define an
operator $\op D$ such that $$\ket g=\op D\ket f\longrightarrow
g(x)=\dby{f(x)}x.$$ What are its matrix elements in the $x$
representation? We have (somewhat formally) $$\braket xg =
\dby{}x\braket xf = \dby{}x\int\delta(x-x')\braket{x'}f\,\d
x'=\int\pdby{}x\delta(x-x')\braket{x'}f\,\d x'.$$ But we also
have $$\braket xg=\bra x\op D\ket f=\bra x\op D\op1\ket
f=\int\bra x\op D\ket{x'}\braket{x'}f\,\d x',$$ which is
consistent with the preceding expression if $$\bra x\op
D\ket{x'} = \pdby{}x\delta(x-x')=-\pdby{}{x'}\delta(x-x');$$ the
second expression assumes that $\delta(x-x')$ behaves like an
ordinary function under differentiation. Needless to say, the
derivative of a delta function is a highly singular object, best
left inside an integral, where it can be eliminated by an
integration by parts.
**Exercise:** Verify that $$\bra x\op D\ket f=\int\bra x\op D\ket{x'}\braket{x'}f\,\d x'$$ gives the same result, $\dslby fx$, regardless of which expression you use for $\bra x\op D\ket{x'}$. Integration by parts will be helpful here.
- What is the **adjoint** of $\op D$? It turns out that the matrix elements in the $x$ representation don't tell us the whole story. Naively, we could look at $$\bra x\op D^\dagger\ket{x'}=\conj{\bra{x'}\op D\ket x} =\pdby{}{x'}\delta(x'-x)=-\bra x\op D\ket{x'}$$ and from this conclude that $\op D^\dagger=-\op D$. But if, instead, we consider the matrix elements between more general vectors $\ket f,\ket g$ we find $$\bra f\op D^\dagger\ket g=\conj{\bra g\op D\ket f} =\int_a^bg(x)\dby{\conj{f(x)}}x\,\d x\,;$$ then, after an integration by parts, we discover that $$\bra f\op D^\dagger\ket g=\bigl[\conj{f(x)}\,g(x)\bigr]_a^b-\int_a^b\conj{f(x)}\,\dby{g(x)}x\,\d x =\bigl[\conj{f(x)}\,g(x)\bigr]_a^b-\bra f\op D\ket g.$$ So *provided* that $f$ and $g$ satisfy boundary conditions that ensure $\bigl[\conj{f(x)}\,g(x)\bigr]_a^b=0$, we can conclude that $\op D$ is **skew**-Hermitian, $$\op D^\dagger=-\op D.$$ This need to consider the boundary conditions is a feature of operators on function spaces; there is no analogue for operators on a finite-dimensional vector space, $\set V^N$.
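
A finite-dimensional caricature of this result (an illustrative sketch only; the grid size, spacing, and central-difference approximation are my own choices, not part of the notes): if $\op D$ is discretized as a central-difference matrix, with the function treated as zero beyond the endpoints so that the boundary term vanishes, the matrix is skew-symmetric, the discrete analogue of $\op D^\dagger=-\op D$:

```python
import numpy as np

rng = np.random.default_rng(0)
N, dx = 200, 0.05
# Central-difference matrix for d/dx, treating f as zero outside the grid
# (the discrete analogue of boundary conditions that kill [conj(f) g]_a^b).
D = (np.diag(np.ones(N - 1), 1) - np.diag(np.ones(N - 1), -1)) / (2 * dx)
print(np.allclose(D.T, -D))             # True: D^T = -D, i.e. D is skew-symmetric

# Consequently <f|Dg> = -<Df|g> for every pair of grid vectors:
f, g = rng.standard_normal(N), rng.standard_normal(N)
print(np.allclose(f @ (D @ g), -(D @ f) @ g))   # True
```
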

From this point onwards, we take $a\to-\infty$, $b\to\infty$ and assume the boundary conditions $f(x)\to0$ as $\abs x\to\infty$. The effects of choosing a different domain and boundary conditions are investigated in Example 7.66.

- With the assumed boundary conditions, $\op D$ is skew-Hermitian, so that if we define an operator $\op K=-i\op D$, then $$\op K^\dagger=(-i\op D)^\dagger=\conj{(-i)}\op D^\dagger=i(-\op D)=\op K\,;$$ i.e., the operator $\op K$ is Hermitian, and therefore (like any Hermitian operator) it has real eigenvalues.
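
A finite-difference sketch of this (grid parameters arbitrary; the central-difference matrix is a standard approximation, not part of the notes): multiplying a skew-symmetric discretization of $\op D$ by $-i$ gives a Hermitian matrix, whose eigenvalues are indeed real:

```python
import numpy as np

N, dx = 200, 0.05
# Skew-symmetric central-difference approximation to d/dx (zero boundary values).
D = (np.diag(np.ones(N - 1), 1) - np.diag(np.ones(N - 1), -1)) / (2 * dx)
K = -1j * D
print(np.allclose(K, K.conj().T))       # True: K = -iD is Hermitian

evals = np.linalg.eigvals(K)
print(np.max(np.abs(evals.imag)))       # ~0: the eigenvalues of K are real
```
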
- The eigenvalue equation for $\op K$ is $$\op K\kete k=k\kete k, \quad\text{where $k\in\set R$.}$$ In the $x$ representation, this reads $$\bra x\op K\kete k=-i\dby{}x\braket x{e_k}=k\,\braket x{e_k} \quad\text{or}\quad -i\dby{e_k(x)}x = k\,e_k(x),$$ which has the solution $e_k(x)=C_ke^{ikx}$.
**Normalization:** Like $x$, $k$ is a continuous, real variable. If we require a resolution of unity in the form $$\op1=\intii\proj{e_k}\,\d k,$$ then, for consistency between the right- and left-hand sides of $$\braket{e_k}f=\brae k\op1\ket f=\intii\braket{e_k}{e_{k'}}\braket{e_{k'}}f\,\d k',$$ we must have $$\braket{e_k}{e_{k'}}=\delta(k-k').$$ This fixes the value of $\abs{C_k}$, since $$\braket{e_k}{e_{k'}}=\intii\conj{e_k(x)}\,e_{k'}(x)\,\d x =\conj{C_k}C_{k'}\intii e^{i(k'-k)x}\,\d x =\conj{C_k}C_{k'}\times2\pi\delta(k-k').$$ Thus, $\abs{C_k}=1/\sqrt{2\pi}$. The phase of $C_k$ can be chosen arbitrarily: we make the conventional choice $\Arg C_k =0$, which gives $$e_k(x) = \frac1{\sqrt{2\pi}}\,e^{ikx}.$$
- Using the kets $\kete k$ as the basis and representing $\ket f$
via the expansion coefficients $\braket{e_k}f$ is known as
**the $k$ representation**. The expansion coefficient $\braket{e_k}f$ is a function of $k$ that is already well known to us; for we have $$\braket{e_k}f=\intii\conj{e_k(x)}\,f(x)\,\d x =\frac1{\sqrt{2\pi}}\intii e^{-ikx}f(x)\,\d x\equiv\tilde f(k),$$ which is the **Fourier transform** of $f(x)$. In this course the Fourier transform is indicated by a tilde, $\tilde f(k)$. Thus we have two ways of expanding $\ket f$ in terms of continuous sets of basis kets: $$\ket f=\intii\ket x\braket xf\,\d x =\intii\ket x\,f(x)\,\d x \quad\text{and}\quad \ket f=\intii\kete k\braket{e_k}f\,\d k =\intii\kete k\,\tilde f(k)\,\d k.$$ We could also make an expansion using a *discrete* basis set, such as the set of eigenfunctions of the quantum harmonic oscillator, $\basis{u_n}$: $$\ket f = \sum_{n=0}^\infty\ket{u_n}\braket{u_n}f=\sum_{n=0}^\infty\ket{u_n}\,f_n.$$ All three representations, $\ket f\longrightarrow f(x)$, $\ket f\longrightarrow\tilde f(k)$ and $\ket f\longrightarrow f_n$, contain complete information about the vector $\ket f$. **Aside on the normalization of the basis kets:** The orthogonality condition $\braket{e_k}{e_{k'}}=\delta(k-k')$ implies $\braket{e_k}{e_k}=\infty$ (see also Example 7.65). So, when we use the $k$ representation, we are using a basis whose vectors are not in the vector space of square-integrable functions. This is one reason why in physics we often prefer to use periodic boundary conditions (or hard-wall boundary conditions) in a box of finite length $L$ when writing down the wave functions of free particles. The free-particle wave functions are then normalizable (see Example 7.66) and sums over a discrete set of wave vectors appear instead of integrals over $k$. Another reason for choosing box boundary conditions is that it makes the counting of states more meaningful; you have seen examples of this in PHYS20352 and PHYS20252. **Aside on Parseval's theorem:** From the point of view of this course, Parseval's theorem expresses the fact that the inner product $\braket fg$ does not depend on the representation we use to calculate it: $$\braket fg = \intii\braket fx\braket xg\,\d x =\intii\conj{f(x)}\,g(x)\,\d x$$ and $$\braket fg = \intii\braket f{e_k}\braket{e_k}g\,\d k =\intii\conj{\tilde f(k)}\,\tilde g(k)\,\d k,$$ so that $$\intii\conj{f(x)}\,g(x)\,\d x=\intii\conj{\tilde f(k)}\,\tilde g(k)\,\d k,$$ which is Parseval's theorem for Fourier transforms. Parseval's theorem may be useful, but is not essential, for Examples 7.67 and 7.70.
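
Parseval's theorem is easy to verify numerically for a concrete function. The sketch below (Python/NumPy; the Gaussian $f(x)=e^{-x^2/2}$ is an arbitrary test case, chosen because its transform with the $1/\sqrt{2\pi}$ convention is $e^{-k^2/2}$) computes $\tilde f(k)$ by direct quadrature and compares the two sides:

```python
import numpy as np

def trap(y, x):
    """Trapezoidal rule for a sampled (possibly complex) integrand y(x)."""
    return np.sum((y[:-1] + y[1:]) * np.diff(x)) / 2

x = np.linspace(-20.0, 20.0, 20001)
f = np.exp(-x**2 / 2)                    # Gaussian; its transform is e^{-k^2/2}

# Fourier transform with the 1/sqrt(2 pi) convention used in these notes.
k = np.linspace(-8.0, 8.0, 401)
ft = np.array([trap(np.exp(-1j * kk * x) * f, x) for kk in k]) / np.sqrt(2 * np.pi)

lhs = trap(np.abs(f)**2, x)              # integral of |f(x)|^2  dx = sqrt(pi)
rhs = trap(np.abs(ft)**2, k)             # integral of |f~(k)|^2 dk = sqrt(pi)
print(lhs, rhs)                          # both ≈ 1.7725, as Parseval requires
```
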

- The operator $\op K$ is self-adjoint, and $\kete k$ is an
eigenvector of $\op K$ with real eigenvalue $k$, so $$\brae k\op
K\ket f = \conj{\bra f\op K\kete k} =\conj{\bra f k\kete k} =
k\braket{e_k}f\equiv k\tilde f(k).$$ Therefore, in the $k$
representation, $$\ket g=\op K\ket f\longrightarrow \tilde
g(k)=k\tilde f(k).$$ By very similar arguments to those used for
operator $\op X$ in the $x$ representation, we can obtain the
spectral representation $$\op K = \intii\kete kk\brae k\,\d k$$
and the matrix elements $$\brae k\op K\kete{k'} =
k\,\delta(k-k')=k'\delta(k-k')\,.$$ The operator $\op K$
is
*diagonal* in the $k$ representation: its matrix elements vanish for $k\ne k'$.
- A little more work is needed to discover the effect of the operator $\op X$ in the $k$ representation. We have $$\brae k\op X\ket f = \intii\braket{e_k}x\bra x\op X\ket f\,\d x =\intii\frac{e^{-ikx}}{\sqrt{2\pi}}\,x\braket xf\,\d x$$ $$=i\dby{}k\intii\frac{e^{-ikx}}{\sqrt{2\pi}}\,\braket xf\,\d x =i\dby{}k\braket{e_k}f\,;$$ note that the derivative with respect to $k$ brings down a factor of $-ix$ under the integral sign. Thus, $$\brae k\op X\ket f = i\dby{}k\braket{e_k}f\equiv i\dby{\tilde f(k)}k,$$ so that, in the $k$ representation, $$\ket g=\op X\ket f\longrightarrow\tilde g(k)=i\dby{\tilde f(k)}k\,.$$ By arguments similar to those used for operator $\op D$ in the $x$ representation, this leads to the matrix elements $$\brae k\op X\kete{k'} = i\pdby{}k\delta(k-k')=-i\pdby{}{k'}\delta(k-k').$$
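
The correspondence $\tilde g(k)=i\dby{\tilde f(k)}k$ can be checked numerically. For the Gaussian $f(x)=e^{-x^2/2}$ (an arbitrary test case; its transform is $\tilde f(k)=e^{-k^2/2}$, so $i\dby{\tilde f}k=-ik\,e^{-k^2/2}$), the transform of $g(x)=xf(x)$ computed by quadrature should match this derivative:

```python
import numpy as np

def trap(y, x):
    """Trapezoidal rule for a sampled (possibly complex) integrand y(x)."""
    return np.sum((y[:-1] + y[1:]) * np.diff(x)) / 2

x = np.linspace(-20.0, 20.0, 20001)
f = np.exp(-x**2 / 2)                    # f~(k) = e^{-k^2/2} with this convention
g = x * f                                # g = X f in the x representation

# Transform g~(k) by direct quadrature, 1/sqrt(2 pi) convention.
k = np.linspace(-5.0, 5.0, 201)
gt = np.array([trap(np.exp(-1j * kk * x) * g, x) for kk in k]) / np.sqrt(2 * np.pi)

expected = 1j * (-k) * np.exp(-k**2 / 2)   # i d/dk e^{-k^2/2} = -ik e^{-k^2/2}
print(np.max(np.abs(gt - expected)))       # small: multiplication by x <-> i d/dk
```
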