=\frac12\Re\leb R^*(\x)Q(\x)\rib=\frac12\Re\leb R(\x)Q^*(\x)\rib. \eeq We start by supposing that all fields are harmonic. Then they all have the complex form $\Fx\emot$ and they all have time derivatives given by \beq \frac{\partial(\Fx\emot)}{\partial t}=-i\om\Fx\emot, \eeq so that the Maxwell equations for the complex amplitudes (just the position-dependent parts of the fields) are \beqa \div\Bx=0\hsp{1.0} \curl\Ex-i\frac\om c\Bx=0\nonumber\\ \div\Dx=4\pi\rhx\hsp{0.5}\curl\Hx+i\frac\om c\Dx=\frac{4\pi}c\Jx. \eeqa Notice that there is no problem in generalizing the Maxwell equations to complex fields because the equations involve linear combinations, with real coefficients, of complex objects. One thus has two sets of equations, one for the real parts of these objects and one for the imaginary parts. The set for the real parts comprises the ``true'' Maxwell equations. For the remainder of this section, the symbols $\E$, $\B$, etc., stand for the complex amplitudes $\Ex$, $\Bx$, ...~. We can rederive the Poynting theorem for these by starting from the inner product $\J^*\cdot\E$ and proceeding as in the original derivation. The result is \beq \J^*\cdot\E=\frac{c}{4\pi}\leb-\div(\E\times\H^*)-i\frac\om c(\E\cdot\D^* -\B\cdot\H^*)\rib. \eeq Define now the (complex) Poynting vector and energy densities \beq \S\equiv\frac c{8\pi}(\E\times\H^*), \eeq and \beq w_e\equiv\frac1{16\pi}(\E\cdot\D^*)\hbox{\hspace{0.5in}}w_m\equiv\frac1{16\pi} (\B\cdot\H^*). \eeq Notice that the real part of $\S$ is just the time-averaged real Poynting vector, while the real parts of the energy densities are the time-averaged energy densities. More generally, the energy densities can be complex functions, depending on the relations between $\D$ and $\E$ and between $\B$ and $\H$. If the two members of each pair of fields are in phase with one another, then the corresponding energy density is real. Similarly, if $\E$ and $\H$ are in phase, then $\S$ is also real.
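The time-averaging rule quoted at the start of this section can be checked numerically. The following is a minimal sketch, not part of the notes' derivation; the complex amplitudes R and Q and the frequency are arbitrary illustrative values:

```python
import numpy as np

# Check of the time-average rule for harmonic quantities:
#   <Re[R e^{-i w t}] Re[Q e^{-i w t}]> = (1/2) Re[R Q*],
# the average being taken over one full period T = 2 pi / w.
omega = 3.0
R = 2.0 + 1.0j            # arbitrary complex amplitudes for illustration
Q = -0.5 + 4.0j
T = 2.0 * np.pi / omega
t = np.arange(4096) * T / 4096    # uniform samples covering exactly one period

product = np.real(R * np.exp(-1j * omega * t)) * np.real(Q * np.exp(-1j * omega * t))
time_avg = product.mean()

print(time_avg, 0.5 * np.real(R * np.conj(Q)))   # the two numbers agree (1.5 here)
```

Sampling uniformly over exactly one period makes the oscillating cross term average to zero to machine precision, so the agreement is essentially exact.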
In terms of our densities, Poynting's theorem for harmonic fields becomes \beq \frac12\J^*\cdot\E=-\div\S-2i\om(w_e-w_m). \eeq If we integrate this expression over some domain V and apply the divergence theorem to the term involving the Poynting vector, we find the relation \beq \frac12\inv\J^*\cdot\E+2i\om\inv(w_e-w_m)+\inac\S\cdot\nn=0. \eeq The real part of this equation expresses the time-averaged conservation of energy; the imaginary part also has a meaning in connection with energy and its flow. Consider first the simplest case of real $w_e$ and $w_m$. Then the energy densities drop out of the real part of this equation, and what remains tells us that the time-averaged rate of doing work on the sources in V is equal to the time-averaged flow of energy (expressed by the Poynting vector) into V through the surface S. If the energy densities are not real, then there is an additional real piece in \eq{149}, so that the work done on the sources in V is not equal to the energy that comes in through S; this case corresponds to having ``lossy'' materials within V which dissipate additional energy. Now let us suppose that there is some electromagnetic device within V, i.e., surrounded by S. Let it have two input terminals which are its only material communication with the rest of the world. At these terminals there are an input current $I_i$ and a voltage $V_i$ which we suppose are harmonic and which may also be written in the form \eq{138}. \centerline{\psfig{figure=fig16.ps,height=2.25in,width=6.375in}} \noindent Then the (complex) input power is $I^*_iV_i/2$, meaning that the time-averaged input power is the real part of this quantity.
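A short numerical sketch of the complex input power $I^*_iV_i/2$; the terminal amplitudes below are assumed phasors with arbitrary illustrative magnitudes and phases:

```python
import numpy as np

# Complex input power P = I* V / 2 for assumed harmonic terminal amplitudes.
# The time-averaged power delivered is Re(P), which reduces to the familiar
# (1/2)|I||V| cos(phi), with phi the phase of the voltage relative to the current.
phi = np.pi / 3                                # voltage leads current by 60 degrees
I = 2.0 * np.exp(1j * 0.3)                     # illustrative current phasor
V = 5.0 * np.exp(1j * (0.3 + phi))             # illustrative voltage phasor

P = 0.5 * np.conj(I) * V                       # complex input power
print(P.real, 0.5 * abs(I) * abs(V) * np.cos(phi))   # both give 2.5
```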
Using our interpretation of the Poynting vector, we can express the input power in terms of a surface integral of the normal component of $\S$, \beq \frac12I^*_iV_i=-\int_{S_i}d^2x\,\S\cdot\nn \eeq where the surface integral is done just over the cross-section of the (presumed) coaxial cable feeding power into the device; it is assumed that for such a cable the input fields are confined to the region within the shield of the cable, so that the integral over the remainder of the surface S surrounding the device has no contribution from the incident fields. If we now combine this equation with \eq{149}, we find that we can write \beq \frac12I^*_iV_i=\frac12\inv\J^*\cdot\E+2i\om\inv(w_e-w_m)+\int_{S-S_i} d^2x\,\S\cdot\nn. \eeq The surface integral in this expression gives the power passing through the surface S, excluding the part through which the input power comes. The real part of this integral is the power radiated by the device. Now let us define the {\em input impedance} $Z$ of the device, \beq V_i\equiv ZI_i; \eeq the impedance is complex and so can be written as \beq Z\equiv R-iX \eeq where the {\em resistance} $R$ and the {\em reactance} $X$ are real. From \eq{151} we find expressions for these: \beq R=\frac1{|I_i|^2}\lec Re\leb\inv\J^*\cdot\E+2\int_{S-S_i}d^2x\,\S\cdot\nn \rib+4\om Im\leb\inv(w_m-w_e)\rib\ric \eeq and \beq X=\frac1{|I_i|^2}\lec4\om Re\leb\inv(w_m-w_e)\rib-Im\leb\inv\J^*\cdot\E +2\int_{S-S_i}d^2x\,\S\cdot\nn\rib\ric. \eeq By deforming the surface so that it lies far away from the device, one may make the integral over $\S\cdot\nn$ purely real, so that it does not contribute to the reactance; it then contributes only to the resistance, giving the so-called ``radiation resistance,'' which will be significant if the device radiates an appreciable amount of power. Our result has a simple and pleasing form at low frequencies. Then radiation is negligible and so the contribution of the surface integral may be ignored.
Also, we may drop the term in the resistance proportional to $\om$. Then, assuming the current density and electric field are related by $\J=\si\E$, where $\si$ is the (real) {\em electrical conductivity}, and assuming real energy densities, we find \beq R=\frac1{|I_i|^2}\inv\si|\E|^2 \eeq and \beq X=\frac{4\om}{|I_i|^2}\inv(w_m-w_e). \eeq The last equation may be used to establish contact between our expressions, based on the electromagnetic field equations, and some standard and fundamental relations in elementary circuit theory. If there is an inductance (magnetic energy-storing device) in the ``black box,'' then the integral of the magnetic energy may be expressed (see the first two or three problems at the end of Chap. 6 of Jackson) as $L|I_i|^2/4$, and so we find the familiar (if one knows anything about circuits) result that $X=L\om$. But if there is a capacitor, the energy becomes $|Q_i|^2/4C$, where the charge $Q_i$ is obtained by integrating the current over time; that gives $|Q_i|^2=|I_i|^2/\om^2$ and so $X=-1/\om C$, another familiar tenet of elementary circuit theory. \section{Transformations: Reflection, Rotation, and Time Reversal} Before entering into discussion of the specific transformations of interest, we give a brief review of orthogonal transformations. Introduce a $3\times3$ matrix ${\em a}$ with components $a_{ij}$ and use it to transform a position vector $\x=(x_1,x_2,x_3)=(x,y,z)$ into a new vector $\xp$: \beq x_i'=\sum_ja_{ij}x_j. \eeq An orthogonal transformation is one that leaves the length of the vector unchanged, \beq \sum_i(x_i')^2=\sum_ix_i^2. \eeq Using this condition, one may show that ${\em a}$ must satisfy the conditions \beq \sum_ia_{ij}a_{ik}=\de_{jk} \eeq and \beq det({\em a})=\pm1. \eeq Orthogonal transformations with $det({\em a})=+1$ are simple rotations. The other ones are combinations of a rotation and an inversion\footnote{An inversion is a transformation $\xp=-\x$.}; these are called {\em improper rotations}.
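The orthogonality conditions above are easy to verify numerically. The following is a minimal sketch using a rotation about the z axis with an arbitrary angle:

```python
import numpy as np

# A rotation about the z axis satisfies sum_i a_ij a_ik = delta_jk
# (i.e. a^T a = 1) and has det(a) = +1; composing it with the inversion
# x' = -x produces an improper rotation with det(a) = -1.
th = 0.7
a = np.array([[np.cos(th), -np.sin(th), 0.0],
              [np.sin(th),  np.cos(th), 0.0],
              [0.0,         0.0,        1.0]])

assert np.allclose(a.T @ a, np.eye(3))   # lengths are preserved
print(np.linalg.det(a))                  # +1: a proper rotation
print(np.linalg.det(-a))                 # -1: an improper rotation (inversion times a)
```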
It is common to refer to a collection of three objects $\ps_i,\;i=1,2,3$, which transform, under orthogonal transformations, in the same way as the components of $\x$, as a {\em vector}, a {\em polar vector}, or a {\em rank-one tensor}. A collection of nine objects $q_{ij},\;i,j=1,2,3$, which transform in the same way as the nine objects $x_ix_j$ is called a {\em rank-two tensor}. And so on. An object which is {\em invariant}, that is, which is unchanged under an orthogonal transformation, is called a {\em scalar} or {\em rank-zero tensor}; the length of a vector is such an object. One also defines {\em pseudotensors} of each rank. A {\em rank-p pseudotensor} comprises a set of $3^p$ objects which transform in the same way as a rank-p tensor under ordinary rotations but which transform with an extra change of sign relative to a rank-p tensor under improper rotations. One also uses the terms {\em pseudoscalar} for a rank-0 pseudotensor and {\em pseudovector} or {\em axial vector} for a rank-1 pseudotensor. Notice that under inversion, for which ${\em a}$ is just the negative of the unit $3\times3$ matrix, a vector changes sign, $\xp=-\x$, while a pseudovector is invariant. This statement can be generalized: Under inversion, a tensor $T$ of rank n transforms to $T'$ with \beq T'=(-1)^nT. \eeq A pseudotensor $P$ of the same rank, on the other hand, transforms according to \beq P'=(-1)^{(n+1)}P \eeq under inversion. \subsection{Transformation Properties of Physical Quantities} It is important to realize that objects which we are accustomed to referring to as ``vectors,'' such as $\B$, are not necessarily vectors in the sense introduced here; indeed, it is one of our tasks in this section to find out just what sorts of tensor are the various physical quantities we have been studying. Consider for example the charge density. Suppose that we have a system with a certain $\rhx$ and that we rotate it; then $\rh$ becomes $\rh'$ and $\x$ becomes $\xp$. 
\centerline{\psfig{figure=fig15.ps,height=1.875in,width=6.375in}} \noindent The question is, how is $\rh'(\xp)$ related to $\rhx$? It is easy to see, since $\xp$ is what $\x$ becomes as a consequence of the rotation, that $\rh'(\xp)=\rhx$. This relation also holds under an inversion. Hence we conclude that the charge density is a scalar or rank-0 tensor. An example of a vector or rank-1 tensor is, of course, $\x$. From this fact one may show that the operator $\grad$ is also a rank-1 tensor (differential operators can likewise be tensors or components of tensors): \beq \frac\partial{\partial x_i'}=\sum_ja_{ij}\frac\partial{\partial x_j}. \eeq What then is $\grad\rh$? From the (known) transformation properties of $\rh$ and of $\grad$, it is easy to show that it is a rank-1 tensor. The gradient of any scalar function is a rank-1 tensor. Similarly, one may show that the inner product of two rank-1 tensors, or vectors, is a scalar, as is the inner product of two rank-1 pseudotensors; the inner product of a rank-1 tensor and a rank-1 pseudotensor is a pseudoscalar; and the gradient of a pseudoscalar is a rank-1 pseudotensor. All of the foregoing are quite easy to demonstrate. A little harder is the cross product of two vectors (rank-1 tensors). Suppose that $\b$ and $\c$ are rank-1 tensors. Their cross product may be written as \beq \u=\b\times\c \eeq with a Cartesian component given by \beq u_i=\sum_{j,k}\ep_{ijk}b_jc_k \eeq where \beq \ep_{ijk}\equiv\lec\barr{cc} +1 & \hbox{if }(i,j,k)=(1,2,3),(2,3,1),(3,1,2)\\ -1 & \hbox{if }(i,j,k)=(2,1,3),(1,3,2),(3,2,1)\\ 0 & \hbox{otherwise}.\ear\right. \eeq If we {\bf define} $\ep_{ijk}$ to be given by this equation in all frames, then we can {\bf show} that it is a rank-3 pseudotensor. Alternatively, we can use \eq{164} to specify it in a single frame, {\bf define} it to be a rank-3 pseudotensor, and then {\bf show} that it is given by \eq{164} in {\bf any} frame.
However one chooses to do it, one can use this object, called the {\em completely antisymmetric unit rank-3 pseudotensor}, and the assumed transformation properties of $\b$ and $\c$ (rank-1 tensors) to determine the transformation properties of the cross product. What one finds is that \beq u_i'=det({\em a})\sum_ja_{ij}u_j, \eeq which means that $\u$ is a pseudovector or a rank-1 pseudotensor. The transformations considered so far have all dealt with space; to them we wish to add the time-reversal transformation. The question to ask of a given entity is how it changes if time is reversed. Imagine making a videotape of the entity's behavior and then running the tape backwards. If, in this viewing, the quantity is the same at a given point on the tape as when the tape is running forward, then the quantity is {\em even} or {\em invariant} under time reversal. If its sign has been reversed, then it is {\em odd} under time reversal. For example, the position $\x(t)$ of an object is even under time reversal; the velocity of the object, however, is odd. In Table 1, we catalog some familiar mechanical functions according to their rotation, inversion, and time-reversal symmetries. \begin{table} \caption{Rotation, inversion, and time-reversal properties of some common mechanical quantities.} \btab{||l|c|l|c||} \hline Function & Rank & Inversion Symmetry & Time-reversal Symmetry \\ \hline $\x$ & 1 & $-$ (vector) & + \\ $\v=d\x/dt$ & 1 & $-$ (vector) & $-$ \\ $\p=m\v$ & 1 & $-$ (vector) & $-$ \\ ${\bf{L}}=\x\times m\v$ & 1 & + (pseudovector) & $-$ \\ $\F=d\p/dt$ & 1 & $-$ (vector) & + \\ $\N=\x\times\F$ & 1 & + (pseudovector) & + \\ $T=p^2/2m$ & 0 & + (scalar) & + \\ $V$ & 0 & + (scalar) & + \\ \hline \etab \end{table} We may make the same sort of table for various electromagnetic quantities, basing our analysis on the Maxwell equations, which we assume to be the correct equations of electromagnetism.
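Before turning to the electromagnetic quantities, the pseudovector rule for the cross product can be checked numerically. This is a minimal sketch with randomly chosen vectors and a random orthogonal matrix:

```python
import numpy as np

# Check that the cross product transforms as a pseudovector:
# for any orthogonal matrix a,  (a b) x (a c) = det(a) * a (b x c),
# so under an improper rotation (det = -1) it picks up the extra sign.
rng = np.random.default_rng(1)
b, c = rng.standard_normal(3), rng.standard_normal(3)

q, _ = np.linalg.qr(rng.standard_normal((3, 3)))  # random orthogonal matrix
if np.linalg.det(q) < 0:
    q = -q                       # force det(q) = +1, a proper rotation
for a in (q, -q):                # -q is improper: det(-q) = -1 in 3 dimensions
    lhs = np.cross(a @ b, a @ c)
    rhs = np.linalg.det(a) * (a @ np.cross(b, c))
    assert np.allclose(lhs, rhs)
print("the cross product transforms with the extra factor det(a)")
```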
Given that $\rh$ is a scalar and that $\grad$ is a vector, the equation $\div\E=4\pi\rh$ tells us that the electric field is a vector; further, it is even under time reversal (since $\rh$ and $\grad$ are both even). Similarly, $\D$ and $\P$ must be vectors and even under time reversal. Moving on to Faraday's Law, $\curl\E=-c^{-1}\partial\B/\partial t$, from our knowledge of the properties of the gradient, the cross product, and the electric field, we see that $\B$ is a pseudovector and that it is odd under time reversal; $\H$ and $\M$ have the same properties. Finally, Amp\`ere's Law, $\curl\B=(4\pi/c)\J+c^{-1}\partial\E/\partial t$, is consistent with these determinations and with the statement that $\J$ is a vector, odd under time reversal, which follows from the fact that $\J=\rh\v$. In addition, $\S$ and $\g$ are vectors with odd time-reversal symmetry, while the Maxwell stress tensor is a rank-2 tensor, even under time reversal. These properties are summarized in Table 2. \begin{table} \caption{Rotation, inversion, and time-reversal properties of some common electromagnetic quantities.} \btab{||l|c|l|c||} \hline Function & Rank & Inversion Symmetry & Time-reversal Symmetry \\ \hline $\rh$ & 0 & + (scalar) & + \\ $\J$ & 1 & $-$ (vector) & $-$ \\ $\E,\D,\P$ & 1 & $-$ (vector) & + \\ $\B,\H,\M$ & 1 & + (pseudovector) & $-$ \\ $\S,\g$ & 1 & $-$ (vector) & $-$ \\ $\dT$ & 2 & + (tensor) & + \\ \hline \etab \end{table} The usefulness of these results lies in the belief that acceptable equations of physics should be invariant under various symmetry operations.
The Maxwell equations and the classical equations of mechanics (Newton's Laws), for example, are invariant under time reversal and under orthogonal transformations, meaning that each term in any given equation transforms in the same way as all of the other terms in that equation\footnote{Of course there are some equations, like Ohm's Law, which describe truly irreversible processes and for which time-reversal invariance does not hold.}. If we believe that this should be true of all elementary equations of classical physics, then there are certain implied constraints on the form of the equations. Consider as an example the relation between $\P$ and $\E$. Supposing that one can make an expansion of a component of $\P$ in terms of the components of $\E$, we have \beq P_i=\sum_j\al_{ij}E_j+\sum_{jk}\be_{ijk}E_jE_k+\sum_{jkl} \ga_{ijkl}E_jE_kE_l+... \eeq where, since $\P$ and $\E$ are both rank-1 tensors, invariant under time reversal, it follows from the invariance argument that the coefficients $\al_{ij}$ are the components of a rank-2 tensor, invariant under time reversal; the $\be_{ijk}$ are components of a rank-3 tensor, invariant under time reversal; and the $\ga_{ijkl}$ are components of a rank-4 tensor, also invariant under time reversal. If we now add some statement about the properties of the medium, we can get further conditions. In the simplest case of an isotropic material, it must be the case that each of these tensors is invariant under orthogonal transformations. This condition severely limits their forms; in particular, it means that $\al_{ij}=\al\de_{ij}$. We can see this by appealing to the transformation properties of second-rank tensors: $\al_{ij}$ must transform like $x_ix_j$, or \beq \al'_{nm}=\sum_{i,j}a_{ni}a_{mj}\al_{ij}. \eeq Since the medium is isotropic, we require that $\al'_{ij}=\al_{ij}$.
The only way to satisfy both conditions is to take $\al_{ij}=\al\de_{ij}$; then \beq \al'_{nm}=\al\sum_ia_{ni}a_{mi}=\al\de_{nm}. \eeq The same type of thing cannot be done with $\be$ (under inversion a rank-3 tensor changes sign, so an invariant one must vanish), and hence $\be_{ijk}=0$; but we can perform similar manipulations on $\ga$, and the coefficients $\ga_{ijkl}$ must be such as to produce \beq \sum_{jkl}\ga_{ijkl}E_jE_kE_l=\ga(\E\cdot\E)E_i, \eeq where $\ga$ is a constant. Thus, through third-order terms, the expansion of $\P$ in terms of $\E$ must have the form \beq \P=\al\E+\ga E^2\E. \eeq The general forms of many other less obvious relations may be determined by similar considerations. \section{Do Maxwell's Equations Allow Magnetic Monopoles?} The answer is yes, but only in a restricted, and trivial, sense. If there were magnetic charges of density $\rh_m$ and an associated magnetic current density $\J_m$, with a corresponding conservation law \beq \pde{\rh_m}t+\div\J_m=0, \eeq then the field equations would read \beqa \div\B=4\pi\rh_m\hspace{1.0in}\curl\H=\frac{4\pi}c\J+\frac1c\pde\D t \nonumber\\ \div\D=4\pi\rh\hspace{1.0in}\curl\E=-\frac{4\pi}c\J_m-\frac1c \pde\B t. \eeqa In fact, the Maxwell equations as we understand them can be put into this form by making a particular kind of transformation, called a {\em duality transformation}, of the fields and sources. Introduce \beqa \E=\E'\cos\et+\H'\sin\et\hsph\D=\D'\cos\et+\B'\sin\et\nonumber\\ \H=-\E'\sin\et+\H'\cos\et\hsph\B=-\D'\sin\et+\B'\cos\et\nonumber\\ \rh=\rh'\cos\et+\rh_m'\sin\et\hsp{0.75}\J=\J'\cos\et+\J_m'\sin\et\nonumber\\ \rh_m=-\rh'\sin\et+\rh_m'\cos\et\hsph\J_m=-\J'\sin\et+\J_m'\cos\et, \eeqa where $\et$ is an arbitrary real constant. If one now substitutes these into the generalized field equations, one finds, upon separating the coefficients of $\sin\et$ from those of $\cos\et$ (these must be treated as independent because $\et$ is arbitrary), that the primed fields and sources obey an identical set of field equations.
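A short numerical sketch of the source part of the duality transformation; the angle $\et$ and the charge values are arbitrary illustrations:

```python
import numpy as np

# Apply the duality rotation to the charge densities of several particles,
# starting from a primed description in which no particle carries magnetic
# charge (rho_m' = 0 for all of them).
eta = 0.4
rho_p = np.array([1.0, -2.0, 0.5])     # rho' for three hypothetical particles
rho_m_p = np.zeros(3)                  # rho_m' = 0 for every particle

rho   =  rho_p * np.cos(eta) + rho_m_p * np.sin(eta)
rho_m = -rho_p * np.sin(eta) + rho_m_p * np.cos(eta)

# Every unprimed particle then has the same ratio rho_m / rho = -tan(eta) ...
assert np.allclose(rho_m / rho, -np.tan(eta))
# ... and the inverse rotation removes the magnetic charge from all of them at once:
assert np.allclose(rho * np.sin(eta) + rho_m * np.cos(eta), 0.0)
print("a universal charge ratio can be rotated away by one choice of eta")
```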
What this means is that the Maxwell equations (with no magnetic sources) may be thought of as a special case of the generalized field equations, one in which $\et$ is chosen so that $\rh_m$ and $\J_m$ are equal to zero. From the form of the transformations for the sources, we see that this is possible if the ratio of $\rh$ to $\rh_m$ for each source (particle) is the same as that for all of the other sources (particles). Hence it is meaningless to say simply that there are no magnetic monopoles; the real question is whether all elementary particles have the same ratio of electric to magnetic charge. If they do, then Maxwell's equations are correct and correspond, as stated above, to a particular choice of $\et$ in the more general field equations. If one examines the electron and proton to see whether they have the same ratio of electric to magnetic charge, one finds that if one defines (by choice of $\et$) the magnetic charge of the electron to be zero, then experimentally the magnetic charge of the proton is known to be smaller than $10^{-24}$ of its electric charge. That is pretty good evidence for its being zero. But there remains the question whether there are other kinds of particles, not yet discovered, which have a different ratio $\rh/\rh_m$ than do electrons and protons. Dirac, for example, gave a simple and clever argument which shows that the quantization of electric charge follows from the mere existence of a single electrically uncharged magnetic monopole. Moreover, the argument gives the magnetic charge $g$ of the monopole as $g=nhc/4\pi e$, where $n$ is any integer and $h$ is Planck's constant. This is very large in comparison with the electric charge, so it ought in principle to be easy to detect a ``Dirac monopole'' should there be any of them around. So far, none has been reliably detected.
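To see how large the Dirac charge is, note that $g=nhc/4\pi e$ gives $g/e=n\hbar c/2e^2=n/2\al$, with $\al=e^2/\hbar c$ the fine-structure constant (Gaussian units). A one-line numerical sketch, using the approximate value of $\al$:

```python
# Size of the Dirac monopole charge relative to e:
#   g / e = n hbar c / (2 e^2) = n / (2 alpha),  alpha = e^2 / (hbar c).
alpha = 1.0 / 137.036          # fine-structure constant (approximate value)
for n in (1, 2, 3):
    print(n, n / (2.0 * alpha))   # g/e is about 68.5 n: a very large charge
```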
\appendix \section{Helmholtz' Theorem} Any vector function of position $\C(\x)$ can be written as the sum of two vector functions such that the divergence vanishes for one and the curl vanishes for the other. In other words, the decomposition \beq \Cx=\Dx+\Fx \eeq is always possible, where \beq \div\D=0\;\;\;\; \curl\F=0. \eeq {\bf{Proof.}} We may satisfy the two conditions on $\F$ and $\D$ by writing \beq \D=\curl\A\;\;\;\;\;\F=-\grad\Phi\,. \eeq Then taking the divergence and the curl of the decomposition, respectively, we find \beq \lap\Phi=-\div\C\;\;\;\;\;\curl\lep\curl\A\rip=\curl\C\,. \eeq Since $\A$ enters only through its curl, we may choose $\div\A=0$, whereupon $\curl\lep\curl\A\rip=-\lap\A$ and each Cartesian component of the second equation is a Poisson equation. We already know how to solve such equations (at least in Cartesian coordinates): \beq \Phxp=\frac{1}{4\pi}\inv\frac{\div\Cx}{|\x-\xp|}\;\;\;\;\;\; \A(\xp)=\frac{1}{4\pi}\inv\frac{\curl\Cx}{|\x-\xp|} \eeq Since $\D$ and $\F$ can now be found from these potentials, we have demonstrated the decomposition claimed by Helmholtz' theorem and thus proved it. An interesting corollary of this theorem is that a vector function is completely determined if its curl and divergence are known everywhere. The field $\F=-\grad\Phi$, when produced by a point source, is longitudinal, i.e., parallel to the vector from the source to the point where the field is evaluated. The field $\D=\curl\A$ is transverse to the vector from the source to the field point. Thus $\F$ is typically called the longitudinal, and $\D$ the transverse, part of $\C$. \edo