Deriving the Pauli Matrices
Tags:
#Mathematics/GeometricAlgebra #Mathematics/LinearAlgebra #Mathematics/Spinor
0 : Overview
This article provides a derivation for the Pauli spin matrices by showing that they arise naturally when looking for a matrix representation of the Clifford algebra $\mathrm{Cl}(3)$ (3D geometric algebra). The following background is assumed:
Basic Knowledge of 2D and 3D Geometric Algebra:
- Sudgylacmoe's introduction suffices.
- Linear and Geometric Algebra by Alan Macdonald provides more depth.
Basic Knowledge of Linear Algebra:
- 3Blue1Brown's Essence of Linear Algebra suffices.
Basic Knowledge of Complex Numbers:
- Welch Labs' Imaginary Numbers are Real suffices.
Basic Knowledge of Quaternions is helpful, but not required.
- 3Blue1Brown's Visualizing quaternions (4D numbers) with stereographic projection suffices.
1 : Two Dimensions
1.1 : Recap of 2D Geometric Algebra
The 2D geometric algebra $\mathrm{Cl}(2)$ is generated by two orthonormal basis vectors, $e_1$ and $e_2$. A general multivector is the sum of a scalar, a vector, and a bivector:
$$M = a + b\,e_1 + c\,e_2 + d\,e_1e_2$$
The geometric product between vectors is as follows:[1:1][2:1]
$$uv = u \cdot v + u \wedge v$$
The first term is just the standard interior (dot) product between vectors; the second term is the exterior (wedge) product, which is anti-commutative and produces a bivector:
$$u \wedge v = -\,v \wedge u$$
Just as a vector is an "oriented line segment", a bivector is an "oriented plane segment". In 2D, there is only one plane to orient, so every bivector is a multiple of the unit bivector $e_1e_2$.
Note that parallel vectors' geometric product is just the interior product, while perpendicular vectors' geometric product is the exterior product:[1:3][2:3]
$$u \parallel v \implies uv = u \cdot v, \qquad u \perp v \implies uv = u \wedge v$$
Since we are working with an orthonormal basis, the geometric product of our basis vectors is just their wedge product, so $e_1e_2 = e_1 \wedge e_2$.
Using anti-commutativity of the wedge product, $e_2e_1 = -\,e_1e_2$.
So the unit bivector squares to $-1$, exactly like the imaginary unit $i$:
$$(e_1e_2)^2 = e_1e_2e_1e_2 = -\,e_1e_1e_2e_2 = -1$$
Now our multivector looks like a complex number added to a 2D vector:
$$M = \underbrace{(a + d\,e_1e_2)}_{\text{complex number}} + \underbrace{(b\,e_1 + c\,e_2)}_{\text{vector}}$$
1.2 : Representation of Complex Numbers
A real number $a$ is represented by $a$ times the $2\times2$ identity matrix, $aI = \begin{pmatrix} a & 0 \\ 0 & a \end{pmatrix}$.
For any matrix $J = \begin{pmatrix} p & q \\ r & s \end{pmatrix}$ to represent $i$, we need $J^2 = -I$:
$$\begin{pmatrix} p & q \\ r & s \end{pmatrix}^2 = \begin{pmatrix} p^2 + qr & q(p+s) \\ r(p+s) & s^2 + qr \end{pmatrix} = \begin{pmatrix} -1 & 0 \\ 0 & -1 \end{pmatrix}$$
The off-diagonal entries force either $q = r = 0$ or $p + s = 0$, but $q = r = 0$ would require $p^2 = -1$. We want real-valued entries, so we take $p + s = 0$ instead.
Therefore we must have $s = -p$ and $p^2 + qr = -1$; the simplest choice is $p = s = 0$ with $q = -1$, $r = 1$.
Thus, we have a real-valued matrix representation for $i$:
$$i \mapsto \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$$
A general complex number $z = a + bi$ is then represented as
$$a + bi \mapsto a\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} + b\begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix} = \begin{pmatrix} a & -b \\ b & a \end{pmatrix}$$
Where '$\mapsto$' means 'is represented by the matrix'.
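As a quick numerical sanity check (my own addition, not part of the derivation itself), the sketch below verifies that the matrix above squares to $-I$ and that the map $a + bi \mapsto aI + bJ$ preserves addition and multiplication. The helper name `rep` is purely illustrative.

```python
import numpy as np

# Sketch: check the real 2x2 representation of complex numbers derived above.
I2 = np.eye(2)
J = np.array([[0.0, -1.0],
              [1.0,  0.0]])          # candidate matrix for i

def rep(z: complex) -> np.ndarray:
    """Represent a + bi as a*I + b*J."""
    return z.real * I2 + z.imag * J

assert np.allclose(J @ J, -I2)                    # J^2 = -I
z, w = 1 + 2j, 3 + 4j
assert np.allclose(rep(z) @ rep(w), rep(z * w))   # multiplication is preserved
assert np.allclose(rep(z) + rep(w), rep(z + w))   # addition is preserved
print("complex-number representation checks pass")
```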
1.3 : Representation of Vectors
Since 2D V·G·A's unit scalar $1$ and unit bivector $e_1e_2$ behave exactly like $1$ and $i$, they can reuse the matrices we just found; all that remains is to find matrices for the basis vectors $e_1$ and $e_2$.
To begin, let's do a 'sanity check': do we even have enough degrees of freedom in a real $2\times2$ matrix? A general 2D multivector has four real components $(a, b, c, d)$, and a real $2\times2$ matrix has exactly four real entries, so the count works out.
In fact, if we can find matrices for $e_1$ and $e_2$ with the right algebraic properties, this matching of degrees of freedom suggests we will get not just a representation but an isomorphism: $\mathrm{Cl}(2) \cong \mathrm{Mat}(2, \mathbb{R})$.
The symbol $\cong$ means the two algebras are isomorphic: there is an invertible map between them that preserves addition and multiplication.
With our sanity check out of the way, let's see if we can find the desired matrix representations! We have three key properties of $e_1$ and $e_2$ to enforce:
$$e_1^2 = 1 \quad (1) \qquad e_2^2 = 1 \quad (2) \qquad e_1e_2 = -\,e_2e_1 \quad (3)$$
Just as we did for $i$, we write a general real matrix $\begin{pmatrix} p & q \\ r & s \end{pmatrix}$ and impose these conditions.
Equations (1) & (2):
$$\begin{pmatrix} p & q \\ r & s \end{pmatrix}^2 = \begin{pmatrix} p^2 + qr & q(p+s) \\ r(p+s) & s^2 + qr \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$$
This is very similar to what we had for $i$: the off-diagonal entries again force either $q = r = 0$ or $p + s = 0$, but the diagonal must now equal $+1$ rather than $-1$, so this time both branches admit real solutions.
The first solution will be similar to that for $i$: take $p = s = 0$, so that $qr = 1$, and choose $q = r = 1$:
$$\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$$
This is still just a hypothesis at this stage — while this matrix squares to $+I$ as equations (1) & (2) demand, we have not yet checked the anti-commutation condition (3).
For the second solution, we instead set $q = r = 0$, so that $p^2 = s^2 = 1$. Taking $p = s$ would give $\pm I$, which commutes with everything and could never satisfy (3), so we choose $p = 1$, $s = -1$:
$$\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}$$
Finally, we just check equation (3) to verify that the two candidates anti-commute:
$$\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix} = -\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$$
Indeed, it does, so the matrices we found meet all requirements for $e_1$ and $e_2$:
$$e_1 \mapsto \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \qquad e_2 \mapsto \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}$$
As a bonus, their product is exactly the matrix we chose for $i$, just as the bivector $e_1e_2$ requires:
$$e_1e_2 \mapsto \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$$
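If you'd like to double-check the algebra numerically, here is a small numpy sketch of my own that verifies equations (1)-(3) for the two matrices above and confirms that their product reproduces the matrix we chose for $i$.

```python
import numpy as np

# Sketch: verify the candidate matrices for e1 and e2.
E1 = np.array([[0.0, 1.0],
               [1.0, 0.0]])
E2 = np.array([[1.0,  0.0],
               [0.0, -1.0]])
I2 = np.eye(2)

assert np.allclose(E1 @ E1, I2)            # equation (1): e1^2 = 1
assert np.allclose(E2 @ E2, I2)            # equation (2): e2^2 = 1
assert np.allclose(E1 @ E2, -(E2 @ E1))    # equation (3): e1 e2 = -e2 e1
# Bonus: the bivector e1 e2 reproduces the matrix used for i.
assert np.allclose(E1 @ E2, np.array([[0.0, -1.0],
                                      [1.0,  0.0]]))
print("2D vector representation checks pass")
```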
1.4 : Representation of Cl(2)
Now we have a full representation for 2D G·A!
The generic multivector $M = a + b\,e_1 + c\,e_2 + d\,e_1e_2$ is represented by
$$M \mapsto a\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} + b\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} + c\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} + d\begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix} = \begin{pmatrix} a + c & b - d \\ b + d & a - c \end{pmatrix}$$
Since we can translate from any 2D multivector to a real $2\times2$ matrix, and the translation preserves addition and multiplication, we have (nearly) established an isomorphism: $\mathrm{Cl}(2) \cong \mathrm{Mat}(2, \mathbb{R})$.
We've shown how to find the matrix corresponding to a 2D multivector, but what about going backwards? (An inverse map is required to prove isomorphism.)
Write down formulae for the multivector components $a, b, c, d$ in terms of the matrix entries of an arbitrary real $2\times2$ matrix $\begin{pmatrix} m_{11} & m_{12} \\ m_{21} & m_{22} \end{pmatrix}$; this gives the inverse map and completes the isomorphism.
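A quick numerical way to see that the map really can be inverted, without giving away the exercise, is to check that the four basis matrices are linearly independent, so they span all of $\mathrm{Mat}(2, \mathbb{R})$. A minimal sketch of my own:

```python
import numpy as np

# Sketch: the matrices representing 1, e1, e2, e1e2 span the whole space of
# real 2x2 matrices, so the representation map is a bijection.
basis = [
    np.eye(2),                                 # 1
    np.array([[0.0, 1.0], [1.0, 0.0]]),        # e1
    np.array([[1.0, 0.0], [0.0, -1.0]]),       # e2
    np.array([[0.0, -1.0], [1.0, 0.0]]),       # e1 e2
]
# Flatten each matrix to a length-4 vector and check that the rank is 4.
stacked = np.stack([m.reshape(-1) for m in basis])
assert np.linalg.matrix_rank(stacked) == 4
print("the four basis matrices are linearly independent")
```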
2 : Three Dimensions
2.1 : Recap of 3D Geometric Algebra
In 3D, we have an extra basis vector, $e_3$, and a general multivector now has eight components:
$$M = a \;+\; \big(b\,e_1 + c\,e_2 + d\,e_3\big) \;+\; \big(f\,e_1e_2 + g\,e_2e_3 + h\,e_1e_3\big) \;+\; t\,e_1e_2e_3$$
I've separated it into the various-grade elements (scalar, vector, bivector, trivector / pseudoscalar) for added clarity.
The geometric product between vectors still works the same way, as do the interior (dot) and exterior (wedge) products. However, the bivector $e_1e_2$ is no longer the highest-grade element of the algebra.
The unit pseudoscalar is instead now the trivector $e_1e_2e_3$.
Note that the pseudoscalar squares to $-1$,
$$(e_1e_2e_3)^2 = e_1e_2e_3e_1e_2e_3 = -\,e_1e_2e_1e_3e_2e_3 = e_1e_1e_2e_3e_2e_3 = e_2e_3e_2e_3 = -1,$$
and that it commutes with every basis vector (and therefore with every multivector).
So, since the 3D pseudoscalar also behaves like the imaginary unit $i$, the scalar and pseudoscalar parts of a multivector together act just like a complex number; we will lean on this when we represent the pseudoscalar in section 2.2.
Returning now to the bivectors, note that they all square to $-1$,
$$(e_1e_2)^2 = (e_2e_3)^2 = (e_1e_3)^2 = -1,$$
and that the product of two distinct unit bivectors is, up to sign, the third one. This should remind you of the quaternions.
Recall the defining equations for quaternions:
$$\mathbf{i}^2 = \mathbf{j}^2 = \mathbf{k}^2 = \mathbf{i}\mathbf{j}\mathbf{k} = -1$$
We define the following:
$$\mathbf{i} := e_2e_3, \qquad \mathbf{j} := e_1e_3, \qquad \mathbf{k} := e_1e_2$$
Now we verify that these behave like the quaternion units. Each squares to $-1$, as noted above, and for example
$$\mathbf{i}\mathbf{j} = e_2e_3e_1e_3 = -\,e_2e_3e_3e_1 = -\,e_2e_1 = e_1e_2 = \mathbf{k},$$
with $\mathbf{j}\mathbf{k} = \mathbf{i}$ and $\mathbf{k}\mathbf{i} = \mathbf{j}$ following in the same way.
Finally, by associativity of the geometric product,
$$\mathbf{i}\mathbf{j}\mathbf{k} = (\mathbf{i}\mathbf{j})\mathbf{k} = \mathbf{k}^2 = -1,$$
so the bivectors of 3D G·A, together with the scalars, reproduce the quaternions exactly.
Now our multivector looks like a quaternion added to a vector + pseudoscalar:
$$M = \underbrace{\big(a + f\,e_1e_2 + g\,e_2e_3 + h\,e_1e_3\big)}_{\text{quaternion}} \;+\; \underbrace{\big(b\,e_1 + c\,e_2 + d\,e_3 + t\,e_1e_2e_3\big)}_{\text{vector + pseudoscalar}}$$
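If you'd rather not grind through the swaps by hand, the following sketch of my own multiplies basis blades of $\mathrm{Cl}(3)$ using only the rules $e_ae_b = -e_be_a$ (for $a \neq b$) and $e_ae_a = 1$, then checks the quaternion relations for the identification $\mathbf{i} = e_2e_3$, $\mathbf{j} = e_1e_3$, $\mathbf{k} = e_1e_2$ used above. The function name `blade_product` is purely illustrative.

```python
# Sketch: multiply basis blades of Cl(3), written as tuples of indices,
# e.g. (1, 2) for e1e2 and () for the scalar 1.

def blade_product(a, b):
    """Return (sign, blade) for the geometric product of two basis blades."""
    idx = list(a) + list(b)
    sign = 1
    # Bubble-sort the indices; each swap of two distinct neighbours is one
    # anti-commutation (e_a e_b = -e_b e_a) and flips the sign.
    swapped = True
    while swapped:
        swapped = False
        for n in range(len(idx) - 1):
            if idx[n] > idx[n + 1]:
                idx[n], idx[n + 1] = idx[n + 1], idx[n]
                sign = -sign
                swapped = True
    # Adjacent equal indices contract away, since e_a e_a = 1.
    out = []
    for m in idx:
        if out and out[-1] == m:
            out.pop()
        else:
            out.append(m)
    return sign, tuple(out)

i, j, k = (2, 3), (1, 3), (1, 2)            # the identification used above
assert blade_product(i, i) == (-1, ())       # i^2 = -1
assert blade_product(j, j) == (-1, ())       # j^2 = -1
assert blade_product(k, k) == (-1, ())       # k^2 = -1
assert blade_product(i, j) == (1, (1, 2))    # ij = k
sign_ij, blade_ij = blade_product(i, j)
sign_ijk, blade_ijk = blade_product(blade_ij, k)
assert (sign_ij * sign_ijk, blade_ijk) == (-1, ())       # ijk = -1
assert blade_product((1, 2, 3), (1, 2, 3)) == (-1, ())   # pseudoscalar squared = -1
print("quaternion relations verified")
```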
2.2 : Representation of the Pseudoscalar
We already have a representation for half of the algebra: the elements $1$, $e_1$, $e_2$, and $e_1e_2$ can keep the $2\times2$ matrices we found in the 2D case. Better still, every remaining basis element is (up to a sign) the pseudoscalar $e_1e_2e_3$ multiplied by one of those four, e.g. $e_2e_3 = (e_1e_2e_3)\,e_1$.
(The proof of this is left as an exercise to the reader, see the end of this section.)
This helps us a lot! We only need to find one new matrix, that for the pseudoscalar $e_1e_2e_3$; every other missing element is then just a product of matrices we already know.
What should the matrix for $e_1e_2e_3$ be?
We recall that the pseudoscalar squares to $-1$. Our real matrix with that property is already spoken for by the bivector $e_1e_2$, but if we allow complex entries there is an obvious new candidate:
$$e_1e_2e_3 \mapsto \begin{pmatrix} i & 0 \\ 0 & i \end{pmatrix} = iI, \qquad (iI)^2 = -I$$
Thinking about the degrees of freedom, we note that allowing complex entries doubles the (real) degrees of freedom from four to eight — which is precisely the number we need for 3D G·A! This makes it a very elegant "minimal" solution, in some sense, since it extends our space only as much as we need. By contrast, if we'd found ourselves forced into larger matrices, we would have introduced more degrees of freedom than the eight components of a 3D multivector can fill.
We have left out one important detail here. The matrix above commutes with all other $2\times2$ matrices. Luckily, that is exactly the behaviour we need: the pseudoscalar $e_1e_2e_3$ commutes with every element of 3D G·A, so its matrix must commute with every matrix in the representation.
Having found a matrix for the pseudoscalar, we can now build every remaining basis element as a product of matrices we already have. For example, $e_3 = (e_1e_2e_3)(e_2e_1)$, so the matrix for $e_3$ is the pseudoscalar's matrix times the matrix for $e_2e_1$; the bivectors $e_2e_3$ and $e_1e_3$ work the same way.
This fills in all of our gaps!
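Here is a short numpy sketch of my own that carries this out with the sign conventions used above: it builds the matrix for $e_3$ as the pseudoscalar's matrix times the matrix for $e_2e_1$, then checks that the result squares to the identity and anti-commutes with the matrices for $e_1$ and $e_2$.

```python
import numpy as np

# Sketch: build e3's matrix from the pseudoscalar and the 2D matrices,
# using e3 = (e1 e2 e3)(e2 e1).
E1 = np.array([[0, 1], [1, 0]], dtype=complex)
E2 = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
PSEUDO = 1j * I2                    # e1 e2 e3  ->  i * identity
E3 = PSEUDO @ (E2 @ E1)             # e3 = (e1 e2 e3)(e2 e1)

assert np.allclose(E3 @ E3, I2)             # e3^2 = 1
assert np.allclose(E3 @ E1, -(E1 @ E3))     # e3 anti-commutes with e1
assert np.allclose(E3 @ E2, -(E2 @ E3))     # e3 anti-commutes with e2
assert np.allclose(E1 @ E2 @ E3, PSEUDO)    # consistency: e1 e2 e3 is the pseudoscalar
print("3D vector representation checks pass")
```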
If you're somewhat new to geometric algebra, these are some good practice problems. Firstly, prove the following identities, which were used above:
$$e_2e_3 = (e_1e_2e_3)\,e_1, \qquad e_1e_3 = -(e_1e_2e_3)\,e_2, \qquad e_3 = (e_1e_2e_3)(e_2e_1)$$
Secondly, prove that the pseudoscalar $e_1e_2e_3$ squares to $-1$ and commutes with every basis vector (and therefore with every multivector).
Remember, since our basis vectors are perpendicular, swapping them is anti-commutative: $e_ae_b = -\,e_be_a$ for $a \neq b$.
And since our basis vectors are unit-length, they square to $1$: $e_ae_a = 1$.
Alongside the definitions of the pseudoscalar and bivectors given above, these two rules are all you need.
2.3 : Cl(3) and Pauli Representations
Now we have a representation for 3D G·A!
We also see that we again have an isomorphism, now between 3D G·A and the space of complex $2\times2$ matrices: $\mathrm{Cl}(3) \cong \mathrm{Mat}(2, \mathbb{C})$.
The rules for converting between matrices and multivectors are the same as in the 2D case, except that the coefficients are now complex numbers; taking their real and imaginary parts then yields the specific components we want.
The Pauli matrices, denoted by $\sigma_1$, $\sigma_2$, and $\sigma_3$, are conventionally defined as
$$\sigma_1 = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \qquad \sigma_2 = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \qquad \sigma_3 = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}$$
The reason I find this frustrating is that it causes the 2D G·A matrices to correspond to the Pauli matrices $\sigma_1$ and $\sigma_3$ (the two real ones) rather than $\sigma_1$ and $\sigma_2$: the conventional numbering doesn't match the order in which the matrices appeared in our derivation, so we have to relabel which matrix represents which basis vector.
The Pauli representation then looks like so:
$$1 \mapsto I, \qquad e_1 \mapsto \sigma_1, \quad e_2 \mapsto \sigma_2, \quad e_3 \mapsto \sigma_3,$$
$$e_1e_2 \mapsto \sigma_{12}, \quad e_2e_3 \mapsto \sigma_{23}, \quad e_1e_3 \mapsto \sigma_{13}, \qquad e_1e_2e_3 \mapsto \sigma_{123} = iI$$
Where we've denoted products of matrices by combining their indices.
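As a final check (my own addition), the conventional Pauli matrices can be verified numerically to satisfy exactly the $\mathrm{Cl}(3)$ relations we have been using: each squares to the identity, distinct ones anti-commute, and their product is $i$ times the identity.

```python
import numpy as np

# Sketch: the conventional Pauli matrices satisfy the Cl(3) relations.
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

for s in (s1, s2, s3):
    assert np.allclose(s @ s, I2)               # each squares to +1
for a, b in ((s1, s2), (s2, s3), (s3, s1)):
    assert np.allclose(a @ b, -(b @ a))         # distinct ones anti-commute
assert np.allclose(s1 @ s2 @ s3, 1j * I2)       # sigma_123 = i * identity
print("Pauli matrices satisfy the Cl(3) relations")
```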
This completes the derivation of the Pauli matrices!
Conclusion and Spinor Discussion
As I have hopefully shown, the Pauli matrices are not arbitrary at all: they come about naturally when trying to find matrix representations of geometric algebras.
This might make you wonder — what is the connection between geometric algebra and spinors? (After all, the Pauli matrices are supposed to be operators that act on spinors.[4:1])
The answer, it turns out, is pretty much everything! The 'key idea' in geometric algebra is that vectors are not just objects, but linear transformations (via the geometric product). These linear transformations can, of course, act on other vectors, but they can also act on spinors, represented by two-component column tuples:[5]
$$\psi = \begin{pmatrix} \psi_1 \\ \psi_2 \end{pmatrix}, \qquad \psi_1, \psi_2 \in \mathbb{C}$$
These spinors are called Pauli spinors since they work with Pauli vectors.[5:1]
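As a concrete illustration (my own, not taken from the cited source), the sketch below shows a vector acting on Pauli spinors in this representation: $e_3$, represented by $\sigma_3$, has the two basis spinors as eigenvectors, while $\sigma_1$ swaps them.

```python
import numpy as np

# Sketch: Pauli matrices acting on two-component Pauli spinors.
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

up = np.array([1, 0], dtype=complex)      # "spin up" along e3
down = np.array([0, 1], dtype=complex)    # "spin down" along e3

assert np.allclose(s3 @ up, up)           # eigenspinor with eigenvalue +1
assert np.allclose(s3 @ down, -down)      # eigenspinor with eigenvalue -1
assert np.allclose(s1 @ up, down)         # sigma_1 swaps the two spin states
assert np.allclose(s1 @ down, up)
print("Pauli matrices act on spinors as expected")
```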
Matrices can be constructed from Kronecker Products of column tuples with row tuples, and we can analogously think of (multi)-vectors as being constructed from tensor products of spinors with (dual) cospinors.[5:2] In a sense, that is what geometric algebra does. It recognises that it is not spin-1 vectors that are the "fundamental objects", but instead spin-1/2 spinors. This allows the interpretation of vectors as linear maps (both on spinors and other vectors), which gives rise to the geometric product!
No wonder geometric algebra simplifies so much of physics — we've been treating vectors as "fundamental" this whole time, when it should have been spinors! Mathematics rewards those who listen to her.
Appendix
Thanks for reading, and have a nice day!