Oh! the hours I put into studying the taxonomy and husbandry of matrices. Time has passed. I have forgotten much. Jacobians have begun to seem downright Old Testament.

And when you put the various operations of matrix calculus into the mix (derivative of trace of a skew-hermitian heffalump painted with a camel-hair brush), the combinatorial explosion of theorems and identities is intimidating. Now import general non-commutative algebraic structures.

Things I need to learn:

## Basic linear algebra intros

- Kevin Brown on Bras, Kets, and Matrices
- Stanford CS229's *Linear Algebra Review and Reference* (PDF)
- Fun: Tom Leinster, *There are no non-trivial complex quarter turns, but there are real ones*. That is, for a linear operator $T$ on a *real* inner product space,
  \begin{equation*} \langle T x, x \rangle = 0 \,\, \forall x \,\, \iff \,\, T^\ast = -T, \end{equation*}
  whereas for an operator on a *complex* inner product space,
  \begin{equation*} \langle T x, x \rangle = 0 \,\, \forall x \,\, \iff \,\, T = 0. \end{equation*}
  Cool.
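Leinster's dichotomy is easy to poke at numerically. A quick numpy sketch (my own illustration), using the 90° rotation as the canonical real quarter turn:

```python
import numpy as np

rng = np.random.default_rng(0)

# A real skew-symmetric operator: T^T = -T, hence <Tx, x> = 0 for every
# real x, even though T is not the zero operator -- a "real quarter turn".
T = np.array([[0.0, -1.0],
              [1.0,  0.0]])  # rotation by 90 degrees

for _ in range(5):
    x = rng.standard_normal(2)
    assert abs(x @ T @ x) < 1e-12  # exactly zero up to rounding

# Over the complexes the analogous statement demands T = 0: with the
# Hermitian inner product, the same nonzero T fails the condition.
z = np.array([1.0, 1j])
print(np.vdot(T @ z, z))  # nonzero (2j for this z)
```

(`np.vdot` conjugates its first argument, so it computes the Hermitian inner product.)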

Sheldon Axler's *Down with Determinants!* (Axle95) is a readable and intuitive introduction for undergrads:

> Without using determinants, we will define the multiplicity of an eigenvalue and prove that the number of eigenvalues, counting multiplicities, equals the dimension of the underlying space. Without determinants, we'll define the characteristic and minimal polynomials and then prove that they behave as expected. Next, we will easily prove that every matrix is similar to a nice upper-triangular one. Turning to inner product spaces, and still without mentioning determinants, we'll have a simple proof of the finite-dimensional Spectral Theorem.
>
> Determinants are needed in one place in the undergraduate mathematics curriculum: the change of variables formula for multi-variable integrals. Thus at the end of this paper we'll revive determinants, but not with any of the usual abstruse definitions. We'll define the determinant of a matrix to be the product of its eigenvalues (counting multiplicities). This easy-to-remember definition leads to the usual formulas for computing determinants. We'll derive the change of variables formula for multi-variable integrals in a fashion that makes the appearance of the determinant there seem natural.

He wrote a whole textbook on this basis (Axle14).
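Axler's easy-to-remember definition checks out numerically; a small numpy sketch (my own, assuming a generic real matrix with distinct eigenvalues):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))

# Axler's definition: det(A) is the product of the eigenvalues,
# counted with multiplicity.
det_from_eigs = np.prod(np.linalg.eigvals(A))

# It agrees with the classical determinant up to floating-point noise;
# for real A the imaginary parts cancel in conjugate pairs.
assert np.allclose(det_from_eigs, np.linalg.det(A))
print(det_from_eigs.real, np.linalg.det(A))
```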

A handy glossary is Mike Brookes' *Matrix Reference Manual*.

This Singular Value Decomposition series is worth reading for its insight:

> Most of the time when people talk about linear algebra (even mathematicians), they'll stick entirely to the linear map perspective or the data perspective, which is kind of frustrating when you're learning it for the first time. It seems like the data perspective is just a tidy convenience, that it just "makes sense" to put some data in a table. In my experience the singular value decomposition is the first time that the two perspectives collide, and (at least in my case) it comes with cognitive dissonance.
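The collision of the two perspectives is visible in a few lines of numpy (a sketch of my own, not from the series):

```python
import numpy as np

rng = np.random.default_rng(2)

# Data perspective: rows are observations, columns are features.
X = rng.standard_normal((6, 3))

# Linear map perspective: the same array is a map R^3 -> R^6, and the
# SVD factors it as rotation . axis-aligned scaling . rotation.
U, s, Vt = np.linalg.svd(X, full_matrices=False)

# The factorisation reconstructs the original map/table exactly.
assert np.allclose(X, U @ np.diag(s) @ Vt)

# The same singular vectors serve both readings at once: truncating to
# the leading singular triple gives the best rank-1 approximation of
# the map, and the dominant direction of variation in the data.
X1 = s[0] * np.outer(U[:, 0], Vt[0])
print(np.linalg.norm(X - X1, ord="fro"))
```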

## Linear algebra and calculus

The multidimensional statistics/control theory workhorse.

- Simple quick recipes: Petersen & Pedersen's *Matrix Cookbook* (PePe12)
- More sophisticated and more expository, but a little more work: Thomas P. Minka, *Old and New Matrix Algebra Useful for Statistics* (Mink00)
- Autodiff-focussed: *Collected Matrix Derivative Results for Forward and Reverse Mode Algorithmic Differentiation* (Gile08)
- NCAlgebra extends Mathematica with noncommutative algebra functionality. I haven't tried this, but I've been *told* that it will allow you to, for example, take a derivative with respect to a vector computationally, instead of by pattern-matching from PePe12.
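Even without symbolic machinery, identities from the tables can be sanity-checked numerically. A sketch (my own) verifying the classic quadratic-form gradient $\partial (x^\top A x)/\partial x = (A + A^\top)x$ by central differences:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
A = rng.standard_normal((n, n))
x = rng.standard_normal(n)

# Cookbook-style identity: d/dx (x^T A x) = (A + A^T) x.
analytic = (A + A.T) @ x

# Central-difference gradient; exact for a quadratic, up to rounding.
eps = 1e-6
numeric = np.empty(n)
for i in range(n):
    e = np.zeros(n)
    e[i] = eps
    f_plus = (x + e) @ A @ (x + e)
    f_minus = (x - e) @ A @ (x - e)
    numeric[i] = (f_plus - f_minus) / (2 * eps)

assert np.allclose(analytic, numeric, atol=1e-4)
```

The same pattern (analytic formula vs. finite differences) is a cheap regression test for any hand-derived matrix gradient.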

## Refs

- Axle95: Axler, S. (1995) Down with Determinants!. *The American Mathematical Monthly*, 102(2), 139–154. DOI.
- Axle14: Axler, S. (2014) *Linear Algebra Done Right*. New York: Springer.
- Dwye67: Dwyer, P. S. (1967) Some Applications of Matrix Derivatives in Multivariate Analysis. *Journal of the American Statistical Association*, 62(318), 607. DOI.
- Gile08: Giles, M. B. (2008) Collected Matrix Derivative Results for Forward and Reverse Mode Algorithmic Differentiation. In C. H. Bischof, H. M. Bücker, P. Hovland, U. Naumann, & J. Utke (Eds.), *Advances in Automatic Differentiation* (Vol. 64, pp. 35–44). Berlin, Heidelberg: Springer.
- MaAm14: Manton, J. H., & Amblard, P.-O. (2014) A Primer on Reproducing Kernel Hilbert Spaces. *arXiv:1408.0952 [math]*.
- Mink00: Minka, T. P. (2000) Old and New Matrix Algebra Useful for Statistics.
- Parl00: Parlett, B. N. (2000) The QR Algorithm. *Computing in Science & Engineering*, 2(1), 38–42. DOI.
- PePe12: Petersen, K. B., & Pedersen, M. S. (2012) The Matrix Cookbook.