This has been bothering me for days, so I’m posting it here to spare someone else the same vexation.

Consider \(\mathbb{R}^d\) with the \(L_2\) norm, i.e. \(d\)-dimensional Euclidean space.

Let’s say you wish to have a random variable \(V\) whose realisations are isotropically distributed unit vectors (say, to represent isotropic rotations, or as a null distribution against which to check for isotropy).

The simplest way I know of generating such vectors is to take

\[ V = (v_1, v_2, \dots, v_d)^\top, \qquad v_i \ \text{i.i.d.}, \quad v_i \sim \mathcal{N}(0, 1). \]

That is, \(V\) is distributed according to a multivariate Gaussian with each component independent and of standard deviation 1; equivalently, \(V \sim \mathcal{N}(0, I_d)\).

By the rotational invariance of multivariate Gaussian distributions we know that this must be isotropic.

Now, to get unit vectors we can simply normalise:

\[ U = \frac{V}{\|V\|}. \]
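In R, the whole recipe is a couple of lines (a sketch; `runif.sphere` is my own name for it):

```r
# Draw n isotropic unit vectors in R^d: i.i.d. standard Gaussians, then normalise
runif.sphere <- function(n, d) {
  V <- matrix(rnorm(n * d), nrow = n, ncol = d)  # rows are draws from N(0, I_d)
  V / sqrt(rowSums(V^2))                         # normalise each row to unit length
}

U <- runif.sphere(1000, 3)
range(rowSums(U^2))  # all rows have (numerically) unit norm
```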

This much is 30 seconds of web-searching away from any of us. (And if you use the “hit-and-run” Monte Carlo sampler, this is your daily bread.)

The question is: what is the distribution of the axial component \(u_i\) of \(U\) along axis \(i\), in the non-trivial case \(d>1\)?

I spent a lot of time doing tedious integrals, before I realised there was an easy, calculus-free solution.

Let us consider, w.l.o.g., \(u_1\). We know that the p.d.f. of \(u_1\) is *even* (i.e. has reflective symmetry) by construction, so it will suffice to find the positive half of the distribution. Thus it will also be enough to find the p.d.f. of \(u_1^2\).

Define

\[ V' = (v_2, v_3, \dots, v_d), \]

i.e. \(V'\) is \(V\) with the first component of the vector excised. Since \(u_1^2 = v_1^2/(v_1^2 + \|V'\|^2)\), rearranging gives

\[ \frac{1}{d-1}\left(\frac{1}{u_1^2} - 1\right) = \frac{\|V'\|^2/(d-1)}{v_1^2/1}. \]

Now, the R.H.S. has an \(\mathcal{F}(d-1,1)\) distribution, as a ratio of independent \(\chi^2\) variables each divided by its degrees of freedom (\(\|V'\|^2 \sim \chi^2_{d-1}\) and \(v_1^2 \sim \chi^2_1\)), and the work there has already been done for us by Snedecor.

We call the R.H.S. here \(G\), where \(G \sim \mathcal{F}(d-1,1)\). Then, solving for \(u_1^2\),

\[ u_1^2 = \frac{1}{(d-1)\,G + 1}. \]
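A quick simulation (my own sanity check, not part of the derivation) confirms the distributional claim: transform simulated axial components back to \(G\) and compare against the \(\mathcal{F}(d-1,1)\) c.d.f.:

```r
# Check: G = (1/u1^2 - 1)/(d-1) should follow F(d-1, 1)
set.seed(1)
d <- 5
n <- 20000
V <- matrix(rnorm(n * d), ncol = d)
u1 <- V[, 1] / sqrt(rowSums(V^2))   # axial component of normalised vectors
G <- (1 / u1^2 - 1) / (d - 1)       # should be F(d-1, 1)
# empirical vs theoretical CDF at a few points
cbind(empirical   = ecdf(G)(c(0.5, 1, 2)),
      theoretical = pf(c(0.5, 1, 2), d - 1, 1))
```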

Now we can construct the p.d.f. of \(u_1\). In fact, for my purposes I need the quantile/inverse-c.d.f. function, so let’s do that. Call the quantile function of the \(\mathcal{F}(d-1,1)\) distribution \(H(x)\), and the quantile function of \(u_1\), \(F(x)\) (for ‘half’ and ‘full’).

Then we re-tile two symmetric copies of the function about the axis,

\[ F(x) = \mathrm{sgn}(2x-1)\,\bigl((d-1)\,H(1-|2x-1|)+1\bigr)^{-1/2}, \]

and we’re done.
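To convince ourselves the re-tiling is right, we can compare it with empirical quantiles of simulated axial components (a sketch; the function name is mine):

```r
# Quantile of u1, built from the F(d-1, 1) quantile qf() as derived above
full.quantile <- function(x, d = 3) {
  x.scaled <- 2 * x - 1
  sign(x.scaled) / sqrt((d - 1) * qf(1 - abs(x.scaled), d - 1, 1) + 1)
}

# Compare with empirical quantiles of simulated axial components
set.seed(2)
d <- 4
V <- matrix(rnorm(50000 * d), ncol = d)
u1 <- V[, 1] / sqrt(rowSums(V^2))
p <- c(0.1, 0.25, 0.5, 0.75, 0.9)
cbind(formula = full.quantile(p, d), empirical = quantile(u1, p))
```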

Here’s a plot of the behaviour of our distribution:

Notice that axial components tend to be long in \(\mathbb{R}^2\), short in more than 3 dimensions, and uniformly distributed in exactly 3 dimensions. Intuitively, as the number of dimensions increases, the contribution of any one axis to the total length tends to be smaller, simply because there are more axes sharing it.
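The \(d=3\) case is Archimedes’ hat-box theorem in disguise: the axial component of a uniform point on the 2-sphere is itself uniform. Here the relevant \(F\)-distribution is \(\mathcal{F}(2,1)\), whose c.d.f. has the closed form \(1-(1+2g)^{-1/2}\), and pushing that through the quantile formula collapses it to exactly \(2x-1\), the \(\mathrm{Uniform}(-1,1)\) quantile. A quick numerical confirmation:

```r
# F(2,1) has the closed-form CDF 1 - (1 + 2g)^(-1/2); check against R's pf()
g <- c(0.5, 1, 2, 5)
max(abs(pf(g, 2, 1) - (1 - (1 + 2 * g)^(-1/2))))  # numerically zero

# Hence the d = 3 quantile of u1 collapses to the Uniform(-1, 1) quantile 2x - 1
x <- seq(0.05, 0.95, by = 0.05)
q3 <- sign(2 * x - 1) / sqrt(2 * qf(1 - abs(2 * x - 1), 2, 1) + 1)
max(abs(q3 - (2 * x - 1)))  # numerically zero
```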

And here’s the R code to generate the graph, most of which is plotting logic. A chocolate bar for anyone who can tell me easier ways of graphing functions in R.

```r
half.quantile <- function(x, d = 3) {
  g <- qf(x, d - 1, 1)
  return(1 / sqrt(g * (d - 1) + 1))
}

full.quantile <- function(x, d = 3) {
  x.scaled <- 2 * x - 1
  res <- sign(x.scaled) * half.quantile(1 - abs(x.scaled), d)
  return(res)
}

pts <- seq(0, 1, 1/512)
dims <- seq(2, 10)
ndims <- length(dims)
vals <- data.frame(outer(pts, dims, full.quantile))

xrange <- c(0, 1)
yrange <- c(-1, 1)

library(RColorBrewer)
colors <- brewer.pal(ndims, "Spectral")

# set up an empty plot
plot(xrange, yrange, type = "n", xlab = "x", ylab = "quantile")

# add one line per dimension
for (i in seq(ndims)) {
  lines(pts, vals[, i], type = "l", lwd = 2, col = colors[i], lty = 1)
}

# add a legend
legend(xrange[1], yrange[2], dims, cex = 0.8, col = colors, lty = 1,
       title = "Dimension")
```