# Psychoacoustics

## Psychoacoustic units

A quick, incomplete reference to pascals, bels, ERBs, Barks, sones, hertz, semitones, mels and whatever else I happen to need.

The actual auditory system is atrociously complex and I'm not going into complete e.g. perceptual models here, even if I did know a stirrup from a hammer or a cochlea from a cauliflower ear. Measuring what we can perceive with our sensory apparatus is itself a complex thing, involving masking effects and variable resolution in time, space and frequency, not to mention variation between individuals.

Nonetheless, when studying audio it is worthwhile using units other than the natural-to-a-physicist hertz and pascals, even without hoping to pretend that we have found the native units of the human ear. SI units are inconvenient when studying musical metrics or machine listening because they do not closely match human perceptual difference – 50 Hz is a significant difference at a base frequency of 100 Hz, but insignificant at 2000 Hz. But how big this difference is and what it means is rather a complex and contingent question. This means that we should not be too attached to getting this one "right", and should feel free to take adequately simple approximations as the project demands.

Since my needs are machine listening features and thus computational speed and simplicity over perfection, I will wilfully and with malice ignore any fine distinctions I cannot be bothered with, regardless of how many articles have been published discussing said details. For example, I will not cover “salience”, “sonorousness” or cultural difference issues. I will also ignore issues of uncertainty principles in inferring such qualities.

### Start point: physical units

SPL, hertz, pascals.

### First step: Logarithmic units

This innovation is nearly universal in music studies because of its extreme simplicity. However, it constantly surprises machine listening researchers, who keep rediscovering it when they get frustrated with the FFT spectrogram. Bels/decibels, semitones/octaves… dBV.
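A minimal sketch of the two basic logarithmic conversions, in Python; the function names and the A440 reference default are my own choices for illustration:

```python
import math

REF_PRESSURE_PA = 20e-6  # standard reference pressure for dB SPL (20 µPa)

def pa_to_db_spl(pressure_pa):
    """Convert RMS sound pressure in pascals to dB SPL."""
    return 20.0 * math.log10(pressure_pa / REF_PRESSURE_PA)

def hz_to_semitones(f_hz, ref_hz=440.0):
    """Signed distance in semitones from a reference frequency (A440 by default)."""
    return 12.0 * math.log2(f_hz / ref_hz)
```

This already captures the example above: a 50 Hz difference comes out as about 7 semitones at a 100 Hz base but only about 0.4 semitones at 2000 Hz.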

### “Cambridge” and “Munich” frequency units

Bark and ERB measures; these seem to be more common in the acoustics and psycho-acoustics community. An introduction to selected musically useful bits is given by Parncutt and Strasberger (PaSt94).

According to Moor14, the key reference for Barks is Zwicker's "critical band" research (Zwic61), extended by Brian Moore et al. (e.g. in MoGl83).

Trau90 gives a simple rational formula to approximate the in-any-case-approximate lookup tables, as does MoGl83, and both relate these to ERBs.

#### Barks

Descriptions of Barks seem to start with the statement that above about 500 Hz this scale is near logarithmic in the frequency axis. Below 500 Hz the Bark scale approaches linearity. It is defined by an empirically derived table, but there are analytic approximations which seem just as good.

Traunmüller's approximation for critical band rate in Barks:

$$z(f) = \frac{26.81 f}{1960 + f} - 0.53$$

Lach Lau amends the formula:

Hartmut Traunmüller's online unit conversion page can convert these for you, and Dik Hermes summarises some history of how we got this way.
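Traunmüller's rational approximation is simple enough to sketch in Python. The edge corrections below are the ones I believe Trau90 gives for the extremes of the scale, so treat the exact coefficients with suspicion; the function name is mine:

```python
def hz_to_bark(f_hz):
    """Traunmüller's (1990) rational approximation to the Bark scale,
    with (as I recall them) his corrections at the extremes."""
    z = 26.81 * f_hz / (1960.0 + f_hz) - 0.53
    if z < 2.0:      # low-frequency correction
        z += 0.15 * (2.0 - z)
    elif z > 20.1:   # high-frequency correction
        z += 0.22 * (z - 20.1)
    return z
```

Sanity check: 1 kHz lands around 8.5 Bark, consistent with the usual tables.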

#### erbs

Newer; works better at lower frequencies (but possibly not at very high frequencies?). ERBs seem to be popular for analysing psychoacoustic masking effects?

ERBs are given different formulae and capitalisation depending on where you look. Here's one from PaSt94 for the "ERB-rate":

$$H_E(f) = H_1 \ln\left(\frac{f + f_1}{f + f_2}\right) + H_3$$

where $H_1 = 11.17$ erbs, $f_1 = 312\,\mathrm{Hz}$, $f_2 = 14675\,\mathrm{Hz}$ and $H_3 = 43.0$ erbs.

ERBs themselves are bandwidths, which is a different quantity from the ERB-rate at a given frequency.
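For contrast with the PaSt94 version, here is a sketch of the widely used Glasberg and Moore (1990) forms of both quantities, assuming Python; function names are mine for illustration:

```python
import math

def erb_hz(f_hz):
    """Equivalent rectangular bandwidth (in Hz) at centre frequency f_hz,
    per Glasberg & Moore (1990): ERB = 24.7 (4.37 f/1000 + 1)."""
    return 24.7 * (4.37 * f_hz / 1000.0 + 1.0)

def hz_to_erb_rate(f_hz):
    """ERB-rate scale: roughly, the number of ERBs below f_hz."""
    return 21.4 * math.log10(4.37 * f_hz / 1000.0 + 1.0)
```

At 1 kHz this gives a bandwidth of about 133 Hz and an ERB-rate of about 15.6, which illustrates the distinction the previous sentence is making.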

### Mel frequencies

Mels are credited by Traunmüller to Bera49 and by Parncutt to Stevens and Volkmann (StVo40).

> The mel scale is not used as a metric for computing pitch distance in the present model, because it applies only to pure tones, whereas most of the tone sensations evoked by complex sonorities are of the complex variety (virtual rather than spectral pitches).

Certainly some of the ERB experiments are also done using pure tones, but maybe… Ach, I don't even care.

Mels are common in the machine listening community, mostly through the MFCC, the Mel-Frequency Cepstral Coefficient representation, which has historically been a popular metric for measuring psychoacoustic similarity of sounds. (MeCh76, DaMe80)

Here's one formula, the "HTK" formula:

$$m(f) = 2595 \log_{10}\left(1 + \frac{f}{700}\right)$$

There are others, such as the "Slaney" formula, which is more complicated and piecewise-defined (linear below 1 kHz, logarithmic above). I can't be bothered searching for details for now.
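The HTK formula and its inverse are one-liners; a Python sketch (function names mine):

```python
import math

def hz_to_mel_htk(f_hz):
    """HTK mel formula: m = 2595 log10(1 + f/700)."""
    return 2595.0 * math.log10(1.0 + f_hz / 700.0)

def mel_to_hz_htk(m):
    """Inverse of the HTK mel formula."""
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)
```

The constants are chosen so that 1000 Hz maps to (very nearly) 1000 mels.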

### Perceptual Loudness

Sones (StVN37) are a power-law intensity scale. Phons (ibid.) are a logarithmic intensity scale, something like the dB level of the signal filtered to match the human ear, which is close to… dB(A)? Something like that. But you can get more sophisticated. Keyword: Fletcher–Munson curves.
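The standard mapping between the two scales: one sone is defined as 40 phons, and loudness in sones doubles every 10 phons above that. A Python sketch (only a reasonable approximation above roughly 40 phons; function names mine):

```python
import math

def phon_to_sone(loudness_phon):
    """Stevens' power-law mapping from loudness level (phons) to
    loudness (sones): 40 phons = 1 sone, doubling every +10 phons."""
    return 2.0 ** ((loudness_phon - 40.0) / 10.0)

def sone_to_phon(loudness_sone):
    """Inverse mapping from sones back to phons."""
    return 40.0 + 10.0 * math.log2(loudness_sone)
```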

For this level of precision, the coupling of frequency and amplitude into perceptual "loudness" becomes important: equal increments of SPL are no longer equally loud at different source frequencies, which is what equal-loudness contours capture. You can get these from an actively updated ISO standard at great expense, or try to reconstruct them from journals. SMRM03 seems to be the accepted modern version, but their report only gives graphs and omits values in the few equations. Table-based loudness contours are available under the MIT license from the Surrey git repo, under iso226.m. Closed-form approximations for an equal-loudness contour at fixed SPL are given in SuTa04, equation 6.

When the loudness of an $f$-Hz comparison tone equals the loudness of a reference tone at 1 kHz with sound pressure $p_r$, the sound pressure $p_f$ at frequency $f$ Hz is given by the following function:

AFAICT they don't define $p_{ft}$ or $p_{rt}$ anywhere, and I don't have enough free attention to find a simple expression for the frequency-dependent parameters, which I think are still spline-fit. (?)

There is an excellent explanation of the point of all this – with diagrams – by Joe Wolfe.

### Onwards and upwards like a Shepard tone

At this point, where we are already combining frequency and loudness, things are getting weird; we are usually measuring people's reported subjective loudness levels for unnatural signals (pure tones), and with real signals we rapidly start running into temporal masking effects and phasing and so on.

Thankfully, we aren't in the business of exhaustive cochlear modelling, so we can all go home now. The unhealthily curious might read Moor07 or Hart97 and tell me the good bits, then move on to sensory neurology.

## Psychoacoustic models in lossy audio compression

Pure link dump, sorry.