The Living Thing / Notebooks


[Figure: ISO 226:2003 equal-loudness contours. Image by Lindosland.]

Psychoacoustic units

A quick incomplete reference to pascals, bels, ERBs, Barks, sones, hertz, semitones, mels and whatever else I happen to need.

The actual auditory system is atrociously complex and I’m not going into, e.g., complete perceptual models here, even if I did know a stirrup from a hammer or a cochlea from a cauliflower ear. Measuring what we can perceive with our sensory apparatus is itself a complex thing, involving masking effects and variable resolution in time, space and frequency, not to mention variation between individuals.

Nonetheless, when studying audio it is worthwhile using units other than the natural-to-a-physicist hertz and pascals, even without hoping to pretend that we have found the native units of the human ear. SI units are inconvenient when studying musical metrics or machine listening because they do not closely match human perceptual differences: 50 Hz is a significant difference at a base frequency of 100 Hz, but insignificant at 2000 Hz. But how big this difference is and what it means is rather a complex and contingent question. This means that we should not be too attached to getting this one “right”, and should feel free to take adequate simple approximations as the project demands.
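To make that concrete, here is a small sketch (plain Python, function name mine) measuring the same 50 Hz step in equal-tempered semitones at both registers:

```python
import math

def semitones(f_low, f_high):
    """Interval between two frequencies, in equal-tempered semitones."""
    return 12 * math.log2(f_high / f_low)

# The same 50 Hz step in two registers:
print(semitones(100, 150))    # ~7.02 semitones: a perfect fifth
print(semitones(2000, 2050))  # ~0.43 semitones: under a quarter tone
```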

Since my needs are machine listening features and thus computational speed and simplicity over perfection, I will wilfully and with malice ignore any fine distinctions I cannot be bothered with, regardless of how many articles have been published discussing said details. For example, I will not cover “salience”, “sonorousness” or cultural difference issues. I will also ignore issues of uncertainty principles in inferring such qualities.

Start point: physical units

SPL, Hertz, pascals.

First step: Logarithmic units

This innovation is nearly universal in music studies, because of its extreme simplicity. However, it’s constantly surprising to machine listening researchers, who keep rediscovering it when they get frustrated with the FFT spectrogram. Bels/decibels, semitones/octaves… dBV.
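As a sketch of the basic logarithmic conversions (the 20 µPa reference for dB SPL is the conventional one; function names are mine):

```python
import math

REF_PRESSURE_PA = 20e-6  # conventional reference pressure for dB SPL

def pascals_to_db_spl(p):
    """Sound pressure in pascals to dB SPL."""
    return 20 * math.log10(p / REF_PRESSURE_PA)

def hz_to_octaves(f, f_ref=440.0):
    """Octaves above (negative: below) a reference frequency."""
    return math.log2(f / f_ref)

print(pascals_to_db_spl(2e-3))  # 40 dB SPL
print(hz_to_octaves(880))       # 1.0 octave above A440
```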

“Cambridge” and “Munich” frequency units

Bark and ERB measures; these seem to be more common in the acoustics and psychoacoustics community. An introduction to selected musically useful bits is given by Parncutt and Strasburger (PaSt94).

According to Moor14, the key reference for Barks is Zwicker’s “critical band” research (Zwic61), extended by Brian Moore et al. (e.g. in MoGl83).

Trau90 gives a simple rational formula to approximate the in-any-case-approximate lookup tables, as does MoGl83, and both relate these to ERBs.

Descriptions of Barks always seem to start with the statement that above about 500 Hz the scale is nearly logarithmic in the frequency axis, while below 500 Hz it approaches linearity. It is defined by an empirically derived table, but there are analytic approximations which seem just as good.

ERBs are newer and work better at low frequencies (though possibly not at very high frequencies?), and seem to be popular for analysing psychoacoustic masking effects.


ERBs are given different formulae and capitalisation depending on where you look. Here’s one from PaSt94 for the “ERB-rate”:

\begin{equation*} H_p(f) = H_1\ln\left(\frac{f+f_1}{f+f_2}\right)+H_0, \end{equation*}


\begin{align*} H_1 &=11.17 \text{ erb}\\ H_0 &=43.0 \text{ erb}\\ f_1 &= 312 \text{ Hz}\\ f_2 &= 14675 \text{ Hz} \end{align*}

ERBs themselves (that is, the bandwidth, which is distinct from the ERB-rate at a given frequency):

\begin{equation*} B_e = 6.23 \times 10^{-6} f^2 + 0.09339 f + 28.52. \end{equation*}
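A sketch of both formulae in Python, using the constants above (treat this as an approximation, not a reference implementation):

```python
import math

def erb_rate(f):
    """ERB-rate in erbs, using the PaSt94 constants above."""
    H1, H0 = 11.17, 43.0
    f1, f2 = 312.0, 14675.0
    return H1 * math.log((f + f1) / (f + f2)) + H0

def erb_bandwidth(f):
    """ERB bandwidth in Hz at frequency f, per the MoGl83 polynomial above."""
    return 6.23e-6 * f**2 + 0.09339 * f + 28.52

print(erb_rate(1000))       # ~15.3 erbs
print(erb_bandwidth(1000))  # ~128 Hz
```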


Traunmüller’s (Trau90) approximation for the critical band rate in Barks:

\begin{equation*} z(f) = \frac{26.81}{1+1960/f} - 0.53 \end{equation*}

Lach Lau amends the formula:

\begin{equation*} z'(f) = z(f) + \mathbb{I}\{z(f)>20.1\}\cdot 0.22\,(z(f)-20.1) \end{equation*}
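Both approximations are one-liners; a sketch (function names mine):

```python
def bark_traunmuller(f):
    """Critical-band rate in Bark, Traunmüller's (Trau90) approximation."""
    return 26.81 / (1 + 1960.0 / f) - 0.53

def bark_lach_lau(f):
    """Lach Lau's amendment, correcting the scale above 20.1 Bark."""
    z = bark_traunmuller(f)
    return z + 0.22 * (z - 20.1) if z > 20.1 else z

print(bark_traunmuller(1000))  # ~8.5 Bark
```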

Hartmut Traunmüller’s online unit conversion page can convert these for you, and Dik Hermes summarises some history of how we got this way.

Mel frequencies

Mels are credited by Traunmüller to Bera49 and by Parncutt to Stevens and Volkmann (StVo40).

The mel scale is not used as a metric for computing pitch distance in the present model, because it applies only to pure tones, whereas most of the tone sensations evoked by complex sonorities are of the complex variety (virtual rather than spectral pitches).

Certainly some of the ERB experiments are also done using pure tones, but maybe… Ach, I don’t even care.

Mels are common in the machine listening community, mostly through the MFCC, the Mel-Frequency Cepstral Coefficients, a feature set that has historically been popular for measuring the psychoacoustic similarity of sounds. (MeCh76, DaMe80)

Here’s one formula, the “HTK” formula.

\begin{equation*} m(f) = 1127 \ln(1+f/700) \end{equation*}

There are others, such as the “Slaney” formula, which is much more complicated and piecewise defined. I can’t be bothered searching for the details for now.
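The HTK formula and its inverse, as a sketch (the constant 1127 ≈ 1000/ln(1 + 1000/700), which puts 1000 Hz at roughly 1000 mels):

```python
import math

def hz_to_mel_htk(f):
    """HTK mel formula: m(f) = 1127 ln(1 + f/700)."""
    return 1127.0 * math.log(1 + f / 700.0)

def mel_to_hz_htk(m):
    """Inverse of the HTK mel formula."""
    return 700.0 * (math.exp(m / 1127.0) - 1)

print(hz_to_mel_htk(1000))  # ~1000 mels, by construction
```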

Perceptual Loudness

Sones (StVN37) are a power-law intensity scale. Phons (ibid.) are a logarithmic intensity scale, something like the dB level of the signal filtered to match the human ear, which is close to… dB(A)? Something like that. But you can get more sophisticated. Keyword: Fletcher-Munson curves.
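The textbook rule of thumb relating the two scales (1 sone at 40 phons, and every extra 10 phons doubles perceived loudness; only roughly valid above about 40 phons):

```python
def phons_to_sones(L):
    """Loudness in sones from loudness level in phons.
    Rule of thumb: 1 sone at 40 phons, doubling every 10 phons.
    A rough fit, valid only above ~40 phons."""
    return 2 ** ((L - 40) / 10)

print(phons_to_sones(40))  # 1.0
print(phons_to_sones(60))  # 4.0
```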

For this level of precision, the coupling of frequency and amplitude into perceptual “loudness” becomes important: equal steps in sound pressure are no longer equally loud at different source frequencies, which is what equal-loudness contours describe. You can get these from an actively updated ISO standard at great expense, or try to reconstruct them from journal articles. SMRM03 seems to be the accepted modern version, but their report only shows graphs and omits the fitted values for the few equations. Table-based loudness contours are available under the MIT license from the Surrey git repo, in iso226.m. Closed-form approximations for an equal-loudness contour at fixed SPL are given in SuTa04, equation 6.

When the loudness of an \(f\)-Hz comparison tone equals the loudness of a reference tone at 1 kHz with sound pressure \(p_r\), the sound pressure \(p_f\) at frequency \(f\) Hz is given by the following function:

\begin{equation*} p^2_f =\frac{1}{U^2(f)}\left[(p_r^{2\alpha(f)} - p_{rt}^{2\alpha(f)}) + (U(f)p_{ft})^{2\alpha(f)}\right]^{1/\alpha(f)} \end{equation*}

AFAICT they don’t define \(p_{ft}\) or \(p_{rt}\) anywhere, and I don’t have enough free attention to find a simple expression for the frequency-dependent parameters, which I think are still spline-fit. (?)

There is an excellent explanation of the point of all this - with diagrams - by Joe Wolfe.

Onwards and upwards like a Shepard tone

At this point, where we are already combining frequency and loudness, things are getting weird; we are usually measuring people’s reported subjective loudness levels for unnatural signals (pure tones), and with real signals we rapidly start running into temporal masking effects and phasing and so on.

Thankfully, we aren’t in the business of exhaustive cochlear modeling, so we can all go home now. The unhealthily curious might read Moor07 or Hart97 and tell me the good bits, then move on to sensory neurology.

Psychoacoustic models in lossy audio compression

Pure link dump, sorry.


Ball, P. (1999) Pump up the bass. Nature News. DOI.
Ball, P. (2014) Rhythm is heard best in the bass. Nature. DOI.
Bauer, B., & Torick, E. (1966) Researches in loudness measurement. IEEE Transactions on Audio and Electroacoustics, 14(3), 141–151. DOI.
Beranek, L. L.(1949) Acoustic Measurements.
Bingham, C., Godfrey, M., & Tukey, J. W.(1967) Modern techniques of power spectrum estimation. Audio and Electroacoustics, IEEE Transactions on, 15(2), 56–66.
Bridle, J. S., & Brown, M. D.(1974) An experimental automatic word recognition system. JSRU Report, 1003(5).
Cancho, R. F. i, & Solé, R. V.(2003) Least effort and the origins of scaling in human language. Proceedings of the National Academy of Sciences, 100(3), 788–791. DOI.
Carter, G. C.(1987) Coherence and time delay estimation. Proceedings of the IEEE, 75(2), 236–255. DOI.
Cartwright, J. H. E., González, D. L., & Piro, O. (1999) Nonlinear Dynamics of the Perceived Pitch of Complex Sounds. Physical Review Letters, 82(26), 5389–5392. DOI.
Chon, S. H.(2008) Quantifying the consonance of complex tones with missing fundamentals.
Cousineau, M., McDermott, J. H., & Peretz, I. (2012) The basis of musical consonance as revealed by congenital amusia. Proceedings of the National Academy of Sciences, 109(48), 19858–19863. DOI.
Davis, S., & Mermelstein, P. (1980) Comparison of parametric representations for monosyllabic word recognition in continuously spoken sentences. IEEE Transactions on Acoustics, Speech, and Signal Processing, 28(4), 357–366. DOI.
Fastl, H., & Zwicker, E. (2007) Psychoacoustics: facts and models. (3rd. ed.). Berlin ; New York: Springer
Ferguson, S., & Parncutt, R. (2004) Composing In the Flesh: Perceptually-Informed Harmonic Syntax. In Proceedings of Sound and Music Computing.
Gómez, E., & Herrera, P. (2004) Estimating The Tonality Of Polyphonic Audio Files: Cognitive Versus Machine Learning Modelling Strategies. In ISMIR.
Guinan Jr., J. J.(2012) How are inner hair cells stimulated? Evidence for multiple mechanical drives. Hearing Research, 292(1–2), 35–50. DOI.
Hartmann, W. M.(1997) Signals, sound, and sensation. . Woodbury, N.Y: American Institute of Physics
Hennig, H., Fleischmann, R., Fredebohm, A., Hagmayer, Y., Nagler, J., Witt, A., … Geisel, T. (2011) The Nature and Perception of Fluctuations in Human Musical Rhythms. PLoS ONE, 6(10), 26457. DOI.
Herman, I. P.(2007) Physics of the human body. . Berlin ; New York: Springer
Hove, M. J., Marie, C., Bruce, I. C., & Trainor, L. J.(2014) Superior time perception for lower musical pitch explains why bass-ranged instruments lay down musical rhythms. Proceedings of the National Academy of Sciences, 111(28), 10383–10388. DOI.
Huron, D., & Parncutt, R. (1993) An improved model of tonality perception incorporating pitch salience and echoic memory. Psychomusicology: A Journal of Research in Music Cognition, 12(2), 154–171. DOI.
Irizarry, R. A.(2001) Local Harmonic Estimation in Musical Sound Signals. Journal of the American Statistical Association, 96(454), 357–367. DOI.
Lahat, M., Niederjohn, R. J., & Krubsack, D. (1987) A spectral autocorrelation method for measurement of the fundamental frequency of noise-corrupted speech. IEEE Transactions on Acoustics, Speech and Signal Processing, 35(6), 741–750. DOI.
Lerdahl, F. (1996) Calculating Tonal Tension. Music Perception: An Interdisciplinary Journal, 13(3), 319–363. DOI.
Masaoka, K., Ono, K., & Komiyama, S. (2001) A measurement of equal-loudness level contours for tone burst. Acoustical Science and Technology, 22(1), 35–39. DOI.
McKinney, M. F.(2001) Neural correlates of pitch and roughness: toward the neural code for melody and harmony. . Massachusetts Institute of Technology
Mermelstein, P., & Chen, C. (1976) Distance Measures for Speech Recognition–Psychological and Instrumental. In Pattern Recognition and Artificial Intelligence, (Vol. 101, pp. 374–388).
Moore, B. C. J.(2007) Cochlear hearing loss: physiological, psychological and technical issues. (2. ed.). Chichester: Wiley
Moore, B. C. J.(2014) Development and Current Status of the “Cambridge” Loudness Models. Trends in Hearing, 18. DOI.
Moore, B. C. J., & Glasberg, B. R.(1983) Suggested formulae for calculating auditory‐filter bandwidths and excitation patterns. The Journal of the Acoustical Society of America, 74(3), 750–753. DOI.
Narayan, S. S., Temchin, A. N., Recio, A., & Ruggero, M. A.(1998) Frequency Tuning of Basilar Membrane and Auditory Nerve Fibers in the Same Cochleae. Science, 282(5395), 1882–1884. DOI.
Neely, S. T.(1993) A model of cochlear mechanics with outer hair cell motility. Journal of the Acoustical Society of America, 94(1), 137–146. DOI.
Nordmark, J., & Fahlen, L. E.(1988) Beat theories of musical consonance. Speech Transmission Laboratory, Quarterly Progress and Status Report.
Nowotny, M., & Gummer, A. W.(2011) Vibration responses of the organ of Corti and the tectorial membrane to electrical stimulation. Journal of the Acoustical Society of America, 130(6), 3852–3872. DOI.
Olson, E. S.(2001) Intracochlear pressure measurements related to cochlear tuning. The Journal of the Acoustical Society of America, 110(1), 349–367. DOI.
Parncutt, R. (2005) Psychoacoustics and music perception. Musikpsychologie–das Neue Handbuch.
Parncutt, R., & Strasburger, H. (1994) Applying Psychoacoustics in Composition: “Harmonic” Progressions of “Nonharmonic” Sonorities. Perspectives of New Music, 32(2), 88–129. DOI.
Plomp, R., & Levelt, W. J.(1965) Tonal consonance and critical bandwidth. The Journal of the Acoustical Society of America, 38(4), 548–560. DOI.
Rasch, R., & Plomp, R. (1999) The perception of musical tones. The Psychology of Music, 2, 89–112.
Robinson, D. W., & Dadson, R. S.(1956) A re-determination of the equal-loudness relations for pure tones. British Journal of Applied Physics, 7(5), 166. DOI.
Rouat, J., Liu, Y. C., & Morissette, D. (1997) A pitch determination and voiced/unvoiced decision algorithm for noisy speech. Speech Communication, 21(3), 191–207.
Sethares, W. A.(1997) Specifying spectra for musical scales. The Journal of the Acoustical Society of America, 102(4), 2422–2431. DOI.
Slepecky, N. B.(1996) Structure of the Mammalian Cochlea. In P. Dallos, A. N. Popper, & R. R. Fay (Eds.), The Cochlea (pp. 44–129). Springer New York
Smith, E. C., & Lewicki, M. S.(2006) Efficient auditory coding. Nature, 439(7079), 978–982. DOI.
Smith, S. T., & Chadwick, R. S.(2011) Simulation of the Response of the Inner Hair Cell Stereocilia Bundle to an Acoustical Stimulus. PLoS ONE, 6(3), e18161. DOI.
Steele, C., Boutet de Monvel, J., & Puria, S. (2009) A multiscale model of the organ of Corti. Journal of Mechanics of Materials and Structures, 4(4), 755–778. DOI.
Stevens, S. S., & Volkmann, J. (1940) The Relation of Pitch to Frequency: A Revised Scale. The American Journal of Psychology, 53(3), 329–353. DOI.
Stevens, S. S., Volkmann, J., & Newman, E. B.(1937) A Scale for the Measurement of the Psychological Magnitude Pitch. The Journal of the Acoustical Society of America, 8(3), 185–190. DOI.
Suzuki, Y., Mellert, V., Richter, U., Møller, H., Nielsen, L., Hellman, R., … Takeshima, H. (2003) Precise and Full-range Determination of Two-dimensional Equal Loudness Contours.
Suzuki, Y., & Takeshima, H. (2004) Equal-loudness-level contours for pure tones. The Journal of the Acoustical Society of America, 116(2), 918. DOI.
Tarnopolsky, A., Fletcher, N., Hollenberg, L., Lange, B., Smith, J., & Wolfe, J. (2005) Acoustics: The vocal tract and the sound of a didgeridoo. Nature, 436(7047), 39–39. DOI.
Terhardt, E. (1974) Pitch, consonance, and harmony. The Journal of the Acoustical Society of America, 55(5), 1061–1069. DOI.
Thompson, W. F., & Parncutt, R. (1997) Perceptual judgments of triads and dyads: Assessment of a psychoacoustic model. Music Perception, 263–280.
Traunmüller, H. (1990) Analytical expressions for the tonotopic sensory scale. The Journal of the Acoustical Society of America, 88(1), 97–100. DOI.
Tymoczko, D. (2006) The Geometry of Musical Chords. Science, 313(5783), 72–74. DOI.
Zwicker, E. (1961) Subdivision of the Audible Frequency Range into Critical Bands (Frequenzgruppen). The Journal of the Acoustical Society of America, 33(2), 248–248. DOI.
Zwislocki, J. J.(1980) Symposium on cochlear mechanics: Where do we stand after 50 years of research?. The Journal of the Acoustical Society of America, 67(5), 1679–1679. DOI.