The Living Thing / Notebooks : Arpeggiate by numbers

See also machine listening, musical corpora, musical metrics, synchronisation. The discrete, symbolic cousin of the analysis/resynthesis project. Related projects: How I would do generative art with neural networks, and learning gamelan.

A long story which I have no time to explain right now, but see the project code and let me know if you can work it out.

Composition as path dependence: if everything were ordered by equilibrium, then orchestras would tend toward a Pareto-optimal distribution of French horns. How to capture time dependence? How to quantify “motifs”?
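One crude first stab at the motif question, squinting past rhythm and transposition: count repeated n-grams of note symbols and treat the repeats as motif candidates. A sketch (the note names and n are illustrative, not from any particular corpus):

```python
from collections import Counter

def motif_counts(notes, n=3):
    """Count every length-n subsequence of the note stream.
    Subsequences occurring more than once are motif candidates.
    Ignores rhythm and transposition -- a deliberately naive baseline."""
    grams = Counter(tuple(notes[i:i + n]) for i in range(len(notes) - n + 1))
    return {gram: count for gram, count in grams.items() if count > 1}

# toy melody: the opening three-note figure recurs once
print(motif_counts(["C", "E", "G", "C", "E", "G", "A"]))
# → {('C', 'E', 'G'): 2}
```

Quotienting the n-grams by transposition (comparing interval sequences rather than absolute pitches) would be the obvious next refinement.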

Alternatively, can I use a chain graph to do this?

Evan Chow represents for team non-deep-learning with jazzml:

Computer jazz improvisation powered by machine learning, specifically trigram modeling, K-Means clustering, and chord inference with SVMs.
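A minimal sketch of the trigram-modelling part, assuming notes are just symbols: condition on the last two notes and sample the next in proportion to observed continuation counts. (The training melody below is a toy, not jazzml's data.)

```python
import random
from collections import Counter, defaultdict

def train_trigram(notes):
    """Map each two-note context to a count of observed continuations."""
    model = defaultdict(Counter)
    for a, b, c in zip(notes, notes[1:], notes[2:]):
        model[(a, b)][c] += 1
    return model

def sample_next(model, context, rng=random):
    """Draw the next note in proportion to how often it followed context."""
    counts = model[context]
    notes, weights = zip(*counts.items())
    return rng.choices(notes, weights=weights)[0]

melody = ["C", "E", "G", "C", "E", "G", "A", "G"]
model = train_trigram(melody)
# ("C", "E") was always followed by "G" in this melody,
# so sample_next(model, ("C", "E")) can only return "G"
```

jazzml layers clustering and chord inference on top of this; the trigram counts are only the sequential backbone.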

There are also a whole bunch of neural-network-based approaches - see generative art & neural networks.

To understand

Dmitri Tymoczko claims that music data is most naturally regarded as living on an orbifold (a “quotient manifold”), upon which I’m sure you could do some clever regression, though I can’t yet see how. Orbifolds are what you get when you have a bag of regressors instead of a tuple, and are reminiscent of the bag-of-words models of the natural-language information-retrieval people, except there is no Google trying to hustle music synthesis along the way it did text search. Nonetheless, manifold regression is a thing, and regression on manifolds also, so there is probably some relevant work out there.
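To make the quotient idea concrete: a chord heard as a simultaneity does not care about octave or note order, so a natural coordinate is the unordered multiset of pitch classes; quotienting further by transposition gives a still coarser point. A sketch, with hypothetical helper names:

```python
def chord_point(midi_notes):
    """Quotient by octave and by note order: the chord as an
    unordered pitch-class multiset, stored sorted for canonicity."""
    return tuple(sorted(n % 12 for n in midi_notes))

def transposition_class(midi_notes):
    """Quotient further by transposition: the lexicographically
    least representative over all 12 transpositions."""
    pcs = chord_point(midi_notes)
    return min(tuple(sorted((p + t) % 12 for p in pcs)) for t in range(12))

# C major and G major triads are distinct points of the first quotient
# but the same point of the second
print(chord_point([60, 64, 67]))   # → (0, 4, 7)
print(transposition_class([60, 64, 67]) == transposition_class([67, 71, 74]))  # → True
```

The orbifold structure proper (the singularities where voices coincide) needs more care than this, but the quotient-by-symmetry bookkeeping is the essential move.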

Also, it’s not a single scalar (which note) we are predicting here, nor merely a distribution over a single output (the probability of each note). At the very least it’s the co-occurrence of several notes.

More generally, it’s the joint distribution of the evolution of the harmonics and the noise and all the other stuff that our ear can resolve and which can be simultaneously extracted. And we know from psychoacoustics that these will be coupled - the dissonance of two pure tones depends on the frequency and amplitude of each component, for example.
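The two-pure-tone case has a standard quantitative form: the Plomp–Levelt dissonance curve, in the parameterisation Sethares (1997) uses, where roughness depends on both the frequency separation (rescaled with register) and the amplitudes. A sketch:

```python
import math

def pair_dissonance(f1, a1, f2, a2):
    """Sensory dissonance of two pure tones (Plomp-Levelt curve,
    constants as in Sethares 1997). Zero at unison, peaks at a
    fraction of the critical bandwidth, then decays."""
    f_lo, f_hi = min(f1, f2), max(f1, f2)
    s = 0.24 / (0.021 * f_lo + 19)   # rescales the curve with register
    d = s * (f_hi - f_lo)
    return min(a1, a2) * (math.exp(-3.5 * d) - math.exp(-5.75 * d))

# a semitone-ish clash is far rougher than an octave
print(pair_dissonance(440, 1.0, 466, 1.0) > pair_dissonance(440, 1.0, 880, 1.0))  # → True
```

Summing this over all pairs of partials in two complex tones gives the dissonance surfaces Sethares uses to match scales to spectra.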

In any case, these wrinkles aside, if I could predict the conditional distribution of the sequence in a way that produced recognisably musical sound, then simulate from it, I would be happy for a variety of reasons.
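The simulate-from-it step is just ancestral sampling, whatever model supplies the conditionals; here with a hypothetical first-order chain over three note names standing in for a real fitted model:

```python
import random

def simulate(next_note, seed, length, rng=random):
    """Ancestral sampling: grow the sequence one symbol at a time,
    each drawn from the conditional distribution given the history."""
    seq = list(seed)
    for _ in range(length):
        seq.append(next_note(seq, rng))
    return seq

# toy conditional: each note moves to one of the other two notes
chain = {"C": "DE", "D": "CE", "E": "CD"}
step = lambda seq, rng: rng.choice(chain[seq[-1]])
print(simulate(step, ["C"], 8))
```

The whole research question above is what `next_note` should look like for it to produce recognisably musical output.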

So I guess this page is “nonparametric vector regression on an orbifold”. Hmm.


Baddeley, A. J., Møller, J., & Waagepetersen, R. (2000) Non- and semi-parametric estimation of interaction in inhomogeneous point patterns. Statistica Neerlandica, 54(3), 329–350. DOI.
Baddeley, A. J., Van Lieshout, M.-C. N., & Møller, J. (1996) Markov Properties of Cluster Processes. Advances in Applied Probability, 28(2), 346–355. DOI.
Belaire-Franch, J., & Contreras, D. (2002) Recurrence Plots in Nonlinear Time Series Analysis: Free Software. Journal of Statistical Software, 7(9).
Bigo, L., Giavitto, J.-L., & Spicher, A. (2011) Building Topological Spaces for Musical Objects. In Proceedings of the Third International Conference on Mathematics and Computation in Music (pp. 13–28). Berlin, Heidelberg: Springer-Verlag DOI.
Bod, R. (2001) What is the Minimal Set of Fragments That Achieves Maximal Parse Accuracy?. In Proceedings of the 39th Annual Meeting on Association for Computational Linguistics (pp. 66–73). Stroudsburg, PA, USA: Association for Computational Linguistics DOI.
Bod, R. (2002a) A unified model of structural organization in language and music. Journal of Artificial Intelligence Research, 17(2002), 289–308.
Bod, R. (2002b) Memory-based models of melodic analysis: Challenging the Gestalt principles. Journal of New Music Research, 31(1), 27–36. DOI.
Boggs, P. T., & Rogers, J. E. (1990) Orthogonal distance regression. Contemporary Mathematics, 112, 183–194.
Borgs, C., Chayes, J. T., Cohn, H., & Zhao, Y. (2014) An $L^p$ theory of sparse graph convergence I: limits, sparse random graph models, and power law distributions. arXiv:1401.2906 [Math].
Boulanger-Lewandowski, N., Bengio, Y., & Vincent, P. (2012) Modeling Temporal Dependencies in High-Dimensional Sequences: Application to Polyphonic Music Generation and Transcription. In 29th International Conference on Machine Learning.
Brette, R. (2008) Generation of Correlated Spike Trains. Neural Computation. DOI.
Budney, R., & Sethares, W. (2014) Topology of Musical Data. Journal of Mathematics and Music, 8(1), 73–92. DOI.
Collins, M., & Duffy, N. (2002) Convolution Kernels for Natural Language. In T. G. Dietterich, S. Becker, & Z. Ghahramani (Eds.), Advances in Neural Information Processing Systems 14 (pp. 625–632). MIT Press
Di Lillo, A., Motta, G., & Storer, J. (2010) A rotation and scale invariant descriptor for shape recognition. In 2010 17th IEEE International Conference on Image Processing (ICIP) (pp. 257–260). DOI.
Donner, R. V., Zou, Y., Donges, J. F., Marwan, N., & Kurths, J. (2010) Recurrence networks—a novel paradigm for nonlinear time series analysis. New Journal of Physics, 12(3), 033025. DOI.
Eigenfeldt, A., & Pasquier, P. (2013) Considering vertical and horizontal context in corpus-based generative electronic dance music. In Proceedings of the fourth international conference on computational creativity (Vol. 72).
Gashler, M., & Martinez, T. (2011) Tangent space guided intelligent neighbor finding. (pp. 2617–2624). IEEE DOI.
Gashler, M., & Martinez, T. (2012) Robust manifold learning with CycleCut. Connection Science, 24(1), 57–69. DOI.
Gashler, M. S. (2012) Advancing the Effectiveness of Non-linear Dimensionality Reduction Techniques. Brigham Young University, Provo, UT, USA.
Gillick, J., Tang, K., & Keller, R. M. (2010) Machine Learning of Jazz Grammars. Computer Music Journal, 34(3), 56–66. DOI.
Gontis, V., & Kaulakys, B. (2004) Multiplicative point process as a model of trading activity. Physica A: Statistical Mechanics and Its Applications, 343, 505–514. DOI.
Goroshin, R., Bruna, J., Tompson, J., Eigen, D., & LeCun, Y. (2014) Unsupervised Learning of Spatiotemporally Coherent Metrics. arXiv:1412.6056 [Cs].
Graves, A. (2013) Generating Sequences With Recurrent Neural Networks. arXiv:1308.0850 [Cs].
Hadjeres, G., & Pachet, F. (2016) DeepBach: a Steerable Model for Bach chorales generation. arXiv:1612.01010 [Cs].
Hadjeres, G., Sakellariou, J., & Pachet, F. (2016) Style Imitation and Chord Invention in Polyphonic Music with Exponential Families. arXiv:1609.05152 [Cs].
Hall, R. W. (2008) Geometrical Music Theory. Science, 320(5874), 328–329. DOI.
Harris, N., & Drton, M. (2013) PC Algorithm for Nonparanormal Graphical Models. Journal of Machine Learning Research, 14(1), 3365–3383.
Haussler, D. (1999) Convolution kernels on discrete structures. Technical report, UC Santa Cruz.
Hinton, G. E., Osindero, S., & Bao, K. (2005) Learning causally linked markov random fields. In Proceedings of the 10th International Workshop on Artificial Intelligence and Statistics (pp. 128–135). Citeseer
Huron, D. (1994) Interval-Class Content in Equally Tempered Pitch-Class Sets: Common Scales Exhibit Optimum Tonal Consonance. Music Perception: An Interdisciplinary Journal, 11(3), 289–305. DOI.
Jordan, M. I., & Weiss, Y. (2002) Probabilistic inference in graphical models. Handbook of Neural Networks and Brain Theory.
Katz, J., & Pesetsky, D. (2009) The recursive syntax and prosody of tonal music. Ms., Massachusetts Institute of Technology.
Kaulakys, B., Gontis, V., & Alaburda, M. (2005) Point process model of $1/f$ noise vs a sum of Lorentzians. Physical Review E, 71(5), 051105. DOI.
Kontorovich, L. (Aryeh), Cortes, C., & Mohri, M. (2008) Kernel methods for learning languages. Theoretical Computer Science, 405(3), 223–236. DOI.
Kroese, D. P., & Botev, Z. I. (2013) Spatial process generation. arXiv:1308.0399 [Stat].
Krumin, M., & Shoham, S. (2009) Generation of Spike Trains with Controlled Auto- and Cross-Correlation Functions. Neural Computation, 21(6), 1642–1664. DOI.
Lafferty, J., & Wasserman, L. (2008) Rodeo: Sparse, greedy nonparametric regression. The Annals of Statistics, 36(1), 28–63. DOI.
Lee, S.-I., Ganapathi, V., & Koller, D. (2006) Efficient Structure Learning of Markov Networks using $ L_1 $-Regularization. In Advances in neural Information processing systems (pp. 817–824). MIT Press
Liu, H., Chen, X., Wasserman, L., & Lafferty, J. D. (2010) Graph-Valued Regression. In J. D. Lafferty, C. K. I. Williams, J. Shawe-Taylor, R. S. Zemel, & A. Culotta (Eds.), Advances in Neural Information Processing Systems 23 (pp. 1423–1431). Curran Associates, Inc.
Liu, H., Han, F., Yuan, M., Lafferty, J., & Wasserman, L. (2012) The Nonparanormal SKEPTIC. arXiv:1206.6488 [Cs, Stat].
Liu, H., Lafferty, J., & Wasserman, L. (2009) The Nonparanormal: Semiparametric Estimation of High Dimensional Undirected Graphs. Journal of Machine Learning Research, 10, 2295–2328.
Liu, H., Roeder, K., & Wasserman, L. (2010) Stability Approach to Regularization Selection (StARS) for High Dimensional Graphical Models. In J. D. Lafferty, C. K. I. Williams, J. Shawe-Taylor, R. S. Zemel, & A. Culotta (Eds.), Advances in Neural Information Processing Systems 23 (pp. 1432–1440). Curran Associates, Inc.
Lodhi, H., Saunders, C., Shawe-Taylor, J., Cristianini, N., & Watkins, C. (2002) Text Classification Using String Kernels. J. Mach. Learn. Res., 2, 419–444. DOI.
Lowe, D. G. (2004) Distinctive Image Features from Scale-Invariant Keypoints. International Journal of Computer Vision, 60(2), 91–110. DOI.
Meinshausen, N., & Bühlmann, P. (2006) High-dimensional graphs and variable selection with the lasso. The Annals of Statistics, 34(3), 1436–1462. DOI.
Meinshausen, N., & Bühlmann, P. (2010) Stability selection. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 72(4), 417–473. DOI.
Møller, J., & Waagepetersen, R. P. (2007) Modern Statistics for Spatial Point Processes. Scandinavian Journal of Statistics, 34(4), 643–684. DOI.
Montanari, A. (2014) Computational Implications of Reducing Data to Sufficient Statistics. arXiv:1409.3821 [Cs, Math, Stat].
Abou-Moustafa, K., Schuurmans, D., & Ferrie, F. (2013) Learning a Metric Space for Neighbourhood Topology Estimation: Application to Manifold Learning. In Journal of Machine Learning Research (pp. 341–356).
Pollard, D. (2004) Hammersley-Clifford theorem for Markov random fields.
Possolo, A. (1986) Estimation of binary Markov random fields.
Rathbun, S. L. (1996) Estimation of Poisson intensity using partially observed concomitant variables. Biometrics, 226–242.
Ravikumar, P. D., Liu, H., Lafferty, J. D., & Wasserman, L. A. (2007) SpAM: Sparse Additive Models. In NIPS.
Ravikumar, P., Wainwright, M. J., & Lafferty, J. D. (2010) High-dimensional Ising model selection using ℓ1-regularized logistic regression. The Annals of Statistics, 38(3), 1287–1319. DOI.
Reese, K., Yampolskiy, R., & Elmaghraby, A. (2012) A framework for interactive generation of music for games. In 2012 17th International Conference on Computer Games (CGAMES) (pp. 131–137). Washington, DC, USA: IEEE Computer Society DOI.
Ripley, B. D., & Kelly, F. P. (1977) Markov Point Processes. Journal of the London Mathematical Society, s2-15(1), 188–192. DOI.
Sethares, W. A. (1997) Specifying spectra for musical scales. The Journal of the Acoustical Society of America, 102(4), 2422–2431. DOI.
Sethares, W. A., Milne, A. J., Tiedje, S., Prechtl, A., & Plamondon, J. (2009) Spectral Tools for Dynamic Tonality and Audio Morphing. Computer Music Journal, 33(2), 71–84. DOI.
Tillmann, B., Bharucha, J. J., & Bigand, E. (2000) Implicit learning of tonality: a self-organizing approach. Psychological Review, 107(4), 885.
Tymoczko, D. (2006) The Geometry of Musical Chords. Science, 313(5783), 72–74. DOI.
Tymoczko, D. (2009) Generalizing Musical Intervals. Journal of Music Theory, 53(2), 227–254. DOI.
van Lieshout, M.-C. N. M. (1996) On likelihoods for Markov random sets and Boolean models. In Proceedings of the International Symposium.
Veitch, V., & Roy, D. M. (2015) The Class of Random Graphs Arising from Exchangeable Random Measures. arXiv:1512.03099 [Cs, Math, Stat].
Wasserman, L., Kolar, M., & Rinaldo, A. (2013) Estimating Undirected Graphs Under Weak Assumptions. arXiv:1309.6933 [Cs, Math, Stat].
Witten, D. M., Tibshirani, R., & Hastie, T. (2009) A penalized matrix decomposition, with applications to sparse principal components and canonical correlation analysis. Biostatistics, kxp008. DOI.
Witten, D. M., & Tibshirani, R. J. (2009) Extensions of sparse canonical correlation analysis with applications to genomic data. Statistical Applications in Genetics and Molecular Biology, 8(1), 1–27. DOI.
Yedidia, J. S., Freeman, W. T., & Weiss, Y. (2005) Constructing free-energy approximations and generalized belief propagation algorithms. IEEE Transactions on Information Theory, 51(7), 2282–2312. DOI.