The Living Thing / Notebooks :

Arpeggiate by numbers

Automated composition, music theory and tools therefor, mostly Western.

Sometimes you don’t want to generate a chord, measure a chord, or learn a chord; you just want to write a chord.

See also machine listening, musical corpora, musical metrics, synchronisation. This is the discrete, symbolic cousin of the analysis/resynthesis project.

Related projects: How I would do generative art with neural networks and learning gamelan.

To understand

Dmitri Tymoczko claims that music data is most naturally regarded as living on an orbifold (a “quotient manifold”), upon which I’m sure you could do some clever regression, though I can’t yet see how. Orbifolds are, AFAICT, something like what you get when you have a bag of regressors instead of a tuple, and are reminiscent of the string-bag models of the natural-language information-retrieval people, except there is not as much hustle for music as there is for NLP. Nonetheless, manifold regression is a thing, and regression on manifolds also, so there is probably some relevant work (Tymo06, Tymo09).
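To make the quotient construction concrete, here is a minimal sketch (function name mine) of a Tymoczko-style chord space: a chord is a point on the n-torus of pitch classes, further quotiented by permutation of voices, so octave shifts and revoicings land on the same point.

```python
def chord_point(midi_notes):
    """Map MIDI note numbers to a canonical point on the chord
    orbifold T^n / S_n: reduce each pitch mod 12 (octave
    equivalence), then sort (forget the ordering of voices)."""
    return tuple(sorted(n % 12 for n in midi_notes))

# Two voicings of C major land on the same orbifold point, (0, 4, 7):
print(chord_point([60, 64, 67]), chord_point([67, 72, 76]))
```

A regression method that respects the geometry would need distances on this quotient space (e.g. minimal voice-leading distance), which is where it stops being trivial.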

Also, it’s not a single scalar (which note) we are predicting here, nor merely a distribution over a single output (the probability of each note). At the very least it’s the co-occurrence of several notes.
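Concretely, the target is something like a multi-hot vector over pitch classes rather than one categorical label — a sketch, with an encoding of my own choosing:

```python
def multi_hot(chord, n_classes=12):
    """Encode simultaneous pitch classes as a binary vector; the
    prediction target is then a joint (multi-label) distribution
    over note co-occurrences, not one categorical output."""
    v = [0] * n_classes
    for note in chord:
        v[note % n_classes] = 1
    return v

print(multi_hot([60, 64, 67]))  # C major triad: 1s at indices 0, 4, 7
```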

More generally, it’s the joint distribution of the evolution of the harmonics and the noise and all the other stuff that our ear can resolve and which can be simultaneously extracted. And we know from psychoacoustics that these will be coupled: the dissonance of two pure tones depends on the frequency and amplitude of each of those components, for example.
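Sethares’ fit of the Plomp–Levelt roughness curve (cf. Seth97) makes that coupling explicit: the dissonance contributed by a pair of pure tones depends on both frequencies and both amplitudes at once. A sketch, using his published constants:

```python
import math

def pair_dissonance(f1, a1, f2, a2):
    """Sethares' parameterisation of the Plomp-Levelt dissonance
    curve for two pure tones (frequencies in Hz, linear amplitudes).
    Roughness peaks at a small frequency separation that scales
    with the lower frequency, and is gated by the softer amplitude."""
    b1, b2, x_star, s1, s2 = 3.5, 5.75, 0.24, 0.021, 19.0
    f_min, f_max = min(f1, f2), max(f1, f2)
    s = x_star / (s1 * f_min + s2)   # critical-band-ish scaling
    x = s * (f_max - f_min)
    return min(a1, a2) * (math.exp(-b1 * x) - math.exp(-b2 * x))

# A unison is smooth; a narrow interval near 400 Hz is rough,
# while an octave is nearly smooth again:
print(pair_dissonance(400, 1, 400, 1), pair_dissonance(400, 1, 425, 1))
```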

In any case, these wrinkles aside, if I could predict the conditional distribution of the sequence in a way that produced recognisably musical sound, then simulate from it, I would be happy for a variety of reasons.
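As a degenerate baseline of that programme: fit the conditional distribution of a chord-symbol sequence with a first-order Markov chain, then simulate from it. A toy sketch with a made-up corpus:

```python
import random
from collections import Counter, defaultdict

def fit_transitions(sequences):
    """Count first-order transitions between chord symbols."""
    counts = defaultdict(Counter)
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    return counts

def simulate(counts, start, length, rng=random):
    """Sample a sequence from the fitted conditional distribution."""
    out = [start]
    while len(out) < length:
        successors = counts[out[-1]]
        if not successors:  # dead end: symbol never seen mid-sequence
            break
        symbols, weights = zip(*successors.items())
        out.append(rng.choices(symbols, weights=weights)[0])
    return out

corpus = [["I", "IV", "V", "I"], ["I", "vi", "IV", "V", "I"]]
model = fit_transitions(corpus)
print(simulate(model, "I", 8))
```

The interesting versions replace the count table with something that can generalise — which is where the manifold/orbifold structure above would have to come in.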

So I guess this page is “nonparametric vector regression on an orbifold”. Hmm.

Random ideas

Helpful software for the musically vexed

Arpeggiators

Constraint Composition

All of that too mainstream? Try a weird alternative formalism! How about constraint composition? That is, declarative musical composition by defining constraints on the relations that the notes must satisfy. It sounds fun in the abstract, but in practice it doesn’t especially grab me as a creative tool.

The reference here is Strasheela, built on an obscure, unpopular, and apparently discontinued Prolog-like language called Oz (implemented by the Mozart system), because using popular languages is not as grand a gesture as claiming that none of them are quite Turing-complete enough, in the right way, for your special thingy.

That language is a bit of a ghost town, which means headaches if you wish to use it in practice. If you actually wanted to do this, you’d probably use Overtone plus miniKanren (Prolog-for-Lisp), as in the Composing Schemer, or, to be even more mainstream, just use a conventional constraint solver in a popular language. I am fond of Python and ncvx, but there are many choices.
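In the mainstream-language spirit, a brute-force constraint “solver” in plain Python, with constraints of my own invention: enumerate candidate note tuples and keep those satisfying every declared predicate.

```python
from itertools import product

def solve(domain, n_notes, constraints):
    """Brute-force declarative composition: enumerate all note
    tuples over the domain and keep those that satisfy every
    constraint predicate. Fine for toy domains; a real CSP solver
    would propagate and prune instead."""
    return [notes for notes in product(domain, repeat=n_notes)
            if all(c(notes) for c in constraints)]

C_MAJOR = [60, 62, 64, 65, 67, 69, 71, 72]  # one octave of MIDI pitches

ascending = lambda ns: all(a < b for a, b in zip(ns, ns[1:]))
no_big_leaps = lambda ns: all(b - a <= 4 for a, b in zip(ns, ns[1:]))
ends_on_tonic = lambda ns: ns[-1] % 12 == 0

melodies = solve(C_MAJOR, 4, [ascending, no_big_leaps, ends_on_tonic])
print(len(melodies), melodies[0])
```

This is the whole idea of constraint composition in miniature: you state properties, the machine supplies the notes.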

Anyway, Prolog fans can read on: see Anders and Miranda (AnMi10, AnMi11).

Refs

AnMi10
Anders, T., & Miranda, E. R.(2010) Constraint Application with Higher-Order Programming for Modeling Music Theories. Computer Music Journal, 34(2), 25–38. DOI.
AnMi11
Anders, T., & Miranda, E. R.(2011) Constraint programming systems for modeling music theories and composition. ACM Computing Surveys, 43(4), 1–38. DOI.
BaMW00
Baddeley, A. J., Møller, J., & Waagepetersen, R. (2000) Non- and semi-parametric estimation of interaction in inhomogeneous point patterns. Statistica Neerlandica, 54(3), 329–350. DOI.
BaVM96
Baddeley, A. J., Van Lieshout, M.-C. N., & Møller, J. (1996) Markov Properties of Cluster Processes. Advances in Applied Probability, 28(2), 346–355. DOI.
BeCo02
Belaire-Franch, J., & Contreras, D. (2002) Recurrence Plots in Nonlinear Time Series Analysis: Free Software. Journal of Statistical Software, 7(9).
BiGS11
Bigo, L., Giavitto, J.-L., & Spicher, A. (2011) Building Topological Spaces for Musical Objects. In Proceedings of the Third International Conference on Mathematics and Computation in Music (pp. 13–28). Berlin, Heidelberg: Springer-Verlag DOI.
Bod01
Bod, R. (2001) What is the Minimal Set of Fragments That Achieves Maximal Parse Accuracy?. In Proceedings of the 39th Annual Meeting on Association for Computational Linguistics (pp. 66–73). Stroudsburg, PA, USA: Association for Computational Linguistics DOI.
Bod02a
Bod, R. (2002a) A unified model of structural organization in language and music. Journal of Artificial Intelligence Research, 17(2002), 289–308.
Bod02b
Bod, R. (2002b) Memory-based models of melodic analysis: Challenging the Gestalt principles. Journal of New Music Research, 31(1), 27–36. DOI.
BoRo90
Boggs, P. T., & Rogers, J. E.(1990) Orthogonal distance regression. Contemporary Mathematics, 112, 183–194.
BCCZ14
Borgs, C., Chayes, J. T., Cohn, H., & Zhao, Y. (2014) An $L^p$ theory of sparse graph convergence I: limits, sparse random graph models, and power law distributions. arXiv:1401.2906 [Math].
BoBV12
Boulanger-Lewandowski, N., Bengio, Y., & Vincent, P. (2012) Modeling Temporal Dependencies in High-Dimensional Sequences: Application to Polyphonic Music Generation and Transcription. In 29th International Conference on Machine Learning.
Bret08
Brette, R. (2008) Generation of Correlated Spike Trains. Neural Computation. DOI.
BuSe14
Budney, R., & Sethares, W. (2014) Topology of Musical Data. Journal of Mathematics and Music, 8(1), 73–92. DOI.
CoDu02
Collins, M., & Duffy, N. (2002) Convolution Kernels for Natural Language. In T. G. Dietterich, S. Becker, & Z. Ghahramani (Eds.), Advances in Neural Information Processing Systems 14 (pp. 625–632). MIT Press
DiMS10
Di Lillo, A., Motta, G., & Storer, J. (2010) A rotation and scale invariant descriptor for shape recognition. In 2010 17th IEEE International Conference on Image Processing (ICIP) (pp. 257–260). DOI.
DZDM10
Donner, R. V., Zou, Y., Donges, J. F., Marwan, N., & Kurths, J. (2010) Recurrence networks—a novel paradigm for nonlinear time series analysis. New Journal of Physics, 12(3), 033025. DOI.
EiPa13
Eigenfeldt, A., & Pasquier, P. (2013) Considering vertical and horizontal context in corpus-based generative electronic dance music. In Proceedings of the fourth international conference on computational creativity (Vol. 72).
GaMa11
Gashler, M., & Martinez, T. (2011) Tangent space guided intelligent neighbor finding. (pp. 2617–2624). IEEE DOI.
GaMa12
Gashler, M., & Martinez, T. (2012) Robust manifold learning with CycleCut. Connection Science, 24(1), 57–69. DOI.
Gash12
Gashler, M. S.(2012) Advancing the Effectiveness of Non-linear Dimensionality Reduction Techniques. Brigham Young University, Provo, UT, USA.
GiTK10
Gillick, J., Tang, K., & Keller, R. M.(2010) Machine Learning of Jazz Grammars. Computer Music Journal, 34(3), 56–66. DOI.
GoKa04
Gontis, V., & Kaulakys, B. (2004) Multiplicative point process as a model of trading activity. Physica A: Statistical Mechanics and Its Applications, 343, 505–514. DOI.
GBTE14
Goroshin, R., Bruna, J., Tompson, J., Eigen, D., & LeCun, Y. (2014) Unsupervised Learning of Spatiotemporally Coherent Metrics. arXiv:1412.6056 [Cs].
Grav13
Graves, A. (2013) Generating Sequences With Recurrent Neural Networks. arXiv:1308.0850 [Cs].
HaPa16
Hadjeres, G., & Pachet, F. (2016) DeepBach: a Steerable Model for Bach chorales generation. arXiv:1612.01010 [Cs].
HaSP16
Hadjeres, G., Sakellariou, J., & Pachet, F. (2016) Style Imitation and Chord Invention in Polyphonic Music with Exponential Families. arXiv:1609.05152 [Cs].
Hall08
Hall, R. W.(2008) Geometrical Music Theory. Science, 320(5874), 328–329. DOI.
HaDr13
Harris, N., & Drton, M. (2013) PC Algorithm for Nonparanormal Graphical Models. Journal of Machine Learning Research, 14(1), 3365–3383.
Haus99
Haussler, D. (1999) Convolution kernels on discrete structures. Technical report, UC Santa Cruz.
HiOB05
Hinton, G. E., Osindero, S., & Bao, K. (2005) Learning causally linked markov random fields. In Proceedings of the 10th International Workshop on Artificial Intelligence and Statistics (pp. 128–135). Citeseer
Huro94
Huron, D. (1994) Interval-Class Content in Equally Tempered Pitch-Class Sets: Common Scales Exhibit Optimum Tonal Consonance. Music Perception: An Interdisciplinary Journal, 11(3), 289–305. DOI.
JoWe02
Jordan, M. I., & Weiss, Y. (2002) Probabilistic inference in graphical models. Handbook of Neural Networks and Brain Theory.
KaPe09
Katz, J., & Pesetsky, D. (2009) The recursive syntax and prosody of tonal music. Ms., Massachusetts Institute of Technology.
KaGA05
Kaulakys, B., Gontis, V., & Alaburda, M. (2005) Point process model of $1∕f$ noise vs a sum of Lorentzians. Physical Review E, 71(5), 051105. DOI.
KoCM08
Kontorovich, L. (Aryeh), Cortes, C., & Mohri, M. (2008) Kernel methods for learning languages. Theoretical Computer Science, 405(3), 223–236. DOI.
KrBo13
Kroese, D. P., & Botev, Z. I.(2013) Spatial process generation. arXiv:1308.0399 [Stat].
KrSh09
Krumin, M., & Shoham, S. (2009) Generation of Spike Trains with Controlled Auto- and Cross-Correlation Functions. Neural Computation, 21(6), 1642–1664. DOI.
LaWa08
Lafferty, J., & Wasserman, L. (2008) Rodeo: Sparse, greedy nonparametric regression. The Annals of Statistics, 36(1), 28–63. DOI.
LeGK06
Lee, S.-I., Ganapathi, V., & Koller, D. (2006) Efficient Structure Learning of Markov Networks using $ L_1 $-Regularization. In Advances in neural Information processing systems (pp. 817–824). MIT Press
LCWL10
Liu, H., Chen, X., Wasserman, L., & Lafferty, J. D.(2010) Graph-Valued Regression. In J. D. Lafferty, C. K. I. Williams, J. Shawe-Taylor, R. S. Zemel, & A. Culotta (Eds.), Advances in Neural Information Processing Systems 23 (pp. 1423–1431). Curran Associates, Inc.
LHYL12
Liu, H., Han, F., Yuan, M., Lafferty, J., & Wasserman, L. (2012) The Nonparanormal SKEPTIC. arXiv:1206.6488 [Cs, Stat].
LiLW09
Liu, H., Lafferty, J., & Wasserman, L. (2009) The Nonparanormal: Semiparametric Estimation of High Dimensional Undirected Graphs. Journal of Machine Learning Research, 10, 2295–2328.
LiRW10
Liu, H., Roeder, K., & Wasserman, L. (2010) Stability Approach to Regularization Selection (StARS) for High Dimensional Graphical Models. In J. D. Lafferty, C. K. I. Williams, J. Shawe-Taylor, R. S. Zemel, & A. Culotta (Eds.), Advances in Neural Information Processing Systems 23 (pp. 1432–1440). Curran Associates, Inc.
LSSC02
Lodhi, H., Saunders, C., Shawe-Taylor, J., Cristianini, N., & Watkins, C. (2002) Text Classification Using String Kernels. J. Mach. Learn. Res., 2, 419–444. DOI.
Lowe04
Lowe, D. G.(2004) Distinctive Image Features from Scale-Invariant Keypoints. International Journal of Computer Vision, 60(2), 91–110. DOI.
MaQW00
Madjiheurem, S., Qu, L., & Walder, C. (n.d.) Chord2Vec: Learning Musical Chord Embeddings.
MeBü06
Meinshausen, N., & Bühlmann, P. (2006) High-dimensional graphs and variable selection with the lasso. The Annals of Statistics, 34(3), 1436–1462. DOI.
MeBü10
Meinshausen, N., & Bühlmann, P. (2010) Stability selection. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 72(4), 417–473. DOI.
MøWa07
Møller, J., & Waagepetersen, R. P.(2007) Modern Statistics for Spatial Point Processes. Scandinavian Journal of Statistics, 34(4), 643–684. DOI.
Mont14
Montanari, A. (2014) Computational Implications of Reducing Data to Sufficient Statistics. arXiv:1409.3821 [Cs, Math, Stat].
MoSF13
Abou-Moustafa, K., Schuurmans, D., & Ferrie, F. (2013) Learning a Metric Space for Neighbourhood Topology Estimation: Application to Manifold Learning. In Journal of Machine Learning Research (pp. 341–356).
Poll04
Pollard, D. (2004) Hammersley-Clifford theorem for Markov random fields.
Poss86
Possolo, A. (1986) Estimation of binary Markov random fields.
Rath96
Rathbun, S. L.(1996) Estimation of Poisson intensity using partially observed concomitant variables. Biometrics, 226–242.
RLLW07
Ravikumar, P. D., Liu, H., Lafferty, J. D., & Wasserman, L. A.(2007) SpAM: Sparse Additive Models. In NIPS.
RaWL10
Ravikumar, P., Wainwright, M. J., & Lafferty, J. D.(2010) High-dimensional Ising model selection using ℓ1-regularized logistic regression. The Annals of Statistics, 38(3), 1287–1319. DOI.
ReYE12
Reese, K., Yampolskiy, R., & Elmaghraby, A. (2012) A framework for interactive generation of music for games. In 2012 17th International Conference on Computer Games (CGAMES) (pp. 131–137). Washington, DC, USA: IEEE Computer Society DOI.
RiKe77
Ripley, B. D., & Kelly, F. P.(1977) Markov Point Processes. Journal of the London Mathematical Society, s2-15(1), 188–192. DOI.
Seth97
Sethares, W. A.(1997) Specifying spectra for musical scales. The Journal of the Acoustical Society of America, 102(4), 2422–2431. DOI.
SMTP09
Sethares, W. A., Milne, A. J., Tiedje, S., Prechtl, A., & Plamondon, J. (2009) Spectral Tools for Dynamic Tonality and Audio Morphing. Computer Music Journal, 33(2), 71–84. DOI.
TiBB00
Tillmann, B., Bharucha, J. J., & Bigand, E. (2000) Implicit learning of tonality: a self-organizing approach. Psychological Review, 107(4), 885.
Tymo06
Tymoczko, D. (2006) The Geometry of Musical Chords. Science, 313(5783), 72–74. DOI.
Tymo09
Tymoczko, D. (2009) Generalizing Musical Intervals. Journal of Music Theory, 53(2), 227–254. DOI.
Lies96
van Lieshout, M.-C. N. M.(1996) On likelihoods for Markov random sets and Boolean models. In Proceedings of the International Symposium.
VeRo15
Veitch, V., & Roy, D. M.(2015) The Class of Random Graphs Arising from Exchangeable Random Measures. arXiv:1512.03099 [Cs, Math, Stat].
WaKR13
Wasserman, L., Kolar, M., & Rinaldo, A. (2013) Estimating Undirected Graphs Under Weak Assumptions. arXiv:1309.6933 [Cs, Math, Stat].
WiTH09
Witten, D. M., Tibshirani, R., & Hastie, T. (2009) A penalized matrix decomposition, with applications to sparse principal components and canonical correlation analysis. Biostatistics, kxp008. DOI.
WiTi09
Witten, D. M., & Tibshirani, R. J.(2009) Extensions of sparse canonical correlation analysis with applications to genomic data. Statistical Applications in Genetics and Molecular Biology, 8(1), 1–27. DOI.
YeFW05
Yedidia, J. S., Freeman, W. T., & Weiss, Y. (2005) Constructing free-energy approximations and generalized belief propagation algorithms. IEEE Transactions on Information Theory, 51(7), 2282–2312. DOI.