Here’s how I would do art with machine learning if I had to

I’ve a weakness for ideas that give me plausible deniability for making generative art while doing my maths homework.

Quasimondo: so do you

This page is more chaotic than the already-chaotic median, sorry. Good luck making sense of it.

See also analysis/resynthesis.

See gesture recognition. Oh, and also Google’s AMI channel, and ml4artists, which has some sweet machine-learning-for-artists topic guides.

Many neural networks are generative in the sense that even if you train ’em to classify things, they can also predict new members of the class: run the model forwards and it recognizes melodies; run it “backwards” and it composes melodies. Or rather, you maybe trained them to generate examples in the course of training them to detect examples.

There are many definitional and practical wrinkles here, and this quality is not unique to artificial neural networks, but it is a great convenience, and the gods of machine learning have blessed us with much infrastructure to exploit this feature, because it is very close to actual profitable algorithms. Upshot: There is now a lot of computation and grad student labour directed at producing neural networks which as a byproduct can produce faces, chairs, film dialogue, symphonies and so on.
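To make that “run it backwards” idea concrete, here is a minimal sketch of activation maximization in PyTorch: gradient ascent on the input of a trained classifier until it scores highly for a chosen class. The `model`, shapes and hyperparameters are hypothetical placeholders for illustration, not anyone’s actual method.

```python
# Sketch: "running a classifier backwards" by gradient ascent on the input.
# `model` is a hypothetical trained classifier; shapes are illustrative.
import torch

def dream_up_an_example(model, target_class, steps=200, lr=0.1,
                        input_shape=(1, 1, 28, 28)):
    """Find an input that the classifier scores highly for `target_class`."""
    x = torch.randn(input_shape, requires_grad=True)  # start from noise
    optimizer = torch.optim.Adam([x], lr=lr)
    model.eval()
    for _ in range(steps):
        optimizer.zero_grad()
        logits = model(x)
        loss = -logits[0, target_class]  # ascend the class score
        loss.backward()
        optimizer.step()
    return x.detach()
```

In practice the raw result tends to look like adversarial noise; feature-visualization work usually adds regularizers (jitter, blurring, and the like) to get anything presentable.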

There are NIPS streams about this now.

Misc

Some as-yet-unfiled neural-artwork links I should think about.

Variational inference (Hint07, WiBi05, Giro01, MnGr14) looks exciting here, particularly in an autoencoder setting (KiWe13).
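For instance, a minimal variational autoencoder in the spirit of KiWe13 fits in a few lines of PyTorch. The layer sizes, the Bernoulli likelihood, and the flattened 784-dimensional inputs are all assumptions for illustration.

```python
# Sketch: a minimal VAE after KiWe13. Assumes inputs flattened to 784 dims
# in [0, 1]; layer sizes are arbitrary.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, x_dim=784, h_dim=400, z_dim=20):
        super().__init__()
        self.enc = nn.Linear(x_dim, h_dim)
        self.mu = nn.Linear(h_dim, z_dim)
        self.logvar = nn.Linear(h_dim, z_dim)
        self.dec1 = nn.Linear(z_dim, h_dim)
        self.dec2 = nn.Linear(h_dim, x_dim)

    def forward(self, x):
        h = F.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: sample z differentiably.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        x_hat = torch.sigmoid(self.dec2(F.relu(self.dec1(z))))
        return x_hat, mu, logvar

def elbo_loss(x, x_hat, mu, logvar):
    recon = F.binary_cross_entropy(x_hat, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl
```

Training is just minimizing `elbo_loss` over minibatches; generating new examples is just decoding z drawn from a standard normal.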

Text synthesis

Visual synthesis

See those classic images from Google’s tripped-out image recognition systems, or Gatys, Ecker and Bethge’s deep art: neural networks do a passable undergraduate Monet.

Here’s Frank Liu’s implementation of style transfer in pycaffe.
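The core trick is small enough to sketch. Something like the following (a hedged illustration, not Liu’s actual code) computes the Gram-matrix style loss from Gatys et al.:

```python
# Sketch: the Gram-matrix style loss at the heart of Gatys-style transfer.
# `features` would come from a layer of a pretrained CNN; shapes illustrative.
import torch

def gram_matrix(features):
    """Channel-by-channel correlations of a (batch, channels, h, w) map."""
    b, c, h, w = features.shape
    f = features.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def style_loss(generated_features, style_features):
    return torch.mean(
        (gram_matrix(generated_features) - gram_matrix(style_features)) ** 2
    )
```

In practice you evaluate this at several layers of a pretrained network (classically VGG), add a content loss on higher-layer activations, and optimize the pixels of the generated image directly.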

Alex Graves, Generating Sequences With Recurrent Neural Networks, generates handwriting. Relatedly, sketch-rnn is reaaaally cute.
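The generative recipe in both is roughly the same: sample one step from the model’s predictive distribution, feed it back in, repeat. A hedged sketch, with `embed`, `rnn` and `head` standing in for hypothetical trained modules:

```python
# Sketch: autoregressive sampling from a trained sequence model,
# Graves-style. All modules here are assumed pre-trained placeholders.
import torch
import torch.nn.functional as F

def sample_sequence(embed, rnn, head, seed_token, length=100, temperature=1.0):
    """Each sampled token becomes the next input. Lower temperature
    gives safer, blander sequences; higher gives wilder ones."""
    tokens = [seed_token]
    hidden = None
    x = torch.tensor([[seed_token]])
    for _ in range(length):
        out, hidden = rnn(embed(x), hidden)   # one recurrence step
        logits = head(out[:, -1]) / temperature
        x = torch.multinomial(F.softmax(logits, dim=-1), 1)
        tokens.append(int(x))
    return tokens
```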

Deep dreaming approaches are entertaining (NSFW). Here’s a more pedestrian and slightly more informative version of that.

Distill.pub has some lovely visual explanations of visual and other neural networks.

Music

Symbolic composition via scores/MIDI/etc

Seems like it should be easy, until you think about it.

Related: Arpeggiate by numbers, which discusses music theory.

Google has weighed in, like a gorilla on the metallophone, to do MIDI composition with TensorFlow as part of their Magenta project. Their NIPS 2016 demo won the best demo prize.

Daniel Johnson has a convolutional and recurrent architecture for taking into account multiple types of dependency in music, which he calls a biaxial neural network: Composing Music With Recurrent Neural Networks.

Ji-Sung Kim’s deepjazz project is minimal, but does interesting jazz improvisations. Part of the genius here is choosing totally chaotic music to try to ape, so you can ape it chaotically. (Code)

Boulanger-Lewandowski’s code and data for BoBV12’s recurrent neural network composition, in Python/Theano. Christian Walder leads a project which shares some roots with that (Wald16a, Wald16b). Bob Sturm’s FolkRNN does a related thing, but ingeniously redefines the problem by focussing on folk tune notation.

A tutorial on generating music using Restricted Boltzmann Machines for the conditional random-field density, and an RNN for the time dependence, after BoBV12.

Bob Sturm did a good one.
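The moving parts of that RNN-RBM are easy to mistranscribe, so here is a sampling sketch in plain numpy. All weights are assumed already trained (BoBV12 train theirs with contrastive divergence), and the parameter names are my own invention rather than theirs.

```python
# Sketch: sampling from an RNN-RBM a la BoBV12. At each step the RNN
# state sets the RBM's biases; block Gibbs sampling draws the notes.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample_rnn_rbm(params, T=64, gibbs_steps=25, rng=np.random.default_rng()):
    W, bv, bh = params["W"], params["bv"], params["bh"]  # RBM weights
    Wuv, Wuh, Wvu, Wuu, bu = (params[k] for k in ("Wuv", "Wuh", "Wvu", "Wuu", "bu"))
    u = np.zeros_like(bu)                                # RNN state
    notes = []
    for _ in range(T):
        # This step's RBM biases, conditioned on the RNN state.
        bv_t, bh_t = bv + Wuv @ u, bh + Wuh @ u
        v = rng.random(bv.shape) < sigmoid(bv_t)         # initialize visibles
        for _ in range(gibbs_steps):                     # block Gibbs sampling
            h = rng.random(bh.shape) < sigmoid(W.T @ v + bh_t)
            v = rng.random(bv.shape) < sigmoid(W @ h + bv_t)
        notes.append(v.astype(int))                      # piano-roll slice
        u = np.tanh(bu + Wvu @ v + Wuu @ u)              # advance the RNN
    return np.stack(notes)
```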

TBD: Google’s latest demo in this area was popular. DeepBach (paper HaPa16, code) seems to be doing a related thing. A similar set of authors (HaSP16) has some other related work:

Modeling polyphonic music is a particularly challenging task because of the intricate interplay between melody and harmony. A good model should satisfy three requirements: statistical accuracy (capturing faithfully the statistics of correlations at various ranges, horizontally and vertically), flexibility (coping with arbitrary user constraints), and generalization capacity (inventing new material, while staying in the style of the training corpus). Models proposed so far fail on at least one of these requirements. We propose a statistical model of polyphonic music, based on the maximum entropy principle. This model is able to learn and reproduce pairwise statistics between neighboring note events in a given corpus. The model is also able to invent new chords and to harmonize unknown melodies. We evaluate the invention capacity of the model by assessing the amount of cited, re-discovered, and invented chords on a corpus of Bach chorales. We discuss how the model enables the user to specify and enforce user-defined constraints, which makes it useful for style-based, interactive music generation.
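As a toy illustration of what such a pairwise maximum-entropy model looks like operationally (my own sketch, not the authors’ code): fit fields `h` and couplings `J` to the corpus statistics, then Gibbs-sample new sequences from the induced distribution.

```python
# Sketch: Gibbs sampling a note sequence from a pairwise maximum-entropy
# model. h[a] scores note a on its own; J[d, a, b] couples notes d steps
# apart (J[0] is unused). Higher score = more probable.
import numpy as np

def gibbs_sample(h, J, length=32, sweeps=100, rng=np.random.default_rng()):
    n_states = len(h)
    r = J.shape[0]  # interaction range
    seq = rng.integers(n_states, size=length)
    for _ in range(sweeps):
        for t in range(length):
            # Log-potential of each candidate note at position t,
            # given its neighbours within range r.
            score = h.copy()
            for d in range(1, r):
                if t - d >= 0:
                    score += J[d, seq[t - d], :]
                if t + d < length:
                    score += J[d, :, seq[t + d]]
            p = np.exp(score - score.max())
            seq[t] = rng.choice(n_states, p=p / p.sum())
    return seq
```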

Audio synthesis

See analysis/resynthesis, voice face.

Refs