The Living Thing / Notebooks

Here’s how I would do art with machine learning if I had to

I’ve a weakness for ideas that give me plausible deniability for making generative art while doing my maths homework.

Quasimondo: so do you

This page is more chaotic than the already-chaotic median, sorry. Good luck making sense of it.

See also analysis/resynthesis.

See gesture recognition. Oh and also google’s AMI channel, and ml4artists, which has some sweet machine learning for artists topic guides.

Many neural networks are generative in the sense that even if you train ‘em to classify things, they can also predict new members of the class. e.g. run the model forwards, it recognizes melodies; run it “backwards”, it composes melodies. Or rather, you maybe trained them to generate examples in the course of training them to detect examples.

There are many definitional and practical wrinkles here, and this quality is not unique to artificial neural networks, but it is a great convenience, and the gods of machine learning have blessed us with much infrastructure to exploit this feature, because it is very close to actual profitable algorithms. Upshot: There is now a lot of computation and grad student labour directed at producing neural networks which as a byproduct can produce faces, chairs, film dialogue, symphonies and so on.

There are NIPS streams about this now.


Some as-yet-unfiled neural-artwork links I should think about.

Variational inference (Hint07, WiBi05, Giro01, MnGr14) looks exciting here, particularly in an autoencoder setting. (KiWe13)
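The autoencoder angle (KiWe13) boils down to two tricks: the reparameterization trick, which keeps latent samples differentiable in the encoder's outputs, and a closed-form KL penalty toward a standard normal prior. A minimal numpy sketch, with a made-up linear "encoder" and toy dimensions standing in for a real network:

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W_mu, W_logvar):
    """Toy linear 'encoder': map data to the parameters of q(z|x)."""
    return x @ W_mu, x @ W_logvar

def reparameterize(mu, logvar, rng):
    """Sample z = mu + sigma * eps, so the sample is differentiable in (mu, logvar)."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def kl_to_standard_normal(mu, logvar):
    """KL(q(z|x) || N(0, I)) in closed form, one value per data point."""
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar, axis=1)

# toy dimensions: 4-d data, 2-d latent, batch of 8
x = rng.standard_normal((8, 4))
W_mu = rng.standard_normal((4, 2)) * 0.1
W_logvar = rng.standard_normal((4, 2)) * 0.1

mu, logvar = encode(x, W_mu, W_logvar)
z = reparameterize(mu, logvar, rng)
kl = kl_to_standard_normal(mu, logvar)
```

The generative payoff is the decoder half (omitted here): once trained, you throw away the encoder, sample z from the prior, and decode new faces/chairs/melodies.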

Text synthesis

Visual synthesis

See those classic images from google’s tripped-out image recognition systems, or Gatys, Ecker and Bethge’s deep art: neural networks do a passable undergraduate Monet.

Here’s Frank Liu’s implementation of style transfer in pycaffe.
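The core of the Gatys et al. trick is matching Gram matrices of convolutional feature maps, which summarize texture while discarding spatial layout. A rough numpy sketch of just that loss, with random arrays standing in for real CNN activations:

```python
import numpy as np

def gram_matrix(features):
    """Channel-by-channel correlations of a (channels, height, width) feature map.
    This is the 'style' summary: which filters co-activate, ignoring where."""
    c, h, w = features.shape
    flat = features.reshape(c, h * w)
    return flat @ flat.T / (h * w)

def style_loss(features_a, features_b):
    """Mean squared difference between the two Gram matrices."""
    ga, gb = gram_matrix(features_a), gram_matrix(features_b)
    return float(np.mean((ga - gb) ** 2))

rng = np.random.default_rng(1)
style = rng.standard_normal((3, 8, 8))    # stand-in for a conv layer's activations
synth = rng.standard_normal((3, 8, 8))
loss = style_loss(style, synth)
```

In the full method this loss (summed over several layers, plus a content loss on raw activations) is minimized by gradient descent on the pixels of the synthesized image, not on the network weights.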

Alex Graves, Generating Sequences With Recurrent Neural Networks, generates handwriting. Relatedly, sketch-rnn is reaaaally cute.

Deep dreaming approaches are entertaining (NSFW). Here’s a more pedestrian and slightly more informative version of that, with some lovely visual explanations of visual and other neural networks.
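Stripped of the psychedelia, deep dreaming is gradient ascent on the input: nudge the image to make some unit fire harder. A toy sketch of that loop, with a random linear "classifier layer" standing in for a trained CNN (for which the gradient of a unit's score with respect to the input is just that unit's weight vector):

```python
import numpy as np

rng = np.random.default_rng(2)
W = rng.standard_normal((10, 16))   # stand-in for a trained layer: 10 units, 16-d input

def unit_score(x, w):
    """Activation of one unit for input x."""
    return float(w @ x)

def dream(x, w, steps=100, lr=0.1):
    """Gradient ascent on the *input*, weights frozen.
    For this linear unit, d(score)/dx = w, so each step just adds lr * w."""
    for _ in range(steps):
        x = x + lr * w
    return x

x0 = rng.standard_normal(16)        # start from 'noise'
target = W[3]                       # dream of unit 3
x1 = dream(x0, target)
```

A real implementation backpropagates through the whole network to get the input gradient, and adds jitter and multi-scale tricks so the result looks like dog-slugs rather than adversarial static.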


Symbolic composition via scores/MIDI/etc

Seems like it should be easy, until you think about it.

Related: Arpeggiate by numbers, which discusses music theory.

Google has weighed in like a gorilla on the metallophone, doing MIDI composition with TensorFlow as part of their Magenta project. Their NIPS 2016 demo won the best demo prize.

Daniel Johnson has a convolutional and recurrent architecture for taking into account multiple types of dependency in music, which he calls a biaxial neural network. See also Zhe Li, Composing Music With Recurrent Neural Networks.

Ji-Sung Kim’s deepjazz project is minimal, but does interesting jazz improvisations. Part of the genius here is choosing totally chaotic music to try to ape, so you can ape it chaotically. (Code)

Boulanger-Lewandowski’s code and data for BoBV12’s recurrent neural network composition, using python/Theano. Christian Walder leads a project which shares some roots with that (Wald16a, Wald16b). Bob Sturm’s FolkRNN does a related thing, but ingeniously redefines the problem by focussing on folk tune notation.

A tutorial on generating music, using Restricted Boltzmann Machines for the conditional random field density and an RNN for the time dependence, after BoBV12.
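The RNN half of that recipe is the easy bit to sketch: a hidden state summarizes the melody so far, and each next note is drawn from a distribution conditioned on it. A toy numpy version with untrained random weights and a plain softmax standing in for the RBM:

```python
import numpy as np

rng = np.random.default_rng(3)
n_notes, n_hidden = 12, 16          # 12 pitch classes, toy hidden size
Wxh = rng.standard_normal((n_notes, n_hidden)) * 0.1
Whh = rng.standard_normal((n_hidden, n_hidden)) * 0.1
Who = rng.standard_normal((n_hidden, n_notes)) * 0.1

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def sample_melody(length, rng):
    """Sample a note sequence one step at a time: update the hidden state
    from the previous note, then draw the next note from p(note | history)."""
    h = np.zeros(n_hidden)
    note = int(rng.integers(n_notes))
    melody = [note]
    for _ in range(length - 1):
        x = np.eye(n_notes)[note]           # one-hot previous note
        h = np.tanh(x @ Wxh + h @ Whh)
        p = softmax(h @ Who)
        note = int(rng.choice(n_notes, p=p))
        melody.append(note)
    return melody

melody = sample_melody(16, rng)
```

With these random weights the output is noodling; training (and, in BoBV12, replacing the softmax with a conditional RBM so that simultaneous notes in a chord are modelled jointly) is what makes it music-ish.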

Bob Sturm did a nice one.

TBD: google’s latest demo in this area was popular.

Audio synthesis

See also analysis/resynthesis, voice, face.

Matt Vitelli on music generation from MP3s (source).

Soundtracking audio from video.

Alex Graves on RNN predictive synthesis.

Parag Mittal on RNN style transfer.

Andy Sarrof, Musical Audio Synthesis Using Autoencoding Neural Nets. (code)

Neural style transfer for audio is crying out to be done, but I’ve only seen more traditional techniques. (UPDATE: It’s happening these days, but google it for yourself as I’m busy.)

Pixelrnn turns out to be good at music. Dadabots have successfully weaponised samplernn, and it’s cute.