
Differentiable learning of automata

Usefulness: 🔧
Novelty: 💡
Uncertainty: 🤪 🤪 🤪
Incompleteness: 🚧 🚧 🚧

Learning stack machines, random access machines, nested hierarchical parsing machines, Turing machines, and whatever other automata-with-memory you wish, from data. In other words, teaching computers to program themselves via a deep learning formalism. Obviously a hypothetical superhuman Artificial General Intelligence would be good at this kind of problem; it's not the absolute hippest research area right now, though, on account of being hard in general, just as earlier attempts led us to expect. Some progress has been made. My sense is that most of the hyped research that looks like differentiable computer learning actually sits in the slightly better-contained area of reinforcement learning, where more progress can be made. The core trick that makes these models trainable at all, differentiable (soft) memory addressing, is sketched just below.
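Here is a minimal sketch, in plain NumPy, of that trick as it appears in neural Turing machines and their relatives: rather than indexing one memory slot, the controller emits a key and reads a softmax-weighted blend of every slot, so the read is differentiable with respect to the key, the memory contents, and the sharpening parameter, and can be trained by gradient descent. The function names (`content_addressing`, `soft_read`) and the toy shapes are my own illustrations, not anyone's published API; the cosine-similarity-plus-softmax form follows Graves et al. (2014), but a real NTM or DNC adds write heads, location-based shifts, and a learned controller on top.

```python
import numpy as np

def content_addressing(memory, key, beta=1.0, eps=1e-8):
    """Soft attention over memory rows by cosine similarity to `key`.

    memory: (N, W) array of N slots, each a length-W word.
    key:    (W,) query vector emitted by the controller.
    beta:   sharpening strength; large beta approaches a hard lookup.
    Returns an (N,) weighting that sums to one.
    """
    sims = memory @ key / (
        np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + eps)
    scores = beta * sims
    scores = scores - scores.max()    # stabilise the softmax
    weights = np.exp(scores)
    return weights / weights.sum()

def soft_read(memory, weights):
    """Differentiable read: a convex combination of memory rows."""
    return weights @ memory

# Toy usage: query a 4-slot memory with a noisy copy of slot 2.
rng = np.random.default_rng(0)
memory = np.eye(4, 6)                 # 4 slots, word size 6
key = memory[2] + 0.05 * rng.standard_normal(6)
weights = content_addressing(memory, key, beta=10.0)
print(weights.round(3))               # mass concentrates on slot 2
print(soft_read(memory, weights).round(3))
```

Because every step is a smooth function of the controller's outputs, gradients flow through the memory access, which is what distinguishes these architectures from classical, discretely-addressed automata.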

Related: grammatical inference.

The border between these and recurrent neural networks is porous.

The Google-branded variant: Differentiable Neural Computers (Graves et al. 2016).

Christopher Olah’s characteristically pedagogic intro.

Adrian Colyer’s introduction to neural Turing machines.

Andrej Karpathy’s memory machine list has some good starting points.

Refs

Bottou, Leon. 2011. “From Machine Learning to Machine Reasoning,” February. http://arxiv.org/abs/1102.1808.

Ellis, Kevin, Armando Solar-Lezama, and Josh Tenenbaum. 2016. “Sampling for Bayesian Program Learning.” In Advances in Neural Information Processing Systems 29, edited by D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, 1289–97. Curran Associates, Inc. http://papers.nips.cc/paper/6082-sampling-for-bayesian-program-learning.pdf.

Graves, Alex, Greg Wayne, and Ivo Danihelka. 2014. “Neural Turing Machines,” October. http://arxiv.org/abs/1410.5401.

Graves, Alex, Greg Wayne, Malcolm Reynolds, Tim Harley, Ivo Danihelka, Agnieszka Grabska-Barwińska, Sergio Gómez Colmenarejo, et al. 2016. “Hybrid Computing Using a Neural Network with Dynamic External Memory.” Nature advance online publication (October). https://doi.org/10.1038/nature20101.

Grefenstette, Edward, Karl Moritz Hermann, Mustafa Suleyman, and Phil Blunsom. 2015. “Learning to Transduce with Unbounded Memory,” June. http://arxiv.org/abs/1506.02516.

Gulcehre, Caglar, Sarath Chandar, Kyunghyun Cho, and Yoshua Bengio. 2016. “Dynamic Neural Turing Machine with Soft and Hard Addressing Schemes,” June. http://arxiv.org/abs/1607.00036.

Kaiser, Łukasz, and Ilya Sutskever. 2015. “Neural GPUs Learn Algorithms,” November. http://arxiv.org/abs/1511.08228.

Looks, Moshe, Marcello Herreshoff, DeLesley Hutchins, and Peter Norvig. 2017. “Deep Learning with Dynamic Computation Graphs.” In Proceedings of ICLR. http://arxiv.org/abs/1702.02181.

Perez, Julien, and Fei Liu. 2016. “Gated End-to-End Memory Networks,” October. http://arxiv.org/abs/1610.04211.

Putzky, Patrick, and Max Welling. 2017. “Recurrent Inference Machines for Solving Inverse Problems,” June. http://arxiv.org/abs/1706.04008.

Rae, Jack, Jonathan J Hunt, Ivo Danihelka, Timothy Harley, Andrew W Senior, Gregory Wayne, Alex Graves, and Tim Lillicrap. 2016. “Scaling Memory-Augmented Neural Networks with Sparse Reads and Writes.” In Advances in Neural Information Processing Systems 29, edited by D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, 3621–9. Curran Associates, Inc. http://papers.nips.cc/paper/6298-scaling-memory-augmented-neural-networks-with-sparse-reads-and-writes.pdf.

Wei, Qi, Kai Fan, Lawrence Carin, and Katherine A. Heller. 2017. “An Inner-Loop Free Solution to Inverse Problems Using Deep Neural Networks,” September. http://arxiv.org/abs/1709.01841.

Weston, Jason, Sumit Chopra, and Antoine Bordes. 2014. “Memory Networks,” October. http://arxiv.org/abs/1410.3916.