Neurons

Neural networks made of real neurons, in functioning brains

November 3, 2014 — February 14, 2022

compsci
life
machine learning
mind
networks
neuron

How do brains work?


I mean, how do brains work at a level slightly higher than the synapse, but much lower than, e.g., psychology? “How is thought done?” etc.

Notes pertaining to large, artificial networks are filed under artificial neural networks. The messy, biological end of the stick is here. Since brains seem to be the seat of the flashiest and most important bit of the computing taking place in our bodies, we understandably want to know how they work.

Real brains differ from the “neuron-inspired” computation of the simulacrum in many ways, beyond the usual differences between model and reality. The resemblance between artificial “neural networks” and real neurons is intentionally loose, a matter of convenience.

For one example, most simulated neural networks use continuous activations updated in discrete time, unlike spiking biological neurons, which are driven by discrete events in continuous time.
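To make the contrast concrete, here is a minimal sketch (plain Python/NumPy; every parameter value is illustrative rather than physiological) of the two conventions: a discrete-time artificial unit next to a leaky integrate-and-fire neuron integrated on a fine time grid, whose output is a set of event times rather than a vector of activations.

```python
import numpy as np

# Discrete time, continuous activation: the standard artificial unit.
def artificial_unit(x, w, b):
    return np.tanh(x @ w + b)  # one synchronous update per "time step"

# Continuous time, discrete events: a leaky integrate-and-fire neuron,
# Euler-integrated on a fine grid. Parameters are illustrative only.
def lif_spike_times(input_current, dt=1e-4, tau=0.02, v_thresh=1.0, v_reset=0.0):
    v, spikes = 0.0, []
    for step, i_t in enumerate(input_current):
        v += dt * (-v / tau + i_t)   # membrane potential leaks and integrates
        if v >= v_thresh:            # a threshold crossing is a discrete event
            spikes.append(step * dt)
            v = v_reset
    return spikes

rng = np.random.default_rng(0)
print(lif_spike_times(60.0 + 5.0 * rng.standard_normal(5000)))  # spike times in seconds
```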

Real brains also support heterogeneous types of neuron, have messier layer organisation, use far less power, and lack well-defined backpropagation (or at least do not do it the same way), among many other differences that I, as a non-specialist, do not know about.

Some starting points for learning more:

1 Fun data

The Allen Institute Brain Observatory, which publishes large, standardized surveys of neural activity.

2 How computationally complex is a neuron?

Empirically quantifying computation is hard, but people try to do it all the time for brains. Classic approaches estimate the structure in neural spike trains (Crumiller et al. 2011; Haslinger, Klinkner, and Shalizi 2010; Nemenman, Bialek, and de Ruyter van Steveninck 2004), often via empirical entropy estimates.
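For flavour, a naive version of such an entropy estimate: bin a spike train into binary symbols, count fixed-length “words”, and take the plug-in entropy. (A sketch only; the cited papers exist precisely because this estimator is badly biased for realistic amounts of data.)

```python
import numpy as np
from collections import Counter

def plugin_entropy_rate(spike_times, bin_width=0.01, word_length=8):
    """Naive plug-in estimate of spike-train entropy rate, in bits per bin.

    Bins spikes into 0/1 symbols, counts `word_length`-bin words, and returns
    the empirical word entropy divided by the word length. Biased downward
    for short recordings, which is what the bias-corrected estimators in the
    cited papers address.
    """
    n_bins = int(np.ceil(max(spike_times) / bin_width)) + 1
    bins = np.zeros(n_bins, dtype=int)
    for t in spike_times:
        bins[int(t / bin_width)] = 1
    words = [tuple(bins[i:i + word_length]) for i in range(n_bins - word_length + 1)]
    counts = np.array(list(Counter(words).values()), dtype=float)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum() / word_length
```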

If we are prepared to accept “size of a neural network needed to approximate X” as an estimate of the complexity of X, then there are some interesting results: see Allison Whitten, How Computationally Complex Is a Single Neuron?, on Beniaguev, Segev, and London (2021). OTOH, finding the smallest neural network that can approximate something is itself computationally hard, and a claimed minimum is not in general even easy to verify.
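A toy version of that complexity measure, to show both the idea and the catch: brute-force the hidden width of a one-layer network until the target function is fit to tolerance. (A sketch assuming scikit-learn is available; `smallest_width_fitting` is a hypothetical helper, and because each width is fit by non-convex optimisation, the answer only upper-bounds the true minimum.)

```python
import numpy as np
from sklearn.neural_network import MLPRegressor  # assumes scikit-learn is installed

def smallest_width_fitting(target_fn, tol=1e-2, max_width=64, seed=0):
    """Smallest hidden width whose trained one-layer MLP fits target_fn within tol.

    Brute force over widths. Each fit finds one local optimum, so failure at a
    given width does not prove that width is insufficient: this returns an
    upper bound on the true minimal size, not a certificate.
    """
    rng = np.random.default_rng(seed)
    X = rng.uniform(-1.0, 1.0, size=(512, 1))
    y = target_fn(X).ravel()
    for width in range(1, max_width + 1):
        net = MLPRegressor(hidden_layer_sizes=(width,), max_iter=5000,
                           random_state=seed, tol=1e-6)
        net.fit(X, y)
        if np.mean((net.predict(X) - y) ** 2) < tol:
            return width
    return None  # nothing under max_width fit to tolerance

print(smallest_width_fitting(lambda x: np.sin(3 * x)))
```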

3 Pretty pictures of neurons

The names to look for here, for beautiful hand-drawn early neuron diagrams, are Camillo Golgi and Santiago Ramón y Cajal, especially the latter (Glickstein 2006; de Castro 2019).

4 References

Amigó, Szczepański, Wajnryb, et al. 2004. “Estimating the Entropy Rate of Spike Trains via Lempel-Ziv Complexity.” Neural Computation.
Barbieri, Quirk, Frank, et al. 2001. “Construction and Analysis of Non-Poisson Stimulus-Response Models of Neural Spiking Activity.” Journal of Neuroscience Methods.
Beniaguev, Segev, and London. 2021. “Single Cortical Neurons as Deep Artificial Neural Networks.” Neuron.
Berwick, Okanoya, Beckers, et al. 2011. “Songs to Syntax: The Linguistics of Birdsong.” Trends in Cognitive Sciences.
Brette. 2008. “Generation of Correlated Spike Trains.” Neural Computation.
———. 2012. “Computing with Neural Synchrony.” PLoS Computational Biology.
Buhusi, and Meck. 2005. “What Makes Us Tick? Functional and Neural Mechanisms of Interval Timing.” Nature Reviews Neuroscience.
Cadieu. 2014. “Deep Neural Networks Rival the Representation of Primate IT Cortex for Core Visual Object Recognition.” PLoS Computational Biology.
Carhart-Harris, and Nutt. 2017. “Serotonin and Brain Function: A Tale of Two Receptors.” Journal of Psychopharmacology.
Crumiller, Knight, Yu, et al. 2011. “Estimating the Amount of Information Conveyed by a Population of Neurons.” Frontiers in Neuroscience.
de Castro. 2019. “Cajal and the Spanish Neurological School: Neuroscience Would Have Been a Different Story Without Them.” Frontiers in Cellular Neuroscience.
Eden, Frank, Barbieri, et al. 2004. “Dynamic Analysis of Neural Encoding by Point Process Adaptive Filtering.” Neural Computation.
Elman. 1990. “Finding Structure in Time.” Cognitive Science.
———. 1993. “Learning and Development in Neural Networks: The Importance of Starting Small.” Cognition.
Fee, Kozhevnikov, and Hahnloser. 2004. “Neural Mechanisms of Vocal Sequence Generation in the Songbird.” Annals of the New York Academy of Sciences.
Fernández, and Solé. 2007. “Neutral Fitness Landscapes in Signalling Networks.” Journal of The Royal Society Interface.
Freedman. 1999. “Wald Lecture: On the Bernstein-von Mises Theorem with Infinite-Dimensional Parameters.” The Annals of Statistics.
Glickstein. 2006. “Golgi and Cajal: The neuron doctrine and the 100th anniversary of the 1906 Nobel Prize.” Current Biology.
Haslinger, Klinkner, and Shalizi. 2010. “The Computational Structure of Spike Trains.” Neural Computation.
Haslinger, Pipa, and Brown. 2010. “Discrete Time Rescaling Theorem: Determining Goodness of Fit for Discrete Time Statistical Models of Neural Spiking.” Neural Computation.
Hinton. n.d. “The Forward-Forward Algorithm: Some Preliminary Investigations.”
Jin. 2009. “Generating Variable Birdsong Syllable Sequences with Branching Chain Networks in Avian Premotor Nucleus HVC.” Physical Review E.
Jin, and Kozhevnikov. 2011. “A Compact Statistical Model of the Song Syntax in Bengalese Finch.” PLoS Computational Biology.
Jonas, and Kording. 2017. “Could a Neuroscientist Understand a Microprocessor?” PLoS Computational Biology.
Kass, Amari, Arai, et al. 2018. “Computational Neuroscience: Mathematical and Statistical Perspectives.” Annual Review of Statistics and Its Application.
Katahira, Suzuki, Okanoya, et al. 2011. “Complex Sequencing Rules of Birdsong Can Be Explained by Simple Hidden Markov Processes.” PLoS ONE.
Kay, Chung, Sosa, et al. 2020. “Constant Sub-second Cycling between Representations of Possible Futures in the Hippocampus.” Cell.
Kutschireiter, Surace, Sprekeler, et al. 2015a. “A Neural Implementation for Nonlinear Filtering.” arXiv preprint arXiv:1508.06818.
Kutschireiter, Surace, Sprekeler, et al. 2015b. “Approximate Nonlinear Filtering with a Recurrent Neural Network.” BMC Neuroscience.
Lee, Battle, Raina, et al. 2007. “Efficient Sparse Coding Algorithms.” Advances in Neural Information Processing Systems.
Marcus, Marblestone, and Dean. 2014. “The atoms of neural computation.” Science.
Nemenman, Bialek, and de Ruyter van Steveninck. 2004. “Entropy and Information in Neural Spike Trains: Progress on the Sampling Problem.” Physical Review E.
Olshausen, Bruno A., and Field. 1996. “Emergence of Simple-Cell Receptive Field Properties by Learning a Sparse Code for Natural Images.” Nature.
Olshausen, Bruno A., and Field. 2004. “Sparse Coding of Sensory Inputs.” Current Opinion in Neurobiology.
Orellana, Rodu, and Kass. 2017. “Population Vectors Can Provide Near Optimal Integration of Information.” Neural Computation.
Parallel Distributed Processing: Explorations in the Microstructure of Cognition. 1986.
Parr, Markovic, Kiebel, et al. 2019. “Neuronal Message Passing Using Mean-Field, Bethe, and Marginal Approximations.” Scientific Reports.
Sandkühler, and Eblen-Zajjur. 1994. “Identification and Characterization of Rhythmic Nociceptive and Non-Nociceptive Spinal Dorsal Horn Neurons in the Rat.” Neuroscience.
Sasahara, Cody, Cohen, et al. 2012. “Structural Design Principles of Complex Bird Songs: A Network-Based Approach.” PLoS ONE.
Shen, Baingana, and Giannakis. 2016. “Nonlinear Structural Vector Autoregressive Models for Inferring Effective Brain Network Connectivity.” arXiv:1610.06551 [stat].
Simoncelli, and Olshausen. 2001. “Natural Image Statistics and Neural Representation.” Annual Review of Neuroscience.
Smith, A., and Brown. 2003. “Estimating a State-Space Model from Point Process Observations.” Neural Computation.
Smith, Evan C., and Lewicki. 2004. “Learning Efficient Auditory Codes Using Spikes Predicts Cochlear Filters.” In Advances in Neural Information Processing Systems.
Smith, Evan C., and Lewicki. 2005. “Efficient Coding of Time-Relative Structure Using Spikes.” Neural Computation.
Smith, Evan C., and Lewicki. 2006. “Efficient Auditory Coding.” Nature.
Starr. 1913. Organic and functional nervous diseases; a text-book of neurology.
Stolk, Noordzij, Verhagen, et al. 2014. “Cerebral Coherence Between Communicators Marks the Emergence of Meaning.” Proceedings of the National Academy of Sciences.
Strong, Koberle, de Ruyter van Steveninck, et al. 1998. “Entropy and Information in Neural Spike Trains.” Physical Review Letters.
Vargas-Irwin, Brandman, Zimmermann, et al. 2015. “Spike Train SIMilarity Space (SSIMS): A Framework for Single Neuron and Ensemble Data Analysis.” Neural Computation.
Volgushev, Ilin, and Stevenson. 2015. “Identifying and Tracking Simulated Synaptic Inputs from Neuronal Firing: Insights from In Vitro Experiments.” PLoS Computational Biology.
Zeki, Romaya, Benincasa, et al. 2014. “The experience of mathematical beauty and its neural correlates.” Frontiers in Human Neuroscience.