The Living Thing / Notebooks


compressed representations of reality for syntactic agents; also, what stuff means

Usefulness: 🔧
Novelty: 💡
Uncertainty: 🤪 🤪 🤪
Incompleteness: 🚧 🚧 🚧
“[…] archetypes don’t exist; the body exists.
The belly inside is beautiful, because the baby grows there,
because your sweet cock, all bright and jolly, thrusts there,
and good, tasty food descends there,
and for this reason the cavern, the grotto, the tunnel
are beautiful and important, and the labyrinth, too,
which is made in the image of our wonderful intestines.
When somebody wants to invent something beautiful and important,
it has to come from there,
because you also came from there the day you were born,
because fertility always comes from inside a cavity,
where first something rots and then, lo and behold,
there’s a little man, a date, a baobab.

And high is better than low,
because if you have your head down, the blood goes to your brain,
because feet stink and hair doesn’t stink as much,
because it’s better to climb a tree and pick fruit
than end up underground, food for worms,
and because you rarely hurt yourself hitting something above
— you really have to be in an attic —
while you often hurt yourself falling.
That’s why up is angelic and down devilish.”

 — Umberto Eco. Foucault’s Pendulum.

On the mapping between linguistic tokens and what they denote.

If I had time I would learn about: Wierzbicka’s semantic primes, Valiant’s PAC-learning, Wittgenstein, probably Mark Johnson if the over-writing doesn’t kill me. Logic-and-language philosophers, toy axiomatic worlds. Classic AI symbolic reasoning approaches. Drop in via game theory and neurolinguistics? Ignore most of it, mention plausible models based on statistical learnability.

Learnability of terms

When do we need to use words at all? BGPL10 have a toy model for color words, a clever choice of domain.
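BGPL10's category game is too involved to reproduce here, but the underlying mechanism, agents converging on shared words through repeated pairwise interactions, can be sketched as a minimal naming game. All names and parameters below are mine for illustration, not theirs:

```python
import random

def naming_game(n_agents=20, steps=3000, seed=0):
    """Minimal naming game: agents converge on one shared word for one object.

    A toy illustration of lexical convergence only, not BGPL10's category
    game (which also negotiates the colour categories themselves).
    """
    rng = random.Random(seed)
    vocab = [set() for _ in range(n_agents)]  # each agent's candidate words
    for _ in range(steps):
        s, h = rng.sample(range(n_agents), 2)  # pick a speaker and a hearer
        if not vocab[s]:
            vocab[s].add(f"w{rng.randrange(10**6)}")  # speaker invents a word
        word = rng.choice(sorted(vocab[s]))
        if word in vocab[h]:
            vocab[s] = {word}  # success: both discard competing words
            vocab[h] = {word}
        else:
            vocab[h].add(word)  # failure: hearer learns the new word
    return vocab

final = naming_game()
```

With two agents convergence is immediate; with more, the population's vocabularies typically collapse to a single shared word after enough interactions.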

StTe05: a connection between the large-scale structure of semantic networks and count-model stochastics.

Also what embodiment means for this stuff.

Neural models

What does the MRI tell us about denotation in the brain?

SNVV14 is worth it for the tagline alone: “experimental semiotics”

“How can we understand each other during communicative interactions? An influential suggestion holds that communicators are primed by each other’s behaviors, with associative mechanisms automatically coordinating the production of communicative signals and the comprehension of their meanings. An alternative suggestion posits that mutual understanding requires shared conceptualizations of a signal’s use, i.e., “conceptual pacts” that are abstracted away from specific experiences. Both accounts predict coherent neural dynamics across communicators, aligned either to the occurrence of a signal or to the dynamics of conceptual pacts. Using coherence spectral-density analysis of cerebral activity simultaneously measured in pairs of communicators, this study shows that establishing mutual understanding of novel signals synchronizes cerebral dynamics across communicators’ right temporal lobes. This interpersonal cerebral coherence occurred only within pairs with a shared communicative history, and at temporal scales independent from signals’ occurrences. These findings favor the notion that meaning emerges from shared conceptualizations of a signal’s use.” (SNVV14, abstract)

Meanings as probabilistic graphical classifiers

Eliezer Yudkowsky’s classic essay, How an algorithm feels from the inside.
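Yudkowsky's blegg/rube thought experiment can be phrased as a tiny probabilistic classifier: the category label is just a hub summarising correlated features, so membership comes in degrees rather than being a fact about the world. The priors and feature likelihoods below are invented for illustration:

```python
# Toy "blegg vs. rube" classifier in the spirit of Yudkowsky's essay.
# All numbers are made up; the point is that category membership
# is a posterior probability, not a binary essence.
P_BLEGG = 0.5
LIKELIHOOD = {             # (P(feature | blegg), P(feature | rube))
    "blue":       (0.95, 0.05),
    "egg_shaped": (0.90, 0.10),
    "furred":     (0.80, 0.20),
}

def p_blegg(features):
    """Posterior P(blegg | observed binary features) via naive Bayes."""
    num, den = P_BLEGG, 1 - P_BLEGG
    for f, present in features.items():
        pb, pr = LIKELIHOOD[f]
        num *= pb if present else 1 - pb
        den *= pr if present else 1 - pr
    return num / (num + den)
```

An object that is blue and egg-shaped is almost certainly a blegg; one with mixed features sits in between, and the question "but is it *really* a blegg?" has no further answer.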

Advanced grade, where semantics meets discourse and values: Scott Alexander’s The whole city is centre.

Word vector models

Nearly-reversible, distributed representations of semantics via entity embeddings. Do these actually tell us anything about semantics?

As invented by BDVJ03 and popularised/refined by Mikolov and Dean at Google, skip-gram semantic vector spaces are definitely the hippest way of defining String distances for natural language this season.
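For concreteness, here is the training signal a skip-gram model consumes: (target, context) pairs drawn from a sliding window over the text. This is only the data-preparation step, not Mikolov et al.'s actual training code; the function name is mine:

```python
def skipgram_pairs(tokens, window=2):
    """Enumerate the (target, context) training pairs for a skip-gram model.

    The model itself learns vectors such that a target word predicts its
    context words; this just shows what "context" means for a window size.
    """
    pairs = []
    for i, target in enumerate(tokens):
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                pairs.append((target, tokens[j]))
    return pairs

pairs = skipgram_pairs("the cat sat on the mat".split(), window=1)
```

Words that occur in similar contexts end up producing similar pair distributions, which is why their learned vectors land near each other.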

Technology Review: Mappings 1-5

Sanjeev Arora explains that, moreover, the skip-gram vectors for polysemous words are a weighted sum of the vectors of their constituent senses.
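Arora et al.'s linear-structure claim is easy to state with toy vectors. Everything below (the dimension, the senses, the weight) is invented for illustration; in the real result the weights are related to sense frequencies and the sense vectors come from a generative model of the corpus:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 50  # embedding dimension (made up)

# Hypothetical sense vectors for "tie": clothing vs. a drawn match.
v_clothing = rng.normal(size=d)
v_draw = rng.normal(size=d)

# The claim, schematically: the single corpus vector of a polysemous word
# is approximately a frequency-weighted sum of its senses' vectors.
alpha = 0.7  # relative frequency of the clothing sense (invented)
v_tie = alpha * v_clothing + (1 - alpha) * v_draw

def cos(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```

The merged vector sits closer to the more frequent sense, which is one reason the rarer sense is hard to recover from the single embedding alone.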

Graph formulations, e.g. David McAllester, Deep Meaning Beyond Thought Vectors:

I want to complain at this point that you can’t cram the meaning of a bleeping sentence into a bleeping sequence of vectors. The graph structures on the positions in the sentence used in the above models should be exposed to the user of the semantic representation. I would take the position that the meaning should be an embedded knowledge graph — a graph on embedded entity nodes and typed relations (edges) between them. A node representing an event can be connected through edges to entities that fill the semantic roles of the event type.
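McAllester's embedded knowledge graph can be sketched as a data structure: entity nodes carrying (stand-in) vectors, plus typed edges for semantic roles. The event, entities, and role names below are my own toy example, not his notation:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    embedding: list  # stand-in for a learned entity vector

@dataclass
class Graph:
    nodes: dict = field(default_factory=dict)
    edges: list = field(default_factory=list)  # (relation, src, dst) triples

    def add(self, name, embedding=None):
        self.nodes[name] = Node(name, embedding or [])

    def relate(self, relation, src, dst):
        self.edges.append((relation, src, dst))

# "Mary gave John a book": an event node linked by typed role edges
# to the entities filling the semantic roles of the event type.
g = Graph()
for n in ["give-event", "Mary", "John", "book"]:
    g.add(n)
g.relate("agent", "give-event", "Mary")
g.relate("recipient", "give-event", "John")
g.relate("theme", "give-event", "book")
```

Unlike a flat sequence of vectors, the role edges survive as queryable structure: you can ask who the recipient was without re-decoding the whole representation.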


Dead Philosophers:

“The rose has teeth in the mouth of the beast.”

 — Ludwig Wittgenstein, Philosophical Investigations.


Abend, Omri, and Ari Rappoport. 2017. “The State of the Art in Semantic Representation.” In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL 2017), 77–89. Association for Computational Linguistics.

Arbib, Michael. 2002. “The Mirror System, Imitation, and the Evolution of Language.” In Imitation in Animals and Artifacts, edited by Chrystopher Nehaniv and Kerstin Dautenhahn. MIT Press.

Bengio, Yoshua, Réjean Ducharme, Pascal Vincent, and Christian Jauvin. 2003. “A Neural Probabilistic Language Model.” Journal of Machine Learning Research 3 (Feb): 1137–55.

Cancho, Ramon Ferrer i, and Ricard V. Solé. 2003. “Least Effort and the Origins of Scaling in Human Language.” Proceedings of the National Academy of Sciences 100 (3): 788–91.

Cao, Hui, George Hripcsak, and Marianthi Markatou. 2007. “A Statistical Methodology for Analyzing Co-Occurrence Data from a Large Sample.” Journal of Biomedical Informatics 40 (3): 343–52.

Christiansen, Morten H, and Nick Chater. 2008. “Language as Shaped by the Brain.” Behavioral and Brain Sciences 31: 489–509.

Corominas-Murtra, Bernat, and Ricard V. Solé. 2010. “Universality of Zipf’s Law.” Physical Review E 82 (1): 011102.

Deerwester, Scott, Susan T. Dumais, George W. Furnas, Thomas K. Landauer, and Richard Harshman. 1990. “Indexing by Latent Semantic Analysis.” Journal of the American Society for Information Science 41 (6): 391–407.

Elman, Jeffrey L. 1990. “Finding Structure in Time.” Cognitive Science 14: 179–211.

———. 1993. “Learning and Development in Neural Networks: The Importance of Starting Small.” Cognition 48: 71–99.

———. 1995. “Language as a Dynamical System.” In Mind as Motion: Explorations in the Dynamics of Cognition, 195–225. MIT Press.

Gärdenfors, Peter. 2014. Geometry of Meaning: Semantics Based on Conceptual Spaces. Cambridge, Massachusetts: The MIT Press.

Guthrie, David, Ben Allison, Wei Liu, Louise Guthrie, and Yorick Wilks. 2006. “A Closer Look at Skip-Gram Modelling.” In Proceedings of LREC 2006.

Kiros, Ryan, Yukun Zhu, Ruslan Salakhutdinov, Richard S. Zemel, Antonio Torralba, Raquel Urtasun, and Sanja Fidler. 2015. “Skip-Thought Vectors,” June.

Lazaridou, Angeliki, Dat Tien Nguyen, Raffaella Bernardi, and Marco Baroni. 2015. “Unveiling the Dreams of Word Embeddings: Towards Language-Driven Image Generation,” June.

Le, Quoc V., and Tomas Mikolov. 2014. “Distributed Representations of Sentences and Documents.” In Proceedings of the 31st International Conference on Machine Learning, 1188–96.

Mikolov, Tomas, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. “Efficient Estimation of Word Representations in Vector Space,” January.

Mikolov, Tomas, Quoc V. Le, and Ilya Sutskever. 2013. “Exploiting Similarities Among Languages for Machine Translation,” September.

Mikolov, Tomas, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. “Distributed Representations of Words and Phrases and Their Compositionality.” In Advances in Neural Information Processing Systems 26, 3111–9. Curran Associates, Inc.

Mikolov, Tomas, Wen-tau Yih, and Geoffrey Zweig. 2013. “Linguistic Regularities in Continuous Space Word Representations.” In HLT-NAACL, 746–51. Citeseer.

Narayanan, Annamalai, Mahinthan Chandramohan, Rajasekar Venkatesan, Lihui Chen, Yang Liu, and Shantanu Jaiswal. 2017. “Graph2vec: Learning Distributed Representations of Graphs,” July.

Pennington, Jeffrey, Richard Socher, and Christopher D. Manning. 2014. “GloVe: Global Vectors for Word Representation.” Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP 2014) 12.

Petersson, Karl-Magnus, Vasiliki Folia, and Peter Hagoort. 2012. “What Artificial Grammar Learning Reveals About the Neurobiology of Syntax.” Brain and Language, The Neurobiology of Syntax, 120 (2): 83–95.

Rizzolatti, Giacomo, and Laila Craighero. 2004. “The Mirror-Neuron System.” Annual Review of Neuroscience 27: 169–92.

Smith, Kenny, and Simon Kirby. 2008. “Cultural Evolution: Implications for Understanding the Human Language Faculty and Its Evolution.” Philosophical Transactions of the Royal Society B: Biological Sciences 363: 3591–3603.

Steyvers, Mark, and Joshua B. Tenenbaum. 2005. “The Large-Scale Structure of Semantic Networks: Statistical Analyses and a Model of Semantic Growth.” Cognitive Science 29 (1): 41–78.

Stolk, Arjen, Matthijs L. Noordzij, Lennart Verhagen, Inge Volman, Jan-Mathijs Schoffelen, Robert Oostenveld, Peter Hagoort, and Ivan Toni. 2014. “Cerebral Coherence Between Communicators Marks the Emergence of Meaning.” Proceedings of the National Academy of Sciences 111 (51): 18183–8.

Zanette, Damián H. 2006. “Zipf’s Law and the Creation of Musical Context.” Musicae Scientiae 10 (1): 3–18.