Natural language processing

Dave, although you took very thorough precautions in the pod against my hearing you, I could see your lips move.

I don’t really know anything about this. See instead, perhaps, design grammars, semantics, iterated function systems and my obsolete research proposal in this area, grammatical inference, and information retrieval.

What is NLP

Software

Blingfire

The Bling Fire Tokenizer is a tokenizer designed for fast-speed and quality tokenization of Natural Language text. It mostly follows the tokenization logic of NLTK, except hyphenated words are split and a few errors are fixed.

This looks like it is also good for non-NLP tokenization tasks.
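
For orientation, here is a minimal sketch of how I understand the Python bindings are used, assuming the blingfire package is installed; text_to_sentences and text_to_words are the headline helpers, returning newline- and space-delimited strings respectively.

```python
# Minimal sketch of BlingFire's default tokenizers (assumes `pip install blingfire`).
from blingfire import text_to_sentences, text_to_words

text = "Mr. O'Neill thinks that the boys' stories about Chile's capital aren't amusing."

# Sentence splitting: one sentence per line, returned as a single string.
print(text_to_sentences(text))

# Word tokenization: a single space-separated string of tokens.
print(text_to_words(text))
```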

SpaCy

http://spacy.io/:

spaCy excels at large-scale information extraction tasks. It’s written from the ground up in carefully memory-managed Cython. Independent research has confirmed that spaCy is the fastest in the world. If your application needs to process entire web dumps, spaCy is the library you want to be using. […]

spaCy is the best way to prepare text for deep learning. It interoperates seamlessly with TensorFlow, PyTorch, scikit-learn, Gensim and the rest of Python’s awesome AI ecosystem. With spaCy, you can easily construct linguistically sophisticated statistical models for a variety of NLP problems.
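
For concreteness, a minimal sketch of the standard spaCy pipeline usage, assuming the small English model en_core_web_sm has been downloaded (python -m spacy download en_core_web_sm):

```python
# Minimal spaCy pipeline sketch: tokenization, POS tags, dependencies, entities.
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Apple is looking at buying a U.K. startup for $1 billion.")

# Tokens with part-of-speech tags and dependency labels.
for token in doc:
    print(token.text, token.pos_, token.dep_)

# Named entities found by the statistical model.
for ent in doc.ents:
    print(ent.text, ent.label_)
```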

pytorch.text

As with other deep learning frameworks, PyTorch has some basic NLP support; see pytorch.text.
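
A minimal sketch of the kind of preprocessing pytorch.text (torchtext) handles, namely tokenization and vocabulary building. The torchtext API has changed considerably between releases, so the calls below (from the newer functional-style API) are indicative rather than canonical:

```python
# Tokenize a toy corpus and build an integer vocabulary for a model.
from torchtext.data.utils import get_tokenizer
from torchtext.vocab import build_vocab_from_iterator

tokenizer = get_tokenizer("basic_english")

corpus = [
    "the cat sat on the mat",
    "the dog barked at the cat",
]

# Build a vocabulary over the tokenized corpus, reserving an <unk> index.
vocab = build_vocab_from_iterator(
    (tokenizer(line) for line in corpus), specials=["<unk>"]
)
vocab.set_default_index(vocab["<unk>"])

# Numericalise a sentence: token strings -> integer ids ("chased" maps to <unk>).
print(vocab(tokenizer("the cat chased the dog")))
```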

NLTK

NLTK is a classic Python teaching library for rolling your own language processing.
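
A minimal sketch of doing it by hand with NLTK, assuming the punkt and averaged_perceptron_tagger data packages have already been fetched via nltk.download:

```python
# Sentence splitting, word tokenization and POS tagging with NLTK's defaults.
import nltk
from nltk.tokenize import sent_tokenize, word_tokenize

text = "NLTK is a classic teaching library. It lets you roll your own pipeline."

for sentence in sent_tokenize(text):
    tokens = word_tokenize(sentence)
    # Part-of-speech tagging with the default (averaged perceptron) tagger.
    print(nltk.pos_tag(tokens))
```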

NLP4J

Formerly ClearNLP.

The Natural Language Processing for JVM languages (NLP4J) project provides:

- NLP tools readily available for research in various disciplines.
- Frameworks for fast development of efficient and robust NLP components.
- API for manipulating computational structures in NLP (e.g., dependency graph).

The project is initiated and currently led by the Emory NLP research group with many helps [sic] from the community.

Misc other

There are many more, but I am stopping with these links, having found the bits and pieces I need for my purposes.

Refs

Angluin, Dana. 1988. “Identifying Languages from Stochastic Examples.” No. YALEU/DCS/RR-614. http://www.cs.yale.edu/publications/techreports/tr614.pdf.

Arisoy, Ebru, Tara N. Sainath, Brian Kingsbury, and Bhuvana Ramabhadran. 2012. “Deep Neural Network Language Models.” In Proceedings of the NAACL-HLT 2012 Workshop: Will We Ever Really Replace the N-Gram Model? On the Future of Language Modeling for HLT, 20–28. WLM ’12. Stroudsburg, PA, USA: Association for Computational Linguistics. http://anthology.aclweb.org/W/W12/W12-27.pdf#page=30.

Autebert, Jean-Michel, Jean Berstel, and Luc Boasson. 1997. “Context-Free Languages and Pushdown Automata.” In Handbook of Formal Languages, Vol. 1, edited by Grzegorz Rozenberg and Arto Salomaa, 111–74. New York, NY, USA: Springer-Verlag New York, Inc. http://dl.acm.org/citation.cfm?id=267846.267849.

Baeza-Yates, Ricardo, and Berthier Ribeiro-Neto. 1999. Modern Information Retrieval. 1st ed. Addison Wesley.

Bail, Christopher Andrew. 2016. “Combining Natural Language Processing and Network Analysis to Examine How Advocacy Organizations Stimulate Conversation on Social Media.” Proceedings of the National Academy of Sciences, September, 201607151. https://doi.org/10.1073/pnas.1607151113.

Bengio, Yoshua, Réjean Ducharme, Pascal Vincent, and Christian Jauvin. 2003. “A Neural Probabilistic Language Model.” Journal of Machine Learning Research 3 (Feb): 1137–55. http://www.jmlr.org/papers/v3/bengio03a.html.

Berstel, Jean, and Luc Boasson. 1990. “Transductions and Context-Free Languages.” In Handbook of Theoretical Computer Science, Vol. A: Algorithms and Complexity, edited by J. van Leeuwen, Albert R. Meyer, M. Nivat, Matthew Paterson, and D. Perrin, 1–278. http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.162.684.

Booth, Taylor L, and R. A. Thompson. 1973. “Applying Probability Measures to Abstract Languages.” IEEE Transactions on Computers C-22 (5): 442–50. https://doi.org/10.1109/T-C.1973.223746.

Charniak, Eugene. 1996. Statistical Language Learning. Reprint. A Bradford Book.

Chater, Nick, and Christopher D Manning. 2006. “Probabilistic Models of Language Processing and Acquisition.” Trends in Cognitive Sciences 10 (7): 335–44. https://doi.org/10.1016/j.tics.2006.05.006.

Cho, Kyunghyun, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. “Learning Phrase Representations Using RNN Encoder-Decoder for Statistical Machine Translation.” In EMNLP 2014. http://arxiv.org/abs/1406.1078.

Clark, Alexander, and Rémi Eyraud. 2005. “Identification in the Limit of Substitutable Context-Free Languages.” In Algorithmic Learning Theory, edited by Sanjay Jain, Hans Simon, and Etsuji Tomita, 3734:283–96. Lecture Notes in Computer Science. Springer Berlin / Heidelberg. http://www.springerlink.com/content/1hv6ha07jjgfbvbt/abstract/.

Clark, Alexander, Christophe Costa Florêncio, and Chris Watkins. 2006. “Languages as Hyperplanes: Grammatical Inference with String Kernels.” In Machine Learning: ECML 2006, edited by Johannes Fürnkranz, Tobias Scheffer, and Myra Spiliopoulou, 90–101. Lecture Notes in Computer Science 4212. Springer Berlin Heidelberg. http://link.springer.com/chapter/10.1007/11871842_13.

Clark, Alexander, Christophe Costa Florêncio, Chris Watkins, and Mariette Serayet. 2006. “Planar Languages and Learnability.” In Grammatical Inference: Algorithms and Applications, edited by Yasubumi Sakakibara, Satoshi Kobayashi, Kengo Sato, Tetsuro Nishino, and Etsuji Tomita, 148–60. Lecture Notes in Computer Science 4201. Springer Berlin Heidelberg. http://link.springer.com/chapter/10.1007/11872436_13.

Collins, Michael, and Nigel Duffy. 2002. “Convolution Kernels for Natural Language.” In Advances in Neural Information Processing Systems 14, edited by T. G. Dietterich, S. Becker, and Z. Ghahramani, 625–32. MIT Press. http://papers.nips.cc/paper/2089-convolution-kernels-for-natural-language.pdf.

Gold, E Mark. 1967. “Language Identification in the Limit.” Information and Control 10 (5): 447–74. https://doi.org/10.1016/S0019-9958(67)91165-5.

Gonzalez, R. C., and M. G. Thomason. n.d. Syntactic Pattern Recognition: An Introduction.

Grefenstette, Edward, Karl Moritz Hermann, Mustafa Suleyman, and Phil Blunsom. 2015. “Learning to Transduce with Unbounded Memory,” June. http://arxiv.org/abs/1506.02516.

Greibach, Sheila A. 1966. “The Unsolvability of the Recognition of Linear Context-Free Languages.” J. ACM 13 (4): 582–87. https://doi.org/10.1145/321356.321365.

Hopcroft, John E., and Jeffrey D. Ullman. 1979. Introduction to Automata Theory, Languages and Computation. 1st ed. Addison-Wesley Publishing Company.

Kontorovich, Leonid (Aryeh), Corinna Cortes, and Mehryar Mohri. 2008. “Kernel Methods for Learning Languages.” Theoretical Computer Science, Algorithmic Learning Theory, 405 (3): 223–36. https://doi.org/10.1016/j.tcs.2008.06.037.

Kontorovich, Leonid, Corinna Cortes, and Mehryar Mohri. 2006. “Learning Linearly Separable Languages.” In Algorithmic Learning Theory, edited by José L. Balcázar, Philip M. Long, and Frank Stephan, 288–303. Lecture Notes in Computer Science 4264. Springer Berlin Heidelberg. http://link.springer.com/chapter/10.1007/11894841_24.

Lafferty, John D., Andrew McCallum, and Fernando C. N. Pereira. 2001. “Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data.” In Proceedings of the Eighteenth International Conference on Machine Learning, 282–89. ICML ’01. San Francisco, CA, USA: Morgan Kaufmann Publishers Inc. http://repository.upenn.edu/cis_papers/159/.

Lipton, Zachary C., John Berkowitz, and Charles Elkan. 2015. “A Critical Review of Recurrent Neural Networks for Sequence Learning,” May. http://arxiv.org/abs/1506.00019.

Manning, Christopher D. 2002. “Probabilistic Syntax.” In Probabilistic Linguistics, 289–341. Cambridge, MA: MIT Press.

Manning, Christopher D, Prabhakar Raghavan, and Hinrich Schütze. 2008. Introduction to Information Retrieval. Cambridge University Press.

Manning, Christopher D, and Hinrich Schütze. 1999. Foundations of Statistical Natural Language Processing. Cambridge, Mass.: MIT Press.

Mikolov, Tomas, Quoc V. Le, and Ilya Sutskever. 2013. “Exploiting Similarities Among Languages for Machine Translation,” September. http://arxiv.org/abs/1309.4168.

Mikolov, Tomáš, Martin Karafiát, Lukáš Burget, Jan Černockỳ, and Sanjeev Khudanpur. 2010. “Recurrent Neural Network Based Language Model.” In Eleventh Annual Conference of the International Speech Communication Association. http://www.fit.vutbr.cz/research/groups/speech/servite/2010/rnnlm_mikolov.pdf.

Mitra, Bhaskar, and Nick Craswell. 2017. “Neural Models for Information Retrieval,” May. http://arxiv.org/abs/1705.01509.

Mohri, Mehryar, Fernando Pereira, and Michael Riley. 1996. “Weighted Automata in Text and Speech Processing.” In Proceedings of the 12th Biennial European Conference on Artificial Intelligence (ECAI-96), Workshop on Extended Finite State Models of Language. Budapest, Hungary: John Wiley and Sons, Chichester. http://arxiv.org/abs/cs/0503077.

———. 2002. “Weighted Finite-State Transducers in Speech Recognition.” Computer Speech & Language 16 (1): 69–88. https://doi.org/10.1006/csla.2001.0184.

O’Donnell, Timothy J., Joshua B. Tenenbaum, and Noah D. Goodman. 2009. “Fragment Grammars: Exploring Computation and Reuse in Language,” March. http://dspace.mit.edu/handle/1721.1/44963.

Pennington, Jeffrey, Richard Socher, and Christopher D. Manning. 2014. “GloVe: Global Vectors for Word Representation.” Proceedings of the Empirical Methods in Natural Language Processing (EMNLP 2014) 12. http://nlp.stanford.edu/projects/glove/glove.pdf.

Petersson, Karl-Magnus, Vasiliki Folia, and Peter Hagoort. 2012. “What Artificial Grammar Learning Reveals About the Neurobiology of Syntax.” Brain and Language, The Neurobiology of Syntax, 120 (2): 83–95. https://doi.org/10.1016/j.bandl.2010.08.003.

Rijsbergen, C. J. van. 1979. Information Retrieval. 2nd ed. Butterworth-Heinemann. http://www.dcs.gla.ac.uk/Keith/Preface.html.

Salakhutdinov, Ruslan. 2015. “Learning Deep Generative Models.” Annual Review of Statistics and Its Application 2 (1): 361–85. https://doi.org/10.1146/annurev-statistics-010814-020120.

Solan, Zach, David Horn, Eytan Ruppin, and Shimon Edelman. 2005. “Unsupervised Learning of Natural Languages.” Proceedings of the National Academy of Sciences of the United States of America 102 (33): 11629–34. https://doi.org/10.1073/pnas.0409746102.

Sutton, Charles, Andrew McCallum, and Khashayar Rohanimanesh. 2007. “Dynamic Conditional Random Fields: Factorized Probabilistic Models for Labeling and Segmenting Sequence Data.” Journal of Machine Learning Research 8 (May): 693–723. http://dl.acm.org/citation.cfm?id=1248659.1248684.

Wetherell, C. S. 1980. “Probabilistic Languages: A Review and Some Open Questions.” ACM Comput. Surv. 12 (4): 361–79. https://doi.org/10.1145/356827.356829.

Wolff, J Gerard. 2000. “Syntax, Parsing and Production of Natural Language in a Framework of Information Compression by Multiple Alignment, Unification and Search.” Journal of Universal Computer Science 6 (8): 781–829.