A mere placeholder.
I’d like to know how good the results are getting in this area, and how general across people etc. How close are we to the point that someone can put an arbitrary individual in some kind of tomography machine and say what they are thinking without pre-training or priming?
Marcel Just et al. do a lot of this. It certainly leads to fun press releases, e.g. CMU Scientists Harness “Mind Reading” Technology to Decode Complex Thoughts, but I need time to read the details before I can judge how much progress they are actually making towards the science-fiction version. (WaCJ17)
Researchers watch video images people are seeing, decoded from their fMRI brain scans in near-real-time.
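At its simplest, this kind of decoding is a regression problem: learn a mapping from voxel activations back to stimulus features. A minimal toy sketch of that idea, with simulated data and a ridge-regularised linear decoder (names and parameters are my own invention; real pipelines, e.g. Miyawaki et al.'s multiscale local image decoders, are far more elaborate):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: each "stimulus" is a small binary image; simulated voxel
# responses are a noisy linear function of the pixels. This is a
# conceptual sketch only, not real fMRI preprocessing.
n_trials, n_pixels, n_voxels = 200, 16, 64

images = rng.integers(0, 2, size=(n_trials, n_pixels)).astype(float)
mixing = rng.normal(size=(n_pixels, n_voxels))  # unknown "encoding" model
voxels = images @ mixing + 0.1 * rng.normal(size=(n_trials, n_voxels))

# Fit a ridge-regularised linear decoder: pixels ~ voxels @ W
lam = 1.0
W = np.linalg.solve(
    voxels.T @ voxels + lam * np.eye(n_voxels),
    voxels.T @ images,
)

# Reconstruct images by thresholding the linear readout
recon = (voxels @ W > 0.5).astype(float)
accuracy = (recon == images).mean()
print(f"pixelwise reconstruction accuracy: {accuracy:.2f}")
```

The science-fiction version founders on exactly the assumptions this sketch makes: the decoder here is fit per "subject" on paired stimulus/response data, which is why generalising across people without per-subject calibration is hard.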
Could a neuroscientist understand a microprocessor? (JoKo17)
- Boettiger, C. (2015) An introduction to Docker for reproducible research, with examples from the R environment. ACM SIGOPS Operating Systems Review, 49(1), 71–79. DOI.
- Jonas, E., & Kording, K. P. (2017) Could a Neuroscientist Understand a Microprocessor?. PLOS Computational Biology, 13(1), e1005268. DOI.
- Miyawaki, Y., Uchida, H., Yamashita, O., Sato, M., Morito, Y., Tanabe, H. C., … Kamitani, Y. (2008) Visual Image Reconstruction from Human Brain Activity using a Combination of Multiscale Local Image Decoders. Neuron, 60(5), 915–929. DOI.
- Wang, J., Cherkassky, V. L., & Just, M. A. (2017) Predicting the brain activation pattern associated with the propositional content of a sentence: Modeling neural representations of events and states. Human Brain Mapping. DOI.