Publisher's Synopsis
Brain-reading, or thought identification, uses fMRI-detected responses of multiple voxels in the brain evoked by a stimulus in order to decode the original stimulus. Advances in human neuroimaging have made it possible to decode a person's conscious experience from non-invasive measurements of their brain activity. Brain-reading studies differ in the type of decoding employed (i.e. classification, identification, or reconstruction), the target (e.g. visual patterns, auditory patterns, or cognitive states), and the decoding algorithm (linear classification, nonlinear classification, direct reconstruction, Bayesian reconstruction, etc.).
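The simplest of these decoding types, classification, can be illustrated with a toy sketch: a linear decoder is fit to multi-voxel response patterns and predicts which of two stimulus classes evoked each pattern. Everything here (voxel counts, activation patterns, noise level) is simulated for illustration and does not correspond to any particular study.

```python
# Illustrative sketch only: decoding a binary stimulus class from
# simulated multi-voxel fMRI response patterns with a linear decoder.
# All data and parameters are invented for the example.
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_voxels = 200, 50

# Each stimulus class is assumed to evoke a characteristic voxel pattern.
pattern_a = rng.normal(0.0, 1.0, n_voxels)
pattern_b = rng.normal(0.0, 1.0, n_voxels)

labels = rng.integers(0, 2, n_trials)
patterns = np.where(labels[:, None] == 0, pattern_a, pattern_b)
responses = patterns + rng.normal(0.0, 0.8, (n_trials, n_voxels))  # measurement noise

# Least-squares linear decoder: find weights w so that responses @ w ~ labels.
w, *_ = np.linalg.lstsq(responses, labels.astype(float), rcond=None)
predictions = (responses @ w) > 0.5
accuracy = (predictions == labels).mean()
print(f"decoding accuracy: {accuracy:.2f}")
```

Real studies use cross-validated classifiers on preprocessed fMRI data; the point of the sketch is only that decoding reduces to learning a mapping from voxel patterns to stimulus labels.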
Professor of neuropsychology Barbara Sahakian qualifies, "A lot of neuroscientists in the field are very cautious and say we can't talk about reading individuals' minds, and right now that is very true, but we're moving ahead so rapidly, it's not going to be that long before we will be able to tell whether someone's making up a story, or whether someone intended to do a crime, with a certain degree of certainty."

Identification of complex natural images is possible using voxels from early visual cortex and the areas forward of them (visual areas V3A, V3B, V4, and the lateral occipital cortex) together with Bayesian inference. This brain-reading approach uses three components: a structural encoding model that characterizes responses in early visual areas; a semantic encoding model that characterizes responses in anterior visual areas; and a Bayesian prior that describes the distribution of structural and semantic scene statistics. Experimentally, subjects first view 1,750 black-and-white natural images, which are correlated with voxel activation in their brains. The subjects then view another 120 novel target images, and information from the earlier scans is used to reconstruct them. The natural images used include pictures of a seaside cafe and harbor, performers on a stage, and dense foliage.

In 2008 IBM applied for a patent on how to extract mental images of human faces from the human brain. It uses a feedback loop based on brain measurements of the fusiform gyrus, an area of the brain that activates in proportion to the degree of facial recognition.

In 2011, a team led by Shinji Nishimoto used only brain recordings to partially reconstruct what volunteers were seeing. The researchers applied a new model of how moving-object information is processed in human brains while volunteers watched clips from several videos.
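The Bayesian identification idea described above can be sketched in a few lines: an encoding model predicts the voxel response expected for each candidate image, and the decoder selects the candidate that maximizes the posterior probability, i.e. a Gaussian likelihood of the observed response combined with a prior over images. The encoding model, prior, and responses below are simulated stand-ins, not the models from the actual study.

```python
# Hedged sketch of Bayesian image identification: pick the candidate image
# whose encoding-model prediction best explains the observed voxel pattern,
# weighted by a prior. All quantities here are simulated placeholders.
import numpy as np

rng = np.random.default_rng(1)
n_voxels, n_candidates = 40, 120

# Simulated encoding-model output: one predicted voxel response per candidate image.
predicted = rng.normal(0.0, 1.0, (n_candidates, n_voxels))
log_prior = np.log(np.full(n_candidates, 1.0 / n_candidates))  # flat prior for the sketch

true_index = 17
sigma = 0.5  # assumed Gaussian measurement noise
observed = predicted[true_index] + rng.normal(0.0, sigma, n_voxels)

# log p(image | response) ∝ log p(response | image) + log p(image)
sq_err = ((observed - predicted) ** 2).sum(axis=1)
log_posterior = -sq_err / (2 * sigma**2) + log_prior
identified = int(np.argmax(log_posterior))
print(f"identified image index: {identified}")
```

With a flat prior this reduces to picking the prediction closest to the observed response; the structural and semantic scene statistics mentioned above would enter through a non-uniform `log_prior`.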
An algorithm searched through thousands of hours of external YouTube video footage (none of the videos were the same as the ones the volunteers watched) to select the clips that were most similar. The authors have uploaded demos comparing the watched and the computer-estimated videos.

The category of event a person freely recalls can be identified from fMRI before they say what they remembered.

Brian Pasley and colleagues at the University of California, Berkeley published a paper in PLoS Biology in which subjects' internal neural processing of auditory information was decoded and reconstructed as sound on a computer, by gathering and analyzing electrical signals taken directly from the subjects' brains. The research team conducted their studies on the superior temporal gyrus, a region of the brain involved in the higher-order neural processing that makes semantic sense of auditory information. The team used a computer model to analyze the parts of the brain that might be involved in neural firing while processing auditory signals. Using the computational model, the scientists were able to identify the brain activity involved in processing auditory information when subjects were presented with recordings of individual words. The computer model of auditory information processing was then used to reconstruct some of the words back into sound, based on the neural processing of the subjects.
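The clip-matching step can be sketched as a similarity search: given the response pattern evoked by what a volunteer watched, rank a library of candidate clips by how closely their model-predicted responses match, then blend the best matches into a crude reconstruction. The feature vectors and library below are simulated; the actual study used fitted motion-energy encoding models over a far larger clip library.

```python
# Illustrative sketch only: nearest-clip matching by cosine similarity,
# with simulated "predicted response" features standing in for the
# encoding-model outputs used in the real work.
import numpy as np

rng = np.random.default_rng(2)
n_clips, n_features = 1000, 64

library = rng.normal(0.0, 1.0, (n_clips, n_features))   # predicted responses per clip
target = library[3] + rng.normal(0.0, 0.3, n_features)  # observed brain response

# Cosine similarity between the observed response and each library clip.
norms = np.linalg.norm(library, axis=1) * np.linalg.norm(target)
similarity = library @ target / norms

top = np.argsort(similarity)[::-1][:10]      # ten most similar clips
reconstruction = library[top].mean(axis=0)   # blend of the best matches
print("best match:", int(top[0]))
```

Averaging several top matches rather than taking only the single best one is what produces the blurry, dream-like estimated videos shown in the published demos.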