While “the focus of Shannon’s formulation was on the signal and noise of the channel, [...] he made it clear that whatever was sent over the channel would need to be decoded by a receiver. Thus, in Shannon’s formulation, the quantification of information over a channel was contingent on the existence of a ‘receiver’,” the authors write. A key point of the paper by De-Wit and colleagues is therefore that most modern neuroscience studies focus on how we, the experimenters, can interpret the activation we find in BOLD or EEG signals (experimenter-as-receiver), while the focus should be on whether the rest of the brain can actually interpret this activation (cortex-as-receiver): “It is only when physical responses can be shown to be used by the brain that we have positive evidence that a physical signal acts as information.”
Information is in that sense not an objective measure but depends heavily on the subjective interpretation of a receiver, whoever or whatever that may be. The paper offers many examples to make this point. One of them concerns the distinction between signal and noise. Consider an encryption algorithm that scrambles the target signal so that it appears to be noise to any receiver who lacks the key. “Without the key, there is no immediate way to tell whether a message is signal or noise.”
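As a toy illustration (my own sketch, not an example from the paper), a simple XOR stream cipher makes the point concrete: the very same bytes are meaningful signal to a receiver holding the key and statistically indistinguishable from noise to anyone else.

```python
import os

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """XOR each byte of data with a repeating key (a toy stream cipher)."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

message = b"the signal hidden in apparent noise"
key = os.urandom(16)  # known only to the intended receiver

ciphertext = xor_bytes(message, key)   # to anyone without the key: noise
recovered = xor_bytes(ciphertext, key) # to the intended receiver: signal

assert recovered == message
```

Whether `ciphertext` counts as information thus depends entirely on who is doing the receiving.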
This notion has consequences for how we analyze our data, for instance when turning EEG signals into ERP components. Averaging over trials to calculate ERPs is often justified as a means of removing noise and thereby increasing the signal-to-noise ratio. But averaging could also cancel out what might in fact be important signals. To distinguish between signal and noise, we need a model that correctly describes the complex interactions between sender, transmitter, and receiver. The authors accordingly argue that neuroscience should “find the ‘correct model of interaction’ for the case of the brain.”
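A minimal simulation (a sketch of my own, not an analysis from the paper) shows how trial averaging can erase a signal: an oscillation that is present on every single trial, but with a random phase across trials, nearly vanishes from the ERP.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_samples = 200, 500
t = np.linspace(0, 1, n_samples)

# Each trial contains a 10 Hz oscillation whose phase varies across trials,
# plus additive measurement noise.
phases = rng.uniform(0, 2 * np.pi, size=n_trials)
trials = np.sin(2 * np.pi * 10 * t[None, :] + phases[:, None])
trials += 0.5 * rng.standard_normal((n_trials, n_samples))

erp = trials.mean(axis=0)  # the conventional trial average

per_trial_power = (trials ** 2).mean()  # oscillation clearly present per trial
erp_power = (erp ** 2).mean()           # but almost cancelled in the average

print(per_trial_power, erp_power)
```

The averaged trace is nearly flat even though every trial carried the rhythm, so a "noise-reduction" step has silently discarded a potential signal.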
But Shannon’s theory has issues of its own. It mainly targets the communication of information; the more philosophical question of whether we as human beings constantly create information has no place in Shannon’s equations. Take perception, a highly creative process in which we, for example, construct discrete objects out of a myriad of edges processed in visual cortex. According to De-Wit et al., “it is philosophically questionable whether that object can be said to exist in any objectively definable way in the actual physics of the world, as a pre-existing signal that was ‘sent’ by the transmitter.” Perception might therefore be a process of creating differences that make a difference, differences that exist only because of what the brain does.
So is trying to open the black box within the black box a lost cause? And if not, what can be done? The issue the authors raise is at least not easily solvable, nor is it clear that it needs to be solved in every case. There are surely instances where the experimenter, rather than the brain itself, being the receiver is all we want and need: correctly interpreting a spot on an MRI image as a brain tumor in order to choose the right therapy does not depend on whether the brain has interpreted the tumor correctly. And for some research questions it might be entirely sufficient to know that the brain was able to distinguish between two types of input that we manipulated. But when the aim is to understand the mechanisms by which the brain actually performs such tasks, it is worth reminding ourselves what it is that we are actually measuring. Instead of simply recording which parts of the brain become active under which circumstances, we should find a better way to understand the neural code that the brain itself uses as information.
How can this be done? The firing rate alone might not be sufficient. When looking at oscillatory rhythms, for instance, more information might be carried in the phase of those rhythms than in their frequency or amplitude. The seminal work of Singer and colleagues, which emphasizes the role of neuronal synchrony as key to communication in the brain, seems to point in the right direction. “Big-data” approaches in fMRI, or recent advances in combining different techniques (such as MEG, DTI, and fMRI) to make use of whole-brain recordings, might provide a better way of figuring out how the brain actually communicates as a whole.
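To sketch what phase coding means in practice (again a toy example of my own, not an analysis from the paper), consider two conditions whose 10 Hz rhythms have identical amplitude and differ only in phase: an amplitude readout cannot tell them apart, while the phase at the 10 Hz frequency bin separates them cleanly.

```python
import numpy as np

rng = np.random.default_rng(1)
fs, n = 250, 250  # 1 second of data sampled at 250 Hz
t = np.arange(n) / fs

def trial(phase):
    """A 10 Hz rhythm with fixed amplitude; only the phase differs."""
    return np.sin(2 * np.pi * 10 * t + phase) + 0.3 * rng.standard_normal(n)

cond_a = np.array([trial(0.0) for _ in range(50)])
cond_b = np.array([trial(np.pi) for _ in range(50)])

# With n == fs, FFT bin 10 corresponds exactly to 10 Hz.
spec_a = np.fft.rfft(cond_a, axis=1)[:, 10]
spec_b = np.fft.rfft(cond_b, axis=1)[:, 10]

# Amplitude at 10 Hz is essentially identical across conditions...
print(np.abs(spec_a).mean(), np.abs(spec_b).mean())
# ...but the phase differs by roughly pi, making the conditions separable.
print(np.angle(spec_a).mean(), np.angle(spec_b).mean())
```

A receiver that only registers how strongly a region oscillates would miss a distinction that a phase-sensitive receiver, such as a synchronized downstream population, could read out.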
The authors simply hope that “this article will cause a shift in emphasis away from thinking about what we can decode from different neuroimaging techniques to thinking about whether those recordings of neural activity are differences that could be decoded by the rest of the brain.”
Article focused on in this post:
De-Wit, L., Alexander, D., Ekroll, V., & Wagemans, J. (2016). Is neuroimaging measuring information in the brain? Psychonomic Bulletin & Review.