The same is true here as well: you need to know where the book is located, and you need to be able to process the characters in the book.
It's different in the brain, however, because there are neither characters (data) nor a fixed location where the data is held. If I were to say, "Think about your grandmother!" you wouldn't find some kind of "grandmother neuron" suddenly lighting up in your brain (as brain researchers used to believe). Instead, your neuronal network would assume a very particular state. And it is precisely in this state, in the way in which the nerve cells activate each other, that the information is located.

This may sound somewhat abstract, so let us simplify it by comparing the brain to a very, very large orchestra. Individual members of an orchestra can change their own activity, playing louder or softer, at higher or lower pitches. If you watch a silent orchestra with inactive musicians, it's impossible to know what compositions they have in their repertoire. In the same way, it's impossible to know what a brain is able to think simply by looking at its neural network from the outside. In an orchestra, the music is produced when the musicians play together and in sync. The music is not located somewhere within the orchestra; it lies in the activity of the individual musicians. If you listen only to a single viola, you can gain some insight into one musician, but you won't have any idea of what the complete piece sounds like. To know that, you also need to know how the other musicians are playing at the same time. And even this would not be enough, because then you would only know how the orchestra sounds at one given moment, whereas the music emerges only over the course of time. The information (in this case, the melody of the musical work) is located between the various musicians.
Like orchestra musicians, neurons also tune themselves to one another. Just as an orchestra produces a piece of music when the musicians interact, neurons produce the informational content of a thought. A thought isn't stuck somewhere in the network of a brain. Instead, it is located in the manner in which the network interacts, or plays together. For this to go off without a hitch, the neurons are connected to each other via shared points of contact (synapses), which is the only way the individual nerve cells can figure out what all the others are up to. In an orchestra, every musician listens to what the others are playing to ensure that they keep in sync and in tune with each other. In the cerebrum, each neuron is connected to several thousand other nerve cells, which means that the network can produce far more complex states of activity than an orchestra. And it is precisely in these states of activity that the content of the brain's information is located. In an orchestra, this is the music; in the brain, it is a thought.
This method of processing information has a couple of crucial benefits. Just as the same orchestra is able to play completely different pieces of music by synchronizing the playing of the individual musicians in a new way, the exact same neural network is able to produce totally different thoughts merely by a shift in activation. In addition, a piece of information (whether a melody played by an orchestra or an image in one's head) is not only coded in a concrete state of activity but also in the change of that state. The mood of a piece of music is shaped by whether the musicians play softer or louder; in the same way, the information in the neural network depends not only on how the neurons are currently active but also on how they shift their activity.
This brings us to the realization that the number of possible patterns of activity is vast. The question of how many thoughts it’s possible to think is thus as useful as the question of how many songs it’s possible for an orchestra to play.
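To make the idea of a distributed pattern a little more concrete, here is a minimal toy sketch in Python. It is not from the book, and all names and numbers in it are invented for illustration: the same small "network" of simulated neurons, with fixed wiring, carries two different "thoughts" purely as two different patterns of activity across all units, with no single "grandmother neuron" holding either one.

```python
import numpy as np

# A toy "network" of 8 simulated neurons with fixed connection strengths.
# Nothing about the wiring changes between "thoughts"; only the activity does.
rng = np.random.default_rng(0)
weights = rng.normal(size=(8, 8))            # illustrative synaptic strengths

def activity_pattern(stimulus):
    """The network's response: every neuron integrates input from all the others."""
    return np.tanh(weights @ np.asarray(stimulus, dtype=float))

# Two different "thoughts" carried by the very same network:
grandmother = activity_pattern([1, 0, 1, 0, 0, 1, 0, 1])
melody      = activity_pattern([0, 1, 0, 1, 1, 0, 1, 0])

print("'grandmother':", np.round(grandmother, 2))
print("'melody':     ", np.round(melody, 2))
# No single unit is "the grandmother neuron": the information lies in how all
# eight units are active together, and the same wiring can host an enormous
# number of such patterns.
```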
There is something else to notice here. In a computer, the information is stored at a location. When you switch the machine off, the information is still there (saved in the form of electrical charges), and all you have to do is turn the computer back on to retrieve it. But if you switch off a brain, the party is over. End of story. That's because the information in a brain is not held at any particular physical location but is rather an ever-changing state of the network. During a person's lifetime, a thought or a piece of informational content always proceeds from an earlier one, as though every state of thought becomes the start signal for the next. A thought is never derived from nothing.
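The contrast can be sketched in the same toy terms. The following illustration is again invented for this text, not taken from the book: it sets a value that can be read back from a fixed address against a network state that only ever arises from the previous state and is simply gone once the activity stops.

```python
import numpy as np

# Computer-style storage: the content sits at a fixed location and survives a
# restart -- retrieval simply means reading that location again.
hard_drive = {"sector_42": "grandmother"}
print(hard_drive["sector_42"])               # -> grandmother

# Brain-style information: a running state of the whole network. Each new state
# is computed from the previous one; no thought ever appears out of nothing.
rng = np.random.default_rng(1)
weights = 0.5 * rng.normal(size=(6, 6))      # fixed wiring, as before
state = rng.normal(size=6)                   # whatever the network was doing already

for step in range(10):
    state = np.tanh(weights @ state)         # the next "thought" grows out of the last

# If the loop stops -- the network is "switched off" -- the state is gone.
# There is no address from which it could later be read back out.
```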
The learning in between
AS USEFUL AS the orchestra metaphor is, I don’t want to conceal the fact that there is one enormous difference when it comes to the brain. And the difference is this: unlike an orchestra, the brain does not employ a conductor (and the neurons also don’t have predefined sheet music to play). There’s no one standing on a podium in front of the neurons to direct them on how they should interact with their neighbors. And yet they still manage, with utmost precision, to synchronize themselves in their activities and to create new patterns.
This has consequences for the manner in which a neural network learns. While an orchestra conductor provides the tempo to keep the musicians in sync, the neurons have to find another method. And as it turns out, information in the brain is produced much like the melody of an orchestra: through the ability of the individual neurons to play together.
When an orchestra learns a new piece, the musicians must accomplish two things. First, they have to improve their own playing skills (for example, learn a new fingering). Second, and more importantly, they have to know exactly when and what to play. They can only be certain of this by watching the conductor and listening to how the others around them are playing. When an orchestra practices a new piece, the musicians are in effect practicing their ability to play together. In the end, the piece of music has been "saved" in the orchestra's newly acquired skill of playing it together. To retrieve it, the musicians' specific interplay must first be set in motion, and out of that the piece of music emerges. Likewise, a piece of information in the brain is encoded in the interaction between the neurons, and when the neurons "practice," they adjust their tuning to one another, making that interaction easier to trigger the next time. For a neural network to learn, the neurons must adjust their points of contact and thereby redesign the entire architecture of the network.
Because the brain does not have a conductor, the nerve cells must rely on tuning themselves to their neighboring cells. What happens next at the cellular level is well known. Simply put, the adjustments to the neural contact points that happen during learning follow a basic principle: contact points that are frequently used grow stronger, while those seldom put to use dwindle away. Thus, when an important bit of information pops up in the brain (that is, when the neurons interact in a very characteristic way), the neurons somehow have to "make a note" of it. They do this by adjusting their contact points with one another so that the information (the state of activity) will be easier to retrieve in the future. If certain synapses are activated especially strongly, the cells restructure themselves so that those synapses will be easier to activate later on. Conversely, synapses that go unused receive no structural reinforcement and are dismantled over time. This saves energy, allowing a thinking brain to function on twenty watts of power. (As a comparison, an oven requires a hundred times as much energy to produce nothing but a couple of bread rolls. Ovens are apparently not all that clever.)
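The "use it or lose it" principle described above can also be sketched as a tiny simulation. This is a deliberately simplified illustration, not the brain's actual biochemistry; the update rule and every number in it are invented for this sketch. Contact points between cells that fire together are strengthened, and all other contact points slowly dwindle.

```python
import numpy as np

n_neurons = 6
weights = np.full((n_neurons, n_neurons), 0.1)   # all contact points start out weak
np.fill_diagonal(weights, 0.0)                   # no self-connections

def rehearse(weights, active_units, strengthen=0.05, decay=0.05):
    """One round of 'practice': contact points between co-active cells grow
    stronger, while every other contact point dwindles a little."""
    n = weights.shape[0]
    activity = np.zeros(n)
    activity[list(active_units)] = 1.0
    co_active = np.outer(activity, activity)      # 1.0 where two cells fire together
    np.fill_diagonal(co_active, 0.0)
    weights = weights + strengthen * co_active                # frequently used -> stronger
    weights = weights - decay * (1.0 - co_active) * weights   # seldom used -> weaker
    return np.clip(weights, 0.0, 1.0)

# "Thinking" the same pattern over and over: cells 0, 2 and 4 fire together.
for _ in range(50):
    weights = rehearse(weights, active_units=[0, 2, 4])

print(np.round(weights, 2))
# The contacts among cells 0, 2 and 4 are now strong, so this pattern is easy
# to trigger again; the unused contacts have faded toward zero.
```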
This is how the system learns: by altering its structure so that a given state of interaction can be triggered more readily. In this way, the piece of information really is saved in the neural network, namely "between" the nerve cells, within their architecture and connection points. But this is only half of the story. For the piece of information to be retrieved, the nerve cells must first be reactivated. The stronger the points of contact are, the easier this is, even though the information cannot be read off from these contacts alone. If you cut open a brain, you will see how the cells are connected but not how they work. You won't have any idea what has been "saved" in the brain, nor what kind of dynamic interaction it could potentially produce.
Under stress, learning is best—and worst
THIS NEURAL SYSTEM of information processing is extremely efficient. It is much more flexible than a static computer system, requires no supervision (such as a conductor) and, in