person’s ear. Based on the sound’s intensity, the blind person could tell where light or dark areas were.


       The Elektroftalm translated a camera’s image into vibrations on the head (1969).

      Unfortunately, it was large and heavy and offered only a single pixel of resolution, so it gained no traction. But by 1960, his Polish colleagues picked up the ball and ran with it.15 Recognizing that hearing is critical for the blind, they instead turned to passing the information in via touch. They built a system of vibratory motors, mounted on a helmet, that “drew” the images on the head. Blind participants could move around in specially prepared rooms, painted to enhance the contrast of door frames and furniture edges. It worked. Alas, like the earlier inventions, the device was heavy and would get hot during use, so the world had to wait. But the proof of principle was there.

      Why did these strange approaches work? Because inputs to the brain—photons at the eyes, air compression waves at the ears, pressure on the skin—are all converted into the common currency of electrical signals. As long as the incoming spikes carry information that represents something important about the outside world, the brain will learn how to interpret it. The vast neural forests in the brain don’t care about the route by which the spikes entered. Bach-y-Rita described it this way in a 2003 interview on PBS:

      If I’m looking at you, the image of you doesn’t get beyond my retina. . . . From there to the brain to the rest of the brain, it’s pulses. Pulses along nerves. Those pulses aren’t any different from the pulses along the big toe. It’s [the] information that [they carry], and the frequency and the pattern of pulses. If you could train the brain to extract that kind of information, then you don’t need the eye to see.

      In other words, the skin is a path to feeding data into a brain that no longer possesses functioning eyes. But how could that work?

      When you look at the cortex, it looks approximately the same everywhere as you traverse its hills and valleys. But when we image the brain or dip tiny electrodes into its jellylike mass, we find that different types of information are lurking in different regions. These differences have allowed neuroscientists to label areas: this region is for vision, this one for hearing, this one for touch from your left big toe, and so on. But what if areas come to be what they are only because of their inputs? What if the “visual” cortex is only visual because of the data it receives? What if specialization develops from the details of the incoming data cables rather than by genetic prespecification of modules? In this framework, the cortex is an all-purpose data-processing engine. Feed data in and it will crunch through and extract statistical regularities.16 In other words, it’s willing to accept whatever input is plugged into it and to perform the same basic algorithms on it. In this view, no part of the cortex is prespecified to be visual, auditory, and so on. So whether an organism wants to detect air compression waves or photons, all it has to do is plug the fiber bundle of incoming signals into the cortex, and the six-layered machinery will run a very general algorithm to extract the right kind of information. The data make the area.
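      To make the “all-purpose data-processing engine” idea concrete, here is a toy sketch in Python: one generic routine is applied, unchanged, to a simulated “visual” stream and a simulated “auditory” stream. Principal-component extraction stands in for whatever statistics the cortex actually computes (the text names no specific algorithm), and the stream sizes and names are illustrative only.

```python
import numpy as np

def cortex_patch(spikes, n_features=4):
    """One generic routine for any input stream: find the dominant
    statistical regularities (here, principal components) of the
    incoming channels, with no knowledge of whether they arrived
    from eyes, ears, or skin."""
    x = spikes - spikes.mean(axis=0)   # center each channel
    cov = x.T @ x / len(x)             # covariance across channels
    _, eigvecs = np.linalg.eigh(cov)   # eigenvectors, ascending order
    return eigvecs[:, -n_features:]    # the strongest regularities

# The same machinery runs regardless of which "cable" is plugged in.
rng = np.random.default_rng(0)
visual_stream = rng.normal(size=(1000, 64))    # stand-in retinal channels
auditory_stream = rng.normal(size=(1000, 64))  # stand-in cochlear channels
visual_features = cortex_patch(visual_stream)
auditory_features = cortex_patch(auditory_stream)
```

      Nothing in cortex_patch is vision-specific or audition-specific; what its extracted features come to mean is determined entirely by the data fed in.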

      And this is why the neocortex looks about the same everywhere: because it is the same. Any patch of cortex is pluripotent—meaning that it has the possibility to develop into a variety of fates, depending on what’s plugged into it.

      So if there’s an area of the brain devoted to hearing, it’s only because peripheral devices (in this case, the ears) send information along cables that plug into the cortex at that spot. It’s not the auditory cortex by necessity; it’s the auditory cortex only because signals passed along by the ears have shaped its destiny. In an alternate universe, imagine that nerve fibers carrying visual information plugged into that area; then we would label it in our textbooks as the visual cortex. In other words, the cortex performs standard operations on whatever input it happens to get. This gives a first impression that the brain has prespecified sensory areas, but it really only looks that way because of the inputs.17

      Consider where the fish markets are in the middle of the United States: the towns in which pescatarianism thrives, in which sushi restaurants are overrepresented, in which new seafood recipes are developed—let’s call these towns the primary fishual areas.

      Why does the map have a particular configuration, and not something different? It looks that way because that’s where the rivers flow, and therefore where the fish are. Think of the fish as bits of data, flowing along the data cables of the rivers, and the restaurant distribution crafts itself accordingly. No legislative body prescribed that the fish markets should move there. They clustered there naturally.

      All this leads to the hypothesis that there’s nothing special about a chunk of tissue in, say, the auditory cortex. So could you cut out a bit of auditory cortex in an embryo and transplant it into the visual cortex, and would it function just fine? Indeed, this is precisely what was demonstrated in animal experiments beginning in the early 1990s: in short order, the transplanted tissue came to look and behave just like the rest of the visual cortex.18

      And then the demonstration was taken a step further. In 2000, scientists at MIT redirected inputs from a ferret’s eye to the auditory cortex, so that the auditory cortex now received visual data. What happened? The auditory cortex adjusted its circuitry to resemble the connections of the primary visual cortex.19 The rewired animals interpreted inputs to the auditory cortex as normal vision. This tells us that the pattern of inputs determines the fate of the cortex. The brain dynamically wires itself to best represent (and eventually act upon) whatever data come swimming in.20


       Visual fibers in the ferret brain were rerouted to the auditory cortex—which then began to process visual information.

      Hundreds of studies on transplanting tissue or rewiring inputs support the model that the brain is a general-purpose computing device—a machine that performs standard operations on the data streaming in—whether those data carry a glimpse of a hopping rabbit, the sound of a phone ring, the taste of peanut butter, the smell of salami, or the touch of silk on the cheek. The brain analyzes the input and puts it into context (what can I do with this?), regardless of where it comes from. And that’s why data can become useful to a blind person even when they’re fed into the back, or ear, or forehead.


      In the 1990s, Bach-y-Rita and his colleagues sought ways to go smaller than the dental chair. They developed a small device called the BrainPort.21 A camera is attached to the forehead of a blind person, and a small grid of electrodes is placed on the tongue. The “Tongue Display Unit” uses a grid of stimulators spread over three square centimeters. The electrodes deliver small shocks that correlate with the position of pixels, feeling something like the children’s candy Pop Rocks in the mouth. Bright pixels are encoded by strong stimulation at the corresponding points on the tongue, gray by medium stimulation, and darkness by no stimulation. The BrainPort gives users the capacity to distinguish visual items with an acuity equating to about 20/800 vision.22 Users report that at first they perceive the tongue stimulation as unidentifiable edges and shapes, but they eventually learn to recognize the stimulation at a deeper level, allowing them to discern qualities such as distance, shape, direction of movement, and size.23
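      As a rough illustration of this encoding, here is a minimal Python sketch that maps a grayscale camera frame onto a small electrode grid: bright pixels become strong stimulation, gray pixels medium, and dark pixels none. The 20-by-20 grid, the intensity thresholds, and the function name are assumptions for the sketch; the text specifies only a grid of stimulators over three square centimeters.

```python
import numpy as np

def frame_to_tongue_grid(frame, grid_shape=(20, 20)):
    """Map a grayscale frame (pixel values 0-255) onto electrode
    stimulation strengths: dark -> 0.0, gray -> 0.5, bright -> 1.0."""
    h, w = frame.shape
    gh, gw = grid_shape
    # Downsample: average each block of pixels onto one electrode.
    blocks = frame[: h - h % gh, : w - w % gw].astype(float)
    blocks = blocks.reshape(gh, h // gh, gw, w // gw).mean(axis=(1, 3))
    # Quantize to the three stimulation levels described above.
    return np.where(blocks < 85, 0.0, np.where(blocks < 170, 0.5, 1.0))

# Example: one simulated 240x320 frame from the forehead camera.
frame = np.random.default_rng(0).integers(0, 256, size=(240, 320))
stimulation = frame_to_tongue_grid(frame)  # 20x20 grid of strengths
```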


       Seeing with the tongue.

      We normally think of the tongue as a taste organ, but it is loaded with touch receptors (that’s how you feel the texture of food), making it an excellent brain-machine interface.24 As with the other visual-tactile devices, the tongue grid reminds us that vision arises not in the eyes but in the brain. When brain imaging is performed on trained subjects (blind or sighted), the motion of electrotactile shocks across the tongue activates an area of

