the world; then the telegraph shrank it even more dramatically. With every innovation, cultural prophets bickered over whether we were facing a technological apocalypse or a utopia. Depending on which Victorian-age pundit you asked, the telegraph was either going to usher in an era of world peace (“It is impossible that old prejudices and hostilities should longer exist,”21 as Charles F. Briggs and Augustus Maverick intoned) or drown us in a Sargasso of idiotic trivia (“We are eager to tunnel under the Atlantic22 … but perchance the first news that will leak through into the broad, flapping American ear will be that the Princess Adelaide has the whooping cough,” as Thoreau opined). Neither prediction was quite right, of course, yet neither was quite wrong. The one thing that both apocalyptics and utopians understand and agree upon is that every new technology pushes us toward new forms of behavior while nudging us away from older, familiar ones. Harold Innis—the lesser-known but arguably more interesting intellectual midwife of Marshall McLuhan—called this the bias of a new tool.23 Living with new technologies means understanding how they bias everyday life.
What are the central biases of today’s digital tools? There are many, but I see three big ones that have a huge impact on our cognition. First, they allow for prodigious external memory: smartphones, hard drives, cameras, and sensors routinely record more information than any tool before them. We’re shifting from a stance of rarely recording our ideas and the events of our lives to doing it habitually. Second, today’s tools make it easier for us to find connections—between ideas, pictures, people, bits of news—that were previously invisible. Third, they encourage a superfluity of communication and publishing. This last feature has many surprising effects that are often ill understood. Any economist can tell you that when you suddenly increase the availability of a resource, people do more things with it, which also means they do increasingly unpredictable things. As electricity became cheap and ubiquitous in the West, its role expanded from things you’d expect—like nighttime lighting—to the unexpected and seemingly trivial: battery-driven toy trains, electric blenders, vibrators. The superfluity of communication today has produced everything from a rise in crowd-organized projects like Wikipedia to curious new forms of expression: television-show recaps, map-based storytelling, discussion threads that spin out of a photo posted to a smartphone app, Amazon product-review threads wittily hijacked for political satire. Now, none of these three digital biases is immutable, because they’re the product of software and hardware, and can easily be altered or ended if the architects of today’s tools (often corporate and governmental) decide to regulate the tools or find they’re not profitable enough. But right now, these big effects dominate our current and near-term landscape.
In one sense, these three shifts—infinite memory, dot connecting, explosive publishing—are screamingly obvious to anyone who’s ever used a computer. Yet they also somehow constantly surprise us by producing ever-new “tools for thought”24 (to use the writer Howard Rheingold’s lovely phrase) that upend our mental habits in ways we never expected and often don’t apprehend even as they take hold. Indeed, these phenomena have already woven themselves so deeply into the lives of people around the globe that it’s difficult to stand back and take account of how much things have changed and why. While this book maps out what I call the future of thought, it’s also frankly rooted in the present, because many parts of our future have already arrived, even if they are only dimly understood. As the sci-fi author William Gibson famously quipped: “The future is already here25—it’s just not very evenly distributed.” This is an attempt to understand what’s happening to us right now, the better to see where our augmented thought is headed. Rather than dwell in abstractions, like so many marketers and pundits—not to mention the creators of technology, who are often remarkably poor at predicting how people will use their tools—I focus more on the actual experiences of real people.
To provide a concrete example of what I’m talking about, let’s take a look at something simple and immediate: my activities while writing the pages you’ve just read.
As I was working, I often realized I couldn’t quite remember a detail and discovered that my notes were incomplete. So I’d zip over to a search engine. (Which chess piece did Deep Blue sacrifice when it beat Kasparov? The knight!) I also pushed some of my thinking out into the open: I blogged admiringly about the Spanish chess-playing robot from 1915, and within minutes commenters offered smart critiques. (One pointed out that the chess robot wasn’t that impressive because it was playing an endgame that was almost impossible to lose: the robot started with a rook and a king, while the human opponent had only a king.) While reading Kasparov’s book How Life Imitates Chess on my Kindle, I idly clicked on “popular highlights” to see what passages other readers had found interesting—and wound up becoming fascinated by a section on chess strategy I’d only lightly skimmed myself. To understand centaur play better, I read long, nuanced threads on chess-player discussion groups, effectively eavesdropping on conversations of people who know chess far better than I ever will. (Chess players who follow the new form of play seem divided—some think advanced chess is a grim sign of machines’ taking over the game, and others think it shows that the human mind is much more valuable than computer software.) I got into a long instant-messaging session with my wife, during which I realized that I’d explained the gist of advanced chess better than I had in my original draft, so I cut and pasted that explanation into my notes. As for the act of writing itself? Like most writers, I constantly have to fight the procrastinator’s urge to meander online, idly checking Twitter links and Wikipedia entries in a dreamy but pointless haze—until I look up in horror and realize I’ve lost two hours of work, a missing-time experience redolent of a UFO abduction. So I’d switch my word processor into full-screen mode, fading my computer desktop to black so I could see nothing but the page, giving me temporary mental peace.
In this book I explore each of these trends. First off, there’s the emergence of omnipresent computer storage, which is upending the way we remember, both as individuals and as a culture. Then there’s the advent of “public thinking”: the ability to broadcast our ideas and the catalytic effect that has both inside and outside our minds. We’re becoming more conversational thinkers—a shift that has been rocky, not least because everyday public thought uncorks the incivility and prejudices that are commonly repressed in face-to-face life. But at its best (which, I’d argue, is surprisingly often), it’s a thrilling development, reigniting ancient traditions of dialogue and debate. At the same time, there’s been an explosion of new forms of expression that were previously too expensive for everyday thought—like video, mapping, or data crunching. Our social awareness is shifting, too, as we develop ESP-like “ambient awareness,” a persistent sense of what others are doing and thinking. On a social level, this expands our ability to understand the people we care about. On a civic level, it helps dispel traditional political problems like “pluralistic ignorance,” catalyzing political action, as in the Arab Spring.
Are these changes good or bad for us? If you asked me twenty years ago, when I first started writing about technology, I’d have said “bad.” In the early 1990s, I believed that as people migrated online, society’s worst urges might be uncorked: pseudonymity would poison online conversation, gossip and trivia would dominate, and cultural standards would collapse. Certainly some of those predictions have come true, as anyone who’s wandered into an angry political forum knows. But the truth is, while I predicted the bad stuff, I didn’t foresee the good stuff. And what a torrent we have: Wikipedia, a global forest of eloquent bloggers, citizen journalism, political fact-checking—or even the way status-update tools like Twitter have produced a renaissance in witty, aphoristic, haiku-esque expression. If this book accentuates the positive, that’s in part because we’ve been so flooded with apocalyptic warnings of late. We need a new way to talk clearly about the rewards and pleasures of our digital experiences—one that’s rooted in our lived experience and also detangled from the hype of Silicon Valley.
The other thing that makes me optimistic about our cognitive future is how much it resembles our cognitive past. In the sixteenth century, humanity faced27 a printed-paper wave of information overload—with the explosion of books that began with the codex and went into overdrive with Gutenberg’s movable type. As the historian Ann Blair notes, scholars were alarmed: How would they be able to keep on top of the flood of human expression? Who would separate