The bad news: many of the positive cognitive gains from video gaming are non-transferable, and gaming may increase both aggression and the symptoms of ADHD. The good news: it doesn’t sound like that should be a problem for people without existing aggressive tendencies or ADHD.
My take-away: if you like playing games, keep playing them. If you don’t, there’s probably not much benefit in starting.
Media Magazine is running an interview with David “Pesco” Pescovitz on the subject of the future of attention:
What do you think about the ability to process more concurrent streams? Do you think we’re adapting our brains to be able to process more at the same time?
I don’t think our brains are necessarily changing. But I think we do develop new skills. It started with wanting more information, and being forced to deal with it and make sense of this onslaught that has led to a habit, basically, where we want more and more of it. Or, we think we want more and more of it. I actually think that, as we spend more time in these sort of fast-paced, virtually mediated experiences, there’s going to be this quest for authentic, visceral, focused, immersive and, in many ways, singular experiences. I don’t think sitting down and reading a book or watching a two-and-a-half-hour art film are going away any time soon. I actually think that we’re going to see a renewed appreciation for those kinds of experiences, as they become more rarefied.
Are we becoming addicted to information supply?
I don’t know. I mean, I don’t know what addiction really means. That’s within the realm of psychology and medicine. I can certainly say that I feel a sense of twitchiness when I don’t have access to my email during long meetings. And I don’t think that is necessarily a good thing. So, I guess you could probably argue that that’s a form of addiction in some way. Then again, maybe it’s also what was once an addiction. I mean, I think things change. As technology changes, the mores surrounding that technology change. Usages change. And it adds up to the way the world turns.
Consider a recent study by neuroscientists at Harvard and the University of Toronto that documents the benefits of all these extra thoughts. (It was replicated here.) The researchers began by giving a sensory test to a hundred undergraduates at Harvard. The tests were designed to measure their level of latent inhibition, which is the capacity to ignore stimuli that seem irrelevant. Are you able to not think about the air-conditioner humming in the background? What about the roar of the airplane overhead? When you’re at a cocktail party, can you tune out the conversations of other people? If so, you’re practicing latent inhibition. While this skill is typically seen as an essential component of attention – it keeps us from getting distracted by extraneous perceptions – it turns out that people with low latent inhibition have a much richer mixture of thoughts in working memory. This shouldn’t be too surprising: Because they struggle to filter the world, they end up letting everything in. As a result, their consciousness is flooded with seemingly unrelated thoughts. Here’s where the data gets interesting: Those students who were classified as “eminent creative achievers” – the rankings were based on their performance on various tests, as well as their real-world accomplishments – were seven times more likely to “suffer” from low latent inhibition. This makes some sense: The association between creativity and open-mindedness has long been recognized, and what’s more open-minded than distractibility? People with low latent inhibition are literally unable to close their minds, to keep the spotlight of attention from drifting off to the far corners of the stage. The end result is that they can’t help but consider the unexpected.
If you haven’t heard, information technology iconoclast Nicholas Carr has a new book coming out called The Shallows. The basic case he makes is this: the Internet is altering our brains and making our thinking wider but more shallow.
Carr makes a compelling case, and it’s time for web professionals to start thinking about how we can fix the problem.
The WSJ is also running Clay Shirky’s response to Carr – or actually, they may have just asked him whether the Internet was making us stupid, because Shirky’s piece doesn’t seem to specifically address Carr’s arguments and it doesn’t mention Carr at all.
I haven’t read Carr’s book yet, so I’m having to go on reviews and Carr’s Wired and WSJ pieces. But I haven’t yet seen any critic of Carr’s make a substantial argument that Carr is wrong about what’s happening to us. Lehrer compares Carr’s concerns about the Internet to Socrates’s concerns about writing. But Socrates didn’t have the sort of evidence Carr does. Nor was Socrates making quite the same sort of argument Carr is.
Weirdly, Lehrer points to two studies that show that video games may improve certain cognitive functions, such as sustained attention. Carr mentions these studies himself in his WSJ article. But the web is not a video game. I spend tens of hours per week on the web. I rarely play video games (maybe I should start). And the effects of the web, and of multitasking, are what Carr is talking about.
Shirky is right to point out that the public at large never did read much, or spend much time on the sort of intellectual endeavors Carr is concerned with. They spent most of their time watching TV. YouTube comments aren’t evidence that people are becoming more stupid; YouTube just provides stupid people a platform they’d never been afforded before. And the Internet gives many people something more to do with their “cognitive surplus” than watch TV. Not that public libraries weren’t there before, and not that self-publishing and zine-making weren’t around before, but the Internet makes a lot of tools and information more accessible and appealing.
But what about the minority of us who do want to read longer works of text and think deeply about them? Are better understanding, deeper thinking, and sustained attention worthy pursuits? I think so, and Carr makes a compelling argument, backed up by scientific research, that our abilities to do these things are being diminished.
One argument I’ve seen made at different times, starting with Douglas Rushkoff’s Playing the Future, is that our shorter attention spans and tendency to multitask are actually cognitive evolutions – improvements in our ability to scan information. But we’re not getting better at multitasking – we’re actually getting worse. Carr cites a study showing that frequent multitaskers were actually worse at multitasking than infrequent multitaskers. I’m reminded of a study that made the blog rounds in 2005, which found that multitasking harmed people’s job performance more than marijuana did.
There are some questions we can ask and problems we can start working on right now:
What strategies, short of complete disengagement from the Internet (which I don’t think Carr advocates), can we adopt to preserve our attention spans? Periodic disengagement? Deliberate, daily monotasking? Zazen? More disciplined web surfing strategies? People in the “lifehacking” community have been working on things like this for years.
What can those of us involved in creating the web – as writers, designers, developers, publishers, etc. – do to improve the experience of reading online? Can we make sites and write content that actually help people focus?
For example, Carr suggested putting links at the bottom of articles instead of inline. I think that’s an overly simplistic solution, but I think we can be more strategic, more mindful of how we integrate links into our texts (for this article, I put background info at the top, added occasional links where necessary, and will include a few things at the end). I don’t know yet what the best solution will be, but I do believe that we ignore the research Carr cites at our own peril.
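Carr’s bottom-of-article suggestion is easy enough to prototype. Here’s a minimal sketch – purely illustrative, and not how any actual plugin works – that rewrites Markdown-style inline links into numbered references at the end of a document:

```python
import re

def carrify(markdown: str) -> str:
    """Replace inline Markdown links [text](url) with numbered
    markers, and append the URLs as a list at the end."""
    urls = []

    def replace_link(match):
        text, url = match.group(1), match.group(2)
        urls.append(url)
        return f"{text} [{len(urls)}]"

    body = re.sub(r"\[([^\]]+)\]\(([^)]+)\)", replace_link, markdown)
    if not urls:  # nothing to move; leave the text untouched
        return markdown
    refs = "\n".join(f"[{i}]: {u}" for i, u in enumerate(urls, 1))
    return f"{body}\n\nLinks:\n{refs}\n"
```

Whether this actually helps focus is an open question – the reader still sees a numbered marker, just not a clickable temptation mid-sentence.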
Readify: a browser plugin that can “Carr-ify” web pages, among other things.