The Cloud Mind

Bear with me as we conduct a simple experiment: I will ask a few short questions, and we’ll see how many of them you can answer without looking them up.

  1. What is the population of Cairo?
  2. How tall is Arnold Schwarzenegger?
  3. What is Avogadro’s Number?
  4. What is Magic Johnson’s real name?

These questions all have very clear answers, yet they are also so trivial that it is highly unlikely that anyone would know more than one of them. However, I happen to know them all (9.5 million, 6 feet 2 inches, 6.02 × 10²³, Earvin). The reason I know these things is not because I am a Jeopardy! champion with a photographic memory but simply because all this information has been made readily available for me to access with minimal effort. What these questions have in common is that they each appear in the list of questions I have asked Google Assistant in the last week.

We are living in a time rightly called the Information Age, when almost anything a person wants to know can be delivered at a moment’s notice by the soothing voice of a robot more than content to answer all our dumb questions. If you find yourself wanting to make a pineapple upside-down cake, no worries—you don’t need to know how because YouTube will show you. If you want to know how to sail a boat or how to fix that rattle in your car engine, all you have to do is ask. At no other time in history have we had more information at our fingertips. But as we become more accustomed to this convenience, we also fundamentally change the way we interact with the world around us. This constant stream of information is changing what we know, who we are, and how we think about the world and our existence in it.

Design theorists like Bill Ferster and Donald Norman point out a difference between “knowledge in the head” and “knowledge in the world” (Norman 75). As we think, we pull information from two separate databases: the knowledge we hold in our brains and the knowledge we find in the world around us (Ferster 31). As a simple example, imagine that your computer’s keyboard has no letters or numbers signifying which key is which—just dozens of blank keys. Though you may have knowledge in your head as to where all the keys are supposed to be, without that reference in the world how long do you think it would take you to make a typo? As we extend our “knowledge in the world” to the incomprehensibly vast reaches of the internet, we create an imbalance that makes the storage capacity of our brains seem wildly inadequate compared to the data servers of YouTube and Wikipedia. As a result, we have learned to navigate these databases as extensions of our own knowledge; we don’t need to store information in our heads that can be kept in the cloud. But are we backing up our knowledge to the cloud, or are we moving it there?

Media theorist Marshall McLuhan argued that the media we use are not merely conduits for the transmission of information; they also change our thought processes (McLuhan 8). The internet is a fantastic resource, but if we no longer need to travel to the library, search for a book, and read entire chapters from multiple sources to find the information we are looking for—if we can simply click on a few sites and have that information fed to us instantaneously—then that information is cheapened, and we lose the opportunity for accidental discovery along the way.

In his aptly titled article “Is Google Making Us Stupid?” writer Nicholas Carr questions the long-term usefulness of relying on quick search results to answer our questions. He writes that our newfound ability to rapidly absorb and discard information is affecting the way we think:

Thanks to the ubiquity of text on the Internet, not to mention the popularity of text-messaging on cell phones, we may well be reading more today than we did in the 1970s or 1980s, when television was our medium of choice. But it’s a different kind of reading, and behind it lies a different kind of thinking—perhaps even a new sense of the self… Our ability to interpret text, to make the rich mental connections that form when we read deeply and without distraction, remains largely disengaged (Carr).

Because of this, our attention spans seem to be shrinking. We rarely have the patience for deep research and—even if we do—we may be hard-pressed to find an audience for it.

On the other hand, proponents like Bill Ferster argue that this sort of “knowledge in the world” is a kind of distributed cognition: worldly references which serve as “scaffolds for internal cognition” that “reduce the cognitive load requirements and increase the combined ability to understand complex information.” Ferster argues that when this knowledge and expertise is shared (or distributed) among us, it “provides an environment in which individual minds are supported and carried further by the vast information provided,” making our collective knowledge much more advanced (Ferster 31).

We are moving into a rather complicated offshoot of the Information Age: an era of distributed cognition in which we all have our heads in the cloud. Our sharing of information and cultural knowledge has led to a great new liberation—a second enlightenment that is working to make us equal parts of a holistic, global village. As McLuhan suggests, “the aspiration of our time for wholeness, empathy and depth of awareness is a natural adjunct of electric technology” (McLuhan 5). Wikipedia, in this respect, is perhaps the single most important invention in human history, for when we have access to each other’s knowledge, the task of understanding one another becomes immeasurably easier. But at what cost? Our reference material has become so comprehensive and so close at hand that we are actively choosing to forget or ignore what we have learned. Information the scholars of antiquity fought and often died for is now reduced to “TL;DR.”

This poses a very difficult conundrum: How do we maintain a highly accessible shared knowledge base without taking for granted the ease with which that knowledge was attained? Unfortunately, I doubt there is a simple or comprehensive answer. (As James Watson, one of the biologists who discovered the structure of DNA, once said: “There are only molecules. Everything else is Sociology.”)

The important distinction to make is the difference between what is already known and what has yet to be discovered. Simply put: Old knowledge is easy to attain—new knowledge is difficult. Running a web search for Avogadro’s Number is incredibly easy, even though the conceptualization and measurement of the number of molecules per mole was incredibly difficult—a pursuit spanning more than a century of collaborative scientific effort. What we must avoid at all costs is this trivialization of knowledge. Knowing the population of Cairo is all well and good, but it is most likely useless until that knowledge is put to work in some way. Knowing the number of molecules per mole is useless unless it opens the doors to new applications and new learning.
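To make “putting the number to work” concrete, here is one back-of-the-envelope use of Avogadro’s Number (the glass size and the rounding are my own illustrative choices, not figures from any source): a 250 mL glass of water holds roughly 250 g, and water’s molar mass is about 18 g/mol, so the glass contains

$$N \approx \frac{250\ \text{g}}{18\ \text{g/mol}} \times 6.02 \times 10^{23}\ \text{mol}^{-1} \approx 8.4 \times 10^{24}\ \text{molecules}.$$

That small step—turning a memorized constant into a statement about the glass on your desk—is the difference between holding information and using it.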

Information with no use is cumbersome, and the constant stream of information to which we are subjected every day is exactly that. Our world is now made up of software, and we do an increasing share of our work in a digital realm. As media theorist Lev Manovich puts it:

“Software has become our interface to the world, to others, to our memory and our imagination—a universal language through which the world speaks, and a universal engine on which the world runs” (Manovich 2).

If, as McLuhan suggests, any medium is an extension of ourselves and broadens the scale of our existence (McLuhan 7), then the internet and the software it enables are not merely extensions but also a binding agent, connecting us all to a collective consciousness that has evolved from a local zeitgeist into a worldwide group mind.

Just over 30% of the US population was born after 1995 and has therefore always lived in a world bound together by the internet. As that share continues to grow, I predict that we will become ever more reliant upon distributed cognition, and so it is increasingly important to be mindful of that reliance—otherwise we might grow accustomed to letting the internet do our thinking for us.

To approach this problem further, more research is needed into the psychology of human-computer interaction and the neuroscience of distributed cognition. We must continue to study people’s interactions with software to better understand how our brains compensate for the imbalance created when we extend our knowledge into the world. Our goal must be to make all the world’s knowledge readily accessible while also strengthening our ability to think critically and interact attentively.

It can be easy to assume, given the sheer volume of information at our disposal, that someone else probably holds an adequate answer to every one of our questions. This is the philosophy we must reject. Instead of being content with whatever answer our robotic assistants confidently spit back at us, we should continue to question them and continue to ask ourselves why we want this information and what we plan to do with it once it arrives.

Works Cited

Carr, Nicholas. “Is Google Making Us Stupid?” The Atlantic, July/Aug. 2008, www.theatlantic.com/magazine/archive/2008/07/is-google-making-us-stupid/306868/.

Ferster, Bill. Interactive Visualization: Insight Through Inquiry. MIT Press, 2012.

Manovich, Lev. Software Takes Command. Bloomsbury Academic, 2013.

McLuhan, Marshall. Understanding Media: The Extensions of Man. MIT Press, 1994.

Norman, Donald. The Design of Everyday Things. Basic Books, 2013.