classroom’s Internet and explicitly state that no browsing, texting, or e-mailing was allowed during class. This was a silly if not outright impossible rule to enforce. However, the attitude on the part of professors was that technology was a “distraction” that took the students’ focus away from lectures and conversations. Fearful of the way in which connectivity might distract students, these professors were themselves distracted from the real effect and the benefit of connectivity in the classroom: the transparency that forms a foundation for trust.
In one of Tom’s classes, he experimented by taking a page out of the Gen Z playbook and turning the tables on students. His policy was that they were welcome to use any connected devices as often as they liked—with one caveat.
He would periodically throw out a fact or statistic that was blatantly incorrect, and he expected students who were online to promptly take him to task and correct the error.
The approach paid off in ways he never imagined. Not only did students see it as a welcome challenge to correct a professor, but they also brought to light all sorts of anecdotes, and critical conversations evolved from information they found that related to the lecture.
For Gen Z, connectivity creates a level of transparency in relationships that makes trust an earned status rather than one bestowed upon an individual or an organization.
We believe that viewing the resulting behaviors of hyperconnectivity as distractions rather than a potential way to increase engagement and trust is a big mistake. Swimming against the Gen Z tide of hyperconnectivity is like swimming against a tsunami—only an idiot does it. If you want to survive the tsunami, you have to ride the wave—it’s your only option, no matter how frightening it may appear.
It’s no different with kids. We were discussing hyperconnectivity with a colleague who told us his daughter met her first boyfriend in her freshman year of high school. He said it was a typical teenage relationship, except for one very odd behavior—his sixteen-year-old daughter’s boyfriend moved in with her, in a manner of speaking, on Skype. Every time he would go into his daughter’s room, her boyfriend’s smiling face would be there on the laptop screen, propped up on her bed. “They wouldn’t even be talking to each other,”
he told us. “They would just sit there on each other’s laptop, sharing the same virtual space.” Awkward? Perhaps at first, but that’s the thing about behavior;
even the most aberrant behaviors, repeated often enough, seem completely normal.
Your first temptation may be to bristle at this type of relationship as far less appealing than what you experienced in your early dating life—relationships are, after all, about human contact. Something must be lost if we reduce them to Skype. But that’s the argument made about virtually every new generation of technology. Even writing was seen by some of the world’s greatest philosophers, Socrates and Plato, as a step backward and a far less effective way of connecting and communicating.
It’s supremely ironic, then, that we only know of Socrates and his work through the writing of his student Plato. In one of Plato’s books he recounts a metaphorical conversation between two gods, as told by Socrates.5 In the conversation, one of the gods responds to the other about bestowing the gift of writing on mankind.
For this invention [writing] will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory … You have invented an elixir not of memory, but of reminding; and you offer your pupils the appearance of wisdom, not true wisdom, for they will read many things without instruction and will therefore seem to know many things, when they are for the most part ignorant and hard to get along with, since they are not wise, but only appear wise.
The words are twenty-five hundred years old, and yet they sound familiar—it’s the same argument we hear used against myriad other technologies, from electricity, automobiles, and television to electronic calculators, spell checkers, and Google.
We love to perpetuate generational arrogance, in large part because we value what we have learned for the effort it took to learn it. The smartest people make the best case for holding on to the past; after all, they ended up being the smartest by using the tools of the past in the best possible way. When we fall into these sorts of broad generational traps, we severely constrain the opportunities we can imagine; it's just too easy to limit our view of the future by thinking of it in terms of the behaviors of the present. If we've learned anything over the past two hundred years, it's that the future knows no such bounds.
Change is not linear: it’s exponential and it’s disruptive. It has no allegiance to the past. If we are to stand even the slightest chance of understanding the potential of hyperconnectivity going forward, we need to first examine Gen Z from a neutral vantage point, one that does not impose a historical standard of behavior established in a far less-connected world.
In a Bill Moyers interview, Sherry Turkle, a professor at MIT who has written Alone Together, a book that looks critically at the way in which technology takes us away from human connections—as contrasted with online connections—said:
“What concerns me as a developmental psychologist is watching children grow in this new world where being bored is something that never has to be tolerated for a moment.” Turkle told Moyers, “Everyone is always having their attention divided between the world of people [they’re] with and this ‘other’ reality.”
A post responding to the interview, on Moyers’s website, provides a succinct and cutting counterargument:
We are a herd animal. Any technology that allows us to maintain more efficient contact with the rest of the herd is going to be embraced by its members.
Technology will be more and more integrated into each of us as individuals.
Google Glass is yet another step in this direction. What scared us about the Borg in Star Trek: The Next Generation is being embraced by the next generation(s).
Technology and communication will be integrated into our very being, and we will welcome it. It appears that this is our future as a species.6
We love Turkle’s work; her prose is a delight to read, and some of it resonates deeply with us, especially her observations that youth are increasingly losing the ability to be alone and are panicked at the thought of it.
We’d offer her a Pulitzer Prize if we could—at least for the elegance of her argument. But we can’t help but believe that her arguments are anchored in the past, in the same intellectual vein as Socrates’ disdain for writing. Something is being lost through progress, perhaps, but not everything, and much is being gained. Children who were outcasts and whose shyness locked them into
solitude now can socialize in ways that add richness to their lives.
We recall a story shared with us by a colleague we were having dinner with as part of a consulting engagement with a group of large manufacturers. The conversation meandered as we talked about our work and the critical importance of keeping an open mind toward new online behaviors that we may see as antisocial—excessive gaming, for example. Our colleague, Joe, described a nephew (we’ll call him Barry) who suffered from Tourette’s syndrome, a potentially debilitating neurological condition that causes uncontrollable tremors, movements, tics, and occasional, sudden bursts of disconnected—often profane—language. While the cause and triggers for Tourette’s are not well understood, it manifests in such a way that those communicating with someone who has Tourette’s may interpret the behavior as at best disruptive and at worst antisocial. Tourette’s, unlike stuttering or nervous twitching, is not a modifiable behavior—at least not consciously modifiable. The resulting social stigma can drive those living with Tourette’s to insulate themselves from social settings, making education, work, and socializing difficult, if not impossible.
However, Joe told us that Barry, who is an avid gamer and very active on social networks, has no problem when he is communicating or collaborating online. “Of course,” we responded, “it makes perfect sense that via a keyboard his condition becomes invisible, allowing Barry to communicate freely without any prejudice on the part of his peers.”
“No,” said Joe, “I’m not talking about his communication via keyboard. I’m saying that he will use a gamer’s headset and talk without any indication of Tourette’s. It’s as though his brain suddenly finds a way to calm those misfiring neurons because of his ability to be so in the moment. I can’t explain it but it is an amazing transformation. Unfortunately, I’m worried that Barry, who is still in school, will not be able to join the workforce in any traditional sort of setting where he has to interact with others face to face.”
Here’s the beauty of the Gen Z Effect: we don’t think Barry will need to interact face to face unless he chooses to. Barry is not broken. What is broken is our way of defining work and the way work “has to be” done—because that’s the way it’s “always been done.” We draw lines that separate work that’s face to face in an office setting from remote work that’s performed online. We refuse to accept that online can replace in person. But these two modes of working are not in a contest with each other, any more than the written word is in competition with the spoken word.
When we try to pit one behavior against another we create a zero-sum scenario, in which the future is always competing with the past, when what inevitably happens is that the future converges with the past. Like a highway
with infinite on-ramps, we are always integrating new behaviors as they join old behaviors. Eventually, some of those old behaviors take an exit while others simply fall behind, fading slowly as traffic accelerates. Most, however, refuel and become part of the way forward.
When the separateness of our worlds could be easily divided into work and play, professional and personal, we could live with generational divides, like solid lines marking the lanes of a highway, and prosper despite them.
For Gen Z, however, and certainly for the children of today’s children, there will be no separation between the two worlds Turkle describes. They will not see online and offline settings in competition. Far from being an either-or proposition, the two worlds will be integrated in a way that multiplies people’s potential and their opportunities to live, work, and play.
We have already taken the first steps in the amalgamation of offline and online, not only with traditional mobile devices such as smartphones but also with technologies such as Google Glass (Google’s wearable device for connecting to and interacting with the Internet). These wearable devices do more than blur the distinction between the two worlds; they begin to eliminate it. That may come across as hyperbolic, but only if we limit our imagination to what we have already experienced rather than extending it to catch a glimpse of the future.
In 1876, a cash-strapped Alexander Graham Bell offered to sell his telephone patent to Western Union for $100,000. While considering Bell’s offer, which Western Union turned down, officials who reviewed the offer wrote the following recommendation:
We do not see that this device will be ever capable of sending recognizable speech over a distance of several miles. Hubbard and Bell want to install one of their telephone devices in every city. The idea is idiotic on the face of it.
Furthermore, why would any person want to use this ungainly and impractical device when he can send a messenger to the telegraph office and have a clear written message sent to any large city in the United States?
Figure 3-3: Alexander Graham Bell’s telephone patent, although one of the most profitable and contested patents of the nineteenth century, was flatly rejected by Western Union.7
Laughable, isn’t it? However, the problem Western Union had is the same one we all share: the future never comes fully formed. It is always disguised in a clumsy package that doesn’t comfortably fit the behaviors we are accustomed to.
It’s a familiar pattern of behavior.
The first pocket transistor radio didn’t fit into a standard shirt pocket. Sony cofounder Akio Morita had to order tailor-made shirts with double-width pockets for his salespeople. Many people thought it was silly to have a personal
radio; after all, radio had been built around the expectation of a shared experience. Since its introduction, more than six billion pocket transistor radios have been sold.
Motorola’s first portable cell phone, popularly known as “the brick”—
although, in fairness, it weighed slightly less—didn’t fit neatly in your hand and definitely did not fit in your pocket. At the time of its introduction, even the most outrageous pundits projected somewhere in the range of ten million to one hundred million cell phones in use by the year 2000. We’ve already said that today there are more than seven billion. That’s hardly a rounding error.
The first laptop computer, the Osborne 1, introduced in 1981, was larger and heavier than a portable sewing machine of the same vintage and weighed in at twenty-five pounds. In addition to the difficulty of carrying it around, there was the obvious question: “Why would anyone need to carry a computer?”
In each of these cases, it’s not that the technology building blocks of the future weren’t already in the devices of the present, but rather that behavior had not yet had a chance to form itself around the technology. When people begin using a new technology, they first try to use it as a direct replacement for an earlier technology. With a bit of experimentation, however, they inevitably find purposes for which the technology was never intended but where it ultimately has the greatest value. For example, the initial value of radio transmission was thought to lie in person-to-person communication, not in broadcasting to massive audiences, which is where radio ultimately revolutionized mass communications.
It’s that last point, the need to develop and embrace new behaviors in order to realize a new technology’s value, that is most important to grasp if we are to appreciate how radical disruption works and the impact the Gen Z Effect will have.
Figure 3-4: Osborne 1 Portable Computer8
“Osborne 1 open” by Bilby, own work. Licensed under Creative Commons Attribution 3.0 via Wikimedia Commons.
It’s human nature to initially place “new” technologies into old behaviors since we have no other frame of reference. When the first Walkman was introduced, walking around in public with tiny headphones on made little sense
—in fact, it seemed weird and antisocial. In an attempt to make the Walkman more “socially acceptable,” Sony incorporated a microphone that allowed the listener to press a button and hear the person he was talking with, by channeling the person’s voice through the Walkman and into the headphones. It sounds like a ridiculous feature.
Yet today it is acceptable for people at a dinner table to be staring down at their smartphones rather than at one another. Ridiculous? Indeed, but only in contrast with previous behavior. Ultimately, much of behavior is temporal, defined by the norms and customs of the time. If we are dismissive of these new behaviors we fall back on generational stereotyping, which only reinforces the barriers between people.