Slate.com | June 13, 2012
By Ayesha and Parag Khanna
One 26-year-old says more than half his memories come from his online life. A Japanese man marries a voluptuous digital avatar. A corporate laboratory implants memories in 7-year-olds, convincing them they swam with dolphins. In their minds, they even got wet.
Even for our greatest philosopher of the surreal, Sigmund Freud, reality remained rooted in the personal and social. A century on, however, technology is granting us the ability to alter our perception of reality, construct multiple representations of ourselves like avatars, and have relationships with artificial agents like robots. All of these are simultaneously expanding and destabilizing our sense of self.
Technology is a “second self,” as MIT professor Sherry Turkle has explained: a new interface between us and others. Debates over whether social technologies cause “detachment” from reality miss the point that we are entering a new hybrid reality in which assumptions about authenticity are fundamentally challenged: Who is real? What is the line between physical and virtual? Do we each get to live our own version of the truth?
Let us begin with technology’s growing ability to manipulate how much information we have about the world around us. Google glasses and, soon, pixelated contact lenses will allow us to augment reality with a layer of data. Future versions may provide a more intrusive view, such as sensing your vital signs and stress level. Such augmentation has the potential to empower us with a feeling of enhanced access to “reality.” Whether this represents truth, however, remains elusive. Consider the opposite of augmented reality: “deletive reality.” If pedestrians in New York or Mumbai don’t want to see homeless people, they could delete them from view in real time. This not only diminishes the diversity of reality; it also blocks us from developing empathy.
The possibilities for new physical (rather than just visual) self-other relationships are emerging through haptic (“touch”) technologies that enhance intimate sensations. Adrian Cheok of Singapore has coined “Lovotics” to name an emerging field at the intersection of love (philosophy, psychology, biology, neuroscience) and robotics (artificial intelligence and engineering). His “Kissenger” device is a matching pair of plastic lips pre-shaped to match you and your loved one. The porn industry, too, is promising tele-dildonic devices that convert interactive virtual behavior into real-life sensations. Technology can even insinuate itself into our most intimate psychological spaces by awakening invisible neuro-chemical bonds between us. UCLA professor Dan Siegel’s research uses fMRI technology coupled with neuro-prosthetics to allow people to share the “state of mind” generated in the frontal cortex of the brain. We could actually create a pluralistic soul out of our most individual essence.
The more time we spend in virtual environments, the more the distinction between real and digital blurs. Of the eight hours a day children today spend online, one and a half are spent using avatars (compared with only 30 minutes reading print). Microsoft’s forthcoming Avatar Kinect features photogrammetric technology that creates a near-perfect digital replication of your facial features, including animation of your expressions. The allure of constant 3-D virtual life with our real companions will prove irresistible. As this converges with technologies like Wii and 3-D TV, which already give us the foundations for mass hands-free (and glasses-free) digital immersion, we create an interactive virtual universe. The way we navigate the Internet will evolve in step, moving from text-based Wiki to multimedia Qwiki. At Keio University, engineers are developing a system best described as tele-existence: the “Twister,” a room that replicates any background scene in 3-D. Twisters in different locations could allow multiple sets of participants to feel as if collectively teleported to the same setting.
The radically improved realism of immersive technological experience has propelled the purpose of our online life from social escapism to professional tool to parallel life, until online and offline finally become two sides of the same coin. As the texture of the online aesthetic becomes rich enough to rival the real one, which will we prefer? In hybrid reality, both are equally important.
We may attempt, then, to transcend our most finite commodity—time—by multiplying ourselves to maximize each moment. Initially our avatars are a direct expression of ourselves, but eventually, with the advent of AI+, we may use multiple avatars as expressions of various facets of our personalities. We may even imbue them with certain preferences that they can pursue in cyber-life, potentially creating deep entanglements on our behalf. Such an autonomous avatar isn’t just a direct representation of our real selves, but actually shapes our individual psychology and behavior. The digital mirror has a subliminal voice.
The combination of cloud-based data, devices, and software that allow us to search and share, and artificial intelligence capable of semantic understanding, heralds the rise of a collective intelligence. The Internet, Jeffrey Stibel argues, is not just becoming like a brain. It is a brain: It ingests data, processes them, and “provides answers without knowing questions.” As our cognitive processes are increasingly shared with devices, networks, and the physical environment, our sense of self morphs to become the sum of our connections and relationships. Rather than one single identity, we each have a personal identity ecology combining our real and virtual selves and our semantic links floating in the global mind (the “Noosphere”). Google’s Sergey Brin calls this having “the entire world’s knowledge connected directly to your mind.”
This does not have to be done sitting in a chair. Microsoft’s Gordon Bell conducted a decade of “LifeLogging,” which can now be replicated by anyone using Zeo’s portable recording devices, which can capture just about everything we do and see. Eventually we might be able to upload this knowledge to our own parallel portable brain, such as the one being developed by IBM’s SyNAPSE team: a life-size carbon-nano-cortex of circuitry that will mimic the architecture and efficiency of the human brain while potentially exceeding its speed. Through such “cognitive computing,” we could potentially control all our identities simultaneously. Today your official identity converges around a national ID or Facebook login; tomorrow, perhaps your DNA. Beyond that, there are few if any limitations.
Finally, the rise of social robots is reshaping the milieu in which our identity forms by introducing an entirely new type of “other.” Robots are irrefutably becoming more ubiquitous, intelligent, and social. Already in the 1960s, subjects of MIT’s studies emotionally revealed themselves to the boxy and binary chatbot ELIZA. Since 2010, hospitalized children and elderly widows in daycare have increasingly been cuddling and emotionally bonding with Paro, the Japanese-designed robotic seal that physically responds to touch. For less than $10,000, the prototype Roxxxy sex robot can be made to look like anyone you want. It senses and responds to your touch and is Wi-Fi-enabled to send love notes. Each of these underscores the rise of robotic companionship.
For three nights in February 2011, millions of American households tuned in to watch the game show Jeopardy!, during which a machine, IBM’s Watson computer, “stood” between the show’s two all-time greatest players—and completely demolished them. Audiences around the world nodded, cheered, and whistled as Watson demonstrated contextual understanding of the complex idioms and puns that are the hallmark of the show’s mind-bending questions, answering almost all of them instantly and correctly. Two things happened on those nights. Advances in machine intelligence were on full display, far beyond IBM’s previous chess-playing Deep Blue computer. But equally importantly, we, the viewers, accepted a robot as a social actor in our lives. It was novel, but quickly became natural, even normal.
Artificial intelligence does not need to be fully autonomous to be compelling and persuasive. Rather, it needs to leverage the Noosphere and present itself in a compelling anthropomorphic fashion. We already have voice and gesture-based control of devices through Apple’s Siri and Microsoft’s Kinect. Lifelike robots that mimic our facial expressions, even while saying nothing profound, are sufficient to evoke Freud’s sense of the uncanny.
In various ways and with varying degrees of intelligence, robots today already perform surgery, sense earthquakes, bomb terrorists, fly planes, drive cars, baby-sit children, build hardware, trade money, fold laundry, perform in operas, and have sex (with humans). There is enough robotic penetration already to have inspired Carnegie Mellon to launch a robot census. As the definition of society expands to include humanoids and other robotic forms, how will our family structure be affected? Will each of us have robot companions longer than we have spouses? Will robots have rights? How will we hold them liable for accidents?
These are questions we may have to answer sooner rather than later, especially because the transition from pre-programmed to semi-autonomous robots has begun. MIT computer scientists have hacked and mounted the Kinect on a Segway, programming it to sense and manipulate objects, even to look for a power source to plug into. (Ordinary humans who own iPhones can certainly sympathize.) Honda’s Asimo can now walk along hallways and avoid bumping into others; soon he might be able to cross Tokyo’s infamously dense Shibuya crossing like an ordinary person. As Google’s driverless car begins to navigate blind passengers, and eventually families, around Nevada and beyond, we need to maintain clarity over who is ultimately behind the wheel.
The proliferation of identities in hybrid reality undoubtedly brings with it the schizophrenia of simultaneous temporal and digital lives. Technology immersion so defines hybrid reality that it requires a conscious effort to “tune technology out” through gadget-free spiritual retreats—but we show little sign of wanting to do so, with worrying results. South Korea, held up as a role model for its universal broadband connectivity and strong education system (and where most children above 6 have their own blog), is also a cautionary tale given cyber-bullying, academic suicide, online addiction centers, and mandatory midnight curfews blocking leading gaming websites. One South Korean couple spent so many hours obsessively raising their virtual daughter Anima on the popular online game Prius that their own infant daughter (who remained unnamed) starved to death at home. In America, virtual voyeurism has had tragic consequences in real life, such as the Tennessee couple murdered in cold blood for de-friending an estranged ex-boyfriend on Facebook. Another couple met in Second Life, got married in real life, then divorced due to affairs each then had in Second Life.
For all of the risks and unanswered questions, immersive environments can also be extremely useful coaches for the emergent hybrid reality. In a virtual classroom, for example, the augmented gaze of the teacher can be focused simultaneously on all students, making instruction that much more persuasive. Instead of déjà vu, Jeremy Bailenson’s Virtual Human Interaction Lab at Stanford presents us with “veja du”: the ability to see ourselves doing something we have not physically done. We can visualize what happens to our bodies if we don’t exercise, or a massive multi-player simulation can make clear the costs of global conflict.
The world population may plateau physically, but we are multiplying ourselves digitally and robotically. The measure of our ability to manage this hybrid reality of co-existing identities will not be IQ or EQ, but TQ—technology quotient.
This essay is adapted from Hybrid Reality: Thriving in the Emerging Human-Technology Civilization, recently published by TED Books.