Designing Beyond Screens
Virtual Reality (VR) strives to recreate the physical world in a virtual one. Augmented Reality (AR), on the other hand, can bring the digital into the physical world to create a hybrid reality. AR offers new ways of applying technology to immerse ourselves in our physical reality (rather than being removed from it), and even enhance it.
Interacting with screens is a big part of our everyday modern reality. We spend a great amount of time engaging with our world and each other through two-dimensional screens, whether via a smartphone, tablet, or computer. The world we live in, however, is three-dimensional and not flat: it is physical and involves the use of multiple senses. AR presents the opportunity to design beyond the screens we use today and create new experiences that better embody the full human sensorium.
In my last Radar article, I looked at how AR, wearable tech, and the Internet of Things (IoT) are augmenting the human experience. I highlighted how computer vision and new types of sensors are being combined to change the way we interact with and understand our surroundings. Here, I’ll look at how this can be extended by integrating the human senses beyond the visual — such as touch, taste, and smell — to further augment our reality.
Touching the digital and interacting with data in novel ways
Imagine using your hands to manipulate and pull virtual objects and data directly out of a 2D display and into the 3D world. GHOST (Generic and Highly Organic Shape changing) is a research project across four universities in the United Kingdom, Netherlands, and Denmark exploring shape-changing displays that you can touch and feel. Jason Alexander, one of the researchers from Lancaster University, describes the technology: “Imagine having Google maps on your mobile phone, and when you pulled out your phone, you didn’t just see a flat terrain, the little pixels popped out so you saw if you had to walk over hills or valleys in order to reach your destination.” He explains, “This allows us to use our natural senses to perceive and interact with data.”
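To make Alexander's map example concrete, here is a minimal sketch (not from the GHOST project, and using an assumed actuator interface) of how terrain elevation data might be mapped onto a coarse grid of physical pixels that rise out of a shape-changing display.

```python
import numpy as np

def elevation_to_actuator_heights(elevation, grid_shape=(16, 16), max_travel_mm=10.0):
    """Map a terrain elevation grid onto a coarse grid of display actuators.

    elevation: 2D numpy array of terrain heights (e.g. meters) for the map area.
    grid_shape: resolution of the shape-changing display's actuator grid (assumed).
    max_travel_mm: how far each physical "pixel" can rise above the surface (assumed).
    Returns a grid_shape array of actuator extensions in millimeters.
    """
    # Downsample the elevation data to the actuator grid by block averaging.
    rows = np.array_split(elevation, grid_shape[0], axis=0)
    coarse = np.array([[block.mean() for block in np.array_split(r, grid_shape[1], axis=1)]
                       for r in rows])

    # Normalize to [0, 1] so valleys sit flush and the highest hill uses full travel.
    lo, hi = coarse.min(), coarse.max()
    normalized = (coarse - lo) / (hi - lo) if hi > lo else np.zeros(grid_shape)

    return normalized * max_travel_mm

# Usage: a synthetic 160x160 elevation patch rendered on a 16x16 actuator grid.
terrain = np.random.rand(160, 160) * 500  # heights in meters (placeholder data)
heights = elevation_to_actuator_heights(terrain)
print(heights.shape, heights.max())
```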
A surgeon, for instance, could work on a virtual brain physically, engaging in a fully tactile experience before performing a real-life operation. Artists and designers using physical materials such as clay could continually remold objects using their hands and store them in a computer as they work. Kasper Hornbæk, a project researcher from the University of Copenhagen, suggests that such a display could also allow you to hold the hand of your significant other, even if they are on another continent.
Esben Warming Pedersen, a member of the research team at the University of Copenhagen, discusses how these “deformable screens” differ from traditional touch screens. With no glass surface in front of the content, you can reach into the screen and touch the data. Pedersen explains how this is possible by first looking at the way normal glass touch screens work: “All that the iPad actually sees is the tip of your finger touching the glass display. So, when an iPad tries to find out where and how we touch it, you can think of the iPad actually as a coordinate system.” The deformable display is more complex than locating the coordinates of a fingertip; it uses depth data captured by a 3D depth camera. Pedersen is developing computer vision algorithms that take this 3D data and represent it in a way the computer can understand and apply in interactions.
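Pedersen's distinction between a coordinate on glass and a deformation in depth can be illustrated with a small sketch. The code below is not the GHOST team's algorithm; it assumes a depth camera frame arrives as a 2D array and compares it against a calibrated "rest" shape of the elastic screen to find where, and how deeply, a hand is pressing in.

```python
import numpy as np
from scipy import ndimage

def find_deformations(depth_frame, rest_frame, min_press_mm=5.0):
    """Locate presses on a deformable display from a single depth frame.

    depth_frame: 2D array of camera-to-surface distances (mm) for the current frame.
    rest_frame:  the same measurement captured while nothing touches the screen.
    Returns a list of (row, col, press_depth_mm) tuples, one per detected press.
    """
    # Anywhere the surface deviates from its rest shape beyond a threshold
    # counts as deformation (the exact camera placement is an assumption here).
    deformation = np.abs(depth_frame.astype(float) - rest_frame.astype(float))
    pressed = deformation > min_press_mm

    # Group connected pressed pixels into distinct touches and report each one's
    # centroid (its "coordinate") plus how far it is being pushed (its depth).
    labels, count = ndimage.label(pressed)
    touches = []
    for i in range(1, count + 1):
        mask = labels == i
        rows, cols = np.nonzero(mask)
        touches.append((int(rows.mean()), int(cols.mean()), float(deformation[mask].max())))
    return touches

# Usage with synthetic frames: a flat screen with one simulated press.
rest = np.full((240, 320), 800.0)      # screen 800 mm from the camera at rest
live = rest.copy()
live[100:120, 150:170] -= 25.0         # a fingertip pushes the surface in by 25 mm
print(find_deformations(live, rest))   # -> [(109, 159, 25.0)]
```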
One challenge Pedersen identifies is that we don’t yet know how to interact with these new screens. We already share a common vocabulary for 2D displays, such as pinching to zoom out of a picture and sliding to switch to another one, but for 3D gestures, or deformable gestures, it is far less apparent how these screens should be used. Pedersen is running user studies in search of an intuitive vocabulary of new gestures, of the kind sketched below.
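As one illustration of what a "deformable gesture" recognizer might look at, the sketch below (my own assumption, not Pedersen's work) classifies the time series of press depth at a single touch point into a few candidate gestures.

```python
def classify_deformable_gesture(press_depths_mm, poke_threshold=15.0):
    """Rough gesture labels from a series of press depths sampled each frame.

    press_depths_mm: press depth (mm) recorded on every frame while a touch is active.
    Returns one of "tap", "poke", or "press-and-hold" (labels are assumptions).
    """
    peak = max(press_depths_mm)
    duration = len(press_depths_mm)     # number of frames the touch lasted

    if peak < poke_threshold and duration < 10:
        return "tap"                    # shallow and brief, like a 2D tap
    if peak >= poke_threshold and duration < 10:
        return "poke"                   # deep but brief: push an object "into" the data
    return "press-and-hold"             # sustained pressure, e.g. grabbing and molding

# Usage: a quick, deep push recorded over six frames.
print(classify_deformable_gesture([2, 8, 18, 22, 12, 3]))  # -> "poke"
```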
Combining multiple senses to experience a new reality
These new types of experiences aren’t limited to engaging one sense at a time. Adrian David Cheok, professor of pervasive computing at City University London, is working on new technologies to allow you to use all of your senses while communicating through the Internet. Cheok is building a Multi-Sensory Internet to transcend the current limitations of online communication. “Imagine you’re looking at your desktop, or your iPhone or laptop — everything is behind the glass, behind a window, you’re either touching glass or looking through glass. But in the real world, we can open up the glass, open the window and we can touch, we can taste, we can smell. So, what we need to do is we need to bring computers — the way we interact with computers — like we do in the physical world, with all of our five senses,” says Cheok.
Cheok’s sensorial inventions include an Electronic Taste Machine and a device called Scentee. The Electronic Taste Machine consists of a plexiglass box you stick your tongue into to taste different flavors over the Internet. Using electrical and thermal stimulation, the interface temporarily tricks your tongue into experiencing sour, sweet, bitter, and salty tastes, depending on the frequency of the current passing through the electrodes. Scentee is a small device that attaches to the audio jack of your smartphone and releases an aroma from chemical cartridges when you receive a text message. Cheok explains: “For example, somebody may want to send you a sweet or a bitter message to tell you how they’re feeling. Smell and taste are strongly linked with emotions and memories, so a certain smell can affect your mood; that’s a totally new way of communicating.” He also references a commercial application he is working on with Michelin-starred restaurant Mugaritz in San Sebastian, Spain, to allow you to smell the menu through your phone.
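The article does not describe how Cheok's devices are driven, but the pattern he describes, an incoming message triggering a matching smell, can be sketched with an entirely hypothetical device interface (the ScentDevice class and cartridge mapping below are assumptions for illustration, not the actual Scentee API).

```python
# Hypothetical sketch of message-triggered scent; not the real Scentee API.

EMOTION_TO_CARTRIDGE = {
    "sweet": 0,   # e.g. an affectionate message
    "bitter": 1,  # e.g. a frustrated or unhappy message
}

class ScentDevice:
    """Stand-in for a phone-attached scent emitter with numbered chemical cartridges."""
    def release(self, cartridge: int, duration_ms: int = 500) -> None:
        print(f"releasing aroma from cartridge {cartridge} for {duration_ms} ms")

def on_message_received(text: str, emotion_tag: str, device: ScentDevice) -> None:
    """Pair an incoming text with the aroma its sender chose to convey their mood."""
    cartridge = EMOTION_TO_CARTRIDGE.get(emotion_tag)
    if cartridge is not None:
        device.release(cartridge)
    print(f"new message: {text!r} ({emotion_tag})")

# Usage
on_message_received("Thinking of you!", "sweet", ScentDevice())
```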
Cheok’s examples allow you to experience and share something virtual in reality; however, they are still simulations that mimic the real thing. Meta Cookie, from the University of Tokyo, also combines scent and computer vision to simulate reality, but in this case the technology augments the taste of a real cookie. Meta Cookie merges an interactive olfactory display with plain edible cookies. A see-through head-mounted display lets the user view various cookie selections in AR, with different cookie textures and colors digitally layered atop the plain cookie. Once you select the flavor of cookie you would like to eat, an air pump delivers the scent of the chosen cookie to your nose. This creates the effect that you are eating a flavored cookie, despite it being a plain one. If you don’t like the taste of the cookie you chose, you can transform it into another flavor and take another bite. In fact, you could have one ultimate cookie that embodies a different flavor with each bite. Your experience can be entirely customizable. We are beginning to see physical objects imbued with digital properties, making them shape-shifting and adaptable to our personal needs, and in this case, our tastes.
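The Meta Cookie pipeline, recognize the plain cookie, overlay the chosen flavor's appearance, and pump the matching scent toward the nose, can be summarized in a small control sketch. The flavor names and pump interface below are assumptions for illustration, not the Tokyo team's implementation.

```python
# Hypothetical control loop for an AR flavor overlay plus olfactory display.

FLAVORS = {
    "chocolate": {"texture": "chocolate_cookie.png", "scent_channel": 0},
    "lemon":     {"texture": "lemon_cookie.png",     "scent_channel": 1},
    "almond":    {"texture": "almond_cookie.png",    "scent_channel": 2},
}

class OlfactoryDisplay:
    """Stand-in for the air pump that directs a scent toward the wearer's nose."""
    def emit(self, channel: int, intensity: float) -> None:
        print(f"pumping scent on channel {channel} at intensity {intensity:.1f}")

def render_bite(selected_flavor: str, cookie_visible: bool, pump: OlfactoryDisplay) -> str:
    """Overlay the chosen flavor on the tracked plain cookie and trigger its scent."""
    if not cookie_visible:
        return "no cookie tracked"              # the headset's camera has lost the cookie
    flavor = FLAVORS[selected_flavor]
    pump.emit(flavor["scent_channel"], intensity=1.0)
    return f"overlaying {flavor['texture']} on the plain cookie"

# Usage: switch flavors between bites of the same plain cookie.
pump = OlfactoryDisplay()
print(render_bite("chocolate", cookie_visible=True, pump=pump))
print(render_bite("lemon", cookie_visible=True, pump=pump))
```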
Traditional screens as we know them are rapidly evolving, giving way to novel interactions and experiences. A new reality is coming that will forever change the way we engage with our surroundings. AR is about a new sensory awareness, deeper intelligence, and a heightened immersion in our physical world and with each other.
Read more about the ideas and inventions this next major technological shift represents in my book, Augmented Human: How Technology is Shaping the New Reality.