Augmented Reality Experience Design & Storytelling: Cues from Japan.

Japan is known as a mecca of inspiration for designers. Michael Peng, co-founder and co-managing director of IDEO Tokyo, recently identified several places in Japan that highlight experiences that “wow” us as designers, innovators, and human beings.

As we imagine, create, and define the future of Augmented Reality, we can draw excellent design cues from the examples Peng lists and apply them to experience design in AR.

I’ve been working with AR for the past 10 years as a designer and researcher, evangelizing this new medium and focusing on storytelling in AR (my Twitter handle is @ARstories). Here are two of my takeaways from Peng’s list, with some ideas for guiding the future of experiences and storytelling in AR.

(*If you’re interested in the future of AR, and hungry for more, this post is just a taste; read more in my upcoming book.)

1. AR and Embedded (Micro) Stories in Things
(Peng lists this as, “1. Pass the Baton – Where Stories are Currency.”)

What does storytelling look like in the augmented Internet of Things and Humans #IoTH?

Appending the word “Humans” to the term #IoT is credited to Tim O’Reilly. He stresses the complex system of interaction between humans and things, and suggests, “This is a powerful way to think about the Internet of Things because it focuses the mind on the human experience of it, not just the things themselves.”

(Photo by Michael Peng)

Peng highlighted the second-hand shop “Pass the Baton” in his list, and noted how the objects for sale are carefully curated stories: a photo of the previous owner and a short anecdote are included on the back of each price tag.

Peng recalled his first visit to the store and commented on how, after reading the description on the back of the price tag on a bracelet (‘I bought this when I went shopping with my mom 5 years ago. It was a special day.’), he felt an “instant connection”, increasing the object’s value.

This approach humanizes the object for sale by embedding a micro story in it, emphasizing the human experience. The shop becomes a collection of human stories: seemingly ordinary objects made extraordinary by the tales they carry.

Peng wrote:

I left Pass-the-Baton with a full heart and a head full of ideas for how to increase the value of products and experiences through storytelling. I wanted to find ways to experiment with the notion of inherent value by applying new layers of storytelling to everyday experiences and systems.

AR makes this possible.

AR gives us a new way to think about the stories embedded in objects. AR can help make the invisible visible and breathe contextual information into objects. Further, anthropomorphism can be applied as a storytelling device to help humanize technology and our interactions with objects. We have the ability to author an augmented Internet of Things AND Humans. What stories will we tell?
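As a thought sketch of what an “augmented Internet of Things and Humans” could look like in practice, an object’s micro story might live as structured metadata keyed to a recognizable AR marker. All names here are hypothetical; the post doesn’t prescribe any implementation:

```python
from dataclasses import dataclass

@dataclass
class MicroStory:
    """A short human story embedded in a physical object."""
    previous_owner: str
    anecdote: str

# Hypothetical registry mapping AR marker IDs to the stories they unlock.
story_registry = {
    "marker-bracelet-042": MicroStory(
        previous_owner="Aiko",
        anecdote="I bought this when I went shopping with my mom. It was a special day.",
    ),
}

def story_for_marker(marker_id: str) -> str:
    """Return the overlay text an AR client might render for a recognized marker."""
    story = story_registry.get(marker_id)
    if story is None:
        return "No story attached to this object (yet)."
    return f'{story.previous_owner}: "{story.anecdote}"'
```

The point of the sketch is the design pattern, not the code: the object itself stays ordinary, and the AR layer is what carries the human story.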

2. AR and Macro Storytelling as a Way to Offer Choices
(Peng lists this as, “4. D47 – Where Provenance is Celebrated.”)

Peng described how D47, a restaurant, museum, and design travel store, showcases the best of each region in Japan. For instance, restaurant patrons are asked to choose where their food comes from rather than selecting the things they would like to eat.

(Photo by Michael Peng)

Offering a macro choice, in this case organized by region, is a way to engage the user in a larger story; it provides a wider field of view in content, not just in the technology (where we want a wider field of view too).

Like the second-hand store presenting a back story for an object, here the bigger back story is the starting point for curating and organizing content and experiences. It lets users enter at the macro level and navigate to content they might not otherwise encounter had they started at the micro level. It provides another way to order and navigate stories, one we can apply to AR experiences.
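To make the pattern concrete, here is a minimal sketch of macro-first navigation, with a D47-style catalogue organized by region rather than by item. The regions and items are invented for illustration:

```python
# Hypothetical catalogue organized at the macro level (by region),
# the way D47 organizes its menu, rather than by individual dish.
catalogue = {
    "Hokkaido": ["seafood rice bowl", "milk pudding"],
    "Okinawa": ["goya stir-fry", "brown-sugar sweets"],
}

def macro_entry_points(catalogue: dict) -> list:
    """The top-level choices presented to the user: regions, not dishes."""
    return sorted(catalogue)

def explore(catalogue: dict, region: str) -> list:
    """Only after a macro choice does the user encounter the micro-level items."""
    return catalogue.get(region, [])
```

The same two-step structure (macro entry point, then micro content) could organize an AR experience: choose a story-world first, then discover the individual augmented objects inside it.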

Peng wrote that visiting the store encouraged him to think of ways to better celebrate the history and origin of the things we create. There is certainly a trend in retail and restaurants, for example, toward greater transparency about where what we consume comes from: a growing demand for a more immediate “history” and “origin” of things to inform our choices.

What kinds of new experiences can we design in AR if we begin storytelling at the macro level and offer users choices from that top level of navigation?

One of the things that tremendously excites me about AR is that it is an entirely new medium and, for now, completely uncharted terrain with no conventions. There is an incredible opportunity here to help define a new era of human-computer interaction. To date, the focus has been on the technology, not on storytelling. We need to shift gears immediately if we are going to create something lasting and meaningful beyond a fad; we don’t want to end up with a wasteland of experience-less hardware paperweights. In tandem with the development of the technology, we need excellent storytellers, designers, and artists of all types to imagine and build these future experiences. I hope this is you, because we really need you, now.

Is it? Let’s continue the conversation on Twitter. I’m @ARstories.

*UPDATE: Purchase “Augmented Human” from:

Amazon (USA)
Amazon (Canada)
Indigo (Canada)
Barnes and Noble (USA)
O’Reilly (Digital, International)

4 Ideas That Will Change Augmented Reality

I’m often asked where Augmented Reality is heading and what AR will look like in the next 5-10 years. Let’s take a peek ahead. (This article is part of a larger project and book that maps AR’s past and future as a new medium.)

If this is your first time visiting Augmented Stories, please allow me to introduce myself: I’m a designer, PhD researcher, and consultant specializing in AR since 2005. In addition to working hands-on with AR to conceptualize, prototype, and design compelling experiences, I consult in commercial industry, advising and leading clients as their ‘AR Sherpa’ to chart new territory in this emerging field. I travel internationally to give public talks on AR (including two TEDx talks), and I’m also an active member of the AR research and academic community.

AR is advancing rapidly as a new medium, moving beyond just a technology. One of the things I’ve dedicated myself to in this field is exploring what AR does best and the opportunities that exist in AR as a medium for storytelling and creating engaging experiences.

Each medium has unique characteristics to be harnessed, capabilities that extend beyond other mediums. We’re presently in this exciting phase of AR: identifying, as well as being able to technically influence, what these capacities and criteria are and can become. Just like film, television, radio, and photography when they were first new, AR presents brand-new terrain ripe for creative exploration. With this comes the possibility of forming new conventions and developing a stylistic language of AR.

Part of the task is to determine the distinct characteristics that define AR, with a focus on what differentiates AR from previous forms. (Debates on medium specificity criticize how essentialist traits can be limiting, in turn, prescribing what the aesthetics of these media should look like; however, I’m not advocating that we lock ourselves into these properties, rather, that our exploration begins here and we continue to evolve, bend, extend, and query those qualities in new ways.)

I often liken AR to cinema when it was first new, and I believe film is the closest relative we can align AR to currently, particularly in investigating AR as a new medium and identifying stylistic motifs.

The Vimeo user Kogonada recently posted a wonderful series of videos surveying filmmaking tropes: a cleverly curated montage of stylistic motifs from the oeuvres of various directors, including Stanley Kubrick’s one-point perspective, Quentin Tarantino’s framing from below, Wes Anderson’s shots from above, Breaking Bad’s object-POV aesthetic, and the sounds of Darren Aronofsky.

So what does this have to do with AR?

These directors introduced and perfected their own motifs within the medium of film. And so I believe we will see the same emerge in AR. AR will be mastered by creative geniuses who introduce us to new stylistic modes within the medium. New tropes (visual, aural, and sensorial) specific to AR will emerge that would not have been possible in other media formats.

The larger project and book I am currently working on is inspired by the “100 Ideas that Changed…” book series, which looks at a range of media forms (film, graphic design, photography, and architecture) and the design ideas and critical concepts that shaped these media as we know them today. Looking ahead, and writing from the future to the past, what are the ‘100 Ideas That Changed AR’? Here are 4 to tease things out a little and get things started:

1. Touch: Haptics in AR

Image: AR Haptics demo utilizing a Phantom Stylus & HMD from the Magic Vision Lab, University of South Australia.

To date, AR has focused primarily on the visual, leaving the other senses behind. Incorporating a sense of touch and tactile feedback through haptics can greatly heighten the sense of the ‘real’ when interacting with virtual content, enriching the overall experience. Imagine being able to feel the scales of a fish on a virtual model as though that fish were really there before your eyes. (I detail my first-hand experience of haptics, the possibilities for integration with tablets, and the opportunities for storytelling here.) Haptics in AR lets the user feel a multitude of textures and surfaces unique to the context, which can also be altered to suit the individual or scenario. Another example is my AR pop-up book, “Who’s Afraid of Bugs?”, in which you can hold virtual creepy crawlies in your hand; imagine feeling that virtual spider crawling over your hand, and the intensity and greater sense of reality this would add to the experience.

Recent work in this arena includes REVEL, from Disney researchers Ivan Poupyrev and Olivier Bau, which applies reverse electrovibration to create the illusion of changing textures as a user sweeps their fingers across a surface. REVEL can provide tactile feedback both on a touchscreen and, quite wonderfully, on the physical object itself (without the use of a glove or stylus). As the video above details, this haptic technique can even be applied to projections. Now, imagine: what would it be like to feel scenes in a projected film? What new tropes and stylistic motifs might emerge in such a genre? Will there be an AR director of our times who becomes known for their unique sensibilities and style of touch in haptics? (This is a project I’m presently exploring, so do get in touch if you’re keen to collaborate and support this creative investigation.)
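The core idea behind a REVEL-style system can be sketched in a few lines: the perceived texture under a sweeping finger is determined by the drive signal associated with whatever surface region is being touched. This is only an illustration of the concept; REVEL’s actual signal generation is not public, and the texture names and parameter values below are invented:

```python
# Illustrative sketch only: reverse electrovibration modulates a signal so a
# swept finger perceives changing friction. Here we model just the lookup from
# a touched region's texture tag to hypothetical drive-signal parameters.
textures = {
    "scales": {"frequency_hz": 240, "amplitude": 0.8},  # rough, ridged feel
    "glass": {"frequency_hz": 0, "amplitude": 0.0},     # no modulation: smooth
}

def signal_for_touch(region_texture: str) -> dict:
    """Return the (hypothetical) drive parameters for the texture under the finger."""
    # Unknown regions fall back to an unmodulated, smooth "glass" feel.
    return textures.get(region_texture, textures["glass"])
```

In a projected-film scenario, each frame region could carry such a texture tag, so that touching the fish on screen returns the “scales” parameters while the background stays smooth.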

2. Breaking the Frame

AR presents the opportunity to interact with a whole new dimensional space, one that traces a history of illusion from trompe l’oeil to the magic lantern to stereoscopic 3D films. Early cinemagoers were said to have been astonished by the first films, witnessing trains that seemingly charged off the screen and directly at the audience. There could hardly be a better description of some of the visual effects AR makes possible today; however, we’re now able to step inside that scene and look at the train from a multitude of perspectives. As seen in the Sony video below, AR presents a unique opportunity to engage with 3D, interactive, photo-realistic sculptural photography: scenes that we are able to enter, explore, and even alter.

I begin to address here the potential aesthetic sensibilities that breaking the screen/frame in AR presents. This has tremendous potential to alter the way we tell stories, with the narrative of the augmented environment spilling into our physical surroundings. Stylistic motifs will emerge that explore storytelling beyond the limits of the single frame, expanding in multiple directions, puncturing conventional space, no longer confined within a distinct screen. The AR participant and audience will be at the center of this, with the space organized around the user; the user’s perception, context, and active engagement will dictate and define the illusion.

3. Point of View (POV): Seeing Through Another’s Eyes

One of the things I believe is distinct to AR is its capacity to serve as a powerful medium for translating perspectives, for seeing the world through another person’s eyes. (If we think back to the Kogonada montages on Vimeo and the Breaking Bad sequence of object POVs, this too is possible: witnessing an experience from the perspective of an object; see 3b below.) Human or not, the possibilities exist for new stylistic motifs to emerge based on this principle.

POV in AR can range from narrative experiences and shifting between character viewpoints, to teaching tools and manuals, with a diversity of experiences from entertainment to utility, fiction to non-fiction.

Steve Mann, too often unrecognized in AR, contributed critical work to the field in the early ’90s with his wearable computing devices and concepts of Mediated Reality. Mann described Mediated Reality and his WearCam eyewear as enabling a finely customized visual interpretation of reality unique to the needs of each individual wearer. These highly personalized experiences could allow for different interpretations of the same visual reality.

Image: Mediated Reality pioneer and University of Toronto professor Steve Mann.

When I encountered Mann’s work in the ’90s, I often thought: perhaps in the future you will want to filter your reality through someone else’s perspective. Imagine experiencing the world via the reality filter of a friend in a social network, a celebrity, someone you admire or idolize, or perhaps even a complete stranger. By enabling someone to see the world through another’s eyes, POV in AR can also be applied as a tool to build empathy (see my previous article on this topic here), which could serve various benefits, from conflict resolution to design thinking (empathy is a trait of a good designer).

3b. Seeing with Hands and Body

Branching from #3, this is another form of POV that can assist in navigating and conveying information about a specific environment or location. Concepts like EyeRing, by MIT researchers, and Google’s patent for ‘seeing with your hands’ present means of interacting with AR and computer vision beyond the human eye.

EyeRing can give the visually impaired a form of sight. A camera, worn on the finger as a ring, identifies and recognizes images, colors, and text, transmitting data to a smartphone where an application converts the collected information into a digital voice. In addition to assistive technology for the blind, the researchers suggest a ‘tourist helper’ application in which the name of a landmark is spoken once the user points at it with the device.
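The described flow (ring camera, recognition on the phone, then a spoken phrase) can be sketched as a tiny pipeline stage. EyeRing’s actual software and APIs are not public; the function, category names, and phrasings below are all hypothetical stand-ins for the recognition output and the text handed to a text-to-speech engine:

```python
def describe_pointing_target(kind: str, label: str) -> str:
    """Compose the phrase a text-to-speech engine would read aloud.

    'kind' and 'label' stand in for the ring camera's recognition output;
    this only sketches the described flow: camera -> recognition -> speech.
    """
    templates = {
        "currency": "That is a {label} bill.",
        "color": "The color is {label}.",
        "landmark": "You are pointing at {label}.",
    }
    # Fall back to a generic phrase for recognition categories we don't know.
    template = templates.get(kind, "I see {label}.")
    return template.format(label=label)
```

The ‘tourist helper’ case from the article would simply be the `"landmark"` branch of such a mapping.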

Image: Google’s patent for “seeing with your hands”.

Google’s “seeing with your hands” patent describes a glove for gathering and conveying information, one that enables “a user to ‘see’ an inaccessible environment” by wearing a device on one’s hands (or “other areas of the body”, including a foot, hip, or back) equipped with “detectors”. The device may record a series of images and then display them back to the user. Although Google’s Project Glass (which I discuss here in a previous article) is not specifically referenced in the patent, a wearable display remote from the glove is mentioned; one can imagine the device being paired with Glass as a possible display. The patent repeatedly refers to predetermined motions, which could combine gesture with AR. One of the motions detailed is a zoom function, which could enable highly magnified views of the user’s environment that might previously have been deemed “inaccessible”. Another possible motif emerges here: making the invisible visible, transforming the micro into the macro.
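The patent’s “predetermined motions” amount to a small dispatcher from recognized gestures to actions. As a hedged sketch (gesture names and zoom factors are my own invention; the patent describes the idea, not an API), the zoom motion might look like this:

```python
def handle_gesture(gesture: str, view_scale: float) -> float:
    """Return the new view scale after applying a recognized glove gesture."""
    if gesture == "zoom_in":
        return view_scale * 2.0  # magnify detail that was previously 'inaccessible'
    if gesture == "zoom_out":
        return max(1.0, view_scale / 2.0)  # never shrink below the unmagnified view
    return view_scale  # unrecognized motions leave the view unchanged
```

Chaining such gestures with a remote wearable display is what would turn the glove into the “making the invisible visible” instrument the patent hints at.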

4. The Méliès Effect

It is no secret that filmmaker and magician Georges Méliès is one of my creative heroes. I often refer to Méliès in my talks, exhibits, and articles, and I named him the Patron Saint of AR in a special guest post for The Creators Project (a partnership with Intel and Vice) on what would have been Méliès’ 150th birthday.

At a time when cinema was about documenting actualities, magician and pioneer filmmaker Georges Méliès extended the medium in incredibly novel ways to conjure impossible realities and define bold new conventions for film.

Méliès imagined fantastical worlds in his films, where the marvelous reigned over the mundane, inanimate objects became animate and forms forever shifted, disappeared and reappeared—nothing was fixed or impossible. Through the medium of film, Méliès created enchanted realities.

Méliès presented stories to embellish his latest effect; the narrative was driven by his toolkit, by the tricks he had at hand. His stories evolved from his investigation and exploration of the novel technology of cinema and the characteristics of the medium. Méliès was a true explorer: he embraced accidents and glitches in the newfound medium of film; he could look at mistakes and see their creative potential rather than discarding them as bad or wrong. In fact, this is how the stop trick, or substitution splice, came to be: Méliès’ camera jammed while filming the streets of Paris, and upon playing back the film he observed an omnibus transforming into a hearse. Rather than discounting this as a technical failure, or ‘glitch’, he made it a technique in his films.

A prolific maker, creating over 500 films, Méliès contributed critical technical advances to the medium (fades, dissolves, and animations), yet equally important is that he did not stop at these technical achievements alone; he dedicated himself to finding ways to use his technique to present compelling content. Perhaps the best way to summarize the ‘Méliès Effect’ in AR, as I’m labeling it, is in the words of John Andrew Berton Jr.: “Even though Méliès’ work was closely involved with the state of the art, he did not let that aspect of his work rule the overall piece. He used his technique to augment his artistic sense, not to create it” [1].

I interpret this as Méliès maintaining artistry in the medium: he was enthused by the possibilities of the technology and allowed the medium to guide his explorations and work, but never let it overrule them. Méliès inspires my creative practice in AR in that the technology informed his stories and the direction of his work, yet he gave himself the freedom to experiment, to move beyond constraints and invent new ways of applying the technology. In doing so, Méliès introduced new formal styles, conventions, and techniques specific to the medium of film. This is the task at hand for AR pioneers and adventurers: to evolve novel styles and establish new conventions toward a language and aesthetics of AR.

This list is part of an ongoing work. What’s on your list of the ideas that will change AR? Let’s continue the conversation on Twitter, the boardroom, or at your next event. Get in touch; a wonderful world of AR awaits and it’s ours for the making.

[1]    Berton Jr., John Andrew. “Film Theory for the Digital World: Connecting the Masters to the New Digital Cinema.” Leonardo: Journal of the International Society for the Arts, Sciences and Technology. Digital Image/Digital Cinema: A special edition with the Special Interest Group in Graphics (SIGGRAPH) Association of Computing Machinery, New York: Pergamon Press, 1990, p.7.

UPDATE: Purchase “Augmented Human: How Technology is Shaping the New Reality” from:
Amazon (USA)
Amazon (Canada)
Indigo (Canada)
Barnes and Noble (USA)
Book Depository (Worldwide)

Designing Augmented Reality Experiences: Lean Forward or Lean Back?

Presently, for the most part, I believe Augmented Reality (AR) is a lean-back experience, and we need to move toward a lean-forward model to drive AR into the future as a compelling, engaging new medium.

So what distinguishes these two media models? It comes down to a passive (lean-back) versus active (lean-forward) user experience. In 2008, Jakob Nielsen applied these terms to the differences between the Web and television. He described the Web as an active, lean-forward medium, where “users are engaged and want to go places and get things done.” In comparison, he described television as a passive, lean-back medium where “viewers want to be entertained. They are in relaxation mode and vegging out; they don’t want to make choices.” In a 2011 talk at the mCommerce Summit in New York, Steve Yankovich, Vice President of eBay Mobile, referred to the iPad as a “lean back experience (the lean back on the sofa device).” Yankovich described this as the “don’t make me work” and “entertain me!” mode.

(Image via Fifth Finger blog)

Has AR to date largely become a passive, lean-back “entertain and dazzle me / bombard me with information” experience?

I thought of this as I read Fast Company’s Co. Design article on Michaël Harboun’s AR thesis project, Transcendenz. Harboun, now a designer at IDEO, states in the article, “Regular AR applications add a layer of objective data, informing us about our surroundings. They give us an instant answer, so that we immediately know what we see.” In this immediacy, we’ve mainly become passive spectators with visuals and data ready to hand (FYI, an interesting tangent here on AR decreasing reliance on memory).

Harboun distinguishes Transcendenz as not giving answers, but asking questions. “It believes in the user’s ability to put the world around him into question, and to not content himself eating instant available data.” We can think of the participant in Transcendenz then as partaking in a lean forward media experience.

Transcendenz is driven by creating an empathetic experience for the user in AR (an arena I’ve been very interested in since I began working with AR seven years ago). My thoughts turn to Steve Mann’s work on Mediated Reality from the early ’90s. In an article in the Linux Journal, Mann states, “Mediated Reality sets forth a new computational framework in which the visual interpretation of reality is finely customized to the needs of each individual wearer of the apparatus”. As such, he continues, “Just as you would not want to wear undergarments or another person’s mouth guard, you may not want to find yourself wearing another person’s computer”. Years ago, as I read this, I often thought: perhaps in the future you will want to filter your reality through someone else’s perspective, be it a friend in a social network, someone you admire or idolize, or perhaps even a complete stranger. Mediated Reality could be applied to build empathy, by enabling someone to see the world through another’s eyes.

In the Transcendenz video, we see the main character jarred by how everyone now quite literally resembles him, taking on his physical appearance within the Empathy prism. The narration is translated: “It’s amazing how a perfect stranger can suddenly seem so familiar. It’s as if one would project our own life on others and even the most annoying persons [sic] now make us smile.” Transcendenz is depicted as a tool to enable empathy not only towards other human beings, but towards nature and our environment as well. Philosopher Immanuel Kant is referenced in the narration: “Empathy is not only about projecting ourselves on our fellows, but also on the world around us.” (I’ll spare you the stardust part.)

While in the Empathy prism, the user is actively engaged in his environment in a lean-forward mode. The experience is initially accessed through the act of meditation, as seen in the video at 1:18. The program indicates to the user, “You’re too excited to enter the interconsciousness. Try to meditate a bit.” There’s something curious happening in this act, where a lean-forward mode is entered via a lean-back relaxation mode: meditation.

A couple of references come to mind here. The first is the set of non-traditional video games developed at USC’s Game Innovation Lab, including Cloud, Journey, Flower, and Bill Viola’s The Night Journey. Game designer and lab director Tracy Fullerton describes these experimental games as positing “the possibility of a game mechanic that expressed peacefulness, wonder and awe” as well as enlightenment (wonder being one of my favourite words, in fact; here’s a link to my TEDx 2010 talk on AR and wonderment). The core mechanic in Bill Viola’s PlayStation 3 game is described in his artist statement as “the act of traveling and reflecting rather than reaching certain destinations – the trip along a path of enlightenment.” I view this description as parallel to the mechanics and design intent of Transcendenz.

It is interesting to consider what kind of aesthetic and direction AR games might take if such a mechanic were applied, as opposed to the current direction, which is predominantly AR first-person shooters.

The second reference I think of is Ian Bogost’s latest book, “How to Do Things with Video Games”, and his chapter on “Relaxation”, particularly where he refers to lean-back and lean-forward media in relation to video games. Bogost notes that leaning forward “requires continuous attention, thought and movement” while leaning back is associated with relaxation and passivity (he even mentions gluttony). He writes, “To relax through a game requires abandoning the value of leaning forward and focusing on how games can also allow players to achieve satisfaction by leaning back.” Transcendenz leans back into a state of relaxation and enlightenment, yet simultaneously asks the user to lean forward and not abandon that state: to actively engage with their environment and be present, immersed, and interactive. The core mechanics of Transcendenz, comparable to Viola’s The Night Journey, are exploration, reflection, and action through emotional experience to transform the user/player. And, after all, isn’t this what designing AR experiences should be about: engaging the viewer in meaningful, contextual, interactive learning experiences rooted in the real world?

Let’s continue the conversation: please post any comments below or reach me on Twitter.

The Future of Augmented Reality is in our Hands with Haptics & Touch Screens

There are two questions that I’m often asked: ‘What’s in store for the future of AR?’ and ‘What would you like to see in the future of e-books and tablets?’

My answer to both is haptics and tactile feedback.

In August 2011 I had the pleasure of visiting the Magic Vision Lab at the University of South Australia and experiencing their AR haptics demo. Wearing a head-mounted display (HMD) and using a Phantom stylus, I could feel the scales of a virtual fish which appeared before me. I was able to touch virtual objects and receive tactile feedback, as though these were real, physical objects I was interacting with. This completely threw off my sense of the real: I had difficulty distinguishing between what was real and what was virtual. The experience signified an important shift for me in the medium of AR: in the past, the only tactile component of AR was that which physically existed in our environment.

Image: Haptics Demo from the Magic Vision Lab, University of South Australia.

I was fascinated by the sense of touch and tactile feedback paired with AR. However, I was left desiring a more direct interaction in this experience, without the HMD or stylus.

Enter Senseg’s touch technology for tablets, which premiered at the Consumer Electronics Show (CES) last week. If we can merge this with AR, I truly think it can be a game changer and help push the medium forward in important new ways that are currently absent. To date, AR has been primarily a vision-based medium; we haven’t really gotten to augmenting touch, and we can’t ignore these other very ‘real’ senses much longer.

In the short video above, Dave Rice, VP of Senseg, discusses the technology as adding tactile effects to touch screen displays including smart phones, tablet computers, touch pads and gaming devices. He discusses the possibilities for gaming applications (I personally think this would be incredible to apply to storytelling as well) and describes a treasure hunt game in which a treasure chest is hidden and can only be found by feeling around on the screen. Dave says, “There were no visual cues there and that’s pretty exciting because now we can move to the world of feel to complement what you’re seeing, or to work independently from it and really create a new world to explore.”

For me this perfectly describes the future of AR and its potential. I think about this last quote and how it applies to my recent AR pop-up book “Who’s Afraid of Bugs?”, the first AR book designed for iPad 2. For me, the next step for this book is being able to touch and feel the texture of the virtual spider that magically appears. Imagine petting the spider and feeling each tiny hair.

(Also with today’s announcement from Apple on iBooks textbooks for iPad, “a new kind of textbook that’s dynamic, current, engrossing, and truly interactive”, imagine how haptics and tactile feedback could change the future of education in e-books, as well as AR. Talk about ‘bringing the curriculum alive’.)

“A Brief Rant on the Future of Interaction Design” is an excellent and very relevant article in which Bret Victor asks us to aim for a “dynamic medium that we can see, feel, and manipulate”. Bret’s article immediately resonated with me when I read it in November, and I shared it with the AR community via Twitter as something important we needed to be aware of and really work toward.

Image: Bret Victor

Bret emphasized the use of our hands to feel and manipulate objects. He writes, “The sense of touch is essential to everything that humans have called ‘work’ for millions of years.”

“Now, take out your favorite Magical And Revolutionary Technology Device. Use it for a bit. What did you feel? Did it feel glassy? Did it have no connection whatsoever with the task you were performing?” Bret calls this technology, “Pictures Under Glass”, noting that it sacrifices “all the tactile richness of working with our hands”.

Bret links to research that’s been around for decades in haptics, tangible-user interface (TUI) and even Touchable Holography. He comments on how this research has always been marginalized, but that “maybe you can help”.

AND WE CAN. As Bret so wonderfully states, the most important thing about the Future is that it is a choice. As an AR industry and community, it is our choice as to how this medium evolves. “People choose which visions to pursue, people choose which research gets funded, people choose how they will spend their careers.”

Let’s do this. Kindly get in touch if you’re keen to collaborate!

Let’s also continue the conversation in the comments area below and on Twitter; find me, I’m @ARstories.

*Hat tip to Stephen Ancliffe for sharing the Senseg video with me.

UPDATE (March 6, 2012): The Next Web and The Guardian published articles on haptics/touch-feedback possibly being the secret feature of the iPad 3. Could AR’s tactile future be here very, very soon? Let’s hope so. My original post was published on January 19, 2012.

UPDATE (March 7, 2012): Sadly, haptics didn’t make it into Apple’s big “New iPad” announcement today. Let’s hope for touch-feedback capabilities in the next iPad. Now, that would truly be a magical device.