Augmented Reality Storytelling: The Body and Memory Making

When designing AR experiences and AR stories, we too often forget something very important: the human body.

Augmented content is not two-dimensional or flat; it unfolds in our physical space, in our personal surroundings. We’re walking around it and crouching on the floor, exploring it from different angles and heights. AR stories are about our body in relation to the virtual constructions inhabiting our space as much as they are about the content presented.

Earlier this month, The New York Times debuted its first AR-enabled article within its iOS app, a preview piece for the Winter Olympics in Pyeongchang, South Korea. Using an iPhone or iPad, readers can meet Olympic athletes in AR—figure skater Nathan Chen, big air snowboarder Anna Gasser, short track speed skater J.R. Celski, and hockey goalie Alex Rigsby—as if they were paused mid-performance.

John Branch, the author of the NYT AR article, observes how, despite all of the camera angles, watching sports on television creates an experience where you are “passively cocooned on the couch as a mere spectator to miniaturized athletes squeezed through a two-dimensional plane.” The NYT AR piece, by contrast, has you moving and actively engaging with the content, walking around the room where you’re reading the story. Far from being “cocooned on the couch”, you’re crawling on the floor to look speed skater J.R. Celski in the eye. And the 3-D visualizations are not miniaturized like on your TV; they’re sized true to life and to scale. For example, you look up at figure skater Nathan Chen as he appears 20 inches off the ground in your room, the height he would be mid-quadruple jump.
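True-to-scale placement like this comes down to simple unit arithmetic. Below is a minimal sketch, not the NYT's actual code; the 1.68 m skater height, a y-up coordinate system in metres, and a floor plane at y = 0 (a common convention in frameworks like ARKit) are all assumptions for illustration:

```python
# A hypothetical helper for rendering a 3D model true to life in AR.
INCHES_TO_METRES = 0.0254

def life_size_transform(model_height_m, subject_height_m, jump_height_in=0.0):
    """Return (uniform_scale, y_offset_m) so the model renders life-size.

    model_height_m   -- height of the 3D asset in its own units
    subject_height_m -- real-world height of the person depicted
    jump_height_in   -- how far off the ground the pose is, in inches
    """
    scale = subject_height_m / model_height_m       # 1.0 means already life-size
    y_offset_m = jump_height_in * INCHES_TO_METRES  # raise the anchor off the floor
    return scale, y_offset_m

# A 1-unit-tall skater asset, an assumed 1.68 m athlete, paused 20 inches up:
scale, y = life_size_transform(1.0, 1.68, 20)  # scale 1.68, y offset 0.508 m
```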

The NYT gets this aspect of AR storytelling right: it’s not just about the athletes’ bodies and their form, it’s also about the way you’re maneuvering your body in conversation with the story and your space as you experience the content.

But there’s something else happening here that we also need to take into account when designing AR experiences.

I was recently chatting with Theresa Poulson about AR storytelling and my new book Augmented Human (Theresa is developing an incubator at Video Lab West for creators advancing emerging forms of non-fiction storytelling). We were discussing the NYT AR experience when Theresa said something I found intriguing, and something I believe is often overlooked in AR: she walks past the spot in her office where she encountered the 3D Olympic athlete every day, and she remembers the encounter as though it really happened there in that room, which, in fact, it did.

As I shared in my AR keynote at FutureX Live 2017 in Atlanta, we’re no longer just designing stories, we’re now designing memories with AR.

In 2016, the first experiences for the Microsoft HoloLens developer edition were introduced, including a game called Fragments, a crime drama that plays out in your physical environment and has you searching for clues in your space to solve the mystery. Kudo Tsunoda, CVP Next Gen Experiences, Windows and Devices Group, Microsoft, said, “Trust me, the first time one of our Fragments characters comes in to your home, sits down on your sofa, and strikes up a conversation with you it is an unforgettable experience.” It really is, and I especially remember the virtual rats.

“Fragments blurs the line between the digital world and the real world more than any other experience we built,” said Tsunoda. “When your living room has been used as the set for a story, it generates memories for you of what digitally happened in your space like it was real. It is an experience that bridges the uncanny valley of your mind and delivers a new form of storytelling like never before.”

There’s a higher level of emotional engagement with experiences like Fragments because the story is unique to your space, the position of your body, and your gaze. There is a direct contextual relationship with content responding to you and your environment. The way you experience Fragments in your home will be different from the way I experience it in my home. Spatial mapping and custom artificial intelligence allow a room’s layout to influence the placement of virtual content in the game, such as a piece of evidence hidden behind your furniture.
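As a toy illustration of that idea (this is not Fragments' actual code; the 2-D floor plan, axis-aligned furniture boxes, and all coordinates are invented assumptions), content placement can be reduced to a visibility test: prefer spawn points that furniture occludes from the player's current position.

```python
# Toy spatial-mapping placement: hide evidence behind furniture.
# Rooms are simplified to a 2-D floor plan with axis-aligned rectangles.

def segment_hits_box(p, q, box):
    """True if the 2-D segment p->q passes through an axis-aligned box
    (xmin, ymin, xmax, ymax), using the slab method."""
    (x0, y0), (x1, y1) = p, q
    xmin, ymin, xmax, ymax = box
    dx, dy = x1 - x0, y1 - y0
    t0, t1 = 0.0, 1.0
    for d, lo, hi, o in ((dx, xmin, xmax, x0), (dy, ymin, ymax, y0)):
        if abs(d) < 1e-12:
            if o < lo or o > hi:          # parallel and outside the slab
                return False
        else:
            ta, tb = (lo - o) / d, (hi - o) / d
            if ta > tb:
                ta, tb = tb, ta
            t0, t1 = max(t0, ta), min(t1, tb)
            if t0 > t1:                   # slabs don't overlap: no hit
                return False
    return True

def pick_hidden_spot(player, candidates, furniture):
    """Return the first candidate point occluded from the player by furniture."""
    for spot in candidates:
        if any(segment_hits_box(player, spot, box) for box in furniture):
            return spot
    return None  # nowhere to hide; caller falls back to an open spot

# One sofa between the player and the far side of the room.
player = (0.0, 0.0)
sofa = (2.0, -0.5, 3.0, 0.5)          # xmin, ymin, xmax, ymax
spots = [(1.0, 2.0), (4.0, 0.0)]      # in the open vs. behind the sofa
```

Here `pick_hidden_spot(player, spots, [sofa])` selects `(4.0, 0.0)`, the point the sofa screens from view, which is the flavour of decision a spatial-mapping system makes with real room meshes.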

The emotions from the game stay with you long after you take the headset off, transforming into memories virtually inscribed onto your environment, like an augmented palimpsest. That is why we must be cognizant of this fact, and conscientious, as we continue to design and develop AR stories and experiences: the audience is inviting you into their physical and mental homes, and your story will leave a virtual footprint there. With both public and private spaces becoming stages for AR stories, let’s remember to always be generous and kind to your user. It will leave a lasting impression.

Let’s continue the conversation. I’m @ARstories on Twitter.

Announcing: Augmented Human: How Technology is Shaping the New Reality

Augmented Human


by Helen Papagiannis, Published by O’Reilly

I’m SUPER excited to announce my book, Augmented Human: How Technology is Shaping the New Reality.

You may remember the book being titled, “The 40 Ideas That Will Change Reality.” I’m thrilled to share it’s now morphed into something even BIGGER and is being published by O’Reilly. Pre-orders are available via Amazon and O’Reilly.

The book looks at how Augmented Reality and this next major technological shift will forever change the way we experience the world. By inspiring design for the best of humanity and the best of technology, Augmented Human is essential reading for designers, technologists, entrepreneurs, business leaders, and anyone who desires a peek at our virtual future.

Dr. Tom Furness, the grandfather of AR/VR, is an incredible human who has dedicated his career to making a positive impact upon humanity with emerging technologies and new inventions. I’m incredibly honoured and grateful to have Dr. Furness writing the foreword to my book.

An early release of the book will be available soon featuring chapters you can read immediately upon pre-order. More news to come on the book in the coming weeks! I truly can’t wait for you to read Augmented Human!

Also check out the first of many articles I’m writing for O’Reilly Radar on the current state of Augmented Reality and designing for the future.

I’m writing this from Augmented World Expo in Silicon Valley where I’m giving a talk on Advancing AR for Humanity on June 10 at 11:30am on the main stage. I hope to see you at one of my upcoming talks in June:

~ Solid Conference in San Francisco, June 23-25
(Save 20% off registration at this link with my code: AFF20)

~ Cannes Lions International Festival of Creativity in France, June 25-26

Thanks for your continued support!
All my best,
~ Helen, @ARstories.

Augmented Reality and Virtual Reality: what’s the difference?

AR and VR are often confused with each other, and used interchangeably in the media, but they are significantly different. Let’s break it down:

Augmented Reality (AR): real, physical world

Virtual Reality (VR): computer-generated environment, artificial world

In VR, the user is completely closed off from the physical world, fully immersed in a computer-generated simulation. E.g.: Oculus Rift

In AR, the user is still in their physical space, now with additional digital data layered on top of the real world. E.g.: Meta Spaceglasses

I often begin my presentations distinguishing between the two. Here’s one of my talks from TEDx in 2010.

The definition of AR is expanding to include things like wearable computing, new types of sensors, artificial intelligence, and machine learning. I call this the second wave of AR. Read about it here in my upcoming book.

And if you’re wondering what the heck the HoloLens is that Microsoft announced, read about it here in my latest article. (Hint: it’s not VR.)

Find me on Twitter: I’m @ARstories.

Reality Has Changed. Microsoft’s HoloLens and what you need to know about the next wave of Augmented Reality

All hands on (holo)deck! 2015 is ramping up to be the year of Augmented Reality (AR). Microsoft threw their hat into the ring today, announcing HoloLens, their AR headset led by Kinect inventor Alex Kipman. Remember “Fortaleza”, the AR glasses we got a peek at in the leaked Xbox 720 document in 2012? Say hello to the HoloLens prototype in 2015.

The community has been quick to point out the similarities between HoloLens and existing AR eyewear like Meta’s SpaceGlasses, but how is HoloLens different?

HoloLens appears to use a Virtual Retinal Display (VRD).

So, what’s VRD, you ask?

VRD mirrors how the human eye works. The back of the eye receives light and converts it into signals for your brain. With VRD, images are projected directly onto the retina, effectively using the back of the eye as a screen.

The result is a more true-to-life image than the ‘ghostly transparent superimposed representation’ (as Gizmodo reporter Sean Hollister describes) we’ve seen with AR eyewear before. Hollister details his experience of Microsoft’s prototype as “standing in a room filled with objects. Posters covering the walls. And yet somehow—without blocking my vision—the HoloLens was making those objects almost totally invisible.” He states, “Some of the very shiniest things in the room—the silver handle of a pitcher, if I recall correctly—managed to reflect enough light into my eyes to penetrate the illusion.”

In an exclusive interview with Wired’s Jessi Hempel, HoloLens inventor Kipman hints at VRD with his description of how HoloLens works by tricking the human brain into seeing light as matter.

“Ultimately, you know, you perceive the world because of light,” Kipman explains. “If I could magically turn the debugger on, we’d see photons bouncing throughout this world. Eventually they hit the back of your eyes, and through that, you reason about what the world is. You essentially hallucinate the world, or you see what your mind wants you to see.”

I personally can’t wait to see what my mind wants me to see, particularly in this second wave of AR. For me, AR is about extending human capacity and the human imagination, not supplanting it. I’ve been working with AR for a decade now and it’s tremendously exciting to see this all quickly becoming a reality. We have a whole new medium waiting to be defined.

Microsoft’s HoloLens is currently a prototype with no price or release date announced, and we’ve yet to see what Magic Leap will unleash into the world, but I can promise you this: AR is coming in hot and fast. We WILL experience the world in unprecedented ways. Reality has changed. Read more about the next wave of AR in my upcoming book. And as always, let’s continue the conversation on Twitter: I’m @ARstories.

Forget AR Dinosaurs, MIT Startup Wants to Bring YOU Back from the Dead

Augmented Reality pioneer Ronald Azuma ends his seminal 1997 essay A Survey of Augmented Reality with the prediction: “Within another 25 years, we should be able to wear a pair of AR glasses outdoors to see and interact with photorealistic dinosaurs eating a tree in our backyard.” Although his prediction would take us to 2022, a few years from now, AR has advanced much more quickly than any of us could have imagined. With the rise of wearables and devices like Meta’s SpaceGlasses, we’re getting closer to a true AR glasses experience, and we WILL get there very soon.

We’ve had AR dinosaurs already appear just about everywhere — apparently a sure-fire source of go-to content. ‘What should we make with AR? Duh, a dinosaur!’


Image Source: The Advertiser

Dinosaurs, shminosaurs.

How about interacting with a realistic virtual long-dead you, resurrected in the backyard, instead? Now that might startle the neighbours.


Image: Screenshot from the startup’s website

An MIT startup wants to bring you back from the dead, creating a virtual avatar that acts “just like you”:

“It generates a virtual YOU, an avatar that emulates your personality and can interact with, and offer information and advice to, your family and friends after you pass away. It’s like a Skype chat from the past.” The premise bears an eerie resemblance to the Channel 4 television series Black Mirror, specifically Series 2, Episode 1, “Be Right Back”, in which we watch the widowed Martha engage with the latest technology to communicate with her recently deceased husband, Ash. Of course, it’s not actually Ash, but a simulation powered by an Artificial Intelligence (A.I.) program that gathers information about him through social media profiles and past online communications such as emails. Martha begins by chatting with virtual Ash and is later able to speak with him on the phone after uploading video files of him, from which the A.I. learns his voice. The startup hopes to immortalize you in a similar fashion by collecting “almost everything you create during your lifetime” and processing that huge amount of information using “complex A.I. Algorithms.”



Images: Black Mirror

But who will curate this mass of information that is “almost everything you create during your lifetime”? In an article in Fast Company, Adele Peters writes, “While the service promises to keep everything you do online so it’s never forgotten, it’s not clear that most people would want all of that information to live forever.” Commenting on how our current generation documents “every meal on Instagram and every thought on Twitter”, Peters asks, “What do we want to happen to that information when we’re gone?”

Will we have avatar curators?

This sentiment echoes director Omar Naim’s 2004 film The Final Cut, starring Robin Williams. Williams plays a “cutter”, someone who has the final edit over people’s recorded histories. An embedded chip records all of your experiences over the course of your life; Williams’s job is to pore over the stored memories and produce a one-minute video of highlights.


Image: The Final Cut (2004)

Will the A.I. algorithm be intelligent enough to do this, distinguishing between your mundane and momentous experiences?

In Black Mirror, Martha ultimately tells simulated Ash, “You’re just a few ripples of you. There’s no history to you. You’re just a performance of stuff that he performed without thinking and it’s not enough.” Will these simulated augmentations of us be “enough”?

Marius Ursache, the startup’s founder, says, “In order for this to be accurate, collecting the information is not enough–people will need to interact with the avatar periodically, to help it make sense of the information, and to fine-tune it, to make it more accurate.”
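A deliberately naive sketch of the retrieval idea behind such avatar services (everything here, the corpus, the scoring, and the messages, is invented for illustration; real systems would use far richer models):

```python
# Toy "avatar": answer a prompt with the archived message that shares
# the most vocabulary with it.

def _words(text):
    """Lowercased words with basic punctuation stripped."""
    return {w.strip(".,?!") for w in text.lower().split()}

def build_avatar(archive):
    """archive: a list of messages the person actually wrote."""
    def reply(prompt):
        pw = _words(prompt)
        # Score each archived message by shared vocabulary with the prompt.
        return max(archive, key=lambda m: len(pw & _words(m)))
    return reply

avatar = build_avatar([
    "I always paint on Sundays.",
    "Call your mum, she worries.",
])
# avatar("do you still paint?") retrieves the message about painting.
```

As Martha discovers in Black Mirror, such a system only ever replays "a few ripples of you": it retrieves, it does not remember.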

This post expands on a recent article I wrote on Spike Jonze’s film Her, where I discuss the film from an AR perspective. Her introduces us to Samantha, the world’s first intelligent operating system, and offers us a glimpse of our soon-to-be augmented life, when our devices come to learn and grow with us and, in the case of this startup, become us. I discuss how our smart devices, like Samantha, will come to act on our behalf. Our smart devices will know us very well, learning our behaviours, our likes and dislikes, our family and friends, even our vital statistics. The next wave of AR combines elements like A.I., machine learning, sensors, and data, all to tell the unique story of YOU. We may just see this story of you continue long after you’re gone.


Image: Spike Jonze’s film Her (2013)

Gartner claims that by 2017 your smartphone will be smarter than you: a gradual confidence will be built in the outsourcing of menial tasks to smartphones, with the expectation that consumers will grow accustomed to apps and services taking control of other aspects of their lives. Gartner calls this the era of cognizant computing and identifies four stages: Sync Me, See Me, Know Me, Be Me. ‘Sync Me’ and ‘See Me’ are occurring now, with ‘Know Me’ and ‘Be Me’ just ahead, as we see Samantha perform in Her.

‘Sync Me’ stores copies of your digital assets, kept in sync across all contexts and endpoints; this archive of an ‘online you’ will be central to the creation of your virtual avatar. ‘See Me’ knows where you are and where you have been, in both the real world and on the Internet, and understands your mood and context to best provide services. If your mood and context can be documented and later accessed to recall how you were feeling in a particular location, this will dramatically shape the curation of the memories available to the A.I. system. ‘Know Me’ understands what you need and want and proactively presents it to you, while ‘Be Me’ is the final step, in which the smart device acts on your behalf based on what it has learned. Documenting your personal needs and wants will paint a clearer picture of the story of you and who you were. The true test of ‘Be Me’ will come once you are six feet under, which raises the question: will we become smarter when we die?

Will you register for such a service?

Let’s continue the conversation on Twitter: I’m @ARstories. And yes I’m still alive.

*Update: January 23, 2015:

Yep, it’s 2015 and I’m *still* alive, and no, this isn’t a bot writing this. However, it could be. You could be receiving a beautiful handwritten note from (A.I.) me right now from the afterlife. Except I didn’t write it. A bot named BOND did, using my penmanship.

More in my upcoming book on the future of reality.

A Day Wearing Augmented Reality Glasses #womenWearingARglasses

Her bio-alarm clock wakes her up gently; she feels renewed. Her glasses give her the stats on her sleep, syncing with the accelerometer in her smartwatch. She’s pleased with the 84% sleep quality rating. She takes a moment to scan her REM log, adding a voice note with the fantastical things she saw in her dreams.

She rises and begins the day with her yoga practice, to stretch mind and body. Her EEG readings are excellent, with a high spike in alpha brain wave activity. She opts for some music from her library based on her heart rate. She smiles when The xx come on over the speakers.

She heads into the shower, but not before removing her glasses (she snickers thinking only ‘white men wearing glass’ actually shower with them on).

Dressed, she walks into the kitchen to make a light breakfast. The Wall Street Journal, The New York Times, and Wired Magazine instantly light up before her, offering the day’s headlines and relevant articles to her week ahead. Her serendipity app brings in a few articles from other news sources she might not necessarily read. She tags ‘more like this’ to the one she likes.

She fields a phone call from her boss while drinking her coffee and is able to reference both his recent email and an article in the morning’s paper, which of course, impresses him.

She walks to work while her glasses provide her with shopping deals of the day. She receives a popup reminder of a friend’s upcoming birthday with a photo of the blouse her friend liked when they were last window shopping together.

She enters a crowded, all male conference room, comfortably taking a seat in the middle of the investors’ meeting.  As she scans the room, her glasses use facial recognition to provide her with everyone’s name and LinkedIn profile.  She greets every member by name, asking about their recent projects, ventures, and successes as provided to her by her glasses.  They’re noticeably impressed.  When she assumes lead on the group presentation, her glasses sync to her visual presentation.  She had used her voice analyzer app last night to rehearse the presentation and tone. She walks the room with confidence and exits a dazzled crowd.

It’s been a long, but fruitful day. She heads out for a cocktail after work. The doors swing open into a crowded lounge and she walks in alone, but not unnoticed.  She checks tomorrow’s calendar with her glasses while ordering a drink.  A handsome stranger approaches her, asking for her name.  Her glasses immediately alert her that he’s 35 and a successful broker, whose address matches his mother’s house.  She smiles and walks away.  Well, at least he didn’t pull the, ‘So, are you a Gemini?’ line. She laughs to herself. Her attention shifts to a guy across the bar, he’s dressed casually, but locks eyes and smiles.  Her glasses tell her he’s 31, a painter, and a volunteer at the children’s hospital art wing.  She sends a friend request to him with her glasses while sending a message, “Always wanted to learn how to paint….Helen”, smiles, and walks away.

She returns home and checks her email on her glasses.  She has several emails congratulating her on a fantastic presentation and requesting meetings.  She replies by checking her availability on her calendar, which appears next to the message on her glasses.  The painter from the bar replies, “Anytime ;)”  She takes off her glasses and turns out the light.

[Hat tip and very gracious bows to James C. Nelson and Dr. Caitlin Fisher]

Google Glass & Augmented Reality Eyewear: “Oh, the Places You’ll Go!” Defining a New Era in Visual Culture

Augmented Reality eyewear and Google’s Glass will take us to new heights, quite literally: the first sequence in the “How It Feels [through Glass]” video was shot via Glass in a hot air balloon.


It was 155 years ago, in 1858, that the first aerial photograph was taken on a balloon flight over Paris, France by the artist Nadar (born Gaspard-Félix Tournachon, 1820–1910). A pioneer in the newly emerging medium of photography, Nadar also attempted underground photography, using artificial light to produce pictures of the catacombs and sewers of Paris. Nadar’s technical experiments and innovation took us, via his camera, to places previously inaccessible to photography, inspiring new ways of seeing and capturing our world.

AR eyewear and Glass offer this same opportunity at a time when AR is emerging as a new medium, which will give way to novel conventions, stylistic modes, and genres. Referencing Dr. Seuss’s book in the title of this article, AR also promises to transport us to wondrous, magical places we’ve yet to see.

This article is a follow up to a post I wrote a year ago posing the questions: Will Google’s Project Glass change the way we see & capture the world in Augmented Reality (AR) and what kind of new visual space will emerge?

As both a practitioner and PhD researcher specializing in AR for nearly a decade, my interests are in how AR will come to change the way we see, experience, and interact with our world, with a focus on the emergence of a new media language of AR and storytelling.

I’ve previously identified Point-of-View (PoV) as one of “The 4 Ideas That Will Change AR”, noting the possibilities for new stylistic motifs to emerge based on this principle. I’d like to revisit the significance of PoV in AR at this time, particularly with the release of Google Glass Explorer Edition. PoV, more specifically, “Point-of-Eye”, is a characteristic of AR eyewear that is beginning to impact and influence contemporary visual culture in the age of AR.

Google Glass

Image: Google Glass

AR eyewear like “Glass” (2013) and Steve Mann’s “Digital Eye Glass” (EyeTap) (1981) are worn in front of the human eye, serving as a camera to both record the viewer’s environment and superimpose computer-generated imagery atop the present environment. With the position of the camera, such devices present a direct ‘Point-of-Eye’ (PoE), as Mann calls it, providing the ability to see through someone else’s eyes.

AR eyewear like Glass remediates the traditional camera, aligning our eye once again with the viewfinder, enabling hands-free PoE photography and videography. Eye am the camera.

Contemporary mass-market digital photography has us forever looking at a screen as we document an event, rather than seeing or engaging with the actual event. As comedian Louis C.K. so facetiously points out, we are continually holding up a screen to our faces, blocking our vision of the actual event with our digital devices. “Everyone’s watching a shitty movie of something that’s happening 10 feet away,” he says, while the ‘resolution on the actual thing is unbelievable’.

Glass presents an opportunity where your experience in that moment is documented as is, without having to stop and grab your camera: through PoE, Glass captures what you are seeing, very close to how you are seeing it. Google co-founder Sergey Brin states, “I think this can bring on a new style of photography that allows you to be more intimate with the world you are capturing, and doesn’t take you away from it.”

Google Glass video

Image: Recording video with Google Glass, “Record What You See. Hands Free.”

I agree with Brin; Glass will bring on new stylistic modes and conventions through PoE, which also appears to be influencing other mediums outside of AR.

Take for instance the viral Instagram series “Follow Me” by Murad Osmann featuring photographs of his hand being led by his girlfriend to some of the world’s most iconic landmarks.


Photographs by Murad Osmann, “Follow Me” series, 2013.

(Similar in style to the Google Glass video still above, in which a ballerina takes the viewer’s hand and leads them.)

The article “How Will Google Glass Change Filmmaking?” identifies two other examples in contemporary music videos: the viral first-person music video for Biting Elbows and the award-winning music video for Cinnamon Chasers’ song “Luv Deluxe”.

In “The Cinema as a Model for the Genealogy of Media” (2002), André Gaudreault and Philippe Marion state, “The history of early cinema leads us, successively, from the appearance of a technological process, the apparatus, to the emergence of an initial culture, that of ‘animated pictures’, and finally to the constitution of an established media institution” (14). AR is currently in a transition period from a technological process to the emergence of an initial AR culture, one of ‘superimposed pictures’, with PoE as a characteristic of the AR apparatus that will impact stylistic modes, both inside and outside the medium, contributing to a larger Visual Culture.

Gaudreault and Marion identify the key players in this process as the inventors responsible for the medium’s appearance, the camera operators for its emergence, and the first film directors for its constitution. ‘Camera operators’ around the world are beginning to contribute to AR’s emergence as a medium and, through this process, toward an articulation of a media language of AR. Mann, described as the father of wearable computing, has been a ‘camera operator’ since the 1990s. In 2013, Google Glass’s early adopter program selected 8,000 ‘camera operators’ to explore these possibilities, and Kickstarter proposals for PoE film projects, including both documentaries and dramas, have since followed from directors. What new stories will the AR apparatus enable? Like cinema before it, what novel genres, conventions, and tropes will emerge in this new medium towards its constitution?

Let’s continue the conversation on Twitter: I’m @ARstories.