Designing the Future of Augmented Reality: slides from PechaKucha


Sometimes you just need to relinquish control.

20 slides, 20 seconds each, and no control of the slide clicker. GO!

I had a fantastic experience at PechaKucha Toronto in December. My presentation, “Designing the Future of Augmented Reality,” is now posted here.

More on the wildly fun International PechaKucha format here.

I have some exciting announcements on upcoming talks and my book to share very soon! Hope to see you at one of these events!

You can also find me on Twitter: I’m @ARstories.

*Update: Feb 24/15: Flattered to be featured as the Presentation of the Day on the PechaKucha website!


Augmented Reality and Virtual Reality: what’s the difference?

AR and VR are often confused with each other, and used interchangeably in the media, but they are significantly different. Let’s break it down:

Augmented Reality (AR): real, physical world

Virtual Reality (VR): computer-generated environment, artificial world

In VR, the user is completely closed off from the physical world, fully immersed in a computer-generated simulation. E.g.: Oculus Rift

In AR, the user is still in their physical space, now with additional digital data layered on top of the real world. E.g.: Meta Spaceglasses

I often begin my presentations distinguishing between the two. Here’s one of my talks from TEDx in 2010.

The definition of AR is expanding to include things like wearable computing, new types of sensors, artificial intelligence, and machine learning. I call this the second wave of AR. Read about it here in my upcoming book.

And if you’re wondering what the heck the HoloLens is that Microsoft announced, read about it here in my latest article. *Hint: it’s not VR.

Find me on Twitter: I’m @ARstories.

Reality Has Changed. Microsoft’s HoloLens and what you need to know about the next wave of Augmented Reality

All hands on (holo)deck! 2015 is ramping up to be the year of Augmented Reality (AR). Microsoft threw its hat into the ring today, announcing HoloLens, an AR headset led by Kinect inventor Alex Kipman. Remember “Fortaleza,” the AR glasses we glimpsed in the leaked Xbox 720 document in 2012? Say hello to the HoloLens prototype in 2015.

The community has been quick to point out the similarities to existing AR eyewear like Meta’s SpaceGlasses, so how is HoloLens different?

HoloLens appears to use a Virtual Retinal Display (VRD).

So, what’s VRD, you ask?

VRD mirrors how the human eye works. The back of the eye receives light and converts it into signals for your brain. VRD projects images directly onto the retina, effectively using the back of the eye as a screen.

The result is a more true-to-life image than the ‘ghostly transparent superimposed representation’ (as Gizmodo reporter Sean Hollister describes it) we’ve seen with AR eyewear before. Hollister details his experience of Microsoft’s prototype as “standing in a room filled with objects. Posters covering the walls. And yet somehow—without blocking my vision—the HoloLens was making those objects almost totally invisible.” He states, “Some of the very shiniest things in the room—the silver handle of a pitcher, if I recall correctly—managed to reflect enough light into my eyes to penetrate the illusion.”

In an exclusive interview with Wired’s Jessi Hempel, HoloLens inventor Kipman hints at VRD with his description of how HoloLens works: by tricking the human brain into seeing light as matter.

“Ultimately, you know, you perceive the world because of light,” Kipman explains. “If I could magically turn the debugger on, we’d see photons bouncing throughout this world. Eventually they hit the back of your eyes, and through that, you reason about what the world is. You essentially hallucinate the world, or you see what your mind wants you to see.”

I personally can’t wait to see what my mind wants me to see, particularly in this second wave of AR. For me, AR is about extending human capacity and the human imagination, not supplanting it. I’ve been working with AR for a decade now and it’s tremendously exciting to see this all quickly becoming a reality. We have a whole new medium waiting to be defined.

Microsoft’s HoloLens is currently a prototype with no price or release date announced, and we’ve yet to see what Magic Leap will unleash into the world, but I can promise you this: AR is coming in hot and fast. We WILL experience the world in unprecedented ways. Reality has changed. Read more about the next wave of AR in my upcoming book. And as always, let’s continue the conversation on Twitter: I’m @ARstories.

Latest AR, VR, Wearable and Digital Tech articles & interviews


Will 2015 Be The Year of Wearable Technology? The Toronto Star.

Will Augmented Reality Make Us Masters of the Information Age? iQ by Intel.

Portraits of Strength Feature, Tech Girls Canada.

What We Really Mean When We Text 150 Identical Emoji in a Row, Motherboard, Vice Media.

An interview with Helen Papagiannis, Augmented Reality Specialist, The Blueprint.

Wearable Tech 2015 Top Influencer

Hungry for more AR, VR, and Wearable Tech in 2015 and beyond? Sign up for updates on my upcoming book.

As always, let’s continue the convo and chat on Twitter: I’m @ARstories. Happy 2015, friends!


Will I be seeing you soon? I need your help designing the future of AR.


Very excited by the next couple of months of talks ahead on the future of Augmented Reality! This is our future to design and I hope you’ll be there to be part of the conversation.

December 9, 2014 Toronto, Ontario, Canada: Girls in Tech Toronto

December 4, 2014 Toronto, Ontario, Canada: PechaKucha IIDEX Canada (Canada’s National Design + Architecture Expo & Conference)

November 19-21, 2014 Visby, Sweden: Augmented Reality and Storytelling, Keynote address and workshop

November 13, 2014 Toronto, Ontario, Canada: Future Innovation Technology Creativity (FITC) Wearables, Co-Emcee

October 22-24, 2014 Halifax, Nova Scotia, Canada: COLLIDE Creative Technology Conference

September 8-9, 2014 Calgary, Alberta, Canada: CAMP Festival – Creative Technology, Art and Design

August 27-29, 2014 London, UK: Science and Information Conference (SAI), Keynote address

…And on the topic of designing the future of Augmented Reality, I’ll be making some announcements about my upcoming book “The 40 Ideas That Will Change Reality” very soon! Thank you dear followers for your continued support and interest in my work! I sincerely hope to see and meet you at one of my upcoming talks soon.

Very best wishes,


How to Leave Your Laptop (at Starbucks) While You Pee: Invoked Computing

Experienced this dilemma? Mark Wilson (@ctrlzee), Senior Writer at Co.Design, tweeted yesterday, “If someone designs a solution to the leave your laptop with a stranger while you pee at starbucks problem, I promise to write about it.” Augmented Reality (AR) and Invoked Computing may just have the solution.


A research group at the University of Tokyo has developed a concept for AR called Invoked Computing, which can turn everyday objects into communication devices. By making a gesture invoke the device you wish to use, you can activate any ordinary object to suit your communication needs. The computer figures out what you want to do and will grant the selected object the properties of the tool you wish to utilize. A proof of concept (see video) has been created for a pizza box which functions as a laptop computer, and a banana which serves as a telephone.
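The idea can be sketched in a few lines of code. The following is a purely hypothetical illustration, not the University of Tokyo system: the gesture names and mapping are invented, but they capture the core of Invoked Computing, where a recognized gesture selects a device and the system grants that device’s properties to whatever everyday object is at hand.

```python
# Hypothetical sketch of the Invoked Computing idea: a recognized gesture
# selects a device, and the system grants that device's properties to an
# ordinary object. The gesture names and mapping are invented for
# illustration; the real system works on camera input and projection.

GESTURE_TO_DEVICE = {
    "hold_to_ear": "telephone",   # e.g. lifting a banana to your ear
    "open_like_lid": "laptop",    # e.g. flipping open a pizza box
}

def invoke(gesture: str, object_name: str) -> str:
    """Grant the selected object the properties of the invoked tool."""
    device = GESTURE_TO_DEVICE.get(gesture)
    if device is None:
        return f"{object_name} remains an ordinary {object_name}"
    return f"{object_name} now behaves as a {device}"
```

Under this toy model, invoking `hold_to_ear` on a banana yields a banana that behaves as a telephone, while an unrecognized gesture leaves the object unchanged.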

Invoked Computing presents a scenario where new functions are layered atop ordinary objects that do not normally possess those traits. It marks the beginning of a new era of responsive environments that are on demand, context-dependent, and needs-driven. Wired writer Bruce Sterling notes that Invoked Computing opens possibilities for sustainability, with virtually no material footprint, because you can invoke and access everything.

In my recent talk at Augmented World Expo (AWE) 2014 in Silicon Valley, following Robert Scoble‘s keynote on “The Age Of Context”, I discussed how, as both a practitioner and a PhD researcher, I’ve watched AR evolve over the past 9 years. I suggested adding two new words to the AR lexicon: overlay and entryway to describe the two distinct waves in AR I’ve observed.

Overlay is exactly as it sounds, and defines the first wave of AR as we’ve grown to know it: an overlay of digital content atop the real world in real time. We are now entering the second wave of AR, entryway, where the definition of AR is expanding to include things like wearables, big data, artificial intelligence, machine learning, and social media. This second wave represents a more immersive and interactive experience that is rooted in contextual design. Invoked Computing is a prime example, as it combines the overlay properties we’ve seen in the first wave of AR with an on-demand experience that is personalized to the end user.

So, go ahead and pee; that laptop will just shift back into a pizza box when you no longer need it.

Invoked Computing is one of The 40 Ideas That Will Change Reality (the title of my upcoming book).

Let’s continue the conversation. Find me on Twitter, I’m @ARstories.

Forget AR Dinosaurs, MIT Startup Wants to Bring YOU Back from the Dead

Augmented Reality pioneer Ronald Azuma ends his seminal 1997 essay A Survey of Augmented Reality with the prediction: “Within another 25 years, we should be able to wear a pair of AR glasses outdoors to see and interact with photorealistic dinosaurs eating a tree in our backyard.” Although his prediction points to 2022, a few years from now, AR has advanced much more quickly than any of us could have imagined. With the rise of wearables and devices like Meta’s SpaceGlasses, we’re getting closer to a true AR glasses experience, and we WILL get there very soon.

We’ve already seen AR dinosaurs appear just about everywhere; apparently they’re a sure-fire source of go-to content. ‘What should we make with AR? Duh, a dinosaur!’


Image Source: The Advertiser

Dinosaurs, shminosaurs.

How about interacting with a realistic virtual version of your long-dead self, resurrected in the backyard, instead? Now that might startle the neighbours.


An MIT startup wants to bring you back from the dead, creating a virtual avatar that acts “just like you”:

“It generates a virtual YOU, an avatar that emulates your personality and can interact with, and offer information and advice to, your family and friends after you pass away. It’s like a Skype chat from the past.” The concept bears an eerie resemblance to the Channel 4 television series Black Mirror, specifically Series 2, Episode 1, “Be Right Back,” in which widowed Martha engages with the latest technology to communicate with her recently deceased husband, Ash. Of course, it’s not actually Ash, but a simulation powered by an Artificial Intelligence (A.I.) program that gathers information about him through social media profiles and past online communications such as emails. Martha begins by chatting with virtual Ash and is later able to speak with him on the phone after uploading video files from which the A.I. learns his voice. The startup hopes to immortalize you in a similar fashion by collecting “almost everything you create during your lifetime” and processing this huge amount of information using “complex A.I. Algorithms.”



Images: Black Mirror

But who will curate this massive amount of information that is “almost everything you create during your lifetime”? In an article in Fast Company, Adele Peters writes, “While the service promises to keep everything you do online so it’s never forgotten, it’s not clear that most people would want all of that information to live forever.” Commenting on how our current generation documents “every meal on Instagram and every thought on Twitter,” Peters asks, “What do we want to happen to that information when we’re gone?”

Will we have avatar curators?

This sentiment echoes director Omar Naim’s 2004 film The Final Cut, starring Robin Williams. Williams plays a “cutter,” someone who has the final edit over people’s recorded histories. An embedded chip records all of your experiences over the course of your life; Williams’s job is to pore over all of the stored memories and produce a one-minute video of highlights.


Image: The Final Cut (2004)

Will the startup’s A.I. algorithm be intelligent enough to do this and distinguish between your mundane and momentous experiences?

In Black Mirror, Martha ultimately tells simulated Ash, “You’re just a few ripples of you. There’s no history to you. You’re just a performance of stuff that he performed without thinking and it’s not enough.” Will these simulated augmentations of us be “enough”?

Marius Ursache, the startup’s founder, says, “In order for this to be accurate, collecting the information is not enough–people will need to interact with the avatar periodically, to help it make sense of the information, and to fine-tune it, to make it more accurate.”

This post expands on a recent article I wrote on Spike Jonze’s film Her, where I discuss the film from an AR perspective. Her introduces us to Samantha, the world’s first intelligent operating system, and offers a glimpse of our soon-to-be-augmented life, when our devices come to learn and grow with us and, in the case of this startup, become us. I discuss how our smart devices, like Samantha, will come to act on our behalf. Our smart devices will know us very well, learning our behaviours, our likes and dislikes, our family and friends, even our vital statistics. The next wave of AR combines elements like A.I., machine learning, sensors, and data, all to tell the unique story of YOU. With this technology, we may just see the story of you continue while you’re long gone.


Image: Spike Jonze’s film Her (2013)

Gartner claims that by 2017 your smartphone will be smarter than you. Confidence will gradually build in the outsourcing of menial tasks to smartphones, with the expectation that consumers will become more accustomed to smartphone apps and services taking control of other aspects of their lives. Gartner calls this the era of cognizant computing and identifies its four stages as: Sync Me, See Me, Know Me, Be Me. ‘Sync Me’ and ‘See Me’ are currently occurring, with ‘Know Me’ and ‘Be Me’ just ahead, as we see Samantha perform in Her.

‘Sync Me’ stores copies of your digital assets, kept in sync across all contexts and endpoints. This data storage, an archive of an ‘online you,’ will be central to creating your virtual avatar. ‘See Me’ knows where you are and where you have been, in both the real world and on the Internet, and understands your mood and context to best provide services. If your mood and context can be documented and later accessed to know how you were feeling in a particular location, this will dramatically affect the curation of your memories by the A.I. system. ‘Know Me’ understands what you need and want and proactively presents it to you, with ‘Be Me’ as the final step, where the smart device acts on your behalf based on what it has learned. Being able to document and access your personal needs and wants will paint a clearer picture of the story of you and who you were. The true final step of ‘Be Me’ will be put to the test once you are six feet under, which begs the question: will we become smarter when we die?
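As a purely illustrative sketch, Gartner’s four stages can be modelled as an ordered progression in which each stage builds on the ones before it. Only the stage names come from the report; the data model and the helper function are my own invention.

```python
# Gartner's four cognizant-computing stages as an ordered progression.
# Only the stage names come from the report; modelling them as a list
# and querying progress this way is an invented illustration.

STAGES = ["Sync Me", "See Me", "Know Me", "Be Me"]

def stages_reached(current: str) -> list:
    """Return every stage up to and including the current one."""
    if current not in STAGES:
        raise ValueError(f"unknown stage: {current!r}")
    return STAGES[: STAGES.index(current) + 1]
```

In this toy model, reaching ‘See Me’ implies ‘Sync Me’ has already happened, mirroring how the article describes the first two stages as underway and the last two as just ahead.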

Will you register?

Let’s continue the conversation on Twitter: I’m @ARstories. And yes I’m still alive.

*Update: January 23, 2015:

Yep, 2015, I’m still *still* alive, and no, this isn’t a bot writing this. However, it could be. You could be receiving a beautiful hand-written note from (A.I.) me right now from the afterlife. Except I didn’t write it. A bot named BOND did, using my penmanship.

More in my upcoming book on the future of reality here.