Increasingly, when you take out your phone, your input is not through the screen but through the lens. Snapchat, FaceTime, Instagram Stories and the like let us connect through recorded material rather than voiced or typed messages. But even that is changing: as digital projections of objects are imposed onto the physical world, creativity becomes the key to next year's UI door.
The state of technology
During the 2017 WWDC keynote, Apple introduced ARKit for the first time: a developer toolkit created to usher in the augmented reality revolution on iOS devices. With ARKit's advanced capabilities, developers get a quick and easy launching point for innovative applications. But with ARKit, Apple also wants to take AR mainstream: with a simple iOS 11 update to the devices people already own, all iPhone and iPad users gain instant access to true augmented reality applications.
Google, on the other hand, has launched ARCore, an Android software development kit (SDK) that brings augmented reality to existing and future Android phones. Developers can download the SDK and start building their AR experiences. Google already had Tango (the company's augmented reality platform, first released in June 2014), but it was very limited in terms of reach because it relied on specific hardware. ARCore is meant to work across a far broader range of devices.
So reach is no longer the issue; instead, many AR applications are challenging the existing UX rules. Some are very useful, like IKEA Place or Amazon's AR View, which let you try products in the comfort of your home and see whether they fit with the rest before buying, or MeasureKit, which turns your iPhone into a handy pocket measuring tool.
But others can also be just fun. Pokémon Go made headlines a year ago; the legendary project brought AR gaming to the crowds by placing battles in players' real surroundings. Ingress was Google's first entry into the AR game market, basically an MMO that pits players against each other in a fight for control of virtual territories. And countless other creative AR applications are being built every day: a very interesting and growing list of AR apps lives on Product Hunt.
While AR keeps driving innovation, especially with the rise of standard AR engines, it is expected to be swiftly woven into everyone's daily tasks. AR is growing fast and it's on a promising track; Apple CEO Tim Cook has even boasted that AR will be bigger than virtual reality (VR). With that said, AR presents a new frontier of HCI that will redefine how users interact with information in the coming years. The implication is that today's UX designers will need to be well equipped with the knowledge required to build engaging AR-based applications.
The inspiration for this post was a new find on Product Hunt that I really related to. You see, I almost fractured my jaw in an awful little accident a year back on the Parisian subway. I was going down the stairs while checking my Twitter feed and missed a step. Thank god nothing was broken, despite the excruciating pain that made it seem like it, but I went ahead and deleted my Twitter account right afterwards as an impulsive, angry reaction.
TweetReality is an AR app that brings tweets, search, mentions, profiles and all your favorite features to a virtual screen overlaid on the world through your iPhone or iPad. Now that would have saved me the fall and a week of medication.
AR interfaces have two main properties that make them stand out. The first is 360 degrees of potential space for content: unlike the usual screens, which provide pretty tight real estate, AR makes it possible to organize information all around the user.
The second is the ability to organize and interact with information along the z-axis. In other words, designers will be placing UI elements directly into space. Content strategy and information architecture will have to take the tree scheme to another level, where objects and assets are placed at a z-depth.
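To make the idea concrete, here is a minimal sketch using ARKit with SceneKit (Apple's frameworks mentioned in this post's context); the panel sizes, positions and function name are illustrative assumptions, not taken from any shipping app. Two content panels are placed at different depths along the z-axis, so one sits "behind" the other in the user's space:

```swift
import ARKit
import SceneKit

// Illustrative sketch: layering content at different z-depths.
// Negative z points away from the camera in SceneKit's coordinate space.
func addLayeredPanels(to sceneView: ARSCNView) {
    let front = SCNNode(geometry: SCNPlane(width: 0.4, height: 0.3))
    front.position = SCNVector3(0, 0, -1.0)   // 1 m in front of the camera origin

    let back = SCNNode(geometry: SCNPlane(width: 0.4, height: 0.3))
    back.position = SCNVector3(0, 0, -1.5)    // half a metre deeper along the z-axis

    sceneView.scene.rootNode.addChildNode(front)
    sceneView.scene.rootNode.addChildNode(back)
}
```

The information architecture question becomes which content deserves the nearer layer and which can live deeper in the scene.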
Since the input in AR comes through the lens, the digital output is based on interpreting real-world context. The Snapchat app, for example, needs to recognize a face before applying a filter to it; the contextual input here is the user's face. This means UX designers will have to create user actions and flows that adapt to the physical space on which the digital elements are overlaid.
UX designers also need to build reactive interfaces: the technology must respond to new external information and take into account real-time changes in the users' surroundings.
Moreover, research and testing will have to ensure the technology adapts to different physical environments and conditions such as lighting, weather, altitude, and whether the setting is interior or exterior. AR developers and designers should put great focus on three main capabilities: motion tracking, environmental understanding, and light estimation.
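In ARKit, those three capabilities map quite directly onto the session configuration. A minimal sketch (assuming iOS 11+ and an ARSCNView already on screen): world tracking provides the motion tracking, plane detection provides the environmental understanding, and the light-estimation flag lets virtual materials react to real lighting.

```swift
import ARKit

// Illustrative sketch: enabling the three core AR capabilities.
func startSession(on sceneView: ARSCNView) {
    let configuration = ARWorldTrackingConfiguration() // motion (world) tracking
    configuration.planeDetection = .horizontal         // environmental understanding
    configuration.isLightEstimationEnabled = true      // light estimation
    sceneView.session.run(configuration)
}
```

ARCore exposes the same trio on Android, which is why the post treats them as the shared foundation of both platforms.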
New interfaces call for new interaction standards. Existing computer interfaces have shaped many behaviors that have become rather natural over time: clicking our way through, keying in our text inputs, pinching to zoom in and out, swiping right or left to like or dislike, etc. These behaviors were shaped with the aim of getting as close as possible to the way we interact with the physical world, using peripherals like the mouse, the keyboard and the touchscreen. AR, however, is taking us even further toward a system of natural, intuitive human interactions.
How? By basing interaction on hand gestures. For now, AR is mainly confined to smartphones and tablets, so we are still using a set of traditional interactions. However, as we move toward AR wearables like the Microsoft HoloLens and Meta headsets, we see more of a shift in human-computer behavior:
Users here have to reach out to digital objects in order to use them, as if they existed in the physical world. The one difference is haptic feedback, which the AR environment does not provide at all. Another challenge is to display interfaces in a way that does not restrict users' movements in their concrete setting.
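On today's handheld devices, the nearest equivalent of "reaching out" is hit-testing a familiar touch gesture against the 3D scene. A minimal sketch (assuming ARKit with SceneKit and UIKit; the highlight effect is an illustrative stand-in for whatever reaction the object should have):

```swift
import ARKit
import UIKit

// Illustrative sketch: a tap is hit-tested against the rendered 3D scene,
// and the touched virtual object reacts as if it were a physical one.
@objc func handleTap(_ gesture: UITapGestureRecognizer) {
    guard let sceneView = gesture.view as? ARSCNView else { return }
    let point = gesture.location(in: sceneView)
    if let hit = sceneView.hitTest(point, options: nil).first {
        // The tapped node stands in for a digital object placed in the room.
        hit.node.geometry?.firstMaterial?.emission.contents = UIColor.yellow
    }
}
```

Headsets replace the tap with tracked hand gestures and gaze, but the underlying question is the same: which object in space did the user mean?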
Another interaction supported by the HoloLens involves a second user: one user can apply diagrams and other graphics onto an environment perceived and shared by another.
AR in this example operates as a non-command user interface, in which tasks are accomplished using only contextual information, without requiring any additional input from the user. This not only helps decrease the interaction cost of performing a task but also combines multiple information sources, minimizing the attention switching from one task to another.
As more applications grow to take advantage of this trend, augmented reality's guidelines may well evolve to encompass much more than they do now. However, it is only by understanding users' needs and goals that developers and designers will be able to create adapted, effective augmented reality.
Who knows: in a few years, physical and virtual realities may have merged, and tapping into the air with AR smart glasses might be as normal and intuitive to us as recording “Stories” and speaking to Siri are today. As designer and filmmaker Keiichi Matsuda has brilliantly depicted in his visionary concept film Hyper-Reality: