How to design common/basic interactions in VR

During my three years working at Accedo, I have had the opportunity to collaborate with the Innovations department and drive the VR efforts, creating many projects and applications for consuming video in VR. Since Cardboard, VR has been progressively improving, adding new ways to interact, and as a designer, I've been focusing on adapting experiences to make the most of it. Here are some general guidelines that I've learned designing video VR applications for high-end devices such as the HTC Vive and Oculus Rift.

Forget what you know about other platforms

When we started to create video VR applications in 2014, we quickly realized how different it was from designing for TV, desktop or mobile devices. We had done dozens of video applications before. When creating new applications, we almost never start from scratch; we reuse UX components that already exist and have been proven, and we put them together. A carousel, a shelf, a hero banner, a grid… There is indeed room for innovation, but if you compare Netflix with any other local video application you'll see what I'm talking about: they all look the same.

Simple shelf component in VR (Image: own creation)

In the example above you can see how a simple shelf component, widespread in TV, desktop and mobile, needs to be seen with new eyes.

First, we needed to decide what this list of videos should look like. Should it scroll horizontally or vertically? Should it be a single line or multiple? Straight or in a wheel shape? In front of the visitor or attached to the controller? After a few rounds of testing, we settled on a semicircular carousel within hand's reach.

Then we moved on to the interaction: how to scroll it and how to open one of the items in the list. For the scroll, we could have used arrows triggered by a tap, click and drag, small swipes with the thumb, automatic scrolling based on the position of the head, etc. I guess you get the point; everything is still to be decided. How easy it was designing for the iPad!

Some VR experiences just copy the iPad layout in VR (Image: airVR)

Of course, we could just take the layout that works today on the iPad and place it in front of the visitor in VR. That is actually what most of the major video companies are doing. But no, we didn't want to build the virtual-iPad-in-your-face experience; we wanted to do something else: break what we have into pieces and rethink everything from scratch. To do this, we needed to understand the new space and the new ways to interact with it.

Prepare a simple and quick way to prototype

To test VR experiences we can't rely on the traditional prototyping tools that we use to simulate interaction with computers and mobile devices. There is a simple reason for this: there were none for VR, at least when we started designing this experience in late 2016. There are apps to create a layout in VR, but I don't yet know of any that simulates interaction and gestures in VR. Please, if you know of or develop one, send me an email!

So the way we simulated VR interaction is the equivalent of paper prototyping, but in three dimensions. Needless to say, paper sketching might be necessary before 3D prototyping; this is up to you. Some people are more comfortable with a pen, some go directly to Sketch, and some people are more physical. If you are one of the latter, there is a very low-cost (and efficient) way to recreate the same volumes that you are going to experience in the virtual world.

I built a model of the HTC Vive controller out of foam, with the action buttons marked in white for clarity. I could swap actions by pinning different icons to it, or even attach UI elements to simulate scrolls and carousels around the controller. This model gave me the tool I needed to imagine and test experiences in three dimensions. Additionally, I could put the controller in people's hands and ask them how they would rotate an element or select an object. This gave me a solid grounding in what feels natural to most people. Ready to move to the next stage.

Common/basic Interactions

After breaking your molds and finding a way to prototype in VR, you can start thinking about some basics of VR interaction: point and click, selecting/grabbing, and scrolling.

Point and click

When you enter almost any VR experience using a controller, you'll see a laser coming out of your hands. Unless you are a stormtrooper or a university lecturer, you rarely point at things with a laser. Yet in VR, this convention is becoming the standard, and it is surprisingly intuitive. Even people who have never interacted with VR get the idea immediately:

The laser pointer is the new mouse cursor (Image: Google)

As on traditional devices, the pointer hovers over elements on the screen, which can be flat or volumetric. The feedback that shows a UI component is interactive is critical. You don't want your visitors to be clicking around in all directions like a drunk cowboy, or missing some important part of your app. There is a general feeling when using any VR app: have I missed something? Many items in VR are purely decorative or there to produce a feeling of presence, so it is necessary to create a clear way to make the interactive ones look "clickable." Better to err on the heavy side than on the subtle side. The current resolution of the devices makes subtle UI changes barely visible, so go for it!
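To make this concrete, here is a minimal sketch of how a laser-pointer hover test can work under the hood: cast a ray from the controller and check it against a coarse hit sphere around each interactive element, so the hovered one can be highlighted strongly. All names (`Vec3`, `Interactable`, `hoverTarget`) are illustrative, not from any specific engine.

```typescript
type Vec3 = { x: number; y: number; z: number };

const sub = (a: Vec3, b: Vec3): Vec3 => ({ x: a.x - b.x, y: a.y - b.y, z: a.z - b.z });
const dot = (a: Vec3, b: Vec3): number => a.x * b.x + a.y * b.y + a.z * b.z;

interface Interactable {
  id: string;
  center: Vec3;   // world position of the element
  radius: number; // coarse hit sphere around it
}

// Returns the id of the closest element hit by the laser ray, or null.
// `origin` is the controller position; `dir` is a unit-length ray direction.
function hoverTarget(origin: Vec3, dir: Vec3, items: Interactable[]): string | null {
  let best: { id: string; t: number } | null = null;
  for (const item of items) {
    const oc = sub(item.center, origin);
    const t = dot(oc, dir);          // projection of the center onto the ray
    if (t < 0) continue;             // element is behind the controller
    const d2 = dot(oc, oc) - t * t;  // squared distance from the center to the ray
    if (d2 <= item.radius * item.radius && (!best || t < best.t)) {
      best = { id: item.id, t };
    }
  }
  return best ? best.id : null;
}
```

Whatever `hoverTarget` returns gets the heavy, unmissable highlight; everything else stays quiet, so decorative items never look clickable.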

Once the user is pointing at the object that you want them to interact with, how do they interact? This question might be trivial for common devices, but when you are holding two HTC remote controls in your hands, you realize that it is far from having a clear answer.

The HTC Vive controller

Selecting objects / Grabbing

We could talk about the simple interaction where you tap an object and new content appears, or the space around you changes for a new one, but that would be too boring, so I prefer to talk about something that has no equivalent on any other platform: grabbing. We are used to grabbing objects with our hands in the real world. Actually, this was the only way we had to interact with the environment until computers arrived. In a virtual environment, where objects are perceived as volumetric, we thought that grabbing them would make total sense to our brains.

Our video consumption experience was designed to be used while sitting, avoiding teleporting. We decided that because video applications are traditionally used on a sofa. It was clear that we were not designing a game but a video consumption application, so we focused on how to make the video the central part of the experience.

We first explored placing the content carousel within hand's reach and interacting only by touching elements with the remote control, without a laser pointer, but we quickly saw that people did not imagine they could touch objects with the controller. We don't yet have gloves or IR sensors to see our hands in VR; we only see the remote, and we are already grabbing it, so why grab another object? This is probably related to the HTC Vive's representation of the controller in the virtual world. The Oculus Rift has Touch controllers that are frequently represented virtually as hands, which would make the grabbing interaction feel much more natural.

However, as mentioned earlier, the laser pointer felt very intuitive to everyone, so we continued using it as the primary way to interact with the environment, which also allowed us to place elements out of hand's reach.

By using the laser combined with grabbing we are doing telekinesis. (Image: own creation)

Grabbing an element means attaching its position to the controller, so it moves when we move our hand. It feels natural because it is the way we already interact with objects in the real world. By combining the laser with grabbing, we are doing telekinesis, i.e. moving objects remotely. In our case, we decided to make the selected object magically fly to the user's hand. That way you can see it closely and interact with it. We used this interaction to bring the players and their metadata closer to us, so this information is conveniently attached to the controller while we keep watching the game on the big screen.
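A minimal sketch of this "fly to the hand" behavior: every rendered frame, the grabbed object moves a fraction of the remaining distance toward the controller, then snaps and stays attached once it is close enough. The names and the per-frame fraction (0.15) are illustrative values, not what we shipped.

```typescript
type Vec3 = { x: number; y: number; z: number };

// Move a fraction of the remaining distance toward the hand each frame,
// which gives the object an eased, "magical" deceleration as it arrives.
function stepGrab(object: Vec3, hand: Vec3, fraction = 0.15): Vec3 {
  return {
    x: object.x + (hand.x - object.x) * fraction,
    y: object.y + (hand.y - object.y) * fraction,
    z: object.z + (hand.z - object.z) * fraction,
  };
}

// Called once per rendered frame while the grab is active.
function updateGrabbedObject(object: Vec3, hand: Vec3): Vec3 {
  const next = stepGrab(object, hand);
  const remaining = Math.hypot(hand.x - next.x, hand.y - next.y, hand.z - next.z);
  // Close enough: snap to the hand so the object follows controller motion exactly.
  return remaining < 0.01 ? { ...hand } : next;
}
```

Once snapped, the object's position simply equals the controller's, so metadata panels brought in this way track every hand movement.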

Scrolling

One of the main parts of our application was watching a replay of a sports game. The user holds a virtual second screen attached to the controller. The objective was to create a way to scrub the video so users could watch their favorite bits of action over and over again. First, we tried small swipes left and right, as on the Apple TV for example, so the video moved a few seconds forward or backward. This worked, but the thumb soon starts to get tired when it's done too much.

Then we found that a circular scrolling motion adapts more naturally to the morphology of the thumb. The circular shape of the touchpad encourages this type of interaction, guiding the movement of the thumb. Add some haptic vibration tied to the rotation speed, and the trackpad feels like a magic analog dial.

The VR control becomes a magic tool
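The dial can be sketched like this: track the angle of the thumb on the round touchpad and turn the change in angle into seconds of video, taking care that the angle wrapping around the -π/+π seam doesn't cause a jump. Mapping one full clockwise turn to 10 seconds forward is an illustrative choice, not the value we used.

```typescript
const SECONDS_PER_TURN = 10; // illustrative: one full turn scrubs 10 s

// Touchpad samples arrive as (x, y) in [-1, 1]; atan2 gives the thumb angle.
function touchAngle(x: number, y: number): number {
  return Math.atan2(y, x);
}

// Signed smallest angular difference, so crossing the seam doesn't jump.
function angleDelta(prev: number, next: number): number {
  let d = next - prev;
  while (d > Math.PI) d -= 2 * Math.PI;
  while (d < -Math.PI) d += 2 * Math.PI;
  return d;
}

// Clockwise motion (negative mathematical angle) scrubs forward.
function scrubSeconds(prev: number, next: number): number {
  return (-angleDelta(prev, next) / (2 * Math.PI)) * SECONDS_PER_TURN;
}
```

The rotation speed that drives the haptics falls out of the same math: it is just `angleDelta` divided by the frame time.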

By extension, this circular movement can also be used in other scenarios: scrolling lists, text, and circular carousels. Small thumb flicks are discrete jumps, while the circular thumb motion is a continuous movement ad infinitum. Clockwise feels more natural for moving left-to-right and top-to-bottom, counter-clockwise for right-to-left and bottom-to-top.

In theory, all of that sounds very good, but when testing with real people you quickly realize that they need a little push to understand it the first time. Once they have tried it, there is no way back; it becomes the "swipe-left, swipe-right" of the iPad. Then comes the logical question: how do you show your users that they are one gesture away from VR heaven?

New interactions? Don’t be afraid of attaching a legend to the controller

Getting familiar with the remote is an extra step to learn when starting with VR as a user. Some of the controllers have up to 5 different inputs, multiplied by two controllers at the same time. This can be very overwhelming for a VR newbie. A natural way to help users is to place a legend attached to the remote to indicate the possible interactions at any given moment.

Another aspect to take into account is context: in our experience, the controller has different actions depending on the situation. On the main screen it scrolls and opens basketball games, but once a game is selected the possible interactions are only playing the video or closing the view.

Actions with context need to have legends (Image: own creation)
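A simple sketch of how such a context-dependent legend can be driven: a lookup from the current screen to the label for each physical input, where a missing entry means that input's hint is hidden. The contexts and labels below are illustrative, modeled on the basketball app described above.

```typescript
type Context = "main-screen" | "game-selected";
type Input = "trackpad" | "trigger" | "grip";

// What each physical input does in each context; absent inputs do nothing there.
const LEGENDS: Record<Context, Partial<Record<Input, string>>> = {
  "main-screen": {
    trackpad: "Scroll games",
    trigger: "Open game",
  },
  "game-selected": {
    trigger: "Play video",
    grip: "Close view",
  },
};

// Returns the label to draw next to an input on the virtual controller,
// or null to hide that hint entirely.
function legendFor(context: Context, input: Input): string | null {
  return LEGENDS[context][input] ?? null;
}
```

Keeping the legend data-driven like this means the hints can never drift out of sync with what the controller actually does in each situation.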

There is a saying in UX:

A user interface is like a joke. If you have to explain it, it’s not that good.

Most of the time I agree, but not yet for VR. Doesn't the keyboard still have printed letters and actions? The VR remote is a kind of "virtual keyboard" with many possible actions at once, so at least for now, I recommend attaching this legend to the remote.

Conclusion

The interactions described here are probably not the best; this is not intended to be a definitive solution, but rather a guide in the quest for VR standards. It is going to take time to define standards in VR interaction, something perceived as natural as scrolling or the pinch-to-zoom gesture on an iPad. We are getting there, slowly but steadily.

VR designers can make the most of this period of freedom and experiment, creating cool and, why not, extravagant new solutions. Other platforms went through the same period of wildness before reaching a point of standardisation where apps are built from blocks. Now is the time to make mistakes, try, learn and try again. Let's enjoy it!

This post was by José Somolinos, UX Product Manager, Accedo

You can also read this on Medium.
