


Until very recently, navigating and interacting in Virtual Reality has largely depended on using a gamepad from a gaming console, like an XBOX 360 controller — but despite VR’s newness, not much has changed in the entire 33-year history of its still-predominant input device. Using a gaming controller to handle the input requirements for VR hardware is a little like plugging a mouse into your new iPhone (warning: you’re going to need about 7 different Thundertooth/Stormcloud/Bluebeam/LightningHopkins adapters to make this work). Since the advent of the immortal rectangular NES controller in 1983, with its 4-directional joypad on the left side for moving, its dual “A” and “B” buttons on the right for doing things, and a few buttons in the center for meta-commands like pausing the whole game or bringing up a non-diegetic menu, the essential configuration of console controllers has remained unchanged for over three decades.

It’s easy, it’s efficient, it works. As the video game gods mutter to themselves on the snowy peak of Mt. Olympus, “if it ain’t broke, don’t fix it.” A modern PS4 or Xbox 360 controller is the same as an NES controller — only wireless, with many additional buttons to accommodate video games’ increased complexity and nuance, and with a more ergonomic design.

The advent of an additional gamepad joystick, which we saw with the Nintendo 64 and with the PlayStation DualShock, perhaps the best video game controller of all time, made a huge contribution to the evolution of console games. This enabled the bifurcation of two formerly inseparable variables, where the player looks and where the player goes, both previously controlled by the single directional pad or joystick of pre-N64, pre-DualShock controllers. With these new twin joysticks, a console game could now allow players to look and aim around a full 360 degrees while steering themselves in any independent direction. For example, before the integration of a second directional joystick an FPS player could strafe, turn, and walk forward and backward, but they could only shoot in one direction — straight ahead (think Doom II or Wolfenstein). The entire FPS genre benefitted once a second joystick for controlling player head/aim orientation came into play — imagine Halo if you could only shoot straight ahead.
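To make that bifurcation concrete, here’s a minimal Python sketch of a twin-stick update loop. All the names and constants are illustrative rather than any real engine’s API: the right stick steers the look direction while the left stick independently translates the player relative to that direction.

```python
import math

def twin_stick_update(pos, yaw, pitch, left_stick, right_stick, dt):
    """One frame of twin-stick input: two independent streams.

    left_stick / right_stick are (x, y) deflections in [-1, 1];
    pos is an (x, y) ground-plane position. Names are hypothetical.
    """
    MOVE_SPEED, TURN_SPEED = 3.0, 2.0  # meters/sec, radians/sec (assumed)

    # Right stick: look/aim, a full 360 degrees, independent of movement.
    yaw = (yaw + right_stick[0] * TURN_SPEED * dt) % (2 * math.pi)
    pitch = max(-1.5, min(1.5, pitch + right_stick[1] * TURN_SPEED * dt))

    # Left stick: strafe (x) and walk (y), relative to the current yaw.
    # Deflection scales speed: a light push walks, a full push runs.
    fwd = (math.sin(yaw), math.cos(yaw))
    right = (math.cos(yaw), -math.sin(yaw))
    dx = (right[0] * left_stick[0] + fwd[0] * left_stick[1]) * MOVE_SPEED * dt
    dy = (right[1] * left_stick[0] + fwd[1] * left_stick[1]) * MOVE_SPEED * dt
    return (pos[0] + dx, pos[1] + dy), yaw, pitch
```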

But the experience of sitting in front of a screen versus strapping on a virtual reality headset is different. When you can turn your actual head to change your view and orientation within the game, suddenly that second joystick becomes superfluous. Why “turn” the avatar’s head with a joystick when you can just turn your head in the real world?

This prompts the question: how else can we reduce our reliance on handheld gaming console controllers in VR and improve our experience by using VR’s inherent physicality and involvement of position and movement in space to our advantage? In what other ways can we replace the buttons and joysticks with real movements, thus making our interfaces more intuitive while freeing our hands from having to hold onto something like a little plastic sandwich every time we interact with a digitally simulated environment?

The answer seems to lie in breaking out the various input streams controlled by the gaming controller and applying them to specific parts of the human body. For example, we already took that second “look/aim” joystick and transferred its input stream to the human head and neck, thanks to the positional and gyroscopic sensors within the VR headset — now, when you turn your head, the view changes exactly as it did when you used the second joystick. This forms a 1:1 correlation between your real-world movement and your movement within the VR experience. Using our head to control orientation in VR seems like an intuitive and natural choice. There’s a natural mapping of your view to your neck and your eyes.
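In code terms, the entire “look” stream simply moves off the gamepad. A hedged sketch, with `Headset.tracked_orientation()` standing in for whatever pose-query call a given VR runtime actually exposes:

```python
class Headset:
    """Stand-in for a real HMD tracking API (hypothetical)."""
    def tracked_orientation(self):
        # yaw, pitch, roll derived from the HMD's gyroscopic
        # and positional sensors; zeros here as a stub.
        return (0.0, 0.0, 0.0)

def update_view(headset: Headset):
    # With a gamepad, the look stream had to be integrated every frame:
    #   yaw += right_stick_x * TURN_SPEED * dt
    # With an HMD there is no integration step: the view simply *is*
    # the tracked head pose, a 1:1 copy of the user's real movement.
    return headset.tracked_orientation()
```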

Pointing, manipulation, and actuation map naturally to the arms, wrists, hands, and fingers. The functions achieved by the trigger buttons and the buttons on the right-hand side can all be extrapolated to our arms and hands in an immediately intuitive way.
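As a rough illustration of that extrapolation (the API names `tracked_pose`, `trigger_pressed`, and `nearest_object` are invented for the sketch), a tracked hand controller can replace both the pointing stick and the action buttons at once:

```python
GRAB_RADIUS = 0.1  # meters; an assumed reach tolerance

def hand_update(controller, scene, held):
    """Pointing and actuation mapped to the hand instead of a gamepad.

    Returns the object currently held, or None. All objects here
    (controller, scene) model a hypothetical VR runtime, not a real one.
    """
    pose = controller.tracked_pose()        # 6-DoF position + orientation
    if held:
        held.move_to(pose)                  # manipulation: object follows hand
        if not controller.trigger_pressed():
            return None                     # release: open your hand
        return held
    target = scene.nearest_object(pose.position, GRAB_RADIUS)
    if target and controller.trigger_pressed():
        return target                       # actuation: squeeze to pick up
    return None
```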

And movement maps naturally to your legs and your spine as you adjust your attitude and posture.

These kinds of correlations lead to an intuitive experience for users because they leverage the natural way users interact with the world. When we see an object in real life we want to activate, we don’t blink at it — we pick it up. Aligning the way we interact with VR with the way we interact with day-to-day life provides an exciting opportunity to make computing more direct and less abstracted. Eliminating the middleman of the gamepad controller in areas where it makes sense can make human-computer interaction more intuitive and natural.

When designing for VR with these three systems in mind — orientation, pointing/manipulation/actuation, and moving — some crucial considerations have come to light. For example, it’s wise to try to keep these three streams uncrossed. You might be familiar with how using an analog stick to move in VR while turning may create counter-rotations or accelerations that lead to jarring ocular-vestibular mismatches with what users’ brains expect. This kind of disjunct can lead to extreme visceral discomfort, and even some not-insignificant physiological reactions (that is to say — you puke your face off). So we know to avoid mapping movement to fingers.

We also know by now that mapping pointing or interaction with your head can fall short as an interface design strategy. Human neck and eye muscles are not well-tuned for those particular functions. Attempting to designate the head and neck as the controller for pointing or interaction detracts from the more natural task of managing what the user is looking at.

However, there are notable exceptions to this general principle of not “crossing the streams” of input — for example, teleportation. For room-scale VR, any simulated VR environment with dimensions larger than the physical room the user is standing in would by necessity require some kind of method for allowing the user to move around the space beyond just walking. Since the user’s physical space is inherently limited in size, and VR simulated spaces can be unlimited, there’s huge value in providing a method for remapping or reframing the physical space within the VR space. As Valve’s astute Designer and Developer Yasser Malaika points out, Valve’s experimentation with letting users teleport with a laser pointer-style indicator (the user points at a spot somewhere on the terrain, presses a button, the screen fades to black, and they reappear in the indicated spot) has revealed it to be an instantly intuitive method for getting around in VR. Initial experiments hinged on head position to indicate where to teleport, and the results were less than satisfactory. When Valve tried using the hand controller instead of the head positioning, suddenly users saw a dramatic improvement in what they could do and began using it in innovative ways — even using it to navigate to positions they couldn’t immediately see, by pointing behind their backs to move backwards or sideways to strafe. This interface schema engages your proprioception — your body’s sense of where its limbs are positioned.
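A sketch of that pointer-teleport loop might look like the following; the function and method names are placeholders rather than Valve’s actual code, but the point-press-fade-reappear sequence is the one described above:

```python
def try_teleport(user, hand_controller, terrain, fade):
    """One frame of pointer-based teleportation (hypothetical API names)."""
    # Point with the HAND, not the head: proprioception lets users aim
    # the ray without looking, even behind their backs to move or strafe.
    origin, direction = hand_controller.pointer_ray()
    hit = terrain.raycast(origin, direction)
    if hit and hand_controller.button_pressed():
        fade.to_black()             # mask the jump: no visible motion,
        user.position = hit.point   # so no fake acceleration reaches
        fade.from_black()           # the vestibular system
```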

These kinds of input mechanisms are still ripe for exploration. And the possibilities for VR input seem infinite — gesture, 6-degrees-of-freedom manipulation, hand controls (buttons/touchpads/triggers/etc), ambient invocation like voice commands, subconscious input (position in space, acceleration, things you’re not conscious of as a user but the VR experience can use in interesting ways). Each has strengths and weaknesses depending on the context of the desired action, and each mode still offers relatively untouched wilderness for exploration. Just as we had to iteratively develop our input methods for 2D user interfaces as we figured out the mouse, the keyboard, the touchscreen, and the IR-based motion sensor (Kinect, Wii, etc.), the development of VR input schemes will be a process that will yield a richer and exponentially more useful computing platform for all involved. The gamepad is a temporary stepping stone, but we can do better.
