Maybe it’s just because I’m a gamer, and tend to gravitate toward things involving gaming, but I saw a lot of news from CES about new means of interaction. There was Valve and Gabe Newell talking about biometrics and gaze tracking, there were demos of gaze tracking and hand tracking, and a whole slew of interaction methods and devices that didn’t require a keyboard and mouse. From a gaming perspective, it seemed like everyone was dissatisfied with what we’re using now.
Here’s the thing about all this talk of “new ways to interact”: there’s nothing wrong with the keyboard and mouse. They work fine! Until we have a need to be more accurate and precise with our games, people probably won’t buy into new input methods. Sure, people bought millions of Kinects, but how many people use one now? Very few, and those who do use it strictly for exercise and dancing. We shouldn’t seek to replace the means of interaction we’ve honed over years and years. What we should do is try to enhance them.
The guy in the eye-tracking article said the technology they were developing works best in conjunction with interactions already in place, and I think he’s got the right idea. He doesn’t want his technology to replace the keyboard and mouse, only to enhance what you can do with them. In fact, he said that controlling an FPS with your eyes quickly becomes boring, so it’s not the best fit there, but for applications like enriching conversations with NPCs, it would work great.
The same idea, that these things can be used to enhance games, can be applied to a lot of technologies. The Kinect, for instance, would probably get a lot more use if you could just, say, navigate menus with your hand from the couch, rather than having to move a raft by tilting your body. All the stuff in Kinect Adventures was fun for maybe ten minutes. Eye tracking, I imagine, could work well in point-and-click adventure games, or find-the-object type games. Biometrics like heart rate would work well in a survival horror game, where the visuals could change according to how fast your heart is beating, or even how hard you’re breathing. Facial expressions could feed into in-game conversations; voice pitch and fluctuation could do the same. But notice that none of my examples replace basic mechanics like movement, combat, or menu navigation, although in certain applications they could.
I think a new level of immersion could come from any of the things I mentioned above. They’d be awesome for enhancing something like virtual reality, e.g. the Oculus Rift. But even with the Oculus Rift, people want to touch things, to hold stuff in their hands. And that brings me to my last point.
People want feedback for what they do. People like to use their hands. The reason everyone loves touchscreens so much is that they feel intuitive and natural; thousands of years of evolution have done that to us. When you press a key or click a mouse, you experience resistance, overcome it, and know you’ve activated the button. It’s as if the peripheral is saying to you, “Okay, you have pressed me.” Eye tracking, motion control, and biometrics don’t give that feedback. You give data to them, but they don’t give data back to you. And there’s nothing we can do about that.
So am I wary of anything other than a keyboard and mouse or controller? Am I an old man telling these companies to get off my lawn? No. In fact, quite the opposite. I love that people are exploring new ways to make the human-machine bond stronger and more intuitive. What I’m a little wary of, however, is the push to replace what we already have. So Mr. Newell, Tobii, keep doing what you’re doing. I want it as much as you do.