
Perceptual Computing: Practical Hands-Free Gaming


Download Article

Perceptual Computing: Practical Hands-Free Gaming [PDF 772KB]

1. Introduction

The concept of a hands-free game is not new, and many unsuccessful attempts have been made to abandon peripheral controllers to rely solely on the human body as an input device. Most of these experiments came from the console world, and only in the last few years have we seen controllerless systems gain significant traction.


Figure 1: The Nintendo U-Force – state-of-the-art hands-free gaming in 1989

Early attempts at hands-free gaming were usually highly specialized peripherals that worked with only a handful of compatible games. The Sega Activator for the Sega Genesis was a good example: an octagonal ring placed on the floor with the player standing in its center. Despite the advertised ninja-style motions, the device simply mapped movement to 16 button presses and produced restrictive gameplay, leading to its silent demise.


Figure 2: The Sega Activator – an early infra-red floor ring for the Genesis console

More recent attempts such as the Sony Eye Toy* and Xbox* Live Vision gained further traction and captured the public’s imagination with the promise of hands-free control, but they failed to win over the developer community, and only a few dozen hands-free games were produced.


Figure 3: The Sony Eye Toy* – an early current-generation attempt at controllerless gaming

As you can see, hands-free technology has been present in the console world for many years, but only recently have such devices achieved widespread success, thanks to the Xbox Kinect*. With the introduction of Perceptual Computing, hands-free game control is now possible on the PC, with sufficient accuracy and performance to make the experience truly immersive.

This article provides developers with an overview of the topic, design considerations, and a case study of how one such game was developed. I’m assuming you are familiar with the Creative* Interactive Gesture camera and the Intel® Perceptual Computing SDK. Although the code samples are given in C++, the concepts explained are applicable to Unity* and C# developers as well. It is also advantageous if you have a working knowledge of extracting and using the depth data generated by the Gesture camera.

2. Why Is This Important

It is often said by old-school developers that there are only about six games in the world, endlessly recreated with new graphics and sound, twists and turns in the story, and of course, improvements in the technology. When you break any game down into its component parts, you start to suspect this cynical view is frighteningly accurate. The birth and evolution of the platform genre was in no small way influenced by the fact that the player had a joystick with only four compass directions and a fire button.

Assuming then that the type of controller influences the type of games created, imagine what would happen if you were given a completely new kind of controller, one that intuitively knew what you were doing as you were doing it. Amazing new games would be created, opening the door to gaming experiences we have not seen before.

3. The Question of Determinism

One of the biggest challenges facing hands-free gaming, and indeed Perceptual Computing in general, is the ability of your application or game to determine what the user intends to do, 100% of the time. A keyboard whose A key failed to respond 1% of the time, or a mouse that clicked the right button at random every fifteen minutes, would be instantly dismissed as faulty and replaced. Thanks to our human interface devices, we now expect 100% correspondence between what we intend and what happens on screen.

Perceptual Computing can afford to provide no less. Given the almost infinite combination of input pouring in through the data streams, we developers have our work cut out! A mouse has a handful of dedicated input signals, controllers have a few times that, and keyboards more still. A Gesture camera feeds in over 25,000 times more data than any traditional peripheral controller, and there is no simple diagram to tell you what any of it actually means.

As tantalizing as it is to create an input system that can scan the player and extract all manner of signals from them, the question is can such a signal be detected 100% of the time? If it’s 99%, you must throw it out or be prepared for a lot of angry users!

4. Overview of the Body Mass Tracker technique

One technique that can be heralded as 100% deterministic is the Body Mass Tracker technique, which was featured in one of my previous articles at http://software.intel.com/en-us/articles/perceptual-computing-depth-data-techniques.

By using the depth value as a weight while cumulatively adding together the coordinates of each depth pixel, you can arrive at a single coordinate that indicates, in general terms, where the user is relative to the camera. That is, when the user leans to the left, your application can detect this and provide a suitable coordinate to track them. When they lean to the right, the application continues to follow them. When the user leans forward, this too is tracked. Because the sample is taken across the whole view, individual details like hand movements, background objects, and other distractions are absorbed into a “whole view average.”

The code is divided into two simple steps. The first averages the coordinates of all valid depth pixels to produce a single coordinate, and the second draws a dot onto the camera picture image render so we can see whether the technique works. When run, you will see the dot center itself on the activity in the depth data.

// find body mass center
int iAvX = 0;
int iAvY = 0;
int iAvCount = 0;
for (int y=0; y<480; y++)
{
 for (int x=0; x<640; x++)
 {
  // map the color pixel (x,y) to its corresponding depth pixel via the UV map
  int dx=g_biguvmap[(y*640+x)*2+0];
  int dy=g_biguvmap[(y*640+x)*2+1];
  pxcU16 depthvalue = ((pxcU16*)ddepth.planes[0])[dy*320+dx];
  // only near pixels contribute to the average; saturated/far pixels are ignored
  if ( depthvalue<65535/5 )
  {
   iAvX = iAvX + x;
   iAvY = iAvY + y;
   iAvCount++;
  }
 }
}
// guard against a divide-by-zero when no pixels pass the depth test
if ( iAvCount>0 )
{
 iAvX = iAvX / iAvCount;
 iAvY = iAvY / iAvCount;
}
else
{
 // nothing in range; fall back to the center of the frame
 iAvX = 320;
 iAvY = 240;
}

// draw body mass dot (a 17x17 white square centered on the body mass coordinate)
for ( int drx=-8; drx<=8; drx++ )
 for ( int dry=-8; dry<=8; dry++ )
  ((pxcU32*)dcolor.planes[0])[(iAvY+dry)*640+(iAvX+drx)]=0xFFFFFFFF;

In Figure 4 below, notice the white dot rendered to represent the body mass coordinate. As the user leans right, the dot follows the general distribution by smoothly floating right; when the user leans left, the dot smoothly floats to the left, all in real time.


Figure 4: The white dot represents the average position of all relevant depth pixels

As you can see, the technique itself is relatively simple, but the critical point is that the location of this coordinate will be predictable under all the adverse conditions your game may face in the field. People walking in the background, the clothes the player is wearing, and other subtle factors in the depth data stream are all screened out. Through this real-time distillation process, what emerges is pure gold: a single piece of input that is 100% deterministic.

5. The Importance of Calibration

There’s no way around it: your game or application has to have a calibration step. A traditional controller is hardwired for calibration and spares the user the mundane task of describing to the software which button means up, which one means down, and so on. Perceptual Computing calibration is not quite as radical as defining every input control in your game from scratch, but it’s healthy to assume it is.

This step is more common sense than complicated and can be broken down into a few simple reminders that will help your game help its players.

Camera Tilt – The Gesture camera ships with a vertical tilt mechanism that allows the operator to angle the camera up or down by a significant degree. Its lowest setting can even monitor the keyboard instead of the person sitting at the desk. It is vital that your game does not assume the user has the camera in the perfect tilt position. It may have been knocked out of alignment or recently installed. Alternatively, your user may be particularly tall or short, in which case they need to adjust the camera so they are completely in the frame.
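One way to make the tilt check itself hands-free is to inspect the depth frame for signs that the player is cropped. The sketch below is not an SDK feature; the function name, the cut-off, and the 320x240 depth plane size are assumptions carried over from the earlier snippet, but it illustrates how a “please tilt the camera up” prompt could be driven automatically.

// Hypothetical framing check on the 320x240 depth plane used earlier.
// If a large run of "near" pixels touches the top row of the frame, the
// player's head is probably cropped and the camera tilt needs adjusting.
bool IsHeadCropped(const pxcU16* depth, int depthWidth)
{
 const pxcU16 nearCutoff = 65535/5;   // same illustrative cut-off as before
 int topHits = 0;
 for (int x=0; x<depthWidth; x++)
  if (depth[x]>0 && depth[x]<nearCutoff) topHits++;
 // more than a tenth of the top row occupied = player present at the edge
 return topHits > depthWidth/10;
}

Checked once per frame during calibration, this is enough to drive an on-screen prompt until the player’s whole head is back in view.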

Background Depth Noise – If you are familiar with the techniques for filtering the depth data stream coming from the Gesture camera, you will know the problems background objects can cause for your game. This is especially true at exhibits and conventions where people will be watching over the shoulder of the main player. Your game must be able to block this background noise by specifying a depth level beyond which objects are ignored. As the person will be playing in an unknown environment, this depth level must be adjustable during the calibration step, ideally using a hands-free mechanism.
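A minimal sketch of such a cut-off is shown below, reusing the depth buffer layout from the body mass tracker snippet. The variable g_depthCutoff is hypothetical; in a real game it would be set, and later adjusted, during calibration rather than hard-coded.

// Illustrative background cut-off: anything at or beyond g_depthCutoff is
// treated as background and ignored by the body mass average, so onlookers
// behind the player cannot disturb the tracking.
pxcU16 g_depthCutoff = 2000;   // assumed default; tuned during calibration

inline bool IsPlayerPixel(pxcU16 depthvalue)
{
 return depthvalue>0 && depthvalue<g_depthCutoff;
}

The depthvalue<65535/5 test in the earlier loop then becomes a call to IsPlayerPixel, and the calibration step nudges g_depthCutoff in or out, via a gesture or voice prompt, until only the player remains in view.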

For a true hands-free game, it’s best not to resort to traditional control methods to set up your game, as this defeats the purpose of a completely hands-free experience. It might seem paradoxical to use hands-free controls to calibrate misaligned hands-free controls, but on-screen prompts and visual hints taken directly from the depth/color camera should be enough to orient the user. Ideally, the only hands-on activity should be tilting the camera the first time you play the game.

6. The UX and UI of Hands-Free Gaming

Perceptual Computing is redefining the application user experience, dropping traditional user interfaces in favor of completely new paradigms to bridge the gap between human and machine. Buttons, keys, and menus are all placeholder concepts, constructed to allow humans to communicate with computers.

When designing a new UI specific for hands-free gaming, you must begin by throwing away these placeholders and start with a blank canvas. It would be tempting to study the new sensor technologies and devise new concepts of control to exploit them, but we would then make the same mistakes as our predecessors.

You must begin by imagining a user experience that is as close to human conversation as possible, with no constraints imposed by technology. Each time you degrade the experience for technical reasons, you’ll find your solution degenerating into a control system reminiscent of traditional methods. For example, using your hand to control four compass directions might seem cool, but it’s just a novel transliteration of a joystick, which in itself was a crude method of communicating the desires of the human to the device. In the real world, you would simply walk forward or, if directing someone else, speak instructions detailed enough to achieve the objective.

As developers, we encounter technical constraints all the time, and it’s tempting to ease the UX design process by working within these constraints. My suggestion is that your designs begin with blue-sky thinking, and meet any technical constraints as challenges. As you know, the half-life of a problem correlates to the hiding powers of the solution, and under the harsh gaze of developers, no problem survives for very long.

So how do we translate this philosophy into practical steps and create great software? A good starting point is to imagine something your seated self can do and associate that with an intention in the software.

The Blue Sky Thought

Imagine having a conversation with an in-game character, buying provisions, and haggling with the store keeper. Imagine the store keeper taking note of which items you are looking at and starting his best pitch to gain a sale. Imagine pointing at an item on a shelf and saying “how much is that?” and the game unfolding into a natural conversation. We have never seen this in any computer game and yet it’s almost within reach, barring a few technical challenges.

It was in the spirit of this blue-sky process that I contemplated what it might be like to swim like a fish, no arms or legs, just fins, drag factors, nose direction, and a thrashing tail. Similar to the feeling a scuba diver has, fishy me could slice through the water, every twist of my limbs causing a subtle change in direction. This became the premise of my control system for a hands-free game you will learn about later in this article.

Player Orientation

Much like the training mode of a console game, you must orient the player in how to play from the very first moment. With key, touch, and controller games, you can rightly assume the majority of your audience has a basic toolbox of knowledge to figure out how to play. Compass directions, on-screen menus, and action buttons are all common instruments we use to navigate any game. Hands-free gaming throws most of that away, which means that in addition to creating a new paradigm for game control, we also need to explain and nurture the player through these new controls.

A great way to do this is to build it into the above calibration step, so that the act of setting up the Gesture camera and learning the player’s seated position is also the process of demonstrating how the game controls work.

Usability Testing

When testing a hands-free game, additional factors come into play that would not normally be an issue with controller-based games. For example, even though pressing left on the d-pad means left no matter who is playing your game, turning your head left might not produce the same clear-cut response. That is not to say you have breached the first rule of 100% determinism, but that the instructions you gave and the response of the human player may not tally up perfectly. Only by testing your game with a good cross section of users will you be able to determine whether your calibration and in-game instructions are easy to interpret and repeat without outside assistance.

The closest traditional equivalent is realizing that a human hand cannot press all four action buttons at once in a fast-paced action game, because the player has only one thumb for four buttons. Perhaps after many months of development you managed such a feat and it remained in the game, but testing would soon chase out such a requirement. This applies even more to hands-free game testing, where capabilities differ wildly between players, and any gesture or action you ask them to perform should be as natural and comfortable as possible.

One example of this is a hands-free game that required holding your hand out to aim fireballs at your foe. A great game and lots of fun, but when shown to conference attendees it was discovered that after about four minutes their arms would be burning with the strain of playing. To get a sense of what this felt like, hold a bag of sugar at arm’s length for four minutes or so.

It is inevitable that we’ll see a fair number of hands-free games that push the limit of human capability and others that teasingly dance on the edge of it. Ultimately, the player wants to enjoy the game more than they want an upper body workout, so aim for ease of use and comfort and you’ll win many fans in the hands-free gaming space.

7. Creating a Hands-Free Game – A Walkthrough

Reading the theory is all well and good, but I find the most enlightening way to engage with the material is to see it in action. What better way to establish the credibility of this article than to show you a finished game inspired by the lessons preached here?


Figure 5: Title screen from the game DODGE – a completely hands-free game experiment

The basic goal when writing DODGE was to investigate whether a completely hands-free game could be created that required no training: an application that, once started from the OS, needs no keyboard, mouse, or touch input and is driven entirely by hands-free technology.

Having established the Body Mass Tracker as my input method of choice, I began writing a simple game based on dodging various objects thrown in your general direction. However, lacking an artist, I had to resort to more primitive techniques for content generation and created a simple rolling terrain that incrementally added stalactites and stalagmites as the game progressed.
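For the curious, the content generation amounted to little more than spawning obstacles ahead of the player and narrowing the safe gap over time. The fragment below is a paraphrase of that idea, not the actual DODGE source; all names and constants are illustrative.

// Hypothetical obstacle spawner: the gap between stalactite and stalagmite
// shrinks as the distance travelled grows, ramping up the difficulty.
#include <cstdlib>

struct Obstacle { float x; float gapCentre; float gapHeight; };

Obstacle SpawnObstacle(float distanceTravelled)
{
 Obstacle o;
 o.x = distanceTravelled + 50.0f;                     // spawn ahead of the player
 o.gapCentre = 0.2f + 0.6f*(rand()/(float)RAND_MAX);  // random vertical position (0..1)
 float difficulty = distanceTravelled/1000.0f;        // ramps from 0 toward 1
 if (difficulty>1.0f) difficulty = 1.0f;
 o.gapHeight = 0.5f - 0.3f*difficulty;                // safe gap narrows over time
 return o;
}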

As it happened, the “cave” theme worked much better visually than any “objects being thrown at me” game I could have created in the same timeframe. So with my content in place, I proceeded to the Perceptual Computing stage.

Borrowing from previous Perceptual Computing prototypes, I created a small module that plugged into the Dark Basic Professional programming language and fed the body mass tracker coordinate into my game. Within the space of an hour I was able to control my dodging behavior without touching the keyboard or mouse.
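To give a feel for the glue code involved, the fragment below shows one way the body mass coordinate might be turned into a steering input. It is not the module I wrote for Dark Basic Professional; the names, the rest-position values, and the smoothing factor are all assumptions.

// Hypothetical mapping from the body mass coordinate (in the 640x480
// color-image space produced by the earlier snippet) to a smoothed steering
// offset of roughly -1..+1, relative to the player's calibrated rest position.
float g_steerX = 0.0f, g_steerY = 0.0f;   // persistent, smoothed steering values

void UpdateSteering(int iAvX, int iAvY, float restX, float restY)
{
 float targetX = (iAvX - restX) / 160.0f;   // lean left/right
 float targetY = (iAvY - restY) / 120.0f;   // vertical shift (ducking, leaning forward)
 // simple exponential smoothing so small shifts give gentle course corrections
 g_steerX += (targetX - g_steerX) * 0.1f;
 g_steerY += (targetY - g_steerY) * 0.1f;
}

Feeding g_steerX and g_steerY into the pilot’s movement each frame is all the game logic needs.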

What I did not anticipate until it was coded and running was the nuance and subtlety you get from the BMT (Body Mass Tracker): every slight turn of the head, lean of the body, or twist of the shoulders produces an ever-so-slight course correction by the pilot in the game. It was like having a thousand directions to describe north! It was this single realization that led me to conclude that Perceptual Gaming is not a replacement for peripheral controllers, but their successor. No controller in the world, no matter how clever, allows you to control game space using your whole body.

Imagine you are Superman for the day, and what it might feel like to fly—to twist and turn, and duck and roll at the speed of thought. As I played my little prototype, this was what I glimpsed, a vision of the future where gaming was as fast as thought.

Now to clarify, I certainly don’t expect you to accept these words at face value, as the revelation only came to me immediately after playing this experience for myself. What I ask is that if you find yourself with a few weekends to spare, try a few experiments in this area and see if you can bring the idea of “games as quick as thought” closer to reality.

At the time of writing, the game DODGE is still in development, but will be made available through various distribution points and announced through my twitter and blog feeds.

8. Tricks and Tips

Do’s

  • Test your game thoroughly with non-gamers. They are the best test subjects for a hands-free game as they will approach the challenge from a completely humanistic perspective.
  • Keep your input methods simple and intuitive so that game input is predictable and reliable.
  • Provide basic camera information through your game such as whether the camera is connected and providing the required depth data. No amount of calibration in the world will help if the user has not plugged the camera in.

Don’ts

  • Do not interpret data values coming from the camera as absolute values. Treat all data as relative to the initial calibration step (see the sketch after this list) so that each player, in their own unique environment, enjoys the same experience. If you developed and tested your game in a dark room with the subject very close to the camera, imagine your player in a bright room sitting far from the keyboard.
  • Do not assume your user knows how the calibration step is performed; supplement these early requests with on-screen text, voice-over, or animation.
  • Never implement an input method that requires the user to have substantial training as this will frustrate your audience and even create opportunities for non-deterministic results.
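As a sketch of what “relative to calibration” can mean in practice, the fragment below records the player’s rest position and average depth over a short warm-up period; later readings are then expressed as offsets from this baseline. The names and the frame count are illustrative, not from any SDK.

// Hypothetical calibration snapshot: average the body mass coordinate and
// player depth over a short warm-up (say 60 frames) while the player sits
// still, then treat every later reading as an offset from these rest values.
struct Calibration { float restX; float restY; float restDepth; };

void AccumulateCalibration(Calibration& c, int totalFrames,
                           int iAvX, int iAvY, float avgDepth)
{
 c.restX     += iAvX     / (float)totalFrames;
 c.restY     += iAvY     / (float)totalFrames;
 c.restDepth += avgDepth / (float)totalFrames;
 // after totalFrames calls, c holds the baseline; the game then reads
 // (iAvX - c.restX) and so on instead of the raw values
}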

9. Final Thoughts

You may have heard the expression “standing on the shoulders of giants,” the idea that we use the hard-won knowledge of the past as a foundation for our future innovations. The console world had over 20 years of trial and error before it mastered the hands-free controller for its audience, and as developers we must learn from those successes and failures. Simply offering a hands-free option is not enough; we must guard against creating a solution that becomes an object of novelty ten years from now. We must create what the gamer wants, not what the technology can do, and when we achieve that we’ll have made a lasting contribution to the evolution of hands-free gaming.

About The Author

When not writing articles, Lee Bamber is the CEO of The Game Creators (http://www.thegamecreators.com), a British company that specializes in the development and distribution of game creation tools. Established in 1999, the company and surrounding community of game makers are responsible for many popular brands including Dark Basic, FPS Creator, and most recently App Game Kit (AGK).

The application that inspired this article and the blog that tracked its seven week development can be found here: http://ultimatecoderchallenge.blogspot.co.uk/2013/02/lee-going-perceptual-part-one.html

Lee also chronicles his daily life as a coder, complete with screen shots and the occasional video here: http://fpscreloaded.blogspot.co.uk

Intel and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries.
Copyright © 2013 Intel Corporation. All rights reserved.
*Other names and brands may be claimed as the property of others.

