By Ron Fosner and Alex Moore
Hello, I’m Ron, and a few months ago I had the opportunity to lead an Android*/OpenGL ES* training. I wanted the class to work on a graphics program that was not only practical but also informative and entertaining. Alex Moore had written a post on AltDevBlogADay about a unique single touch joystick implementation (see references), and I was struck by both the creativity and utility of the input gizmo he had created. I decided that for the training project we’d tackle implementing Alex’s joystick. It is a fun app to write as well as a good introduction to converting touch input into movement in a 3D scene, and implementing the gizmo turned out to be an excellent exercise in creatively turning touch input into a complex control mechanism with just a little effort.
Because of the positive feedback I received from that training, I decided to share the single touch joystick app with a larger audience. We think there’s a great opportunity to break some new ground for user input through touch interfaces. So I contacted Alex and he graciously agreed to help co-author this article. Alex will be sharing his ideas on the concept and history of the single touch joystick, while I’ll be discussing how we implemented it and how you might extend these ideas even further. Now let’s let Alex give some background on the idea of the Single Touch Joystick.
Where the Concept Came From
The idea behind the single touch joystick stemmed from playing a few first and third person games on my Apple iPad* and feeling very constricted when interacting with them. Our mobile devices are now powerful enough to display rich, fully realized 3D worlds, but the lack of physical buttons forms a significant barrier to playing certain types of games. On the other hand, touch excels at creating great experiences for games that are tailored to it, so it seemed logical to me to investigate whether there was an innovative way it could be used for a first person game.
To start, I listed the most limiting factors of the current de facto control method for first person games on touch: twin virtual joysticks. After researching popular games I distilled it down to the following two points:
- Due to the way gamers usually hold touch devices, only their thumbs are free. Twin virtual sticks require both thumbs to be used simultaneously, making it effectively impossible to perform any other action without suddenly losing some level of control over your character.
- Gamers’ thumbs obscure the representation of the sticks on the screen, and as there is no haptic feedback (yet), it becomes very difficult for them to position their thumbs accurately.
In addition to these, I wanted to try and develop something that could easily adapt to left-handed players, and also people who only have the use of one hand. The main motivation for this concept was the niggling feeling that there must be something better than just trying to copy the control method from a console controller, which in itself is often regarded as second best to using a keyboard and mouse.
Developing the Idea
After doing the research into the flaws of current implementations, I started to look at what you actually do when you’re playing a first or third person game on a PC or console, with the aim of trying to understand what any new system needs to achieve.
If you watch a novice playing these types of game, it very quickly emerges that they often move to a position, then adjust their view, then move to another position. They rarely do both actions at once, whereas experienced players flow, seamlessly integrating movement and looking together.
Look further though and you’ll notice another trend: once a player moves off in a direction, they’ll often carry on in that direction and just use the look axis to adjust where they’re heading. If they want to switch movement direction, say from forwards to sidestepping, it is usually very purposeful. Realizing this led me to the inner / outer circle concept: once a player has set off in a direction, can you then assume the player wants to carry on in that direction?
I believed the answer was usually yes, so I drafted a design and then did a quick prototype. I was surprised at how good the initial tests felt and continued to refine it. After showing the concept to a few friends and getting a good response, I decided to dedicate a few days to really trying to get a polished mechanic together.
The additional steps taken from the first prototype to the current version were all small iterations. I added the central HUD overlay so you can see what you’re doing and the direction arrows to further strengthen the message to the player as to what the system is doing (a good friend gave me the idea of illuminating the direction the player is rotating in).
I hit a few dead ends too. One earlier version allowed you to start touching anywhere — which felt great — but prevented you from being able to rotate on the spot. I also tried reversing the circles so you rotated before moving, but that felt very odd indeed. Now Ron will discuss the details of how one can implement the idea.
Implementing the Idea
Implementation involves two steps: rendering the gizmo in its current state, and manipulating the viewing matrix to reflect that state. The gizmo is modal, meaning it has two states that the user can transition between.
Initially the gizmo is just a movement joystick. The user moves the virtual joystick vertically or horizontally, and this modifies the translation matrix.
The modification we made to the original gizmo as described in Alex’s original article was to add a transition zone between the translation and rotation areas. This transition zone eases the change from translation to rotation, and makes the gizmo a little more forgiving.
To show the location of the user’s touch on the gizmo, we render a “nub” representing the tip of the virtual joystick. As the user moves this nub further away from the center of the gizmo, we increase the translation speed. The translation speed is maximized when the nub reaches the transition area.
When the user reaches the transition zone, we activate the rotation nub. As long as the user’s input is outside the pure translation area, we render two nubs. One (the red one) represents the velocity vector and is locked to the inner ring. The second nub (the yellow one) is the rotational nub.
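At this point the gizmo’s state amounts to very little data: a mode flag, the two nub positions, and the ring radii. Here’s a minimal sketch of one way to hold it in Java; the names and default radii are ours, not from any particular library.

    // Minimal sketch of the gizmo's state. Positions are in the gizmo's own [-1,1] space.
    public class JoystickGizmo {
        public enum Mode { TRANSLATE, TRANSLATE_AND_ROTATE }

        public Mode mode = Mode.TRANSLATE;

        // Translation nub (red): drives movement; locked to the inner ring once
        // the touch moves out into the transition zone.
        public float transNubX, transNubY;

        // Rotation nub (yellow): follows the touch while in the transition zone.
        public float rotNubX, rotNubY;

        // Ring radii: inside innerRadius we only translate; between the two we
        // blend translation and rotation; at outerRadius we only rotate.
        public float innerRadius = 0.5f;
        public float outerRadius = 1.0f;

        public void reset() {
            mode = Mode.TRANSLATE;
            transNubX = transNubY = 0.0f;
            rotNubX = rotNubY = 0.0f;
        }
    }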
As the user moves the nub further out, we compute a lerp value between the inner and outer rings of the transition zone. We use this value to reduce translation speed as the user moves towards the edge of the translation zone, while simultaneously increasing the rotation scaling. As the user’s input reaches the outer edge of the transition zone, the translation effect is zeroed out, and the rotation effect is maximized.
The larger the angle from the vertical, or “forward,” direction, the more rotation is added to the movement of the viewpoint. In our implementation the translation and rotation modes overlap: throughout the transition zone we are both translating and rotating, and as the user pushes the joystick further out, the translation element drops to zero and the user is purely rotating the viewpoint.
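As a rough sketch of that blend (the radii and the simple linear falloff here are illustrative choices, not the only way to tune it), the two scale factors might be computed like this:

    // Sketch: how movement fades out and turning fades in across the transition zone.
    // Assumes the same [-1,1] gizmo space and radii as the state sketch above.
    public final class GizmoBlend {
        static final float INNER = 0.5f;   // inner ring radius
        static final float OUTER = 1.0f;   // outer edge of the transition zone

        // 0 at the center, 1 at the inner ring, then fading back to 0 at the outer edge.
        public static float translationScale(float dist) {
            if (dist <= INNER) return dist / INNER;             // ramp up to full speed
            if (dist >= OUTER) return 0.0f;                     // pure rotation
            return 1.0f - (dist - INNER) / (OUTER - INNER);     // fade out across the zone
        }

        // 0 until the touch crosses the inner ring, 1 at the outer edge.
        public static float rotationScale(float dist) {
            if (dist <= INNER) return 0.0f;
            if (dist >= OUTER) return 1.0f;
            return (dist - INNER) / (OUTER - INNER);
        }

        public static void main(String[] args) {
            float dist = 0.75f;  // halfway through the transition zone
            System.out.println("move=" + translationScale(dist) + " turn=" + rotationScale(dist));
        }
    }

A smoother curve (for example, squaring the weights) is an easy thing to experiment with here.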
The rotation speed is proportional to the angle from the initial direction vector. Once the user enters the transition zone, the translation nub is locked, and the angular deviation from the direction vector (angle α) is what controls rotation speed. If the user swings the touch point all the way around to 180 degrees from the locked direction, the rotation direction suddenly flips; in practice, though, the user is already rotating at a good clip by about 45 degrees, so we rarely run into this situation.
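One common way to compute that signed angle (this is a sketch, not necessarily the exact code in our app) is to take the atan2 of the 2D cross and dot products of the locked direction and the current touch direction; the sign tells you which way to turn, and the ±180 degree boundary is exactly where the flip described above happens.

    // Sketch: signed angle (alpha) between the locked movement direction and the
    // current touch direction, in radians. Positive turns one way, negative the other.
    public final class GizmoAngle {
        public static float signedAngle(float lockedX, float lockedY, float touchX, float touchY) {
            float cross = lockedX * touchY - lockedY * touchX; // z of the 2D cross product
            float dot   = lockedX * touchX + lockedY * touchY;
            return (float) Math.atan2(cross, dot);             // range (-pi, pi]
        }

        public static void main(String[] args) {
            // Locked direction straight up, touch deflected 45 degrees to the right.
            float alpha = signedAngle(0f, 1f, 1f, 1f);
            System.out.println(Math.toDegrees(alpha)); // about -45 (the sign convention is ours)
        }
    }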
Implementation Details
The actual implementation is fairly simple. A small amount of code manages the user input: it takes the x,y location of the touch and hands it to the gizmo code to process. The gizmo resets when the user lifts their finger; while a finger is down we process the input relative to the center of the gizmo. As the touch point moves out past the inner circle, we lock the translation nub and start processing the rotation nub. The further the translation nub is from the center, the faster the translation; the further the rotation nub is from the inner circle, the faster the rotation. The greater the deflection from the direction of the translation nub, the more the rotation also attenuates the translation, providing a smooth transition from moving to rotating.
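Continuing the JoystickGizmo sketch from above, the per-touch update might look something like this. On Android the x and y values would typically come from a View’s onTouchEvent, with MotionEvent.getX()/getY() mapped into the gizmo’s [-1,1] space and reset() called when the touch ends; the logic below is illustrative rather than our exact code.

    // Process a touch point (x, y) in gizmo space. A rough sketch of the behavior
    // described in the text; details such as returning to pure translation are left out.
    public void update(float x, float y) {
        float dist = (float) Math.sqrt(x * x + y * y);
        if (mode == Mode.TRANSLATE && dist <= innerRadius) {
            // Pure movement: the translation nub simply follows the touch.
            transNubX = x;
            transNubY = y;
        } else if (mode == Mode.TRANSLATE) {
            // The touch just crossed the inner ring: lock the translation nub to the
            // ring along the current direction; that direction becomes "forward".
            mode = Mode.TRANSLATE_AND_ROTATE;
            transNubX = (x / dist) * innerRadius;
            transNubY = (y / dist) * innerRadius;
            rotNubX = x;
            rotNubY = y;
        } else {
            // Already in the transition zone: the rotation nub keeps following the touch.
            rotNubX = x;
            rotNubY = y;
        }
    }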
These values are calculated for each touch input. When we want to draw the gizmo, we calculate a translation transformation from the translation nub and a rotation transform from the rotation nub. The code is straightforward. The translation and rotation values are used to calculate the model view matrix which is then passed to the shader and is used to drive around the 3D scene.
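Here’s a sketch of that step using android.opengl.Matrix. The simple heading-plus-position camera model and the field names are assumptions made for this example, and it handles forward movement only; sidestepping would add a second, sideways velocity term.

    import android.opengl.Matrix;

    // Sketch: turn the gizmo's per-frame move and turn values into a view matrix.
    public class SceneCamera {
        private final float[] viewMatrix = new float[16];
        private float headingDeg = 0f;   // yaw, driven by the rotation nub
        private float posX = 0f, posZ = 0f;

        public void update(float dtSeconds, float moveSpeed, float turnSpeedDeg) {
            // Integrate the gizmo output: turn first, then move along the new heading.
            headingDeg += turnSpeedDeg * dtSeconds;
            double rad = Math.toRadians(headingDeg);
            posX += (float) Math.sin(rad) * moveSpeed * dtSeconds;
            posZ -= (float) Math.cos(rad) * moveSpeed * dtSeconds;

            // The view matrix is the inverse of the camera transform:
            // rotate by -heading, then translate by -position.
            Matrix.setIdentityM(viewMatrix, 0);
            Matrix.rotateM(viewMatrix, 0, -headingDeg, 0f, 1f, 0f);
            Matrix.translateM(viewMatrix, 0, -posX, 0f, -posZ);
        }

        public float[] view() { return viewMatrix; }
    }

The resulting matrix is combined with each object’s model matrix and handed to the vertex shader as usual.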
Rendering the gizmo is left entirely to the pixel shader. We did this for rapid prototyping, as it’s easy to render circles and gradients in shader code. It also makes it easy to swap out different types of gizmo controls if you want to experiment with different settings. One of the initial requests was that the gizmo be rendered where the user can see it rather than under the thumb, where it would be hidden.
The fragment shader code just renders the nub positions and then the gizmo rings using the distance from the center. The texture coordinates are scaled in the vertex shader so that they range from [-1,-1] to [1,1], which makes most of the shading fairly simple. Here’s the OpenGL ES 2.0 GLSL shader code.
precision mediump float;

varying vec2 v_UV1;
varying vec3 v_Normal;

// 2 nubs packed in a vec4 - x,y is nub0, z,w is nub1
uniform vec4 u_NubPos;
// gizmo params, just x used for the inner ring radius
uniform vec4 u_widgetParams;

const vec4 innerColor         = vec4(1.0, 1.0, 1.0, 0.4);
const vec4 outerColor         = vec4(0.2, 0.8, 0.8, 0.4);
const vec4 innerRingColor     = vec4(1.0, 0.0, 0.0, 0.5);
const vec4 outsideCircleColor = vec4(0.0, 0.0, 0.0, 1.0);
const vec4 nub0Color          = vec4(1.0, 0.0, 0.0, 0.5);
const vec4 nub1Color          = vec4(0.0, 0.0, 0.0, 0.5);

const float ringWidth = 0.05;

void main()
{
    float inner = u_widgetParams.x;

    // offsets of this fragment from the nub centers
    // (if a nub is not active, its position is set to 1,1, which pushes it off the gizmo)
    vec2 nub0 = vec2(v_UV1.x - u_NubPos.x, v_UV1.y - u_NubPos.y);
    vec2 nub1 = vec2(v_UV1.x - u_NubPos.z, v_UV1.y - u_NubPos.w);

    // distance of the current fragment from the nub positions
    float lenNub0 = length(nub0);
    float lenNub1 = length(nub1);

    // texture coords are scaled in the vertex shader to range from -1,-1 to 1,1,
    // so the center is 0,0; how far is the current fragment from the center?
    float len = length(v_UV1);

    // it's outside the gizmo circle, discard it
    if ( len > 1.0 )
    {
        discard;
    }
    // draw the two nubs if we're inside their radius; nub0 gets drawn first
    else if ( lenNub0 < 0.10 )  // hardcoded nub size
    {
        gl_FragColor = nub0Color;
        return;
    }
    else if ( lenNub1 < 0.10 )
    {
        gl_FragColor = nub1Color;
        return;
    }
    // else draw the rings
    else if ( len >= inner && len <= inner + ringWidth )
    {
        gl_FragColor = innerRingColor;
    }
    else if ( len < inner )
    {
        gl_FragColor = innerColor;
    }
    else  // draw a blended color out to the edge
    {
        float m = 1.0 - (len - inner) / (inner + ringWidth);
        gl_FragColor = mix(outsideCircleColor, outerColor, m);
    }
}
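For completeness, here’s a sketch of how the host code might feed those two uniforms before drawing the gizmo quad. The program handle and nub values are illustrative; parking an inactive nub at (1,1) matches the convention noted in the shader comments.

    import android.opengl.GLES20;

    // Sketch: upload the gizmo uniforms before drawing its quad.
    void setGizmoUniforms(int program, float nub0x, float nub0y,
                          float nub1x, float nub1y, float innerRadius) {
        GLES20.glUseProgram(program);

        int nubLoc    = GLES20.glGetUniformLocation(program, "u_NubPos");
        int paramsLoc = GLES20.glGetUniformLocation(program, "u_widgetParams");

        // Both nub positions packed into one vec4, just as the shader expects.
        GLES20.glUniform4f(nubLoc, nub0x, nub0y, nub1x, nub1y);
        // Only .x (the inner ring radius) is used; the other components are spare.
        GLES20.glUniform4f(paramsLoc, innerRadius, 0f, 0f, 0f);
    }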
What’s Next
The current prototype is still far from a final version you could use in a game: there’s plenty more to do. The biggest problem to resolve is how to make those distinct changes in direction more fluid, so gamers can quickly switch from sidestepping left to sidestepping right, something used a lot in the type of games this is aimed at. We’ve tried to provide a framework where we can prototype different types of gizmos or fine-tune them with a minimum of effort. Some people have no trouble adapting to the controls and get comfortable with them quickly. Others feel we’ve mashed them together in an odd and unnatural way. Such is the nature of pioneering new UI concepts.
The next step is to design a game around this proof of concept to ensure that its limitations, such as not being able to look up and down, aren’t noticed or missed. This could be where you come in. In our experimentation we’ve found it reasonably easy to try out different implementations and fine-tune them into a very workable control scheme, albeit one that presents a novel user input technique. That’s also when the fun starts. So try coming up with a joystick implementation of your own; perhaps you can improve on what we’ve built here. Let us know how it works for you!
About the Authors:
Alex Moore has been designing video games since 1999 and has credits on 14 published games, 4 of those as Lead Designer, including the 2+ million selling Aliens Vs Predator. As a freelance designer he has worked on multiple titles; in addition to personal projects on mobile devices, he has most recently worked on launch titles for both Xbox One (Xbox Fitness) and PlayStation 4 (The Playroom).
Ron Fosner has been programming graphics professionally since 1995, and is a Game and Graphics Performance engineer at Intel, specializing in mobile platforms. He’s an expert in graphics and multithreading, and has been helping game companies optimize their games on Intel architecture since 2008. He’s currently working on creating compelling OpenGL and OpenGL ES content and pushing the boundaries of what’s possible on mobile graphics platforms.
References:
A new type of Touch Screen Joystick, Moore, Alex. AltDevBlogADay.
An update to Single Joystick, Moore, Alex. AltDevBlogADay.
Intel and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries.
Copyright © 2013 Intel Corporation. All rights reserved.
*Other names and brands may be claimed as the property of others.