
Intel® Parallel Computing Center at SURFsara BV


SURFsara

Principal Investigators:

Valeriu studied Electrical Engineering and received his MSc from the Polytechnic University of Bucharest, following up with a PhD in Computer Architecture at the same institute. He then continued as a postdoctoral researcher at both Eindhoven and Groningen University, working on GPU computing, computer vision, and embedded systems within the scope of several EU-funded projects. In 2014, he joined SURFsara as an HPC consultant focusing on machine learning. At the end of 2016, he became the PI of the Intel Parallel Computing Center at SURFsara, focusing on optimizing deep learning techniques on Intel architecture, as well as extending their use to other application domains.

Description:

SURFsara is the national supercomputing center of the Netherlands, operating, among other systems, the Dutch national supercomputer. SURFsara offers its HPC services to researchers from the Dutch academic sector and is keenly aware of the rapid development and impact of machine learning in HPC. In 2017, SURFsara became an Intel PCC, focusing on speeding up deep learning workloads on Intel-based supercomputers.

The original focus for 2017 was on minimizing the time-to-train of several deep convolutional neural networks on state-of-the-art computer vision datasets such as ImageNet and beyond. Highlights of 2017 include a training time of under 30 minutes on the popular ImageNet-1K dataset, as well as state-of-the-art accuracy on other datasets such as the full ImageNet and Places-365. The results were obtained on large-scale, state-of-the-art systems such as TACC’s Stampede2 and BSC’s MareNostrum4.

Our main research had two objectives: (1) making sure that multi-node scaling is performed as efficiently as possible, and (2) developing new learning rate schedules that converge to state-of-the-art accuracies for very large batch training on up to 1,536 Intel® Xeon Phi™ nodes. In addition, we evaluated several network architectures, particularly wider residual models, on larger computer vision datasets and obtained record accuracies. We are currently working on a methodology that optimally trades off time-to-train against the desired degree of accuracy on these popular datasets. All experiments and disseminations from 2017 were produced with the Intel Caffe* framework, used in combination with the Intel Machine Learning Scaling Library (MLSL).

SURFsara will continue this work in 2018, extending the focus to porting the large-batch SGD training techniques to the popular TensorFlow* framework, and broadening the application domain beyond computer vision toward replacing or augmenting traditional HPC applications from the natural sciences, such as climatology, particle physics, and astronomy, with novel deep learning techniques. Particular focus will also be on the rapidly developing medical imaging field, which needs large-scale compute as well as memory bandwidth and capacity due to its high data dimensionality. Since TensorFlow allows for more flexibility in the types of architectures and usage scenarios, we will experiment with generative models, as well as with fine-tuning from pre-trained models, when tackling these problems.

Furthermore, SURFsara is actively involved in several other deep learning activities. An important one is EDL (Efficient Deep Learning), a large Dutch-funded project focusing on bringing deep learning to industrial applications and involving many academic and industrial partners. Additionally, SURF's innovation lab (SOIL) started an internal program that supports, both financially and with consultancy, three to four projects from HPC-focused simulation sciences that propose to use deep learning to augment or extend their applications. These techniques are already showing promising results, and we believe that making scalable tools and methodologies based on Caffe and TensorFlow available to the research sector is of high importance and will further the development of several HPC-related fields.

Publications:

  • Initial evaluation of Intel Caffe, presentation at IXPUG2017.
  • Follow-up description of Intel Caffe scaling, presentation at the Intel Booth at ISC 2017.
  • Brief description of work on scaling residual networks. Also, details on other (larger) datasets such as Imagenet-22K and Places-365.
  • State-of-the-art large batch training, arXiv paper.
  • Under review: Large Minibatch Training on Supercomputers with Improved Accuracy and Reduced Time to Train.
  • In preparation: Efficient wide network training for state-of-the-art computer vision.

Related websites:

https://www.surf.nl/en/about-surf/subsidiaries/surfsara

https://www.surf.nl/en/news/2017/11/surfsara-extends-its-status-as-intel-parallel-computing-center-for-2018.html

 


Megatasking: Making Mixed-Reality Magic Work for Your Virtual Reality Game


A step-by-step guide to green screen, mixed-reality video production for VR

Virtual reality (VR) delivers an incredible gaming experience for the player in the headset, but it’s a hard one to share. The first-person player perspective has a limited field of view, and can be jumpy, creating a dissatisfying viewing experience for anyone not in the headset, whether they’re watching it live on a screen or on a prerecorded video.

Green screen, mixed-reality video is an innovative technique that brings the external viewer into the VR universe very effectively. It does this by showing a third-person, in-game perspective that encompasses both the player and the game environment, creating a highly immersive 2D video solution. It’s a demanding but rewarding megatask that VR developers should definitely explore if they want to present VR experiences to their very best advantage in an increasingly competitive market.

Figure 1
Figure 1: Screenshot from the Kert Gartner-produced, mixed-reality trailer for Job Simulator*.

This technique can be used live for events or online streams, as well as for trailers and videos, by developers or YouTubers. Josh Bancroft and Jerry Makare from the Developer Relations Division at Intel have been working with the technique in-house and for event demos for the last year, developing relationships with others in the field, and honing their skills. The goal of this step-by-step, how-to guide is to share their knowledge with you, the VR development community, and to equip you to show your amazing creations to the world in the best way possible.

Figure 2
Figure 2: Cosplayer immersed in Circle of Saviors* using green screen, mixed-reality technology at the Tokyo Game Show in 2016.

This guide focuses on the workflow that Josh and Jerry have the most experience with—VR games built in Unity* for HTC Vive* using the SteamVR* plugin for mixed-reality enablement; MixCast VR Studio* for calibration; and Open Broadcaster Software* (OBS) for chroma key, compositing, and encoding for streaming and/or recording. There are numerous ways to successfully recreate the technique, with other hardware and tools available for each stage of the process, some of which are cited in this guide.

The content of this guide is as follows:

Hardware: An overview of recommendations for the physical kit needed to handle this megatask, including PCs, VR headset, camera, lenses, video capture, studio space, green screen, and lighting.

Software: Recommendations on software needs, including enabling the VR application itself, calibration of the in-game and physical cameras, capture and compositing software, and encoding software for streaming and/or recording.

Step-by-step: The calibration, compositing, encoding, and recording stages broken down into easy to follow stages.

Resources: This guide contains many links to further information, and further resources, to give you everything you need to produce amazing mixed-reality videos of VR games.

Hardware Steps

First things first: PC hardware

The usual and most accessible setup for performing the mixed-reality video megatask requires two well-powered PCs—but the same result, or better, can be achieved by one single PC powered by the Intel® Core X-series processor family, with an Intel® Core i9+ processor of 12 or more cores. The extra cores give a single PC enough processing power to handle the VR title, mixed-reality rendering and capture, and encoding for high-quality recording and/or streaming.

The VR application needs to run smoothly with a bit of headroom for the other processes that are explained in detail later in this guide; namely, generating the third-person view in-game, and doing the live video capture and compositing. Running the VR application on the same machine as the capture and compositing is advisable in order to avoid latency issues.

In a two-PC setup, the second machine takes the composited video signal from the first, captures it in turn, and performs the encoding task for streaming and/or recording. This task is relatively heavy in processing terms, so the more encoding power you have, the higher quality the results will be. This is where the extra cores in the Intel Core X-series processor family come in handy.

The way you implement the process depends on the hardware available, and there's a lot of flexibility in how you manage the load balancing. But, if you have to split the work across multiple systems, the way that we found to be most efficient is to split it as evenly as possible in the manner described above.

Behind the mask: VR hardware

A key component, of course, is the VR headset itself, and the sensors that come with it. For this project, we used HTC Vive hardware because a lot of the work required to enable a VR title for this kind of mixed-reality production is built in to the SteamVR Unity plugin, and many Vive titles built in Unity will just work with this process. Useful resources related to HTC Vive mixed-reality support can be found on the Steam Developer forums.

For Oculus Rift*, support is being added for third-person, mixed-reality capture, but, at the time of writing, developers need to do some programming to enable mixed-reality support in their title. Related documentation can be found on the Oculus website.

The sensors are key—not least because an additional one is required to mount on the physical camera that the mixed-reality process requires. The HTC Vive Tracker* is ideal for this, or you can use a third Vive hand controller.

Figure 3
Figure 3: SteamVR* needs to see a third controller (bottom left) in order for the mixed-reality functionality to work.

HTC Vive hardware is used for the purposes of this guide.

Live action: Video capture

The mixed-reality process involves filming the player with a camera, with the resulting video signal fed to the PC performing the compositing. The video needs to be captured at least at the resolution and frame rate that the final video needs to be. A good target for high-quality video is 1080p at 60 frames per second.

To capture the live video, a high-quality video camera, such as a digital single-lens reflex (DSLR) or mirrorless camera, delivers the best results. You can use USB or Peripheral Component Interconnect (PCI) High-Definition Multimedia Interface (HDMI) capture devices, from companies such as Magewell, Elgato and Blackmagic Design, which plug into a USB 3.0 port or PCI slot, and have an HDMI port to plug the camera into. This lets your external camera appear as a webcam to the compositing software.

There are also internal capture cards that fit into a PCI slot that we have used in the past (from Blackmagic, and others). However, these tend to require more drivers, as well as taking up PC space and fitting time. And, of course, they won’t work with a laptop.

Any video capture device—including a regular webcam—that supports the minimum resolution and frame rate that you want your final video to be, and shows up in your chosen compositing software, will work—but a higher quality camera will deliver better results.

Another hardware requirement is a 4K monitor. As explained later, the window rendered on the desktop is a quartered window, and you only capture and record a quarter of it at a time. These quarters then become the layers that are composited to create the final mixed-reality video. For a final video with 1080p resolution, the full resolution of the desktop must be at least four times that; that is, a minimum of 4K. You can use a lower resolution monitor, but remember that the final output resolution of your mixed-reality video will be one quarter of the size at which you render the quartered split-screen window.

In the studio: Setting up the space

For the studio, you need a bit more space than you would for just the VR play area. The minimum size that Vive requires is approximately two meters by two meters, plus the additional space needed for the physical camera to move while avoiding any collisions.

You need to be able to put up as much green screen as possible in the space. In theory, a single piece of green screen fabric behind the player could work, but that severely limits where you can point the camera. You can’t point it off the green screen, because that will show whatever is in the room outside the screen, and break the mixed-reality illusion.

Ideally, three of the four walls and the floor should be covered with green screen, but it depends on how much space you have, and the stands you use for the screen.

We have used a number of configurations, but one portable solution that has worked well for a number of demos is the Lastolite* panoramic background, which creates a three-wall green screen space 4m wide and 2.3m high.

Other than the surface coverage, another thing to watch is that the fabric is pulled flat to avoid shadowing caused by ripples, and that the lighting is as even as possible (more on this later), as these factors have an impact on the ability of the chroma key filter to remove the green around the player.

Looking good: Camera spec

There’s a lot of flexibility regarding the camera you use, depending on what you have access to, but it’s important to use as good a quality camera as possible. The camera must have an HDMI output, and be able to shoot at the resolution and frame rate that you need, which should be a minimum of 1080p at 60 frames per second. Any good camera with video capabilities will work well—such as a DSLR or mirrorless camera, a camcorder or, ideally, a pro video camera.

We have used cameras including the Sony a7S*, which is a full-frame mirrorless camera, a professional Sony FS700* video camera, and a Panasonic GH5*, but many other brands and models perform equally well. An inexpensive webcam isn’t going to cut it, however, if you’re looking for high-quality results.

Looking better: Camera and tracker positioning

For this process, the third VR controller, or tracker, needs to be mounted securely on the camera in such a way that it cannot move in relation to the camera, as any movement will throw off the calibration. Attaching a controller securely can present problems because of its shape and hand-friendly design. We have used a number of ingenious ways to fix the controller to the camera, including ever-faithful duct tape, clamps, and a custom rig we built that goes through the center hole of the controller and uses big washers, rubber gaskets, and a long bolt to secure it. The more recent and much more compact Vive Tracker has made this process easier, with inexpensive shoe mounts that allow easy, rigid fixing to a camera body. Using a cold shoe mount also usually aligns the tracker closely with the centerline of the camera lens, which helps with calibration.

Figure 4
Figure 4: The Vive Tracker* has a standard ¼” 20 screw fitting on its underside for easy mounting.

Making the tripod-mounted camera and attached tracker as level as possible before you start the calibration stage (which is explained later) is very important. A useful trick for this is to use the built-in compass/bubble-level app on your smartphone, if it has one. Placing the smartphone on top of the tracker, or camera, will tell you how many degrees off level the camera is in any direction, providing easy adjusting to get the camera perfectly flat and level. Having the camera and attached tracker perfectly level at the start of the calibration process reduces the amount of adjustment needed in the three positional (X, Y, and Z), and three rotational (rX, rY, and rZ), axes later on, and also makes calibration far simpler and more accurate.

Once the calibration process is complete and the physical and virtual cameras are in lock step, the physical camera can be moved freely within the bounds of the green space, including removing it from the tripod. As long as the tracker and camera don’t move in relation to each other you can move the camera however you want, and it will be tracked in 3D space, just like your hands in VR. A stabilizer is recommended for professional results when holding the camera by hand. If no stabilizer is available, you’re likely to get better results leaving the camera on the tripod, and relying on horizontal and vertical panning for camera movement. Remember to always keep the camera’s field-of-view within the green screen area.

Looking the best: Lenses and focal length

Focal length and lenses are other important considerations. A 16mm or 24mm wide-angle lens has a short focal length and a wide field-of-view, which means the camera needs to be relatively close to the subject, increasing the risks of physical collision, and of the field-of-view accidentally slipping outside the green screen area.

However, a longer focal length—such as a 70mm zoom lens—means you have a much narrower field-of-view, and so need to be further away from the player to keep them in the frame. This can be problematic if space is limited. The best lens and focal length to use will depend on the space available, the extent of the green screen, how much of the player you want to shoot, and how you want to film (that is, static camera versus handheld). A 70mm lens is probably too long in most circumstances, and it’s likely that you’ll want to stick to something like a 24mm or 28mm lens.

What is very important, especially if you’re using a zoom lens where the focal length can be adjusted, is that you keep the focal length fixed once calibration is complete. Changing the focal length (that is, zooming in or out) will throw off the calibration, ruining the alignment between the virtual and physical cameras, and forcing you to start again. With this in mind, fixed focal length (prime) lenses are a safe bet.

In the spotlight: Lighting

Your lighting goal is to evenly illuminate both the subject and the green screen, avoiding any harsh shadows or dramatic differences in lighting that will show up in the chroma key. If possible, a three-light setup is ideal: one to light the subject, and two to light the green screen behind the subject—one from either side to minimize shadows. Fewer lights can be used depending on the conditions. Light emitting diode (LED) panels are great—they’re portable, and don’t produce as much heat as tungsten or halogen lights, which helps keep conditions comfortable for the player.

Figure 5
Figure 5: At Computex in Taipei, in May 2017, the Intel setup included frame-mounted green screen on three walls and the floor, and two fluorescent lighting panels.

Be sure to check that the frequency of the existing lighting in the space doesn’t cause excessive strobing on the camera image, especially if you’re relying on the ambient lighting rather than bringing in your own. This can occur with some LED lights, and other lights of a certain frequency. The only way to know for sure is to run a test in the space.

Software Steps

Soft choices: The VR application

The first key thing, from a software point of view, is that the game is enabled for mixed reality. This means that it allows the implementation and positioning of an additional third-person camera in-game, and can output a quadrant view comprised of separate background and foreground layers taken from that in-game, third-person view. This is the view that will be synced and calibrated with the physical camera, enabling the entire mixed-reality process.

Mixed-reality enablement is possible with games built in Unreal Engine*, and it’s also possible to code your own tools, as Croteam has done with their Serious Engine*, which supports the entire mixed-reality process, including capture and compositing. You can watch Croteam’s tutorial video on how to create mixed-reality videos in any of their VR games here.

For this guide, however, we will focus on the process as it relates to games built in Unity for HTC Vive using the SteamVR plugin, which automatically enables games for mixed-reality video.

Softly does it: Calibration

For mixed reality to work correctly, you need a way to calibrate the position offsets between the physical and virtual cameras. You could tweak the values manually, and calibrate by trial and error, but that’s a very painful process. We have primarily used MixCast VR Studio for the alignment and calibration of the in-game and physical cameras, a key process which is explained in detail later.

Soft results: Compositing and encoding

We use OBS Studio* for the compositing and encoding for recording and/or streaming. OBS Studio is open-source software with good support that is widely used by video creators and streamers. There are other solutions available such as XSplit*, but OBS Studio is used for the purposes of this guide.

Step by Step

Before we get into the calibration, let’s recap where we are so far, and what we need to do next. We have one PC running the VR application, and the video capture and compositing, and a second PC handling the encoding for recording and/or streaming (or one single PC for all tasks, if it’s from the Intel Core X-series processor family). We have a camera with an additional tracker, or controller, attached, perfectly level, in a studio space with as much green screen as possible.

The VR application we’re using is enabled for mixed reality (for example, built in Unity using the SteamVR plugin), with an additional in-game, third-person camera implemented, and outputting a quadrant view, including foreground and background layers. Then, we have capture and compositing software running.

The next stage is to calibrate the in-game and physical third-person cameras so they are perfectly in sync. This will let us film the player and have the in-game camera track both the physical camera and the hand movements of the player (and, by extension, any items or weapons player(s) are holding in the game), with a high level of precision. This lets us accurately combine the in-game and real-world video layers, and create a convincing mixed-reality composite.

Camera calibration

Before you start the calibration process, it's worth going back to double check that the camera and attached tracker are perfectly level. For the calibration process, the player (or a stand-in) needs to stand in front of the camera in the play volume, with the controllers in their hands. That person should also be level, meaning they should stand directly in front of the camera and square their shoulders to it, so they're aligned and centered with the centerline of the camera view.

This is important because when you’re doing the adjustments, you have six different values that you can change: the X, Y, and Z position, and X, Y, and Z rotation. The adjustments can be fiddly, but if you know the camera is level and flat, and the person is standing directly on the camera’s center line, you can minimize some of those offsets to get them close to zero and not have to adjust them later. It helps things go much more smoothly.

For the calibration process of aligning the in-game and physical cameras, we use MixCast VR Studio. Start up the software, make sure it can see the physical camera, and, using the drop-down menu, check that it knows which device is tracking your physical camera as the third controller (the Vive Tracker or controller attached to the camera). Before you start, you also need someone in the VR headset positioned in the play space, with a controller in each hand.

Quick setup

Next, launch the Quick Setup process, which walks you through the calibration process. This will give you the XYZ position, XYZ rotation, and field-of-view values for the virtual camera that the VR application needs in order to line it up with the physical camera.

Figure 6
Figure 6: Select Quick Setup in MixCast VR Studio* to begin the calibration process.

The first step is taking one handheld controller and placing its ring on the lens of the physical camera to line it up as closely as possible. Click the side buttons on the controller to register the step as complete.

Figure 7
Figure 7: The first calibration step involves aligning the physical controller with the physical camera.

Next, the tool projects crosshairs at the corners of the screen. Move the hand controller to position the ring so it lines up as closely as possible with the center of the crosshair, and click the button.

Figure 8
Figure 8: The setup process initially provides two crosshairs, in opposite corners of the screen, with which to align.

Initially, there are two crosshair alignments to complete for the top-left and bottom-right corners of the screen. Once they’re done, there is an option to increase precision by clicking on the Plus button to bring up more crosshairs.

Figure 9
Figure 9: Clicking the Plus button brings up additional crosshairs for greater precision.

We have found that four crosshairs is the optimal number. With only two, the alignment isn’t quite close enough, and more than four also tends to be off. Four crosshairs will cover the four corners of the screen.

Figure 10
Figure 10: Completing the fourth-corner alignment operation.

By this point, a rough calibration is established. You will see two virtual hand controllers tracking the approximate positions of the physical controller. From there, you use the additional refinement controls below the screen to adjust the camera position and rotation to bring them as close as possible.

Figure 11
Figure 11: The controls for XYZ position, and XYZ rotation, are for fine-tuning the camera position.

Fine tuning

To fine-tune the hand alignment, the person in VR holds their hands to the sides so that you see the virtual controllers drawn over the top of the real ones. If the virtual controllers are further apart than the physical controllers, you need to bring the drawn controllers closer together by pulling the virtual camera slightly back. To do this, the person in VR clicks the arrows to adjust the camera position.

Figure 12
Figure 12: Showing the VR drawn hand controller out of alignment with the real one.

Figure 13
Figure 13: Here, following adjustments, the drawn hand controller is in tighter alignment with the real one.

Each click results in a small, visible movement in the desired direction. It’s useful to keep track of the number of clicks, in case you overshoot and need to go back, or otherwise need to undo. Next, look at the up and down alignment by moving the controllers around in the space.

Figure 14
Figure 14: Once the camera is lined up, click the checkmark to confirm the settings.

It’s possible to hold the hand controller still and get it perfectly aligned, and then move it somewhere else only to find that the alignment is off. It may be aligned in one position but not in another, which means there is more fine-tuning to do. This is an iterative process—Josh describes it as a six-dimensional Rubik’s Cube* where you have to get one face right without messing up the others—but, through careful trial and error, you’ll eventually have them perfectly lined up.

Once you have the virtual and physical controllers well aligned, there is a selection of other objects that can be used to perform additional alignment checks, including weapons, sunglasses, and a crown. Play around with the items to make sure everything looks aligned.

Figure 15
Figure 15: The selection of objects available in MixCast VR Studio* for alignment purposes.

Figure 16
Figure 16: Holding the drumstick to check controller alignment.

The crown is particularly useful to check the alignment of the player’s head in VR. The player places it on their head, and then the camera position and rotation can be adjusted until the crown is perfectly centered and level.

Figure 17
Figure 17: Use the crown to check accurate alignment and position of the player’s head in relation to the camera.

Camera config

When you have the alignment values set, you’re calibrated and ready to go, and you can use those values with any VR application that uses the same method. You need to save the XYZ position, the XYZ rotation values, and the field-of-view value from MixCast VR Studio as an externalcamera.cfg file.

Figure 18
Figure 18: The externalcamera.cfg file showing the XYZ position, XYZ rotation, and field-of-view values that need to be saved following the calibration process.
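The exact set of keys can vary between SteamVR versions, but a typical externalcamera.cfg looks roughly like the sketch below. The values shown are placeholders only; yours will come out of the calibration process described above.

x=0
y=0
z=0
rx=0
ry=0
rz=0
fov=60
near=0.01
far=100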

For this Unity/SteamVR mixed-reality method, two conditions have to be met to trigger the quartered screen view that you need for compositing the mixed-reality view. The first is that a third controller, or tracker, has to be plugged in and tracking. The second condition is that the externalcamera.cfg file needs to be present in the root directory of the executable file of the VR application that you’re using.

Figure 19
Figure 19: Ensure the externalcamera.cfg file is saved in the same root folder as the VR application executable.

Launch the game

Now it’s time to fire up the VR application you’re using to create your mixed-reality video. With Unity titles, if you hold the Shift button while you launch the executable, a window pops up that lets you choose the resolution to launch at, and whether to launch in a window or full screen.

At this point, you need to specify that the application needs to run at 4K (in order that each quadrant of the quartered window is high-definition 1080p), and that it needs to run full screen, not windowed (uncheck the windowed box, and select the 4K resolution from the list). Then, when you launch the application, it will start the quartered desktop window view running at 4K resolution.

Figure 20
Figure 20: Example of the quartered desktop view for Rick and Morty: Virtual Rick-ality*.

This quartered view is comprised of the following quadrants: game background view (bottom left); game foreground view (top left); game foreground alpha mask (top right); and first-person headset view (bottom right).

Compositing

The open-source OBS Studio software is used for compositing for the purposes of this guide. Before you start, make sure you have the quadrant view on screen. The first step is to capture the background layer, which is the lower-left quadrant. Add a new Source in OBS of the type “Window Capture”. Select the window for the VR application. Rename this source “Background”. Next, crop away the parts of the screen that you don’t want to capture by adding a Crop Filter to this source (right-click on the Background source, Add Filter, Crop/Pad). The crop values represent how much of the window to crop from each of the four sides, in pixels. So, to capture the bottom-left quadrant for the background layer, use a value of 1080 for the top, and 1920 for the right (remember, at 4K 3840 x 2160 resolution, this is exactly half of each dimension).

Figure 21
Figure 21: The cropping filter showing the quartered view, before entering the crop values.

Figure 22
Figure 22: The cropping filter showing only the bottom-left quarter (background view), after having entered the crop values.

Once you’ve applied the crop filter, you’ll see it in the preview window, but it won’t take up the full screen. Right-click on the source, select Transform, and choose Fit to screen (or use the Ctrl+F shortcut). Every layer in the composite image needs to be full screen in OBS, so do that for each layer.

Chroma key

When you have your background, you then need to do the same thing for the physical camera view, and cut away the green background, leaving the player ready to be superimposed on the captured game background.

Figure 23
Figure 23: Defining the source for the physical camera in OBS*.

Add a new Video Capture source. Choose your camera capture device, and name this layer “Camera”. You should see your camera view. Next, right-click on the source, go to filters, and select "Chroma Key".

Figure 24
Figure 24: Selecting the chroma key filter in OBS*.

Figure 25
Figure 25: Once Chroma Key is selected, the green background will disappear, and the sliders can be used for fine tuning.

You can adjust the sliders in the chroma key settings until you get the person sharp against the background, with the green screen completely removed, and without erasing any of the person. The default values are usually pretty good, and should only need small adjustments. This is where you will see the benefit of good, even lighting. If you make too many changes, and mess up the chroma key filter, you can always delete the filter and re-add it to start fresh.

Figure 26
Figure 26: An interim test composite, minus the foreground layer, showing the player positioned on top of the game background.

When it looks good, position it to “Fit to Screen” in the preview window, and make sure the Camera layer is listed before the Background layer in the Source list (which means the camera layer is rendered on top). You should see your camera view against the VR background at this point.

Foreground

Next, you need to follow the same process as the background layer for the foreground, which is the upper-left quadrant. Add a new window-capture source, capture the quadrant view, and apply a crop filter—this time cutting off the right 1920 and the bottom 1080 pixels to isolate the upper-left corner of the quadrant. Size it with Ctrl+F for “Fit to Screen” to make sure it fills the full preview window. Name this source “Foreground”.

Figure 27
Figure 27: Crop of the foreground view.

The foreground layer shows objects that are between the player and the camera (and therefore should be rendered on top of the live camera view). You’ll need to key-out the black parts of the foreground view to allow the background and camera layers to show through. Right-click on the Foreground source, and apply a “Color Key” filter.

Figure 28
Figure 28: Select the Color Key filter to remove the unwanted black areas of the foreground view.

OBS will ask what color you want to use for the key, and there’s an option for custom color. Go into the color picker, and choose black. To be sure, you can actually set the values, selecting the hexadecimal value #000000 for solid, absolute black. Apply that, and it will make the black part transparent, so the foreground objects can sit in front of the rest of the layers in the composite.

Figure 29
Figure 29: Select the color key hex value of #000000 for solid black.

Figure 30
Figure 30: The foreground layer with the color key applied, and the black area removed.

The upper-right quadrant is the alpha mask, which can be applied to the foreground layer, but is more complicated to use. However, if the foreground layer includes solid black elements that you don’t want to turn transparent, then applying the alpha mask is the way to do that. Setting up the alpha mask is beyond the scope of this guide, but you can find useful information on this topic in the OBS Project forums.

Composite image

With the Background, Camera, and Foreground sources properly filtered, and in the correct order (Foreground first, then Camera, then Background), you should be able to see your real-time, mixed-reality view.

Figure 31
Figure 31: The final composite image, comprised of game background, player, and foreground.

Once you have the mixed-reality source, you can do other creative things in OBS. For example, you can set it up to switch between the mixed-reality view and first-person view. This is great for showing the difference between the two views, and can be useful during streams, or recording, to vary what the viewer sees.

You can also set it up to show a different camera by selecting it as another source, which is useful if you’re hosting a live stream and want to cut to a camera pointing to the host, for instance. It’s also possible to bring different graphics or stream plugins into your OBS workflow as needed, which again is an important capability for streamers and YouTubers.

Troubleshooting lag

One issue that could arise is lag in OBS, which may be caused by a mismatch in frames per second between the different video sources. If that happens, first make sure that the desktop capture (in OBS settings under Video) is set to 60 frames per second. Next, check that the frame rate on your camera capture device is set to 60 frames per second. Lastly, check that your video camera itself is also set to 60 frames per second.

We had a lag problem with a demo, and we discovered that one of the cameras had been set to 24 frames per second. Setting it to 60 frames per second instantly fixed the problem. It may be that your setup can’t handle higher than 30 frames per second, which is fine; but, in that case, all the settings noted above need to be at 30 frames per second.

Video out

The last stage of the process is taking the mixed-reality video signal and encoding it for recording or streaming. With a two-PC setup, the first machine does the capture and compositing, and the second one handles the encoding for recording or streaming. If you’re using a PC utilizing the Intel Core X-series processor family with lots of cores, it should be able to handle all the tasks, including high-quality encoding, without needing to output to a second machine.

In a two-PC setup, to send the signal from the first PC to the second, OBS has a feature called Full-screen Projector, which allows you to do a full-screen preview. If you right-click on the preview window, you can pick which monitor to output it to. Then, you can take a DisplayPort* or HDMI cable, and plug it into your graphics processing unit (GPU) so that the computer running the VR and compositing thinks that it’s a second monitor (you can also run it through a splitter, so you actually do have it on a second monitor as well).

Send the signal into the second computer using a second USB HDMI capture device (or PCI card) with the same 1080p/60 frames per second capabilities. You also run OBS on that second computer, and set up a very simple scene where you have one video capture device, which is the USB HDMI capture box, or card. You then set all the recording and streaming quality settings on that second system.

The second computer is also where you would add other details to the video output, such as subscriber alerts or graphics. When it comes to switching scenes, it can get complicated across two machines. For example, switching between first- and third-person view needs to be done on the first computer, while you might have a number of other operations running on the second computer that you also need to manage.

Encoding

OBS output settings, by default, are set to simple, which uses the same encoder settings for recording and streaming. You will need to set it to advanced mode to be able to adjust the settings for streaming and recording separately.

You usually have the choice of using either the GPU or the CPU for encoding. When using a single PC for all tasks, we recommend using the x264 CPU encoder, as using the GPU encoder negatively impacts the frame rate of the VR experience, both for the person in VR and on the desktop.

Using the x264 CPU encoder also lets the work scale across CPU cores: with a 12- or 18-core processor from the Intel Core X-series family, you can crank the quality settings up very high, because the encoding runs on the CPU rather than the GPU, resulting in a better quality final video.

Streaming

With streaming, the bitrate is limited by the upstream bandwidth of your internet provider, and the bitrate your stream provider supports. Twitch*, for example, is limited to about six megabits, so in OBS you select a bitrate of 6,000 kilobits per second. It’s best to use the maximum streaming bitrate that your stream provider, and your Internet provider, can handle. If you have slower Internet, or just want to lower the quality, you could drop that down to 4,000, or 2,500, kilobits per second for streaming.

Figure 32
Figure 32: Streaming settings in OBS*.

There’s also the key-frame interval to consider, which should be changed from the default zero (0) auto setting to two seconds. Other settings include setting it to constant bitrate, and selecting medium for the CPU usage preset. This gives you a high-quality video for your stream.

Recording

The recording encoder settings are identical, except for the bitrate, which you can turn up much higher. For streaming on Twitch, you’re looking at a bitrate of 6 megabits per second; for recording, a maximum could be anything from 20 to 100 megabits per second, depending on the system. It’s here that a powerful system such as one based on the Intel Core X-series processor family can really make a difference in terms of video quality—but if you set your bitrate too high, the file size may become unmanageable.

You’ll need to experiment with the bitrate value to find the sweet spot between quality, file size, and not overloading the encoder. Start low, around 15 megabits, then run a test recording using the whole mixed-reality stack. The CPU usage varies depending on what’s happening, so watch the status bar and stats window in OBS for "Encoder overloaded!" or dropped-frames warnings. If you don’t get any warnings, your system might be able to handle a little more. Stop the recording and raise the bitrate a bit, if you want.

Figure 33

Figure 33: Recording settings in OBS*.

That’s a wrap

The green screen, mixed-reality video process for VR requires a good deal of trial and error to get right but, once you’ve nailed it, the results are very satisfying. Covering every eventuality and permutation in a guide such as this is impossible; but the aim is that, by now, you have a grasp of what’s involved, and can experiment with producing mixed-reality videos of your own. We’ll be looking out for them.

More Information

Intel Developer Zone

Parallel Techniques in Modeling Particle Systems Using Vulkan* API



Example of using GPU- and CPU-based solutions utilizing Vulkan graphics and compute API

Tomasz Chadzynski
Integrated Computer Solutions Inc.

1 Introduction

Parallel processing has become one of the most important aspects of today's programming techniques. The paradigm shift forced by CPUs hitting the power wall led to programming techniques that emphasize spreading computation over multiple cores and processors. As it turns out, many of the computation tasks that software developers face are parallelizable. One example is the case of modeling particle systems. Such systems are widely used in many fields, from physical simulation to computer games.

The Vulkan* API is a collaborative effort by the industry to meet current demands of computer graphics. It is a new approach that emphasizes hiding the CPU bottleneck through parallelism, allowing much more flexibility in application structure. Aside from components related only to graphics, the Vulkan API also defines the compute pipeline for numerical computation.

This paper discusses and compares aspects of the implementation of a particle system using CPUs and GPUs supported by a Vulkan-based renderer example.

1.1 Assumptions and expectations

With all the performance gains that Vulkan has to offer, there is a downside in the form of a rather steep learning curve. Drawing a triangle, the simplest example available on the Internet, can take around six hundred lines of code. This is because the Vulkan API requires the developer to specify nearly all aspects of the rendering mechanism. Still, given some effort, Vulkan proves to be a clear and straightforward API to use.

This document does not intend to teach Vulkan basics. Some basic knowledge of Vulkan is required, however.  You should already know how to set up simple Vulkan applications. If you are new to Vulkan, then a good start would be to work through the LunarG tutorial.

The example code accompanying this paper is organized to emphasize presenting architectural concepts, which leads to two important assumptions: 1) The return status from Vulkan function calls is in most cases ignored to avoid overly expanding the code base. In production, every return status should be checked and reacted to; 2) Conceptually close topics are kept within a single code flow. This reduces the need for jumping through the sources, but sometimes leads to rather long functions.

This paper is best read alongside the accompanying code. Make sure you have the examples downloaded and use your favorite code browsing tool. To avoid long code listings, this paper references the code through a series of tags. For example, GPU_TP.15 references point fifteen in the Vulkan compute version of the code base. Each tag is unique and easily searchable within the code base.

Lastly, one of the major design goals of Vulkan is to give as much freedom to the application designer as possible. Therefore, there are many ways of implementing a given task depending on the design goals, target hardware, scalability, and so on. This paper’s goal is to introduce and spark some creative thinking, rather than provide a final approach.

1.2 Additional resources

Vulkan has received much attention, which has resulted in many good information sources. Here are a few recommended reference materials:

  1. Vulkan specification and quick reference: https://www.khronos.org/registry/vulkan/
  2. OpenGL and GLSL specification. We are mostly interested in GLSL as we use it as a shader programming language: https://www.khronos.org/registry/OpenGL/index_gl.php
  3. Vulkan SDK and an excellent tutorial covering Vulkan basics: https://vulkan.lunarg.com/
  4. Another Vulkan tutorial goes beyond basics, covering more advanced memory handling and textures: https://vulkan-tutorial.com/
  5. Database of hardware supporting Vulkan. A great source for capabilities that are offered by the variety of hardware: http://vulkan.gpuinfo.org/
  6. A large set of examples in Vulkan from basic triangle to compute shader: https://github.com/SaschaWillems/Vulkan

1.3 Building examples

The project build is organized using Visual Studio* 2017. Overall, it uses three dependencies: GLM, GLFW, and Vulkan SDK. The first two are installed automatically through NuGet. The Vulkan SDK should be downloaded and installed. After opening the project, select the "Property Manager" tab. From there, unfold the "Debug|x64" and open Vulkan Dir. Then go to "User Macros" and set the VK_DIR property to point to the main Vulkan SDK directory. For example, "C:\VulkanSDK\1.0.51.0." The Debug and Release both use the same property, so one change will affect both. From there just click “Build.”

2 Concept of a Simulated Particle System

Our goal is to model the movement of multiple objects (particles) through space. To make things more interesting, the scene will contain entities that alter the paths along which the particles move.

A single particle is defined by two vectors, its position in space and its velocity, plus an additional scalar representing mass. This set of properties describes the state of each particle:

p = (x, v, m)    (1)

We can look at the way a particle moves through space with displacement given by the differential equation:

dx/dt = v    (2)

For now, we will treat the velocity as a constant and solve for displacement:

x(t) = v t + C    (3)

We arrive at a general formula that models how a particle moves through space, assuming constant velocity (which will change soon). We also need the constant C, whose value must be known in order to describe our system completely. The constant C is related to the initial conditions of this differential equation: the particle has to have a predefined state at the beginning of its life. This is where generators come in.

2.1 Generators

A generator is a function whose task is to "place" a new particle into the scene in some predefined manner. This demonstration defines a single sprinkler generator, whose functionality consists of spawning particles at a predefined point in space with a given velocity and mass.

The position of the point of origin is given directly as a parameter. The direction is defined using spherical coordinates which are then internally converted into the Euclidean coordinate system. The angles are defined as follows:

  1. (theta): Angle starts at x-axis and rotates counterclockwise.
  2. (phi): Angle starts from z-axis towards xy-plane.
  3. The value of speed is given as a separate argument.

Euclidean coordinates

To convert into Euclidean coordinates, we use the following set of equations:

vx = s sin(phi) cos(theta),  vy = s sin(phi) sin(theta),  vz = s cos(phi)    (4)
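As a minimal sketch of this conversion in code (using the GLM types the project already depends on; the Particle structure and function name below are illustrative, not taken from the accompanying sources):

#include <cmath>
#include <glm/glm.hpp>

struct Particle {
    glm::vec3 position;
    glm::vec3 velocity;
    float     mass;
};

// Illustrative sprinkler generator: spawn a particle at a fixed origin and
// convert the spherical direction (theta, phi) plus speed into a Euclidean
// velocity vector, as in equation (4).
Particle spawnParticle(const glm::vec3& origin, float theta, float phi,
                       float speed, float mass)
{
    Particle p;
    p.position = origin;
    p.velocity = glm::vec3(speed * std::sin(phi) * std::cos(theta),
                           speed * std::sin(phi) * std::sin(theta),
                           speed * std::cos(phi));
    p.mass = mass;
    return p;
}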

Going back to our general equation for particle motion and factoring in the generator, we can determine that at time t=0 the position of a particle becomes x0:

x(t) = v t + x0    (5)

At this point, we have a formula that we can use to describe the motion of a particle. However, our scene will not look that interesting as all of the particles would only move in a straight line and at a constant speed. To give our scene more dynamic behavior, we introduce the concept of interactors.

2.2 Interactors

To understand how we can consistently influence the motion of a particle, we start treating velocity as a variable instead of a constant. The change of velocity is given by the following formula:

dv/dt = a    (6)

This equation looks similar to the previous formula for displacement. By solving this differential equation, we arrive at the formula for velocity:

v(t) = a t + v0    (7)

Therefore, to consistently change particle motion we influence the velocity indirectly through changing acceleration. We define the interactor as a function that, for the given particle state and given interactor properties, returns the acceleration vector:

a = f(p, I)    (8)

Unfortunately, defining the interactor as a function becomes problematic. When we substitute back into the velocity equation, we see that the differential equation has a potential to become unsolvable:

dv/dt = f(x(t), v(t), m, I)    (9)

This is not what we want. Our goal is to have the ability to introduce different types of interactors without being worried about breaking our model. To do that, we need to implement the Euler method of solving differential equations to approximate the particle movement numerically.

2.3 Euler method in numerical simulation of particle movement

The Euler method's working principle is to step through the differential equation instead of solving it across the entire domain. If we assume a sufficiently small time step, we can approximate the velocity (2) and acceleration (6) to be constants within the span of a single step. This allows us to fall back to equations (5) and (7), and discretize them so they can be programmed. Therefore, the final set of equations to be used is as follows:

v_{n+1} = v_n + a_n * dt,    x_{n+1} = x_n + v_n * dt    (10)

If the math so far in this chapter has given you trouble, don't get discouraged. It is enough to understand the final equations (10) to work with the rest of the material. From now on we will treat the acceleration vector as a constant within a single iteration. Next, we dive deeper into the interactors that affect the motion of a particle.
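To make the discretization concrete, here is a minimal sketch of a single Euler step in C++ (assuming the illustrative Particle structure sketched earlier and an acceleration already computed for this time step):

// One Euler step, following equation set (10): the acceleration is treated
// as constant during the time step dt.
void eulerStep(Particle& p, const glm::vec3& acceleration, float dt)
{
    p.position += p.velocity * dt;    // x_{n+1} = x_n + v_n * dt
    p.velocity += acceleration * dt;  // v_{n+1} = v_n + a_n * dt
}

Note that the position update uses the velocity from the start of the step, matching the order of the discretized equations.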

2.4 More on interactors

We established earlier that the particle motion within the scene would be influenced by changing acceleration, using Newton's second law:

F = m a    (11)

We see that the task of an interactor should be to generate some force and convert it into acceleration using the mass of the particle. Moreover, we can sum the forces coming from multiple interactors:

F = F_1 + F_2 + ... + F_n    (12)

Therefore, our final approach is as follows:

a = a_1 + a_2 + ... + a_n,  where each a_i = F_i / m is returned by its interactor    (13)

The main reason to not multiply out the mass is that interactors that simulate gravity don't use it. In other words, the acceleration due to gravity does not depend on the mass of an object on which gravitational pull is exerted. It is also possible that some interactors which don’t model physically accurate behavior could simply ignore mass as well. This is an optimization step which could save a significant number of cycles spent on multiplication.
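A hedged sketch of this accumulation in code (the Interactor interface below is illustrative, not the one in the accompanying sources):

#include <memory>
#include <vector>

// Each interactor returns its acceleration contribution for the given
// particle state, as in equation (13).
struct Interactor {
    virtual ~Interactor() = default;
    virtual glm::vec3 acceleration(const Particle& p) const = 0;
};

glm::vec3 totalAcceleration(const Particle& p,
                            const std::vector<std::unique_ptr<Interactor>>& interactors)
{
    glm::vec3 a(0.0f);
    for (const auto& interactor : interactors)
        a += interactor->acceleration(p);  // gravity-style interactors can ignore the particle mass
    return a;
}

Returning acceleration rather than force is exactly the optimization discussed above: interactors that do not need the particle mass never touch it.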

2.4.1 Constant force interactor

This is the simplest form of an interactor. Its function is to exert a constant force on a particle, regardless of any of the particle's other properties. It could be used to make particles drift in some predefined direction:

a = F_const / m    (14)

2.4.2 Gravity point interactor

This is the second type of interactor. It represents a single point that exerts the force of gravity on particles in the scene. Such a point has mass and position, but in our example it does not have volume. It uses the standard gravity equation:

F = G M m / d^2    (15)

Now we can see why we chose not to include the mass of a particle within each interactor. When deriving the formula for acceleration, we end up with:

a = (G M / d^2) r    (16)

In this equation, G is the gravitational constant, M is the mass of the gravity point, d represents the distance, and r is the unit vector pointing toward the gravity point interactor from the location of the particle.
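A minimal sketch of such a gravity point interactor, reusing the illustrative Interactor interface from above (the small epsilon term is an assumption added here to avoid division by zero when a particle passes through the point):

// Illustrative gravity point: a = (G * M / d^2) * r_hat, per equation (16).
struct GravityPoint : Interactor {
    glm::vec3 position;           // location of the gravity point
    float     mass;               // M, mass of the gravity point
    float     G = 6.674e-11f;     // gravitational constant (often rescaled for visual effect)

    glm::vec3 acceleration(const Particle& p) const override {
        glm::vec3 r  = position - p.position;          // vector toward the gravity point
        float     d2 = glm::dot(r, r) + 1e-6f;         // squared distance, with epsilon
        return (G * mass / d2) * glm::normalize(r);    // particle mass is never used
    }
};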

2.4.3 Gravity planar interactor

This is the last type of interactor implemented for this example. It is modeled as an infinite plane exerting gravitational force on the particles in the scene. In this case, the r vector is always perpendicular to the plane. Therefore, for any given particle location, the distance has to be calculated, as such: Let P1={x1,y1,z1} represent a particle in space. The plane is described by its normal vector n={a,b,c}, and a point on that plane, P={x,y,z}. The distance D is given by:

D = |a(x1 - x) + b(y1 - y) + c(z1 - z)| / sqrt(a^2 + b^2 + c^2)    (17)

This type of interactor can be used to model objects that are very large compared to the scene, such as the Earth.

2.4.4 Additional thoughts

Simulating physically correct behavior is not the only way to build interesting particle systems. In fact, setting up a physically accurate system in equilibrium could be very hard. A similar effect could be achieved by using non-physically accurate interactors. Such functions can look into the entire state of a particle and generate acceleration that will attempt to force certain behavior.

3 Vulkan Renderer for CPU-Based Simulator

The design of the renderer component used in visualizing the CPU version of the particle simulator will be described first. If you are not familiar with Vulkan, it is a good idea to pause here and review one of the tutorials. The LunarG tutorial covers sufficient material to understand the rest of this paper. Also, remember that Vulkan leaves the application design to the programmer and allows for a variety of ways to approach a task. Take this guide as one of the options available rather than the definitive solution.

At a high level, when programming using Vulkan, the goal is to construct a virtual device to which drawing commands will be submitted. The draw commands are submitted to constructs called “queues.” The number of queues available and their capabilities depends upon how they were selected during construction of the virtual device, and the actual capabilities of the hardware. The power of Vulkan lies in the fact that the workload submitted to queues could be assembled and sent in parallel to already executing tasks. Vulkan offers functionality to coherently maintain the resources and perform synchronization.

In our application, we took the approach of breaking up the Vulkan renderer setup into five components (see graphic below). The goal is to present an easy-to-understand codebase for the Vulkan application.

Figure 2

3.1 Physical device selection

The first step in every Vulkan application is to create the Instance Object. This functionality is delivered through VkInstance. It is required that an application has a VkInstance created before starting anything else. The VkInstance is the interface used to inspect the hardware capabilities of a computer system. This functionality is encapsulated in the AppInstance object (CPU_TP1). The AppInstance object is essentially a wrapper exposing the interface for convenient hardware device enumeration.

3.1.1 Validation layers: A crucial step in Vulkan application development

One of the most critical steps when developing with Vulkan is to enable the validation layers (CPU_TP2). By design, for performance reasons, Vulkan does not check the validity of the application code. When used incorrectly, Vulkan calls will simply fail or crash, leaving virtually no trace to help with debugging. With validation layers enabled, the Vulkan loader injects a layer of code for verification. Enabling validation layers is a step that every developer should remember; however, that layer affects performance and should be disabled in production.
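For reference, enabling a validation layer at instance creation typically looks like the sketch below. The layer name shown was the standard one shipped with Vulkan 1.0 SDKs; newer SDKs use VK_LAYER_KHRONOS_validation instead, so check what vkEnumerateInstanceLayerProperties reports on your system.

#include <vulkan/vulkan.h>

// Sketch: create a VkInstance with a validation layer enabled.
VkInstance createDebugInstance()
{
    const char* layers[] = { "VK_LAYER_LUNARG_standard_validation" };

    VkApplicationInfo appInfo = {};
    appInfo.sType            = VK_STRUCTURE_TYPE_APPLICATION_INFO;
    appInfo.pApplicationName = "particles";
    appInfo.apiVersion       = VK_API_VERSION_1_0;

    VkInstanceCreateInfo createInfo = {};
    createInfo.sType               = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
    createInfo.pApplicationInfo    = &appInfo;
    createInfo.enabledLayerCount   = 1;
    createInfo.ppEnabledLayerNames = layers;
    // In production builds, leave enabledLayerCount at 0 to disable validation.

    VkInstance instance = VK_NULL_HANDLE;
    vkCreateInstance(&createInfo, nullptr, &instance);  // check the VkResult in real code
    return instance;
}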

3.2 Vulkan virtual device and swapchain

The next step is to create the actual virtual device that will expose the command queue used to render the scene. First, a surface object must be instantiated. (The topic of the surface is out of the scope of this article, but it is covered in detail in the LunarG tutorial.) However, in this case, the demo code supports Windows* only, and the surface is created and exposed by the SurfaceWindows class (CPU_TP3).

Next is creating the Device object (CPU_TP4). Note that the Device class encapsulates much more than VkDevice. Starting with the constructor, the Device class exposes multiple overloads, which allow more flexibility in initialization (CPU_TP5). The physical device used to create the virtual device can be selected "first fit," or it can be specified using a VendorID number if the software is to use specific hardware. End-user systems can have multiple devices that support Vulkan, and the first one on the list is not always the desired device.

The default constructor (CPU_TP6) obtains the list of available physical devices and then runs the createDevice method. If the creation was successful, this physical device is then used. If the creation was not successful, the constructor tries the next physical device. The actual device creation is done in the createDevice method (CPU_TP7). The first step is to determine if the physical device can deliver queues with the required functionality (CPU_TP8). Devices can deliver many types of queues. Some queues may support all of the required operations. Some queues may be highly specialized for the specific tasks, such as compute or transfer. In this case, the application expects the queue to allow for transfer and drawing graphics. The application will iterate over available queues until an index is found that matches the expected criteria.

Once completed, the virtual device is created. Besides VkDevice, the object supplies the queue handles and the swapchain that correspond to the virtual device. (For more details, please check the Vulkan tutorials and reference.)

3.3 Scene setup

The ParticlesElement's (CPU_TP9) task is to draw the particle fountain. For it to be drawn, it has to belong to the scene. The Scene object's role is to aggregate all the scene elements and to maintain the command buffer and the render pass (CPU_TP10). When the simulator is ready to render, it calls the render function (CPU_TP11). This function is responsible for starting command buffer recording, obtaining the current render target, and finally beginning the render pass (CPU_TP12). Once that is done, the renderer iterates over every element in the scene (CPU_TP13), allowing each one to record drawing commands into the command buffer. In this case, there is a single scene element that draws the particle fountain. After all scene elements finish recording, the render function submits the command buffer to the queue and presents the results on the screen.

3.4 Scene Element

The SceneElement (CPU_TP14) represents the single element or entity within the scene. The expected scope of scene element objects is to handle everything related to rendering a single entity and maintaining required resources, like the pipeline and buffers. The ParticlesElement focuses its entire operation around two methods. First is the constructor (CPU_TP15) that initializes the rendering pipeline. Second is the recordToCmdBuffer (CPU_TP16) that is invoked by the Scene class to record element-specific draw calls into the command buffer.

3.5 Graphics memory allocation and performance

There is one important aspect that has to be considered when the memory for the buffers is allocated. A Vulkan device can expose several different memory types. The Memory tab in the hardware database shows that devices expose multiple heaps from which memory can be allocated. In the list of memory types, each type is described by two values: the memory type flags that describe its properties and the heap it is associated with. (See the hardware database referenced in section 1.2.)

The type of heap selected could significantly affect performance. For example, some devices, mainly discrete graphics cards, expose one heap with only DEVICE_LOCAL_BIT supported, which is the fastest memory available for that device, but does not allow direct access from the CPU (more on that in a moment). Another heap with the HOST_VISIBLE_BIT allows direct access, but has potentially longer access time.

As of today, most of Intel's integrated GPUs expose heaps with both DEVICE_LOCAL_BIT and HOST_VISIBLE_BIT. The presence of the first one ensures optimal access time for the platform. The second allows the memory mapping of the allocated buffer into the CPU address space and direct access (CPU_TP17, CPU_TP18).

Selecting a memory heap is a three-step process. The first step is to specify and create the buffer (CPU_TP19) (note that vkCreateBuffer does not yet allocate memory for the buffer!). The second step is to use the Vulkan API to obtain the list of memory types that can support the defined buffer (CPU_TP20). The third step is to use the selected memory type index to allocate the buffer memory. Of particular interest is the memoryTypeBits field in VkMemoryRequirements. This field is a bitmask in which bit i is set to 1 if memory type index i can be used to allocate memory for this buffer; if the bit is 0, that memory type cannot be used with the particular object of interest.

The first step above will likely provide multiple options. Therefore, in the second step, the exact memory type desired must be chosen. In this case, on Intel’s GPU, all of the memory types have DEVICE_LOCAL_BIT set. HOST_VISIBLE_BIT and HOST_COHERENT_BIT also must be set. The HOST_COHERENT_BIT guarantees that memory writes are observed without the need to manually flush the buffers. If the corresponding bit in memoryTypeBits is set and the memory type at the given index supports the requested flags, then that index is selected for use (CPU_TP21). In the third step, that index is used when allocating the actual memory on the heap (CPU_TP22).

This memory allocation topic is quite important, and you are encouraged to look further into it. Topics like staging buffer or Vulkan buffer copy operations would be a good next step, especially for devices with dedicated memory.

4 CPU-Based Particle Simulator

Section 2 describes the model of the particle simulator that was implemented for this project. The implementation is based on a real-time infinite loop. The simulation loop contains three main operations:

      1. Obtain the time difference between the current and previous iteration (CPU_TP23).
      2. Compute the next Euler iteration for the particle model (CPU_TP24).
      3. Upload and render a new frame using the Vulkan renderer (CPU_TP25).

The entire physical simulation happens within the progress method (CPU_TP26) within the algorithm, and it is composed of three major steps:

      1. For every particle, the algorithm launches the interactors and adds up the acceleration vectors (CPU_TP27).
      2. The algorithm sorts the particle list in such a way that all of the active particles (TTL>0) occupy the left side of the list, and all of the inactive particles occupy the right side of the list (CPU_TP28).
      3. The algorithm launches the particle generator(s) to add new particles to the list if the generators' conditions have been met and there is space left in the buffer (CPU_TP29).

With this simple model, many interesting effects can be generated. Although this implementation uses a real-time approach (i.e., the actual hardware clock is used to obtain the time difference), the code can easily be modified to use a stepwise approach. An important point to remember is that the time difference has to be sufficiently small, otherwise it will break the model. Knowing that, the real-time approach is a good choice only when the system is capable of spinning the main loop at a sufficient speed. If that is not the case, consider switching to an arbitrarily determined time step that does not use the system clock.
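To make the distinction concrete, here is a minimal sketch of the two loop variants (in Python for brevity; model, renderer, and their methods are hypothetical stand-ins for the classes in the sample code):

import time

def run_realtime(model, renderer):
    previous = time.perf_counter()
    while renderer.is_running():
        now = time.perf_counter()
        dt = now - previous           # wall-clock time since the previous iteration
        previous = now
        model.progress(dt)            # Euler step driven by the measured time difference
        renderer.draw(model)

def run_fixed_step(model, renderer, dt=1.0 / 240.0):
    # dt is an arbitrarily chosen, sufficiently small step; the system clock is not used
    while renderer.is_running():
        model.progress(dt)
        renderer.draw(model)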

4.1 Particle model

To represent the set of particles in a scene, the Buffer class (CPU_TP30) is used. It is a relatively simple class that encapsulates a memory array of particle structures (CPU_TP31). The class also delivers a few convenience functions and the array sorting algorithm.

The purpose of the sorting algorithm is to place particles into two groups within the particle array: the active particles on the left side and the inactive particles on the right side. It is implemented as follows:

  1. Initialize L index at the first element and P index at the last element.
  2. Progress L toward the end of the array until a particle with TTL<=0 is encountered.
  3. Progress P toward the beginning of the array until a particle with TTL>0 is encountered.
  4. Swap tab[L] with tab[P].
  5. Repeat from step 2, continuing from L+1.

If at any time L==P, the finish condition is reached.
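For reference, a minimal sketch of this single-pass partition (shown in Python for brevity; the reference implementation is C++, tab, L, and P follow the naming used above, and ttl_of is a hypothetical accessor):

def partition_active(tab, ttl_of):
    """Moves particles with TTL > 0 to the front of tab; returns the number of active particles."""
    if not tab:
        return 0
    L, P = 0, len(tab) - 1
    while L < P:
        while L < P and ttl_of(tab[L]) > 0:    # step 2: advance L until an inactive particle is found
            L += 1
        while L < P and ttl_of(tab[P]) <= 0:   # step 3: move P back until an active particle is found
            P -= 1
        if L < P:
            tab[L], tab[P] = tab[P], tab[L]    # step 4: swap, then continue from L + 1
            L += 1
    return L + 1 if ttl_of(tab[L]) > 0 else L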

4.2 Scene interactors and generators

Both interactors and particle generators influence the scene during the simulation. Within the model, both are represented by their corresponding abstract interfaces. The interactors use BaseInteractor (CPU_TP32) and generators, BaseGenerator (CPU_TP33). Both need to be defined before the Model object is instantiated (CPU_TP34).

In the case of the interactor, all that the model requires is to obtain the acceleration vector given the particle state and time difference. Internally, the interactor can perform arbitrary operations, and interactors can be fundamentally different from each other. For example, the ConstForceInteractor (CPU_TP35) uses only force magnitude, direction vector, and particle data.

The BaseGenerator, aside from a common interface, delivers functions to generate particle properties like speed with randomized deviations (CPU_TP36). However, it leaves the exact particle location, the rate of appearance, and other details to the derived classes. In this example, a point generator (CPU_TP37) is used. The point generator is set up so that it acts as a particle sprinkler (CPU_TP38). It is given a direction in polar coordinates specified in degrees, and it emits particles at a defined rate.

4.2.1 Performance vs. maintainability

There exists a performance bottleneck in the implementation. A specific design choice was made to prioritize maintainability over performance. The problem originates from the polymorphic call into the computeAcceleration method (CPU_TP39).

Polymorphic calls tend to be slower than regular function calls, and every interactor is executed for every particle. It is clear how this could become a significant factor for a large number of particles. However, the advantage in this case is that new interactors can be added to the mix with no need to make changes to the Model's code.

The alternative here could be to implement some mechanism that would detect the type of interactor and execute its procedures using a conditional statement. This will improve performance, but also increase the complexity of the code.

Which approach to use is a case-by-case decision that depends on project goals. When a small number of particles is expected with a large variety of interactors present, then the virtual method approach is a good choice. However, if the number of particles is large, then switching to other solutions could be a good way to improve performance.

4.3 Lifetime of the particle and the importance of sorting

There is one last topic related to particle generation and particle lifetime. Once a particle is moving through the scene, it can easily fly outside of the visible area. Since it is not visible, there is virtually no purpose in modeling it. Sometimes the goal is to model effects like a flame with a specific height. To implement such cases, every particle must have a property that determines its lifespan called “Time To Live” (TTL). At each iteration, the time difference is subtracted, and if the value is equal to or below zero, the particle is considered inactive. When particles become inactive, no operations are performed on them and they are no longer drawn.

There are two issues related to inactive particles. First, a way is needed to free up space for new particles. Second, inactive particles should no longer be drawn on the screen. This project example uses a simple rendering mechanism. However, in the case of a more complicated multi-pass renderer with more computation-heavy shaders, it is a good idea to avoid processing elements that would not be visible on the screen. This is where sorting comes in. Sorting the particle array allows draw calls to be submitted that specify only the number of particles that should appear on the screen. It also simplifies the work of the generators, as they do not have to search through the buffer for free slots to fill.

There is a possible alternative approach. Instead of the contiguous array, a linked list could be used. This would eliminate the need for sorting, but, on the other hand, it would slow down the upload of data to the GPU buffers because of the need to convert the linked list structure into a contiguous array.

5 Multithreaded Approach to Computing Particle System

The speed of the simulation could be improved by parallelizing the workload. There are a few cases to consider:

      1. Parallelize the workload across the array of particles, calculating the interactors’ contribution and Euler step as a single work unit.
      2a. Parallelize the workload across interactors, calculating the effect of each interactor at every particle as a single work unit.
      2b. Further parallelize Option 1, above, by splitting interactor contributions into separate work units.
      3. Parallelize the sorting algorithm.
      4. Parallelize the generators’ contribution.

Option 1, above, is the recommended approach. For each particle, the algorithm computes, as a single task, the contribution from each interactor, summing the acceleration vectors and then performing the Euler step (CPU_TP40). Since each particle resides in a different memory region, and interactors usually do not modify their internal data when computing particles, this method provides the best solution. The OpenMP* framework was used to provide multithreading. OpenMP is an API, based primarily on compiler directives, that is used for multithreaded development. It requires very little additional code and can easily be used to parallelize loops, although it has much more to offer. However, it sometimes does not provide the level of control offered by other available libraries.

Option 2a is probably not recommended. Some might find this option appealing because the data that defines the properties of an interactor would be loaded once and then used extensively while computing each particle; this access pattern is well supported by the CPU cache and would significantly speed up the process. However, there are issues with this approach. For example, the acceleration must be summed up, which means access between the threads executing the interactors must be synchronized, and the acceleration vector for every particle must be stored in an additional memory region. Moreover, an additional pass over the entire array is required to calculate the Euler step. These two disadvantages would produce significant synchronization overhead, likely rendering this strategy impractical.

Option 2b is an extension of Option 1. Hypothetically, if there are computationally demanding interactors, it could be beneficial to split their operations. However, the goal is to achieve minimal context switching. If every particle computation and every interactor contribution is parallelized, it will most likely lead to an explosion of threads. Switching between those threads would further decrease performance.

Option 3 suggests improving the sorting algorithm through parallelization. This strategy might improve sorting speed; however, the type of data must be considered. In this particular case, a particle is in an active or inactive state. The order of the particles that are in the same state is not important, as long as they are all placed on the correct side of the sorted array. This allows for the array to be sorted in a single pass. If a parallel sorting algorithm is used instead, it is possible that performance could be improved. However, such improvement is not guaranteed. Possible improvements may be determined through experimentation.

In Option 4, parallelizing the generation of particles has a high potential of improving performance, but only under certain conditions. The first consideration is to determine the workload of all of the generators. If the memory buffer is almost always full, there won’t be any space for the generators to work. Therefore, spawning additional threads to execute the generators will only create unnecessary overhead. Every generator takes as much space as needed, and what is left is assigned to the next generator in the list. This could result in some threads executing generators just to find out that there is nothing for them to do.

5.1 Going further with parallelization

There are a few more things worth mentioning that could affect performance. First is the issue of false sharing. A single particle structure in this project is 44 bytes. Given a cache line size of 64 bytes, a fetch from lower-level memory brings in the particle structure (or part of it) that the current thread is working on, as well as part of the particle structure located right next to it. If two parallel threads each work on particles located next to each other, the modification of the acceleration vector inside one thread will invalidate the entire cache line. Essentially, this causes a cache miss on the other thread, even though the threads were assigned to different particles. This situation can be alleviated by aligning the particle structures to a 64-byte boundary. However, this also wastes memory.

The second and preferred solution is to group multiple particle structures together and direct them to be processed as a single unit. This is what OpenMP does by default in the "parallel for" directive.

OpenMP is a very good framework, but it has disadvantages. OpenMP support is built into a number of compilers; however, not all compilers support OpenMP, and some only support an outdated version. A great alternative to OpenMP is the thread support library introduced with C++11, which should be supported by almost all C++ compilers currently available.

6 Vulkan Compute in a Modeling Particle System

This section addresses the topic of porting a CPU-based particle simulator onto the GPU using Vulkan compute. Vulkan compute is an integral part of the Vulkan API. Although most of the simulation will be moved onto the GPU, the sorting and generation of particles will still be kept on the CPU.

The goal of this section is to present how to set up the compute pipeline, instantiate more advanced shader input constructs, and show how to send frequently changing, but small, portions of data onto the GPU without using memory mapping.

This is a good moment to mention a tool that is extremely valuable when debugging Vulkan applications. The tool, RenderDoc, can be used to capture the current application state for inspection. It can be used to examine buffer contents, shader inputs, textures, and much more. RenderDoc is installed automatically with the Vulkan SDK. Although not discussed in this document, it is highly recommended that readers become familiar with this tool.

6.1 Using GPU memory buffer for computation and visualization

The first step is to repurpose the buffer structure that holds the particle data. Among the buffer types available on a GPU is the shader storage buffer object (SSBO). This type of buffer is similar to the uniform buffer; however, SSBOs can be much larger than uniform buffers and are writable from within the shader.

The internal storage, which was the standard C-style array contained by the Buffer class, is now replaced with an SSBO. Moreover, the same buffer memory region will be reused as a vertex buffer for rendering. The compute shader will modify the SSBO content, and then the renderer will use it as a vertex buffer object (VBO) to render the frame.

Up until now, the Buffer class was independent of Vulkan. Now, however, the entire application is dependent on Vulkan. This means that before the Model class can be instantiated, the Buffer class must be constructed. And before the Buffer class is constructed, the Vulkan virtual device must be created.

To keep things organized, the buffer, the device, the AppInstance, and the rest of the supporting code were moved into a new namespace called “base.” Now, instead of creating the Model first, the application starts by creating the AppInstance, device, and buffer objects (GPU_TP41). The code creating the vertex buffer has been moved out of ParticleElement into the buffer (GPU_TP42). Note the change in the usage field: it now has two flags assigned, VERTEX_BUFFER_BIT and STORAGE_BUFFER_BIT. The rest of the procedure remains the same. The Buffer class now delivers the VkBuffer handle and an interface to access the buffer on the CPU side (GPU_TP43). The constructed buffer is then used in the instantiation of both the Model (GPU_TP44) and the ParticlesElement (GPU_TP45) objects.

6.2 Moving computation from the CPU to compute shader on the GPU

The Model class is where most of the changes will occur. In this version, the Model class will contain another pipeline which will execute the compute shader.

First, one of the biggest changes is in how the interactors are implemented. Although the interactor logic was moved into the compute shader, there is still a need for a mechanism that allows the application logic to dynamically specify the properties of each interactor. Because of the structure of the GLSL language, it is no longer possible to implement interactors using an abstract interface. This requires the interactors to be split by type, with specific data provided for each of them. To do this, we define a structure (GPU_TP46) called Setup that contains arrays for each type of interactor. Each array defines properties for an interactor and allows an arbitrary number of interactors of a given type to be present, up to a specified upper boundary. The count variables specify the actual number of active interactors.

When submitting complex structures to buffer objects, it is critical to adhere to the layout rules that apply for the given type of buffer (Note the alignment statement in the structure field’s declaration (GPU_TP47)).  For uniform or shader storage buffer objects, these layouts are std140 and std430. Their full specification can be found in the Vulkan specification and GLSL language specification.

As in the CPU version, interactors were initialized before constructing the Model object (GPU_TP48). However, this time the initialization is based on populating the Setup structure. Later, the entire structure will be uploaded into the GPU during construction (GPU_TP49), making it available for the compute shader to access.

In the Vulkan API, the compute stage and the graphics stage cannot be bound into a single pipeline. As a result, two separate pipeline objects are needed. Compared to the graphics pipeline, the compute pipeline (GPU_TP50) is less complicated. The primary concern is to define the proper pipeline layout. Beyond that, the process of creating the compute pipeline is nearly the same as creating the graphics pipeline.

Once the pipeline is established and the necessary data is ready to execute, the procedure starts recording commands into the command buffer for the compute pipeline (GPU_TP51). However, there are two concepts that are worth examining.

The first concept is “push constants.” Push constants are small regions of memory residing internally in the GPU designated for fast access. The advantage of using push constants is that data delivery can be scheduled right within the command buffer (GPU_TP52) without the need for memory mapping. The memory layout of push constants needs to be defined during pipeline construction, but the process is much simpler compared to the descriptor set (GPU_TP53).

The second concept is related to the division of the workload. The number of elements is split into work groups that are scheduled to execute on the compute device. The size of the work group is specified within the compute shader (GPU_TP54). Then the API is given the number of work groups scheduled (GPU_TP55) for execution of the compute job. The sizes and number of work groups are both limited by the hardware. Those limits can be checked by looking at the maxComputeWorkGroupCount and maxComputeWorkGroupSize fields, which are part of the VkPhysicalDeviceLimits structure (itself part of the VkPhysicalDeviceProperties structure obtained by calling vkGetPhysicalDeviceProperties). It is imperative to schedule a sufficient number of work groups, with an appropriate size, to cover the entire set of particles. The optimal choice of sizes is dictated by the specific hardware architecture and influenced by the compute shader code (e.g., the internal memory requirements).
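For example, the number of work groups to dispatch is simply the particle count divided by the local work-group size, rounded up (the numbers below are purely illustrative):

particle_count = 100_000
local_size = 256                                        # must not exceed maxComputeWorkGroupSize
group_count = (particle_count + local_size - 1) // local_size
print(group_count)                                      # 391 work groups cover all 100,000 particles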

6.3 Structure of the compute shader-oriented simulation

With the transition to the compute shader, the interactors take the form of two components. First are the interactor parameters, which are delivered through the uniform buffer (GPU_TP56). Second is the set of functions, where each function corresponds to a specific interactor and calculates the acceleration vector for the processed particle (GPU_TP57).

In each iteration, the compute shader obtains a particle from the SSBO using the global invocation index (GPU_TP58). It then iterates through each interactor descriptor and invokes its function as many times as the count field indicates (GPU_TP59). Once all interactors have been executed, the acceleration vectors are summed and the compute shader invokes the Euler step function (GPU_TP60). The particle is then saved back to the SSBO.

After the computing step is done, the renderer is invoked. The renderer uses the same buffer, but accesses memory through the vertex buffer interface, instead of SSBO, to render the scene.

7 Going Further

Building on this work, there are a few options to consider that may improve performance.

7.1 Eliminate the CPU role in computations

Currently, the GPU compute shader only calculates the interactors' influence on the particles. All sorting and generating of the particles was left to the CPU (see section 5). In the case of Intel's GPU, this approach is not penalized at all due to the uniform access to memory. However, for other platforms, especially a discrete GPU, allocating buffers from the memory heap with the most efficient access time is a must. This can mean that memory in such a heap might not be accessible from the CPU through memory mapping. In that case, the only way to access the data would be to copy buffers back and forth between heaps, or to access it through the compute shader.

An approach in which buffers are copied back and forth will almost certainly create a performance bottleneck. Therefore, the compute shader option seems more appealing. Initially, the sorting was left in its sequential form because of the unclear gains of parallel algorithms for this specific scenario. However, a GPU offers many more computing units, which could make parallel sorting algorithms, such as an “odd-even sort,” deliver better results.

There is still the issue of synchronization. An instance of the compute shader cannot start the next sorting pass before the previous one has completed. However, if only a single local work group is used, then the GLSL language delivers a set of Shader Invocation Control Functions (see the GLSL specification) that provide the means to set up proper memory barriers.

Similarly, in the case of the generators, the biggest problem was the random number generator not being thread-safe and its dependence on the generator’s internal state. The randomize function can be ported into GLSL, and the coherence maintained using the GLSL atomic memory functions.

7.2 Parallelizing the performance path by double buffering

For this project, the approach was to first compute the scene and then render it sequentially. The complexity of the rendering in this case was fairly basic, and, therefore, its impact on performance was negligible. However, this will not always be the case. With more involved renderers it is necessary to attempt to perform computation and rendering in parallel. The Vulkan API was designed with this goal in mind and provides the means to synchronize program flow, but does not enforce it. For example, consider the point where the program loop waits for the rendering function to finish (CPU_TP61). The part of the code responsible for presentation obviously has to wait, but the model does not. In fact, the model could start computing the next step as soon as the vertex data is delivered, and it only needs to wait before entering the setVertexBufferData function. From here the application can be split into two threads using a producer-consumer setup.

In the GPU version, instead of using a single SSBO, double buffering can be introduced. One buffer is used both as a render-input and compute-input. At the same time, the second buffer is populated by the compute shader with the Euler function output.

It is important to remember that there is an upper threshold to what can be achieved, which is limited by the capabilities of the hardware. The compute and render tasks executed on the same GPU will not always provide performance gains because of the finite number of compute units available.

8 Conclusion

Vulkan offers the ability to bypass the CPU bottleneck that exists in previous generations of graphics APIs. However, Vulkan has a steep learning curve and is more demanding when it comes to application design. Nonetheless, despite being very explicit, it is a very consistent and straightforward API.

It is worth remembering these few points, which will help with application development and allow you to achieve better application performance:

  1. Start your development with the validation layers enabled and become familiar with tools like RenderDoc.
  2. Carefully design your application as you will have to make many decisions about various details.
  3. Pay attention to where your buffer memory is allocated as it might have significant performance implications.
  4. Understand the hardware you are working with. Not all the techniques will work equally well on every type of GPU. Although all Vulkan devices expose the same API, their properties and limitations might be fundamentally different from each other.

Object Detection on Drone Videos using Caffe* Framework


Abstract

The purpose of this article is to showcase the implementation of object detection1 on drone videos using Intel® Optimization for Caffe*2 on Intel® processors. The functional problem tackled is the identification of pedestrians, trees, and vehicles such as cars, trucks, buses, and boats from real-world video footage captured by commercially available drones. In this work, we conducted multiple experiments to derive the optimal batch size, iteration count, and learning rate for the model to converge. The deep learning model developed is able to detect the trained objects in real-world scenarios with high confidence, and the number of detected objects closely matches the number of desired objects.

Introduction

Modern drones have become very powerful ever since they have been equipped with potent cameras. They have been successful in areas such as aerial photography and surveillance. The integration of smart computer vision with drones has become the need of the moment.

In today’s scenario, object detection and segmentation are classic problems in computer vision. Drones add further challenges due to top-down view angles and the difficulty of integrating with a deep learning system for compute-intensive operations.

In this project, we implemented the detection component using the Single Shot MultiBox Detector (SSD) topology1. We implemented our solution on Intel® Xeon Phi™ processors and evaluated frame rate and accuracy on several videos captured by the drone.

Experiment Setup

The following hardware and software environments were used to perform the experiments.

Hardware

Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 256
On-line CPU(s) list: 0-255
Thread(s) per core: 4
Core(s) per socket: 64
Socket(s): 1
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 87
Model name: Intel® Xeon Phi™ Processor 7210 @ 1.30 GHz
Stepping: 1
CPU MHz: 1302.386
BogoMIPS: 2600.06
L1d cache: 32K
L1i cache: 32K
L2 cache: 1024K
NUMA node0 CPU(s): 0-255

Software

Caffe* Setup

Caffe: 1.0 (optimized on Intel® architecture)
Python*: 2.7
GCC version: 4.8.5

Model

The experiments detailed in the subsequent sections employ the transfer learning technique to speed up the entire process. For this purpose, we used a pre-trained Visual Geometry Group (VGG) 16 model with SSD topology.

Solution Design

The main component of our system comprised a training component and a detection algorithm running SSD. Although SSD is compute-intensive, it has been well optimized for Intel® architecture. We adopted Caffe* optimized on Intel architecture as our deep learning framework, and the hardware is an Intel Xeon Phi processor.

In this work, the entire solution is divided into three stages:

  1. Dataset preparation
  2. Network topology and model training
  3. Inferencing

Dataset Preparation

The dataset used for training the model was collected through unmanned aerial vehicles (UAVs). The images collected vary in resolution, aspect, and orientation, with respect to the object of interest.

The high-level objective of preprocessing was to convert the raw, high-resolution drone images into an annotated file format, which was then used for training the deep learning model.

The various processes involved in the preprocessing pipeline are as follows:

  1. Data creation
  2. Video to image frame conversion
  3. Image annotation
  4. Conversion to framework-native data format

The individual steps are detailed below.

Step 1. Dataset creation

The dataset chosen for these experiments consists of 30 real-time drone videos in the following 7 classes: boat, bus, car, person, train, tree, and truck.

Step 2. Video to image frame conversion

To train the model, all the video files were converted to image frames. The entire conversion code was built using OpenCV 33.

The final dataset prepared for training consists of 1,312 color images.
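The frame-extraction step can be reproduced with a few lines of OpenCV. The sketch below is illustrative rather than the project's actual conversion code; the paths and sampling interval are placeholders:

import cv2

def video_to_frames(video_path, out_dir, every_nth=10):
    """Writes every n-th frame of the video to out_dir and returns the number of frames saved."""
    cap = cv2.VideoCapture(video_path)
    index, saved = 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % every_nth == 0:
            cv2.imwrite("%s/frame_%06d.jpg" % (out_dir, saved), frame)
            saved += 1
        index += 1
    cap.release()
    return saved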

Step 3. Image annotation

The image annotation task involved manually labeling the objects within the training image set. In our experiment, the Python* tool LabelImg*4 was used for annotation. The tool provided the object coordinates in XML format as output for further processing.
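LabelImg writes Pascal VOC-style XML, so the annotations can be read back with the standard library. A minimal sketch (the tag names follow the VOC convention; the function name is made up):

import xml.etree.ElementTree as ET

def read_annotation(xml_path):
    """Returns a list of (class_name, [xmin, ymin, xmax, ymax]) tuples from a LabelImg XML file."""
    root = ET.parse(xml_path).getroot()
    boxes = []
    for obj in root.findall("object"):
        name = obj.find("name").text
        bndbox = obj.find("bndbox")
        coords = [int(float(bndbox.find(tag).text)) for tag in ("xmin", "ymin", "xmax", "ymax")]
        boxes.append((name, coords))
    return boxes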

The following figure shows the training split for each class:

data set distribution
Figure 1: Training data set distribution

Step 4. Conversion to framework-native data format

To enable fast and flexible access to data during training of the network, we used framework-specific file formats.

Data Conversion

Data enters the Caffe model through data layers. Data can be ingested from efficient databases (LevelDB or LMDB), directly from memory, or from files on disk in HDF5 or common image formats. This experiment used the Lightning Memory-Mapped Database (LMDB) as the data format for ingesting the data into the network.

LMDB is a software library that provides a high-performance embedded transactional database in the form of a key-value store. It stores key/data pairs as byte arrays, which gives a dramatic write performance increase over other similar stores. LMDB scales well and maintains data integrity by design.
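As a generic illustration of the key-value interface (this is not the exact conversion script used in this work), writing pre-serialized records into an LMDB store with the Python lmdb package looks roughly like this:

import lmdb

def write_records(db_path, records, map_size=1 << 34):
    """records is an iterable of (key, value) byte-string pairs, e.g. serialized Caffe Datum protos."""
    env = lmdb.open(db_path, map_size=map_size)   # map_size is the maximum database size in bytes
    with env.begin(write=True) as txn:
        for key, value in records:
            txn.put(key, value)
    env.close()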

Network Topology and Model Training

SSD

The SSD approach discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature map location. At prediction time, the network generates scores for the presence of each object category in each default box and produces adjustments to the box to better match the object shape. Additionally, the network combines predictions from multiple feature maps with different resolutions to naturally handle objects of various sizes. SSD is simple relative to methods that require object proposals because it completely eliminates proposal generation and subsequent pixel or feature resampling stages, and encapsulates all computation in a single network. This makes SSD easy to train and straightforward to integrate into systems that require a detection component.

Model

In our experiment, we used the VGG 16 as the base network. An auxiliary structure was appended to the network to produce detections with the following key features:

  • Multiscale feature maps for detection: We added convolutional feature layers to the end of the truncated base network. These layers decreased in size progressively and allowed predictions of detections at multiple scales.
  • Convolutional predictors: Each added feature layer can produce a fixed set of detection predictions using a set of convolutional filters. For a feature layer of size m×n with p channels, the basic element for predicting parameters of a potential detection is a small 3×3×p kernel that produces either a score for a category, or a shape offset relative to the default box coordinates. At each of the m×n locations where the kernel is applied, it produces an output value. The bounding box offset output values are measured relative to a default box position relative to each feature map location.
  • Default boxes and aspect ratios: We associated a set of default bounding boxes with each feature map cell for multiple feature maps at the top of the network. The default boxes tile the feature map in a convolutional manner, so that the position of each box relative to its corresponding cell is fixed. At each feature map cell, we predict the offsets relative to the default box shapes in the cell, as well as the per-class scores that indicate the presence of a class instance in each of those boxes. Specifically, for each box out of k at a given location, we computed c class scores and the four offsets relative to the original default box shape. This resulted in a total of (c+4)k filters that were applied around each location in the feature map, yielding (c+4)kmn outputs for an m×n feature map.
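As a rough, illustrative calculation (the values of k and the feature map size below are assumptions, not taken from this experiment): with k = 6 default boxes per location and c = 8 class scores (the 7 trained classes plus a background class), each location requires (8 + 4) × 6 = 72 filters, and a 38 × 38 feature map therefore yields 72 × 38 × 38 = 103,968 output values.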

To achieve a faster convergence, we trained the network using a Microsoft COCO* pre-trained Caffe model. The model is available to download at the Caffe Model Zoo5.

Transfer Learning

In our experiments, we applied transfer learning on a pre-trained VGG 16 model (trained on Microsoft COCO dataset). The transfer learning approach initializes the last fully connected layer with random weights (or zeroes), and when the system is trained for the new data, these weights are readjusted. The base concept of transfer learning is that the initial layers in the topology will have learned some of the base features such as edges and curves, and this learning can be reused for the new problem with the new data. However, the final, fully connected layers would be fine-tuned for the very specific labels that they are trained for. Hence, this needs to be retrained on the new data.

Inferencing

The video captured by the drone was broken down into frames using OpenCV at a configurable frames-per-second rate. As the frames were generated, they were passed to the detection model, which localized the different objects in the form of four coordinates (xmin, xmax, ymin, and ymax) and provided a classification score for the different possible objects. By applying the NMS (non-maximum suppression) threshold and setting confidence thresholds, the number of predictions can be reduced so that only the most likely predictions are kept. OpenCV was used to draw a rectangular box with various colors around the detected objects (see Figure 2).
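A sketch of the drawing step is shown below. The detection format, a list of (label, confidence, xmin, ymin, xmax, ymax) tuples, and the threshold value are assumptions for illustration:

import cv2

def draw_detections(frame, detections, conf_threshold=0.5):
    """Draws a labeled box for every detection whose confidence passes the threshold."""
    for label, confidence, xmin, ymin, xmax, ymax in detections:
        if confidence < conf_threshold:
            continue  # drop low-confidence predictions remaining after NMS
        cv2.rectangle(frame, (xmin, ymin), (xmax, ymax), (0, 255, 0), 2)
        cv2.putText(frame, "%s %.2f" % (label, confidence), (xmin, ymin - 5),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
    return frame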

detection flow diagram
Figure 2: Detection flow diagram

Results

The following detections were obtained when the inference use case was run on the sample images below.

cars in traffic
Figure 3: Cars in traffic as input for an inference6
cars in traffic detected with drone
Figure 4: Green bounding boxes display the objects detected with label and confidence
parked cars
Figure 5: Parked cars as input for an inference6
parked cars detected with drone
Figure 6: Green bounding boxes display the objects detected with label and confidence
moving cars and persons
Figure 7: Moving cars as input for an inference 6
cars and persons detected with drone
Figure 8: Green bounding boxes display the objects detected with label and confidence

Conclusion and Future Work

The functional use case attempted in this paper involved the detection of vehicles and pedestrians from a drone or aerial vehicle. The training data was more skewed toward cars as opposed to other objects of interest, since it was hand crafted from videos. The use case could be further expanded to video surveillance and tracking.

About the Authors

Krishnaprasad T and Ratheesh Achari are part of the Intel team working on the artificial intelligence (AI) evangelization.

References

The references and links used to create this paper are as follows:

  1. Scalable High Quality Object Detection (PDF)
  2. Caffe:
  3. https://github.com/opencv/opencv
  4. https://github.com/tzutalin/labelImg
  5. Caffe Model Zoo:
  6. Test Image Sources:

Object Detection on Drone Videos using Neon™ Framework


Abstract

The purpose of this article is to showcase the implementation of object detection1 on drone videos using the Intel®-optimized neon™ framework2 on Intel® processors. The functional problem tackled in this work is the identification of pedestrians, trees, and vehicles such as cars, trucks, buses, and boats from real-world video footage captured by commercially available drones. In this work, we conducted multiple experiments to derive the optimal batch size, iteration count, and learning rate for the model to converge. The deep learning model developed is able to detect the trained objects in real-world scenarios with high confidence, and the number of detected objects closely matches the number of desired objects.

Introduction

Modern drones have become very powerful now that they are equipped with potent cameras. They have been successful in areas such as aerial photography and surveillance. The integration of smart computer vision with drones has become the need of the moment.

In today’s scenario, object detection and segmentation are classic problems in computer vision. Drones add further challenges due to top-down view angles and the difficulty of integrating with a deep learning system for compute-intensive operations.

In this project, we implemented the detection component using the Single Shot MultiBox Detector (SSD) topology1. We implemented our solution on Intel® Xeon Phi™ processors and evaluated frame rate and accuracy on several videos captured by the drone.

Experiment Setup

The following hardware and software environments were used to perform the experiments.

Hardware

Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 256
On-line CPU(s) list: 0-255
Thread(s) per core: 4
Core(s) per socket: 64
Socket(s): 1
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 87
Model name: Intel® Xeon Phi™ Processor 7210 @ 1.30 GHz
Stepping: 1
CPU MHz: 1302.386
BogoMIPS: 2600.06
L1d cache: 32K
L1i cache: 32K
L2 cache: 1024K
NUMA node0 CPU(s): 0-255

Software

neon™ Framework Setup

neon™ version: 2.2
Intel® Nervana™ platform aeon (data loader): 1.0
Python*: 2.7
GCC version: 6.3.1

Model

The experiments detailed in the subsequent sections employ the transfer learning technique to speed up the entire process. For this purpose, we used a pre-trained Visual Geometry Group (VGG) 16 model with SSD topology.

Solution Design

The main component of our system comprises a training component and a detection algorithm running SSD. Although SSD is compute-intensive, it has been well optimized for Intel® architecture. We adopted the neon™ framework as our deep learning framework, and the hardware is an Intel® Xeon Phi™ processor.

In this work, the entire solution is divided into three stages:

  1. Dataset preparation
  2. Model training
  3. Inferencing

Dataset Preparation

The dataset used for training the model is collected through unmanned aerial vehicles (UAVs). The images collected vary in resolution, aspect, and orientation, with respect to the object of interest.

The high-level objective of preprocessing is to convert the raw, high-resolution drone images into an annotated file format, which is then used for training the deep learning model.

The various processes involved in the preprocessing pipeline are as follows:

  1. Data creation
  2. Video to image frame conversion
  3. Image annotation
  4. Conversion to framework-native data format

The individual steps are detailed as follows:

Step 1. Dataset creation

The dataset chosen for these experiments consisted of 30 real-time drone videos in the following 7 classes: boat, bus, car, person, train, tree, and truck.

Step 2. Video to image frame conversion

To train the model, all the video files were converted to image frames. The entire conversion code was built using OpenCV 33.

The final dataset prepared for training consisted of 1,312 color images.

Step 3. Image annotation

The image annotation task involved manually labeling the objects within the training image set. In our experiment, we relied on the Python* tool LabelImg*4 for annotation. The tool gives the object coordinates in XML format as output for further processing.

The following figure shows the training split for each class:


Figure 1. Training data set distribution.

Step 4. Conversion to framework-native data format

To enable fast and flexible access to data during training of the network, we used framework-specific file formats.

Data Conversion for Neon™ framework:

We used the aeon data loader5, which is Nervana’s new and evolving project. The aeon data loader is designed to deal with large datasets from different modalities, including image, video, and audio, that may be too large to load directly into memory. We used a macro batching approach, where the data is loaded in chunks (macro batches) that are then split further into mini batches to feed the model. The basic workflow is depicted in Figure 2.


Figure 2. The aeon data loader pipeline.

First, users perform ingestion, which means generating a manifest file in comma-separated values (CSV) format. This file indicates to the data loader where the input and target data reside. Given a configuration file (JSON), the aeon data loader processes the next steps (green box). During an operation, the first time a dataset is encountered, the data loader will cache the data into CPIO format, allowing for quick subsequent reads. During provision, the data loader reads the data from disk, performs any needed transformations on-the-fly, transfers the data to device memory, and provisions the data to the model as an input-target pair. We use a multithreaded library to hide the latency of these disk reads and operations in the device compute.
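As a rough sketch of the ingestion step (the exact manifest layout, delimiter, and header are defined by the aeon configuration and documentation; the column order here is an assumption):

import csv

def write_manifest(manifest_path, pairs):
    """pairs is an iterable of (input_image_path, target_annotation_path) tuples."""
    with open(manifest_path, "w") as manifest:
        writer = csv.writer(manifest)
        for image_path, annotation_path in pairs:
            writer.writerow([image_path, annotation_path])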

Network Topology and Model Training

SSD

The SSD approach discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature map location. At prediction time, the network generates scores for the presence of each object category in each default box and produces adjustments to the box to better match the object shape. Additionally, the network combines predictions from multiple feature maps with different resolutions to naturally handle objects of various sizes. SSD is simple relative to methods that require object proposals because it completely eliminates proposal generation and subsequent pixel or feature resampling stages, and encapsulates all computation in a single network. This makes SSD easy to train and straightforward to integrate into systems that require a detection component.

Model

In our experiment, we used the VGG 16 as the base network. An auxiliary structure is then appended to the network to produce detections with the following key features:

  • Multiscale feature maps for detection: We add convolutional feature layers to the end of the truncated base network. These layers decrease in size progressively and allow predictions of detections at multiple scales.
  • Convolutional predictors: Each added feature layer can produce a fixed set of detection predictions using a set of convolutional filters. For a feature layer of size m×n with p channels, the basic element for predicting parameters of a potential detection is a small 3×3×p kernel that produces either a score for a category, or a shape offset relative to the default box coordinates. At each of the m×n locations where the kernel is applied, it produces an output value. The bounding box offset output values are measured relative to a default box position relative to each feature map location.
  • Default boxes and aspect ratios: We associate a set of default bounding boxes with each feature map cell for multiple feature maps at the top of the network. The default boxes tile the feature map in a convolutional manner, so that the position of each box relative to its corresponding cell is fixed. At each feature map cell, we predict the offsets relative to the default box shapes in the cell, as well as the per-class scores that indicate the presence of a class instance in each of those boxes. Specifically, for each box out of k at a given location, we compute c class scores and the four offsets relative to the original default box shape. This results in a total of (c+4) k filters that are applied around each location in the feature map, yielding (c+4) kmn outputs for an m×n feature map.

To achieve a faster convergence, we trained the network using an ImageNet pre-trained model. The model is available to download at the Neon™ Model Zoo6.

Transfer Learning

In our experiments, we applied transfer learning on a pre-trained VGG 16 model (trained on ImageNet). The transfer learning approach initializes the last fully connected layer with random weights (or zeroes), and when the system is trained for the new data, these weights are readjusted. The base concept of transfer learning is that the initial layers in the topology will have learned some of the base features such as edges and curves, and this learning can be reused for the new problem with the new data. However, the final, fully connected layers would be fine-tuned for the very specific labels that they are trained for. Hence, this needs to be retrained on the new data.

Inferencing

The video captured by the drone is broken down into frames using OpenCV at a configurable frames-per-second rate. As the frames are generated, they are passed to the detection model, which localizes the different objects in the form of four coordinates (xmin, xmax, ymin, and ymax) and provides a classification score for the different possible objects. By applying the NMS (non-maximum suppression) threshold and setting confidence thresholds, the number of predictions can be reduced so that only the most likely predictions are kept. OpenCV is used to draw a rectangular box with various colors around the detected objects (see Figure 3).


Figure 3. Detection flow diagram.

Results

The different iterations of the experiments involve varying batch sizes and iteration counts.

Batch Size | KMP_AFFINITY | OMP_NUM_THREADS | Training Time
64 | granularity=thread,verbose,balanced | 16 | 1.10X
64 | none,verbose,compact | 16 | 4.29X
64 | granularity=thread,verbose,balanced | 24 | 1.30X
64 | granularity=fine,verbose,balanced | 24 | 1.00X
64 | none,verbose,compact | 32 | 2.50X

Note: The numbers in the table are indicative. Results may vary depending on hyper parameter tuning.

The following detection was obtained when the inference use case was run on the sample image below.


Figure 4. Red bounding boxes display the objects detected.

Conclusion and Future Work

The functional use case attempted in this paper involved the detection of vehicles and pedestrians from a drone or aerial vehicle. The training data was more skewed toward cars as opposed to other objects of interest, since it was hand crafted from videos. The use case could be further expanded to video surveillance and tracking.

About the Authors

Krishnaprasad T and Ratheesh Achari are part of the Intel team working on the artificial intelligence (AI) evangelization.

References

The references and links used to create this paper are as follows:

  1. Object Detection, 2014: https://arxiv.org/pdf/1412.1441.pdf
  2. Neon™ framework: https://neon.nervanasys.com/index.html/
  3. OpenCV: https://github.com/opencv/opencv
  4. LabelImg: https://github.com/tzutalin/labelImg
  5. Nervana aeon: http://aeon.nervanasys.com/index.html/
  6. Neon™ Model Zoo: https://github.com/NervanaSystems/ModelZoo/tree/master/ImageClassification/ILSVRC2012/VGG

Visualising CNN Models Using PyTorch


And no, you don’t need a GPU to test your model.

Before any of the deep learning systems came along, researchers took a painstaking amount of time understanding the data. Finding visual cues before handing it off to an algorithm. But right now, we almost always feed our data into a transfer learning algorithm and hope it works even without tuning the hyper-parameters. And very often, this works. The current Convolutional Neural Network (CNN) models are very powerful and generalize well to new datasets. So training is quick and everyone is happy until running it on your test set where it bombs. You try to tune hyper-parameters, try a different pre-trained model but nothing works. This might be the right time to check your data and see if the data itself is right.

But then again, who has the time to go through all the data and make sure that everything is right? Or the compute to try out multiple hyper-parameters and fine-tune the model? So we can choose the easier alternative of visualizing our model and checking which parts of the image are causing the activations. This will give a very good understanding of the defining features of the image.

There is an urban legend that back in the 90’s, the US government commissioned a project to detect tanks in pictures. The researchers built a neural network and used it to classify the images. Once the product was actually put to the test, it did not perform at all. On further inspection, they noticed that the model had learnt the weather patterns instead of the tanks: the training images with tanks were taken on a cloudy day, while the images with no tanks were taken on a sunny day. This is a prime example of why we need to understand what a neural net has learned.

Back in 2012, when AlexNet took the world by storm by winning the ImageNet challenge, its authors gave a brief description of what the convolutional kernels had learned.

In this, you can observe that the initial layers learn simple features like lines and edges, while further down, more intricate features are learnt. Check out the homepage of CS231n, where a simple CNN runs live in your browser and its activations are shown.

In 2014, Karen Simonyan and team achieved top results in the ImageNet challenge. One of the key aspects that helped them was a better understanding of what the CNN was learning: they plotted saliency maps to show the activations and so understood the network better.

Over time, the visualisations have gotten better. One of the most useful and easiest to interpret is Grad-CAM: Gradient-weighted Class Activation Mapping. This technique uses the class-specific gradient information flowing into the last convolutional layer to produce a coarse localisation map of the important regions in the image.

One of the biggest advantages of using visualisations is that we can understand which features are causing the activations. Recently, I was working on the ISIC challenge for skin cancer detection. I was achieving a probability of ~70%; when I inspected a few images run through Grad-CAM, I realised that the network was concentrating on the wrong features. It was giving a greater weightage to the skin color instead of the lesion.

Let’s dive into the code now

It is pretty straightforward. First, we load our trained model and define the target class. We then run a forward pass on the model, zero out the gradients, and back-propagate with the target class as the objective. The gradients hooked at the target layer are used to weight its activations, which are mapped onto the original image. We plot a heat map based on these activations on top of the original image. This helps in identifying the exact features that the model has learnt.

Required dependencies:

  • OpenCV*
  • PyTorch*
  • Torchvision*
  • NumPy*

We load the model into memory and then the image. I trained my model for the ISIC 2017 challenge using a ResNet50, which is what I'm loading here. If you have a different pre-trained model, or a model that you have defined yourself, just load that checkpoint instead. Notice that we are passing an additional map_location argument to the torch.load function. This ensures that even a model trained on a graphics processing unit (GPU) can be used for inference on a central processing unit (CPU).

import torch
import torch.nn as nn
from torchvision import models


def load_checkpoint():
    """
    Loads the checkpoint of the trained model and returns the model.
    """
    # opt holds the parsed command-line arguments (see the usage at the end)
    use_gpu = torch.cuda.is_available()
    if use_gpu:
        checkpoint = torch.load(opt.model)
    else:
        # Map weights trained on a GPU onto the CPU
        checkpoint = torch.load(
            opt.model, map_location=lambda storage, loc: storage)

    # Rebuild the ResNet50 architecture with a two-class output layer
    pretrained_model = models.resnet50(pretrained=True)
    num_ftrs = pretrained_model.fc.in_features
    pretrained_model.fc = nn.Linear(num_ftrs, 2)

    if use_gpu:
        pretrained_model = pretrained_model.cuda()

    pretrained_model.load_state_dict(checkpoint)
    pretrained_model.eval()

    return pretrained_model

Now we need to start processing the image. The same transforms you used when training the model need to be applied here. If there was a mean subtraction, it needs to be performed as well. The result is then loaded into a Variable for the forward pass.

import cv2
import numpy as np
import torch
from torch.autograd import Variable


def preprocess_image(cv2im, resize_im=True):
    """
    Resizes the image if requested, converts it to a torch tensor and
    returns it as a torch Variable.
    """
    if resize_im:
        cv2im = cv2.resize(cv2im, (224, 224))
    im_as_arr = np.float32(cv2im)
    # OpenCV loads images as BGR; reverse the channels to get RGB
    im_as_arr = np.ascontiguousarray(im_as_arr[..., ::-1])
    # HWC -> CHW
    im_as_arr = im_as_arr.transpose(2, 0, 1)
    im_as_ten = torch.from_numpy(im_as_arr).float()
    # Add a batch dimension at the front. Tensor shape = 1,3,224,224
    im_as_ten.unsqueeze_(0)
    # Convert to a PyTorch Variable that tracks gradients
    im_as_var = Variable(im_as_ten, requires_grad=True)
    return im_as_var
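
If your training pipeline also normalised the input (the mean subtraction mentioned above), the same step has to happen here before converting to a tensor. A minimal sketch, assuming the common ImageNet channel statistics were used during training; these values are an assumption, so substitute your own dataset's statistics if they differ:

    # Hypothetical addition inside preprocess_image, after the HWC -> CHW
    # transpose and before torch.from_numpy. The ImageNet mean/std values
    # below are an assumption; use your own dataset's statistics instead.
    mean = np.array([0.485, 0.456, 0.406], dtype=np.float32)
    std = np.array([0.229, 0.224, 0.225], dtype=np.float32)
    im_as_arr /= 255.0  # scale pixels to [0, 1]
    im_as_arr = (im_as_arr - mean[:, None, None]) / std[:, None, None]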

Then we start the forward pass on the image and save only the target layer activations. Here the target layer needs to be the layer that we are going to visualize.

    def forward_pass_on_convolutions(self, x):
        """
        Does a forward pass on the convolutions and hooks the gradient
        at the given target layer.
        """
        conv_output = None
        for module_name, module in self.model._modules.items():
            if module_name == 'fc':
                # Stop before the fully connected layer
                return conv_output, x
            x = module(x)  # Forward
            if module_name == self.target_layer:
                # Register a hook so the gradient at this layer is saved
                x.register_hook(self.save_gradient)
                conv_output = x  # Save the convolution output of that layer
        return conv_output, x

    def forward_pass(self, x):
        """
        Does a full forward pass on the model.
        """
        # Forward pass on the convolutions
        conv_output, x = self.forward_pass_on_convolutions(x)
        x = x.view(x.size(0), -1)  # Flatten
        # Forward pass on the classifier
        x = self.model.fc(x)
        return conv_output, x

Now we need to tie the functions defined above together. Below, we perform the forward pass and then a backward pass with the gradients of the target class. The code is well commented, so you can follow it by reading through.

    def generate_cam(self, input_image, target_index=None):
        """
        Runs the full forward and backward pass.
        conv_output is the output of the convolutions at the specified layer;
        model_output is the final output of the model.
        """
        conv_output, model_output = self.extractor.forward_pass(input_image)
        if target_index is None:
            target_index = np.argmax(model_output.data.numpy())
        # Target for backprop: a one-hot vector for the target class
        one_hot_output = torch.FloatTensor(1, model_output.size()[-1]).zero_()
        one_hot_output[0][target_index] = 1
        # Zero grads
        self.model.fc.zero_grad()
        # Backward pass with the specified target
        model_output.backward(gradient=one_hot_output, retain_graph=True)
        # Get hooked gradients
        guided_gradients = self.extractor.gradients.data.numpy()[0]
        # Get convolution outputs
        target = conv_output.data.numpy()[0]
        # Get weights from gradients: take the average of each gradient map
        weights = np.mean(guided_gradients, axis=(1, 2))
        # Initialise the cam array
        cam = np.ones(target.shape[1:], dtype=np.float32)
        # Multiply each weight with its conv output and then sum
        for i, w in enumerate(weights):
            cam += w * target[i, :, :]
        cam = cv2.resize(cam, (224, 224))
        cam = np.maximum(cam, 0)  # ReLU
        cam = (cam - np.min(cam)) / (np.max(cam) - np.min(cam))  # Normalise between 0-1
        cam = np.uint8(cam * 255)  # Scale to 0-255 to visualise
        return cam

Now we need to save the CAM activations on the original image as a heat map to visualise the areas of concentration. We save the result in three different formats: a grayscale activation map, a heat map, and the heat map superimposed on top of the original image.

import os


def save_class_activation_on_image(org_img, activation_map, file_name):
    """
    Saves the activation map as a grayscale image, as a heatmap, and as a
    heatmap superimposed on the original image.
    """
    if not os.path.exists('./results'):
        os.makedirs('./results')
    # Grayscale activation map
    path_to_file = os.path.join('./results', file_name + '_Cam_Grayscale.jpg')
    cv2.imwrite(path_to_file, activation_map)
    # Heatmap of the activation map
    activation_heatmap = cv2.applyColorMap(activation_map, cv2.COLORMAP_HSV)
    path_to_file = os.path.join('./results', file_name + '_Cam_Heatmap.jpg')
    cv2.imwrite(path_to_file, activation_heatmap)
    # Heatmap superimposed on the original picture
    org_img = cv2.resize(org_img, (224, 224))
    img_with_heatmap = np.float32(activation_heatmap) + np.float32(org_img)
    img_with_heatmap = img_with_heatmap / np.max(img_with_heatmap)
    path_to_file = os.path.join('./results', file_name + '_Cam_On_Image.jpg')
    cv2.imwrite(path_to_file, np.uint8(255 * img_with_heatmap))

Here is the entire gist of the script. To run the code you need to provide the input arguments.

python visualisation.py --img <path to the image> --target <target class> --model <path to the trained model> --export <name of the file to export>
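
For reference, here is a minimal sketch of how those arguments could be parsed into the opt object used above and wired into the functions from this article. The GradCam wrapper class and the 'layer4' target layer are assumptions based on the structure shown earlier, not necessarily the exact original script.

import argparse

import cv2


def parse_args():
    parser = argparse.ArgumentParser(description='Grad-CAM visualisation')
    parser.add_argument('--img', required=True, help='path to the input image')
    parser.add_argument('--target', type=int, default=None, help='target class index')
    parser.add_argument('--model', required=True, help='path to the trained checkpoint')
    parser.add_argument('--export', required=True, help='name of the exported result files')
    return parser.parse_args()


if __name__ == '__main__':
    opt = parse_args()          # load_checkpoint() reads opt.model
    model = load_checkpoint()
    img = cv2.imread(opt.img)
    input_var = preprocess_image(img)
    # GradCam is assumed to be the class that holds generate_cam and the
    # extractor; 'layer4' is the last convolutional block of a ResNet50.
    grad_cam = GradCam(model, target_layer='layer4')
    cam = grad_cam.generate_cam(input_var, opt.target)
    save_class_activation_on_image(img, cam, opt.export)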

As I have said earlier, this visualization helped me understand my skin cancer detection model. I will now show you the results from that model after I tuned it.

From the above images you can notice that in the non-cancerous images, the activations are on the left. The model was activating for that particular skin color. This gave me the insight to normalise the entire dataset by mean and standard deviation.

The entire code for this has been modified from the amazing repository by Utku Ozbulak. I hope this has been helpful. If you have any feedback or questions, I would love to answer them.

Get Ready: The Four Ps of Marketing for Indie Game Developers


Succeeding in today’s highly competitive games market takes more than hard work and a brilliant game. You need a marketing strategy as carefully crafted as any game design, and a plan for differentiating your game from the thousands of others on the market.

That’s especially true for independent game developers who don’t have the active fan base enjoyed by well-known brands and franchises, or the financial resources to go toe-to-toe with established gaming studios. The good news is that plenty of channels exist (that won’t cost more than time and effort) for getting the word out about what you’re doing.

This guide explores a classic marketing framework called "the Four Ps". Use it to evaluate your game’s commercial potential, take stock of the competitive landscape, set strategic goals, and create a plan for achieving commercial success.

the four Ps of marketing
Figure 1: The Four Ps of Marketing framework.

The Four Ps Marketing Framework

The Four Ps concept originated with Procter & Gamble* more than a century ago. Then, the Ps were price, place, promotion, and packaging (because the product was always soap, but the packaging differentiated it for different consumer segments). Later, as companies began to apply these new marketing methods to more complex products, the “packaging” P gave way to “product.” Fast forward a couple decades to Neil Borden. Borden, a professor at Harvard Business School, coined the phrase Marketing Mix in the early 1950s, referring to the ingredients of marketing campaigns. The best-known marketing mix evolved from Procter & Gamble’s Four Ps.

If your inner marketing maven is whispering, “Wait, aren’t there seven Ps?”, the answer is yes. Three additional Ps (Physical Evidence, Processes, and People) are often included in the marketing mix when dealing with service-oriented businesses. While some might argue that subscription-based games are essentially software-as-a-service businesses, we’re going to restrict our focus to the Four Ps marketing mix and how it applies to indie game marketing. We’ll provide practical advice on how to use those Ps to gain visibility, as well as sales, in the increasingly crowded games space.

Using the Four Ps

For game developers, the Four Ps let you evaluate and plan using this simple matrix:

Product

  • Variety
  • Quality
  • Design
  • Features
  • Brand name
  • Packaging
  • Services

Price

  • List price
  • Discounts
  • Revenue model

Promotion

  • Advertising
  • Personal selling
  • Sales promotion
  • Public relations

Place

  • Channels
  • Coverage
  • Assortments
  • Locations
  • Inventory
  • Transportation
  • Logistics

To get started, look at each of the Ps above, take a high-level view, and ask yourself:

  • Product: What sets my game apart from other games (gameplay variety, quality, design, other features)?
  • Price: What revenue model should I use; what price should I set (list price, discounts, subscription, free)?
  • Place: What are my distribution options (online download, streaming, in-store, channel partnership bundling, and so forth)?
  • Promotion: Given my resources, what are the best ways to attract attention (via the web, social media, relationships with key influencers/YouTube* gamers, trailer videos, events)?

Mutually Dependent Variables

An important aspect of the Four Ps is that each component is interdependent — they go hand-in-hand — and you’ll need to plan and use them in combination with each other.

Markets rarely stand still, so you’ll need to commit time and energy to monitoring and adjusting your plans to keep each ingredient in your marketing mix aligned and in-tune with current market conditions. If any one of the Ps falls out of step with the others, don’t hesitate to re-evaluate and adjust accordingly.

If that sounds daunting and time consuming, don’t worry. It’s more straightforward than it might sound. For example, imagine you’re busy coding when you get a Slack* message telling you that one of your distribution channels is experiencing a temporary outage. You were about to post a very prominent banner on your website linking to that channel. Rather than direct traffic to a site that’s down, you could delay posting the banner, or direct potential customers to other outlets during the outage to help maximize sales.

Yes, Your Game is a Product

Creative individuals — game developers included — take great pride in their creations. So much so that many find it difficult to think of the fruits of their labor as a product with commercial potential. Embracing that idea, however, is an important step in making the transition from being someone dabbling in a fun hobby to someone committed to generating income from making a product that other people will pay to experience.

Thinking of your creation as a product has another advantage. Emotional attachment can cloud your judgment and, while being passionate about what you’re doing is great, being brutally honest when it comes to making business decisions is best. The sooner you start thinking of your baby/labor-of-love/awesome creative experience as a product, the better.

With that in mind, ask yourself: Is your product unique in the marketplace, or is it familiar?

  • A unique product is something original, unproven, and unfamiliar. Assuming enough people share that perception — and it’s fun, engaging, and priced right — it could have commercial potential.
  • A familiar product is something similar to existing games — perhaps a reinterpretation of, or a variation on, a trendy genre. If your product is perceived by enough people as better than what’s on the market, it could have commercial potential.

Every product has advantages and disadvantages. Understanding what these are in your product’s case is very important to being able to craft a plan aimed at convincing people that your product is worth playing and buying.

Here’s where the Product component of the Four Ps can help you catalog and quantify your product’s strengths and weaknesses, as compared to its competition.

Create a table and list the top five or ten products you’re competing against, then create a list of the strengths, weaknesses, and distinguishing features of each. Keep your list at a high level — use broad strokes to define differentiating factors. No one will care that one of your algorithms is 25 percent more efficient than an algorithm in your game engine, but they will care that your graphics look better than those of other games on the market.

An often-overlooked feature is the length of time that it takes customers to play through a game. If you know this, make a note of it for each competitor game. It’s a vital statistic when it comes to pricing your product, which we’ll talk more about in the next section.

Product | You | Competitor 1 | Competitor 2 | Competitor 3 | Competitor 4 | Competitor 5
Variety |  |  |  |  |  | 
Quality |  |  |  |  |  | 
Design |  |  |  |  |  | 
Features |  |  |  |  |  | 
Brand Name |  |  |  |  |  | 
Packaging |  |  |  |  |  | 
Services |  |  |  |  |  | 

Table 1. Sample table for competitive analysis — determining your competition’s strengths and weaknesses.

In addition to evaluating how your product’s strengths and weaknesses stack up against the competition, you need to also take into account your product’s life cycle. At each stage of that life cycle — from pre-release to beta testing, up through launch and its end of life (or the release of its first sequel) — you’ll want to have a good understanding of the challenges present at each specific stage, and have a plan for dealing with them.

Key to accomplishing that is knowing your intended audience, and tailoring a story that presents your product’s value proposition in the following:

Words— Describe your product in terms that emphasize its primary selling points, and what makes it stand out. Be consistent in how you use gameplay-specific jargon and character names.

Pictures— A picture is worth a thousand words, so emphasize the best things your product has to offer. Gameplay screenshots should focus on attention grabbers: epic battle scenes, monsters, vehicles, puzzles, and so on.

Videos— Game trailers are extremely effective ways to pique the interest of potential players. Keep trailers focused on communicating what makes your product a blast to play. Gameplay videos by you and key influencers are another great way to attract attention.

Behind-the-scenes interviews, webcasts, and blog posts— Let your audience watch as your product develops. This builds a pre-release fan base, while you and your team become a part of your product’s value.

Deploy these words, pictures, and videos everywhere and anywhere you interact with potential customers—on your website, social media, download and streaming sites that carry your product, and YouTube Gaming*. Use every digital channel available to you in this regard.

Price

It may seem obvious, but price refers to how much someone has to pay for your product. What’s not always obvious is that your product’s price should be based on its perceived value in the market, not simply what it cost to produce and distribute. A product with a price that’s higher or lower than its perceived value won’t live up to its commercial potential. In fact, some would say it simply won’t sell. It’s crucial to understand what your target audience thinks of your product.

Gaining that understanding requires you to take a dispassionate look at your product and its competition — all the things you did when evaluating your product’s strengths and weaknesses. Add to that evaluation by surveying the prices — and revenue models— your competitors are using. If possible, look back 6 to 12 months at any promotional discounts they may have offered, when they offered them, and under what circumstances. Lay that information out in another table:

Price | Competitor 1 | Competitor 2 | Competitor 3 | Competitor 4 | Competitor 5 | You
List price |  |  |  |  |  | 
Discounts |  |  |  |  |  | 
Revenue model |  |  |  |  |  | 

Table 2. Sample table for determining the pricing factors of your competition.

If you’re able to track list prices and discounts for six months or more, put those prices on a timeline that lets you spot seasonal pricing and discount trends. For example, identify whether discounts are common during the December holiday season, or if they are timed to coincide with popular gamer events and tradeshows.

Comparing revenue models for competitive analysis purposes can help you to gain a better understanding of your pricing strategy. Tracking pricing and discounting also lets you see how practical considerations might impact such things as net revenue and cash flow. For example, distribution channels typically take a percentage of the total price of a product. For your business, you’ll want to know what that percentage is, and, for your competitive analysis, it’s helpful to understand that percentage when making revenue comparisons. Comparing net prices, not list prices, tells you what your intended audience is actually paying for similar products. For business planning, knowing whether a particular distribution channel sends payments once a month, once a quarter, or on some other schedule is also helpful.

The last factor in Table 2 is for tracking revenue models, and tells you whether your competitor’s earnings are based on:

  • One-time payments— Players purchase the product once.
  • Subscription fees— Players pay a recurring fee to access the game online.
  • DLC pricing— Players purchase additional content and upgrades that enhance their game experiences.
  • Episodic pricing— Players pay to access individual episodes or a complete season of a game.
  • Microtransactions (in-game purchases)— Players buy keys to unlock features and additional powers.
  • Free to play— Players don’t pay anything up-front, but pay to avoid in-game advertising or pay for an enhanced game experience, extra content, and more features.
  • Bundle pricing— Bundling lets you get exposure for your product and extend its life by selling it along with products from other developers.

Read reviews and end-user feedback to get a sense of how much value customers place on your competitors' products. Play games similar to yours or, if your game is truly different from anything on the market, find and play other unique products, with an eye toward the value being delivered.

Setting the right price

There’s no one right way to price products, but there are plenty of pitfalls to avoid. One common mistake is believing that price alone drives sales. That idea leads to the notion that undercutting your competitors by offering a familiar product at a lower price will guarantee that people will buy your product over theirs. That’s often not the case. More importantly, setting an initial price that’s too low can hurt you. Unless you announce that your starting price is an introductory special offer, and will be raised after a predetermined period, it can be difficult to raise the price after it has been set low.

Another common mistake is to base pricing solely on the time you and your team invested in making your product. It’s one thing for building contractors to base their fees on time and materials, but, for game developers, audiences rarely know or care how much time and loving care went into creating a game experience. Gamers care whether your product is fun, entertaining, and worth the time and money spent playing it. In other words, the key to putting a price on your product is to align it with the market’s perceived value of your product.

Questions to ask yourself:

  • How much is the market willing to pay?
  • How much are your competitors charging?
  • How long will it take customers to play your game?
  • Are you offering a discount at launch and planning to raise the price later?
  • Does your list price leave room for future promotional discounts?

Similar advice on pricing best practices is offered by Steam*.

Discounting dos and don’ts

Discounting your product’s price can play a valuable role in extending its shelf life—boosting sales when they’re flat or declining, as your product matures in the marketplace. You need to be careful, however, when timing promotional discounts. Offering discounts too frequently can undermine your full retail price. For example, potential customers may resist paying full price, or any price, because they expect you to lower the price in the near future. Even if that future never comes, your sales will suffer as people wait for the price to come down.

Try to avoid discounting as a knee-jerk reaction to slower than expected unit sales. That kind of emotionally driven behavior can undermine the market’s perceived value of your product. Instead, have a pricing strategy in place that specifies when to offer discounts and under what circumstances. For example, you might create a pricing strategy that offers a discount during a holiday season, when a lower price could attract attention from gift shoppers. Similarly, offering a discount on your primary product in advance of releasing new content can help seed the market, and drive more interest in your upcoming release.

Keep in mind that discounts don’t have to be purely monetary in nature. For example, at launch your introductory price could include additional content — “a USD XX value” — at no additional cost, for a limited time. After meeting a sales target, or after a certain amount of time has passed, you can start charging full list price.

Free to play

At the other end of the pricing spectrum is the free to play approach (an option made popular by titles such as Candy Crush Saga* and World of Tanks*). Free to play generates revenue from in-game purchases (power-ups, customized objects, and so on) or from up-leveling to remove advertising. Free to play has been adopted by many established game studios, in part to combat piracy. Many publishers favor episodic pricing, an approach that combines one-time pay with a subscription fee, in the form of season passes. These passes can cost nearly as much as the original game, and give players exclusive access on a limited-time or one play-through basis to certain elements of the game, along with bonus features.

Promotion

Think of promotion as any activity that’s designed to drive sales. For indie game developers, offering a discount is promotion, as are activities that start conversations and build relationships with the gaming press and key influencers within the gaming community. Exhibiting at a tradeshow or speaking at an event are also types of promotion.

If you’re wondering how that differs from marketing, think of Promotion as an ingredient of the "marketing mix", along with Product, Price, and Place. In other words, marketing can exist without promotion, but promotion doesn’t exist without marketing.

When planning a promotion strategy, use the "What you need for game promotion" section below as a checklist divided into three categories:

  • Assets to create that will help you promote your product any time, and on any channel (see the Place section, below).
  • Things to be doing on an ongoing basis.
  • Events to participate in to promote your product.

To help you plan an event strategy, create a separate document with a timeline that includes vital details such as deadlines for submissions, load-in/load-out dates for exhibitors, deadlines for making travel reservations, and a schedule for producing targeted press materials (press releases tailored for the event, new game trailers, and so on).

What you need for game promotion

Assets

  • Logo— An icon image that instantly communicates what your product is and is not.
  • Gameplay screenshots— Pictures that emphasize the best things your product has to offer.
  • Trailers— To get potential customers excited to experience and buy your product.
  • Messaging text— Tell your game’s story in a sentence or short paragraph, and create a few bullet points that emphasize your product’s key selling points.
  • Gameplay videos— Showcase your product by focusing on scenes that will entice people to want to learn more about your product, play it, and buy it.
  • Press materials/playable demo(s)— Have these always on hand and ready to distribute when the opportunity arises.

Ongoing Activities

  • Build relationships with key influencers, YouTube gamers, and streamers— Enlist the aid of others to spread the word about what you’re building and how cool it is.
  • Establish partnerships with other brands— Force-multiply your promotional results by also employing the marketing muscle of established businesses to help you get the word out about your product.
  • Blog about how development is progressing— Let your audience see what’s happening behind the scenes to get them interested in the end result. Turn yourself, and your team, into a part of your product’s key selling points.
  • Refresh content on your social media channels (Facebook*, Instagram*, YouTube*, and so on)— Build a fan base and create excitement over your product.
  • Build and maintain an email list of potential customers— Spark interest with the goal of converting list subscribers to paying customers.
  • Build a website and refresh its content to continually build excitement about your product as it develops— Attract attention, generate interest, and make it easy for people to take action — read reviews, play a demo, watch trailers, subscribe to your email list, and purchase your product.

Events

  • Tradeshows (Game Developers Conference, Electronic Entertainment Expo, Intel® Buzz Workshops, PAX, Independent Games Festival, and so on)— Exhibit and speak to increase awareness and attract interest from potential customers and the gaming press.
  • Gamer meetups— Meet potential customers and get them excited about your product.
  • Contests, such as the Intel® Level Up Game Developer Contest— Get valuable feedback from notable industry judges, raise awareness, and possibly win valuable prizes.

What about advertising

There was a time when product marketing was synonymous with advertising. For indie game developers, however, the costs often outweigh the benefits — depending on where you choose to run an ad campaign. For example, social media ads are relatively inexpensive, and may be worth the effort and money spent on them, but placing ads with more traditional media — print and television — may not yield enough returns to make the cost and effort worthwhile.

If you do choose to advertise, target your ads carefully. You want to reach as many people that fall within your target audience demographic as possible. Digital delivery media — such as YouTube — let you pick the age, gender, parental status, and household income of those who will see your ad. Placing an ad in a popular, general-interest print publication that covers games as part of its editorial mix may put your message in front of a lot of people, but it’s unlikely that they all play, or care about, games.

When in doubt, ask yourself if what you’re planning to spend will result in enough potential sales to make the cost worthwhile.

Relationship-based promotion

One of the most effective ways to spread the word about your product is to enlist the help of established and respected gamers who actively share what they’re playing with the gaming community. These YouTube gamers and streamers can be an elusive bunch, but building relationships with key influencers whose interests align with your product is a great way to build an audience.

Partner with established brands

Building relationships with established brands — companies whose products align with yours — is a great way to expand your promotional activity in ways you normally would not be able to afford. For example, some companies may invite you to exhibit in their booth at prominent industry events at minimal or no cost, other than your travel expenses.

Public relations (PR) — should you hire a pro, or DIY

Unless you’re extroverted and love to interact with people — including complete strangers — you may find it challenging to directly engage in many of the activities professional publicists typically handle for their clients. Hiring a PR firm, however, can be costly. If you can’t afford to hire a pro, consider teaming up with a friend or family member who’s comfortable building relationships, storytelling, and being persistent. The effort you and your promotion-oriented friend/family member/partner put into public relations can pay off significantly.

Whether you’re doing PR or someone is doing it for you, before contacting influencers and the press, know their niches, and respect them. When you’re building a PC game, don’t contact people or publications solely focused on mobile games. When telling your product’s story, keep sight of its key selling points, but be careful not to oversell it in the process. Let others draw their own conclusions about its quality.

Place

Places where people can buy your product are the focus of the fourth P. With so many digital distribution channels available, plan on leveraging all of the available channels, assessing them as your product matures in the marketplace. Do the same with any physical places where your product can be purchased.

Table 3 can help you catalog all of your distribution channels. As you start planning, use the table as a checklist. As your distribution network grows, be sure to track your market coverage, and spot gaps in your network. As you start selling product, track your sales. And, like the other three Ps in the marketing mix, diligently monitor each category and adjust your marketing mix accordingly.

Place | Channel 1 | Channel 2 | Channel 3 | Channel 4 | Channel …
Coverage |  |  |  |  | 
Assortments (OEM bundles and partner distribution channels) |  |  |  |  | 
Locations (brick-and-mortar stores) |  |  |  |  | 
Inventory |  |  |  |  | 
Transportation |  |  |  |  | 
Logistics |  |  |  |  | 

Table 3. Record your distribution channels

Must-have channels

Online outlets for distributing your product fall into two categories: places you can set up yourself, and places other businesses run with which you can partner. The latter usually involve a straightforward sign-up process for joining a partner program.

Places that should be on every indie game developer's must-have channel list include, but are not limited to:

  • A website— Be sure your site — either for your company, or dedicated solely to promoting your game — includes prominently positioned calls-to-action (that is, download, purchase, play the demo, and so forth).
  • A Facebook page— Be sure to include links to your website, your YouTube channel, your product on YouTube Gaming, your Steam landing page, and information about any other place that’s promoting your product.
  • A YouTube channel — Provide a place where people can watch your video trailers.
  • A YouTube Gaming presence— Post gameplay videos of your product made by key influencers and gamers, not your team.

Assortments, or game bundles, are another way to get your product additional exposure. By packaging your product, or a partial version of your product with other similar titles, you can piggy-back promotion and sales efforts with other developers.

Sites such as Green Man Gaming and Humble Bundle can greatly help boost exposure to your product.

Crowdfunding sites offer another channel for both selling and promoting your product. While you’re still developing your product, crowdfunding sites let you raise money by selling promotional items (t-shirts, bumper stickers, and so forth), offer early access to betas, or playable demos, and so on. They can also be a source of news when, for example, players leave glowing feedback, or your fundraising exceeds your wildest dreams.

Boutiques

Getting distributed in a large brick-and-mortar retail outlet can be challenging for indies with no proven track record. Boutique game stores often have a highly dedicated clientele. Teaming up with boutiques whose customers align with your product can be a very effective alternate retail strategy.

Getting inventory from here to there

Use the last three categories of Table 3 — Inventory, Transportation, and Logistics — to track available inventory, shipping and transportation services (and costs), as well as any related logistics. Even if you’re selling digital keys to download a compressed package containing your product, or to unlock your product on a streaming site, use these categories to help ensure that each of your digital channels has what they need to effectively distribute your product.

Summary

When used correctly, marketing strategies and promotional plans based on the Four Ps can help you get your game in front of the right people, at the right time — giving you a better chance at achieving your sales and profit goals, while building customer satisfaction and loyalty. To do that effectively, devoting time and effort to marketing and promotion is necessary. How much time is enough? Our recommendation is to spend a minimum of 20 percent of your time marketing and promoting your work. Without marketing, it doesn’t matter how much hard work and brilliant design went into your game. If people don’t know it exists, it isn’t likely to generate much revenue. With the right marketing, however, the sky’s the limit.

Resources

Marketing’s Four Ps: First Steps for Entrepreneurs, Purdue University (PDF)

Marketing Mix: The 4Ps of Marketing for Businesses an alternate perspective

Green Marketing Strategy and the Four P's of Marketing

Four Ps - Investopedia

Marketing mix - Wikipedia

Get Noticed: Attending Your First Event as an Indie Game Developer


The very best independent game developers apply the same degree of adaptability to the promotion of their game as they do to its development. To jump-start your promotional activities, increase your network, and gain inspiration, we urge you to strongly consider trade shows, developer conferences, workshops, and networking events. Even if you arrive with only the briefest demo, or proof-of-concept presentation, attending can provide you with greater knowledge and experience, more insights into the industry, and lead to greater market presence and status for your game.

It won’t be easy if you’re already operating on a shoestring, but try to leave something in the budget for events and festivals—advice that Roger Paffrath, a Brazilian producer and co-creator of the side-scrolling platformer Little Red Running Hood* readily endorses. Writing at indiegames.com, he said: “Hands down, [attending events] is the best way to show your game to other people, and start networking with other developers and press.”

Alberto Moreno of Crocodile Entertainment, the creators of Zack Zero*, says his one big regret is not starting his promotional efforts sooner. “After passing [quality assurance] QA, and a little under one month before game release, we began to think about [public relations] PR … If we could turn back time, we would begin by establishing press contacts way in advance.”

At gamedevelopment.tutsplus.com, Robert DellaFave—game designer and project manager at Divergent Games—recommended that all indie developers consider events as a key part of their promotional plan. “Despite the theory that all game developers are vampires who dwell in dark basements, getting out into the light of day and attending public gatherings is one of the smartest things you can do to promote your game. I promise you won’t turn to ash.”

Find the Event That’s Right for You

Deciding how to approach events can be difficult—there’s a dizzying number to choose from, while travel costs and time commitments will vary by event. There are less formal and more frequent game dev meetups almost everywhere, including India and Brazil, for example. More formal events include the Chinese Game Developers Conference in Shanghai or the big Game Developers’ Conference (GDC) show in San Francisco. From IndieCade in Los Angeles to London’s Rezzed, or from Paris Games Week to Germany’s Gamescom, you could possibly attend an event every week of the year if you could travel the globe.

“I would recommend a combination of gaming and developer events, and obviously there are hundreds of them out there,” says Patrick DeFreitas, Software Partner Marketing Manager at Intel. He points newcomers to GameConfs.com, which provides a calendar of gaming-related events broken down by country.

Be smart about your selections, and be serious about choosing your targets. If you’re on a tight budget, you can hook up with dozens of fellow PC developers at one of hundreds of global game development meet-ups, and talk face to face with developers at a similar spot in their developer’s journey. If you bring a playable demo you could attract lots of local interest, and you could end up promoting your game with only an investment of time.

Alternatively, you can mingle with the game-playing public at larger gaming-industry events, like PAX, GDC, and DreamHack. These offer a much wider scope, and attract thousands of attendees—but the challenge is to not get lost in the crowd.

Figure 1

Figure 1: Conferences such as GDC offer insights into what the biggest names in gaming are doing.

The more intimate nature of Intel® Buzz Workshops makes them a great source of information relating to technical elements such as cleaning up game code and optimizing your programming. You can also learn more about distribution channels, consumer metrics, and other business topics. These workshops are usually limited to a few hundred attendees, so you won’t be overwhelmed.

Figure 2

Figure 2: Intel® Buzz Workshops offer great networking opportunities, and are not overwhelming.

The key point is to consider shows and conferences in terms of the possibilities they offer. If you’re just starting out, you may want to hit local events first, because they’re easier and cheaper to attend. When you’re ready to invest in a more expensive show, do your research first. Andreea Vaduva is a marketing, PR, and community manager for the stealth platformer Black the Fall*, which was created in Bucharest, Romania, and set in a dystopian communist dictatorship. The game was promoted far and wide and, in an article at LinkedIn.com, Vaduva looked back favorably on attending ten events in one year, and recommended using the Promoter app calendar for research. “Before submitting your game to a competition or accepting an invitation, always check online to see how many attendees are coming, what other developers are saying about it, etc.,” she said. Considering Black the Fall won Best Indie Game at Gamescom 2016, it’s good advice.

The big events offer you the opportunity to rub shoulders with your biggest competitors. You might also meet studio executives and publishers that appreciate what you are trying to bring to market. Your goal is to get a feel for the industry, beyond what you’re reading online or via social media. At these events you get to see what future the industry is planning for itself, and listen to some of the star names in gaming as they host classes and give talks. You’ll come away with your own personal viewpoint on the state of the industry and where your game fits, plus some contacts for when your game nears completion.

Don’t Waste Your Time—Plan Ahead

If you’re the only person to plan around, the good news is that your schedule is easy to control. The bad news is that everything rests on your shoulders. You’ll have to plan your social media presence, your technology for playing the game demo, and your personal calendar.

Most indies bring a laptop with their code so that they can maintain flexibility in their day. Laptops and personal devices are fine, but strip out anything personal on your device, such as exotic ringtones, interesting browser histories, embarrassing photos, or disorganized desktops. And always make backups.

If you have the benefit of being part of a larger team, you need to work out how to divide your time in order to cover the most ground. One or two of you can hustle to meetings with the laptop containing your game, while the others attend talks and classes relevant to their role in development. The likes of GDC in San Francisco, for example, and the Develop Conference in Brighton, England build entire tracks of expert talks and presentations around visual arts, programming, audio, business, and more. You may want to have someone else dive deep on a single track, while you bounce around and hit key topics from name-brand speakers, for example.

What you don’t want to do is attend events with the goal of tracking down some particular individual at all costs. Experienced, sought-after conference notables come with a schedule already in place, and probably have an admin or assistant to ensure it stays that way. You can’t plan to go to an event with a goal of simply stalking them for that perfect moment. That is a waste of your time, and likely to end in disappointment. Plan your time to be as productive as possible, and don’t rely on serendipity.

If your goal, however, is to meet a certain kind of person, there is certainly value in attending shows, even without prearranged meetings. You could take your game to an event such as TwitchCon if you’re just starting out on your promotional journey, and are looking for exposure within the streaming community and their millions of fans.

Figure 3

Figure 3: TwitchCon is a great venue for getting noticed as you start your promotional journey.

At TheNextWeb.com, reporter Lauren Hockenson described TwitchCon’s Broadcaster Alley as “The gallery of Twitch success stories.” She met up with broadcaster CohhCarnage, a former student of game design who left before completing his degree to become a full-time streamer, and now has sponsorship deals with Intel, Razer, Aorus, and Madrinas Coffee. He says he’s never seen anything so uniquely focused as Broadcaster Alley.

“Every single person you meet… has a vested interest in something that you do, too,” he said. “Everyone is cordial and nice, and enjoys Twitch. It’s not like any con I’ve ever been to at all.”

You might not be able to get time with the biggest streamers, but showing your game to lots of smaller streamers, who then go on to feature it on their channels, gets you a foot on the promotional ladder. Don’t forget, some of the smaller streamers could go on to be incredibly successful in the future, so building a relationship with them early can pay off later.

Bring Your Code and Show It Off

Ideally, you want to take a code sample that shows off your game’s vision. That doesn’t necessarily mean a finalized edition, or even a whole level, but it does mean something that allows people to understand what you’re aiming for. You have to give people enough information to communicate to their audience why they should care. You need enough of a sample to show a publisher your vision, and you need a big enough demo to allow fellow developers to provide input on how well you’re executing your idea. Developers and publishers understand the process of game creation, and are used to seeing games that are nowhere near finished, so don’t worry about not having something final to show.

DellaFave says it’s never too early to start marketing and promoting your game. “Instead of waiting until the eleventh hour, follow this general rule: Begin your marketing campaign the moment you have something that illustrates the fundamental mechanics and look of your game,” he advised.

Be aware of any content restrictions particular to the country hosting the event you’re attending. Do not bring a demo that includes content that violates local laws or corporate guidelines regarding game content, particularly if you’re at a public event that admits children and teenagers, and your game includes material aimed solely at an adult audience. Take this responsibility seriously.

How to be Part of a Booth

Renting space at game events is beyond the budget of most independent game developers. Partnering with larger companies, or applying to be part of a dedicated independent game booth (such as GDC’s Indie MEGABOOTH), is a good alternative. Landing a booth presence gives you a certain status, saves money, gives you a solid base for meeting people and scheduling appointments, and more. You’ll have to work up front to get accepted, so having a personal network that includes representatives of major gaming companies can ensure that you’re aware of deadlines, commitment levels, and content restrictions.

If you can do it, the effort could make a big difference. Dustin Hendricks, founder of Last Life Games, is the creator of the side-scrolling platform game Trial by Viking*. He took the plunge, purchasing booth space at GDC, and described the experience for GamaSutra.com. “I was able to get like three months’ worth of polish just out of watching people play, watching where they get hung up, and [hearing] some stuff people mentioned that might make it even cooler.”

Figure 4

Figure 4: The Indie MEGABOOTH is perfect for promoting your game on a small budget.

Make sure you budget for your own accommodation, transport and sustenance—don’t assume that being accepted as part of a booth equates to a free event for you and your team. Also, don’t assume that there will be enough free food lying around to forage for your needs.

The Indie MEGABOOTH requires you to submit your game in advance for consideration. This is common practice, and the extra boost of a deadline may jump-start your creative juices into a productive roll.

If that deadline blitz isn’t enough of an incentive, consider what’s at stake if you win a category. The annual Intel® Level Up Game Developer Contest, for instance, can provide category winners with great exposure, such as showcasing at PAX West. Winners get a new marketing bullet to add to promotional materials, and a great reason for a new press release, a new Tweet, and an update to their landing page.

Figure 5

Figure 5: Winning the Intel Level Up Game Developer Contest means cash and marketing support, as well as recognition.

The payoff can be huge. Dean Dodrill of Humble Hearts decided to lock down his content and polish up a submission of his action role-playing game (RPG) Dust: An Elysian Tail* for the annual Dream.Build.Play event. “I had low expectations, as this would be the first time anyone outside of a handful of play testers actually played my experiment, so I was quite surprised when I won the Grand Prize,” he wrote. Three years later his game was a headliner of the Microsoft Xbox Live* Arcade (XBLA) Summer of Arcade event.

Network Like Crazy

Don’t be shy once you’re actually at the event. Think outside of the box when it comes to showing off your game, so attendees will remember you. Be active on social media, and mention as many people, players, and companies as you can—they’ll likely return the favor and share your Tweets if you mention them.

The Phantom Compass team rushed to pull together a GDC prototype for a unique game combining RPG elements and pinball—Rollers of the Realm*. Once at GDC, they got right to work. “We set up meetings with potential publishers and barked ‘RPG pinball’ to any passers-by with a press badge. Almost all were immediately intrigued by the concept, yet couldn’t wrap their heads around how it would work. This turned out to be a great recipe to start a dialogue,” they said in their postmortem.

Contests are a great example of maximizing your booth time. Mini tournaments can get the public dialed in and playing the game, and can generate tremendous buzz on the floor. Recording those sessions with industry luminaries, or enthusiastic booth visitors, makes a great new promotional asset that you can share.

For inspiration on taking tournaments to the next level, consider the efforts of Gamelab, creators of Gangs of GDC*, a massively multiplayer mobile phone fighting game, just for GDC. It was a “wonderful little gumdrop of fun,” they reported, that also provided an inspiring experience with mobile technologies.

Handing out t-shirts, business cards, game keys, and flyers describing the core features and idea of your game can provide a lasting connection. Make sure you have something for people to remember your game by, and don’t be cheap. A poorly produced t-shirt or meager flash drive communicates a negative message long after the show is over.

If you’ve never spoken at an event, and worry about standing in front of a crowd, take some time to watch others do it. See how prepared they are, and imagine yourself up there. Follow along, and take a few notes about how you could do the same thing with your story.

Figure 6

Figure 6: Talking about your game in front of an enthusiastic audience is a great way to get inexpensive exposure.

Maybe your background is your story, if that’s what inspired you. Some examples:

  • A Federal Bureau of Investigation (FBI) agent who creates a title around forensic science
  • A wilderness guide who builds a challenging survival game
  • An art therapist who shows how to paint with emotions as colors

Your passion and enthusiasm for your vision are your key selling points. Still, public speaking isn’t for everyone; if it’s not for you, consider other ways of getting your message across. Post-event parties are numerous, and include those hosted by large gaming companies, as well as smaller, more intimate social events arranged by groups of fans and developers. Use these events as a means of meeting like-minded individuals. Remember to bring high-quality business cards; printing them yourself is dangerous. Darius Kazemi, blogging at TinySubversions.com, describes a system for taking notes on who you talked to and what they said. Guard your reputation, and stay businesslike, especially in the face of free drinks.

Figure 7

Figure 7: Use events to network and make new friends. Remember to bring business cards, and stay professional.

Follow Up After the Event

The event might be over, but that doesn’t mean your work is done. If you met with influencers, and showed off your game to them, you should jot down their comments. Even if the coverage is negative, take the time to engage fairly and in a professional manner. What you don’t want to do is argue or fight back, no matter how tempting. If someone has taken the time to play your game, you owe them the courtesy of accepting their feedback. One bad reaction is a data point, and hopefully you’ll find something constructive in their view.

Also, it is totally acceptable to follow up with a polite piece of correspondence reminding relevant parties of your meeting, and asking if they need anything more from you to help them produce coverage. Keep in mind that influencers see a lot of new games at events, and they might not have gotten around to covering yours yet.

Finally, when you’re in a position to look back at the event, reflect on whether it was a success, in order to help you in planning for your next outing.

“Did you leave that event ultimately feeling as though progress has been made in anything that you’re doing personally as a developer, an entrepreneur, or an artist?” asks DeFreitas. “Did you feel like there was any sort of advancement or progress made with respect to your game getting out there? And if you can say ‘yes’, then it was really worth going.”

Resources

Intel® Developer Zone

Intel® Level-Up Game Developer Contest

Intel® Buzz Workshop Series


One Door VR: The First Proof of Concept on Un-Tethered VR Using MSI* Backpack PC


Corey Warning and Will Lewis are the co-founders of Rose City Games*, an independent game studio in Portland, Oregon.

Rose City Games was recently awarded a development stipend and equipment budget to create a VR Backpack Early Innovation Project. The challenge was to come up with something that could only be possible with an un-tethered VR setup. In this article, you’ll find documentation about concepting the project, what we learned, and where we hope to take it in the future. Below is the introductory video for the project.


Figure 1. Watch the introductory video of the One Door VR project.

Inspirations Behind Project: One Door

Earlier this year, our team attended the Resident Evil Escape Room in Portland, Oregon. Being huge fans of that franchise, experiencing that world in a totally new medium was really exciting, and it got us thinking about what other experiences could cross over in a similar fashion.

At the time, we were also trying out as many VR experiences as we could get our hands on. When we heard about the opportunity to work on an un-tethered VR experience, we knew there had to be something interesting we could bring to the table.

We’re currently operating out of a co-working space with some friends working on a variety of VR projects. The WILD crew had some experience in merging real space and VR, so I asked Gabe Paez if he remembered any specific challenges he encountered during that project. “Doors” was his response, and I decided to chase after creating a “VR Escape Room” experience, with the idea of moving through doors as the core concept!

Overview

The scope of this project is to create a proof of concept VR application using the MSI* One VR Backpack. We’re attempting to create a unique experience that’s only possible using this hardware, specifically, an un-tethered setup.

Right away, we knew this project would require an installation, and because of this, we’re not considering this product for mass market. This will likely be interesting content for exhibitions such as GDC Alt.Ctrl, Unite*, VR LA, etc.

One Door Game Concept

Players will be in a completely virtual space, interacting with a physical door installation. They will be wearing the MSI One VR Backpack, with a single HTC Vive* controller, and an HTC Vive headset. Each level will contain a simple puzzle or action the player must complete. Once completed, the player will be able to open the door and physically step through to the next level. At that point, they will be presented with a new puzzle or action, and the game will progress in this fashion.


Figure 2. The proof of concept setup for One Door

The player can open the door at any time. However, if a puzzle or action is incomplete, they will see the same level/door on the other side of the installation. We’re considering using an HTC Vive Tracker for the actual door handle, so that we can easily track and calibrate where the player needs to grab.


Figure 3. One Door front view


Figure 4. One Door top view

Installation Specifics

  • The door will need to be very light weight.
  • We’ll need support beams to make sure the wall doesn’t tip over.
    • Sandbags on the bases will be important.
  • We should use brackets or something similar that allows assembling and disassembling the installation quickly, without sacrificing the integrity each time.
  • The HTC Vive lighthouses will need to be set higher than the wall in order to capture the entire play area.
    • We’ll need quality stands, and likely more sandbags.
  • We may need something like bean bag chairs to place around the support beams/lighthouses to ensure people don’t trip into anything.
    • Another consideration is having someone attending to the installation at all times.


Figure 5. One Door field setup inside the HTC* lighthouses

Our Build-Out

  • Mobile, free-standing door with handle and base
  • MSI One VR Backpack and off-site computer for development
    • Additional DisplayPort-to-HDMI cable required
    • Mouse/keyboard/monitor
    • OBS to capture video
  • 2 lighthouses
    • Stands
    • Adjustable grips to point lighthouses at an angle
    • Placed diagonally on each side of the door
  • 1 Vive Tracker
    • Gaffer tape to attach it to the door, ripped at the charging port
    • Extension cables and charging cables run to the tracker for charging downtime
  • 2 Vive controllers
    • We didn’t need them, but showing hand positioning was helpful for recording video
  • iPhone* to capture real-world video


Figure 6. Door with HTC Vive*

This project was very much VR development training for us in many ways. This was our first time working with a Vive, and implementing the additional physical build-out for a new interactive experience created a bit of a learning curve. I feel like the majority of our hang-ups were typical of any VR developer, but of course we created some unique challenges for ourselves that we're happy to have experience with now. I would definitely recommend that VR developers thoughtfully explore the topics below and learn from our assumptions and processes before kicking off a project of their own.

Our First Time with HTC Vive*

We've played with the Vive a ton, but this was our first time developing for it. Setting up the general developer environment and Unity* plugins didn't take much time, but we had to think very strategically about how to develop and test more seamlessly past that point. Having two people on site at a time commonly saved us an immense amount of time: one person tending to Unity while the other moved controllers and trackers, re-adjusted lighthouses, adjusted room scale, and acted as a second pair of eyes.


Figure 7. One Door VR development and testing

With regard to the hardware specifically, and because our project needed a physical prop, we went back and forth on many room choreographies for how the lighthouses could track the devices, and we even had quite a bit of trouble hooking up a monitor. Since the MSI One VR Backpack has one HDMI output and one DisplayPort input, we had to borrow (and later buy) a DisplayPort-to-HDMI converter to both develop the application and use the Vive headset simultaneously. Luckily, this didn't delay development for too long, and was a better solution than our initial workaround — attaching the HDMI output to an HDMI switcher that we already had, and flipping between our monitor/dev environment and the headset. Continuing with this process for the duration of the project would have been very unrealistic and a huge waste of time.

We were introduced to more new experiences during this project, like being able to remotely work from home and use Unity's Collaborate feature, exploring how awesome it was to experience VR without being tethered, and becoming very familiar with how quickly we’re able to kick off a VR project.

Budget

Almost directly paired with testing new equipment and working with a physical build-out, our budget was another challenge we had to overcome. The recommended list of equipment provided by Intel was not totally covered by the allotted funding, so we had to pick a bare minimum of equipment for the project and then consider how the leftover funds could cover the hours put in by an experienced developer. Luckily, because of our connections in the local game developer community, we were able to work with one of our friends who has been interested in experimenting on a project like this for some time. Still, if we were to do this project from scratch, we would very likely scope it with a higher budget in mind: at least two more trackers, converter cables, adjustable joints for the tops of the lighthouse stands, and a few other small items would have been part of our minimum requirements to complete the project on a tighter timeline and with a more polished product.

Location/Space

From a consumer standpoint, we know that room-scale VR is unrealistic for many, and we still ran into a few issues as we planned for and worked on this project. One of my biggest recommendations to other developers working in room-scale VR would be to buy a tape measure early and make sure you have space solely dedicated to your project for the entirety of its development. We share a co-working space with about 20 other local VR developers, artists, game makers, and web designers, so needing to push our build-out to the side of the room at the end of every dev session added to our overall setup time. It did give us a lot of practice with setup and familiarity with devices, but another interesting revelation was that we never would have been able to do this from any of our homes!

Unique Build-Out

Since our project involved a prop (a full-sized, free-standing door), we had to make obvious considerations around moving it, storing it, and keeping it from occluding the lighthouses. When we think about taking our project beyond a prototype, there are so many more issues that become apparent. Thinking about how this project would likely continue in the future as a tech demo, festival/museum installation, or resume piece, we also had to consider that we would need to show it to more people than ourselves and direct supporters. With this comes an additional consideration: safety. We definitely cut corners to very quickly build a functional prototype, but with polish and transportation readiness in mind, we would definitely recommend spending more time and resources on creating a safer experience catered to those unfamiliar with VR.

As we prototyped, we were able to remember to pick our feet up in order to not trip, slowly move forward to avoid bashing into an outcropping in the door, and find the door handle without any problem. What we've made serves as an excellent tech demo, but we would definitely take another pass at the door prop before considering it any sort of consumable, public product, or experience. To make transportation easier, we would also build the door differently so that we could disassemble it on the fly.

Moving Forward

We're confident in what we have as a technical demo for how easy, interesting, and liberating it can be to use the MSI One VR Backpack, and we're also very proud of and excited about what we were able to learn and accomplish. So much so that we'd like to continue implementing simple puzzles, art, voiceover, and accessibility features to make it more presentable. After some additional testing and polish, we'd like to shop the prototype around, searching for a sponsor related to content and IP, VR tech, interactive installations, or trade shows so that we can share the project with a wider audience! Intel is a prime candidate for this collaboration, and we'd love to follow up after giving the demo another round of polish.

Thanks for letting us be a part of this!

Code Sample (Unity)

When using a peripheral as large as a door, the room choreography needs to be spot-on with regard to your lighthouse and tracker setup — particularly the tracker, which we affixed to our door to gauge its orientation at any given time (this mainly allowed us to tell whether the door was closed or open). We made a simple setup script to position the door, door frame, and door stand/stabilizers properly.

The Setup Helper is a simple tool that positions and rotates the door and door frame relative to the Vive Tracker position. Setup Helper runs in Editor mode, allowing it to be updated without having to be in Play mode, but it should be disabled after running the application to allow the door to swing independently of the frame in game. Multiple Setup Helpers can be created to position any other geometry that needs to be placed relative to the door, like room walls, floors, and room decor, in order to avoid potential visual or collision gaps and clipping.

The following script applies to the tracker (attached to the door) and the doorway in the Setup Helper hierarchy.

using System.Collections;
using System.Collections.Generic;
using UnityEngine;

// Positions the door frame geometry relative to the Vive Tracker that is
// physically attached to the door, so the virtual doorway lines up with the
// physical installation.
[ExecuteInEditMode]
public class SetupHelper : MonoBehaviour {

    public bool setDoorFrameToTracker = false; // toggle in the Inspector to run the alignment
    public GameObject doorFrameGo;             // door frame object to position
    public Transform trackerTransform;         // Vive Tracker attached to the door
    public bool trackRotation = false;         // also match the tracker's rotation
    public Vector3 doorframeShift;             // offset so the frame sits exactly on the tracker position

#if UNITY_EDITOR
    // Runs in Edit mode because of [ExecuteInEditMode]. Disable this component
    // (or clear setDoorFrameToTracker) before playing so the door can swing
    // independently of the frame in game.
    void Update () {
        if (setDoorFrameToTracker)
            SetDoorFrameToTracker();
    }

    void SetDoorFrameToTracker()
    {
        doorFrameGo.transform.position = trackerTransform.position + doorframeShift;
        if (trackRotation)
            doorFrameGo.transform.rotation = trackerTransform.parent.rotation;
    }
#endif
}
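
The door open/closed check mentioned earlier isn't part of the sample above, but here is a minimal sketch of how the tracker's orientation could drive that logic. Everything below (the DoorStateSketch class, its field names, and the angle threshold) is our own illustrative naming, not code from the project:

using UnityEngine;

// Illustrative sketch only: infers whether the physical door is open by
// comparing the tracker's current rotation against a pose captured while
// the door was shut.
public class DoorStateSketch : MonoBehaviour {

    public Transform trackerTransform;     // Vive Tracker attached to the door
    public float openAngleThreshold = 10f; // degrees of swing that count as "open"

    private Quaternion closedRotation;     // reference pose captured at calibration time

    // Call once during setup while the door is physically closed.
    public void CalibrateClosedPose() {
        closedRotation = trackerTransform.rotation;
    }

    // True when the door has swung past the threshold angle.
    public bool IsDoorOpen() {
        return Quaternion.Angle(closedRotation, trackerTransform.rotation) > openAngleThreshold;
    }
}

In a level script, CalibrateClosedPose() would be called once after the Setup Helper has aligned the doorway, and IsDoorOpen() polled each frame to decide whether to reveal the next room.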

About the Authors

Corey Warning and Will Lewis are the co-founders of Rose City Games, an independent game studio in Portland, Oregon.

Developer Success Stories Library


Intel® Parallel Studio XE | Intel® System Studio | Intel® Media Server Studio

Intel® Advisor | Intel® Data Analytics Acceleration Library | Intel® Distribution for Python* | Intel® Inspector XE | Intel® Integrated Performance Primitives | Intel® Math Kernel Library | Intel® Media SDK | Intel® MPI Library | Intel® Threading Building Blocks | Intel® VTune™ Amplifier

 


Intel® Parallel Studio XE


Altair Creates a New Standard in Virtual Crash Testing

Altair advances frontal crash simulation with help from Intel® Software Development products.


CADEX Resolves the Challenges of CAD Format Conversion

Parallelism Brings CAD Exchanger* software dramatic gains in performance and user satisfaction, plus a competitive advantage.


Envivio Helps Ensure the Best Video Quality and Performance

Intel® Parallel Studio XE helps Envivio create safe and secured code.


ESI Group Designs Quiet Products Faster

ESI Group achieves up to 450 percent faster performance on quad-core processors with help from Intel® Parallel Studio.


F5 Networks Profiles for Success

F5 Networks amps up its BIG-IP DNS* solution for developers with help from Intel® Parallel Studio and Intel® VTune™ Amplifier.


Fixstars Uses Intel® Parallel Studio XE for High-speed Renderer

As a developer of services that use multi-core processors, Fixstars has selected Intel® Parallel Studio XE as the development platform for its lucille* high-speed renderer.


Golaem Drives Virtual Population Growth

Crowd simulation is one of the most challenging tasks in computer animation―made easier with Intel® Parallel Studio XE.


Lab7 Systems Helps Manage an Ocean of Information

Lab7 Systems optimizes BioBuilds™ tools for superior performance using Intel® Parallel Studio XE and Intel® C++ Compiler.


Massachusetts General Hospital Achieves 20X Faster Colonoscopy Screening

Intel® Parallel Studio helps optimize key image processing libraries, reducing compute-intensive colon screening processing time from 60 minutes to 3 minutes.


Moscow Institute of Physics and Technology Rockets the Development of Hypersonic Vehicles

Moscow Institute of Physics and Technology creates faster and more accurate computational fluid dynamics software with help from Intel® Math Kernel Library and Intel® C++ Compiler.


NERSC Optimizes Application Performance with Roofline Analysis

NERSC boosts the performance of its scientific applications on Intel® Xeon Phi™ processors up to 35% using Intel® Advisor.


Nik Software Increases Rendering Speed of HDR by 1.3x

By optimizing its software for Advanced Vector Extensions (AVX), Nik Software used Intel® Parallel Studio XE to identify hotspots 10x faster and enabled end users to render high dynamic range (HDR) imagery 1.3x faster.


Novosibirsk State University Gets More Efficient Numerical Simulation

Novosibirsk State University boosts a simulation tool’s performance by 3X with Intel® Parallel Studio, Intel® Advisor, and Intel® Trace Analyzer and Collector.


Pexip Speeds Enterprise-Grade Videoconferencing

Intel® analysis tools enable a 2.5x improvement in video encoding performance for videoconferencing technology company Pexip.


Schlumberger Parallelizes Oil and Gas Software

Schlumberger increases performance for its PIPESIM* software by up to 10 times while streamlining the development process.


Ural Federal University Boosts High-Performance Computing Education and Research

Intel® Developer Tools and online courseware enrich the high-performance computing curriculum at Ural Federal University.


Walker Molecular Dynamics Laboratory Optimizes for Advanced HPC Computer Architectures

Intel® Software Development tools increase application performance and productivity for a San Diego-based supercomputer center.


Intel® System Studio


CID Wireless Shanghai Boosts Long-Term Evolution (LTE) Application Performance

CID Wireless boosts performance for its LTE reference design code by 6x compared to the plain C code implementation.


NERSC Optimizes Application Performance with Roofline Analysis

NERSC boosts the performance of its scientific applications on Intel® Xeon Phi™ processors up to 35% using Intel® Advisor.


Daresbury Laboratory Speeds Computational Chemistry Software 

Scientists get a speedup to their computational chemistry algorithm from Intel® Advisor’s vectorization advisor.


Novosibirsk State University Gets More Efficient Numerical Simulation

Novosibirsk State University boosts a simulation tool’s performance by 3X with Intel® Parallel Studio, Intel® Advisor, and Intel® Trace Analyzer and Collector.


Pexip Speeds Enterprise-Grade Videoconferencing

Intel® analysis tools enable a 2.5x improvement in video encoding performance for videoconferencing technology company Pexip.


Schlumberger Parallelizes Oil and Gas Software

Schlumberger increases performance for its PIPESIM* software by up to 10 times while streamlining the development process.


Intel® Data Analytics Acceleration Library


MeritData Speeds Up a Big Data Platform

MeritData Inc. improves performance—and the potential for big data algorithms and visualization.


Intel® Distribution for Python*


DATADVANCE Gets Optimal Design with 5x Performance Boost

DATADVANCE discovers that Intel® Distribution for Python* outpaces standard Python.
 


Intel® Inspector XE


CADEX Resolves the Challenges of CAD Format Conversion

Parallelism Brings CAD Exchanger* software dramatic gains in performance and user satisfaction, plus a competitive advantage.


Envivio Helps Ensure the Best Video Quality and Performance

Intel® Parallel Studio XE helps Envivio create safe and secured code.


ESI Group Designs Quiet Products Faster

ESI Group achieves up to 450 percent faster performance on quad-core processors with help from Intel® Parallel Studio.


Fixstars Uses Intel® Parallel Studio XE for High-speed Renderer

As a developer of services that use multi-core processors, Fixstars has selected Intel® Parallel Studio XE as the development platform for its lucille* high-speed renderer.


Golaem Drives Virtual Population Growth

Crowd simulation is one of the most challenging tasks in computer animation―made easier with Intel® Parallel Studio XE.


Schlumberger Parallelizes Oil and Gas Software

Schlumberger increases performance for its PIPESIM* software by up to 10 times while streamlining the development process.


Intel® Integrated Performance Primitives


JD.com Optimizes Image Processing

JD.com Speeds Image Processing 17x, handling 300,000 images in 162 seconds instead of 2,800 seconds, with Intel® C++ Compiler and Intel® Integrated Performance Primitives.


Tencent Optimizes an Illegal Image Filtering System

Tencent doubles the speed of its illegal image filtering system using SIMD Instruction Set and Intel® Integrated Performance Primitives.


Tencent Speeds MD5 Image Identification by 2x

Intel worked with Tencent engineers to optimize the way the company processes millions of images each day, using Intel® Integrated Performance Primitives to achieve a 2x performance improvement.


Walker Molecular Dynamics Laboratory Optimizes for Advanced HPC Computer Architectures

Intel® Software Development tools increase application performance and productivity for a San Diego-based supercomputer center.


Intel® Math Kernel Library


MeritData Speeds Up a Big Data Platform

MeritData Inc. improves performance―and the potential for big data algorithms and visualization.


Qihoo360 Technology Co. Ltd. Optimizes Speech Recognition

Qihoo360 optimizes the speech recognition module of the Euler platform using Intel® Math Kernel Library (Intel® MKL), speeding up performance by 5x.


Intel® Media SDK


NetUP Gets Blazing Fast Media Transcoding

NetUP uses Intel® Media SDK to help bring the Rio Olympic Games to a worldwide audience of millions.


Intel® Media Server Studio


ActiveVideo Enhances Efficiency

ActiveVideo boosts the scalability and efficiency of its cloud-based virtual set-top box solutions for TV guides, online video, and interactive TV advertising using Intel® Media Server Studio.


Kraftway: Video Analytics at the Edge of the Network

Today's sensing, processing, storage, and connectivity technologies enable the next step in distributed video analytics, where each camera itself is a server. With Kraftway*, video software platforms can encode up to three 1080p60 streams at different bit rates with close to zero CPU load.


Slomo.tv Delivers Game-Changing Video

Slomo.tv's new video replay solutions, built with the latest Intel® technologies, can help resolve challenging game calls.


SoftLab-NSK Builds a Universal, Ultra HD Broadcast Solution

SoftLab-NSK combines the functionality of a 4K HEVC video encoder and a playout server in one box using technologies from Intel.


Vantrix Delivers on Media Transcoding Performance

HP Moonshot* with HP ProLiant* m710p server cartridges and Vantrix Media Platform software, with help from Intel® Media Server Studio, deliver a cost-effective solution that delivers more streams per rack unit while consuming less power and space.


Intel® MPI Library


Moscow Institute of Physics and Technology Rockets the Development of Hypersonic Vehicles

Moscow Institute of Physics and Technology creates faster and more accurate computational fluid dynamics software with help from Intel® Math Kernel Library and Intel® C++ Compiler.


Walker Molecular Dynamics Laboratory Optimizes for Advanced HPC Computer Architectures

Intel® Software Development tools increase application performance and productivity for a San Diego-based supercomputer center.


Intel® Threading Building Blocks


CADEX Resolves the Challenges of CAD Format Conversion

Parallelism Brings CAD Exchanger* software dramatic gains in performance and user satisfaction, plus a competitive advantage.


Johns Hopkins University Prepares for a Many-Core Future

Johns Hopkins University increases the performance of its open-source Bowtie 2* application by adding multi-core parallelism.


Pexip Speeds Enterprise-Grade Videoconferencing

Intel® analysis tools enable a 2.5x improvement in video encoding performance for videoconferencing technology company Pexip.


Quasardb Streamlines Development for a Real-Time Analytics Database

To deliver first-class performance for its distributed, transactional database, Quasardb uses Intel® Threading Building Blocks (Intel® TBB), Intel’s C++ threading library for creating high-performance, scalable parallel applications.


University of Bristol Accelerates Rational Drug Design

Using Intel® Threading Building Blocks, the University of Bristol helps slash calculation time for drug development—enabling a calculation that once took 25 days to complete to run in just one day.


Walker Molecular Dynamics Laboratory Optimizes for Advanced HPC Computer Architectures

Intel® Software Development tools increase application performance and productivity for a San Diego-based supercomputer center.


Intel® VTune™ Amplifier


CADEX Resolves the Challenges of CAD Format Conversion

Parallelism Brings CAD Exchanger* software dramatic gains in performance and user satisfaction, plus a competitive advantage.


F5 Networks Profiles for Success

F5 Networks amps up its BIG-IP DNS* solution for developers with help from Intel® Parallel Studio and Intel® VTune™ Amplifier.


Nik Software Increases Rendering Speed of HDR by 1.3x

By optimizing its software for Advanced Vector Extensions (AVX), Nik Software used Intel® Parallel Studio XE to identify hotspots 10x faster and enabled end users to render high dynamic range (HDR) imagery 1.3x faster.


Walker Molecular Dynamics Laboratory Optimizes for Advanced HPC Computer Architectures

Intel® Software Development tools increase application performance and productivity for a San Diego-based supercomputer center.


 

New Case Study: Optimal Design with 5x Performance Boost


 

In the competitive world of CAD/CAE, high performance is everything. Optimal design of a product with dozens or even hundreds of components is what determines its quality, usability, and competitiveness―and can even help it save human lives. Getting to the optimal design quickly is another key to success.

DATADVANCE  is a company that provides software products for high-end CAD/CAE model optimization―with a wide range of intellectual data analysis, predictive modeling, and design optimization services for customers in industries like aerospace, automotive, biomedical, electronics, and others.

A key contributor to the flexibility of the company's flagship pSeven* platform is its full scriptability with Python* via pSeven Core*. And when DATADVANCE tested Intel® Distribution for Python*, it found a new way for its customers to find the speed they need―boosting performance up to 5x over the standard Python distribution.

Get the full story in our new case study.

Read it >

Collapsing/Hiding Content on your Pages


Before I go into how to create collapsible content on IDZ, I want to give an overview of why you should avoid hiding content as much as possible in your documentation.

  • Forcing people to click on headings one at a time to display full content can be cumbersome, especially if there are many topics on the list that individuals care about. If people need to open the majority of subtopics to have their questions answered or to get the full story then an accordion is not the way to go. In this situation, it’s better to expose all the content at once. It is easier to scroll down the page than to decide which heading to click on. (Every single decision, no matter how minor or how easy, adds cognitive load.) The experience feels less fragmented with fewer attention switches.
  • Accordions increase interaction cost. Readers treat clicks like currency: they don’t mind spending it if the click is worthwhile and has value. However, resentment ensues when a click is considered a wasted effort; it doesn’t take many wasted clicks to escalate people’s reaction to full-blown defiance. Acquiring click targets, such as links and buttons, and waiting for content to appear requires work and wastes precious time that users don’t want to give.
  • Hiding important content behind triggers diminishes people’s awareness of it. An extra step is required to see the information. Headings and titles must be descriptive and enticing enough to motivate people to “spend” clicks on them. When content is hidden, people might ignore information.
  • Printing is another consideration. Accordions are not well suited for printing documents and require people to print snippets of content at a time.

If you're worried about your page getting too long, you may want to familiarize yourself with some of the myths of scrolling long pieces of content.

 

BKMs

  • It's ok to hide content when...
    • developers need only a few key pieces of content to be successful and the rest just provides context and may get confusing or in the way.
    • you have supplemental information that can be skipped 
    • a small portion of your audience will find that content relevant
  • Don't put any legal content in a hidden element (it has to be visible to the user without effort)
  • Don't put steps of a tutorial or code sample in a hidden element. Developers prefer to scroll through procedural tasks.
  • Make sure your visible copy is very descriptive of what the developer will see if they click. Too vague and they won't ever interact.
  • Don't utilize these features unless you have a very good grasp of HTML, as you will have to edit the content in HTML. The WYSIWYG is not stable enough for editing this content in "preview" mode.
  • Creating entire pages with imagery and tables within these hidden areas will be flagged as feature abuse. Keep to a couple paragraphs, or bullets, or a simple single image with text.
  • Don't get fancy with layouts (trying to mimic 2 or 3 column content is a no-no)

 

Examples of Collapsible Content

Standard FAQ

Is this a good question to ask?
This is your answer area. Maecenas faucibus mollis interdum. Cum sociis natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Vestibulum id ligula porta felis euismod semper. Vestibulum id ligula porta felis euismod semper. Cras mattis consectetur purus sit amet fermentum. Aenean eu leo quam. Pellentesque ornare sem lacinia quam venenatis vestibulum. Nullam id dolor id nibh ultricies vehicula ut id elit.
Is this question number two?
Content can be styled. Avoid using imagery and basically creating a full page of hidden content.
  • Demonstrate Code Branch coverage over reference decoder
  • Determine Syntax coverage which reports all value from the range of syntax element.
  • Access a combined view of Syntax and Code Branch reports

You will have to disable rich-text (link below the WYSIWYG) in order to enter this code. You can put multiple paragraphs, bullets, and such in the answer area.

Note: this style doesn't align with content. We are tracking down the bug (low priority).

<dl class="faq"><dt>Question (visible)</dt><dd>Answer area (hidden)</dd><dt>Question 2</dt><dd>Answer 2 (hidden)</dd></dl>

 

Standard FAQ with Persistent Link Functionality

Visible question one?
Hidden answer area.
Another visible question?

Nullam quis risus eget urna mollis ornare vel eu leo. Fusce dapibus, tellus ac cursus commodo, tortor mauris condimentum nibh, ut fermentum massa justo sit amet risus. Cras mattis consectetur purus sit amet fermentum. Curabitur blandit tempus porttitor. Integer posuere erat a ante venenatis dapibus posuere velit aliquet.

Run the installation wizard, and then launch the application from the Windows* Start Menu. Start by creating a blinking LED light using the tutorial on the welcome page. Try other sample applications and create your own programs.

You'll notice the link icon at the end of the FAQ. Clicking on it will trigger a copy function (give it a try above). When you provide developers with that link, it will take them to that element with the item opened. The difference between the standard FAQ and one with persistent links is the data-target attribute in the code. Ensure you use dashes and have a unique name for each target.

Note: this style doesn't align with content. We are tracking down the bug (low priority).

<dl class="faq"><dt data-target="visible-question-one">Visible question one?</dt><dd>Hidden answer area.</dd></dl>

 

Checklist

A random item for your checklist
Here is the answer to your item nullam quis risus eget urna mollis ornare vel eu leo. Praesent commodo cursus magna, vel scelerisque nisl consectetur et. Nulla vitae elit libero, a pharetra augue. Aenean lacinia bibendum nulla sed consectetur.
Your checklist is getting bigger now
Here is the answer to your item nullam quis risus eget urna mollis ornare vel eu leo. Praesent commodo cursus magna, vel scelerisque nisl consectetur et. Nulla vitae elit libero, a pharetra augue. Aenean lacinia bibendum nulla sed consectetur.

The only thing that changes between a standard FAQ and the Checklist is the class applied to the <dl>. Note, it is better to edit this with rich-text disabled to ensure you don't break the checklist styling.

<dl class="checklist"><dt data-target="random-item-one">A random item for your checklist</dt><dd>Here is the answer to your item nullam quis risus eget urna mollis ornare vel eu leo. Praesent commodo cursus magna, vel scelerisque nisl consectetur et. Nulla vitae elit libero, a pharetra augue. Aenean lacinia bibendum nulla sed consectetur.</dd></dl>

 

Expandable List

My amazing list of items is incredibly useful
Answer to benefits donec id elit non mi porta gravida at eget metus. Nullam id dolor id nibh ultricies vehicula ut id elit. Cum sociis natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Maecenas sed diam eget risus varius blandit sit amet non magna. Praesent commodo cursus magna, vel scelerisque nisl consectetur et. Donec sed odio dui. Cras mattis consectetur purus sit amet fermentum.
Need at least 2 to make a list
Answer to benefits donec id elit non mi porta gravida at eget metus. Nullam id dolor id nibh ultricies vehicula ut id elit. Cum sociis natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Maecenas sed diam eget risus varius blandit sit amet non magna. Praesent commodo cursus magna, vel scelerisque nisl consectetur et. Donec sed odio dui. Cras mattis consectetur purus sit amet fermentum.

Very similar to the checklist, but without the check. The class will change to "expandablelist" to create this look.

<dl class="expandablelist"><dt>My amazing list of items is incredibly useful</dt><dd>Answer to benefits area.</dd></dl>

 

Traditional Show/Hide Toggle

Do you work for Intel? Get your free iPhone X* ›

I was just kidding. No really, I don't have a free iPhone to give to my fellow Intel colleagues. I'm sorry for getting your hopes up. You can put any standard WYSIWYG html formatting in the "more content" div area. 

Click on the free iPhone offer to see the toggle in action. Our toggle system works on the traditional div visible/hidden system. Remember that the visible copy is going to need to be very clear and enticing to inspire a "click". There are also limitations to the toggle. While the code does allow you to change the text when opened (via the data-toggled-text field), this only works for some browsers. Don't rely on it for everyone.

<p><a class="more-toggle" data-toggled-text="Hide me!" href="#">Here is your CTA ›</a></p><div class="more-content"><p>Your more content area.</p></div>

 

Myths of the Long Scrolling Page


Many teams implement techniques to limit scrolling of long pages for the wrong reasons. "Above the fold" was a common term used at the start of the web's evolution (over 20 years ago). Most of these myths were debunked almost a decade ago, but they still live on.

 

Myth #1: Users don’t scroll long pages

Users do scroll when the content is relevant, organized properly, and formatted for ease of scanning. In fact, people prefer scrolling the page for content over pagination when the topics within that page answer the right questions. The standard scroll wheel on a mouse, arrow keys, and track pads have made scrolling much easier than acquiring click targets.

 

Myth #2: Customers don’t read information at the bottom of the page

Reluctance to scroll is a behavior of the past. While you should still be mindful of people’s limited attention span on websites and prioritize content wisely, you shouldn’t fear long formats. People will see the bottom if you give them good reason to go there.

 

Myth #3: People avoid pages with a lot of content

People have the ability to handle vast amounts of information, when presented properly. In our upcoming Writing for Developers course, we emphasize the requirement for writing well, and more importantly, writing for web-based reading. Reading and scanning patterns are different between web-based and print-based content. While online users typically scan for information, it does not mean they want less information. Websites should not be information light. The same information needs to be written, structured, and presented differently.

 

The "fold" and what you should care about

Now that I have hopefully put to rest some of your fears about the length of your content, let's go over the reasons you should pay attention to the fold.

  • This is the area you make your first impression. Developers will look very far down a page if... 
    • the layout encourages scanning, and
    • the initially viewable information makes them believe that it will be worth their time to scroll.
  • Google's new search AI looks above the fold to see what the page is focusing on. If your main juicy piece of content is below the fold, it assumes it isn't as important as what you did put there. In short, it's acting similarly to a developer.

Finally, while placing the most important stuff on top, don't forget to put a nice morsel at the very bottom.

Intel® Software License Manager User's Guide


1. About this Guide

This guide helps you get started using the Intel® Software License Manager with your Intel® Software Development Product. 
Related Publications

2. About the Intel® Software License Manager

The Intel® Software License Manager is a collection of software components that helps you manage your license file(s) in a multiple-user environment. Before you can use Intel® Software Development Products, you must have the correct license installed. The Intel® Software License Manager can be downloaded separately from your product if you purchased a floating license. 
This document describes the installation and use of the Intel® Software License Manager for supported platforms.
The Intel® Software License Manager’s primary function is to serve a finite number of license seats concurrently to a larger number of users of a software product. You only need the Intel® Software License Manager when you have a floating license (see License Types).

2.1 Current Version

The current version of Intel Software License Manager available for download is 2.6.0.003. This uses Flexera Flexlm version 11.14.1.1.

NOTE: The 2018 release of Intel Parallel Studio and its components require the license server manager to be at Flexlm version 11.14.1 or higher.  

2.2 Supported Platforms

The Intel® Software License Manager is supported on the following platforms:
  • Microsoft Windows* for IA-32 and Intel® 64 architectures
  • Linux* IA-32 and Intel® 64 LSB (Linux Standards Based) compliant architectures
  • OS X* IA-32 and Intel® 64 architectures
More information about downloading the Intel® Software License Manager package that best matches your license host server OS can be found in the Intel® Software License Manager downloads section of this guide.
NOTE: The minimum LSB 3 requirement for installing and using the Intel® Software License Manager is that the shared object ld-lsb.so.3 (32-bit systems) or ld-lsb-x86-64.so.3 (64-bit systems) is located in /lib (32-bit) or /lib64 (64-bit). 
Please refer to the installation section to setup the shared library if it is not in place on your system.
You can run the Intel® Software License Manager on any supported platform, with Windows*, Linux*, or OS X* applications running on separate network nodes. For example, you can install the Intel® Software License Manager and license file on a Linux* operating system to manage floating licenses for Windows*, Linux*, or OS X* applications.

2.3 Floating License

With a floating license, users run the Intel® Software Development Product on their local systems, while the Intel® Software License Manager, which serves the license, runs on a central server. The floating license is used in multiple-user environments, and the Intel® Software License Manager monitors the number of concurrent users (counted seats) permitted in the license file.
The sample counted floating license file below is for Intel Parallel Studio XE Professional Edition for Fortran and C++ Linux*. 
Floating License Sample:
 
SERVER licserver 00270e00ffff 27009
VENDOR INTEL
PACKAGE IF83F2F10 INTEL 2018.0515 E4AB5E0CA283 COMPONENTS="AdvXEl \
ArBBL CCompL Comp-CL Comp-FL Comp-OpenMP Comp-PointerChecker \
DAAL-L DbgL FCompL MKernL PerfAnl PerfPrimL StaticAnlL \
ThreadAnlGui ThreadBB" OPTIONS=SUITE ck=147 SIGN=BC6DBB4847FA
INCREMENT IF83F2F10 INTEL 2018.0515 permanent 2 73866154322D \
VENDOR_STRING="SUPPORT=COM \
https://registrationcenter.intel.com" HOSTID=ANY \
PLATFORMS="i86_r i86_re it64_lr it64_re amd64_re i86_mac \
x64_mac" BORROW=169 DUP_GROUP=UH ck=108 SN=SMSA6W6KC6M5 \
SIGN=1E20119EACBA
The essential components of the sample license file are listed below along with their corresponding values:
  • Host name: licserver. It must contain the host name returned by the OS.
  • Host id (lmhostid): 00270e00ffff
  • Port Number: 27009. This is the port used by lmgrd.
  • Vendor: INTEL.  This is the name of the vendor daemon to use.  To specify the port used by the INTEL daemon, specify it on this line with: port=28519.  Otherwise it will use a port assigned by the OS.
  • Supported Software Products: AdvXEl (Intel® Advisor for Linux*), Comp-CL (Intel® C++ Compiler for Linux*), Comp-FL (Intel® Fortran Compiler for Linux*), etc.
  • Older license files may not have feature codes used by newer product versions.
  • Supported Product Platforms: i86_r, i86_re (Linux* on IA-32 architecture), it64_lr, it64_re (Linux* on IA-64 architecture), i86_mac (Intel®-based systems running OS X*)
    • This is the platform running the product, and does not restrict the platform used for the license manager.
  • Intel Support Expiration Date:  2018.0515 (May 15, 2018)
    • The support expiration applies to access to product updates and support.  Products built after the support expiration date are not supported by the license.
  • Product Expiration Date: permanent (Never expires)
    • The license will always support products built before the support expiration date, even if the support expiration date has passed.
  • License Count: 2.
    • Two licenses can be checked out simultaneously.
Note: While host name and port numbers can be changed, editing many parts of the license file renders the entire license file invalid.  References to IA-64 architecture platforms are for legacy purposes only.  

2.4 Technical Support

Every new product purchase or renewal of an Intel® Software Development Product includes one year free product updates and priority support in the Online Service Center. 

2.5 Client/Server Backward Compatibility

The Intel® Software License Manager must be the latest version to support new Intel product versions. Make sure to download the latest license manager when you upgrade to a newer product version; an older license manager will not serve licenses for newer product versions.

3. Registration and Downloads

Before installing the Intel® Software License Manager, you must have a registered license.  Registering the license links it to you, but does not associate it with a particular server until you activate it.  

3.1 License Registration

If you have not registered your license, you may do so in the Intel Registration Center by following the steps in this guide: Steps to register a floating license
You may also activate your license at this time if your license server is not connected to the internet for remote activation.

3.2 Intel® Software License Manager downloads

The Intel® Software License Manager can be installed on Windows*, Linux*, or OS X* machines regardless of the OS indicated on your floating Intel® Software Development Product licenses.  It is important that you have the latest version.
Please follow the instructions here - Where can I download Intel® Software License Manager servers? 
The most recent version of the User Guide for the Intel® Software License Manager is also available from the web site.

4. License Activation and Configuration

License activation is the process of assigning a server or set of three servers to a license to be served by the Intel® Software License Manager.  The license can be configured to use different ports and change other settings.

4.1 Activation

The Intel Software License Manager installer will automatically activate a license with default settings if it has internet connectivity.  If your license server does not have connectivity, you can manually activate the license in the Intel Registration Center.  Activation requires the host name and host ID of the server(s) to be serving the license.

4.1.1 Host Name and Host ID

The host name and host ID are system-level identifiers used to identify the system on which you install the Intel® Software License Manager and license file.   
It is strongly recommended that you run the lmhostid utility to obtain the hostid value it will use for license validation. The lmhostid utility is installed in the same location as the Intel® Software License Manager.
If you have not installed the license manager, use the instructions below to find the host name and host ID of the system.  If multiple IDs are displayed, choose the one associated with eth0.  If the host name command returns a fully qualified domain name, you must use it to configure the license.
4.1.1.1 Microsoft Windows*
  1. From the Start menu, click Run...
  2. Type cmd in the Open: field, then click OK.
  3. Type ipconfig /all at the command prompt, and press Enter.
In the resulting output, host name is the value that corresponds to Host Name, and host ID is the value that corresponds to Physical Address.
For example, if the output of ipconfig /all included the following:
Host Name . . .  . . . . : mycomputer
. . .
Physical Address . . . . : 00-06-29-CF-74-AA
The host name is mycomputer and the host ID is 00-06-29-CF-74-AA.  Note, the host ID will be entered without the dash character.
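For illustration only (using the example values above, not a real license), the host name and host ID would then appear in the first line of the server license file like this:

SERVER mycomputer 000629CF74AA 27009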
4.1.1.2 Linux*
  1. Run the hostname command to display the host name.  If the fully qualified domain name is shown, then the license file must use it.
  2. Run the command /sbin/ifconfig eth0 to display the hardware address. 

For example, if the /sbin/ifconfig eth0 command returns

HWaddr 00:D0:B7:A8:80:AA

then the host ID is 00D0B7A880AA.

4.1.1.3 OS X* 
  1. Run the hostname command to display the host name.  If the fully qualified domain name is shown, then the license file must use it.
  2. Run the command /sbin/ifconfig en0 ether to display the hardware address. 
The following is an example of an address that could be returned by this command:
 
en0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500 ether 00:13:20:60:23:4f 

4.2 Configuration

There are several settings you can change in the Intel Registration Center to configure your license file.

4.2.1 Ports

The license manager uses two ports – one for each daemon it runs.  Both of these ports MUST be open and unblocked. 
When changing either port, you must ensure:
  • The port numbers in all license files match on both the license host server and client systems.
  • The license server is restarted to read the updates.
  • The INTEL_LICENSE_FILE environment variable on the client systems is pointing to the correctly updated license file or port@host.
You may need to add a port exception to allow the FlexNet Publisher* license server daemon, the Intel® Software License Manager vendor daemon, and the application using these daemons to communicate. See the OS vendor documentation for more information on adding port exception to the system firewall.
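For example, a client could point at the server from the sample license earlier in this guide (host licserver, lmgrd port 27009) by setting the environment variable to port@host; substitute your own host name and port:

On Linux* or OS X*: export INTEL_LICENSE_FILE=27009@licserver
On Windows*: set INTEL_LICENSE_FILE=27009@licserver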
4.2.1.1 lmgrd
This is the primary daemon for the license manager.  It listens for license checkout requests and routes them to the correct vendor daemon.  The default port used by the Intel® Software License Manager is 27009.  This can be changed in the Intel Registration Center.
It is specified in the first line in the license file:
SERVER licserver 00270e00ffff 27009
4.2.1.2 INTEL daemon
This is the daemon used to serve licenses for Intel developer products.  Its port can be specified in the license file, or assigned by the OS each time lmgrd is started.  To prevent the INTEL port from changing at lmgrd startup and potentially being blocked by a firewall, it is recommended to specify this port in the license file.  You can do this by opening the license file in a text editor and adding the port setting to the VENDOR INTEL line:
VENDOR INTEL port=<port>
To verify which port is used by the license manager, look in the license manager log file.  
  • In Windows*, this is typically located at C:\Program Files\Intel\licenseserver\Iflexlm.log
  • In Linux* and Mac OS*, this is typically located at /opt/intel/licenseserver/lmgrd.log
The log file will show the port number as follows:
14:16:16 (INTEL) (@INTEL-SLOG@) ===============================================
14:16:16 (INTEL) (@INTEL-SLOG@) === Vendor Daemon ===
14:16:16 (INTEL) (@INTEL-SLOG@) Vendor daemon: INTEL
14:16:16 (INTEL) (@INTEL-SLOG@) Start-Date: Tue Oct 03 2017 14:16:16 Pacific Daylight Time
14:16:16 (INTEL) (@INTEL-SLOG@) PID: 23500
14:16:16 (INTEL) (@INTEL-SLOG@) VD Version: v11.14.1.1 build 201886 x64_n6 ( build 201886 (ipv6))
14:16:16 (INTEL) (@INTEL-SLOG@)
14:16:16 (INTEL) (@INTEL-SLOG@) === Startup/Restart Info ===
14:16:16 (INTEL) (@INTEL-SLOG@) Options file used: None
14:16:16 (INTEL) (@INTEL-SLOG@) Is vendor daemon a CVD: No
14:16:16 (INTEL) (@INTEL-SLOG@) Is TS accessed: No
14:16:16 (INTEL) (@INTEL-SLOG@) TS accessed for feature load: -NA-
14:16:16 (INTEL) (@INTEL-SLOG@) Number of VD restarts since LS startup: 0
14:16:16 (INTEL) (@INTEL-SLOG@)
14:16:16 (INTEL) (@INTEL-SLOG@) === Network Info ===
14:16:16 (INTEL) (@INTEL-SLOG@) Listening port: 55294
14:16:16 (INTEL) (@INTEL-SLOG@) Daemon select timeout (in seconds): 1

4.2.2 Redundant Server Setup

The Intel® Software License Manager supports a triad configuration for back-up and redundancy.  This requires the use of three servers defined in the license file, two of which must be active to serve licenses.  
To set up a redundant server configuration:
  1. Identify the three servers you will use to run the license manager.  
  2. Go to the Intel Registration Center and open the license file for modification.
  3. Check the box to enable the three server configuration.
  4. Enter the host ID and host name for each server.  These IDs and names must be unique.
  5. Submit the changes and download the updated server license file.
  6. Download and install the latest Intel® Software License Manager package on each server named in the license file.  
  7. Ensure that all three servers are running the license manager to establish a quorum.
  8. Install or run the licensed product on a client machine as you would for a single server configuration.  The license file used by the client must contain all three server names above the VENDOR line.
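For reference, the top of a redundant-server license file contains three SERVER lines above the VENDOR line; schematically (the host names and IDs below are placeholders, not valid license data):

SERVER licserver1 0019d1e60001 27009
SERVER licserver2 0019d1e60002 27009
SERVER licserver3 0019d1e60003 27009
VENDOR INTEL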

4.2.3 License Borrowing

License borrowing allows users to hold a license seat for a limited amount of time, such as when working offline.  This will completely consume a license seat until it is returned to the server.
Borrowing is enabled when the license contains the keyword BORROW.  It will also include the maximum borrow time, which is defaulted to 169 hours.  To change the maximum borrow time, or disable licensing borrowing, you must contact support to get a new license file generated.

5. Installing the Intel® Software License Manager

5.1 Windows* Installation

Follow the steps below to install the software for the Intel® Software License Manager on Windows* systems:
  1. Download the latest Intel® Software License Manager for Windows* that matches your environment (w_isl_server_<ver>.exe).
  2. Run the self-extracting executable downloaded in the previous step.
  3. Accept the license agreement to continue with the installation.
  4. To configure the license manager, you must provide a serial number or license file:
    1. To use a serial number, you must be connected to the internet.  
      1. If you have already activated your license, select the same host ID you provided to the registration center.  This will generate the license file in a default folder used by the license manager.
      2. If you have not activated your license, select either host ID.  This will automatically activate your license with the server’s host name and host ID you selected, along with default port values for the license manager daemons.
    2. To use a license file, browse to the location of your activated license file.
    3. The license manager installer uses C:\Program Files (x86)\Common Files\Intel\ServerLicenses as the default license folder.  It will automatically detect licenses in that location.  You can also set environment variable LM_LICENSE_FILE to point to a folder or license file.
  5. If there are any invalid license files in the default license folder or LM_LICENSE_FILE, be sure to remove them.
  6. Finish the installation process. The license manager is installed in C:\Program Files\Intel\licenseserver and should start automatically as a service.  It will create a log file named Iflexlm.log in the install folder.
  7. The Intel® Software License Manager service runs as the Local Service account.  This may have insufficient privileges to create the log file in step 6, resulting in the license manager being unable to start.  In this case, it is necessary to either change the Intel® Software License Manager service Log On account to Local System, or change the log file path to a folder writeable by the user account.  See this article for more details.
  8. Ensure that the ports used by the license manager services are not blocked.  There are two:
    1. Lmgrd port – this is specified in the license file and has a default value of 27009.
    2. INTEL port – this may be specified in the license file as: VENDOR INTEL port=<port>.  As of December 2017, this is defaulted to 28519 in the Intel Registration Center.  If it is not, then the OS will assign it when the license manager is started or restarted.  Check the log file for the value.

5.2 Linux* and OS X* Installation

Follow the steps below to install the software for the Intel® Software License Manager on Linux* or OS X* systems:
  1. Place the downloaded package <l|m>_isl_server_<ver>_<architecture>.tar.gz in the directory to which you wish to extract its files.  This need not be the same location in which you plan to install the Intel® Software License Manager files.
  2. Extract the files from the package with the following command:
    tar -zxvf <l|m>_isl_server_<ver>_<architecture>.tar.gz
  3. Run one of the following scripts from the directory created to begin installation:
    1. install.sh – command line
    2. install_GUI.sh – graphical interface

5.2.1 Command Line Installation

  1. Select the elevation mode for the installation.
  2. Accept the license agreement to continue with the installation.
  3. To configure the license manager, you must provide a serial number or license file:
    1. To use a serial number, you must be connected to the internet.
      1. If you have already activated your license, select the same host ID that you provided to the registration center.  This will generate the license file in a default folder used by the license manager.
      2. If you have not activated your license, select a host ID.  This will automatically activate your license with the server’s host name and host ID you selected, along with default port values for the license manager daemons.
    2. To use a license file, enter the name and path of your activated license file.
    3. The license manager uses /opt/intel/serverlicenses as the default license folder.  It will automatically detect licenses in that location.  You can also set environment variable LM_LICENSE_FILE to point to a folder or license file.
  4. If there are any invalid license files in the default license folder or LM_LICENSE_FILE, be sure to remove them.
  5. Start the installation.  The default install location is /opt/intel/licenseserver.  To change it:
    1. Select the Customize installation option:
    2. Select the Change installation directory option:
  6. Finish the installation process. The license manager should start automatically as a service.

5.2.2 Graphical Installation

  1. Select the elevation mode for the installation.
  2. Accept the license agreement to continue with the installation.
  3. To configure the license manager, you must provide a serial number or license file:
    1. To use a serial number, you must be connected to the internet.
      1. If you have already activated your license, select the same host ID that you provided to the registration center.  This will generate the license file in a default folder used by the license manager.
      2. If you have not activated your license, select a host ID.  This will automatically activate your license with the server’s host name and host ID you selected, along with default port values for the license manager daemons.
    2. To use a license file, browse to the location of your activated license file.
    3. The license manager uses /opt/intel/serverlicenses as the default license folder.  It will automatically detect licenses in that location. You can also set environment variable LM_LICENSE_FILE to point to a folder or license file.
  4. If there are any invalid license files in the default license folder or LM_LICENSE_FILE, be sure to remove them.
  5. Start the installation.  The default install location is /opt/intel/licenseserver.  To change it:
    1. Click the Customize button:
    2. Change the Destination folder:
  6. Finish the installation process. The license manager should start automatically as a service.  It will create a log file in the installation folder named lmgrd.log.
  7. Ensure that the ports used by the license manager services are not blocked.  There are two:
    1. Lmgrd port – this is specified in the license file and has a default value of 27009.
    2. INTEL port – this may be specified in the license file as: VENDOR INTEL port=<port>.  If it is not, then the OS will assign it when the license manager is started or restarted.  Check the log file for the value.

5.2.3 Starting the Intel® Software License Manager automatically on Linux* after reboot

There are two ways to set up the Intel® Software License Manager to start automatically.
5.2.3.1 Add the startup command to the system startup files
For Linux*, add the following steps to the system startup files (which may be /etc/rc.boot, /etc/rc.local, /etc/rc2.d/Sxxx, /sbin/rc2.d/Sxxxx, or the startup files in the /etc/init.d/rcX.d directories, where X is 1, 2, 3, or 5) to ensure that the Intel® Software License Manager server starts after reboot. It is important that the network has been initialized before the license server is started. Ensure that there is white space (" ") between each argument. It is not necessary for server startup to be done as root.
cd <server-directory>
`pwd`/lmgrd -c `pwd`/<licensefile> -l `pwd`/<log file>
or
$(pwd)/lmgrd -c $(pwd)/<licensefile> -l $(pwd)/<log file>  (non-csh)
Ensure that the change directory is set to the one created in Step 1 above. The -c <license file> option should point to the license file copied to the server directory from the registration e-mail. Use the full path to the license file, including the full license file name.
The -l <log file> will capture information that will be useful for debugging unanticipated server or license check-out problems. Use the full path where the log file should be created, including the log file filename.
5.2.3.2 Use systemd
To use systemd:
1. Create a file in /etc/systemd/system called flexlm.service which contains the following:
[Unit]
Description=Intel Licensing Manager
After=network.target network.service
[Service]
Environment=LICENSE_FILE=<licensefile>
Environment=LOG_FILE=<log file>
WorkingDirectory=/opt/intel/licenseserver/
ExecStart=/opt/intel/licenseserver/lmgrd -c $LICENSE_FILE -l $LOG_FILE
Restart=always
RestartSec=30
Type=forking
[Install]
WantedBy=multi-user.target
2. To test the service, run:
$ systemctl start flexlm.service
$ systemctl status flexlm.service
3. If it is working, run the following to enable the service at startup:
$ systemctl enable flexlm.service 

5.3 Using Multiple Licenses

You can serve multiple licenses with a single installation of the Intel® Software License Manager as long as the following is true:
  • Each license has the same server host ID and host name
  • Each license uses the same lmgrd and INTEL vendor daemon ports
Essentially, the first two lines (or four for redundant server set-up) of the license files must match.
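For example, two license files that can be served together might both begin with the following two lines, where the host name, host ID, and port shown are placeholders rather than values from an actual license:
SERVER mylicserver 001122334455 27009
VENDOR INTEL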

5.3.1 Separate License Files

The preferred method of serving multiple licenses is to use separate files.  Although the license manager installer only accepts a single serial number or license file, you can restart the license manager with the path to a folder containing multiple license files to serve them all.  All licenses should use the file extension .lic.  To start the license manager with a folder path (see the example after this list), either:
  • Run the command lmgrd -c <path to license folder> -l <path to log file>
Or
  • Set environment variable LM_LICENSE_FILE to the license folder path, such as /opt/intel/serverlicenses
  • Run lmgrd -l <path to log file>
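For example, using the default folder locations mentioned in this guide (the log file name is illustrative):
lmgrd -c /opt/intel/serverlicenses -l /opt/intel/licenseserver/lmgrd.log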

5.3.2 Combining License Files

You can combine multiple licenses into a single file.  This is recommended if you want to control the order in which the licenses are used, such as serving the more restrictive licenses first.  However, take care when modifying the license file so as not to invalidate it.
To combine license files, use a text editor to open the first license file.  Then copy all lines under the VENDOR line from the second file and append them to the first.  Repeat for each license file you want to include, taking care not to include duplicate SERVER or VENDOR information.
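As a sketch, the combined file keeps a single set of SERVER and VENDOR lines followed by the feature lines from each original license (the angle-bracket entries are placeholders):
SERVER <host name> <host ID> <port>
VENDOR INTEL
<feature lines from the first license file>
<feature lines from the second license file>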

6. Managing the Intel® Software License Manager

6.1 Windows* Utilities

The Intel Software License Manager for Windows package has a few utilities for configuration and status checks.

6.1.1 Intel® Software License Manager Configuration (ilmconfig.exe) 

The ILM configuration utility is provided by Intel to allow quick starting, stopping, and license rereading of the Intel Software License Manager.  It is accessible from the Start menu from All apps -> Intel® Software Development Products -> Configure Intel® Software License Manager Utility, or by running ilmconfig.exe in the license manager install directory.
6.1.1.1 License file
This value defaults to the license entered during the license manager installation.  To change it, press the “…” button to browse to the license file you want, and then click “Apply.”
6.1.1.2 Start
If the license manager is not running, the left-most button will be labeled “Start.”  Use this button to start the license manager.  This will also write to the debug log file or create it if it doesn’t exist.  This log file must have write permission for the user starting the license manager.
6.1.1.3 Stop
If the license manager is running, the left-most button will be labeled “Stop.”  Use this button to stop the license manager.
6.1.1.4 Apply
This button is only available if the license file has changed or been reselected after using the “…” button.

6.1.2 LMTOOLS.exe

LMTOOLS is a tool provided by Flexera Software LLC for configuration and management of the license manager. From this utility you can view the system settings, start and stop the license manager, reread the license file, check the license server status, configure the paths used by the license manager, and configure the borrowing capability.
6.1.2.1 Launch and Usage
LMTOOLS can be run by browsing to the Intel Software License Manager install folder and running lmtools.exe.  It requires administrator privileges.
6.1.2.2 System Settings
This tab displays the host settings used to verify the server license.  The host ID on the license must match one of the Ethernet IDs.
6.1.2.3 Start/Stop/Reread
From this tab you can start, stop, or reread the license file.
6.1.2.4 Server Status/Server Diags
Use these tabs to view the status of the license manager and/or get troubleshooting information. 
Server Status
Click the “Perform Status Enquiry” button to view the status of the license server using the license file named at the bottom of the dialog box.  It will display the number of total license seats for each component and the number currently checked out.
Server Diags
Click the “Perform Diagnostics” button to get a list of available licenses on the system, including non-floating licenses.  Enter a code in the Feature Name field, such as Comp-CW, to see which licenses grant that feature.
6.1.2.5 Config Services
This tab allows you to change the names and locations of the files used by the license manager, and enable/disable the automatic startup of the license manager at system power up.  After making any changes to this tab, click the “Save Service” button to save them.
Path to lmgrd.exe file
This is the location of the primary license manager process.  The default location is C:\Program Files\Intel\LicenseServer.
Path to the license file
This is the path to the license file read by the license manager.  It is important to make sure this path only contains valid server license files.  It should not contain non-floating, disabled, duplicate, or client license files (floating licenses containing USE_SERVER).  The default is C:\Program Files (x86)\Common Files\Intel\ServerLicenses.
Path to the debug log file
The license manager writes all output to this debug log file, including start-up, status checks, and license checkouts.  It is created automatically when lmgrd is started.  The user starting the license manager must have write privileges for this file – lmgrd will not use administrator privileges even if the user has them.  If the user launching lmgrd does not have write access to this file, they may get an error stating that the license is invalid.
Start Server at Power Up
Check this box to automatically start the license manager service at system start-up.  It is only enabled if the Use Services box is also checked.
Use Services
Check this box to run the license manager as a service.
FlexNet Licensing Service Required
Check this box to enable the FlexNet licensing service.
6.1.2.6 Borrowing
Use this tab to manage license borrowing.  The license file must contain the BORROW keyword, as configured in the Intel Registration Center.

6.1.3 lmgrd.exe

Lmgrd is the primary process for the Intel Software License Manager.  To run it manually through the command-line:
  1. Open a Command Prompt as administrator
  2. Move to the license manager install directory
  3. Run lmgrd with options -c <license file> and -l <log file>
cd <install-directory>
lmgrd -c <path to license file> -l <path to log file>
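For example, assuming the default install location and a server license file named server.lic in the default license folder (both names are illustrative and may differ on your system):
cd "C:\Program Files\Intel\LicenseServer"
lmgrd -c "C:\Program Files (x86)\Common Files\Intel\ServerLicenses\server.lic" -l "C:\Program Files\Intel\LicenseServer\lmgrd.log"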

6.1.4 Lmutil.exe

Lmutil is used by LMTOOLS to run license manager commands related to status checks and borrowing.  It can be run manually as an administrator in the command-line.

6.2 Linux* and OS X* Utilities 

Unlike the Intel Software License Manager for Windows*, the Linux and OS X versions do not contain graphical tools for monitoring and configuring the license manager.  These actions must be run in the command-line using two utilities.

6.2.1 Start the license manager (lmgrd)

The lmgrd process launches the license manager for a given license file or folder.  This is the daemon provided by FlexNet to serve licenses for multiple vendors.  The settings used by lmgrd are determined by the license file(s):
  • The port used by lmgrd is defined in the top line, defaulted to 27009.
  • The daemon to serve licenses for a particular vendor (INTEL) is defined in the second line.  The port for the INTEL daemon may also be defined in this line, otherwise it will be assigned by the OS when lmgrd starts.
  • Both ports used by lmgrd and the INTEL daemon must be open.
The license file or folder can be specified in environment variable LM_LICENSE_FILE or directly in the lmgrd command.  If using a folder, the license files must contain extension .lic.
To run lmgrd, execute the command:
 lmgrd -c <license file> -l <log file>
Both the -c and -l arguments are optional.  If -c is not specified, it will use the value in LM_LICENSE_FILE.  If the log file is not specified, then all output will be written to stdout.

6.2.2 Check the license manager status (lmstat)

Lmstat will provide status information for a running license manager, including:
  • lmgrd version, port number, and license file used
  • INTEL daemon version
  • Total number of license seats available for individual components listed in the license(s)
  • Number of license seats currently in use for each component
To run lmstat, execute the command: 
 lmstat -a

6.2.3 Check the host ID used by the license manager (lmhostid)

Lmhostid provides the host ID(s) used to validate the license file.  One (and only one) of the host IDs listed should be provided during activation of the license, and listed in the license file next to the host name.
To run lmhostid, execute the command:
 lmhostid

6.2.4 Reread the license file (lmreread)

Lmreread rereads the license file.  Use this when you’ve renewed or otherwise made changes to your license file.
To run lmreread, execute the command: 
 lmreread -c <license file>

6.2.5 Release a license seat (lmremove)

Lmremove releases a license seat from a particular user.  Use this command if a user is unable to return the seat to the license pool, such as due to a system crash.  First, run lmstat -a to determine the feature name, user name, and host name, such as:
Users of PerfAnl:  (Total of 7 licenses issued;  Total of 1 license in use)
  "PerfAnl" v2018.0914, vendor: INTEL, expiry: 1-jan-0
  platforms: i86_n  i86_r  i86_re  ia64_n  amd64_re  i86_mac  x64_n  x64_mac  , vendor_string: SUPPORT=COM https://registrationcenter.intel.com
  nodelocked license locked to NOTHING (hostid=ANY)
    user1 hostname1 /dev/pts/0 (v2007.0901) (serverhost/27009 203), start Mon 11/13 12:26
Values for the above example:
  • Feature = PerfAnl
  • User name = user1
  • Host = hostname1
To run lmremove, execute the command: 
 lmremove -c <license file> <feature> <user name> <host>
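For example, using the values from the sample output above, and assuming a license file in the default server license folder (the file name is illustrative):
 lmremove -c /opt/intel/serverlicenses/server.lic PerfAnl user1 hostname1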

6.2.6 Shut down the license manager (lmdown) 

Lmdown shuts down the license manager (lmgrd process) for a given license.
To run lmdown, execute the command:
 lmdown -c <license file>

6.2.7 Remove the license manager (uninstall.sh)

To remove the license manager:
  1. Run the uninstall.sh script to remove the installed files from the system.
  2. To permanently remove the Intel® Software License Manager, delete the lines that were added to the system startup files as described in section 5.2.3 of this guide.

7. Using the Client Application for the First Time

You must complete the following steps to use an Intel client application with the Intel® Software License Manager for the first time.

7.1 Installing the Client Application

The Intel® Software Development Product installers allow you to provide information to connect to the license manager in the following ways:

7.1.1 Use the client license file

This file only contains the information necessary to connect to the license server and uses the USE_SERVER directive.  This file will be generated if the user enters the serial number during installation, or it can be downloaded from the Intel® Registration Center. The license file would be in the following format:
SERVER <server name> <hostid> <port>
USE_SERVER
where <server name>, <hostid>, and <port> all come from the SERVER line in the license file which was used to install the license server. 
If using a redundant server setup, all three servers should be listed.
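For example, a client license file for a redundant setup might look like the following, where the host names, host IDs, and port are placeholders:
SERVER host1 <hostid1> 27009
SERVER host2 <hostid2> 27009
SERVER host3 <hostid3> 27009
USE_SERVER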

7.1.2 Use the INTEL_LICENSE_FILE environment variable

Set the INTEL_LICENSE_FILE environment variable on the client system to port@host. This can be done through the product installer or manually.
If using a redundant server setup, all three servers should be listed, for example:
27009@host1:27009@host2:27009@host3
Use colons to separate multiple values in Linux*, and semicolons in Windows*.
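For example, to point a client at a single license server named host1 on the default port (the host name is illustrative):
Linux*
export INTEL_LICENSE_FILE=27009@host1
Windows* (current Command Prompt session only)
set INTEL_LICENSE_FILE=27009@host1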

7.1.3 Use the server license file

Using the full server license file is not recommended.  It must match the one used by the license server exactly.

7.2 License Check-out 

7.2.1 License Seat Allocation Overview

For an Intel® compiler product, the license seat is checked out (allocated) as soon as the application is started, and returned when the application’s work is “done”.
Note: For performance libraries (Intel® Integrated Performance Primitives, Intel® Threading Building Blocks, and Intel® Math Kernel Library), and some other Intel® Software Development Products, license check-out is only done during product installation.
All floating license seats are available on a first-come first-served basis for check-out by any number of products installed on client systems that have been configured to check-out floating license seats from the license host server system.
However, if all license seats are allocated for the requested product, additional users must wait for one of the client systems to release a floating license seat. This waiting is handled automatically by our products, which means some users may experience delays if all license seats are simultaneously allocated. There is no timeout: the request waits until a license seat becomes available, and this behavior is not configurable by end users.
When all license seats are allocated, a new license check-out request retries every 30 seconds until a license seat becomes available. While waiting, the product may appear to be hung or to have suddenly degraded in performance.

7.2.2 Check License Allocation Status

To determine how many floating license seats are checked-out, use the command lmstat:
On the license host server system, execute one of the following commands:
Linux*
lmstat -a -c <license file>
Windows*
lmutil lmstat -a -c <license file>
<license file> is the full path including filename to the floating license file.
 
On the client system, execute one of the following commands:
Linux*
lmstat -a
Windows*
lmutil lmstat -a

7.2.3 License Borrowing

License borrowing allows users to hold a license seat for a limited amount of time, such as when working offline.  This will completely consume a license seat until it is returned to the server.
To enable borrowing, the license file must contain the keyword “BORROW.”  This will set the maximum borrow time to a default of 169 hours.  Borrowing is only supported in single server configurations.
7.2.3.1 Borrow
To borrow a license from a client machine, follow these steps:
  1. Copy the lmutil component to the client machine.  
  2. Run lmutil lmborrow INTEL dd-mmm-yyyy [hh:mm]
    1. INTEL is the vendor daemon to borrow from
    2. The date is when the license will be returned, and must be less than the maximum borrow time specified in the license file.
  3. Run the application to check out and borrow the license.
  4. Verify that the license seat has been borrowed by running lmutil lmborrow -status
  5. Disconnect from the network to use the license without connecting to the license server.
7.2.3.2 Return
To return a license early from the client machine, run lmutil lmborrow -return <feature>.  Otherwise the license will be returned on the date specified in the first borrow command.
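As an illustration, a full borrow-and-return cycle for the Comp-CW feature might look like the following; the date and time are placeholders and must fall within the maximum borrow period:
lmutil lmborrow INTEL 20-dec-2018 17:00
<run the application to check out and borrow the license>
lmutil lmborrow -status
lmutil lmborrow -return Comp-CW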

8. Troubleshooting

This chapter explains how to generate debug logs, lists the information you should provide when opening a support request, and provides solutions for some common problems.

8.1 Getting Online Support

You may visit the floating license FAQ first to search for an answer to your license manager problem.  If you don’t find a solution, continue on with the next section.

8.2 Creating Debug Logs for License Checkout Issues

If your licensing does not work properly, review the previous sections to verify the installation. If the problem persists after you verify correct installation, you should open a support case with Online Service Center.  There are a few logs and output data that are helpful in diagnosing licensing issues.

8.2.1 License Server Log

This is the log specified when running lmgrd, and will help to identify issues on the server side.
  • For Windows* this is typically located at C:\Program Files\Intel\licenseserver\Iflexlm.log
  • For Linux* and Mac OS*, this is typically located at /opt/intel/licenseserver/lmgrd.log

8.2.2 Client Log

To enable logging of licensing issues on the client side, set environment variable INTEL_LMD_DEBUG=<path to log file>.  This will generate detailed information on the license checkout. 
Follow these steps to create a log file to send to support: 
  1. Set INTEL_LMD_DEBUG.  If it is already set, please clear the log file it points to.
  2. Run the application with the license failure.
  3. Send the log file to support.
After you are finished gathering debug information, be sure to unset this environment variable.  Otherwise it will continue to write to the log file for each checkout, which may cause slow performance and result in a very large log file.
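For example, on Linux* or OS X* (the log path matches the one suggested in the support section of this guide):
export INTEL_LMD_DEBUG=/tmp/licensecheckout.log
<run the application that fails to check out a license>
unset INTEL_LMD_DEBUG
On Windows*, run set INTEL_LMD_DEBUG=C:\temp\licensecheckout.log in the same Command Prompt used to launch the application, and clear it afterwards with set INTEL_LMD_DEBUG=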

8.2.3 lmstat Output

The output of lmstat -a will provide the daemon versions and help determine whether there is a problem with the server or the client.

8.3 Server Issues

Server issues include problems running the license manager or serving licenses to remote clients.

8.3.1 License manager is not running

If the license manager will not install or start, the problem is typically caused by an invalid license.  You can find more information in the license server log.  If the log does not exist, verify that the license file is valid.

8.3.2 License manager is not serving licenses

If the license manager is running but not serving or under-serving licenses, then it is either a problem with blocked ports or the license file.
8.3.2.1 Port Issues
If the two ports used by the license manager are blocked, then clients will be unable to connect and checkout licenses.  Determine the ports used by the lmgrd and INTEL processes.  The lmgrd port is defined in the license file and is defaulted to 27009.  The INTEL port may be defined in the license file, but if it is not then the OS will assign one when lmgrd is started.  Check the server log file to find this value.
Once you’ve identified the port numbers, make sure they are not blocked by a firewall.  You should be able to telnet to each port from a remote machine.
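For example, assuming a server named licserver that uses the default lmgrd port (substitute the INTEL daemon port found in the log file for the second check):
telnet licserver 27009
telnet licserver <INTEL port>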
8.3.2.2 License File Issues
The license manager may appear to under-serve license seats if any of the following are true:
  1. The license file is outdated.  If the license has been renewed but the license file has not been updated on the server, then it may fail to serve seats for newer product versions, while able to serve older product checkouts.
  2. The license version is outdated.  Licenses are upgraded as newer products change the feature codes they use.  Even though a license for the 2015 product version is valid and current, it may not contain the right codes for newer product checkouts.  Upgraded licenses will always be backward compatible for older products.
  3. Multiple license files are used inefficiently.  The Intel® Software License Manager can serve licenses from a variety of license files and versions.  However, it may not serve those licenses in the most efficient order.  For example, suppose you have a license for two Composer Edition 2015 seats and one for two Composer Edition 2017 seats, for a total of four.  If requests for the 2015 version consume the 2017 seats, then requests for the 2017 version will have to wait, even though only two of the four seats are actually in use.  To enforce the order in which licenses are used, combine the license files.

8.3.3 Verify Compatible Versions

In a complex installation of multiple FLEXlm* and/or FlexNet Publisher* licensed products, which include daemons from different vendors, a single lmgrd is used to manage the use of all licensed products. 
You can use any lmgrd whose product version (lmgrd -v) is greater than or equal to all of the vendor daemons’ product versions. If your lmgrd version is less than any of the vendor daemons’ versions, server startup failures may result.
Note: It is recommended that the Intel license manager be run on a separate lmgrd instance.  Specify a dedicated port in the Intel license and then run lmgrd -c <vendor-license-dir-list>.

8.4 Client Issues

Client issues generally involve slow checkouts or being unable to communicate with the license server.

8.4.1 Slow Checkouts

Check the following if the license checkout seems slow:
  • Old license server information or license files.  Check the following places for invalid licenses and delete any found:
    • INTEL_LICENSE_FILE environment variable - make sure port@host is correct and any folders specified do not contain invalid license files.
    • For Linux: /opt/intel/licenses
    • For Windows: [Program Files]\Common Files\Intel\Licenses
  • Slow DNS lookups.  Newer versions of the product always do DNS resolution before connecting to the license server, even with IP addresses.  Make sure that the fastest nameserver is called first, by checking nsswitch.conf and resolv.conf, for example.
  • INTEL_LMD_DEBUG is set.  If you are not actively debugging license checkouts, make sure this environment variable is not set.

8.4.2 No Checkouts

Problems with floating license checkouts are usually caused by issues communicating to the server or an invalid license. 
If there appears to be a problem connecting to the license server, check the server name specified in the license file or the INTEL_LICENSE_FILE environment variable.  Make sure the server name and port are correct, and if the full server license file is being used, that it matches the license file on the server exactly.
If the server information appears to be correct, try to telnet to the server name and port specified.  If this succeeds, then find the secondary INTEL daemon port and try to telnet to it.

8.5 Information Needed for Support Requests

When opening a support request, you should provide the following information to the support team:
  • Client Information
  • Intel Software License Manager Information

8.5.1 Client Information

  • Package ID of the product. This is the name of the file you downloaded and installed.
  • Name of client application with all parameters.
  • Operating system, architecture, kernel, glibc, and any service packs installed on the client system.
  • Values to which the LM_LICENSE_FILE and INTEL_LICENSE_FILE environment variables are set.
  • A copy of all license files used on the client system. It is important to copy the license files themselves rather than copy/pasting the contents as this can mask other potential problems with the files.
  • On Linux* or OS X*, set INTEL_LMD_DEBUG to /tmp/licensecheckout.log and on Windows*, set INTEL_LMD_DEBUG to C:\temp\licensecheckout.log and run the client. Once the client finishes execution, attach the licensecheckout.log to the support issue.
  • If you are opening a support request about a segmentation fault issue, attach the stack dump.

8.5.2 Intel Software License Manager Information 

  • The output of lmstat -a (Linux*) or lmutil lmstat -a (Windows*)
  • Operating system, architecture, kernel, glibc, and any service packs installed on the system on which the Intel® Software License Manager server is installed. 
  • The Intel® Software License Manager server file name that you downloaded and installed 
  • A copy of the server log file at one of the following locations, depending on your operating system: 
    • Windows*: <install drive>:\program files\common files\intel\flexlm\iflexlm.log
    • Linux * or OS X*: <install location of servers>/lmgrd.log
  • A copy of the license file you used to start the server. It is important to copy the license files themselves rather than copy/pasting the contents as this can mask other potential problems with the files.
  • Values to which the LM_LICENSE_FILE and INTEL_LICENSE_FILE environment variables are set.

Creating Engaging User Experiences in VR


This article explores the foundation of design and how it can be translated into a user experience (UX) for fully immersive involvement in virtual reality (VR), utilizing modern UX techniques to create interactions that are more intuitive, allowing for more immersive engagements. As VR gear becomes more accessible, sleek, and mainstream, it is imperative that developers continue learning new ways to approach UX. Beginning with the concept phase, into design, onward to user types and controls, followed by navigation and safety, and finally concerning implementation, there must be a considerate way to meet the user in their reality to have the most impact.

What is UX?

True engagement is hard to come by unless the quality of the experience is authentic and the interactions are easy to use and understand. UX is the relationship between the user and their satisfaction with the functionality of a product. In VR, the key to immersion is engaging the mind with realism in movement, function, and quality. Design of the UX must be thought of first, taking into consideration the use case, flow, and intended users. Often confused with the elements that interact with the user interface (UI), UX must be treated as separate, and even more fundamental to a good product.

There are many tools to explore and perfect UX; below are some examples. To understand the audience, a user persona focuses on how a type of user would perceive and functionally experience a design. Flow diagrams enable a designer to map out the process a user would take through a system and allow for testing with wireframing and prototyping to prove or falsify assumptions. Other tools relevant to creating VR UX include user journey maps, interviews, and research. User-centered design creates personally impactful experiences as a result of implementing such tools in product development.

Image representing a filled out document of a generic person
User Persona

Image representing a generic diagram
Flow Diagram

Image representing a generic wireframe
Wireframe & Prototyping

How to Explore and Create Concepts for VR

When beginning to develop a concept for VR, many people, especially those new to VR, tend to think that VR sits on top of current experiences like movies, marketing, or videos. These old processes are often not questioned for a new medium and lead to interesting issues, like being given storyboards. If storyboarding is still used, then the expectation should be a fluid idea: the content in the storyboard may not be the main point of interest in the experience, given that the user is the driving force. As far as translating a VR experience from thought to concept, foundational supporting items such as modeling and concept art help steer the vision, but often miss important features that users need. There are several other, more functional options: Unity* EditorVR, which allows for in-engine experimentation while developing a fleshed-out idea for experience functionality; cardboard or brown boxing, in which a rapid physical prototype can be created with depth, dimension, and interaction; or recorded video using stand-ins. All of these can provide a better take on how to interact with the environment.

Download Unity

In this example, the MR Configurator Tool from Underminer Studios, you can see some hints toward UX with cues for the user such as the push button and a familiar modern tablet interface. These are valuable parts of development, but lack the thorough understanding of the user needed to accurately predict and design an intuitive experience.

To gain some perspective on the roots of entertainment media, consider that video games and movies take their roots from theatre, with the user in the position of an audience member, whereas VR takes its roots from performance theatre, where each patron is a part of the play itself. Visual trickery and movie magic that would be used in non-VR media are perceived as cheap and visually unpleasant in VR. Users are bothered by the sizing and placement of special effects in VR more than in other media. To make a visually pleasing and spatially accurate experience more akin to a simulation, developers must use a high level of realism for spacing and perception. For example, in an experience where the user is driving a racecar, the need to interact with the steering wheel and gear shift at the same time must be met in an authentic way that makes the user believe they are sitting in a car and can actually perform the tasks necessary to drive.

At GDC 2016, Jesse Schell and Shawn Patton spoke about lessons learned in VR design in their talk, I Expect You To Die: New Puzzles, New Hands. The example below is a clear representation of how brown boxing is such an effective and efficient tool for physical prototyping for VR. Watching a user interact with an environment and props gives great insight into the 'why' of an interaction.

Developer creating a cardboard mock-up

Typography is inevitable and still has no great solution. We currently see a lot of screen-jacking, which forces the view to be blocked by a billboard-style display covering the world and lowering immersion. The most fundamental need here is communication. All UI and UX should be from the environment itself in an ideal scenario, but providing the proper cues is very hard. Audio cues or pictogram-style instructions seem to have supplanted some of the older paradigms, and new interfaces with hand tracking make a much more intuitive interaction that allows for fewer words.

Design Fundamentals

Good design involves taking into consideration the intuitive needs of many varied users. No one knows that it is good design per se; they just know that it works easily and doesn't have a steep learning curve. Take children's toys, for example. A well-designed toy will not need an explanation, but instead becomes more of an extended function of the user. Designers expect things to be used a certain way, but when the actual use case differs, users are frustrated by the functionality. One way to avoid this issue is to playtest, a lot. The more users, the wider the swath of functionality tested and proven or disproven.

cartoon of two figures struggling with a door

Bad design in VR has some negative physical effects such as nausea. Issues like sensory conflict are created by forced camera movements, low-quality animation, and a low frame rate (frames per second, or fps, the rate at which the world is rendered); these can be reduced or removed as issues when UX design is taken into consideration early on. A real-world example here is doors. If a door is designed properly, with the correct flat plate for pushing and a handle or bar for pulling, we would not need push or pull signs or have surfaces that need constant wiping from handprints; yet even this most basic daily function is overlooked in design. Can you imagine getting sick every time you encountered a poorly designed door?

In VR, a huge element of design is actually putting on a headset. The colors, scale, and space will look very different from this vantage point versus looking at a monitor. Building a room-scale space which is two meters by two meters can drastically affect the UX with regard to lighting, textures, and parallax, which are the key elements to creating a sense of depth. An experience built using the capabilities of room scale within the limits of the actual physical space means that the locomotion or movement within the world needs to be carefully chosen. If you build a poor experience that doesn't take space into account, you can find yourself in a metaphorical corner, where users are constantly teleporting around with no focus, never quite sure of the space around them. Environmental cues like directional audio, lighting, clear paths, or glowing objects (also known as inductive design, where the environment itself tells you information) draw your attention to the important parts of the scene. Other opportunities to make this more clear include using level of detail (LOD) for contrast in distant objects, such as changing their appearance to be more faded. Users need feedback from the environment to feel immersed; this can include shadows cast from hands and sound that is responsive to movement. These considerations are important for all experiences, but taking into account the users and their norms is even more crucial.

Stylized image of a generic VR headset

Designing for Users

Tutorials are very important for every experience to teach users the unique paradigms they need to understand in order to function in the VR environment, especially in these early days when there are inconsistent norms. The ideal is task-based practice without the ability to skip, covering six degrees of movement—yaw, pitch, roll, and the x, y, and z axes—for any input, including controllers, hands, visuals, and other elements, when creating a standard for each experience. There are three traditional types of users that can be grouped together, and to make a universally enjoyable engagement, the designer should know how to overcome the obstacles for each.

  • One user type is the T-Rex; these people put their arms at a 45-degree angle and don't tend to turn or look around. The key is to loosen them up with directional audio and visual cues, and very obvious cues like arrows with timers.
  • The second user type is the Ultra-Enthusiastic; they do not read any text and just want to get to game play. They are typically used to playing video games and may be experienced in other VR, but they have the expectation that they will be able to pick it up along the way. The best way to help this user is by forced fun gameplay like tutorials.
  • The third user type is the Precise, who reads absolutely everything and does all of it properly. The negative of this user is that they are not having an intuitive interaction, and likely will get lost if the instructions are not exact. In some cases, this user will be able to use critical thinking and get over certain shortcomings, and in other cases this user will get stuck without further explanation. Using very clear cues and the same paradigms is the best way to account for this user.

Universal design is inclusivity for both mental and physical differences and likenesses. Taking into account all learning and bodily abilities and planning for a solution is imperative when creating long-lasting paradigms. Experiences that can be had by all the above user types and used from a seated position with controls that can be adapted to any set of physical limitations will be accessible by a wider audience. As hardware becomes more low profile, intuitive, and available we can really start to break down these barriers for all kinds of users.

User Input Controls

Functional controls can be a hand or any one of the various controllers. Each type of control has distinct functionality, and since they are all less than two years old, implementation does not have an established norm; users will likely not understand the basics, including how to hold the controller, what the buttons do, how to access the UI, or how to interact. Pictograms, infographics, text, or hands-on demonstrations are necessary until these become more mainstream.

There are several types of controllers specific to individual systems, each constructed to be ergonomic, durable, and as familiar as possible, as well as custom controllers that are more specialized, like a gun, sword, bicycle, or other props for more realistic interactions. There are also suits that allow for a built-in motion capture experience by tracking all movements. Each has its own set of capabilities and negatives: specialized hardware tends to increase cost, lower accessibility, and add another barrier to adoption. Below are some examples of variations. The other consideration with tracked controllers is that the experience must be optimized to track them well, or immersion is hindered.

VR Gaming tools

Hand tracking has far fewer constraints and is a way to interact with the system that is the most intuitive and natural. Gestural control is a much faster and more realistic input control. User interfaces that are embedded in the hand functions, like flipping over to the palm, create an easier way for users to get over the learning curve. The engagement increases, and the number of errors decreases. The hardware itself is newer and improving vastly with each iteration.

photo of developer using Underminer Studios VR data visualization tool

In the above image you can see the Underminer Studios VR data visualization tool, ManuVRing Data, using Leap Motion controls. We developed this project on a Vive and had the option to choose between the controllers or adding hand tracking capabilities. Given that this was a rapid prototype during a hackathon, we wanted the users to have more intuitive interactions within the system. The judges for the competition had little VR experience and were a perfect example of how to use design thinking for all skill levels. To combat the sense of the unknown that comes with unfamiliar controllers, using one's own hands to pinch, zoom, rotate, select, and manipulate the virtual world allowed for instinctive motions and required little explanation of the paradigms present in the experience.

Navigation

Navigation is imperative to a realistic experience. Camera angles, frame rates, pace, and locomotion are key factors, while acceptance can be intentional or accidental; a good designer plans for both directional and time-based acceptance criteria. Taking into account the user types mentioned above can help a designer prepare for typical issues. Both the T-Rex and Precise users are aware of the environment and of the physical downsides of not following the paradigms. The Ultra-Enthusiastic user, in contrast, tends to be far less cautious in searching for a more visceral experience and often does not make the connection that there could be physical harm in real life. Taking this into account is important for navigation and safety.

Gaze-based navigation is when your line of sight is your selection tool and there is no other input needed to change content. In fast-paced environments this can be an issue because the user is limited to one input at a time. This is a great tool for informational, educational, or visual experiences that are turn based or strategy focused.

Teleporting, or blink, is when a user jumps from one location to another with their vision obscured by a brief darkening or lightening of the screen; if white is chosen, it can increase nausea. Timing between the blink, the location change, and the user's mental catch-up is also a huge factor here. A slight timing delay between the locations, and a very light animation that freezes time for three to five frames so the user can mentally process the change, help to reduce negative effects. A newer technique is blinking without removing the visual; a slight motion blur and a darkening of the visual with a vignette-style view allow the individual to maintain situational awareness.

Turn-based navigation is when you take an experience and you augment the amount of forward movement and turning relation that a user believes they are performing versus their visual perception. Essentially the user is walking 50 paces forward and turning visually 180 degrees, but the actual movement is only 90 degrees. This design can maximize playable space. Negatives to this approach are having to be very knowledgeable about three-dimensional math and the limitations of each headset type, including updates per system for tracking. This is a tricky one to get right and must be executed precisely to limit adverse effects.

Auditory cues are core to making a proper VR experience. A user cannot look everywhere, and the expectation is that people are not looking in the right direction at all times. So, in order to attract their attention and create more opportunity and an inductive experience, leveraging audio design and 360-degree directional sound will add cues that a user can follow naturally. Another option is to add psychoacoustic audio, which registers above and below a human perceptible hearing range and creates an involuntary emotional response; this can lead a user to pursue or even avoid specific interactions.

User Safety and Comfort

Stylized image of a VR user in a warning sign

This is, unfortunately, typically an afterthought. A good designer will consider the long-term ramifications of a user's actions in VR and plan to avoid or reduce them where possible. For instance, people do not naturally know how to look at their feet by bending at the waist in VR. An intended path that is not followed can cause nausea; use visual cues to stop a user at a visual barrier rather than relying on a physical barrier like clipping, where someone bounces into a wall and off of it, which is not a good mechanism for VR. The most important safety mechanism is the chaperone bounds, a grid that warns of the physical bounds of the play space. It is important to add these basics into a tutorial as well, so that the user knows how to react to these elements during an experience.

Implementation

Virtual reality is an exciting new frontier that needs a lot more exploration. When designing for 3D interactive experiences there are a lot of influences, from psychology to architecture, sound, physics, lighting, and so on. The more comprehensive the look at these design influences, the better. To be successful at creating impactful VR experiences, a designer must have an awareness of the hardware differentiations and limitations that will directly affect the optimization needs and shape the overall experience. Since we are so early in the lifecycle of this generation of VR, there is still a lot of experimentation needed to continue creating a comprehensive best practices guide. For now, adhering to user-centered design, building beautiful and realistic experiences, and optimizing to maximize an experience are the fundamentals.

VR Optimization Tips

About the Author

Alexandria Porter is the CEO of Underminer Studios. She brings a grounded perspective to a fast-paced business on the leading edge of technology with small business experience, design prowess, and management skills. As a solution-focused company, Underminer Studios serves clients that seek to leverage technology to push boundaries and change the perspective on how to solve real problems. Utilizing more than a decade of experience, strong industry connections, and out-of-the-box thinking to create unique products, Underminer Studios is driven by a passion for impactful uses of technology that will shape the future.


Stand Out From the Virtual Reality Crowd With Mixed-Reality Video


Why using mixed-reality video is the smart way to promote your VR game

It's the rare game developer that doesn't want their creation to be a massive hit. While the world of virtual reality (VR) gaming may not yet have seen its crossover, blockbuster moment, there is no shortage of fierce competition for a piece of the pie in a fledgling (for now at least) market of VR hardware owners.

A quick search on the Steam store revealed more than 2000 different VR-enabled apps and games for the PC, the vast majority of which have appeared since 2016. In this context, you want your VR game to grab the eyeballs of potential players, and not let go. Mixed-reality video is a great way to do just that.

Player using mixed-reality
Figure 1. Green screen mixed reality as demonstrated in the 2016 video by HTC Vive*.

Green screen mixed-reality video for VR games is a 2D video production technique that gives the viewer a third-person view of the player in the game's world. Using a hardware and software stack that is explored here, video creators can record or stream live gameplay of the player immersed in the virtual environment, taking the viewing experience far beyond the limitations of the first-person headset view. It's something every VR developer should try.

"Wow, that's really cool!" said pretty much everyone.

HTC and Valve* were among the first to use mixed reality to show off VR in their Vive demo video released in early 2016, and the technique has been used to great effect in trailers for Job Simulator from Owlchemy Labs, and Fantastic Contraption from Northway Games. The Serious Sam developers at Croteam have also embraced it—from their early demos at EGX in 2016, to implementing proprietary mixed-reality video tools in all of their VR titles.

Josh Bancroft and Jerry Makare at Intel® Developer Relations have been working with the technique for over a year, rolling out live demos in 2017 at GDC, Computex in Taipei, E3, and, most recently, at the Intel Holiday Showcase in New York. For them, using mixed-reality video to stand out from the VR crowd is a no-brainer. Having heard the reactions first hand—mostly variations on the subheading above—they understand the advantages of mixed reality over the first-person view in giving viewers outside the headset a compelling sense of the VR experience.


Figure 2: Live green screen mixed reality stage demo by the Intel® team at Computex 2017 in Taipei, featuring Rick and Morty: Virtual Rick-ality.

"The greatest advantage of mixed-reality video for VR is the fact that you can give a glimpse of what it's like inside the VR experience," said Josh. "When you're a third party looking at a first-person VR video, there are things that you don't realize are distracting—when you're in VR, your head movements are smoothed out by your brain, but if you're watching in third person, it's really jumpy and distracting." The third-person mixed-reality view neatly eliminates those distractions.

Sharing is Caring

For many gamers, sharing their experiences with friends is paramount, be it online, in local multiplayer, or as a spectator around the TV. With VR, the player is sealed off behind a VR headset, with only the jumpy first-person view on screen, making it harder to include others in the experience. Mixed reality can change that.

"Using the third-person view adds a more social aspect, it's more of a shared experience," said Jerry. "People can see the environment and understand what you're reacting to, which can lead to all sorts of fun things, like people backseat driving and giving you hints while you're playing."

VR game played in a social setting
Figure 3. An example of how mixed reality can make VR more social, from the 2016 HTC Vive* video.

Good Business

"The VR market is very crowded right now, and it's hard to distinguish yourself," said Josh. "It's really new and everyone's trying it, but no one has really figured out the secret, smash-hit formula, so there's a lot of opportunity." Enabling your game for mixed reality is one of the ways developers can seize that opportunity.

Firstly, it allows you to create trailers, videos, and live demos with an irresistibly immersive view of the experience from inside the game world, as opposed to using only the standard, first-person view. You can then use these trailers and videos to showcase the game at events, and beyond. "Using that third-person mixed-reality view could give you the edge that gets more people watching your trailer and buying your game," said Josh.

Secondly, adding mixed-reality support to your game throws open the doors to content creators, streamers and YouTubers, allowing them to create their own mixed-reality videos of your game. YouTubers, including Get Good Gaming and DashieGames, have already used mixed reality to great effect in their videos. As the top comment on this DashieGames video puts it, "I could watch this kind of gameplay for hours and not get bored."

Screenshot of user in a mixed-reality game
Figure 4: YouTuber Get Good Gaming playing VR gladiator title Gorn in mixed reality, while dressed for the occasion.

Many developers are already implementing features designed to help streamers and content creators—such as optimizing for Twitch* streaming, and integrating audience interaction tools. When it comes to VR games, mixed-reality enablement needs to be high on that list. Northway Games built a whole suite of content-creator features into Fantastic Contraption, including mixed-reality support. They wanted to use it in their own streams but, more importantly, they wanted their players to do the same, with the goal of reaching a bigger audience with what they knew was compelling content.

Fantastic Contraption screenshot
Figure 5. One of Northway Games' live mixed-reality streams, featuring their VR game Fantastic Contraption.

"There is a lot of growth in building integration for streamers, YouTubers, and content creators, and adding mixed-reality support to your game aligns directly with that," said Josh. "If you show that your game will let streamers and content creators produce this amazing mixed-reality video for their audience, that can be really appealing, and ultimately help multiply your reach."

Serious Commitment

For the last couple of years, independent developers Croteam have been on a mission to bring VR to their entire catalog of recent games, including four Serious Sam games, and The Talos Principle. Innovators to the core, it was inevitable that they would explore the possibilities of mixed reality as part of their VR push. "All the cool-looking videos had mixed reality in them, and it looked like the best way of trying to explain to people what VR feels like," said Goran Adrinek, senior programmer, and in-house mixed-reality expert, at Croteam.

The chance to flex their mixed-reality muscles came when they had the opportunity to present Serious Sam VR: The Last Hope* at EGX 2016 in the United Kingdom. "Our plan was to get some kind of real-time mixed-reality implementation going so we could look cool on the stand, capture some promo videos, and also show off the physical minigun controller," said Goran.

Woman with gun playing VR
Figure 6. The Serious Sam* minigun finding its way into green screen mixed reality at Croteam.

Zero to Hero

However, there was a problem: They had two weeks until showtime, and zero mixed reality enabled in the game. "At that time, we were just discovering what mixed reality was all about, and we tried to do it the same way as the others did: Rendering several views from the game that can be blended together in Open Broadcaster Software," said Goran.

Ever the perfectionists, they weren't satisfied with the results, so they decided to build their own mixed-reality tools in their proprietary Serious Engine that they've been iterating on for the best part of two decades. They chose to bring the entire mixed-reality process into the engine, including capturing the live video, and compositing it in real time inside the game.

It was a bold move that paid off. "We made it in time for EGX 2016, and it worked out nicely," said Goran. "Watching someone play in mixed reality is a much nicer experience than watching a first-person video, and you could see it was grabbing a lot of attention on the show floor."

Users playing mixed-reality games
Figure 7. Croteam's live green screen mixed-reality demo of Serious Sam VR: The Last Hope* at EGX 2016.

Home Advantage

As well as the advantage of needing only one piece of software (the VR game) to manage the whole process, creating mixed-reality video in-game has the added bonus of making the effects look better, because they're mixed with the player's image in the game engine where they're generated.

Coding everything in their engine—from video capture and virtual in-game third-person camera, to green screen chroma key, and compositing—is clearly no small task, but it didn't seem to faze Croteam. "It's hard to tell exactly how much work it was, but not that much, since we didn't have a lot of time to do it," said Goran.

The rapid-fire work in time for EGX was just the beginning of mixed-reality support for Croteam. "After EGX, we continued developing mixed reality until we solved all the problems that were bugging us—like camera calibration, controller delay, and moving-camera scenarios," said Goran. "We ended up with an easy setup process with an in-game setup wizard that anyone could follow."

Mixed Reality Setup Wizard
Figure 8: The Mixed Reality Setup Wizard is included with all Croteam's VR titles.

There is more in store, as they continue to hone their technique and tools. "We did some experiments using the depth information to get even better visual results when mixing the player's image in the game, and this is definitely the way to go," said Goran. "We hope to expand on this approach in the future."

The Mixed Difference

Echoing the thoughts of Josh and Jerry, Croteam saw the clear benefits of mixed-reality support beyond the show floor and their own trailers. "It also helps players easily make videos that can be grabbed directly from the game, and streamed or uploaded to online services like Twitch or YouTube*," said Goran. "We expect a lot more player-generated material to hit the Internet as a direct result of how easy it is to do mixed reality in our games."

Despite the successful two-week hustle pre-EGX, Goran doesn't want to make it all sound too easy. "It is a lot of work, whether you mix stuff in post-production, or if you invest in technology that streamlines the entire process as we did," he said. But, in the end, it's an investment worth making. "The approach and tools are really open for everyone, and even in its basic form, it can produce great results," continued Goran.

Croteam have documented their journey with mixed-reality implementation on their blog—essential reading for any developer looking to explore the technique.

Beyond Gaming

It's not only game developers whose businesses can benefit from the creative applications of mixed reality. One area that Josh highlighted is real estate. As a quick Google search demonstrates, virtual property walkthroughs are already very much a thing, allowing a prospective buyer to visit a property from the comfort of their own home, or a realtor's office. "To take that further, you could make a mixed-reality view of the realtor walking them through the property," suggested Josh. "Or they can be given a mixed-reality video where they see themselves walking around it, so they become more attached to it."

VR in real estate image
Figure 9. CNN Money* report entitled "Virtual reality is the new open house," about the use of VR in real estate.

From his perspective as a video creator, Jerry can envisage mixed reality being used as a powerful tool in visual storytelling, potentially linked to the immersive theater experiences that have become popular in London, New York, and elsewhere. "There is an opportunity for people to create rich, curated, mixed-reality experiences based on VR," said Jerry. "It's all about adding to the social environments, broadening the reach of what's happening, so people can better understand what's going on and share the experiences together."

In the wake of the Spider-Man: Homecoming - Virtual Reality Experience game released in June 2017, Josh expects to see more commercial brands creating VR and mixed-reality experiences as a promotional tool. "Imagine setting up an environment in a movie theater or theme park where you can insert yourself into your favorite movie or game experience, and hang out with Iron Man, or swing with Spider-Man," said Josh. "Not only do you get to experience that in the VR headset but, with mixed reality, you get a video of yourself doing it that you can share."

Spiderman game screenshot
Figure 10: The Spider-Man: Homecoming - Virtual Reality Experience* trailer from Sony Pictures Entertainment.

Take the Plunge

But, before we start posting mixed-reality videos of our head-to-head encounters with super heroes to social media, there's more work to be done with mixed-reality video in the world of games, and more creativity to come from the minds of developers.

One such creative leap that impressed Josh was the live demo of the Circle of Saviors* VR game at Tokyo Game Show 2016. The game involves slashing away at dragons with a great big sword, which requires a healthy suspension of disbelief. To complete the effect, the green screen demo was performed by a player in full cosplay, including flowing red cape, so that the final composited mixed-reality video showed the player, in character, immersed in the game universe.

"It was really spectacular," said Josh. "The developer saw the capability of mixed reality, and realized it could be used to do something different, taking the creative vision in a new direction that wasn't possible before."

Cosplay user in a mixed-reality game
Figure 11. Cosplayer demonstrating Circle of Saviors, in mixed-reality, at Tokyo Game Show 2016.

And that's the key—showing people what's possible in VR, whether or not they're in the headset themselves. Right now, mixed-reality video does a great job of communicating the idea of "this could be you" to viewers, and VR developers owe it to themselves, and their games, to check out what it can do.

"Showing the player fully immersed in a 3D world using mixed reality is the best option we've come up with for promoting our VR games in 2D media," said Croteam's Goran. "You should definitely try it."

While there are certainly technical challenges involved, they're far from insurmountable, as the experiences of Croteam, and others, prove. Plus, there is an ever-growing number of resources available to help you. "It's not as hard as it seems, and it's strangely addictive," said Josh.

"Once you are able to bring a camera into VR with you, and show that third-person perspective on your virtual world, it will spark your creativity as a developer, and you'll start thinking of cool ways to use it," continued Josh. "It's worth it!"

More stories, tutorials, case studies, and other related material are planned around VR mixed-reality technologies. To stay up-to-date with all the latest news, join the Intel® developer program at: https://software.intel.com/gamedev.

Retopologizing VR and Game Assets Using Blender*


retopologized images

This article covers how to both retopologize a mesh to a clean, low-density model, and then UV unwrap that mesh to support adding texture maps to the new model. It also discusses the use of free tools like Blender* and its BSurface add-on for retopologizing a sculpted 3D mesh.

About Retopology

Retopology is the process of rebuilding a 3D mesh so that it is smaller and has a cleaner structure for downstream use. It may seem like busywork in the 3D creator's process, but it is a key task that also gives you flexibility when working with your models. Most original meshes, whether built in a 3D modeler, sculpted, or scanned, capture a high level of detail to make the model look good. However, that same detail produces a large file that is costly to read, write, and hold in memory for your target application. Highly detailed meshes can also carry more geometry than needed or be nonsymmetrical, creating odd or unwanted creases, folds, and movement when the mesh is animated.

high topology mesh and render; low topology mesh and render

To avoid this, you’ll need to retopologize the mesh to a more basic, usable structure, and then transfer the detailed data about color, bumps, and creases out of the 3D structure and into “baked” 2D material maps. This allows the 3D mesh to be a much smaller file with fewer vertices, yet produce the same results in rendering. In the images above, the original mesh was reduced from 500,000 faces to 2,000 faces, yet the output looks essentially the same.
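
If you want to verify the reduction on your own models, one quick check (a minimal sketch, assuming you are working in Blender's Python console; the object names below are placeholders) is to compare face and vertex counts:

    import bpy

    # Compare mesh density before and after retopology (object names are examples)
    for name in ("HighPolyHead", "RetopoHead"):
        obj = bpy.data.objects.get(name)
        if obj is not None and obj.type == 'MESH':
            print(name, "faces:", len(obj.data.polygons), "vertices:", len(obj.data.vertices))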

What you need to get started:

  • PC, Mac*, or Linux* computer; Intel® Core™ i5 or Intel® Core™ i7 processor recommended
  • Blender 3D with the BSurface add-on enabled (free to download)

Blender* and BSurface

Blender is a free, fully featured 3D modeling, rendering, and animation studio. With this powerful tool you can create anything from simple 3D objects to a full-length animated film. For the purposes of this exercise we focus on the modeling side of Blender. If you have never used Blender before, I suggest watching this getting-started video to learn how to set up Blender for good results.

To do the work in this tutorial you will need a high topology 3D mesh. For instructional purposes we used the Suzanne monkey head model, a mesh object built into Blender. To create a higher density version of this model, one with over 500K vertices, follow these steps (a scripted equivalent is sketched after the list):

  1. Select Add > Mesh, and then select Suzanne.
  2. Under the Tools panel, select Smooth.
  3. Under the Properties panel, click the wrench icon to get to modifiers.
  4. Add the modifier called Subdivision Surface.
  5. Set Subdivision to 5.
  6. Click Apply.
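
If you prefer to script this setup, the following is a minimal sketch of the same steps using Blender's Python API (bpy); modifier and operator details can vary slightly between Blender releases.

    import bpy

    # Add the Suzanne monkey-head mesh and smooth-shade it
    bpy.ops.mesh.primitive_monkey_add()
    suzanne = bpy.context.active_object
    bpy.ops.object.shade_smooth()

    # Add a Subdivision Surface modifier at level 5, then apply it
    subsurf = suzanne.modifiers.new(name="Subsurf", type='SUBSURF')
    subsurf.levels = 5
    subsurf.render_levels = 5
    bpy.ops.object.modifier_apply(modifier=subsurf.name)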

Retopology Using BSurface

The following retopology steps are covered in this video.

Setting up BSurface

To install BSurface, from the File menu, click User Preferences. Select the Add-ons tab and search for “bsurf”. The BSurface add-on will appear on the right. To enable it, check the box to the left of its name. BSurface should now be installed and enabled.
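
The same enablement can be scripted. The sketch below assumes the add-on module is named mesh_bsurfaces (the module name under which the BSurfaces add-on ships with Blender); adjust it if your build uses a different name.

    import addon_utils
    import bpy

    # Enable the BSurfaces add-on and persist the preference
    addon_utils.enable("mesh_bsurfaces", default_set=True)
    bpy.ops.wm.save_userpref()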

For BSurface to place vertices correctly, lined up with the original model, you need the original model loaded and initially selected in Object Mode in the scene. Then, to create your retopology, add a plane, which creates a new asset. Go to the Edit menu, and then select Snap to Features.

To configure BSurface, from the Tools menu, select the Grease Pencil tab. Then do the following:

  • Select Continue.
  • Under Data Source, select Object.
  • Under Stroke Placement, select Surface.

surface button in the menu

You may find that you want to move some of the vertices, edges, or faces around manually. To get those elements to also cling to the surface of your model, do the following while in Edit Mode: turn Snap on, and then under Snap Element, select Face. Finally, click the next four icons on the bottom right. The equivalent snap settings are sketched as a script after the screenshot below.

face button in the menu
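
For reference, here is a minimal sketch of the equivalent snap settings from the Python console; the property names differ slightly between Blender 2.7x and 2.8+, so the sketch checks which one is available.

    import bpy

    tool_settings = bpy.context.scene.tool_settings
    tool_settings.use_snap = True  # turn snapping on

    # Snap moved vertices, edges, and faces to the faces of the original model
    if hasattr(tool_settings, "snap_elements"):      # Blender 2.8 and later
        tool_settings.snap_elements = {'FACE'}
    else:                                            # Blender 2.7x
        tool_settings.snap_element = 'FACE'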

You are now ready to use BSurface.

Creating your first grid mesh

BSurface automatically creates a mesh based on a few lines you draw over the original mesh.

  • You only need to draw a few parallel lines in one direction. BSurface will create a crossing mesh based on those lines. Do not draw lines from different directions or in crossing directions.
  • In the Tools tab under the Tools panel, you will see BSurface. Click the Add Surface button.

steps to add surface

  • To adjust what has been created, set the number of Cross strokes or Follow strokes (the lines between the lines you drew). You can also clear Loops on strokes if you want BSurface to ignore all but the first and last lines you drew; Follow then sets the number of subdivisions between the first and last stroke.

cross strokes example

Creating a radial mesh

Often you need a mesh that circles around an area like an eye or mouth. To create these types of meshes, simply draw evenly spaced radial lines from the area you are circling.

radial mesh

Click Add Surface, and then select Cycle Cross to close the radial and set the number of cycle crosses needed.

radial mesh example

As before, clear Loops on strokes to set the subdivisions more evenly.

Corner Fill

You will often find that two sides of a mesh are completed, but you still need to fill the mesh between them. To do that, select all the points on the two sides, use SHIFT+right-click to unselect the point in the corner, and then draw one of the missing sides. In the example below, the top and right edges were set and selected. A bottom edge was drawn, following the curve of the mesh points. Click Add Surface to fill it in.

corner fill

corner fill grid

Connecting Meshes

To connect two meshes, select all the points you need to connect. Be sure you have selected the same number of points on each side. Then imagine a series of lines between them, and draw at least two of those lines, with the last line drawn over one set of highlighted vertices.

connecting meshes

connecting meshes detail

You now have the basics of BSurface needed to quickly retopologize a model. Continue to use BSurface to create a full mesh over your high topology model.

UV Unwrapping and Adding Detailed Texture Maps

UV unwrapping is a key piece of the process; it is the part of the pipeline that lets you bring the detail lost during retopology from your high topology model back onto your newly retopologized model.

UV unwrapping is the process of defining how a 2D image will be mapped over all the vertices in the mesh. Imagine a world globe where you have a smooth sphere and plan to have a printed image of the globe’s features wrapped onto the globe. For the 2D image to wrap over the globe, you will need to make cuts into the image in a way that allows that image to wrap properly over a sphere. This is called a map projection.

texture map
Source: Wikipedia, Goode homolosine projection of the world

For 3D modeling, UV unwrapping of a model allows you to define this projection pattern, which works as a guide for all sorts of 2D information to be mapped over the 3D object. In the globe example, you could bake information into 2D image maps for the color, roughness/gloss, and height information for the land or water of the earth, and then have that projected onto the sphere using the UV map.

Below is an example of a UV map of a 3D alien head model. The left image is called a color map, and the middle image is called a normal map or bump map. Both images have a wireframe mesh flattened out over the respective images. This flattened out mesh pattern is the UV map.

UV map

What you need

For this part of the tutorial, you need a high poly mesh with texture details. You can use the linked file, or you can take the high poly object you created earlier and add texture details to it using Sculpt mode. The linked file is an example of the monkey head sculpted with wrinkles, lines, and skin folds.

First be sure the high poly mesh and the retopology or low poly mesh are in the same space. Set their X, Y, and Z coordinates to the same location using the translate options in the properties panel.
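
One way to line the two meshes up exactly is to copy the transform in the Python console; a minimal sketch, assuming both objects are in the scene (the names are placeholders):

    import bpy

    high = bpy.data.objects["HighPolyHead"]   # example names, use your own
    low = bpy.data.objects["RetopoHead"]

    # Give the low poly mesh the same location, rotation, and scale as the high poly mesh
    low.location = high.location.copy()
    low.rotation_euler = high.rotation_euler.copy()
    low.scale = high.scale.copy()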

The steps that follow are also covered in this video.

Step 1: Marking Seams

Marking seams is the process of selecting where the mesh will be cut apart so it can be laid out as a flat 2D pattern of the 3D model. Think of seams like the seams on a shirt or plush toy, where the 2D material is stitched together. And like those seams, you wouldn’t want too many of them, nor would you want to put them in highly visible areas. Put them where there is a lot of geometry curving around and where it makes sense to have natural edges. When baking texture maps, as we are doing, the seams will likely not show. However, if you use procedural maps of noise or other patterns, those patterns will often break at the seams.

To mark your seams, select your low poly object, and then go to Edit Mode. Select the Edge selection tool, and then select where you want your seams to go.

marking seams example

  • Select Mesh menu > Edge > Mark Seams. Do this for every seam you want to mark.

steps to follow

Seams will appear as red-colored edges on your model.

red colored edges of seams
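
Once the edges you want are selected, the same Mark Seams action can be issued from the Python console; a minimal sketch, assuming the low poly object is active:

    import bpy

    # Switch to Edit Mode with edge selection, then mark the selected edges as seams
    bpy.ops.object.mode_set(mode='EDIT')
    bpy.ops.mesh.select_mode(type='EDGE')
    bpy.ops.mesh.mark_seam(clear=False)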

Step 2: Unwrap the Low Poly Mesh

This step is straightforward and consists of unwrapping the model, and then creating a texture file to be mapped. Again you need to be in Edit mode of your low poly model. Select all, verifying that every mesh face is selected. Type “U” to bring up the Unwrap menu. Select UV Unwrap at the top of the menu.

For the UV map to be useful, you’ll want it as a guide for a texture file that you are going to bake or render into. Create a new viewport window and set that viewport to UV Image Editor. Under Image in that viewport, select New Image.


Set any desired resolution options, give it a name, and then save.

steps to unwrap the low poly mesh

The new image is black and empty; however, you can see the UV map laid over it.

steps to unwrap the low poly mesh
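
The unwrap and the new image can also be created from the Python console; a minimal sketch, with the image name and resolution chosen only as examples:

    import bpy

    # Unwrap every face of the active low poly mesh
    bpy.ops.object.mode_set(mode='EDIT')
    bpy.ops.mesh.select_all(action='SELECT')
    bpy.ops.uv.unwrap(method='ANGLE_BASED', margin=0.001)

    # Create a blank image to bake into
    bake_image = bpy.data.images.new(name="SuzanneBake", width=2048, height=2048)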

Step 3: Bake a Normal Map of the High Poly Mesh

In this step, you will “bake” the texture information on your high-resolution model to a 2D image file called a normal map. This texture will then be added to the material on your low-resolution model. The first step is to make sure your low-resolution model has a material with a texture in it.

Select your low-resolution or retopology model. In the Properties panel go to the Materials tab. If a material doesn’t already exist, add a material and select Nodes. (Be sure your render settings at the top of the Blender application are set to Cycles Render.)

cycles render

Open the Node Editor by setting one of your viewports to Node Editor. From the Add menu, select Texture, and then Image Texture.

steps to select image texture

With this added, set the node’s image to the UV image you created in the previous section, and set it to Non-Color data. A scripted version of this node setup is sketched after the screenshot below.

steps to select non-color data
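
Here is a minimal sketch of the same node setup from the Python console. It assumes the low poly object already has a Cycles material with nodes enabled and reuses the bake image created earlier; object and image names are examples, and the non-color setting is exposed differently in Blender 2.7x and 2.8+.

    import bpy

    low = bpy.data.objects["RetopoHead"]              # example name
    material = low.active_material
    nodes = material.node_tree.nodes

    # Add an Image Texture node pointing at the bake target image
    tex_node = nodes.new('ShaderNodeTexImage')
    tex_node.image = bpy.data.images["SuzanneBake"]

    # Mark the image as non-color data
    if hasattr(tex_node, "color_space"):                          # Blender 2.7x
        tex_node.color_space = 'NONE'
    else:                                                         # Blender 2.8+
        tex_node.image.colorspace_settings.name = 'Non-Color'

    # Cycles bakes into the active image texture node, so make this node active
    nodes.active = tex_node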

Next you need to configure the bake settings. In the Properties panel, select the Render settings (the camera icon). Scroll down to the Bake section and do the following:

  • For Bake Type, select Normal.
  • For Space, select Tangent.
  • Select Selected To Active.
  • Set Ray Distance to be at least 1.00.

bake section

The next part can be tricky. You need to select both models, and you must select the high-res version first, and then the low-res version. Be sure you are in Object Mode and not Edit Mode.

Once in Object Mode, use the Asset Inventory panel to verify that both the high topology model and the low topology model are visible and have the eye icons to the right of their names. Click the high topology model so its name appears in white, then right-click it and click Select. You should see the model highlighted in gold, with its name shown in the 3D Viewport.

bake section

Now press and hold the SHIFT key, and then right-click the low poly model in the 3D viewport. The low poly model’s name now appears in white and is shown in the 3D viewport, and the selection color changes from gold to orange. With that done, click Bake in the Render panel.

steps to select high topology
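
The selection order and the bake itself can also be driven from the Python console. The sketch below assumes Cycles as the renderer, uses example object names, and treats the bake operator's cage_extrusion parameter as the scripted counterpart of the panel's Ray Distance setting; the selection API differs between Blender 2.7x and 2.8+.

    import bpy

    high = bpy.data.objects["HighPolyHead"]           # example names
    low = bpy.data.objects["RetopoHead"]

    # Select the high poly model first, then make the low poly model the active object
    bpy.ops.object.select_all(action='DESELECT')
    if hasattr(high, "select_set"):                   # Blender 2.8 and later
        high.select_set(True)
        low.select_set(True)
        bpy.context.view_layer.objects.active = low
    else:                                             # Blender 2.7x
        high.select = True
        low.select = True
        bpy.context.scene.objects.active = low

    # Bake the high poly detail into the active image texture node of the low poly material
    bpy.ops.object.bake(type='NORMAL',
                        normal_space='TANGENT',
                        use_selected_to_active=True,
                        cage_extrusion=1.0)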

Step 4: Apply the normal map to your model

In this final step, we take the baked file and connect it to the material of our low poly model. First, save the baked render from the last step: in the UV Image Editor viewport, verify that the name of your saved UV map is listed in the file browser. The image should be shaded blue/purple, similar to the image shown below; the image on the left shows the normal map in Edit Mode, and the image on the right shows the baked image in Object Mode. While in the UV Image Editor, select the Image menu, and then click Save.

texture ready to connect to the model

We are now ready to connect this texture to our model.

Go to the Node Editor where you have the Image Texture node. Connect it to a Normal Map node, which you can add via Add > Vector > Normal Map.

image in normal map mode

Connect the Normal Map node’s output into the Normal slot on the Principled BSDF shader. You should now see all the details from your high-resolution model on your low-resolution model.

image in normal map mode
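
For completeness, here is a minimal sketch of saving the baked image and wiring the nodes from the Python console. It looks nodes up by their default names ("Image Texture" and "Principled BSDF"), so adjust them if you renamed anything; the file path is only an example.

    import bpy

    # Save the baked image to disk (path is an example, relative to the .blend file)
    baked = bpy.data.images["SuzanneBake"]
    baked.filepath_raw = "//suzanne_normal.png"
    baked.file_format = 'PNG'
    baked.save()

    # Wire Image Texture (Color) -> Normal Map (Color) -> Principled BSDF (Normal)
    tree = bpy.data.objects["RetopoHead"].active_material.node_tree
    nodes, links = tree.nodes, tree.links
    normal_map = nodes.new('ShaderNodeNormalMap')
    links.new(nodes["Image Texture"].outputs["Color"], normal_map.inputs["Color"])
    links.new(normal_map.outputs["Normal"], nodes["Principled BSDF"].inputs["Normal"])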

You have now completed a major part of the 3D asset workflow for use in games or virtual reality software. In addition to texturing normal information on a model, you can texture with other material maps that create realistic skin, hair, or other materials from cracked concrete to scratched gun metal. For reference on this next workflow, do a web search on PBR materials using tools like Substance Painter* and Blender.

Intel® Xeon® Platinum Processor Accelerates MariaDB Server* and MyRocks*


MariaDB Server* benefits from a 78 percent increase in throughput and an 18 percent cut in response time when running on the Intel® Xeon® Platinum 8180 processor.

MariaDB Server is one of the most popular database servers in the world. It’s made by the original founders of MySQL, and notable users include Wikipedia, WordPress.com, and Google.

View Solution Brief (PDF)

Electrical Characteristics of the Intel® High Definition Audio Specification


The Intel® High Definition Audio specification has recently changed, affecting the existing electrical characteristics and constraints. A white paper explaining these changes is available as a PDF linked at the bottom of this page.

In the white paper, you'll find the updated physical requirements, as well as the requirements for respective products.

More Information on Intel® High Definition Audio Changes

Exasol Accelerates In-Memory Data Analytics by up to 114 Percent


Exasol offers a high-speed in-memory database, which enables organizations to make faster and smarter decisions. The company found that the Intel® Xeon® Platinum processor accelerated performance by up to 114 percent.

The team at Exasol optimized the software, line by line, to take advantage of low-level processor features. The company used Intel® VTune™ Amplifier to identify CPU-related performance bottlenecks and to tune performance.

View Solution Brief (PDF)
 
