Channel: Intel Developer Zone Articles

Training Series for Development on Intel-based Android* Devices


This series of videos presents an overview of Intel tools available for Android* developers through Intel® Software. You’ll discover the benefits of these tools in your everyday life as a programmer, whether or not you’re targeting Intel devices. These tools will improve your life as a developer by saving you time, allowing you to create better experiences, and helping you target your app to multiple platforms and architectures. Take a look at the five training videos and follow the links to learn more about each tool.

Intel Tools for Android* Developers

Xavier Hallade, Intel Technical Engineer and GDE

Android* developers can benefit from one or more of the tools that Intel offers. Xavier gives a quick description of some of the available tools, what they can do for you, and the benefits you can gain in productivity and results. He also covers how the tools can help you with debugging, performance optimization, cross-platform development, and more.

 

NDK: Discover the Android Native Development Kit

Xavier Hallade, Intel Technical Engineer and GDE

Xavier introduces us to the Android* Native Development Kit (NDK), the official component you can use to integrate C/C++ into Android* applications. In some scenarios, using the NDK can deliver a big performance boost for your app.

 

Intel® Graphics Performance Analyzers 

Seth Schneider, Intel Technical Engineer

Seth presents the Graphics Performance Analyzers, a set of powerful tools that help developers utilize the full performance potential of their platform. The Analyzers provide a real-time view of system performance and help you track down errors, locate graphics issues, and more.

 

Build Multi-OS Native Apps with Intel® INDE

Karthiyayini Chinnaswamy, Intel Technical Engineer, and Jeff Kataoka, Intel Marketing Manager

Karthi and Jeff walk us through Intel INDE, a developer productivity suite that enables cross-OS and cross-architecture native mobile application development using Java* and C++. Take a look at the set of tools that will allow you to improve your app performance and save time through code reuse and streamlined access to platform capabilities.

 

Develop Apps in HTML5 Using the Intel® XDK

Dale Schouten, Intel Technical Consulting Engineer

The Intel XDK allows you to develop in HTML5 and obtain installable mobile apps for Android*, Windows* and iOS*. Dale walks us through the features of the tool that make it easy for developers to create, debug, and deploy apps across multiple OSs, using one code base.

 


Multi-OS Engine [Beta] Download and Installation


Multi-OS Engine provides time-saving productivity features that let you use Java code to create Android* and iOS* apps.

Thank you for applying for early access to the Multi-OS Engine beta. Now that you have been approved for early access to this beta product, use the links below to download the Multi-OS Engine.

Download the Multi-OS Engine beta below:

Windows* Host version (175MB)

OS X* Host version (116MB) 

 

Multi-OS Engine Developer Resources:

If you have an OS X* development machine, refer to the Quick Start Guide for Local Build

If you have a Windows* development machine, refer to the Quick Start Guide for Remote Build

For more information:

Explore Intel® INDE

A suite of C++/Java* native developer tools for creating cross-OS, media-rich performance applications.

For a complete list of EULAs used by Intel® INDE, see the product release notes.
 

Benefitting Power and Performance Sleep Loops


by Joe Olivas, Mike Chynoweth, & Tom Propst
 

Abstract

To take full advantage of today’s multicore processors, software developers typically break their work into manageable sizes and spread the work across several simultaneously running threads in a thread pool. Performance and power problems in thread pools can occur when the work queue is highly contended by multiple threads requesting work or when threads are waiting on work to appear in the queue.

While there are numerous algorithmic solutions to this problem, this paper discusses one in particular that we have seen as the most commonly used. We also provide some simple recommendations to improve both performance and power consumption without having to redesign an entire implementation.

Overview of the Problem

A popular approach is to have each thread in the thread pool continually check a queue for work and split off to process it once work becomes available. This is a very simple approach, but developers often run into problems with how to poll the queue for work or how to handle a highly contended queue. Issues can occur in two extreme conditions:

  1. The case when the work queue is not filling fast enough with work for the worker threads and they must back-off and wait for work to appear.
  2. The case when many threads are trying to get work from the queue in parallel, causing contention on the lock protecting the queue, and the threads must back off the lock to decrease contention on the lock.

Popular thread-pool implementations have some pitfalls, yet by making a few simple changes, you'll see big differences in both power and performance.

To start, we make a few assumptions about the workload in question. We assume we have a large dataset, which is evenly and independently divisible in order to eliminate complications that are outside the scope of this study.
 

Details of the Sleep Loop Algorithm

In our example, each thread tries to access the queue of work, so access to that queue must be protected with a lock so that only one thread at a time can take work from it.

With this added complexity, our algorithm from a single thread view looks like the following:
 


 

Problems with this Approach on Windows* Platforms

Where simple thread pools begin to break down is in the implementation. The key is how to back off the queue when there is no work available or the thread fails to acquire the lock to the queue. The simple approach is to constantly check, otherwise known as a “busy-wait” loop, shown below in pseudo code.

while (!acquire_lock() || no_work_in_queue);
get_work();
release_lock();
do_work();

Busy Wait Loop

The problem with the implementation above is if a thread cannot obtain the lock or there is no work in the queue, the thread continues checking as fast as possible. Actively polling consumes all of the available processor resources and has very negative impacts on both performance and power consumption. The upside is that the thread will enter the queue almost immediately when the lock is available or when work appears.

Sleep and SwitchToThread

The solution that many developers use for backing off checking the queue, or locks that are highly contended, is typically to call Sleep(0) or SwitchToThread() from the Win32 APIs. According to MSDN Sleep Function documents, calling Sleep(0) allows the calling thread to give up the remaining part of its time slice if and only if a thread of equal or greater priority is ready to run. 

Similarly, SwitchToThread() allows the calling thread to give up the remaining part of its time slice, but only to another thread on the same processor. This means that instead of constant checking, a thread only checks if no other useful work is pending. If you want the software to back off more aggressively, use a Sleep(1) call, which always gives up the remaining time slice, and context switch out, regardless of thread priority or processor residency. The goal of a Sleep(1) is to wake up and recheck in 1 millisecond.

while (!acquire_lock() || no_work_in_queue)
{
  Sleep(0);
}
get_work();
release_lock();
do_work();

Sleep Loop

Unfortunately, a lot more is going on under the hood that can cause some serious performance degradations. The Sleep(0) and SwitchToThread() calls require overhead since they involve a fairly long instruction path length, combined with an expensive ring 3 to ring 0 transition costing about 1,000 cycles. The processor is fooled into thinking that this “sleep loop” is accomplishing useful work. In executing these instructions, the processor is being fully utilized, filling up the pipeline with instructions, executing them, trashing the cache, and most importantly, using energy that is not benefiting the software.

An additional problem is that a Sleep(1) call probably does not do what you intended if the Windows kernel's tick rate is at the default of 15.6 ms. At the default tick rate, the call is actually equivalent to a sleep much larger than 1 ms, as long as 15.6 ms, since a thread can only wake up when the kernel wakes it. Such a call means the thread may be inactive for a very long time while the lock becomes available or work is placed in the queue.

Another issue is that immediately giving up a time slice means the running thread will be context switched out. A context switch costs something on the order of 5,000 cycles, so getting switched out and switched back in means the processor has wasted at least 10,000 cycles of overhead, which is not helping the workload get completed any faster. Very often, these loops lead to very high context switch rates, which are signs of overhead and possible opportunities for performance gains.

Fortunately, you have some options to help mitigate the overhead, save power, and get a nice boost in performance.

Spinning Out of Control

If you are using a threading library, you may not have control over the spin algorithms implemented.  During performance analysis, you may see a high volume of context switches, calls to Sleep or SwitchToThread, and high processor utilization tagged to the threading library.  In these situations, it is worth looking at alternative threading libraries to determine if their spin algorithms are more efficient.

Resolving the Problems

The approach we recommend in such an algorithm is akin to a more gradual back off. First, we allow the thread to spin on the lock for a brief period of time, but instead of fully spinning, we use the pause instruction in the loop. Introduced with the Intel® Streaming SIMD Extensions 2 (Intel® SSE2) instruction set, the pause instruction gives a hint to the processor that the calling thread is in a "spin-wait" loop. In addition, the pause instruction is a no-op when used on x86 architectures that do not support Intel SSE2, meaning it will still execute without doing anything or raising a fault. While this means older x86 architectures that don’t support Intel SSE2 won’t see the benefits of the pause, it also means that you can keep one straightforward code path that works across the board.

Essentially, the pause instruction delays the next instruction's execution for a finite period of time. By delaying the execution of the next instruction, the processor is not under demand, and parts of the pipeline are no longer being used, which in turn reduces the power consumed by the processor.

The pause instruction can be used in conjunction with a Sleep(0) to construct something similar to an exponential back-off for situations where the lock or more work may become available in a short period of time and performance may benefit from a short spin in ring 3. It is important to note that the number of cycles delayed by the pause instruction may vary from one processor family to another. Avoid stacking multiple pause instructions on the assumption that they will introduce a delay of a specific cycle count; since you cannot guarantee the cycle count from one system to the next, check the lock between each pause to avoid introducing unnecessarily long delays on new systems. This algorithm is shown below:

ATTEMPT_AGAIN:
  if (!acquire_lock())
  {
    /* Spin on pause max_spin_count times before backing off to sleep */
    for(int j = 0; j < max_spin_count; ++j)
    {
      /* pause intrinsic */
      _mm_pause();
      if (read_volatile_lock())
      {
        if (acquire_lock())
        {
          goto PROTECTED_CODE;
        }
      }
    }
    /* Pause loop didn't work, sleep now */
    Sleep(0);
    goto ATTEMPT_AGAIN;
  }
PROTECTED_CODE:
  get_work();
  release_lock();
  do_work();

Sleep Loop with exponential back off
 

Using pause in the Real World

Using the algorithms described above, including the pause instruction, has shown significant positive impacts on both power and performance. For our tests, we used three workloads of varying granularity. In the high-granularity case, the work was relatively extensive and the threads were not contending for the lock very often; in the low-granularity case, the work was quite short and the threads were more often finishing and ready for further tasks.

These measurements were taken on a system equivalent to a 6-core, 12-thread Intel® Core™ i7-990X processor. The observed performance gains were quite impressive: up to 4x when using eight threads, and even at thirty-two threads the numbers were approximately 3x over just using Sleep(0).



Performance using pause

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark* and MobileMark*, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more information go to http://www.intel.com/performance.

As mentioned before, using the pause instruction allows the processor pipeline to be less active when threads are waiting, resulting in the processor using less energy. Because of this, we were also able to measure the power differences between the two algorithms using a Fluke NetDAQ*.

Power Consumption with optimization.

Knowing that your software is saving 0.73W over a standard implementation means it is less likely to be a culprit in draining laptop battery life. Combining reduced energy consumption with the gains in performance can lead to enormous power savings over the lifetime of the workload.

Conclusions

In many cases, developers may be overlooking or simply have no way of knowing their applications have hidden performance problems. We were able to get a handle on these performance issues after several years of investigation and measurement.

We hope that this solution is simple enough to be retrofitted into existing software.  It follows common algorithms, but includes a few tweaks that can have large impacts. With battery life and portable devices becoming more prevalent and important to developers, a few software changes can take advantage of new instructions and have positive results for both performance and power consumption. 

About the Authors

Joe Olivas is a Software Engineer at Intel working on software performance optimizations for external software vendors, as well as creating new tools for analysis. He received both his B.S. and M.S. in Computer Science from CSU Sacramento with an emphasis on cryptographic primitives and performance. When Joe is not making software faster, he spends his time working on his house and brewing beer at home with his wife.
Mike Chynoweth is a Software Engineer at Intel, focusing on software performance optimization and analysis. He received his B.S. in Chemical Engineering from the University of Florida. When Mike is not concentrating on new performance analysis methodologies, he is playing didgeridoo, cycling, hiking or spending time with family.

 

Tom Propst is a Software Engineer at Intel focusing on enabling new use cases in business environments. He received his B.S. in Electrical Engineering from Colorado State University. Outside of work, Tom enjoys playing bicycle polo and tinkering with electronics.

 

Installing Intel® RealSense™ SDK on a Mac*


Introduction

Intel® RealSense™ technology is becoming popular with developers, and along with the buzz comes numerous (and inevitable) questions regarding its functionalities. One of the most-asked questions is how to get the Intel® RealSense™ SDK running on a Mac*.  The following step-by-step guide shows you how to run the SDK sample apps with an Intel® RealSense™ camera on a Mac through Boot Camp*.

Running an Intel® RealSense™ SDK Sample on a Macbook Air* through Boot Camp*

 

Getting Started

First, you’ll need the following:

To determine whether your system has a 4th generation Intel® Core™ processor (code name Haswell) or later, open Terminal and type the following command:

sysctl -n machdep.cpu.brand_string

That should return something that looks like this:

Intel(R) Core(TM) i5-5257U CPU @ 2.70GHz

If you’re not familiar with the Intel processor naming system, look at the 4-digit number after the i3/i5/i7. If the first digit (the “thousands” place) is 4 or higher, then you are good to go. Also, to see the list of all the 4th generation and 5th generation processors, click the appropriate link.
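As a quick sanity check, the generation digit can be pulled out of the brand string with `sed`. The parsing below is my own sketch and assumes the modern “iN-XXXX” model format; the brand string is hard-coded here in place of the live `sysctl` output:

```shell
# Hard-coded sample; on a Mac you would use:
#   brand=$(sysctl -n machdep.cpu.brand_string)
brand="Intel(R) Core(TM) i5-5257U CPU @ 2.70GHz"

# First digit of the model number after i3/i5/i7 = processor generation.
gen=$(printf '%s\n' "$brand" | sed -E 's/.*i[357]-([0-9]).*/\1/')
echo "generation: $gen"

if [ "$gen" -ge 4 ]; then
  echo "OK: meets the 4th-generation requirement"
fi
```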

Intel® RealSense™ Camera F200

 

Installing Windows Through Boot Camp

Use Cmd+Space, or click the magnifying glass in the top right-hand corner of the desktop, to launch Spotlight* and search for “Boot Camp”. “Boot Camp Assistant” should appear.

Finding Boot Camp* Assistant through Spotlight*

The Boot Camp Assistant transfers the Windows installation files from an ISO file to a USB drive, along with the latest Windows drivers, and conveniently guides you through resizing your Mac’s current system partition to produce a Windows partition. Windows will then install onto your Mac from the USB drive.

Installing Windows* through Boot Camp* Assistant

To begin, select the tasks you want to perform. It is recommended you leave all these options checked. However, if you already have a Boot Camp USB drive or have already partitioned your Mac, feel free to uncheck these options to speed the process up. This scenario is usually used by those who plan to install Windows on more than one Mac and have already made a USB drive.

Selecting tasks to perform on Boot Camp* Assistant

Next, point Boot Camp Assistant at the ISO file, then insert a USB flash drive and select it. Please keep in mind that this drive will be erased, so take precautions and back up valuable files.

Your Mac will now create the Windows installer drive; you should see a notification that reads “Copying Windows files.” Depending on your drive’s speed, this process may take quite a while. It is common for the progress bar to appear stuck for long stretches; please be patient when this occurs.

Copying Windows* files to USB drive

When the process is complete and your Mac has successfully completed the creation of a USB installation drive, you will come across the “Create a Partition” screen. At this point, you may split your Mac system drive into two parts: one partition for Mac OS X and another for Windows. You can determine how much space to allot to your Windows system and OS X system—32GB or more is recommended for your Windows partition. If you have several hard disks on your Mac, you may choose to dedicate one specifically to Windows. 

Unfortunately, Boot Camp Assistant will not be able to resize your partitions after the completion of this process; you will need to use a third-party tool to do so.

Partitioning Windows* drive

Windows will now complete the installation process. Afterwards, the Boot Camp installer will appear and install all appropriate Windows system hardware drivers and utilities.

When the Boot Camp installer completes its tasks, you can remove the USB drive. If you do not plan to perform the installation process on another Mac, you are done with your USB drive.

 

NTFS error

Since Boot Camp formats the Windows partition as FAT32 and Windows 8.1 supports only NTFS, you will probably receive the following error during installation:

“Windows cannot be installed to this hard disk space. Windows must be installed to a partition formatted as NTFS.”

Make sure you select the “BOOTCAMP” partition. You should then have a “Format” option that will reformat the partition to NTFS so that you can continue the Windows installation process.

NTFS error during the Windows* installation process

 

Installing the Intel® RealSense™ SDK on Boot Camp

You’ll notice that the process of installing the Intel RealSense SDK onto Boot Camp is the same as the usual Windows installation process.

Step 1: Plug your Intel RealSense Camera into one of the USB3 ports on your Mac.

Step 2: Follow the instructions on the Intel RealSense SDK download page to install the Intel® RealSense™ Depth Camera Manager (DCM), which includes the Camera Virtual Driver and Depth Camera Manager Service, and then the SDK.

Plugging the F200 camera into the USB3 port on the Macbook*

Step 3: Depending on the version of Windows 8.1, the Media Feature Pack may not be included. It is required for the Intel RealSense SDK installation, so simply download it and continue the installation if needed.

Step 4: When the SDK is installed, try running the sample apps. You can find them in a desktop folder called “Intel RealSense SDK 2014”; if the SDK is installed in the default location, the directory is C:\Program Files (x86)\Intel\RSSDK\sample.

Running an Intel® RealSense™ SDK Sample on a Macbook Air* through Boot Camp*

Step 5: Code samples can be modified and used through Visual Studio.

And there you have it! Intel RealSense technology is officially running on your Mac.

 

That’s a Wrap

As you have gathered by now, the steps needed to successfully run Intel RealSense technology on a Mac are relatively straightforward and simple using the Apple Boot Camp application.

You can use other virtualization tools such as Parallels*, VMware Fusion*, and VirtualBox*, but their USB3 emulation is not the same as running the OS natively. Simply put, the installation can still be accomplished, but the camera will fail to load.

 

About the Author

Peter Ma has over 14 years of experience developing web, mobile, and IoT applications. His experience includes database, web backend, web frontend, mobile (Android* and iOS), and IoT development. He is an Intel® Software Innovator who has developed several demos through the application of Intel® technologies. Currently, Peter is a Rapid Prototype Specialist, consulting for both large corporations and startups. He attends many hackathons in his spare time and often wins!

What's New in the Intel® RealSense™ SDK R4 (v.6.0)


The fourth release (R4) of the SDK, version 6.0, is now available as a gold release for the Intel® RealSense™ F200 camera and for the Intel® RealSense™ world-facing R200 camera. Please note that the camera driver/service (DCM) for the F200 or R200 camera must be installed as well as the SDK. Use DCM 1.4.3 for the F200 camera and DCM 2.3 for the R200 camera; they are available on the same download page as the SDK.

The SDK requires Windows 10 64-bit (or Windows 8.1 64-bit with the August update), a 4th generation Intel® processor (or later), an available root USB3 port, 8 GB of RAM, and 6 GB of free disk space. The new SDK may be installed without first uninstalling the older version. Note: the SDK installer won't start when there is not enough free disk space, and it may pause for a long period when selecting language packs (fetched from the web) or changing camera types.

Note: To build an installer for your complete app, please read “Deploying Your Application” in the SDKDevGuide.pdf or Add RealSense Runtime to your App Installer.

The R200 shares blob and face tracking with the F200, but only the R200 supports Scene Perception, Enhanced Photography, and 3D capture of a full head and body. Also, unlike the F200, the R200 requires a graphics driver that supports OpenCL* 1.2. To quickly check which modes each camera supports, see the Samples, which are divided into front, rear, and both.

Deprecated or Replaced Modules or Functions

  • PXCContourExtractor > Use the blob tracking module.
  • PXCBlobExtractor > Use the blob tracking module.
  • PXCDataSmoothing > Use the PXCSmoothing utility.
  • Query/SetBlobSmoothing replaces Query/SetSegmentationSmoothing and Query/SetContourSmoothing.
  • QueryBlob replaces QueryBlobByAccessOrder (BlobData).
  • QueryContourPoints, IsContourOuter, and QueryContourSize were deprecated in BlobData.
  • Emotient* Emotion Detection has been deprecated; facial landmarks and gaze detection can be used instead.
  • The Unity facial animation sample application was removed.

New functionalities:

  • FaceConfiguration/FaceData added GazeConfiguration, GazeCalibData, and GazeData to support gaze tracking.
  • JavaScript/web support is integrated back into the full SDK to support running game apps in the browser. Support is available for blob/hand/face tracking, speech command and control, and Unity C# (web player platform).
  • Face Tracking: the R200 added face landmarks.
  • Face detection and tracking improved thanks to a better classifier and algorithmic adaptation of the face rectangle; face recognition also improved.
  • 3DScan added Get/Set scanning area, Get/Set maximum triangles, and event notifications during scanning.
  • 3DSeg added SetFrameSkipInterval (skip frames during processing), new usage cues (fading) at near/far extents, and optional callback support for user-enter, too-close, and too-far events.
  • BlobConfiguration added Query/SetMaxPixelCount, Query/SetMaxBlobArea, Query/SetMinBlobArea, and EnableColorMapping/IsColorMappingEnabled.
  • BlobData introduced the IContour interface to manage contour data and a QueryContour function to retrieve it.
  • The Calibration interface moved to its own include file: pxccalibration.h.
  • Capture added a static function, DeviceModelToString; added an event notification when the camera list changes; and extended the rotation field in DeviceInfo to specify the camera installation location relative to the display panel. The StreamOption enumerator added an option to read unrectified color streams (R200 only).
  • EnhancedPhoto and PXC[M]EnhancedVideo are new interfaces to support enhanced photography/videography.
  • The SenseManager interface added EnableEnhancedVideo, QueryEnhancedVideo, and PauseEnhancedVideo functions to support enhanced video.
  • HandData added the IContour interface to provide extracted contour data.
  • Hand Tracking added contours to the mask image and fixed mask-image smoothness. Convergence is now 3x faster due to an improved calibration mechanism. Alignment of the base joints was also improved for a more accurate full-hand skeleton, along with detection of a finger pointing at the camera; hands are supported up to 85 cm away.
  • The Image interface extends ImageOption to support retrieving rotated image views.
  • The Photo interface renamed Load/SaveXMP to Load/SaveXDM and added QueryRawDepthImage to retrieve the original depth image.
  • Head orientation improved at wide angles.
  • Landmark confidence improved (fewer false-positive landmarks detected), with better accuracy of 2D/3D landmarks and wide-angle improvement.
  • Object Tracking brings markerless RGB+depth tracking for 2D objects and edge-based 3D object tracking.
  • ToolBox: camera calibration, plus creation of object model and configuration files for 3D feature-based and edge-based tracking.
  • Utilities: fixed Euler angle conversion in a non-default Euler order.
  • Instant 3D tracking (SLAM) to create a map and auto-start tracking, with an extensible learning mode for 2D/3D tracking.
  • The Blob Module added filtering per blob area or by maximum/minimum pixel count, and added Segmentation_Image_Type for mapping blobs to the color stream. It now has better separation between objects that are close but not touching, improved closest-point stability, and consolidated contour and mask smoothing into one setting.

SIGGRAPH 2015 and Intel



I’ve been enamored with graphics technology since I bought an Apple II computer in college and started playing games on it. It was only natural that SIGGRAPH (ACM’s Special Interest Group for GRAPHics) would become my favorite industry event. The fun thing about SIGGRAPH is that it lets you peer into the future. This is the event to attend if you want to see what’s brewing in graphics research labs around the world. Graphics techniques developed in academic labs quickly get incorporated into commercial rendering systems, and you see the results in next year’s films. The techniques get refined further, to create real-time implementations, and then get incorporated into game engines a year or two later, maybe less. It used to take several years for the progression from academic research to film rendering techniques to inclusion in games. But now it seems like this progression occurs in months instead of years.

Graphics hardware now comprises as much as three-fourths of the die on some Intel processors, so it is only natural that SIGGRAPH has also become a favorite industry event for our graphics technology team. We’re doing a bunch of things at SIGGRAPH again this year and you can get the whole SIGGRAPH schedule on our website.

This year’s SIGGRAPH conference will include more game developer material than I’ve seen in my 32 years of attending. Of course the highlight of SIGGRAPH is the technical paper presentations, and Intel will be presenting contributions this year with papers entitled Layered Light-Field Reconstruction for Defocus Blur and Single-View Reconstruction via Joint Analysis of Image and Shape Collections. Our booth will be front and center (South Hall, booth 701) and we’ll be doing 11 demos, including hot-off-the-press implementations of the latest DirectX*, Vulkan* and OpenGL ES* APIs on Intel® Graphics; hardware technology demos showing how Thunderbolt™ increases productivity of 4K workflows; results from research that includes real light capture and editing for visual effects; demos from key Intel partners including Epic, Autodesk, and Wacom; and a new Photoshop plugin that we are building to increase the quality and performance of texture compression on Intel hardware platforms.

We will be conducting a series of talks in Concourse Hall, Room 406B from Tuesday through Thursday. If you want to hear about all that Intel has to offer at SIGGRAPH, come to our Intel® Fast Forward talk at noon on Tuesday. That talk covers everything we’re doing at SIGGRAPH and will be followed by talks on using SIMD for skin deformation (with DreamWorks Animation);  the OSPRay ray tracing framework; the Embree ray tracing library; matrix math (with DreamWorks Animation); Thunderbolt™ 3 technology; optimizing Pixar’s OpenSubdiv library; and light capture and rendering for virtual production. These talks are repeated a couple of different times in order to let you fit them into your schedule at SIGGRAPH.

We are also teaming with Waskul.tv again this year to offer a variety of interesting, live-streamed video interviews with industry figures from academia and industry. Get the latest schedule here and swing by the South Hall lobby to listen to interviews in progress Monday through Thursday of SIGGRAPH. Interviews will also be posted on Waskul.tv for viewing after SIGGRAPH.

We are also organizing a Birds-of-a-Feather (BOF) session for technical artists. This meet-up will occur in Room 504 on Wednesday from 3:30-4:30 pm. Come check it out and give us some ideas about how Intel can improve its products for the content creation segment.

In conjunction with SIGGRAPH, we’ve also released a couple of additional source code samples for Android game developers. One sample shows you how to use Intel® INDE GPA to improve the performance of your Android* game. A second shows you how to add support for Adaptive Volumetric Shadow Maps (aka really cool smoke!) to your Android game. We’ve also just published a great new tutorial on rasterizer order views. Check it out! From the newsroom: we’ve just announced the next-gen enthusiast desktop PC and, in some outstanding news for content creators, we’ve announced that the first Intel® Xeon® processors are coming to notebooks this fall.

Once again, for all of the Intel activities at SIGGRAPH 2015, see our online event schedule, follow us on the web, twitter (@IntelSoftware, @RandiAtIntel, #Intelgamedev), or Facebook. Hope to see you in L.A.!

Image from Vladlen Koltun’s paper, Single-View Reconstruction via Joint Analysis of Image and Shape Collections

DreamWorks Animation/Intel Tech Talk

Before (top) and after (bottom) images demonstrated high-quality, accelerated texture compression using Intel’s texture compression Photoshop plugin.

Project Newton - Total Solution for Smart Home IoT Control


Download PDF

Introduction

“Smart home” is the term used for a wide range of functionality that enables users to control and manage their home devices from their smartphones, tablets, or laptops, for example, controlling a thermostat remotely from a smartphone app. However, smart homes are widely seen as having two main stumbling blocks: there is no unified solution, and they are not yet practical. Project Newton, one of Intel’s latest innovation projects, can help mitigate these issues.

Problems

“There’s no unified solution”: Users want to make their own choices. They want to buy smart devices, like TVs, fridges, and air conditioners, from Apple, Samsung, Tencent, Xiaomi, etc. But each manufacturer has its own smart home solution (for example, different manufacturers follow different communication protocols). It’s almost impossible to standardize these ecosystems.

“They’re not practical yet”: Previous smart home IoT control solutions include voice control, which can be degraded by background noise; phone or other remote control, which is inconvenient because it means fetching a phone, launching the related app, searching for the IoT device you want to control, and so on; and camera-based gesture recognition, which depends on the lighting and on your position and direction in the room.

Project Newton

Project Newton is intended to provide an advanced total solution for smart home IoT control.

  • Connects to any smart home IoT device
  • Does not depend on hand-held devices
  • Is low cost
  • Is not affected by the environment: light intensity, noise level, or the position and direction of the user

We developed an advanced total solution to smart home IoT control, named Project Newton. The system connects all main client platforms (Intel® Core™ processor, Intel® Centrino* processor technology, Intel® Atom™ processor, ARM* mobile platforms) and all IoT platforms (Intel® Edison board, Intel® Galileo board, Raspberry*, Spark*, Mbed*, Freescale*, Arduino Uno*, etc.). Thus, Project Newton can connect platforms running all current mainstream OSs (Windows*, Linux*, Android*) and NonUI-OSs (Mbed*, Contiki*, RIOT*, Spark*, OpenWRT*, Yocto*, WindRiver*, VxWorks*, Raspbian*, etc.) in real time.

Smart home vendors usually define their own sets of application layer communication protocols, but these protocols are relatively closed. In Project Newton, we use CoAP (Constrained Application Protocol), a software protocol intended to allow simple electronic devices to communicate interactively over the Internet. Because CoAP is based on the RESTful model, it converts easily to HTTP, which makes it straightforward to build a smart gateway. The basic prototype of the CoAP design in Project Newton is below.
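As a concrete taste of the protocol (an illustrative sketch based on the public CoAP specification, RFC 7252, not Project Newton’s actual code), the fixed 4-byte CoAP header can be packed and parsed like this:

```javascript
// Pack and parse the fixed 4-byte CoAP header (RFC 7252).
// Illustrative only; a real stack also handles tokens, options, and payload.
function packHeader(msg) {
  const buf = Buffer.alloc(4);
  buf[0] = (msg.version << 6) | (msg.type << 4) | msg.tokenLength;
  buf[1] = msg.code;                 // e.g. 0x01 = GET
  buf.writeUInt16BE(msg.messageId, 2);
  return buf;
}

function parseHeader(buf) {
  return {
    version: buf[0] >> 6,
    type: (buf[0] >> 4) & 0x3,       // 0=CON, 1=NON, 2=ACK, 3=RST
    tokenLength: buf[0] & 0xf,
    code: buf[1],
    messageId: buf.readUInt16BE(2)
  };
}

const wire = packHeader({ version: 1, type: 0, tokenLength: 0, code: 0x01, messageId: 1234 });
console.log(parseHeader(wire)); // round-trips to the original fields
```

The code field mirrors HTTP-style request/response semantics (GET, PUT, POST, DELETE), which is what makes the CoAP-to-HTTP gateway conversion natural.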

CoAP Design of Project Newton

To summarize the OSs supported: the Intel® Galileo board, Intel® Edison board, Arduino Uno, and Spark boards support Arduino programming standards. Popular IoT LPC1768 boards support the Mbed and RIOT operating systems. Mbed is a platform and operating system for Internet-connected devices based on ARM; hardware resources are conveniently operated by calling the corresponding objects. RIOT is an open source operating system that supports multi-threading and several different development boards, and it can be programmed in standard C. An LPC1768 chip needs an external Wi-Fi* module for communication. Windows, Linux, and Android also adapt easily to CoAP. The software architecture of Project Newton is below.

Software Architecture of Project Newton

OS Connection Implementation in Project Newton

  • CoAP Implementation

The CoAP implementation is based on the open source microCoAP code, which can be downloaded. It’s lightweight and written in standard C, so it’s easy to port to different platforms. microCoAP contains four files (CoAP.h, CoAP.c, endpoints.c, and main.c). CoAP.h and CoAP.c implement the CoAP protocol, shown below, and endpoints.c contains the response functions for specific nodes.

CoAP Protocol

The main.c file builds a CoAP server. We mostly use CoAP_packet_t, CoAP_parse, CoAP_handle_req, and CoAP_build. CoAP_packet_t defines the data structure of a CoAP packet. The CoAP_parse function parses the hexadecimal data received from the network or serial port and converts it into the CoAP_packet_t structure. The CoAP_handle_req function analyzes received CoAP packets and generates the appropriate responses. The CoAP_build function translates the response packets back into hexadecimal data. The key source code to build a CoAP server is shown below.
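The overall flow (parse the request, dispatch it to an endpoint handler, build the response) can be mimicked in a few lines of JavaScript. This is a toy sketch with hypothetical endpoint names and payloads, not the microCoAP C code, which operates on binary packets:

```javascript
// Toy request dispatcher mirroring the parse -> handle -> build flow.
// The endpoint paths and payloads here are hypothetical examples.
const endpoints = {
  '/light': function (req) { return { code: '2.05', payload: 'on' }; }
};

function handleRequest(req) {
  const handler = endpoints[req.path];
  if (!handler) {
    return { code: '4.04', payload: 'Not Found' }; // CoAP "Not Found"
  }
  return handler(req); // 2.05 is CoAP "Content" (a success response)
}

console.log(handleRequest({ path: '/light' }));   // { code: '2.05', payload: 'on' }
console.log(handleRequest({ path: '/missing' })); // { code: '4.04', payload: 'Not Found' }
```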

The key source code to build a CoAP server

  • Arduino* Implementation

Arduino is an open-source electronic prototyping platform that defines an Arduino software and hardware standard. The Intel® Edison board, the Intel® Galileo board, and the Spark core board follow the Arduino standard. An Arduino sketch mainly consists of two functions: setup and loop. The setup function initializes the hardware and is called once before loop. The loop function is an infinite loop that executes the main logic.

To add CoAP to an Arduino board, just add the three files (CoAP.h, CoAP.c, endpoints.c) to the project and modify the setup and loop functions according to main.c in microCoAP, as shown below.

CoAP server on Arduino* platform

  • Mbed Implementation

Mbed is an object-oriented C++ library developed for the ARM Cortex*-M processor. General purpose input/output (GPIO) and other hardware resources are operated through the related classes. However, there is no default Wi-Fi module or library, so a UART-to-Wi-Fi module is used to add Wi-Fi functionality to Mbed. A WIFI class was developed from the UART-Wi-Fi module’s datasheet; each of its methods sends the corresponding command string to the module.
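For illustration, the WIFI class’s methods might format command strings along these lines (the AT-style command names below are an assumption for the sketch; the actual strings must come from the specific module’s datasheet):

```javascript
// Hypothetical AT-style command builders for a UART-to-Wi-Fi module.
// The real command set depends on the module's datasheet.
function joinNetwork(ssid, password) {
  return 'AT+CWJAP="' + ssid + '","' + password + '"\r\n';
}

function openUdpSocket(host, port) {
  return 'AT+CIPSTART="UDP","' + host + '",' + port + '\r\n';
}

// An Mbed WIFI class would write these strings out over the serial port.
console.log(joinNetwork('homenet', 'secret'));
console.log(openUdpSocket('192.168.1.10', 5683)); // 5683 is the default CoAP port
```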

Add the three files (CoAP.h, CoAP.c, endpoints.c) to the project and modify the main.c file according to the one in the microCoAP project. To compile it, just run the make command; everything has been set properly in the Makefile, shown below.

CoAP server on Mbed platform

  • RIOT Implementation

RIOT is an open source, developer-friendly operating system for IoT that supports several boards, such as the Mbed LPC1768 and the Spark Core development kit. It supports programming in C and C++. The tests folder contains many sample APIs for connecting to the hardware. Similar to Mbed, the CoAP server can be implemented by turning the Mbed Wi-Fi class into a C function and modifying the APIs related to the serial port. However, it often crashed in our experiments, so it’s just for testing. To compile it, use the “BOARD=Mbed_lpc1768 make clean all flash” command in the root folder, and use “BOARD=Mbed_lpc1768 make term” to monitor the serial port and get print output from the board. See the CoAP server code structure for RIOT below.

RIOT code structure

  • Contiki Implementation

Contiki is a multi-process open source OS. Its examples folder contains several samples of using APIs to control hardware. It also does not have a default Wi-Fi module. To implement the CoAP server, use functions similar to the RIOT ones and rewrite the UART sample. To compile it, use the “TARGET=cc2530dk make” command and download the binary using J-Link. Check out the sample code below.

Sample code of CoAP server on Contiki

  • OpenWRT Implementation

OpenWRT is an embedded Linux for routers that supports the standard Linux API, so the microCoAP code can be used directly. In the package folder, set up the microCoAP project and modify the Makefile according to the Makefile of another project. Then, run the make command in the root directory. The compiled app will be in the bin folder and can be uploaded to the development board via FTP or USB storage. Finally, run “opkg install CoAP*.ipk” to install the CoAP app in OpenWRT, and add “/usr/bin/CoAP &” to the /etc/rc.local file. See the CoAP server code structure for OpenWRT below.

OpenWRT code structure

Practical Solution in Project Newton

To be practical, an IoT control solution must be able to control any IoT device through natural gestures, without requiring the user to wear electronic devices, and without being affected by environmental light, noise, and so on. One way such a solution could be implemented is by using a 9-axis gyroscope to detect the user’s hand motions.

A sample of 9 axis gyroscope with Wi-Fi*

A pattern recognition algorithm analyzes the data in real time and detects the user’s position, direction, and gestures. By calculating the relative position and direction between the user and the IoT devices, the system can identify which IoT device the user is facing. By further calculating the hand direction and gestures, the targeted IoT device is selected and controlled.
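A simplified sketch of the selection step might look like the following (illustrative only; the device names, bearings, and tolerance are made up, and Project Newton’s actual algorithm is more sophisticated):

```javascript
// Pick the device whose bearing from the user is closest to the
// direction the user is pointing; reject everything outside a tolerance.
function selectDevice(pointingDeg, devices, maxErrorDeg) {
  let best = null;
  let bestErr = Infinity;
  for (const d of devices) {
    let err = Math.abs(d.bearingDeg - pointingDeg) % 360;
    if (err > 180) err = 360 - err;  // shortest angular distance
    if (err < bestErr) { best = d; bestErr = err; }
  }
  return bestErr <= maxErrorDeg ? best : null;
}

const devices = [
  { name: 'lamp', bearingDeg: 10 },
  { name: 'tv', bearingDeg: 200 }
];
console.log(selectDevice(15, devices, 30));  // the lamp: only 5 degrees off
console.log(selectDevice(100, devices, 30)); // null: nothing within tolerance
```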

Show Cases of Project Newton

Here are the demos from our lab, where Project Newton was used to connect and control a variety of devices running different OSs. The demos include the lab environment (Figure 1), controlling development boards (an Intel® Edison board, Figure 2), controlling Android devices (Figure 3), controlling a robot arm (Figure 4), and controlling a robot car (Figure 5).

Figure 1: The demo environment in our lab

Figure 2: Select (blue) and control (green) boards in Project Newton

Figure 3: Control Android devices in Project Newton

Figure 4: Control Robot Arm in Project Newton

Figure 5: Control Robot Car in Project Newton

 

Going Further

Project Newton, an advanced total solution for smart home IoT control, is an easy-to-use solution that gives users the ability to connect any smart home IoT device they want. It’s a low-cost solution that frees the user from having to carry a hand-held device, and it does not rely on environmental parameters like light, noise level, etc.

However, like all open source projects, Project Newton could be even better. Our next steps are to improve gesture recognition, boost performance, and incorporate wearable devices. For example, the next-generation Google* Glass reportedly will sport a larger prism and will be powered by an Intel Atom processor, which could be a better practical IoT control solution for Project Newton.

 

About the Author

Zhen Zhou earned an MS degree in software engineering from Shanghai Jiaotong University. He joined Intel in 2011 as an application engineer in the Developer Relations Division Mobile Enabling Team. He works with internal stakeholders and external ISVs, SPs, and carriers in the new usage model initiative and prototype development on the Intel Atom processor, traditional Intel® architecture, and embedded Intel architecture platforms.

Case Study: Implementing Intel® x86 Support for Android* with CRI Middleware


Download PDF

Overview

Android* devices powered by the Intel® Atom™ processor are rising in popularity, and supporting applications are being released continuously. To meet the needs of application developers focused on creating games for Android devices with Intel Atom processors, middleware companies began supporting x86.

One such company, CRI Middleware Co., Ltd., offers x86 support in its runtime libraries for Android middleware. It did this by changing the build settings of the makefile in the Android NDK and replacing the ARM* NEON* instructions. The x86 support for the middleware runtime library on Android devices includes a plug-in for Unity*, a game engine developed by Unity Technologies, which allows developers to build games by simply setting the x86 folder as the build target with the Android NDK.

About CRI

CRI is a Japanese company researching, developing, and selling audio/video middleware. Founded in 1983 as a research institute of the independent IT solution vendor CSK Corporation, CRI developed middleware for the Sega home video game console. After going independent in 2001, it has racked up accomplishments with middleware for gaming consoles and is now producing products under the CRIWARE* brand.

In the past few years CRI has been focused on supporting smartphones and expanded its middleware for the iPhone* and Android. CRI currently provides the integrated sound middleware CRI ADX*2 for Android, the high quality and high performance movie middleware CRI Sofdec* 2, and the file compression/packing middleware FileMajik PRO. Using these, game and application developers are able to add powerful video to games and new rendering of communication with interactive sound, all while increasing development efficiency.

Integrated sound middleware CRI ADX 2

High quality and high performance movie middleware CRI Sofdec 2

Intel® x86 Supported Backgrounds

CRI has been focused on Intel x86 support for middleware since 2012 when Android devices powered by the Intel Atom processor arrived in the market. Software developers began to call for Android x86 emulator support. There were issues with the operating speed of the original ARM emulator that emulates the ARM architecture, but high-speed operation was achieved using the x86 emulator and the Intel® Hardware Accelerated Execution Manager (Intel® HAXM). Many developers who used it said that they wanted to use CRIWARE with the x86 emulator for development efficiency gains, and thus an x86 support version was first released in June 2013.

Yusuke Urushihata, CRI unit head, states, “X86 support is simple to achieve using Android NDK. We were actually able to add support by simply adding one line for the output destination to the makefile.”

CRI released CRIWARE for the Android NDK development environment.

NDK development environment

Later, in August 2014, the game engine company Unity Technologies announced x86 support for Unity 4 and 5, and a native development environment was released. As a result, developers then called for CRI middleware x86 support for the Unity library, which CRI later released.

Development integration manager at CRI, Atsushi Sakurai, stated, “Game developers (users) using the 3D game engine Unity strongly requested x86 support, so we decided to provide x86 middleware support.”

Unity development environment

Other than Unity, the 2D-oriented Cocos2d-x is another CRI-supported video game framework; it runs in the native Android development environment. Because it is written in C++, it does not require a special library.

Development Focused on x86 Support

CRI developed the runtime library with the program development environment Android NDK, creating it in a pre-determined order and adding support without any hitches.

Most of the work involved modifying the build settings in the Android NDK makefile (Android.mk), adding lines so that the output is produced in x86 format.

For the video decode processing, CRI had used ARM NEON instructions; however, these produced a make error when building for x86. Therefore, CRI rewrote the decode processing code in optimized C and C++ that has proven itself on Intel® architecture-based platforms.

To establish x86 support quickly, it was crucial that the CRI-provided middleware was multi-platform. Multi-platform support was part of the original design concept of CRI’s middleware, so a framework exists for switching implementations (such as the NEON code paths) to a different environment if a target-specific problem arises. For example, differences in endianness between targets are automatically detected, and CRIWARE configures the build for little-endian environments accordingly.
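The host byte order check behind this kind of automatic handling can be sketched in JavaScript (illustrative only; CRIWARE’s actual implementation is C/C++ inside the middleware):

```javascript
// Detect the host's native byte order, the kind of check portable
// middleware relies on before byte-swapping serialized data.
function hostIsLittleEndian() {
  // Typed arrays use the platform's native byte order.
  const bytes = new Uint8Array(new Uint16Array([0x0102]).buffer);
  return bytes[0] === 0x02; // low byte stored first => little-endian
}

console.log(hostIsLittleEndian() ? 'little-endian' : 'big-endian');
```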

How to use CRIWARE* for x86 Support

CRI provides libraries for ARM and x86.

The procedure for using Unity for video game development and CRIWARE to provide x86 support for Android devices with Intel Atom processors is quite simple: all you need to do is use the Android NDK to run the application’s build. As long as you have a test environment with an Intel Atom processor-based Android device, x86 support is easy to achieve.

In the native development environment, you modify the application makefile with the x86 target settings and run the build. During this process, the library with x86 support is stored in an x86 folder that is automatically created in the libs folder.

Evaluation and Future Prospects of Intel® Atom™ Processor-based Devices running Android

At present, CRI has not yet finished its data analysis, so it hasn’t drawn any conclusions about the performance of Android devices powered by the Intel Atom processor versus ARM-powered ones.

Nevertheless, an application’s video and audio processing speed almost always depends more on the platform and the audio/video drivers than on the processor architecture itself. Intel Atom processor-based devices run smoothly and do not exhibit slow operations or unnatural screen movements.

Moving forward, the possibility of using Intel® Streaming SIMD Extensions (Intel® SSE) is being examined to replace the ARM NEON video decode instructions. Furthermore, preparations for 64-bit Android have also begun. CRI is therefore hoping for continued provision of information and technical support from Intel.


Optimization Techniques for the Intel® MIC Architecture: Part 3 of 3


Abstract

This is part 3 of a 3-part educational series of publications introducing select topics on optimization of applications for Intel’s multi-core and manycore architectures (Intel® Xeon® processors and Intel® Xeon Phi™ coprocessors).

In this paper we discuss false sharing, highlighting the situations in which it may occur, and eliminating it with the help of data container padding.

For a practical illustration, we construct and optimize a micro-kernel for binning particles based on their coordinates. Similar workloads occur in Monte Carlo simulations, particle physics software, and statistical analysis.

Results show that the impact of false sharing may be as high as an order of magnitude performance loss in a parallel application. On Intel Xeon processors, padding required to eliminate false sharing is greater than on Intel Xeon Phi coprocessors, so target-specific padding values may be used in real-life applications.

Download the full article (PDF)

 

The second publication of this series demonstrated optimization of this workload, focusing on vectorization: Optimization Techniques for the Intel® MIC Architecture: Part 2 of 3

 

Optimization Techniques for the Intel® MIC Architecture: Part 1 of 3


Abstract

This is part 1 of a 3-part educational series of publications introducing select topics on optimization of applications for the Intel multi-core and manycore architectures (Intel® Xeon® processors and Intel® Xeon Phi™ coprocessors).

In this paper we focus on thread parallelism and race conditions. We discuss the usage of mutexes in OpenMP* to resolve race conditions. We also show how to implement efficient parallel reduction using thread private storage and mutexes.

For a practical illustration, we construct and optimize a micro-kernel for binning particles based on their coordinates. Such a workload occurs in applications such as Monte Carlo simulations, particle physics software, and statistical analysis. The optimization technique discussed in this paper leads to a performance increase of 25x on a 24-core CPU and up to 100x on the MIC architecture compared to a single-threaded implementation on the same architectures.

Download the full article (PDF)

In the next publication of this series, we will demonstrate further optimization of this workload, focusing on vectorization: Optimization Techniques for the Intel® MIC Architecture: Part 2 of 3

Coming Soon…The New Intel® Parallel Studio XE


It’s coming. …

The new Intel® Parallel Studio XE will ship soon.  It includes all-new libraries and tools to boost performance of big data analytics, large MPI clusters, or any code that benefits from parallel execution.  Leverage the power of Intel® processors like never before. 

Product highlights include:

  • Make code faster using both vectorization and threading, with new vectorization assistance
  • Boost the speed of data analytics and machine learning programs with the new data analytics acceleration library
  • Improve large-cluster performance with the ability to profile large MPI jobs faster

To learn more, visit Intel Software Development Products in the Intel® Software Pavilion at IDF.  Be sure to attend these technical sessions:

Enabling Microsoft* Azure* on the Intel® Edison Board


This article explains how to establish a connection to Microsoft Azure cloud services using the nodeJS* API. While this was written for the Intel® Edison board, it should work on any platform that runs nodeJS. It covers creating a storage account and storing and retrieving data.

Create an Azure storage account

Log in to the Azure Management Portal and click New at the bottom of the page.

Select DATA SERVICES -> STORAGE -> QUICK CREATE and type a name to use in the URI for the storage account. Make sure that a green check mark appears, indicating a unique name.

Click CREATE STORAGE ACCOUNT.

Set up your development environment

Install the azure-table-node npm module into your project

npm install azure-table-node

This article uses the third-party node module azure-table-node for convenience. You can also use the official Azure client library for node.

Set up an Azure storage connection

Create a node reference variable for the module.

var azureTable = require('azure-table-node');

The setDefaultClient function requires the storage account name and access key, which can be obtained from the Azure portal.

  // accountName and accessKey come from your configuration (Azure portal)
  azureTable.setDefaultClient({
    accountUrl: 'https://' + this.accountName + '.table.core.windows.net/',
    accountName: this.accountName,
    accountKey: this.config.accessKey,
    timeout: 10000
  });

Create table

Get the default client created above. Use this client to perform operations on Azure storage.

var defaultClient = azureTable.getDefaultClient();

Create a table with a name using the client.

// use the client to create the table; cb is your completion callback
defaultClient.createTable('tableName', cb);

Store Data

Azure table storage takes data as entities. Each entity must have a PartitionKey and a RowKey, and the RowKey must be unique within a partition.

entity['PartitionKey'] = entity.sensor_id.toString();
entity['RowKey'] = Date.parse(entity.timestamp).toString();
entity['value'] = 86;
client.insertOrReplaceEntity('tableName', entity, function(err, data) {
  // err is null on success
  // data contains the etag
});

Query Data

Timestamp-based query

dataQuery = azureTable.Query.create('PartitionKey', '==', readQuery.sensor_id).and('timestamp', '>=', readQuery.timestamp);

Sensor-ID-based query

dataQuery = azureTable.Query.create('PartitionKey', '==', readQuery.sensor_id);

Call queryEntities on the client, with options to return only the required fields: sensor_id, timestamp, and value.

client.queryEntities('tableName', {
    query: dataQuery,
    onlyFields: ['sensor_id', 'timestamp', 'value']
  }, function(err, results, continuation) { /* handle results */ });

References

Microsoft documentation - https://azure.microsoft.com/en-us/develop/nodejs/

Azure node project - https://www.npmjs.com/package/azure

Azure Table Storage client - https://www.npmjs.com/package/azure-table-node

Azure - https://msdn.microsoft.com/en-us/library/azure/dn578280.aspx

Azure Storage Explorer - https://azurestorageexplorer.codeplex.com/

Enabling Google Cloud on the Intel® Edison Board


This article explains how to establish a connection to Google cloud services using the nodeJS* API. While this was written for the Intel® Edison board, it should work on any platform that runs nodeJS. It covers creating a cloud project and storing and retrieving data.

Create Google cloud project

Log in to the Google cloud portal and click Create Project at the top-left of the page.

Type a name for the project and press the Create button.

An activity window appears at the bottom of the page. Once the project is created, the project overview page shows up.

Select APIs (under APIs & auth) from the left pane.

Search for the required API and click Enable API.

For example, the Google Cloud Datastore API.

Now select Credentials from the left pane, press the Create new Client ID button, and select Service account.

After successful creation, a private key file (*.json) is downloaded. Save this file; it will be used for authentication.

Setting up the development environment

Install Google Cloud Node.js Client

npm install --save gcloud

Copy the downloaded JSON key file to the board.

Create a datastore/db object

"cloud" : {"projectId": "iot-cloud-project","keyFilename": "./key.json"
    }

db = gcloud.datastore.dataset(cloud);

Store Data

Each entity in the datastore requires a unique key, which consists of namespace and path attributes.

  key = {"namespace": this.namespace,"path": [data[i].sensor_id, data[i].timestamp]
           }
  db.save({
            key: key,
            data: data[i]
        }, function(err) {
            if (!err) {
              //console.log("Test data saved");
            }
    });

Query Data

Timestamp-based query

dataQuery = db.createQuery(namespace, readQuery.sensor_id)
  .filter('timestamp >=', readQuery.timestamp);

Sensor-ID-based query

dataQuery = db.createQuery(namespace, readQuery.sensor_id);

Run query

db.runQuery(dataQuery, function(err, result) {
  if (err) {
    logger.error(JSON.stringify(err, null, ''));
  } else {
    logger.info('Google - Data received: %d', result.length);
  }
});

References

Google Cloud documentation - https://cloud.google.com/docs/

Google Cloud Node.js Client - https://github.com/GoogleCloudPlatform/gcloud-node

Cloud Datastore concepts - https://cloud.google.com/datastore/docs/concepts/overview

Cloud Datastore API - https://googlecloudplatform.github.io/gcloud-node/#/docs/v0.14.0/datastore

Intel® Galileo Gen 2 Board Schematics

Intel® Galileo Gen 1 Board Schematic


Intel® Edison Kit for Arduino* Hardware Guide

Intel® Edison Board - Arduino* Expansion Board Schematic

Intel® Edison Mini Breakout Board Hardware Guide

Intel® Edison Mini Breakout Board Schematic

Intel® Game Quality Assurance Evangelism Program


Download PDF

The Intel® game developer program, Achievement Unlocked, is designed to help game developers at every stage along their journey. We are here to help you write highly optimized games that take advantage of Intel’s latest graphics hardware. To make a good game, you need good Quality Assurance (QA) testing. Implementing a robust QA process is vital for releasing a problem-free game to your customers. However, QA testing is not a simple thing. Major bugs can be difficult to find and reproduce: some occur only under certain conditions, are difficult to observe, or are not 100% reproducible. It is impractical to test everything on every platform you intend to support. Efficient time management and comprehensive methodologies are required to get good coverage and reduce bug escapes.

To help you meet your quality goals, Intel has created the Game Quality Assurance Evangelism Program. As part of Achievement Unlocked, this program can help you achieve your vision through QA testing assistance and knowledge. We have partnered with top QA labs in the industry to help educate and support QA testing for your game. Keep reading to learn about the benefits of the Game Quality Assurance Evangelism Program and how to get help with all your QA questions and issues. More benefits are being planned, so stay tuned for details!

Learn:

The Game Developer Zone on IDZ is the place to learn about developing games on Intel graphics. Here, you can read blogs, articles, and white papers on testing tools, methodologies, and other topics written by our QA partners and Intel validation engineers. Effective QA is complex and involves many aspects, but it’s required for developing great games.

Discuss:

Have questions about QA testing? Or want to discuss QA topics? Visit the forum at the Game Developer Zone. It is the community for game developers to share information, discuss issues and ask questions about Intel graphics.

https://software.intel.com/en-us/forums/developing-games-and-graphics-on-intel

Test:

Intel has partnered with some of the top QA labs in the industry, and they are here to help you make the best game possible. As partners in the Game Quality Assurance Evangelism Program, they have access to in-development graphics drivers and the Intel driver development team to help get driver issues fixed in time for your game’s release. The current QA partners are listed below; this list is expected to grow over time.

Quality Assurance Partners:

Enzyme

Enzyme was founded in 2002 by Yan Cyr and Emmanuel Viau, two pioneers in the video game industry. Using their international experience and the expertise of all the Enzymers, we have combined creativity and discipline in order to create Quality Assurance (QA) services and a testing methodology that add value to the clients’ products.

We are a passionate community dedicated to QA for video games, apps, software and websites. Whether you need QA testing, PC/Mobile compatibility testing, project evaluation or focus groups, or you are looking for localization or linguistic resources, or you need methodology or project management consultants, partnering with us will contribute to the achievement of your goals.

Our mission is to put our passionate workforce to use and contribute to the success of your projects.

OUR FUNDAMENTAL VALUES

OUR TEAM holds an important place at the heart of our operations. We favour interactions between individuals and teams over the strict use of processes and procedures. A cohesive team with strong communication is of greater value than a group of individuals, while very competent, working in an isolated way.

THE TESTING METHODOLOGY must provide a maximum quality result at the lowest possible cost. While structuring and controlling the teams’ performances, the testing tools must allow for the integration of creativity, judgment and the added value of individuals and their skills. In this sense, our approach to project management stimulates the constant sharing of skills within our teams.

COLLABORATION with our clients, rather than simple contractual negotiation, is a powerful incentive for our teams. We consider ourselves a link in each client's production chain, which means understanding their needs, expectations, and procedures early on. Continuous communication ensures maximum compatibility between the clients' expectations and the results we produce.

ADAPTING TO CHANGE is part of our teams' DNA, our approach to project management, and our testing methodology. Initial project planning, as well as staff management, must be flexible in order to respond to each client's needs over the course of each project. This ability to deal with change also serves as a means for the continuous improvement of our approaches, processes, procedures, and tools.


GlobalStep

GlobalStep is a premier technology services firm providing QA and customer support to the digital media and interactive entertainment industries. We are acknowledged as a worldwide leader in functionality, compliance, and compatibility solutions and are rapidly growing our test engineering and automation teams. As a synergistic extension to our QA support, GlobalStep provides fully integrated customer and technical support services with Tier 1, 2, and 3 escalation mechanisms and knowledge-base maintenance.

With a team now totaling more than 500 full-time test professionals, we develop and implement customized solutions across multiple platforms, including game consoles, PC/Mac, iOS, Android, Kindle, and Windows Mobile. Our clients range from Fortune 100 companies to independent developers.

While the DNA of the company lies in the PC and game console platforms, we also house one of the largest, most experienced mobile test teams in the world. Within the mobile space, GlobalStep has tested iOS products since the inception of Apple's App Store, as well as Windows Mobile and Android products for over 5 years. To date we have provided over 1 million hours of mobile test solutions to our clients.

At GlobalStep we provide best-in-class quality and service at an attractive price point, and we are known for being flexible and responsive in an ever-changing industry.


Testology

Andy Robson, a Bullfrog and Lionhead industry veteran of 20+ years, formed Testology in 2006 with the intention of delivering development-aware functional testing to the entire video games industry. Now an industry-diverse business, Testology provides top-quality functional testing for all digital platforms in the video games, web and digital, gambling, and virtual reality industries. Winner of three Develop Excellence Awards (Best Service Provider 2011, and Best QA Provider in both 2014 and 2015), Testology continues to extend its high-quality, personable, and attentive approach to outsourcing across new technologies and generations of hardware. With an unrivalled device inventory (50+ iOS and 100+ Android devices running multiple software versions) and an expert awareness of mobile testing requirements, mobile QA occupies an ever-increasing share of the Testology clientele, and their tested products frequently feature on Apple's "Best of" lists.

Testology has worked on hundreds of multi-platform products for some of the most revered developers and publishers in the industry. With a unique and passionate perception of QA, Testology can create and execute complete or supportive test phases for developers and products of all sizes, establishing a reputation as the "go-to guys" for testing. Over 85 unique projects pass through its UK-based office every month, with all testing done in-house by over 100 expertly trained testers who understand the business philosophy as well as the job at hand, an important part of the Testology ideology.

Testology's services focus on functional testing, compliance testing (console and mobile), experience development and product feedback, and device/browser compatibility. Each product is treated with custom attentiveness, so no two test phases are the same. This approach is reinforced by a commitment to service through communication, availability, and flexibility.


Testronic

Testronic delivers a complete spectrum of QA, localization, compatibility, and customer support solutions at the highest level to a wide variety of games industry companies, large and small. Games tested by Testronic's PC Compatibility lab entertain over 50 million global players per month.

Our PC compatibility lab supports thousands of potential configurations and hardware peripherals, managed by expert technicians who will work with you to define a test plan that suits your needs and budget.


VMC

VMC offers Achievement Unlocked program members insights into the history and future direction of the games industry, along with proven QA best practices, processes, and tools that result in solid, enjoyable games. VMC can also leverage years of experience to provide developers with technical guidance that will save you time and maximize Intel's technological capabilities.

VMC provides global, end-to-end, pre- and post-launch production support services for every stage of product development and release. VMC partners with companies of all sizes to get better products to market faster, and to deliver exceptional support throughout the product lifecycle. Our scalable, strategic outsourcing services are customized to align with how your business operates. VMC can provide you, the future game leaders, with useful information to address a wide variety of game-production issues.

Quality is ingrained in our company culture and directly aligns with your commitment to create exceptional user experiences; we offer comprehensive testing for functionality, compatibility, compliance, certification, localization, and usability in our secure facilities. Our experienced test teams blend hands-on play, efficient automation, and relevant reporting to deliver qualitative and quantitative data about your game's performance. Players also expect a fast, pleasant, and effective support experience, and VMC offers multi-channel player support through live game operations to serve your community.

By participating in the Achievement Unlocked program, VMC aims to support game developers as they establish brand recognition for their original games, learn how to protect their intellectual property (IP), provide high-quality gaming experiences to players, and broaden their exposure to increase their chances for commercial success.
