
Where can I find the installation log files?


 

Windows

Intel Parallel Studio XE <version> (including MKL, IPP, DAAL, TBB, MPI)

%temp%\pset_tmp_PSXE<version>_<username>

Example: PSXE 2016 = %temp%\pset_tmp_PSXE2016_<username>

 

Intel System Studio <version>

%TEMP%\pset_tmp_ISS<version>_<username>

%TEMP%\pset_tmp_ISS<version>WT_<username>

Example: Intel System Studio 2016 Windows Target = %TEMP%\pset_tmp_ISS2016WT_<username>

 

Intel Video Pro Analyzer

Intel OpenCL SDK

%temp%\intel_tmp_<username>

 

Linux

/tmp/intel.pset.UID***.log

/tmp/intel.issa.UID***.log

Mac OS

/tmp/intel.pset.UID***.log

/tmp/intel.issa.UID***.log


What's New? Intel® Threading Building Blocks 4.4 Update 3


Changes (w.r.t. Intel TBB 4.4 Update 2):

- Modified parallel_sort to not require a default constructor for values
    and to use iter_swap() for value swapping.
- Added support for creating or initializing a task_arena instance that
    is connected to the arena currently used by the thread (see the sketch
    after this list).
- graph/binpack example modified to use multifunction_node.
- For performance analysis, use Intel(R) VTune(TM) Amplifier XE 2015
    and higher; older versions are no longer supported.
- Improved support for compilation with disabled RTTI, by omitting its use
    in auxiliary code, such as assertions. However some functionality,
    particularly the flow graph, does not work if RTTI is disabled.
- The tachyon example for Android* can be built using Android Studio 1.5
    and higher with experimental Gradle plugin 0.4.0.
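As an illustration of the task_arena change noted above, here is a minimal C++ sketch; the attach-to-current-arena constructor name (task_arena::attach) is assumed from the TBB documentation for this release.

#include "tbb/task_arena.h"

void example() {
    // Create a task_arena object connected to the arena the calling
    // thread is currently using (assumed API: task_arena::attach).
    tbb::task_arena arena{ tbb::task_arena::attach() };
    arena.execute([]{ /* work runs in the current thread's arena */ });
}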

Preview Features:

- Added class opencl_subbuffer that allows using OpenCL* sub-buffer
    objects with opencl_node.
- Class global_control supports the value of 1 for
    max_allowed_parallelism.

Bugs fixed:

- Fixed a race causing "TBB Warning: setaffinity syscall failed" message.
- Fixed a compilation issue on OS X* with Intel(R) C++ Compiler 15.0.
- Fixed a bug in queuing_rw_mutex::downgrade() that could temporarily
    block new readers.
- Fixed speculative_spin_rw_mutex to stop using the lazy subscription
    technique due to its known flaws.
- Fixed memory leaks in the tool support code.

Intel® MPI Library 5.1 Update 3 Readme


The Intel® MPI Library is a high-performance interconnect-independent multi-fabric library implementation of the industry-standard Message Passing Interface, v3.0 (MPI-3.0) specification. This package is for MPI users who develop on and build for Intel® 64 architectures on Linux* and Windows*, as well as customers running on the Intel® Xeon Phi™ coprocessor on Linux*. You must have a valid license to download, install, and use this product.

The Intel® MPI Library 5.1 Update 3 for Linux* and Windows* packages are now ready for download. The Intel® MPI Library is available as a stand-alone product and as part of the Intel® Parallel Studio XE Cluster Edition. Please visit the Intel® Software Evaluation Center to evaluate this product.

New in this release:

  • Fixed shared memory problem on Intel® Xeon Phi™ processor (codename: Knights Landing)
  • Added new algorithms and selection mechanism for nonblocking collectives
  • Added new psm2 option for Intel® Omni-Path fabric
  • Added I_MPI_BCAST_ADJUST_SEGMENT variable to control MPI_Bcast
  • Fixed long count support for some collective messages
  • Reworked the binding kit to add support for Intel® Many Integrated Core Architecture and support for ILP64 on third party compilers
  • The following features are deprecated in this version of the Intel MPI Library. For a complete list of all deprecated and removed features, visit our deprecation page.
    • SSHM
    • MPD (Linux*)/SMPD (Windows*)
    • Epoll
    • JMI
    • PVFS2

Refer to the Intel® MPI Library 5.1 Release Notes for more details.

Contents:

  • Intel® MPI Library 5.1 Update 3 for Linux*
    • File: l_mpi_p_5.1.3.181.tgz

      A file containing the complete product installation for Linux (x86-64bit/Intel® Xeon Phi™ coprocessor development)

    • File: l_mpi-rt_p_5.1.3.181.tgz

      A file containing the free runtime environment installation for Linux (x86-64bit/Intel® Xeon Phi™ coprocessor development)

  • Intel® MPI Library 5.1 Update 3 for Windows*
    • File: w_mpi_p_5.1.3.180.exe

      A file containing the complete product installation for Windows (x86-64bit development)

    • File: w_mpi-rt_p_5.1.3.180.exe

      A file containing the free runtime environment installation for Windows (x86-64bit development)

A Developer’s Guide To Intel® RealSense™ Camera Detection Methods


Abstract

The arrival of a new and improved front-facing camera, the SR300, has necessitated changes to the Intel® RealSense™ SDK and the Intel® RealSense™ Depth Camera Manager that may prevent legacy applications from functioning. This paper provides an overview of some key aspects in developing camera-independent applications that are portable across the different front-facing cameras: Intel® RealSense™ cameras F200 and SR300. It also details several methods for detecting the set of front- and rear-facing camera devices featured in the Intel RealSense SDK. These methods include how to use the installer scripts to detect the local capture device as well as how to use the Intel RealSense SDK to detect the camera model and its configuration at runtime. This paper is intended for novice and intermediate developers who have either previously developed F200 applications and want to ensure compatibility on SR300-equipped systems or want to develop new Intel® RealSense™ applications targeting SR300’s specific features.

Introduction

The arrival of the new and improved front-facing Intel RealSense camera SR300 has introduced a number of changes to the Intel RealSense SDK as well as new considerations to maintain application compatibility across multiple SDK versions. As of the R5 2015 SDK release for Windows*, three different camera models are supported including the rear-facing Intel RealSense camera R200 and two front-facing cameras: the Intel RealSense camera F200 and the newer SR300. The SR300 brings a number of technical improvements over the legacy F200 camera, including improved tracking range, motion detection, color stream and IR sensors, and lower system resource utilization. Developers are encouraged to create new and exciting applications that take advantage of these capabilities.

However, the presence of systems with different front-facing camera models presents several unique challenges for developers. There are certain steps that should be taken to verify the presence and configuration of the Intel RealSense camera to ensure compatibility. This paper outlines the best-known methods to develop a native SR300 application and successfully migrate an existing F200 application to a SR300 platform while maintaining compatibility across both cameras models.

Detecting The Intel® RealSense™ camera During Installation

In order to ensure support, first verify which camera model is present on the host system during application install time. The Intel RealSense SDK installer script provides options to check for the presence of any of the camera models using command-line options. Unless a specific camera model is required, we recommend that you use the installer to detect orientation (front or rear facing) to maintain portability across platforms with different camera models. If targeting specific features, you can check for specific camera models (Intel RealSense cameras F200, SR300, and R200) by specifying the appropriate options. If the queried camera model is not detected, the installer will abort with an error code. The full SDK installer command list can be found on the SDK documentation website under the topic Installer Options. For reference, you can find the options related to detecting the camera as well as sample commands below.

Installer Command Options

--f200
--sr300
--r200

Force a camera model check such that the runtime is installed only when the requested camera model is detected. If the camera model is not detected, the installer aborts with status code 1633.

--front
--rear

The --front option checks for any front-facing camera and the --rear option checks for any rear-facing camera.

Examples

Detect presence of any rear-facing camera and install the 3D scan runtime silently via web download:

intel_rs_sdk_runtime_websetup_YYYY.exe --rear --silent --no-progress --acceptlicense=yes --finstall=core,3ds --fnone=all

Detect presence of an F200 camera and Install the face runtime silently:

intel_rs_sdk_runtime_YYYY.exe --f200 --silent --no-progress --acceptlicense=yes --finstall=core,face3d --fnone=all

Detecting The Intel RealSense Camera Configuration at Runtime

After verifying proper camera setup at install time, verify the capture device and driver version (that is, Intel RealSense Depth Camera Manager (DCM) version) during the initialization of your application. To do this, use the provided mechanisms in the Intel RealSense SDK such as DeviceInfo and the ImplDesc structures. Note that the device information is only valid after the Init function of the SenseManager interface.

Checking the Camera Model

To check the camera model at startup, use the QueryDeviceInfo function, which returns a DeviceInfo structure. The DeviceInfo structure contains a DeviceModel member that covers all supported camera models. Note that the values enumerated by DeviceModel are predefined camera models that will change as the SDK evolves. Verify that the SDK version on which you are compiling your application is recent enough to include the camera model that your application requires.

Code sample 1 illustrates how to use the QueryDeviceInfo function to retrieve the currently connected camera model in C++. Note that the device information is only valid after the Init function of the SenseManager interface.

Code Sample 1: Using DeviceInfo to check the camera model at runtime.

// Create a SenseManager instance
PXCSenseManager *sm=PXCSenseManager::CreateInstance();
// Other SenseManager configuration (say, enable streams or modules)
...
// Initialize for starting streaming.
sm->Init();
// Get the camera info
PXCCapture::DeviceInfo dinfo={};
sm->QueryCaptureManager()->QueryDevice()->QueryDeviceInfo(&dinfo);
printf_s("camera model = %d\n", dinfo.model);
// Clean up
sm->Release();
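Continuing from Code Sample 1, a minimal sketch of branching on the detected model; the enumerator names are assumed from the SDK's PXCCapture::DeviceModel enumeration, so verify them against the headers of the SDK version you build with.

// Branch on the camera model returned in dinfo.model (sketch).
switch (dinfo.model) {
case PXCCapture::DEVICE_MODEL_F200:
    // Configure F200-specific features here.
    break;
case PXCCapture::DEVICE_MODEL_SR300:
    // Configure SR300-specific features here.
    break;
default:
    // Fall back to camera-agnostic behavior.
    break;
}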

Checking The Intel RealSense Depth Camera Manager Version At Runtime

The Intel RealSense SDK also allows you to check the DCM version at runtime (in addition to the SDK runtime and individual algorithm versions). This is useful to ensure that the required Intel® RealSense™ technologies are installed. An outdated DCM may result in unexpected camera behavior, non-functional SDK features (that is, detection, tracking, and so on), or reduced performance. In addition, having the latest Gold DCM for the Intel RealSense camera SR300 is necessary to provide backward compatibility for apps designed on the F200 camera (the latest SR300 DCM must be downloaded on Windows 10 machines using Windows Update). An application developed on an SDK earlier than R5 2015 for the F200 camera should verify both the camera model and DCM at startup to ensure compatibility on an SR300 machine.

In order to verify the camera driver version at runtime, use the QueryModuleDesc function, which returns the specified module’s descriptor in the ImplDesc structure. To retrieve the camera driver version, specify the capture device as the input argument to the QueryModuleDesc and retrieve the version member of the ImplDesc structure. Code sample 2 illustrates how to retrieve the camera driver version in the R5 version of the SDK using C++ code. Note that if the DCM is not installed on the host system, the QueryModuleDesc call returns a STATUS_ITEM_UNAVAILABLE error. In the event of a missing DCM or version mismatch, the recommendation is to instruct the user to download the latest version using Windows Update. For full details on how to check the SDK, camera, and algorithm versions, please reference the topic titled Checking SDK, Camera Driver, and Algorithm Versions on the SDK documentation website.

Code Sample 2: Using ImplDesc to get the algorithm and camera driver versions at runtime.

PXCSession::ImplVersion GetVersion(PXCSession *session, PXCBase *module) {
    PXCSession::ImplDesc mdesc={};
    session->QueryModuleDesc(module, &mdesc);
    return mdesc.version;
}
// sm is the PXCSenseManager instance
PXCSession::ImplVersion driver_version=GetVersion(sm->QuerySession(), sm->QueryCaptureManager()->QueryCapture());
PXCSession::ImplVersion face_version=GetVersion(sm->QuerySession(), sm->QueryFace());
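The sample above ignores the status returned by QueryModuleDesc. Here is a minimal sketch of the missing-DCM check described earlier; the enumerator spelling PXC_STATUS_ITEM_UNAVAILABLE is assumed from the SDK's pxcStatus codes.

// Detect a missing or inaccessible DCM before trusting the version data.
PXCSession::ImplDesc ddesc = {};
pxcStatus sts = sm->QuerySession()->QueryModuleDesc(
    sm->QueryCaptureManager()->QueryCapture(), &ddesc);
if (sts == PXC_STATUS_ITEM_UNAVAILABLE) {
    // The DCM is not installed; instruct the user to update via Windows Update.
} else if (sts >= PXC_STATUS_NO_ERROR) {
    // ddesc.version now holds the camera driver (DCM) version.
}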

Developing For Multiple Front-Facing Camera Models

Starting with the R5 2015 SDK release for Windows, a new front-facing camera model, named the Intel RealSense camera SR300, has been added to the list of supported cameras. The SR300 improves upon the Intel RealSense camera model F200 in several key ways, including increased tracking range, lower power consumption, better color quality in low light, increased SNR for the IR sensor, and more. Applications that take advantage of the SR300 capabilities can result in improved tracking quality, speed, and enhanced responsiveness over F200 applications. However, with the addition of a new camera in the marketplace comes increased development complexity in ensuring compatibility and targeting specific features in the various camera models.

This section summarizes the key aspects developers must know in order to write applications that take advantage of the unique properties of the SR300 cameras or run in backward compatibility mode with only F200 features. For a more complete description of how to migrate F200 applications to SR300 applications, please read the section titled Working with Camera SR300 on the SDK documentation website.

Intel RealSense camera F200 Compatibility Mode

In order to allow older applications designed for the F200 camera to function on systems equipped with an SR300 camera, the SR300 DCM (gold or later) implements an F200 compatibility mode. It is automatically activated when a streaming request is sent by a pre-R5 application, and it allows the DCM to emulate F200 behavior. In this mode if the application calls QueryDeviceInfo, the value returned will be “F200” for the device name and model. Streaming requests from an application built on the R5 2015 or later SDK are processed natively and are able to take advantage of all SR300 features as hardware compatibility mode is disabled.

It is important to note that only one mode (native or compatibility) can be run at a time. This means that if two applications are run, one after the other, the first application will determine the state of the F200 compatibility mode. If the first application was compiled on an SDK version earlier than R5, the F200 compatibility mode will automatically be enabled regardless of the SDK version of the second application. Similarly, if the first application is compiled on R5 or later, the F200 compatibility mode will automatically be deactivated and any subsequent applications will see the camera as an SR300. Thus if the first application is R5 or later (F200 compatibility mode disabled) but a subsequent application is pre-R5, the second application will not see a valid Intel RealSense camera on the system and thus will not function. This is because the pre-R5 application requires an F200 camera but the DCM is running in native SR300 mode due to the earlier application. There is currently no way to override the F200 compatibility state for the later application, nor is it possible for the DCM to emulate both F200 and SR300 simultaneously.

Table 1 summarizes the resulting state of the compatibility mode when multiple Intel RealSense applications are running on the same system featuring an SR300 camera (application 1 is started before application 2 on the system):

Table 1: Intel RealSense camera F200 Compatibility Mode State Summary with Multiple Applications Running

Application 1 | Application 2 | F200 Compatibility Mode State | Comments

Pre-R5 Compilation | Pre-R5 Compilation | ACTIVE | App1 is run first, DCM sees pre-R5 app and enables F200 compatibility mode.

Pre-R5 Compilation | R5 or later Compilation | ACTIVE | App1 is run first, DCM sees pre-R5 app and enables F200 compatibility mode.

R5 or later Compilation | Pre-R5 Compilation | NOT ACTIVE | App1 is run first, DCM sees SR300 native app and disables F200 compatibility mode. App2 will not see a valid camera and will not run.

R5 or later Compilation | R5 or later Compilation | NOT ACTIVE | App1 is run first, DCM sees R5 or later app and disables F200 compatibility mode. Both apps will use native SR300 requests.

Developing Device-Independent Applications

To accommodate the arrival of the Intel RealSense camera SR300, many of the 2015 R5 Intel RealSense SDK components have been modified to maintain compatibility and to maximize the efficiency of the SR300’s capabilities. In most cases, developers should strive to develop camera-agnostic applications that will run on any front-facing camera to ensure maximum portability across various platforms. The SDK modules and stream interfaces provide the capability to handle all of the platform differentiation if used properly. However, if developing an application that uses unique features of either the F200 or SR300, the code must identify the camera model and handle cases where the camera is not capable of those functions. This section outlines the key details to keep in mind when developing front-facing Intel RealSense applications to ensure maximum compatibility.

SDK Interface Compatibility

To maintain maximum compatibility between the F200 and SR300 cameras, use the built-in algorithm modules (face, 3DS, BGS, and so on) and the SenseManager interface to read raw streams without specifying any stream resolutions or pixel formats. This approach allows the SDK to handle the conversion automatically and minimize necessary code changes. Keep in mind that the maturity levels of the algorithms designed for SR300 may be less than those designed for F200 given that the SR300 was not supported until the 2015 R5 release. Be sure to read the SDK release notes thoroughly to understand the maturity of the various algorithms needed for your application.

In summary, the following best practices are recommended to specify a stream and read image data:

  • Avoid enabling streams using specific configuration (resolution, frame rate):

    sm->EnableStream(PXCCapture::STREAM_TYPE_COLOR, 640, 480, 60);

    Instead, let the SenseManager select the appropriate configuration based on the available camera model:

    sm->EnableStream(PXCCapture::STREAM_TYPE_COLOR);

  • Use the Image functions (such as AcquireAccess and ExportData) to force pixel format conversion.

    PXCImage::ImageData data;

    image->AcquireAccess(PXCImage::ACCESS_READ, PXCImage::PIXEL_FORMAT_RGB32, &data);

  • If a native pixel format is desired, be sure to handle all cases so that the code will work independent of the camera model (see SoftwareBitmapToWriteableBitmap sample in the appendix of this document).

  • When accessing the camera device properties, use the device-neutral device properties as listed in Device Neutral Device Properties.

Intel RealSense SDK Incompatibilities

As of the R5 2015 Intel RealSense SDK release, there remain several APIs that will exhibit some incompatibilities between the F200 and SR300 cameras. Follow the mitigation steps outlined in Table 2 to write camera-independent code that works for any camera:

Table 2: Mitigation Steps for Front-Facing Camera Incompatibilities

Feature | Compatibility Issue | Recommendations

Camera name | Friendly name and device model ID differ between F200 and SR300. | Do not use the friendly name string as a unique ID; use it only to display the device name to the user. Use the device model to perform camera-specific operations, or use the front-facing/rear-facing orientation value from DeviceInfo if that is sufficient.

SNR | The IR sensor in the SR300 has a much higher SNR and a native 10-bit data type (up from 8-bit on the F200). As a result, the IR_RELATIVE pixel format is no longer exposed. | Use the AcquireAccess function to force a pixel format of Y16 when accessing SR300 IR stream data.

Depth Stream Scaling Factor | Native depth stream data representation has changed from 1/32 mm in the F200 to 1/8 mm in the SR300. If accessing native depth data with pixel format DEPTH_RAW, a proper scaling factor must be used. (Does not affect apps using pixel format DEPTH.) | Retrieve the proper scaling factor using QueryDepthUnit, or force a pixel format conversion from DEPTH_RAW to DEPTH using the AcquireAccess function (see the sketch after this table).

Device Properties | Several of the device properties outlined in the F200 & SR300 Member Functions document differ between the two cameras: the filter option definition table differs because of the cameras' different range capabilities, and the SR300 only supports the FINEST option for the SetIVCAMAccuracy function. | Avoid using camera-specific properties to avoid camera-level feature changes. Use the Intel RealSense SDK algorithm modules to have the SDK automatically apply the best settings for the given algorithm.
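To illustrate the IR-format and depth-scaling mitigations above, here is a minimal C++ sketch; QueryDepthUnit and the pixel-format enumerators are assumed from the SDK capture interfaces, and depthImage/irImage stand for PXCImage pointers obtained from a captured sample.

// Camera-independent handling of depth and IR data (sketch).
PXCCapture::Device *device = sm->QueryCaptureManager()->QueryDevice();

// Option 1: keep DEPTH_RAW and apply the model-specific scaling factor.
pxcF32 depthUnit = device->QueryDepthUnit();  // raw-unit size reported by the device

// Option 2: let the SDK convert the depth data (no manual scaling needed).
PXCImage::ImageData ddata = {};
depthImage->AcquireAccess(PXCImage::ACCESS_READ, PXCImage::PIXEL_FORMAT_DEPTH, &ddata);

// For SR300 IR data, force the Y16 pixel format instead of IR_RELATIVE.
PXCImage::ImageData irdata = {};
irImage->AcquireAccess(PXCImage::ACCESS_READ, PXCImage::PIXEL_FORMAT_Y16, &irdata);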

Conclusion

This paper outlined several best-known practices to ensure high compatibility across multiple Intel RealSense camera models. The R5 2015 SDK release for Windows features built-in functions to mitigate compatibility issues. It is generally good practice to design applications to use only common features across all cameras to reduce development time and ensure portability. If an application uses features unique to a particular camera, be sure to verify the system configuration both at install time and during runtime initialization. In order to facilitate migration of applications developed for the F200 camera to SR300 cameras, the SR300 DCM includes an F200 compatibility mode that will allow legacy applications to run seamlessly on the later-model camera. However, be aware that not updating legacy apps (pre-R5) may result in failure to run on SR300 systems running other R5 or later applications simultaneously. Finally, it is important to read all supporting SDK documentation thoroughly to understand the varying behavior of certain SDK functions with different camera models.

Resources

Intel RealSense SDK Documentation

https://software.intel.com/sites/landingpage/realsense/camera-sdk/v1.1/documentation/html/index.html?doc_devguide_introduction.html

SR300 Migration Guide

https://software.intel.com/sites/landingpage/realsense/camera-sdk/v1.1/documentation/html/index.html?doc_mgsr300_working_with_sr300.html

Appendix

SoftwareBitmapToWriteableBitmap Code Sample

// SoftwareBitmap is the UWP data type for images.
public SoftwareBitmapToWriteableBitmap(SoftwareBitmap bitmap,
    WriteableBitmap bitmap2)
{
    switch (bitmap.BitmapPixelFormat)
    {
        default:
            // Any other pixel format: convert to Rgba8 before copying.
            using (var converted = SoftwareBitmap.Convert(bitmap,
                BitmapPixelFormat.Rgba8))
                converted.CopyToBuffer(bitmap2.PixelBuffer);
            break;
        case BitmapPixelFormat.Bgra8:
            // Bgra8 can be copied to the WriteableBitmap directly.
            bitmap.CopyToBuffer(bitmap2.PixelBuffer);
            break;
        case BitmapPixelFormat.Gray16:
        {
            // See the UWP StreamViewer sample for all the code.
            // ....
            break;
        }
    }
}

About the Author

Tion Thomas is a software engineer in the Developer Relations Division at Intel. He helps deliver leading-edge user experiences with optimal performance and power for all types of consumer applications with a focus on perceptual computing. Tion has a passion for delivering positive user experiences with technology. He also enjoys studying gaming and immersive technologies.

Intel® Advisor XE 2016 Update 3 What’s new

We’re pleased to announce a new version of the Vectorization Assistant tool, Intel® Advisor XE 2016 Update 3. Below are highlights of the new functionality and improvements.

 

Support for Intel® Xeon Phi™ processor (codename: Knights Landing)

You can run the Survey analysis on Intel® Xeon Phi™ processors (codename Knights Landing) and see Intel AVX-512 ISA metrics and new Intel AVX-512-specific traits, e.g., Scatter, Compress/Expand, and masking manipulations.

Other analysis types are not available for Intel® Xeon Phi™ processors yet. We will extend the platform support in future updates.

 

Analysis of non-Executed code paths

Using the Intel Advisor XE, you can get your code ready for the next generation of CPUs and Intel® Xeon Phi™ coprocessors even if you don’t have access to the hardware yet. Enable this functionality by generating code for multiple vector instruction sets (including Intel AVX-512) using the Intel Compiler -ax option, and then analyzing the resulting binary with the Vectorization Advisor.
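For example (a hedged sketch; the source file name and optimization flags are placeholders, and the exact Advisor command-line options may vary by version), you could build alternate code paths and collect a Survey from the command line:

icc -axMIC-AVX512 -g -O2 -o myapp myapp.c
advixe-cl --collect survey --project-dir ./my_proj -- ./myapp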

Read the detailed article about using the feature with some examples: https://software.intel.com/en-us/blogs/2016/02/02/explore-intel-avx-512-code-paths-while-not-having-compatible-hardware.

 

New tab “Loop Analytics”

“Loop Analytics” combines several loop metrics in a single place, with an easy-to-read visualization. The tab contains a brand-new “Instruction mix” metric - the percentage of memory, compute, and other instructions, with a breakdown into vector and scalar instruction types. Other metrics are ISA, Traits, and Efficiency. In future updates we plan to extend this tab with even more information, such as detailed instruction mix breakdowns, and explanations and performance-impact breakdowns for various sophisticated vectorization notions (multi-pumping, vectorized remainder), and more.

 

Filter results data by thread

You can now filter data by thread in the Survey report view to narrow down performance metrics.

 

New recommendations

We have updated the recommendation for the Fortran "contiguous" attribute and added two new recommendations:

  1. An "Indirect function calls" recommendation, which is useful for vectorizing C++ code with virtual methods/functions.

  2. A recommendation for better utilization of FMA instructions.

 

Extended CLI snapshot

You can now pack your Advisor results, with caching of sources and binaries, from the command line. This lets you create a self-contained packed result snapshot on a remote server or cluster node, which can then be copied to your workstation for investigation in the GUI. This option is also handy for keeping historical results:

advixe-cl --snapshot --project-dir ./my_proj --pack --cache-sources --cache-binaries -- /tmp/my_proj_snapshot

 

Get Intel Advisor and more information

Visit the product site, where you can download a free evaluation and find videos and tutorials.

Intel® Math Kernel Library (Intel® MKL) 11.3 Update 2 for OS X*


Intel® Math Kernel Library (Intel® MKL) is a highly optimized, extensively threaded, and thread-safe library of mathematical functions for engineering, scientific, and financial applications that require maximum performance. Intel MKL 11.3 Update 2 packages are now ready for download. Intel MKL is available as part of the Intel® Parallel Studio XE and Intel® System Studio. Please visit the Intel® Math Kernel Library Product Page.

Intel® MKL 11.3 Update 2 Bug fixes

New Features in MKL 11.3 Update 2

  • Introduced the mkl_finalize function to facilitate usage models in which Intel MKL dynamic libraries, or third-party dynamic libraries statically linked with Intel MKL, are loaded and unloaded explicitly
  • Compiler offload mode now allows using Intel MKL dynamic libraries
  • Dynamic libraries for OS X* have run-time dynamic library search path (rpath) to ensure Intel MKL-based applications compatibility with OS X 10.11 (El Capitan) System Integrity Protection (SIP). Please refer to Intel MKL Link Line Advisor for link line changes
  • Added Intel TBB threading for all BLAS level-1 functions
  • Intel MKL PARDISO:
    • Added support for block compressed sparse row (BSR) matrix storage format
    • Added optimization for matrices with variable block structure
    • Added support for mkl_progress in Parallel Direct Sparse Solver for Clusters
    • Added cluster_sparse_solver_64 interface
  • Introduced sorting algorithm in Summary Statistics

Check out the latest Release Notes for more updates

Contents

  • File: m_mkl_11.3.2.146.dmg

    A file containing the complete product installation for OS X* (32-bit/x86-64bit development)

Intel® Parallel Studio XE 2016 installer freezes when updating Intel® Cluster Checker 3.0 in some instances


In the Intel® Parallel Studio XE 2016 Installer for Linux*, when updating to Intel® Parallel Studio XE 2016 Update 2 Cluster Edition through Intel® Software Manager, the online installer may freeze.

This will happen when the following two conditions are met:

  1. Intel® Cluster Checker 3.0 was absent in your previous build of Intel Parallel Studio XE 2016.
  2. You select “Intel® Cluster Checker 3.0 Update 2” in the ‘Component selection’ dialog box after starting the Install GUI.

To work around this issue, do not try to update Intel® Cluster Checker 3.0 using the Intel® Parallel Studio XE 2016 installer if Intel® Cluster Checker was not in your previous Intel® Parallel Studio XE 2016 build.

Use one of the following methods instead:

  1. Use the “Download only” (not “Update”) button through Intel Software Manager. The “Download only” button will be available when you select the “Intel® Cluster Checker 3.0” component from your list.
  2. Download the complete offline installer package for Intel® Cluster Checker 3.0 by logging into your account at the Intel® Registration Center.  After login, select to download the latest version of the Intel Cluster Checker “Version 3.0 (Update 2)”.

Known Issue with CSH Environment Variable Scripts


In Intel® Parallel Studio XE 2016 Updates 1 and 2 for Linux*, there is an issue with the C shell (csh) environment variable scripts. The mpsvars.csh environment variable script contains a mistake which can cause errors. In order to avoid this, the psxevars.csh script does not call the itacvars.csh script. If you wish to enable the itacvars.csh script to be called from the psxevars.csh script, please make the following modifications:

  1. Uncomment the lines calling itacvars.csh in the psxevars.csh script.
  2. Edit the $SCRIPTPATH/itac_9.1/bin/mpsvars.csh file by changing
setenv MPS_FILE_POSTFIX="_%D-%T"

to

setenv MPS_FILE_POSTFIX "_%D-%T"

This will enable the itacvars.csh and mpsvars.csh scripts to be called by psxevars.csh.

Note: This only applies to the C shell (csh) environment variable scripts. The bash shell scripts are not affected by this issue.


Meet the Experts - Sergey Kostrov


Sergey Kostrov

Senior C/C++ Software Engineer

As a Senior C/C++ Software Engineer, Sergey specializes in the design and implementation of highly portable software systems and scientific algorithms. He has been involved in that kind of work since the mid-1990s.

Since August 2009, Sergey has been working on a Set of Common Algorithms Library for Big Data Processing (ScaLib BDP), where his ideas about how highly portable software systems should be designed and implemented finally came true: design and implement a highly portable, very compact library (a kind of framework) of the most common scientific algorithms for embedded 16-bit platforms, capable of data-intensive processing, and then scale it up to 32-bit and 64-bit desktop platforms.

Sergey holds a BS degree in Engineering with a specialization in Automation, Measurements and Instruments.

Sergey's Latest HPC achievement and know-how:

Accelerated Processing Technique (APT) for MKL 'cblas_?gemm' matrix multiply functions, integrated with Strassen Heap Based (Incomplete Non-Recursive and Complete Recursive) algorithms. The APT improves the performance of the 'cblas_?gemm' functions by more than 10 percent for multiplication of large, square, dense matrices (larger than 16K x 16K).

Sergey's most complex software engineering projects in the past:

  • Financial Software system which was certified at the National Bank of Ukraine in October 1996 (1994 - 1997);
  • High-Performance Geo-Coding subsystem for server-side processing (2004 - 2006);
  • Nonstationary Convolution model for the Veiling Glare correction of a Medical X-Ray imaging device (2006 - 2007).

Sergey has a great passion for finding non-standard solutions to software engineering problems related to the design and implementation of scientific algorithms, in a classic computing domain and beyond.

Useful Links:

Meet the experts

2016 Release: What's New in Intel® Media Server Studio


Achieve Real-Time 4K HEVC Encode, Ensure AVC & MPEG-2 Decode Robustness

Intel® Media Server Studio 2016 is now available! With a 1.1x performance and 10% quality improvement in its HEVC encoder, Intel® Media Server Studio helps transcoding solution providers achieve real-time 4K HEVC encode with broadcast quality on Intel® Xeon® E3-based Intel® Visual Compute Accelerator and select Xeon® E5 processors.1 Robustness enhancements give extra confidence for AVC and MPEG-2 decode scenarios through handling of broken content seamlessly. See below for more details about new features to accelerate media transcoding.

As a leader in media processing acceleration and cloud-based technologies - thanks to the power of Intel® processors and Intel® Media Server Studio - Intel helps media solution providers, broadcasting companies, and media/infrastructure developers innovate and deliver advanced performance, efficiency, and quality for media applications and OTT/live video broadcasting.

Download Media Server Studio 2016 Now

Current Users (login required)  New Users: Get Free Community version, Pro Trial or Buy Now


 

Improve HEVC (H.265) Performance & Quality by 10%, Use Advanced GPU Analysis, Reduce Bandwidth

Professional Edition

  • With 1.1x performance and 10% quality increase (compared to the previous release), media solution developers can achieve real-time 4K HEVC encode with broadcast quality on select Intel Xeon E5 platforms1 using Intel HEVC software solution and Intel® Visual Compute Accelerator (Intel® VCA)1 by leveraging a GPU-accelerated HEVC encoder.

  • Improve HEVC GPU-accelerated performance by offloading the in-loop filters like deblocking filter (DBF) and sample adaptive offset (SAO) workload to the GPU (in prior releases these filters executed on the CPU(s)).

Figure 1. The 2016 edition continues the rapid cadence of innovation with up to 10% improved video coding efficiency over the 2015 R7 version. In addition to delivering real-time 4K30 encode on select Intel® Xeon® E5 processors, this edition now provides real-time 1080p50 encode on previous generation Intel® Core™ i7 and Xeon E3 platforms.** HEVC Software/GPU Accelerated Encode Quality vs. Performance on 4:2:0, 8-bit 1080p content. Quality data is baseline to ISO HM14 (“0 %”) and computed using Y-PSNR BDRATE curves. Performance is an average across 4 bitrates ranging from low bitrate (avg 3.8Mbps) to high bitrate (avg 25 Mbps). For more information, please refer to Deliver High Quality and Performance HEVC whitepaper.

  • With Intel® VTune™ Amplifier advancements, developers can more easily get and interpret graphics processor usage and performance of OpenCL* and Intel® Media SDK-optimized applications. Includes CPU and GPU concurrency analysis, GPU usage analysis using hardware metrics, GPU architecture diagram, and much more.

  • Reduce bandwidth when using the HEVC codec by running Region of Interest (ROI) based encoding, where the ROI can be compressed less to preserve detail relative to its surroundings. This feature improves video conferencing applications. It can be achieved by setting the mfxExtEncoderROI structure in the application to specify different ROIs during encoding, and can be used at initialization or at runtime (see the sketch after this list).

  • Video Conferencing - Connect business meetings and people together more quickly via video conferencing with specially tuned low-delay HEVC mode.

  • Innovate for 8K - Don't limit your application to encoding streams of 4K resolution; Intel's HEVC codec in Media Server Studio 2016 now supports 8K in both the software and GPU-accelerated encoders.
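As referenced in the ROI item above, here is a minimal sketch of attaching a single region of interest to the encoder configuration. Field names follow the mfxExtEncoderROI structure in the Media SDK headers (mfxvideo.h); the region coordinates and priority value are placeholders, and newer SDK versions may expose a delta-QP mode instead of Priority.

// Describe one ROI and attach it to the encoder parameters (sketch).
mfxExtEncoderROI roi = {};
roi.Header.BufferId = MFX_EXTBUFF_ENCODER_ROI;
roi.Header.BufferSz = sizeof(roi);
roi.NumROI          = 1;
roi.ROI[0].Left     = 0;     // region rectangle, in pixels (placeholder values)
roi.ROI[0].Top      = 0;
roi.ROI[0].Right    = 640;
roi.ROI[0].Bottom   = 360;
roi.ROI[0].Priority = 3;     // favor this region during rate control

mfxExtBuffer *extBuffers[] = { &roi.Header };
mfxVideoParam par = {};
// ... fill the usual codec and rate-control parameters ...
par.ExtParam    = extBuffers;
par.NumExtParam = 1;
// Pass 'par' to MFXVideoENCODE_Init() (or Reset() to change ROIs at runtime).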

Advance AVC (H.264) & MPEG-2 Decode & Transcode

Community, Essentials, Pro Editions

  • Advanced 5th generation graphics and media accelerators, plus custom drivers, unlock transcoding of up to 16 HD AVC streams in real time at high quality per socket on Intel Xeon E3 v4 processors (or via Intel VCA) by taking advantage of hardware acceleration.

  • Achieve up to 12 HD AVC streams on Intel® Core™ 4th generation processors with Intel® Iris graphics**. 

  • Utilize improved AVC encode quality for BRefType MFX_B_REF_PYRAMID.

  • The AVC and MPEG-2 decoders are more robust than ever in handling corrupted streams and returning failure errors. Get extra confidence for AVC and MPEG-2 decode scenarios with increased robustness, recovery from corrupted output, and seamless handling of broken content. Advanced error reporting allows developers to better find and analyze decode errors.

Figure 2: In the 2016 version 40% performance gains are achieved in H.264 scenarios from improved hardware scheduling algorithms compared to the 2015 version.** This figure illustrates results of multiple H.264 encodes from a single H.264 source file accelerated using Intel® Quick Sync Video using sample multi_transcode (avail. in code samples). Each point is an average of 4 streams and 6 bitrates with error bars showing performance variation across streams and bitrates. Target Usage 7 (“TU7”) is the highest speed (and lowest quality) operating point. [1080p 50 content was obtained from media.xiph.org/video/derf/: crowd_run, park_joy (30mbps input; 5, 7.1, 10.2, 14.6, 20.9, 30 mbps output; in_to_tree, old_town_cross 15 mbps input, 2.5, 3.5, 5.1, 7.3, 10.4, 15 mbps output]. Configuration: AVC1→N Multi-Bitrate concurrent transcodes, 1080p, TU7 preset, Intel® Core™ i7-4770K CPU @ 3.50GHz ** Number of 1080p Multi-bitrate channels.

Other New and Improved Features

  • Improvements in Intel® SDK for OpenCL™ Applications for Windows includes new features for kernel development.

  • Added support for CTB-level delta QP for all quality presets (i.e., Target Usage 1 through 7), all rate control modes (CBR, VBR, AVBR, ConstQP), and all profiles (MAIN, MAIN10, REXT).

  • Support for encoding an IPPP..P stream (i.e., no B frames) by using the Generalized P and B control, for applications where B frames are dropped to meet bandwidth limitations.

  • H.264 encode natively consumes ARGB surfaces (captured from screen/game) and YUY2 surfaces, which reduces preprocessing overhead (i.e. color conversion from RGB4 to NV12 for the Intel® Media SDK to process), and increases screen capture performance.
     

Save Time by Using Updated Code Samples 

  • Major features have been added to sample_multi_transcode by extending the pipeline with multiple VPP filters like composition, denoise, detail (edge detection), frame rate control (FRC), deinterlace, and color space conversion (CSC).

  • sample_decode in the Linux sample package has DRM-based rendering, which can be selected with the input argument "-rdrm". sample_decode and sample_decvpp are now merged into the decode sample, with new VPP filters like deinterlace and color space conversion added.
     

For More Information

The above notes are just the top level features and enhancements in Media Server Studio 2016. Access the product site and review the various edition Release Notes for more details.


1 See Technical Specifications for more details.

**Baseline configuration: Intel® Media Server Studio 2016 Essentials vs. 2015 R7, R4 running on Microsoft Windows* 2012 R2. Intel Customer Reference Platform with Intel® Core-i7 4770k (84W, 4C,3.5GHz, Intel® HD Graphics 4600). Intel Z87KL Desktop board with Intel Z87LPC, 16 GB (4x4GB DDR3-1600MHz UDIMM), 1.0TB 7200 SATA HDD, Turbo Boost Enabled, and HT Enabled. Source: Intel internal measurements as of January 2016.

 

 

 

Intel® Math Kernel Library (Intel® MKL) 11.3 Update 2


Intel® Math Kernel Library (Intel® MKL) is a highly optimized, extensively threaded, and thread-safe library of mathematical functions for engineering, scientific, and financial applications that require maximum performance. Intel MKL 11.3 Update 2 packages are now ready for download. Intel MKL is available as part of the Intel® Parallel Studio XE and Intel® System Studio. Please visit the Intel® Math Kernel Library Product Page.

Intel® MKL 11.3 Update 2 Bug fixes

New Features in MKL 11.3 Update 2

  • Introduced the mkl_finalize function to facilitate usage models in which Intel MKL dynamic libraries, or third-party dynamic libraries statically linked with Intel MKL, are loaded and unloaded explicitly
  • Compiler offload mode now allows using Intel MKL dynamic libraries
  • Added Intel TBB threading for all BLAS level-1 functions
  • Intel MKL PARDISO:
    • Added support for block compressed sparse row (BSR) matrix storage format
    • Added optimization for matrices with variable block structure
    • Added support for mkl_progress in Parallel Direct Sparse Solver for Clusters
    • Added cluster_sparse_solver_64 interface
  • Introduced sorting algorithm in Summary Statistics

What's New in Intel MKL 11.3:

  • Batch GEMM Functions
  • Introduced new 2-stage (inspector-executor) APIs for Level 2 and Level 3 sparse BLAS functions
  • Introduced MPI wrappers that allow users to build custom BLACS library for most MPI implementations
  • Cluster components (Cluster Sparse Solver, Cluster FFT, ScaLAPACK) are now available for OS X*
  • Extended the Intel MKL memory manager to improve scaling on large SMP systems

Check out the latest Release Notes for more updates

 

Concurrent Power and Performance Analysis on Android


Concurrent Power and Performance Analysis on Android using Intel® VTune™ Amplifier and SoC Watch.

With some of the new features available in Intel® VTune™ Amplifier 2016 Update 1, it is now relatively easy to obtain simultaneous capture of power and performance data on an unplugged Android device.

Note: Although it's possible to connect with "ADB over WiFi" and thus skip the need for the "Analyze unplugged device" option, it is generally easier to connect to the target device with a USB cable, which makes this a relatively simple solution. Should you instead wish to use "ADB over WiFi", instructions for setting that up are included in your Intel® VTune™ Amplifier product documentation.

Step 1: Preparing Performance Analysis

When using the GUI, simply select the "Analyze unplugged device" option in the Analysis Target tab of the project properties, under the Advanced options. It is recommended to also select "Automatically stop collection after (sec):" and specify a collection time; this way it is simple to specify the same length of time for the power analysis collection.

Then select your Analysis Type and run analysis as normal.  Go ahead and select all options as normal, but pause just before selecting "Start" in order to prepare the Power analysis collector to run concurrently.

Screenshot: the "Analyze unplugged device" option with its pop-up explanation.

 

Step 2: Preparing Power Analysis

From an independent command line, ADB shell into the device and prepare for collecting power analysis data. If you are unfamiliar with the steps, a walk-through guide is located here.

Before starting the actual collector, make sure that your command line uses the "nohup" and run-in-background "&" options. Then pause before actually hitting Enter to execute; we'll sync up both collectors to run concurrently. Here is an example SoC Watch collection command line from version 1.5.4. If you are using SoC Watch version 2.0 or newer (such as 2.1.1), don't forget to use "-r vtune" in order to get the appropriate post-processing for import into Intel® VTune™ Amplifier.

nohup ./socwatch -m -f sys -f wakelock -t 35 -o ./results/concurr_test &

Note: The & at the end of the command is the standard Linux/UNIX way to run the command as a background task. The "nohup" at the beginning tells the system not to tie the background task to the existing terminal; in other words, once the terminal is closed (the adb connection is lost when you remove the cable), the command continues to run rather than terminating.
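A hedged example of the same collection with SoC Watch 2.x: the "-r vtune" option is the one noted above, while the remaining options are carried over from the 1.5.4 example and may be named differently in the 2.x collectors.

nohup ./socwatch -m -f sys -f wakelock -t 35 -r vtune -o ./results/concurr_test &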

 

Step 3: Synchronizing the collector start times

At this point go back to the Intel® VTune™ Amplifier window and select "Start".  There will be the usual set of messages flashed on the device, then some new ones related to running in "unplugged" mode, and eventually you will get the message "Unplug the device please.  Collection will start automatically."

At this point return to the SoC Watch collection terminal and hit enter, then disconnect the USB cable from the device.

Run your workload of interest as normal. Note: Without the cable connected, VTune Amplifier will not automatically stop the application after the collection time has elapsed; once you are certain the collection period has completed, re-connect the cable.

Step 4: Reviewing the results

Once collection has stopped and the cable is re-connected, the performance data will be pulled and finalized automatically, just as if the run had been on a continuously connected device.

After the results have been finalized and displayed in Intel® VTune™ Amplifier, return to the SoC Watch terminal and pull the SoC Watch results over. They may then be imported into Intel® VTune™ Amplifier as normal.

 

Intel® XDK FAQs - Crosswalk


How do I play audio with different playback rates?

Here is a code snippet that allows you to specify playback rate:

// Create the audio element, set the desired playback rate, then play.
var myAudio = new Audio('/path/to/audio.mp3');
myAudio.playbackRate = 1.5;   // 1.0 is normal speed
myAudio.play();

Why are Intel XDK Android Crosswalk build files so large?

When your app is built with Crosswalk it will be a minimum of 15-18MB in size because it includes a complete web browser (the Crosswalk runtime or webview) for rendering your app instead of the built-in webview on the device. Despite the additional size, this is the preferred solution for Android, because the built-in webviews on the majority of Android devices are inconsistent and poorly performing.

See these articles for more information:

Why is the size of my installed app much larger than the apk for a Crosswalk application?

This is because the apk is a compressed image, so when installed it occupies more space due to being decompressed. Also, when your Crosswalk app starts running on your device it will create some data files for caching purposes which will increase the installed size of the application.

Why does my Android Crosswalk build fail with the com.google.playservices plugin?

The Intel XDK Crosswalk build system used with CLI 4.1.2 Crosswalk builds does not support the library project format that was introduced in the "com.google.playservices@21.0.0" plugin. Use "com.google.playservices@19.0.0" instead.

Why does my app fail to run on some devices?

There are some Android devices in which the GPU hardware/software subsystem does not work properly. This is typically due to poor design or improper validation by the manufacturer of that Android device. Your problem Android device probably falls under this category.

How do I stop "pull to refresh" from resetting and restarting my Crosswalk app?

See the code posted in this forum thread for a solution: /en-us/forums/topic/557191#comment-1827376.

An alternate solution is to add the following lines to your intelxdk.config.additions.xml file:

<!-- disable reset on vertical swipe down -->
<intelxdk:crosswalk xwalk-command-line="--disable-pull-to-refresh-effect" />

Which versions of Crosswalk are supported and why do you not support version X, Y or Z?

The specific versions of Crosswalk that are offered via the Intel XDK are based on what the Crosswalk project releases and the timing of those releases relative to Intel XDK build system updates. This is one of the reasons you do not see every version of Crosswalk supported by our Android-Crosswalk build system.

With the September, 2015 release of the Intel XDK, the method used to build embedded Android-Crosswalk versions changed to the "pluggable" webview Cordova build system. This new build system was implemented with the help of the Cordova project and became available with their release of the Android Cordova 4.0 framework (coincident with their Cordova CLI 5 release). With this change to the Android Cordova framework and the Cordova CLI build system, we can now more quickly adapt to new version releases of the Crosswalk project. Support for previous Crosswalk releases required updating a special build system that was forked from the Cordova Android project. This new "pluggable" webview build system means that the build system can now use the standard Cordova build system, because it now includes the Crosswalk library as a "pluggable" component.

The "old" method of building Android-Crosswalk APKs relied on a "forked" version of the Cordova Android framework, and is based on the Cordova Android 3.6.3 framework and is used when you select CLI 4.1.2 in the Project tab's build settings page. Only Crosswalk versions 7, 10, 11, 12 and 14 are supported by the Intel XDK when using this build setting.

Selecting CLI 5.1.1 in the build settings will generate a "pluggable" webview built app. A "pluggable" webview app (built with CLI 5.1.1) results in an app built with the Cordova Android 4.1.0 framework. As of the latest update to this FAQ, the CLI 5.1.1 build system supported Crosswalk 15. Future releases of the Intel XDK and the build system will support higher versions of Crosswalk and the Cordova Android framework.

In both cases, above, the net result (when performing an "embedded" build) will be two processor architecture-specific APKs: one for use on an x86 device and one for use on an ARM device. The version codes of those APKs are modified to ensure that both can be uploaded to the Android store under the same app name, ensuring that the appropriate APK is automatically delivered to the matching device (i.e., the x86 APK is delivered to Intel-based Android devices and the ARM APK is delivered to ARM-based Android devices).

For more information regarding Crosswalk and the Intel XDK, please review these documents:

How do I prevent my Crosswalk app from auto-completing passwords?

Use the Ionic Keyboard plugin and set the spellcheck attribute to false.

How can I improve the performance of my Crosswalk app so it is as fast as Crosswalk 7 was?

Try adding the CrosswalkAnimatable option to your intelxdk.config.additions.xml file (details regarding the CrosswalkAnimatable option are available in this Crosswalk Project wiki post: Android SurfaceView vs TextureView):

<!-- Controls configuration of Crosswalk-Android "SurfaceView" or "TextureView" -->
<!-- Default is SurfaceView if >= CW15 and TextureView if <= CW14 -->
<!-- Option can only be used with Intel XDK CLI5+ build systems -->
<!-- SurfaceView is preferred, TextureView should only be used in special cases -->
<!-- Enable Crosswalk-Android TextureView by setting this option to true -->
<preference name="CrosswalkAnimatable" value="true" />

Also, see Chromium Command-Line Options for Crosswalk Builds with the Intel XDK for some additional tools that can be used to modify the Crosswalk's webview runtime parameters. 

Beginning with the CLI 5.1.1 build system you must add the --ignore-gpu-blacklist option to your intelxdk.config.additions.xml file if you want the additional performance this option provides to blacklisted devices.
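For example, following the same intelxdk.config.additions.xml pattern shown in the pull-to-refresh snippet above (a sketch; the comment text is illustrative):

<!-- enable GPU acceleration on blacklisted devices (CLI 5.1.1+ builds only) -->
<intelxdk:crosswalk xwalk-command-line="--ignore-gpu-blacklist" />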

Why does the Google store refuse to publish my Crosswalk app?

There is a change to the version code handling by the Crosswalk and Android build systems based on Cordova CLI 5.0 and later. This change was implemented by the Apache Cordova project. This new version of Cordova CLI automatically modifies the android:versionCode when building for Crosswalk and Android. Because our CLI 5.1.1 build system is now more compatible with standard Cordova CLI, this change results in a discrepancy in the way your android:versionCode is handled when building for Crosswalk (15) or Android with CLI 5.1.1 when compared to building with CLI 4.1.2.

If you have never published an app to an Android store this change will have little or no impact on you. This change might affect attempts to side-load an app onto a device, in which case the simplest solution is to uninstall the previously side-loaded app before installing the new app.

Here's what Cordova CLI 5.1.1 (Cordova-Android 4.x) is doing with the android:versionCode number (which you specify in the App Version Code field within the Build Settings section of the Projects tab):

Cordova-Android 4.x (Intel XDK CLI 5.1.1 for Crosswalk or Android builds) does this:

  • multiplies your android:versionCode by 10

then, if you are doing a Crosswalk (15) build:

  • adds 2 to the android:versionCode for ARM builds
  • adds 4 to the android:versionCode for x86 builds

otherwise, if you are performing a standard Android build (non-Crosswalk):

  • adds 0 to the android:versionCode if the Minimum Android API is < 14
  • adds 8 to the android:versionCode if the Minimum Android API is 14-19
  • adds 9 to the android:versionCode if the Minimum Android API is > 19 (i.e., >= 20)
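As a worked example (hypothetical numbers): an App Version Code of 36 built with CLI 5.1.1 for Crosswalk becomes 36 x 10 + 2 = 362 for the ARM APK and 36 x 10 + 4 = 364 for the x86 APK; the same code built as a standard Android app with a Minimum Android API of 19 becomes 36 x 10 + 8 = 368.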

If you HAVE PUBLISHED a Crosswalk app to an Android store this change may impact your ability to publish a newer version of your app! In that case, if you are building for Crosswalk, add 6000 (six with three zeroes) to your existing App Version Code field in the Crosswalk Build Settings section of the Projects tab. If you have only published standard Android apps in the past and are still publishing only standard Android apps you should not have to make any changes to the App Version Code field in the Android Builds Settings section of the Projects tab.

The workaround described above only applies to Crosswalk CLI 5.1.1 and later builds!

When you build a Crosswalk app with CLI 4.1.2 (which uses Cordova-Android 3.6) you will get the old Intel XDK behavior where: 60000 and 20000 (six with four zeros and two with four zeroes) are added to the android:versionCode for Crosswalk builds and no change is made to the android:versionCode for standard Android builds.

NOTE:

  • Android API 14 corresponds to Android 4.0
  • Android API 19 corresponds to Android 4.4
  • Android API 20 corresponds to Android 5.0
  • CLI 5.1.1 (Cordova-Android 4.x) does not allow building for Android 2.x or Android 3.x

Back to FAQs Main

How to detect Knights Landing AVX-512 support (Intel Xeon Phi processor)


The Intel Xeon Phi processor, code named Knights Landing, is part of the second generation of Intel Xeon Phi products.  Knights Landing supports AVX-512 instructions, specifically AVX-512F (foundation), AVX-512CD (conflict detection), AVX-512ER (exponential and reciprocal) and AVX-512PF (prefetch).

If we want an application to run everywhere, then before using these instructions in a program we need to make sure that the operating system and the processor support them when the application is run.

The Intel compiler provides a single function _may_i_use_cpu_feature that does all this easily. This program shows how we can use it to test for the ability to use AVX-512F, AVX-512ER, AVX-512PF and AVX-512CD instructions.

#include <immintrin.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
  const unsigned long knl_features =
      (_FEATURE_AVX512F | _FEATURE_AVX512ER |
       _FEATURE_AVX512PF | _FEATURE_AVX512CD );
  if ( _may_i_use_cpu_feature( knl_features ) )
    printf("This CPU supports AVX-512F+CD+ER+PF as introduced in Knights Landing\n");
  else
    printf("This CPU does not support all Knights Landing AVX-512 features\n");
  return 1;
}

If we compile with the -xMIC-AVX512 flag, the Intel compiler will automatically protect the binary and such checking is not necessary. For instance, if we compile and run as follows, we can see the result of running on a machine other than a Knights Landing.

icc -xMIC-AVX512 -o sample sample.c
./sample

Please verify that both the operating system and the processor support Intel(R) MOVBE, F16C, AVX, FMA, BMI, LZCNT, AVX2, AVX512F, ADX, RDSEED, AVX512ER, AVX512PF and AVX512CD instructions.


In order to run on all processors, we compile and run as follows:

icc -axMIC-AVX512 -o sample sample.c
./sample

When we run on a Knights Landing it prints:
This CPU supports AVX-512F+CD+ER+PF as introduced in Knights Landing

When we run on a processor that does not support at least the Knights Landing AVX-512 features, it prints:
This CPU does not support all Knights Landing AVX-512 features

If we want to support compilers other than Intel, the code is slightly more complex because the function _may_i_use_cpu_feature is not standard (and neither are the __builtin functions in gcc and clang/LLVM). The following code works with at least the Intel compiler, gcc, clang/LLVM, and Microsoft compilers.

#if defined(__INTEL_COMPILER) && (__INTEL_COMPILER >= 1300)

#include <immintrin.h>

int has_intel_knl_features()
{
  const unsigned long knl_features =
      (_FEATURE_AVX512F | _FEATURE_AVX512ER |
       _FEATURE_AVX512PF | _FEATURE_AVX512CD );
  return _may_i_use_cpu_feature( knl_features );
}

#else /* non-Intel compiler */

#include <stdint.h>
#if defined(_MSC_VER)
#include <intrin.h>
#endif

void run_cpuid(uint32_t eax, uint32_t ecx, uint32_t* abcd)
{
#if defined(_MSC_VER)
  __cpuidex(abcd, eax, ecx);
#else
  uint32_t ebx, edx;
 #if defined( __i386__ ) && defined ( __PIC__ )
  /* in case of PIC under 32-bit EBX cannot be clobbered */
  __asm__ ( "movl %%ebx, %%edi \n\t cpuid \n\t xchgl %%ebx, %%edi"
            : "=D" (ebx),
 #else
  __asm__ ( "cpuid"
            : "+b" (ebx),
 #endif
              "+a" (eax), "+c" (ecx), "=d" (edx) );
  abcd[0] = eax; abcd[1] = ebx; abcd[2] = ecx; abcd[3] = edx;
#endif
}

int check_xcr0_zmm() {
  uint32_t xcr0;
  uint32_t zmm_ymm_xmm = (7 << 5) | (1 << 2) | (1 << 1);
#if defined(_MSC_VER)
  xcr0 = (uint32_t)_xgetbv(0);  /* min VS2010 SP1 compiler is required */
#else
  __asm__ ("xgetbv" : "=a" (xcr0) : "c" (0) : "%edx" );
#endif
  return ((xcr0 & zmm_ymm_xmm) == zmm_ymm_xmm); /* check if xmm, ymm and zmm state are enabled in XCR0 */
}

int has_intel_knl_features() {
  uint32_t abcd[4];
  uint32_t osxsave_mask = (1 << 27);  // OSXSAVE bit in CPUID leaf 1, ECX
  uint32_t knl_mask     = (1 << 16) | // AVX-512F
                          (1 << 26) | // AVX-512PF
                          (1 << 27) | // AVX-512ER
                          (1 << 28);  // AVX-512CD
  // step 1 - must ensure OS supports extended processor state management
  run_cpuid( 1, 0, abcd );
  if ( (abcd[2] & osxsave_mask) != osxsave_mask )
    return 0;
  // step 2 - must ensure OS supports ZMM registers (and YMM, and XMM)
  if ( ! check_xcr0_zmm() )
    return 0;
  // step 3 - must ensure the CPU reports AVX-512F/PF/ER/CD in CPUID leaf 7, EBX
  run_cpuid( 7, 0, abcd );
  if ( (abcd[1] & knl_mask) != knl_mask )
    return 0;

  return 1;
}
#endif /* non-Intel compiler */

static int can_use_intel_knl_features() {
  static int knl_features_available = -1;
  /* test is performed once */
  if (knl_features_available < 0 )
    knl_features_available = has_intel_knl_features();
  return knl_features_available;
}

#include <stdio.h>

int main(int argc, char *argv[]) {
  if ( can_use_intel_knl_features() )
    printf("This CPU supports AVX-512F+CD+ER+PF as introduced in Knights Landing\n");
  else
    printf("This CPU does not support all Knights Landing AVX-512 features\n");
  return 0;
}
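
As an illustration (not part of the original sample), the same run-time check can be used to dispatch between an optimized and a generic code path. The kernel names below are hypothetical placeholders, and this sketch uses the Intel compiler intrinsic shown earlier:

#include <immintrin.h>
#include <stdio.h>

/* hypothetical kernels standing in for real implementations */
static void knl_kernel(void)     { printf("running the AVX-512 path\n"); }
static void generic_kernel(void) { printf("running the generic path\n"); }

int main(void)
{
  const unsigned long knl_features =
      (_FEATURE_AVX512F | _FEATURE_AVX512ER |
       _FEATURE_AVX512PF | _FEATURE_AVX512CD);

  /* dispatch once at start-up based on the run-time feature check */
  if ( _may_i_use_cpu_feature( knl_features ) )
    knl_kernel();
  else
    generic_kernel();
  return 0;
}

Performing the check once at start-up keeps the dispatch overhead out of any hot loops.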

Acknowledgment: Thank you to Max Locktyukhin (Intel) for his article 'How to detect New Instruction support in the 4th generation Intel® Core™ processor family' which served as the model for my Knights Landing detection code.

Modern Code: Making the Impossible, Possible


 

SC15 luminary panelists reflect on collaboration with Intel and how building on hardware and software standards facilitates performance on parallel platforms with greater ease and productivity. By sharing their experiences modernizing code we hope to shed light on what you might see from modernizing your own code.

View Webinar


Read Intel® RealSense™ Camera Streams with New MATLAB® Adaptor Code Sample


Download Code Sample 

Introduction

The downloadable code sample demonstrates the basics of acquiring raw camera streams from Intel® RealSense™ cameras (R200 and F200) in the MATLAB® workspace using the Intel® RealSense™ SDK and MATLAB’s Image Acquisition Toolbox™ Adaptor Kit. This code sample creates possibilities for MATLAB developers to develop Intel® RealSense™ applications for Intel® platforms and has the following features:

  • Multi-stream synchronization. Color stream and depth stream can be acquired simultaneously (see Figure 1).
  • Multi-camera support. Raw streams can be acquired from multiple cameras simultaneously.
  • User adjustable properties. This adaptor supports video input with different camera-specific properties.
Figure 1. Raw Intel® RealSense™ camera (F200) color and depth streams in the MATLAB* figure.


Software Development Environment

The code sample was created on Windows 8* using Microsoft Visual Studio* 2013. The MATLAB version used in this project was MATLAB R2015a.

The SDK and Depth Camera Manager (DCM) version used in this project were:

  • Intel RealSense SDK V7.0.23.8048
  • Intel RealSense Depth Camera Manager F200 V1.4.27.41944
  • Intel RealSense Depth Camera Manager R200 V2.0.3.53109

Hardware Overview

We used the Intel® RealSense™ Developer Kit (F200) and Intel RealSense Developer Kit (R200).

About the Code

This code sample can be built into a dynamic link library (DLL) that implements the connection between the MATLAB Image Acquisition Toolbox™ and Intel RealSense cameras via the Intel RealSense SDK. Figure 2 shows the relationship of this adaptor to the MATLAB and Intel RealSense cameras. The Image Acquisition Toolbox™ is a standard interface provided by MATLAB to acquire images and video from imaging devices.

Figure 2. The relationship of the adaptor to the MATLAB* and Intel® RealSense™ cameras.


The MATLAB installation path I used was C:\MATLAB and the SDK installation path was C:\Program Files (x86)\Intel\RSSDK. Note that the include directories and library directories will need to be changed if your SDK and MATLAB installation paths are different. You will also need to set an environment variable MATLAB in system variables that contains the name of your MATLAB installation folder.

The file location I use to put the entire code sample RealSenseImaq was C:\My_Adaptor\RealSenseImaq. The RealSenseImaq solution can also be found under this directory. This RealSenseImaq solution actually consists of two projects:

  • The imaqadaptorkit is an adaptor kit project provided by MATLAB to make it easier to refer to some adaptor kit files in MATLAB. The file location of this project is: <your_matlab_installation_directory>\R2015a\toolbox\imaq\imaqadaptors\kit
  • The RealSenseImaq is an adaptor project that acquires the raw camera streams. The color and depth data from multiple cameras can be acquired simultaneously. It also contains functions to support video input with different camera-specific properties.

How to Run the Code

To build the DLL from this code sample:

  • First run Microsoft Visual Studio as administrator and open the RealSenseImaq solution. You must ensure that “x64” is specified under the platform setting in the project properties.
  • To build this code sample, right-click the project name RealSenseImaq in the solution explorer, then select it as the startup project from the menu option and build it.
  • For users who are MATLAB developers and not interested in the source code, a pre-built DLL can be found in the C:\My_Adaptor\RealSenseImaq\x64\Debug\ folder. Note that the DLL directory will need to be changed if you put the code sample in a different location.

To register the DLL in the MATLAB:

  • You must inform the Image Acquisition Toolbox software of the DLL’s existence by registering it with the imaqregister function. The DLL can be registered by using the following MATLAB code:

imaqregister('<your_directory>\RealSenseImaq.dll');

  • Start MATLAB and call the imaqhwinfo function. You should be able to see the RealSenseImaq adaptor included in the adaptors listed in the InstalledAdaptors field.

To run the DLL in the MATLAB:

Three MATLAB scripts that I created have been put under the code sample directory C:\My_Adaptor\RealSenseImaq\matlab.

To start to run the DLL in MATLAB, use the scripts as follows:

  • MATLAB script “test1” can be used to acquire raw F200 color streams in MATLAB.
  • Raw color and depth streams from the Intel RealSense camera (F200) can be acquired simultaneously by using the MATLAB script “test2” (see Figure 1).
  • You can also use this adaptor to adjust the camera-specific property and retrieve the current value of the property. For example, the MATLAB script “test3” in the code sample file can be used to retrieve the current value of color brightness and adjust its value.

Check It Out

Follow the download link to get the code.

About Intel® RealSense™ Technology

To get started and learn more about the Intel RealSense SDK for Windows, go to https://software.intel.com/en-us/intel-realsense-sdk.

About MATLAB®

MATLAB is a high-level language and interactive environment that lets you explore and visualize ideas and collaborate across disciplines. To learn more about MATLAB, go to http://www.mathworks.com/products/matlab/.

About the Author

Jing Huang is a software application engineer in the Developer Relations Division at Intel. She is currently focused on the performance of Intel RealSense SDK applications on Intel platforms, but has an extensive background in video and image processing and computer vision, mostly applied to medical imaging and multi-camera applications such as video tracking and video classification.

F1* 2015 Goes to the Next Level of Realism on the PC


Download [PDF 2MB]

Intro:

F1* 2015 is the latest FORMULA ONE* game produced by Codemasters, based on a unique iteration of their proprietary EGO engine. The game was created almost entirely from scratch, with the new game engine dramatically improving both the visual quality and AI abilities. The F1 2015 engine is Codemasters’ first to target the eighth generation of consoles (PS4* and Xbox One*) and PCs. The new engine architecture was designed to run on the multiple cores found in the consoles, with the intention that it would scale to work equally well on the PC. A patch released in November 2015 added an updated audio system and higher quality settings for the CPU-driven particle system, enabling the game to make full use of high-end gaming PCs. This article details most of the work done for that patch.


Figure 1: F1* 2015

Adapting to PC hardware

The ability of game code to adapt to different underlying CPU hardware is becoming increasingly important as hardware evolves to encompass a wider range of clock speeds and core counts. Games written when PCs typically came with single- and dual-core processors, or for earlier console generations, were normally designed around a limited number of threads. These primary threads were dedicated to rendering and the main game logic, while smaller tasks were distributed to any other processors in the system. On the PC this design favored processors with very high single-threaded performance. Today both eighth-generation consoles have eight individual, relatively low-performance, high-efficiency x86 CPU cores, with the game code using between six and seven of those cores (the remainder being reserved for the OS). This compares with modern consumer PCs that have anywhere between two and eight CPU cores. If the core supports simultaneous multi-threading (SMT), each PC core can appear to the OS as two logical processors with technologies like Intel® Hyper-Threading Technology (Intel® HT Technology)i. This means a PC game can be running on up to 16 logical processors shared with the OS. The old approach therefore isn’t suitable for newer hardware, and while the code for consoles can be optimized to run on a very specific configuration, on a PC the game code is required to adapt to the hardware.
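
As a minimal sketch (not taken from the F1 2015 code base, and assuming a Windows build environment), a game can query the number of logical processors at start-up and size its worker-thread pool accordingly:

#include <windows.h>
#include <stdio.h>

int main(void)
{
  SYSTEM_INFO si;
  GetSystemInfo(&si);  /* dwNumberOfProcessors = logical processors visible to the OS */

  /* illustrative policy: reserve one logical processor for the OS/driver,
     use the rest for the game's task system */
  DWORD workers = si.dwNumberOfProcessors > 1 ? si.dwNumberOfProcessors - 1 : 1;
  printf("Logical processors: %lu, worker threads: %lu\n",
         si.dwNumberOfProcessors, workers);
  return 0;
}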

First PC Tests

As the game neared its July 10, 2015 release date, it became clear that the new engine had succeeded in its goal of more efficiently utilizing multiple CPU cores and had very different performance characteristics compared to previous FORMULA ONE titles that had been designed for the last generation of consoles. Figure 2 shows how the CPU and GPU workloads compare between Codemasters’ previous FORMULA ONE title, F1 2014 (left), and F1 2015 (right), running at their highest quality settings at an unlocked frame rate.


Figure 2: GPUView on F1* 2014 vs F1* 2015.

The images are captured using Microsoft GPUView (included in the Windows* Performance Toolkit as part of the Windows Platform SDK) and show how well parallelized the work is between the CPU and the GPU. GPUView can also be used to detect synchronization and GPU starvation issues. The top section of the graph represents the GPU activity and a close up is shown in Figure 3. The gaps in the Purple line at the top left show breaks in the GPU activity on F1 2014, which mean the GPU is idling and not doing any actual work. The F1 2014 game is not GPU-limited on the high-end system tested, and at a resolution of 1080P it was CPU-bound.


Figure 3: Close up of GPU activity on the F1* 2014 and F1* 2015 games

The corresponding line on the right in Figure 3 (in blue) is solid, meaning the GPU is constantly active in F1 2015. In Figure 2 the lines below the Yellow separator represent the CPU thread activity; on F1 2014 (left), a single CPU thread is almost constantly busy and is the limiting factor. A close-up of this is also shown below in Figure 4.


Figure 4: CPU activity

The different colors used for the bars represent the logical processors in the system. The changing color of the threads means they tend to run on a different logical processor in each frame. Another active thread can be seen at the bottom of the F1 2014 graph; it corresponds to the graphics driver workload. In the image on the right-hand side (F1 2015), all the CPU threads have some idle time and there is no obvious dependency between threads. When the GPU and CPU views are combined you can see that F1 2014 was limited by a critical path through the engine that feeds the graphics API from a single thread. Thus, F1 2014 represents the classic engine designed for the previous console generation, with a workload optimized to run well on a fast dual-core processor and some benefit from moving to a quad core, but adding more than 4 cores didn’t provide any tangible benefit because the limiting factor was the single main rendering thread.

The new engine on the other hand significantly reduced the CPU overhead associated with keeping the graphics card busy, making full use of Intel HT Technology and distributing the work evenly across logical processors.


Figure 5: GPUView of F1* 2015 at 60FPS

The coding team quickly saw that the PC was significantly faster than the console hardware for which they were doing much of their optimizing. While it had taken a lot of effort to achieve typical frame processing at 60FPS (16 ms of processing) on the consoles, on a 4th generation Intel® Core™ processor (such as the Intel® Core™ i5-4670) the CPU was completing the work in a fraction of a frame, in many cases under 10 ms. Figure 5 (above) shows the GPUView when the title is limited to a 60Hz refresh rate. Faster processors (e.g., 6th generation Intel® Core™ i7-6900K processors) made the imbalance even greater. Figure 6 shows the game running at an unlocked frame rate on a 5th generation Intel® Core™ i7-5960x processor with an NVIDIA GTX980 video card.


Figure 6: Unlocked GPUView on Intel® Core™ i7-5960x processor with 8 cores (16 threads)

When compared to the quad core system (Figure 2), the system with the Intel Core i7-5960x processor with 16 logical processors was idling the CPU for a larger percentage of the frame. This shows that the new engine benefits not only from increased single-threaded performance, but is also capable of benefiting from additional CPU cores. The result shows that even at slightly lower CPU frequencies the Intel Core i7-5960x processor with six cores can outperform the higher clocked Intel Core i7-6700k processor (quad core).

These initial tests, showing the GPU was already fully utilized, shifted the emphasis for the PC version. GPU-side optimizations continued, but instead of spending developer resources optimizing the CPU rendering path (such as moving work to the GPU), the studio started investigating other ways to better utilize the CPU to improve the user experience and the realism of the game.

Improving Realism

The challenge was to improve the game’s realism in ways that benefit users without affecting gameplay, due to online multi-player requirements, and without adding significantly more GPU-side work. As such, AI changes and improved physics accuracy were ruled out, as any improvements had to be achievable on all PC hardware that would be used to play the game to ensure the multiplayer experience wouldn’t be compromised. Even in single-player mode, any changes to the cars' behavior would be problematic, requiring very careful rebalancing of the cars' handling. Instead, Codemasters concentrated on improving realism in two areas: an upgraded audio engine, and an increase in the amount of dynamic visual content via an upgraded particle system. These systems were chosen because they were already CPU-based and had previously been limited by their console resource budget. The PC gave the designers room to create a more immersive experience using many of the effects they had originally been prevented from implementing because of hardware limitations.

Audio

Improving the audio was seen as a scalable way to enhance the user experience without affecting gameplay. The audio in F1 2015 is handled by a middleware package that creates its own thread for mixing audio on the CPU. Codemasters previously found that if this thread was stalled or delayed, audible dropouts would occur. To prevent dropouts on the consoles, this audio thread was given a dedicated CPU core to ensure nothing could delay processing. On the PC, the audio system had a dedicated logical processor, with the game task system using the remaining cores. The consoles force thread affinity for the worker threads, whereas the PC uses SetThreadIdealProcessor(), which gives the OS a prioritization hint (see the sketch below).
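
For illustration only (this is not Codemasters' code), the Win32 call mentioned above could be used roughly as follows to suggest a preferred logical processor for an audio-mixing thread:

#include <windows.h>
#include <stdio.h>

static DWORD WINAPI audio_mix_thread(LPVOID param)
{
  (void)param;
  /* ... the audio mixing loop would run here ... */
  return 0;
}

int main(void)
{
  HANDLE h = CreateThread(NULL, 0, audio_mix_thread, NULL, 0, NULL);
  if (h == NULL)
    return 1;

  /* hint to the scheduler that logical processor 0 is preferred for this
     thread; unlike SetThreadAffinityMask this is a hint, not a hard binding */
  DWORD previous = SetThreadIdealProcessor(h, 0);
  if (previous == (DWORD)-1)
    printf("SetThreadIdealProcessor failed: %lu\n", GetLastError());

  WaitForSingleObject(h, INFINITE);
  CloseHandle(h);
  return 0;
}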

Even with a dedicated core/logical processor, it was important to complete the mixing at a sufficient rate, so the maximum number of audio voices was limited to what could be processed in a worst-case scenario, such as a crash. The audio limit was originally set to 5 cars plus the player’s.

With a significantly more powerful processor to handle the CPU mixing and potentially more CPU cores, the OS was less likely to attempt to schedule additional jobs on the processor assigned to mix the audio. Correspondingly, a high-quality audio option could be added to the PC version that had the following improvements:

  • An increase in the number of cars contributing audio around the player from 6 (5 AI cars + player car) to 11 (10 AI cars + player car).
  • Removal of some instance limits so more voices would play when transitioning from one object to another, avoiding sudden cut-offs.
  • A replacement of some middleware reverb effects with more advanced ones, using more sophisticated algorithms, more reflections, and the use of pre-defined impulse files to simulate environments such as grandstands, bridges, tunnels, and the track side barriers together with passing cars.

Particles and Weather

Another improvement was an upgraded particle system. The particle system had been limited by the available CPU/GPU resources on the consoles, and art work had been authored within these constraints. The particle system was already CPU-based as Codemasters’ graphics programmer, Andrew Wright explains:

 “We anticipated (correctly) that we would be tighter on GPU time than on CPU time, so the particles system was always designed as an efficient heavily vectorized CPU system.”

This meant that the coding team had already started on a system that could scale based on the amount of CPU resources available. What’s more, they could do so in a way that didn’t necessarily increase the GPU work at the same time. Keeping the particles on the CPU also had other benefits, Andrew explains:

 “The CPU system is very versatile, and it handles collision against the track for particles flagged for that – mostly stuff like gravel and grass. This works for particles that are not visible, so a swift change of camera will catch previously invisible particles in mid-bounce. Collisions can trigger sound effects. This part might be hard on a GPU. “

The first part of the task was to increase the amount of generated particles with small and subtle changes to their art content, such as reducing the size of the particles as the particle density increased. These changes give a similar visual effect from a distance, but show much more detail up close. This was done for the various kick-up effects created when the car tires interact with the track (both on- and off-track surfaces). This effect is shown in Figure 7. 


Figure 7: Improved gravel effects

In a similar fashion, the car tire smoke effects were improved with relatively large “billboard” particles being replaced by much smaller particles that could better reflect the shape of the smoke, which worked particularly well. The effect improved the volumetric lighting applied to the smoke as the smaller particles allowed a better mathematical representation of the light fallout within the volume, as shown in Figure 8.


Figure 8: Improved tire smoke

Although the improvements to the particle systems were significant, they were only visible for short periods of time; for example, when the player or AI lost control of the car. The studio quickly realized that the enhanced particle system could be used to significantly improve wet weather effects--a part of the game that didn’t depend on the player’s skill level to showcase the improved visuals.

One of the main upgrades Codemasters promoted was the game’s improved handling, with significant improvements to the cars’ behavior in wet conditions. Wet weather plays a significant part of the real Formula 1 calendar, with many races renowned for the extreme weather conditions that affect the race. Table 1 shows the probability of rain in a race in the game's Champion and Pro season modes. On average 34% of the game’s races will be affected by rain for at least some part of a four-hour race.

The change to smaller, denser particles meant the water behavior could be modeled much more accurately. The particle system handles data curves over particle lifetime for properties like color, alpha, erosion, angular drag, linear drag, and gravity; it even ties in with the same wind system as the rain.

In Figures 9 and 10 you can see debug images showing the movement of the individual spray from the car wheels and its interaction with the air passing over the car surface. The effect is to create vortices spiraling off the back of the car.


Figure 9: Debug vortices showing water spray movement


Figure 10: Debug vortices rear view

Using smaller particles also meant the existing lighting model worked much better on the spray. Lighting from emitters like the car engines could be much more accurately modelled on small particles than “billboards” representing a large volume of spray. Figure 11 shows the type of lighting seen in the spray trails.


Figure 11: Better lighting

Another upgraded area was the interaction with the surface water, including the amount of water kicked up into the trailing vortices that gets sprayed up from the contact points between the wheel and track surface. This helps visually ground the car in much the same way shadows do on a sunny day.

Figure 12 shows the before and after for the puddle interaction.


Figure 12: Puddle interaction

Figure 13 shows the track spray laid down behind each car (left) and a debug view taken from Intel® INDE Graphics Performance Analyzers for DirectX* (right).


Figure 13: Track spray

The final upgrade was an improved rain simulation. Originally, the game used a simple GPU-based algorithm that rendered a few thousand rain particles per frame, applying gravity to each raindrop as it fell. The new rain simulation moved more of the update routine to the CPU, allowing interaction with the wind data used in other parts of the game. The number of raindrops was increased by a factor of 10, reducing the size of the individual streaks to keep pixel coverage fairly similar. Figure 14 shows debug images of the rain on the right, taken from the same starting location. The lower image has 217K rain primitives compared to just 21K for the old system above. Despite the extra primitives, the actual number of pixels affected increased only slightly, from 72K to 119K.


Figure 14: Rain Debug View

When increasing the amount of rain and water vortices, care was also taken to adjust the transparency values used in the particle system to produce a similar overall distance fogging effect to ensure gameplay wasn’t altered by the visual settings.

Rebalancing the CPU Workload

Normally, F1 2015's limiting factor in PC performance was the GPU workload. This load was especially heavy on lower-end video cards. The most important part of developing and enabling these new effects was to ensure that these extra particles did not increase the GPU workload. The second resource consideration was to achieve good distribution of the CPU calculations, balancing the work across the available CPU resources and ensuring there was no increase in the work for the critical render path in the engine.

The GPU load control was achieved in two ways. First, vertex work was moved from the vertex shader to the CPU. This reduced the amount of work done per vertex on the GPU. It did not completely remove the GPU workload, but it was reduced by a factor of two, so rendering ten times as many particles only resulted in a five-fold increase in the vertex processing cost. The second large change was to reduce fill rate costs, such as with rain. A ten times increase in raindrops resulted in an increase of only 1.65x in actual rendered pixels. In the case of the vortex trails behind the cars the changes were even more pronounced. An increase in vertices from 3K to 70K actually resulted in a drop in rendered pixels from 5,800K to 2,500K, effectively halving the fill rate cost of the effect. The end result was an effect made up of 20 times the number of particles with no significant increase in GPU rendering cost.

Load balancing on the CPU was done by distributing the particle work across as many available logical processors as possible. Figure 15 shows the distribution of work on four-core and six-core systems (both with Intel HT Technology enabled). The purple and red blocks represent the particle and weather systems during very heavy rain on a six-core system (twelve logical processors). The particle work is done on 6 of those 12 processors, while on the four-core system, the work uses five out of the eight logical processors, but has to share those five with other tasks in the engine's task system.


Figure 15: CPU load balancing, 6 core system (left) and 4 core (right)

The F1 2015 engine uses a task-based system for its work distribution, with different task scheduling predefined for 2, 4, 8, 12, and 16 logical processor systems. The system tries to schedule tasks to reduce any dependencies.
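
A simple sketch of the idea (purely illustrative, not the engine's actual scheduler) maps the detected logical processor count onto the nearest predefined schedule:

#include <stdio.h>

/* pick the nearest predefined task schedule (2, 4, 8, 12 or 16 logical
   processors) that does not exceed the detected processor count */
static int pick_schedule(int logical_processors)
{
  static const int schedules[] = { 2, 4, 8, 12, 16 };
  int chosen = schedules[0];
  for (int i = 0; i < (int)(sizeof(schedules) / sizeof(schedules[0])); ++i)
    if (schedules[i] <= logical_processors)
      chosen = schedules[i];
  return chosen;
}

int main(void)
{
  for (int n = 2; n <= 16; ++n)
    printf("%2d logical processors -> use the %d-processor schedule\n",
           n, pick_schedule(n));
  return 0;
}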

Visual Comparisons

You can see in Figure 16 a side-by-side comparison of the original (left) and ultra particle systems (right). Note the improvements to the car vortices, with the spray more clearly tied to the individual cars that are generating it.


Figure 16: Car vortices

Improvements to the spray that the cars kick up when they lose grip on the track are clearly shown in Figure 17, with individual spray droplets visible against the F1 2015 logo on the higher quality settings. It’s also possible to see the improvements to the tire's interaction with the track.


Figure 17: Skid

Performance scaling

F1 2015 includes a built-in benchmark that uses the game's current graphics settings and can be configured to run under clear or stormy conditions. Figure 19 shows the benchmark performance numbers recorded on an Intel Core i7-5960x processor, running at a fixed 3.0 GHz (its base frequency) using a NVIDIA TitanX* video card. Figure 20 shows the settings we used. The numbers reported are the average frame rate for the full benchmark run.


Figure 19: Multi-core Performance scaling


Figure 20: Multi-core Benchmark Settings

The number of physical cores was modified in the BIOS with all other hardware settings untouched. The tests were then repeated with Intel HT Technology disabled.

The game wasn’t run on a system with just 2 cores and no Intel HT Technology as this was below the game's listed minimum system requirements. In general, enabling Intel HT Technology increased performance approximately as much as adding 2 more cores to the PC. Moving from two cores with Intel HT Technology to four cores with Intel HT Technology gave a 79% increase in the game frame rate, while moving to a six-core system with Intel HT Technology gave an additional 17%. When utilizing the full 8 cores on the Intel Core i7-5960x processor, the benchmark numbers showed a 27% performance increase compared to the same system running with 4 cores and Intel HT Technology.ii

Given that the same system was used for all tests, the performance gains are a measure of how well the game's workload is threaded and how well the system configures the work based on the number of available logical processors. In all tests the game benefits from the processor's large 20 MB cache even when running with a reduced core count. The performance numbers can’t be compared directly to a retail four-core system due to the differences in frequency and cache. Because of the large cache in the Intel Core i7-5960x processor, performance is better than a similarly clocked system with four cores.

Conclusions

F1 2015 leads the industry in showing how a modern CPU can be used to improve a game’s audio and visuals. Balancing the load between the CPU and GPU can bring better visuals to a game without a corresponding need to upgrade the GPU. Through the use of a CPU-based particle system, Codemasters was able to add visuals that complemented the other work performed on F1 2015 to improve the cars’ handling, especially in wet conditions, with a set of state-of-the-art weather effects closely tied to the game's physics systems. By making the game engine fully utilize both Intel HT Technology and all available CPU cores, significant performance gains were realized, providing smooth gameplay even with challenging visuals.

About the Author

Leigh Davies is a Senior Application Engineer at Intel with over 15 years of programming experience in the PC gaming industry, originally working with several developers in the UK and then with Intel. He is currently a member of the European Visual Computing Software Enabling Team, providing technical support to game developers. Over the last few years Leigh has worked on a wide variety of technology enabling areas from graphics techniques (optimization, Order Independent Transparency, and Adaptive Volumetric Shadow Mapping) to multi-core scaling, plus enabling platform optimizations such as touch and sensor controls. For the last two years Leigh has worked on Windows (DirectX 11 and 12) and Android* (GLES 3.1).

Codemasters Credits

Below is a list of the people who directly contributed to the multi-core optimizations and visual enhancements performed for the PC platform.

Codemasters F1 Team:

Tom Hammersley, Principal Programmer
Leigh Bradburn, Principal Programmer
Andrew Wright, Principal Programmer
David Larsson, Experienced Programmer
Andrew Stewart, Senior VFX Artist
Adrian Smith, Principal Programmer
Lars Hammer, Senior Programmer
Russell Wood, Senior Programmer
Craig Hupin, Experienced Programmer
David Beirne, Experienced Programmer
Glenn McDonald, Senior Level Designer
Ricky O’Toole, Level Designer

With thanks to:

Richard Kettlewell
Robert Rodriguez
Ben Pottage
Peter Tolnay

References

http://www.formula1-game.com/us/home

https://en.wikipedia.org/wiki/Hyper-threading

https://software.intel.com/en-us/gpa

https://dev.windows.com/en-us/downloads/windows-10-sdk

i Intel technologies may require enabled hardware, specific software, or services activation. Performance varies depending on system configuration. Check with your system manufacturer or retailer.

ii Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark* and MobileMark*, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more information go to http://www.intel.com/performance.

Get Amazing Intel GPU Acceleration for Media Pipelines


Online webinar: March 30, 9 a.m. (Pacific time)

Register NOW 

Media application developers unite! Accessing the heterogeneous capabilities of Intel® Core™ and Intel® Xeon® processors1 unlocks amazing opportunities for faster performance, utilizing some of the most disruptive and rapidly improving aspects of Intel processor design.

Ensure that your media applications and solutions aren't leaving performance options untapped. Learn tips and tricks for adding hardware acceleration to your media code with advanced Intel media software tools in this webinar:

Get Amazing Intel GPU Acceleration for Media Pipelines webinar
March 30, 9 a.m. (Pacific time) - Sign Up Today

And what’s even better than that? Many of these options and tools are FREE - and already integrated into popular open source frameworks like FFmpeg and OpenCV (more details are below). 

Intel’s amazing GPU capabilities are easy to use, with an awesome set of tools to help you capture the best performance, quality and efficiency from your media workloads. This overview includes:

  • Intel GPU capabilities and architecture
  • Details on Intel's hardware accelerated codecs 
  • How to get started with rapid application development using FFmpeg and OpenCV (it can be easy!)
  • How to get even better performance by programming directly to Intel® Media SDK and Intel® SDK for OpenCL™ Applications
  • H.264 (AVC) and H.265 (HEVC) capabilities
  • Brief tools introduction, and more!

Register Now

Figure 1.  CPU/GPU Evolution

Figure 1 shows how Intel's graphics processor (GPU) has gained importance and die area with each generation of Intel Core processor. With potential video performance indicated by the number of execution units (EUs), you can see how quickly the Core processor has moved from only 12 EUs to 72.

 

Advanced Media Software Tools & Free Downloads  

 

Webinar Speakers



Future Webinars, Connect with Intel at Upcoming Events

More webinar topics are planned later this year; watch our site for updates. You can see Intel's media acceleration tools and technologies in action and meet with Intel technical experts at these upcoming industry events.

 

1See hardware requirements for technical specifications.

Oregon Health & Science University Uses Trusted Analytics Platform to Analyze Big Data from Wearables


Trusted Analytics Platform (TAP) helps cardiologists analyze Big Data in a next-generation clinical study that merges 24x7 lifestyle data with wearable devices, home monitoring devices, and clinical records.

The You 24x7 Cardiovascular Wellness Study has pioneered a Big Data approach to clinical research: using wearable devices to record minute-by-minute biometric and activity data from 359 participants for six months. A team of cardiologists and sleep experts at Oregon Health & Science University (OHSU) gathered a wide range of data from wearables, home monitoring devices, patient surveys, electronic health records (EHRs), and laboratory results to gain a fuller picture of participant wellness than is possible in traditional clinical studies. To perform advanced analytics on the 500 million data points collected, they collaborated with Intel using open source TAP.

Download the complete white paper (PDF): OHSU_wp_333222.pdf

 

 

Intel® XDK FAQs - General


How can I get started with Intel XDK?

There are plenty of videos and articles that you can go through here to get started. You could also start with some of our demo apps. It may also help to read Five Useful Tips on Getting Started Building Cordova Mobile Apps with the Intel XDK, which will help you understand some of the differences between developing for a traditional server-based environment and developing for the Intel XDK hybrid Cordova app environment.

Having prior understanding of how to program using HTML, CSS and JavaScript* is crucial to using the Intel XDK. The Intel XDK is primarily a tool for visualizing, debugging and building an app package for distribution.

You can do the following to access our demo apps:

  • Select Project tab
  • Select "Start a New Project"
  • Select "Samples and Demos"
  • Create a new project from a demo

If you have specific questions following that, please post them to our forums.

Can I use an external editor for development in Intel® XDK?

Yes, you can open your files and edit them in your favorite editor. However, note that you must use Brackets* to use the "Live Layout Editing" feature. Also, if you are using App Designer (the UI layout tool in Intel XDK) it will make many automatic changes to your index.html file, so it is best not to edit that file externally at the same time you have App Designer open.

Some popular editors among our users include:

  • Sublime Text* (Refer to this article for information on the Intel XDK plugin for Sublime Text*)
  • Notepad++* for a lightweight editor
  • Jetbrains* editors (Webstorm*)
  • Vim* the editor

How do I get code refactoring capability in Brackets* (the Intel XDK code editor)?

...to be written...

Why doesn’t my app show up in Google* play for tablets?

...to be written...

What is the global-settings.xdk file and how do I locate it?

global-settings.xdk contains information about all your projects in the Intel XDK, along with many of the settings related to panels under each tab (Emulate, Debug etc). For example, you can set the emulator to auto-refresh or no-auto-refresh. Modify this file at your own risk and always keep a backup of the original!

You can locate global-settings.xdk here:

  • Mac OS X*
    ~/Library/Application Support/XDK/global-settings.xdk
  • Microsoft Windows*
    %LocalAppData%\XDK
  • Linux*
    ~/.config/XDK/global-settings.xdk

If you are having trouble locating this file, you can search for it on your system using something like the following:

  • Windows:
    > cd /
    > dir /s global-settings.xdk
  • Mac and Linux:
    $ sudo find / -name global-settings.xdk

When do I use the intelxdk.js, xhr.js and cordova.js libraries?

The intelxdk.js and xhr.js libraries were only required for use with the Intel XDK legacy build tiles (which have been retired). The cordova.js library is needed for all Cordova builds. When building with the Cordova tiles, any references to intelxdk.js and xhr.js libraries in your index.html file are ignored.

How do I get my Android (and Crosswalk) keystore file?

Previously you needed to email us, but now you can download your Android (and Crosswalk) keystore file directly. Go to this page https://appcenter.html5tools-software.intel.com/certificate/export.aspx and log in (if asked) using your Intel XDK account credentials. You may have to go back to that location a second time after logging in (do this within the same browser tab that you just logged in with to preserve your login credentials).

If successful, there is a link that, when clicked, will generate a request for an "identification code" for retrieving your keystore. Clicking this link will cause an email to be sent to the email address registered to your account. This email will contain your "identification code," but will call it an "authentication code," a different term for the same thing. Use this "authentication code" that you received by email to fill in the second form on the web page, above. Filling in that form with the code you received will take you to a new page where you will see:

  • a "Download keystore" link
  • your "key alias"
  • your "keystore password"
  • your "key password"

Make sure you copy down ALL the information provided! You will need all of that information in order to make use of the keystore. If you lose the password and alias information it will render the key useless!

If the above fails, please send an email to html5tools@intel.com requesting help. It is important that you send that email from the email address associated with your Intel XDK account.

How do I rename my project that is a duplicate of an existing project?

See this FAQ: How do I make a copy of an existing Intel XDK project?

How do I recover when the Intel XDK hangs or won't start?

  • If you are running Intel XDK on Windows* it must be Windows* 7 or higher. It will not run reliably on earlier versions.
  • Delete the "project-name.xdk" file from the project directory that Intel XDK is trying to open when it starts (it will try to open the project that was open during your last session), then try starting Intel XDK. You will have to "import" your project into Intel XDK again. Importing merely creates the "project-name.xdk" file in your project directory and adds that project to the "global-settings.xdk" file.
  • Rename the project directory Intel XDK is trying to open when it starts. Create a new project based on one of the demo apps. Test Intel XDK using that demo app. If everything works, restart Intel XDK and try it again. If it still works, rename your problem project folder back to its original name and open Intel XDK again (it should now open the sample project you previously opened). You may have to re-select your problem project (Intel XDK should have forgotten that project during the previous session).
  • Clear Intel XDK's program cache directories and files.

    On a Windows machine this can be done using the following on a standard command prompt (administrator is not required):

    > cd %AppData%\..\Local\XDK
    > del *.* /s/q

    To locate the "XDK cache" directory on [OS X*] and [Linux*] systems, do the following:

    $ sudo find / -name global-settings.xdk
    $ cd <dir found above>
    $ sudo rm -rf *

    You might want to save a copy of the "global-settings.xdk" file before you delete that cache directory and copy it back before you restart Intel XDK. Doing so will save you the effort of rebuilding your list of projects. Please refer to this question for information on how to locate the global-settings.xdk file.
  • If you save the "global-settings.xdk" file and restored it in the step above and you're still having hang troubles, try deleting the directories and files above, along with the "global-settings.xdk" file and try it again.
  • Do not store your project directories on a network share (Intel XDK currently has issues with network shares that have not yet been resolved). This includes folders shared between a Virtual machine (VM) guest and its host machine (for example, if you are running Windows* in a VM running on a Mac* host). This network share issue is a known issue with a fix request in place.
  • There have also been issues with running behind a corporate network proxy or firewall. To check them try running Intel XDK from your home network where, presumably, you have a simple NAT router and no proxy or firewall. If things work correctly there then your corporate firewall or proxy may be the source of the problem.
  • Issues with Intel XDK account logins can also cause Intel XDK to hang. To confirm that your login is working correctly, go to the Intel XDK App Center and confirm that you can login with your Intel XDK account. While you are there you might also try deleting the offending project(s) from the App Center.

If you can reliably reproduce the problem, please send us a copy of the "xdk.log" file that is stored in the same directory as the "global-settings.xdk" file to html5tools@intel.com.

Is Intel XDK an open source project? How can I contribute to the Intel XDK community?

No, it is not an open source project. However, it utilizes many open source components that are then assembled into Intel XDK. While you cannot contribute directly to the Intel XDK integration effort, you can contribute to the many open source components that make up Intel XDK.

The following open source components are the major elements that are being used by Intel XDK:

  • Node-Webkit
  • Chromium
  • Ripple* emulator
  • Brackets* editor
  • Weinre* remote debugger
  • Crosswalk*
  • Cordova*
  • App Framework*

How do I configure Intel XDK to use 9 patch png for Android* apps splash screen?

Intel XDK does support the use of 9 patch png for Android* apps splash screen. You can read up more at https://software.intel.com/en-us/xdk/articles/android-splash-screens-using-nine-patch-png on how to create a 9 patch png image and link to an Intel XDK sample using 9 patch png images.

How do I stop AVG from popping up the "General Behavioral Detection" window when Intel XDK is launched?

You can try adding nw.exe as the app that needs an exception in AVG.

What do I specify for "App ID" in Intel XDK under Build Settings?

Your app ID uniquely identifies your app. For example, it can be used to identify your app within Apple’s application services allowing you to use things like in-app purchasing and push notifications.

Here are some useful articles on how to create an App ID:

Is it possible to modify the Android Manifest with the Intel XDK?

You cannot modify the AndroidManifest.xml file directly with our build system, as it only exists in the cloud. However, you may do so by creating a dummy plugin that only contains a plugin.xml file containing directives that can be used to add lines to the AndroidManifest.xml file during the build process. In essence, you add lines to the AndroidManifest.xml file via a local plugin.xml file. Here is an example of a plugin that does just that:

<?xml version="1.0" encoding="UTF-8"?>
<plugin xmlns="http://apache.org/cordova/ns/plugins/1.0" id="my-custom-intents-plugin" version="1.0.0">
  <name>My Custom Intents Plugin</name>
  <description>Add Intents to the AndroidManifest.xml</description>
  <license>MIT</license>
  <engines>
    <engine name="cordova" version=">=3.0.0" />
  </engines>
  <!-- android -->
  <platform name="android">
    <config-file target="AndroidManifest.xml" parent="/manifest/application">
      <activity android:configChanges="orientation|keyboardHidden|keyboard|screenSize|locale" android:label="@string/app_name" android:launchMode="singleTop" android:name="testa" android:theme="@android:style/Theme.Black.NoTitleBar">
        <intent-filter>
          <action android:name="android.intent.action.SEND" />
          <category android:name="android.intent.category.DEFAULT" />
          <data android:mimeType="*/*" />
        </intent-filter>
      </activity>
    </config-file>
  </platform>
</plugin>

You can inspect the AndroidManifest.xml created in an APK, using apktool with the following command line:

$ apktool d my-app.apk
$ cd my-app
$ more AndroidManifest.xml

This technique exploits the config-file element that is described in the Cordova Plugin Specification docs and can also be used to add lines to iOS plist files. See the Cordova plugin documentation link for additional details.

How can I share my Intel XDK app build?

You can send a link to your project via an email invite from your project settings page. However, a login to your account is required to access the file behind the link. Alternatively, you can download the build from the build page, onto your workstation, and push that built image to some location from which you can send a link to that image.

Why does my iOS build fail when I am able to test it successfully on a device and the emulator?

Common reasons include:

  • Your App ID specified in the project settings do not match the one you specified in Apple's developer portal.
  • The provisioning profile does not match the cert you uploaded. Double check with Apple's developer site that you are using the correct and current distribution cert and that the provisioning profile is still active. Download the provisioning profile again and add it to your project to confirm.
  • In Project Build Settings, your App Name is invalid. It should be modified to include only letters, numbers and spaces.

How do I add multiple domains in Domain Access?

Here is the primary doc source for that feature.

If you need to insert multiple domain references, then you will need to add the extra references in the intelxdk.config.additions.xml file. This StackOverflow entry provides a basic idea and you can see the intelxdk.config.*.xml files that are automatically generated with each build for the <access origin="xxx" /> line that is generated based on what you provide in the "Domain Access" field of the "Build Settings" panel on the Project Tab.

How do I build more than one app using the same Apple developer account?

On Apple developer, create a distribution certificate using the "iOS* Certificate Signing Request" key downloaded from Intel XDK Build tab only for the first app. For subsequent apps, reuse the same certificate and import this certificate into the Build tab like you usually would.

How do I include search and spotlight icons as part of my app?

Please refer to this article in the Intel XDK documentation. Create an intelxdk.config.additions.xml file in your top-level directory (same location as the other intelxdk.*.config.xml files) and add the following lines for supporting icons in Settings and other areas in iOS*.

<!-- Spotlight Icon -->
<icon platform="ios" src="res/ios/icon-40.png" width="40" height="40" />
<icon platform="ios" src="res/ios/icon-40@2x.png" width="80" height="80" />
<icon platform="ios" src="res/ios/icon-40@3x.png" width="120" height="120" />

<!-- iPhone Spotlight and Settings Icon -->
<icon platform="ios" src="res/ios/icon-small.png" width="29" height="29" />
<icon platform="ios" src="res/ios/icon-small@2x.png" width="58" height="58" />
<icon platform="ios" src="res/ios/icon-small@3x.png" width="87" height="87" />

<!-- iPad Spotlight and Settings Icon -->
<icon platform="ios" src="res/ios/icon-50.png" width="50" height="50" />
<icon platform="ios" src="res/ios/icon-50@2x.png" width="100" height="100" />

For more information related to these configurations, visit http://cordova.apache.org/docs/en/3.5.0/config_ref_images.md.html#Icons%20and%20Splash%20Screens.

For accurate information related to iOS icon sizes, visit https://developer.apple.com/library/ios/documentation/UserExperience/Conceptual/MobileHIG/IconMatrix.html

NOTE: The iPhone 6 icons will only be available if iOS* 7 or 8 is the target.

Cordova iOS* 8 support JIRA tracker: https://issues.apache.org/jira/browse/CB-7043

Does Intel XDK support Modbus TCP communication?

No, since Modbus is a specialized protocol, you need to write either some JavaScript* or native code (in the form of a plugin) to handle the Modbus transactions and protocol.

How do I sign an Android* app using an existing keystore?

Uploading an existing keystore in Intel XDK is not currently supported but you can send an email to html5tools@intel.com with this request. We can assist you there.

How do I build separately for different Android* versions?

Under the Projects Panel, you can select the Target Android* version under the Build Settings collapsible panel. You can change this value and build your application multiple times to create numerous versions of your application that are targeted for multiple versions of Android*.

How do I display the 'Build App Now' button if my display language is not English?

If your display language is not English and the 'Build App Now' button is proving to be troublesome, you may change your display language to English, which can be downloaded via a Windows* update. Once you have installed the English language pack, proceed to Control Panel > Clock, Language and Region > Region and Language > Change Display Language.

How do I update my Intel XDK version?

When an Intel XDK update is available, an Update Version dialog box lets you download the update. After the download completes, a similar dialog lets you install it. If you did not download or install an update when prompted (or on older versions), click the package icon next to the orange (?) icon in the upper-right to download or install the update. The installation removes the previous Intel XDK version.

How do I import my existing HTML5 app into the Intel XDK?

If your project contains an Intel XDK project file (<project-name>.xdk) you should use the "Open an Intel XDK Project" option located at the bottom of the Projects List on the Projects tab (lower left of the screen, round green "eject" icon, on the Projects tab). This would be the case if you copied an existing Intel XDK project from another system or used a tool that exported a complete Intel XDK project.

If your project does not contain an Intel XDK project file (<project-name>.xdk) you must "import" your code into a new Intel XDK project. To import your project, use the "Start a New Project" option located at the bottom of the Projects List on the Projects tab (lower left of the screen, round blue "plus" icon, on the Projects tab). This will open the "Samples, Demos and Templates" page, which includes an option to "Import Your HTML5 Code Base." Point to the root directory of your project. The Intel XDK will attempt to locate a file named index.html in your project and will set the "Source Directory" on the Projects tab to point to the directory that contains this file.

If your imported project did not contain an index.html file, your project may be unstable. In that case, it is best to delete the imported project from the Intel XDK Projects tab ("x" icon in the upper right corner of the screen), rename your "root" or "main" html file to index.html and import the project again. Several components in the Intel XDK depend on the assumption that the main HTML file in your project is named index.html. See Introducing Intel® XDK Development Tools for more details.

It is highly recommended that your "source directory" be located as a sub-directory inside your "project directory." This ensures that non-source files are not included as part of your build package when building your application. If the "source directory" and "project directory" are the same, it results in longer upload times to the build server and unnecessarily large application executable files returned by the build system. See the following images for the recommended project file layout.

I am unable to login to App Preview with my Intel XDK password.

On some devices you may have trouble entering your Intel XDK login password directly on the device in the App Preview login screen. In particular, sometimes you may have trouble with the first one or two letters getting lost when entering your password.

Try the following if you are having such difficulties:

  • Reset your password, using the Intel XDK, to something short and simple.

  • Confirm that this new short and simple password works with the XDK (logout and login to the Intel XDK).

  • Confirm that this new password works with the Intel Developer Zone login.

  • Make sure you have the most recent version of Intel App Preview installed on your devices. Go to the store on each device to confirm you have the most recent copy of App Preview installed.

  • Try logging into Intel App Preview on each device with this short and simple password. Check the "show password" box so you can see your password as you type it.

If the above works, it confirms that you can log into your Intel XDK account from App Preview (because App Preview and the Intel XDK go to the same place to authenticate your login). When the above works, you can go back to the Intel XDK and reset your password to something else, if you do not like the short and simple password you used for the test.

How do I completely uninstall the Intel XDK from my system?

Take the following steps to completely uninstall the XDK from your Windows system:

  • From the Windows Control Panel, remove the Intel XDK, using the Windows uninstall tool.

  • Then:
    > cd %LocalAppData%\Intel\XDK
    > del *.* /s/q

  • Then:
    > cd %LocalAppData%\XDK
    > copy global-settings.xdk %UserProfile%
    > del *.* /s/q
    > copy %UserProfile%\global-settings.xdk .

  • Then:
    -- Go to xdk.intel.com and select the download link.
    -- Download and install the new XDK.

To do the same on a Linux or Mac system:

  • On a Linux machine, run the uninstall script, typically /opt/intel/XDK/uninstall.sh.
     
  • Remove the directory into which the Intel XDK was installed.
    -- Typically /opt/intel or your home (~) directory on a Linux machine.
    -- Typically in the /Applications/Intel XDK.app directory on a Mac.
     
  • Then:
    $ find ~ -name global-settings.xdk
    $ cd <result-from-above> (for example ~/Library/Application Support/XDK/ on a Mac)
    $ cp global-settings.xdk ~
    $ rm -Rf *
    $ mv ~/global-settings.xdk .

     
  • Then:
    -- Go to xdk.intel.com and select the download link.
    -- Download and install the new XDK.

Is there a tool that can help me highlight syntax issues in the Intel XDK?

Yes, you can use the various linting tools that can be added to the Brackets editor to review any syntax issues in your HTML, CSS and JS files. Go to the "File > Extension Manager..." menu item and add the following extensions: JSHint, CSSLint, HTMLHint, XLint for Intel XDK. Then, review your source files by monitoring the small yellow triangle at the bottom of the edit window (a green check mark indicates no issues).

How do I delete built apps and test apps from the Intel XDK build servers?

You can manage them by logging into: https://appcenter.html5tools-software.intel.com/csd/controlpanel.aspx. This functionality will eventually be available within the Intel XDK, after which access to the App Center will be removed.

I need help with the App Security API plugin; where do I find it?

Visit the primary documentation book for the App Security API and see this forum post for some additional details.

When I install my app onto my test device Avast antivirus flags it as a possible virus, why?

If you are receiving a "Suspicious file detected - APK:CloudRep [Susp]" message it is likely because you are side-loading the app onto your device (using a download link or adb) or you have downloaded your app from an "untrusted" store. See the following official explanation from Avast:

Your application was flagged by our cloud reputation system. "Cloud rep" is a new feature of Avast Mobile Security, which flags apks when the following conditions are met:

  1. The file is not prevalent enough; meaning not enough users of Avast Mobile Security have installed your APK.
  2. The source is not an established market (Google Play is an example of an established market).

If you distribute your app using Google Play (or any other trusted market) your users should not see any warning from Avast.

How do I add a Brackets extension to the editor that is part of the Intel XDK?

The number of Brackets extensions provided in the built-in edition of the Brackets editor is deliberately limited to ensure stability of the Intel XDK product. Not all extensions are compatible with the edition of Brackets that is embedded within the Intel XDK. Adding incompatible extensions can cause the Intel XDK to quit working.

Despite this warning, there are useful extensions that have not been included in the editor and which can be added to the Intel XDK. Adding them is temporary; each time you update (or reinstall) the Intel XDK, you will have to re-add your Brackets extensions. To add a Brackets extension, use the following procedure:

  • exit the Intel XDK
  • download a ZIP file of the extension you wish to add
  • on Windows, unzip the extension here:
    %LocalAppData%\Intel\XDK\xdk\brackets\b\extensions\dev
  • on Mac OS X, unzip the extension here:
    /Applications/Intel\ XDK.app/Contents/Resources/app.nw/brackets/b/extensions/dev
  • start the Intel XDK

Note that the locations given above are subject to change with new releases of the Intel XDK.
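
For example, on OS X you might add a downloaded extension like this (the ZIP file name "my-extension.zip" is just a placeholder, and the destination path may change between Intel XDK releases, as noted above):

  $ cd /Applications/Intel\ XDK.app/Contents/Resources/app.nw/brackets/b/extensions/dev
  $ unzip ~/Downloads/my-extension.zip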

Why does my app or game require so many permissions on Android when built with the Intel XDK?

When you build your HTML5 app using the Intel XDK for Android or Android-Crosswalk you are creating a Cordova app. It may seem like you're not building a Cordova app, but you are. In order to package your app so it can be distributed via an Android store and installed on an Android device, it needs to be built as a hybrid app. The Intel XDK uses Cordova to create that hybrid app.

A pure Cordova app requires the NETWORK permission, which is needed to "jump" between your HTML5 environment and the native Android environment. Additional permissions will be added by any Cordova plugins you include with your application; which permissions are included is a function of what each plugin does and requires.

Crosswalk for Android builds also require the NETWORK permission, because the Crosswalk image built by the Intel XDK includes support for Cordova. In addition, current versions of Crosswalk (12 and 14 at the time this FAQ was written) also require NETWORK STATE and WIFI STATE. There is an extra permission in some versions of Crosswalk (WRITE EXTERNAL STORAGE) that is only needed by the shared model library of Crosswalk; we have asked the Crosswalk project to remove this permission in a future Crosswalk version.

If you are seeing more than the following five permissions in your XDK-built Crosswalk app:

  • android.permission.INTERNET
  • android.permission.ACCESS_NETWORK_STATE
  • android.permission.ACCESS_WIFI_STATE
  • android.permission.INTERNET
  • android.permission.WRITE_EXTERNAL_STORAGE

then you are seeing permissions that have been added by some plugins. Each plugin is different, so there is no hard rule of thumb. The two "default" core Cordova plugins that are added by the Intel XDK blank templates (device and splash screen) do not require any Android permissions.

BTW: the permission list above comes from a Crosswalk 14 build. Crosswalk 12 builds do not include the last permission; it was added when the Crosswalk project introduced the shared model library option, which started with Crosswalk 13 (the Intel XDK does not support Crosswalk 13 builds).

How do I make a copy of an existing Intel XDK project?

If you just need to make a backup copy of an existing project, and do not plan to open that backup copy as a project in the Intel XDK, do the following:

  • Exit the Intel XDK.
  • Copy the entire project directory:
    • on Windows, use File Explorer to "right-click" and "copy" your project directory, then "right-click" and "paste"
    • on Mac use Finder to "right-click" and then "duplicate" your project directory
    • on Linux, open a terminal window, "cd" to the folder that contains your project, and type "cp -a old-project/ new-project/" at the terminal prompt (where "old-project/" is the folder name of your existing project that you want to copy and "new-project/" is the name of the new folder that will contain a copy of your existing project)

If you want to use an existing project as the starting point of a new project in the Intel XDK, follow the process described below. It ensures that the build system does not confuse the ID in your old project with the one stored in your new project. If you do not follow the procedure below you will have multiple projects using the same project ID (a special GUID that is stored inside the Intel XDK <project-name>.xdk file in the root directory of your project). Each project in your account must have a unique project ID.

  • Exit the Intel XDK.
  • Make a copy of your existing project using the process described above.
  • Inside the new project that you made (that is, your new copy of your old project), make copies of the <project-name>.xdk file and <project-name>.xdke files and rename those copies to something like project-new.xdk and project-new.xdke (anything you like, just something different than the original project name, preferably the same name as the new project folder in which you are making this new project).
  • Using a TEXT EDITOR (only) (such as Notepad or Sublime or Brackets or some other TEXT editor), open your new "project-new.xdk" file (whatever you named it) and find the projectGuid line, it will look something like this:
    "projectGuid": "a863c382-ca05-4aa4-8601-375f9f209b67",
  • Change the "GUID" to all zeroes, like this: "00000000-0000-0000-000000000000"
  • Save the modified "project-new.xdk" file.
  • Open the Intel XDK.
  • Go to the Projects tab.
  • Select "Open an Intel XDK Project" (the green button at the bottom left of the Projects tab).
  • To open this new project, locate the new "project-new.xdk" file inside the new project folder you copied above.
  • Don't forget to change the App ID in your new project. This is necessary to avoid conflicts with the project you copied, both in the store and when side-loading onto a device.

My project does not include a www folder. How do I fix it so it includes a www or source directory?

The Intel XDK HTML5 and Cordova project file structures are meant to mimic a standard Cordova project. In a Cordova (or PhoneGap) project there is a subdirectory (or folder) named www that contains all of the HTML5 source code and asset files that make up your application. For best results, follow this convention and put your source inside a "source directory" within your project folder.

This most commonly happens as the result of exporting a project from an external tool, such as Construct2, or as the result of importing an existing HTML5 web app that you are converting into a hybrid mobile application (e.g., an Intel XDK Cordova app). If you would like to convert an existing Intel XDK project into this format, follow the steps below:

  • Exit the Intel XDK.
  • Copy the entire project directory:
    • on Windows, use File Explorer to "right-click" and "copy" your project directory, then "right-click" and "paste"
    • on Mac use Finder to "right-click" and then "duplicate" your project directory
    • on Linux, open a terminal window, "cd" to the folder that contains your project, and type "cp -a old-project/ new-project/" at the terminal prompt (where "old-project/" is the folder name of your existing project that you want to copy and "new-project/" is the name of the new folder that will contain a copy of your existing project)
  • Create a "www" directory inside the new duplicate project you just created above.
  • Move your index.html and other source and asset files to the "www" directory you just created -- this is now your "source" directory, located inside your "project" directory (do not move the <project-name>.xdk and xdke files and any intelxdk.config.*.xml files, those must stay in the root of the project directory)
  • Inside the new project that you made above (by making a copy of the old project), rename the <project-name>.xdk file and <project-name>.xdke files to something like project-copy.xdk and project-copy.xdke (anything you like, just something different than the original project, preferably the same name as the new project folder in which you are making this new project).
  • Using a TEXT EDITOR (only) (such as Notepad or Sublime or Brackets or some other TEXT editor), open the new "project-copy.xdk" file (whatever you named it) and find the line named projectGuid, it will look something like this:
    "projectGuid": "a863c382-ca05-4aa4-8601-375f9f209b67",
  • Change the "GUID" to all zeroes, like this: "00000000-0000-0000-000000000000"
  • A few lines down find: "sourceDirectory": "",
  • Change it to this: "sourceDirectory": "www",
  • Save the modified "project-copy.xdk" file.
  • Open the Intel XDK.
  • Go to the Projects tab.
  • Select "Open an Intel XDK Project" (the green button at the bottom left of the Projects tab).
  • To open this new project, locate the new "project-copy.xdk" file inside the new project folder you copied above.
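
When you are done, the two edited lines in your renamed "project-copy.xdk" file should look roughly like this (everything else in the file is left untouched, and the surrounding fields will differ from project to project):

  "projectGuid": "00000000-0000-0000-0000-000000000000",
  ...
  "sourceDirectory": "www",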

Can I install more than one copy of the Intel XDK onto my development system?

Yes, you can install more than one version onto your development system. However, you cannot run multiple instances of the Intel XDK at the same time. Be aware that new releases sometimes change the project file format, so it is a good idea, in these cases, to make a copy of your project if you need to experiment with a different version of the Intel XDK. See the instructions in a FAQ entry above regarding how to make a copy of your Intel XDK project.

Follow the instructions in this forum post to install more than one copy of the Intel XDK onto your development system.

On Apple OS X* and Linux* systems, does the Intel XDK need the OpenSSL* library installed?

Yes. Several features of the Intel XDK require the OpenSSL library, which typically comes pre-installed on Linux and OS X systems. If the Intel XDK reports that it could not find libssl, go to https://www.openssl.org to download and install it.

I have a web application that I would like to distribute in app stores without major modifications. Is this possible using the Intel XDK?

Yes, if you have a true web app or “client app” that only uses HTML, CSS and JavaScript, it is usually not too difficult to convert it to a Cordova hybrid application (this is what the Intel XDK builds when you create an HTML5 app). If you rely heavily on PHP or other server scripting languages embedded in your pages you will have more work to do. Because your Cordova app is not associated with a server, you cannot rely on server-based programming techniques; instead, you must rewrite any such code to use RESTful APIs that your app interacts with using, for example, AJAX calls.
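
As a rough sketch, a page that previously submitted a form to a server-side PHP script might instead call a REST endpoint with an AJAX request. The URL and field names below are made up for illustration only; substitute your own service:

  // hypothetical REST endpoint -- replace with your own service URL
  var xhr = new XMLHttpRequest();
  xhr.open("POST", "https://api.example.com/login");
  xhr.setRequestHeader("Content-Type", "application/json");
  xhr.onload = function () {
      // handle the server's JSON reply inside the app instead of on the server
      var result = JSON.parse(xhr.responseText);
      console.log(result);
  };
  xhr.send(JSON.stringify({ user: "jane", password: "secret" }));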

What is the best training approach to using the Intel XDK for a newbie?

First, become well-versed in the art of client web apps: apps that rely only on HTML, CSS and JavaScript and utilize RESTful APIs to talk to network services. With that you will have mastered 80% of the problem. After that, it is simply a matter of understanding how Cordova plugins extend the JavaScript API to give you access to features of the platform. For HTML5 training there are many sites providing tutorials. Also consider reading Five Useful Tips on Getting Started Building Cordova Mobile Apps with the Intel XDK, which explains some of the differences between developing for a traditional server-based environment and developing for the Intel XDK hybrid Cordova app environment.
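
As a small illustration of how a plugin extends the JavaScript API: if your app includes the standard cordova-plugin-device plugin, a global device object becomes available once the deviceready event fires (this is a minimal sketch, not a complete app):

  document.addEventListener("deviceready", function () {
      // the device object is provided by the cordova-plugin-device plugin
      console.log("Running on " + device.platform + " " + device.version);
  }, false);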

What is the best platform to start building an app with the Intel XDK? And what are the important differences between the Android, iOS and other mobile platforms?

There is no single most important difference between the Android, iOS and other platforms. It is important to understand that the HTML5 runtime engine that executes your app will vary as a function of the platform. Just as there are differences between Chrome and Firefox and Safari and Internet Explorer, there are differences between iOS 9 and iOS 8 and Android 4 and Android 5, etc. Android has the most significant differences between vendors and versions. This is one of the reasons the Intel XDK offers the Crosswalk for Android build option: it normalizes and updates the HTML5 runtime across Android devices and versions.

In general, if you get your app working well on Android (or Crosswalk for Android) first, you will have fewer issues to deal with when you move on to the iOS and Windows platforms. In addition, the Android platform has the most flexible and useful debug options available, so it is the easiest platform to use for debugging and testing your app.

Is my password encrypted and why is it limited to fifteen characters?

Yes, your password is stored encrypted and is managed by https://signin.intel.com. Your Intel XDK userid and password can also be used to log into the Intel XDK forum as well as the Intel Developer Zone. The Intel XDK itself does not store or manage your userid and password.

The rules regarding allowed userids and passwords are answered on this Sign In FAQ page, where you can also find help on recovering and changing your password.

Why does the Intel XDK take a long time to start on Linux or Mac?

...and why am I getting this error message? "Attempt to contact authentication server is taking a long time. You can wait, or check your network connection and try again."

At startup, the Intel XDK attempts to automatically determine the proxy settings for your machine. Unfortunately, on some system configurations it is unable to reliably detect your system proxy settings, and you may see an error message like the one quoted above when starting the Intel XDK.

On some systems you can get around this problem by setting some proxy environment variables and then starting the Intel XDK from a command line that has those environment variables configured. To set those environment variables, use commands similar to the following:

$ export no_proxy="localhost,127.0.0.1/8,::1"
$ export NO_PROXY="localhost,127.0.0.1/8,::1"
$ export http_proxy=http://proxy.mydomain.com:123/
$ export HTTP_PROXY=http://proxy.mydomain.com:123/
$ export https_proxy=http://proxy.mydomain.com:123/
$ export HTTPS_PROXY=http://proxy.mydomain.com:123/

IMPORTANT! The name of your proxy server and the port (or ports) that your proxy server requires will be different than those shown in the example above. Please consult with your IT department to find out what values are appropriate for your site. Intel has no way of knowing what configuration is appropriate for your network.

If you use the Intel XDK in multiple locations (at work and at home), you may have to change the proxy settings before starting the Intel XDK after switching to a new network location. For example, many work networks use a proxy server, but most home networks do not require such a configuration. In that case, you need to be sure to "unset" the proxy environment variables before starting the Intel XDK on a non-proxy network.
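
For example, before starting the Intel XDK on a home (non-proxy) network, you might clear the variables in the same terminal session with something like this:

  $ unset http_proxy HTTP_PROXY https_proxy HTTPS_PROXY no_proxy NO_PROXY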

After you have successfully configured your proxy environment variables, you can start the Intel XDK manually, from the command-line.

On a Mac, where the Intel XDK is installed in the default location, type the following (from a terminal window that has the above environment variables set):

$ open /Applications/Intel\ XDK.app/

On a Linux machine, assuming the Intel XDK has been installed in the ~/intel/XDK directory, type the following (from a terminal window that has the above environment variables set):

$ ~/intel/XDK/xdk.sh &

In the Linux case, you will need to adjust the path to the xdk.sh file to match your installation. The example above assumes a local install into the ~/intel/XDK directory; since Linux installations offer more choices for the installation directory, adjust the command to suit your particular system and install directory.

How do I generate a P12 file on a Windows machine?

See these articles:

How do I change the default dir for creating new projects in the Intel XDK?

You can change the default new project location manually by modifying a field in the global-settings.xdk file. Locate the global-settings.xdk file on your system (the precise location varies as a function of the OS) and find this JSON object inside that file:

"projects-tab": {"defaultPath": "/Users/paul/Documents/XDK","LastSortType": "descending|Name","lastSortType": "descending|Opened","thirdPartyDisclaimerAcked": true
  },

The example above came from a Mac. On a Mac the global-settings.xdk file is located in the "~/Library/Application Support/XDK" directory.

On a Windows machine the global-settings.xdk file is normally found in the "%LocalAppData%\XDK" directory. The part you are looking for will look something like this:

"projects-tab": {"thirdPartyDisclaimerAcked": false,"LastSortType": "descending|Name","lastSortType": "descending|Opened","defaultPath": "C:\\Users\\paul/Documents"
  },

Obviously, it's the defaultPath part you want to change.

BE CAREFUL WHEN YOU EDIT THE GLOBAL-SETTINGS.XDK FILE!! You've been warned...

Make sure the result is proper JSON when you are done, or it may cause your XDK to cough and hack loudly. Make a backup copy of global-settings.xdk before you start, just in case.

