
A Comparison of Intel® RealSense™ Front Facing Camera SR300 and F200

Introduction

The SR300 is the second-generation front-facing Intel® RealSense™ camera that supports Microsoft Windows* 10. Like the F200 camera model, the SR300 uses coded light depth technology to create a high-quality 3D depth video stream at close range. The SR300 camera implements an infrared (IR) laser projector system, a Fast VGA IR camera, and a 2MP color camera with an integrated ISP. The SR300 uses a Fast VGA depth mode instead of the native VGA depth mode of the F200. This new depth mode reduces exposure time and allows dynamic motion up to 2 m/s. The camera enables new platform usages by providing synchronized color, depth, and IR video stream data to the client system. The effective range of the depth solution is optimized for indoor use, from 0.2 to 1.2 m.


Figure 1: SR300 camera model.

The SR300 camera works with the Intel® RealSense™ SDK for Windows; support for the SR300 was added in SDK 2015 R5. The SR300 will become available in 2016, both standalone and built into form factors including PCs, all-in-ones, notebooks, and 2-in-1s. The SR300 model adds new features and a number of improvements over the F200 model:

  • Support for the new Hand Tracking Cursor Mode
  • Support for the new Person Tracking Mode
  • Increased Range and Lateral Speed
  • Improved Color Quality under Low-light Capture and Improved RGB Texture for 3D Scan
  • Improved Color and Depth Stream Synchronization
  • Decreased Power Consumption

Product Highlights | SR300                           | F200
Orientation        | Front facing                    | Front facing
Technology         | Coded light; Fast VGA 60 fps    | Coded light; native VGA 60 fps
Color Camera       | Up to 1080p 30 fps, 720p 60 fps | Up to 1080p 30 fps
SDK                | SDK 2015 R5 or later            | SDK R2 or later
DCM Version        | DCM 3.0.24.51819*               | DCM 1.4.27.41994*
Operating System   | Windows 10 64-bit RTM           | Windows 10 64-bit RTM, Windows 8 64-bit
Range              | Indoors; 20-120 cm              | Indoors; 20-120 cm

* As of Feb 19th, 2016.

New Features only Supported by SR300

Cursor Mode

The standout feature of the SR300 camera model is Cursor Mode. This tracking mode returns a single point on the hand, allowing accurate and responsive 3D cursor point tracking and basic gestures. Cursor Mode also reduces power consumption and improves performance by more than 50% compared to full hand tracking, with no added latency and no calibration required. It increases the tracking range to 85 cm and follows hand motion at speeds up to 2 m/s. Cursor Mode includes the Click gesture, which simulates a mouse click using the index finger.


Figure 2: Click gesture.

Person Tracking

Another new feature provided for the SR300 model is Person Tracking. Person Tracking is also supported on the rear-facing R200 camera, but is not available for the F200. Person Tracking supports real-time 3D body motion tracking and has three main modes: body movement, skeleton joints, and facial recognition.

  • Body movement: Locates the body, head and body contour.
  • Skeleton joints: Returns the positions of the body’s joints in 2D and 3D.
  • Facial recognition: Compares the current face against the database of registered users to determine the user’s identification.

Person Tracking | SR300     | F200
Detection       | 50-250 cm | NA
Tracking        | 50-550 cm | NA
Skeleton        | 50-200 cm | NA

Increased Range and Lateral Speed

The SR300 camera model introduces a new depth mode called Fast VGA. It captures frames at HVGA and interpolates them to VGA before transmitting to the client. This new depth mode reduces exposure time and allows hand motion speeds up to 2 m/s, while the F200’s native VGA mode supports hand motion speeds only up to 0.75 m/s. The SR300 also provides a significant improvement in range over the F200: hand tracking reaches up to 85 cm on the SR300, compared to only 60 cm on the F200, and the hand segmentation range increases to 110 cm from 100 cm.

Hand Tracking Mode    | SR300               | F200
Cursor Mode - general | 20-120 cm (2 m/s)   | NA
Cursor Mode - kids    | 20-80 cm (1-2 m/s)  | NA
Tracking              | 20-85 cm (1.5 m/s)  | 20-60 cm (0.75 m/s)
Gesture               | 20-85 cm (1.5 m/s)  | 20-60 cm (0.75 m/s)
Segmentation          | 20-120 cm (1 m/s)   | 20-100 cm (1 m/s)

The range for face recognition increases from 80 cm for the F200 up to 150 cm for the SR300 model.

Face Tracking Mode | SR300     | F200
Detection          | 30-100 cm | 25-100 cm
Landmark           | 30-100 cm | 30-100 cm
Recognition        | 30-150 cm | 30-80 cm
Expression         | 30-100 cm | 30-100 cm
Pulse              | 30-60 cm  | 30-60 cm
Pose               | 30-100 cm | 30-100 cm

The SR300 model improves RGB texture mapping and achieves a more detailed 3D scan. The range for 3D scanning increases up to 70 cm while also capturing more detail. In the SR300 model, blob tracking speed increases up to 2 m/s and its range increases up to 150 cm.

Others Tracking Mode | SR300              | F200
3D Scanning          | 25-70 cm           | 25-54 cm
Blob Tracking        | 20-150 cm (2 m/s)  | 30-85 cm (1.5 m/s)
Object Tracking      | 30-180 cm          | 30-180 cm

The depth range of the SR300 model improves on the F200 by 50%-60%. At 80 cm, both cameras detect the hand clearly. Beyond 120 cm, the SR300 can still detect the hand, while the F200 cannot detect it at all.


Figure 3: SR300 vs F200 depth range.

Improved Color Quality Under Low-light Capture and Improved RGB Texture for 3D Scan

The new auto exposure feature is only available with the SR300 model. Exposure compensation allows images taken in low-light or high-contrast scenes to achieve better color quality. Note that the color stream frame rate may drop in low-light conditions when color stream auto exposure is enabled.

Function                      | SR300 | F200
Color EV Compensation Control | Yes   | No

Improved Color and Depth Stream Synchronization

The F200 model only supports multiple depth and color applications running at the same frame rate. The SR300 supports multiple depth and color applications running at different frame rates, within an integer interval, while maintaining temporal synchronization. This allows software to switch between different frame rates without having to start or stop the video stream.

Camera Temporal Synchronization                     | SR300 | F200
Sync different stream types of same frame rate      | Yes   | Yes
Sync different stream types of different frame rate | Yes   | No

Decreased Power Consumption

The SR300 camera model enables additional power gear modes that operate at lower frame rates. This allows the imaging system to reduce the camera’s power consumption while still maintaining awareness. With power gear modes, the SR300 can process the scene autonomously while the system is in standby.

Backward Compatibility with F200 Applications

The Intel RealSense Depth Camera Manager (DCM) 3.x enables the SR300 camera to function as an F200 camera, providing backward compatibility for applications developed for the F200 camera model. The DCM emulates the capabilities of the F200 camera so that existing SDK applications work seamlessly on the SR300 model. SR300 features are supported in SDK 2015 R5 or later.

When a streaming request comes from an SDK application compiled with an SDK earlier than SDK 2015 R5, the DCM automatically activates compatibility mode and sends calls through the F200 pipe instead of the SR300 pipe. Most applications should work on the new SR300 model without any configuration.

Infrared Compatibility

The SR300 supports a 10-bit native infrared data format, while the F200 supports an 8-bit native infrared data format. The DCM driver provides compatibility by either removing or padding 2 bits of data to fit the requested infrared data size.
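As a rough sketch of what such a conversion involves (illustrative only; the exact bit handling happens inside the DCM driver), shifting by two bits maps between the formats:

// Illustration only: the DCM performs this conversion internally.
ushort ir10 = 1023;                      // a native 10-bit SR300 IR sample
byte ir8 = (byte)(ir10 >> 2);            // truncate: drop the 2 low bits for an 8-bit (F200) consumer
ushort ir10Padded = (ushort)(ir8 << 2);  // pad: append 2 zero bits to refill a 10-bit container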

Physical Connector

The motherboard and cable design for F200 and SR300 are identical. The F200 cable plug fits into an SR300 receptacle. Therefore, an F200 cable can be used for an SR300 camera model. Both models require fully powered USB 3.0.

SDK APIs

Most SDK APIs are shared between the SR300 and F200 (and in some cases the R200), and the SDK modules provide the proper interface for the camera found at runtime. Similarly, simple color and depth streaming that does not request specific resolutions or pixel formats will run without changes.

By using the SenseManager to read raw streams, stream resolutions, frame rates, and pixel formats can be selected without hardcoding, so no code change is needed.

Even with this automatic adaptation, it is important for every app to check the camera model and configuration at runtime. See Installer Options in the SDK documentation.
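As a rough illustration using the SDK’s C# bindings (libpxcclr.cs.dll; the API names below follow the SDK 2015 R5 documentation and should be verified against your installed version), an application can stream without hardcoding a profile and branch on the detected camera model:

PXCMSenseManager sm = PXCMSenseManager.CreateInstance();

// Request color and depth without pinning resolution, frame rate, or pixel
// format; the SDK negotiates a profile the attached camera supports.
sm.EnableStream(PXCMCapture.StreamType.STREAM_TYPE_COLOR, 0, 0);
sm.EnableStream(PXCMCapture.StreamType.STREAM_TYPE_DEPTH, 0, 0);

if (sm.Init() >= pxcmStatus.PXCM_STATUS_NO_ERROR)
{
    // Check which camera model is present before enabling model-specific features.
    PXCMCapture.DeviceInfo info;
    sm.QueryCaptureManager().QueryDevice().QueryDeviceInfo(out info);

    if (info.model == PXCMCapture.DeviceModel.DEVICE_MODEL_SR300)
    {
        // SR300-only features (e.g., cursor mode) can be enabled here.
    }
}
sm.Close();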

DCM

As of this writing, the gold DCM version for SR300 is DCM 3.0.24.59748, and updates will be provided by Windows Update. Visit https://software.intel.com/en-us/intel-realsense-sdk/download to download the latest DCM. For more information on the DCM, go to Intel® RealSense™ Cameras and DCM Overview.

Camera Type           | SR300 | F200 | R200
DCM Installer Version | 3.x   | 1.x  | 2.x

Hardware Requirements

To support the bandwidth needed by the Intel RealSense camera, a USB 3 port is required in the client system. For details on system requirements and supported operating systems for SR300 and F200, see https://software.intel.com/en-us/RealSense/Devkit/

Summary

This document summarizes the new features and enhancements available with the front-facing Intel RealSense 3D camera SR300 beyond those available with the F200. These new features are supported in SDK 2015 R5 and DCM 3.0.24.51819 or later. This new camera is available to order at http://click.intel.com/realsense.html.

Helpful References

Here is a collection of useful references for the Intel® RealSense™ DCM and SDK, including release notes and how to download and update the software.

About the Author

Nancy Le is a software engineer at Intel Corporation in the Software and Services Group working on Intel® Atom™ processor scale-enabling projects.


Intel® Parallel Computing Center at Argonne Leadership Computing Facility - Argonne National Laboratory

Principal Investigators:

Anouar Benali obtained a Ph.D. in Theoretical Physical Chemistry from the University of Toulouse (France) in 2010. He is an Assistant Computational Scientist at the Argonne Leadership Computing Facility and a fellow of the Computation Institute at the University of Chicago. His work focuses on implementing and speeding up QMC algorithms for high-performance computers.

 

Luke Shulenburger is a staff scientist at Sandia National Laboratories working on electronic structure calculations of materials with a particular focus on extremes of temperature and pressure. He received his PhD from the University of Illinois at Urbana-Champaign in 2008, and was a postdoctoral researcher at the Carnegie Institution of Washington until moving to Sandia in 2010.

 

Description:

Quantum Monte Carlo (QMC) has emerged as an important tool for extreme-scale calculations of complex material properties. QMCPACK is a code for calculating the electronic structure of materials with unprecedented accuracy. It works by stochastically solving the many-body Schrödinger equation. This method is uniquely suited for calculations of technologically important materials and has been shown to be predictive for a wide range of materials and molecules. Over the past decade, the size of the physical problems and computational facilities have been firmly in a regime where the method has been shown to scale nearly linearly with the number of computational elements available. The coming of the exascale era has allowed consideration of larger problems involving thousands of electrons that will need to utilize millions of threads, further straining this relationship. Additionally, the constant memory necessary for evaluating single-particle wavefunctions will grow beyond the fast device memory expected in heterogeneous architectures. Through the Intel® Parallel Computing Center, we aim to increase the current vectorization of the code, parallelize the work for each "walker" to achieve good parallel efficiency using nested threading, and finally develop a caching scheme to allow use of slower main memory for heterogeneous platforms with minor performance penalty. This project will pilot extreme-scale threading and vectorization in a popular QMC code and will disseminate the experience gained to other QMC codes, allowing the study of larger and more realistic systems with predictive accuracy.

Related websites:

http://qmcpack.org
http://www.alcf.anl.gov

DIY Pan-Tilt Person Tracking with Intel® RealSense™ Camera

Download Code Sample

Introduction

In the spirit of the Maker Movement and “America’s Greatest Makers” TV show coming this spring, this article describes a project I constructed and programmed: a pan-tilt person-tracking camera rig using an Intel® RealSense™ camera (R200) and a few inexpensive electronic components. The goal of this project was to devise a mechanism that extends the viewing range of the camera for tracking a person in real time.

The camera rig (Figure 1) consists of two hobby-grade servo motors that are directly coupled using tie wraps and double-sided tape, and a low-cost control board.

Figure 1. DIY pan-tilt camera rig.

The servos are driven by a control board connected to the computer’s USB port. A Windows* C# app running on a PC or laptop controls the camera rig. The app uses the Face Tracking and Person Tracking APIs contained in the Intel® RealSense™ SDK for Windows*.

The software, which you can download using the link on this page, drives the two servo motors in real time to physically move the rig nearly 180° in two axes to center the tracked person in the field of view of the R200 camera. You can see a video of the camera rig in action here: https://youtu.be/v2b8CA7oHPw

Why?

The motivation to build a device like this is twofold: first, it presents an interesting control systems problem wherein the camera that’s used to track a moving person is also moving at the same time. Second, a device like this can be employed in interesting use cases such as:

  • Enhanced surveillance – monitoring areas over a wider range than is possible with a fixed camera.
  • Elderly monitoring – tracking a person from a standing position to lying on the floor.
  • Robotic videography – controlling a pan-tilt system like this for recording presentations, seminars, and similar events using a mounted SLR or video camera.
  • Companion robotics – controlling a mobility platform and making your robot follow you around a room.

Scope (and Disclaimer)

This article is not intended to serve as a step-by-step “how-to” guide, nor is the accompanying source code guaranteed to work with your particular rig if you decide to build something similar. The purpose of this article is to chronicle one approach to building an automated person-tracking camera rig.

From the description and pictures provided in this document, it should be fairly evident how to fasten two servo motors together in a pan-tilt arrangement using tie wraps and double-sided tape. Alternatively, you can use a kit like this to simplify the construction of a pan-tilt rig.

Note: This is not a typical (or recommended) usage of the R200 peripheral camera. If you decide to build your own rig, make certain you securely fasten the camera and limit the speed and range of the servo motors to prevent damaging it. If you are not completely confident in your maker skills, you may want to pass on building something like this.

Software Development Environment

The software developed for this project runs on Windows 10 and was developed with Microsoft Visual Studio* 2015. The code is compatible with the Intel® RealSense™ SDK version 2016 R1.

This software also requires installation of the Pololu USB Software Development Kit, which can be downloaded here. The Pololu SDK contains the drivers, Control Center app, and samples that are useful for controlling servo motors over a computer’s USB port. (Note: this third-party software is not part of the code sample that can be downloaded from this page.)

Computer System Requirements

The basic hardware requirements for running the person-tracking app are:

  • 4th generation (or later) Intel® Core™ processor
  • 150 MB free hard disk space
  • 4GB RAM
  • Intel® RealSense™ camera (R200)
  • Available USB3 port for the R200 camera
  • Additional USB port for the servo controller board

Code Sample

The software developed for this project was written in C#/WPF using Microsoft Visual Studio 2015. The user interface (Figure 2) provides the color camera stream from the R200 camera, along with real-time updates of the face and person tracking parameters.

Figure 2. Custom software user interface.

The software attempts to track the face and torso of a single person using both the Face Tracking and Person Tracking APIs. Face tracking alone is performed by default, as it currently provides more accurate and stable tracking. If the tracked person’s face goes out of view of the camera, the software will resort to tracking the whole person. (Note that the person tracking algorithm is under development and will be improved in future releases of the RSSDK.)

To keep the code sample as simple as possible, it attempts tracking only if a single instance of a face or person is detected. The displacement of a bounding rectangle’s center to the middle of the image plane is used to drive the servos. The movements of the servos will attempt to center the tracked person in the image plane.

Servo Control Algorithm

The first cut at controlling the servos in software was to derive linear equations that effectively scale the servo target positions to the coordinate system shared by the face rectangle and image, as shown in the following code snippet.

Servo.cs

public class Servo
{
    // Servo travel limits (pulse-width units; the safe range was
    // determined experimentally, as described later in this article).
    public const int Up = 1152;
    public const int Down = 2256;
    public const int Left = 752;
    public const int Right = 2256;
    // ...
}

MainWindow.xaml.cs

// Dimensions of the color stream, which define the tracking coordinate system.
private const int ImageWidthMin = 0;
private const int ImageWidthMax = 640;
private const int ImageHeightMin = 0;
private const int ImageHeightMax = 480;
// ...

// Linearly scale the face position in image coordinates to the servo range.
ushort panScaled = Convert.ToUInt16((Servo.Right - Servo.Left) * (faceX -
    ImageWidthMin) / (ImageWidthMax - ImageWidthMin) + Servo.Left);

ushort tiltScaled = Convert.ToUInt16((Servo.Down - Servo.Up) * (faceY -
    ImageHeightMin) / (ImageHeightMax - ImageHeightMin) + Servo.Up);

MoveCamera(panScaled, tiltScaled);

Although this approach came close to accomplishing the goal of centering the tracked person in the image plane, it resulted in oscillations as the servo target position and face rectangle converged. These oscillations could be dampened by reducing the speed of the servos, but that made the camera movements too slow to keep up with the person being tracked. A PID algorithm or similar solution could have been employed to tune out the oscillations, or inverse kinematics could have been used to determine the camera position parameters, but I decided to use a simpler approach instead.

The chosen solution simply compares the center of the face (faceRectangle) or person (personBox) to the center of the image plane in a continuous thread and then increments or decrements the camera position in both x and y axes to find a location that roughly centers the person in the image plane. Deadband regions (Figure 3) are defined in both axes to help ensure the servos stop “hunting” for the center position when the camera is approximately centered on the person.

Figure 3. Incremental tracking method.
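Below is a minimal C# sketch of this incremental step (the deadband widths, step size, and the TrackingStep method itself are illustrative values and names, not the sample’s actual ones; the sign of each increment depends on how the servos are mounted):

private const int DeadbandX = 40; // half-width of the x deadband, in pixels (assumed value)
private const int DeadbandY = 30; // half-width of the y deadband, in pixels (assumed value)
private const int Step = 4;       // servo increment per iteration (assumed value)

private int panTarget = (Servo.Left + Servo.Right) / 2;
private int tiltTarget = (Servo.Up + Servo.Down) / 2;

private void TrackingStep(int faceCenterX, int faceCenterY)
{
    int errorX = faceCenterX - ImageWidthMax / 2;  // positive: face is right of center
    int errorY = faceCenterY - ImageHeightMax / 2; // positive: face is below center

    // Move only when the face is outside the deadband, so the servos
    // stop "hunting" once the person is roughly centered.
    if (errorX > DeadbandX) panTarget += Step;
    else if (errorX < -DeadbandX) panTarget -= Step;

    if (errorY > DeadbandY) tiltTarget += Step;
    else if (errorY < -DeadbandY) tiltTarget -= Step;

    // Clamp to the safe operating range before commanding the servos.
    panTarget = Math.Max(Servo.Left, Math.Min(Servo.Right, panTarget));
    tiltTarget = Math.Max(Servo.Up, Math.Min(Servo.Down, tiltTarget));

    MoveCamera((ushort)panTarget, (ushort)tiltTarget);
}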

Building the Code Sample

The code sample has two dependencies that are not redistributed in the downloadable zip file, but are contained in the Pololu USB Software Development Kit:

  • UsbWrapper.dll (located in pololu-usb-sdk\UsbWrapper_Windows\)
  • Usc.dll (located in pololu-usb-sdk\Maestro\Usc\precompiled_obj\)

These files should be copied to the ServoInterface project folder (C:\PersonTrackingCodeSample\ServoInterface\), and then added as references as shown in Figure 4.

Figure 4. Third-party dependencies referenced in Solution Explorer.

Note that this project uses an explicit path to libpxcclr.cs.dll (the managed RealSense DLL): C:\Program Files (x86)\Intel\RSSDK\bin\win32. This reference will need to be changed if your installation path is different. If you have problems building the code samples, try removing and then re-adding this library reference.

Control Electronics

This project incorporates a Pololu Micro Maestro* 12-channel USB servo controller (Figure 5) to control the two servo motors. This device includes a fairly comprehensive SDK for developing control applications targeting different platforms and programming languages. To see how a similar model of this board was used, refer to the robotic hand control experiment article.

Figure 5. Pololu Micro Maestro* servo controller.

I used Parallax Standard Servo motors in this project; however, similar devices are available that should work equally well for this application. The servos are connected to channels 0 and 1 of the control board as shown in Figure 5.

Servo Controller Settings

I configured the servo controller board settings before starting construction of the camera rig. The Pololu Micro Maestro SDK includes a Control Center app (Figure 6) that allows you to configure firmware-level parameters and save them to flash memory on the control board.

Figure 6. Control Center channel settings.

Typically, you should set the Min and Max settings in Control Center to match the control pulse width of the servos under control. According to the Parallax Standard Servo data sheet, these devices are controlled using “pulse-width modulation, 0.75–2.25 ms high pulse, 20 ms intervals.” The Control Center app specifies units in microseconds, so Min would be set to 750 and Max set to 2250.

However, the construction of this particular device resulted in some hard stops (i.e., positions that cause physical binding of the servo horn and can potentially damage the component). The safe operating range of each servo was determined experimentally, and these values were entered for channels 0 and 1 to help prevent the servos from inadvertently being driven into a binding position.

Summary

This article gives an overview of one approach to building an automated camera rig capable of tracking a person’s movements around a wide area. Beyond presenting an interesting control systems programming challenge, practical applications for a device like this include enhanced surveillance, elderly monitoring, etc. Hopefully, this project will inspire other makers to create interesting things with the Intel RealSense cameras and SDK for Windows.

Watch the Video

To see the pan-tilt camera rig in action, check out the YouTube video here: https://youtu.be/v2b8CA7oHPw

Check Out the Code

Follow the Download link to get the sample code for this project.

About Intel® RealSense™ Technology

To learn more about the Intel RealSense SDK for Windows, go to https://software.intel.com/en-us/intel-realsense-sdk.

About the Author

Bryan Brown is a software applications engineer at Intel Corporation in the Software and Services Group. 

Implementing an OpenStack Security Group Firewall Driver Using OVS Learn Actions

By Rodolfo Alonso Hernández

Motivation

Until now, only one firewall was implemented in OpenStack's Neutron project: an iptables-based firewall. This firewall is a natural fit for people using Linux Bridge for their networking needs. Unfortunately, Linux Bridge is not the only networking option in Neutron, nor is it the most popular. That "award" instead goes to Open vSwitch (OVS), which currently powers an astonishing 46% of all OpenStack public deployments. While iptables may be a perfect complement for deployments using Linux Bridge, it isn't necessarily the best fit for OVS. Not only does meshing iptables with Open vSwitch require a lot of coding and networking "voodoo", but OVS itself already provides its own methods for implementing internal rules (using the OpenFlow protocol) that we should be using, and that is exactly what we did.

The firewall I present here is based entirely on OVS rules, thus creating a pure OVS model that is not dependent on functionality from the underlying platform. It uses the same public API to talk to the Neutron agent as the existing Linux Bridge firewall implementation and should be a straight swap for people already using OVS.

Technical approach

During the OpenStack Vancouver Summit 2015, a new firewall was presented. Although not yet implemented, it makes use of the Linux "conntrack" module, which provides a way to implement a stateful firewall by tracking connection states. It is expected that using conntrack will minimize the need to bring traffic packets up to user space for processing and should therefore yield higher performance.

The firewall we implemented, on the other hand, is based on OpenFlow "learn" actions. These learn actions track the traffic in one direction and set up a new flow to allow the same traffic in the reverse direction. This implementation is fully based on the OpenFlow standard.

When a Security Group rule is added, a "manual" OpenFlow rule is added to the OVS configuration. This new rule allows, for example, ingress TCP traffic on a specific port. When a packet matches this rule, the "manual" rule allows the packet to be delivered to its destination. However, and this is a substantial aspect of the new firewall, matching the rule also triggers its learn action, which installs a new "automatic" rule that sends reverse traffic replies back to the source.
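To make this concrete, here is a rough sketch in ovs-ofctl syntax of such a rule pair: a manual rule in the ingress table (table 2, described below) admitting TCP traffic to port 80, whose learn action installs the reverse flow in the egress table (table 3). This is illustrative and simplified; the actual driver programs equivalent flows through the Neutron agent and also matches on VLAN tags and port numbers:

ovs-ofctl add-flow br-int "table=2,priority=100,tcp,tp_dst=80, \
    actions=learn(table=3,priority=100,idle_timeout=30, \
        dl_type=0x800,nw_proto=6, \
        NXM_OF_IP_SRC[]=NXM_OF_IP_DST[],NXM_OF_IP_DST[]=NXM_OF_IP_SRC[], \
        NXM_OF_TCP_SRC[]=NXM_OF_TCP_DST[],NXM_OF_TCP_DST[]=NXM_OF_TCP_SRC[], \
        output:NXM_OF_IN_PORT[]),normal"

The learned (reverse) flow swaps source and destination addresses and ports, expires after an idle timeout, and sends replies back out the port the original packet arrived on.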

We initially worried that this design could have an adverse effect on performance, due to the fact that using learn actions forces the processing of all packets in user space. However, the benchmark results provided later in this blog show that this design's performance was better than iptables. Even more significantly, this firewall allows the usage of the DPDK features of OVS, yielding performance that is more than four times higher than the performance of non-DPDK OVS without firewall.

How it works, in a nutshell

Traffic flows can be grouped into traffic between two internal virtual machines (east-west traffic) or traffic between an internal machine and an external host (north-south traffic). The Security Group rules only apply to machines controlled by Neutron and included in one or several security groups, which means only the virtual machines inside a compute host will be affected by these rules.

In a compute node, several bridges are created inside OVS to handle the host's traffic. The bridge that the virtual machines are attached to is the "integration bridge". The firewall only manages the rules of this bridge.

The firewall rules applied in the integration bridge begin to process traffic as soon as a packet arrives on this bridge. The following section describes the OVS processing tables and rules applied.

Processing tables, in-depth

The following sections describe the different processing tables, the rules applied, and their priorities (from highest to lowest).

Table 0: The Input Table

The first table, the input table (table=0), is the default table, and all traffic injected into the bridge is processed by it. ARP packets are processed with the highest priority, since each machine inside a VLAN must be able to advertise its address to the other machines in the VLAN.

The rest of the traffic (non-ARP packets) is sent to the selection table.

Table 1: The Selection Table

The selection table checks whether each packet comes from or goes to a virtual machine. The rules added ensure that only packets matching the stored port MAC address, VLAN tag, and port number are allowed to pass. If a packet is a DHCP packet, the source IP must be 0.0.0.0.

Traffic not matching this rule is dropped.

Table 2: The Ingress Table

The ingress table has three kinds of rules:

  • "Learn" input rules are created automatically when an output rule is matched. As stated previously, creation of this output rule also invokes the creation of a reverse input rule.
  • Services input rules are always added by the firewall to allow certain ICMP traffic and DHCP messages.
  • Manual input rules are added by the user in the Security Group.

If the traffic doesn't match any of these rules, it is dropped.

Table 3: The Egress Table

The egress table has three kinds of rules, plus a fall-through for unmatched traffic:

  • "Learn" input rules are created automatically when an output rule is matched. As stated previously, creation of this output rule also invokes the creation of a reverse input rule.
  • Services output rules are always added by the firewall to allow certain ICMP traffic and DHCP messages.
  • Manual output rules are added by the user in the Security Group.

If the traffic doesn't match one of the previous rules, it is sent to the egress external traffic table.

Egress external traffic

This table processes the north-south traffic: traffic that needs to leave the integration bridge reaches this table. A final check is made here: any traffic in this table destined for a virtual machine is dropped, since only external egress traffic should be managed by this table. Traffic not filtered by these rules is sent using the "normal" action, which lets OVS forward the packets with its built-in MAC learning table.

Benchmarks

We conducted the benchmark testing of this new firewall on a server with two Intel® Xeon® processors E5-2680 v2 @ 2.80GHz (40 threads) and 64 GB of RAM.

A single combined controller/compute node was created for this test, with a virtual machine running on the same host. The virtual machine was booted with Ubuntu* Desktop Edition 14.04 LTS (3.16.0-30 kernel), 4 GB of RAM, and 8 cores.

To inject traffic into the host and the virtual machine, an Ixia XG12 chassis with a 10 Gbps port was used. The tables below report throughput in Mbps for 1 to 10,000 users at each packet size (in bytes).

No DPDK, no firewall

Packet size \ Users |        1 |       10 |      100 |    1,000 |   10,000
64                  |    93.14 |    86.66 |    98.58 |    96.41 |   101.38
128                 |   121.08 |   122.29 |   118.03 |   121.78 |   118.17
256                 |   144.93 |   197.10 |   197.08 |   197.09 |   197.08
512                 |   366.59 |   421.05 |   420.78 |   420.88 |   420.83
1024                |   870.21 |   870.43 |   814.53 |   815.06 |   759.62
1280                | 1,122.00 |   984.45 | 1,053.85 | 1,053.03 |   929.00
1518                | 1,125.55 | 1,264.42 | 1,264.04 | 1,195.11 | 1,056.33

No DPDK, iptables

Packet size \ Users |        1 |       10 |      100 |    1,000 |   10,000
64                  |    93.18 |    91.38 |    84.75 |    92.83 |    84.20
128                 |   101.06 |    88.33 |    94.29 |    87.51 |    90.16
256                 |   197.10 |   144.75 |   197.10 |   196.90 |   144.86
512                 |   421.05 |   312.54 |   312.78 |   366.90 |   312.78
1024                |   815.28 |   594.25 |   704.98 |   760.15 |   649.80
1280                |   984.56 |   707.33 |   984.57 |   983.91 |   818.45
1518                | 1,125.80 |   709.09 | 1,125.43 | 1,125.55 |   875.31

No DPDK, learn actions

Packet size \ Users |        1 |       10 |      100 |    1,000 |   10,000
64                  |    92.45 |   101.95 |    76.36 |    93.24 |   100.92
128                 |   131.15 |   122.00 |    99.48 |   108.40 |    91.91
256                 |   196.94 |   197.09 |   197.10 |   197.06 |   197.13
512                 |   366.59 |   421.05 |   421.04 |   421.02 |   366.95
1024                |   759.61 |   870.34 |   815.04 |   870.21 |   704.87
1280                |   983.69 |   986.76 |   929.21 |   984.55 |   929.15
1518                |   931.07 | 1,264.45 |   986.17 | 1,264.32 |   931.54

DPDK, no firewall

Packet size \ Users |        1 |       10 |      100 |    1,000 |   10,000
64                  | 1,641.76 |   923.58 |   952.42 | 1,010.71 |   910.67
128                 | 2,761.31 | 1,575.62 | 1,578.31 | 1,708.01 | 1,543.14
256                 | 5,257.29 | 3,188.03 | 3,186.35 | 3,350.85 | 3,074.94
512                 | 8,946.62 | 5,304.72 | 5,393.62 | 5,616.17 | 5,410.86
1024                | 9,807.88 | 9,699.39 | 9,634.91 | 9,806.72 | 8,810.03
1280                | 9,845.58 | 9,845.57 | 9,845.55 | 9,845.53 | 9,845.50
1518                | 9,869.43 | 9,869.37 | 9,869.38 | 9,869.39 | 9,869.34

DPDK, learn actions

Packet size \ Users |        1 |       10 |      100 |    1,000 |   10,000
64                  |   559.01 |   359.72 |   311.09 |   252.73 |   134.53
128                 | 1,022.93 |   713.27 |   540.54 |   540.28 |   151.89
256                 | 2,028.90 | 1,353.60 | 1,066.66 |   996.52 |   163.30
512                 | 3,742.67 | 2,537.26 | 2,043.94 | 1,898.97 |   169.93
1024                | 6,619.18 | 4,817.83 | 3,829.52 | 3,712.83 |   248.32
1280                | 8,012.43 | 5,858.60 | 4,709.67 | 4,534.04 |   475.08
1518                | 8,946.96 | 6,786.09 | 5,472.00 | 5,296.74 |   551.59

Note: All data presented in these tables is preliminary. Software and workloads used in performance tests may have been optimized for performance only on Intel® microprocessors. Performance tests, such as SYSmark* and MobileMark*, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more information go to http://www.intel.com/performance.

For more complete information about performance and benchmark results, visit www.intel.com/benchmarks.

How to deploy it

To use this firewall in combination with OVS DPDK, you need to download the networking-ovs-dpdk project:

git clone https://github.com/stackforge/networking-ovs-dpdk.git

You can also install this project as a pip package:

pip install networking-ovs-dpdk

To enable the firewall, you can add this section to your devstack/local.conf file:

[securitygroup]
#firewall_driver = neutron.agent.linux.iptables_firewall
firewall_driver = networking_ovs_dpdk.agent.ovs_dpdk_firewall.OVSFirewallDriver

 

Intel® SGX Product Licensing

The Intel® SGX SDK for Windows was recently made available on the Intel Developer Zone site.  The SDK is provided under an evaluation license.  Since the release of the SDK, we’ve received a number of inquiries about getting a production license for Intel® SGX.  While the particulars of the production license agreement are fairly routine, it might be helpful for those who have expressed interest to get a better sense of the context within which production license requests are considered.

Developers should first consider whether a production license is necessary.  Intel® SGX is a CPU-based technology that allows developers to protect select portions of an application.  This protection is based on the use of Intel® SGX enclaves.  With the Intel® SGX SDK for Windows, it is possible to create debug enclaves.  A good description of the range of possibilities offered by debug enclaves is provided in this blog by SGX Program Architect Simon Johnson.  It can be inferred from Simon’s blog that a production license is required when developers plan to ship commercial software that needs to keep enclaved code confidential. 

This brings us back to the topic of considerations that factor into evaluating production license requests.  Since the ability to launch an enclave puts developers in a position of trust on a given platform, Intel assesses the ability of applicants for production licenses to meet critical security requirements underpinning the use of Intel® SGX. 

The three areas below outline some key expectations of production license recipients.  This list is not exhaustive, and there may be additional requirements that must be fulfilled prior to being granted a production license.  At a minimum, potential licensees must have a demonstrated ability to perform:

  1. Secure Software Development:  Licensees must use good development techniques and programming practices, including those highlighted in the Intel® SGX Enclave Writers Guide that accompanies the Intel® SGX SDK.  In addition, licensees must follow secure coding practices to avoid vulnerabilities; agree to notify Intel of, and fix, vulnerabilities within a pre-defined time; re-distribute and keep current the Intel® SGX Platform Software included with their SGX-enhanced application; and undertake not to write malware, spyware, nuisance-ware or fail to deliver on the security promise implied by the use of Intel® SGX enclaves.  Applications that may consume all available enclave memory, impact system stability, or affect user experience as a result of inability to launch their enclave(s) may require significant investigation and discussion.  The ability to uninstall licensee applications, upon user request, must be complete, including the removal of sealed data.
  2. Enclave Signing Key Management:  Developers requesting a Production License must demonstrate the ability to protect their enclave signing key and have a security protocol/program in place which accords with industry best practices for key management.  At a minimum, potential licensees must have information security procedures in place which implement the following requirements:

    Licensees should implement the principle of least privilege (multi-factor authentication for access, blocking unused ports, installing all security updates, running an updated AV scanner, separating networks and credentials used for development systems from other computing systems) for development and key management systems; ensure that code testing minimizes exposure of private keys and signing mechanisms by using an internal test signing Certificate Authority; set up a parallel code signing infrastructure for developers to use that internal CA; store keys in a secure, tamper-proof, cryptographic hardware device such as an HSM; and implement physical security measures (cameras, guards, fingerprint scanners, background checks) to protect against theft (by insiders and infiltrators), compromise, and abuse.  Licensees must agree to notify Intel of any breach, loss or theft of their enclave signing key within a predefined time.

  3. Relying Party Functions:   Licensees will act as a relying party to the Intel Attestation Verification Service.  As a result, licensees will be required to demonstrate their ability to manage, update, and control application servers that deliver Intel® SGX enhanced applications to capable platforms.  These application servers must comply with the requirements (SLAs, rate limiting, usage limits, DDoS prevention, etc.) of the Development and Production versions of the Intel Attestation Verification Service.  Relying party functionality relative to the Intel Attestation Verification Service includes the ability to process Linkable and Anonymous Quotes and to deliver updates of the Intel® SGX Platform Software.

 

With this context in mind, developers who want to ship commercial software that uses Intel® SGX should contact the SGX Program to initiate the process of applying for a production license as soon as they are:

  1. Ready to provide a detailed description of the application and intended SGX use case(s) and prepared to answer detailed follow-up questions.
  2. Able to demonstrate to Intel’s satisfaction that they have business processes and controls in place to meet or exceed the security requirements described above.

Intel will provide a non-disclosure agreement to cover the information above if we do not already have one in place with your company.

Open Source Downloads

This article makes available third-party libraries, executables, and sources that were used in the creation of Intel® Software Development Products or are required for their operation. Intel provides this software pursuant to their applicable licenses.

 

Required for Operation of Intel® Software Development Products

The following products require additional third-party software for operation.

Intel® Parallel Studio XE 2015 Composer Edition for C++ Windows* and
Intel® System Studio 2015 Composer Edition for Windows*:

The following binutils package is required for operation with Intel® Graphics Technology:
Download (zip)
Please see Release Notes of the product for detailed instructions on using the binutils package.

The above binutils package is subject to various licenses. Please see the corresponding sources for more information:
Download (zip)
 

Required for use of offload with Open Source Media Kernel Runtime for Intel® HD Graphics

The following products require additional third-party software for the mentioned operation.

Intel® Parallel Studio XE 2016 Composer Edition for C++ Linux* and
Intel® System Studio 2016 Composer Edition for Linux*:

The following installation guide together with the build and installation script are required:

Download (pdf): Installation Guide for OTC CMRT.pdf

Download (zip): otc_cmrt_build_and_install.zip
This file contains the otc_cmrt_build_and_install.sh script. Please unpack it.

Used within Intel® Software Development Products

The following products contain Intel® Application Debugger, Intel® Debugger for Heterogeneous Compute, Intel® Many Integrated Core Debugger (Intel® MIC Debugger), Intel® JTAG Debugger, and/or Intel® System Debugger tools, which use the third-party libraries listed below.

Products and Versions:

Intel® System Studio 2016*

  • Intel® System Studio 2016 Composer Edition
    (Initial Release and higher)

Intel® Parallel Studio XE 2016 for Linux*

  • Intel® Parallel Studio XE 2016 Composer Edition for C++ Linux*/Intel® Parallel Studio XE 2016 Composer Edition for Fortran Linux*
    (Initial Release and higher)

Intel® Parallel Studio XE 2015 for Linux*

  • Intel® Parallel Studio XE 2015 Composer Edition for C++ Linux*/Intel® Parallel Studio XE 2015 Composer Edition for Fortran Linux*
    (Initial Release and higher)

Intel® Composer XE 2013 SP1 for Linux*

  • Intel® C++ Composer XE 2013 SP1 for Linux*/Intel® Fortran Composer XE 2013 SP1 for Linux*
    (Initial Release and higher; 13.0 Intel® Application Debugger)

Intel® Composer XE 2013 for Linux*

  • Intel® C++ Composer XE 2013 for Linux*/Intel® Fortran Composer XE 2013 for Linux*
    (Initial Release and higher; 13.0 Intel® Application Debugger)

Intel® Composer XE 2011 for Linux*

  • Intel® C++ Composer XE 2011 for Linux*/Intel® Fortran Composer XE 2011 for Linux*
    (Update 6 and higher; 12.1 Intel® Application Debugger)
  • Intel® C++ Composer XE 2011 for Linux*/Intel® Fortran Composer XE 2011 for Linux*
    (Initial Release and up to Update 5; 12.0 Intel® Application Debugger)

Intel® Compiler Suite Professional Edition for Linux*

  • Intel® C++ Compiler for Linux* 11.1/Intel® Fortran Compiler for Linux* 11.1
  • Intel® C++ Compiler for Linux* 11.0/Intel® Fortran Compiler for Linux* 11.0
  • Intel® C++ Compiler for Linux* 10.1/Intel® Fortran Compiler for Linux* 10.1

Intel® Embedded Software Development Tool Suite for Intel® Atom™ Processor:

  • Version 2.3 (Initial Release and up to Update 2)
  • Version 2.2 (Initial Release and up to Update 2)
  • Version 2.1
  • Version 2.0

Intel® Application Software Development Tool Suite for Intel® Atom™ Processor:

  • Version 2.2 (Initial Release and up to Update 2)
  • Version 2.1
  • Version 2.0

Intel® C++ Software Development Tool Suite for Linux* OS supporting Mobile Internet Devices (Intel® MID Tools):

  • Version 1.1
  • Version 1.0

Intel AppUp™ SDK Suite for MeeGo*

  • Initial Release (Version 1.0)

Used third-party libraries:
Please see the attachments for a complete list of third-party libraries.

Note: The packages posted here are unmodified copies from the respective distributor/owner and are made available for ease of access. Download or installation of those is not required to operate any of the Intel® Software Development Products. The packages are provided as is, without warranty or support.

Intel C++ and Fortran Compilers for Windows* - Required Microsoft* development software

This article provides an overview of the Microsoft development software required to use the Intel® C++ and Intel® Visual Fortran Compilers as part of Intel® Parallel Studio XE 2016. For more details, refer to the product Release Notes. Abbreviated information on older versions is below.

Microsoft Development Software Requirements

To use the Microsoft Visual Studio* development environment or command-line tools to build IA-32 or Intel® 64 architecture applications, one of the following is required:

  • Microsoft Visual Studio 2015* Community Edition or higher with C++ component installed
    • NOTE! Microsoft Visual Studio 2015 does not install the C++ component by default - you must select it using the Customize option during installation.
  • Microsoft Visual Studio 2013* Community Edition or higher with C++ component installed
  • Microsoft Visual Studio 2012* Professional Edition or higher with C++ component installed
  • Microsoft Visual Studio 2010* Professional Edition or higher with C++ component installed
    • NOTE! Support for Microsoft Visual Studio 2010 is deprecated and will be removed in the next major release.
  • For Fortran only - Intel® Visual Fortran development environment based on Microsoft Visual Studio 2013 Shell (included with Commercial and Academic licenses of Intel® Parallel Studio XE)
    • NOTE! Intel® Visual Fortran development environment based on Microsoft Visual Studio 2013 Shell is included with Academic and Commercial licenses for Intel® Visual Fortran. It is not included with Evaluation or Student licenses. This development environment provides everything necessary to edit, build and debug Fortran applications. Development of C++ applications is not supported by this environment. Some features available with the full Visual Studio product are not included, such as:
      • Resource Editor (see ResEdit*, a third-party tool, for a substitute)
      • Automated conversion of Compaq* Visual Fortran projects

Microsoft Visual Studio Express Editions do not provide full functionality for Intel® compilers and are not recommended.

Intel® Advisor XE, Intel® Inspector XE, Intel® Trace Analyzer and Collector, and Intel® VTune Amplifier XE, included with some editions of Intel® Parallel Studio XE, are supported from the Fortran-only development environment based on Microsoft Visual Studio Shell.

Previous Versions

The following lists the Microsoft Visual Studio versions supported by previous versions of the Intel® compilers. Also noted is information on the included Fortran development environment, if any.

  • Intel® Parallel Studio XE 2015 Update 4 or later (compiler 15.0.4) VS2010, VS2012, VS2013, VS2015 (includes VS2010 Shell)
  • Intel® Parallel Studio XE 2015 Initial release through update 3 (compiler 15.0) VS2010, VS2012, VS2013 (includes VS2010 Shell)
  • Intel® Composer XE 2013 SP1 Update 1 or later (compiler 14.0.1) - VS2008, VS2010, VS2012, VS2013 (includes VS2010 Shell)
  • Intel® Composer XE 2013 SP1 initial release (compiler 14.0.0) - VS2008, VS2010, VS2012 (includes VS2010 Shell)
  • Intel® Composer XE 2013 (compiler 13.0 and 13.1) - VS2008, VS2010, VS2012 (includes VS2010 Shell)
  • Intel® Composer XE 2011 (compiler 12.0 and 12.1) - VS2005, VS2008, VS2010 (includes VS2008 Shell (12.0) or VS2010 Shell (12.1))
  • Intel® C++ and Fortran Compilers 11.1 - VS2005, VS2008 (includes VS2008 Shell)
  • Intel® C++ and Fortran Compilers 11.0 - VS2003, VS2005, VS2008 (includes VS2005 Premier Partner Edition)
  • Intel® C++ and Fortran Compilers 10.1 - VS2003, VS2005, VS2008 (includes VS2005 Premier Partner Edition)
  • Intel® C++ and Fortran Compilers 10.0 - VS2003, VS2005 (includes VS2005 Premier Partner Edition)
  • Intel® C++ and Fortran Compilers 9.1 - VS2002, VS2003, VS2005 (requires separate Visual Studio)
  • Intel® C++ and Fortran Compilers 9.0 - VS2002, VS2003 (requires separate Visual Studio)
  • Intel® C++ and Fortran Compilers 8.1 - VS2002, VS2003 (requires separate Visual Studio)
  • Intel® C++ and Fortran Compilers 8.0 - VS2002, VS2003 (requires separate Visual Studio)

Chat Heads with Intel® RealSense™ SDK Background Segmentation Boosts e-Sport Experience

Chat Heads is a sample that uses the Intel® RealSense™ SDK to overlay background segmented (BGS) player images on a 3D scene or video playback in a multiplayer setting. The code is written in C++ and uses DirectX*.

In this article, we demonstrate a novel Intel RealSense SDK use case that can improve the e-sport experience of a game by overlaying players’ background segmented video streams on the game. This example will help you understand the various pieces in the implementation (using the Intel RealSense SDK, multiplayer networking, and media encode and decode), their interactions, and the resulting performance.

Figure 1: Screenshot of the sample with two players with a League of Legends* video clip playing in the background.

Installing, Building, and Running the Sample

Download the sample at: https://github.com/GameTechDev/ChatHeads

The sample has many dependencies. It uses RakNet for networking, the Theora Playback Library to play back ogg videos and ImGui for the UI. These are included in the source code.

Windows Media Foundation* (WMF) is a required dependency for encoding and decoding the BGS video streams. The WMF runtime/SDK should be installed by default with a Windows* 8 or greater system. If it is not already installed, install the Windows SDK.

Building and Running the Sample:

Install the Intel® RealSense™ SDK (v5 or higher) prior to building the sample. The header and library include paths use the RSSDK_DIR environment variable, which is set during the SDK installation.

The solution file is at ChatheadsNativePOC\ChatheadsNativePOC and should build successfully with VS2013 and VS2015.

Install the Intel® RealSense™ Depth Camera Manager, which includes the camera driver, before running the sample. The sample has been tested on Windows 8.1 and Windows 10 using both the external and embedded Intel® RealSense™ cameras.

When you start the sample, the option panel shown in Figure 2 displays:

Figure 2: Option panel at startup.

There are four named sections that comprise three actions to take for startup:

  • Scene selection. Select between League of Legends* video, Hearthstone* video and a CPUT (3D) scene. Click the Load Scene button to render the selection. This does not start the Intel RealSense software; that happens in a later step.
  • Resolutions. The Intel RealSense SDK background segmentation modality supports a handful of profiles (color stream resolutions). Setting a new resolution results in a shutdown of the current Intel RealSense SDK session and initializes a new one.
  • Is Server / IP Address. If running as the server, select the Is Server box and then click Start. If running as a client, enter the server's IP address and then click Start. This initializes the network and the Intel RealSense SDK and plays the selected scene. The maximum number of connected machines (server and client(s)) is hardcoded to 4 in code (NetworkLayer.h).
    Note: While a server and client can be started on the same system, they cannot use different profiles (color stream resolutions). Attempting to do so will crash the Intel RealSense SDK runtime since two different profiles can’t run simultaneously on the same camera.

After the network and Intel RealSense SDK initialize successfully, the panels shown in Figure 3 display:

Figure 3: Chat Heads option panels.

The Option panel has multiple sections, each with its own control settings. The sections and their fields are:

  • Chat Head Option Panel
    • Scenes - Select a scene, and then click Load Scene to render it.
    • Resolutions - Select a resolution for the background modality. Click Set Resolution to apply the resolution.
      Note: the client and server cannot use different resolutions when run on the same machine.
  • BGS/Media controls
    • Show BGS Image - If disabled, the color stream is simply used (even though BGS processing still happens). This affects the remote Chat Heads as well (that is, if both sides have the option disabled, you’ll see the background in the video stream).
    • Pause BGS - Pause the BGS modality (CPU work for segmentation doesn't happen).
    • BGS frame skip interval - The frequency at which the BGS algorithm runs. Enter 0 to run every frame, 1 to run once in two frames, and so on. The limit exposed by the RSSDK is 4.
    • Encoding threshold - The encoding threshold is an 8-bit value that determines which pixels are background pixels. See the Implementation section for details.
    • Decoding threshold - The decoding threshold is an 8-bit value that determines which pixels are background pixels. See the Implementation section for details.
  • Size/Pos controls
    • Size - Click/drag within the boxes to resize the sprite. Use it with different resolutions to compare quality.
    • Pos - Click/drag within the boxes to reposition the sprite.
  • Network control/information
    • Network send interval (ms) - Time, in milliseconds, of how often video update data is sent.
    • Sent - Graph of data sent by a client or server.
    • Rcvd - Graph of data received by a client or server. Clients send their updates to the server, which then broadcasts them to the other clients. For reference, streaming 1080p Netflix* video calls for about 5 Mbps (625 KB/s) of bandwidth.
  • Metrics
    • Process metrics
      • CPU Used - The BGS algorithm runs on several Intel® Threading Building Blocks threads and in the context of a game, can use more CPU resources than desired. Play with the Pause BGS and BGS frame skip interval options and change the Chat Head resolution to see how it affects the CPU usage.

Implementation with Intel® RealSense™ Camera

Since the Intel RealSense software updates its color buffer on an interval basis, with AcquireFrame() blocking until all color samples are ready, it is costly to execute it on the application thread. Thus, all calls to the Intel RealSense SDK happen in a separate thread. The blocking nature of AcquireFrame() also means that synchronization primitives between the application and RealSense threads are not necessary. The networking thread takes care of handling incoming messages and is woken up by the app thread on every Update().

Figure 4 shows the post-initialization interaction and data flow between these systems (threads).

Figure 4: Interaction flow between local and remote Chat Heads.
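The RealSense thread itself is a simple acquire-process-release loop. Here is a minimal sketch in C# for illustration (the actual sample is C++; running, ProcessSegmentedImage, and the surrounding class are assumed names, not taken from the sample):

private volatile bool running = true;

private void RealSenseThreadProc(PXCMSenseManager sm)
{
    while (running)
    {
        // AcquireFrame(true) blocks until all requested samples are ready,
        // which is why this loop lives on its own thread.
        if (sm.AcquireFrame(true) < pxcmStatus.PXCM_STATUS_NO_ERROR)
            break; // device lost or shutting down

        PXCMCapture.Sample sample = sm.QuerySample();

        // Copy the segmented color image out for the app thread, then release
        // the frame so the SDK can process the next one.
        ProcessSegmentedImage(sample.color);
        sm.ReleaseFrame();
    }
}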

Color Conversion

The color conversion process prior to encode combines two BGRA pixels into one YUYV pixel. The Intel RealSense camera BGS image uses PXCImage::PixelFormat::PIXEL_FORMAT_RGB32 with alpha set to 0 for background pixels. That format maps directly to the DirectX texture format: DXGI_FORMAT_B8G8R8A8_UNORM_SRGB.

However, YUYV doesn't have alpha, so we use a simple hack of setting Y, U, and V channels to 0 for background pixels. The YUYV bitstream is then encoded using WMF’s H.264 encoder. While decoding, the decoded YUYV values can be non-zero for background pixels because of lossy compression. The specific way to work around the lack of an alpha channel is using encoding and decoding thresholds (exposed in the UI).

If the segmented image alpha is less than the encoding threshold for either of the BGRA pixels, the resulting YUYV pixel is set to 0. If the Y1, U, Y2, and V channels of the decoded pixel are less than the decoding threshold, the resulting BGRA pixels have alpha set to 0. When the decoding threshold is set to 0, you'll notice green highlights around the remote player(s). This is due to the math converting YUYV to BGR (0 YUYV doesn’t correspond to black; it is green).
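A C# sketch of this threshold logic, for illustration (the sample itself is C++, and these helper names are ours, not the sample's):

// A YUYV macropixel covers two adjacent BGRA pixels. On encode, if either
// source pixel is background (alpha below the threshold), zero the macropixel.
static void ApplyEncodingThreshold(ref byte y1, ref byte u, ref byte y2, ref byte v,
                                   byte alpha1, byte alpha2, byte encodingThreshold)
{
    if (alpha1 < encodingThreshold || alpha2 < encodingThreshold)
        y1 = u = y2 = v = 0; // mark the macropixel as background
}

// On decode, lossy H.264 compression leaves background pixels slightly
// non-zero, so a macropixel whose channels all fall below the threshold is
// treated as background and its two BGRA pixels get alpha = 0.
static bool IsBackgroundAfterDecode(byte y1, byte u, byte y2, byte v,
                                    byte decodingThreshold)
{
    return y1 < decodingThreshold && u < decodingThreshold &&
           y2 < decodingThreshold && v < decodingThreshold;
}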

Bandwidth

The amount of data sent depends on the network send interval and local chat head resolution. The bandwidth varies from ~10 KBps (80 kbps) to ~100 KBps (800 kbps) by reducing the send interval from 70 ms to 10 ms for a 320x240 resolution. Increasing the resolution doesn’t linearly increase the amount of encoded data sent, since a good chunk of the image is the background (YUYV set to 0) which results in much better compression.

Performance

The sample uses Intel® Instrumentation and Tracing Technology (Intel® ITT) markers and Intel® VTune Amplifier XE to help measure and analyze performance. To enable them, uncomment

//#define ENABLE_VTUNE_PROFILING // uncomment to enable marker code

In the file

ChatheadsNativePOC\itt\include\VTuneScopedTask.h

and rebuild.

A concurrency view with varying BGS work (taken on an Intel® Core™ i7-4770R processor with 8 logical cores) is shown below.

Conclusion

The Chat Heads usage is targeted at improving the game experience without sacrificing the game’s quality or performance, both as a player seeing friends’ reactions as you play and as a spectator seeing pro players react as they play. This sample lets you tinker with some of the options and judge the resulting experience.

Acknowledgements

A huge thanks to Jeff Laflam for pair-programming the sample. Thanks also to Brian Mackenzie for the WMF based encoder/decoder implementation and Doug McNabb for CPUT clarifications.


The Dark Side of the Internet of Things

Most of us are keenly aware of the potential and promise of the Internet of Things (IoT). It’s easy to visualize a bright future arising from the many advantages of linking cars, shipping containers, office buildings, factories, refrigerators, cooking devices, health monitors, thermostats, and other things to a vast repository in the cloud where intelligence extracted from Big Data can inform our actions and enhance our lives. In the enthusiasm to embrace IoT technology, however, ongoing privacy issues and security threats are sometimes going unnoticed. These issues are gaining more attention, highlighting concerns that should be factored into planning, development projects, and broader IoT implementations.

IoT as a Spy Tool

In recent testimony before the Senate Armed Services Committee, Director of National Intelligence James Clapper raised the issue of threats to global security posed by governments using the IoT as a spy tool.

Clapper said, “Smart devices incorporated into the electric grid, vehicles—including autonomous vehicles—and household appliances are improving efficiency, energy conservation, and convenience. However, security industry analysts have demonstrated that many of these new systems can threaten data privacy, data integrity, or continuity of services. In the future, intelligence services might use the IoT for identification, surveillance, monitoring, location tracking, and targeting for recruitment, or to gain access to networks or user credentials."

So, as architects, developers, and builders of IoT devices and infrastructures, how do we best respond to this implied threat? “Put security first” may be the watchword in IoT development in the months to come.

The Magnitude of the Threat

Introducing IoT sensors and devices into a vast global network—integrated with healthcare equipment, the smart grid, aviation systems, industrial control systems, government institutions, and so on—gives hackers a potential avenue to escalate their efforts beyond stealing money and shutting down websites to impacting vital infrastructures, causing large-scale system failures and massive destruction. According to Robin Duke-Woolley, CEO of Beecham Research, we have not yet seen broad-scale breaches only because the IoT has not yet reached a stage where hackers have the incentive to target it.

Quoted in an IoT University article, Duke-Woolley said, “Security in the Internet of Things is significantly more complex than existing M2M applications or traditional enterprise networks. Data must be protected within the system, in transit or at rest and significant evolution is required in the identification, authentication and authorization of devices and people. We must also recognize that some devices in the field will certainly be compromised or simply fail; so there needs to be an efficient method of secure remote remediation – yet another challenge if the IoT is to live up to expectations.”

 
An IoT Security Threat Map developed by Beecham Research details the pathways for intrusion, application hijacking, authentication vulnerabilities, and identity theft.
 

Getting Ahead on IoT Security

Each major advance in computer technology has stimulated a re-examination of fundamental security provisions, from mainframe to client/server, to mobile devices, to the cloud. And, typically, establishing effective IT security provisions lags a bit behind these advances. The scale and scope of IoT, with the prospect of unprecedented numbers and types of devices suddenly exchanging data in real time through event-driven applications and a mix of protocols, raises the vulnerability threat considerably. Rather than relying on layered security protections—device-by-device—as has been done with smartphones and many mobile devices, enterprises may increasingly turn to consolidating protection at the gateway level. With IoT, we have the opportunity to establish a framework of deployment practices and address security protections early in the development process, rather than after vulnerabilities begin causing havoc.

In an article for InformationWeek, technology reporter Jai Vijayan suggested these measures for confronting IoT security risks:

  1. Bake security into IoT applications from the start: IoT amplifies security vulnerabilities because of its interconnected nature; plan up front to deal with these vulnerabilities.

  2. Identify risks: Identify critical IoT vulnerabilities, including web interface authentication, insufficient security configurability, poor physical controls, and lack of transport encryption security.

  3. Segment networks: Keep IT networks properly segmented to avoid one security issue leading to other network problems.

  4. Implement a layered security system: Deploy multi-layered controls to mitigate threats. Move beyond traditional controls, such as firewalls, intrusion detection systems, and anti-virus tools.

What is your take on IoT security? Feel free to leave a comment below.

To learn more about Intel's IoT technology, go to A Fast, Flexible, And Scalable Path To Commercial IoT Solutions.

 

Footnotes

  1. Kravets, David. 2016. Internet of Things to be used as spy tool by governments: US intel chief. Arstechnica. http://arstechnica.com/tech-policy/2016/02/us-intelligence-chief-says-iot-climate-change-add-to-global-instability/.
  2. Skeldon, Paul. 2016. ‘IoT Threat Map’ reveals extent of security challenges facing the Internet of Things. IoT University. https://www.iotuniversity.com/2016/02/iot-threat-map-reveals-extent-of-security-challenges-facing-the-internet-of-things/.
  3. Vijayan, Jai. 2015. 5 Ways to Prepare for IoT Security Risks. InformationWeek. http://www.darkreading.com/endpoint/5-ways-to-prepare-for-iot-security-risks/d/d-id/1319215.
 

 

 

Peel the onion (optimization techniques)


This paper is a more formal response to an IDZ Forum posting. See: (https://software.intel.com/en-us/forums/intel-moderncode-for-parallel-architectures/topic/590710).

The issue as expressed by the original poster was that the code did not scale well using OpenMP on an 8 core E5-2650 V2 processor with 16 hardware threads. I took some time on the forum to aid the poster by giving him some pointers, but did not take sufficient time to fully optimize the code. This article addresses additional optimizations beyond those laid out in the IDZ forum thread.

It is unclear what the experience level of the original poster is; I am going to assume he has recently graduated from an institution that may have taught parallel programming with an emphasis on scaling. In the outside world, the practicalities are: systems have a limited amount of processing resources (threads), and the emphasis should be on efficiency as well as scaling. The original sample code on that forum posting provides us with the foundation of a learning tool for how to address efficiency, in the greater sense, and scaling, in the lesser sense.

In order to present code for this paper, I took the liberty of reworking the sample code while keeping the overall design and spirit of the original. This means I kept the fundamental algorithm intact, as the example code was taken from an application that may have had additional functionality requiring the given algorithm. The provided code sample used an array of LOGICALs (a mask) for flow control. While the sample could have been written without the logical arrays, it may be an abbreviated excerpt of a larger application in which these mask arrays are required for reasons not obvious here. Therefore the masks were kept.

Upon inspection of the code, and the poster’s first attempt at parallelization, it was determined that the place chosen to create the parallel region (parallel DO) had too short a run. The original code can be sketched like this:

bid = 1 ! { not stated in original posting, but would appear to be inside a DO bid=1,65 }
do k=1,km-1  ! km = 60
    do kk=1,2
        !$OMP PARALLEL PRIVATE(I) DEFAULT(SHARED)
        !$omp do 
        do j=1,ny_block     ! ny_block = 100
            do i=1,nx_block ! nx_block = 81
... {code}
            enddo
        enddo
        !$omp end do
        !$OMP END PARALLEL
    enddo
enddo

For the user's first attempt at parallelization, he placed the parallel do on the do j= loop. While this is the “hottest” loop level, it is not the appropriate level for this problem on this platform.

The number of threads involved was 16. With 16 threads, and the inner two loops performing a combined 8100 iterations, each thread would iterate about 506 times. However, the parallel region would be entered 120 times (60*2). The work performed in the innermost loop, while not insignificant, was also not large. This resulted in the cost of entering the parallel region being a significant portion of the run time. With 16 threads, and an outer loop count of 60 iterations (120 if the loops were fused), a better choice may be to raise the parallel region to the do k loop.

The code was modified to execute the do k loop many times and compute the average time to execute the entire do k loop. As optimization techniques are applied, we can then use the ratios of average times of original code to revised code as a measurement of improvement. While I did not have an 8 core E5-2650 v2 processor available for testing, I do have a 6 core E5-2620 v2 processor available.  The slightly reworked code presented the following results:

OriginalSerialCode
Average time 0.8267E-02
Version1_ParallelAtInnerTwoLoops
Average time 0.1746E-02,  x Serial  4.74

Perfect scaling on a 6 core E5-2620 v2 processor would have been somewhere between 6x and 12x (7x if you assume an additional 15% for HT). A scaling of 4.74x is significantly less than an expected 7x.

The following sections of this paper walk you through four additional optimization techniques.

OriginalSerialCode
Average time 0.8395E-02
ParallelAtInnerTwoLoops
Average time 0.1699E-02,  x Serial  4.94
ParallelAtkmLoop
Average time 0.6905E-03,  x Serial 12.16,  x Prior  2.46
ParallelAtkmLoopDynamic
Average time 0.5509E-03,  x Serial 15.24,  x Prior  1.25
ParallelNestedRank1
Average time 0.3630E-03,  x Serial 23.13,  x Prior  1.52

Note, the ParallelAtInnerTwoLoops report in the second run illustrates a different multiplier factor than the first run. The principal cause for this is fortuitous code placement or lack thereof. The code did not change between runs. The only difference was the addition of the extra code and the insertion of the call statements to run those subroutines. It is important to bear in mind that code placement of tight loops can significantly affect the performance of those loops. Even adding or removing a single statement can significantly affect some code run times.

To facilitate ease of reading of the code changes, the body of the inner three loops was encapsulated into a subroutine. This makes the code easier to study as well as easier to diagnose with a program profiler (VTune). Example from the ParallelAtkmLoop subroutine:

bid = 1
!$OMP PARALLEL DEFAULT(SHARED)
!$omp do 
do k=1,km-1 ! km = 60
    call ParallelAtkmLoop_sub(bid, k)
end do
!$omp end do
!$OMP END PARALLEL
endtime = omp_get_wtime()
...
subroutine ParallelAtkmLoop_sub(bid, k)
     ...
    do kk=1,2
        do j=1,ny_block     ! ny_block = 100
            do i=1,nx_block ! nx_block = 81
...
            enddo
        enddo
    enddo
end subroutine ParallelAtkmLoop_sub               

The first optimization I performed was to make two changes:

1) Move the parallelization up two loop levels to the do k loop level. Thus reducing the number of entries into the parallel region by a factor of 120. And,

2) The application used an array of LOGICALs as a mask for code selection. I reworked the code used to generate the values to reduce unnecessary manipulation of the mask array.

These two changes resulted in an improvement of 2.46x over the initial parallelization attempt. While this improvement is great, is this as good as you can get?

In looking at the code of the inner most loop we find:

  ... {construct masks}
  if ( LMASK1(i,j) ) then
     ... {code}
  endif

  if ( LMASK2(i,j) ) then
     ... {code}
  endif

  if( LMASK3(i,j) ) then
     ... {code}
  endif

This means the filter masks result in an unequal workload per iteration. Under this circumstance, it is often better to use dynamic scheduling. This next optimization is performed in ParallelAtkmLoopDynamic. This is the same code as ParallelAtkmLoop but with schedule(dynamic) added to the !$omp do.
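For reference, the modified work-sharing directive looks like this:

!$OMP PARALLEL DEFAULT(SHARED)
!$omp do schedule(dynamic)
do k=1,km-1 ! km = 60
    call ParallelAtkmLoopDynamic_sub(bid, k)
end do
!$omp end do
!$OMP END PARALLEL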

This simple change added an additional 1.25x. Dynamic scheduling is not your only scheduling option; there are others that might be worth exploring, and most schedule types accept an optional modifier clause (chunk size).

The next level of optimization, which provides an additional 1.52x performance boost, is what one would consider aggressive optimization. The extra 52% does require significant, though not unmanageable, programming effort. The opportunity for this optimization comes from an observation made by looking at the assembly code, which you can view using VTune.

I would like to stress that you do not have to understand the assembly code when you look at it. In general you can assume:

more assembly code == slower performance

From the complexity (volume) of the assembly code you can infer potential missed optimization opportunities by the compiler and, when missed opportunities are detected, use a simple technique to aid the compiler with code optimization.

When looking at the body of main work we find:

subroutine ParallelAtkmLoopDynamic_sub(bid, k)
  use omp_lib
  use mod_globals
  implicit none
!-----------------------------------------------------------------------
!
!     dummy variables
!
!-----------------------------------------------------------------------
  integer :: bid,k

!-----------------------------------------------------------------------
!
!     local variables
!
!-----------------------------------------------------------------------
  real , dimension(nx_block,ny_block,2) :: &
        WORK1, WORK2, WORK3, WORK4   ! work arrays

  real , dimension(nx_block,ny_block) :: &
        WORK2_NEXT, WORK4_NEXT       ! WORK2 or WORK4 at next level

  logical , dimension(nx_block,ny_block) :: &
        LMASK1, LMASK2, LMASK3       ! flags
   
  integer  :: kk, j, i    ! loop indices
   
!-----------------------------------------------------------------------
!
!     code
!
!-----------------------------------------------------------------------
  do kk=1,2
    do j=1,ny_block
      do i=1,nx_block
        if(TLT%K_LEVEL(i,j,bid) == k) then
          if(TLT%K_LEVEL(i,j,bid) < KMT(i,j,bid)) then
            LMASK1(i,j) = TLT%ZTW(i,j,bid) == 1
            LMASK2(i,j) = TLT%ZTW(i,j,bid) == 2
            if(LMASK2(i,j)) then
              LMASK3(i,j) = TLT%K_LEVEL(i,j,bid) + 1 < KMT(i,j,bid)
            else
              LMASK3(i,j) = .false.
            endif
          else
            LMASK1(i,j) = .false.
            LMASK2(i,j) = .false.
            LMASK3(i,j) = .false.
          endif
        else
          LMASK1(i,j) = .false.
          LMASK2(i,j) = .false.
          LMASK3(i,j) = .false.
        endif
        if ( LMASK1(i,j) ) then
          WORK1(i,j,kk) =  KAPPA_THIC(i,j,kbt,k,bid)  &
            * SLX(i,j,kk,kbt,k,bid) * dz(k)
                           
          WORK2(i,j,kk) = c2 * dzwr(k) * ( WORK1(i,j,kk)            &
            - KAPPA_THIC(i,j,ktp,k+1,bid) * SLX(i,j,kk,ktp,k+1,bid) &
            * dz(k+1) )

          WORK2_NEXT(i,j) = c2 * ( &
            KAPPA_THIC(i,j,ktp,k+1,bid) * SLX(i,j,kk,ktp,k+1,bid) - &
            KAPPA_THIC(i,j,kbt,k+1,bid) * SLX(i,j,kk,kbt,k+1,bid) )

          WORK3(i,j,kk) =  KAPPA_THIC(i,j,kbt,k,bid)  &
            * SLY(i,j,kk,kbt,k,bid) * dz(k)

          WORK4(i,j,kk) = c2 * dzwr(k) * ( WORK3(i,j,kk)            &
            - KAPPA_THIC(i,j,ktp,k+1,bid) * SLY(i,j,kk,ktp,k+1,bid) &
            * dz(k+1) )

          WORK4_NEXT(i,j) = c2 * ( &
            KAPPA_THIC(i,j,ktp,k+1,bid) * SLY(i,j,kk,ktp,k+1,bid) - &
              KAPPA_THIC(i,j,kbt,k+1,bid) * SLY(i,j,kk,kbt,k+1,bid) )

          if( abs( WORK2_NEXT(i,j) ) < abs( WORK2(i,j,kk) ) ) then
            WORK2(i,j,kk) = WORK2_NEXT(i,j)
          endif

          if ( abs( WORK4_NEXT(i,j) ) < abs( WORK4(i,j,kk ) ) ) then
            WORK4(i,j,kk) = WORK4_NEXT(i,j)
          endif
        endif

        if ( LMASK2(i,j) ) then
          WORK1(i,j,kk) =  KAPPA_THIC(i,j,ktp,k+1,bid)     &
            * SLX(i,j,kk,ktp,k+1,bid)

          WORK2(i,j,kk) =  c2 * ( WORK1(i,j,kk)                 &
            - ( KAPPA_THIC(i,j,kbt,k+1,bid)        &
            * SLX(i,j,kk,kbt,k+1,bid) ) )

          WORK1(i,j,kk) = WORK1(i,j,kk) * dz(k+1)

          WORK3(i,j,kk) =  KAPPA_THIC(i,j,ktp,k+1,bid)     &
            * SLY(i,j,kk,ktp,k+1,bid)

          WORK4(i,j,kk) =  c2 * ( WORK3(i,j,kk)                 &
            - ( KAPPA_THIC(i,j,kbt,k+1,bid)        &
            * SLY(i,j,kk,kbt,k+1,bid) ) )

          WORK3(i,j,kk) = WORK3(i,j,kk) * dz(k+1)
        endif
 
        if( LMASK3(i,j) ) then
          if (k.lt.km-1) then ! added to avoid out of bounds access
            WORK2_NEXT(i,j) = c2 * dzwr(k+1) * ( &
              KAPPA_THIC(i,j,kbt,k+1,bid) * SLX(i,j,kk,kbt,k+1,bid) * dz(k+1) - &
              KAPPA_THIC(i,j,ktp,k+2,bid) * SLX(i,j,kk,ktp,k+2,bid) * dz(k+2))

            WORK4_NEXT(i,j) = c2 * dzwr(k+1) * ( &
              KAPPA_THIC(i,j,kbt,k+1,bid) * SLY(i,j,kk,kbt,k+1,bid) * dz(k+1) - &
              KAPPA_THIC(i,j,ktp,k+2,bid) * SLY(i,j,kk,ktp,k+2,bid) * dz(k+2))
          end if
          if( abs( WORK2_NEXT(i,j) ) < abs( WORK2(i,j,kk) ) ) &
            WORK2(i,j,kk) = WORK2_NEXT(i,j)
          if( abs(WORK4_NEXT(i,j)) < abs(WORK4(i,j,kk)) ) &
            WORK4(i,j,kk) = WORK4_NEXT(i,j)
          endif  
        enddo
      enddo
  enddo
end subroutine ParallelAtkmLoopDynamic_sub

Making an Intel VTune Amplifier run and looking at source line 540 as an example:

We have part of a statement that performs the product of two numbers. For this partial statement you would expect:

                Load value at some index of SLX
                Multiply by value at some index of dz

Clicking on the Assembly button in Amplifier, sorting by source line number, and locating source line 540, we find a total of 46 assembler instructions used to multiply two numbers.

Now comes the inference part.

The two numbers are cells of two arrays. The array SLX has six subscripts; the other has one. You can also observe that the last two assembly instructions are vmovss from memory and vmulss from memory, which is essentially what we expected fully optimized code to produce. The listing shows that 44 of the 46 assembly instructions are associated with computing the array indexes of these two variables. Granted, we might expect a few instructions to obtain the indexes into the arrays, but not 44. Can we do something to reduce this complexity?

In looking at the source code (most recent above) you will note that the last four subscripts of SLX, and the one subscript of dz, are loop invariant for the innermost two loops. In the case of SLX, the two leftmost indices, the innermost two loop control variables, represent a contiguous array section. The compiler failed to recognize the unchanging (rightmost) array indices as candidates for loop-invariant code that can be lifted out of the loop. Additionally, the compiler failed to identify the two leftmost indexes as a candidate for collapse into a single index.

This is a good example of what future compiler optimization efforts could address under these circumstances. In this case, the next optimization, which performs a lifting of loop invariant subscripting, illustrates a 1.52x performance boost.

Now that we know that a goodly portion of the “do work” code involves contiguous array sections with several subscripts, can we somehow reduce the number of subscripts without rewriting the application?

The answer to this is yes, if we encapsulate smaller array slices represented by fewer array subscripts. How do we do this for this example code?

The choice made was for two nest levels:

  1. at the outer most bid level (the module data indicates the actual code uses 65 bid values)

  2. at the next to outer most level, the do k loop level. In addition to this, we consolidate the first two indexes into one.

The outermost level passes bid level array sections:

        bid = 1 ! in real application bid may iterate
        ! peel off the bid
        call ParallelNestedRank1_bid( &
            TLT%K_LEVEL(:,:,bid), &
            KMT(:,:,bid), &
            TLT%ZTW(:,:,bid), &
            KAPPA_THIC(:,:,:,:,bid),  &
            SLX(:,:,:,:,:,bid), &
            SLY(:,:,:,:,:,bid))
…
subroutine ParallelNestedRank1_bid(K_LEVEL_bid, KMT_bid, ZTW_bid, KAPPA_THIC_bid, SLX_bid, SLY_bid)
    use omp_lib
    use mod_globals
    implicit none
    integer, dimension(nx_block , ny_block) :: K_LEVEL_bid, KMT_bid, ZTW_bid
    real, dimension(nx_block,ny_block,2,km) :: KAPPA_THIC_bid
    real, dimension(nx_block,ny_block,2,2,km) :: SLX_bid, SLY_bid
…

Note, for non-pointer (allocatable or fixed-dimension) arrays, the data is contiguous. This provides you with the opportunity to peel off the rightmost indexes and pass on a contiguous array section, merely computing the offset to the subsection of the larger array. Peeling indexes other than the rightmost would require creating a temporary array and should generally be avoided, though there may be some cases where it is beneficial to do so.

And the second nested level peeled off an additional array index of the do k loop, as well as compressed the first two indexes into one:

    !$OMP PARALLEL DEFAULT(SHARED)
    !$omp do 
    do k=1,km-1
        call ParallelNestedRank1_bid_k( &
            K_LEVEL_bid, KMT_bid, ZTW_bid, &
            KAPPA_THIC_bid(:,:,:,k), &
            KAPPA_THIC_bid(:,:,:,k+1),  KAPPA_THIC_bid(:,:,:,k+2),&
            SLX_bid(:,:,:,:,k), SLY_bid(:,:,:,:,k), &
            SLX_bid(:,:,:,:,k+1), SLY_bid(:,:,:,:,k+1), &
            SLX_bid(:,:,:,:,k+2), SLY_bid(:,:,:,:,k+2), &
            dz(k),dz(k+1),dz(k+2),dzwr(k),dzwr(k+1))
    end do
    !$omp end do
    !$OMP END PARALLEL
end subroutine ParallelNestedRank1_bid   

subroutine ParallelNestedRank1_bid_k( &
    k, K_LEVEL_bid, KMT_bid, ZTW_bid, &
    KAPPA_THIC_bid_k, KAPPA_THIC_bid_kp1, KAPPA_THIC_bid_kp2, &
    SLX_bid_k, SLY_bid_k, &
    SLX_bid_kp1, SLY_bid_kp1, &
    SLX_bid_kp2, SLY_bid_kp2, &
    dz_k,dz_kp1,dz_kp2,dzwr_k,dzwr_kp1)
    use mod_globals
    implicit none
    !-----------------------------------------------------------------------
    !
    !     dummy variables
    !
    !-----------------------------------------------------------------------
    integer :: k
    integer, dimension(nx_block*ny_block) :: K_LEVEL_bid, KMT_bid, ZTW_bid
    real, dimension(nx_block*ny_block,2) :: KAPPA_THIC_bid_k, KAPPA_THIC_bid_kp1
    real, dimension(nx_block*ny_block,2) :: KAPPA_THIC_bid_kp2
    real, dimension(nx_block*ny_block,2,2) :: SLX_bid_k, SLY_bid_k
    real, dimension(nx_block*ny_block,2,2) :: SLX_bid_kp1, SLY_bid_kp1
    real, dimension(nx_block*ny_block,2,2) :: SLX_bid_kp2, SLY_bid_kp2
    real :: dz_k,dz_kp1,dz_kp2,dzwr_k,dzwr_kp1
... ! next note index (i,j) compression to (ij)
    do kk=1,2
        do ij=1,ny_block*nx_block
            if ( LMASK1(ij) ) then

Note that at the point of the call, a contiguous array section (reference) is passed. The dummy arguments of the called routine specify a same sized contiguous chunk of memory with a different number of indexes.  As long as you are careful in Fortran, you can do this.

The coding effort was mostly a copy and paste, then a find and replace operation. Other than this, there were no changes to code flow. A meticulous junior programmer could have done this with proper instructions.

While future versions of compiler optimization may make this unnecessary, a little bit of “unnecessary” programming effort now can, at times, yield substantial performance gains (52% in this case).

With the peeled indexes, the equivalent source statement now compiles down to 6 assembly instructions instead of 46, a 7.66x reduction. This illustrates that by reducing the number of array subscripts, the compiler can reduce the instruction count.

Introducing a two-level nest with index peeling yielded a 1.52x performance boost. Whether a 52% boost in performance is worth the additional effort is a subjective measure for you to decide. I anticipate that future compiler optimizations will perform loop-invariant array subscript lifting as performed manually above, but until then you can use the index peel and compress technique.

I hope that I have provided you with some useful tips.

Jim Dempsey
Quickthread Programming, LLC
A software consulting company.

The Past, Present, and Future of IoT


It wasn’t too many years ago that the Internet of Things (IoT) was considered a pie-in-the-sky daydream, devised by starry-eyed technologists looking for the next big thing to spur development projects. Now that the tools, equipment, and infrastructures for enabling IoT have become real, it’s a good time to take a step back and look at how IoT has evolved and how future prospects are shaping up. In 2008, Time magazine listed IoT as one of the best inventions of the year and cited the formation of the IP for Smart Objects Alliance by Cisco and Sun as a defining milestone.1 In 2014, Gartner marked the IoT as at the pinnacle of the Gartner Hype Cycle for emerging technologies and predicted the plateau would be reached—where practical implementations would reach the mainstream—in 5 to 10 years.2 At the CES technology show in Las Vegas in early January 2016, the Amazon Echo was widely considered a breakaway hit as dozens of companies announced plans to integrate with it to provide home and automotive services.3 Great strides are being made in a number of industry sectors, including building automation, transportation, healthcare, and energy. According to IoT Analytics, the top four applications are: Smart Home, Wearables, Smart City, and Smart Grid.

 

Future Prospects for IoT

Given the current momentum across the IoT ecosystem, what can we expect to see in the near future?

Services are quickly arising around IoT, so individual devices are no longer the primary focus. Enough IoT-capable devices are on the market that a distinct shift toward connecting everything together is becoming evident. Interoperability is the keyword, and in an environment where 85% of devices are not designed for connecting to the Internet or sharing data, that challenge is on par with another sizable one: capturing, analyzing, and harnessing the massive volumes of data generated by IoT implementations.4

The connected car is quickly becoming commonplace and is also being combined with services, such as those offered as part of insurance telematics, which can lower your insurance rates and provide other benefits in exchange for giving insurers access to your driving behavior and data about the operation of your vehicle. Operational data can translate to useful driver benefits, such as reminders about when maintenance services are due, feedback on driving practices that can improve safety and economy, remote diagnostics, accident reconstruction, young driver coaching, and similar services.

With the help of IoT technologies, practical building automation solutions are moving beyond the province of large-scale enterprises and are now within reach of small- to mid-sized businesses. Heating, ventilation, and air conditioning can now be more efficiently monitored and controlled using sensors, intelligent thermostats, actuators, and control systems connected—in many cases—through a single control portal. Energy consumption throughout a building can be analyzed and lowered through pattern recognition and trend analysis. Security management can be performed effectively through a combination of real-time remote monitoring, motion sensors, and automated alarm configuration. Many of these systems can be monitored or controlled using smartwatches, smartphones, and tablets, as well as conventional PCs.

As more and more personal data gets captured and circulated through global networks, security issues are finally coming to the forefront. For example, do you really want the detailed health information captured by your fitness tracker openly exchanged and available to any hacker with a modicum of skill? Expect the emphasis on security to rise to a higher level as the implications of having home data, vehicle data, and your current geolocation all accessible and linkable become clear. IoT-capable home security monitors may help offset some of the potential exposure risks, but having this much personal information in play without requisite protections securing the information suggests serious vulnerabilities and an opportunity for exploitation.

 

Seeking Common Ground

As the vision of IoT coalesces and pioneering devices and services enter the market, a common language, technical standards expressed in that language, and a defining architectural framework to implement them become essential.

In writing to the IEEE membership in an article, Defining the Internet of Things, Roberto Minerva said:

Imagine a global network that provides efficiencies, improved productivity, forecasting and future innovation in our everyday lives, as well as countless industry verticals! The idea is powerful and it underscores the need to ensure that stakeholders are talking about the same thing as they go about developing the technologies and applications that will enable it.5

To this end, IEEE has launched the IEEE Internet of Things Initiative to provide a common platform for future developments.  

Intel is also helping resolve this challenge by offering the Intel® IoT Platform, an open, scalable platform that serves as a reference model for connecting devices and exchanging data securely in the cloud. And, while Intel is working with partners to develop an interoperable hardware foundation for IoT infrastructure building, the Open Connectivity Foundation (OCF)—launched in February 2016—is working to consolidate industry efforts around a common IoT interoperability specification and to establish certification processes. The momentum building around IoT unification received an earlier boost from IoTivity, an open source project led by Intel and the Linux Foundation and originally sponsored by the Open Interconnect Consortium (OIC). The OIC Specification 1.0 is now available under the province of OCF.

There’s plenty of work left to be done, but the convergence around IoT opportunities has brought a diverse group of industries together to build solutions and engineer the devices, software, and infrastructures that could dramatically enhance our lives. 

To learn more about IoT visit the Intel IoT Developer Zone

Feel free to add any additional comments below.

 

Footnotes


  1. 2008. Best Inventions of 2008. Time Magazine. http://content.time.com/time/specials/packages/article/0,28804,1852747_1854195_1854158,00.html
  2. Butler, Brandon. Gartner: Internet of Things has reached hype peak. Network World.  http://www.networkworld.com/article/2464007/cloud-computing/gartner-internet-of-things-has-reached-hype-peak.html
  3. Higginbotham, Stacey. 2016. The 6 Things CES Taught Us About The Internet of Things. Fortune. http://fortune.com/2016/01/11/ces-internet-of-things/
  4. 2014. Intel® IoT Platform Reference Model and Products Solution Brief. Intel. http://www.intel.com/content/www/us/en/internet-of-things/iot-platform-solution-brief.html
  5. Minerva, Roberto. 2015. Defining the Internet of Things: A Work in Progress. ECN Magazine. http://www.ecnmag.com/blog/2015/11/defining-internet-things-work-progress

Using Open vSwitch* with DPDK on Ubuntu*


Overview

In our previous article, Using Open vSwitch* with DPDK for Inter-VM NFV Applications, we discussed how to manually build and configure Open vSwitch with the Data Plane Development Kit (OVS-DPDK, from here on) to take advantage of the accelerated user-space datapath for network traffic.

It is not always convenient to manually build libraries and deploy them. Some distributions like Ubuntu have already released standard system packages for DPDK-enhanced Open vSwitch (openvswitch-switch-dpdk). In this article we discuss how to install, configure, and use this package for enhanced network throughput and performance.

Ubuntu has had OVS-DPDK support since Ubuntu 15.10 (Wily Werewolf), and even earlier via custom repos, but we found the vhost-user feature works best in the new release of the package, which you can find at https://launchpad.net/ubuntu/+source/openvswitch-dpdk.

The latest package release is available with Ubuntu 16.04 (Xenial Xerus), and also via PPAs for testing purposes at https://launchpad.net/~paelzer/+archive/ubuntu/dpdk-merge-2.2

With the new release of this package, OVS-DPDK has been updated to use the latest releases of both the DPDK (v2.2) and Open vSwitch (v2.5) projects. We took it for a test drive and were impressed with how seamless and easy it is to use OVS-DPDK on Ubuntu.

We will repeat the same inter-virtual machine (VM) test case that we used in our previous article: We configure OVS-DPDK with two vhost-user ports and allocate them to two VMs. We then run a simple iperf3 test-case. The following diagram captures the setup.

[Diagram: OVS-DPDK inter-VM test setup with two vhost-user ports]

In this article we will use Ubuntu 16.04 64 bit server version on the host for demonstration purposes.

Installing OVS-DPDK using Ubuntu packages

The following steps install the OVS-DPDK using default Ubuntu packages and update the default ovs-vswitchd to use the ovs-vswitchd-dpdk package.

sudo apt-get install openvswitch-switch-dpdk
sudo update-alternatives --set ovs-vswitchd /usr/lib/openvswitch-switch-dpdk/ovs-vswitchd-dpdk

Configuring OVS-DPDK on Ubuntu

The OVS-DPDK package (primarily the “ovs-vswitchd-dpdk”) relies on the following config files:

  • /etc/default/openvswitch-switch – Passes in DPDK command-line options to ovs-vswitchd
  • /etc/dpdk/dpdk.conf – Configures hugepages
  • /etc/dpdk/interfaces – Configures/assigns NICs for DPDK use

For our sample test case, the following config options are used:

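As a sketch, and assuming the values described below, the relevant entries would look something like this (the key names follow the Ubuntu package's file formats, but verify against the comments in your installed files):

# /etc/dpdk/dpdk.conf -- reserve 64 x 2 MB hugepages at boot
NR_2M_PAGES=64

# /etc/default/openvswitch-switch -- DPDK options passed to ovs-vswitchd:
# run on cores 0 and 1 (coremask 0x3) with 4 memory channels
DPDK_OPTS='--dpdk -c 0x3 -n 4'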

The above options configure 64 2-MB hugepages for OVS-DPDK use and pass in DPDK command-line options of 0x3 as coremask (cores to run OVS-DPDK on), and 4 memory channels (default).

For details on why hugepages are required and how they improve performance, please see http://dpdk.org/doc/guides/linux_gsg/sys_reqs.html#use-of-hugepages-in-the-linux-environment.

For detailed DPDK command-line options and their descriptions, please refer to following documentation: http://dpdk.org/doc/guides/testpmd_app_ug/run_app.html

The third config file (/etc/dpdk/interfaces) is not needed for this inter-VM use case. A follow-up article will cover this config file along with a different use case.

Reboot, or restart the ovs-vswitchd process, and ensure that ovs-vswitchd is invoked with the requested DPDK options.

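For example (assuming the systemd service name used by Ubuntu 16.04):

sudo systemctl restart openvswitch-switch
ps -ef | grep ovs-vswitchd

The ovs-vswitchd command line in the process listing should include the --dpdk options set above.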

For our sample test case, we create a bridge and add two DPDK vhost-user ports:

sudo ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev
sudo ovs-vsctl add-port br0 vhost-user1 -- set Interface vhost-user1 type=dpdkvhostuser
sudo ovs-vsctl add-port br0 vhost-user2 -- set Interface vhost-user2 type=dpdkvhostuser

Ensure the bridge and vhost-user ports have been properly set up and configured:

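One way to verify this (the original article shows the output as a screenshot):

sudo ovs-vsctl show

The bridge br0 should be listed with ports vhost-user1 and vhost-user2 of type dpdkvhostuser.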

Using DPDK vhost-user ports with VMs

Creating VMs is outside the scope of this document. Once you have two VMs created (for example, f21vm1.qcow2 and f21vm2.qcow2), the following commands show how to use the DPDK vhost-user ports we created earlier.

Ensure the qemu version on the system is v2.2.0 or above as discussed under “DPDK vhost-user Prerequisites“ in https://github.com/openvswitch/ovs/blob/master/INSTALL.DPDK.md

sudo sh -c "echo 1200 > /proc/sys/vm/nr_hugepages"
sudo qemu-system-x86_64 -m 512 -smp 4 -cpu host -hda /home/user/f21vm1c1.qcow2 -boot c -enable-kvm -no-reboot -nographic -net none \
-chardev socket,id=char1,path=/run/openvswitch/vhost-user1 \
-netdev type=vhost-user,id=mynet1,chardev=char1,vhostforce \
-device virtio-net-pci,mac=00:00:00:00:00:01,netdev=mynet1 \
-object memory-backend-file,id=mem,size=512M,mem-path=/dev/hugepages,share=on -numa node,memdev=mem -mem-prealloc

sudo qemu-system-x86_64 -m 512 -smp 4 -cpu host -hda /home/user/f21vm1c2.qcow2 -boot c -enable-kvm -no-reboot -nographic -net none \
-chardev socket,id=char1,path=/run/openvswitch/vhost-user2 \
-netdev type=vhost-user,id=mynet1,chardev=char1,vhostforce \
-device virtio-net-pci,mac=00:00:00:00:00:02,netdev=mynet1 \
-object memory-backend-file,id=mem,size=512M,mem-path=/dev/hugepages,share=on -numa node,memdev=mem -mem-prealloc

DPDK vhost-user inter-VM test case with iperf3

In the previous step, we configured two VMs, each with a virtio NIC connected to the OVS-DPDK bridge. After the VMs are powered up, check each VM to ensure its NIC is properly initialized.


Configure the NIC IP addresses on both VMs to be on the same subnet. Install iperf3 from http://software.es.net/iperf and then run a simple network test case.

On one VM, start iperf3 in server mode; on the other, run the iperf3 client against the server's address. The network throughput and performance vary depending on your system's hardware capabilities and configuration.
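For example, where server_ip is the address you assigned to the server VM's NIC:

iperf3 -s            # on the first VM (server)
iperf3 -c server_ip  # on the second VM (client)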

Summary

Ubuntu has standard packages available for using OVS-DPDK. In this article we discussed how to install, configure, and use this package for enhanced network throughput and performance. We also covered how to configure a simple OVS-DPDK bridge with DPDK vhost-user ports for an inter-VM application use case.

To read more about DPDK with Open vSwitch, we suggest the following resources:

https://github.com/openvswitch/ovs/blob/master/INSTALL.DPDK.md

http://dpdk.org/doc/guides/index.html

Have a question? The SDN/NFV Forum is the perfect place to ask.

https://software.intel.com/en-us/forums/networking

Intel® XDK FAQs - Crosswalk


How do I play audio with different playback rates?

Here is a code snippet that allows you to specify playback rate:

var myAudio = new Audio('/path/to/audio.mp3');
myAudio.playbackRate = 1.5; // e.g., 1.5x speed; can be set before or after play()
myAudio.play();

Why are Intel XDK Android Crosswalk build files so large?

When your app is built with Crosswalk it will be a minimum of 15-18MB in size because it includes a complete web browser (the Crosswalk runtime or webview) for rendering your app instead of the built-in webview on the device. Despite the additional size, this is the preferred solution for Android, because the built-in webviews on the majority of Android devices are inconsistent and poorly performing.

See these articles for more information:

Why is the size of my installed app much larger than the apk for a Crosswalk application?

This is because the apk is a compressed image, so when installed it occupies more space due to being decompressed. Also, when your Crosswalk app starts running on your device it will create some data files for caching purposes which will increase the installed size of the application.

Why does my Android Crosswalk build fail with the com.google.playservices plugin?

The Intel XDK Crosswalk build system used with CLI 4.1.2 Crosswalk builds does not support the library project format that was introduced in the "com.google.playservices@21.0.0" plugin. Use "com.google.playservices@19.0.0" instead.

Why does my app fail to run on some devices?

There are some Android devices in which the GPU hardware/software subsystem does not work properly. This is typically due to poor design or improper validation by the manufacturer of that Android device. Your problem Android device probably falls under this category.

How do I stop "pull to refresh" from resetting and restarting my Crosswalk app?

See the code posted in this forum thread for a solution: /en-us/forums/topic/557191#comment-1827376.

An alternate solution is to add the following lines to your intelxdk.config.additions.xml file:

<!-- disable reset on vertical swipe down -->
<intelxdk:crosswalk xwalk-command-line="--disable-pull-to-refresh-effect" />

Which versions of Crosswalk are supported and why do you not support version X, Y or Z?

The specific versions of Crosswalk that are offered via the Intel XDK are based on what the Crosswalk project releases and the timing of those releases relative to Intel XDK build system updates. This is one of the reasons you do not see every version of Crosswalk supported by our Android-Crosswalk build system.

With the September, 2015 release of the Intel XDK, the method used to build embedded Android-Crosswalk versions changed to the "pluggable" webview Cordova build system. This new build system was implemented with the help of the Cordova project and became available with their release of the Android Cordova 4.0 framework (coincident with their Cordova CLI 5 release). With this change to the Android Cordova framework and the Cordova CLI build system, we can now more quickly adapt to new version releases of the Crosswalk project. Support for previous Crosswalk releases required updating a special build system that was forked from the Cordova Android project. This new "pluggable" webview build system means that the build system can now use the standard Cordova build system, because it now includes the Crosswalk library as a "pluggable" component.

The "old" method of building Android-Crosswalk APKs relied on a "forked" version of the Cordova Android framework, and is based on the Cordova Android 3.6.3 framework and is used when you select CLI 4.1.2 in the Project tab's build settings page. Only Crosswalk versions 7, 10, 11, 12 and 14 are supported by the Intel XDK when using this build setting.

Selecting CLI 5.1.1 in the build settings will generate a "pluggable" webview built app. A "pluggable" webview app (built with CLI 5.1.1) results in an app built with the Cordova Android 4.1.0 framework. As of the latest update to this FAQ, the CLI 5.1.1 build system supported Crosswalk 15. Future releases of the Intel XDK and the build system will support higher versions of Crosswalk and the Cordova Android framework.

In both cases, above, the net result (when performing an "embedded" build) will be two processor architecture-specific APKs: one for use on an x86 device and one for use on an ARM device. The version codes of those APKs are modified to ensure that both can be uploaded to the Android store under the same app name, ensuring that the appropriate APK is automatically delivered to the matching device (i.e., the x86 APK is delivered to Intel-based Android devices and the ARM APK is delivered to ARM-based Android devices).

For more information regarding Crosswalk and the Intel XDK, please review these documents:

How do I prevent my Crosswalk app from auto-completing passwords?

Use the Ionic Keyboard plugin and set the spellcheck attribute to false.

How can I improve the performance of my Crosswalk app so it is as fast as Crosswalk 7 was?

Beginning with the Intel XDK CLI 5.1.1 build system you must add the --ignore-gpu-blacklist option to your intelxdk.config.additions.xml file if you want the additional performance this option provides to blacklisted devices. See this forum post for additional details.
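Following the same intelxdk.config.additions.xml pattern shown earlier for the pull-to-refresh option, the entry would look something like this (a sketch; confirm the exact form against the forum post linked above):

<intelxdk:crosswalk xwalk-command-line="--ignore-gpu-blacklist" />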

Also, you can experiment with the CrosswalkAnimatable option in your intelxdk.config.additions.xml file (details regarding the CrosswalkAnimatable option are available in this Crosswalk Project wiki post: Android SurfaceView vs TextureView).

<!-- Controls configuration of Crosswalk-Android "SurfaceView" or "TextureView" -->
<!-- Default is SurfaceView if >= CW15 and TextureView if <= CW14 -->
<!-- Option can only be used with Intel XDK CLI5+ build systems -->
<!-- SurfaceView is preferred, TextureView should only be used in special cases -->
<!-- Enable Crosswalk-Android TextureView by setting this option to true -->
<preference name="CrosswalkAnimatable" value="false" />

See Chromium Command-Line Options for Crosswalk Builds with the Intel XDK for some additional tools that can be used to modify the Crosswalk's webview runtime parameters, especially the --ignore-gpu-blacklist option.

Why does the Google store refuse to publish my Crosswalk app?

There is a change to the version code handling by the Crosswalk and Android build systems based on Cordova CLI 5.0 and later. This change was implemented by the Apache Cordova project. This new version of Cordova CLI automatically modifies the android:versionCode when building for Crosswalk and Android. Because our CLI 5.1.1 build system is now more compatible with standard Cordova CLI, this change results in a discrepancy in the way your android:versionCode is handled when building for Crosswalk (15) or Android with CLI 5.1.1 when compared to building with CLI 4.1.2.

If you have never published an app to an Android store this change will have little or no impact on you. This change might affect attempts to side-load an app onto a device, in which case the simplest solution is to uninstall the previously side-loaded app before installing the new app.

Here's what Cordova CLI 5.1.1 (Cordova-Android 4.x) is doing with the android:versionCode number (which you specify in the App Version Code field within the Build Settings section of the Projects tab):

Cordova-Android 4.x (Intel XDK CLI 5.1.1 for Crosswalk or Android builds) does this:

  • multiplies your android:versionCode by 10

then, if you are doing a Crosswalk (15) build:

  • adds 2 to the android:versionCode for ARM builds
  • adds 4 to the android:versionCode for x86 builds

otherwise, if you are performing a standard Android build (non-Crosswalk):

  • adds 0 to the android:versionCode if the Minimum Android API is < 14
  • adds 8 to the android:versionCode if the Minimum Android API is 14-19
  • adds 9 to the android:versionCode if the Minimum Android API is > 19 (i.e., >= 20)
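For example, an App Version Code of 100 becomes 1002 for a Crosswalk ARM build (100 x 10 + 2), 1004 for a Crosswalk x86 build, and 1008 for a standard Android build with a Minimum Android API of 14-19.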

If you HAVE PUBLISHED a Crosswalk app to an Android store this change may impact your ability to publish a newer version of your app! In that case, if you are building for Crosswalk, add 6000 (six with three zeroes) to your existing App Version Code field in the Crosswalk Build Settings section of the Projects tab. If you have only published standard Android apps in the past and are still publishing only standard Android apps you should not have to make any changes to the App Version Code field in the Android Builds Settings section of the Projects tab.

The workaround described above only applies to Crosswalk CLI 5.1.1 and later builds!

When you build a Crosswalk app with CLI 4.1.2 (which uses Cordova-Android 3.6) you will get the old Intel XDK behavior where: 60000 and 20000 (six with four zeros and two with four zeroes) are added to the android:versionCode for Crosswalk builds and no change is made to the android:versionCode for standard Android builds.

NOTE:

  • Android API 14 corresponds to Android 4.0
  • Android API 19 corresponds to Android 4.4
  • Android API 20 corresponds to Android 4.4W; API 21 corresponds to Android 5.0
  • CLI 5.1.1 (Cordova-Android 4.x) does not allow building for Android 2.x or Android 3.x

Why is my Crosswalk app generating errno 12 Out of memory errors on some devices?

If you are using the WebGL 2D canvas APIs and your app crashes on some devices because you added the --ignore-gpu-blacklist flag to your intelxdk.config.additions.xml file, you may need to also add the --disable-accelerated-2d-canvas flag. Using the --ignore-gpu-blacklist flag enables the use of the GPU in some problem devices, but can then result in problems with some GPUs that are not blacklisted. The --disable-accelerated-2d-canvas flag allows those non-blacklisted devices to operate properly in the presence of WebGL 2D canvas APIs and the --ignore-gpu-blacklist flag.

You likely have this problem if your app crashes after running a few seconds with an error like the following:

<gsl_ldd_control:364>: ioctl fd 46 code 0xc00c092f (IOCTL_KGSL_GPMEM_ALLOC) failed: errno 12 Out of memory
<ioctl_kgsl_sharedmem_alloc:1176>: ioctl_kgsl_sharedmem_alloc: FATAL ERROR : (null)
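In that case, the combined entry in intelxdk.config.additions.xml would look something like this (a sketch following the same pattern as the earlier examples; it assumes both flags can share one xwalk-command-line attribute):

<intelxdk:crosswalk xwalk-command-line="--ignore-gpu-blacklist --disable-accelerated-2d-canvas" />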

See Chromium Command-Line Options for Crosswalk Builds with the Intel XDK for additional info regarding the --ignore-gpu-blacklist flag and other Chromium option flags.


Intel® and the National Taiwan Science Education Center Reach For The Stars


The combination of the Intel® Edison Board and Do It Yourself (DIY) Telescopes enables makers to take a peek into the mysteries of the universe.

The earliest recorded working telescopes were refracting devices that first appeared in the Netherlands in 1608. The following year, Galileo built his own telescope that improved upon the Dutch design, and in 1668 Isaac Newton built the first practical reflecting telescope, a design that bears his name—the Newtonian reflector.

Viewed from a historical perspective, you could say that Galileo and Newton were in the vanguard of application developers and makers. One can only imagine what innovations they might have been capable of creating had they had access to the incredible hardware, software and other development tools that enable today’s makers to expand the boundaries of the Internet of Things (IoT).

The Dawning of the Age of the DIY Telescope

Thanks to a recent collaboration between Intel and Professor Jiun-Huei Proty Wu from National Taiwan University, homemade DIY Telescopes are continuing along the path that Galileo and Newton forged. Professor Wu has devoted himself to promoting telescope DIY since 2003, and more than 500 sets of DIY Telescopes are developed every year at the Taiwan International Science Fair. Many of those creators have gone on to become student finalists for the Intel International Science and Engineering Fair (Intel ISEF) over the course of the past decade.

Today, the efforts of Professor Wu and his student contributors have resulted in more than 8,000 self-made telescopes. And in January of this year, more than 60 science educators from 21 countries gathered in Taipei, Taiwan, to take the next step in turning traditionally high-cost, complicated astrophotography and photomicrography into affordable maker projects.

A First for the Intel® Edison Board

As one of the evangelists for the popular science and maker movement, Intel co-operated with Professor Wu for the first time to make another “first” happen—combining the Intel® Edison Board with a DIY Telescope that can be controlled by mobile devices. Instructed by Professor Wu, people at the workshop learned how to build their own maker devices—known broadly under the banner of Protison Maker—for astrophotography and photomicrography. Intel volunteers at the workshop provided technical assistance to help make it all happen.

Why Intel® Edison technology? First of all, it’s small. It’s a wirelessly enabled platform that provides low power consumption. It supports a variety of external interfaces. And finally, Intel Edison is a Linux-based platform, paving the way for open source development.

Here’s how it works: Protison combines the Intel® Edison mini board, 3D printing and a webcam to make astrophotography and photomicrography possible through Wi-Fi control by mobile devices or computers. The core of Protison is the Intel® Edison Breakout Board Kit, which is 3.5 cm long and 2.5 cm wide. It contains a processor and memory, operated by homemade software and powered by a commercial battery or any power bank.

When plugged in with a webcam and operated by any mobile device or computer through Wi-Fi, the system can be mounted onto a telescope or a microscope to record images and videos. Users can then easily transfer the resulting media files onto USB drives.

Taking in the View for Future Innovation

There was a tangible sense of excitement among the attendees at the workshop, many of whom were reluctant to leave and stayed after the event to learn more. In fact, the only “lowlight” associated with the event was that there simply wasn’t enough time to sate the technical appetites of those in attendance. Many of the participants left the workshop enthused about the possibility of driving the future adaptation and implementation of DIY Telescope in their own countries.

Equally important, the Intel engineering team on hand collaborated to create a competitive Intel Edison maker manual/SOP (Standard Operating Procedure) to serve as a foundation for future maker DIY Telescope innovations.

And finally, the workshop resulted in the building of a new and strong usage model to employ the Intel Edison board and associated technologies—not only to enable future innovations in the evolving DIY Telescope market, but also to open up new possibilities for Intel Edison and the Intel® Developer’s Kit across the broader scope of the entire field of science.

From Newton to New Horizons

We started this blog with a tip of the hat to Galileo and Isaac Newton—and so we’ll end it with the lyrics culled from an English language nursery rhyme:

“Star light, star bright
The first star I see tonight;
I wish I may, I wish I might,
Have the wish I wish tonight.”

It’s safe to say that, with the help of Intel® Edison technology and a host of enthusiastic application developers, makers will be able to view the stars from their own back yards and realize the dream of having their wishes come true. By any measure, the skies are the limit.

Intel® Parallel Computing Centers Newsletter Archive


Intel® Parallel Computing Centers (Intel® PCC) newsletters are issued monthly and highlight news and events from universities, institutions, and labs in the field of parallel computing.


February 2016

Modern Code Training

See what's happening in HPC this month.
Lustre is now a part of the Intel® PCC ecosystem.

Read this issue ›

 


Archived Newsletters 2016 | 2015

2016

  • January 2016: Re-watch the noteworthy successes in 2015 code modernization

2015

  • December 2015: Catch the highlights from the SC’15 Conference
  • November 2015: Intel® Manycore Platform Software Stack Version 3.6 has been released
  • October 2015: See how HPC is now impacting Oil & Gas exploration workflow
  • September 2015: “High Performance Parallelism Pearls” - Volume 2 book is now here
  • August 2015: See how the industry is utilizing Intel® Architecture
  • July 2015: Watch the best techniques for getting the most performance from modern hardware
  • June 2015: Trending now: Code Modernization and what it means for you
  • May 2015: Think Parallelization and watch our free webinars
  • April 2015: View the latest recipes, case studies, and success stories on Intel® Xeon Phi™ Processor

Building IoT Teams, Lessons Learned


It’s always a challenge to round up the right combination of talent for a developer team, and the uncharted territory of the Internet of Things (IoT) can make the path to success even harder to navigate.

As a team of anthropologists, software architects, and designers in Intel Labs, we observed dozens of organizations and interviewed a hundred-plus developers who build IoT solutions. They helped us identify what works for them (and what doesn’t) by telling us how they work, how they work together, the jobs they have to get done, and the challenges they face on the way.

Here’s what we learned.

 

Know Your Coding Expertise

Their first advice for an IoT team? Know thyself. Take inventory of your group’s combined coding expertise, and see where your strengths and weaknesses lie. Working development teams have told us that most IoT project work boils down to the following four types:

  • “Thing” development: get under the hood of devices to connect them as data sources and actuation points
  • Middleware development: weave together data sources to make coherent and actionable wholes
  • App development: build the interactions that engage and help users
  • Data analytics: create analysis pipelines that turn data into action, insight, and decision making

Great teams often tap into several of these coding domains. But how they do so varies.

We saw that many individual programmers working on IoT projects either already had experience in more than one of these categories or were in the process of working toward that experience. In fact, savvy project leads told us how they actively sought ways to cross-train their teams to ensure broader team-wide coding expertise and tighter team collaboration.

Not surprisingly, we often encountered what we called “hybrid coders,” a particularly valuable type of team member whose expertise spanned at least two domains and, as a result, often had a high-level understanding of the IoT initiative as a whole. They often assumed leadership roles on the project. Encouraging hybrid expertise for select members of an IoT development team can be a powerful counterweight to common pressures such as resource constraints and the development bottlenecks described below.

 

Know Your IoT Coding Challenges

Even with the huge variety of IoT solutions in development, IoT project teams struggled with a common set of coding challenges. Knowing which challenge is foremost for each IoT project is instructive for IoT project planning. It also helps pinpoint which coding disciplines are needed for the project to be successful.

We observed the following common IoT coding challenges (and we suspect that more will emerge as IoT matures—especially IoT analytics):

  • Tame the Wild West at the edge: gather and normalize data to and from large numbers of endpoints
  • Orchestrate system-wide data: integrate diverse data points so they can work together
  • Manage edge data: control and analyze data at the edge to optimize its system-wide flows
  • Deliver just-in-time responsiveness: coordinate system-wide data flows and user interactions to deliver results to the right people at the right time

We found that most IoT teams target one or more of these challenges (taking on all four can be unwieldy). To do so, they leaned on specific combinations of developer expertise, as shown in the following table:

Challenge | Primary Expertise Required | Supporting Expertise Required
Tame the Wild West at the edge | "Thing" development | Middleware development
Orchestrate system-wide data | Middleware development | App development
Manage edge data | Data analytics | "Thing" development, Middleware development
Deliver just-in-time responsiveness | Middleware development, App development | "Thing" development

By examining your IoT initiatives in light of these common coding challenges, you can assess who you have and who you need on board. That can help you determine where additional training or hiring is most valuable.

 

Optimize Your IoT Development Teams

To keep up with the fast pace of industry change, IoT project teams and developers constantly learn on the job. We don’t know if IoT projects attract developers with a passion to hybridize or if the developers working on these projects simply must cross-train to survive. In any case, programmers and project leads alike actively seek training, mentorship, communities of interest, and meetups to expand their coding horizons and skills.

Managers added that the cross-pollination of expertise and mentor relations helped smooth team collaboration and overall organizational morale. They also remarked that culture clashes between different coding disciplines were inevitable. But anticipating them and facilitating communication among those involved went a long way toward minimizing negative impacts on the team and its work.

 

Lessons Learned

IoT is evolving fast, but by observing IoT development teams, their successes, and their challenges, we identified four steps to help chart your team’s IoT vision:

  • Inventory your team’s coding expertise
  • Determine coding challenges related to your IoT solution
  • Identify gaps in your team’s expertise
  • Resolve the gaps through hiring and cross-training

Visit the Intel® Developer Zone to learn about Intel® IoT technologies.

What suggestions do you have for building IoT teams? We’d love to see your comments below.

Chat Heads with Intel® RealSense™ SDK Background Segmentation Boosts e-Sport Experience


Intel® RealSense™ Technology can be used to improve the e-sports experience for both players and spectators by allowing them to see each other on-screen during the game. Using background segmented video (BGS), players’ “floating heads” can be overlaid on top of a game, taking up less screen real estate than full widescreen video, and graphics can be displayed behind them (like a meteorologist on television). Giving players the ability to see one another while they play, in addition to speaking to one another, will enhance their overall in-game communication experience. And spectators will get a chance to see their favorite e-sports competitors in the middle of all the action.

In this article, we will discuss how this technique is made possible by the Intel® RealSense™ SDK. This sample will help you understand the various pieces in the implementation (using the Intel RealSense SDK for background segmentation, networking, video compression and decompression), the social interaction, and the performance of this use case. The code in this sample is written in C++ and uses DirectX*.

Figure 1: Screenshot of the sample with two players with a League of Legends* video clip playing in the background.

Figure 2: Screenshot of the sample with two players and a Hearthstone* video clip playing in the background.

Installing, Building, and Running the Sample

Download the sample at: https://github.com/GameTechDev/ChatHeads

The sample uses the following third-party libraries: 
(i) RakNet for networking 
(ii) Theora Playback Library to play back ogg videos 
(iii) ImGui for the UI 
(iv) Windows Media Foundation* (WMF) for encoding and decoding the BGS video streams

(i) and (ii) are dynamically linked (the required DLLs are present in the source repo), while (iii) is statically linked with source included.
(iv) is dynamically linked, and is part of the WMF runtime, which should be installed by default on a Windows* 8 or greater system. If it is not already present, please install the Windows SDK.

Install the Intel RealSense SDK (2015 R5 or higher) prior to building the sample. The header and library include paths in the Visual Studio project use the RSSDK_DIR environment variable, which is set during the RSSDK installation.

The solution file is at ChatheadsNativePOC\ChatheadsNativePOC and should build successfully with VS2013 and VS2015.

Install the Intel® RealSense™ Depth Camera Manager, which includes the camera driver, before running the sample. The sample has been tested on Windows® 8.1 and Windows® 10 using both the external and embedded Intel® RealSense™ cameras.

When you start the sample, the option panel shown in Figure 3 displays:

Figure 3: Option panel at startup.

  • Scene selection: Select between League of Legends* video, Hearthstone* video and a CPUT (3D) scene. Click the Load Scene button to render the selection. This does not start the Intel RealSense software; that happens in a later step.
  • Resolutions: The Intel RealSense SDK background segmentation module supports multiple resolutions. Setting a new resolution results in a shutdown of the current Intel RealSense SDK session and initializes a new one.
  • Is Server / IP Address: If you are running as the server, check the box labeled Is Server. 
    If you are running as a client, leave the box unchecked and enter the IP address you want to connect to.

    Hitting Start initializes the network and the Intel RealSense SDK, and plays the selected scene. The maximum number of connected machines (server plus clients) is hardcoded to 4 in the file NetworkLayer.h.

    Note: While a server and client can be started on the same system, they cannot use different color stream resolutions. Attempting to do so will crash the Intel RealSense SDK runtime since two different resolutions can’t run simultaneously on the same camera.

    After the network and Intel RealSense SDK initialize successfully, the panels shown in Figure 4 display:

Figure 4: Chat Heads option panels.

The Option panel has multiple sections, each with its own control settings. The sections and their fields are:

  • BGS/Media controls
    • Show BGS Image – If enabled, the background segmented image (i.e., color stream without the background) is shown. If disabled, the color stream is simply used (even though BGS processing still happens). This affects the remote chat heads as well (that is, if both sides have the option disabled, you’ll see the remote players’ background in the video stream).

      Figure 5: BGS on (left) and off (right). The former blends into Hearthstone*, while the latter sticks out.

    • Pause BGS - Pause the Intel RealSense SDK BGS module, suspending segmentation processing on the CPU
    • BGS frame skip interval - The frequency at which the BGS algorithm runs. Enter 0 to run every frame, 1 to run once in two frames, and so on. The limit exposed by the Intel RealSense SDK is 4.
    • Encoding threshold – This is relevant only for multiplayer scenarios. See the Implementation section for details.
    • Decoding threshold - This is relevant only for multiplayer scenarios. See the Implementation section for details.
  • Size/Pos controls
    • Size - Click/drag within the boxes to resize the sprite. Use it with different resolutions to compare quality.
    • Pos - Click/drag within the boxes to reposition the sprite.
  • Network control/information (This section is shown only when multiple players are connected)
    • Network send interval (ms) - How often video update data is sent.
    • Sent - Graph of data sent by a client or server.
    • Rcvd - Graph of data received by a client or server. Clients send their updates to the server, which then broadcasts them to the other clients. For reference, streaming 1080p Netflix* video requires a recommended bandwidth of 5 Mbps (625 KB/s).
  • Metrics
    • Process metrics
      • CPU Used - The BGS algorithm runs on several Intel® Threading Building Blocks (TBB) threads and, in the context of a game, can use more CPU resources than desired. Play with the Pause BGS and BGS frame skip interval options and change the Chat Head resolution to see how they affect CPU usage.

Implementation

Internally, the Intel RealSense SDK does its processing on each new frame of data it receives from the Intel RealSense camera. The calls used to retrieve that data are blocking, making it costly to execute this processing on the main application thread. Therefore, in this sample, all of the Intel RealSense SDK processing happens on its own dedicated thread. This thread and the application thread never attempt to write to the same objects, making synchronization trivial.
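To make this concrete, here is a minimal sketch of such a dedicated thread, assuming the RSSDK C++ API (PXCSenseManager plus the 3D segmentation module); the names and structure are illustrative, not the sample’s exact code:

#include <pxcsensemanager.h>
#include <pxc3dseg.h>
#include <atomic>

std::atomic<bool> gQuit(false);   // set by the application thread at shutdown

void RealSenseThread()
{
    PXCSenseManager* sm = PXCSenseManager::CreateInstance();
    sm->Enable3DSeg();                                  // background segmentation module
    if (sm->Init() < PXC_STATUS_NO_ERROR) return;       // camera or module init failed

    while (!gQuit)
    {
        // Blocking call: waits for the next synchronized frame (~33 ms at 30 fps),
        // which is why this loop must not run on the application thread.
        if (sm->AcquireFrame(true) < PXC_STATUS_NO_ERROR) break;

        PXC3DSeg* seg = sm->Query3DSeg();
        PXCImage* segmented = seg ? seg->AcquireSegmentedImage() : nullptr;
        if (segmented)
        {
            // Copy the BGRA pixels into a buffer shared with the app thread here.
            segmented->Release();
        }
        sm->ReleaseFrame();                             // must release before the next frame
    }
    sm->Release();
}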

There is also a dedicated networking thread that handles incoming messages and is controlled by the main application thread using signals. The networking thread receives video update packets and updates a shared buffer for the remote chat heads with the decoded data.

The application thread takes care of copying the updated image data to the DirectX* texture resource. When a remote player changes the camera resolution, the networking thread sets a bool for recreation, and the application thread takes care of resizing the buffer, recreating the DirectX* graphics resources (Texture2D and ShaderResourceView) and reinitializing the decoder.

Figure 6 shows the post-initialization interaction and data flow between these systems (threads).

Figure 6: Interaction flow between local and remote Chat Heads.

Color Conversion

The Intel RealSense SDK uses 32-bit BGRA (8 bits per channel) to store the segmented image, with the alpha channel set to 0 for background pixels. This maps directly to the DirectX texture format DXGI_FORMAT_B8G8R8A8_UNORM_SRGB for rendering the chat heads. In this sample, we convert the BGRA image to YUYV, wherein every pair of BGRA pixels is combined into one YUYV pixel. However, YUYV does not have an alpha channel, so to preserve the alpha from the original image, we set the Y, U, and V channels all to 0 in order to represent background segmented pixels.

The YUYV bit stream is then encoded using WMF’s H.264 encoder. This also ensures better compression, since more than half the image generally consists of background pixels.

When decoded, the YUYV values meant to represent background pixels can be non-zero due to the lossy nature of the compression. Our workaround is to use 8 bit encoding and decoding thresholds, exposed in the UI. On the encoding side, if the alpha of a given BGRA pixel is less than the encoding threshold, then the YUYV pixel will be set to 0. Then again, on the decoding side, if the decoded Y, U, and V channels are all less than the decoding threshold, then the resulting BGRA pixel will be assigned an alpha of 0.
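A rough sketch of this thresholding logic follows (our illustrative helpers, not the sample’s exact code; the real path also packs pairs of BGRA pixels into single YUYV macropixels):

#include <cstdint>

// Encode side: pixels whose alpha falls below the threshold are written as
// Y = U = V = 0, so the encoder sees a flat, highly compressible background.
inline bool IsBackground(const uint8_t bgra[4], uint8_t encodeThreshold)
{
    return bgra[3] < encodeThreshold;   // bgra[3] is the alpha channel
}

// Decode side: anything that comes back "almost zero" in all three channels
// is treated as background and assigned alpha 0.
inline uint8_t RecoverAlpha(uint8_t y, uint8_t u, uint8_t v, uint8_t decodeThreshold)
{
    const bool background =
        (y < decodeThreshold) && (u < decodeThreshold) && (v < decodeThreshold);
    return background ? 0 : 255;
}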

When the decoding threshold is set to 0, you may notice green pixels (shown below) highlighting the background segmented image(s). This is because in YUYV, 0 corresponds to the color green and not black as in BGRA (with non-zero alpha).

Figure 7: Green silhouette edges around the remote player when a 0 decoding threshold is used.

Bandwidth

The amount of data sent over the network depends on the network send interval and the local camera resolution. The maximum send rate is limited by the camera’s 30 fps frame rate, which corresponds to a minimum send interval of 33.33 ms. At this send rate, a 320x240 video feed consumes 60-75 KBps (kilobytes per second) with minimal motion and 90-120 KBps with more motion. Note that the bandwidth figures depend on the number of pixels covered by the player. Increasing the resolution to 1280x720 doesn’t impact the bandwidth cost all that much; the net increase is around 10-20 KBps, since a sizable chunk of the image is background (YUYV set to 0), which compresses very well.
Increasing the send interval to 70 ms reduces bandwidth consumption to ~20-30 KBps.
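As a back-of-envelope check on those figures (our arithmetic, assuming uncompressed YUYV at 2 bytes per pixel):

// Raw 320x240 YUYV at 30 fps, before H.264 encoding:
constexpr int kRawBytesPerSecond = 320 * 240 * 2 * 30;  // = 4,608,000 bytes/s, ~4.6 MB/s
// The observed 60-120 KBps on the wire therefore implies roughly a 40-75x
// reduction from H.264 plus the zeroed-out background pixels.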

Performance

The sample uses Intel® Instrumentation and Tracing Technology (Intel® ITT) markers and Intel® VTune Amplifier XE to help measure and analyze performance. To enable them, uncomment

//#define ENABLE_VTUNE_PROFILING // uncomment to enable marker code

In the file

ChatheadsNativePOC\itt\include\VTuneScopedTask.h

and rebuild.
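For reference, the underlying ITT calls look roughly like this (a sketch based on the public ittnotify API; the sample wraps them in the VTuneScopedTask RAII helper):

#include <ittnotify.h>

// One domain for the app, one string handle per instrumented section.
static __itt_domain*        gDomain  = __itt_domain_create("ChatHeads");
static __itt_string_handle* gBgsName = __itt_string_handle_create("BGS");

void RunSegmentationOnce()
{
    __itt_task_begin(gDomain, __itt_null, __itt_null, gBgsName);  // box opens in the VTune timeline
    // ... background segmentation work ...
    __itt_task_end(gDomain);                                      // box closes
}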

With the instrumentation code enabled, an Intel® VTune™ concurrency analysis of the sample can help you understand the application’s thread profile. The Platform view tab shows a colored box (whose length is based on execution time) for every instrumented section and can help locate bottlenecks. The following capture was taken on an Intel® Core™ i7-4770R processor (8 logical cores) with varying BGS work. The “CPU Usage” row at the bottom shows the cost of executing the BGS algorithm every frame, every alternate frame, once in three frames, and when suspended. As expected, the TBB threads doing the BGS work have lower CPU utilization when frames are skipped.

Figure 8: VTune concurrency analysis platform view with varying BGS work

A closer look at the RealSense thread shows the RSSDK AcquireFrame() call taking ~29-35ms on average, which is a result of the configured frame capture rate of 30 fps.

Figure 9: Closer look at the RealSense thread. The thread does not spin, and is blocked while trying to acquire the frame data

The CPU usage info can be seen via the metrics panel of the sample as well, and is shown in the table below:

BGS frequency | Chat Heads CPU usage (approx.)
Every frame | 23%
Every alternate frame | 19%
Once in three frames | 16%
Once in four frames | 13%
Suspended | 9%

Doing the BGS work every alternate frame, or once in three frames, results in a fairly good experience when the subject is a gamer because of minimal motion. The sample currently doesn’t update the image for the skipped frames – it would be interesting to use the updated color stream with the previous frame’s segmentation mask instead.

Conclusion

The Chat Heads usage enabled by Intel RealSense technology can make a game truly social and improve both the in-game and e-sport experience without sacrificing the look, feel and performance of the game. Current e-sport broadcasts generally show full video (i.e., with the background) overlays of the professional player and/or team in empty areas at the bottom of the UI. Using the Intel RealSense SDK's background segmentation, each player’s segmented video feed can be overlaid near the player’s character, without obstructing the game view. Combined with Intel RealSense SDK face tracking, it allows for powerful and fun social experiences in games.

Acknowledgements

A huge thanks to Jeff Laflam for pair-programming the sample and reviewing this article. 
Thanks also to Brian Mackenzie for the WMF based encoder/decoder implementation, Doug McNabb for CPUT clarifications and Geoffrey Douglas for reviewing this article.

 

UX Best Practices for Intel® RealSense™ Camera (User Facing) - Technical Tips


In a previous article, Best UX Practices for Intel® RealSense™ Camera (F200) Applications, we showed you a series of 15 short videos that members of the Experience Design and Development team within Intel’s Perceptual Computing (PerC) Group recorded to help you learn best practices for developing a natural user interface (NUI) application for the user-facing F200 and SR300 cameras using the Intel® RealSense™ SDK.

To help you implement these UX best practices, we’ve created six more short videos covering the technical best practices for using the software and hardware to best effect. Topics covered include Boundary Boxes, Capture Volume, Interaction Zone, Occlusion, Speed and Precision, and World Space.

Intel® RealSense™ Camera (F200 or SR300) Tech Tips: Capture Volume

The capture volume (the field of view) differs between the color and depth cameras. An additional dimension developers have to keep in mind is the way users interact with the camera on different form factors. Understanding how to determine the field of view for both the color and depth cameras, and the effective range of the camera, will help you provide visual feedback that shows users when they move out of the camera’s detection zone. In this video, we show some of the APIs to get this data in real time.
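For example, a sketch of querying the fields of view and depth range at runtime might look like the following (RSSDK device calls; error handling trimmed, and names should be checked against the SDK reference):

#include <pxcsensemanager.h>

void QueryCaptureVolume()
{
    PXCSenseManager* sm = PXCSenseManager::CreateInstance();
    sm->EnableStream(PXCCapture::STREAM_TYPE_COLOR, 640, 480);
    sm->EnableStream(PXCCapture::STREAM_TYPE_DEPTH, 640, 480);
    if (sm->Init() >= PXC_STATUS_NO_ERROR)
    {
        PXCCapture::Device* device = sm->QueryCaptureManager()->QueryDevice();
        PXCPointF32 colorFov = device->QueryColorFieldOfView();  // x = horizontal, y = vertical, in degrees
        PXCPointF32 depthFov = device->QueryDepthFieldOfView();
        PXCRangeF32 range    = device->QueryDepthSensorRange();  // min/max sensing distance, in mm
        // Compare tracked positions against these bounds to warn users near the edges.
    }
    sm->Release();
}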

For additional information, view these videos created by the PerC group:


Intel® RealSense™ (F200 or SR300) Tech Tips: Interaction Zone

The fidelity and the capture volumes of the color and depth cameras differ within the F200 camera. Depending on the algorithm you want to use, the interaction zones will differ. Detecting the interaction zone and operating within it may not be obvious to the end user. This video shows how developers can detect the interaction zones for the hand and face modules through the alerts built into the SDK middleware, and build an effective visual feedback mechanism that tells end users how to adapt to the interaction zones.
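A minimal sketch of wiring up those alerts for the hand module might look like this (assuming the RSSDK C++ API; error handling trimmed):

#include <pxcsensemanager.h>
#include <pxchanddata.h>
#include <pxchandconfiguration.h>

void CheckInteractionZoneAlerts()
{
    PXCSenseManager* sm = PXCSenseManager::CreateInstance();
    sm->EnableHand();
    PXCHandModule* hand = sm->QueryHand();
    PXCHandConfiguration* cfg = hand->CreateActiveConfiguration();
    cfg->EnableAllAlerts();                 // includes out-of-border and too-far/too-close alerts
    cfg->ApplyChanges();
    PXCHandData* handData = hand->CreateOutput();

    if (sm->Init() >= PXC_STATUS_NO_ERROR && sm->AcquireFrame(true) >= PXC_STATUS_NO_ERROR)
    {
        handData->Update();                 // refresh tracking output for this frame
        for (pxcI32 i = 0; i < handData->QueryFiredAlertsNumber(); ++i)
        {
            PXCHandData::AlertData alert;
            if (handData->QueryFiredAlertData(i, alert) >= PXC_STATUS_NO_ERROR &&
                alert.label == PXCHandData::ALERT_HAND_OUT_OF_BORDERS)
            {
                // Show an on-screen cue prompting the user back into the zone.
            }
        }
        sm->ReleaseFrame();
    }
    sm->Release();
}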

This short video from the PerC group provides more information about the interaction zone:


Intel® RealSense™ (F200 or SR300) Tech Tips: Boundary Boxes

Object interaction is a key component of most RealSense apps. Understanding some of the challenges that users could have with incorrect object placement on the UI is critical. In this video, we introduce the concept of bounding boxes, which the SDK supports to allow effective handling of objects within the interaction zones. We also show you how to use SDK APIs to implement bounding boxes in your apps as an effective visual feedback mechanism.
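As a hedged sketch, reading a tracked hand’s 2D bounding box through the RSSDK hand module might look like this (assumes handData was updated for the current frame):

void DrawHandBoundingBox(PXCHandData* handData)
{
    PXCHandData::IHand* hand = nullptr;
    if (handData->QueryHandData(PXCHandData::ACCESS_ORDER_NEAR_TO_FAR, 0, hand) >= PXC_STATUS_NO_ERROR)
    {
        PXCRectI32 box;
        if (hand->QueryBoundingBoxImage(box) >= PXC_STATUS_NO_ERROR)
        {
            // box.x, box.y, box.w, box.h bound the hand in depth-image pixels;
            // draw this rectangle as on-screen feedback for object placement.
        }
    }
}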

For additional information, view this video from the PerC group:


Intel® RealSense™ (F200 or SR300) Tech Tips: Occlusion

Since RealSense modalities include interactions that are non-tactile, it is hard to envision when your hand or face could be occluded by a part of your body or by an object. In this video, we talk about the supported, partially supported, and unsupported occlusion scenarios for the hand and face, the alert mechanisms available in the SDK, and how to leverage them in applications to provide visual feedback indicating to end users when occlusion or loss of tracking happens.

To learn more about occlusion, view this video created by the PerC group:


Intel® RealSense™ (F200 or SR300) Tech Tips: Speed and Precision

The precision you can get with each of the Intel® RealSense™ SDK algorithms varies with the speed of interaction. In this video, we provide guidance on using the SDK to accommodate different speeds of operation and the amount of precision to expect with each. Some of the utilities that help improve precision are also introduced. We further show you how to implement alerts specific to handling the speed of operation and how they translate to visual feedback when alerts are raised.

To better understand speed and precision, view the following videos created by the PerC group:


Intel® RealSense™ (F200 or SR300) Tech Tips: World Space

When developing apps for RealSense, it is very important for developers to understand how to translate world space (the area the camera can see) to screen space and vice versa. In this video, we demonstrate the Projection tool that is installed as part of the SDK and walk you through a visualization of the translation between screen space and world space, as well as color-to-depth projection.
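In code, the same translation is exposed through the SDK’s projection interface; a small hedged sketch (names per the RSSDK reference; sm is assumed already initialized with color and depth streams) follows:

#include <pxcsensemanager.h>
#include <pxcprojection.h>

void ProjectWorldPointToColor(PXCSenseManager* sm)
{
    PXCProjection* projection =
        sm->QueryCaptureManager()->QueryDevice()->CreateProjection();

    PXCPoint3DF32 world = { 0.0f, 0.0f, 500.0f };    // a point 500 mm in front of the camera
    PXCPointF32 colorPixel;
    projection->ProjectCameraToColor(1, &world, &colorPixel);  // camera (world) space -> color pixel

    // colorPixel.x / colorPixel.y is where that world point lands on screen.
    projection->Release();
}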

For more information on world space, view this video created by the PerC group:

Intel® XDK FAQs - App Designer


Which App Designer framework should I use? Which App Designer framework is best?

There is no "best" App Designer framework. Each framework has it's pros and cons. You should choose that framework which serves your applicaton needs best. The list below provides a list of pros and cons for each of the frameworks that are available as part of App Designer.

  • Framework 7 -- Pros: provides pixel perfect layout with device-specific UI elements for Android and iOS platforms. Cons: difficult to customize and modify.

  • Twitter Bootstrap 3 -- Pros: a very clean UI framework that relies primarily on CSS with very little JavaScript trickery. Thriving third-party ecosystem with many plugins and add-ons, including themes. Cons: to be written.

  • App Framework 3 -- Pros: an optimized for mobile library that is very lean. Cons: to be written.

  • Ionic -- to be written.

  • Topcoat -- This UI framework has been deprecated and will be retired from App Designer in a future release of the Intel XDK. You can still use this framework in apps that you create using the Intel XDK, but you will have to do so manually, without the help of the Intel XDK App Designer UI layout tool. If you wish to continue using Topcoat please visit the Topcoat project page and the Topcoat GitHub repo for documentation.

  • Ratchet -- This UI framework has been deprecated and will be retired from App Designer in a future release of the Intel XDK. You can still use this framework in apps that you create using the Intel XDK, but you will have to do so manually, without the help of the Intel XDK App Designer UI layout tool. If you wish to continue using Ratchet please visit the Ratchet project page and the Ratchet GitHub repo for documentation.

  • jQuery Mobile -- This UI framework has been deprecated and will be retired from App Designer in a future release of the Intel XDK. You can still use this framework in apps that you create using the Intel XDK, but you will have to do so manually, without the help of the Intel XDK App Designer UI layout tool. If you wish to continue using jQuery Mobile please visit the jQuery Mobile API page and the jQuery Mobile GitHub page for documentation.

What does the Google* Map widget’s "center type" attribute and its values "Auto calculate," "Address" and "Lat/Long" mean?

The "center type" parameter defines how the map view is centered in your div. It is used to initialize the map as follows:

  • Lat/Long: center the map on a specific latitude and longitude (that you provide on the properties page)
  • Address: center the map on a specific address (that you provide on the properties page)
  • Auto Calculate: center the map on a collection of markers

This is just for initialization of the map widget. Beyond that you must use the standard Google maps APIs to move and/or modify the map. See the "google_maps.js" code for initialization of the widget and some calls to the Google maps APIs. There is also a pointer to the Google maps API at the beginning of the JS file.

To get the current position, you have to use the Geo API, and then push that into the Maps API to display it. The Google Maps API will not give you any device data, it will only display information for you. Please refer to the Intel XDK "Hello, Cordova" sample app for some help with the Geo API. There are a lot of useful comments and console.log messages.

How do I size UI elements in my project?

Trying to implement "pixel perfect" user interfaces with HTML5 apps is not recommended as there is a wide array of device resolutions and aspect ratios and it is impossible to insure you are sized properly for every device. Instead, you should use "responsive web design" techniques to build your UI so that it adapts to different sizes automatically. You can also use the CSS media query directive to build CSS rules that are specific to different screen dimensions.

Note: The viewport is sized in CSS pixels (also known as virtual pixels or device-independent pixels), so the physical pixel dimensions are not what you will normally be designing for.

How do I create lists, buttons and other UI elements with the Intel XDK?

The Intel XDK provides you with a way to build HTML5 apps that are run in a webview on the target device. This is analogous to running in an embedded browser (refer to this blog for details). Thus, the programming techniques are the same as those you would use inside a browser, when writing a single-page client-side HTML5 app. You can use the Intel XDK App Designer tool to drag and drop UI elements.

Why is the user interface for Chrome on Android* unresponsive?

It could be that you are using an outdated version of the App Framework* files. You can find the recent versions here. You can safely replace any App Framework files that App Designer installed in your project with more recent copies as App Designer will not overwrite the new files.

How do I work with more recent versions of App Framework* since the latest Intel XDK release?

You can replace the App Framework* files that the Intel XDK automatically inserted with more recent versions that can be found here. App designer will not overwrite your replacement.

Is there a replacement to XPATH in App Framework* for selecting nodes from an XML document?

This FAQ applies only to App Framework 2. App Framework 3 no longer includes a replacement for the jQuery selector library, it expects that you are using standard jQuery.

App Framework is a UI library that implements a subset of the jQuery* selector library. If you wish to use jQuery for XPath manipulation, it is recommended that you use jQuery as your selector library and not App Framework. However, it is also possible to use jQuery with the UI components of App Framework. Please refer to this entry in the App Framework docs.

It would look similar to this:

<script src="lib/jq/jquery.js"></script><script src="lib/af/jq.appframework.js"></script><script src="lib/af/appframework.ui.js"></script>

Why does my App Framework* app that was previously working suddenly start having issues with Android* 4.4?

Ensure you have upgraded to the latest version of App Framework. If your app was built with the now-retired Intel XDK "legacy" build system, be sure to set the "Targeted Android Version" to 19 in the Android-Crosswalk build settings. The legacy build targeted Android 4.2.

How do I manually set a theme?

If you want to, for example, change the theme only on Android*, you can add the following lines of code:

  1. $.ui.autoLaunch = false; // Stop the App Framework* auto launch right after you load App Framework*
  2. Detect the underlying platform using navigator.userAgent, intel.xdk.device.platform, or window.device.platform. If the platform detected is Android*, set $.ui.useOSThemes = false; to disable custom themes and set <div id="afui" class="android light">
  3. Otherwise, set $.ui.useOSThemes = true;
  4. When device ready and document ready have been detected, add $.ui.launch();

How does page background color work in App Framework?

In App Framework the BODY is in the background and the page is in the foreground. If you set the background color on the body, you will see the page's background color. If you set the theme to default, App Framework uses a native-like theme based on the device at runtime. Otherwise, it uses the App Framework theme. This is normally done using the following:

<script>
  $(document).ready(function(){ $.ui.useOSThemes = false; });
</script>

Please see Customizing App Framework UI Skin for additional details.

Back to FAQs Main

Intel® XDK FAQs - General


How can I get started with Intel XDK?

There are plenty of videos and articles that you can go through here to get started. You could also start with some of our demo apps. It may also help to read Five Useful Tips on Getting Started Building Cordova Mobile Apps with the Intel XDK, which will help you understand some of the differences between developing for a traditional server-based environment and developing for the Intel XDK hybrid Cordova app environment.

Having prior understanding of how to program using HTML, CSS and JavaScript* is crucial to using the Intel XDK. The Intel XDK is primarily a tool for visualizing, debugging and building an app package for distribution.

You can do the following to access our demo apps:

  • Select Project tab
  • Select "Start a New Project"
  • Select "Samples and Demos"
  • Create a new project from a demo

If you have specific questions following that, please post them to our forums.

Can I use an external editor for development in Intel® XDK?

Yes, you can open your files and edit them in your favorite editor. However, note that you must use Brackets* to use the "Live Layout Editing" feature. Also, if you are using App Designer (the UI layout tool in Intel XDK) it will make many automatic changes to your index.html file, so it is best not to edit that file externally at the same time you have App Designer open.

Some popular editors among our users include:

  • Sublime Text* (Refer to this article for information on the Intel XDK plugin for Sublime Text*)
  • Notepad++* for a lightweight editor
  • Jetbrains* editors (Webstorm*)
  • Vim* the editor

How do I get code refactoring capability in Brackets* (the Intel XDK code editor)?

...to be written...

Why doesn’t my app show up in Google* play for tablets?

...to be written...

What is the global-settings.xdk file and how do I locate it?

global-settings.xdk contains information about all your projects in the Intel XDK, along with many of the settings related to panels under each tab (Emulate, Debug etc). For example, you can set the emulator to auto-refresh or no-auto-refresh. Modify this file at your own risk and always keep a backup of the original!

You can locate global-settings.xdk here:

  • Mac OS X*
    ~/Library/Application Support/XDK/global-settings.xdk
  • Microsoft Windows*
    %LocalAppData%\XDK
  • Linux*
    ~/.config/XDK/global-settings.xdk

If you are having trouble locating this file, you can search for it on your system using something like the following:

  • Windows:
    > cd /
    > dir /s global-settings.xdk
  • Mac and Linux:
    $ sudo find / -name global-settings.xdk

When do I use the intelxdk.js, xhr.js and cordova.js libraries?

The intelxdk.js and xhr.js libraries were only required for use with the Intel XDK legacy build tiles (which have been retired). The cordova.js library is needed for all Cordova builds. When building with the Cordova tiles, any references to intelxdk.js and xhr.js libraries in your index.html file are ignored.

How do I get my Android (and Crosswalk) keystore file?

Previously you needed to email us, but now you can download your Android (and Crosswalk) keystore file directly. Go to this page https://appcenter.html5tools-software.intel.com/certificate/export.aspx and log in (if asked) using your Intel XDK account credentials. You may have to go back to that location a second time after logging in (do this within the same browser tab that you just logged in with, to preserve your login credentials).

If successful, there is a link that, when clicked, will generate a request for an "identification code" for retrieving your keystore. Clicking this link will cause an email to be sent to the email address registered to your account. This email will contain your "identification code" but will call it an "authentication code" (a different term for the same thing). Use this "authentication code" that you received by email to fill in the second form on the web page, above. Filling in that form with the code you received will take you to a new page where you will see:

  • a "Download keystore" link
  • your "key alias"
  • your "keystore password"
  • your "key password"

Make sure you copy down ALL the information provided! You will need all of that information in order to make use of the keystore. If you lose the password and alias information it will render the key useless!

If the above fails, please send an email to html5tools@intel.com requesting help. It is important that you send that email from the email address associated with your Intel XDK account.

How do I rename my project that is a duplicate of an existing project?

See this FAQ: How do I make a copy of an existing Intel XDK project?

How do I recover when the Intel XDK hangs or won't start?

  • If you are running Intel XDK on Windows* it must be Windows* 7 or higher. It will not run reliably on earlier versions.
  • Delete the "project-name.xdk" file from the project directory that Intel XDK is trying to open when it starts (it will try to open the project that was open during your last session), then try starting Intel XDK. You will have to "import" your project into Intel XDK again. Importing merely creates the "project-name.xdk" file in your project directory and adds that project to the "global-settings.xdk" file.
  • Rename the project directory Intel XDK is trying to open when it starts. Create a new project based on one of the demo apps. Test Intel XDK using that demo app. If everything works, restart Intel XDK and try it again. If it still works, rename your problem project folder back to its original name and open Intel XDK again (it should now open the sample project you previously opened). You may have to re-select your problem project (Intel XDK should have forgotten that project during the previous session).
  • Clear Intel XDK's program cache directories and files.

    On a Windows machine this can be done using the following on a standard command prompt (administrator is not required):

    > cd %AppData%\..\Local\XDK
    > del *.* /s/q

    To locate the "XDK cache" directory on [OS X*] and [Linux*] systems, do the following:

    $ sudo find / -name global-settings.xdk
    $ cd <dir found above>
    $ sudo rm -rf *

    You might want to save a copy of the "global-settings.xdk" file before you delete that cache directory and copy it back before you restart Intel XDK. Doing so will save you the effort of rebuilding your list of projects. Please refer to this question for information on how to locate the global-settings.xdk file.
  • If you save the "global-settings.xdk" file and restored it in the step above and you're still having hang troubles, try deleting the directories and files above, along with the "global-settings.xdk" file and try it again.
  • Do not store your project directories on a network share (Intel XDK currently has issues with network shares that have not yet been resolved). This includes folders shared between a Virtual machine (VM) guest and its host machine (for example, if you are running Windows* in a VM running on a Mac* host). This network share issue is a known issue with a fix request in place.
  • There have also been issues with running behind a corporate network proxy or firewall. To check them try running Intel XDK from your home network where, presumably, you have a simple NAT router and no proxy or firewall. If things work correctly there then your corporate firewall or proxy may be the source of the problem.
  • Issues with Intel XDK account logins can also cause Intel XDK to hang. To confirm that your login is working correctly, go to the Intel XDK App Center and confirm that you can login with your Intel XDK account. While you are there you might also try deleting the offending project(s) from the App Center.

If you can reliably reproduce the problem, please send us a copy of the "xdk.log" file that is stored in the same directory as the "global-settings.xdk" file to html5tools@intel.com.

Is Intel XDK an open source project? How can I contribute to the Intel XDK community?

No, it is not an open source project. However, it utilizes many open source components that are then assembled into the Intel XDK. While you cannot contribute directly to the Intel XDK integration effort, you can contribute to the many open source components that make up the Intel XDK.

The following open source components are the major elements that are being used by Intel XDK:

  • Node-Webkit
  • Chromium
  • Ripple* emulator
  • Brackets* editor
  • Weinre* remote debugger
  • Crosswalk*
  • Cordova*
  • App Framework*

How do I configure Intel XDK to use 9 patch png for Android* apps splash screen?

Intel XDK does support the use of 9 patch png for Android* apps splash screen. You can read up more at https://software.intel.com/en-us/xdk/articles/android-splash-screens-using-nine-patch-png on how to create a 9 patch png image and link to an Intel XDK sample using 9 patch png images.

How do I stop AVG from popping up the "General Behavioral Detection" window when Intel XDK is launched?

You can try adding nw.exe as the app that needs an exception in AVG.

What do I specify for "App ID" in Intel XDK under Build Settings?

Your app ID uniquely identifies your app. For example, it can be used to identify your app within Apple’s application services allowing you to use things like in-app purchasing and push notifications.

Here are some useful articles on how to create an App ID:

Is it possible to modify the Android Manifest or iOS plist file with the Intel XDK?

You cannot modify the AndroidManifest.xml file directly with our build system, as it only exists in the cloud. However, you may do so by creating a dummy plugin that only contains a plugin.xml file containing directives that can be used to add lines to the AndroidManifest.xml file during the build process. In essence, you add lines to the AndroidManifest.xml file via a local plugin.xml file. Here is an example of a plugin that does just that:

<?xml version="1.0" encoding="UTF-8"?><plugin xmlns="http://apache.org/cordova/ns/plugins/1.0" id="my-custom-intents-plugin" version="1.0.0"><name>My Custom Intents Plugin</name><description>Add Intents to the AndroidManifest.xml</description><license>MIT</license><engines><engine name="cordova" version=">=3.0.0" /></engines><!-- android --><platform name="android"><config-file target="AndroidManifest.xml" parent="/manifest/application"><activity android:configChanges="orientation|keyboardHidden|keyboard|screenSize|locale" android:label="@string/app_name" android:launchMode="singleTop" android:name="testa" android:theme="@android:style/Theme.Black.NoTitleBar"><intent-filter><action android:name="android.intent.action.SEND" /><category android:name="android.intent.category.DEFAULT" /><data android:mimeType="*/*" /></intent-filter></activity></config-file></platform></plugin>

You can inspect the AndroidManifest.xml created in an APK, using apktool with the following command line:

$ apktool d my-app.apk
$ cd my-app
$ more AndroidManifest.xml

This technique exploits the config-file element that is described in the Cordova Plugin Specification docs and can also be used to add lines to iOS plist files. See the Cordova plugin documentation link for additional details.

Here is an example of such a plugin for modifying the iOS plist file, specifically for adding a BIS key to the plist file:

<?xml version="1.0" encoding="UTF-8"?><plugin
    xmlns="http://apache.org/cordova/ns/plugins/1.0"
    id="my-custom-bis-plugin"
    version="0.0.2"><name>My Custom BIS Plugin</name><description>Add BIS info to iOS plist file.</description><license>BSD-3</license><preference name="BIS_KEY" /><engines><engine name="cordova" version=">=3.0.0" /></engines><!-- ios --><platform name="ios"><config-file target="*-Info.plist" parent="CFBundleURLTypes"><array><dict><key>ITSAppUsesNonExemptEncryption</key><true/><key>ITSEncryptionExportComplianceCode</key><string>$BIS_KEY</string></dict></array></config-file></platform></plugin>

How can I share my Intel XDK app build?

You can send a link to your project via an email invite from your project settings page. However, a login to your account is required to access the file behind the link. Alternatively, you can download the build from the build page, onto your workstation, and push that built image to some location from which you can send a link to that image.

Why does my iOS build fail when I am able to test it successfully on a device and the emulator?

Common reasons include:

  • The App ID specified in your project settings does not match the one you specified in Apple's developer portal.
  • The provisioning profile does not match the cert you uploaded. Double-check with Apple's developer site that you are using the correct and current distribution cert and that the provisioning profile is still active. Download the provisioning profile again and add it to your project to confirm.
  • In Project Build Settings, your App Name is invalid. It should be modified to include only alphabetic characters, spaces, and numbers.

How do I add multiple domains in Domain Access?

Here is the primary doc source for that feature.

If you need to insert multiple domain references, then you will need to add the extra references in the intelxdk.config.additions.xml file. This StackOverflow entry provides a basic idea and you can see the intelxdk.config.*.xml files that are automatically generated with each build for the <access origin="xxx" /> line that is generated based on what you provide in the "Domain Access" field of the "Build Settings" panel on the Project Tab.

How do I build more than one app using the same Apple developer account?

On Apple developer, create a distribution certificate using the "iOS* Certificate Signing Request" key downloaded from Intel XDK Build tab only for the first app. For subsequent apps, reuse the same certificate and import this certificate into the Build tab like you usually would.

How do I include search and spotlight icons as part of my app?

Please refer to this article in the Intel XDK documentation. Create an intelxdk.config.additions.xml file in your top-level directory (same location as the other intelxdk.*.config.xml files) and add the following lines to support icons in Settings and other areas in iOS*.

<!-- Spotlight Icon -->
<icon platform="ios" src="res/ios/icon-40.png" width="40" height="40" />
<icon platform="ios" src="res/ios/icon-40@2x.png" width="80" height="80" />
<icon platform="ios" src="res/ios/icon-40@3x.png" width="120" height="120" />
<!-- iPhone Spotlight and Settings Icon -->
<icon platform="ios" src="res/ios/icon-small.png" width="29" height="29" />
<icon platform="ios" src="res/ios/icon-small@2x.png" width="58" height="58" />
<icon platform="ios" src="res/ios/icon-small@3x.png" width="87" height="87" />
<!-- iPad Spotlight and Settings Icon -->
<icon platform="ios" src="res/ios/icon-50.png" width="50" height="50" />
<icon platform="ios" src="res/ios/icon-50@2x.png" width="100" height="100" />

For more information related to these configurations, visit http://cordova.apache.org/docs/en/3.5.0/config_ref_images.md.html#Icons%20and%20Splash%20Screens.

For accurate information related to iOS icon sizes, visit https://developer.apple.com/library/ios/documentation/UserExperience/Conceptual/MobileHIG/IconMatrix.html

NOTE: The iPhone 6 icons will only be available if iOS* 7 or 8 is the target.

Cordova iOS* 8 support JIRA tracker: https://issues.apache.org/jira/browse/CB-7043

Does Intel XDK support Modbus TCP communication?

No, since Modbus is a specialized protocol, you need to write either some JavaScript* or native code (in the form of a plugin) to handle the Modbus transactions and protocol.

How do I sign an Android* app using an existing keystore?

Uploading an existing keystore in Intel XDK is not currently supported but you can send an email to html5tools@intel.com with this request. We can assist you there.

How do I build separately for different Android* versions?

Under the Projects Panel, you can select the Target Android* version under the Build Settings collapsible panel. You can change this value and build your application multiple times to create numerous versions of your application that are targeted for multiple versions of Android*.

How do I display the 'Build App Now' button if my display language is not English?

If your display language is not English and the 'Build App Now' button is proving troublesome, you can change your display language to English, which can be downloaded via a Windows* update. Once you have installed the English language, proceed to Control Panel > Clock, Language and Region > Region and Language > Change Display Language.

How do I update my Intel XDK version?

When an Intel XDK update is available, an Update Version dialog box lets you download the update. After the download completes, a similar dialog lets you install it. If you did not download or install an update when prompted (or on older versions), click the package icon next to the orange (?) icon in the upper-right to download or install the update. The installation removes the previous Intel XDK version.

How do I import my existing HTML5 app into the Intel XDK?

If your project contains an Intel XDK project file (<project-name>.xdk) you should use the "Open an Intel XDK Project" option located at the bottom of the Projects List on the Projects tab (lower left of the screen, round green "eject" icon, on the Projects tab). This would be the case if you copied an existing Intel XDK project from another system or used a tool that exported a complete Intel XDK project.

If your project does not contain an Intel XDK project file (<project-name>.xdk) you must "import" your code into a new Intel XDK project. To import your project, use the "Start a New Project" option located at the bottom of the Projects List on the Projects tab (lower left of the screen, round blue "plus" icon, on the Projects tab). This will open the "Samples, Demos and Templates" page, which includes an option to "Import Your HTML5 Code Base." Point to the root directory of your project. The Intel XDK will attempt to locate a file named index.html in your project and will set the "Source Directory" on the Projects tab to point to the directory that contains this file.

If your imported project did not contain an index.html file, your project may be unstable. In that case, it is best to delete the imported project from the Intel XDK Projects tab ("x" icon in the upper right corner of the screen), rename your "root" or "main" html file to index.html and import the project again. Several components in the Intel XDK depend on the assumption that the main HTML file in your project is named index.html. See Introducing Intel® XDK Development Tools for more details.

It is highly recommended that your "source directory" be located as a sub-directory inside your "project directory." This ensures that non-source files are not included as part of your build package when building your application. If the "source directory" and "project directory" are the same, the result is longer upload times to the build server and unnecessarily large application executable files returned by the build system. See the following images for the recommended project file layout.

I am unable to login to App Preview with my Intel XDK password.

On some devices you may have trouble entering your Intel XDK login password directly on the device in the App Preview login screen. In particular, sometimes you may have trouble with the first one or two letters getting lost when entering your password.

Try the following if you are having such difficulties:

  • Reset your password, using the Intel XDK, to something short and simple.

  • Confirm that this new short and simple password works with the XDK (logout and login to the Intel XDK).

  • Confirm that this new password works with the Intel Developer Zone login.

  • Make sure you have the most recent version of Intel App Preview installed on your devices. Go to the store on each device to confirm you have the most recent copy of App Preview installed.

  • Try logging into Intel App Preview on each device with this short and simple password. Check the "show password" box so you can see your password as you type it.

If the above works, it confirms that you can log into your Intel XDK account from App Preview (because App Preview and the Intel XDK go to the same place to authenticate your login). When the above works, you can go back to the Intel XDK and reset your password to something else, if you do not like the short and simple password you used for the test.

How do I completely uninstall the Intel XDK from my system?

Take the following steps to completely uninstall the XDK from your Windows system:

  • From the Windows Control Panel, remove the Intel XDK, using the Windows uninstall tool.

  • Then:
    > cd %LocalAppData%\Intel\XDK
    > del *.* /s/q

  • Then:
    > cd %LocalAppData%\XDK
    > copy global-settings.xdk %UserProfile%
    > del *.* /s/q
    > copy %UserProfile%\global-settings.xdk .

  • Then:
    -- Goto xdk.intel.com and select the download link.
    -- Download and install the new XDK.

To do the same on a Linux or Mac system:

  • On a Linux machine, run the uninstall script, typically /opt/intel/XDK/uninstall.sh.
     
  • Remove the directory into which the Intel XDK was installed.
    -- Typically /opt/intel or your home (~) directory on a Linux machine.
    -- Typically in the /Applications/Intel XDK.app directory on a Mac.
     
  • Then:
    $ find ~ -name global-settings.xdk
    $ cd <result-from-above> (for example ~/Library/Application Support/XDK/ on a Mac)
    $ cp global-settings.xdk ~
    $ rm -Rf *
    $ mv ~/global-settings.xdk .

     
  • Then:
    -- Goto xdk.intel.com and select the download link.
    -- Download and install the new XDK.

Is there a tool that can help me highlight syntax issues in Intel XDK?

Yes, you can use the various linting tools that can be added to the Brackets editor to review any syntax issues in your HTML, CSS and JS files. Go to the "File > Extension Manager..." menu item and add the following extensions: JSHint, CSSLint, HTMLHint, XLint for Intel XDK. Then, review your source files by monitoring the small yellow triangle at the bottom of the edit window (a green check mark indicates no issues).

How do I delete built apps and test apps from the Intel XDK build servers?

You can manage them by logging into: https://appcenter.html5tools-software.intel.com/csd/controlpanel.aspx. This functionality will eventually be available within Intel XDK after which access to app center will be removed.

I need help with the App Security API plugin; where do I find it?

Visit the primary documentation book for the App Security API and see this forum post for some additional details.

When I install my app onto my test device Avast antivirus flags it as a possible virus, why?

If you are receiving a "Suspicious file detected - APK:CloudRep [Susp]" message it is likely due to the fact that you are side-loading the app onto your device (using a download link or by using adb) or you have downloaded your app from an "untrusted" store. See the following official explanation from Avast:

Your application was flagged by our cloud reputation system. "Cloud rep" is a new feature of Avast Mobile Security, which flags apks when the following conditions are met:

  1. The file is not prevalent enough; meaning not enough users of Avast Mobile Security have installed your APK.
  2. The source is not an established market (Google Play is an example of an established market).

If you distribute your app using Google Play (or any other trusted market) your users should not see any warning from Avast.

How do I add a Brackets extension to the editor that is part of the Intel XDK?

The number of Brackets extensions provided in the built-in edition of the Brackets editor is limited, to ensure stability of the Intel XDK product. Not all extensions are compatible with the edition of Brackets that is embedded within the Intel XDK. Adding incompatible extensions can cause the Intel XDK to quit working.

Despite this warning, there are useful extensions that have not been included in the editor and which can be added to the Intel XDK. Adding them is temporary; each time you update the Intel XDK (or if you reinstall the Intel XDK) you will have to re-add your Brackets extensions. To add a Brackets extension, use the following procedure:

  • exit the Intel XDK
  • download a ZIP file of the extension you wish to add
  • on Windows, unzip the extension here:
    %LocalAppData%\Intel\XDK\xdk\brackets\b\extensions\dev
  • on Mac OS X, unzip the extension here:
    /Applications/Intel\ XDK.app/Contents/Resources/app.nw/brackets/b/extensions/dev
  • start the Intel XDK

Note that the locations given above are subject to change with new releases of the Intel XDK.

Why does my app or game require so many permissions on Android when built with the Intel XDK?

When you build your HTML5 app using the Intel XDK for Android or Android-Crosswalk you are creating a Cordova app. It may seem like you're not building a Cordova app, but you are. In order to package your app so it can be distributed via an Android store and installed on an Android device, it needs to be built as a hybrid app. The Intel XDK uses Cordova to create that hybrid app.

A pure Cordova app requires the NETWORK permission; it's needed to "jump" between your HTML5 environment and the native Android environment. Additional permissions will be added by any Cordova plugins you include with your application; which permissions are included is a function of what each plugin does and requires.

Crosswalk for Android builds also require the NETWORK permission, because the Crosswalk image built by the Intel XDK includes support for Cordova. In addition, current versions of Crosswalk (12 and 14 at the time this FAQ was written) also require NETWORK STATE and WIFI STATE. There is an extra permission in some versions of Crosswalk (WRITE EXTERNAL STORAGE) that is only needed by the shared model library of Crosswalk; we have asked the Crosswalk project to remove this permission in a future Crosswalk version.

If you are seeing more than the following four permissions in your XDK-built Crosswalk app:

  • android.permission.INTERNET
  • android.permission.ACCESS_NETWORK_STATE
  • android.permission.ACCESS_WIFI_STATE
  • android.permission.WRITE_EXTERNAL_STORAGE

then you are seeing permissions that have been added by some plugins. Each plugin is different, so there is no hard rule of thumb. The two "default" core Cordova plugins that are added by the Intel XDK blank templates (device and splash screen) do not require any Android permissions.

BTW: the permission list above comes from a Crosswalk 14 build. Crosswalk 12 builds do not include the last permission; it was added when the Crosswalk project introduced the shared model library option, which started with Crosswalk 13 (the Intel XDK does not support 13 builds).

How do I make a copy of an existing Intel XDK project?

If you just need to make a backup copy of an existing project, and do not plan to open that backup copy as a project in the Intel XDK, do the following:

  • Exit the Intel XDK.
  • Copy the entire project directory:
    • on Windows, use File Explorer to "right-click" and "copy" your project directory, then "right-click" and "paste"
    • on Mac use Finder to "right-click" and then "duplicate" your project directory
    • on Linux, open a terminal window, "cd" to the folder that contains your project, and type "cp -a old-project/ new-project/" at the terminal prompt (where "old-project/" is the folder name of your existing project that you want to copy and "new-project/" is the name of the new folder that will contain a copy of your existing project)

If you want to use an existing project as the starting point of a new project in the Intel XDK, follow the process described below. It ensures that the build system does not confuse the ID in your old project with the ID stored in your new project. If you do not follow this procedure you will have multiple projects using the same project ID (a special GUID that is stored inside the Intel XDK <project-name>.xdk file in the root directory of your project). Each project in your account must have a unique project ID.

  • Exit the Intel XDK.
  • Make a copy of your existing project using the process described above.
  • Inside the new project that you made (that is, your new copy of your old project), make copies of the <project-name>.xdk file and <project-name>.xdke files and rename those copies to something like project-new.xdk and project-new.xdke (anything you like, just something different than the original project name, preferably the same name as the new project folder in which you are making this new project).
  • Using a TEXT EDITOR only (such as Notepad, Sublime, Brackets or some other plain text editor), open your new "project-new.xdk" file (whatever you named it) and find the projectGuid line; it will look something like this:
    "projectGuid": "a863c382-ca05-4aa4-8601-375f9f209b67",
  • Change the "GUID" to all zeroes, like this: "00000000-0000-0000-000000000000"
  • Save the modified "project-new.xdk" file.
  • Open the Intel XDK.
  • Go to the Projects tab.
  • Select "Open an Intel XDK Project" (the green button at the bottom left of the Projects tab).
  • To open this new project, locate the new "project-new.xdk" file inside the new project folder you copied above.
  • Don't forget to change the App ID in your new project. This is necessary to avoid conflicts with the project you copied from, in the store and when side-loading onto a device.
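
For reference, on Linux or OS X the steps above condense to something like the following. This is a sketch only: "old-project" and "project-new" are example names, and the sed command assumes GNU or BSD sed (on Windows, follow the manual steps above instead):

# copy the project, rename the project files, then zero the projectGuid
$ cp -a old-project/ project-new/
$ cd project-new/
$ mv old-project.xdk project-new.xdk
$ mv old-project.xdke project-new.xdke
$ sed -i.bak 's/"projectGuid": "[^"]*"/"projectGuid": "00000000-0000-0000-0000-000000000000"/' project-new.xdk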

My project does not include a www folder. How do I fix it so it includes a www or source directory?

The Intel XDK HTML5 and Cordova project file structures are meant to mimic a standard Cordova project. In a Cordova (or PhoneGap) project there is a subdirectory (or folder) named www that contains all of the HTML5 source code and asset files that make up your application. For best results, follow this convention and put your source files inside a "source directory" (such as www) inside your project folder.

This most commonly happens as the result of exporting a project from an external tool, such as Construct2, or as the result of importing an existing HTML5 web app that you are converting into a hybrid mobile application (e.g., an Intel XDK Cordova app). If you would like to convert an existing Intel XDK project into this format, follow the steps below (a condensed command-line version follows the list):

  • Exit the Intel XDK.
  • Copy the entire project directory:
    • on Windows, use File Explorer to "right-click" and "copy" your project directory, then "right-click" and "paste"
    • on Mac use Finder to "right-click" and then "duplicate" your project directory
    • on Linux, open a terminal window, "cd" to the folder that contains your project, and type "cp -a old-project/ new-project/" at the terminal prompt (where "old-project/" is the folder name of your existing project that you want to copy and "new-project/" is the name of the new folder that will contain a copy of your existing project)
  • Create a "www" directory inside the new duplicate project you just created above.
  • Move your index.html and other source and asset files to the "www" directory you just created -- this is now your "source" directory, located inside your "project" directory (do not move the <project-name>.xdk and xdke files or any intelxdk.config.*.xml files; those must stay in the root of the project directory)
  • Inside the new project that you made above (by making a copy of the old project), rename the <project-name>.xdk file and <project-name>.xdke files to something like project-copy.xdk and project-copy.xdke (anything you like, just something different than the original project, preferably the same name as the new project folder in which you are making this new project).
  • Using a TEXT EDITOR only (such as Notepad, Sublime, Brackets or some other plain text editor), open the new "project-copy.xdk" file (whatever you named it) and find the projectGuid line; it will look something like this:
    "projectGuid": "a863c382-ca05-4aa4-8601-375f9f209b67",
  • Change the "GUID" to all zeroes, like this: "00000000-0000-0000-000000000000"
  • A few lines down find: "sourceDirectory": "",
  • Change it to this: "sourceDirectory": "www",
  • Save the modified "project-copy.xdk" file.
  • Open the Intel XDK.
  • Go to the Projects tab.
  • Select "Open an Intel XDK Project" (the green button at the bottom left of the Projects tab).
  • To open this new project, locate the new "project-copy.xdk" file inside the new project folder you copied above.
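
As with the previous FAQ entry, the file-moving steps above condense to something like this on Linux or OS X (a sketch only; the source file and folder names shown are examples from a typical project):

$ cd new-project/
$ mkdir www
# move your HTML5 source and asset files into www; leave the *.xdk, *.xdke
# and intelxdk.config.*.xml files in the project root
$ mv index.html css/ js/ images/ www/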

Can I install more than one copy of the Intel XDK onto my development system?

Yes, you can install more than one version onto your development system. However, you cannot run multiple instances of the Intel XDK at the same time. Be aware that new releases sometimes change the project file format, so it is a good idea, in these cases, to make a copy of your project if you need to experiment with a different version of the Intel XDK. See the instructions in a FAQ entry above regarding how to make a copy of your Intel XDK project.

Follow the instructions in this forum post to install more than one copy of the Intel XDK onto your development system.

On Apple OS X* and Linux* systems, does the Intel XDK need the OpenSSL* library installed?

Yes. Several features of the Intel XDK require the OpenSSL library, which typically comes pre-installed on Linux and OS X systems. If the Intel XDK reports that it could not find libssl, go to https://www.openssl.org to download and install it.

I have a web application that I would like to distribute in app stores without major modifications. Is this possible using the Intel XDK?

Yes, if you have a true web app or “client app” that only uses HTML, CSS and JavaScript, it is usually not too difficult to convert it to a Cordova hybrid application (this is what the Intel XDK builds when you create an HTML5 app). If you rely heavily on PHP or other server scripting languages embedded in your pages you will have more work to do. Because your Cordova app is not associated with a server, you cannot rely on server-based programming techniques; instead, you must rewrite any such code to use RESTful APIs that your app interacts with using, for example, AJAX calls.
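
For example, where a page might previously have embedded PHP to pull data from a server-side database, a Cordova app would instead call a RESTful endpoint from JavaScript. A minimal sketch, assuming a hypothetical endpoint at https://api.example.com/items that returns a JSON array:

// Hypothetical REST call replacing embedded server-side code.
var xhr = new XMLHttpRequest();
xhr.open("GET", "https://api.example.com/items");  // substitute your own service URL
xhr.onload = function () {
    if (xhr.status === 200) {
        // the server returns data (JSON), not a rendered HTML page
        var items = JSON.parse(xhr.responseText);
        items.forEach(function (item) {
            console.log(item.name);  // render into your page here instead
        });
    }
};
xhr.send();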

What is the best training approach to using the Intel XDK for a newbie?

First, become well-versed in the art of client web apps, apps that rely only on HTML, CSS and JavaScript and utilize RESTful APIs to talk to network services. With that you will have mastered 80% of the problem. After that, it is simply a matter of understanding how Cordova plugins are able to extend the JavaScript API for access to features of the platform. For HTML5 training there are many sites providing tutorials. It may also help to read Five Useful Tips on Getting Started Building Cordova Mobile Apps with the Intel XDK, which will help you understand some of the differences between developing for a traditional server-based environment and developing for the Intel XDK hybrid Cordova app environment.

What is the best platform to start building an app with the Intel XDK? And what are the important differences between the Android, iOS and other mobile platforms?

There is no single most important difference between the Android, iOS and other platforms. It is important to understand that the HTML5 runtime engine that executes your app will vary as a function of the platform. Just as there are differences between Chrome, Firefox, Safari and Internet Explorer, there are differences between iOS 9 and iOS 8, and Android 4 and Android 5, etc. Android shows the most significant differences between vendors and versions. This is one of the reasons the Intel XDK offers the Crosswalk for Android build option: it normalizes the HTML5 runtime across Android devices and versions.

If you can get your app working well on Android (or Crosswalk for Android) first, you will generally have fewer issues to deal with when you move on to the iOS and Windows platforms. In addition, the Android platform has the most flexible and useful debug options available, so it is the easiest platform to use for debugging and testing your app.

Is my password encrypted and why is it limited to fifteen characters?

Yes, your password is stored encrypted and is managed by https://signin.intel.com. Your Intel XDK userid and password can also be used to log into the Intel XDK forum as well as the Intel Developer Zone. The Intel XDK itself does not store or manage your userid and password.

The rules regarding allowed userids and passwords are answered on this Sign In FAQ page, where you can also find help on recovering and changing your password.

Why does the Intel XDK take a long time to start on Linux or Mac?

...and why am I getting this error message? "Attempt to contact authentication server is taking a long time. You can wait, or check your network connection and try again."

At startup, the Intel XDK attempts to automatically determine the proxy settings for your machine. Unfortunately, on some system configurations it is unable to reliably detect those settings, in which case you may see the error message quoted above.

On some systems you can work around this problem by setting proxy environment variables and then starting the Intel XDK from a command line where those variables are set. For example:

$ export no_proxy="localhost,127.0.0.1/8,::1"
$ export NO_PROXY="localhost,127.0.0.1/8,::1"
$ export http_proxy=http://proxy.mydomain.com:123/
$ export HTTP_PROXY=http://proxy.mydomain.com:123/
$ export https_proxy=http://proxy.mydomain.com:123/
$ export HTTPS_PROXY=http://proxy.mydomain.com:123/

IMPORTANT! The name of your proxy server and the port (or ports) that your proxy server requires will be different than those shown in the example above. Please consult with your IT department to find out what values are appropriate for your site. Intel has no way of knowing what configuration is appropriate for your network.

If you use the Intel XDK in multiple locations (at work and at home), you may have to change the proxy settings before starting the Intel XDK after switching to a new network location. For example, many work networks use a proxy server, but most home networks do not require such a configuration. In that case, you need to be sure to "unset" the proxy environment variables before starting the Intel XDK on a non-proxy network.
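
For example, to clear the variables in the current terminal session before starting the Intel XDK on a home (non-proxy) network, using the variable names from the example above:

$ unset no_proxy NO_PROXY http_proxy HTTP_PROXY https_proxy HTTPS_PROXY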

After you have successfully configured your proxy environment variables, you can start the Intel XDK manually, from the command-line.

On a Mac, where the Intel XDK is installed in the default location, type the following (from a terminal window that has the above environment variables set):

$ open /Applications/Intel\ XDK.app/

On a Linux machine, assuming the Intel XDK has been installed in the ~/intel/XDK directory, type the following (from a terminal window that has the above environment variables set):

$ ~/intel/XDK/xdk.sh &

In the Linux case, adjust the path to the xdk.sh file to match your installation; the example above assumes a local install into the ~/intel/XDK directory. Since Linux installations have more options regarding the installation directory, you will need to adapt the command to your particular system and install directory.

How do I generate a P12 file on a Windows machine?

See these articles:

How do I change the default dir for creating new projects in the Intel XDK?

You can change the default new project location manually by modifying a field in the global-settings.xdk file. Locate the global-settings.xdk file on your system (the precise location varies as a function of the OS) and find this JSON object inside that file:

"projects-tab": {"defaultPath": "/Users/paul/Documents/XDK","LastSortType": "descending|Name","lastSortType": "descending|Opened","thirdPartyDisclaimerAcked": true
  },

The example above came from a Mac. On a Mac the global-settings.xdk file is located in the "~/Library/Application Support/XDK" directory.

On a Windows machine the global-settings.xdk file is normally found in the "%LocalAppData%\XDK" directory. The part you are looking for will look something like this:

"projects-tab": {"thirdPartyDisclaimerAcked": false,"LastSortType": "descending|Name","lastSortType": "descending|Opened","defaultPath": "C:\\Users\\paul/Documents"
  },

Obviously, it's the defaultPath part you want to change.
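
For example, to have new projects created under D:\Projects\XDK on a Windows machine, the edited object might look like the following (the path is just an example; note the doubled backslashes, which JSON string syntax requires):

"projects-tab": {
    "thirdPartyDisclaimerAcked": false,
    "LastSortType": "descending|Name",
    "lastSortType": "descending|Opened",
    "defaultPath": "D:\\Projects\\XDK"
},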

BE CAREFUL WHEN YOU EDIT THE GLOBAL-SETTINGS.XDK FILE!! You've been warned...

Make sure the result is valid JSON when you are done, or the Intel XDK may fail to read your settings and misbehave. Make a backup copy of global-settings.xdk before you start, just in case.

Where can I find a list of recent and upcoming webinars?

