
Introduction to the Storage Performance Development Kit (SPDK)


Introduction

Solid-state storage media is in the process of taking over the data center. Current-generation flash storage enjoys significant advantages in performance, power consumption, and rack density over rotational media. These advantages will continue to grow as next-generation media enter the marketplace.

Customers integrating current solid-state media, such as the Intel® P3700 Non-Volatile Memory Express* (NVMe*) drive, face a major challenge: because the throughput and latency are so much better than those of a spinning disk, the storage software now consumes a larger percentage of the total transaction time. In other words, the performance and efficiency of the storage software stack are increasingly critical to the overall storage system. With the media landscape continuing to evolve at an incredible pace in the coming years, storage media risks outstripping the software architectures that use it.

To help storage OEMs and ISVs integrate this hardware, Intel has created a set of drivers and a complete, end-to-end reference storage architecture, calling the effort the Storage Performance Development Kit (SPDK). The goal of SPDK is to highlight the outstanding efficiency and performance enabled by using Intel’s networking, processing, and storage technologies together. By running software designed from the silicon up, SPDK has demonstrated that millions of I/Os per second are easily attainable by using a few processor cores and a few NVMe drives for storage with no additional offload hardware. Intel provides the entire Linux* reference architecture source code free of charge under an Intel license, and has open sourced the user mode NVMe driver to the community through 01.org, with other elements of the development kit slated to be open sourced throughout 2016.

Software Architectural Overview

How does SPDK work? The extremely high performance is achieved using two key techniques: running at user level and operating in polled mode. Let’s take a closer look at these two software engineering terms.

First, running our device driver code at user level means that, by definition, driver code does not run in the kernel. Avoiding kernel context switches and interrupts saves a significant amount of processing overhead, allowing more cycles to be spent on the actual storing of the data. Regardless of the complexity of the storage algorithms (deduplication, encryption, compression, or plain block storage), fewer wasted cycles mean better performance.

Second, Polled Mode Drivers (PMDs) are continually awaiting work instead of being dispatched to work. Think of the challenge of hailing a cab downtown on a busy Saturday night, hands waving as cab after cab passes with someone already in the back seat. Think of the unpredictability of the wait, the impossibility of saying how many minutes might be spent waiting on the curb for a ride. This is what it can be like to get a “ride” for a packet or block of data in a traditional interrupt-dispatched storage I/O driver. On the other hand, imagine the process of getting a cab at the airport. There is a cab driver watching, sitting at the front of the line, pulling up reliably in a few seconds to transport passengers and cargo to their intended destinations. This is how PMDs work and how all the components of SPDK are designed. Packets and blocks are dispatched immediately and time spent waiting is minimized, resulting in lower latency, more consistent latency (less jitter), and improved throughput.
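Setting the analogy aside, the core idea can be sketched generically. The snippet below is illustrative JavaScript only, not SPDK code (SPDK itself is written in C), and the names are invented for this sketch:

```javascript
// Illustrative sketch of polled-mode operation (not SPDK code):
// instead of sleeping until an interrupt announces completed work,
// the driver thread repeatedly checks a completion queue itself.
function pollOnce(completionQueue, handle) {
  var completed = 0
  // A real PMD spins in a loop like this on a dedicated core;
  // here we simply drain whatever completions are already queued.
  while (completionQueue.length > 0) {
    handle(completionQueue.shift())
    completed++
  }
  return completed
}
```

Because the thread is already watching the queue, completions are picked up within one loop iteration rather than after an interrupt and a context switch, which is where the latency and jitter savings come from.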

SPDK is composed of numerous subcomponents, interlinked and sharing the common elements of user level and polled-mode operation. Each of these components was created to overcome a specific performance bottleneck encountered while creating the end-to-end SPDK architecture. However, each of these components can also be integrated into non-SPDK architectures, allowing customers to leverage the experience and techniques used within SPDK to accelerate their own software. For example, the Userspace Network Services (UNS) library was created to overcome the performance limits of the Linux kernel TCP/IP stack. By creating a user-mode, polled implementation of the TCP/IP stack, SPDK was able to realize substantially higher IOPS performance by spending fewer processor cycles handling TCP/IP packet sorting and processing.

Figure: SPDK software architecture

There are three basic categories of subcomponents: the network front end, the processing framework, and the back end.

The front end is composed of the Data Plane Development Kit (DPDK) NIC driver and the Userspace Network Services (UNS) components. DPDK provides a framework for high-performance packet processing at the NIC, providing a fast path for data to travel from the NIC to user space. The UNS code then takes over, cracking the TCP/IP packets and forming the iSCSI commands.

At this point the processing framework takes the packet contents and translates the iSCSI commands into SCSI block-level commands. However, before it sends these commands to the back-end drivers, SPDK presents an API framework to add customer-specific features—“special sauce”—within the SPDK framework (see the green box in the figure above). Examples might include caching, deduplication and compression of data, encryption, or RAID or Erasure Coding calculations. Examples of these kinds of features are included with SPDK, though these are solely intended to help us model real-world use cases and should not be confused with production-ready implementations.

Finally the data reaches the back-end drivers, where the interactions with the physical block devices take place; that is, the reads and writes. SPDK includes user-level PMDs for several storage media: NVMe devices; Linux AIO devices such as traditional spinning disks; and memory drivers for block-addressed memory applications (for example, RAMDISKS) and devices that can use the Intel® I/O Acceleration Technology (code-named Crystal Beach DMA). This suite of back-end drivers spans the spectrum of storage device performance tiers, ensuring relevance for nearly every storage application.

SPDK does not fit every storage architecture. Here are a few questions that might help users determine if SPDK components are a good fit for their architecture:

Is the storage system based on Linux?
        SPDK is currently tested and supported only on Linux.

Does the performance path of the storage system currently run in user mode?
        SPDK is able to improve performance and efficiency by running the performance path from NIC to disk exclusively in user mode.

Can the system architecture incorporate lockless PMDs into its threading model?
        Since PMDs continually run on their threads (instead of sleeping or ceding the processor when idle), they have specific thread model requirements.

Does the system currently use the Data Plane Development Kit (DPDK) to handle network packet workloads?
        DPDK contains the framework for SPDK, so customers currently using DPDK will likely find the close integration with SPDK very useful.

Can your licensing model accommodate non-redistributable source code?
        Some portions of SPDK are available as open source, BSD-licensed components (such as the NVMe and CBDMA userspace drivers). Other portions are licensed under an Intel license (UNS and the Userspace iSCSI Target) for the time being, though this is certainly subject to change. All source code for SPDK is provided free of charge.

Does the development team have the expertise to understand and troubleshoot problems themselves?
        Intel shall have no support obligations for this reference software. While Intel and the open source community around SPDK will use commercially reasonable efforts to investigate potential errata of unmodified released software, under no circumstances will Intel have any obligation to customers with respect to providing any maintenance or support of the software.

Intel technologies’ features and benefits depend on system configuration and may require enabled hardware, software or service activation. Performance varies depending on system configuration. Check with your system manufacturer or retailer or learn more at intel.com.

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark* and MobileMark*, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products.

Performance testing configuration:

  • 2S Xeon® E5-2699v3: 18C, 2.3GHz (HyperThreading off)
    • Note: Single socket was used while performance testing
  • 32 GB DDR4 2133 MT/s
    • 4 Memory Channel per CPU
      • 1x 4GB 2R DIMM per channel
  • Ubuntu (Linux) Server 14.10
  • 3.16.0-30-generic kernel
  • Ethernet Controller XL710 for 40GbE
  • 8x P3700 NVMe devices for storage
  • NVMe Configuration
    • Total 8 PCIe Gen 3 x 4 NVMes
      • 4 NVMes coming from 1st x16 slot
        (bifurcated to 4 x4s in the BIOS)
      • Another 4 NVMes coming from 2nd x16 slot (bifurcated to 4 x4s in the BIOS)
    • Intel SSD DC P3700 800 GB
    • Firmware: 8DV10102
  • FIO BenchMark Configuration
    • Direct: Yes
    • Queue depth
      • 4KB Random I/O: 32 outstanding I/Os
      • 64KB Seq. I/O: 8 outstanding I/Os
    • Ramp Time: 30 seconds
    • Run Time: 180 seconds
    • Norandommap: 1
    • I/O Engine: Libaio
    • Numjobs: 1
  • BIOS Configuration
    • Speed Step: Disabled
    • Turbo Boost: Disabled
    • CPU Power and Performance Policy: Performance
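For reference, the FIO benchmark settings above map onto a job file along these lines. This is a hedged sketch for the 4KB random I/O case only; the device path and random-read direction are illustrative:

```
; Sketch of an FIO job matching the 4KB random I/O parameters above
[global]
direct=1
ioengine=libaio
iodepth=32
bs=4k
rw=randread
norandommap=1
ramp_time=30
runtime=180
time_based=1
numjobs=1

[nvme-job]
filename=/dev/nvme0n1
```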

For more information go to http://www.intel.com/performance.

No license (express or implied, by estoppel or otherwise) to any intellectual property rights is granted by this document.

Intel disclaims all express and implied warranties, including without limitation, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement, as well as any warranty arising from course of performance, course of dealing, or usage in trade.

This document contains information on products, services and/or processes in development. All information provided here is subject to change without notice. Contact your Intel representative to obtain the latest forecast, schedule, specifications and roadmaps.

The products and services described may contain defects or errors known as errata which may cause deviations from published specifications. Current characterized errata are available on request.

Copies of documents which have an order number and are referenced in this document may be obtained by calling 1-800-548-4725 or by visiting www.intel.com/design/literature.htm.

Intel, the Intel logo, and Xeon are trademarks of Intel Corporation in the U.S. and/or other countries.

*Other names and brands may be claimed as the property of others.

© 2015 Intel Corporation.


Building Gesture Recognition Web Apps with Intel® RealSense™ SDK


Download samplecode.zip

Introduction

In this article, we will show you how to build a web application that can detect various types of gestures using the Intel® RealSense™ SDK and front facing (F200) camera. Gesture recognition will give users of your application another innovative means for navigation and interface interaction. You will need basic knowledge of HTML, JavaScript*, and jQuery in order to complete this tutorial.

Hardware Requirements

  • 4th generation (or later) Intel® Core™ processor
  • 150 MB free hard disk space
  • 4 GB RAM
  • Intel® RealSense™ camera (F200)
  • Available USB3 port for the Intel RealSense camera (or dedicated connection for integrated camera)

Software Requirements

  • Microsoft Windows* 8.1 (or later)
  • A web browser such as Microsoft Internet Explorer*, Mozilla Firefox*, or Google Chrome*
  • The Intel RealSense Depth Camera Manager (DCM) for the F200, which includes the camera driver and service, and the Intel RealSense SDK. Go here to download components.
  • The Intel RealSense SDK Web Runtime. Currently, the best way to get this is to run one of the SDK’s JavaScript samples, which can be found in the SDK install directory. The default location is C:\Program Files (x86)\Intel\RSSDK\framework\JavaScript. The sample will detect that the web runtime is not installed, and prompt you to install it.

Setup

Please make sure that you complete the following steps before proceeding further.

  1. Plug your F200 camera into a USB3 port on your computer system.
  2. Install the DCM.
  3. Install the SDK.
  4. Install the Web Runtime.
  5. After installing the components, navigate to the location where you installed the SDK (we’ll use the default path):

C:\Program Files (x86)\Intel\RSSDK\framework\common\JavaScript

You should see a file called realsense.js. Please copy that file into a separate folder. We will be using it in this tutorial. For more information on deploying JavaScript applications using the SDK, click here.

Code Overview

For this tutorial, we will be using the sample code outlined below. This simple web application displays the names of gestures as they are detected by the camera. Please copy the entire code below into a new HTML file and save this file into the same folder as the realsense.js file. Alternatively, you can download the complete web application by clicking on the code sample link at the top of the article. We will go over the code in detail in the next section.

The Intel RealSense SDK relies heavily on the Promise object. If you are not familiar with JavaScript promises, please refer to this documentation for a quick overview and an API reference.
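As a quick refresher, here is a minimal sketch (not SDK code) of the chained-promise style this sample uses: each .then returns a new promise, and a single .catch at the end handles an error thrown at any step.

```javascript
// Minimal promise-chain sketch mirroring the SDK's calling style.
function step(value) {
  // Stand-in for an asynchronous SDK call that resolves with a result.
  return Promise.resolve(value + 1)
}

step(0).then(function (v) {
  return step(v)               // chain a second asynchronous step
}).then(function (v) {
  console.log('result: ' + v)  // prints "result: 2"
}).catch(function (error) {
  console.log('Error detected: ' + error)
})
```

The SDK calls in the sample (createInstance, activate, init, and so on) each return a promise and are chained in exactly this shape.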

Refer to the Intel RealSense SDK documentation for more detail about the SDK functions referenced in this code sample. The documentation is available online, as well as in the doc directory of your local SDK install.

<html><head><title>RealSense Sample Gesture Detection App</title><script type="text/javascript" src="https://ajax.googleapis.com/ajax/libs/jquery/1.7.2/jquery.min.js"></script><script type="text/javascript" src="https://autobahn.s3.amazonaws.com/autobahnjs/latest/autobahn.min.jgz"></script><script type="text/javascript" src="https://www.promisejs.org/polyfills/promise-6.1.0.js"></script><script type="text/javascript" src="realsense.js"></script><script>
        var sense, hand_module, hand_config
        var rs = intel.realsense

        function DetectPlatform() {
            rs.SenseManager.detectPlatform(['hand'], ['front']).then(function (info) {
                if (info.nextStep == 'ready') {
                    Start()
                }
                else if (info.nextStep == 'unsupported') {
                    $('#info-area').append('<b> Platform is not supported for Intel(R) RealSense(TM) SDK: </b>')
                    $('#info-area').append('<b> either you are missing the required camera, or your OS and browser are not supported </b>')
                }
                else if (info.nextStep == 'driver') {
                    $('#info-area').append('<b> Please update your camera driver from your computer manufacturer </b>')
                }
                else if (info.nextStep == 'runtime') {
                    $('#info-area').append('<b> Please download the latest web runtime to run this app, located <a href="https://software.intel.com/en-us/realsense/webapp_setup_v6.exe">here</a> </b>')
                }
            }).catch(function (error) {
                $('#info-area').append('Error detected: ' + JSON.stringify(error))
            })
        }

        function Start() {
            rs.SenseManager.createInstance().then(function (instance) {
                sense = instance
                return rs.hand.HandModule.activate(sense)
            }).then(function (instance) {
                hand_module = instance
                hand_module.onFrameProcessed = onHandData
                return sense.init()
            }).then(function (result) {
                return hand_module.createActiveConfiguration()
            }).then(function (result) {
                hand_config = result
                hand_config.allAlerts = true
                hand_config.allGestures = true
                return hand_config.applyChanges()
            }).then(function (result) {
                return hand_config.release()
            }).then(function (result) {
                sense.captureManager.queryImageSize(rs.StreamType.STREAM_TYPE_DEPTH)
                return sense.streamFrames()
            }).catch(function (error) {
                console.log(error)
            })
        }

        function onHandData(sender, data) {
            for (var g = 0; g < data.firedGestureData.length; g++) {
                $('#gesture-area').append(data.firedGestureData[g].name + '<br />')
            }
        }

    $(document).ready(DetectPlatform)
    </script></head><body><div id="info-area"></div><div id="gesture-area"></div></body></html>

The screenshot below is what the app looks like when you run it and present different types of gestures to the camera.

Detecting the Intel® RealSense™ Camera on the System

Before we can use the camera for gesture detection, we need to see if our system is ready for capture. We use the detectPlatform function for this purpose. The function takes two parameters: the first is an array of runtimes that the application will use and the second is an array of cameras that the application will work with. We pass in ['hand'] as the first argument since we will be working with just the hand module and ['front'] as the second argument since we will only be using the F200 camera.

The function returns an info object with a nextStep property. Depending on the value that we get, we can determine if the camera is ready for usage. If it is, we call the Start function to begin gesture detection. Otherwise, we output an appropriate message based on the string we receive back from the platform.

If there were any errors during this process, we output them to the screen.

rs.SenseManager.detectPlatform(['hand'], ['front']).then(function (info) {
    if (info.nextStep == 'ready') {
        Start()
    }
    else if (info.nextStep == 'unsupported') {
        $('#info-area').append('<b> Platform is not supported for Intel(R) RealSense(TM) SDK: </b>')
        $('#info-area').append('<b> either you are missing the required camera, or your OS and browser are not supported </b>')
    }
    else if (info.nextStep == 'driver') {
        $('#info-area').append('<b> Please update your camera driver from your computer manufacturer </b>')
    }
    else if (info.nextStep == 'runtime') {
        $('#info-area').append('<b> Please download the latest web runtime to run this app, located <a href="https://software.intel.com/en-us/realsense/webapp_setup_v6.exe">here</a> </b>')
    }
}).catch(function (error) {
    $('#info-area').append('Error detected: ' + JSON.stringify(error))
})

Setting Up the Camera for Gesture Detection

rs.SenseManager.createInstance().then(function (instance) {
    sense = instance
    return rs.hand.HandModule.activate(sense)
})

You need to follow a sequence of steps to set up the camera for gesture detection. First, create a new SenseManager instance and enable the camera to detect hand movement. The SenseManager is used to manage the camera pipeline.

To do this, we will call the createInstance function. The callback returns the instance that we just created, which we store in the sense variable for future use. We then call the activate function to enable the hand module, which we will need for gesture detection.

.then(function (instance) {
    hand_module = instance
    hand_module.onFrameProcessed = onHandData
    return sense.init()
})

Next, we need to save the instance of the hand tracking module that was returned by the activate function into the hand_module variable. We then assign our own custom callback function, onHandData, to the onFrameProcessed property so that it is invoked whenever new frame data is available. Finally, we initialize the camera pipeline for processing by calling the init function.

.then(function (result) {
    return hand_module.createActiveConfiguration()
})

To configure the hand tracking module for gesture detection, you have to create an active configuration instance. This is done by calling the createActiveConfiguration function.

.then(function (result) {
    hand_config = result
    hand_config.allAlerts = true
    hand_config.allGestures = true
    return hand_config.applyChanges()
})

The createActiveConfiguration function returns the instance of the configuration, which is stored in the hand_config variable. We then set the allAlerts property to true to enable all alert notifications. The alert notifications give us additional details such as the frame number, timestamp, and the hand identifier that triggered the alert. We also set the allGestures property to true, which is needed for gesture detection. Finally, we call the applyChanges function to apply all parameter changes to the hand tracking module. This makes the current configuration active.

.then(function (result) {
    return hand_config.release()
})

We then call the release function to release the configuration.

.then(function (result) {
    sense.captureManager.queryImageSize(rs.StreamType.STREAM_TYPE_DEPTH)
    return sense.streamFrames()
}).catch(function (error) {
    console.log(error)
})

Finally, the next sequence of functions sets up the camera to start streaming frames. When new frame data is available, the onHandData function will be invoked. If any errors were detected, we catch them and log all errors to the console.

The onHandData function

function onHandData(sender, data) {
    for (var g = 0; g < data.firedGestureData.length; g++) {
        $('#gesture-area').append(data.firedGestureData[g].name + '<br />')
    }
}

The onHandData callback is the main function where we check whether a gesture has been detected. Remember that this function is called whenever there is new hand data, and that the data may or may not include gesture data. The function takes two parameters, but we use only the data parameter. If gesture data is available, we iterate through the firedGestureData array and get the gesture name from the name property. Finally, we output the gesture name into the gesture-area div, which displays it on the web page.

Note that the camera remains on and continues to capture gesture data until you close the web page.
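As a possible extension, you could branch on the gesture name inside onHandData. The sketch below is hypothetical: the exact gesture strings (e.g., thumbsup) should be confirmed against the names your SDK version actually prints.

```javascript
// Hypothetical helper: map specific gesture names to custom messages.
// The gesture strings here are examples only; verify them against the
// names this app prints for your SDK version.
function describeGesture(name) {
  if (name === 'thumbsup') {
    return 'Thumbs up detected!'
  } else if (name === 'thumbsdown') {
    return 'Thumbs down detected!'
  }
  return name   // fall back to echoing the raw gesture name
}

// Inside onHandData, you would then append the description instead:
// $('#gesture-area').append(describeGesture(data.firedGestureData[g].name) + '<br />')
```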

Conclusion

In this tutorial, we used the Intel RealSense SDK to create a simple web application that uses the F200 camera for gesture detection. We learned how to detect whether a camera is available on the system and how to set up the camera for gesture recognition. You could modify this example by checking for a specific gesture type (e.g., thumbsup or thumbsdown) using if statements and then writing code to handle that specific use case.

About the Author

Jimmy Wei is a software engineer and has been with Intel Corporation for over 9 years.

Notices

No license (express or implied, by estoppel or otherwise) to any intellectual property rights is granted by this document.

Intel disclaims all express and implied warranties, including without limitation, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement, as well as any warranty arising from course of performance, course of dealing, or usage in trade.

Intel® Integrated Performance Primitives (Intel® IPP) 8.2 Update 3


Intel® Integrated Performance Primitives (Intel® IPP) is an extensive library of software functions to help you develop multimedia, data processing, and communications applications. Intel® IPP 8.2 Update 3 is now ready for download. Intel® IPP is available as part of Intel® Parallel Studio XE, Intel® System Studio, Intel® Integrated Native Developer Experience (Intel® INDE), and Intel® Integrated Native Developer Experience 2015 Build Edition for OS X*.

Intel® IPP 8.2 Update 3 is a bug fix release. Check out the Release Notes for details.

Contents

Windows*:

  • File: w_ipp_8.2.3.280_online.exe
    Online Installer for Windows*
  • File: w_ipp_8.2.3.280.exe
    A file containing the complete product installation for Windows (32-bit/x86-64-bit development)

Linux*:

  • File: l_ipp_online_8.2.3.223.sh
    Online Installer for Linux*
  • File: l_ipp_8.2.3.223.tgz
    A file containing the complete product installation for Linux (32-bit/x86-64-bit development)

OS X*:

  • File: m_ipp_online_8.2.3.222.dmg
    Online Installer for OS X*
  • File: m_ipp_8.2.3.222.dmg
    A file containing the complete product installation for OS X*

Hands-On Intel® IoT Developer Kit Using Intel® XDK


Introduction

 

This article covers and expands upon the material from the Hands-on Lab Intel Internet of Things (IoT) Developer Kit SFTL005 presented at the Intel Developer Forum 2015 (IDF15) August 18–20, 2015 in San Francisco, California. This document helps developers learn how to connect the Intel® Edison platform to build an end-to-end IoT solution and describes related concepts that can be applied to other projects. Included are a series of hands-on exercises using the Intel® Edison platform, showing developers how to set up and use the Intel Edison hardware and software environment, connect the platform to the Internet, interface to sensors, and communicate data to a cloud service. This document also shows how to create a touch monitoring application to monitor the status of a touch sensor remotely.

The Intel IoT Developer Kit is a complete hardware and software solution to help developers explore the IoT and implement innovative projects. The Intel® Edison development platform is a small, low-power, yet powerful computing platform designed for prototyping and producing IoT and wearable computing products. This platform is powered by a dual-core, dual-threaded Intel® Atom™ processor system-on-chip at 500 MHz and a 32-bit Intel® Quark™ microcontroller at 100 MHz, and it features integrated Wi-Fi* and Bluetooth* connectivity. More information about the Intel® Edison platform is available at http://download.intel.com/support/edison/sb/edison_pb_331179001.pdf.

The Intel® Edison platform is based on the Yocto Project*. The Yocto Project is an open source collaboration project that provides templates, tools, and methods that help developers create custom Linux*-based systems for embedded products. The Intel® Edison platform merges an Arduino* development environment with a complete Linux-based computer system, allowing developers to incorporate Linux system calls and OS-provided services in their Arduino sketches. The Intel® Edison platform is a powerful controller, and developers can write their projects in JavaScript* with the Intel® XDK, C/C++ with Eclipse*, the Arduino IDE, visual programming with Wyliodrin*, or Python*, or work at the command line if they prefer a terminal environment. In this article, we’ll guide you through the process of creating IoT solutions using JavaScript with the Intel XDK and show how to deploy, run, and debug on the IoT device.

Key Elements of IoT

 

The IoT has four key elements: data generators, data aggregators, cloud-based services, and decision making. The data generators consist of the sensors and actuators. The Intel IoT Developer Kit makes it easy to add sensors and actuators to an IoT project and gather information. The information sensed at the device level is transmitted to a cloud-based service and then delivered to the end user.

 

Hardware Components

 

The hardware components referenced in this document are listed below:

For detailed instructions on how to assemble the Intel® Edison board, see https://software.intel.com/en-us/assembling-intel-edison-board-with-arduino-expansion-board.

Hardware Diagram

Figure 1: Hardware Diagram

 

Getting Started

 

To download the latest firmware for the Intel® Edison board and the Intel® Phone Flash Tool Lite, browse to Intel® Edison Board Software Downloads: https://software.intel.com/en-us/iot/hardware/edison/downloads. Read the Flash Tool Lite user manual (https://software.intel.com/en-us/articles/flash-tool-lite-user-manual) to install the Intel Phone Flash Tool Lite, and then flash the board with the latest Intel® Edison firmware image. These are the basic steps:

  1. Install the Microsoft Windows* 64-bit Integrated Installer driver software. This driver also installs the Arduino IDE software.
  2. Download the latest Intel® Edison board firmware software release 2.1.
  3. Download and install Intel Phone Flash Tool Lite.
  4. Flash the Intel® Edison platform using the downloaded firmware.
  5. Set up a serial terminal.

 

Configuring the Intel® Edison Platform

 

Once the serial terminal is set up, you can configure the Intel® Edison platform.

  • At the Intel® Edison platform console, type: configure_edison --setup.
  • Follow the setup prompts to configure the board name and root password. This step is needed because connecting to Intel XDK requires that a username and password be set.

Configure Edison – Command Line

Figure 2: Configure Edison – Command Line

Be sure to assign a unique name; do not use “edison” as the name, a common practice that causes mDNS conflicts on the network.

 Configure Edison – Name the Device

Figure 3: Configure Edison – Name the Device

For a detailed description of how to connect the Intel® Edison platform to a local Wi-Fi network, see https://software.intel.com/en-us/connecting-your-intel-edison-board-using-wifi.

  • After you have connected to a local Wi-Fi, type: wpa_cli status.
  • Verify that the connection state is COMPLETED and that there is an IP address assigned.

Connection Status

Figure 4: Connection Status

 

Seeed Studios Grove* Starter Kit Plus

 

Grove Starter Kit Plus is a collection of various modular sensors and actuators, which IoT developers can use to develop projects without soldering. This kit includes a variety of basic input and output modules and sensors. Instructions on how to install the Grove base shield and connect a Grove component are here: https://software.intel.com/en-us/node/562029.

 

Intel® XDK IoT Edition Installation

 

To download, install, and connect the Intel XDK to the Intel® Edison platform, visit https://software.intel.com/en-us/getting-started-with-the-intel-xdk-iot-edition. The tool is provided free of charge. The Intel XDK IoT Edition allows developers to create, test, debug, and run applications on Intel’s IoT platforms, and it provides code templates that interact with sensors and actuators. It also offers a list of virtual mobile devices that developers can use to test their applications.

 

Creating a New Project

 

You can start a new project using either a template or a blank project. This section guides you through the steps to create a simple light sensor project in the Intel XDK.

Start a New Project

Figure 5: Start a New Project

Create a blank template and name it LightSensor.

Create a Blank Project

Figure 6: Create a Blank Project

The sample code for the light sensor is available at https://software.intel.com/en-us/iot/hardware/sensors. To browse the sample code for the light sensor,

  • In the Connection Type drop-down on the left, select AIO.
  • In the list that displays, select Grove Light Sensor.

Filter Sensor by Connection Type

Figure 7: Filter Sensor by Connection Type

  • Copy the Grove Light Sensor JavaScript* sample code into main.js of the Light Sensor project you just created.
  • Connect both the Intel® Edison module and the computer to the same Wi-Fi network.

The Intel® Edison module Wi-Fi and password should already be set up from the Configuring the Intel® Edison Platform steps above.

  • To change Wi-Fi networks, repeat the Wi-Fi setup at the Intel® Edison platform console.
  • To retrieve the IP address, at the Intel® Edison platform console, type: wpa_cli status

 Connect Intel® XDK to the Intel® Edison Module

Figure 8: Connect Intel® XDK to the Intel® Edison Module

The sample code uses the light sensor as an analog input on AIO pin 0. Simply connect the light sensor to analog pin 0.

Light Sensor Hardware Connection

Figure 9: Light Sensor Hardware Connection

Build and upload the LightSensor project using Intel XDK IoT Edition.

LightSensor Sample Project

Figure 10: LightSensor Sample Project

 

Run the LightSensor project.

Run the LightSensor Project

Figure 11: Run the LightSensor Project

 

The Cloud

 

There are many cloud applications for the IoT. In this paper, we’ll talk about ThingSpeak*. ThingSpeak provides many apps that run in the cloud to help you build connected applications and release connected products for the IoT.

ThingSpeak

ThingSpeak is a platform that provides a service for building IoT applications. Its features include real-time data collection and processing, data visualization in the form of charts and graphs, and the ability to create plug-ins and apps such as ThingTweet*, ThingHTTP*, TweetControl*, TimeControl*, React*, and more.

The first step is to sign up for a ThingSpeak account at https://thingspeak.com/ and then create a new channel.

ThingSpeak* – New Channel

Figure 12: ThingSpeak* – New Channel

The channel is where your application stores and retrieves any type of data through the ThingSpeak API. Every channel has a unique Channel ID, which the application uses to identify the channel when reading data from it. Each channel provides up to eight data fields. After the channel is created, ThingSpeak publishes and processes the data, and your project retrieves it. If you make your channel public, other people can find it and access its data. If you make it private, only you can access the channel.

The next step is to name each field so you know what data you put in it.

ThingSpeak* – Creating a New Channel

Figure 13: ThingSpeak* – Creating a New Channel

Next, move to the API Keys tab and get the API “writeKey” and “readKey” for writing to and reading from the channel.

ThingSpeak* – writeKey and readKey

Figure 14: ThingSpeak* – writeKey and readKey

The simplest way to upload data is to change a field’s value manually using an encoded URL. If the browser window shows the result ‘0’, an error occurred while attempting to send your submission; otherwise, your submission was successful. The “api_key” parameter of the update request is the “writeKey” of the ThingSpeak channel shown in Figure 14: ThingSpeak – writeKey and readKey.

http://api.thingspeak.com/update?api_key=KSX88EAFTV19S2CH&field1="110"

Uploading multiple fields’ values:

http://api.thingspeak.com/update?api_key=KSX88EAFTV19S2CH&field1="110"&field2="120"
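The same update URL can also be assembled programmatically. The sketch below is plain Node.js with no extra modules; the API key and field values are placeholders, not real credentials:

```javascript
// Build a ThingSpeak update URL for several fields at once.
// 'YOUR_WRITE_KEY' and the field values below are placeholders.
function buildUpdateUrl(apiKey, fields) {
    const params = new URLSearchParams({ api_key: apiKey });
    for (const [name, value] of Object.entries(fields)) {
        params.append(name, value);
    }
    return 'https://api.thingspeak.com/update?' + params.toString();
}

console.log(buildUpdateUrl('YOUR_WRITE_KEY', { field1: 110, field2: 120 }));
// https://api.thingspeak.com/update?api_key=YOUR_WRITE_KEY&field1=110&field2=120
```

Fetching this URL (for example with an HTTP GET from the browser) performs the same update as typing it by hand.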

ThingSpeak is an open source IoT application. To begin using ThingSpeak on the Intel® Edison platform, install the thingspeakclient module from the serial terminal:

Installing the thingspeakclient Module

Figure 15: Installing the thingspeakclient Module

Now you are ready to write a sample application using ThingSpeakClient. By default, ThingSpeakClient() uses the URL https://api.thingspeak.com.

var ThingSpeakClient = require('thingspeakclient');
var client = new ThingSpeakClient();

By default, the client uses timeout handling between update requests per channel (useTimeoutMode is true). To disable the client timeout:

var client = new ThingSpeakClient({useTimeoutMode:false});

The default timeout between updates per channel is 15 seconds. Here is an example of how to change the timeout value to 20 seconds (20,000 ms):

var client = new ThingSpeakClient({updateTimeout:20000});

To call updateChannel(), first attach to the channel with only the ThingSpeak writeKey, or with both the readKey and writeKey of the channel. The optional callback returns an error and a response.

// Attach a channel with only a writeKey, for updating the channel:
client.attachChannel(50568, {writeKey:'KSX88EAFTV19S2CH'}, callBack);

// Attach a channel with both a writeKey and a readKey:
client.attachChannel(50568, {writeKey:'KSX88EAFTV19S2CH', readKey:'B2PPOW7HGOCL4KZ6'}, callBack);

Code Sample 1: ThingSpeak* – Attach to a Channel

// Create the light sensor object using AIO pin 0
var light = new groveSensor.GroveLight(0);
var lightValue = light.value();

// Update field 3 of a channel
client.updateChannel(50568, { field3: lightValue }, function(err, response) {
});

Code Sample 2: ThingSpeak* – Update a Channel

One of the methods used to read the data is:

client.getLastEntryInFieldFeed(50568, 3, function(err, response) {
    if (err == null) {
        console.log('read successfully. value is: ' + response.field3);
    }
});

Code Sample 3: ThingSpeak* – getLastEntryInFieldFeed()

 

ThingSpeak Sample Sketch

The following code sample combines the light sensor example from Figure 10: LightSensor Sample Project with the ThingSpeak sample code. It reads the value of the light sensor and then uploads it to ThingSpeak.

var ThingSpeakClient = require('thingspeakclient');

// Set the update timeout to 20 seconds (the default is 15 seconds).
// To disable timeout handling instead, use {useTimeoutMode:false}.
var client = new ThingSpeakClient({updateTimeout:20000});

var callBack = function(err) {
    if (err) {
        console.log('error: ' + err);
    }
}

// Attach the channel with only a writeKey, for updating the channel:
//client.attachChannel(50568, {writeKey:'KSX88EAFTV19S2CH'}, callBack);

// Attach the channel with both a writeKey and a readKey:
client.attachChannel(50568, {writeKey:'KSX88EAFTV19S2CH', readKey:'B2PPOW7HGOCL4KZ6'}, callBack);

// Load Grove module
var groveSensor = require('jsupm_grove');

// Create the light sensor object using AIO pin 0
var light = new groveSensor.GroveLight(0);

// Read the input and print both the raw value and a rough lux value.
// Upload the light value to field 3 of the channel.
function uploadLightValueToThingSpeak() {
    var lightValue = light.value();
    console.log(light.name() + " raw value is " + light.raw_value() +
            ", which is roughly " + light.value() + " lux");
    client.updateChannel(50568, { field3: lightValue }, function(err, response) {
        console.log('err = ' + err);
        console.log('response = ' + response);
        if (err == null && response > 0) {
            console.log('Updated successfully. Entry number was: ' + response);
        }
    });
}
setInterval(uploadLightValueToThingSpeak, 20000);

Code Sample 4: Uploading the Light Sensor Value to ThingSpeak*

The light sensor data is published to ThingSpeak in real time via field 3, “Light”, of the channel:

ThingSpeak* – Light Sensor Value in Real-Time

Figure 16: ThingSpeak* – Light Sensor Value in Real-Time

 

ThingTweet

The ThingTweet app links your Twitter* account to ThingSpeak and sends Twitter messages using a simple API.

To link to a Twitter account,

  • Log on to https://thingspeak.com/.
  • On the Apps tab, click ThingTweet, and then click Link Twitter Account to authorize the app for the right Twitter account.

ThingSpeak* – Link Twitter Account

Figure 17: ThingSpeak* – Link Twitter Account

If you don’t have a Twitter account, sign up at https://twitter.com and then authorize the app.

ThingSpeak* – Authorize App

Figure 18: ThingSpeak* – Authorize App

Now you can send a Twitter message in updateChannel() by passing in the Twitter username and the tweet message, as shown below.

client.updateChannel(50568, { field3: lightValue, twitter: 'IoTIntelEdison', tweet: 'Intel Edison platform is awesome'}, function(err, response) {
        if (err == null && response > 0) {
            console.log('Updated successfully. Entry number was: ' + response);
        }
    });

Code Sample 5: ThingSpeak* – Send a Tweet in updateChannel()

 Tweet Message from UpdateChannel()

Figure 19: Tweet Message from UpdateChannel()

 

ThingHTTP

The ThingHTTP app allows you to connect things to a web service via an HTTP request. The supported methods are GET, POST, PUT, and DELETE. Twilio* is a cloud communications platform for SMS messages and phone calls; it supports HTTP methods and can interface with the ThingHTTP app on the Intel® Edison platform. Below is an example of sending an SMS message with Twilio using the ThingHTTP app.

To get started, sign up for Twilio and then click Show API Credentials to get a Twilio ACCOUNT_SID and AUTH_TOKEN.

 Create Twilio* SMS ThingHTTP* App

Figure 20: Create Twilio* SMS ThingHTTP* App

 

TweetControl

TweetControl listens for commands from Twitter and then performs an action. In the example below, TweetControl listens to Twitter for the trigger word “cool” and then performs the ThingHTTP “Twilio SMS” action.

  • Twitter Account: The username of the Twitter account. If Anonymous TweetControl is checked, anyone can trigger your TweetControl.
  • Trigger: The word in the Twitter message that triggers the TweetControl.
  • ThingHTTP Action: Select a ThingHTTP request to perform.

 Create a TweetControl* App

Figure 21: Create a TweetControl* App

Now that you have ThingHTTP and TweetControl set up, you can send a tweet message from your Twitter account. The Tweet follows a “filter trigger” structure: for TweetControl to execute, your Tweet message must include one of the filter keywords. The Tweet keywords are:

  • #thingspeak
  • thingspeak
  • #tweetcontrol
  • tweetcontrol

Tweet structure:

filter trigger

 Sample Tweet Message

Figure 22: Sample Tweet Message

After you send the Tweet message “#thingspeak IntelEdison is so cool!”, TweetControl is triggered by the trigger word “cool” and invokes the Twilio SMS ThingHTTP app to send the SMS message “Hello Intel Edison” to your mobile device.

Mobile Device – Received an SMS from Twilio*

Figure 23: Mobile Device – Received an SMS from Twilio*

TimeControl

TimeControl also performs ThingHTTP requests, but it does so automatically at predetermined times and on schedules. Create a new TimeControl and fill out the form as follows:

  • Name: Name the TimeControl.
  • Date and Time: Select the date and time at which the TimeControl is to be processed.
  • Action: For this example, select the ThingHTTP app, and then select the ThingHTTP Twilio SMS request to perform.

Create TimeControl

Figure 24: Create TimeControl

When the TimeControl timer triggers, the Twilio SMS ThingHTTP app executes and sends the text message “Hello Intel Edison” to your mobile device, as in Figure 23: Mobile Device – Received an SMS from Twilio*.

React

React performs a ThingHTTP request or sends a ThingTweet message when data in your ThingSpeak channel meets a certain condition. See http://community.thingspeak.com/documentation/apps/react/ to create a Light React that tweets the message “Your light is dim” using ThingTweet when the light value is less than 6.

Create a Light React App

Figure 25: Create a Light React App

 

Creating a Touch Notifier Monitoring App Using an Emulator

 

The Intel XDK IoT Edition allows you to create a Cordova* app that monitors the Grove sensors using HTML5, CSS, and JavaScript. The app can be tested on an emulator or a real device. We’ll create a Touch Notifier monitoring app that receives data wirelessly and notifies the user when the touch sensor is being touched. The app takes readings from the Grove touch sensor and changes the color of the circle on the device.

  • To see a list of available templates, go to Start a New Project on the left side and under the Internet of Things Embedded Application section, click Template.
  • Select Touch Notifier and then continue to create a new project.

Create an IoT-Embedded Application Touch Notifier Template

Figure 26: Create an IoT-Embedded Application Touch Notifier Template

  • Connect the Touch sensor to the Grove Shield’s jack labeled D2 and the Buzzer sensor to Grove Shield’s jack labeled D6.
  • Connect Intel XDK to the Intel® Edison platform, then build and run the Touch Notifier app on the IoT device.

Create an IoT Embedded Application Touch Notifier

Figure 27: Create an IoT Embedded Application Touch Notifier

  • Because the Touch Notifier application attaches socket.io to the HTTP server listening on port 1337, be sure to install the socket.io module before running the application.
  • Note the port number, which you will need later in Figure 30: Intel® Edison Platform IP Address and Port Number.

Installing socket.io Module

Figure 28: Installing socket.io Module

  • To create an Apache Cordova* application, go to HTML5 Companion Hybrid Mobile for Web App on the left side, and then click Samples and Demos, and then click General.
  • To display a list of templates, go to the HTML5 + Cordova tab.
  • To create the application, click Touch Notifier Companion App.

 Create an Apache Cordova* Touch Notifier Application

Figure 29: Create an Apache Cordova* Touch Notifier Application

You are now at the Cordova Touch Notifier project screen.

  • Click the Emulate tab, and then select the mobile device from the drop-down list in the upper-left corner. The default is Motorola Droid 2.
  • Enter the Intel® Edison board IP address and port number 1337, and then click Submit. The Intel® Edison board IP address can be retrieved by typing wpa_cli status at the Intel® Edison platform console. For the port number, refer to Figure 27: Create an IoT Embedded Application Touch Notifier. Note that the Intel® Edison platform must be on the same local Wi-Fi network as the computer.

Intel® Edison Platform IP Address and Port Number

Figure 30: Intel® Edison Platform IP Address and Port Number

If a pop-up message “Connection Error! Server Not Available” displays, make sure the Touch sensor application from the Internet of Things Embedded Application is still running. Now touch the Touch sensor to see the color of the circle change to green and hear the buzz from the Buzzer sensor.

Run an Apache* Cordova Touch Notifier App on Emulator

Figure 31: Run an Apache* Cordova Touch Notifier App on Emulator

 

Creating a Touch Notifier Monitoring App using a Real Mobile Device

 

To run the Cordova app on a real mobile device, such as a phone, tablet, or Apple iPad*:

  • Download and install the Intel® App Preview app from Google Play*, the Windows Store*, or the Apple App Store*.
  • Switch to the Test tab in the Intel XDK, and then click I have installed app preview.

Intel® App Preview Installation Confirmation

Figure 32: Intel® App Preview Installation Confirmation

If the message below displays, click Sync to sync with the testing server.

Sync with Testing Server Pop-Up

Figure 33: Sync with Testing Server Pop-Up

The mobile device, the computer, and the Intel® Edison platform must be on the same local Wi-Fi network in order to communicate with each other.

  • Open Intel® App Preview on the mobile device, switch to ServerApps, and then select the Touch Notifier app to launch it.
  • Log on using the Intel® Edison platform IP address and port number 1337. If the “Connection Error! Server Not Available” message displays, check whether the Touch sensor application from the Internet of Things Embedded Application is running.

Intel® App Preview – Select a Project

Figure 34: Intel® App Preview – Select a Project

After logging on successfully, the Touch Notifier application launches as shown below.

Launch Touch Notifier Application on a Real Mobile Device

Figure 35: Launch Touch Notifier Application on a Real Mobile Device

 

Summary

In this article, we described how to set up the Intel® Edison platform to begin interfacing with sensors and communicating data to the ThingSpeak cloud service. To try different sensors in the Grove Starter Kit Plus and experiment with more sensors, go to https://software.intel.com/en-us/iot/hardware/sensors. This article also showed how to create a touch monitoring application to monitor the status of a touch sensor remotely. Think of what you want to create and then experiment and enjoy the power of the Intel® Edison platform.


About the Authors

Nancy Le and Whitney Foster are software engineers at Intel Corporation in the Software and Services Group working on Intel® Atom™ processor scale-enabling projects.

Notices

No license (express or implied, by estoppel or otherwise) to any intellectual property rights is granted by this document.

Intel disclaims all express and implied warranties, including without limitation, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement, as well as any warranty arising from course of performance, course of dealing, or usage in trade.

This document contains information on products, services and/or processes in development. All information provided here is subject to change without notice. Contact your Intel representative to obtain the latest forecast, schedule, specifications and roadmaps.

The products and services described may contain defects or errors known as errata which may cause deviations from published specifications. Current characterized errata are available on request.

Copies of documents which have an order number and are referenced in this document may be obtained by calling 1-800-548-4725 or by visiting www.intel.com/design/literature.htm.

This sample source code is released under the Intel Sample Source Code License Agreement.

Intel, the Intel logo, and Intel Atom are trademarks of Intel Corporation in the U.S. and/or other countries.

*Other names and brands may be claimed as the property of others.

© 2015 Intel Corporation.

 

 

 

 

 

Building an Android* command-line application using the NDK build tools



Introduction

Libraries and test apps are often written in C/C++ for testing hardware and software features on Windows*. When these same features are added to an Android* platform, rewriting the libraries and tests in Java* is a large task. It would be preferable to simply port the existing C/C++ code. Many believe that such libraries and tests must be accessed through Java-based Android applications, but that’s not always the case: if the code to be ported is written in ANSI C/C++ and has no OS-specific dependencies, it can be rebuilt with the Android NDK build tools and run from the command line in a shell, much as you would run command-line apps from the command prompt in Windows.

This article shows how to write a simple “Hello World” application and run it on an Android device using a remote shell.

Setting up your development environment

Download and install the Android NDK

Go to https://developer.android.com/ndk/downloads/index.html and download the appropriate NDK for your OS. Follow the instructions to extract and install the NDK on your computer. 

Set up your build environment

Modify your path environment variable to indicate the location of the NDK directory. This allows you to run the NDK build tools from any other location on your computer without having to specify the tool’s entire path.

On Linux*, you can modify the variable for your local shell with the following command:

export PATH=$PATH:/your/new/path/here

If you’d like the change to be permanent and present in each shell upon opening, add the following line to your ~/.profile or /etc/profile:

PATH=$PATH:/your/new/path/here

Modify the path environment variable

Figure 1. Modify the path environment variable.

On Windows, you can modify your environment variables by opening the Control Panel > System and Security > System > Advanced system settings > Environment Variables. Find your path variable in the System variables list, and then click Edit. Add a semicolon to the end of the last path, and then add your NDK path to the end. Click OK on each dialog.

Modify the environment variables

Figure 2. Modify the environment variables.

Writing the code and build scripts

Creating your makefiles

To build for Android, you need to have at least two makefiles: Android.mk and Application.mk. Android.mk is similar to the makefiles you might be familiar with for building on Linux from the command line. In this file you can define the source files to build, the header include directories, compiler flag options, libraries and their locations, the module name, and much more. Application.mk is for specifying Android-specific properties, such as the target Android platform, SDK version, and platform architecture.

The Android.mk makefile

Figure 3. The Android.mk makefile.

In Android.mk (Figure 3), you can see that a LOCAL_PATH directory is specified. This is initialized to the current directory so that you can use relative paths to other files and directories in the build environment.

The line that includes CLEAR_VARS clears existing local variables that might have been set from previous builds or more complex builds that have multiple makefiles.

The LOCAL_MODULE variable specifies the output name of the binary you’re creating.

The LOCAL_C_INCLUDES variable specifies the directories you want the preprocessor to search for additional include files.

The LOCAL_SRC_FILES variable specifies the specific source files you’d like to be built for this application/library. Place your .c or .cpp files here.

The final line is the key to building an executable instead of a library. Most native code is built into libraries for Android applications, but changing the value to $(BUILD_EXECUTABLE) in Android.mk results in an executable.
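Since Figure 3 may not render here, a minimal Android.mk along the lines described above might look like the following sketch (the module name, include directory, and source file names are placeholders for your own project):

```makefile
# Minimal Android.mk sketch; my_app, include/, main.cpp, and myPrint.cpp
# are placeholder names, not part of the original article.
LOCAL_PATH := $(call my-dir)

include $(CLEAR_VARS)

LOCAL_MODULE     := my_app
LOCAL_C_INCLUDES := $(LOCAL_PATH)/include
LOCAL_SRC_FILES  := main.cpp myPrint.cpp

# Build an executable instead of a library
include $(BUILD_EXECUTABLE)
```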

The Application.mk makefile

Figure 4. The Application.mk makefile.

In Application.mk (Figure 4), the first line indicates a build for x86 rather than ARM. This tells the NDK to use the correct toolchain for the x86 target architecture.

The second line specifies the platform to build for. In this case it is version 21, which corresponds to Android 5.0, also known as Lollipop*.

The third line indicates the use of the static version of the standard library runtime.

The final line indicates the name of the main makefile for this application.
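For reference, an Application.mk matching this description could be as simple as the sketch below (the static STL variant shown is one of several the NDK offers; pick the one that matches your toolchain):

```makefile
# Minimal Application.mk sketch for an x86 command-line build.
APP_ABI          := x86            # target architecture (x86, not ARM)
APP_PLATFORM     := android-21     # Android 5.0 (Lollipop)
APP_STL          := gnustl_static  # static standard library runtime
APP_BUILD_SCRIPT := Android.mk     # name of the main makefile
```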

Writing your application

A command-line application in Android is written in the same way, regardless of the platform. Figure 5 shows an example of a simple “Hello World” application. The cout function is used to print to the screen, and myPrint() is defined in another file.

An example of a simple “Hello World” application

Figure 5. An example of a simple “Hello World” application.

Figure 6 shows the layout of the folder structure for the project and source files.

The layout of the folder structure for the project and source files

Figure 6. The layout of the folder structure for the project and source files.

Building your application

To have the ndk-build script build an application, first create a project folder. In the folder, create a folder named jni. In the jni folder, place the Android.mk file, Application.mk file, and source files.

Then navigate to the project folder in a terminal and execute ndk-build. ndk-build is a script that resides in the root folder of your NDK installation directory. The ndk-build script will parse the project directory and its subfolders and build the application.

Since this example is about building a command-line application, the structure of building under a jni folder doesn’t make much sense: there is no Java code, nor any code that interfaces with Java. However, removing the jni folder adds two steps to the build process.

The first step is to specify the NDK project path. Here, it is set to dot (.) for the current working directory.

export NDK_PROJECT_PATH=.

Then navigate to the project directory and use the ndk-build script. The second step is to specify where the Application.mk file is. Place it in the project directory, so the build command looks like this:

ndk-build NDK_APPLICATION_MK=./Application.mk

This prints your compile steps, executable creation, and where it “installs” your app. In this case, it creates a libs directory under your project directory. In the libs directory, it will create an x86 directory and place your executable in there.

The build command

Figure 7. The build command.

Below is the project tree with the source code and output from the build.

The project tree with the source code and output from the build

Figure 8. The project tree with the source code and output from the build.

Deploying the application

Installing your application

To install your application, you need a host machine with an adb (Android Debug Bridge) connection to the Android device. The adb tool comes with the Android SDK and can be downloaded as part of the platform tools bundle. Move your application (main.out, in this instance) to your host machine. Using adb in a command prompt, you can then push the file to your Android device.

adb push /path/to/your/file /data/path/to/desired/folder/

Using adb in a command prompt

Figure 9. Using adb in a command prompt.

Now the main.out executable is on your Android device in the folder you specified.

Running your application

To run your application, first you need to open a shell to your device. You do this with the adb shell command. Now you have a Unix*-like shell open.

Change to the directory where you stored your sample. You can’t execute it yet, though: on Unix systems, a file must be marked as executable before you can run it. You can do this with the chmod command.

Now you’re ready to run your command line app. Execute it by typing ./<filename> or in this instance: ./main.out.
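The chmod-and-run sequence can be sketched as follows. On the device you would first run adb shell and cd to the folder you pushed to; here main.out is simulated with a tiny script so the steps can be tried in any Unix-like shell:

```shell
# Simulate the pushed binary with a small script (placeholder for the
# real main.out built by ndk-build).
printf '#!/bin/sh\necho Hello World\n' > main.out
chmod 755 main.out    # mark the file as executable
./main.out            # prints: Hello World
```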

Running the application.

Figure 10. Running the application.

Congratulations! You can now build and run a command-line application on an Android device.

Author bio and photo

Gideon Eaton is a member of the Intel® Software and Services Group and works with independent software vendors to help them optimize their software for Intel® Atom™ processors. In the past he worked on a team that wrote Linux* graphics drivers for platforms running the Android OS.

Gideon Eaton

Spotlight: Intel® x86 and Unity* Contest Challenge Winners



Earlier this year, Intel and Unity teamed up on an exciting contest that offered game developers the opportunity to build their games with native x86 support for Android* using Unity 5. They received over 300 submissions from across the globe. A wide range of game developers was represented, from small shops to well-known publishers. We saw a lot of variety in the types of games submitted, too. They included action-heavy, first-person-shooter games, contemplative puzzle games, RPG games, racing games, and other more one-of-a-kind games. To learn more about our recent contest and how you can easily add x86 support to your existing build in Unity, check out our recent blog post on the Intel Android* Developer Zone blog. Without further ado, we’d like to introduce some of the winners, who were awarded one of three prizes: a Unity 5 Pro license, an Acer Iconia* Tab 8 with Android, or an Intel® Solid State Drive 730 Series 240-GB drive.

Bis Games Urban Drift City 3D, from Turkey’s Mosil Games, is a fast-paced racing and drift simulator in which you rocket through an urban environment in the car of your choice. Vivid graphics transport you directly into gritty streets while realistic controls let you fine-tune your racing experience as you fly towards your destination.

Cubot, from French developer Nicoplv games, is a seemingly simple but deceptively complex puzzle game in which you must successfully guide a series of multicolored cubes through a grid using a variety of increasingly sophisticated techniques and tools such as elevators, color swapping, and teleportation.

Dolmus Driver, from Turkey’s Gripati Digital Entertainment, transports you into the world of Turkish dolmus taxis. Pick up funny passengers and whisk them to their destinations as quickly as you can without getting stopped by the police. Master three tracks in Istanbul using your choice of four stylish cars and charismatic drivers.

Farming USA is a terrific farming simulator from American game developer Bowen Games, LLC. In this smoothly rendered 3D world, you perform all of the daily tasks that a farmer would need to accomplish in order to build and develop a successful farm: planting, growing and harvesting crops, as well as feeding and raising livestock.

Gear Jack Black Hole, from well-known Italian publisher and developer Crescent Moon Games, offers a unique runner’s quest within a striking visual environment. As the protagonist Jack, you must speedily dodge the Black Hole while avoiding dangerous alien monsters and tricky traps in order to survive and prevail.

MOB: The Prologue, a new title from Korea’s SOGWARE, spirits you away to the world of Soria. In this lushly designed fantasy realm, you must call on the powers of lovely Hunting Girls to defend small Sorian towns from a series of nasty monsters through thrilling battles and epic boss fights. Arm your Hunting Girls with special amulets that come in handy during major showdowns.

Neon Beat, also from Gripati Digital Entertainment in Turkey, is a power-packed breakout game in which you hurtle a neon ball towards the blocks in the center of the screen, racking up bonus items and special time-limited powers. Boss fights await you underneath the pixel art, challenging you with increasing intensity as you clear each of the 60 levels in the game.

Phrase Wheel, from Turkey’s SADEGAMES, is a fun word game somewhat reminiscent of the American TV show Wheel of Fortune, in which you spin a phrase wheel and rack up points by guessing the phrase hidden on the screen. Choose from a range of topics including sports, countries and capitals, literature, music, actors and actresses, and movies.

Shepherd Saga, from Japanese game developer Element Cell Game Limited, is an adorable sheepherding simulator in which you build a ranch in the remote area of Diva Terra. Collect and nurture more than 80 different breeds of sheep, protect them from predators and diseases, and make your ranch prosper by selling more sheep than your competitors.

3D Tanks Online: Tanktastic, from America’s G.H.O.R. Corporation, is a heady descent into the thrill and danger of battlefield combat. Face-off against competitors from all over the world in a 3D multiplayer environment, camouflaging your tank for cover at key moments in the conflict and then availing yourself of more than 95 tank models to demonstrate ultimate dominance on the field.

Voxel Rush: 3D Racer Free is an exhilarating ride from British game publisher Hyperbees. Zoom through streamlined, starkly hued, minimalist 3D landscapes at a breakneck pace while avoiding the many obstacles that suddenly topple into your path. Feeling competitive? Fight and beat your friends in a 60-second multiplayer contest to win credits or virtual cash.

Here is a full list of the Intel® x86 and Unity* Contest winners by category. Do you see any games you love to play?

Action and Adventure Games

Battle Galaxy
ExZeus 2
MOB: The Prologue
Gear Jack Black Hole
Goalkeeper Premier Soccer Game
LoL Runners
3D Tanks Online: Tanktastic
Terrible Tower
Vimala Defense Warlords

Arcade Games

Candy World Quest
Elypt
Fruits War
HARDKOUR – Parkour Runner
Help the Zombies
Hungry Cat Run
Kill the Grinch Save Christmas
Micronytes Director’s Cut
Neon Beat
Papa Panda Adventure Run
Roly Poly Penguin
Special Delivery

Educational Games and Brain Teasers

Cubot
Hitung Tepat
Iterazer
Kiduoso – First Words
Math Challenge: Are You Smart?
Phrase Wheel

Puzzle and Simulation Games

Billionaire Blitz
Cubique
Farming USA
Mahjong In Poculis
Puzzle Maniac
Shepherd Saga
Sokoban 3D

Racing and Drifting Games

Bis Games Urban Drift City 3D
Car Slalom
Country Ride 2
124 Drift
Dolmus Driver
International Rally Car Race
Paco – Car Chase Simulator
Road Crash Racing
Voxel Rush: 3D Racer Free

Stay tuned for our next post, where we recap interviews with some of the winning game developers about their experiences participating in the contest and learn from them just how easy it is to include x86 support for your existing Android build in Unity.

April Game Dev Hardware Seeding Contest Winner Success Story


Will Bucknum, Voice To Game

The laptop has made my life about 1000 times easier. I am able to easily edit and record audio on the road, so I'm no longer tethered to my home studio as much as before. The processing speed and graphics card are quite good, so playing games or builds of things I'm working on isn't a problem - nor is running multiple audio programs packed with tons of audio plug-ins. The RAM doesn't crash, the graphics don't crash and the machine is very stable. My home PC actually crashed semi-regularly using Pro Tools, and this laptop hasn't crashed at all using Pro Tools. It seems the operating system and drivers work together better in this laptop.

As I work as a contractor mostly for indie developers, anything that may seem small to help us tangibly move forward in our careers can make a bigger difference than many might think. We're not raking in hundreds of thousands of dollars (usually), so we have to be smart about expenditures. A new laptop is something I'd eventually need to get, but couldn't responsibly budget for taking into account all of the other things I'm trying to do to grow my business. In a real way, the laptop served as an accelerator for me, allowing me to travel more, participate more fully in game jams (making better connections, with the potential that new projects started at these jams lead to commercially marketed games), and expand the way I use my software professionally to be more efficient and do more things, sometimes in surprising ways.


For instance, our band, Gravity Nocturne, founded during the Make-A-Band event in Eugene by myself and other game audio producers, will be performing at Indie Game Con. I'll be on the stage playing through the laptop using a MIDI controller keyboard on multiple virtual instruments and live sax through plug-in effects in Pro Tools. This serves as great marketing for myself and for the game audio community in Eugene, and it wouldn't have been at all possible on my old laptop.


So far, I have used the laptop on two commercially marketable games: one that has been released in beta, Flying Tigers, and another unannounced game that I probably won't be able to talk about for quite a while.


I stopped by the Intel booth briefly at PAX and have seen some events that Intel has put on lately, and I am really impressed with how much Intel is trying to help the game development community. Every time I meet people who are trying to increase their presence in game development and are looking for places to check out, I send them to Intel, because they're always doing things for game developers, especially indies. Since Intel puts out quality products and is so supportive of the work I, and so many others, do in making games, I basically consider myself Team Intel now.

Multi-Adapter Support in DirectX* 12


Download PDF

Introduction

This sample shows how to implement an explicit multi-adapter application using DirectX 12. Intel’s integrated GPU (iGPU) and a discrete NVIDIA GPU (dGPU) are used to share the workload of ray-tracing a scene. The parallel use of both GPUs allows for an increase in performance and for more complex workloads.

This sample uses multiple adapters to render a simple ray-traced scene using a pixel shader. Both adapters render a portion of the scene in parallel.

Explicit Multi-Adapter Overview

Support for explicit multi-adapter is a new feature in DirectX 12. This feature allows for the parallel use of multiple GPUs regardless of manufacturer and type (for example, integrated or discrete). The ability to separate work across multiple GPUs is provided by the presence of independent resource management and parallel queues for each GPU at the API level.

DirectX 12 introduces two main API features that help enable multi-adapter applications:

  • Cross-adapter memory that is visible to both adapters.

    DirectX 12 introduces cross-adapter-specific resource and heap flags:
    • D3D12_RESOURCE_FLAG_ALLOW_CROSS_ADAPTER
    • D3D12_HEAP_FLAG_SHARED_CROSS_ADAPTER
    The cross-adapter resources exist in memory on the primary adapter and can be referenced from another adapter with minimal cost.

  • Parallel queues and cross-adapter synchronization that allows for parallel execution of commands. A special flag is used when creating a synchronization fence: D3D12_FENCE_FLAG_SHARED_CROSS_ADAPTER.

    A cross-adapter fence allows a queue on one adapter to be signaled by a queue on the other.



    The above diagram shows the use of three queues to facilitate copying into cross-adapter resources. This is the technique used in this sample and showcases the following steps:
    1. Queue 1 on GPU A and Queue 1 on GPU B render portions of a 3D scene in parallel.
    2. When rendering is complete, Queue 1 signals, allowing Queue 2 to begin copying.
    3. Queue 2 copies the rendered scene into a cross-adapter resource and signals.
    4. Queue 1 on GPU B waits for Queue 2 on GPU A to signal and combines both rendered scenes into the final output.

Cross-Adapter Implementation Steps

Incorporating a secondary adapter into a DirectX 12 application involves the following steps:

  1. Create cross-adapter resources on the primary GPU as well as a handle to these resources on the secondary GPU.
    // Describe cross-adapter shared resources on primaryDevice adapter
    D3D12_RESOURCE_DESC crossAdapterDesc = mRenderTargets[0]->GetDesc();
    crossAdapterDesc.Flags = D3D12_RESOURCE_FLAG_ALLOW_CROSS_ADAPTER;
    crossAdapterDesc.Layout = D3D12_TEXTURE_LAYOUT_ROW_MAJOR;
    
    // Create a shader resource and shared handle
    for (int i = 0; i < NumRenderTargets; i++)
    {
        mPrimaryDevice->CreateCommittedResource(
            &CD3DX12_HEAP_PROPERTIES(D3D12_HEAP_TYPE_DEFAULT),
            D3D12_HEAP_FLAG_SHARED | D3D12_HEAP_FLAG_SHARED_CROSS_ADAPTER,
            &crossAdapterDesc,
            D3D12_RESOURCE_STATE_PIXEL_SHADER_RESOURCE,
            nullptr,
            IID_PPV_ARGS(&mShaderResources[i]));
    
        HANDLE heapHandle = nullptr;
        mPrimaryDevice->CreateSharedHandle(
            mShaderResources[i].Get(),
            nullptr,
            GENERIC_ALL,
            nullptr,
            &heapHandle);
    
        // Open shared handle on secondaryDevice device
        mSecondaryDevice->OpenSharedHandle(heapHandle, IID_PPV_ARGS(&shaderResourceViews[i]));
    
        CloseHandle(heapHandle);
    }
    
    // Create a shader resource view (SRV) for each of the cross adapter resources
    CD3DX12_CPU_DESCRIPTOR_HANDLE secondarySRVHandle(mSecondaryCbvSrvUavHeap->GetCPUDescriptorHandleForHeapStart());
    for (int i = 0; i < NumRenderTargets; i++)
    {
        mSecondaryDevice->CreateShaderResourceView(shaderResourceViews[i].Get(), nullptr, secondarySRVHandle);
        secondarySRVHandle.Offset(mSecondaryCbvSrvUavDescriptorSize);
    }
  2. Create synchronization fences for the resources that are shared between both adapters.
    // Create fence for cross-adapter resources
    mPrimaryDevice->CreateFence(mCurrentCrossAdapterFenceValue,
        D3D12_FENCE_FLAG_SHARED | D3D12_FENCE_FLAG_SHARED_CROSS_ADAPTER,
        IID_PPV_ARGS(&mPrimaryCrossAdapterFence));
    
    // Create a shared handle to the cross-adapter fence
    HANDLE fenceHandle = nullptr;
    mPrimaryDevice->CreateSharedHandle(
        mPrimaryCrossAdapterFence.Get(),
        nullptr,
        GENERIC_ALL,
        nullptr,
        &fenceHandle);
    
    // Open shared handle to fence on secondaryDevice GPU
    mSecondaryDevice->OpenSharedHandle(fenceHandle, IID_PPV_ARGS(&mSecondaryCrossAdapterFence));
  3. Render on primary GPU into an offscreen render target, and signal queue on completion.
    // Render scene on primary device
    mPrimaryCommandQueue->ExecuteCommandLists(1, primaryCommandList);
    
    // Signal primary device command queue to indicate render is complete
    mPrimaryCommandQueue->Signal(mPrimaryFence.Get(), mCurrentFenceValue);
    fenceValues[currentFrameIndex] = mCurrentFenceValue;
    mCurrentFenceValue++;
  4. Copy resources from offscreen render target into cross-adapter resources, and signal queue on completion.
    // Wait for primary device to finish rendering the frame
    mCopyCommandQueue->Wait(mPrimaryFence.Get(), fenceValues[currentFrameIndex]);
    
    // Copy from off-screen render target to cross-adapter resource
    mCopyCommandQueue->ExecuteCommandLists(1, crossAdapterResources->mCopyCommandLists.Get());
    
    // Signal secondary device to indicate copy is complete
    mCopyCommandQueue->Signal(mPrimaryCrossAdapterFence.Get(), mCurrentCrossAdapterFenceValue);
    mCrossAdapterFenceValues[mCurrentFrameIndex] = mCurrentCrossAdapterFenceValue;
    mCurrentCrossAdapterFenceValue++;
  5. Render on secondary GPU, using handle to cross-adapter resource to access resources as a texture.
    // Wait for primary device to finish copying
    mSecondaryCommandQueue->Wait(mSecondaryCrossAdapterFence.Get(), mCrossAdapterFenceValues[mCurrentFrameIndex]);
    
    // Render cross adapter resources and segmented texture overlay on secondary device
    mSecondaryCommandQueue->ExecuteCommandLists(1, secondaryCommandList);
  6. Secondary GPU displays frame to screen.
    mSwapChain->Present(0, 0);
    MoveToNextFrame();

Note that the code above has been simplified, with all error checking removed; it is not expected to compile as-is.

Performance and Results

Using multiple adapters to render a scene in parallel yields a significant increase in performance compared to relying on a single adapter to perform the entire rendering workload.


Figure 1. Frame time of 100 frames in milliseconds versus work split between integrated and discrete cards.

In the sample ray-traced scene, a decrease of approximately 26 milliseconds was observed when both an NVIDIA GeForce* 840M and Intel® HD Graphics 5500 were used to share the rendering load.

By parallelizing the workload, it is possible to reduce the frame time required by nearly 50 percent compared to using a single adapter.

Note that the workload shown in this sample is easily parallelizable and may not immediately translate to real-world gaming applications.
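When a workload does parallelize this easily, one straightforward split is by scanline: the primary adapter ray-traces the top share of the rows and the secondary adapter renders the rest. A minimal sketch of that split, where splitRows is a hypothetical helper of our own and not part of the sample:

```cpp
#include <algorithm>
#include <utility>

// Split `height` scanlines between two GPUs. `share` is the fraction of the
// frame (0.0–1.0) assigned to the primary adapter.
// Returns {primaryRows, secondaryRows}.
std::pair<int, int> splitRows(int height, float share)
{
    share = std::max(0.0f, std::min(1.0f, share));      // clamp to [0, 1]
    int primaryRows = static_cast<int>(height * share); // top of the frame
    return { primaryRows, height - primaryRows };       // bottom goes to GPU B
}
```

For a 1080-row frame with a 60/40 split, splitRows(1080, 0.6f) assigns 648 rows to the primary adapter and 432 to the secondary; adjusting the share at runtime is how a sample like this one can search for the best balance between an iGPU and a dGPU.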

Appendix: Sample Architecture Overview

This sample is architected as follows:

  • WinMain.cpp
    • Entry point to the application
    • Creates objects and instantiates renderer
  • DXDevice.cpp
    • Encapsulates ID3D12Device object alongside related objects
    • Contains command queue, allocator, render targets, fences, and descriptor heaps
  • DXRenderer.cpp
    • Base renderer class
    • Implements shared functionality (for example, creating vertex buffer or updating texture)
  • DXMultiAdapterRenderer.cpp
    • Performs all core, implementation-specific rendering functionality (that is, sets up the pipeline, loads assets, and populates command lists)
  • DXCrossAdapterResources.cpp
    • Abstracts creation and updating of multi-adapter resources
    • Handles copying of resources and fencing between both GPUs

DXMultiAdapterRenderer.cpp consists of the following functions:

public:
    DXMultiAdapterRenderer(std::vector<DXDevice*> devices, MS::ComPtr<IDXGIFactory4> dxgiFactory, UINT width, UINT height, HWND hwnd);
    virtual void OnUpdate() override;
    float GetSharePercentage();
    void IncrementSharePercentage();
    void DecrementSharePercentage();
protected:
    virtual void CreateRootSignatures() override;
    virtual void LoadPipeline() override;
    virtual void LoadAssets() override;
    virtual void CreateCommandLists() override;
    virtual void PopulateCommandLists() override;
    virtual void ExecuteCommandLists() override;
    virtual void MoveToNextFrame() override;

This class implements all the core rendering functionality. The LoadPipeline() and LoadAssets() functions are responsible for creating all necessary root signatures, compiling shaders, and creating pipeline state objects as well as specifying and creating all textures, constant buffers, and vertex buffers and their associated views. All necessary command lists are created at this time as well.

For each frame, PopulateCommandLists() and ExecuteCommandLists() are called.

To separate the traditional DirectX 12 rendering functionality from the functionality needed to use multiple adapters, all of the cross-adapter logic is encapsulated in the DXCrossAdapterResources class, which contains the following functions:

public:
    DXCrossAdapterResources(DXDevice* primaryDevice, DXDevice* secondaryDevice);
    void CreateResources();
    void CreateCommandList();
    void PopulateCommandList(int currentFrameIndex);
    void SetupFences();

The CreateResources(), CreateCommandList(), and SetupFences() functions are all called upon initialization to create the cross-adapter resources and initialize the synchronization objects.

Every frame, the PopulateCommandList() function is called to populate the copy command list.

The DXCrossAdapterResources class contains a separate command allocator, command queue, and command list that are used for copying resources from a render target on the primary adapter into the cross-adapter resources.

About the Author

This article was written by Nicolas Langley, an intern working with Intel's Visual Computing Software Division on the Multi-Adapter Support in DirectX* 12 project.

Notices:

Intel technologies may require enabled hardware, specific software, or services activation. Performance varies depending on system configuration. Check with your system manufacturer or retailer.

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark* and MobileMark*, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products.

No license (express or implied, by estoppel or otherwise) to any intellectual property rights is granted by this document.

Intel disclaims all express and implied warranties, including without limitation, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement, as well as any warranty arising from course of performance, course of dealing, or usage in trade.

This document contains information on products, services and/or processes in development. All information provided here is subject to change without notice. Contact your Intel representative to obtain the latest forecast, schedule, specifications and roadmaps.

The products and services described may contain defects or errors known as errata which may cause deviations from published specifications. Current characterized errata are available on request.

Copies of documents which have an order number and are referenced in this document may be obtained by calling 1-800-548-4725 or by visiting www.intel.com/design/literature.htm.

Intel and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries.

*Other names and brands may be claimed as the property of others.

© 2015 Intel Corporation.


IDF 2015 – Observations and Wrap Up


Download Document

IDF 2015 has come and gone, but it was a great ride. Game developers had a lot to see and learn at this year’s Intel Developer Forum (IDF). It was a good opportunity to network, learn about the latest Intel technology, and share development techniques. One of the big topics this year was gaming and game development, including learning about and sharing ideas for Intel® RealSense™ technology and Intel graphics.

Unfortunately not every game developer could make it to the Moscone Center in San Francisco, so this recap highlights the key events. Enjoy, and hope to see you at IDF 2016!

The “Game”- Changer with Doug Fisher and Kirk Skaugen

Doug Fisher, Senior Vice President and General Manager, Software and Services Group, and Kirk Skaugen, Senior Vice President and General Manager, Client Computing Group, shared Intel’s role as the partner of innovation on the PC, including tools for game developers that make it easier to develop across multiple operating environments. During the mega-session, Chris Roberts, founder of Roberts Space Industries and creator of Star Citizen and the Wing Commander* universe, made an appearance. He discussed the difficulty of creating a seamless game universe with no loading screens, and how the 6th Generation Intel® Core™ Processor Family (codenamed Skylake) and Intel® Optane™ technology will help. CLG Red, the Intel-sponsored, all-woman professional gaming group from Counter Logic Gaming, also made an appearance. Doug Fisher also talked about Achievement Unlocked, the Intel Game Developer Program, and revealed the 2015 Intel® Level Up contest winner.

You can watch the entire session here.

Intel® Buzz Workshop Game Developer Camp

This year we brought the successful and popular Intel® Buzz Workshop to IDF. At this day-long event, attendees participated in a Q&A session, a panel session, and a speed-dating-style mentoring session. Many people won prizes, like Intel® Solid-State Drives and the Intel® Compute Stick.

The workshop kicked off with the announcement of the Intel® Game Quality Assurance Evangelism Program. This program can help you deliver a top-notch PC game experience through QA testing assistance. Intel is working with top QA labs in the industry to help educate and support QA testing for game developers.


The first guest speaker of the day was Susan O'Connor. Her topic was “How to Change the Dev Cycle for the Better.” Susan has been a game writer for over 10 years, working on big games such as BioShock* and the Tomb Raider* reboot. Susan talked about the importance of writing and story in game development.


After Susan O’Connor’s great talk, we held a panel on alternative funding methods.

Participating in the panel were:

Together they discussed the myriad ways to fund game development, from the “friends and family plan” to angel investors. The discussion was fun, informative, and engaging, with many questions from the game developers in attendance.


Following the panel, the last session of the day was “Speed Dating with Mentors.” During the session, participants had the opportunity to meet with all of the speakers and industry mentors present, and ask questions and solicit advice on their experiences in the industry. In addition to speakers we had:


IDF 2015 Technical Sessions

Technical sessions were presented by Intel and industry experts, giving developers relevant technical information on the latest technologies and products. Topics included hyper-scaling, the 6th Generation Intel Core Processor Family (codenamed Skylake), PCI Express* 4.0, and 3D optimization for Intel graphics. Hands-on labs were also included. Here is a list of all the technical sessions held at IDF this year. Webcasts or audio recordings are available for most sessions.

Developer Showcase

Developers received hands-on experience with disruptive and experimental software solutions from Intel® Software Innovators at the Intel® Software Innovations Pavilion. These early adopters are leveraging the latest Intel RealSense technology to create new experiences in gaming and entertainment. The developer showcase featured 9 indie games.


Experience the Possibilities

This was the first IDF with a focus on gaming, and it will not be the last. It was a great event with lots to see and learn for game developers. Game developers were able to experience the latest Intel technology and see some of the possibilities that it offers. Attendees of the Intel Buzz Workshop Game Developer Camp were able to interact with industry mentors and discuss many topics. The “Game”- Changer with Doug Fisher and Kirk Skaugen showed everyone the possibilities with the 6th Generation Intel Core Processor and guests such as Chris Roberts gave insight into new game experiences with Intel technology. Follow this link to see all that happened at IDF 2015 and hope to see you at IDF 2016!

Best UX Practices for Intel® RealSense™ Camera (F200) Applications


Intel® RealSense™ technology enables us to redefine how we interact with our computing devices, including allowing the user to interact naturally through hand gestures. To help you learn best practices for developing a natural user interface (NUI) application for the F200 camera using the Intel® RealSense™ SDK, members of the Experience Design and Development team within Intel’s Perceptual Computing Group recorded a series of 15 short videos. The goal of the series is to enable you to design a successful user interface experience into your project from the start. The videos cover a broad range of topics, from basics like understanding the user interaction zone and hand tracking considerations to user tutorials and testing. You can watch all the videos packaged into a series or choose individual videos from the list below.

Watch the whole series:

Browse and select topics of interest from the full list of videos:

How to Conduct User Testing | Intel® RealSense™ Camera (F200)

When developing an Intel® RealSense™ application using a natural user interface it is important to keep the user in mind. User testing is one of the best ways to determine how users are using the features you are providing them. This tutorial gives developers some best practices on how to conduct user testing to create better applications using the F200 depth camera. With Lisa Mauney.

How to Design for Two-Handed Interactions | Intel® RealSense™ Camera (F200)

Intel® RealSense™ technology can detect two hands, but there are problems specific to two-handed interactions that you might encounter during development of gesture applications for the F200 depth camera. This tutorial provides some best practices and simple solutions to these issues. With Lisa Mauney.

How to Provide User Tutorials and Instructions | Intel® RealSense™ Camera (F200)

When using Intel® RealSense™ technology in your application, you may present the end user with one or more new methods of interaction, such as gesture. To ensure a positive experience, you’ll want to provide these users with a guide to help them understand how to interact with the technology. This tutorial covers some best practices you can use to educate and inform your users. With Chandrika Jayant.

How to Minimize User Fatigue: Timing and Repetition | Intel® RealSense™ Camera (F200)

When designing a natural user interface application for Intel® RealSense™ technology, be sure to account for user fatigue by building in breaks that support the ways users will interact with your application. This tutorial covers some best practices for designing applications built for the F200 depth camera. With Chandrika Jayant.

How to Understand Speed of Motion | Intel® RealSense™ Camera (F200)

When developing with Intel® RealSense™ technology you need to understand the speed and precision of hand gestures and how that can impact your application’s user experience. This tutorial gives you some best practices focused on how to make your application provide the feedback required to give users a better feeling of control. With Robert Cooksey.

How to Use Background Segmentation/Separation | Intel® RealSense™ Camera (F200)

Segmentation is one of the features developers can use with Intel® RealSense™ technology. This tutorial covers some of the best practices for using this feature in applications built for the F200 depth camera. With Robert Cooksey.

How to Understand World Space Versus Screen Space | Intel® RealSense™ Camera (F200)

When creating an application for Intel® RealSense™ technology, developers must understand the relationship between the screen space and physical space. This tutorial provides some best practices to help your application better utilize the spaces the F200 depth camera can work in. With Robert Cooksey.

How to Design around Occlusion: Hand and User Blocking | Intel® RealSense™ Camera (F200)

A common problem in Intel® RealSense™ application development involving gestures is occlusion, where the user’s hand blocks the screen or one hand covers the other hand. If the application doesn’t handle this situation well, the problem can negatively impact the user experience. This tutorial gives developers some best practices to avoid occlusion problems when using the F200 depth camera. With Lisa Mauney.

How to Minimize User Fatigue: Supporting Natural Motion | Intel® RealSense™ Camera (F200)

When designing a natural user interface application using Intel® RealSense™ technology, remember that users are interacting within a physical space, and your design should incorporate and support natural motion. This tutorial provides some best practices to help you take into account the real world natural motions of users in your application designed for the F200 depth camera. With Robert Cooksey.

How to Understand the Interaction Zone | Intel® RealSense™ Camera (F200)

Understanding the physical space in which a user can interact with your Intel® RealSense™ application is important in ensuring that users have a positive experience with your application. This tutorial covers some best practices for understanding the interaction zones using the F200 depth camera. With Chandrika Jayant.

How to Use Visual Feedback: Interacting with Objects | Intel® RealSense™ Camera (F200)

Users should always know which objects they can interact with onscreen in an Intel® RealSense™ application. This tutorial discusses some best practices that focus on providing visual feedback that will help your users better interact with objects in the application. With Chandrika Jayant.

How to Minimize User Fatigue: Understanding the Limits of Input Precision | Intel® RealSense™ Camera (F200)

Several factors limit input precision in natural user interface applications for Intel® RealSense™ technology. When developing your application, it is important to be forgiving of user interaction that might drift or need a buffer. This tutorial covers several of the best practices that developers can use to mitigate problems that users might have with precise movements. With Robert Cooksey.

How to Develop for Multiple Form Factors | Intel® RealSense™ Camera (F200)

When developing your application to take advantage of Intel® RealSense™ technology, you need to be aware of the full range of device types that may be used to interact with your application. Each form factor, including laptops, tablets, and all-in-ones, has a different cone of interaction to take into account. This tutorial gives you some best practices for developing applications across multiple platforms using the F200 depth camera. With Chandrika Jayant.

How to Use Visual Feedback: Cursors | Intel® RealSense™ Camera (F200)

In a natural user interface Intel® RealSense™ application, users need to know where they are within the interaction zone in order to give them a sense of control. Providing a cursor onscreen is one of the best ways to manage the user experience. This tutorial gives you some best practices for providing hand location cues when developing an application that uses the F200 depth camera. With Chandrika Jayant.

How to Design for Different Range Options | Intel® RealSense™ Camera (F200)

While the F200 camera has a range of 20–120 cm, it doesn't make sense to develop your Intel® RealSense™ application to cover the entire range. This tutorial gives some best practices for using the F200 depth camera range across the different uses that the camera supports. With Lisa Mauney.

Performance Gains for SunGard’s Adaptiv Analytics* on the Intel® Xeon® Processor E7-8890 V3


Introduction

SunGard’s Adaptiv Analytics* allows traders to run pre-deal cost-of-credit calculations. Due to the volume and complexity of products, these calculations are often time consuming, causing delays that can lead to missed opportunities or to acting on incomplete information.

Since SunGard’s customer usage model is often running multiple instances simultaneously instead of running a single instance as fast as possible, running SunGard’s Adaptiv Analytics on systems with more cores can help improve the performance dramatically. SunGard’s adoption of Intel® Advanced Vector Extensions 2 (Intel® AVX2) and Intel’s investment in parallel computing through the use of vectorization lanes and registers has helped provide superior scalability and performance for SunGard’s industry-leading risk management solution. These improvements are helping to meet the growing computational requirements of the market and the regulatory environment.

This paper describes how Adaptiv Analytics running on systems equipped with the Intel® Xeon® processor E7-8890 v3 gained a performance improvement over running on systems with the previous-generation Intel® Xeon® processor E7-4890 v2.

SunGard’s Adaptiv Analytics and Intel® Xeon® Processor E7-8890 V3

On the hardware side, the Intel Xeon processor E7-8890 v3 has 18 cores compared to the 15 cores of the Intel Xeon processor E7-4890 v2, providing more parallelism. In addition, the E7-8890 v3 has greater memory bandwidth and uses DDR4 memory, while the E7-4890 v2 uses DDR3, further speeding up execution.

On the instruction-set side, the Intel Xeon processor E7-8890 v3 supports Intel AVX2, while the Intel Xeon processor E7-4890 v2 supports only Intel® Advanced Vector Extensions (Intel® AVX). Let’s see how Intel AVX2 improves the performance of this product.


Processor                  # Cores  # Threads  Memory  Vectorization
Intel® Xeon® E7-4890 v2    15       30         DDR3    Intel® AVX
Intel® Xeon® E7-8890 v3    18       36         DDR4    Intel® AVX2

Table 1. Processor comparison

SunGard’s Adaptiv Analytics uses the Monte Carlo simulation to perform risk analysis. The Monte Carlo simulation is often used whenever there is a need to analyze the behavior of activities or processes that involve uncertainty, such as risk management. This simulation calculates the results multiple times using a random set of values, giving the decision maker a range of possible outcomes. The random set of values is generated from the probability functions.

To increase the accuracy of the possible outcome, the Monte Carlo simulation needs to run many repetitions, possibly up to 10,000 times. This is where Intel AVX2, along with the E7-8890 v3 features mentioned above, provides advantages over the E7-4890 v2.
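To illustrate the principle, the toy sketch below (our own example, not SunGard's engine) estimates the expected value of a random payoff by averaging repeated random scenarios; the more scenarios run, the tighter the estimate, which is why production risk calculations repeat so many times:

```cpp
#include <cmath>
#include <random>

// Toy Monte Carlo: estimate E[exp(Z)] for Z ~ N(0, 1) by averaging
// many simulated scenarios (the true value is e^0.5 ≈ 1.6487).
double monteCarloMean(int scenarios, unsigned seed = 42)
{
    std::mt19937 gen(seed);
    std::normal_distribution<double> z(0.0, 1.0);
    double sum = 0.0;
    for (int i = 0; i < scenarios; ++i)
        sum += std::exp(z(gen));    // one simulated outcome
    return sum / scenarios;         // estimator tightens as scenarios grow
}
```

Each scenario is independent, so the inner loop vectorizes and parallelizes well, which is exactly where Intel AVX2 lanes and additional cores pay off.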

The following sections describe two functions that are frequently used by the Monte Carlo simulation for vector and matrix manipulation and that are optimized by Intel AVX2.

daxpy

Function daxpy computes the following operation on double-precision vectors:

y ← α × x + y

Where:

x and y: vectors

α: constant


dgemv

Function dgemv calculates the following operation on double-precision values:

y ← α × A × x + β × y

or

y ← α × Aᵀ × x + β × y

Where:

α and β: constants

x and y: vectors

A: matrix

Functions daxpy and dgemv are implemented in the Intel® Math Kernel Library (Intel® MKL) and Intel® Integrated Performance Primitives (Intel® IPP). Starting with version 11 of Intel MKL and version 8 of Intel IPP, both functions are optimized using Intel AVX2. SunGard’s Adaptiv Analytics uses the Intel MKL and Intel IPP versions of daxpy and dgemv, thus taking advantage of the Intel AVX2 performance improvements in the Intel Xeon processor E7-8890 v3. Using Intel’s libraries also means that developers don’t have to modify their code to take advantage of new features in future Intel® Xeon® processors.
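For reference, the plain scalar loops below show what the two operations compute. These are illustrative only and stand in for the Intel AVX2-vectorized implementations in Intel MKL and Intel IPP; the row-major matrix layout is an assumption of this sketch:

```cpp
#include <cstddef>
#include <vector>

// daxpy: y = alpha * x + y, on double-precision vectors.
void daxpy(double alpha, const std::vector<double>& x, std::vector<double>& y)
{
    for (std::size_t i = 0; i < y.size(); ++i)
        y[i] += alpha * x[i];
}

// dgemv: y = alpha * A * x + beta * y, with A stored row-major (rows x cols).
void dgemv(double alpha, const std::vector<double>& A, std::size_t rows,
           std::size_t cols, const std::vector<double>& x,
           double beta, std::vector<double>& y)
{
    for (std::size_t r = 0; r < rows; ++r) {
        double dot = 0.0;
        for (std::size_t c = 0; c < cols; ++c)
            dot += A[r * cols + c] * x[c];  // (A * x)[r]
        y[r] = alpha * dot + beta * y[r];
    }
}
```

The independent multiply-add chains in both loops are what Intel AVX2's 256-bit registers and fused multiply-add instructions accelerate.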

Performance test procedure

To prove that Intel AVX2 along with the new microarchitecture in the Intel Xeon processor E7 v3 improve the performance of SunGard’s Adaptiv Analytics, we performed tests on two platforms. One system was equipped with the Intel Xeon processor E7-8890 v3 and the other with the Intel Xeon processor E7-4890 v2.

We created a launcher to execute a given number of instances of a command-line tool called RunCalcDef.exe, which performs the calculations using SunGard’s Adaptiv Analytics engine. On the system equipped with the Intel Xeon processor E7-8890 v3, we launched 18 instances with 8 nodes per instance, while on the system equipped with the Intel Xeon processor E7-4890 v2 we launched 30 instances with 4 nodes per instance. Node is the term SunGard’s Adaptiv Analytics uses for the number of threads operating on a subset of the 10,000 scenarios of the Monte Carlo simulation.

Why didn’t we use the same number of instances and nodes on both systems? The system equipped with the Intel Xeon processor E7-8890 v3 has 4 sockets, each of which can handle 36 threads with hyper-threading on, for a total of 144 threads for the whole system. The system equipped with the Intel Xeon processor E7-4890 v2 has 4 sockets, each of which can handle 30 threads with hyper-threading on, for a total of 120 threads for the whole system. Using the same number of instances and nodes on the system with the Intel Xeon processor E7-8890 v3 as on the system with the Intel Xeon processor E7-4890 v2 would have over-subscribed the cores, leading to a decrease in performance.

The tests computed throughput, in calculations per second, by dividing the total number of calculations executed (a known value based on the number of instances) by the average execution time in seconds.

Test configurations

System equipped with Intel Xeon processor E7-8890 v3

  • System: Pre-production
  • Processors: Intel Xeon processor E7-8890 v3 @2.5 GHz
  • Cores: 18
  • Memory: 384 GB DDR4-2133 MHz

System equipped with Intel Xeon processor E7-4890 v2

  • System: Pre-production
  • Processors: Intel Xeon processor E7-4890 v2 @2.8 GHz
  • Cores: 15
  • Memory: 512 GB DDR3-1600 MHz

Operating System: Microsoft Windows Server* 2012 R2

Application: SunGard Adaptiv Benchmark v13.1

Test results


Figure 1. Performance comparison between processors.

Figure 1 shows a 1.47x performance gain of the system with the Intel Xeon processor E7-8890 v3 over that of the system with the Intel Xeon processor E7-4890 v2. The performance gain is due to the enhanced microarchitecture, increase in core count, better memory type (DDR4 over DDR3), and Intel AVX2.

Conclusion

More cores, enhanced microarchitecture, and the support of DDR4 memory contributed to the performance improvement of SunGard’s Adaptiv Analytics on systems equipped with the Intel Xeon processor E7-8890 v3 compared to those with Intel Xeon processor E7-4890 v2. With the introduction of Intel AVX2, matrix manipulations get a boost. In addition, applications that make use of Intel MKL and Intel IPP will receive a performance boost without having to change the source code, since their functions are optimized using Intel AVX2.

References

[1] Wikipedia. Basic Linear Algebra Subprograms. https://en.wikipedia.org/wiki/Basic_Linear_Algebra_Subprograms

[2] Investopedia. Corporate Finance – Risk-Analysis Techniques. http://www.investopedia.com/exam-guide/cfa-level-1/corporate-finance/risk-analysis-techniques.asp

[3] Intel® Integrated Performance Primitives (Intel® IPP). https://software.intel.com/en-us/intel-ipp

[4] Intel® Math Kernel Library (Intel® MKL) https://software.intel.com/en-us/intel-mkl

[5] LAPACK: Linear Algebra PACKage – dgemv.f. http://www.netlib.org/lapack/explore-html/dc/da8/dgemv_8f_source.html

[6] LAPACK: Linear Algebra PACKage – daxpy. http://www.netlib.org/lapack/explore-html/d9/dcd/daxpy_8f.html

Intel RealSense SDK - First Contact


In this article we take a first look at the Intel® RealSense™ SDK, trying to understand what it is, what its features are, and how it can be useful to us.
This article refers to version 6.0.x of the Intel® RealSense™ SDK, currently downloadable at https://software.intel.com/en-us/intel-realsense-sdk/download

The 3D Cameras

For those completely new to Intel® RealSense™: in short, it is a hardware and software platform that enables the creation of next-generation "immersive" applications, applications that can exploit the concept of the Natural User Interface (gestures, voice, posture, etc.) to provide the user with an advanced UX.
We speak of hardware because the heart of the whole platform is a pair of 3D cameras, the Intel® RealSense™ Camera F200 and R200.
The F200, shown in the photo below, is a "User Facing Camera," that is, a camera the user interacts with directly, since it is positioned facing the user (hence the F prefix in the name).

The R200, instead, is a "World Facing Camera," a camera mounted on the rear of a device (typically a tablet) that is therefore able to frame the surrounding world.

The F200 uses infrared-emission technology to obtain a 3D image of the environment, while the R200 uses stereoscopic vision (that is, it has two ordinary cameras) and reconstructs the surroundings from the differences between the two off-axis views.
The SDK can work with both cameras, even though they have different characteristics, provide different functionality, and target different scenarios.
More information about the cameras (and the option to purchase them) is available at https://software.intel.com/en-us/intel-realsense-sdk/download.
There are already products on sale with miniaturized versions built in; a list can be viewed at http://www.intel.com/content/www/us/en/architecture-and-technology/realsense-devices.html
Finally, the F200 enables Windows Hello on the new Windows 10 operating system.

Installing the SDK

Using the URL given earlier, we can download the package containing the SDK installer, or proceed using a web installer.
I recommend downloading the full offline package so you can reuse it several times without tying up your internet connection each time.
The offline package takes up more than a gigabyte of disk space (about 1.3 GB), but it contains a great deal of material, so the footprint is entirely justified.
Once the package is run, the installer extracts everything it needs before displaying the screen that lets us select what we want to install.


The installation package lets us manually select the features we are interested in, or provides "profiles" for developing against one of the two cameras in particular.

Having selected what we need, we can proceed with the actual installation of the SDK and all its features.
Once installation is complete, a folder is available on the desktop in which we can find:

  • A Documentation folder with PDF, CHM, and HTML files that help with development;
  • A Samples folder with example applications (both as executables and as source code) in the languages and development platforms supported by the Intel® RealSense™ SDK;
  • A Tools folder with executables that let us immediately check whether a camera attached to our PC is working correctly.

Among these, I would mention Camera Explorer, which lets you verify the camera's video stream and depth stream:

and SDK Information Viewer, which lets us check the installed SDK version, the characteristics of the system and the cameras, and much more.

Remember to also download and install the Depth Camera Manager (DCM) for the camera you are using. The DCM is a Windows service that allows multiple applications developed with the SDK, plus one application that does not use the SDK, to access the camera's data sources simultaneously without interfering with one another.
The DCM is external to the SDK because it is specific to the camera in use (a sort of driver), whereas the SDK is camera-independent.
Among other things, the DCM also allows the camera firmware to be updated if needed.

Hardware Requirements

To conclude this article, let's look at the minimum hardware requirements for the PC on which we intend to use the SDK.

  • 4th generation Intel® Core™ processor (or later)
  • 8 GB of free hard-disk space
  • 64-bit Microsoft Windows* 8.1-10 operating system in desktop mode
  • USB 3.0 port for the camera

Intel® INDE Professional Edition Promotion: Terms & Conditions


Intel® INDE Professional Edition Promotion
Terms and Conditions
August 27, 2015

  1. By registering with Intel Registration Center, you may download a free copy of the INDE Professional Edition Tool Suite, subject to the terms and conditions of the accompanying End User License Agreement.
  2. No purchase of any kind is required to participate in this offer.
  3. This offer is void where prohibited and only valid from August 31st, 2015 through January 31, 2016. Dates are subject to change.
  4. Downloads are limited to one copy per person.
  5. Intel is not responsible for failed downloads due to lost, failed, delayed or interrupted connections or miscommunications, or other electronic malfunctions, or if the registrant submits an incorrect or invalid name, email and mailing address.
  6. You have read and agree to Intel’s privacy policy found here.
  7. For questions, contact Intel Customer Support.

Intel® Parallel Studio XE 2015 Update 5 Professional Edition for Fortran Windows*


Intel® Parallel Studio XE 2015 Update 5 Professional Edition for Fortran parallel software development suite combines Intel's Fortran compiler; performance and parallel libraries; error checking, code robustness, and performance profiling tools into a single suite offering.  This new product release includes:

  • Intel® Parallel Studio XE 2015 Update 5 Composer Edition for Fortran - includes Intel® Visual Fortran Compiler and Intel® Math Kernel Library (Intel® MKL)
  • Intel® Advisor XE 2015 Update 1
  • Intel® Inspector XE 2015 Update 1
  • Intel® VTune™ Amplifier XE 2015 Update 4.1
  • Sample programs
  • Documentation

New in this release:

  • Components updated to current versions

Note:  For more information on the changes listed above, please read the individual component release notes.

See the previous release's ReadMe for what was new in that release.

Resources

Contents 
File:  parallel_studio_xe_2015_update5_online_setup.exe
Online installer

File:  parallel_studio_xe_2015_update5_setup.exe
Product for developing 32-bit and 64-bit applications

Intel® Parallel Studio XE 2015 Update 5 Professional Edition for Fortran and C++ Windows*


Intel® Parallel Studio XE 2015 Update 5 Professional Edition for Fortran and C++ parallel software development suite combines Intel's C/C++ compiler and Fortran compiler; performance and parallel libraries; error checking, code robustness, and performance profiling tools into a single suite offering.  This new product release includes:

  • Intel® Parallel Studio XE 2015 Update 5 Composer Edition for Fortran and C++ - includes Intel® Visual Fortran Compiler, Intel® C++ Compiler, Intel® Integrated Performance Primitives (Intel® IPP), Intel® Threading Building Blocks (Intel® TBB) and Intel® Math Kernel Library (Intel® MKL)
  • Intel® Advisor XE 2015 Update 1
  • Intel® Inspector XE 2015 Update 1
  • Intel® VTune™ Amplifier XE 2015 Update 4.1
  • Sample programs
  • Documentation

New in this release:

  • Components updated to current versions

Note:  For more information on the changes listed above, please read the individual component release notes.

See the previous release's ReadMe for what was new in that release.

Resources

Contents 
File:  parallel_studio_xe_2015_update5_online_setup.exe
Online installer

File:  parallel_studio_xe_2015_update5_setup.exe
Product for developing 32-bit and 64-bit applications


GPU Detect



Intel Corporation


Features / Description

Date: 9/9/2015

GPU Detect is a short sample that demonstrates how to detect the primary graphics hardware present in a system (including the 6th Generation Intel® Core™ processor family). The code download includes documentation, is meant to be used as a guideline, and should be adapted to the game's specific needs.

 

System Requirements

Hardware:

 
  • CPU: Intel® CPU supported
  • GFX: uses Microsoft DirectX* 10 graphics API on Microsoft DirectX* 10 (or better) hardware
  • OS: Microsoft Windows Vista, Windows* 7 SP1 or newer
 

Software:


Toolkits Supported:

  Microsoft Windows* SDK May 2010 or newer (compatible with June 2010 DirectX SDK)

Compilers Supported:

  • Microsoft Visual Studio* 2008
  • Microsoft Visual Studio* 2010
  • Microsoft Visual Studio* 2013

Libraries Required:
  • Microsoft* C Run-Time Libraries (CRT) 2008/2010/2013

Dependencies



     
     

    WebGL* in Chromium*: Behind the scenes


    Have you ever wondered how your WebGL* code is executed and what happens before it hits the drivers? You might have heard about a few things already, for example that Chromium* uses a separate process to execute GPU code and has its own wrappers around the GL calls. This article is exactly about this abstraction layer and is meant for people who want a better understanding of WebGL or who want to start developing the GPU code in Chromium.

    Chromium uses a multi-process1 architecture. Each webpage has its own rendering process, which runs in a sandbox and is very restricted in what it can access. This makes it much harder for malicious web content to mess with your computer. However, this is bad news for GPU acceleration since the renderer doesn't even have access to the GPU. This is solved by adding an extra process just for the GPU commands—which sounds horrible at first, as it introduces a lot of interprocess communication, and the textures also have to be copied between the processes, but it's not as bad as you'd imagine. For example, the textures usually only have to be copied once at initialization, and modern OpenGL is designed to minimize the number of commands that have to be sent to the GPU. This separation actually improves performance because WebGL can execute independently of all the other rendering and parsing.

    The Command Buffer

    The GPU process and renderer process communicate using a server-client model, where the renderer is the client. When a renderer wants to execute a GL command like glViewport, it cannot directly call the function from the driver because of the security sandbox. Instead, it creates a representation of the command, puts it into a buffer in shared memory called CommandBuffer2, and sends a message to the server to communicate approximately, "Hey, I put some stuff into the buffer, please execute it for me." Realistically, the renderer puts a number of commands in the buffer and sends the server a message of, "Hey, I put 10 commands in the buffer, please execute them."

    This is all nice and fast as long as you just send commands and don't ask questions. Most commands don't return a value, and thus most of the communication can be asynchronous. But as soon as you ask the GPU process a question, like, "What is the result of this command?" the communication has to do a round trip, and the renderer has to wait for the result. Checking the result values when you don't need them can make the performance of your application much worse. Even something like checking glGetError has a high cost.

    The commands you can send to the GPU process are pretty much the same as those you can send to the GL ES 2.0 API,3 apart from a few incompatibilities.4 However, what actually gets executed depends on your platform. It could be OpenGL, it could be DirectX (through ANGLE5), and some commands could get executed through GL extensions if the extensions are deemed faster. The results might not be the same as if you wrote native OpenGL ES 2.0 code because Chromium enforces better security measures. For example, it does extra validation of the parameters and clears the allocated buffer memory so that one web page cannot read leftover data from a different page.

    There is only one GPU process, and it doesn't care who sends it commands: it just remembers the context for each source and keeps a separate shared memory block and command buffer for each. It isn't used just for WebGL. Normal rendering is done via Skia6, which also issues its requests through the command buffer rather than directly to the driver. Since a single web page can contain multiple elements that need GL commands (for example, multiple canvas elements), it actually renders into a texture (using FBOs7) instead of directly into the framebuffer, and the compositor takes care of arranging the page.

    Example: glViewport

    This diagram shows an outline of what happens when the glViewport command gets executed, starting from the left and eventually getting to the GPU:

    Below is a simplified stack trace of the renderer (GPU client) when glViewport gets called from WebGL, with the most recent function call at the top:

    gpu::gles2::GLES2CmdHelper::Viewport
    gpu::gles2::GLES2Implementation::Viewport
    blink::WebGLRenderingContextBase::viewport
    blink::HTMLCanvasElement::getContext
    v8::internal::Builtin_HandleApiCall
    ...

    Once JavaScript* gets parsed and executed in the V8 engine8, the glViewport API call gets handled in Blink9 (a fork of WebKit), which gets the correct rendering context, checks the parameters (in this case, checking that the viewport width and height are positive), and then sends the command to the GLES2CmdHelper. This class takes care of the whole business with the command buffer by creating a representation of the command, putting it into the buffer, and (eventually) sending a message to the GPU process that there is a command it should handle.

    On the GPU server side:

    gfx::GLApiBase::glViewportFn
    gpu::gles2::GLES2DecoderImpl::DoViewport
    gpu::gles2::GLES2DecoderImpl::HandleViewport
    gpu::gles2::GLES2DecoderImpl::DoCommands
    gpu::CommandParser::ProcessCommands
    gpu::GpuScheduler::PutChanged
    gpu::CommandBufferService::Flush
    content::GpuChannel::HandleMessage
    base::MessageLoop::Run
    ...

    The GPU process sits there and waits in a loop for messages (things to do), as you can see on the bottom of this simplified stack trace. When a message arrives, the GPU process jumps through a couple of callbacks and handlers depending on the type of message. In this case, the message is something like, "I put some commands into your buffer." The buffer gets flushed (meaning it synchronises the buffer position between the two processes, but the renderer can keep adding commands to it), and via another set of callbacks reaches the GpuScheduler, which starts processing the commands. The GLES2DecoderImpl is the main monster class that handles all the GPU commands and sends them to the driver. Again, it checks whether the parameters to glViewport are valid. Now we have almost hit the drivers, and the GLAPIBase contains thousands of lines of auto-generated bindings that do the equivalent of driver->glSomeCommand.

    Security

    Effectively, WebGL allows you to execute arbitrary code on the GPU. If there is an exploit targeting the drivers, it could possibly break out and take control of your computer. But since the GPU process doesn't have free rein either, and is sandboxed as well (just slightly less restricted than the renderer, so that it can call the 3D API directly), it's much less likely that an exploit could do damage. Drivers on various platforms are quite prone to bugs. That is why Chromium wraps each of them to work around known issues and blacklists old and buggy hardware, drivers, or GL extensions. If you are trying to figure out why some feature is not working on your device, you can find the blacklists in the source code10 in a fairly readable format. You might have also noticed in the glViewport example that extra parameter checks are done before the command even reaches the driver, which significantly decreases the chances of triggering a bug.

    Having to deal with an extra process for GPU commands adds a number of complications, but the benefits are worth it. If we gave the renderer rights to access the GPU, it would be difficult to make sure that all GL commands go through the safety controls, and eventually something would leak through. The client-side API to the command buffer doesn't even have any external dependencies, which means NaCl11 has a smaller attack surface. Right now, even if a GL command hits a bug, only the GPU process crashes, and it can be restarted. Even with the extra overhead of communication between processes, the perceived performance is better thanks to WebGL being able to execute independently from the rest of rendering, taking advantage of multi-core CPUs.

    1Chromium Design Documents: Multi-process Architecture
    2Chromium Design Documents: GPU Command Buffer
    3OpenGL ES 2.0 Reference
    4Chromium Design Documents: GPU Command Buffer - OpenGL ES 2.0 incompatibilities
    5The ANGLE* project
    6Skia* project
    7OpenGL Framebuffer Object (FBO)
    8V8 JavaScript Engine
    9Blink layout engine
    10You can look at the disabled features or list of systems where software rendering is used. Try restarting Chromium with the “--ignore-gpu-blacklist” command-line flag to ignore both of those lists.
    11Google Native Client, also known as NaCl

    Storage: Accelerate Hash Function Performance Using the Intel® Intelligent Storage Acceleration Library


    Abstract

    With the growing number of devices connected to the cloud and the Internet, data is being generated from many different sources, including smartphones, tablets, and Internet of Things devices. The demand for storage is growing every year. For cloud storage developers who are looking for ways to speed up their storage performance, the optimized hash functions in the Intel® Intelligent Storage Acceleration Library (Intel® ISA-L) accelerate the computation, providing up to an 8x performance gain over OpenSSL* algorithms. A performance study using version 2.14, the latest version of Intel ISA-L, shows the potential gain developers can achieve by applying Intel ISA-L to an existing application.

    This article captures the performance data and the system configuration for developers interested in reproducing this experiment in their own environment. Intel ISA-L can run on various Intel® server processors and provides operation acceleration through the following instruction sets:

    • Intel® Advanced Encryption Standard New Instructions (Intel® AES-NI)
    • Intel® Streaming SIMD Extensions (Intel® SSE)
    • Intel® Advanced Vector Extensions (Intel® AVX)
    • Intel® Advanced Vector Extensions 2 (Intel® AVX2)

    Benefits

    Intel ISA-L multibinary support functions allow an appropriate version of a function to be selected at first run (based on the supported instruction set) and called in place of the architecture-specific versions. Developers can deploy a single binary with multiple function versions and choose features at runtime. If code size is a concern, the architecture-specific version can be called directly instead. The base functions are written in C, and the multibinary function falls back to them if none of the required instruction sets are enabled.

    For example, if the code runs on a system based on the Intel® Xeon® processor E5 v3 family and there are three versions of a particular function (func1_sse(), func1_avx(), func1_avx2()), the multibinary function (func1()) will determine that the appropriate function to call is func1_avx2(), since that processor family supports Intel AVX2. There is also a base function (func1_base()), which the multibinary function calls if none of the required instruction sets are enabled.

    By including the Intel® instruction set extensions listed above, Intel ISA-L reduces the number of instructions executed, manipulating multiple data elements with a single instruction. See the reference section below to learn more about the extensions. Selecting the right instruction extension for the processor allows the application to take full advantage of the system bandwidth. Figure 1 below shows one process by which a developer can apply the Intel ISA-L functions in a deduplication application. In a quick study (see Figure 2), the hash functions achieved up to an 8x performance gain on the Intel® Xeon® processor E5-2650 v3.


    Figure 1. One method of applying Intel® Intelligent Storage Acceleration Library into the data deduplication process.


    Figure 2. Hash functions’ relative performance using OpenSSL* versus Intel® Intelligent Storage Acceleration Library.

    Setting Up Intel® Intelligent Storage Acceleration Library On the System

    1. To access the full suite of Intel ISA-L functions, please fill out and submit this request form.
      You will receive an email that provides information on how to get the complete ISA-L zip file.
    2. Download and unzip the library source into the OS.
    3. Read the ISA-L_Getting_Started.pdf and Release_notes.txt supplied with the source. From the Guide, choose and follow the instructions to build the source depending on your needs.

    Running the Provided Benchmarks

    1. Install “automake” to build the library and included unit tests.
    2. Run “make perfs”. This builds all unit function tests set for ‘cache cold – larger data set exceeds LLC size.’
    3. Run “make perf”. This runs each unit test supported by the platform architecture. Performance results are output to the console.

    Optional: Run “make igzip/igzip_file_perf” and “make igzip/igzip_stateless_file_perf”. This builds additional compression functions and unit tests. Compression tests (igzip_file_perf and igzip_stateless_file_perf) are run using each file of a standard corpus—The Calgary Corpus— as an input. It is available here

    Table 1 describes the platform configuration we used in our testing.

    Table 1. Tested System Configuration

    Related Links and Resources

    Intel® XDK FAQs - General

    Q1: How can I get started with Intel XDK?

    There are plenty of videos and articles that you can go through here to get started. You could also start with some of our demo apps that you think fits your app idea best and learn or take parts from multiple apps.

    Having prior understanding of how to program using HTML, CSS and JavaScript* is crucial to using Intel XDK. Intel XDK is primarily a tool for visualizing, debugging and building an app package for distribution.

    You can do the following to access our demo apps:

    • Select Project tab
    • Select "Start a New Project"
    • Select "Samples and Demos"
    • Create a new project from a demo

    If you have specific questions following that, please post it to our forums.

    Q2: Can I use an external editor for development in Intel® XDK?

    Yes, you can open your files and edit them in your favorite editor. However, note that you must use Brackets* to use the "Live Layout Editing" feature. Also, if you are using App Designer (the UI layout tool in Intel XDK) it will make many automatic changes to your index.html file, so it is best not to edit that file externally at the same time you have App Designer open.

    Some popular editors among our users include:

    • Sublime Text* (Refer to this article for information on the Intel XDK plugin for Sublime Text*)
    • Notepad++* for a lightweight editor
    • Jetbrains* editors (Webstorm*)
    • Vim* the editor
    Q3: How do I get code refactoring capability in Brackets*, the code editor in Intel® XDK?

    You will have to add the "Rename JavaScript* Identifier" extension and "Quick Search" extension in Brackets* to achieve some sort of refactoring capability. You can find them in Extension Manager under File menu.

    Q4: Why doesn’t my app show up in Google* play for tablets?

    ...to be written...

    Q5: What is the global-settings.xdk file and how do I locate it?

    global-settings.xdk contains information about all your projects in the Intel XDK, along with many of the settings related to panels under each tab (Emulate, Debug etc). For example, you can set the emulator to auto-refresh or no-auto-refresh. Modify this file at your own risk and always keep a backup of the original!

    You can locate global-settings.xdk here:

    • Mac OS X*

      ~/Library/Application Support/XDK/global-settings.xdk
    • Microsoft Windows*

      %LocalAppData%\XDK
    • Linux*

      ~/.config/XDK/global-settings.xdk

    If you are having trouble locating this file, you can search for it on your system using something like the following:

    • Windows:

      > cd /

      > dir /s global-settings.xdk
    • Mac and Linux:

      $ sudo find / -name global-settings.xdk
    Q6: When do I use the intelxdk.js, xhr.js and cordova.js libraries?

    The intelxdk and xhr libraries are only needed with legacy build tiles. The Cordova* library is needed for all. When building with Cordova* tiles, intelxdk and xhr libraries are ignored and so they can be omitted.

    Q7: What is the process if I need a .keystore file?

    Please send an email to html5tools@intel.com specifying the email address associated with your Intel XDK account in its contents.

    Q8: How do I rename my project that is a duplicate of an existing project?

    Make a copy of your existing project directory and delete the .xdk and .xdke files from them. Import it into Intel XDK using the ‘Import your HTML5 Code Base’ option and give it a new name to create a duplicate.

    Q9: How do I try to recover when Intel XDK won't start or hangs?
    • If you are running Intel XDK on Windows* it must be Windows* 7 or higher. It will not run reliably on earlier versions.
    • Delete the "project-name.xdk" file from the project directory that Intel XDK is trying to open when it starts (it will try to open the project that was open during your last session), then try starting Intel XDK. You will have to "import" your project into Intel XDK again. Importing merely creates the "project-name.xdk" file in your project directory and adds that project to the "global-settings.xdk" file.
    • Rename the project directory Intel XDK is trying to open when it starts. Create a new project based on one of the demo apps. Test Intel XDK using that demo app. If everything works, restart Intel XDK and try it again. If it still works, rename your problem project folder back to its original name and open Intel XDK again (it should now open the sample project you previously opened). You may have to re-select your problem project (Intel XDK should have forgotten that project during the previous session).
    • Clear Intel XDK's program cache directories and files.

      On a [Windows*] machine this can be done using the following on a standard command prompt (administrator not required):

      > cd %AppData%\..\Local\XDK

      > del *.* /s/q

      To locate the "XDK cache" directory on [OS X*] and [Linux*] systems, do the following:

      $ sudo find / -name global-settings.xdk

      $ cd <dir found above>

      $ sudo rm -rf *

      You might want to save a copy of the "global-settings.xdk" file before you delete that cache directory and copy it back before you restart Intel XDK. Doing so will save you the effort of rebuilding your list of projects. Please refer to this question for information on how to locate the global-settings.xdk file.
    • If you save the "global-settings.xdk" file and restored it in the step above and you're still having hang troubles, try deleting the directories and files above, along with the "global-settings.xdk" file and try it again.
    • Do not store your project directories on a network share (Intel XDK currently has issues with network shares that have not yet been resolved). This includes folders shared between a Virtual machine (VM) guest and its host machine (for example, if you are running Windows* in a VM running on a Mac* host). This network share issue is a known issue with a fix request in place.

    Please refer to this post for more details regarding troubles in a VM. It is possible to make this scenario work but it requires diligence and care on your part.

    • There have also been issues with running behind a corporate network proxy or firewall. To rule this out, try running Intel XDK from your home network where, presumably, you have a simple NAT router and no proxy or firewall. If things work correctly there, then your corporate firewall or proxy may be the source of the problem.
    • Issues with Intel XDK account logins can also cause Intel XDK to hang. To confirm that your login is working correctly, go to the Intel XDK App Center and confirm that you can login with your Intel XDK account. While you are there you might also try deleting the offending project(s) from the App Center.

    If you can reliably reproduce the problem, please send us a copy of the "xdk.log" file that is stored in the same directory as the "global-settings.xdk" file to html5tools@intel.com.

    Q10: Is Intel XDK an open source project? How can I contribute to the Intel XDK community?

    No, it is not an open source project. However, it utilizes many open source components that are then assembled into Intel XDK. While you cannot contribute directly to the Intel XDK integration effort, you can contribute to the many open source components that make up Intel XDK.

    The following open source components are the major elements that are being used by Intel XDK:

    • Node-Webkit
    • Chromium
    • Ripple* emulator
    • Brackets* editor
    • Weinre* remote debugger
    • Crosswalk*
    • Cordova*
    • App Framework*
    Q11: How do I configure Intel XDK to use 9 patch png for Android* apps splash screen?

    Intel XDK does support the use of 9 patch png for Android* apps splash screen. You can read up more at http://developer.android.com/tools/help/draw9patch.html on how to create a 9 patch png image. We also plan to incorporate them in some of our sample apps to illustrate their use.

    Q12: How do I stop AVG from popping up the "General Behavioral Detection" window when Intel XDK is launched?

    You can try adding nw.exe as the app that needs an exception in AVG.

    Q13: What do I specify for "App ID" in Intel XDK under Build Settings?

    Your app ID uniquely identifies your app. For example, it can be used to identify your app within Apple’s application services allowing you to use things like in-app purchasing and push notifications.

    Here are some useful articles on how to create an App ID for your

    iOS* App

    Android* App

    Windows* Phone 8 App

    Q14: Is it possible to modify Android* Manifest through Intel XDK?

    You cannot modify the AndroidManifest.xml file directly with our build system, as it only exists in the cloud. However, you may do so by creating a dummy plugin that only contains a plugin.xml file, which can then add entries to the AndroidManifest.xml file during the build process. In essence, you need to change the plugin.xml file of the locally cloned plugin to include directives that will make those modifications to the AndroidManifest.xml file. Here is an example of a plugin that does just that:

    <?xml version="1.0" encoding="UTF-8"?>
    <plugin xmlns="http://apache.org/cordova/ns/plugins/1.0" id="com.tricaud.webintent" version="1.0.0">
        <name>WebIntentTricaud</name>
        <description>Ajout dans AndroidManifest.xml</description>
        <license>MIT</license>
        <keywords>android, WebIntent, Intent, Activity</keywords>
        <engines>
            <engine name="cordova" version=">=3.0.0" />
        </engines>
        <!-- android -->
        <platform name="android">
            <config-file target="AndroidManifest.xml" parent="/manifest/application">
                <activity android:configChanges="orientation|keyboardHidden|keyboard|screenSize|locale" android:label="@string/app_name" android:launchMode="singleTop" android:name="testa" android:theme="@android:style/Theme.Black.NoTitleBar">
                    <intent-filter>
                        <action android:name="android.intent.action.SEND" />
                        <category android:name="android.intent.category.DEFAULT" />
                        <data android:mimeType="*/*" />
                    </intent-filter>
                </activity>
            </config-file>
        </platform>
    </plugin>

    You can check the AndroidManifest.xml created in the apk using the aapt tool with the command line:  

    aapt l -M appli.apk >text.txt  

    This adds the list of files of the apk and details of the AndroidManifest.xml to text.txt.

    Q15: How can I share my Intel XDK app build?

    You can send a link to your project via an email invite from your project settings page. However, a login to your account is required to access the file behind the link. Alternatively, you can download the build from the build page, onto your workstation, and push that built image to some location from which you can send a link to that image. 

    Q16: Why does my iOS build fail when I am able to test it successfully on a device and the emulator?

    Common reasons include:

    • Your App ID specified in the project settings does not match the one you specified in Apple's developer portal.
    • The provisioning profile does not match the cert you uploaded. Double check with Apple's developer site that you are using the correct and current distribution cert and that the provisioning profile is still active. Download the provisioning profile again and add it to your project to confirm.
    • In Project Build Settings, your App Name is invalid. It should be modified to include only alphanumeric characters and spaces.
    Q17: How do I add multiple domains in Domain Access? 

    Here is the primary doc source for that feature.

    If you need to insert multiple domain references, then you will need to add the extra references in the intelxdk.config.additions.xml file. This StackOverflow entry provides a basic idea and you can see the intelxdk.config.*.xml files that are automatically generated with each build for the <access origin="xxx" /> line that is generated based on what you provide in the "Domain Access" field of the "Build Settings" panel on the Project Tab. 
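    For illustration, the extra references added in the intelxdk.config.additions.xml file take the form of additional <access> lines of the kind mentioned above (the domains below are placeholders, not real endpoints):

    ```xml
    <!-- intelxdk.config.additions.xml: extra domain references (placeholder domains) -->
    <access origin="https://api.example.com" />
    <access origin="https://cdn.example.org" />
    ```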

    Q18: How do I build more than one app using the same Apple developer account?

    On Apple developer, create a distribution certificate using the "iOS* Certificate Signing Request" key downloaded from Intel XDK Build tab only for the first app. For subsequent apps, reuse the same certificate and import this certificate into the Build tab like you usually would.

    Q19: How do I include search and spotlight icons as part of my app?

    Please refer to this article in the Intel XDK documentation. Create an intelxdk.config.additions.xml file in your top level directory (same location as the other intelxdk.*.config.xml files) and add the following lines for supporting icons in Settings and other areas in iOS*.

    <!-- Spotlight Icon --><icon platform="ios" src="res/ios/icon-40.png" width="40" height="40" /><icon platform="ios" src="res/ios/icon-40@2x.png" width="80" height="80" /><icon platform="ios" src="res/ios/icon-40@3x.png" width="120" height="120" /><!-- iPhone Spotlight and Settings Icon --><icon platform="ios" src="res/ios/icon-small.png" width="29" height="29" /><icon platform="ios" src="res/ios/icon-small@2x.png" width="58" height="58" /><icon platform="ios" src="res/ios/icon-small@3x.png" width="87" height="87" /><!-- iPad Spotlight and Settings Icon --><icon platform="ios" src="res/ios/icon-50.png" width="50" height="50" /><icon platform="ios" src="res/ios/icon-50@2x.png" width="100" height="100" />

    For more information related to these configurations, visit http://cordova.apache.org/docs/en/3.5.0/config_ref_images.md.html#Icons%20and%20Splash%20Screens.

    For accurate information related to iOS icon sizes, visit https://developer.apple.com/library/ios/documentation/UserExperience/Conceptual/MobileHIG/IconMatrix.html

    NOTE: The iPhone 6 icons will only be available if iOS* 7 or 8 is the target.

    Cordova iOS* 8 support JIRA tracker: https://issues.apache.org/jira/browse/CB-7043

    Q20: Does Intel XDK support Modbus TCP communication?

    No, since Modbus is a specialized protocol, you need to write either some JavaScript* or native code (in the form of a plugin) to handle the Modbus transactions and protocol.

    Q21: How do I sign an Android* app using an existing keystore?

    Uploading an existing keystore in Intel XDK is not currently supported but you can send an email to html5tools@intel.com with this request. We can assist you there.

    Q22: How do I build separately for different Android* versions?

    Under the Projects Panel, you can select the Target Android* version under the Build Settings collapsible panel. You can change this value and build your application multiple times to create numerous versions of your application that are targeted for multiple versions of Android*.

    Q23: How do I display the 'Build App Now' button if my display language is not English?

    If your display language is not English and the 'Build App Now' button is proving to be troublesome, you may change your display language to English. The English language pack can be downloaded via a Windows* update. Once you have installed the English language, proceed to Control Panel > Clock, Language and Region > Region and Language > Change Display Language.

    Q24: How do I update my Intel XDK version?

    When an Intel XDK update is available, an Update Version dialog box lets you download the update. After the download completes, a similar dialog lets you install it. If you did not download or install an update when prompted (or on older versions), click the package icon next to the orange (?) icon in the upper-right to download or install the update. The installation removes the previous Intel XDK version.

    Q25: How do I import my existing HTML5 app into the Intel XDK?

    If your project contains an Intel XDK project file (<project-name>.xdk) you should use the "Open an Intel XDK Project" option located at the bottom of the Projects List on the Projects tab (lower left of the screen, round green "eject" icon, on the Projects tab). This would be the case if you copied an existing Intel XDK project from another system or used a tool that exported a complete Intel XDK project.

    If your project does not contain an Intel XDK project file (<project-name>.xdk) you must "import" your code into a new Intel XDK project. To import your project, use the "Start a New Project" option located at the bottom of the Projects List on the Projects tab (lower left of the screen, round blue "plus" icon, on the Projects tab). This will open the "Samples, Demos and Templates" page, which includes an option to "Import Your HTML5 Code Base." Point to the root directory of your project. The Intel XDK will attempt to locate a file named index.html in your project and will set the "Source Directory" on the Projects tab to point to the directory that contains this file.

    If your imported project did not contain an index.html file, your project may be unstable. In that case, it is best to delete the imported project from the Intel XDK Projects tab ("x" icon in the upper right corner of the screen), rename your "root" or "main" html file to index.html and import the project again. Several components in the Intel XDK depend on the assumption that the main HTML file in your project is named index.html. See Introducing Intel® XDK Development Tools for more details.

    It is highly recommended that your "source directory" be located as a sub-directory inside your "project directory." This ensures that non-source files are not included as part of your build package when building your application. If the "source directory" and "project directory" are the same, the result is longer upload times to the build server and unnecessarily large application executable files returned by the build system. See the following images for the recommended project file layout.

    Q26: I am unable to login to App Preview with my Intel XDK password.

    On some devices you may have trouble entering your Intel XDK login password directly on the device in the App Preview login screen. In particular, sometimes you may have trouble with the first one or two letters getting lost when entering your password.

    Try the following if you are having such difficulties:

    • Reset your password, using the Intel XDK, to something short and simple.

    • Confirm that this new short and simple password works with the XDK (logout and login to the Intel XDK).

    • Confirm that this new password works with the Intel Developer Zone login.

    • Make sure you have the most recent version of Intel App Preview installed on your devices. Go to the store on each device to confirm you have the most recent copy of App Preview installed.

    • Try logging into Intel App Preview on each device with this short and simple password. Check the "show password" box so you can see your password as you type it.

    If the above works, it confirms that you can log into your Intel XDK account from App Preview (because App Preview and the Intel XDK go to the same place to authenticate your login). When the above works, you can go back to the Intel XDK and reset your password to something else, if you do not like the short and simple password you used for the test.

    Q27: How do I completely uninstall the Intel XDK from my system?

    See the instructions in this forum post: https://software.intel.com/en-us/forums/topic/542074. Then download and install the latest version from http://xdk.intel.com.

    Q28: Is there a tool that can help me highlight syntax issues in Intel XDK?

    Yes, you can use the various linting tools that can be added to the Brackets editor to review any syntax issues in your HTML, CSS and JS files. Go to the "File > Extension Manager..." menu item and add the following extensions: JSHint, CSSLint, HTMLHint, XLint for Intel XDK. Then, review your source files by monitoring the small yellow triangle at the bottom of the edit window (a green check mark indicates no issues).

    Q29: How do I manage my Apps in Development?

    You can manage them by logging into: https://appcenter.html5tools-software.intel.com/csd/controlpanel.aspx. This functionality will eventually be available within Intel XDK after which access to app center will be removed.

    Q30: I need help with the App Security API plugin; where do I find it?

    Visit the primary documentation book for the App Security API and see this forum post for some additional details.

    Q31: When I install my app onto my test device Avast antivirus flags it as a possible virus, why?

    If you are receiving a "Suspicious file detected - APK:CloudRep [Susp]" message it is likely due to the fact that you are side-loading the app onto your device (using a download link or by using adb) or you have downloaded your app from an "untrusted" store. See the following official explanation from Avast:

    Your application was flagged by our cloud reputation system. "Cloud rep" is a new feature of Avast Mobile Security, which flags apks when the following conditions are met:

    1. The file is not prevalent enough; meaning not enough users of Avast Mobile Security have installed your APK.
    2. The source is not an established market (Google Play is an example of an established market).

    If you distribute your app using Google Play (or any other trusted market) your users should not see any warning from Avast.

    Q32: How do I add a Brackets extension to the editor that is part of the Intel XDK?

    The number of Brackets extensions provided in the built-in edition of the Brackets editor is limited to ensure stability of the Intel XDK product. Not all extensions are compatible with the edition of Brackets that is embedded within the Intel XDK, and adding incompatible extensions can cause the Intel XDK to quit working.

    Despite this warning, there are useful extensions that have not been included in the editor and which can be added to the Intel XDK. Adding them is temporary; each time you update the Intel XDK (or if you reinstall the Intel XDK) you will have to "re-add" your Brackets extension. To add a Brackets extension, use the following procedure:

    • exit the Intel XDK
    • download a ZIP file of the extension you wish to add
    • on Windows, unzip the extension here: %LocalAppData%\Intel\XDK\xdk\brackets\b\extensions\dev
    • on Mac OS X, unzip the extension here: /Applications/Intel\ XDK.app/Contents/Resources/app.nw/brackets/b/extensions/dev
    • start the Intel XDK

    Note that the locations given above are subject to change with new releases of the Intel XDK.

    Q33: Why does my app or game require so many permissions on Android when built with the Intel XDK?

    When you build your HTML5 app using the Intel XDK for Android or Android-Crosswalk you are creating a Cordova app. It may seem like you're not building a Cordova app, but you are. In order to package your app so it can be distributed via an Android store and installed on an Android device, it needs to be built as a hybrid app. The Intel XDK uses Cordova to create that hybrid app.

    A pure Cordova app requires the NETWORK permission; it's needed to "jump" between your HTML5 environment and the native Android environment. Additional permissions will be added by any Cordova plugins you include with your application; which permissions are included is a function of what each plugin does and requires.

    Crosswalk for Android builds also require the NETWORK permission, because the Crosswalk image built by the Intel XDK includes support for Cordova. In addition, current versions of Crosswalk (12 and 14 at the time this FAQ was written) also require NETWORK STATE and WIFI STATE. There is an extra permission in some versions of Crosswalk (WRITE EXTERNAL STORAGE) that is only needed by the shared model library of Crosswalk; we have asked the Crosswalk project to remove this permission in a future Crosswalk version.

    If you are seeing more than the following four permissions in your XDK-built Crosswalk app:

    • android.permission.INTERNET
    • android.permission.ACCESS_NETWORK_STATE
    • android.permission.ACCESS_WIFI_STATE
    • android.permission.WRITE_EXTERNAL_STORAGE

    then you are seeing permissions that have been added by some plugins. Each plugin is different, so there is no hard rule of thumb. The two "default" core Cordova plugins that are added by the Intel XDK blank templates (device and splash screen) do not require any Android permissions.

    BTW: the permission list above comes from a Crosswalk 14 build. Crosswalk 12 builds do not include the last permission; it was added when the Crosswalk project introduced the shared model library option, which started with Crosswalk 13 (the Intel XDK does not support Crosswalk 13 builds).

    Q34: How do I make a copy of an existing Intel XDK project?

    Use the process described below to make a copy of an XDK project, in case you want to experiment. This process ensures that the build system does not get confused about the ID of your project. If you do not follow the procedure below you will have multiple projects with the same unique ID (a special GUID that is stored inside the Intel XDK project file in the root directory of your project).

    • exit the Intel XDK
    • make a copy of your existing project, the entire project directory

      on Windows, use File Explorer to "right-click" and "copy" your project directory, then "right-click" and "paste"

      on Mac use Finder to "right-click" and then "duplicate" your project directory
    • inside the new project that you made above, rename the <project-name>.xdk file and <project-name>.xdke files to something like project-copy.xdk and project-copy.xdke (anything you like, just different than the original, preferably the same name as the new project folder in which you are doing this)
    • using a TEXT EDITOR (only), such as Notepad or Sublime or Brackets or some other TEXT editor, open the new "project-copy.xdk" file (whatever you named it) and find the line named projectGuid, it will look something like this:

      "projectGuid": "a863c382-ca05-4aa4-8601-375f9f209b67",
    • change the "GUID" to all zeroes, like this: "00000000-0000-0000-0000-000000000000"
    • save the modified "project-copy.xdk" file
    • open the Intel XDK
    • go to the Projects tab
    • select "Open an Intel XDK Project" (green button) at the bottom left of the Projects tab
    • locate the new "project-copy.xdk" file inside the new project folder you copied above to open this new project
    Q35: My project does not include a www folder. How do I fix it so it includes a www or source directory?

    The Intel XDK HTML5 and Cordova project file structures are meant to mimic a standard Cordova project. In a Cordova (or PhoneGap) project there is a subdirectory (or folder) named www that contains all of the HTML5 source code and asset files that make up your application. For best results, it is advised that you follow this convention, of putting your source inside a "source directory" inside of your project folder.

    This most commonly happens as the result of exporting a project from an external tool, such as Construct2, or as the result of importing an existing HTML5 web app that you are converting into a hybrid mobile application (e.g., an Intel XDK Cordova app). If you would like to convert an existing Intel XDK project into this format, follow the steps below:

    • exit the Intel XDK
    • make a copy of your existing project, the entire project directory

      on Windows, use File Explorer to "right-click" and "copy" your project directory, then "right-click" and "paste"

      on Mac use Finder to "right-click" and then "duplicate" your project directory
    • create a "www" directory inside the new duplicate project you just created above
    • move your index.html and other source and asset files to the "www" directory you just created -- this is now your "source" directory, located inside your "project" directory (do not move the <project-name>.xdk and xdke files and any intelxdk.config.*.xml files, those must stay in the root of the project directory)
    • inside the new project that you made above, rename the <project-name>.xdk file and <project-name>.xdke files to something like project-copy.xdk and project-copy.xdke (anything you like, just different than the original, preferably the same name as the new project folder in which you are doing this)
    • using a TEXT EDITOR (only), such as Notepad or Sublime or Brackets or some other TEXT editor, open the new "project-copy.xdk" file (whatever you named it) and find the line named projectGuid, it will look something like this:

      "projectGuid": "a863c382-ca05-4aa4-8601-375f9f209b67",
    • change the "GUID" to all zeroes, like this: "00000000-0000-0000-0000-000000000000"
    • a few lines down find: "sourceDirectory": "",
    • change it to this: "sourceDirectory": "www",
    • save the modified "project-copy.xdk" file
    • open the Intel XDK
    • go to the Projects tab
    • select "Open an Intel XDK Project" (green button) at the bottom left of the Projects tab
    • locate the new "project-copy.xdk" file inside the new project folder you copied above to open this new project
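    The GUID and sourceDirectory edits in the steps above can be sketched as a small Node.js helper. This is a minimal sketch, assuming the .xdk project file is plain JSON with top-level "projectGuid" and "sourceDirectory" keys as shown above; verify that against your own project file before relying on it.

    ```javascript
    // Sketch: reset the projectGuid and set the source directory fields in a
    // copied Intel XDK project file. Assumes the .xdk file is plain JSON with
    // top-level "projectGuid" and "sourceDirectory" keys, as shown above.
    function resetProjectSettings(project) {
      project.projectGuid = "00000000-0000-0000-0000-000000000000"; // all-zero GUID
      project.sourceDirectory = "www";                              // new source dir
      return project;
    }
    // To apply this to a file, you could read it with fs.readFileSync, pass the
    // parsed JSON through resetProjectSettings, and write it back with
    // fs.writeFileSync -- or simply make the same two edits in a text editor.
    ```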

    Back to FAQs Main

    Intel® XDK FAQs - Cordova

    Q1: How do I set app orientation?

    If you are using Cordova* 3.X build options (Crosswalk* for Android*, Android*, iOS*, etc.), you can set the orientation under the Projects panel > Select your project > Cordova* 3.X Hybrid Mobile App Settings - Build Settings. Under the Build Settings, you can set the Orientation for your desired mobile platform.  

    If you are using the Legacy Hybrid Mobile App Platform build options (Android*, iOS* Ad Hoc, etc.), you can set the orientation under the Build tab > Legacy Hybrid Mobile App Platforms Category- <desired_mobile_platform> - Step 2 Assets tab. 

    [iPad] Create a plugin (directory with one file) that only has a config xml that includes the following: 

    <config-file target="*-Info.plist" parent="UISupportedInterfaceOrientations~ipad" overwrite="true">
        <string></string>
    </config-file>
    <config-file target="*-Info.plist" parent="UISupportedInterfaceOrientations~ipad" overwrite="true">
        <array><string>UIInterfaceOrientationPortrait</string></array>
    </config-file>

    Add the plugin on the build settings page. 

    Alternatively, you can use this plugin: https://github.com/yoik/cordova-yoik-screenorientation. You can import it as a third-party Cordova* plugin using the Cordova* registry notation:

    • net.yoik.cordova.plugins.screenorientation (includes latest version at the time of the build)
    • net.yoik.cordova.plugins.screenorientation@1.3.2 (specifies a version)

    Or, you can reference it directly from the GitHub repo: 

    The second reference provides the git commit referenced here (we do not support pulling from the PhoneGap registry).

    Q2: Is it possible to create a background service using Intel XDK?

    Background services require the use of specialized Cordova* plugins that need to be created specifically for your needs. Intel XDK does not support development or debug of plugins, only the use of them as "black boxes" with your HTML5 app. Background services can be accomplished using Java on Android or Objective C on iOS. If a plugin that backgrounds the functions required already exists (for example, this plugin for background geo tracking), Intel XDK’s build system will work with it.

    Q3: How do I send an email from my App?
    You can use the Cordova* email plugin or use web intent - PhoneGap* and Cordova* 3.X.
    Q4: How do you create an offline application?
    You can use the technique described here by creating an offline.appcache file and then setting it up to store the files that are needed to run the program offline. Note that offline applications need to be built using the Cordova* or Legacy Hybrid build options.
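    A minimal offline.appcache sketch of that technique follows; the file names listed are placeholders for your own app's assets:

    ```
    CACHE MANIFEST
    # v1 - change this comment to force clients to re-download cached files

    CACHE:
    index.html
    css/app.css
    js/app.js
    images/logo.png

    NETWORK:
    *
    ```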
    Q5: How do I work with alarms and timed notifications?
    Unfortunately, alarms and notifications are advanced subjects that require a background service. This cannot be implemented in HTML5 and can only be done in native code by using a plugin. Background services require the use of specialized Cordova* plugins that need to be created specifically for your needs. Intel XDK does not support the development or debug of plugins, only the use of them as "black boxes" with your HTML5 app. Background services can be accomplished using Java on Android or Objective C on iOS. If a plugin that backgrounds the functions required already exists (for example, this plugin for background geo tracking) the Intel XDK’s build system will work with it. 
    Q6: How do I get a reliable device ID? 
    You can use the Phonegap/Cordova* Unique Device ID (UUID) plugin for Android*, iOS* and Windows* Phone 8. 
    Q7: How do I implement In-App purchasing in my app?
    There is a Cordova* plugin for this. A tutorial on its implementation can be found here. There is also a sample in Intel XDK called ‘In App Purchase’ which can be downloaded here.
    Q8: How do I install custom fonts on devices?
    Fonts can be included as assets in your app. Like images and CSS files, they are private to the app and are not shared with other apps on the device. (It is possible to share some files between apps using, for example, the SD card space on an Android* device.) If you include the font files as assets in your application, there is no download time to consider; they are part of your app and already exist on the device after installation.
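    For example, a bundled font asset can be referenced with a standard CSS @font-face rule; the family name and file path below are placeholders:

    ```css
    /* Custom font shipped as an app asset; path is relative to your www folder. */
    @font-face {
      font-family: "MyAppFont";        /* placeholder family name */
      src: url("fonts/MyAppFont.ttf"); /* placeholder asset path */
    }
    body {
      font-family: "MyAppFont", sans-serif;
    }
    ```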
    Q9: How do I access the device’s file storage?
    You can use HTML5 local storage and this is a good article to get started with. Alternatively, there is a Cordova* file plugin for that.
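    As a minimal sketch of the HTML5 local storage approach, the helpers below save and restore a small settings object (the key name is illustrative; the storage object is passed in as a parameter so that in the app you would supply window.localStorage):

    ```javascript
    // Save and restore a small settings object using the HTML5 Web Storage API.
    // In the app, pass window.localStorage as the storage argument.
    function saveSettings(storage, settings) {
      storage.setItem("app.settings", JSON.stringify(settings));
    }

    function loadSettings(storage) {
      var raw = storage.getItem("app.settings");
      return raw === null ? {} : JSON.parse(raw);
    }
    ```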
    Q10: Why isn't AppMobi* push notification services working?
    This seems to be an issue on AppMobi’s end and can only be addressed by them. PushMobi is only available in the "legacy" container. AppMobi* has not developed a Cordova* plugin, so it cannot be used in the Cordova* build containers. Thus, it is not available with the default build system. We recommend that you consider using the Cordova* push notification plugin instead.
    Q11: How do I configure an app to run as a service when it is closed?
    If you want a service to run in the background you'll have to write a service, either by creating a custom plugin or writing a separate service using standard Android* development tools. The Cordova* system does not facilitate writing services.
    Q12: How do I dynamically play videos in my app?

    1) Download the Javascript and CSS files from https://github.com/videojs

    2) Add them in the HTML5 header. 

    <link href="video-js.css" rel="stylesheet"><script src="video.js"></script>

     3) Add a panel ‘main1’ that will be playing the video. This panel will be launched when the user clicks on the video in the main panel.

    <div class="panel" id="main1" data-appbuilder-object="panel" style=""><video id="example_video_1" class="video-js vjs-default-skin" controls="" preload="auto" width="200" poster="camera.png" data-setup="{}"><source src="JAIL.mp4" type="video/mp4"><p class="vjs-no-js">To view this video please enable JavaScript*, and consider upgrading to a web browser that <a href="http://videojs.com/html5-video-support/" target="_blank">supports HTML5 video</a></p></video><a onclick="runVid3()" href="#" class="button" data-appbuilder-object="button">Back</a></div>

     4) When the user clicks on the video, the click event sets the ‘src’ attribute of the video element to what the user wants to watch. 

    function runVid2(){
          document.getElementsByTagName("video")[0].setAttribute("src", "appdes.mp4");
          $.ui.loadContent("#main1", true, false, "pop");
    }

     5) The ‘main1’ panel opens waiting for the user to click the play button.

    Note: The video does not play in the emulator and so you will have to test using a real device. The user also has to stop the video using the video controls. Clicking on the back button results in the video playing in the background.

    Q13: How do I design my Cordova* built Android* app for tablets?
    This page lists a set of guidelines to follow to make your app of tablet quality. If your app fulfills the criteria for tablet app quality, it can be featured in Google* Play's "Designed for tablets" section.
    Q14: How do I resolve icon related issues with Cordova* CLI build system?

    Ensure icon sizes are properly specified in the intelxdk.config.additions.xml file. For example, if you are targeting iOS* 6, you need to manually specify the icon sizes that iOS* 6 uses. 

    <icon platform="ios" src="images/ios/72x72.icon.png" width="72" height="72" /><icon platform="ios" src="images/ios/57x57.icon.png" width="57" height="57" />

    These are not included by the build system by default, so you will have to add them in the additions file. 

    For more information on adding build options using intelxdk.config.additions.xml, visit: /en-us/html5/articles/adding-special-build-options-to-your-xdk-cordova-app-with-the-intelxdk-config-additions-xml-file

    Q15: Is there a plugin I can use in my App to share content on social media?

    Yes, you can use the PhoneGap Social Sharing plugin for Android*, iOS* and Windows* Phone.

    Q16: Iframe does not load in my app. Is there an alternative?
    Yes, you can use the inAppBrowser plugin instead.
    Q17: Why are intel.xdk.istablet and intel.xdk.isphone not working?
    Those properties are quite old and are based on the legacy AppMobi* system. An alternative is to detect the viewport size instead. You can get the user's screen size using the screen.width and screen.height properties (refer to this article for more information) and control the actual view of the webview by using the viewport meta tag (this page has several examples). You can also look through this forum thread for a detailed discussion of the topic.
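The viewport-size approach can be sketched as follows. The helper name classifyDevice and the 600 CSS-pixel breakpoint are our assumptions for illustration, not part of the Intel XDK or Cordova:

```javascript
// Illustrative sketch: classify a device by its shorter screen dimension.
// The 600-pixel breakpoint is a common tablet heuristic, not an XDK rule.
function classifyDevice(width, height) {
  var shortSide = Math.min(width, height);
  return shortSide >= 600 ? "tablet" : "phone";
}

// In a Cordova webview you would call it with the real screen size:
//   var kind = classifyDevice(screen.width, screen.height);
```

Because the function only looks at the shorter side, it gives the same answer in portrait and landscape orientations.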
    Q18: How do I work with the App Security plugin on Intel XDK?

    Select the App Security plugin on the plugins list of the Project tab and build your app as a Cordova Hybrid app. Building it as a Legacy Hybrid app has been known to cause issues when compiled and installed on a device.

    Q19: Why does my build fail with Admob plugins? Is there an alternative?

    Intel XDK does not support the library project that was newly introduced in the com.google.playservices@21.0.0 plugin. Admob plugins depend on "com.google.playservices", which adds the Google* Play Services JAR to the project. Version 19.0.0 is a simple JAR file that works quite well, but version 21.0.0 uses a new feature to include a whole library project. It works if built locally with the Cordova CLI, but fails when using the Intel XDK.

    To stay compatible with the Intel XDK, change the Admob plugin's dependency to "com.google.playservices@19.0.0".

    Q20: Why does the intel.xdk.camera plugin fail? Is there an alternative?
    There seem to be some general issues with the camera plugin on iOS*. An alternative is to use the Cordova camera plugin instead and change its version to 0.3.3.
    Q21: How do I resolve Geolocation issues with Cordova?

    Give this app a try; it contains lots of useful comments and console log messages. However, use the Cordova 0.3.10 version of the geo plugin instead of the Intel XDK geo plugin. The Intel XDK buttons in the sample app will not work in a built app because the Intel XDK geo plugin is not included, but they will partially work in the Emulator and Debug tabs. If you test on a real device without the Intel XDK geo plugin selected, you should be able to see what does and does not work on your device. Note that the Intel XDK geo plugin cannot be used in the same build as the Cordova geo plugin; do not use the Intel XDK geo plugin, as it will be discontinued.

    Geo fine might not work because of the following reasons:

    1. Your device does not have a GPS chip
    2. It is taking a long time to get a GPS lock (if you are indoors)
    3. The GPS on your device has been disabled in the settings

    Geo coarse is the safest bet for quickly getting an initial reading. It derives a reading from a variety of inputs and is usually not as accurate as geo fine, but it is generally accurate enough to know what town you are in and your approximate location within that town. Geo coarse also primes the geo cache so there is something to read when you try to get a geo fine reading. Ensure your code can handle situations where no geo data arrives; there is no guarantee you will get a geo fine reading at all, or within a reasonable period of time, because success with geo fine depends on many parameters that are typically outside your control.
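The coarse-then-fine strategy can be sketched as follows. The helper names geoOptions and primeThenRefine, and the specific timeout values, are our illustrative assumptions; the options object follows the W3C Geolocation API that the Cordova geo plugin implements:

```javascript
// Build options for navigator.geolocation.getCurrentPosition().
// fine=true approximates "geo fine" (GPS), fine=false "geo coarse".
function geoOptions(fine) {
  return {
    enableHighAccuracy: fine,       // request GPS-quality fixes for geo fine
    timeout: fine ? 30000 : 5000,   // a GPS lock can take much longer
    maximumAge: fine ? 0 : 60000    // accept a cached fix for coarse readings
  };
}

function primeThenRefine(geolocation, onFix, onError) {
  // Coarse first for a quick initial reading (and to prime the geo cache),
  // then attempt a fine reading.
  geolocation.getCurrentPosition(onFix, onError, geoOptions(false));
  geolocation.getCurrentPosition(onFix, onError, geoOptions(true));
}

// In an app: primeThenRefine(navigator.geolocation, showPosition, showError);
```

Your onError callback should expect the fine request to time out on devices without a GPS chip or indoors.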

    Q22: Is there an equivalent Cordova* plugin for intel.xdk.player.playPodcast? If so, how can I use it?

    Yes, there is and you can find the one that best fits the bill from the Cordova* plugin registry.

    To make this work you will need to do the following:

    • Detect your platform (you can use uaparser.js or you can do it yourself by inspecting the user agent string)
    • Include the plugin only on the Android* platform and use <video> on iOS*.
    • Create conditional code to do what is appropriate for the platform detected 
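The detect-and-branch steps above might look like the following sketch. detectPlatform is an illustrative helper (uaparser.js is a fuller option), and the commented-out plugin call is hypothetical, not a specific plugin's API:

```javascript
// Minimal user-agent platform detection.
function detectPlatform(ua) {
  if (/Android/i.test(ua)) return "android";
  if (/iPad|iPhone|iPod/.test(ua)) return "ios";
  return "other";
}

function playPodcast(url, ua) {
  var platform = detectPlatform(ua);
  if (platform === "android") {
    // Use the podcast/media plugin you included for Android builds, e.g.:
    // window.plugins.somePodcastPlugin.play(url); // hypothetical call
  } else {
    // Fall back to an HTML5 <video>/<audio> element on iOS:
    // document.getElementById("player").src = url;
  }
  return platform; // returned so the branch taken is observable
}

// In an app: playPodcast("http://example.com/episode.mp3", navigator.userAgent);
```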

    You can force a plugin to be part of an Android* build by adding it manually into the additions file. To see what the basic directives are to include a plugin manually:

    1. Include it using the "import plugin" dialog, perform a build and inspect the resulting intelxdk.config.android.xml file.
    2. Then remove it from your Project tab settings, copy the directive from that config file and paste it into the intelxdk.config.additions.xml file. Prefix that directive with <!-- +Android* -->. 

    More information is available here and this is what an additions file can look like:

    <preference name="debuggable" value="true" /><preference name="StatusBarOverlaysWebView" value="false" /><preference name="StatusBarBackgroundColor" value="#000000" /><preference name="StatusBarStyle" value="lightcontent" /><!-- -iOS* --><intelxdk:plugin intelxdk:value="nl.nielsad.cordova.wifiscanner" /><!-- -Windows*8 --><intelxdk:plugin intelxdk:value="nl.nielsad.cordova.wifiscanner" /><!-- -Windows*8 --><intelxdk:plugin intelxdk:value="org.apache.cordova.statusbar" /><!-- -Windows*8 --><intelxdk:plugin intelxdk:value="https://github.com/EddyVerbruggen/Flashlight-PhoneGap-Plugin" />

    This sample forces a plugin included with the "import plugin" dialog to be excluded from the platforms shown. You can include it only in the Android* platform by using conditional code and one or more appropriate plugins.

    Q23: How do I display a webpage in my app without leaving my app?

    The most effective way to do so is by using inAppBrowser.

    Q24: Does Cordova* media have callbacks in the emulator?

    While Cordova* media objects have proper callbacks when using the debug tab on a device, the emulator doesn't report state changes back to the Media object. This functionality has not been implemented yet. Under emulation, the Media object is implemented by creating an <audio> tag in the program under test. The <audio> tag emits a bunch of events, and these could be captured and turned into status callbacks on the Media object.
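A sketch of how those <audio> events could be mapped to Media status callbacks. The numeric codes follow the documented Cordova Media constants; the helper names are ours:

```javascript
// Translate HTMLMediaElement events into Cordova Media status codes:
// MEDIA_STARTING=1, MEDIA_RUNNING=2, MEDIA_PAUSED=3, MEDIA_STOPPED=4.
function audioEventToMediaStatus(eventName) {
  switch (eventName) {
    case "loadstart": return 1; // Media.MEDIA_STARTING
    case "playing":   return 2; // Media.MEDIA_RUNNING
    case "pause":     return 3; // Media.MEDIA_PAUSED
    case "ended":     return 4; // Media.MEDIA_STOPPED
    default:          return null;
  }
}

function wireStatusCallback(audioEl, onStatus) {
  ["loadstart", "playing", "pause", "ended"].forEach(function (name) {
    audioEl.addEventListener(name, function () {
      onStatus(audioEventToMediaStatus(name));
    });
  });
}
```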

    Q25: Why does the Cordova version not match between the Projects tab Build Settings, Emulate tab, App Preview and my built app?

    This is due to the difficulty in keeping different components in sync and is compounded by the version convention that the Cordova project uses to distinguish build tools (the CLI version) from frameworks (the Cordova version) and plugins.

    The CLI version you specify in the Projects tab Build Settings section is the "Cordova CLI" version that the build system will use to build your app. Each version of the Cordova CLI tools come with a set of "pinned" Cordova framework versions, which vary as a function of the target platform. For example, the Cordova CLI 5.0 platformsConfig file is "pinned" to the Android Cordova framework version 4.0.0, the iOS Cordova framework version 3.8.0 and the Windows 8 Cordova framework version 3.8.1 (among other targets). The Cordova CLI 4.1.2 platformsConfig file is "pinned" to Android Cordova 3.6.4, iOS Cordova 3.7.0 and Windows 8 Cordova 3.7.1.

    This means that the Cordova framework version you are using "on device" with a built app will not equal the version number that is in the CLI field that you specified in the Build Settings section of the Projects tab when you built your app. Technically, the target-specific Cordova frameworks can be updated [independently] within a given version of CLI tools, but our build system always uses the Cordova framework versions that were "pinned" to the CLI when it was released (that is, the Cordova framework versions specified in the platformsConfig file).

    The reason you may see Cordova framework version differences between the Emulate tab, App Preview and your built app is:

    • The Emulate tab has one specific Cordova framework version it is built against. We try to make that version of the Cordova framework match, as closely as possible, the default Intel XDK version of the Cordova CLI.
    • App Preview is released independently of the Intel XDK and, therefore, may support a different version than what you will see reported by the Emulate tab and your built app. Again, we try to release App Preview so it matches the version of the Cordova framework that is the default version of the Intel XDK at the time App Preview is released; but since the various tools are not released in perfect sync, that is not always possible.
    • Your app always uses the Cordova framework version that is determined by the Cordova CLI version you specified in the Projects tab's Build Settings section, when you built your app.
    • BTW: the version of the Cordova framework that is built into Crosswalk is determined by the Crosswalk project, not by the Intel XDK build system. There is some customization the Crosswalk project team must do to the Cordova framework to include Cordova as part of the Crosswalk runtime engine. The Crosswalk project team generally releases each Crosswalk version with the then current version of the Android Cordova framework. Thus, the version of the Android Cordova framework that is included in your Crosswalk build is determined by the version of Crosswalk you choose to build against.

    Do these Cordova framework version numbers matter? Not that much. There are some issues that come up that are related to the Cordova framework version, but they tend to be few and far between. The majority of the bugs and compatibility issues you will experience in your app have more to do with the versions and mix of Cordova plugins you choose and the specific webview present on your test devices. See this blog for more details about what a webview is and why the webview matters to your app: When is an HTML5 Web App a WebView App?.

    p.s. The "default version" of the CLI that the Intel XDK uses is rarely the most recent version of the Cordova CLI tools distributed by the Cordova project. There is always a lag between Cordova project releases and our ability to incorporate those releases into our build system and the various Intel XDK components. Also, we are unable to implement every release that is made by the Cordova project; thus the reason why we do not support every Cordova release that is available to Cordova CLI users.

    Q26: How do I add a third party plugin?
    Please follow the instructions on this doc page to add a third-party plugin: Adding Plugins to Your Intel® XDK Cordova* App. You will see the plugin in the build log if it was successfully added to your build.
    Q27: How do I make an AJAX call that works in my browser work in my app?
    Please follow the instructions in this article: Cordova CLI 4.1.2 Domain Whitelisting with Intel XDK for AJAX and Launching External Apps.
    Q28: I get an "intel is not defined" error, but my app works in Test tab, App Preview and Debug tab. What's wrong?

    When your app runs in the Test tab, App Preview or the Debug tab the intel.xdk and core Cordova functions are automatically included for easy debug. That is, the plugins required to implement those APIs on a real device are already included in the corresponding debug modules.

    When you build your app you must include the plugins that correspond to the APIs you are using in your build settings. This means you must enable the Cordova and/or XDK plugins that correspond to the APIs you are using. Go to the Projects tab and ensure that the plugins you need are selected in your project's plugin settings. See Adding Plugins to Your Intel® XDK Cordova* App for additional details.

    Q29: How do I target my app for use only on an iPad or only on an iPhone?

    There is an undocumented feature in Cordova that should help you (the Cordova project provided this feature but failed to document it for the rest of the world). If you use the appropriate preference in the intelxdk.config.additions.xml file you should get what you need:

    <preference name="target-device" value="tablet" />     <!-- Installs on iPad, not on iPhone --><preference name="target-device" value="handset" />    <!-- Installs on iPhone, iPad installs in a zoomed view and doesn’t fill the entire screen --><preference name="target-device" value="universal" />  <!-- Installs on iPhone and iPad correctly -->

    If you need info regarding the additions.xml file, see the blank template or this doc file: Adding Intel® XDK Cordova Build Options Using the Additions File.

    Q30: Why does my build fail when I try to use the Cordova* Capture Plugin?

    The Cordova* Capture plugin has a dependency on the File plugin. Please make sure you have both plugins selected on the Projects tab.

    Q31: How can I pinch and zoom in my Cordova* app?

    For now, using the viewport meta tag is the only option to enable pinch and zoom. However, its behavior is unpredictable across different webviews. Testing a few sample apps has led us to believe that this feature works better on Crosswalk for Android. You can verify this by building the Hello Cordova sample app for both Android and Crosswalk for Android; pinch and zoom will work only on the latter, even though both builds use the same viewport meta tag.

    Please visit the following pages to get a better understanding of when to build with Crosswalk for Android:

    http://blogs.intel.com/evangelists/2014/09/02/html5-web-app-webview-app/

    https://software.intel.com/en-us/xdk/docs/why-use-crosswalk-for-android-builds

    Another device oriented approach is to enable it by turning on Android accessibility gestures.

    Q32: How do I make my Android application use the fullscreen so that the status and navigation bars disappear?

    The Cordova* fullscreen plugin can be used to do this. For example, in your initialization code, call AndroidFullScreen.immersiveMode(null, null);.

    You can get this third-party plugin here: https://github.com/mesmotronic/cordova-fullscreen-plugin
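A minimal initialization sketch, assuming the plugin exposes the AndroidFullScreen global; the deviceready wiring and the guard are ours. Passing document and window in as parameters makes the wiring easy to exercise outside a webview:

```javascript
// Enter immersive mode once Cordova fires deviceready.
function enterImmersiveWhenReady(doc, win) {
  doc.addEventListener("deviceready", function () {
    if (win.AndroidFullScreen) {
      // Hides both the status bar and the navigation bar on Android.
      win.AndroidFullScreen.immersiveMode(null, null);
    }
  }, false);
}

// In an app: enterImmersiveWhenReady(document, window);
```

The guard makes the call a no-op on platforms where the plugin is not installed (e.g., iOS builds).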

    Q33: How do I add XXHDPI and XXXHDPI icons to my Android or Crosswalk application?

    The Cordova CLI 4.1.2 build system will support this feature, but our 4.1.2 build system (and the 2170 version of the Intel XDK) does not handle the XX and XXX sizes directly. Use this workaround until these sizes are supported directly:

    • copy your XX and XXX icons into your source directory (usually named www)
    • add the following lines to your intelxdk.config.additions.xml file
    • see this Cordova doc page for some more details

    Assuming your icons and splash screen images are stored in the "pkg" directory inside your source directory (your source directory is usually named www), add lines similar to these into your intelxdk.config.additions.xml file (the precise name of your png files may be different than what is shown here):

    <!-- for adding xxhdpi and xxxhdpi icons on Android --><icon platform="android" src="pkg/xxhdpi.png" density="xxhdpi" /><icon platform="android" src="pkg/xxxhdpi.png" density="xxxhdpi" /><splash platform="android" src="pkg/splash-port-xhdpi.png" density="port-xhdpi"/><splash platform="android" src="pkg/splash-land-xhdpi.png" density="land-xhdpi"/>

    The precise names of your PNG files are not important, but the "density" designations are, and, of course, the respective resolutions of your PNG files must be consistent with Android requirements. Those density parameters specify the respective "res-drawable-*dpi" directories that will be created in your APK for use by the Android system. NOTE: the splash screen lines are included only for reference; you do not need to use this technique for splash screens.

    You can continue to insert the other icons into your app using the Intel XDK Projects tab.

    Q34: Which plugin is the best to use with my app?

    We are not able to track all the plugins out there, so we generally cannot give you a "this is better than that" evaluation of plugins. Check the Cordova plugin registry to see which plugins are most popular and check Stack Overflow to see which are best supported; also, check the individual plugin repos to see how well the plugin is supported and how frequently it is updated. Since the Cordova platform and the mobile platforms continue to evolve, those that are well-supported are likely to be those that have good activity in their repo.

    Keep in mind that the XDK builds Cordova apps, so whichever plugins you find being supported and working best with other Cordova (or PhoneGap) apps would likely be your "best" choice.

    See Adding Plugins to Your Intel® XDK Cordova* App for instructions on how to include third-party plugins with your app.

    Q35: What are the rules for my App ID?

    The precise App ID naming rules vary as a function of the target platform (e.g., Android, iOS, Windows). Unfortunately, the App ID naming rules are further restricted by the Apache Cordova project and sometimes change with updates to the Cordova project. The Cordova project is the underlying technology that your Intel XDK app is based upon; when you build an Intel XDK app you are building an Apache Cordova app.

    CLI 5.1.1 has more restrictive App ID requirements than previous versions of Apache Cordova (the CLI version refers to Apache Cordova CLI release versions). In this case, the Apache Cordova project decided to set limits on acceptable App IDs to equal the minimum set for all platforms. We hope to eliminate this restriction in a future release of the build system, but for now (as of the 2496 release of the Intel XDK), the current requirements for CLI 5.1.1 are:

    • Each section of the App ID must start with a letter
    • Each section can only consist of letters, numbers, and the underscore character
    • Each section cannot be a Java keyword
    • The App ID must consist of at least 2 sections (each section separated by a period ".").
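The four rules above can be captured in a small validator. isValidAppId is an illustrative helper sketch, not part of the build system:

```javascript
// Java keywords are disallowed as App ID sections under CLI 5.1.1.
var JAVA_KEYWORDS = ["abstract","assert","boolean","break","byte","case",
  "catch","char","class","const","continue","default","do","double","else",
  "enum","extends","final","finally","float","for","goto","if","implements",
  "import","instanceof","int","interface","long","native","new","package",
  "private","protected","public","return","short","static","strictfp",
  "super","switch","synchronized","this","throw","throws","transient",
  "try","void","volatile","while"];

function isValidAppId(appId) {
  var sections = appId.split(".");
  if (sections.length < 2) return false;            // at least 2 sections
  return sections.every(function (s) {
    return /^[A-Za-z][A-Za-z0-9_]*$/.test(s) &&     // starts with a letter;
           JAVA_KEYWORDS.indexOf(s) === -1;         // letters/digits/_ only
  });
}
```

For example, "com.example.my_app" passes, while "com.new" fails because "new" is a Java keyword.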

