Channel: Intel Developer Zone Articles

Using SR-IOV to Share an Ethernet Port Among Multiple VMs


This article describes how to use single root I/O virtualization (SR-IOV), which enables configuration of a single physical network port to provide virtual functions (VFs) to a set of virtual machines (VMs). Depending on the Ethernet controller, you can create 63 or more VFs per physical port. Then with network data transferred directly between port and VM, bypassing the hypervisor network stack (and switch layer), traffic on the bus and processor interrupts are significantly reduced.

SR-IOV is available on a variety of Ethernet controllers across multiple operating systems (both hypervisor and guest OS). This article covers the basic steps to create VFs using SR-IOV on Fedora* 4.0.3.

Basic Steps to Configure SR-IOV

In this section, we walk through the basic steps required to configure SR-IOV.

  1. Confirm that your Ethernet controller, hypervisor, and guest OS are supported by checking the FAQ for Intel® Ethernet Server Adapters with SR-IOV.
  2. Confirm iommu (or Intel VT-d) is enabled in the BIOS.
    Note: iommu enables mapping of virtual memory addresses to physical addresses.

  3. In the grub.conf file, turn on iommu and set it to passthrough mode:
    intel_iommu=on
    iommu=pt
  4. Be sure to update the grub file, then reboot. The grub file is updated automatically if you're using a desktop system; if you're using a server system, you must run the update-grub command before rebooting.
  5. Verify the setup by running the commands below: 
    dmesg | grep Virtualization   ##shows if VT-d is enabled
    cat /proc/cmdline    ##checks that iommu parameters were passed
    cd /root/DPDK/dpdk-16.07    ##checks for DPDK and moves to next step
    tools/dpdk-devbind.py --status    ##checks network devices status
  6. Load the driver 
    modprobe uio
    insmod igb_uio.ko
  7. Set up the virtual functions (VFs) - in this case on device 04, first 2 ports: 
    echo 1 > /sys/bus/pci/devices/0000\:04\:00.0/sriov_numvfs
    echo 1 > /sys/bus/pci/devices/0000\:04\:00.1/sriov_numvfs
  8. Confirm the VF configuration 
    lspci | grep Eth   OR   ./dpdk-devbind.py --status
    cat /sys/bus/pci/devices/0000\:04\:00.0/sriov_numvfs
    cat /sys/bus/pci/devices/0000\:04\:00.1/sriov_numvfs

Note: In these steps we created one VF for each port of a physical NIC. You can create many VFs, depending on your requirements and the capacity of your physical NIC. For example, if you're using the Intel 82599 10 Gigabit Ethernet NIC, you can enable a maximum of 63 virtual functions per port. Find your NIC specification at http://ark.intel.com, and look at the product brief for information about the number of supported VFs.


Summary

This article defined SR-IOV and walked through the steps required to configure a system to share one physical NIC among multiple VMs.

SR-IOV Resources

SR-IOV Mode Utilization in a DPDK Environment

SR-IOV Technology Primer

FAQ for Intel® Ethernet Server Adapters with SR-IOV

SR-IOV and OVS-DPDK Hands-on Labs

Single-Root Input/Output Virtualization (SR-IOV) with Linux* Containers

About the Author

Nancy Yadav is a Software Dataplane (Platform) Application Engineer at Intel. She has worked on software development and algorithm optimization, and now helps customers optimize network functions for SDN and NFV.


Intel® Parallel Studio XE 2018 Beta Documentation


What's New and Release Notes

Installation Guides

Getting Started Guides

Developer, User and Reference Guides

Compilers:

Threading and Performance Libraries:

Performance Analyzers and Debuggers:

 

Build a Task Tracking System Using the Intel® Edison Board


Introduction

This article demonstrates how to build a smart task tracking board that can be used to track any set of tasks, such as day-to-day work around the house or chores for children.  Powered by an Intel® Edison breakout board and a handful of inexpensive hardware components, this taskboard can run on a rechargeable battery and automatically synchronizes its status to a backend data service.

 

The taskboard is designed to be used by simply touching a completed task; an LED indicator then lights to show the task is done.  In addition, the status of each task is stored remotely on an Intel® IoT Gateway using a NoSQL MongoDB* database.

The tutorial walks through construction of a client-side Node.js application running on the Intel® Edison board, and a server-side Node.js application running on an Intel® IoT Gateway.  Basic breadboarding and soldering skills are required, as well as familiarity with the Intel® XDK and Node.js. 

Parts List

This project requires the following hardware and tools. 

Components:

  • Intel Edison with breakout board allowing both digital and analog interface GPIO pins.
  • Micro USB cables – for connecting to the Intel® Edison board
  • Hookup wire
  • One of each of these per task:
    • Green LED
    • Resistor – Unique value for each task
    • Resistor – 10Ω x 1
    • Four pin male header
  • Shift Register – 74LS164
  • Perfboard
  • Foam board

Tools:

  • Soldering Iron
  • Wire strippers and cutters
  • OPTIONAL for housing
    • Razor blade for cutting panels
    • Crafter’s glue

 

Design

The system design is divided into two parts – client and server.  The client side is composed of an Intel® Edison board which waits for users to click a button, marking a task as complete.  When a task is selected, the client reaches out to the server to update the status of the task.  The server accepts TCP client connection requests and task updates using a simple JSON object format.  Clients can request to update a task or retrieve the last recorded status of every task, which is needed at startup.  Both requests result in a reply containing the latest task status.

REQUEST:  Client to Server

{
  "request": "UPDATE",   (either "UPDATE" or "GET")
  "user": 1,
  "task": 3,
  "status": 1
}

RESPONSE:  Server to Client

[1,0,1]
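The exchange above can be sketched in a few lines of Node.js. This is a minimal illustration of the message format only (the socket transport is omitted); the field names follow the article's protocol.

```javascript
// Build a request string in the article's JSON format. For a GET request,
// only "request" and "user" are sent.
function buildRequest(type, user, task, status) {
  var msg = { request: type, user: user };
  if (type === "UPDATE") {
    msg.task = task;
    msg.status = status;
  }
  return JSON.stringify(msg);
}

// The server replies with a plain JSON array of task statuses, e.g. [1,0,1].
function parseReply(data) {
  return JSON.parse(data.toString("utf8"));
}

console.log(buildRequest("UPDATE", 1, 3, 1));
// {"request":"UPDATE","user":1,"task":3,"status":1}
console.log(parseReply("[1,0,1]"));
```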

The following sequence diagrams explain the program execution including startup and task updates.  The left side of the diagram represents client actions, followed by server actions listed on the right side (TaskServer.js). 

STARTUP

Task Tracker Design Sequence - Startup
[Figure 1 - Sequence Diagram of Startup Flow]

TASK UPDATE

 

Task Tracker Design Sequence - Update Task

[Figure 2 - Sequence Diagram of Update Flow]

Construction

The modular design of the taskboard is made up of a breadboard as the main unit, accompanied by one child component board per task.  The main board houses one shift register and interfaces to the Intel® Edison using six connections (Ground, Ground, Vcc, AnalogIn, GPIO_DS, GPIO_CK), each of which is explained below.

Task component Prototype

Each task component plugs into the main board using a four pin interface (Vin, Ground, LEDStatus, ButtonStatus), and carries a resistor with a unique value for that task, an LED, and a momentary button.  

 

 

 

 

Sizing

Starting with the mainboard, determine how many tasks are able to fit by laying out individual perfboards side by side on top of the breadboard.  The tasks will hang off the breadboard as in the following picture.  Notice how the taskboards line up side by side. 

Taskboard Breadboard with two tasks attached

 

Managing Multiple Inputs

If there are more than three tasks to manage, it is best to use an input multiplexing technique that allows many buttons to interface with one analog input.  This is required because there are a limited number of Digital GPIO pins available.  One method is to catalog the input voltage on an analog input pin when using different resistors. 

First, determine the resistor values that will be used for each button by connecting each resistor to 5 volts and monitoring the observed voltage inputs using the Analog Read example that comes with the Intel® XDK.  Be sure to pull the input port low when no voltage is applied; this prevents erratic reads from the input port.  Run many tests to capture a valid set of resistor ranges, as these values can vary widely based on temperature and other noise in the circuit. 

The Intel® XDK provides an AnalogRead sample that can be modified to write output data based on certain ranges.  Adjust the accuracy of the ranges by indicating the integer size in bits on the configuration object; a larger bit size returns a larger raw value.  Start with this example to build the finalized Client.js application.
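The range-matching logic can be kept in one small helper. The min/max values below are placeholders standing in for whatever calibration you observe with your own resistors; the zero-based index matches what toggleButtonStatus expects in the later samples.

```javascript
// Hypothetical calibration table: substitute the min/max windows you
// observe for your own resistors. Each entry maps an analog-read window
// to one button.
var buttonRanges = [
  { min: 100,  max: 900  },   // button 1
  { min: 901,  max: 1500 },   // button 2
  { min: 3000, max: 5000 }    // button 3
];

function buttonFromReading(analogIn) {
  for (var i = 0; i < buttonRanges.length; i++) {
    if (analogIn > buttonRanges[i].min && analogIn < buttonRanges[i].max) {
      return i;               // zero-based button index
    }
  }
  return -1;                  // reading fell outside every calibrated window
}

console.log(buttonFromReading(500));   // 0
console.log(buttonFromReading(4000));  // 2
console.log(buttonFromReading(50));    // -1
```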

 

 

 

Be sure to add a dependency for node-sleep. This component allows all JS execution to stop for a configurable amount of time.

SAMPLE – Client package.json

….
 "dependencies": { "node-sleep" : "git://github.com/erikdubbelboer/node-sleep"
  }

In the configuration file, set the bit precision to 16 or something that works best for the resistors used in the application.  A larger precision will yield a larger number.  The goal is to ensure a wide margin between resistor samplings to ensure a button can be uniquely identified.
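The arithmetic behind that choice is simple, and worth sketching: an n-bit analog read spans 0 through 2^n − 1, so raising the precision widens the numeric gap between resistor windows.

```javascript
// Largest raw value an n-bit analog read can return: 2^n - 1.
// A wider resolution spreads the resistor windows further apart,
// making buttons easier to distinguish from noise.
function maxAnalogValue(bits) {
  return Math.pow(2, bits) - 1;
}

console.log(maxAnalogValue(10));  // 1023
console.log(maxAnalogValue(16));  // 65535
```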

SAMPLE – Client cfg-app-platform.js

    cfg.io = {} ;               // used by caller to hold mraa I/O object
    cfg.ioPin = 1 ;            // set to unknown pin (will force a fail)
    cfg.ioBit = 16 ;            // preferred return resolution for analog i/o reads

In main.js, add a minimum and maximum value parameter that represents the range for each button value.  Because these values will vary based on your observations, creating constants ahead of time will make changing values in the conditionals easier later.

SAMPLE – Client main.js

….
var button1SignalMin = 100;
var button1SignalMax = 900;

var button2SignalMin = 901;
var button2SignalMax = 1500;

The periodicActivity function is executed on an interval timer.  Modify this default callback code to indicate which button was pressed, and to call the status LED and backend updates indicated in the design.  It is also good to implement a lock around this logic so that no erroneous reads are captured while the system is updating the task status, and to suppress duplicate button reads caused by voltage fluctuations (debouncing).

SAMPLE – Client main.js

….
var periodicActivity = function() {

    var analogIn = cfg.io.read() ;
    var taskID;

    if ( !buttonClicked /* && (analogIn>MIN_INPUT) && (analogIn<MAX_INPUT) */) {
        process.stdout.write(analogIn + "") ;

        if ( (analogIn>button1SignalMin) && (analogIn<button1SignalMax) ){
            toggleButtonStatus(0);
            taskID = 1;
            buttonClicked = true;
        } // ... else if: repeat the range check for the remaining buttons

        if (buttonClicked)
        {
            updateLEDStatus();
            updateTaskOnGateway(taskID, buttonStatus[taskID-1]);

            //Give everything time to settle after the click
            setTimeout(function(){
                buttonClicked = false;
                console.info("Polling allowed again..");
            }, 1000);
        }
    }
};

Run the client application and record the different values of the resistors into the constants previously created.  Below is a debugging screenshot showing a few different values observed during testing.  Once an accurate range of resistors has been identified, construct the rest of the mainboard by following the steps in the next section.

[Screenshot 1 - Intel XDK Debugging output]

[Screenshot 2 - Intel XDK Debugging output with Buttons mapped]

An Instructable that walks through the process of mapping multiple button inputs is available here:

http://www.instructables.com/id/How-to-access-5-buttons-through-1-Arduino-input/?ALLSTEPS

Hardware – Task Construction

Each individual task can be constructed on a standalone module.  In this example, we used a small 2”x3” perf board to add the minimal required components: an LED, a button, and resistors.  To help with the modularity of the design, we added a four pin male header to the board by pushing the pins into the header package and soldering it from the top down.

1 – Ground
2 – Status/LED
3 – Button Input
4 – Vcc 

[Task component board - Design for Task]

Hardware – Main Board Construction

Taskboard with tasks attached

[Image 1 - Fully assembled Taskboard with tasks]

Individual tasks plug into the task main-board.  For simplicity, a breadboard is used in this example.  After determining where each task will plug into the board, wire the interface connections to match the pinout of the taskboard.

1 – Ground

2 – Status/LED

3 – Button Input

4 – Vcc

 

The pinout of the 74LS164 shift register is as follows:

1, 9, 14 – Vcc
2 – Data In
7 – Ground
8 – Clock
3, 4, 5, 6 – Output 1,2,3,4
10, 11, 12, 13 – Output 5,6,7,8

 

Design Main Task board

[Figure 3 - Mainboard wiring diagram]

Software – Client

As mentioned previously, a shift register is used in this project to avoid a 1:1 mapping of LEDs to GPIO pins.  The shift register works like a FIFO queue: each value (1 or 0) set on the data input pin is latched into the first register on a rising clock edge, shifting the existing data along to the next register.  
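The ordering matters: the first bit pushed travels furthest along the register. A small software model (illustrative only, no hardware required) shows why the client pushes the statuses in reverse order.

```javascript
// Software model of the shift register: on each clock pulse the contents
// move one place along and the data-in bit lands in the first stage.
// The stage count of 8 matches the 74LS164; outputs are listed Q0..Q7.
function shiftOut(bits) {
  var register = [0, 0, 0, 0, 0, 0, 0, 0];
  bits.forEach(function (bit) {
    register.pop();          // the oldest bit falls off the far end
    register.unshift(bit);   // data-in is latched into the first stage
  });
  return register;
}

// Pushing the statuses in reverse order (as the client code does) leaves
// buttonStatus[0] at Q0, buttonStatus[1] at Q1, and so on:
var buttonStatus = [1, 1, 0];
console.log(shiftOut(buttonStatus.slice().reverse()));
// [1, 1, 0, 0, 0, 0, 0, 0]
```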

From client.js

//...

cfg.io = new cfg.mraa.Aio(buttonPin) ;
cfg.io.setBit(cfg.ioBit) ;

if (LED_USING_SHIFT_REGISTER){
    pinClock = new cfg.mraa.Gpio(pinNumberShiftRegisterClock);
    pinData = new cfg.mraa.Gpio(pinNumberShiftRegisterDataIn);

    pinClock.dir(cfg.mraa.DIR_OUT);
    pinData.dir(cfg.mraa.DIR_OUT);
}

…
function updateLEDStatus(){

    console.info("Updating LEDs " + buttonStatus.length);

    if (LED_USING_SHIFT_REGISTER){

        for (var i=buttonStatus.length-1;i>=0;i--){
            pinClock.write(0);
            sleep.msleep(50);
            pinData.write(buttonStatus[i]);
            console.info(buttonStatus[i]);
            sleep.msleep(50);
            pinClock.write(1);
        }
    }
}

Before updating a client task to the backend, first update the status locally.  This ensures the status and light are correct without depending on a backend connection.  A local buttonStatus array holds the status of each task.

From client.js

function toggleButtonStatus(btn){
    var btnPressed = btn+1;
    console.log("Button " + btnPressed + " pressed.");
    if (buttonStatus[btn] === 0)
        buttonStatus[btn] = 1;
    else
        buttonStatus[btn] = 0;
}

Next, make a socket connection to the server to update the task status or to request the status of all tasks.   The reply data is handled asynchronously and is pushed back into the local status array.  A simple JSON.parse() call converts the JSON reply into a usable array.  Both calls return the latest status of all tasks.

 

From client.js– Update Task Status on Gateway

function updateTaskOnGateway(taskID, status){
    client.connect(gatewayPortNumber, gatewayIPAddress, function(){
        var updateStr = '{"request":"UPDATE", "user":1, "task":' + taskID + ', "status":' + status + '}';

        console.info("Connected to server.  Updating status...");
        client.write(updateStr);
        });
}

From client.js– Get Tasks Status from Gateway

function getTaskStatusFromServer(){

    client.connect(gatewayPortNumber, gatewayIPAddress, function(){
        console.info("Connected to server.  Updating status...");
        client.write('{"request" : "GET", "user" : 1}');
        });
}

From client.js– Handle reply from Gateway

client.on('data', onServerReply);

function onServerReply(data){
    console.info('Received ' + data); //Example: (Pin1Status, .., PinNStatus)  [0,0,1,0]

    buttonStatus = JSON.parse(data.toString('utf8'));

    updateLEDStatus();

    client.destroy();
}

The full client.js software can be found at the end of this article.  In this client section, we have described how to:

  • Categorize unique resistor values and respond to button clicks given sets of ranges
  • Toggle the matching task LED using a shift register
  • Update status locally
  • Update task status to a backend TCP service
  • Update local task status given response from a backend TCP service

 

The next section will detail the server side solution, including how to store the tasks in an offline database and how to update and fetch status for any task. 

Software – Server

To set up the server in this example, we used Ubuntu* 16.04 installed on an Intel® IoT Gateway.  Using Node.js, we created a small service that accepts TCP connections and manages data with MongoDB*.

Installing nodejs*

sudo apt-get install nodejs

Installing MongoDB*

1. To install MongoDB, start by importing the public key for the download repository and creating the .list file that helps apt-get locate the packages during installation.

sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 7F0CEB10
echo "deb http://repo.mongodb.org/apt/ubuntu trusty/mongodb-org/3.0 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-3.0.list

2. Now reload the local package database

sudo apt-get update

3. Install NPM

sudo apt-get install npm

4. Install Mongo DB

sudo apt-get install mongodb-org

One final step is required to ensure the MongoDB service starts correctly.  Beginning with Ubuntu* 16.04, the process for starting services has changed: instead of Canonical's upstart launcher, Ubuntu now uses the more familiar systemd, in line with other Debian*-based distributions.  More background on this switch can be found here:  https://wiki.ubuntu.com/SystemdForUpstartUsers

 

5. Create a service file that systemd requires to launch mongodb. The file tells the system what to launch and which user to run the service as.

File contents of /etc/systemd/system/mongodb.service

[Unit]
After=network.target

[Service]
User=mongodb
Group=mongodb
ExecStart=/usr/bin/mongod --quiet --config /etc/mongod.conf
ExecReload=/bin/kill -HUP $MAINPID
StandardOutput=syslog
StandardError=syslog

[Install]
WantedBy=multi-user.target

6. Start the Mongo service

sudo systemctl start mongodb

7. To ensure it starts on next reboot

sudo systemctl enable mongodb.service

To verify the server software is installed correctly, create a simple TestMongo.js file that opens a connection to the database.  If no errors are reported, the connection from Node.js* to MongoDB* is working correctly.

SAMPLE - TestMongo.js

var MongoClient = require('mongodb').MongoClient;

MongoClient.connect("mongodb://localhost:27017/exampleDb", function(err, db){

if(err){
  return console.dir("Error connecting to MongoDB.");
}
else{
  return console.dir("Connected successfully.");
}
});

More information on MongoDB installation can be found here:

https://docs.mongodb.com/v3.0/tutorial/install-mongodb-on-ubuntu/

Now that the dependencies are installed and verified, we can write the gateway service that accepts TCP client connections for getting or updating task status.  The complete source for server.js can be found at the end of this article.

 

First, create the MongoDB collection that will hold our tasks.  After creating a new database, each user gets a separate collection that maintains the tasks and their active status.  The naming convention for each user collection is to append a unique userID to the word ‘UserTasks’.  After the collection is created, a few rows of sample data are inserted to represent each task.  The names of the tasks can optionally be stored in the database, but they are not referenced in this sample.

SAMPLE CreateTasks.js

var MongoClient = require('mongodb').MongoClient,
	Db = require('mongodb').Db,
	Server = require('mongodb').Server,
	assert = require('assert');


var db = new Db('MyTasks', new Server('localhost', 27017));

db.open(function(err, db){
if(err) {
  console.error(err);
}
else{
  console.info("Database: " + db.databaseName + " created");
  db.createCollection('UserTasks1', function(err, usertasks){

   if(err){
 	return console.dir("Error creating collection usertasks");
   }
   else{
	//Insert the tasks
	var task1 = {id: 1, title: 'Make up bed', status: 0};
	var task2 = {id: 2, title: 'Put away dishes', status: 0};
	var task3 = {id: 3, title: 'Brush teeth', status: 0};
	var task4 = {id: 4, title: 'Put shoes away', status: 0};

	usertasks.insert([task1, task2, task3, task4], function(err, result){
		if(err){
			return console.dir("Error inserting tasks");
		}
		else{
			console.log('Records inserted:  "_id"', result.length, result);
		}
	});
      }
});
}

});

With the data storage completed, the service can now be created to allow updating and fetching of tasks.  First, create a TCP server and assign a handler.  The handler parses the input data into fields representing the type of request, the userID, the taskID, and the status to set the task to.  

FROM server.js

var clientSocket = null;

server = net.createServer();
server.listen(Constants.SERVER_PORT, Constants.SERVER_IP_ADDRESS);

server.on('connection', handleClientConnection);

function handleClientConnection(socket){
	clientSocket = socket;
	socket.on('data', function(data){
		var clientJSON = JSON.parse(data.toString('utf8'), (key,value)=>{
			return value;
		});
		switch(clientJSON.request)
		{
		  case "UPDATE":
			updateTaskStatus(clientJSON);
			break;

		  case "GET":
			console.log("Getting all tasks for user " + clientJSON.user);
			getAllTasks(clientJSON.user);
			// the reply is written back asynchronously once all records
			// have been collected
			break;
		}
	});
}

To get all tasks for a user, a MongoDB cursor iterates the requested collection of tasks.  Each document is pushed onto a global array; once all records have been collected, the statuses are written back to the client.
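The counting pattern behind taskIterator and allRecordsCollected can be sketched without a database: each document seen bumps a counter, and when the counter reaches the total, the collected statuses are serialized for the client. Here a plain array stands in for the MongoDB cursor.

```javascript
// Illustrative sketch of the record-counting pattern: docs stands in
// for the cursor, and onComplete stands in for writing to the socket.
function collectAll(docs, onComplete) {
  var results = [];
  var seen = 0;
  docs.forEach(function (doc) {
    results.push(doc.id + ":" + doc.status);
    if (++seen === docs.length) {
      onComplete(JSON.stringify(results));  // all records collected
    }
  });
}

collectAll(
  [{ id: 1, status: 0 }, { id: 2, status: 1 }],
  function (reply) { console.log(reply); }  // ["1:0","2:1"]
);
```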

FROM server.js

function getAllTasks(userID){

    var qry = { "id": userID };
    var collectionName = "UserTasks" + userID;

    MongoClient.connect("mongodb://localhost:" + Constants.SERVER_MONGO_PORT + "/MyTasks", function(err, db){

        var taskCollection = db.collection(collectionName);
        var taskCursor = taskCollection.find().forEach(taskIterator);

        });

}
function taskIterator(doc){
	userTasks.push(doc.id + ":" + doc.status);
	if (++global.CurrentRecordCount===global.totalTaskCount)
		allRecordsCollected();
}

FROM server.js

function updateTaskStatus(inputJSON){

    MongoClient.connect("mongodb://localhost:" + Constants.SERVER_MONGO_PORT + "/MyTasks", function (err, db) {

    var qry = { "id": inputJSON.task };
    var collectionName = "UserTasks" + inputJSON.user;
    var taskCollection = db.collection(collectionName);
    var taskCursor = taskCollection.update(qry, {$set: {status:inputJSON.status} });

    getAllTasks(inputJSON.user);
  });}

Complete Source Code

The full source code for the example described in this article is available below.  

Client

/* client.js
*  - Poll for button clicks
*  - OnClick, send TCP request to update status
*  - Sync status of button with reply from backend [task1, task2, …, taskN]
*
* Matt Chandler
* Intel, Corp.  2017
*
*/
/* jslint node:true */
/* jshint unused:true */

"use strict" ;
var APP_NAME = "IoT Analog Read" ;
var cfg = require("./utl/cfg-app-platform.js")() ;      // init and config I/O resources
var net = require('net');
var client = new net.Socket();
var sleep = require('sleep');

var TIMER_POLL_BUTTON_MS = 600;
var TIMER_UPDATE_LEDS_MS = 10000000;
var buttonPin = 2; //Analog input for button

var button1SignalMin = 100;
var button1SignalMax = 900;
var button2SignalMin = 901;
var button2SignalMax = 1500;
var button3SignalMin = 3000;
var button3SignalMax = 5000;
var button4SignalMin = 2800;
var button4SignalMax = 3500;
var pinNumberShiftRegisterDataIn = 2;
var pinNumberShiftRegisterClock = 4;
var pinNumberTaskStatus1 = 2;
var pinNumberTaskStatus2 = 3;
var pinNumberTaskStatus3 = 4;
var pinClock, pinData, pinTaskStatus1, pinTaskStatus2, pinTaskStatus3;
var gatewayIPAddress = '192.168.1.225';//10.88.65.118
var gatewayPortNumber = 20174;
var MIN_INPUT = -1;
var MAX_INPUT = 1000000;
var LED_USING_SHIFT_REGISTER = true;
var TASK_STATUS_IN_CLOUD = false;
var buttonStatus = [1,0,1];
var pinLEDs = [];
var buttonClicked = false;

console.log("\n\n\n\n\n\n") ;                           // poor man's clear console
console.log("Initializing " + APP_NAME) ;

process.on("exit", function(code) {                     // define up front, due to no "hoisting"
    clearInterval(intervalButtonPoll) ;
    clearInterval(intervalUpdateStatusLEDs);
    console.log("\nExiting " + APP_NAME + ", with code:", code) ;
}) ;

// confirm that we have a version of libmraa and Node.js that works
// exit this app if we do not

if( !cfg.test() ) {
    process.exit(1) ;
}

if( !cfg.init() ) {
    process.exit(2) ;
}

cfg.io = new cfg.mraa.Aio(buttonPin) ;          // construct our I/O object
cfg.io.setBit(cfg.ioBit) ;

if (LED_USING_SHIFT_REGISTER){
    pinClock = new cfg.mraa.Gpio(pinNumberShiftRegisterClock);
    pinData = new cfg.mraa.Gpio(pinNumberShiftRegisterDataIn);

    pinClock.dir(cfg.mraa.DIR_OUT);
    pinData.dir(cfg.mraa.DIR_OUT);
}
else{
    pinTaskStatus1 = new cfg.mraa.Gpio(pinNumberTaskStatus1);
    pinTaskStatus2 = new cfg.mraa.Gpio(pinNumberTaskStatus2);
    pinTaskStatus3 = new cfg.mraa.Gpio(pinNumberTaskStatus3);

    pinTaskStatus1.dir(cfg.mraa.DIR_OUT);
    pinTaskStatus2.dir(cfg.mraa.DIR_OUT);
    pinTaskStatus3.dir(cfg.mraa.DIR_OUT);

    pinLEDs.push(pinTaskStatus1);
    pinLEDs.push(pinTaskStatus2);
    pinLEDs.push(pinTaskStatus3);

}

if (TASK_STATUS_IN_CLOUD){
    getTaskStatusFromServer();
    sleep.msleep(500);
}
else{
    onServerReply('[0,0,0]')
    sleep.msleep(500);
}
console.log("Syncing Task LEDs...");
updateLEDStatus();

function getTaskStatusFromServer(){

    client.connect(gatewayPortNumber, gatewayIPAddress, function(){
    client.write('{"request":"GET", "user":1}');
    });
}
client.on('data', onServerReply);
client.on('close', function(){
   console.log("connection closed");
});
function onServerReply(data){
    console.info('Received ' + data); //Example: (Pin1Status, .., PinNStatus)  [0,0,1,0]
    //based on data received, set matching lights and persist settings
    buttonStatus = JSON.parse(data.toString('utf8'));
    client.destroy();
}

//Callback for period button status read
var periodicActivity = function() {

    var analogIn = cfg.io.read() ;

    if ( !buttonClicked /* && (analogIn>MIN_INPUT) && (analogIn<MAX_INPUT) */) {
        process.stdout.write(analogIn + "") ;

        if ( (analogIn>button1SignalMin) && (analogIn<button1SignalMax) ){
            toggleButtonStatus(0);
            buttonClicked = true;
        }
        else if ( ( analogIn>button2SignalMin) && (analogIn<button2SignalMax) ){
            toggleButtonStatus(1);
            buttonClicked = true;
        }
        else if ( ( analogIn>button3SignalMin) && (analogIn<button3SignalMax) ){
            toggleButtonStatus(2);
            buttonClicked = true;
        }

        if (buttonClicked)
        {
            updateLEDStatus();

            //Give everything time to settle after the click
            setTimeout(function(){
                buttonClicked = false;
                console.info("Polling allowed again..");

            }, 1000);
        }
    }
} ;
var intervalButtonPoll = setInterval(periodicActivity, TIMER_POLL_BUTTON_MS) ;
var intervalUpdateStatusLEDs = setInterval(updateLEDStatus, TIMER_UPDATE_LEDS_MS);
function toggleButtonStatus(btn){
    var btnPressed = btn+1;
    console.log("Button " + btnPressed + " pressed.");
    if (buttonStatus[btn] === 0)
        buttonStatus[btn] = 1;
    else
        buttonStatus[btn] = 0;
}
function updateLEDStatus(){

    if (LED_USING_SHIFT_REGISTER){

        for (var i=buttonStatus.length-1;i>=0;i--){
            pinClock.write(0);
            sleep.msleep(50);
            pinData.write(buttonStatus[i]);
            sleep.msleep(50);
            pinClock.write(1);
        }
    }
    else{
       pinTaskStatus2.write(0);
       for (var idx=0;idx<pinLEDs.length;idx++){
           pinLEDs[idx].write(buttonStatus[idx]);
        }
    }
}

Server

/*
 - Task Tracker Gateway -
 -
 - Wait for Connections
 - UPDATE - Update Task Status and return all tasks for User
 - GET - Return all for User
 -
 - Matt Chandler
 - Intel, Corp.  2017

*/
var MongoClient = require('mongodb').MongoClient;
var net = require('net');
var Constants = require('./Constants');

global.CurrentRecordCount = 0;
var tasksToReturn = [];
var socket = null;
var clientSocket = null;
global.totalTaskCount = 0;

server = net.createServer();
server.listen(Constants.SERVER_PORT, Constants.SERVER_IP_ADDRESS);
server.on('connection', handleClientConnection);
server.on('error', function(err){ console.log(err);});

function handleClientConnection(socket) {

    global.CurrentRecordCount = 0;
	clientSocket = socket;
	socket.on('data', function (data) {

	    var clientJSON = JSON.parse(data.toString('utf8'), (key, value) =>{
			return value;
		});
	    socket.on('error', function(err){ console.log(err);});

		switch(clientJSON.request)
		{
		  case "UPDATE":
			updateTaskStatus(clientJSON);
			break;
		  case "GET":
			var alltasks = getAllTasks(clientJSON.user, socket);
			break;
		}
    });
}
function getAllTasks(userID){

    var qry = { "id": userID };
    var collectionName = "UserTasks" + userID;
    var taskCursor;

    MongoClient.connect("mongodb://localhost:" + Constants.SERVER_MONGO_PORT + "/MyTasks", function(err, db){

        var taskCollection = db.collection(collectionName);
        taskCollection.count(function(err,count){ global.totalTaskCount = count;});
        taskCursor = taskCollection.find().forEach(taskIterator);
    });
}
function taskIterator(doc){

	tasksToReturn.push(doc.id + ":" + doc.status);
	if (++global.CurrentRecordCount===global.totalTaskCount)
		allRecordsCollected();
}
function updateTaskStatus(inputJSON) {
    MongoClient.connect("mongodb://localhost:" + Constants.SERVER_MONGO_PORT + "/MyTasks", function (err, db) {

    var qry = { "id": inputJSON.task };
    var collectionName = "UserTasks" + inputJSON.user;
    var taskCollection = db.collection(collectionName);
    var taskCursor = taskCollection.update(qry, {$set: {status:inputJSON.status} });

    getAllTasks(inputJSON.user);
  });
}
function allRecordsCollected(){
	clientSocket.write(JSON.stringify(tasksToReturn));
}

Summary

This article demonstrated how to construct an electronic taskboard that lets a user physically touch a task to mark it as completed.  Client software was described that reacts to the touch by updating the task locally and calling out to a backend data service to update its status.  In addition, server software was outlined that listens for and manages incoming task updates and persists state into a MongoDB* database.  Finally, custom hardware was constructed to show how to manage multiple inputs and promote a modular design, allowing additional tasks to be added in the future.

About the Author

Matt Chandler is a software engineer at Intel working on scale enabling projects for Internet of Things.

References

Installation of Mongo DB on Ubuntu:   https://docs.mongodb.com/v3.0/tutorial/install-mongodb-on-ubuntu/

MongoDB introduction:https://mongodb.github.io/node-mongodb-native/api-articles/nodekoarticle1.html

Running mongodb on Ubuntu 16.04 LTS: http://stackoverflow.com/questions/37014186/running-mongodb-on-ubuntu-16-04-lts/37058244

The 74HC164 Shift Register and your Arduino:  http://www.instructables.com/id/The-74HC164-Shift-Register-and-your-Arduino/

74LS164 Datasheet:  http://html.alldatasheet.com/html-pdf/64008/HITACHI/74LS164/246/1/74LS164.html

Notices

No license (express or implied, by estoppel or otherwise) to any intellectual property rights is granted by this document.

Intel disclaims all express and implied warranties, including without limitation, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement, as well as any warranty arising from course of performance, course of dealing, or usage in trade.

This document contains information on products, services and/or processes in development. All information provided here is subject to change without notice. Contact your Intel representative to obtain the latest forecast, schedule, specifications and roadmaps.

The products and services described may contain defects or errors known as errata which may cause deviations from published specifications. Current characterized errata are available on request.

Copies of documents which have an order number and are referenced in this document may be obtained by calling 1-800-548-4725 or by visiting www.intel.com/design/literature.htm.

Intel, the Intel logo, and Intel RealSense are trademarks of Intel Corporation in the U.S. and/or other countries.

 

*Other names and brands may be claimed as the property of others

**This sample source code is released under the Intel Sample Source Code License Agreement.

© 2017 Intel Corporation.

Using Visualization to Tell a Compelling Data Story


Data is everywhere today. Terabytes of data are collected every second from weblogs, sensors, network devices, social media, and so on. Data is rich, and it can be powerful. But only if we can unleash the power hidden in the complex web of data all around us.

How can we unleash this power and put it to work? Not just by a handful of the smartest humans, but by everyone, in every field, at every level? The answer lies in data visualization.

In this article, we explain the core elements of a data story using a case study to understand how a data scientist explains, sells, and motivates the audience. The article describes the common mistakes that data producers make—and the confusion that results from these mistakes—which often leave them with a perception that the real question was never answered. The article also provides a framework to help avoid these mistakes.

CASE STUDY

A fictitious London-based training company, WeTrainYou, wants to start a local training facility in California. It is looking for a city where ample Salesforce* development jobs are available. Its goal is to train engineers and fulfill full-time and part-time jobs. WeTrainYou has hired you to determine where they should set up the business. You need to answer this question: which city in California has the most Salesforce developer jobs?

Understand the Challenge

To understand the challenge let’s look at the core elements of the problem statement.

  • WeTrainYou
    • A London-based training company.
    • It has 58 full-time trainers with Salesforce certifications.
    • Its core competency is the Salesforce platform.
    • It is an expert at training engineers and placing them as full-time or part-time Salesforce developers.
  • WeTrainYou wants to start a local training facility in California.
    • According to Wikipedia, California is divided into 58 counties and contains 482 municipalities.
    • California law makes no distinction between "city" and "town," and municipalities may use either term in their official names.
    • According to the 2010 Census, 30,908,614 of California's 37,253,956 residents lived in urban areas, accounting for 82.97 percent of the population.
  • Our challenge is to find out:
    • Which city in California has the most Salesforce developer jobs?

Rubric for a Good Solution

  • Our job:
    • Using data, determine which city in California has the maximum number of  Salesforce developer jobs
    • Use a method that a decision maker can easily understand
  • We can expect our stakeholders to be
    • Executives and board members
    • Sales and marketing personnel
    • Legal and public relations personnel
  • An effective solution would:
    • Be a simple, actionable visual
    • Suggest locations where the most Salesforce developer jobs are available in California

Anatomy of a Data Story

Now let’s examine the framework to solve this challenge.

Figure 1 illustrates the data science pipeline, showing the steps from data ingestion to data visualization.

Figure 1

Figure 1. Data science pipeline.

A data science pipeline starts with data. There are three steps, framed as questions below, that we must address in the process of formulating a compelling answer for this data challenge.

  1. Where will the data come from and how will we ingest it?
  2. Once the data is ingested, how will we cleanse it? How will we correlate it?
  3. And finally, how will we present it?

Figure 2

Figure 2. Core steps in the process.

Figure 2 illustrates these three steps. We will focus on the last step: presenting the data with data visualization.
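As a concrete (and deliberately tiny) illustration of these three steps, the sketch below fakes the ingest step with hard-coded records; the function names are ours for illustration, not a prescribed API.

```python
from collections import Counter

def ingest():
    # Stand-in for the ingestion step: scrape a job board or load open data
    return ["San Francisco, CA", "san francisco , CA", "Burbank, CA"]

def cleanse(locations):
    # Normalize case and whitespace so duplicate cities correlate
    return [loc.split(",")[0].strip().title() for loc in locations]

def present(cities):
    # Summarize for the visualization layer (here: postings per city)
    return Counter(cities)

summary = present(cleanse(ingest()))
print(summary.most_common(1))
```

Note how the cleansing step is what makes the two differently formatted "San Francisco" records count as one city; this is exactly the kind of correlation problem step 2 is asking about.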

Questions to Ask (and Answer) in Creating a Data Visualization Pipeline

Outlining the Story

  • What question or questions do you need to answer?
  • Is this a long-term question? Or do they need to act on it now?
  • Is this an informational, a motivational, or a sales story?
  • What will be the flow of my data story?
  • How many slides do I need?
  • What should be the message on each slide?

Understanding the problem and “so what” test

  • Why are the stakeholders asking this question?
  • What are they going to do with my recommendation?
  • What action(s) will they take?
  • What is the goal of this data story?

The challenge is to understand the problem so we can present an accurate and meaningful response using data visuals.

Data

  • What data do I need to answer the question?
  • Where are the data sources?
  • Do I need primary data (that is, collected via surveys and Python* scripts) or secondary data (for example, open government data) to answer the previous question?
  • What tools and techniques do I need to collect the data?
  • Am I going to use BeautifulSoup* to scrape the data? Or do I need to send one million surveys to get the data?
  • How am I going to store the data? Is it going to need billions of rows?
  • Do I need to worry about setting up NOSQL such as MongoDB*? Or can I get away with saving the data in simple flat file (for example, CSV)?
  • How dirty is the data going to be?
  • Do I have access to clean government data in a wonderful Excel* format? Or do I need to write Python scripts to cleanse it (for example, deduplication, normalization, and handling missing values)?
  • How many features am I getting in the data? Do I need heavy feature engineering to answer this question?

Algorithm

Now we have tons of data. We understand the format, structure, and features of the data.

  • What algorithm do I need to answer the question?
  • Is this a supervised machine learning (we provide true labels to the engine and use the model to predict) problem?
  • Is this an unsupervised (for example, clustering) problem?
  • Will this algorithm be too slow for this question?
  • Do I need to support near-real-time visuals? 
  • Do I really need any algorithm? Or can my question be answered with the data available?

Visual encoding

  • What markers (for example, lines or circles) and channels (for example, color, size, or tilt) are best suited to present the story?
  • What colors should I use?
  • Am I presenting this to an audience that is sensitive to certain markers or channels?
  • Is the audience tech-savvy?
  • How interactive are my data visuals going to be?
  • Is this fully interactive?
  • How is the audience going to consume my visuals? Are they going to pinch or zoom in on a visual?
  • Are they going to use their smartphones or tablets?
  • What tools am I going to use to develop the visuals?
  • Can Tableau offer what I want to present, or do I need D3.js to create my visuals?

Story flow and insights

  • Are my three slides making sense?
  • Is the flow working?
  • Is the question answered?
  • “Email test” – If I email this visual to someone in England, will they understand it (without my explaining each visual component in the email)?

Act on the story

  • Will they act on my story?
  • Will they act on my data visuals?
  • Did I motivate them enough to act (assuming this was a motivational story)?
  • Did their question get answered?

Avoiding Pitfalls (Working on Feedback)

A situation may arise in which you don’t have the right data. If this happens, you need to go back to the drawing board and collect new data from scratch.

You might also get feedback that the data story is not working (for example, the flow or visualization doesn’t work well).

To mitigate these situations, we present the framework for questions to ask while troubleshooting your data story.

You did not understand the problem

  • Why are they asking for this visual?
  • What is the story behind it?
  • A simple answer is to get a domain expert on your team. If you are building a healthcare data story visualization, it’s a good idea to have a doctor as a mentor to the team.
  • Interview target users both as a group and one-on-one.
  • Document the “ask” very clearly for the entire team.

Visual coding is incorrect

  • Did we use too many channels?
  • Did we understand how much color to use?
  • Did we use too much animation? 3D?
  • Were the visual elements too complex?
  • How can we keep it simple?

Algorithm is too slow

  • Did we use the right algorithm?
  • Is this a staging issue?
  • Is this a production infrastructure issue?
  • Is there another issue?
    • Run tests to see what component is slow.
    • Measure the time based on the data set, and so on.

Act

  • Why has no one used the feature, product, or visual?
  • How can you build a talkback? Where did they click?
  • Can we use tools like Amazon Mobile Analytics* to understand each component?
  • Did they click to “drill-down” on data?
  • Were they trying to download?
  • What part of the visual data story was not even viewed?

Back to the Case Study

Now let’s look at our problem statement one more time.

A fictitious London-based training company, WeTrainYou, wants to start a local training facility in California. They are looking for a city where ample Salesforce development jobs are available. Their goal is to train engineers and fulfill those full-time and part-time jobs. They have hired you to explain where they should set up the business. You need to answer this question: which city in California has the most Salesforce developer jobs?

The problem statement

  • CEO needs to decide on a location (city) in which to open a training facility in California.
  • This is a priority action task and they need to make a decision quickly.

What data do we need and where can we get it from?

  • Dice.com, a job board, will provide the raw data.
  • We will use a BeautifulSoup Python script to scrape the data.
  • To keep it simple, we will just grab “Title” and “Location.”
  • We need city information in this data.
  • We need to have geocoding information in the data.
  • To display the map, we need longitude and latitude. Luckily, Tableau has built-in geocoding.

Sample code

Here is the Python script.

## (C) DataTiles.ai
## (C) DataTiles.io
## This is a proof-of-concept script; please do not use in production
## Sudhir Wadhwa, Jyoti Wadhwa, January 2016
import bs4 as bs
import csv
import requests

holder = dict()
myurl = 'https://www.dice.com/jobs?q=Salesforce+Developer&l=CA'
try:
    # For Python 3.0 and later
    from urllib.request import urlopen
except ImportError:
    # Fall back to Python 2's urllib2
    from urllib2 import urlopen

sourcehtml = urlopen(myurl)
soup = bs.BeautifulSoup(sourcehtml, "lxml")
with open('TableauJobsLocations.csv', 'w') as csvfile:
    fieldnames = ['Title', 'Location']
    jobwriter = csv.DictWriter(csvfile, fieldnames=fieldnames, dialect="excel", lineterminator='\n')
    jobwriter.writeheader()
    for a in soup.find_all('a', {"class": "dice-btn-link"}, href=True):
        url = a['href']
        if url.find('jobs/detail') > 0:
            response = requests.get(url)
            # Use a separate variable so the listing-page soup is not overwritten
            jobsoup = bs.BeautifulSoup(response.text, "lxml")
            # Job description text (fetched but not written to the CSV)
            jobDesc = jobsoup.find("div", {"id": "jobdescSec"}).get_text().encode('ascii', 'ignore').upper()
            holder['Title'] = jobsoup.find("h1", {"class": "jobTitle"}).get_text().encode('ascii', 'ignore').strip()
            holder['Location'] = jobsoup.find("li", {"class": "location"}).get_text().encode('ascii', 'ignore').strip()
            jobwriter.writerow(holder)
            holder.clear()

Sample output

Here is the output stored in TableauJobsLocations.csv

sudhirwadhwa ~/Desktop/tbd/SCU $ cat TableauJobsLocations.csv
Title,Location
Sr. Salesforce Developer,"San Marcos, CA"
Salesforce Developer,"Los Angeles, CA"
Senior Salesforce Developer,"San Francisco, CA"
Salesforce Developer - FTE,"San Francisco, CA"
Salesforce Developer - Burbank - 125k+ DOE,"Burbank, CA"
Salesforce Developer,"San Francisco, CA"
Senior Salesforce Developer,"Los Angeles, CA"
Senior Salesforce Developers,"San Diego, CA"
Junior Salesforce developer,"Aromas (monterey County), CA"
Sr. Salesforce Developer,"Santa Clara, CA"
Mid-Level Salesforce Developer,"El Segundo, CA"
Lead Salesforce Developer,"San Bruno, CA"
Lead Salesforce Developer,"San Bruno, CA"
Salesforce Developer,"San Diego, CA"
Salesforce Dev/Admin,"Los Angeles, CA"
Salesforce Developer,"Burbank, CA"
Salesforce developer/Admin,"Oakland, CA"
Sr. Salesforce Developer,"San Marcos, CA"
Salesforce Developer,"San Francisco, CA"
Salesforce Developer,"Vista, CA"
Salesforce Developer,"San Ramon, CA"
SalesForce Developer,"Burbank, CA"
Senior Salesforce Developer,"San Francisco, CA"
Salesforce Developer,"San Ramon, CA"
SalesForce Developer,"Burbank, CA"
Salesforce Developer,"Burbank, CA"
Salesforce Developer,"San Rafael, CA"
Salesforce Developer,"San Francisco, CA"
Salesforce Developer,"San Diego, CA"
Senior Salesforce Developer,"Milpitas, CA"
Sr. Salesforce Developer,"San Marcos, CA"
Salesforce Developer,"Los Angeles, CA"
Senior Salesforce Developer,"San Francisco, CA"
Salesforce Developer - FTE,"San Francisco, CA"
Salesforce Developer - Burbank - 125k+ DOE,"Burbank, CA"
Salesforce Developer,"San Francisco, CA"
Senior Salesforce Developer,"Los Angeles, CA"
Senior Salesforce Developers,"San Diego, CA"
Junior Salesforce developer,"Aromas (monterey County), CA"
Sr. Salesforce Developer,"Santa Clara, CA"
Mid-Level Salesforce Developer,"El Segundo, CA"
Lead Salesforce Developer,"San Bruno, CA"
Lead Salesforce Developer,"San Bruno, CA"
Salesforce Developer,"San Diego, CA"
Salesforce Dev/Admin,"Los Angeles, CA"
Salesforce Developer,"Burbank, CA"
Salesforce developer/Admin,"Oakland, CA"
Sr. Salesforce Developer,"San Marcos, CA"
Salesforce Developer,"San Francisco, CA"
Salesforce Developer,"Vista, CA"
Salesforce Developer,"San Ramon, CA"
SalesForce Developer,"Burbank, CA"
Senior Salesforce Developer,"San Francisco, CA"
Salesforce Developer,"San Ramon, CA"
SalesForce Developer,"Burbank, CA"
Salesforce Developer,"Burbank, CA"
Salesforce Developer,"San Rafael, CA"
Salesforce Developer,"San Francisco, CA"
Salesforce Developer,"San Diego, CA"
Senior Salesforce Developer,"Milpitas, CA"
sudhirwadhwa ~/Desktop/tbd/SCU $

Figure 3. Sample output.

Manipulate the output in Tableau*

Next we bring the data into Tableau and split the location into State and City columns. A snapshot of the data source looks like Figure 4.

Figure 4

Figure 4. Data source split in Tableau* with geocoding.
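If you want to sanity-check the Tableau result in code, the same split-and-count can be done with the Python standard library. The sketch below inlines a few rows from the scraped output above rather than reading the full CSV file.

```python
import csv
import io
from collections import Counter

# A few rows from the scraped CSV above, inlined for illustration
sample = io.StringIO(
    'Title,Location\n'
    'Sr. Salesforce Developer,"San Marcos, CA"\n'
    'Salesforce Developer,"San Francisco, CA"\n'
    'Senior Salesforce Developer,"San Francisco, CA"\n'
)

# Split "City, ST" and count postings per city, as the Tableau dashboard does
counts = Counter(row['Location'].split(',')[0].strip()
                 for row in csv.DictReader(sample))
top_city, n = counts.most_common(1)[0]
print(top_city, n)
```

On the full data set, the same counting logic is what the dashboard in the next section visualizes on a map.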

Create a dashboard

Next create two workbooks and use them in a dashboard (see Figure 5).

Figure 4

Figure 5. The winner is San Francisco (based on the data set).

Conclusion

Effective storytelling with data visualization not only helps us unleash the power hidden in the terabytes of complex data, but also enables our audience to understand the data and in turn make it actionable.

For instance, data visualization can do the following:

  • Meaningfully answer the questions executives are asking.
  • Enable data scientists and data engineers to not get lost in the data ocean.
  • Empower middle management to draw actionable insights
  • Enable data driven decision making at every level.
  • And more.

How to move the Intel Software License Manager to a new server


Moving the Intel® Software License Manager involves many of the same steps as the initial installation.  License checkout will be unavailable until the steps are completed.

  1. Download the Intel® Software License Manager User's Guide.
  2. Determine the hostname and host ID for the new server.  Instructions here.
  3. Log into the Intel® Registration Center.
  4. Download the latest version of the Intel Software License Manager and copy it to your new server.
  5. Modify the host information for your license by following the instructions here.
  6. Download your new server license to your new server.  The default folder used by the license server is /opt/intel/licenses/ for Linux* and [Program Files]\Common Files\Intel\ServerLicenses\ for Windows*.
  7. Download the client license file.
  8. Run the license manager installer according to the instructions in the User's Guide and provide the new license file or folder.
  9. After starting the license manager (lmgrd) process, make sure that the lmgrd and INTEL vendor daemon ports are not blocked by a firewall.
  10. Update the client machines to access the new server.  Check the INTEL_LICENSE_FILE environment variable.
    1. If it uses port@host, change them to the new server.  Most likely only the host needs to change.  
    2. If it contains a path, check the path for the floating license.  Remove this floating license, and replace it with the client license file downloaded from the registration center in step 7.
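For example, on a Linux* client the environment variable can be pointed at the new server like this. The hostname below is a placeholder, and 28518 is only a commonly used lmgrd port; use the port your license file actually specifies.

```shell
# Point this client at the new license server (placeholder host and port)
export INTEL_LICENSE_FILE=28518@new-license-server.example.com
echo "$INTEL_LICENSE_FILE"
```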

For additional support, please file a ticket via the Online Service Center.

Beating Tune Out with Useful Content


Guiding the Developer Path

If you want to win the hearts and minds of developers in their I-want-to-know, I-want-to-do, and I-have-to-fix moments, you’ll need to do more than just show up. 

You need to be useful and meet their needs in those moments. That means connecting developers to what they’re looking for in real time and providing relevant information when they need it. Users gravitate toward brands with snackable, educational content.

Only 9% of users will stay on a mobile site if it doesn’t satisfy their needs (for example, to find a solution to their coding problem). With an average of XX% of our developers on mobile, that's a significant portion of your traffic. Without utility, they will not only move on in the moment, they actually might not ever come back.

69% of users agree that the quality, timing, or relevance of a company’s message influences their perception of a brand.

Our most popular content is centered around the “how-to” search. It’s the “I need to fix a performance issue” moment or the “I want to add a new feature to my app” moment. This is where video content can play a huge role, since it allows them to learn at their own pace, often with step-by-step instructions. 

 

"I want it NOW"

That sounds like something a toddler in the terrible twos would say, but it’s also what our audience is saying. They want immediate gratification, and they’re working faster than ever before. How can you improve the content and simplify the flow to get developers to what they want quickly?

 

Eliminate Steps

Think about the goal of your site: are you trying to drive awareness, downloads, registrations, or consumption? Everything you do should have a singular focus. Start with that goal and think about how you can cut the number of steps a user must take to reach it. 

 

Anticipate Needs

Being quick also involves knowing what the developer wants BEFORE they want it. Put your big stuff first. You aren't writing a mystery novel where everything will be revealed at the last moment. The goal of every page should be easy to understand and the first thing developers see. You may have secondary goals, but they should never interfere.

 

Do a Reality Check

Grab your phone and try a few of these tasks. Even better, find someone who isn't familiar with your content and ask them to perform these tasks. How well does your content hold up? Can you streamline it further?

  1. Think of the key action you want a user to take. How long did it take to perform?
  2. Think of the most searched-for topics for your area. Try those searches. Are you there and do you like what you see?
  3. Find one of your new articles. How long does it take to read? 
  4. Think about which elements on your site are absolutely, positively, undeniably essential for your developers. How fast can you find them?
  5. Does every page you go to clearly state its goal at the top? Is anything else adding to the clutter?
  6. Can you easily remember the top key points about your content (your 15sec pitch)?
  7. If you scan down the page quickly, what do you remember seeing?
  8. Are you fighting with yourself or other Intel properties for developer attention in search results? 

 

Intel Solutions and Technologies for the Evolving Data Center


 

One Stop for Optimizing Your Data Center

From AI to Big Data to HPC: End-to-end Solutions

Whether your data center is data- or compute-intensive and whether it serves cloud, high-performance computing, enterprise, storage, networking, or big data analytics, we have solutions and technologies to make your life easier. 

Explore

 

Data center managers, integrators, and developers can now optimize the entire stack to run faster and more efficiently on Intel® architecture. The Intel® Xeon® and Intel® Xeon Phi™ product family paired with Intel® Solid State Drives and NVMe* storage provide a strong foundation. Intel is committed to a standardized, shared platform for virtualization including SDN/NFV (networking), while providing hardware-based security and manageability for now and in the future.

But Intel is more than a hardware innovator. Regardless of your challenges, Intel provides optimized industry SDKs, libraries, and tuning tools. And these tools are supplemented by expert-provided training plus documentation including code samples, configuration guides, walk-throughs, use cases, and support forums.
 

 

AI: MACHINE LEARNING AND DEEP LEARNING

Intel supports rapid innovation in artificial intelligence, focusing on community, tools, and training. Starting with the Intel® Nervana™ AI Academy, this section of the Intel® Software Developer Zone drills down into computational machine learning and deep learning, with extensive Intel-optimized libraries and frameworks along with documentation and tutorials.

The Deep Learning Training Tool Beta helps you easily develop and train deep learning solutions using your own hardware. It can ease your data preparation and help you design and train models using automated experiments and advanced visualizations.

Tools available include:
BigDL open source distributed library for Apache Spark*
Intel® Distribution for Python*
Deep Learning Webinar

 

MODERN CODE

You’ve no doubt heard of the recent hardware innovations of the Intel® Many Integrated Core Architecture (Intel® MIC), including the multilevel extreme parallelism, vectorization, and threading of the Intel® Xeon® and Intel® Xeon Phi™ product family. Plus, there are larger caches, new SIMD extensions, new memory and file architectures, and hardware-enforced security of select data and application code via Intel® Software Guard Extensions (Intel® SGX).

But they all require code and tool changes to get the most from the data center. To address this, Intel provides training and tools to quickly and easily optimize code for new technologies.

Extensive free training on code improvements and parallel programming is available online and by workshops and events.

Tools available include:
Intel® Parallel Studio XE (vectorization advisor and MPI profiling)
Intel® Advisor (vectorization optimization and threading design tool)
Intel® C/C++ Compilers and Intel® Fortran Compilers
Intel® VTune™ Amplifier XE (performance analysis of multiple CPUs and FPUs)
Application Performance Snapshot Tool

 

BIG DATA ANALYTICS

When handling huge volumes of data, Intel can help you provide faster, easier, and more insightful big data analytics using open software platforms, libraries, developer kits, and tools that take advantage of the Intel Xeon and Intel Xeon Phi product family’s extreme parallelism and vectorization. Fully integrated with popular platforms (Apache* Hadoop*, Spark*, R, MATLAB*, Java*, and NoSQL), Intel optimizations have been well-tested and benchmarked.

Extensive documentation is available on how real-life developers are using Intel hardware, software, and tools to effectively store, manage, process, and analyze data.

The Intel® Data Analytics Acceleration Library (Intel® DAAL) provides highly-optimized algorithmic building blocks and can be paired with the Intel® Math Kernel Library (Intel® MKL) containing optimized threaded and vectorized functions. In fact, the TAP Analytics Toolkit (TAP ATK) provides both Intel® DAAL and Intel® MKL already integrated with Spark.

 

HIGH-PERFORMANCE STORAGE

Intel is at the cutting edge of storage, not only with Intel® SSDs and NVMe, but also by working with the open source community to optimize and secure the infrastructure. Training is available at Intel® Storage Builders University.


Major tools available include:
Intel® Intelligent Storage Acceleration Library (Intel® ISA-L)
Storage Performance Development Kit (SPDK)
Intel® QuickAssist Technology
Intel® VTune™ Amplifier
Storage Performance Snapshot
Intel® Cache Acceleration Software (Intel® CAS)

 

SDN/NFV NETWORKING

Besides providing a standardized open platform ideal for SDN/NFV (virtualized networking) and the unique hardware capabilities in Intel’s network controllers, Intel has provided extensive additions to, and testing of, the Data Plane Development Kit (DPDK) and training through Intel® Network Builders University. Check out the thriving community of developers and subscribe to the 'Out of the Box' Network Developers Newsletter.

   

HPC AND CLUSTER

If you run visualization or other massive parallelism applications, you know the advantages of using the Intel Xeon and Intel Xeon Phi product family with MCDRAM and associated NUMA/Memory/Cache Modes, wide vector units and up to 68 cores. While the Intel® Scalable System Framework (Intel® SSF) and Intel® Omni-Path Architecture (Intel® OPA) focus on performance, balance and scalability, Intel is working with research and production HPC and clusters to support integration with all the major stacks as well as developing code and tools to optimize and simplify the work.

The Intel® HPC Orchestrator provides a modular integrated validated stack including the Lustre* parallel file system. It is supplemented by critical tools for cluster optimization:

Intel® Trace Analyzer and Collector, which quickly finds MPI bottlenecks
Intel® MPI Library and docs to improve implementation of MPI 3.1 on multiple fabrics
MPI Performance Snapshot to help with performance tuning
Intel® VTune™ Amplifier XE for performance analysis of multiple CPUs, FPUs, and NUMA

 

 

Conclusion

Regardless of your job title and data center activities, Intel helps streamline and optimize your work to gain a competitive edge with end-to-end solutions, from high-performance hardware to new technologies, optimizations, tools and training. See what resources Intel provides to optimize and speed up your development now and remain competitive in the industry.

Explore

The New Issue of The Parallel Universe is Here: Transform Sequential C++ Code to Parallel with Parallel STL


Get your hands on the new issue of The Parallel Universe, Intel’s quarterly magazine that explores inroads and innovations in software development.

This issue’s feature article, Parallel STL: Boosting Performance of C++ STL Code, gives an overview of the Parallel Standard Template Library in the upcoming C++ standard (C++17) and provides code samples illustrating its use.

This issue’s other hot topics include:

  • Happy 20th Birthday, OpenMP*: Making parallel programming accessible to C/C++ and Fortran* programmers
  • Solving Real-World Machine Learning Problems with Intel® Data Analytics Acceleration Library: Models are put to the test in Kaggle competitions
  • HPC with R: The Basics: Satisfying the need for speed in data analytics
  • BigDL: Optimized Deep Learning on Apache Spark*: Making deep learning more accessible

Read it now >


Deploy an SDN Wired/Wireless Network with Open vSwitch* and Faucet*


By Shivaram Mysore

Overview

This article describes a Software Defined Networking (SDN) enabled wireless network using Intel hardware, Open vSwitch* (OvS) and Faucet*, which is an open source SDN controller. Instructions on how to configure this network are included.

Why an SDN-Enabled Network?

Unlike a traditional L2/L3 switch, an SDN-enabled switch provides for control and data plane separation. In the above illustration, Faucet represents a controller, and OvS can represent a data plane switch. With the use of standards-based OpenFlow* protocol, you don’t have to write device-specific drivers to handle various switch data paths. Additional advantages include the following:

  • Upgrade controller in <1 sec while the network is still running, without having to reboot the hardware; help prevent zero-day attacks
  • Easy automation and integration with YAML-based configuration
  • Configurable learning; example: unicast flooding
  • Configurable routing algorithms
  • ACLs, policy-based forwarding (PBF), policy-based routing (PBR) based on OpenFlow matches
  • Stacking of vendor-agnostic switches (fabric)
  • High availability via Idempotency
  • Scalability
  • Data plane for network functions virtualization (NFV)
  • NFV offload support: DHCP, DNS, NTP, BGP, Radius, and more
  • Dynamic segmentation based on 802.1x
  • Real-time network statistics and flow information (at most a 10-second delay). Statistics are stored in a time-series InfluxDB* database so that one can look at the network historically. Flows are stored in CouchDB*.
  • Applications are written to Apache CouchDB™ and InfluxDB* APIs for network state information without causing network switch performance overhead.

Refer to Faucet presentations and articles for more detailed information.

Faucet Network Deployment

The figure above shows the network configuration. The Intel® processor-based server hosts the OvS (v2.6) software on Ubuntu* v16.10 to serve as the OpenFlow switch data plane. Another Intel® Celeron® processor-based box (QOTOM Mini PC) running Ubuntu 16.10 serves as the host for the Faucet and Gauge* controller. A pfSense*-based router is used for network isolation.

Setting Up the Software

pfSense

Download and install the open source pfSense router software on an Intel processor-based box (example: QOTOM Mini PC). Most of the required services run out of the box on installation.

Faucet and Gauge Controller

  1. Install Ubuntu 16.10 server on an Intel processor-based box (example: QOTOM Mini PC). Alternatively, a virtual machine or Docker image can be used. Refer to the Faucet website for more information.
  2. After installation, as user root, git clone the repository.
    $ sudo su
    $ cd ~/
    # git clone https://github.com/shivarammysore/faucetsdn-intel/
    # cd ~/faucetsdn-intel/src/scripts/install
  3. Run the script to set up Faucet, CouchDB, Grafana* Server for Gauge.
    # cd ~/faucetsdn-intel/src/scripts/install
    # ./install_4faucet.sh
  4. Make sure to update the configuration files with the correct Datapath ID (dp_id) and port information. For more information on modifying the files, check out the Faucet YouTube* demo videos:
    # /etc/ryu/faucet/faucet.yaml
    # /etc/ryu/faucet/gauge.yaml


    Restart services as needed.
    # systemctl restart faucet
    # systemctl restart gauge
  5. This should start the Faucet and Gauge services. Note the IP address of the machine. Faucet runs on port 6653 and Gauge on port 6654.

Installing Open vSwitch on Intel x86

This section lists the steps for getting OvS working as a software switch, verifying that the connectivity and the software stack work.

  1. Install the Ubuntu 16.10 server on the Intel processor-based server as shown in the figure above. You only need to install the OpenSSH* and basic system utilities packages.
  2. After installation, as user root, git clone the repository
    $ sudo su
    $ cd ~/
    # git clone https://github.com/shivarammysore/faucetsdn-intel/
    # cd ~/faucetsdn-intel/src/scripts/install
  3. Edit the installation script, Step1_install_u16_10s_pkgs.sh:
    1. Set USER_LIST to the name of the user on the system.
    2. Check all the interface names to make sure that they match your system.
  4. Run the script to install the required packages, and then set up Docker. Note: For simplicity, Docker is not used at this time, so you may want to comment out the Docker section as appropriate.
    # ./Step1_install_u16_10s_pkgs.sh
  5. Edit the ovswitch.properties file, which is self-descriptive.
    1. Make sure IPV6 = false and DPDK = false
    2. Refer to the above figure for various port names and relationships.
  6. Run the script to set up OvS
    # ./Step3_setup_ovs.sh
  7. Make sure that the value of DATAPATH_ID in ovswitch.properties is the same as the one in the /etc/ryu/faucet/faucet.yaml file. This tells the controller which switch it needs to manage and monitor.
  8. If everything is set up right, OvS is running and should already be managed by the controller.
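The datapath ID consistency check from step 7 can be scripted. The sketch below uses placeholder values for illustration; in practice you would pull DATAPATH_ID out of ovswitch.properties and dp_id out of /etc/ryu/faucet/faucet.yaml (the files named in the steps above).

```shell
# Sketch of the dp_id consistency check from step 7.
# Placeholder values shown; in practice, extract DATAPATH_ID from
# ovswitch.properties and dp_id from /etc/ryu/faucet/faucet.yaml.
ovs_dpid="0x0000000000000001"      # value of DATAPATH_ID in ovswitch.properties
faucet_dpid="0x0000000000000001"   # value of dp_id in faucet.yaml

if [ "$ovs_dpid" = "$faucet_dpid" ]; then
    echo "dp_id values match; Faucet will manage this switch"
else
    echo "dp_id mismatch; fix faucet.yaml or ovswitch.properties" >&2
fi
```

If the two values differ, the controller will silently ignore the switch, so this is worth checking before debugging anything else.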

Wireless

  1. Connect any wireless access point, such as the low cost TP-LINK TL-WA855RE, to one of the OpenFlow ports managed by the OvS bridge.
  2. Because the LAN cable terminates on an OpenFlow port, any client connected to the wireless AP will be served by OpenFlow-enabled OvS and controlled by Faucet.

Summary

In this article, we learned how simple it is to configure an enterprise-grade, fully programmable SDN wired/wireless network using off-the-shelf Intel® x86 hardware and the Faucet SDN controller software. This setup enables on-demand programming for security and network operations scenarios.

Questions and Support

About the Author

Shivaram Mysore is a serial entrepreneur, results-oriented business leader, and recognized technical expert in security (cryptography, identity, web services, XML) and networking (SDN). He has worked for companies such as Sun Microsystems (Sunlabs/JavaSoft), Microsoft, and Infoblox, and has consulted for Fortune 500 companies, in addition to contributing to the development of many industry standards at W3C, ONF, the PC/SC Workgroup, and ANSI. Currently, he contributes as a core team member to Faucet SDN-related open source initiatives and helps organizations deploy SDN. He can be reached via shivaram dot mysore at gmail dot com.

References

How to Set Up Your Intel® NUC Kit


Setting up your new hardware, once purchased, can be a daunting experience. This article will demonstrate how simple it is to set up and complete your new Intel® NUC kit.

Intel® NUC is a Big Player in a Tiny Box

The Intel® NUC Kit NUC6i7KYK, with its sleek and compact form factor, is a workhorse with Intel® Iris® Pro graphics. The Intel® NUC has proven to be a tiny, powerful PC that is great for gaming, living rooms, tradeshows, festivals, and anywhere space is at a premium. This PC platform has many advantages for the game developer: it is easy to customize, configure, and take with you.

Selecting the right hardware and then setting it up correctly can prove daunting, so let's walk through the process step by step.

Intel® NUCs are Fully Customizable

You may be wondering why Intel® NUCs are shipped as incomplete, bare-bones PCs. The answer is user choice. One buyer may need two massive SSDs in RAID 0 for maximum storage in a compact form factor; another may need extra memory. A small, customizable PC with the power of a 6th generation Intel® Core™ i7-6770HQ processor is exactly what many enthusiasts need.

Since the Intel® NUC ships without a hard drive or memory, it is up to the user to purchase compatible hardware. For memory, this can be confusing. Thankfully, most motherboard manufacturers include a list of compatible memory hardware. Simply visit the System Memory for Intel® NUC Kit NUC6i7KYK page, select the memory amount and speed you need, then copy and paste the part number into your shopping cart where you would purchase memory. For example: HX421S13IBK2/16 is a 16GB kit of memory available at many retailers.

Next you will need an M.2 drive (or two!) such as the Intel® SSD 600p Series. Two drives can be configured in RAID 0 for extra speed, but we won’t be covering RAID setups today.

That’s all the hardware you’ll need to set up your Intel® NUC. Now onto the fun part - installation.

Unboxing your Intel® NUC

Start by taking your Intel® NUC out of the packaging. Make sure the power cable is not plugged in. There are 4 screws on the bottom – loosen them! Note that the screws will not come all the way out of the chassis lid.

Tools you will need

Once you have removed the lid you’ll be greeted by a nice bare board ready for components to be slotted in.

Meet your unboxed system

Start by inserting your memory into the DDR4 DIMM slots. When it’s completed, it should look like the picture below. The memory should make a satisfying snap sound to confirm it has been properly seated and the contacts should mostly be hidden.

Intel® NUC with memory installed

Here is an up-close image:

A closer view of the Intel® NUC with memory installed

Next we’ll install an M.2 hard drive. Locate the M.2 HDD installation point and remove one of the anchoring screws. The screws are opposite the side of the machine where the memory is installed. Unlike the screws for the chassis lid, these screws will fully come out of their sockets.

View of the Intel® NUC – M.2 HDD installation point

Once the screw is removed, insert the M.2 drive into its socket, then replace the screw to hold the drive in place. Once completed, it should resemble the picture below.

Intel® NUC with M.2 drive installed

With both the M.2 SSD and the memory installed, your Intel® NUC hardware installation is complete!

View of the Intel® NUC with memory and M.2 drive installed

Installing Windows* on your Intel® NUC

The next step is to begin your Windows* installation. Microsoft offers the Media Creation Tool to aid system builders in creating their own Windows 10 installation media. Note that the utility will wipe the target drive, so move any needed information off of it before you proceed. Follow the instructions on the Microsoft website to download and install Windows.

After completing the installation of Windows, it is a good idea to make sure your BIOS is up to date. The most current BIOS can be found on Intel’s Driver and Support website.

After you download and run the BIOS update, it can take up to 3 minutes to install and will reboot your machine. Do not attempt to power off, reboot, or remove the power cable while the BIOS is updating!

The final step is to run Windows Update. Windows Update will retrieve the latest Intel drivers, or you can download the latest drivers from Intel’s Driver and Support website (the same website linked previously).

And you’re done! Install your development environment and create awesome content!

Summary

In this article, we discussed the benefits of the Intel® NUC for users and particularly for gaming enthusiasts. We stepped through the installations of memory and hard drive and finally the installation of Windows on your Intel® NUC.

References

  1. Mighty Meets Mine: Intel® Skull Canyon

About the Author

Landyn Pethrus is an engineer at Intel, avid gamer, and hardware enthusiast. He specializes in fountain sniping opponents with Ancient Apparition in Dota 2* and slaying bosses in World of Warcraft*.

Introduction to the Zephyr* Real-Time Operating System (RTOS) with the Intel® Quark™ microcontroller D2000


Overview

This article introduces you to the Zephyr* RTOS and explains how to configure it for the Intel® Quark™ microcontroller D2000.

Zephyr* RTOS with the Intel® Quark™ microcontroller D2000

Welcome to the Zephyr* RTOS with the Intel® Quark™ microcontroller D2000! Intel is now building embedded microcontrollers, scaling the Pentium® processor down to microcontroller size to serve as the heart of small battery-powered devices. The Intel® Quark™ microcontroller D2000, based on Intel’s lowest-power Pentium® processor, is designed to control battery-powered electronics like wireless sensors and wearables. To support development with the Intel® Quark™ microcontroller D2000, and to make it easy to build devices with other Intel® Quark™ microcontrollers and beyond, Intel worked with the Linux Foundation* to build a real-time operating system (RTOS) called Zephyr*. Zephyr is an open-source RTOS designed to operate in microcontrollers with limited memory. The Zephyr RTOS is a software platform that simplifies software development, freeing you up to focus more on algorithms and less on hardware.

The Zephyr RTOS includes driver libraries to:

  • Talk to sensors
  • Keep track of time
  • Send messages to the internet
  • Communicate using radios, like Bluetooth® technology or Wi-Fi
  • Manage power consumption to extend battery life

The Zephyr RTOS is compatible with an array of processors, not just those in Intel® Quark™ microcontrollers. The description in this article additionally applies to the use of the Zephyr RTOS with an array of available microcontrollers from other manufacturers.

Before you begin, download the Zephyr RTOS from the Zephyr Project* website or as part of Intel® System Studio for Microcontrollers.

What is an RTOS?

An RTOS is an operating system with a focus on real-time applications. Zephyr is similar to operating systems you find on desktop computers and laptops. The difference is that an RTOS performs tasks in a predictable, scheduled manner with a focus on getting the most important tasks done on time. In an embedded device, timing is critical. On a desktop, it doesn’t matter much if your computer decides to check for new emails before it starts playing a video. The operating system has a running list of tasks and decides which tasks are most important. Chances are user software is not the highest priority task. An extreme example is an embedded system in a car. It matters if the microcontroller decides to check for email when the microcontroller should be triggering the airbag. An RTOS is an operating system that you control completely.

Why Use an RTOS?

As the Internet of Things expands, formerly unconnected devices are getting “smarter” (e.g., able to send data to the cloud) and increasingly complicated. As complexity grows, software becomes more difficult to manage. Simple devices with single purposes probably don’t need to run an RTOS. But, complicated devices with multiple sensors and radios that need to be smart (connected and responsive) are easier to build and maintain with an RTOS.

The RTOS manages software complexity by encapsulating all the activities the microcontroller needs to perform into individual tasks. Then, the RTOS provides tools to prioritize the tasks, determining which tasks always need to execute on time and which tasks are more flexible. Some applications, like communication over radios to the internet, have strict timing requirements and complicated communication protocols. You can rely on the Zephyr RTOS to make sure that communications happen on time and that your microcontroller responds appropriately without you having to write any software to make it happen.

Writing software with an RTOS is a familiar process for developers coming to microcontrollers from desktop programming. For embedded developers with a background in bare-metal firmware on microcontrollers, an RTOS is a powerful new tool. The structure of the RTOS improves encapsulation, isolating different functional pieces of software from each other, and provides tools to exchange information between different functional code blocks. This mitigates one of the greatest hazards of microcontroller firmware development: memory mismanagement. To take advantage of the Zephyr RTOS, it’s important to understand how it works. In the next section, we take a look at the features and capabilities of the Zephyr RTOS.

Zephyr Kernel Fundamentals

What’s a Kernel?

The core functionality of the Zephyr RTOS is the Zephyr kernel. The kernel is software that manages every aspect of hardware and software functionality. The Zephyr kernel is designed to be small, requiring little program and data memory. There are two main components to the Zephyr kernel: a microkernel and a nanokernel. Each has different memory requirements and features.

Nanokernel

The nanokernel is the smaller of the two. It’s designed to operate on smaller microcontrollers in devices with less functionality (e.g., a sensor measuring only temperature). It requires as little as two kilobytes of program memory, which means it can be used in all but the very smallest microcontrollers.

Microkernel

The microkernel is a full-featured kernel for more complex devices: a smartwatch with a display, multiple sensors, and multiple radios, for example. The microkernel is designed for larger microcontrollers with between 50 and 900 kilobytes of memory. Every feature of the nanokernel is available to the microkernel, but not the other way around. With 32 kilobytes of memory, the Intel® Quark™ microcontroller D2000 is ideal for the nanokernel. A simple microkernel project may fit, but if you don’t need any microkernel-specific functions, pick the nanokernel, which is better suited to the size of the microcontroller’s memory. The core functionality of the kernels is the same either way; you just won’t be able to use advanced memory management features that aren’t supported by the nanokernel.

Three Contexts in the Zephyr RTOS

The Zephyr kernel provides three main tools for organizing and controlling software execution: tasks, fibers, and interrupts. In the Zephyr documentation, these tools are called contexts because they provide the context within which software executes and each context has different capabilities. 

Tasks

In the Zephyr RTOS, major software functionality is encapsulated in a task. A task is a piece of software that performs processing that takes a long time or is complicated (e.g., interacting with a server on the internet over Wi-Fi* or analyzing sensor data looking for patterns).

Tasks are assigned priorities, with more important activities assigned higher priorities. Lower priority tasks can be interrupted if a higher priority task needs to take action. When a higher priority task interrupts a lower priority task, the lower priority task’s data and state are stored, and the higher priority task’s data and state are loaded. When the higher priority task finishes its work, the lower priority task is restored and resumes at the point where it was interrupted. Tasks take over the processor, perform their function, and then go to sleep until they are needed again. The Zephyr kernel contains a scheduler that determines which task needs to run at any time. You can precisely control when a task executes: based on the passage of time, in response to a hardware signal, or based on the availability of new data to analyze. If it’s important that a task responds quickly to its trigger, it should be assigned a higher priority. Tasks execute in an endless loop, sleeping most of the time while waiting to be called to perform their function.

Fibers

Fibers are smaller than tasks and are used to perform a simple function or just a portion of the processing. Fibers are defined and started by tasks, and they are prioritized over tasks: tasks can only run when no fiber needs to execute, so you need to make sure that fibers don’t monopolize the system. Fibers are prioritized relative to each other like tasks, but no fiber can interrupt a running fiber. Fibers should be used for performance-critical, timing-sensitive operations that require immediate action, like communicating with sensors where the timing of responses could cause a problem. Fibers should not be used for processing that takes a long period of time.
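To make this concrete, here is a sketch of how a task might start a fiber using the legacy (v1.x-era) Zephyr nanokernel API used with the D2000. The exact signature of task_fiber_start, the stack size, and the priority value shown are assumptions that vary by Zephyr version; treat this as a sketch to check against your SDK's documentation, not a definitive implementation. It builds only inside the Zephyr environment.

```c
/* Sketch only: legacy Zephyr 1.x-style nanokernel API; not buildable
 * outside the Zephyr build environment, and signatures vary by version. */
#include <zephyr.h>

#define FIBER_STACK_SIZE 512   /* stack size is a guess; size to your workload */
#define FIBER_PRIORITY   3

static char fiber_stack[FIBER_STACK_SIZE];

/* The fiber body: performance-critical work, kept short so it
 * doesn't starve the background task. */
static void sensor_fiber(int arg1, int arg2)
{
    /* read a sensor, post the result, then return or wait */
}

void main(void)
{
    /* The background task starts the fiber; once started, the fiber
     * preempts the task whenever it is ready to run. */
    task_fiber_start(fiber_stack, FIBER_STACK_SIZE,
                     sensor_fiber, 0, 0, FIBER_PRIORITY, 0);
}
```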

Interrupts

Interrupts are the highest priority context. The execution of interrupts takes precedence over fibers and tasks. They allow for the fastest possible response to an event, whether it’s a hardware signal from a safety mechanism or the reception of critical communications. Interrupts are prioritized like tasks and fibers, so higher priority interrupts can take over the processor from lower priority interrupts. When the higher priority interrupt finishes, the lower priority interrupt resumes. Interrupts are handled by software functions called interrupt service routines (ISRs). Each interrupt has an ISR that runs whenever the interrupt occurs. Interrupts, as a rule, should be kept as short as possible so that they don’t interfere with the schedule for the rest of the system. Commonly, an ISR sends a message to a task or fiber, passing data or telling it to run. This keeps the ISR short and offloads longer processing to parts of the application that can be preempted.

Kernels and Tasks

Nanokernel

The nanokernel, as described above, is the smaller of the two kernels. The nanokernel has only one task, known as the background task, which can only execute when no fiber or interrupt needs to execute. The background task is the main() function. The nanokernel can have zero fibers or as many as your application needs. The nanokernel also has no limits on the number of interrupts up to the limits imposed by the hardware of the microcontroller and the size of your program memory. As we said before, the nanokernel is better suited to the Intel® Quark™ microcontroller D2000 because of the size of its program memory.

Microkernel

The microkernel is more powerful than the nanokernel. It also requires more memory resources. The microkernel supports having more than one task and allows you to group tasks to work together to perform a larger function. Microkernel fibers and interrupts are the same as in the nanokernel. The microkernel has more sophisticated functionality for handling memory, for sending data between tasks and fibers, and for managing the microcontroller’s power consumption.

Advanced Zephyr Kernel Functionality

You can make complete applications with just the features described above, but to get the most out of Zephyr, you should get familiar with some of its advanced features. The Zephyr kernel includes functionality to synchronize operations, to pass data between tasks, and to trigger execution of tasks based on external events. It’s beyond the scope of this introduction to go into detail on all of the features. For more information about the deeper features of the Zephyr RTOS, consult the Zephyr Project* Documentation.

Getting Started with the Zephyr Kernel

First Steps

To get started with Zephyr, you’ll need to download the Zephyr kernel and follow the instructions to set up the Zephyr Development Environment on your computer. With Zephyr installed, it’s a good idea to start with a sample project like the hello world project, which you can find in the samples directory where you installed the Zephyr Project code. Follow the instructions for building an application shown in the Getting Started guide. You’ll then understand how to compile your own application and verify that you’ve gotten everything set up correctly. Take a look at the hello world sample application directory. We’ll go through it to understand what all the files are, their purpose, and how you can modify them to build your own applications.

Nanokernel or Microkernel?

The first thing you’ll notice is that there’s a directory for the nanokernel and another for the microkernel. You can build either and they will look the same from the outside. As mentioned earlier, the nanokernel is the more likely kernel for the Intel® Quark™ microcontroller D2000. Still, it’s important to understand the differences to know which kernel is right for your application. Let’s start with the microkernel.

Microkernel Organization

A microkernel project consists of at least five files:

  1. A configuration file that instructs the kernel to enable features you want to use in your application. Based on the instructions in the configuration file, the kernel will enable hardware functionality and include the appropriate drivers in your project.
  2. A microkernel object definition file that initializes RTOS features, like tasks and interrupts.
  3. An application makefile that informs the Zephyr kernel about which processor you are using, which kernel you are using, whether nano or micro, the name of the project configuration file, and the name of your microkernel object definition file.
  4. Your source code, contained in a subfolder in the project folder.
  5. A makefile that instructs the compiler how to build your source code.

Let’s take a look at each of these files and how you can modify them for your application.

Kernel Configuration File

The Zephyr RTOS is highly configurable with a huge array of options for tailoring the kernel to meet your application’s needs. In the configuration file, commonly named prj.conf, you determine which Zephyr features you’ll use in your application. By only turning on the features that you need, you control the size of the Zephyr libraries included alongside your application code. Every feature that you intend to use in your application needs to be explicitly enabled using definition statements like the one you see in the prj.conf file in the ‘Hello World’ project.

CONFIG_STDOUT_CONSOLE=y

This statement tells Zephyr to include the driver for the standard output console, which you use to send statements to be displayed on your computer. Other configuration options take the same form. Many drivers have an array of options for setting up hardware to operate exactly how you need it to work. The list of available options and configurations is quite extensive. To see all the available configurations and options, see the Zephyr Configuration Options Reference Guide.
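For example, a hypothetical prj.conf for an application that prints to the console and drives a GPIO pin might look like the fragment below. CONFIG_PRINTK and CONFIG_GPIO are shown for illustration; verify every option name against the configuration reference guide for your Zephyr version.

```
# Illustrative prj.conf fragment -- verify option names against the
# Zephyr Configuration Options Reference Guide
CONFIG_STDOUT_CONSOLE=y
CONFIG_PRINTK=y
CONFIG_GPIO=y
```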

Microkernel Object Definition File

The microkernel object definition file contains definitions of tasks and any other kernel objects your project needs. You should define any objects in this file that you want to be available to your entire application, across any number of source files. In the definition of a task, you give the task a name, a priority, and a memory size, and assign the task to a group. In the ‘Hello World’ project, the prj.mdef definition file contains the following task definition:

% TASK NAME  PRIO ENTRY STACK GROUPS
% ==================================
    TASK TASKA    7 main  2048 [EXE]

The lines with “%” at the beginning are comments that clarify the code for the reader; they are not read by the Zephyr build system. The task name is a Zephyr name for the task, not the name of a function. The priority is what it sounds like; in Zephyr, the lowest priority is 7 and the highest priority is 1. main is the name of the function that the task calls as its entry point when it starts running. 2048 is the amount of memory, in bytes, allocated to the task. This may seem like a lot, but this memory stores the Zephyr data that keeps track of the task’s state and allows the task to be suspended and restarted. [EXE] is the name of the executable group. In your own project, you can use this same structure to create tasks, as well as to define all of the more advanced features of the microkernel.

The Application Makefile

The application makefile informs the Zephyr build system which files to use in building your application. It specifies the name of the kernel configuration file, the name of the microkernel object definition file, which processor architecture you are using, and the name of your source code application file. Generally, you won’t need to make changes to the standard Zephyr application makefile.
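For reference, a legacy Zephyr 1.x-style application makefile for a microkernel project on the Intel® Quark™ microcontroller D2000 looks roughly like the sketch below. The board name and variable names are assumptions to check against the samples shipped with your Zephyr version.

```make
# Illustrative application makefile (legacy Zephyr 1.x style);
# check names against your Zephyr version's samples
BOARD ?= quark_d2000_crb
KERNEL_TYPE = micro
CONF_FILE = prj.conf
MDEF_FILE = prj.mdef
include ${ZEPHYR_BASE}/Makefile.inc
```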

The Source Code Makefile

The source code makefile is necessary to inform the compiler how to build your source code. Under the hood, the Zephyr development environment uses an open source compiler to convert your software into machine instructions. The source code makefile tells the compiler which files to include in the process and conveys compiler configuration instructions.

Your Source Code

The source code includes your application, structured in as many files as you need or prefer. To use any Zephyr functionality, a file must include zephyr.h as well as header files for any drivers that you intend to use. Source code files are generally written in C, although the Zephyr compiler allows the use of C++ outside of tasks, fibers, interrupts, and other Zephyr RTOS code. If you look at main.c in the hello world project, you’ll see a standard C file using Zephyr functions.
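Following the structure of the hello world sample, a minimal source file is only a few lines. This is a sketch modeled on that sample; it builds only inside the Zephyr environment, and depending on the Zephyr version, printk may additionally require <misc/printk.h>.

```c
/* Sketch of a minimal Zephyr source file, modeled on the hello world
 * sample; buildable only within the Zephyr environment. */
#include <zephyr.h>

void main(void)
{
    printk("Hello World!\n");
}
```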

Nanokernel Differences

The nanokernel configuration is broadly the same as the microkernel configuration, except that nanokernel projects don’t have microkernel object definition files. Since the nanokernel only uses one task, it’s automatically generated by Zephyr and uses your main function as the entry point. Fibers and interrupt routines are defined inside your source code. If you compare the microkernel project to the nanokernel project, there isn’t much difference: the prj.mdef file is not necessary, so it’s been removed, and the reference to it in the makefile has also been changed. Otherwise, the nanokernel and microkernel are, from this perspective, largely the same.

The Zephyr API

The Zephyr API is the toolset you will use to build your application code. It’s full of functions for very quickly prototyping hardware. There are libraries to define and use Zephyr functionality like tasks, fibers, interrupt routines, and timers, as well as drivers for communication buses and for talking to specific pieces of hardware. With the Zephyr API, you can hook up a development board and a sensor shield and be up and running, gathering data in no time. For more information about the Zephyr API, consult the API Documentation.

Next Steps

As you build applications and become familiar with Zephyr, you’ll start to think of writing software in different and new ways. Structuring your software according to tasks and their priority helps to make your software more responsive, more compact, and better organized.

 


Resources

How to Install the Neon™ Framework on Ubuntu*


Introduction

The neon™ framework is an Intel® Nervana™ Python*-based deep learning framework designed to use deep neural networks such as AlexNet, VGG, and GoogLeNet.

There are many ways to install the neon framework. Users will need to install additional dependencies and packages in order to install the neon framework successfully.

This article presents a simple step-by-step way to install the neon framework in Ubuntu* 14.04 using the Anaconda* Python distribution. It also guides users through what to do if errors are encountered during the installation process. Additional installation instructions or further troubleshooting can be found here.

Installing the Neon Framework

This section shows how to install neon in a virtual environment. This way, neon lives in a self-contained environment with its own dependencies and a user-preferred Python version that can differ from that of the main environment. Furthermore, Anaconda incorporates the Intel® Math Kernel Library (Intel® MKL), which helps improve the performance of common packages like NumPy*, NumExpr*, SciPy*, and Scikit-learn*.

Follow these steps to install the neon framework:

  1. Install the Anaconda distribution of Python if it is not already there.
    1. Go to the Anaconda download website, select the Download for Linux* option and download either the 2.x or 3.x Python version of Anaconda for 64-bit Linux.
    2. Execute the following command to install Anaconda:
      bash Anaconda2-x.x.x-Linux-x86_64.sh (for python 2.x)
      or
      bash Anaconda3-x.x.x-Linux-x86_64.sh (for python 3.x)


      Note:

      - At the time of this writing, the latest version of Anaconda is 4.3.1. Therefore, the above commands should be written as follows:
      bash Anaconda2-4.3.1-Linux-x86_64.sh (for python 2.x)
      or
      bash Anaconda3-4.3.1-Linux-x86_64.sh (for python 3.x)

      - If Anaconda has been installed, update it to the latest version using the following commands:
      conda update conda
      conda update anaconda
  2. After Anaconda has been installed, create a new conda environment for the neon framework. We’ll name it neon, but you can use any name you want.

    conda create --name neon pip

    Figure 1. Create the neon™ framework environment.

    Figure 1 shows that the neon framework environment was created successfully.

  3. Activate the new environment using the following command:

    source activate neon

  4. Download the neon framework package using Git*:

    git clone  https://github.com/NervanaSystems/neon.git

    Figure 2. Cloning the neon™ framework from GitHub*.

    Figure 2 displays the result when the cloning process is successful.

    Note: if git is not already set up on your computer, you can install it by typing the following:
     

    sudo apt install git

  5. Install the neon framework package using make. Make sure to go to the folder containing the package before using make:

    cd neon && make sysinstall

    Figure 3. Installing the neon™ framework.

    Figure 3 shows the messages that display when the neon™ framework has been successfully installed.

    If there are errors, the screen will look like that in Figure 4:

     

    Figure 4. Messages that display if errors occur while installing the neon™ framework.

    Figure 5 shows a situation when the neon framework cannot be installed in the system due to missing components.

    Figure 5. Error messages that display during installation when there are missing components.

    From Figure 5, it appears that the installation cannot find the file “pillow.” There are two possible reasons: the package containing pillow is corrupted or the package containing pillow has not been installed. The safe way to fix the problem is to uninstall the package and reinstall it. Use the following command to install the package:

    conda install pillow

    To uninstall a package, use the following command:

    conda uninstall <package to uninstall>

    Figure 6. Installing missing components.

    Figure 6 shows how to install missing components for the neon framework.

  6. Deactivate the environment when the installation completes:

    source deactivate neon

    Figure 7. Deactivating the neon™ framework environment.

Testing the Neon Framework Installation

To ensure the neon framework is installed correctly, run the MNIST multi-layer perceptron example included in the package. This example runs on the CPU, because a CPU was detected and the parameters were not changed to run on an available GPU. Follow these steps to run the example:

When you activate the environment, you will see the prompt change to reflect it. To start the neon framework and run the MNIST multi-layer perceptron example:

  1. Activate the neon framework environment.
    source activate neon
  2. Run the example by issuing the following command:
    examples/mnist_mlp.py

    Figure 8. Running examples under the neon™ framework.

    If the example is running correctly, you will see something similar to that in Figure 8.

  3. Deactivate the environment when you are finished running the example.

    source deactivate neon

Conclusion

This article described a simple way to install the neon framework on Ubuntu using the Anaconda Python distribution.

Zephyr* Scheduling Basics with the Intel® Quark™ microcontroller D2000


Overview

In this article, you’ll learn about:

  • The fundamentals of scheduling software execution with the Zephyr* Real-time Operating System (RTOS) and the Intel® Quark™ microcontroller D2000.
  • Zephyr software mechanisms called tasks and fibers, which are essential components of all Zephyr applications.
  • Initialization and use of tasks and fibers in your applications.
  • Common problems when getting started with the Zephyr* RTOS and how to avoid them.

The Intel® Quark™ microcontroller D2000 and the Zephyr* RTOS

With the Intel® Quark™ microcontroller D2000, Intel is staking its place at the edge of the Internet of Things (IoT). The Intel® Quark™ microcontroller D2000 was designed from the ground up for IoT applications where low power is important. Small battery-powered sensor devices gathering data in homes, businesses, factories, and farm fields require ultra-low-power electronics. With sleep currents in the single-digit microamps, a sensor device (powered by an Intel® Quark™ microcontroller D2000) transmitting data over a Bluetooth® low energy radio could run for a couple of years on a pair of lithium-ion batteries.

The core of the Intel® Quark™ microcontroller D2000 is a Pentium® processor. Low power but still powerful enough for IoT, it’s fully compatible with the x86 instruction set and capable of executing code written for its desktop counterparts. Benefiting from decades of Pentium® processor architecture refinement and software execution optimization, the Intel® Quark™ microcontroller D2000 is a modern microcontroller with a reliable history.

To support software development with this new microcontroller, Intel partnered with the Linux Foundation* to build an open-source real-time operating system (RTOS). Based on source code developed by Wind River*, a wholly owned subsidiary of Intel Corp., the Zephyr* RTOS is built for resource constrained microcontrollers with less than 512kB of system memory. The Zephyr RTOS comes in two sizes and is highly configurable, allowing the user to choose an appropriate feature set and enable only necessary software features to minimize Zephyr’s memory footprint. The Zephyr RTOS includes an Application Programming Interface, or API, with tools and drivers that make working with embedded devices, like sensors and radios, a relatively simple process. If you’re new to working with an RTOS, you’ll find that writing applications with Zephyr will shorten processor bring-up, reduce software issues during hardware validation, and streamline multi-threaded development. If you’re experienced with an RTOS, you’ll find that Zephyr provides all the tools of a world-class RTOS in a fresh package, custom designed to meet the needs of modern designers of IoT.

Zephyr RTOS Fundamentals

Zephyr is a multi-threaded operating system, meaning that Zephyr can effectively perform multiple operations at the same time. Functional blocks of code are executed in turn, according to priorities that you assign to tasks and fibers. Separate blocks of code aren’t actually running simultaneously. Since the Intel® Quark™ microcontroller D2000 has only one processor core, it executes functions one at a time, handling higher priority code first and executing lower priority code when higher priority code is idle. You decide which functions are most important and Zephyr will prioritize their execution to meet critical timing requirements.

In Zephyr, functional blocks of code can be executed in your choice of three execution contexts: task, fiber, or interrupt. A task is for larger pieces of code that take longer to execute and aren’t as time sensitive. A fiber is for smaller operations with stricter timing requirements like hardware drivers. An interrupt is for the smallest operations which are the most time critical, like responding to a hardware or software event. Tasks are the lowest priority and can be preempted whenever a higher priority task, a fiber, or an interrupt needs to execute. Fibers always interrupt a task when they need to execute. They can only be preempted by interrupts, not by tasks or other fibers, even higher priority ones. As you can see, interrupts are the highest priority and they always interrupt a task or fiber when they need to run. This article covers tasks and fibers but not interrupts. For more information on interrupts, consult the Zephyr Project* Documentation.

Kernels

The core of the Zephyr RTOS is called the kernel, which contains the software system for scheduling code execution. The kernel also contains software subsystems like device drivers and networking software. The Zephyr kernel comprises the nanokernel and the microkernel.

Nanokernel

The nanokernel is the lighter of the two kernels, with a reduced feature set to achieve a smaller memory footprint. It’s designed for microcontrollers with less than 50kB of system memory, like the Intel® Quark™ microcontroller D2000. The nanokernel is better suited to handle simpler applications, like reading a small number of sensors and communicating over a single radio.

The nanokernel is only allowed to have a single task, usually the main function. Nanokernel applications are not restricted in the number of fibers they can use, up to the limits of their memory. For most applications with the Intel® Quark™ microcontroller D2000, the nanokernel should be the best kernel option. While it’s possible to compile a microkernel application for the Intel® Quark™ microcontroller D2000, you’ll have less room for your application code. Only use the microkernel with the Intel® Quark™ microcontroller D2000 if you absolutely need features only the microkernel provides (multiple tasks, sophisticated memory management tools, etc.).

Microkernel

Everything the nanokernel can do, the microkernel can do and more. The microkernel is the full-featured version of the Zephyr RTOS. Geared toward complex applications, the microkernel can coordinate multiple tasks, such as reading sensors and performing data analysis while communicating with the cloud over multiple radio channels. The microkernel can run more than one task as well as an unlimited number of fibers. The microkernel has more available features for managing data flow and memory and synchronizing execution.

Scheduling with Zephyr

The Tick Timer

Everything in the Zephyr RTOS marches to the timing of the Zephyr tick timer. The tick timer is derived from a 64-bit system clock in the Intel® Quark™ microcontroller D2000 which takes its count from a 32-bit hardware timer. Zephyr’s tick timer defines the granularity of timing in your application. The default step size of the tick timer is 10 milliseconds. The period of the tick timer determines the minimum resolution you can achieve with software timers in your application. It also determines the shortest interval in which the Zephyr RTOS will change between equally prioritized tasks and fibers. A longer tick timer period can potentially make your code less responsive. On the other hand, a shorter tick timer period increases the operating system overhead because changing tasks takes time and resources. You can reduce the tick timer period if you need finer timing resolution or increase it if you want to reduce processor activity. If your application doesn’t require it, the tick timer is best left alone.

Tasks

Now, let’s take a look at how to configure and use tasks and fibers to build your application. You need to be aware of the differences in how the nanokernel and the microkernel each handle tasks and fibers. First, let’s look at tasks in each of the kernels.

Nanokernel Tasks

In the nanokernel, you’re only allowed one task. Zephyr requires at least one task to operate and uses your main() function as that one task. Zephyr refers to the main() task in a nanokernel application as the background task. As the name suggests, your main() function will only execute when no fiber or Interrupt Service Routine (ISR) needs to run. This fact has some implications for how you write your code and how you structure your application.

Unlike a main() function in a non-RTOS application, the Zephyr background task doesn’t end when execution reaches the end of the function; the task runs again in a loop. To avoid running initialization code again, your main function should have an endless loop at its end.

The next consideration is to place code in the main task that is not time critical. Since any other application code can interrupt it at any time, your main task may not always execute with consistent timing. It’s also possible that you could construct an application where the main task doesn’t ever get to execute at all. If fibers and interrupts monopolize the processor by taking too long to execute or by containing prolonged delays, you can cause what’s referred to as task starvation. Chances are that simple applications won’t encounter task starvation. It’s just something that you need to be aware of now that you’re working with an operating system.

Your main() function/task with Zephyr’s nanokernel should look like this:

void main(void)
{
	/* Hardware initialization here */

	while (1) {
		/* Endless loop */
	}
}

If you want the main task to pause operation, you can use timers and wait for them to expire, or you can use the task_sleep() function to idle the task for a specified length of time. Using the tick timer with the task_sleep() function, you tell the task to sleep for a certain number of ticks. The task then sleeps for the number of ticks multiplied by the tick timer period. For example, if you would like to put the main() task to sleep for 10 timer ticks, or 100 milliseconds with a 10 millisecond tick timer, use this:

task_sleep(10);

Using this basic timing functionality, you can create software events which occur at regular intervals.

Microkernel Tasks

With the microkernel, as with the nanokernel, your main() function is a task. However, unlike the nanokernel, the microkernel can handle multiple tasks. When you design your software system architecture, tasks should contain longer and more complex code functions that, like the main task, are too lengthy to be performed by a fiber.

Microkernel tasks are very different from nanokernel tasks in terms of how they are initialized and how they work. Tasks in the microkernel are given priorities to determine which is the most important. Priorities for microkernel tasks range from 0, the highest priority, down to a configurable minimum that defaults to 15. The lowest priority is reserved for the microkernel’s idle task, which runs when nothing else needs to execute, so your tasks should use a priority at least one step higher than that minimum. The Zephyr RTOS will run whichever task has the highest priority first. If two tasks have the same priority, it will run the one that has been waiting the longest.

Like the main() task in the nanokernel, tasks normally run forever in a loop. It’s your responsibility to set the priorities, determining which code needs to always execute on time and which code can handle more interruptions and longer delays.

Microkernel tasks require: a defined memory region to store the task’s stack, a function to be invoked when the task starts executing, a priority, and what’s called a task group. Also, the microkernel requires an extra file, called an MDEF file, in your project directory. The MDEF file is a text file in which you’ll declare all your microkernel objects, including tasks.

Declaring a microkernel task

Microkernel tasks are declared in the MDEF file with all the necessary information conveyed in this order: name, priority, function, stack memory size, and task group. In the MDEF file, comment lines, starting with the % symbol, are not interpreted by the Zephyr build system. Task declarations start with the keyword TASK. Tasks that should execute immediately when the application starts should be assigned to the EXE group. Using task groups, you can start and stop a group of tasks together. Tasks that should not execute immediately, like ones which handle sensors that may require a start up delay, can be assigned to a different task group, or the group can be left empty as in the example below.

% TASK NAME         PRIO  ENTRY   STACK  GROUPS
% =============================================
  TASK MAIN_TASK       6  main     1024  [EXE]
  TASK SENSOR_TASK     2  sensor    400  []

In this example, two tasks are defined. The “MAIN_TASK” is defined with a priority of 6, the main function as its entry point, a stack memory region of 1024 bytes, and assigned to the EXE group to start immediately. The “SENSOR_TASK” is defined with a higher priority of 2, a function called sensor() as its entry point, a stack region of 400 bytes, and no assigned task group.

Starting a Microkernel Task

Tasks in the EXE group will start automatically but tasks that don’t start right away need to be started by another task using the task_start() function. To start a task, you only need to know the task’s name as it appears in the MDEF file:

task_start(SENSOR_TASK);

Fibers

Unlike tasks, fibers are handled the same with the nanokernel and the microkernel. Fibers are intended to be used for shorter performance-critical pieces of code. Fibers cannot be interrupted by another task or another fiber, so execution timing is more consistent and reliable. You should use fibers for device driver code or for communications requiring precise timing. Interrupt service routines can interrupt a fiber, so you should still account for this in writing your code.

Fibers are scheduled for execution by the RTOS based upon priority. Fiber priorities range from 0, which is the highest priority, down to 2^32 - 1. If two fibers have the same priority, the fiber which has been idle the longest is executed first. If no fibers need to execute, then the highest priority task that needs to run is scheduled for execution. Of course, a fiber can interrupt any task if it needs to run.

Initializing a Fiber

Fibers are declared in your source code and then initialized and started from a task or another fiber. The process is slightly more complex than that for a task. Fibers require that you declare a stack memory region for storing fiber variables and context data that is used when the fiber is idled and restarted. You also need to create a function to serve as the entry point where the fiber starts execution. The function can take up to two arguments, although you don’t have to use them; they can supply initialization information for the fiber. Finally, you specify the fiber’s priority and, optionally, a set of fiber options. The options don’t apply to the Intel® Quark™ microcontroller D2000. To declare the stack memory region, do something like this:

#define STACKSIZE 2000
char __stack fiberStack[STACKSIZE];

This declaration uses Zephyr’s __stack macro to declare a properly aligned memory array whose size is set by the STACKSIZE constant. The function that will serve as the fiber entry point requires no special declaration, using a prototype like this:

void fiberEntry(int arg1, int arg2);

If your application doesn’t have a need for the function arguments, you are free to use the function prototype:

void fiberEntry(void);

For your application, you should change “fiberEntry” to something appropriate and meaningful.

With a stack memory region and an entry point function, you have everything you’ll need to use the task_fiber_start function to start the fiber. The fiber start function takes as arguments: a pointer to the stack memory, an integer defining the size of the stack, the name of the entry function, the two integer arguments, the priority as an integer, and any fiber options. In practice, it looks like this:

task_fiber_start(fiberStack, STACKSIZE, fiberEntry, arg1, arg2, priority, options);

In this case, arg1, arg2, priority, and options are integer variables. Since fibers take priority over tasks, if a task executes this function, the fiber will start to execute immediately after the processor finishes executing the task_fiber_start function. The task will be idled and the RTOS will switch over to the fiber. If immediately starting the fiber is undesirable, then you can use the task_fiber_delayed_start() function instead. This function takes one extra argument which is the number of ticks that the fiber should be delayed in starting up. The other arguments are the same and in the same order. To delay the startup of a fiber by 10 ticks, you would change the above function call to this one:

task_fiber_delayed_start(fiberStack, STACKSIZE, fiberEntry, arg1, arg2, priority, options, 10);

If you are starting a fiber from another fiber, then you need to use a different function designed to be used from fibers. The form is the same; just the name is different. Function calls from fibers substitute the word “fiber” for “task” in the function name. Use this function to start a fiber immediately:

fiber_fiber_start(fiberStack, STACKSIZE, fiberEntry, arg1, arg2, priority, options);

And this function to start a fiber with a delay:

fiber_fiber_delayed_start(fiberStack, STACKSIZE, fiberEntry, arg1, arg2, priority, options, 10);

Idling a Fiber

Like a task, a fiber normally executes forever once it’s started. Unlike a task, a fiber cannot be interrupted except by an interrupt. Other fibers, including higher priority fibers, cannot interrupt an actively processing fiber. Any fiber that monopolizes the processor with long processing times may cause delays for other fibers, including higher priority ones. In this case, it may be necessary for a fiber to deliberately pause execution to allow time for other fibers and tasks to execute. There are two Zephyr functions for this purpose, each with slightly different behaviors. The fiber_yield() function will idle a fiber so that higher or equal priority fibers have an opportunity to execute. It takes no arguments; the fiber resumes once no fiber of equal or higher priority is ready to run:

fiber_yield();

For a more general relinquishment of processor control, you should use fiber_sleep(), which will surrender control of the processor without condition for a specified number of ticks. Unlike the yield function, the sleep function allows tasks and lower priority fibers to execute. To idle a fiber for 10 ticks, use this:

fiber_sleep(10);

The Microkernel Server Fiber

Fibers play a larger role in nanokernel applications since the nanokernel doesn’t allow multiple tasks; nanokernel applications should use fibers for driver interactions and time-sensitive processing. In microkernel applications, fiber usage should be reserved for the highest priority activities, where being preempted could compromise performance.

The microkernel automatically runs one fiber, the microkernel server fiber, which handles the scheduling of all microkernel fibers, determining which fiber needs to execute first. The microkernel server fiber defaults to the highest priority, 0. You can change the microkernel server fiber’s priority to a lower one if you have high-priority critical code that can’t tolerate any delay, like time-sensitive device drivers. In general, it won’t be necessary to change the microkernel server priority, but if you’re curious, consult the Zephyr Microkernel Fiber Documentation.

Next Steps

With basic knowledge of scheduling with the Zephyr RTOS and an understanding of fibers and tasks, you can now make quality applications with Zephyr. For further information, check out the advanced RTOS mechanisms in the Zephyr Project Documentation.

Resources

Intel® XDK FAQs - General


How can I get started with Intel XDK?

There are plenty of videos and articles that you can go through here to get started. You could also start with some of our demo apps. It may also help to read Five Useful Tips on Getting Started Building Cordova Mobile Apps with the Intel XDK, which will help you understand some of the differences between developing for a traditional server-based environment and developing for the Intel XDK hybrid Cordova app environment.

Having prior understanding of how to program using HTML, CSS and JavaScript* is crucial to using the Intel XDK. The Intel XDK is primarily a tool for visualizing, debugging and building an app package for distribution.

You can do the following to access our demo apps:

  • Select Project tab
  • Select "Start a New Project"
  • Select "Samples and Demos"
  • Create a new project from a demo

If you have specific questions following that, please post it to our forums.

Do I need to use the Intel XDK to complete the HTML5 from W3C Xseries Course?

It is not required that you use the Intel XDK to complete the HTML5 from W3C Xseries course. There is nothing in the course that requires the Intel XDK. 

All that is needed to complete the course is the free Brackets HTML5 editor. Whenever the course refers to using the "Live Layout" feature of the Intel XDK, use the "Live Preview" feature in Brackets, instead. The Intel XDK "Live Layout" feature is directly derived from, and is nearly identical to, the Brackets "Live Preview" feature.

For additional help, see this Intel XDK forum post and this Intel XDK forum thread.

Error contacting remote build servers.

If you consistently see this error message while using the Build tab, and you are logged into the Intel XDK, it is likely due to using an obsolete and unsupported version of the Intel XDK. Check your Intel XDK version number (four digit number in the upper-right corner of the Intel XDK) and review the Intel XDK Release Notes for information regarding which versions are currently supported.

You must upgrade to a new version of the Intel XDK to resolve this issue.

NOTICE: Internet connection and login are required.

If you have successfully logged into the Intel XDK, but you are seeing the error message in the image below, when using the Build or Test tab, it may be due to an obsolete and unsupported version of the Intel XDK. Please check your Intel XDK version number (four digit number in the upper-right corner of the Intel XDK) and review the Intel XDK Release Notes for information regarding which versions are currently supported.

Otherwise, please review this FAQ for help creating an Intel XDK login.

I cannot login to the Intel XDK, how do I create a userid and password to use the Intel XDK?

If you have downloaded and installed the Intel XDK but are having trouble creating a userid and password, you can create your login credentials outside of the Intel XDK. To do this, go to the Intel Developer Zone and push the "Join Today" button. After you have created your Intel Developer Zone login you can return to the Intel XDK and use that userid and password to login to the Intel XDK. This same userid and password can also be used to login to the Intel XDK forum.

I cannot login to App Preview with my Intel XDK password.

On some devices you may have trouble entering your Intel XDK login password directly on the device in the App Preview login screen. In particular, sometimes you may have trouble with the first one or two letters getting lost when entering your password.

Try the following if you are having such difficulties:

  • Reset your password, using the Intel login page, to something short and simple. (If you do not have an Intel XDK userid, goto the Intel XDK registration page to create one.)

  • Confirm that this new short and simple password works with the XDK (logout and login to the Intel XDK).

  • Confirm that this new password works with the Intel Developer Zone login.

  • Make sure you have the most recent version of Intel App Preview installed on your devices. Go to the store on each device to confirm you have the most recent copy of App Preview installed.

  • Try logging into Intel App Preview on each device with this short and simple password. Check the "show password" box so you can see your password as you type it.

If the above works, it confirms that you can log into your Intel XDK account from App Preview (because App Preview and the Intel XDK use the same technique to authenticate your login). Once the above works, you can reset your password to something else if you do not like the short and simple password you used for the test.

If you are having trouble logging into any pages on the Intel web site (including the Intel XDK forum), please see the Intel Sign In FAQ for suggestions and contact info. That login system is the backend for the Intel XDK login screen.

How can I change the email address associated with my Intel XDK login?

Login to the Intel Developer Zone with your Intel XDK account userid and password and then locate your "account dashboard." Click the "pencil icon" next to your name to open the "Personal Profile" section of your account, where you can edit your "Name & Contact Info," including the email address associated with your account, under the "Private" section of your profile.

Inactive account/login issue/problem updating an APK in store, How do I request account transfer?

As of June 26, 2015, we migrated all Intel XDK accounts to the more secure intel.com login system (the same login system you use to access this forum).

We have migrated nearly all active users to the new login system. Unfortunately, there are a few active user accounts that we could not automatically migrate to intel.com, primarily because the intel.com login system does not allow the use of some characters in userids that were allowed in the old login system.

If you have not used the Intel XDK for a long time prior to June 2015, your account may not have been automatically migrated. If you own an "inactive" account it will have to be manually migrated -- please try logging into the Intel XDK with your old userid and password, to determine if it no longer works. If you find that you cannot login to your existing Intel XDK account, and still need access to your old account, please send a message to html5tools@intel.com and include your userid and the email address associated with that userid, so we can guide you through the steps required to reactivate your old account.

Alternatively, you can create a new Intel XDK account. If you have submitted an app to the Android store from your old account you will need access to that old account to retrieve the Android signing certificates in order to upgrade that app on the Android store; in that case, send an email to html5tools@intel.com with your old account username and email and new account information.

I lost my project, how do I download my project source code from the Intel XDK servers?

We do not store your projects on our servers for any significant period of time, just long enough to perform a build or send for testing on App Preview. Your source code is located inside of the APK and IPA files you built. You will have to recreate the project settings, but you have all of the source if you have the APK (or IPA or Windows build). Rename the APK you have to a ZIP, for example from "my-app.apk" to "my-app.apk.zip" and then unzip that file using your favorite archive tool. For example, the contents of an APK based on the "hello-cordova" sample:

NOTE: the cordova-js-src folder was added by Cordova, it is not part of the original source for this sample project. Likewise, the cordova.js and the cordova_plugins.js files were added by Cordova. The remaining files and folders within the www folder were directly copied from the original project's www folder.

You can start a new project using the blank template and copy the source code from inside the APK's www folder into that project's www folder. You can also see which plugins were included in the APK by inspecting the plugins folder or inspecting the cordova_plugins.js file that was added to the APK. At the very end of the cordova_plugins.js file is a list of plugins that were added and the specific versions of those plugins. For example, from the APK above, that is based on the "hello-cordova" sample, the last lines from the cordova_plugins.js file:

module.exports.metadata =
// TOP OF METADATA
{"cordova-plugin-crosswalk-webview": "1.5.0","cordova-plugin-device-orientation": "1.0.3","cordova-plugin-device": "1.1.2","cordova-plugin-compat": "1.1.0","cordova-plugin-geolocation": "2.2.0","cordova-plugin-inappbrowser": "1.4.0","cordova-plugin-splashscreen": "3.2.2","cordova-plugin-dialogs": "1.2.1","cordova-plugin-statusbar": "2.1.3","cordova-plugin-file": "4.2.0","cordova-plugin-media": "2.3.0","cordova-plugin-device-motion": "1.2.1","cordova-plugin-vibration": "2.1.1","cordova-plugin-whitelist": "1.2.2"
};

NOTE: in the list above, the cordova-plugin-whitelist and cordova-plugin-crosswalk-webview plugins were added automatically by the Intel XDK and, likewise, will be added automatically by the Intel XDK Cordova export tool, so you do not need to add these two plugins to your rebuilt project.

If you were using Crosswalk, you may see a xwalk-command-line file in the APK, the contents of that file are the Crosswalk initialization commands that were provided, for example, from this same sample app:

xwalk --ignore-gpu-blacklist --disable-pull-to-refresh-effect 

Beyond that, you can inspect the AndroidManifest.xml file to find a few other things, like the version numbers. For example, if you have Android Studio installed on your system, you can use the aapt command to inspect the contents of your APK. The most useful being the version codes and the package name, as shown below:

$ aapt list -a my-app.apk | fgrep -i version
    ...lines deleted for clarity...
    A: android:versionCode(0x0101021b)=(type 0x10)0x1c
    A: android:versionName(0x0101021c)="16.5.16" (Raw: "16.5.16")
    A: platformBuildVersionCode=(type 0x10)0x17 (Raw: "23")
    A: platformBuildVersionName="6.0-2704002" (Raw: "6.0-2704002")
      A: android:minSdkVersion(0x0101020c)=(type 0x10)0xe
      A: android:targetSdkVersion(0x01010270)=(type 0x10)0x15

$ aapt list -a my-app.apk | fgrep package
Package Group 0 id=0x7f packageCount=1 name=xdk.intel.hellocordova
    A: package="xdk.intel.hellocordova" (Raw: "xdk.intel.hellocordova")

How do I convert my web app or web site into a mobile app?

The Intel XDK creates Cordova mobile apps (aka PhoneGap apps). Cordova web apps are driven by HTML5 code (HTML, CSS and JavaScript). There is no web server in the mobile device to "serve" the HTML pages in your Cordova web app, the main program resources required by your Cordova web app are file-based, meaning all of your web app resources are located within the mobile app package and reside on the mobile device. Your app may also require resources from a server. In that case, you will need to connect with that server using AJAX or similar techniques, usually via a collection of RESTful APIs provided by that server. However, your app is not integrated into that server, the two entities are independent and separate.

Many web developers believe they should be able to include PHP or Java code or other "server-based" code as an integral part of their Cordova app, just as they do in a "dynamic web app." This technique does not work in a Cordova web app, because your app does not reside on a server, there is no "backend"; your Cordova web app is a "front-end" HTML5 web app that runs independent of any servers. See the following articles for more information on how to move from writing "multi-page dynamic web apps" to "single-page Cordova web apps":

Can I use an external editor for development in Intel® XDK?

Yes, you can open your files and edit them in your favorite editor. However, note that you must use Brackets* to use the "Live Layout Editing" feature. Also, if you are using App Designer (the UI layout tool in Intel XDK) it will make many automatic changes to your index.html file, so it is best not to edit that file externally at the same time you have App Designer open.

Some popular editors among our users include:

  • Sublime Text* (Refer to this article for information on the Intel XDK plugin for Sublime Text*)
  • Notepad++* for a lightweight editor
  • Jetbrains* editors (Webstorm*)
  • Vim* the editor

Where are the global-settings.xdk and xdk.log files?

global-settings.xdk contains information about all your projects in the Intel XDK, along with many of the settings related to panels under each tab (Emulate, Debug, etc.). For example, you can set the emulator to auto-refresh or no-auto-refresh. Modify this file at your own risk and always keep a backup of the original!

The xdk.log file contains logged data generated by the Intel XDK while it is running. Sometimes technical support will ask for a copy of this file in order to get additional information to engineering regarding problems you may be having with the Intel XDK. 

Both files are located in the same directory on your development system. Unfortunately, the precise location of these files varies with the specific version of the Intel XDK. You can find the global-settings.xdk and the xdk.log using the following command-line searches:

  • From a Windows cmd.exe session:
    > cd /
    > dir /s global-settings.xdk
     
  • From a Mac and Linux bash or terminal session:
    $ sudo find / -name global-settings.xdk

When do I use the intelxdk.js, xhr.js and cordova.js libraries?

The intelxdk.js and xhr.js libraries were only required for use with the Intel XDK legacy build tiles (which have been retired). The cordova.js library is needed for all Cordova builds. When building with the Cordova tiles, any references to intelxdk.js and xhr.js libraries in your index.html file are ignored.

How do I get my Android (and Crosswalk) keystore file?

New with release 3088 of the Intel XDK, you may now download your build certificates (aka keystore) using the new certificate manager that is built into the Intel XDK. Please read the initial paragraphs of Managing Certificates for your Intel XDK Account and the section titled "Convert a Legacy Android Certificate" in that document, for details regarding how to do this.

It may also help to review this short, quick overview video (there is no audio) that shows how you convert your existing "legacy" certificates to the "new" format that allows you to directly manage your certificates using the certificate management tool that is built into the Intel XDK. This conversion process is done only once.

If the above fails, please send an email to html5tools@intel.com requesting help. It is important that you send that email from the email address associated with your Intel XDK account.

How do I rename my project that is a duplicate of an existing project?

See this FAQ: How do I make a copy of an existing Intel XDK project?

How do I recover when the Intel XDK hangs or won't start?

  • If you are running Intel XDK on Windows* you must use Windows* 7 or higher. The Intel XDK will not run reliably on earlier versions.
  • Delete the "project-name.xdk" file from the project directory that Intel XDK is trying to open when it starts (it will try to open the project that was open during your last session), then try starting Intel XDK. You will have to "import" your project into Intel XDK again. Importing merely creates the "project-name.xdk" file in your project directory and adds that project to the "global-settings.xdk" file.
  • Rename the project directory Intel XDK is trying to open when it starts. Create a new project based on one of the demo apps. Test Intel XDK using that demo app. If everything works, restart Intel XDK and try it again. If it still works, rename your problem project folder back to its original name and open Intel XDK again (it should now open the sample project you previously opened). You may have to re-select your problem project (Intel XDK should have forgotten that project during the previous session).
  • Clear Intel XDK's program cache directories and files.

    On a Windows machine this can be done using the following on a standard command prompt (administrator is not required):

    > cd %AppData%\..\Local\XDK
    > del *.* /s/q

    To locate the "XDK cache" directory on OS X* and Linux* systems, do the following:

    $ sudo find / -name global-settings.xdk
    $ cd <dir found above>
    $ sudo rm -rf *

    You might want to save a copy of the "global-settings.xdk" file before you delete that cache directory and copy it back before you restart Intel XDK. Doing so will save you the effort of rebuilding your list of projects. Please refer to this question for information on how to locate the global-settings.xdk file.
  • If you saved the "global-settings.xdk" file and restored it in the step above and you're still having hang troubles, try deleting the directories and files above again, this time also deleting the "global-settings.xdk" file.
  • Do not store your project directories on a network share (the Intel XDK has issues with network shares that have not been resolved). This includes folders shared between a Virtual machine (VM) guest and its host machine (for example, if you are running Windows* in a VM running on a Mac* host). 
  • Some people have issues using the Intel XDK behind a corporate network proxy or firewall. To check for this issue, try running the Intel XDK from your home network where, presumably, you have a simple NAT router and no proxy or firewall. If things work correctly there then your corporate firewall or proxy may be the source of the problem.
  • Issues with Intel XDK account logins can also cause Intel XDK to hang. To confirm that your login is working correctly, go to the Intel login page and confirm that you can login with your Intel XDK account username and password.
  • If you are experiencing login issues, please send an email to html5tools@intel.com from the email address registered to your login account, describing the nature of your account problem and any other details you believe may be relevant.

If you can reliably reproduce the problem, please post a copy of the "xdk.log" file that is stored in the same directory as the "global-settings.xdk" file to the Intel XDK forum. Please ATTACH the xdk.log file to your post using the "Attach Files to Post" link below the forum edit window.

Is Intel XDK an open source project? How can I contribute to the Intel XDK community?

No, it is not an open source project. However, it utilizes many open source components that are then assembled into the Intel XDK. While you cannot contribute directly to the Intel XDK integration effort, you can contribute to the many open source components that make up the Intel XDK.

The following open source components are the major elements that are being used by Intel XDK:

  • Node-Webkit
  • Chromium
  • Ripple* emulator
  • Brackets* editor
  • Weinre* remote debugger
  • Crosswalk*
  • Cordova*
  • App Framework*

How do I configure Intel XDK to use 9 patch png for Android* apps splash screen?

Intel XDK does support the use of 9 patch png images for an Android* app's splash screen. See https://software.intel.com/en-us/xdk/articles/android-splash-screens-using-nine-patch-png for details on how to create a 9 patch png image and for a link to an Intel XDK sample that uses 9 patch png images.

How do I stop AVG from popping up the "General Behavioral Detection" window when Intel XDK is launched?

You can try adding nw.exe as the app that needs an exception in AVG.

What do I specify for "App ID" in Intel XDK under Build Settings?

Your app ID uniquely identifies your app. For example, it can be used to identify your app within Apple's application services, allowing you to use features like in-app purchasing and push notifications.

Here are some useful articles on how to create an App ID:

Is it possible to modify the Android Manifest or iOS plist file with the Intel XDK?

You cannot modify the AndroidManifest.xml file directly with our build system, as it only exists in the cloud. However, you may do so by creating a dummy plugin that only contains a plugin.xml file containing directives that can be used to add lines to the AndroidManifest.xml file during the build process. In essence, you add lines to the AndroidManifest.xml file via a local plugin.xml file. Here is an example of a plugin that does just that:

<?xml version="1.0" encoding="UTF-8"?>
<plugin xmlns="http://apache.org/cordova/ns/plugins/1.0"
    id="my-custom-intents-plugin"
    version="1.0.0">
    <name>My Custom Intents Plugin</name>
    <description>Add Intents to the AndroidManifest.xml</description>
    <license>MIT</license>
    <engines>
        <engine name="cordova" version=">=3.0.0" />
    </engines>
    <!-- android -->
    <platform name="android">
        <config-file target="AndroidManifest.xml" parent="/manifest/application">
            <activity android:configChanges="orientation|keyboardHidden|keyboard|screenSize|locale"
                      android:label="@string/app_name"
                      android:launchMode="singleTop"
                      android:name="testa"
                      android:theme="@android:style/Theme.Black.NoTitleBar">
                <intent-filter>
                    <action android:name="android.intent.action.SEND" />
                    <category android:name="android.intent.category.DEFAULT" />
                    <data android:mimeType="*/*" />
                </intent-filter>
            </activity>
        </config-file>
    </platform>
</plugin>

You can inspect the AndroidManifest.xml created in an APK, using apktool with the following command line:

$ apktool d my-app.apk
$ cd my-app
$ more AndroidManifest.xml

This technique exploits the config-file element that is described in the Cordova Plugin Specification docs and can also be used to add lines to iOS plist files. See the Cordova plugin documentation link for additional details.

Here is an example of such a plugin for modifying the iOS plist file, specifically for adding a BIS key to the plist file:

<?xml version="1.0" encoding="UTF-8"?>
<plugin xmlns="http://apache.org/cordova/ns/plugins/1.0"
    id="my-custom-bis-plugin"
    version="0.0.2">
    <name>My Custom BIS Plugin</name>
    <description>Add BIS info to iOS plist file.</description>
    <license>BSD-3</license>
    <preference name="BIS_KEY" />
    <engines>
        <engine name="cordova" version=">=3.0.0" />
    </engines>
    <!-- ios -->
    <platform name="ios">
        <config-file target="*-Info.plist" parent="CFBundleURLTypes">
            <array>
                <dict>
                    <key>ITSAppUsesNonExemptEncryption</key>
                    <true/>
                    <key>ITSEncryptionExportComplianceCode</key>
                    <string>$BIS_KEY</string>
                </dict>
            </array>
        </config-file>
    </platform>
</plugin>

See this forum thread (https://software.intel.com/en-us/forums/intel-xdk/topic/680309) for an example of how to customize the OneSignal plugin's notification sound in an Android app by using a simple custom Cordova plugin. The same technique can be applied to adding custom icons and other assets to your project.

How can I share my Intel XDK app build?

You can send a link to your project via an email invite from your project settings page. However, a login to your account is required to access the file behind the link. Alternatively, you can download the build from the build page, onto your workstation, and push that built image to some location from which you can send a link to that image.

Why does my iOS build fail when I am able to test it successfully on a device and the emulator?

Common reasons include:

  • Your App ID specified in the project settings does not match the one you specified in Apple's developer portal.
  • The provisioning profile does not match the cert you uploaded. Double check with Apple's developer site that you are using the correct and current distribution cert and that the provisioning profile is still active. Download the provisioning profile again and add it to your project to confirm.
  • In Project Build Settings, your App Name is invalid. It should be modified to include only letters, spaces and numbers.

How do I add multiple domains in Domain Access?

Here is the primary doc source for that feature.

If you need to insert multiple domain references, then you will need to add the extra references in the intelxdk.config.additions.xml file. This StackOverflow entry provides a basic idea, and you can inspect the intelxdk.config.*.xml files that are automatically generated with each build for the <access origin="xxx" /> line that is generated based on what you provide in the "Domain Access" field of the "Build Settings" panel on the Projects tab.
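For example, assuming you need references to two extra domains, the intelxdk.config.additions.xml file could contain one <access> element per domain (the domain names below are placeholders; substitute your own):

```xml
<!-- intelxdk.config.additions.xml: extra domain references (placeholder domains) -->
<access origin="https://api.example.com" />
<access origin="https://cdn.example.net" />
```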

How do I build more than one app using the same Apple developer account?

On the Apple developer site, create a distribution certificate using the "iOS* Certificate Signing Request" key downloaded from the Intel XDK Build tab only for the first app. For subsequent apps, reuse the same certificate and import it into the Build tab like you usually would.

How do I include search and spotlight icons as part of my app?

Please refer to this article in the Intel XDK documentation. Create an intelxdk.config.additions.xml file in your top-level directory (the same location as the other intelxdk.*.config.xml files) and add the following lines to support icons in Settings and other areas in iOS*.

<!-- Spotlight Icon -->
<icon platform="ios" src="res/ios/icon-40.png" width="40" height="40" />
<icon platform="ios" src="res/ios/icon-40@2x.png" width="80" height="80" />
<icon platform="ios" src="res/ios/icon-40@3x.png" width="120" height="120" />
<!-- iPhone Spotlight and Settings Icon -->
<icon platform="ios" src="res/ios/icon-small.png" width="29" height="29" />
<icon platform="ios" src="res/ios/icon-small@2x.png" width="58" height="58" />
<icon platform="ios" src="res/ios/icon-small@3x.png" width="87" height="87" />
<!-- iPad Spotlight and Settings Icon -->
<icon platform="ios" src="res/ios/icon-50.png" width="50" height="50" />
<icon platform="ios" src="res/ios/icon-50@2x.png" width="100" height="100" />

For more information related to these configurations, visit http://cordova.apache.org/docs/en/3.5.0/config_ref_images.md.html#Icons%20and%20Splash%20Screens.

For accurate information related to iOS icon sizes, visit https://developer.apple.com/library/ios/documentation/UserExperience/Conceptual/MobileHIG/IconMatrix.html

NOTE: The iPhone 6 icons will only be available if iOS* 7 or 8 is the target.

Cordova iOS* 8 support JIRA tracker: https://issues.apache.org/jira/browse/CB-7043

Does Intel XDK support Modbus TCP communication?

No, since Modbus is a specialized protocol, you need to write either some JavaScript* or native code (in the form of a plugin) to handle the Modbus transactions and protocol.
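For illustration only, here is what the JavaScript side of such a solution might look like: a minimal sketch that assembles a Modbus TCP "Read Holding Registers" (function code 0x03) request frame according to the standard Modbus TCP framing rules. Actually sending the frame would still require a raw TCP socket plugin, since a webview cannot open TCP connections directly; the function name is hypothetical:

```javascript
// Sketch only: builds the byte sequence for a Modbus TCP
// "Read Holding Registers" request (function code 0x03).
// Transport (sending these bytes to TCP port 502) is not shown;
// it would require a Cordova socket plugin.
function modbusReadHoldingRegisters(transactionId, unitId, startAddr, quantity) {
  return [
    (transactionId >> 8) & 0xff, transactionId & 0xff, // MBAP: transaction id
    0x00, 0x00,                                        // MBAP: protocol id (always 0)
    0x00, 0x06,                                        // MBAP: bytes remaining (unit id + 5-byte PDU)
    unitId & 0xff,                                     // MBAP: unit id
    0x03,                                              // PDU: function code
    (startAddr >> 8) & 0xff, startAddr & 0xff,         // PDU: address of first register
    (quantity >> 8) & 0xff, quantity & 0xff            // PDU: number of registers to read
  ];
}
```

For example, modbusReadHoldingRegisters(1, 1, 0, 2) produces the 12-byte frame [0, 1, 0, 0, 0, 6, 1, 3, 0, 0, 0, 2].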

How do I sign an Android* app using an existing keystore?

New with release 3088 of the Intel XDK, you may now import your existing keystore into Intel XDK using the new certificate manager that is built into the Intel XDK. Please read the initial paragraphs of Managing Certificates for your Intel XDK Account and the section titled "Import an Android Certificate Keystore" in that document, for details regarding how to do this.

If the above fails, please send an email to html5tools@intel.com requesting help. It is important that you send that email from the email address associated with your Intel XDK account.

How do I build separately for different Android* versions?

Under the Projects Panel, you can select the Target Android* version under the Build Settings collapsible panel. You can change this value and build your application multiple times to create numerous versions of your application that are targeted for multiple versions of Android*.

How do I display the 'Build App Now' button if my display language is not English?

If your display language is not English and the 'Build App Now' button is proving to be troublesome, changing your display language to English may resolve the issue. The English language pack can be downloaded via a Windows* update. Once you have installed it, proceed to Control Panel > Clock, Language and Region > Region and Language > Change Display Language.

How do I update my Intel XDK version?

When an Intel XDK update is available, an Update Version dialog box lets you download the update. After the download completes, a similar dialog lets you install it. If you did not download or install an update when prompted (or on older versions), click the package icon next to the orange (?) icon in the upper-right to download or install the update. The installation removes the previous Intel XDK version.

How do I import my existing HTML5 app into the Intel XDK?

If your project contains an Intel XDK project file (<project-name>.xdk) you should use the "Open an Intel XDK Project" option located at the bottom of the Projects List on the Projects tab (lower left of the screen, round green "eject" icon, on the Projects tab). This would be the case if you copied an existing Intel XDK project from another system or used a tool that exported a complete Intel XDK project.

If your project does not contain an Intel XDK project file (<project-name>.xdk) you must "import" your code into a new Intel XDK project. To import your project, use the "Start a New Project" option located at the bottom of the Projects List on the Projects tab (lower left of the screen, round blue "plus" icon, on the Projects tab). This opens the "Samples, Demos and Templates" page, which includes an option to "Import Your HTML5 Code Base." Point to the root directory of your project. The Intel XDK will attempt to locate a file named index.html in your project and will set the "Source Directory" on the Projects tab to point to the directory that contains this file.

If your imported project did not contain an index.html file, your project may be unstable. In that case, it is best to delete the imported project from the Intel XDK Projects tab (the "x" icon in the upper right corner of the screen), rename your "root" or "main" HTML file to index.html and import the project again. Several components in the Intel XDK depend on the assumption that the main HTML file in your project is named index.html. See Introducing Intel® XDK Development Tools for more details.

It is highly recommended that your "source directory" be located as a sub-directory inside your "project directory." This ensures that non-source files are not included as part of your build package when building your application. If the "source directory" and "project directory" are the same, the result is longer upload times to the build server and unnecessarily large application executables returned by the build system. See the following images for the recommended project file layout.

How do I completely uninstall the Intel XDK from my system?

Take the following steps to completely uninstall the XDK from your Windows system:

The steps below assume you installed into the "default" location. Version 3900 (and later) installs the user data files one level deeper, but using the locations specified will still find the saved user information and node-webkit cache files. If you did not install in the "default" location you will have to find the location you did install into and remove the files mentioned here from that location.

  • From the Windows Control Panel, remove the Intel XDK, using the Windows uninstall tool.

  • Then:
    > cd %LocalAppData%\Intel\XDK
    > rmdir /s /q .

  • Then:
    > cd %LocalAppData%\XDK
    > copy global-settings.xdk %UserProfile%
    > rmdir /s /q .
    > copy %UserProfile%\global-settings.xdk .

  • Then:
    -- Go to xdk.intel.com and select the download link.
    -- Download and install the new XDK.

If the Intel XDK is still listed as an app in the Windows Control Panel "Uninstall or change a program" list, find this entry in your registry (using regedit):

HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Uninstall

Delete any sub-entries that refer to the Intel XDK. For example, a 3900 install will have this sub-key:

ARP_for_prd_xdk_0.0.3900

Use the following methods on a Linux or a Mac system:

  • On a Linux machine, run the uninstall script, typically /opt/intel/XDK/uninstall.sh.
     
  • Remove the directory into which the Intel XDK was installed.
    -- Typically /opt/intel or your home (~) directory on a Linux machine.
    -- Typically in the /Applications/Intel XDK.app directory on a Mac.
     
  • Then:
    $ find ~ -name global-settings.xdk
    $ cd <result-from-above> (for example ~/Library/Application Support/XDK/ on a Mac)
    $ cp global-settings.xdk ~
    $ rm -Rf *
    $ mv ~/global-settings.xdk .

     
  • Then:
    -- Go to xdk.intel.com and select the download link.
    -- Download and install the new XDK.

Is there a tool that can help me highlight syntax issues in Intel XDK?

Yes, you can use the various linting tools that can be added to the Brackets editor to review any syntax issues in your HTML, CSS and JS files. Go to the "File > Extension Manager..." menu item and add the following extensions: JSHint, CSSLint, HTMLHint, XLint for Intel XDK. Then, review your source files by monitoring the small yellow triangle at the bottom of the edit window (a green check mark indicates no issues).

How do I delete built apps and test apps from the Intel XDK build servers?

You can manage them by logging into: https://appcenter.html5tools-software.intel.com/csd/controlpanel.aspx. This functionality will eventually be available within Intel XDK after which access to app center will be removed.

I need help with the App Security API plugin; where do I find it?

Visit the primary documentation book for the App Security API and see this forum post for some additional details.

When I install my app or use the Debug tab Avast antivirus flags a possible virus, why?

If you are receiving a "Suspicious file detected - APK:CloudRep [Susp]" message from Avast anti-virus installed on your Android device it is due to the fact that you are side-loading the app (or the Intel XDK Debug modules) onto your device (using a download link after building or by using the Debug tab to debug your app), or your app has been installed from an "untrusted" Android store. See the following official explanation from Avast:

Your application was flagged by our cloud reputation system. "Cloud rep" is a new feature of Avast Mobile Security, which flags apks when the following conditions are met:

  1. The file is not prevalent enough; meaning not enough users of Avast Mobile Security have installed your APK.
  2. The source is not an established market (Google Play is an example of an established market).

If you distribute your app using Google Play (or any other trusted market) your users should not see any warning from Avast.

You may see several Avast anti-virus notification screens on your device. All of these are perfectly normal; they are due to the fact that you must enable the installation of "non-market" apps in order to use your device for debug, and the App IDs associated with your never published app (or the custom debug modules that the Debug tab in the Intel XDK builds and installs on your device) will not be found in an "established" (aka "trusted") market, such as Google Play.

If you choose to ignore the "Suspicious app activity!" threat you will not receive a threat for that debug module any longer. It will show up in the Avast 'ignored issues' list. Updates to an existing, ignored, custom debug module should continue to be ignored by Avast. However, new custom debug modules (due to a new project App ID or a new version of Crosswalk selected in your project's Build Settings) will result in a new warning from the Avast anti-virus tool.

How do I add a Brackets extension to the editor that is part of the Intel XDK?

The number of Brackets extensions that are provided in the built-in edition of the Brackets editor are limited to ensure stability of the Intel XDK product. Not all extensions are compatible with the edition of Brackets that is embedded within the Intel XDK. Adding incompatible extensions can cause the Intel XDK to quit working.

Despite this warning, there are useful extensions that have not been included in the editor and which can be added to the Intel XDK. Adding them is temporary; each time you update the Intel XDK (or if you reinstall the Intel XDK) you will have to re-add your Brackets extensions. To add a Brackets extension, use the following procedure:

  • exit the Intel XDK
  • download a ZIP file of the extension you wish to add
  • on Windows, unzip the extension here:
    %LocalAppData%\Intel\XDK\xdk\brackets\b\extensions\dev
  • on Mac OS X, unzip the extension here:
    /Applications/Intel\ XDK/Intel\ XDK.app/Contents/Resources/app.nw/brackets/b/extensions/dev
  • start the Intel XDK

Note that the locations given above are subject to change with new releases of the Intel XDK.

Why does my app or game require so many permissions on Android when built with the Intel XDK?

When you build your HTML5 app using the Intel XDK for Android or Android-Crosswalk you are creating a Cordova app. It may seem like you're not building a Cordova app, but you are. In order to package your app so it can be distributed via an Android store and installed on an Android device, it needs to be built as a hybrid app. The Intel XDK uses Cordova to create that hybrid app.

A pure Cordova app requires the NETWORK permission; it's needed to "jump" between your HTML5 environment and the native Android environment. Additional permissions will be added by any Cordova plugins you include with your application; which permissions are included is a function of what each plugin does and requires.

Crosswalk for Android builds also require the NETWORK permission, because the Crosswalk image built by the Intel XDK includes support for Cordova. In addition, current versions of Crosswalk (12 and 14 at the time this FAQ was written) also require the NETWORK STATE and WIFI STATE permissions. There is an extra permission in some versions of Crosswalk (WRITE EXTERNAL STORAGE) that is only needed by the shared model library of Crosswalk; we have asked the Crosswalk project to remove this permission in a future Crosswalk version.

If you are seeing more than the following four permissions in your XDK-built Crosswalk app:

  • android.permission.INTERNET
  • android.permission.ACCESS_NETWORK_STATE
  • android.permission.ACCESS_WIFI_STATE
  • android.permission.WRITE_EXTERNAL_STORAGE

then you are seeing permissions that have been added by some plugins. Each plugin is different, so there is no hard rule of thumb. The two "default" core Cordova plugins that are added by the Intel XDK blank templates (device and splash screen) do not require any Android permissions.

BTW: the permission list above comes from a Crosswalk 14 build. Crosswalk 12 builds do not include the last permission; it was added when the Crosswalk project introduced the shared model library option, which started with Crosswalk 13 (the Intel XDK does not support Crosswalk 13 builds).

How do I make a copy of an existing Intel XDK project?

If you just need to make a backup copy of an existing project, and do not plan to open that backup copy as a project in the Intel XDK, do the following:

  • Exit the Intel XDK.
  • Copy the entire project directory:
    • on Windows, use File Explorer to "right-click" and "copy" your project directory, then "right-click" and "paste"
    • on Mac use Finder to "right-click" and then "duplicate" your project directory
    • on Linux, open a terminal window, "cd" to the folder that contains your project, and type "cp -a old-project/ new-project/" at the terminal prompt (where "old-project/" is the folder name of your existing project that you want to copy and "new-project/" is the name of the new folder that will contain a copy of your existing project)

If you want to use an existing project as the starting point of a new project in the Intel XDK, follow the process described below. It ensures that the build system does not confuse the ID in your old project with the one stored in your new project. If you do not follow this procedure you will have multiple projects using the same project ID (a special GUID that is stored inside the Intel XDK <project-name>.xdk file in the root directory of your project). Each project in your account must have a unique project ID.

  • Exit the Intel XDK.
  • Make a copy of your existing project using the process described above.
  • Inside the new project that you made (that is, your new copy of your old project), make copies of the <project-name>.xdk file and <project-name>.xdke files and rename those copies to something like project-new.xdk and project-new.xdke (anything you like, just something different than the original project name, preferably the same name as the new project folder in which you are making this new project).
  • Using a TEXT EDITOR (only) (such as Notepad or Sublime or Brackets or some other TEXT editor), open your new "project-new.xdk" file (whatever you named it) and find the projectGuid line, it will look something like this:
    "projectGuid": "a863c382-ca05-4aa4-8601-375f9f209b67",
  • Change the "GUID" to all zeroes, like this: "00000000-0000-0000-0000-000000000000"
  • Save the modified "project-new.xdk" file.
  • Open the Intel XDK.
  • Go to the Projects tab.
  • Select "Open an Intel XDK Project" (the green button at the bottom left of the Projects tab).
  • To open this new project, locate the new "project-new.xdk" file inside the new project folder you copied above.
  • Don't forget to change the App ID in your new project. This is necessary to avoid conflicts with the project you copied from, in the store and when side-loading onto a device.

My project does not include a www folder. How do I fix it so it includes a www or source directory?

The Intel XDK HTML5 and Cordova project file structures are meant to mimic a standard Cordova project. In a Cordova (or PhoneGap) project there is a subdirectory (or folder) named www that contains all of the HTML5 source code and asset files that make up your application. For best results, it is advised that you follow this convention, of putting your source inside a "source directory" inside of your project folder.

This most commonly happens as the result of exporting a project from an external tool, such as Construct2, or as the result of importing an existing HTML5 web app that you are converting into a hybrid mobile application (e.g., an Intel XDK Cordova app). If you would like to convert an existing Intel XDK project into this format, follow the steps below:

  • Exit the Intel XDK.
  • Copy the entire project directory:
    • on Windows, use File Explorer to "right-click" and "copy" your project directory, then "right-click" and "paste"
    • on Mac use Finder to "right-click" and then "duplicate" your project directory
    • on Linux, open a terminal window, "cd" to the folder that contains your project, and type "cp -a old-project/ new-project/" at the terminal prompt (where "old-project/" is the folder name of your existing project that you want to copy and "new-project/" is the name of the new folder that will contain a copy of your existing project)
  • Create a "www" directory inside the new duplicate project you just created above.
  • Move your index.html and other source and asset files to the "www" directory you just created -- this is now your "source" directory, located inside your "project" directory (do not move the <project-name>.xdk and xdke files and any intelxdk.config.*.xml files, those must stay in the root of the project directory)
  • Inside the new project that you made above (by making a copy of the old project), rename the <project-name>.xdk file and <project-name>.xdke files to something like project-copy.xdk and project-copy.xdke (anything you like, just something different than the original project, preferably the same name as the new project folder in which you are making this new project).
  • Using a TEXT EDITOR (only) (such as Notepad or Sublime or Brackets or some other TEXT editor), open the new "project-copy.xdk" file (whatever you named it) and find the line named projectGuid, it will look something like this:
    "projectGuid": "a863c382-ca05-4aa4-8601-375f9f209b67",
  • Change the "GUID" to all zeroes, like this: "00000000-0000-0000-0000-000000000000"
  • A few lines down find: "sourceDirectory": "",
  • Change it to this: "sourceDirectory": "www",
  • Save the modified "project-copy.xdk" file.
  • Open the Intel XDK.
  • Goto the Projects tab.
  • Select "Open an Intel XDK Project" (the green button at the bottom left of the Projects tab).
  • To open this new project, locate the new "project-copy.xdk" file inside the new project folder you copied above.
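The GUID and sourceDirectory edits in the steps above can also be scripted, since the .xdk file is plain JSON (as noted in the icon-path FAQ entry further down). The sketch below is illustrative: the fix_project_settings helper and the sample file contents are not part of the Intel XDK.

```python
import json

def fix_project_settings(xdk_text: str) -> str:
    """Zero the projectGuid and point sourceDirectory at "www" (illustrative helper)."""
    data = json.loads(xdk_text)
    data["projectGuid"] = "00000000-0000-0000-0000-000000000000"
    data["sourceDirectory"] = "www"
    return json.dumps(data, indent=2)

# Minimal stand-in for a real <project-name>.xdk file:
sample = '{"projectGuid": "a863c382-ca05-4aa4-8601-375f9f209b67", "sourceDirectory": ""}'
fixed = fix_project_settings(sample)
```

Run it against a copy of your renamed project file; because json.dumps re-serializes the whole document, the output is guaranteed to still be valid JSON.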

Can I install more than one copy of the Intel XDK onto my development system?

Yes, you can install more than one version onto your development system. However, you cannot run multiple instances of the Intel XDK at the same time. Be aware that new releases sometimes change the project file format, so it is a good idea, in these cases, to make a copy of your project if you need to experiment with a different version of the Intel XDK. See the instructions in a FAQ entry above regarding how to make a copy of your Intel XDK project.

Follow the instructions in this forum post to install more than one copy of the Intel XDK onto your development system.

On Apple OS X* and Linux* systems, does the Intel XDK need the OpenSSL* library installed?

Yes. Several features of the Intel XDK require the OpenSSL library, which typically comes pre-installed on Linux and OS X systems. If the Intel XDK reports that it could not find libssl, go to https://www.openssl.org to download and install it.

I have a web application that I would like to distribute in app stores without major modifications. Is this possible using the Intel XDK?

Yes, if you have a true web app or “client app” that only uses HTML, CSS and JavaScript, it is usually not too difficult to convert it to a Cordova hybrid application (this is what the Intel XDK builds when you create an HTML5 app). If you rely heavily on PHP or other server scripting languages embedded in your pages you will have more work to do. Because your Cordova app is not associated with a server, you cannot rely on server-based programming techniques; instead, you must rewrite any such code to use RESTful APIs that your app interacts with using, for example, AJAX calls.

What is the best training approach to using the Intel XDK for a newbie?

First, become well-versed in the art of client web apps, apps that rely only on HTML, CSS and JavaScript and utilize RESTful APIs to talk to network services. With that you will have mastered 80% of the problem. After that, it is simply a matter of understanding how Cordova plugins are able to extend the JavaScript API for access to features of the platform. For HTML5 training there are many sites providing tutorials. It may also help to read Five Useful Tips on Getting Started Building Cordova Mobile Apps with the Intel XDK, which will help you understand some of the differences between developing for a traditional server-based environment and developing for the Intel XDK hybrid Cordova app environment.

What is the best platform to start building an app with the Intel XDK? And what are the important differences between the Android, iOS and other mobile platforms?

There is no one most important difference between the Android, iOS and other platforms. It is important to understand that the HTML5 runtime engine that executes your app on each platform will vary as a function of the platform. Just as there are differences between Chrome and Firefox and Safari and Internet Explorer, there are differences between iOS 9 and iOS 8 and Android 4 and Android 5, etc. Android has the most significant differences between vendors and versions of Android. This is one of the reasons the Intel XDK offers the Crosswalk for Android build option, to normalize and update the Android issues.

In general, if you can get your app working well on Android (or Crosswalk for Android) first you will generally have fewer issues to deal with when you start to work on the iOS and Windows platforms. In addition, the Android platform has the most flexible and useful debug options available, so it is the easiest platform to use for debugging and testing your app.

Is my password encrypted and why is it limited to fifteen characters?

Yes, your password is stored encrypted and is managed by https://signin.intel.com. Your Intel XDK userid and password can also be used to log into the Intel XDK forum as well as the Intel Developer Zone. The Intel XDK itself neither stores nor manages your userid and password.

The rules regarding allowed userids and passwords are answered on this Sign In FAQ page, where you can also find help on recovering and changing your password.

Why does the Intel XDK take a long time to start on Linux or Mac?

...and why am I getting this error message? "Attempt to contact authentication server is taking a long time. You can wait, or check your network connection and try again."

At startup, the Intel XDK attempts to automatically determine the proxy settings for your machine. Unfortunately, on some system configurations it is unable to reliably detect your system proxy settings. As an example, you might see something like this image when starting the Intel XDK.

On some systems you can get around this problem by setting proxy environment variables and then starting the Intel XDK from a command line that inherits them. To set those environment variables, use commands similar to the following:

$ export no_proxy="localhost,127.0.0.1/8,::1"
$ export NO_PROXY="localhost,127.0.0.1/8,::1"
$ export http_proxy=http://proxy.mydomain.com:123/
$ export HTTP_PROXY=http://proxy.mydomain.com:123/
$ export https_proxy=http://proxy.mydomain.com:123/
$ export HTTPS_PROXY=http://proxy.mydomain.com:123/

IMPORTANT! The name of your proxy server and the port (or ports) that your proxy server requires will be different than those shown in the example above. Please consult with your IT department to find out what values are appropriate for your site. Intel has no way of knowing what configuration is appropriate for your network.

If you use the Intel XDK in multiple locations (at work and at home), you may have to change the proxy settings before starting the Intel XDK after switching to a new network location. For example, many work networks use a proxy server, but most home networks do not require such a configuration. In that case, you need to be sure to "unset" the proxy environment variables before starting the Intel XDK on a non-proxy network.

After you have successfully configured your proxy environment variables, you can start the Intel XDK manually, from the command-line.

On a Mac, where the Intel XDK is installed in the default location, type the following (from a terminal window that has the above environment variables set):

$ open /Applications/Intel\ XDK.app/

On a Linux machine, assuming the Intel XDK has been installed in the ~/intel/XDK directory, type the following (from a terminal window that has the above environment variables set):

$ ~/intel/XDK/xdk.sh &

In the Linux case, you will need to adjust the directory name that points to the xdk.sh file to match your installation. The example above assumes a local install into the ~/intel/XDK directory; since Linux installations offer more options regarding the installation directory, adjust the path to suit your particular system.
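The export-then-launch steps above can also be combined into a small script. The Python sketch below uses the same placeholder proxy address as the examples above, and the commented launch line assumes a local Linux install in ~/intel/XDK; substitute your own values.

```python
import os
import subprocess

# Same placeholder proxy as in the export examples above; substitute your own.
proxy = "http://proxy.mydomain.com:123/"
env = dict(os.environ,
           no_proxy="localhost,127.0.0.1/8,::1",
           NO_PROXY="localhost,127.0.0.1/8,::1",
           http_proxy=proxy, HTTP_PROXY=proxy,
           https_proxy=proxy, HTTPS_PROXY=proxy)

# Uncomment to launch on Linux (path assumes a local install in ~/intel/XDK):
# subprocess.Popen([os.path.expanduser("~/intel/XDK/xdk.sh")], env=env)
```

Passing the modified environment to the child process, rather than mutating os.environ, means your shell's settings are untouched after the launcher exits.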

How do I generate a P12 file on a Windows machine?

See these articles:

How do I change the default dir for creating new projects in the Intel XDK?

You can change the default new project location manually by modifying a field in the global-settings.xdk file. Locate the global-settings.xdk file on your system (the precise location varies as a function of the OS) and find this JSON object inside that file:

"projects-tab": {
    "defaultPath": "/Users/paul/Documents/XDK",
    "LastSortType": "descending|Name",
    "lastSortType": "descending|Opened",
    "thirdPartyDisclaimerAcked": true
},

The example above came from a Mac. On a Mac the global-settings.xdk file is located in the "~/Library/Application Support/XDK" directory.

On a Windows machine the global-settings.xdk file is normally found in the "%LocalAppData%\XDK" directory. The part you are looking for will look something like this:

"projects-tab": {
    "thirdPartyDisclaimerAcked": false,
    "LastSortType": "descending|Name",
    "lastSortType": "descending|Opened",
    "defaultPath": "C:\\Users\\paul/Documents"
},

Obviously, it's the defaultPath part you want to change.

BE CAREFUL WHEN YOU EDIT THE GLOBAL-SETTINGS.XDK FILE!! You've been warned...

Make sure the result is proper JSON when you are done, or it may cause your XDK to cough and hack loudly. Make a backup copy of global-settings.xdk before you start, just in case.
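To reduce the risk of breaking the file, you can make the edit programmatically; re-serializing through a JSON library guarantees the result is valid JSON. The set_default_path helper and the sample fragment below are illustrative, not part of the Intel XDK.

```python
import json

def set_default_path(settings_text: str, new_path: str) -> str:
    """Update projects-tab.defaultPath; the json round-trip keeps the file valid."""
    data = json.loads(settings_text)
    data["projects-tab"]["defaultPath"] = new_path
    return json.dumps(data, indent=2)

# Stand-in for the relevant fragment of a global-settings.xdk file:
sample = '{"projects-tab": {"defaultPath": "C:\\\\Users\\\\paul/Documents", "thirdPartyDisclaimerAcked": false}}'
updated = set_default_path(sample, "D:\\Projects\\XDK")
```

Apply it to a backup copy of global-settings.xdk first, exactly as the warning above advises.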

Where can I find a list of recent and upcoming webinars?

What network addresses must I enable in my firewall to ensure the Intel XDK will work on my restricted network?

Normally, access to the external servers that the Intel XDK uses is handled automatically by your proxy server. However, if you are working in an environment that has restricted Internet access and you need to provide your IT department with a list of URLs that you need access to in order to use the Intel XDK, then please provide them with the following list of domain names:

  • appcenter.html5tools-software.intel.com (for communication with the build servers)
  • s3.amazonaws.com (for downloading sample apps and built apps)
  • download.xdk.intel.com (for getting XDK updates)
  • xdk-feed-proxy.html5tools-software.intel.com (for receiving the tweets in the upper right corner of the XDK)
  • signin.intel.com (for logging into the XDK)
  • sfederation.intel.com (for logging into the XDK)

Normally this should be handled by your network proxy (if you're on a corporate network) or should not be an issue if you are working on a typical home network.

Installing the Intel XDK on Windows fails with a "Package signature verification failed." message.

If you receive a "Package signature verification failed" message (see image below) when installing the Intel XDK on your system, it is likely due to one of the following two reasons:

  • Your system does not have a properly installed "root certificate" file, which is needed to confirm that the install package is good.
  • The install package is corrupt and failed the verification step.

The first case can happen if you are attempting to install the Intel XDK on an unsupported version of Windows. The Intel XDK is only supported on Microsoft Windows 7 and higher. If you attempt to install on Windows Vista (or earlier) you may see this verification error. The workaround is to install the Intel XDK on a Windows 7 or greater machine.

The second case is likely due to a corruption of the install package during download or due to tampering. The workaround is to re-download the install package and attempt another install.

If you are installing on a Windows 7 (or greater) machine and you see this message it is likely due to a missing or bad root certificate on your system. To fix this you may need to start the "Certificate Propagation" service. Open the Windows "services.msc" panel and then start the "Certificate Propagation" service. Additional links related to this problem can be found here > https://technet.microsoft.com/en-us/library/cc754841.aspx

See this forum thread for additional help regarding this issue > https://software.intel.com/en-us/forums/intel-xdk/topic/603992

Troubles installing the Intel XDK on a Linux or Ubuntu system, which option should I choose?

Choose the local user option, not root or sudo, when installing the Intel XDK on your Linux or Ubuntu system. This is the most reliable and trouble-free option and is the default installation option. It ensures that the Intel XDK has all the proper permissions necessary to execute properly on your Linux system. The Intel XDK will be installed in a subdirectory of your home (~) directory.

Connection Problems? -- Intel XDK SSL certificates update

On January 26, 2016 we updated the SSL certificates on our back-end systems to SHA2 certificates. The existing certificates were due to expire in February of 2016. We have also disabled support for obsolete protocols.

If you are experiencing persistent connection issues (since Jan 26, 2016), please post a problem report on the forum and include in your problem report:

  • the operation that failed
  • the version of your XDK
  • the version of your operating system
  • your geographic region
  • and a screen capture

How do I resolve build failure: "libpng error: Not a PNG file"?  

If you are experiencing build failures with CLI 5 Android builds, and the detailed error log includes a message similar to the following:

Execution failed for task ':mergeArmv7ReleaseResources'.
> Error: Failed to run command:
    /Developer/android-sdk-linux/build-tools/22.0.1/aapt s -i .../platforms/android/res/drawable-land-hdpi/screen.png -o .../platforms/android/build/intermediates/res/armv7/release/drawable-land-hdpi-v4/screen.png

Error Code: 42

Output: libpng error: Not a PNG file

You need to change the format of your icon and/or splash screen images to PNG format.

The error message refers to a file named "screen.png" -- which is what each of your splash screen images is renamed to before being moved into the build project resource directories. In this case, JPG images were supplied as splash screens rather than PNG images, so the renamed files were found by the build system to be invalid.

Convert your splash screen images to PNG format. Renaming JPG images to PNG will not work! You must convert your JPG images into PNG format images using an appropriate image editing tool. The Intel XDK does not provide any such conversion tool.

Beginning with Cordova CLI 5, all icons and splash screen images must be supplied in PNG format. This applies to all supported platforms. This is an undocumented "new feature" of the Cordova CLI 5 build system that was implemented by the Apache Cordova project.
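You can catch this problem before submitting a build: every real PNG file starts with a fixed 8-byte signature, while a JPG merely renamed to .png does not. Below is a minimal check; the is_png helper and the demo filenames are illustrative.

```python
import tempfile

PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"  # the 8-byte header every PNG file begins with

def is_png(path: str) -> bool:
    """True only if the file actually begins with the PNG signature."""
    with open(path, "rb") as f:
        return f.read(8) == PNG_SIGNATURE

# Demo: a file carrying the PNG signature passes; JPG bytes (FF D8 FF) fail.
with tempfile.NamedTemporaryFile(suffix=".png", delete=False) as f:
    f.write(PNG_SIGNATURE + b"...rest of image data...")
    real_png = f.name
with tempfile.NamedTemporaryFile(suffix=".png", delete=False) as f:
    f.write(b"\xff\xd8\xff" + b"...jpeg data...")
    renamed_jpg = f.name
```

Running this check over your icon and splash screen files before a build will flag any JPG that was simply renamed rather than converted.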

Why do I get a "Parse Error" when I try to install my built APK on my Android device?

Because you have built an "unsigned" Android APK. You must click the "signed" box in the Android Build Settings section of the Projects tab if you want to install an APK on your device. The only reason you would choose to create an "unsigned" APK is if you need to sign it manually. This is very rare and not the normal situation.

My converted legacy keystore does not work. Google Play is rejecting my updated app.

The keystore you converted when you updated to 3088 (now 3240 or later) is the same keystore you were using in 2893. When you upgraded to 3088 (or later) and "converted" your legacy keystore, you re-signed and renamed your legacy keystore and it was transferred into a database to be used with the Intel XDK certificate management tool. It is still the same keystore, but with an alias name and password assigned by you and accessible directly by you through the Intel XDK.

If you kept the converted legacy keystore in your account following the conversion you can download that keystore from the Intel XDK for safe keeping (do not delete it from your account or from your system). Make sure you keep track of the new password(s) you assigned to the converted keystore.

There are two problems we have experienced with converted legacy keystores at the time of the 3088 release (April, 2016):

  • Foreign (non-ASCII) characters used in the new alias name and passwords were being corrupted.
  • Final signing of your APK by the build system was being done with RSA256 rather than SHA1.

Both of the above items have been resolved and should no longer be an issue.

If you are currently unable to complete a build with your converted legacy keystore (i.e., builds fail when you use the converted legacy keystore but they succeed when you use a new keystore) the first bullet above is likely the reason your converted keystore is not working. In that case we can reset your converted keystore and give you the option to convert it again. You do this by requesting that your legacy keystore be "reset" by filling out this form. For 100% surety during that second conversion, use only 7-bit ASCII characters in the alias name you assign and for the password(s) you assign.

IMPORTANT: using the legacy certificate to build your Android app is ONLY necessary if you have already published an app to an Android store and need to update that app. If you have never published an app to an Android store using the legacy certificate you do not need to concern yourself with resetting and reconverting your legacy keystore. It is easier, in that case, to create a new Android keystore and use that new keystore.

If you ARE able to successfully build your app with the converted legacy keystore, but your updated app (in the Google store) does not install on some older Android 4.x devices (typically a subset of Android 4.0-4.2 devices), the second bullet cited above is likely the reason for the problem. The solution, in that case, is to rebuild your app and resubmit it to the store (that problem was a build-system problem that has been resolved).

How can I have others beta test my app using Intel App Preview?

Apps that you sync to your Intel XDK account, using the Test tab's green "Push Files" button, can only be accessed by logging into Intel App Preview with the same Intel XDK account credentials that you used to push the files to the cloud. In other words, you can only download and run your app for testing with Intel App Preview if you log into the same account that you used to upload that test app. This restriction applies to downloading your app into Intel App Preview via the "Server Apps" tab, at the bottom of the Intel App Preview screen, or by scanning the QR code displayed on the Intel XDK Test tab using the camera icon in the upper right corner of Intel App Preview.

If you want to allow others to test your app, using Intel App Preview, it means you must use one of two options:

  • give them your Intel XDK userid and password
  • create an Intel XDK "test account" and provide your testers with that userid and password

For security sake, we highly recommend you use the second option (create an Intel XDK "test account"). 

A "test account" is simply a second Intel XDK account that you do not plan to use for development or builds. Do not use the same email address for your "test account" as you are using for your main development account. You should use a "throw away" email address for that "test account" (an email address that you do not care about).

Assuming you have created an Intel XDK "test account" and have instructed your testers to download and install Intel App Preview; have provided them with your "test account" userid and password; and you are ready to have them test:

  • sign out of your Intel XDK "development account" (using the little "man" icon in the upper right)
  • sign into your "test account" (again, using the little "man" icon in the Intel XDK toolbar)
  • make sure you have selected the project that you want users to test, on the Projects tab
  • goto the Test tab
  • make sure "MOBILE" is selected (upper left of the Test tab)
  • push the green "PUSH FILES" button on the Test tab
  • log out of your "test account"
  • log into your development account

Then, tell your beta testers to log into Intel App Preview with your "test account" credentials and instruct them to choose the "Server Apps" tab at the bottom of the Intel App Preview screen. From there they should see the name of the app you synced using the Test tab and can simply start it by touching the app name (followed by the big blue and white "Launch This App" button). Starting the app this way is actually easier than sending them a copy of the QR code. The QR code is very dense and is hard to read with some devices, depending on the quality of the camera in their device.

Note that when running your test app inside of Intel App Preview, your testers cannot exercise any features associated with third-party plugins, only core Cordova plugins. Thus, you need to ensure that those parts of your app that depend on non-core Cordova plugins have been disabled or have exception handlers to prevent your app from crashing or freezing.

I'm having trouble making Google Maps work with my Intel XDK app. What can I do?

There are many reasons that can cause your attempt to use Google Maps to fail. Mostly it is due to the fact that you need to download the Google Maps API (JavaScript library) at runtime to make things work. However, there is no guarantee that you will have a good network connection, so if you do it the way you are used to doing it, in a browser...

<script src="https://maps.googleapis.com/maps/api/js?key=API_KEY&sensor=true"></script>

...you may get yourself into trouble, in an Intel XDK Cordova app. See Loading Google Maps in Cordova the Right Way for an excellent tutorial on why this is a problem and how to deal with it. Also, it may help to read Five Useful Tips on Getting Started Building Cordova Mobile Apps with the Intel XDK, especially item #3, to get a better understanding of why you shouldn't use the "browser technique" you're familiar with.

An alternative is to use a mapping tool that allows you to include the JavaScript directly in your app, rather than downloading it over the network each time your app starts. Several Intel XDK developers have reported very good luck with the open-source JavaScript library named LeafletJS, which uses OpenStreetMap as its map database source.

You can also search the Cordova Plugin Database for Cordova plugins that implement mapping features, in some cases using native SDKs and libraries.

How do I fix "Cannot find the Intel XDK. Make sure your device and intel XDK are on the same wireless network." error messages?

You can either disable your firewall or allow access through the firewall for the Intel XDK. To allow access through the Windows firewall, go to the Windows Control Panel, search for the Firewall (Control Panel > System and Security > Windows Firewall > Allowed Apps) and enable Node Webkit (nw or nw.exe) through the firewall.

See the image below (this image is from a Windows 8.1 system).

Google Services needs my SHA1 fingerprint. Where do I get my app's SHA fingerprint?

Your app's SHA fingerprint is part of your build signing certificate. Specifically, it is part of the signing certificate that you used to build your app. The Intel XDK provides a way to download your build certificates directly from within the Intel XDK application (see the Intel XDK documentation for details on how to manage your build certificates). Once you have downloaded your build certificate you can use these instructions provided by Google, to extract the fingerprint, or simply search the Internet for "extract fingerprint from android build certificate" to find many articles detailing this process.

Why am I unable to test or build or connect to the old build server with Intel XDK version 2893?

This is an Important Note Regarding the use of Intel XDK Versions 2893 and Older!!

As of June 13, 2016, versions of the Intel XDK released prior to March 2016 (2893 and older) can no longer use the Build tab, the Test tab or Intel App Preview; and can no longer create custom debug modules for use with the Debug and Profile tabs. This change was necessary to improve the security and performance of our Intel XDK cloud-based build system. If you are using version 2893 or older, of the Intel XDK, you must upgrade to version 3088 or greater to continue to develop, debug and build Intel XDK Cordova apps.

The error message you see below, "NOTICE: Internet Connection and Login Required," when trying to use the Build tab is due to the fact that the cloud-based component used by those older versions of the Intel XDK has been retired and is no longer present. The error message appears to be misleading, but it is the easiest way to identify this condition.

How do I run the Intel XDK on Fedora Linux?

See the instructions below, copied from this forum post:

$ sudo find xdk/install/dir -name libudev.so.0
$ cd dir/found/above
$ sudo rm libudev.so.0
$ sudo ln -s /lib64/libudev.so.1 libudev.so.0

Note the "xdk/install/dir" is the name of the directory where you installed the Intel XDK. This might be "/opt/intel/xdk" or "~/intel/xdk" or something similar. Since the Linux install is flexible regarding the precise installation location you may have to search to find it on your system.

Once you find that libudev.so file in the Intel XDK install directory you must "cd" to that directory to finish the operations as written above.

Additional instructions have been provided in the related forum thread; please see that thread for the latest information regarding hints on how to make the Intel XDK run on a Fedora Linux system.
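The steps above amount to replacing the bundled libudev.so.0 with a symbolic link to the system's libudev.so.1. The Python sketch below demonstrates the same operation on a throwaway directory so it can be run safely anywhere; on a real Fedora system you would point xdk_dir at your actual Intel XDK install directory.

```python
import os
import tempfile
from pathlib import Path

# Throwaway stand-in for the Intel XDK install directory:
xdk_dir = Path(tempfile.mkdtemp())
bundled = xdk_dir / "bin" / "libudev.so.0"
bundled.parent.mkdir(parents=True)
bundled.touch()                               # stand-in for the bundled library

found = next(xdk_dir.rglob("libudev.so.0"))   # the "find" step
found.unlink()                                # the "rm" step
found.symlink_to("/lib64/libudev.so.1")       # the "ln -s" step
```

On a real system the link target must exist, and you may need root privileges if the XDK was installed outside your home directory.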

The Intel XDK generates a path error for my launch icons and splash screen files.

If you have an older project (created prior to August of 2016 using a version of the Intel XDK older than 3491) you may be seeing a build error indicating that some icon and/or splash screen image files cannot be found. This is likely due to the fact that some of your icon and/or splash screen image files are located within your source folder (typically named "www") rather than in the new package-assets folder. For example, inspecting one of the auto-generated intelxdk.config.*.xml files you might find something like the following:

<icon platform="windows" src="images/launchIcon_24.png" width="24" height="24"/>
<icon platform="windows" src="images/launchIcon_434x210.png" width="434" height="210"/>
<icon platform="windows" src="images/launchIcon_744x360.png" width="744" height="360"/>
<icon platform="windows" src="package-assets/ic_launch_50.png" width="50" height="50"/>
<icon platform="windows" src="package-assets/ic_launch_150.png" width="150" height="150"/>
<icon platform="windows" src="package-assets/ic_launch_44.png" width="44" height="44"/>

where the first three images are not being found by the build system because they are located in the "www" folder and the last three are being found, because they are located in the "package-assets" folder.

This problem usually comes about because the UI does not include the appropriate "slots" to hold those images. This results in some "dead" icon or splash screen images inside the <project-name>.xdk file which need to be removed. To fix this, make a backup copy of your <project-name>.xdk file and then, using a CODE or TEXT editor (e.g., Notepad++ or Brackets or Sublime Text or vi, etc.), edit your <project-name>.xdk file in the root of your project folder.

Inside of your <project-name>.xdk file you will find entries that look like this:

"icons_": [
  {"relPath": "images/launchIcon_24.png", "width": 24, "height": 24},
  {"relPath": "images/launchIcon_434x210.png", "width": 434, "height": 210},
  {"relPath": "images/launchIcon_744x360.png", "width": 744, "height": 360},

Find all the entries that are pointing to the problem files and remove those problem entries from your <project-name>.xdk file. Obviously, you need to do this when the XDK is closed and only after you have made a backup copy of your <project-name>.xdk file, just in case you end up with a missing comma. The <project-name>.xdk file is a JSON file and needs to be in proper JSON format after you make changes or it will not be read properly by the XDK when you open it.
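Because the file is JSON, the problem entries can also be located mechanically before you edit. The misplaced_icons helper below is illustrative, not part of the Intel XDK; it flags any "icons_" entry whose relPath does not point into the package-assets folder.

```python
import json

def misplaced_icons(xdk_text: str) -> list:
    """Return relPath values of icon entries that live outside package-assets."""
    data = json.loads(xdk_text)
    return [e["relPath"] for e in data.get("icons_", [])
            if not e["relPath"].startswith("package-assets/")]

# Fragment mirroring the example entries above:
sample = '''{"icons_": [
    {"relPath": "images/launchIcon_24.png", "width": 24, "height": 24},
    {"relPath": "package-assets/ic_launch_50.png", "width": 50, "height": 50}
]}'''
```

A side benefit: if json.loads raises an error on your edited <project-name>.xdk file, you know immediately that the hand edit broke the JSON.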

Then move your problem icons and splash screen images to the package-assets folder and reference them from there. Use this technique (below) to add additional icons by using the intelxdk.config.additions.xml file.

<!-- alternate way to add icons to Cordova builds, rather than using XDK GUI -->
<!-- especially for adding icon resolutions that are not covered by the XDK GUI -->
<!-- Android icons and splash screens -->
<platform name="android">
    <icon src="package-assets/android/icon-ldpi.png" density="ldpi" width="36" height="36" />
    <icon src="package-assets/android/icon-mdpi.png" density="mdpi" width="48" height="48" />
    <icon src="package-assets/android/icon-hdpi.png" density="hdpi" width="72" height="72" />
    <icon src="package-assets/android/icon-xhdpi.png" density="xhdpi" width="96" height="96" />
    <icon src="package-assets/android/icon-xxhdpi.png" density="xxhdpi" width="144" height="144" />
    <icon src="package-assets/android/icon-xxxhdpi.png" density="xxxhdpi" width="192" height="192" />
    <splash src="package-assets/android/splash-320x426.9.png" density="ldpi" orientation="portrait" />
    <splash src="package-assets/android/splash-320x470.9.png" density="mdpi" orientation="portrait" />
    <splash src="package-assets/android/splash-480x640.9.png" density="hdpi" orientation="portrait" />
    <splash src="package-assets/android/splash-720x960.9.png" density="xhdpi" orientation="portrait" />
</platform>

Upgrading to the latest version of the Intel XDK results in a build error with existing projects.

Some users have reported that by creating a new project, adding their plugins to that new project and then copying the www folder from the old project to the new project they are able to resolve this issue. Obviously, you also need to update your Build Settings in the new project to match those from the old project.

How do I generate my Android hash key for Facebook ads?

Please see this article for help.


Novosibirsk State University Gets More Efficient Numerical Simulation


Russia's Novosibirsk State University boosted a simulation tool’s performance by 3X with Intel® Parallel Studio, Intel® Advisor, and Intel® Trace Analyzer and Collector, cutting the standard time for calculating one problem from one week to just two days.

When researchers at the University were looking to develop and optimize a software tool for numerical simulation of magnetohydrodynamics (MHD) problems with hydrogen ionization—part of an astrophysical objects simulation (AstroPhi) project—they needed to optimize the tool’s performance on Intel® Xeon Phi™ processor-based hardware. The team turned to Intel® Advisor and Intel® Trace Analyzer and Collector. This resulted in a performance speed-up of 3X.

"The use of Intel® Advanced Vector Extensions for Intel® Xeon Phi™ processors gave us the maximum code performance compared with other architectures available on the market," explained Igor Kulikov, assistant professor.

Get the whole story in our new case study.

Installing Intel® MKL Cloudera* CDH Parcel


Intel worked with Cloudera* to make it easy to use the community-forum-supported Intel® Math Kernel Library (Intel® MKL) with Cloudera* CDH. This page provides general installation and support notes for Intel® MKL as distributed via Cloudera CDH parcels, described below.

These software development tools are also available as part of the Intel® Parallel Studio XE and Intel® System Studio products. These products include enterprise-level Intel® Online Service Center support.

Using Intel® MKL Parcel

Here is how to install the Intel® MKL Parcel

  1. In the Cloudera Manager Admin Console, access the Parcels page by doing one of the following:
  • Click the Parcels indicator in the top navigation bar.
  • Click Hosts in the top navigation bar, then the Parcels tab.
  2. Click the Configuration button on the Parcels page.
  3. In the Remote Parcel Repository URLs list, click the plus symbol to open an additional row, then enter the path to the Intel® MKL parcel repository:

http://parcels.repos.intel.com/mkl/latest

  4. Click the Save Changes button.
  5. Click the Check for New Parcels button.
  6. In the Location selector, click Available Remotely. The latest Intel MKL parcel should be available for download.
  7. Click the Download button for the Intel® MKL parcel. By downloading Intel® MKL you agree to the terms and conditions stated in the End-User License Agreement (EULA).
  8. When the download is complete, click the Distribute button to distribute the parcel to all cluster nodes.
  9. When distribution is complete, click the Activate button to activate the Intel MKL parcel on all cluster nodes. A pop-up indicates which services must be restarted to use the new parcel.
  10. Choose one of the following:
  • Restart - Activate the parcel and restart services affected by the parcel.
  • Activate Only - Activate the parcel. You can restart services at a time that is convenient.
  11. Click OK.

Note: The repository URL shown above installs the latest version of Intel MKL parcel. To install an older version of the Intel MKL use the URL based on the following model:

 

http://parcels.repos.intel.com/mkl/<VERSION>.<UPDATE>.<BUILD_NUMBER>

The following variables are used in the repository URL: <VERSION>, <UPDATE>, and <BUILD_NUMBER>. The available values for these variables are shown in the table below:

 

           | <VERSION> | <UPDATE> | <BUILD_NUMBER>
Intel® MKL | 2017      | 2        | 201

Example:

 

http://parcels.repos.intel.com/mkl/2017.2.201
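The versioned URL is simply the three table values joined with dots; a trivial sketch (the helper name `parcel_url` is my own, not part of any Cloudera or Intel tool):

```python
def parcel_url(version, update, build):
    """Build a repository URL following the <VERSION>.<UPDATE>.<BUILD_NUMBER> pattern above."""
    return f"http://parcels.repos.intel.com/mkl/{version}.{update}.{build}"

# Reproduces the example URL from the table values.
print(parcel_url(2017, 2, 201))  # http://parcels.repos.intel.com/mkl/2017.2.201
```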

You can find more information about Parcels installation at Managing Software Installation Using Cloudera Manager.

Have Questions?

Check out the FAQ
Or ask in our User Forums

Where to Use the Intel® Quark™ Microcontroller D2000


Overview

This article introduces you to the Intel® Quark™ microcontroller D2000 and shows you where you can use it in your designs.

Microcontrollers and The Internet Of Things (IoT)

Intel wants to be at the center of connected hardware, from the servers in the cloud to the gateways that connect devices to the internet and, now, all the way to the edge in the sensors gathering data. The Intel® Quark™ microcontroller D2000 is a 32-bit x86 Pentium® processor in a microcontroller package. Engineered for low power consumption, this microcontroller is designed to be the heart of battery-operated sensor devices. With the goal of bringing intelligence to the edge of the Internet of Things (IoT), the Intel® Quark™ microcontroller D2000 makes sensors smarter, with the ability not only to read data and pass it on, but to analyze the data and independently take appropriate actions.

If a sensor device loses its connection to the cloud, a smart sensor equipped with the Intel® Quark™ microcontroller D2000 can log data and operate autonomously until the connection returns. If the smart sensor recognizes that a problem has occurred, it can take action locally instead of sending the data to the cloud and waiting for instructions. Pushing intelligence to the devices at the edge of the IoT reduces the need for complex and expensive gateways and limits the network bandwidth required to support the growing number of sensors talking to the cloud. The power of the IoT is not just in controlling things, but in the data that connected sensors generate.

Within data, there are patterns which are recognizable by smart sensors that learn and get smarter. The patterns are everywhere and the Intel® Quark™ microcontroller D2000 can make sensors smarter to recognize them.

With an x86 instruction set compatible processor at its core, the Intel® Quark™ microcontroller D2000 can run software written for desktop computers. Decades of software development for the x86 processor have produced a highly optimized set of tools that are perfect for making compact software to run on battery-powered, resource-limited devices. The Intel® Quark™ microcontroller D2000, based on a Pentium® processor, features one of the lowest-power 32-bit processors available. The combination of powerful processing and low power, along with a complete set of peripherals, makes the Intel® Quark™ microcontroller D2000 ideal for IoT. Let's now take a look at some areas where this microcontroller can be used to power the IoT.

Features of the Intel® Quark™ Microcontroller D2000

Package

Coming in a 40-pin surface mount package, this microcontroller is less than a quarter inch by a quarter inch making it ideal for small sensor devices.

Integrated and Internal Features

It includes a number of features designed to reduce the number of required external components. It has an integrated regulator to power itself and attached sensors. It’s also designed to run directly from batteries and able to tolerate the large voltage range of discharging batteries. It can additionally run without an external crystal and has internal flash memory to retain sensor data after the power runs out.

Power

Running at full power, the Intel® Quark™ microcontroller D2000 consumes only 8 mA. Low-power sleep modes can limit current draw to nanoamps. With proper software design, a sensor device based on the Intel® Quark™ microcontroller D2000 could run for a couple of years on a 9-volt lithium-ion battery.
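The "couple of years" figure follows from simple duty-cycle arithmetic. The sketch below is a back-of-envelope estimate, not a measured result: only the 8 mA active figure comes from this article; the sleep current, duty cycle, and battery capacity are assumed values.

```python
# Back-of-envelope battery-life estimate for a duty-cycled sensor node.
ACTIVE_MA = 8.0        # full-power draw quoted above
SLEEP_MA = 0.002       # assumed deep-sleep draw (2 microamps)
DUTY = 0.01            # assumed: active 1% of the time
CAPACITY_MAH = 1200.0  # assumed capacity of a 9-volt lithium battery

avg_ma = DUTY * ACTIVE_MA + (1 - DUTY) * SLEEP_MA
years = CAPACITY_MAH / avg_ma / (24 * 365)
print(f"average draw: {avg_ma:.5f} mA, runtime: {years:.1f} years")
```

With these assumptions the average draw is about 0.082 mA, or roughly 1.7 years of runtime; longer sleep intervals stretch this further.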

Peripherals

The Intel® Quark™ microcontroller D2000 comes with an array of peripherals around the x86 processor core. It has standard communication hardware for talking to sensors: I2C*, SPI, and UART. To read analog sensors, it has Analog-to-Digital Converters (ADCs). It has timers for triggering periodic sensor readings or for controlling the brightness of an LED for feedback.

Clock and Security

The Intel® Quark™ microcontroller D2000 has a Real-Time Clock for keeping track of the passage of time, information that’s often important for data logging. It also has security features to prevent sensor devices from becoming a network entry point for hackers.

The Intel® Quark™ microcontroller D2000 has been created from the ground up for IoT. Let’s take a look at some applications.

Applications

Smart Buildings and Homes

The Intel® Quark™ microcontroller D2000 can be used for building a wireless home automation system. You could build wireless, battery powered sensors for taking measurements around the house. With the same microcontroller you could then send that data to the internet, process it and access the data from anywhere.  You could track your pets, enable your plants to send you a text requesting to be watered, or detect a water leak. In an office environment, you could install wireless light sensors, controlled by the Intel® Quark™ microcontroller D2000, to measure current lighting conditions and dim overhead lights when there’s enough light coming through the windows. In the home, it can also monitor current conditions and make changes in response. Let’s look at one idea for a home automation system.

Connected Thermostat with Wireless Temperature Sensors

By connecting the Intel® Quark™ microcontroller D2000 to a temperature sensor, a Bluetooth® low energy radio, and a coin cell battery, you can create wireless, battery-powered, temperature sensors. You could position the temperature sensors throughout the house. Instead of measuring the temperature just at one place on one floor, you could measure the temperature in every room. You could prioritize the temperature of some rooms over others to decide which rooms are more important. With Bluetooth® technology, you can transmit that data back to a central thermostat which could have the same Intel® Quark™ microcontroller D2000 in it.

In the thermostat, the Intel® Quark™ microcontroller D2000 could control your furnace, turning on and off relays, and reading the status of the air conditioner, the furnace, and the fan. You could add an interface with switches, knobs, and a display to control the thermostat locally and to build schedules. The thermostat could be connected to the internet with a Wi-Fi module that could talk to your router to send and receive messages from the cloud. The connected thermostat could be controlled from anywhere in the world.  With built-in security on the Intel® Quark™ microcontroller D2000, you can be sure that your data remains secure. With the data you gather over time, patterns will emerge that will support long term decisions like replacing windows or improving insulation. Then, you will be able to directly quantify the improvements by comparing new patterns to old ones. Of course, this is just one example. Anywhere there is data to be gathered, you can use the Intel® Quark™ microcontroller D2000 to build sensors that will help people make better informed decisions with measurable results.

Industrial Applications

Automation is an essential part of modern industry. The Internet of Things presents an enormous opportunity to expand automation in manufacturing. Wireless battery-powered sensor devices can gather data to improve efficiency and reduce downtime by recognizing problems and notifying the appropriate people to perform maintenance. With sensor nodes, powered by the Intel® Quark™ microcontroller D2000, machines and processes can be managed remotely. New lower cost, battery-powered sensors can be installed without running new cables for power and communication. With the Intel® Quark™ microcontroller D2000, remote sensors can be made intelligent, to operate autonomously in remote locations even if the network is unavailable. Combined with new developments in wireless technologies, this microcontroller can power low-power networks covering large areas. An example would be a long range wireless industrial sensor network.

Transportation

Transportation is another industry where the Intel® Quark™ microcontroller D2000 can power the Internet of Things. As in industrial applications, transportation systems can be monitored with wireless sensor networks. Pairing the microcontroller with a GPS radio, buses can be tracked. It could relay the position information over a cellular network to the cloud. Applications in the cloud could analyze position and traffic information from the internet to accurately predict arrival and departure times. A mobile application could access the cloud data to keep riders informed. The microcontroller could also power wireless occupancy sensors at bus stops to assess which stops require service or to drive displays providing information to riders. By gathering data and putting it in the hands of the right people, the Intel® Quark™ microcontroller D2000 will play an important role in improving transportation for growing cities.

NFC Fare Card System

Public transportation systems have increasingly moved away from cash and fare cards, instead relying on permanent user fare cards and cellphones equipped with secure Near-Field Communication, or NFC, technology. NFC works with a reader which generates an electric field. When the card moves into the electric field, it gets power from the electric field and communicates back to the reader by manipulating the electric field. The NFC electrical field is small, not supplying substantial power. The Intel® Quark™ microcontroller D2000 is ideal for low power applications and works well as a controller for secure fare cards. With the Intel® Quark™ microcontroller D2000 and its onboard security features, the NFC system could be protected against hacking. It could also power the fare card reader, providing a secure microcontroller for communicating with servers in the cloud. Usage data from the NFC reader could be used to improve transportation systems to operate at peak efficiency, adding trains or busses during predicted busy times and reducing fleet operations when traffic is expected to be light. Patterns that emerge from the data can inform long term decision making that can affect millions of people.

Development Tools

The Intel® Quark™ microcontroller Developer Kit D2000

Figure 1. Intel® Quark™ microcontroller Developer Kit D2000

The Intel® Quark™ microcontroller Developer Kit D2000 is built to get you started developing devices for the Internet of Things. In addition to the microcontroller, the Development Kit contains a 6-axis IMU with accelerometers and magnetometers, USB connectivity for programming and debugging, a coin cell battery holder, and memory for storing data. The kit has headers to interface with 3.3V compatible Arduino shields and Launchpad Booster packs. Pairing the microcontroller with any of the dozens of available sensor shields and a radio, you can create an IoT device.

Intel® System Studio for Microcontrollers

Intel provides free software tools for developing with the Intel® Quark™ microcontroller D2000. Intel® System Studio for Microcontrollers (ISSM) is a robust, full-featured, complete integrated development environment. ISSM comes with a complete set of software tools that make it simple to develop your own IoT applications. Loaded with examples, you can find software that performs just about any function and that automates reading a wide variety of sensors and communicates with an array of radios. Download it for free at Intel® System Studio for Microcontrollers.

The Zephyr* Real-time Operating System (RTOS)

Intel, in partnership with the Linux Foundation*, developed a free, open-source real-time operating system (RTOS) to support the Intel® Quark™ microcontroller D2000 and a range of other microcontrollers. The Zephyr* RTOS has been created with a focus on enabling developers to quickly build IoT devices. With comprehensive libraries and a highly optimized core, the Zephyr RTOS is an ideal choice for developers using the Intel® Quark™ microcontroller D2000. For more information on the Zephyr RTOS, check out the Zephyr Project Documentation.

 

Download    Get Started    Code Sample

Additional Resources

Get started with the Intel® Quark™ microcontroller D2000

Intel® System Studio for Microcontrollers

Zephyr Project Documentation

Intel® Manycore Platform Software Stack for Intel® Xeon Phi™ Coprocessor x200


Summary of (latest) changes

This article describes the most recent changes that have been made to the Intel® Manycore Platform Software Stack (Intel® MPSS) 4.x. If you've subscribed to get update notifications, you can use this information to quickly determine whether these changes apply to you.

  • May 8, 2017, Intel® MPSS 4.4.0 HotFix 1 released for Linux* and Windows*

About the Intel® Manycore Platform Software Stack 4.x

The Intel MPSS 4.x is necessary to run the Intel® Xeon Phi™ coprocessor x200. It has been tested to work with specific versions of 64-bit operating systems.

The readme files (referenced in the Downloads section) have more information on how to build and install the stack.

One important component of Intel MPSS is the Symmetric Communications Interface (SCIF). The SCIF is included in the RPM bundle. SCIF provides a mechanism for inter-node communications within a single platform. A node, for SCIF purposes, is defined as either an Intel® Xeon Phi™ coprocessor or the Intel® Xeon® processor. In particular, the SCIF abstracts the details of communicating over the PCI Express* bus. The SCIF APIs are callable from both user space (uSCIF) and kernel space (kSCIF).

Intel MPSS is downloadable from the sources below. Note that these packages include documentation and APIs (for example, the SCIF API).

For Linux systems, users can measure Intel® Xeon Phi™ processor and coprocessor x200 product family performance with a tool called micperf. micperf is designed to incorporate a variety of benchmarks into a simple user experience with a single interface for execution. For the coprocessor, the micperf package is distributed as an RPM file within Intel MPSS. The following table summarizes all the benchmarks that can be run with the micperf tool:

Benchmark | CLI Name | Target Operations | Component | Comments
Intel® Math Kernel Library (Intel® MKL) DGEMM | dgemm | Double-precision floating point | VFU | For the processor, micperf provides a MCDRAM and DDR version
Intel MKL SGEMM | sgemm | Single-precision floating point | VFU | For the processor, micperf provides a MCDRAM and DDR version
Intel MKL SMP Linpack | linpack | Double-precision floating point | VFU |
SHOC Download* | shoc download | Bus transfer host to device | PCIe* bus | Only available for the coprocessor
SHOC Readback* | shoc readback | Bus transfer device to host | PCIe bus | Only available for the coprocessor
STREAM* | stream | Round-trip memory to registers | MCDRAM, GDDR and caches | For the processor, micperf provides a MCDRAM and DDR version
HPLinpack* | hplinpack | Double-precision floating point | VFU | Only available for the processor
HPCG* | hpcg | Double-precision floating point | VFU | Only available for the processor; requires Intel® MPI Library

Note: the Intel MPSS download files for Linux marked “.gz” should end in “.gz” when downloaded; most browsers leave the extension alone, but Windows Explorer* may rename the files. If this affects you, we recommend renaming the file to the proper extension after downloading.

Getting notified of future updates

If you want to receive updates when we publish a new Intel MPSS 4.x stack, add a comment at the bottom of this page.

Release support schedule

The following table shows when releases were issued and when Intel will no longer support them. Releases with a strikethrough are no longer supported. For an overview of Intel's release structure and support length, please see this article.

Downloads

There are currently two major releases available for the Intel MPSS 4.x. The most recent major release is 4.4.x.

We recommend that new adopters start by using the 4.4 release. Support for each Intel MPSS release ends 6 months from the date it was posted, except for long-term support products.

 

Intel MPSS 4.4.0 HotFix 1 release for Linux

Intel® Manycore Platform Software Stack version | Downloads available | Size (range) | MD5 Checksum
mpss-4.4.0 HotFix 1 (released: May 8, 2017) | RedHat 7.3 | 214MB | 8a015c38379b8be42c8045d3ceb44545
 | RedHat 7.2 | 214MB | 694b7b908c12061543d2982750985d8b
 | SuSE 12.2 | 213MB | 506ab12af774f78fa8e107fd7a4f96fd
 | SuSE 12.1 | 213MB | b8520888954e846e8ac8604d62a9ba96
 | SuSE 12.0 | 213MB | 88a3a4415afae1238453ced7a0df28ea
 | Card installer file (mpss-4.4.0-card.tar) | 761MB | d26e26868297cea5fd4ffafe8d78b66e
 | Source file (mpss-4.4.0-card-source.tar) | 514MB | 127713d06496090821b5bb3613c95b30
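After downloading, you can confirm an intact transfer by comparing the file's MD5 digest against the checksum column; a minimal sketch (the filename below is a placeholder for whichever package you actually fetched):

```python
import hashlib

def md5_of(path, chunk_size=1 << 20):
    """Stream a file through MD5 and return its hex digest."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk_size), b""):
            digest.update(block)
    return digest.hexdigest()

# Example: compare against the RedHat 7.3 checksum from the table above.
# assert md5_of("downloaded-package.tar.gz") == "8a015c38379b8be42c8045d3ceb44545"
```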

 

Documentation link | Description | Last Updated On | Size (approx)
releasenotes-linux.txt | Release Notes (English) | May 2017 | 15KB
README.txt | Readme (includes installation instructions) for Linux (English) | May 2017 | 17KB
MPSS_Users_Guide.pdf | MPSS User's Guide | May 2017 | 3MB
EULA.txt | End User License Agreement (IMPORTANT: Read Before Downloading, Installing, or Using) | May 2017 | 33KB

 

 

 

Intel MPSS 4.4.0 HotFix 1 release for Microsoft Windows

Intel® Manycore Platform Software Stack version | Downloads available | Size | MD5 Checksum
64-bit Install Package (released May 8, 2017) | mpss-4.4.0-windows.zip | 1091MB | 204a65b36858842f472a37c77129eb53

 

Documentation link | Description | Last Updated On | Size
releaseNotes-windows.txt | Release notes (English) | May 2017 | 7KB
readme-windows.pdf | Readme for Microsoft* Windows (English) | May 2017 | 399KB
MPSS_Users_Guide-windows.pdf | MPSS User's Guide for Windows | May 2017 | 3MB
EULA.txt | End User License Agreement (IMPORTANT: Read Before Downloading, Installing, or Using) | May 2017 | 33KB

 

Additional documentation

The Intel MPSS packages contain additional documentation for Linux: man pages and documents in /usr/share/doc/ (see myo, intel-coi-* and micperf-* directories). The Platform Control Panel User’s Guide is now in /usr/share/doc/systools/micmgmt/

Also, below is a link to the Intel® MPSS Performance Guide, which documents best-known methods for fine-tuning the Intel MPSS runtime environment to get the best application performance.

http://software.intel.com/sites/default/files/managed/72/db/mpss-performance-guide.pdf

Where to ask questions and get more information

The discussion forum at http://software.intel.com/en-us/forums/intel-many-integrated-core is available to join and discuss any enhancements or issues with Intel® MPSS.

Information about Intel MPSS security can be found here. 

You can also find support collaterals here or submit an issue.

Intel® Xeon Phi™ Coprocessor x200 Quick Start Guide


Introduction

This document introduces the basic concepts of the Intel® Xeon Phi™ coprocessor x200 product family, explains how to install the coprocessor software stack, discusses the build environment, and points to important documents so that you can write code and run applications.

The Intel Xeon Phi coprocessor x200 is the second generation of the Intel Xeon Phi product family. Unlike the first generation running on an embedded Linux* uOS, this second generation supports the standard Linux kernel. The Intel Xeon Phi coprocessor x200 is designed for installation in a third-generation PCI Express* (PCIe*) slot of an Intel® Xeon® processor host. The following figure shows a typical configuration:

Figure 1

Benefits of the Intel Xeon Phi coprocessor:

  • System flexibility: Build a system that can support a wide range of applications, from serial to highly parallel, while leveraging code optimized for Intel Xeon processors or Intel Xeon Phi processors.
  • Maximize density: Gain significant performance improvements with limited acquisition cost by maximizing system density.
  • Upgrade path: Improve performance by adding to an Intel Xeon processor system or upgrading from the first generation of the Intel Xeon Phi product family with minimum code changes.

For workloads that fit within 16 GB coprocessor memory, adding a coprocessor to a host server allows customers to avoid costly networking. For workloads that have a significant portion of highly parallel phases, offload can offer significant performance with minimal code optimization investment.

Additional Documentation

Basic System Architecture

The Intel Xeon Phi coprocessor x200 is based on a modern Intel® Atom™ microarchitecture with considerable high performance computing (HPC)-focused performance improvements. It has up to 72 cores with four threads per core, giving a total of 288 CPUs as viewed by the operating system, and has up to 16 GB of high-bandwidth on-package MCDRAM memory that provides over 500 GB/s effective bandwidth. The coprocessor has an x16 PCI Express Gen3 interface (8 GT/s) to connect to the host system.

The cores are laid out in units called tiles. Each tile contains a pair of cores, a shared 1 MB L2 cache, and a hub connecting the tile to a mesh interface. Each core contains two 512-bit wide vector processing units. The coprocessor supports Intel® AVX-512F (foundation), Intel AVX-512CD (conflict detection), Intel AVX-512PF (prefetching), and Intel AVX-512ER (exponential reciprocal) ISA.

Figure 2

Intel® Manycore Platform Software Stack

Intel® Manycore Platform Software Stack (Intel® MPSS) is the user and system software that allows programs to run on and communicate with the Intel Xeon Phi coprocessor. Intel MPSS version 4.x.x is used for the Intel Xeon Phi coprocessor x200, which runs a standard Linux kernel; it can be downloaded from https://software.intel.com/en-us/articles/intel-manycore-platform-software-stack-for-intel-xeon-phi-coprocessor-x200. (Note that the older Intel MPSS version 3.x.x is used for the Intel Xeon Phi coprocessor x100.)

You can download the Intel MPSS stack at https://software.intel.com/en-us/articles/intel-manycore-platform-software-stack-for-intel-xeon-phi-coprocessor-x200. The following host operating systems are supported: Red Hat* Enterprise Linux Server, SUSE* Linux Enterprise Server, and Microsoft Windows*. For detailed information on requirements and installation, please consult the README file for Intel MPSS. The figure below shows a high-level representation of the Intel MPSS: the host software stack is on the left and the coprocessor software stack is on the right.

Figure 3

Install the Software Stack and Start the Coprocessor

Installation Guide for Linux* Host:

  1. From the “Intel Manycore Platform Software Stack for Intel Xeon Phi Coprocessor x200” page (https://software.intel.com/en-us/articles/intel-manycore-platform-software-stack-for-intel-xeon-phi-coprocessor-x200), navigate to the latest version of the Intel MPSS release for Linux and download “Readme for Linux (English)” (README.txt). Also download the release notes (releasenotes-linux.txt) and the User’s Guide for Intel MPSS.
  2. Install one of the following supported operating systems in the host:
    • Red Hat Enterprise Linux Server 7.2 64-bit kernel 3.10.0-327
    • Red Hat Enterprise Linux Server 7.3 64-bit kernel 3.10.0-514
    • SUSE Linux Enterprise Server SLES 12 kernel 3.12.28-4-default
    • SUSE Linux Enterprise Server SLES 12 SP1 kernel 3.12.49-11-default
    • SUSE Linux Enterprise Server SLES 12 SP2 kernel 4.4.21-69-default

    Be sure to install ssh, which is used to log in to the card.

    WARNING: On installing Red Hat, it may automatically update you to a new version of the Linux kernel. If this happens, you will not be able to use the prebuilt host driver, but will need to rebuild it manually for the new kernel version. Please see Section 5 in the readme.txt for instructions on building an Intel MPSS host driver for a specific Linux kernel.

  3. Log in as root.
  4. Download the release driver appropriate for your operating system from the page in Step 1 (<mpss-version>-linux.tar), where <mpss-version> was mpss-4.3.3 at the time this document was written.
  5. Install the host driver RPMs as detailed in Section 6 of readme.txt. Don’t skip the creation of configuration files for your coprocessor.
  6. Update the flash on your coprocessor(s) as detailed in Section 8 of readme.txt.
  7. Reboot the system.
  8. Start the Intel Xeon Phi coprocessor (you can set up the card to start with the host system; it will not do so by default), and then run micinfo to verify that it is set up properly:
    # systemctl start mpss
    # micctrl -w
    # /usr/bin/micinfo
    micinfo Utility Log
    Created On Mon Apr 10 12:14:08 2017
    
    System Info:
        Host OS                        : Linux
        OS Version                     : 3.10.0-327.el7.x86_64
        MPSS Version                   : 4.3.2.5151
        Host Physical Memory           : 128529 MB
    
    Device No: 0, Device Name: mic0 [x200]
    
    Version:
        SMC Firmware Version           : 121.27.10198
        Coprocessor OS Version         : 4.1.36-mpss_4.3.2.5151 GNU/Linux
        Device Serial Number           : QSKL64000441
        BIOS Version                   : GVPRCRB8.86B.0012.R02.1701111545
        BIOS Build date                : 01/11/2017
        ME Version                     : 3.2.2.4
    
    Board:
        Vendor ID                      : 0x8086
        Device ID                      : 0x2260
        Subsystem ID                   : 0x7494
        Coprocessor Stepping ID        : 0x01
        UUID                           : A03BAF9B-5690-E611-8D4F-001E67FC19A4
        PCIe Width                     : x16
        PCIe Speed                     : 8.00 GT/s
        PCIe Ext Tag Field             : Disabled
        PCIe No Snoop                  : Enabled
        PCIe Relaxed Ordering          : Enabled
        PCIe Max payload size          : 256 bytes
        PCIe Max read request size     : 128 bytes
        Coprocessor Model              : 0x57
        Coprocessor Type               : 0x00
        Coprocessor Family             : 0x06
        Coprocessor Stepping           : B0
        Board SKU                      : B0 SKU _NA_A
        ECC Mode                       : Enabled
        PCIe Bus Information           : 0000:03:00.0
        Coprocessor SMBus Address      : 0x00000030
        Coprocessor Brand              : Intel(R) Corporation
        Coprocessor Board Type         : 0x0a
        Coprocessor TDP                : 300.00 W
    
    Core:
        Total No. of Active Cores      : 68
        Threads per Core               : 4
        Voltage                        : 900.00 mV
        Frequency                      : 1.20 GHz
    
    Thermal:
        Thermal Dissipation            : Active
        Fan RPM                        : 6000
        Fan PWM                        : 100 %
        Die Temp                       : 38 C
    
    Memory:
        Vendor                         : INTEL
        Size                           : 16384.00 MB
        Technology                     : MCDRAM
        Speed                          : 6.40 GT/s
        Frequency                      : 6.40 GHz
        Voltage                        : Not Available

Installation Guide for Windows* Host:

  1. From the “Intel Manycore Platform Software Stack for Intel Xeon Phi Coprocessor x200” page (https://software.intel.com/en-us/articles/intel-manycore-platform-software-stack-for-intel-xeon-phi-coprocessor-x200), navigate to the latest version of the Intel MPSS release for Microsoft Windows. Download the “Readme file for Microsoft Windows” (readme-windows.pdf). Also download the “Release notes” (releaseNotes-windows.txt) and the “Intel MPSS User’s Guide” (MPSS_Users_Guide-windows.pdf).
  2. Install one of the following supported operating systems in the host:
    • Microsoft Windows 8.1 (64-bit)
    • Microsoft Windows® 10 (64-bit)
    • Microsoft Windows Server 2012 R2 (64-bit)
    • Microsoft Windows Server 2016 (64-bit)
  3. Log in as “administrator”.
  4. Install .NET Framework* 4.5 or higher on the system (http://www.microsoft.com/net/download), Python* 2.7.5 x86-64 or higher (Python 3.x is not supported), Pywin32 build or higher (https://sourceforge.net/projects/pywin32).
  5. Be sure to install PuTTY* and PuTTYgen*, which are used to log in to the card’s OS.
  6. Follow the preliminary steps as instructed in Section 2.2.1 of the Readme file.
  7. Restart the system.
  8. Download the drivers package mpss-4.*-windows.zip for your Windows operating system from the page described in Step 1.
  9. Unzip the zip file to get the Windows exec files (“mpss-4.*.exe” and “mpss-essentials-4*.exe”).
  10. Install the Windows Installer file “mpss-4.*.exe” as detailed in Section 3.2 of the User’s Guide. Note that if a previous version of the Intel Xeon Phi coprocessor stack is already installed, use Windows Control Panel to uninstall it prior to installing the current version. By default, Intel MPSS is installed in “c:\Program Files\Intel\MPSS”. Also, install “mpss-essentials-4*.exe”, the native binary utilities for the Intel Xeon Phi coprocessor. These are required when using offload programming or cross compilers.
  11. Confirm that the new Intel MPSS stack is successfully installed by looking at Control Panel > Programs > Programs and Features: Intel Xeon Phi (see the following illustrations).

    Figure 4

  12. Update the flash according to Section 2.2.3 of the readme-windows.pdf file.
  13. Reboot the system.
  14. Log in to the host and verify that the Intel Xeon Phi x200 coprocessors are detected by the Device Manager (Control Panel > Hardware > Device Manager, and click “System devices”):

    Figure 5
  15. Start the Intel Xeon Phi coprocessor (you can set up the card to start with the host system; it will not do so by default). Launch a command-prompt window and start the Intel MPSS stack:
        prompt> micctrl --start
  16. Run the command “micinfo” to verify that it is set up properly:
        prompt> micinfo.exe

    Figure 6

Intel® Parallel Studio XE

After starting the Intel MPSS stack, users can write applications running on the coprocessor using Intel Parallel Studio XE.

Intel Parallel Studio XE is a software development suite that helps boost application performance by taking advantage of the ever-increasing processor core count and vector register width available in Intel Xeon processors, Intel Xeon Phi processors and coprocessors, and other compatible processors. Starting with the Intel Parallel Studio 2018 beta, the following Intel® products support program development on the Intel Xeon Phi coprocessor x200:

  • Intel® C Compiler/Intel® C++ Compiler/Intel® Fortran Compiler
  • Intel® Math Kernel Library (Intel® MKL)
  • Intel® Data Analytics Acceleration Library (Intel® DAAL)
  • Intel® Integrated Performance Primitives (Intel® IPP)
  • Intel® Cilk™ Plus
  • Intel® Threading Building Blocks (Intel® TBB)
  • Intel® VTune™ Amplifier XE
  • Intel® Advisor XE
  • Intel® Inspector XE
  • Intel® MPI Library
  • Intel® Trace Analyzer and Collector
  • Intel® Cluster Ready
  • Intel® Cluster Checker

To get started writing programs for the coprocessor, download the code samples at https://software.intel.com/en-us/product-code-samples. The packages “Intel Parallel Studio XE for Linux - Sample Bundle” and “Intel Parallel Studio XE for Windows - Sample Bundle” contain code samples for Linux and Windows, respectively.

Programming Models on Coprocessor

There are three programming models that can be used for the Intel Xeon Phi coprocessor x200: the offload programming model, the symmetric programming model, and the native programming model.

  • Offload programming: The main application runs on the host and offloads selected, highly parallel portions of the program to the coprocessor(s) to take advantage of the many-core architecture. The serial portion of the program still runs on the host to take advantage of the big-core architecture.
  • Symmetric programming: The coprocessors and the host are treated as separate nodes. This model is suitable for distributed computing.
  • Native programming: The coprocessors are used as independent nodes, just like a host. Users compile the binary for the coprocessor on the host, transfer the binary, and log in to the coprocessor to run it.

The figure below summarizes different programming models used for the Intel Xeon Phi coprocessor:

Figure 7

What's New? - Intel® VTune™ Amplifier XE 2017 Update 3


Intel® VTune™ Amplifier XE 2017 performance profiler

A performance profiler for serial and parallel performance analysis. Overview, training, support.

New for the 2017 Update 3! (Optional update unless you need...)

As compared to 2017 Update 2:

  • Application Performance Snapshot (Preview) provides a quick look at your application performance and helps you understand where your application will benefit from tuning. The revised tool shows metrics on MPI parallelism (Linux* only), OpenMP* parallelism, memory access, FPU utilization, and I/O efficiency with recommendations on further in-depth analysis.
  • Support for Intel® Xeon Phi™ coprocessor targets codenamed Knights Landing
  • Improved insight into parallelism inefficiencies for applications using Intel Threading Building Blocks (Intel TBB) with extended classification of high Overhead and Spin time.
  • Automated installation of the VTune Amplifier collectors on a remote Linux target system. This feature is helpful if you profile a target on a shared resource without VTune Amplifier installed or on an embedded platform where targets may be reset frequently.
  • Support for Microsoft Visual Studio* 2017

Resources

  • Learn (“How to” videos, technical articles, documentation, …)
  • Support (forum, knowledgebase articles, how to contact Intel® Premier Support)
  • Release Notes (pre-requisites, software compatibility, installation instructions, and known issues)

Contents

File: vtune_amplifier_xe_2017_update3.tar.gz

Installer for Intel® VTune™ Amplifier XE 2017 for Linux* Update 3

File: VTune_Amplifier_XE_2017_update3_setup.exe

Installer for Intel® VTune™ Amplifier XE 2017 for Windows* Update 3 

File: vtune_amplifier_xe_2017_update3.dmg

Installer for Intel® VTune™ Amplifier XE 2017 - OS X* host only Update 3 

* Other names and brands may be claimed as the property of others.

Microsoft, Windows, Visual Studio, Visual C++, and the Windows logo are trademarks, or registered trademarks of Microsoft Corporation in the United States and/or other countries.
