Channel: Intel Developer Zone Articles

An Artificial Intelligence Primer for Developers


Download Document [PDF 453K]

Computer scientists have been pursuing Artificial Intelligence (AI) for over 60 years. While the term has meant different things over the decades, recent advances have brought us closer to achieving machine intelligence than ever before.

Developers who are just learning about AI and machine learning will have many questions, from “What can I do with AI?” or “Why would AI be a programming solution?” to “What is necessary to say my program is learning?” and “What level of machine interaction is necessary to make it seem intelligent?” or even “Does my program need to appear intelligent to be intelligent?”

Introduction

Artificial intelligence (AI) is both a problem and a solution. It is a rapidly growing field of inquiry that could solve deeply complex problems such as medical diagnosis or undersea mining. It can also give us entertaining solutions, such as worthy competition in video games. Developers not only develop the intelligence but also help mold it to solve problems.

Artificial Intelligence (AI) in the World

AI is a truly massive revolution in computing. It is fundamental in all kinds of computing fields, such as gaming, robotics, medicine, transportation, and Internet of Things (IoT). And it’s happening at a depth that will transform society in much the same ways as the industrial revolution, the technical revolution, and the digital revolution altered every aspect of daily life.

Even though AI has been a part of computing for many decades, the prospects of what AI can do, and of what we can do with AI, show that we are still at the beginning of the field. Here are a few examples of what it will enable as the technology matures.

AI will accelerate how we answer large-scale problems that would otherwise take months, years, or decades to resolve. Medical treatments such as drugs or other interventions will be personalized at the level of an individual’s DNA. Intelligent assistants will forestall mistakes and open new opportunities by providing real-time guidance about the world around us. In commerce, it will be much easier to detect—and in some cases even eliminate—fraud.

AI will unleash new scientific discovery. No longer restricted by human biology and cognitive methods, scientists will be able to mine new insights in the realms of the deep sea and space, the animal and insect kingdoms, particle physics, mysteries of the brain, and more.

AI will augment our human capabilities. A new symbiosis between human and machine will expand our capacity, so that medical diagnoses can be more precise, legal counsel can encompass the entire history of case law, and other services will achieve unprecedented levels of accuracy.

AI will remove the burden of tedious or dangerous tasks, such as driving, firefighting, and mining. We are already seeing the early stages of this field with autonomous cars.

Artificial Intelligence Improved with Intel

At Intel, the ideas of AI are not tied to levels of capability. Rather than seeing AI as the end result, the ability to define human understanding, Intel sees AI as a computational tool for solving human problems. Rather than defining what it takes for an intelligence to be human, or what minimum tests must be passed to attain a threshold of “intelligence,” AI runs in a cycle of Sense, Reason, Act, Adapt. The input (Sense) is analyzed and a result formulated (Reason). Based on this, the proper action is chosen (Act), and the results are then used to improve how input is gathered and selected and how the calculations on that input are made (Adapt).

Rather than go into the different ways to determine if a machine has a human level of intelligence, the four-step cycle used at Intel is all you need to guide your programming to create an AI solution. In addition to the methodology, Intel, of course, offers computer technologies that make the complex calculations necessary to make AI run faster.
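As a concrete illustration, the Sense, Reason, Act, Adapt cycle can be sketched as a simple control loop. Everything below is hypothetical (a pressure threshold and a relief-valve action invented purely for illustration); a real agent would replace each step with sensor input, a trained model, and actuators.

```python
# A minimal sketch of the Sense, Reason, Act, Adapt cycle.
# All names and values here are hypothetical, for illustration only.

def sense(raw_reading):
    """Sense: turn a raw input into a meaningful observation."""
    return {"pressure": float(raw_reading)}

def reason(observation, threshold):
    """Reason: formulate a result from the observation."""
    return observation["pressure"] > threshold

def act(over_limit):
    """Act: choose the proper action based on the reasoning."""
    return "open_relief_valve" if over_limit else "hold"

def adapt(threshold, over_limit):
    """Adapt: use the outcome to adjust how future input is judged."""
    return threshold * 0.99 if over_limit else threshold

def run_cycle(readings, threshold=100.0):
    actions = []
    for raw in readings:
        observation = sense(raw)
        over_limit = reason(observation, threshold)
        actions.append(act(over_limit))
        threshold = adapt(threshold, over_limit)
    return actions

print(run_cycle(["90.0", "120.0", "95.0"]))
# → ['hold', 'open_relief_valve', 'hold']
```

The point of the sketch is the shape of the loop, not the rules inside it: each pass feeds its outcome back into the next pass.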

What is Artificial Intelligence?

It may be good to call AI a solution, but the question still hides in the background: How do we know when it’s intelligent? A number of tests have been developed to determine whether a machine can exhibit intelligent behavior indistinguishable from that of a human. The most famous of these is the Turing Test. In daily use cases, though, the determination of whether or not an intelligence is equivalent to our own is an academic point.

While such tests are important for understanding what intelligence means, there is a more practical question: does the AI solve our problem? If it does, that is what matters. So when you consider how AI is a powerful tool, no matter the technique used to create and harness it or the scope to which it is employed, the intelligence must be able to sense, reason, and act, then adapt based on experience.

Sensing requires the AI to identify and/or recognize meaningful concepts or objects in the midst of vast pools of data. Is this a tumor or normal tissue? Is this a stoplight or a neon sign, and if it’s a stoplight, is it green, yellow, or red?

Reasoning requires that the AI understand a larger context and make a plan to achieve a goal. If the goal is to make a differential diagnosis, then the machine must consider the patient’s reported symptoms, DNA profile, medical history, and environmental influences in addition to the findings from imaging and lab tests. If the goal is to avoid a vehicle collision, the AI must calculate the likelihood of a crash based on vehicle behaviors, proximity, speed, and road conditions.

Acting means that the AI either recommends or directly initiates the best course of action to maximize the desired outcome. Based on a diagnosis, it may recommend or perform a treatment protocol. Based on vehicle and traffic analyses, it may brake, accelerate, or prepare safety mechanisms.

Finally, we must be able to adapt algorithms (both within the AI and as part of the computing system the AI resides in) at each phase based on experience, retraining them to be ever more intelligent in their inferences. Healthcare algorithms should be re-trained to detect disease with more accuracy, better grasp context, and improve treatment based on previous outcomes. Autonomous vehicle algorithms should be re-trained to recognize more blind spots, factor new variables into the context, and adjust actions based on previous incidents.

Today, the greatest ability lies in the “sense” phase, while progress continues to be made in both reasoning and action. The majority of techniques used involve mathematical or statistical algorithms, including regression, decision trees, graph theory, classification, clustering, and many more. However, an emerging approach, deep learning, is growing rapidly; it harnesses deep neural networks that simulate the basic function of neurons in the human brain.
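As a tiny illustration of one of the statistical techniques named above, here is classification by nearest neighbor in plain Python; the data points and labels are invented for illustration.

```python
# A toy nearest-neighbor classifier: label a new point with the label
# of its closest labeled neighbor. Data and labels are invented.
import math

def classify(point, labeled_points):
    """Return the label of the single nearest labeled point."""
    nearest = min(labeled_points, key=lambda lp: math.dist(point, lp[0]))
    return nearest[1]

training = [((0.0, 0.0), "normal"),
            ((1.0, 1.0), "normal"),
            ((5.0, 5.0), "anomaly")]

print(classify((0.5, 0.2), training))  # a point near the "normal" cluster
```

Real systems use far richer features and models, but the core "sense" task, mapping an observation onto a known category, is the same.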

The Market for Artificial Intelligence

It might be easy to see that there are huge areas that benefit from AI. Cancer research, space exploration, and self-driving vehicles are a few fields. But they seem so overwhelming that a person getting started in AI might feel like they can’t contribute, let alone make a difference. But AI is used in places you don’t expect, which is why it’s such a broad, useful tool. You can work on non-player characters in games, on predictive route-finding applications, even on sheepherding robots. There is no limit to the possibilities of AI.

Business Interest in Artificial Intelligence

In previous years, businesses have not been as willing to invest in AI because there have been large research costs involved. But this has changed. Business leaders, from CTOs to CFOs to CEOs, recognize the utility and even necessity of AI as a solution.

In 2014, more than USD 300 million was invested in AI startups, up 20 times from USD 15 million in 2010 [1], and the global robotics and AI markets are estimated to grow to USD 153 billion by 2020. The market for AI systems in healthcare alone is estimated to grow from USD 633 million in 2014 to more than USD 6 billion by 2021 [1]; and by 2020, autonomous software will participate in 5 percent of all economic transactions [2]. Companies are putting more and more effort into AI R&D and products; your input will only help.

But What Field Can I Work In?

It’s easy to see the amount of interest in AI and how opportunities are growing. But what kinds of fields are actually using AI? Maybe you aren’t interested in gaming, and maybe you want to grow in a field outside academic research. What else can you do? Here is a small list of fields and how AI is growing in them:

Healthcare

  • Image analysis – Medical startups are pursuing technology that will help read X-rays, MRIs, CAT scans, and more.
  • Dulight* – This is a wearable that identifies food, money, and more for the visually impaired.

Automotive

  • Self-driving cars – AI helps autonomous cars recognize road signs, people, and other vehicles.
  • Infotainment – Improved speech recognition helps drivers better engage with music, maps, and more.

Industrial

  • Repairs and maintenance – AI systems can anticipate repairs and improve preventative maintenance.
  • Precision agriculture – AI can help improve food production with efficient fertilization methods and time-to-market.
  • Sales and time-to-market – AI can predict which products will sell faster or in greater volume in different areas at different times of year, and when it is more efficient to keep them in stock or drop-ship them to customers.

Sports

  • Performance optimization – AI systems can help coach athletes’ conditioning and nutrition, and improve their skills.
  • Injury prevention – AI can improve equipment design and play calling, and even predict the rule changes needed for player safety.

Even from this brief, incomplete list you can see how many opportunities are available. AI can be used to improve lives in so many ways. How is up to you.

Artificial Intelligence – Driven by Intel

Intel is not merely invested in the growth of AI, we are committed to fueling the AI revolution. AI is a top priority for Intel and we’re committed to leading the charge, both through our own R&D and through acquisitions. Our innovation and integration of capabilities into the CPU, driven by Moore’s Law, will continue to deliver the best possible results for performance, efficiency, density, and cost effectiveness. Also, we have a long history of successfully executing technology shifts driven by groundbreaking technologies, including breakthroughs in memory, graphics, I/O, and wireless, and we have in place today the toolbox and unique capabilities needed for the transformation to AI.

First, Intel is compressing the AI innovation cycle in bold new ways. We’ve acquired the best deep-learning talent and technology on the planet, Nervana*, which will not only accelerate AI data ingestion and model building, but also deliver a substantial gain in training performance versus GPUs next year through the integration of the Nervana technology into the CPU.

Second, as AI becomes pervasive in applications from datacenters to the IoT, Intel has the unique, complete portfolio to deliver end-to-end AI solutions.

Finally, Intel has the experience of successfully leading past transformations from the client/server model, to server virtualization, to the rise of the cloud.

Intel can offer crucial technologies to drive the AI revolution, but we must work together as an industry – and as a society – to achieve the ultimate potential of AI. To that end, Intel leads the charge for open data exchanges and initiatives, easier-to-use tools, training to broaden the talent pool, and equal access to intelligent technology. We entered a partnership with Data.gov in an open data initiative. An open car collaboration with BMW will reduce duplicated effort and accelerate innovation, with society playing a key role.

Intel is committed to compressing the innovation cycle from conception to deployment of ever more intelligent, robust, and cooperative AI agents, through breakthroughs in data ingestion and the building, training, and deployment of models. These AI capabilities will be driven by a portfolio of powerful technology solutions.

Conclusion

AI is rapidly transforming industries and is an increasingly important source of competitive advantage. To maintain a leadership position in your field, now is the best time to begin integrating AI into your products, services, and business processes. Visit the Intel Software Developer Zone for Artificial Intelligence: https://software.intel.com/en-us/machine-learning to get started today.

Notes

  1. Clark, Jack. “I'll Be Back: The Return of Artificial Intelligence,” Bloomberg Technology, February 2015. http://www.bloomberg.com/news/articles/2015-02-03/i-ll-be-back-the-return-of-artificial-intelligence
  2. Gartner Press Release. “Gartner Reveals Top Predictions for IT Organizations and Users for 2016 and Beyond,” October 6, 2015. http://gartner.com/newsroom/id/3143718

What's New? Intel® Threading Building Blocks 2017 Update 2


The updated version contains several bug fixes compared to the previous Intel® Threading Building Blocks (Intel® TBB) 2017 release. You can find information about the new features of previous releases at the following links.

Obsolete

Removed the long-outdated support for Xbox* consoles.

Bugs fixed:

  • Fixed the issue with task_arena::execute() not being processed when the calling thread cannot join the arena.
  • Fixed dynamic memory allocation replacement failure on macOS* 10.12.
  • Fixed dynamic memory allocation replacement failures on Windows* 10 Anniversary Update.
  • Fixed emplace() method of concurrent unordered containers to not require a copy constructor.

You can download the latest Intel TBB version from http://threadingbuildingblocks.org and https://software.intel.com/en-us/articles/intel-tbb

Recipe: Building NAMD on Intel® Xeon® and Intel® Xeon Phi™ Processors


Purpose

This recipe describes, step by step, how to get, build, and run the NAMD (Scalable Molecular Dynamics) code on Intel® Xeon Phi™ and Intel® Xeon® E5 processors for better performance.

Introduction

NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecule systems. Based on Charm++ parallel objects, NAMD scales to hundreds of cores for typical simulations and beyond 500,000 cores for the largest simulations. NAMD uses the popular molecular graphics program VMD for simulation setup and trajectory analysis, but is also file-compatible with AMBER, CHARMM, and X-PLOR.

NAMD is distributed free of charge with source code. You can build NAMD yourself or download binaries for a wide variety of platforms. Find the details below of how to build on Intel® Xeon Phi™ processor and Intel® Xeon® E5 processors and learn more about NAMD at http://www.ks.uiuc.edu/Research/namd/

Building NAMD on Intel® Xeon® Processor E5-2697 v4 (BDW) and Intel® Xeon Phi™ Processor 7250 (KNL)

  1. Download the latest NAMD source code (Nightly Build) from this site: http://www.ks.uiuc.edu/Development/Download/download.cgi?PackageName=NAMD
  2. Download fftw3 from this site: http://www.fftw.org/download.html
    • Version 3.3.4 is recommended
  3. Build fftw3:
    1. cd <path>/fftw3.3.4
    2. ./configure --prefix=$base/fftw3 --enable-single --disable-fortran CC=icc
    3. make CFLAGS="-O3 -xMIC-AVX512 -fp-model fast=2 -no-prec-div -qoverride-limits" clean install
       (use -xMIC-AVX512 for KNL or -xCORE-AVX2 for BDW)
  4. Download charm++* version 6.7.1
  5. Build multicore version of charm++:
    1. cd <path>/charm-6.7.1
    2. ./build charm++ multicore-linux64 iccstatic --with-production "-O3 -ip"
  6. Build BDW:
    1. Modify the Linux-x86_64-icc.arch to look like the following:
      NAMD_ARCH = Linux-x86_64
      CHARMARCH = multicore-linux64-iccstatic
      FLOATOPTS = -ip -xCORE-AVX2 -O3 -g -fp-model fast=2 -no-prec-div -qoverride-limits -DNAMD_DISABLE_SSE
      CXX = icpc -std=c++11 -DNAMD_KNL
      CXXOPTS = -static-intel -O2 $(FLOATOPTS)
      CXXNOALIASOPTS = -O3 -fno-alias $(FLOATOPTS) -qopt-report-phase=loop,vec -qopt-report=4
      CXXCOLVAROPTS = -O2 -ip
      CC = icc
      COPTS = -static-intel -O2 $(FLOATOPTS)
    2. ./config Linux-x86_64-icc --charm-base <charm_path> --charm-arch multicore-linux64-iccstatic --with-fftw3 --fftw-prefix <fftw_path> --without-tcl --charm-opts -verbose
    3. gmake -j
  7. Build KNL:
    1. Modify the arch/Linux-KNL-icc.arch to look like the following:
      NAMD_ARCH = Linux-KNL
      CHARMARCH = multicore-linux64-iccstatic
      FLOATOPTS = -ip -xMIC-AVX512 -O3 -g -fp-model fast=2 -no-prec-div -qoverride-limits -DNAMD_DISABLE_SSE
      CXX = icpc -std=c++11 -DNAMD_KNL
      CXXOPTS = -static-intel -O2 $(FLOATOPTS)
      CXXNOALIASOPTS = -O3 -fno-alias $(FLOATOPTS) -qopt-report-phase=loop,vec -qopt-report=4
      CXXCOLVAROPTS = -O2 -ip
      CC = icc
      COPTS = -static-intel -O2 $(FLOATOPTS)
    2. ./config Linux-KNL-icc --charm-base <charm_path> --charm-arch multicore-linux64-iccstatic --with-fftw3 --fftw-prefix <fftw_path> --without-tcl --charm-opts -verbose
    3. gmake -j
  8. Change the kernel setting for KNL: “nmi_watchdog=0 rcu_nocbs=2-271 nohz_full=2-271”
  9. Download apoa and stmv workloads from here: http://www.ks.uiuc.edu/Research/namd/utilities/
  10. Change the following lines in the *.namd file for both workloads:
      numsteps         1000
      outputtiming     20
      outputenergies   600

Run NAMD workloads on Intel® Xeon® Processor E5-2697 v4 and Intel® Xeon Phi™ Processor 7250

Run BDW (ppn = 72):

           $BIN +p $ppn apoa1/apoa1.namd +pemap 0-($ppn-1)

Run KNL (ppn = 136, MCDRAM, similar performance in cache):

           numactl -m 1 $BIN +p $ppn apoa1/apoa1.namd +pemap 0-($ppn-1)

Performance results reported in Intel® Salesforce repository

(ns/day; higher is better):

Workload    Intel® Xeon® Processor E5-2697 v4 (ns/day)    Intel® Xeon Phi™ Processor 7250 (ns/day)    KNL vs. 2S BDW (speedup)
stmv        0.45                                          0.55                                        1.22x
apoa1       5.5                                           6.18                                        1.12x

Systems configuration:

Processor                     Intel® Xeon® Processor E5-2697 v4 (BDW)    Intel® Xeon Phi™ Processor 7250 (KNL)
Stepping                      1 (B0)                                     1 (B0) Bin1
Sockets / TDP                 2S / 290W                                  1S / 215W
Frequency / Cores / Threads   2.3 GHz / 36 / 72                          1.4 GHz / 68 / 272
DDR4                          8x16 GB 2400 MHz (128 GB)                  7210: 6x16 GB 2400 MHz
MCDRAM                        N/A                                        16 GB Flat
Cluster/Snoop Mode/Mem Mode   Home                                       Quadrant/flat
Turbo                         On                                         On
BIOS                          GRRFSDP1.86B0271.R00.1510301446            GVPRCRB1.86B.0010.R02.1608040407
Compiler                      ICC-2017.0.098                             ICC-2017.0.098
Operating System              Red Hat* Enterprise Linux* 7.2             Red Hat Enterprise Linux 7.2
                              (3.10.0-327.el7.x86_64)                    (3.10.0-327.22.2.el7.xppsl_1.4.1.3272.x86_64)

Preparing for the 2016 HPC Developer Conference Python Lab


Here are the steps you need to perform to prepare for the HPC Developer Conference Python Lab.

As part of Python profiling we will be using Intel® VTune™ Amplifier 2017. You can get a free evaluation copy of VTune Amplifier using the following link: https://software.intel.com/en-us/intel-vtune-amplifier-xe/try-buy

Click on either Windows* trial or Linux* trial

You also need a version of Python running on your system. We currently support Python 2.7 and Python 3.5. If you would like to use the Intel version, you can get a free copy using the following link: https://software.intel.com/en-us/intel-distribution-for-python

Click on Download free.

 

 


Understanding and Harnessing the Capabilities of Intel® Xeon Phi™ Processor (Code Named Knights Landing) Lab - HPC Developer Conference 2016


At the 2016 HPC Developer Conference in Salt Lake City, we will be running a lab entitled Understanding and Harnessing the Capabilities of Intel® Xeon Phi™ Processor (Code Named Knights Landing). In order to maximize the benefit from this lab, we are asking all attendees to meet certain requirements, and we are providing some recommendations.

Requirements:

  • A laptop or similar portable computer with wireless connectivity.
  • A modern SSH client installed on the laptop, such as PuTTY* 0.66 or later for Windows*, iTerm2* for OS X*, or OpenSSH* 5.3 for Linux*.
  • Firewall configuration to allow SSH functionality.

Recommended:

  • VNC client/viewer program such as VNC Viewer* installed on the laptop for running graphical applications remotely.
  • Intel® Parallel Studio XE Cluster Edition 2017 installed on the laptop to use the tools locally.
  • Basic familiarity with Intel® C/C++ Compiler, Intel® Trace Analyzer and Collector, Intel® VTune™ Amplifier XE, and Intel® Advisor.
    • These are components of Intel® Parallel Studio XE and will be used in the lab.

We expect that all attendees will have completed the required steps before the lab begins and will not be providing support for installing required tools during the lab.  We will be providing access to an Intel owned cluster for use during the lab, and will provide assistance with connecting to and using this cluster.

Intel® Advisor 2017 Update 1: What’s New


We’re pleased to announce a new version of the Vectorization Assistant tool, Intel® Advisor 2017 Update 1. For details about downloads, terms, and conditions, please refer to the Intel® Parallel Studio XE 2017 program site.

Below are highlights of the new functionality in Intel Advisor 2017 Update 1.

Cache-aware roofline modeling: To enable this preview feature, set the environment variable ADVIXE_EXPERIMENTAL=roofline before launching the Intel Advisor.

Analysis workflow:

  • Intel® Math Kernel Library (Intel® MKL) support: Intel Advisor results now show Intel MKL function calls.
  • Improved FLOPs analysis performance.
  • Decreased Survey analysis overhead for the Intel® Xeon Phi™ processor.
  • New category for instruction mix data: compute with memory operands.
  • Finalize button for “no-auto-finalize” results, for when a result is finalized on a separate machine.
  • MPI support in the command line dialog box.

Recommendations:

  • Recommendations display in Refinement Reports.
  • New recommendation: Vectorize call(s) to virtual method.
  • Cached recommendations in result snapshots to speed up display.

Memory analysis:

  • Ability to track refinement analysis progress: you can stop collection once every site has executed at least once.

Get Intel Advisor and more information

Visit the product site, where you can find videos and tutorials. Register for Intel® Parallel Studio XE 2017 to download the whole bundle, including Intel Advisor 2017 update 1.



Tutorial: Intel® IoT Gateway, Industrial Oil & Gas Pressure Sensor, and AWS* IoT


In this use case tutorial we'll use an Intel® NUC and Intel® IoT Gateway Developer Hub to interface an industrial fluid/gas pressure sensor to AWS* IoT running in the Amazon Web Services* Cloud. The application software we'll develop will control the pressure sensor and continuously transmit pressure measurements to AWS IoT where the data will be stored, processed and evaluated in the cloud.

The pressure sensor is an Omega* PX409-USBH, a high-speed industrial pressure transducer with a USB interface. The sensor is available in a number of different sensing configurations and pressure ranges from vacuum to 5,000 PSI. For this tutorial we'll use model PX409-150GUSBH, a gage pressure transducer with a measurement range of 0 to 150 PSI. It's designed to connect to piping using a ¼-18 NPT threaded pipe fitting.

We'll connect the sensor to a pressurized water line with a secondary gauge for visual inspection of the pressure. Figure 1 shows the sensor configuration and fittings. (1) is the pressure sensor; (2) is the sensor USB cable that will connect to the Intel® NUC; (3) is a secondary gauge for comparison purposes; (4) and (5) are inlet and outlet connections along with control valves for changing the pressure.

Figure 1. The Omega PX409-USBH industrial pressure sensor

Setup Prerequisites

  1. Intel® NUC powered up and connected to a LAN network with Internet access and a development laptop or workstation for logging into the Intel® NUC.
  2. Intel® IoT Gateway Developer Hub running on the Intel® NUC and software updates applied.
  3. Package "packagegroup-cloud-aws" installed on the Intel® NUC.
  4. An active account on Amazon Web Services and familiarity with the AWS console, AWS IoT, and AWS Elasticsearch Service.

Connect Pressure Sensor to the Intel® NUC

  1. Connect the pressure sensor USB cable into the USB port on the front of the Intel® NUC. After connecting the sensor the Intel® NUC will automatically assign a TTY device such as /dev/ttyUSB0 or /dev/ttyUSB1. The exact name will vary depending on whether you’ve connected other USB devices to the Intel® NUC.
  2. Find the sensor’s device name by logging into the Intel® NUC over the LAN network using ssh. You’ll need to know the Intel® NUC’s IP address assigned on the LAN. For this example the IP address is 192.168.22.100 and the login name is gwuser and password gwuser.
ssh gwuser@192.168.22.100

gwuser@WR-IDP-9C99:~$ ls /dev/ttyUSB*
/dev/ttyUSB0

Here we see only one USB device, named /dev/ttyUSB0 – that’s the pressure sensor. Next we’ll run some verification tests to confirm the sensor is communicating with the Intel® NUC.
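If you want to script this device check rather than run it by hand, the same discovery can be done with Python's standard library; a minimal sketch (on a machine with no attached USB serial devices the list is simply empty):

```python
# List candidate TTY devices for USB serial sensors on a Linux gateway.
import glob

def find_usb_serial_devices():
    """Return the sorted list of /dev/ttyUSB* device paths."""
    return sorted(glob.glob("/dev/ttyUSB*"))

devices = find_usb_serial_devices()
print(devices)  # e.g. ['/dev/ttyUSB0'] when one sensor is attached
```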

  3. Use the screen utility to connect directly to the USB port and manually issue commands. The commands you type won’t be visible – only the results of the commands will display.

     gwuser@WR-IDP-9C99:~$ sudo screen /dev/ttyUSB0 115200

  4. Type in ENQ and hit Enter. You should receive a response that looks like this:
USBPX2
1.0.13.0826
0.0 to 150.0 PSI G

When you see this response it confirms that the pressure sensor is communicating with the Intel® NUC through the USB port. Exit screen by typing these commands: Control-A, Control-\, y.
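If you later read this response in software, the three lines can be split into model, firmware version, and measurement range. A minimal sketch (the field names are ours, not part of the sensor protocol):

```python
# Parse the three-line ENQ response shown above into named fields.
# Field names ("model", "firmware", "range") are our own labels.

def parse_enq_response(text):
    model, firmware, measurement_range = text.strip().splitlines()
    return {"model": model, "firmware": firmware, "range": measurement_range}

response = "USBPX2\n1.0.13.0826\n0.0 to 150.0 PSI G\n"
print(parse_enq_response(response)["range"])  # → 0.0 to 150.0 PSI G
```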

Configure AWS* IoT and Node-RED*

Node-RED* is a visual tool for building Internet of Things applications. It’s pre-installed on the Intel® NUC as part of the Intel® IoT Gateway Developer Hub.

  1. Log into your AWS account and navigate to the AWS IoT console.
  2. Create a new device (thing) named Intel_NUC and a Policy named PubSubToAnyTopic that allows publishing and subscribing to any MQTT topic.
  3. Create and activate a new Certificate and download the private key file, certificate file, and root CA file (available here) to your computer.
  4. Attach the Intel_NUC device and PubSubToAnyTopic policy to the new certificate.
  5. While logged into the Intel® NUC via ssh as gwuser, create the directory /home/gwuser/awsiot and then use SFTP or SCP to copy the downloaded private key file, certificate file and root CA files from your workstation to the /home/gwuser/awsiot directory on the Intel® NUC.

Create Node-RED* Sensor Flow

In the Intel® IoT Gateway Developer Hub click on the Sensors icon and then the Program Sensors button. This will open the Node-RED canvas. If you get another login prompt use the same gwuser / gwuser login credentials you used when logging into the Intel® IoT Gateway Developer Hub.

You will see a default Node-RED flow for a RH-USB sensor. We’re not using that type of sensor here so you can either delete that flow or disable it by double-clicking the Interval node and setting Repeat to none followed by clicking Done. If you don’t delete the flow, select all the elements of the flow using the mouse and drag them down lower on the screen to open up more room at the top. Click the Deploy button to save and activate the changes.

First we’ll build a flow that continuously reads the pressure sensor and displays the pressure readings in the Intel® IoT Gateway Developer Hub. Follow these steps:

  1. Drag the following types of nodes from the list on the left side onto the canvas and arrange them like shown in Figure 2: (1) inject (input), (2) function (function), (3) serial (output), (4) serial (input), (5) function (function), (6) chart tag (function), (7) mqtt (output).
  2. A couple of the names will automatically change when you drop them on the canvas: inject will change to timestamp, and function will change to blank. Use the mouse to draw lines between the nodes so they look like Figure 2.

Figure 2. Initial Node-RED flow with nodes and connections

  3. Next, double-click on each node corresponding to the numbered callouts and set the node parameters as shown in Figure 3. You may need to move the nodes around to maintain a clean layout.
  4. For the serial nodes use the /dev/ttyUSBn device name corresponding to your pressure sensor (we're using /dev/ttyUSB0).
  5. The serial port name and parameters are set by clicking the pencil icon next to Add new serial-port… and setting the values as shown in item 3A of Figure 3, then clicking Add.
  6. When you're done configuring the nodes the flow should look like Figure 4.

Figure 3. Node configuration details

Figure 4. Configured Node-RED flow and pressure data display

  7. Click the Deploy button to deploy and activate the flow, then click your browser’s refresh button to refresh the entire web page. You should now see a live pressure gauge in the upper part of the Intel® IoT Gateway Hub as shown in Figure 4. You can apply water pressure to the sensor assembly and you’ll see the pressure readings increase and decrease as the pressure varies.

Connect Intel® NUC to AWS IoT

  1. Drag a mqtt output node onto the Node-RED canvas and then double-click it.
  2. In the Server pick list select Add new mqtt-broker… and then click the pencil icon to the right. In the Connection tab, set the Server field to your AWS IoT endpoint address which will look something like aaabbbcccddd.iot.us-east-1.amazonaws.com. You can find the endpoint address by using the AWS CLI command aws iot describe-endpoint on your workstation.
  3. Set the Port to 8883 and checkmark Enable secure (SSL/TLS) connection, then click the pencil icon to the right of Add new tls-config…
  4. In the Certificate field enter the full path and filename to your certificate file, private key file, and root CA file that you copied earlier into the /home/gwuser/awsiot directory. For example, the Certificate path might look like /home/gwuser/awsiot/1112223333-certificate.pem.crt and the Private Key path might look like /home/gwuser/awsiot/1112223333-private.pem.key. The CA Certificate might look like /home/gwuser/awsiot/VeriSign-Class-3-Public-Primary-Certification-Authority-G5.pem.
  5. Checkmark Verify server certificate and leave Name empty.
  6. Click the Add button and then click the second Add button to return to the main MQTT out node panel.
  7. Set the Topic to nuc/pressure, set QoS to 1, and set Retain to false.
  8. Set the Name field to Publish to AWS IoT and then click Done.

Drag another function node onto the Node-RED canvas. Double-click to edit the node and set the Name to Format JSON. Edit the function code so it looks like this:

msg.payload = {
    pressure: Number(msg.payload),
    timestamp: Date.now()
};
return msg;
  1. Click Done to save the function changes.
  2. Draw a wire from the output of the Extract Data node to the input of Format JSON, and another wire from the output of Format JSON to the input of Publish to AWS IoT. These changes will convert the pressure reading into a JSON object and send it to AWS IoT.
  3. Click the Deploy button to deploy and activate the changes. The finished flow should look like Figure 5.

Figure 5. Finished flow with connection to AWS IoT

Back in the AWS IoT console, start the MQTT Client and subscribe to the topic nuc/pressure. You should see messages arriving once a second containing live pressure readings. Vary the pressure on the sensor and observe the values increasing and decreasing.
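For reference, the messages arriving on nuc/pressure carry the JSON shape produced by the Format JSON node above. A small stdlib sketch of building and decoding one such payload (the reading and timestamp values are invented):

```python
# Build and decode a payload with the same shape the Format JSON node
# produces: {"pressure": <float>, "timestamp": <epoch ms>}.
import json

def make_payload(pressure_reading, timestamp_ms):
    """Serialize one pressure sample as the JSON sent to AWS IoT."""
    return json.dumps({"pressure": float(pressure_reading),
                       "timestamp": timestamp_ms})

message = make_payload("87.3", 1480000000000)  # invented sample values
decoded = json.loads(message)
print(decoded["pressure"])  # → 87.3
```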

Recording and Visualizing Pressure History in AWS* IoT

Now that live pressure data is arriving in AWS IoT, a variety of additional data processing can be performed in the AWS cloud. Here we’ll send the data into Elasticsearch where it can be searched and visualized on dashboards.

  1. In the AWS console, navigate to the Elasticsearch Service and provision an Elasticsearch cluster with a domain name of nucdata. You can initially make it a small cluster with one instance.
  2. Set the security access policy to your Internet access preferences, and also add a policy allowing the principal "AWS": "*" to perform the action "ESHttpPut". Wait for the cluster provisioning to complete and the Domain status to change to Active.
  3. Use curl or a REST tool to create an Elasticsearch index named nucs using the Endpoint URI listed in the AWS Elasticsearch cluster console. When creating the index, include a type named nuc with two properties: pressure of type float and timestamp of type date.
  4. In the AWS IoT console, create a Rule named Record_Pressure.
  5. Set the Description to Record pressure readings to Elasticsearch, set the Attribute to pressure,timestamp and set the Topic Filter to nuc/pressure. Leave Condition blank.
  6. In the action section choose an action of Send the message to a search index cluster (Amazon Elasticsearch Service) and select nucdata for the Domain name. The Endpoint will be filled in automatically.
  7. Set the ID to ${newuuid()}, set the Index to nucs, and set the Type to nuc.
  8. For the Role name, click Create a new role and set the role name to aws_iot_elasticsearch.
  9. Click Add action and then click Create to create the rule. This rule will take data from the nuc/pressure MQTT topic and send it into Elasticsearch where it can be searched and viewed.
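The index mapping from step 3 above can be sketched as follows. This is an illustrative Python sketch only: the endpoint below is a placeholder, no request is actually sent, and the exact mapping syntax depends on your Elasticsearch version.

```python
import json

# Placeholder endpoint; substitute the Endpoint URI from the AWS console.
endpoint = "https://search-nucdata-example.us-east-1.es.amazonaws.com"

# Mapping for the "nucs" index: one type named "nuc" with two properties.
mapping = {
    "mappings": {
        "nuc": {
            "properties": {
                "pressure":  {"type": "float"},
                "timestamp": {"type": "date"},
            }
        }
    }
}

# Roughly equivalent to: curl -XPUT "$endpoint/nucs" -d @mapping.json
print(json.dumps(mapping, indent=2))
```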

To search and view data:

  1. Navigate to your Elasticsearch cluster Kibana URI and create an index pattern named nucs using timestamp as the Time-field name. You should see pressure and timestamp in the fields list.
  2. Navigate to the Kibana Discover tab and enable auto-refresh at a 5 second interval. You should see roughly 5 new data records every 5 seconds – these are the pressure readings coming from the NUC.
  3. Navigate to the Visualize tab and create a Line chart time series visualization using X and Y parameters shown in Figure 6.
  4. Now vary the actual pressure on the sensor and watch the time series graph in Kibana. Figure 6 shows a live pressure cycle starting at 0 PSI, jumping to 82 PSI, incrementally stepping down to 20 PSI, ramping up to 76 and then 82 PSI, dropping abruptly to 0 PSI, stepping up to 55 PSI for a short period, and then stepping up to 82 PSI.

Figure 6. Live graph of pressure reading data in AWS


vHost User Client Mode in Open vSwitch* with DPDK


This article describes the concept of vHost User client mode, how it can be tested, and the benefits the feature brings to Open vSwitch* (OVS) with the Data Plane Development Kit (DPDK). It is intended for OVS users who wish to learn more about the feature, and for users configuring a virtualized OVS-DPDK setup that uses vHost User ports as the guest access method for virtual machines (VMs) running in QEMU*.

Note: vHost User client mode in OVS with DPDK is available on both the OVS master branch and the 2.6 release branch. Users can download the OVS master branch as a zip here or the 2.6 branch as a zip here. Installation steps for the master branch can be found here. 2.6 installation steps can be found here.

vHost User Client Mode

vHost User client mode was introduced in DPDK v16.07 to address a limitation in the DPDK, whereby if the vHost User backend (DPDK-based application such as OVS with DPDK) crashes or is restarted, VMs with DPDK vHost User ports cannot re-establish connectivity with the backend and are essentially rendered useless from a networking perspective. vHost User client mode solves this problem.

The vHost User standard uses a client-server model. The server creates and manages the vHost User sockets and the client connects to the sockets created by the server. Before the introduction of this feature, the only configuration used by OVS-DPDK had it acting as the server and QEMU* acting as the client. Figure 1 shows this configuration.

Figure 1.Typical Open vSwitch* with the Data Plane Development Kit (DPDK) configuration using OVS-DPDK vHost server mode and QEMU* using vHost client mode. The direction of the arrow indicates the client connecting to the server.

With this default configuration, if OVS-DPDK was reset the sockets would be destroyed, and when relaunched, connectivity could not be re-established with the VM. With client mode, OVS-DPDK acts as the vHost client and QEMU acts as the server. In this configuration, when the switch crashes, the sockets still exist as they are managed by QEMU. This allows OVS-DPDK to relaunch and reconnect to these sockets and thus resume normal connectivity. Figure 2 shows this configuration.
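The client/server roles can be illustrated with a small Python sketch, in which plain Unix sockets stand in for the vHost User protocol and the socket path is hypothetical. The server side (like QEMU in vHost server mode) owns the socket, so a restarted client (like OVS-DPDK in client mode) can simply reconnect:

```python
import os
import socket
import tempfile
import threading

sock_path = os.path.join(tempfile.mkdtemp(), "sock0")

# "QEMU" side: creates and manages the socket (vHost server mode).
server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server.bind(sock_path)
server.listen(2)

accepted = []
def accept_loop():
    for _ in range(2):  # expect the client to connect twice
        conn, _ = server.accept()
        accepted.append(conn)

t = threading.Thread(target=accept_loop)
t.start()

# "OVS-DPDK" side: a plain client (vHost client mode).
client = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
client.connect(sock_path)

# Simulate a switch crash/restart: the client goes away...
client.close()

# ...and a relaunched client reconnects to the *same* socket,
# which still exists because the server side owns it.
client2 = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
client2.connect(sock_path)

t.join()
for c in accepted:
    c.close()
server.close()
print("reconnected:", len(accepted) == 2)
```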

Figure 2.Open vSwitch* with the Data Plane Development Kit (DPDK) configuration using OVS-DPDK vHost client mode and QEMU* using vHost server mode. The direction of the arrow indicates the client connecting to the server.

As seen in Table 1, OVS-DPDK supports this feature by offering two types of vHost User ports. The first, dpdkvhostuser, operates in the standard server mode. The second, dpdkvhostuserclient, operates in client mode, as the name suggests.

Table 1. Types of vHost User ports offered by Open vSwitch* with the Data Plane Development Kit (DPDK) and their respective modes

Port Name            | Uses vHost Mode | Requires QEMU Mode
dpdkvhostuser        | Server          | Client
dpdkvhostuserclient  | Client          | Server

The ability to reconnect is a very useful feature; it reduces the impact of catastrophic failure of the switch as the VMs connected to the switch do not need to be rebooted upon switch failure. It also makes other tasks more lightweight; for example, maintenance tasks such as updating the switch requires rebooting the switch, but no longer requires the VMs connected to the switch to be rebooted as well.

Test Environment

The following describes how to set up an OVS-DPDK configuration with one physical dpdk port and one vHost User dpdkvhostuserclient port, with QEMU in vHost server mode. Next, the steps to demonstrate the reconnect capability are described.

Figure 3 shows the test environment configuration.

Figure 3. Open vSwitch* with the Data Plane Development Kit (DPDK) configuration using OVS-DPDK vHost client mode and QEMU* using vHost server mode. The direction of the arrows denotes the flow of traffic. The tcpdump tool is being used on the guest to monitor incoming traffic on the eth0 interface.

Table 2 shows the hardware and software components used for this setup:

Table 2. Hardware and software components

Processor                  | Intel® Xeon® processor E5-2695 v3 @ 2.30 GHz
Kernel                     | 4.2.8-200
OS                         | Fedora* 22
QEMU*                      | v2.7.0
Data Plane Development Kit | v16.07
Open vSwitch*              | 62f0430e903ad29bdde17bd8e8aa814198fac890

Configuration Steps

First, build OVS with DPDK as described in the installation docs.

Configure the switch as described in the Test Environment section, with one physical dpdk port and one vHost User dpdkvhostuserclient port. Add a flow that directs traffic from the physical port (1) to the vHost User port (2):

ovs-vsctl add-br br0

ovs-vsctl set Bridge br0 datapath_type=netdev

ovs-vsctl add-port br0 dpdk0

ovs-vsctl set Interface dpdk0 type=dpdk

ovs-vsctl add-port br0 vhost0

ovs-vsctl set Interface vhost0 type=dpdkvhostuserclient

ovs-ofctl add-flow br0 in_port=1,action=output:2

Set the location of the socket:

ovs-vsctl set Interface vhost0 options:vhost-server-path="/tmp/sock0"

Logs similar to the following should be printed:

VHOST_CONFIG: vhost-user client: socket created, fd: 28

VHOST_CONFIG: failed to connect to /tmp/sock0: No such file or directory

VHOST_CONFIG: /tmp/sock0: reconnecting...

Launch QEMU in server-mode:

qemu-system-x86_64 -cpu host -enable-kvm -m 4096M -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/hugepages,share=on -numa node,memdev=mem -mem-prealloc -drive file=/images/image.qcow2 -chardev socket,id=char0,path=/tmp/sock0,server -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce -device virtio-net-pci,mac=00:00:00:00:00:01,netdev=mynet1,mrg_rxbuf=off -nographic

Logs similar to the following should be printed:

QEMU waiting for connection on: disconnected:unix:/tmp/sock0,server

VHOST_CONFIG: /tmp/sock0: connected

The important part of the command above is to include “,server” as part of the path argument of the chardev configuration.

Once the VM has booted successfully, test the connection between dpdk0 and vhost0. Run the tcpdump tool on the vHost interface to monitor incoming traffic. Send traffic to dpdk0 (for example, using a traffic generator); it should be switched to vhost0 and captured by the tool:

[root@localhost ~]# tcpdump -i eth0

tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes

09:57:50.276877 IP 2.2.2.2.0 > 1.1.1.1.0: Flags [], seq 0:6, win 0, length 6

09:57:50.779559 IP 2.2.2.2.0 > 1.1.1.1.0: Flags [], seq 0:6, win 0, length 6

To test client mode reconnection, simply reset the switch and, by continually monitoring the tcpdump instance, verify that traffic is once again switched to the VM after a brief loss of connectivity during the reset.

This is just one of many ways the reconnect capability can be tested. For instance, it can also be tested in the reverse direction by generating traffic (for example, a ping in the VM), and verifying it reaches the physical port. The DPDK pdump tool is a useful way to monitor traffic on physical ports in OVS. Instructions for configuring and using pdump with OVS can be found in the OVS-DPDK documentation mentioned earlier in this article, or in the article DPDK Pdump in Open vSwitch* with DPDK.

Conclusion

In this article, we described how OVS-DPDK vHost User ports can be configured in client mode, allowing connectivity to be re-established after a reset of the switch. We demonstrated one method of testing this feature and suggested another.

Additional Information

For more details on the DPDK vHost library, refer to the DPDK documentation.

For more information on configuring vHost User in Open vSwitch, refer to INSTALL.DPDK.rst.

Have a question? Feel free to follow up with the query on the Open vSwitch discussion mailing thread.

To learn more about OVS with DPDK, check out the following videos and articles on Intel® Developer Zone and Intel® Network Builders University.

QoS Configuration and usage for Open vSwitch* with DPDK

Open vSwitch with DPDK Architectural Deep Dive

DPDK Open vSwitch: Accelerating the Path to the Guest

DPDK Pdump in Open vSwitch* with DPDK

vHost User NUMA Awareness in Open vSwitch* with DPDK

About the Author

Ciara Loftus is a network software engineer with Intel. Her work is primarily focused on accelerated software switching solutions in user space running on Intel® architecture. Her contributions to OVS with DPDK include the addition of vHost User ports, vHost User client ports, NUMA-aware vHost User, and DPDK v16.07 support.

Intel® Software Guard Extensions Tutorial Series: Part 6, Dual Code Paths


In Part 6 of the Intel® Software Guard Extensions (Intel® SGX) tutorial series, we set aside the enclave to address an outstanding design requirement that was laid out in Part 2, Application Design: provide support for dual code paths. We want to make sure our Tutorial Password Manager will function on hosts both with and without Intel SGX capability. Much of the content in this part comes from the article, Properly Detecting Intel® Software Guard Extensions in Your Applications.

You can find the list of all of the published tutorials in the article Introducing the Intel® Software Guard Extensions Tutorial Series.

There is source code provided with this installment of the series.

All Intel® Software Guard Extensions Applications Need Dual Code Paths

First it’s important to point out that all Intel SGX applications must have dual code paths. Even if an application is written so that it should only execute if Intel SGX is available and enabled, a fallback code path must exist so that you can present a meaningful error message to the user and then exit gracefully.

In short, an application should never crash or fail to launch solely because the platform does not support Intel SGX.

Scoping the Problem

In Part 5 of the series we completed our first version of our application enclave and tested it by hardcoding the enclave support to be on. That was done by setting the _supports_sgx flag in PasswordCoreNative.cpp.

PasswordManagerCoreNative::PasswordManagerCoreNative(void)
{
	_supports_sgx= 1;
	adsize= 0;
	accountdata= NULL;
	timer = NULL;
}

Obviously, we can’t leave this on by default. The convention for feature detection is that features are off by default and turned on if they are detected. So our first step is to undo this change and set the flag back to 0, effectively disabling the Intel SGX code path.

PasswordManagerCoreNative::PasswordManagerCoreNative(void)
{
	_supports_sgx= 0;
	adsize= 0;
	accountdata= NULL;
	timer = NULL;
}

However, before we get into the feature detection procedure, we’ll give the console application that runs our test suite, CLI Test App, a quick functional test by executing it on an older system that does not have the Intel SGX feature. With this flag set to zero, the application will not choose the Intel SGX code path and thus should run normally.

Here’s the output from a 4th generation Intel® Core™ i7 processor-based laptop, running Microsoft Windows* 8.1, 64-bit. This system does not support Intel SGX.

CLI Test App

What Happened?

Clearly we have a problem even when the Intel SGX code path is explicitly disabled in the software. This application, as written, cannot execute on a system without Intel SGX support. It didn’t even start executing. So what’s going on?

The clue in this case comes from the error message in the console window:

System.IO.FileNotFoundException: Could not load file or assembly ‘PasswordManagerCore.dll’ or one of its dependencies. The specified file could not be found.

Let’s take a look at PasswordManagerCore.dll and its dependencies:

Additional Dependencies

In addition to the core OS libraries, we have dependencies on bcrypt.lib and EnclaveBridge.lib, which will require bcrypt.dll and EnclaveBridge.dll at runtime. Since bcrypt.dll comes from Microsoft and is included in the OS, we can reasonably assume its dependencies, if any, are already installed. That leaves EnclaveBridge.dll.

Examining its dependencies, we see the following:

Additional Dependencies

This is the problem. Even though we have the Intel SGX code path explicitly disabled, EnclaveBridge.dll still has references to the Intel SGX runtime libraries. All symbols in an object module must be resolved as soon as it is loaded. It doesn’t matter if we disable the Intel SGX code path: undefined symbols are still present in the DLL. When PasswordManagerCore.dll loads, it resolves its undefined symbols by loading bcrypt.dll and EnclaveBridge.dll, the latter of which, in turn, attempts to resolve its undefined symbols by loading sgx_urts.dll and sgx_uae_service.dll. The system we tried to run our command-line test application on does not have these libraries, and since the OS can’t resolve all of the symbols it throws an exception and the program crashes before it even starts.

These two DLLs are part of the Intel SGX Platform Software (PSW) package, and without them Intel SGX applications written using the Intel SGX Software Development Kit (SDK) cannot execute. Our application needs to be able to run even if these libraries are not present.

The Platform Software Package

As mentioned above, the runtime libraries are part of the PSW. In addition to these support libraries, the PSW includes:

  • Services that support and maintain the trusted compute block (TCB) on the system
  • Services that perform and manage certain Intel SGX operations such as attestation
  • Interfaces to platform services such as trusted time and the monotonic counters

The PSW must be installed by the application installer when deploying an Intel SGX application, because Intel does not offer the PSW for direct download by end users. Software vendors must not assume that it will already be present and installed on the destination system. In fact, the license agreement for Intel SGX specifically states that licensees must re-distribute the PSW with their applications.

We’ll discuss the PSW installer in more detail in a future installment of the series covering packaging and deployment.

Detecting Intel Software Guard Extensions Support

So far we’ve focused on the problem of just starting our application on systems without Intel SGX support, and more specifically, without the PSW. The next step is to detect whether or not Intel SGX support is present and enabled once the application is running.

Intel SGX feature detection is, unfortunately, a complicated procedure. For a system to be Intel SGX capable, four conditions must be met:

  1. The CPU must support Intel SGX.
  2. The BIOS must support Intel SGX.
  3. In the BIOS, Intel SGX must be explicitly enabled or set to the “software controlled” state.
  4. The PSW must be installed on the platform.

Note that the CPUID instruction, alone, is not sufficient to detect the usability of Intel SGX on a platform. It can tell you whether or not the CPU supports the feature, but it doesn’t know anything about the BIOS configuration or the software that is installed on a system. Relying solely on the CPUID results to make decisions about Intel SGX support can potentially lead to a runtime fault.

To make feature detection even more difficult, examining the state of the BIOS is not a trivial task and is generally not possible from a user process. Fortunately, the Intel SGX SDK provides a simple solution: the function sgx_enable_device will both check for Intel SGX capability and attempt to enable it if the BIOS is set to the software control state. (The purpose of the software control setting is to allow applications to enable Intel SGX without requiring users to reboot their systems and enter their BIOS setup screens, a particularly daunting and intimidating task for non-technical users.)

The problem with sgx_enable_device, though, is that it is part of the Intel SGX runtime, which means the PSW must be installed on the system in order to use it. So before we attempt to call sgx_enable_device, we must first detect whether or not the PSW is present.

Implementation

With our problem scoped out, we can now lay out the steps that must be followed, in order, for our dual-code path application to function properly. Our application must:

  1. Load and begin executing even without the Intel SGX runtime libraries.
  2. Determine whether or not the PSW package is installed.
  3. Determine whether or not Intel SGX is enabled (and attempt to enable it).

Loading and Executing without the Intel Software Guard Extensions Runtime

Our main application depends on PasswordManagerCore.dll, which depends on EnclaveBridge.dll, which in turn depends on the Intel SGX runtime. Since all symbols need to be resolved when an application loads, we need a way to prevent the loader from trying to resolve symbols that come from the Intel SGX runtime libraries. There are two options:

Option #1: Dynamic Loading      

In dynamic loading, you don’t explicitly link the library in the project. Instead you use system calls to load the library at runtime and then look up the names of each function you plan to use in order to get the addresses of where they have been placed in memory. Functions in the library are then invoked indirectly via function pointers.

Dynamic loading is a hassle. Even if you only need a handful of functions, it can be a tedious process to prototype function pointers for every function that is needed and get their load address, one at a time. You also lose some of the benefits provided by the integrated development environment (such as prototype assistance) since you are no longer explicitly calling functions by name.

Dynamic loading is typically used in extensible application architectures (for example, plug-ins).
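As a rough analogue of this pattern in Python (ctypes stands in for LoadLibrary/GetProcAddress; the library names below are illustrative, not real Intel SGX file names on this platform), dynamic loading with a graceful fallback looks like this:

```python
import ctypes
import ctypes.util

def load_runtime(libname):
    """Try to dynamically load a shared library; return None if absent."""
    try:
        return ctypes.CDLL(libname)
    except OSError:
        return None

# A library that will not exist on this machine stands in for the runtime.
sgx_runtime = load_runtime("libsgx_urts_does_not_exist.so")
print("SGX runtime present:", sgx_runtime is not None)  # falls back gracefully

# The success path: resolve a symbol by name and call it via the handle.
libm_name = ctypes.util.find_library("m")
if libm_name:
    libm = ctypes.CDLL(libm_name)
    libm.sqrt.restype = ctypes.c_double
    libm.sqrt.argtypes = [ctypes.c_double]
    print(libm.sqrt(9.0))  # prints 3.0 where libm is found
```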

Option #2: Delayed-Loaded DLLs

In this approach, you dynamically link all your libraries in the project, but instruct Windows to do delayed loading of the problem DLL. When a DLL is delay-loaded, Windows does not attempt to resolve symbols that are defined by that DLL when the application starts. Instead it waits until the program makes its first call to a function that is defined in that DLL, at which point the DLL is loaded and the symbols get resolved (along with any of its dependencies). What this means is that a DLL is not loaded until the application needs it. A beneficial side effect of this approach is that it allows applications to reference a DLL that is not installed, so long as no functions in that DLL are ever called.

When the Intel SGX feature flag is off, that is exactly the situation we are in so we will go with option #2.

You specify the DLL to be delay-loaded in the project configuration for the dependent application or DLL. For the Tutorial Password Manager, the best DLL to mark for delayed loading is EnclaveBridge.dll, as we only call this DLL if the Intel SGX path is enabled. If this DLL doesn’t load, neither will the two Intel SGX runtime DLLs.

We set the option in the Linker -> Input page of the PasswordManagerCore.dll project configuration:

Password Manager

After the DLL is rebuilt and installed on our 4th generation Intel Core processor system, the console test application works as expected.

CLI Test App

Detecting the Platform Software Package

Before we can call the sgx_enable_device function to check for Intel SGX support on the platform, we first have to make sure that the PSW package is installed because sgx_enable_device is part of the Intel SGX runtime. The best way to do this is to actually try to load the runtime libraries.

We know from the previous step that we can’t just dynamically link them because that will cause an exception when we attempt to run the program on a system that does not support Intel SGX (or have the PSW package installed). But we can’t rely on delay-loaded DLLs either: delayed loading can’t tell us whether a library is installed, because if it isn’t, the application will still crash! That means we must use dynamic loading to test for the presence of the runtime libraries.

The PSW runtime libraries should be installed in the Windows system directory so we’ll use GetSystemDirectory to get that path, and limit the DLL search path via a call to SetDllDirectory. Finally, the two libraries will be loaded using LoadLibrary. If either of these calls fail, we know the PSW is not installed and that the main application should not attempt to run the Intel SGX code path.

Detecting and Enabling Intel Software Guard Extensions

Since the previous step dynamically loads the PSW runtime libraries, we can just look up the symbol for sgx_enable_device manually and then invoke it via a function pointer. The result will tell us whether or not Intel SGX is enabled.

Implementation

To implement this in the Tutorial Password Manager we’ll create a new DLL called FeatureSupport.dll. We can safely dynamically link this DLL from the main application since it has no explicit dependencies on other DLLs.

Our feature detection will be rolled into a C++/CLI class called FeatureSupport, which will also include some high-level functions for getting more information about the state of Intel SGX. In rare cases, enabling Intel SGX via software may require a reboot, and in rarer cases the software enable action fails and the user may be forced to enable it explicitly in their BIOS.

The class declaration for FeatureSupport is shown below.

typedef sgx_status_t(SGXAPI *fp_sgx_enable_device_t)(sgx_device_status_t *);


public ref class FeatureSupport {
private:
	UINT sgx_support;
	HINSTANCE h_urts, h_service;

	// Function pointers

	fp_sgx_enable_device_t fp_sgx_enable_device;

	int is_psw_installed(void);
	void check_sgx_support(void);
	void load_functions(void);

public:
	FeatureSupport();
	~FeatureSupport();

	UINT get_sgx_support(void);
	int is_enabled(void);
	int is_supported(void);
	int reboot_required(void);
	int bios_enable_required(void);

	// Wrappers around SGX functions

	sgx_status_t enable_device(sgx_device_status_t *device_status);

};

Here are the low-level routines that check for the PSW package and attempt to detect and enable Intel SGX.

int FeatureSupport::is_psw_installed()
{
	_TCHAR *systemdir;
	UINT rv, sz;

	// Get the system directory path. Start by finding out how much space we need
	// to hold it.

	sz = GetSystemDirectory(NULL, 0);
	if (sz == 0) return 0;

	systemdir = new _TCHAR[sz + 1];
	rv = GetSystemDirectory(systemdir, sz);
	if (rv == 0 || rv > sz) {
		delete[] systemdir;
		return 0;
	}

	// Set our DLL search path to just the System directory so we don't accidentally
	// load the DLLs from an untrusted path.

	if (SetDllDirectory(systemdir) == 0) {
		delete[] systemdir;
		return 0;
	}

	delete[] systemdir; // No longer need this

	// Need to be able to load both of these DLLs from the System directory.

	if ((h_service = LoadLibrary(_T("sgx_uae_service.dll"))) == NULL) {
		return 0;
	}

	if ((h_urts = LoadLibrary(_T("sgx_urts.dll"))) == NULL) {
		FreeLibrary(h_service);
		h_service = NULL;
		return 0;
	}

	load_functions();

	return 1;
}

void FeatureSupport::check_sgx_support()
{
	sgx_device_status_t sgx_device_status;

	if (sgx_support != SGX_SUPPORT_UNKNOWN) return;

	sgx_support = SGX_SUPPORT_NO;

	// Check for the PSW

	if (!is_psw_installed()) return;

	sgx_support = SGX_SUPPORT_YES;

	// Try to enable SGX

	if (this->enable_device(&sgx_device_status) != SGX_SUCCESS) return;

	// If SGX isn't enabled yet, perform the software opt-in/enable.

	if (sgx_device_status != SGX_ENABLED) {
		switch (sgx_device_status) {
		case SGX_DISABLED_REBOOT_REQUIRED:
			// A reboot is required.
			sgx_support |= SGX_SUPPORT_REBOOT_REQUIRED;
			break;
		case SGX_DISABLED_LEGACY_OS:
			// BIOS enabling is required
			sgx_support |= SGX_SUPPORT_ENABLE_REQUIRED;
			break;
		}

		return;
	}

	sgx_support |= SGX_SUPPORT_ENABLED;
}

void FeatureSupport::load_functions()
{
	fp_sgx_enable_device = (fp_sgx_enable_device_t)GetProcAddress(h_service, "sgx_enable_device");
}

// Wrappers around SDK functions so the user doesn't have to mess with dynamic loading by hand.

sgx_status_t FeatureSupport::enable_device(sgx_device_status_t *device_status)
{
	check_sgx_support();

	if (fp_sgx_enable_device == NULL) {
		return SGX_ERROR_UNEXPECTED;
	}

	return fp_sgx_enable_device(device_status);
}

Wrapping Up

With these code changes, we have integrated Intel SGX feature detection into our application! It will execute smoothly on systems both with and without Intel SGX support and choose the appropriate code branch.

As mentioned in the introduction, there is sample code provided with this part for you to download. The attached archive includes the source code for the Tutorial Password Manager core, including the new feature detection DLL. Additionally, we have added a new GUI-based test program that automatically selects the Intel SGX code path, but lets you disable it if desired (this option is only available if Intel SGX is supported on the system).

SGX Code Branch

The console-based test program has also been updated to detect Intel SGX, though it cannot be configured to turn it off without modifying the source code.

Coming Up Next

We’ll revisit the enclave in Part 7 in order to fine-tune the interface. Stay tuned!

Amazing Video Experiences Make Game-changing Calls in Sports


As info-junkies for entertainment, sports, fashion, science, food, and more, people are fast turning to video as a quick, easy way to stay informed and connected about the things they care about. Whether viewed via the internet, TV, or mobile devices, video is a part of everyday life. So how do you make it an excellent experience for millions of viewers?

Video solution providers are vying in this space to deliver real-time, reliable content that is available everywhere and at high quality, with brilliant colors and immersive experiences, all with an efficiency that leaves room for profit, innovation, growth, and greater reach. Sports is the perfect place to see the evidence.

Open the hood to see how it’s done, and you’ll see it’s all driven by computing—from data centers to encoders/decoders, to edge devices. This is where Intel® Xeon® processors with built-in media accelerators best fit the bill for performance, coupled with Intel’s media software tools to speed video transcoding, deliver efficient/high-density streaming, and help video solution vendors build competitive features for their products and services. Video acceleration is fast growing with game-changing results in the world of sports—below is a key example of just how.

Innovation by Design, Technology provides Video Replay Advantage

We all remember viscerally that one play where the referee got the game-changing call wrong and our favorite team lost the competition. It’s human error. It happens. But it doesn’t need to anymore. Now, new solutions with the latest Intel technologies can help resolve challenging referee decision moments with fast, high-quality specialized video replay systems. 

Slomo.tv, a producer of instant replay servers, created the videoReferee* family of systems, which deliver instant high-quality video replays from up to 18 cameras directly to referee viewing systems on the competition sidelines.

Built with Intel® Xeon® processors (E3-1500 v5) for extreme processing power and optimized with Intel® Media Server Studio software for fast, high-quality video transcoding, these systems helped the company transform its solutions, which are now used around the world.

In addition to the server and video cameras placed around the sports arena, the system includes a monitor and an easy-to-use control panel that any referee can learn in less than an hour. Video can be reviewed in Quad mode simultaneously from 4 cameras at different angles, in slow motion, or with the zoom function for objective, error-free analysis of game plays.

Helping Referees across a Diversity of Sports

Sports organizations like Kontinental Hockey League; basketball leagues in Korea, Russia, and Lithuania; and even canoe racing at the Rio Olympics in 2016 took notice of videoReferee—using it to help get their sports judging calls right. The Fédération Internationale de Football Association (FIFA) is also testing similar systems for possible use in its worldwide football (known as soccer in some parts of the world) competitions.

What makes slomo.tv’s solution different from those offered by other video solution providers and manufacturers is how it manages video compression. By using Intel’s Media Server Studio (a tool suite that provides the Intel® Media SDK, runtimes, and advanced analysis tools for media, video, and broadcast application development) to optimize, accelerate, and compress its video streams, the company gains flexibility and an efficiency advantage over traditional hardware-only designs for managing that function.

Slomo.tv CTO Igor Vitiorets says, “Without the Intel® Media SDK <in Intel Media Server Studio>, we could not have created our innovative video replay and server products now in use worldwide, as it was the cornerstone for our software development and innovation.”

Today, instant replays are at the center of sports. What could be ahead? Giving fans and sports consumers a more in-depth, insightful view of the key sports decision-making process than is available today, or even more immersive virtual reality views on the spot.

Delivering Amazing Video Experiences

See how Wowza, Rivet VR and Intel created a 360-degree view of a live concert delivered via 4K streaming made possible by Intel technologies. 
Video | Article

 


Learn More

  • intel.com/visualcloud
     
  • Media developers: Try out the Intel Media Server Studio free Community Edition: makebettercode/mediaserverstudio
     
  • See slomo.tv sports video judging. And check out another innovative slomo.tv server, RED ARROW, which can simultaneously provide 4 channels of recording, 4 channels of search, and 2 channels of playback with six 4K physical video ports—all in 4K 50/60p, built with Intel Xeon processors and optimized by Intel Media Server Studio.


OVS-DPDK Datapath Classifier


Overview

This article describes the design and implementation of the datapath classifier – aka dpcls – in Open vSwitch* (OVS) with Data Plane Development Kit (DPDK). It presents flow classification and the caching techniques, and also provides greater insight on functional details with different scenarios.

Virtual switches are drastically different from traditional hardware switches. Hardware switches employ ternary content-addressable memory (TCAM) for high-speed packet classification and forwarding. Virtual switches, on the other hand, use multiple levels of caches and flow caching to achieve higher forwarding rates. OVS is an OpenFlow-compliant virtual switch that can handle advanced network virtualization use cases, resulting in longer packet-processing pipelines with reduced hypervisor resources. To achieve higher forwarding rates, OVS stores the active flows in caches.

The OVS user space datapath uses DPDK for fastpath processing. DPDK includes poll mode drivers (PMDs) that achieve line-rate packet processing by bypassing the kernel stack for receive and send. OVS-DPDK has three tiers of look-up tables/caches. The first-level table works as an Exact Match Cache (EMC); as the name suggests, the EMC cannot do wildcard matching. The dpcls is the second-level table and works as a tuple space search (TSS) classifier. Despite being implemented with hash tables (i.e., subtables), it works as a wildcard matching table. Implementation details with examples are explained in the following sections. The third-level table is the ofproto classifier table, whose content is managed by an OpenFlow-compliant controller. Figure 1 depicts the three-tier cache architecture in OVS-DPDK.


Figure 1. OVS three-tier cache architecture

An incoming packet traverses multiple tables until a match is found. In the case of a match, the appropriate forwarding action is executed. In real-world virtual switch deployments handling a few thousand flows, EMC is quickly saturated. Most of the packets will then be matched against the dpcls, so its performance becomes a critical aspect for the overall switching performance.
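The fall-through order described above can be sketched in a few lines of Python. This is a toy illustration only; `SimpleTable`, the packet keys, and the learning step are hypothetical stand-ins, not the actual OVS-DPDK data structures:

```python
class SimpleTable:
    """Minimal stand-in for one classifier tier: maps a packet key to an action."""
    def __init__(self, flows=None):
        self.flows = dict(flows or {})

    def lookup(self, packet):
        return self.flows.get(packet)

    def insert(self, packet, action):
        self.flows[packet] = action


def lookup(packet, emc, dpcls, ofproto):
    """Return (action, tier_hit), falling through EMC -> dpcls -> ofproto."""
    key = hash(packet)                      # exact-match key for the EMC
    if key in emc:                          # tier 1: Exact Match Cache
        return emc[key], "EMC"
    action = dpcls.lookup(packet)           # tier 2: datapath classifier
    if action is not None:
        emc[key] = action                   # learn the flow into the EMC
        return action, "dpcls"
    action = ofproto.lookup(packet)         # tier 3: ofproto classifier
    if action is not None:
        dpcls.insert(packet, action)        # cache into the lower tiers
        emc[key] = action
        return action, "ofproto"
    return None, "miss"                     # no flow found; goes to controller


emc = {}
dpcls = SimpleTable()
ofproto = SimpleTable({("10.0.0.1", "10.0.0.2"): "output:1"})

pkt = ("10.0.0.1", "10.0.0.2")
first = lookup(pkt, emc, dpcls, ofproto)    # misses EMC and dpcls, hits ofproto
second = lookup(pkt, emc, dpcls, ofproto)   # now served directly from the EMC
```

The second lookup illustrates why EMC saturation matters: once the cache no longer holds a flow, every packet pays the cost of the deeper tiers.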

Classifier Deep Dive

A classifier is a collection of rules or policies, and packet classification is about categorizing a packet based on a set of rules. The OVS second-level table uses a TSS classifier for packet classification, which consists of one hash table (tuple) for each kind of match in use. In a TSS classifier implementation, a tuple is defined for each combination of field length, and the resulting set is called a “tuple space.” Since each tuple has a known set of bits in each field, by concatenating these bits in order, a hash key can be derived, which can then be used to map filters of that tuple into a hash table.

A TSS classifier with some flow matching packet IP fields (e.g., Src IP and Destination IP only) is represented as one hash table (tuple). If the controller inserts a new flow with a different match (e.g., Src and Dst MAC address), it will be represented as a second hash table (tuple). Searching a TSS classifier requires searching each tuple until a match is found. While the lookup complexity of TSS is far from optimal, it is still efficient compared to decision tree classification algorithms. In decision tree algorithms, each internal node represents a decision with a binary outcome. The algorithm starts by performing the test at the root of the decision tree and, based on the outcome, branches to one of the children, continuing until a leaf node is reached and the output is produced. The worst-case complexity is therefore the height of the decision tree. In the case of a TSS classifier with “N” subtables, the worst-case complexity is O(N), and much of the overhead is in hash computation. Though the TSS classifier has high lookup complexity, it still fares better than decision trees in the following ways.

Tuple Space Search vs. Decision Tree Classification

  1. With a few hundred-thousand active parallel flows, the controller may add and remove new flows often. This will be inefficient with decision trees as node insertion – and most of all deletion – are costly operations that could consume significant CPU cycles. Instead, hash tables require many fewer CPU cycles for both insertions and deletions.
  2. TSS has O(N) memory and time complexity. In the worst case, the number of memory accesses equals the number of hash tables, and the number of hash tables can be as large as the number of rules in the database; even so, TSS is still better than decision trees in practice.
  3. TSS generalizes to an arbitrary number of packet header fields.

With a few dozen hash tables in play, a classifier lookup must traverse the subtables until a match is found. The flow cache entries in the hash tables are unique and non-overlapping; hence, the first match is the only match, and the search operation can terminate on a match. The order of subtables in the classifier is random, and the tables are created and destroyed at runtime. Figure 2 depicts the packet flow through dpcls with multiple hash tables/subtables.


Figure 2. Packet handling by dpcls on an EMC miss

dpcls Optimization Using Subtables Ranking

In OVS 2.5, the long-term support (LTS) branch, a classifier instance is created per PMD thread. For each lookup, all the active subtables are traversed until a match is found. On average, a dpcls lookup has to visit N/2 subtables for a hit, with “N” being the number of active subtables. Though a single hash table lookup is inherently fast, performance degrades because of the expensive hash computation required before each hash table lookup.

To address this issue, OVS 2.6 implements a ranking of the subtables based on the count of hits. Moreover, a dpcls instance is created per ingress port. This comes from the observation that in practice there is a strong correlation between traffic coming from an ingress port and one or a small subset of subtables that it hits. The purpose of the ranking is to sort the subtables so that the most-hit subtables are prioritized and ranked higher. This improves performance by reducing the average number of subtables that must be searched in a lookup.
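The ranking idea can be sketched as follows. This is a toy that re-sorts the subtable list on every hit for clarity; the structures and the exact re-ranking policy of the real OVS implementation differ:

```python
class RankedSubtables:
    """Toy subtable list that keeps the most-hit subtables at the front."""
    def __init__(self):
        self.subtables = []               # list of [hit_count, table_dict]

    def add_subtable(self, table):
        self.subtables.append([0, table])

    def lookup(self, key):
        # Visit subtables in rank order; frequently hit ones are checked first.
        for entry in self.subtables:
            hits, table = entry
            if key in table:
                entry[0] += 1
                # Re-sort so the most-hit subtables keep the highest rank.
                self.subtables.sort(key=lambda e: e[0], reverse=True)
                return table[key]
        return None


ranked = RankedSubtables()
ranked.add_subtable({"flow_a": "drop"})       # rarely hit subtable
ranked.add_subtable({"flow_b": "output:2"})   # frequently hit subtable
ranked.lookup("flow_b")                       # after this hit, its subtable ranks first
```

With traffic concentrated on one subtable, subsequent lookups terminate after a single hash-table probe instead of N/2 on average.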

Figure 3 depicts the creation of multiple dpcls instances for the corresponding ingress ports. In this case, there are three ingress ports, of which two are physical ports (DPDK_1, DPDK_2) and one is the vHost-user port for VM_1. The DPDK_1 dpcls, DPDK_2 dpcls, and VM_1 dpcls instances are created for the DPDK_1, DPDK_2, and VM_1 vHost-user ports, respectively. Each dpcls manages the packets coming from its corresponding port. For example, the packets from the vHost-user port of VM_1 are processed by PMD thread 1. A hash key is computed from the header fields of the ingress packet, and a lookup is done on the first-level cache EMC 1 using the hash key. On a miss, the packet is handled by VM_1 dpcls.


Figure 3. OVS-DPDK classifier in OVS 2.6 with dpcls instance per ingress port

How Hash Tables are Used for Wildcard Matching

Here we will discuss how hash tables are used to implement wildcard matching.  

Let’s assume the controller installs a flow where the rule – referred as Rule #1 from here on – is for example:

Rule #1: Src IP = “21.2.10.*”

so the match must occur on the first 24 bits, and the remaining 8 bits are wildcarded. This rule could be installed by an OVS command, such as:

$ ovs-ofctl add-flow br0 dl_type=0x0800,nw_src=21.2.10.1/24,actions=output:2

With the above rule inserted, packets with Src IP like “21.2.10.5” or “21.2.10.123” shall match the rule.
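The mask arithmetic behind this can be checked numerically. This is a small self-contained sketch; `ip_to_int` is a helper defined here for illustration, not an OVS function:

```python
def ip_to_int(ip):
    """Convert a dotted-quad IPv4 string to a 32-bit integer."""
    a, b, c, d = (int(x) for x in ip.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d


MASK_1 = 0xFFFFFF00                     # "0xFF.FF.FF.00": match the first 24 bits
rule_key = ip_to_int("21.2.10.0") & MASK_1

# Both example packets reduce to the same masked key, so both match Rule #1.
for src in ("21.2.10.5", "21.2.10.123"):
    assert ip_to_int(src) & MASK_1 == rule_key

# A packet in a different /24 produces a different masked key: a miss.
assert ip_to_int("21.2.99.5") & MASK_1 != rule_key
```

The masked key is exactly what the dpcls feeds into the hash function to locate the subtable entry.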

When a packet like “21.2.10.5” is received there will be a miss on both EMC and dpcls (see Figure 1). A matching flow will be found in the ofproto classifier. A learning mechanism will cache the found flow into dpcls and EMC. Below is a description of the details of this use case with a focus on dpcls internals only.

Store a Rule into a Subtable

Before storing the wildcarded Rule #1, we first create a proper “Mask #1” by considering the wildcarded bits of the rule. Each bit of the mask is set to 1 where a match is required on that bit position; otherwise, it is 0. So in this case, Mask #1 will be “0xFF.FF.FF.00.”

A new hash-table “HT 1” will then be instantiated as a new subtable.

The mask is applied to the rule (see Figure 4) and the resulting value is given as an input to the hash function. The hash output will point to the subtable location where Rule #1 could be stored (we’re not considering collisions here, that’s outside the scope of this document).


Figure 4. Insert Rule #1 in to HT 1

HT 1 will collect Rule #1 and any other “similar” rule (i.e., with the same fields and the same wildcard mask). For example it could store a further rule, like:

Rule #1A: Src IP = “83.83.83.*”

because this rule specifies Src IP – and no other field – and its mask is equal to Mask #1.

Please note that in case we want to insert a new rule with different fields and/or a different mask, we will need to create a new subtable.

Lookup for Packet #1

Packet #1 with Src IP = 21.2.10.99 arrived for processing. It will be searched on the unique existing hash table HT 1 (see Figure 5).

Mask #1 of the corresponding HT 1 is applied on the Src IP address field and hash is computed thereafter to find a matching entry in the subtable. In this case the matching entry is found => hit.


Figure 5. Lookup on HT 1 for ingress Packet #1

Insert a Second Rule

Now let’s assume the controller adds a second wildcard rule – for simplicity, still on the source IP address – with netmask 16, referred as Rule #2 from here on.

Rule #2: Src IP = “7.2.*.*”

$ ovs-ofctl add-flow br0 dl_type=0x0800,nw_src=7.2.21.15/16,actions=output:2

With the above inserted rule, packets with Src IP like “7.2.15.5” or “7.2.110.3” would match the rule.

Based on the wildcarded bits in the last 2 bytes, a proper “Mask #2” is created, in this case: “0xFF.FF.00.00.” (see Figure 6)

Note that HT 1 can store rules only with netmask “0xFF.FF.FF.00.” That means a new subtable HT 2 must be created to store Rule #2.

Mask #2 is applied to Rule #2 and the result is an input for the hash computation. The resulting value will point to a HT 2 location where Rule #2 will be stored.


Figure 6. Insert Rule #2 in to HT 2

Lookup for Packet #2

Packet #2 with Src IP = 7.2.45.67 arrived for processing.

As we have 2 active subtables (in this case, HT 1 and HT 2), a lookup shall be performed by repeating the search on each hash table.

We start by searching into HT 1 where the mask is “0xFF.FF.FF.00.” (see Figure 7).

The corresponding table’s mask will be applied to the Packet #2 and a hash key will then be computed to retrieve a subtable entry from HT 1.


Figure 7. Lookup on HT 1 for ingress Packet #2

The outcome is a miss, as we find no entry into HT 1.

We continue our search (see Figure 8) on the next subtable – HT 2 – and proceed in a similar manner.

Now we will use the HT 2 mask, which is “0xFF.FF.00.00.”


Figure 8. Lookup on HT 2 for ingress Packet #2

The outcome is successful because we find an entry that matches the Packet #2.
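The whole two-rule, two-packet walk-through above can be reproduced with a toy tuple-space-search table in Python. Names such as `Dpcls` are illustrative stand-ins, not OVS code:

```python
def ip_to_int(ip):
    """Convert a dotted-quad IPv4 string to a 32-bit integer."""
    a, b, c, d = (int(x) for x in ip.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d


class Dpcls:
    """Toy dpcls: one hash table (subtable) per distinct mask."""
    def __init__(self):
        self.subtables = {}                         # mask -> {masked_key: action}

    def insert(self, rule_ip, mask, action):
        # A rule with a new mask forces creation of a new subtable.
        table = self.subtables.setdefault(mask, {})
        table[ip_to_int(rule_ip) & mask] = action

    def lookup(self, pkt_ip):
        key = ip_to_int(pkt_ip)
        # Search each subtable in turn; entries are non-overlapping,
        # so the first match is the only match.
        for mask, table in self.subtables.items():
            action = table.get(key & mask)
            if action is not None:
                return action
        return None


dpcls = Dpcls()
dpcls.insert("21.2.10.0", 0xFFFFFF00, "output:2")   # Rule #1 -> HT 1 (/24 mask)
dpcls.insert("7.2.0.0",   0xFFFF0000, "output:2")   # Rule #2 -> HT 2 (/16 mask)

assert dpcls.lookup("21.2.10.99") == "output:2"     # Packet #1: hit in HT 1
assert dpcls.lookup("7.2.45.67") == "output:2"      # Packet #2: miss in HT 1, hit in HT 2
assert len(dpcls.subtables) == 2                    # two masks -> two subtables
```

Packet #2 demonstrates the cost the ranking optimization targets: it pays one wasted mask-and-hash probe in HT 1 before matching in HT 2.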

A Scenario with Multiple Input Ports

This use case demonstrates the classifier behavior on an Intel® Ethernet Controller X710 NIC, which features four 10G input ports. A dedicated PMD thread processes the incoming traffic on each port. For simplicity, the figure shows just pmd60 and pmd63 as the PMD threads for port 0 and port 3, respectively. Figure 9 shows the details:

  1. The Port 0 classifier (pmd60 dpcls) after processing two packets with Src IP addresses [21.2.10.99] and [7.2.45.67].
  2. The pmd60 EMC after processing the same two packets.
  3. pmd63 processing packets from port 3; the figure shows the content of its tables after processing the packet with IP [5.5.5.1].
  4. The new packet [21.2.10.2] will not find a match in the pmd60 EMC; instead it will find a match in the Port 0 classifier. A new entry will also be added to the pmd60 EMC.
  5. The new packet [5.5.5.8] will get a miss on the pmd63 EMC; however, it will find a match in the Port 3 classifier. A new entry will then be added to the pmd63 EMC.


Figure 9. dpcls with multiple PMD threads in OVS-DPDK

Conclusion

In this article, we have described how the user space classifier works through different test cases and demonstrated how the various tables in OVS-DPDK are set up. We have also discussed the shortcomings of the classifier in the OVS 2.5 release and how they are addressed in OVS 2.6. A follow-up blog on the OVS-DPDK classifier will discuss the code flow, classifier bottlenecks, and ways to improve classifier performance on Intel® architecture.

For Additional Information

For any questions, feel free to follow up on the Open vSwitch discussion mailing list.

Videos and Articles

To learn more about OVS with DPDK, check out the following videos and articles on Intel® Developer Zone and Intel® Network Builders University.

Open vSwitch with DPDK Architectural Deep Dive

DPDK Open vSwitch: Accelerating the Path to the Guest

The Design and Implementation of Open vSwitch

Tuple Space Search

To learn more about the Tuple Space Search:

V. Srinivasan, S. Suri, and G. Varghese. Packet Classification Using Tuple Space Search. In Proc. of SIGCOMM, 1999

About the Authors

Bhanuprakash Bodireddy is a Network Software Engineer with Intel. His work is primarily focused on accelerated software switching solutions in user space running on Intel® architecture. His contributions to OvS-DPDK include usability documentation, Keep-Alive feature, and improving the datapath Classifier performance.

Antonio Fischetti is a Network Software Engineer with Intel. His work is primarily focused on accelerated software switching solutions in user space running on Intel® architecture. His contributions to OVS with DPDK are mainly focused on improving the datapath Classifier performance.

DPDK/NFV DevLab Trip Report: July 11, 2016


The Intel® Developer Zone Data Plane Development Kit (DPDK) DevLab was designed to both improve platform knowledge and deepen the interest of new and current networking virtualization and DPDK developers. It was a full-day event at the Intel Santa Clara Campus, with hosted presentations, demos, and hands-on training for the developers in attendance.

There were ten presentations by experts from Intel and Berkeley, two hands-on sessions, three in-class demos, and four independent software vendor demos so participants could learn from architects and experts from industry, academia, and Intel.

This report contains videos and PowerPoint slides that capture the day’s presentations. You can use them to learn, review, and get involved in the DevLab. The two hands-on sessions from the DevLab are not posted here because readers would not have access to the hardware set up at the lab, so those recordings would be of limited value. Updates from Intel® Network Builders University and the DPDK open source community are included so you can refer to these resources outside of Intel Developer Zone for your learning.

Table of Contents

Software Defined Infrastructure/Network Function Virtualization/ONP Ingredients
DPDK Overview and Core APIs
DPDK API and Virtual Infrastructure
DPDK and Virtio
Open vSwitch* with DPDK: Architecture and Performance
BESS  ̶  A Virtual Switch Tailored for NFV
Intel® VTune™ and Performance Optimizations
DPDK Performance Benchmarking
DPDK Open Source Community Update
Intel Network Builders University

Software Defined Infrastructure/Network Function Virtualization/ONP Ingredients

This presentation starts with an overview of Software Defined infrastructure (SDI) and describes how Software Defined Networking (SDN) and Network Function Virtualization (NFV) come together to achieve a flexible and scalable independent software framework: OpenStack*. Intel® Open Network Platform (Intel® ONP), which is based on Open Platform for NFV (OPNFV), is introduced here. OPNFV aims to provide a reference architecture for NFV and SDN deployments in the real world. Presented by Sujata Tibrewala and Ashok Emani.

View slides

DPDK Overview and Core APIs

This presentation centers on DPDK design and how it is used. In addition to an overview of DPDK, Network Function Virtualization (NFV), Vector Packet Processing (VPP)/Fast Data I/O (FD.io), and the new Transport Layer Development Kit (TLDK) used in current deployments are discussed. Further, the presentation shows how DPDK is used within Virtual Network Function (VNF)/NFV systems to accelerate these cloud applications, including how DPDK is used to improve the performance of Cisco’s routing software. Presented by Keith Wiles.

View slides

DPDK API and Virtual Infrastructure

This presentation showcases DPDK API Virtualization Support and how it opens up multiple network interfaces that can be used to deliver packets from the physical Network Interface Card (NIC) to a VM/VNF in the NFV setup. In addition, the presentation lists various virtual devices with available Poll Mode Drivers in the DPDK API, and delivers insights into how to properly build your NFVi from the beginning. Presented by Rashmin Patel.

View slides

DPDK and Virtio

The presentation starts with an overview of Virtio and how it is used with DPDK and in a VNF/NFV cloud. A simple example of how to use Virtio APIs follows, and the presentation finishes with the design of VNF/NFV software with respect to how these layers combine into a cloud product. Presented by Keith Wiles.

View slides

Open vSwitch with DPDK: Architecture and Performance

This presentation covers Open vSwitch (OVS), a production-quality, multilayer virtual switch that supports SDN control semantics via the OpenFlow protocol and its OVSDB management interface. Native OVS performance is not sufficient to support Telco NFV use cases, so the presentation further shows how DPDK is integrated into native OVS to boost performance. The presentation specifically covers OVS multilevel table support, vhost multi-queue, and related features used with DPDK to achieve maximum performance. The presentation ends with benchmark results on OVS for the most common use cases. Presented by Irene Liew.

View slides

BESS - A Virtual Switch Tailored for NFV

This presentation discusses Berkeley Extensible Software Switch (BESS), an extensible platform for rapid development of software switches. BESS allows you to implement a fully customizable packet processing data path. In this session, we present some technical details of BESS and then demonstrate how to implement a custom virtual switch in just 30 minutes. Presented by Joshua Reich and Sangjin Han.

View BESS Intro slides

View BESS Walkthrough slides

Intel® VTune™ and Performance Optimizations

This presentation is a tutorial about performance optimization best practices and includes a demo with a link to a do-it-yourself cookbook called “Profiling DPDK Code with Intel® VTune™ Amplifier.” Role-playing sessions are used, with the audience acting as various building blocks of a CPU pipeline. It emphasizes the thought process for analysis of Non-Uniform Memory Access (NUMA) affinity, followed by a discussion of microarchitecture optimizations with VTune. The presentation concludes with information about how viewers can replicate the demo shown with VTune profiling DPDK micro benchmarks and identify hotspots in their own applications. Presented by Muthurajan Jayakumar (M Jay).

View slides

DPDK Performance Benchmarking

This presentation describes the standard process for performing high-throughput networking performance benchmarking tests using the DPDK Layer 3 forwarding (l3fwd) sample application workload. This includes hardware and software configurations for performance optimization and tuning. The session is also a tutorial on reading DPDK performance reports produced by the Intel® NPG PMA team posted on http://cat.intel.com (Note: NDA required) and for performing some essential platform performance tuning. Presented by Georgii Tkachuk.

View slides

DPDK Open Source Community Update

This presentation describes the history of the DPDK open source community. It describes the increasing level of multi-architecture support now available in DPDK, including information on the number of contributions and main contributors to DPDK releases. It further explains how new members can contribute and provides links to more information. Presented by Tim O’Driscoll.

View slides

Intel Network Builders University

This presentation gives an overview of the Intel Network Builders University. Network Builders University is an NFV/SDN Training Program for Network Builders Partners and end users. Presented by George Ranallo.

View slides

For More Information

For more information about topics in this article visit the Intel® Developer Zone's Networking site and Intel® Network Builders University. If you're in the San Francisco Bay area, check out the Out Of The Box Network Developers Meetup.


How to Get Started as a Developer in AI


The promise of artificial intelligence has captured our cultural imagination since at least the 1950s—inspiring computer scientists to create new and increasingly complex technologies, while also building excitement about the future among everyday consumers. What if we could explore the bottom of the ocean without taking any physical risks? Or ride around in driverless cars on intelligent roadways? While our understanding of AI—and what’s possible—has changed over the past few decades, we have reason to believe that the age of artificial intelligence may finally be here. So, as a developer, what can you do to get started? This article will go over some basics of AI, and outline some tools and resources that may help.

First Things First—What Exactly is AI?

While there are a lot of different ways to think about AI and a lot of different techniques to approach it, the key to machine intelligence is that it must be able to sense, reason, and act, then adapt based on experience.

  • Sense—Identify and recognize meaningful objects or concepts in the midst of vast data. Is that a stoplight? Is it a tumor or normal tissue?
     
  • Reason—Understand the larger context, and make a plan to achieve a goal. If the goal is to avoid a collision, the car must calculate the likelihood of a crash based on vehicle behaviors, proximity, speed, and road conditions.
     
  • Act—Either recommend or directly initiate the best course of action. Based on vehicle and traffic analysis, it may brake, accelerate, or prepare safety mechanisms.
     
  • Adapt—Finally, we must be able to adapt algorithms at each phase based on experience, retraining them to be ever more intelligent. Autonomous vehicle algorithms should be re-trained to recognize more blind spots, factor new variables into the context, and adjust actions based on previous incidents.

What Does AI Look Like Today?

These days, artificial intelligence is an umbrella term to represent any program that can sense, reason, act, and adapt. Two ways that developers are actually getting machines to do that are machine learning and deep learning.

  • In machine learning, learning algorithms build a model from data, which they can improve on as they are exposed to more data over time. There are four main types of machine learning: supervised, unsupervised, semi-supervised, and reinforcement learning. In supervised machine learning, the algorithm learns to identify data by processing and categorizing vast quantities of labeled data. In unsupervised machine learning, the algorithm identifies patterns and categories within large amounts of unlabeled data—often much more quickly than a human brain could. You can read a lot more about machine learning in this article.
     
  • Deep learning is a subset of machine learning in which multilayered neural networks learn from vast amounts of data.
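As a toy illustration only (plain Python, not a deep learning framework), a tiny two-layer network can learn XOR, a function no single-layer model can represent; all names and parameters here are invented for the sketch:

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# XOR data: not linearly separable, so a hidden layer is required.
DATA = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

H = 4                                                  # hidden units
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

def forward(x):
    """Compute hidden activations and the network output for one input."""
    h = [sigmoid(sum(w * xi for w, xi in zip(ws, x)) + b)
         for ws, b in zip(w1, b1)]
    y = sigmoid(sum(w * hi for w, hi in zip(w2, h)) + b2)
    return h, y

def loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in DATA)

initial = loss()
lr = 0.5
for _ in range(2000):                                  # train by gradient descent
    for x, t in DATA:
        h, y = forward(x)
        dy = (y - t) * y * (1 - y)                     # output-layer delta
        for j in range(H):
            dh = dy * w2[j] * h[j] * (1 - h[j])        # hidden-layer delta
            w2[j] -= lr * dy * h[j]
            for i in range(2):
                w1[j][i] -= lr * dh * x[i]
            b1[j] -= lr * dh
        b2 -= lr * dy
```

After training, the squared-error loss is lower than before training: the multilayered network has learned from the data, which is the core idea deep learning scales up to millions of parameters.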

AI in Action: A Machine Learning Workflow

As we discussed above, artificial intelligence is able to sense, reason, and act, then adapt based on experience. But what does that look like? Here is a general workflow for machine learning:

  1. Data Acquisition—First, you need huge amounts of data. This data can be collected from any number of sources, including sensors in wearables and other objects, the cloud, and the Web.
     
  2. Data Aggregation and Curation—Once the data is collected, data scientists will aggregate and label it (in the case of supervised machine learning).
     
  3. Model Development—Next, the data is used to develop a model, which then gets trained for accuracy and optimized for performance.
     
  4. Model Deployment and Scoring—The model is deployed in an application, where it is used to make predictions based on new data.
     
  5. Update with New Data—As more data comes in, the model becomes even more refined and more accurate. For instance, as an autonomous car drives, the application pulls in real-time information through sensors, GPS, 360-degree video capture, and more, which it can then use to optimize future predictions.
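The five steps above can be sketched end to end with a toy nearest-centroid model in plain Python. This is purely illustrative; real pipelines rely on frameworks and far larger datasets, and every name here is invented for the sketch:

```python
# 1. Data acquisition + 2. aggregation and curation: a tiny labeled dataset.
data = [((1.0, 1.2), "A"), ((0.9, 1.0), "A"),
        ((4.0, 4.2), "B"), ((4.1, 3.9), "B")]

# 3. Model development: "train" by computing one centroid per label.
def train(samples):
    sums, counts = {}, {}
    for (x, y), label in samples:
        sx, sy = sums.get(label, (0.0, 0.0))
        sums[label] = (sx + x, sy + y)
        counts[label] = counts.get(label, 0) + 1
    return {lbl: (sx / counts[lbl], sy / counts[lbl])
            for lbl, (sx, sy) in sums.items()}

# 4. Deployment and scoring: predict the label of the nearest centroid.
def predict(model, point):
    def dist2(c):
        return (c[0] - point[0]) ** 2 + (c[1] - point[1]) ** 2
    return min(model, key=lambda lbl: dist2(model[lbl]))

model = train(data)

# 5. Update with new data: retrain as fresh labeled samples arrive.
data.append(((3.8, 4.0), "B"))
model = train(data)
```

Each retraining pass refines the centroids, which is the simplest possible version of the "model becomes more refined as more data comes in" step.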

Opportunities for AI Developers

One of the most exciting things about AI is that it has the potential to revolutionize not just the computing industry, or the software industry, but every industry that touches our lives. It will transform society in much the same way as the industrial revolution, the technical revolution, and the digital revolution altered every aspect of daily life. Intel provides the foundation, frameworks, and strategies to power artificial intelligence. And when it comes to deep learning and machine learning technologies, Intel can help developers deliver projects better, faster, and more cost-effectively.

For developers, the expansion of the AI field means that you have the potential to apply your interest and knowledge of AI toward an industry that you’re also interested in, like music or sports or healthcare. As you explore the world of AI, think about what else you find interesting, and how you’d like to contribute to that field in a meaningful way. The ideas are limitless, but here are a few examples to get you thinking.
 

So, Where Should I Get Started? Intel Can Help.

Intel is supporting rapid innovation in artificial intelligence. The Intel Software Developer Zone for AI is a great starting point for finding community, tools, and training. Here are some specific links to get you started.

  • Join the AI Community– There is a robust community of AI developers worldwide. Connect with them on Facebook and LinkedIn, and look for Meetups, workshops, and events happening in your area. Intel regularly participates in conferences and puts on webinars about AI topics— learn more here.
     
  • Are you a student? See if the Intel® Software Student Developer Program is at your campus. Get hands-on training from industry experts, professors and professionals to build your skillset. Learn more here.
     
  • Optimized frameworks– Caffe* is one of the most popular community applications for image recognition, and Theano* is designed to help write models for deep learning. Both frameworks have been optimized for use with Intel® architecture. Learn how to install and use these frameworks, and find a number of useful libraries, here.
     
  • Hardware - Intel® Xeon Phi™ processor family– These massively multicore processors deliver powerful, highly parallel performance for machine learning and deep learning workloads. Get a brief overview of deep learning using Intel® architectures here, and learn more about the Intel® Xeon Phi™ processor family’s competitive performance for deep learning here.

The topic of AI is incredibly deep, and we’ve only scratched the surface so far. Come back soon for more articles about what’s happening and how you can get involved.  

Innovative Media Solutions Showcase


New, Inventive Media Solutions Made Possible with Intel Media Software Tools

With Intel media software tools, video solutions providers can create inspiring, innovative products that capitalize on next gen capabilities like real-time 4K HEVC, virtual reality, simultaneous multi-camera streaming, high-dynamic range (HDR) content delivery, video security solutions with smart analytics, and more. Check these out. Envision using Intel's advanced media tools to transform your media, video, and broadcasting solutions for a competitive edge with high performance and efficiency, and room for higher profits, market growth, and more reach.

Amazing Video Solution Enables Game-changing Sports Calls

Slomo.tv innovated its videoReferee* systems, which provide instant high-quality video replays from up to 18 cameras direct to referee viewing systems. Referees can view video from 4 cameras simultaneously at different angles, in slow motion, or using zoom for objective, error-free gameplay analysis. The Kontinental Hockey League; basketball leagues in Korea, Russia, and Lithuania; and the Rio Olympics have used videoReferee. Read More.

 

Immersive Experiences with Real-time 4K HEVC Streaming

See how Wowza, Rivet VR, and Intel worked together to deliver a live-streamed 360-degree virtual-reality jazz concert at the legendary Blue Note Jazz Club in New York using hardware-assisted 4K video. See just how: Video | Article

 

    Mobile Viewpoint Delivers HEVC HDR Live Broadcasting

    Mobile Viewpoint delivers live HEVC HDR broadcasting at the scenes of breaking news action. The company developed a mobile encoder running on 6th generation Intel® processors using the graphics-accelerated codec to create low power hardware-accelerated encoding and transmission, and optimized by Intel® Media Server Studio Pro Edition for HEVC compression and quality. The results: fast, high-quality, video broadcasting on-the-go so the world stays informed of fast-changing events. Read more.

     

    Sharp's Innovative Security Camera is built with Intel® Media Technologies

    With security concerns now part of everyday life, SHARP built an omnidirectional, wireless, intelligent digital surveillance camera for these needs. Built with an Intel® Celeron® processor (N3160) and SHARP 12-megapixel image sensors, and using the Intel® Media SDK for hardware-accelerated encoding, the QG-B20C camera can capture video in 4Kx3K resolution, provide all-around views, and includes intelligent automatic detection functions. Read more.

     

    MAGIX's Video Editing Software Provides HEVC to Broad Users

    While elite video pros have access to high-powered video production applications with bells and whistles mostly available only to enterprises, MAGIX unveiled Video Pro X, a video editing software for semi-pro video production. Optimized with Intel Media Server Studio, Video Pro X provides HEVC encoding to prosumers and semi-pros to help alleviate a bandwidth-constrained internet where millions of videos are shared. Read more.

     

    Comprimato

    JPEG2000 Codec Now Native for Intel Media Server Studio

    Comprimato worked with Intel to provide additional video encoding technology as part of Intel Media Server Studio through a software plug-in for high-quality, low-latency JPEG2000 encoding. This powerful encoding option allows users to transcode JPEG2000 contained in IMF, AS02 or MXF OP1a files to distribution formats like AVC and HEVC, and enables software-defined processing of IP video streams in broadcast applications. By using Media Server Studio to access hardware acceleration and programmable graphics in Intel GPUs, encoding can run fast with reduced latency, which is important in live broadcasting. Read more.

     

    SPB TV AG Showcases Innovative Mobile TV/On-demand Transcoder

    SPB TV AG innovated its single-platform Astra* transcoder, a pro solution for fast, high-quality processing of linear TV broadcast and on-demand video streams from a single head-end to any mobile, desktop or home device. The transcoder uses Intel® Core™ i7 processors with media accelerators and delivers high-density transcoding optimized by Intel Media Server Studio. “We are delighted that our collaboration with Intel ensures faster and high quality transcoding, making our new product performance remarkable,” said CEO of SPB TV AG Kirill Filippov. Read more.

     

    SURF Communications collaborates with Intel for NFV & WebRTC all-inclusive platforms

    SURF Communication Solutions announced SURF ORION-HMP* and SURF MOTION-HMP*. The SURF-HMP architecture delivers fast, high-quality media acceleration, facilitating up to 4K video resolutions and ultra-high-capacity HD voice and video processing. The system runs on Intel® processors with integrated graphics and is optimized by Intel Media Server Studio. SURF-HMP is driven by a powerful processing engine that supports all major video and voice codecs and protocols in use, and delivers a multitude of applications for transcoding, conferencing/mixing, MRF, playout, recording, messaging, video surveillance, encryption and more. Read more.

     


    More about Intel Media Software Tools

    Intel Media Server Studio - Provides an Intel® Media SDK, runtimes, graphics drivers, media/audio codecs, and advanced performance and quality analysis tools to help video solution providers deliver fast, high-density media transcoding.

    Intel Media SDK - A cross-platform API for developing client and media applications for Windows*. Achieve fast video playback, encode, processing, media format conversion, and video conferencing. Accelerate RAW video and image processing. Get audio decode/encode support.

    Accelerating Media Processing: Which Media Software Tool do I use? English | Chinese

     

    Introducing DNN primitives in Intel(R) MKL


        Deep Neural Networks (DNNs) are on the cutting edge of the machine learning domain. These algorithms received wide industry adoption in the late 1990s and were initially applied to tasks such as handwriting recognition on bank checks, where they matched and even exceeded human capabilities. Today DNNs are used for image recognition, video and natural language processing, and for solving complex visual understanding problems such as autonomous driving. DNNs are very demanding in terms of compute resources and the volume of data they must process. To put this into perspective, the modern image recognition topology AlexNet takes a few days to train on modern compute systems and uses slightly over 14 million images. Tackling this complexity requires well-optimized building blocks that decrease training time to meet the needs of industrial applications.

        Intel MKL 2017 introduces the DNN domain, which includes functions necessary to accelerate the most popular image recognition topologies, including AlexNet, VGG, GoogleNet and ResNet.

        These DNN topologies rely on a number of standard building blocks, or primitives, that operate on data in the form of multidimensional sets called tensors. These primitives include convolution, normalization, activation and inner product functions along with functions necessary to manipulate tensors. Performing computations effectively on Intel architectures requires taking advantage of SIMD instructions via vectorization and of multiple compute cores via threading. Vectorization is extremely important as modern processors operate on vectors of data up to 512 bits long (16 single-precision numbers) and can perform up to two multiply and add (Fused Multiply Add, or FMA) operations per cycle. Taking advantage of vectorization requires data to be located consecutively in memory. As typical dimensions of a tensor are relatively small, changing the data layout introduces significant overhead; we strive to perform all the operations in a topology without changing the data layout from primitive to primitive.

    Intel MKL provides primitives for most widely used operations implemented for vectorization-friendly data layout:

    • Direct batched convolution
    • Inner product
    • Pooling: maximum, minimum, average
    • Normalization: local response normalization across channels (LRN), batch normalization
    • Activation: rectified linear unit (ReLU)
    • Data manipulation: multi-dimensional transposition (conversion), split, concat, sum and scale.
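To make the vocabulary above concrete, here is what two of these primitives compute, sketched in NumPy. This is an illustration only; the actual Intel MKL DNN primitives are C functions that operate on optimized tensor layouts, and the helper names below are invented for this sketch.

```python
import numpy as np

def relu(x):
    # rectified linear unit activation: negative values clamp to zero
    return np.maximum(x, 0.0)

def max_pool_2x2(x):
    # 2x2 max pooling with stride 2 on an (H, W) feature map
    h, w = x.shape
    return x[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

x = np.array([[1., -2., 3., 0.],
              [4., 5., -6., 7.],
              [0., 1., 2., 3.],
              [-1., -2., 8., 9.]])
print(relu(np.array([-1.0, 2.0])))  # [0. 2.]
print(max_pool_2x2(x))              # [[5. 7.] [1. 9.]]
```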

    Programming model

        Execution flow for the neural network topology includes two phases: setup and execution. During the setup phase the application creates descriptions of all DNN operations necessary to implement scoring, training, or other application-specific computations. To pass data from one DNN operation to the next one, some applications create intermediate conversions and allocate temporary arrays if the appropriate output and input data layouts do not match. This phase is performed once in a typical application and followed by multiple execution phases where actual computations happen.

        During the execution step the data is fed to the network in a plain layout like BCWH (batch, channel, width, height) and is converted to a SIMD-friendly layout. As data propagates between layers the data layout is preserved and conversions are made when it is necessary to perform operations that are not supported by the existing implementation.
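As a sketch of why blocked layouts help, the conversion from a plain layout to a channel-blocked, SIMD-friendly layout can be pictured with NumPy. This illustrates the idea only, not the Intel MKL API; the exact layouts MKL chooses are internal, and the function name here is an assumption for the sketch.

```python
import numpy as np

def to_channel_blocked(x, block=8):
    # Reorder a plain (batch, channel, height, width) tensor so that
    # groups of `block` channels are stored contiguously in the last
    # dimension, letting a SIMD unit load them in one vector.
    n, c, h, w = x.shape
    assert c % block == 0, "channel count must be a multiple of the block size"
    # (N, C, H, W) -> (N, C/block, block, H, W) -> (N, C/block, H, W, block)
    return x.reshape(n, c // block, block, h, w).transpose(0, 1, 3, 4, 2)

x = np.arange(2 * 16 * 4 * 4, dtype=np.float32).reshape(2, 16, 4, 4)
blocked = to_channel_blocked(x)
print(blocked.shape)  # (2, 2, 4, 4, 8)
```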

     

        Intel MKL DNN primitives implement a plain C application programming interface (API) that can be used in existing C/C++ DNN frameworks. An application that calls Intel MKL DNN functions typically involves the following stages:

        Setup stage: for a given DNN topology, the application creates all DNN operations necessary to implement scoring, training, or other application-specific computations. To pass data from one DNN operation to the next one, some applications create intermediate conversions and allocate temporary arrays if the appropriate output and input data layouts do not match.

        Execution stage: at this stage, the application calls the DNN primitives that apply the DNN operations, including necessary conversions, to the input, output, and temporary arrays.

        Examples of training and scoring computations can be found in the MKL package directory: <mklroot>\examples\dnnc\source

    Performance

    Caffe, a deep learning framework developed by the Berkeley Vision and Learning Center (BVLC), is one of the most popular community frameworks for image recognition. Together with AlexNet, a neural network topology for image recognition, and ImageNet, a database of labeled images, Caffe is often used as a benchmark. The chart below shows a performance comparison of the original Caffe implementation and the Intel-optimized version, which takes advantage of optimized matrix-matrix multiplication and the new Intel MKL 2017 DNN primitives, on the Intel Xeon E5-2699 v4 (codename Broadwell) and Intel Xeon Phi 7250 (codename Knights Landing).

    Summary

    DNN primitives available in Intel MKL 2017 can be used to accelerate Deep Learning workloads on Intel Architecture. Please refer to Intel MKL Developer Reference Manual and examples for detailed information.

     

     

    Transfer learning using neon


    Introduction

    In the last few years plenty of deep neural net (DNN) models have been made available for a variety of applications such as classification, image recognition and speech translation. Typically, each of these models is designed for a very specific purpose, but can be extended to novel use cases. For example, one can train a model to recognize numbers and characters in an image and then reuse that model to read signposts in a broader model or a dataset used in autonomous driving.

    In this blog post we will:

    1. Explain transfer learning and some of its applications
    2. Explain how neon can be used for transfer learning
    3. Walk through example code that uses neon for transferring a pre-trained model to a new dataset
    4. Discuss the merits of transfer learning with some results

    Transfer Learning

    Consider the task of visual classification. Convolutional neural networks (CNN) are organized into several layers with each layer learning features at a different scale. The lower level layers recognize low level features such as the fur of a cat or the texture on a brick wall. Higher level layers recognize higher level features such as the body shape of a walking pedestrian or the configuration of windows in a car.

    Features learnt at various scales offer excellent feature vectors for various classification tasks. They fundamentally differ from feature vectors obtained by hand-crafted, kernel-based algorithms because they are learnt through extensive training runs. These training runs systematically refine the model parameters so that the typical error between the predicted output, yp = f(xt) (where xt is the observed real-world signal and f() is the model), and the ground truth, yt, is made as small as possible.
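The training objective just described can be made concrete with a toy example, not taken from the post: repeatedly adjust a model parameter so the error between the prediction yp = f(xt) and the ground truth yt shrinks. Here f is a one-parameter linear model and the error is the mean squared error.

```python
import numpy as np

rng = np.random.default_rng(0)
xt = rng.uniform(-1, 1, size=100)
yt = 3.0 * xt                          # ground truth generated with w = 3

w = 0.0                                # model parameter to learn
for _ in range(200):
    yp = w * xt                        # model prediction f(xt)
    grad = np.mean(2 * (yp - yt) * xt) # d(MSE)/dw
    w -= 0.5 * grad                    # gradient-descent step

print(round(w, 3))  # 3.0
```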

    There are several examples of reusing the features learnt by a well-trained CNN. Oquab et al. [1] show how the features of an AlexNet model trained on images with a single object can be used to recognize objects in more complex images taken in the real world. Szegedy et al. [2] show that given a very deep neural network, the features learnt by only half the layers of the network can be used for visual classification. Bell et al. [3] show that material features (such as wood, glass, etc.) learnt by various pre-trained CNNs such as AlexNet and GoogLeNet can be used for other tangential tasks such as image segmentation. The features learnt by a pre-trained network work so well because they capture the general statistics, the spatial coherence and the hierarchical meta relationships in the data.

    Transferring Learning with neon

    Neon not only excels in training and inference of DNNs, but also delivers a rich ecosystem around them. For example, you can serialize learned models, load pre- or partially-trained models, choose from several DNNs built by industry experts, and run them in the cloud without any physical infrastructure of your own. You can get a good overview of the neon API here.

    You can load pre-trained weights of a model and access them at a per-layer level with two lines of code as follows:

    from neon.util.persist import load_obj
    pre_trained_model = load_obj(filepath)
    pre_trained_layers = pre_trained_model['model']['config']['layers']

    You can then transfer the weights from these pre-learnt layers to a compatible layer in your own model with one line of code as follows:

    layer_of_new_model.load_weights(pre_trained_layer, load_states=True)

    Then the task of transferring weights from a pre-learnt model to a few select layers of your model is straightforward:

    new_layers = [l for l in new_model.layers.layers]
    for i, layer in enumerate(new_layers):
        if load_pre_trained_weight(i, layer):
            layer.load_weights(pre_trained_layers[i], load_states=True)

    That’s it! You have selectively transferred your pre-trained model into neon. In the rest of this post, we will discuss: 1) how to structure your new model, 2) how to selectively write code and maximally reuse the neon framework and 3) how to quickly train your new model to very high accuracy in neon without having to go through an extensive new training exercise. We will discuss this in the context of implementing the work of Oquab et. al. [1].

    General Scene Classification using Weights Trained on Individual Objects

    ImageNet is a very popular dataset whose training images mostly depict individual objects drawn from 1000 different classes. It is an excellent database for obtaining feature vectors representing individual objects. However, pictures taken in the real world tend to be much more complex, with many instances of objects captured in a single image at various scales. These scenes are further complicated by occlusions. This is illustrated in the figure below, where you find many instances of people and cows at varying degrees of scale and occlusion.

    Classification in such images is typically done using two techniques: 1) using a sliding multiscale sampler which tries to classify small portions of the image and 2) selectively feeding region proposals discovered by more sophisticated algorithms that are then fed into the DNN for classification. An implementation of the latter approach using Fast R-CNN[4] can be found here. Fast R-CNN also uses transfer learning to accelerate its training speed. In this section we will discuss the former approach which is easier to implement. Our implementation can be found here. Our implementation trains on the Pascal VOC dataset using an AlexNet model that was pre-trained on the ImageNet dataset.

    The core structure of the implementation is simple:

    def main():
    
        # Collect the user arguments and hyper parameters
        args, hyper_params = get_args_and_hyperparameters()
    
        # setup the CPU or GPU backend
        be = gen_backend(**extract_valid_args(args, gen_backend))
    
        # load the training dataset. This will download the dataset
        # from the web and cache it locally for subsequent use.
        train_set = MultiscaleSampler('trainval', '2007', ...)
    
        # create the model by replacing the classification layer
        # of AlexNet with new adaptation layers
        model, opt = create_model( args, hyper_params)
    
        # Seed the Alexnet conv layers with pre-trained weights
        if args.model_file is None and hyper_params.use_pre_trained_weights:
            load_imagenet_weights(model, args.data_dir)
    
        train( args, hyper_params, model, opt, train_set)
    
        # Load the test dataset. This will download the dataset
        # from the web and cache it locally for subsequent use.
        test_set = MultiscaleSampler('test', '2007', ...)
        test( args, hyper_params, model, test_set)
    
        return

    Creating the Model

     

    The structure of our new neural net is the same as the pre-trained AlexNet, except that we replace its final classification layer with two affine layers and a dropout layer, which serve to adapt the neural net trained on the labels of ImageNet to the new set of labels of the Pascal VOC dataset. With the simplicity of neon, that amounts to replacing this line of code (see create_model())

    # train for the 1000 labels of ImageNet
    Affine(nout=1000, init=Gaussian(scale=0.01),
           bias=Constant(-7), activation=Softmax())

    with these:

    Affine(nout=4096, init=Gaussian(scale=0.005),
           bias=Constant(.1), activation=Rectlin()),
    Dropout(keep=0.5),
    # train for the 21 labels of PascalVOC
    Affine(nout=21, init=Gaussian(scale=0.01),
           bias=Constant(0), activation=Softmax())

    Since we are already using a pre-trained model, we just need about 6-8 epochs of training. So we'll use a small learning rate of 0.0001. Furthermore, we will reduce that learning rate aggressively every few epochs and use a high momentum component, because the pre-learned weights are already close to a local minimum. These are all done as hyper parameter settings:

    if hyper_params.use_pre_trained_weights:
        # This will typically train in 5-10 epochs. Use a small learning rate
        # and quickly reduce every few epochs.
        s = 1e-4
        hyper_params.learning_rate_scale = s
        hyper_params.learning_rate_sched = Schedule(step_config=[15, 20],
                                                    change=[0.5*s, 0.1*s])
        hyper_params.momentum = 0.9
    else:
        # need to actively manage the learning rate if the
        # model is not pre-trained
        s = 1e-2
        hyper_params.learning_rate_scale = 1e-2
        hyper_params.learning_rate_sched = Schedule(
                                step_config=[8, 14, 18, 20],
                                change=[0.5*s, 0.1*s, 0.05*s, 0.01*s])
        hyper_params.momentum = 0.1

    These powerful hyper parameters are enforced with one line of code in create_model():

    opt = GradientDescentMomentum(hyper_params.learning_rate_scale,
                                  hyper_params.momentum, wdecay=0.0005,
                                  schedule=hyper_params.learning_rate_sched)

    Multiscale Sampler

    The 2007 Pascal VOC dataset supplies several rectangular regions of interest (ROI) per image with a label for each of the ROI. Neon ships with a loader of the Pascal VOC dataset. We’ll create a dataset loader by creating a class that derives from the PASCALVOCTrain class of that dataset.

    We will sample the input images at successively refined scales of [1., 1.3, 1.6, 2., 2.3, 2.6, 3.0, 3.3, 3.6, 4., 4.3, 4.6, 5.] and collect 448 patches. The sampling process at a given scale is simply (see compute_patches_at_scale()):

    size = (np.amin(shape)-1) / scale
    num_samples = np.ceil( (shape-1) / size)
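To make the sampling arithmetic above concrete, here is a worked example; the image shape and helper name are illustrative, not taken from the neon example code.

```python
import numpy as np

def patches_at_scale(shape, scale):
    # shape is the (height, width) of the image; higher scale means
    # smaller square patches, and therefore more samples per dimension
    shape = np.asarray(shape, dtype=float)
    size = (np.amin(shape) - 1) / scale        # side length of a square patch
    num_samples = np.ceil((shape - 1) / size)  # patches along each dimension
    return size, num_samples.astype(int)

size, num = patches_at_scale([375, 500], 2.0)
print(size)  # 187.0
print(num)   # [2 3]
```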

    Since the patches are generated rather than derived from the ground truth, we need to assign each one a label. A patch is assigned the label of the ROI with which it significantly overlaps. The overlap criteria we choose are that at least 20% of a patch's area must overlap with that of an ROI, and at least 60% of that ROI's area has to be covered by the overlap region. If no ROI, or more than one ROI, meets these criteria for a given patch, we label that patch as background (see get_label_for_patch()). Typically, the background patches tend to dominate, so during training we bias the sampling to carry more non-background patches (see resample_patches()). All of the sampling is done dynamically within the __iter__() function of the MultiscaleSampler. This function is called when neon asks the dataset to supply the next mini-batch worth of data. The motivation behind this process is illustrated in figure 4 of [1].
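A minimal sketch of the labeling rule just described: at least 20% of the patch's area must lie inside the ROI, and that overlap must cover at least 60% of the ROI's area. Boxes are (x1, y1, x2, y2); the helper names are assumptions for this sketch, not the names used in the neon example.

```python
def intersection_area(a, b):
    # area of the overlap rectangle between boxes a and b (0 if disjoint)
    w = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    h = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    return w * h

def area(box):
    return (box[2] - box[0]) * (box[3] - box[1])

def label_for_patch(patch, rois, background=0):
    matches = []
    for label, roi in rois:
        inter = intersection_area(patch, roi)
        if inter >= 0.2 * area(patch) and inter >= 0.6 * area(roi):
            matches.append(label)
    # exactly one qualifying ROI gives the patch its label; zero or
    # several qualifying ROIs make the patch background
    return matches[0] if len(matches) == 1 else background

rois = [(1, (0, 0, 50, 50)), (2, (60, 60, 100, 100))]
print(label_for_patch((0, 0, 60, 60), rois))        # 1
print(label_for_patch((200, 200, 260, 260), rois))  # 0 (background)
```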

    We use this patch sampling method for both training and inference. The MultiscaleSampler feeds neon a minibatch worth of input and label data, while neon is not even aware that a meta form of multiscale learning is in progress. Since there are more patches per image than the minibatch size, a single image will feed multiple mini-batches during both training and inference. During training we simply use the CrossEntropyMulti cost function that ships with neon. During inference we leverage neon's flexibility by defining our own cost function.

    Inference

    We do a multi-class classification during inference by predicting the presence or absence of a particular object label in the image. We do this on a per-class basis by skewing the class predictions with an exponent and accumulating this skewed value across all the patches inferred on the image. In other words, the score S(i,c) for a class c in image i is the sum of the individual patch scores P(j,c) for that class c, raised to an exponent.

    This is implemented by the ImageScores class and the score computation can be expressed with two lines of code (see __call__() ):

    exp = self.be.power(y, self.exponent)
    self.scores_batch[:] = self.be.add(exp, self.scores_batch)

    The intuition behind this scoring technique is illustrated in figures 5 and 6 of [1].

    Results

    Here are results on the test dataset. The prediction quality is measured with the Average Precision metric. The overall mean average precision (mAP) is 74.67. Those are good numbers for a fairly simple implementation. It took just 15 epochs of training as compared to the pre-trained model that needed more than 90 epochs of training. In addition, if you factor in the hyper-parameter optimization that went into the pre-trained model, we have a significant savings in compute.

    Class  air plane     bike   bird   boat       bottle  bus    car    cat    chair  cow
    AP     81.17         79.32  81.21  74.84      52.89   74.57  87.72  78.54  63.00  69.57
    Class  dining table  dog    horse  motorbike  person  plant  sheep  sofa   train  tv
    AP     58.28         74.06  77.38  79.91      90.69   69.05  78.02  59.55  81.32  82.26

    As expected, training converges much faster with a pre-trained model, as illustrated in the graph below.

    Here are some helpful hints for running the example:

    1. Use this command to start a fresh new training run
      ./transfer_learning.py -e10 -r13 -b gpu --save_path model.prm --serialize 1 --history 20 > train.log 2>&1 &
    2. Use this command to run test. Make sure that the number of epochs specified in this command with the -e option is zero. That ensures that neon will skip the training and jump directly to testing.
      ./transfer_learning.py -e0 -r13 -b gpu --model_file model.prm > infer.log 2>&1 &
    3. Training each epoch can take 4-6 hours if you are training on the full 5000 images of the training dataset. If you had to terminate your training job for some reason, you can always restart from the last saved epoch with this command.
      ./transfer_learning.py -e10 -r13 -b gpu --save_path train.prm --serialize 1 --history 20 --model_file train.prm > train.log 2>&1 &

    The pre-trained model that we used can be found here.

    A fully trained model obtained after transfer learning can be found here.

    You can use the trained model to do classification on the Pascal VOC dataset using AlexNet.

    References

    [1] M. Oquab et al. Learning and Transferring Mid-Level Image Representations Using Convolutional Neural Networks. CVPR 2014.
    [2] C. Szegedy et al. Rethinking the Inception Architecture for Computer Vision. 2015.
    [3] S. Bell, P. Upchurch, N. Snavely, and K. Bala. Material Recognition in the Wild with the Materials in Context Database. CVPR 2015.
    [4] R. Girshick. Fast R-CNN. 2015.
     

    About the Author:

    Aravind Kalaiah is a tech lead experienced in building scalable distributed systems for real time query processing and high performance computing. He was a technology lead at NVIDIA where he was the founding engineer on the team that built the world’s first debugger for massively parallel processors. Aravind has previously founded revenue generating startups for enterprises and consumer markets in the domains of machine learning and computer vision.

     

    Intel® XDK FAQs - General


    How can I get started with Intel XDK?

    There are plenty of videos and articles that you can go through here to get started. You could also start with some of our demo apps. It may also help to read Five Useful Tips on Getting Started Building Cordova Mobile Apps with the Intel XDK, which will help you understand some of the differences between developing for a traditional server-based environment and developing for the Intel XDK hybrid Cordova app environment.

    Having prior understanding of how to program using HTML, CSS and JavaScript* is crucial to using the Intel XDK. The Intel XDK is primarily a tool for visualizing, debugging and building an app package for distribution.

    You can do the following to access our demo apps:

    • Select Project tab
    • Select "Start a New Project"
    • Select "Samples and Demos"
    • Create a new project from a demo

    If you have specific questions following that, please post it to our forums.

    How do I convert my web app or web site into a mobile app?

    The Intel XDK creates Cordova mobile apps (aka PhoneGap apps). Cordova web apps are driven by HTML5 code (HTML, CSS and JavaScript). There is no web server on the mobile device to "serve" the HTML pages in your Cordova web app; the main program resources required by your Cordova web app are file-based, meaning all of your web app resources are located within the mobile app package and reside on the mobile device. Your app may also require resources from a server. In that case, you will need to connect with that server using AJAX or similar techniques, usually via a collection of RESTful APIs provided by that server. However, your app is not integrated into that server; the two entities are independent and separate.

    Many web developers believe they should be able to include PHP or Java code or other "server-based" code as an integral part of their Cordova app, just as they do in a "dynamic web app." This technique does not work in a Cordova web app, because your app does not reside on a server; there is no "backend." Your Cordova web app is a "front-end" HTML5 web app that runs independently of any server. See the following articles for more information on how to move from writing "multi-page dynamic web apps" to "single-page Cordova web apps":

    Can I use an external editor for development in Intel® XDK?

    Yes, you can open your files and edit them in your favorite editor. However, note that you must use Brackets* to use the "Live Layout Editing" feature. Also, if you are using App Designer (the UI layout tool in Intel XDK) it will make many automatic changes to your index.html file, so it is best not to edit that file externally at the same time you have App Designer open.

    Some popular editors among our users include:

    • Sublime Text* (Refer to this article for information on the Intel XDK plugin for Sublime Text*)
    • Notepad++* for a lightweight editor
    • Jetbrains* editors (Webstorm*)
    • Vim* the editor

    How do I get code refactoring capability in Brackets* (the Intel XDK code editor)?

    ...to be written...

    Why doesn’t my app show up in Google* play for tablets?

    ...to be written...

    What is the global-settings.xdk file and how do I locate it?

    global-settings.xdk contains information about all your projects in the Intel XDK, along with many of the settings related to panels under each tab (Emulate, Debug etc). For example, you can set the emulator to auto-refresh or no-auto-refresh. Modify this file at your own risk and always keep a backup of the original!

    You can locate global-settings.xdk here:

    • Mac OS X*
      ~/Library/Application Support/XDK/global-settings.xdk
    • Microsoft Windows*
      %LocalAppData%\XDK
    • Linux*
      ~/.config/XDK/global-settings.xdk

    If you are having trouble locating this file, you can search for it on your system using something like the following:

    • Windows:
      > cd /
      > dir /s global-settings.xdk
    • Mac and Linux:
      $ sudo find / -name global-settings.xdk

    When do I use the intelxdk.js, xhr.js and cordova.js libraries?

    The intelxdk.js and xhr.js libraries were only required for use with the Intel XDK legacy build tiles (which have been retired). The cordova.js library is needed for all Cordova builds. When building with the Cordova tiles, any references to intelxdk.js and xhr.js libraries in your index.html file are ignored.

    How do I get my Android (and Crosswalk) keystore file?

    New with release 3088 of the Intel XDK, you may now download your build certificates (aka keystore) using the new certificate manager that is built into the Intel XDK. Please read the initial paragraphs of Managing Certificates for your Intel XDK Account and the section titled "Convert a Legacy Android Certificate" in that document, for details regarding how to do this.

    It may also help to review this short, quick overview video (there is no audio) that shows how you convert your existing "legacy" certificates to the "new" format that allows you to directly manage your certificates using the certificate management tool that is built into the Intel XDK. This conversion process is done only once.

    If the above fails, please send an email to html5tools@intel.com requesting help. It is important that you send that email from the email address associated with your Intel XDK account.

    How do I rename my project that is a duplicate of an existing project?

    See this FAQ: How do I make a copy of an existing Intel XDK project?

    How do I recover when the Intel XDK hangs or won't start?

    • If you are running Intel XDK on Windows* it must be Windows* 7 or higher. It will not run reliably on earlier versions.
    • Delete the "project-name.xdk" file from the project directory that Intel XDK is trying to open when it starts (it will try to open the project that was open during your last session), then try starting Intel XDK. You will have to "import" your project into Intel XDK again. Importing merely creates the "project-name.xdk" file in your project directory and adds that project to the "global-settings.xdk" file.
    • Rename the project directory Intel XDK is trying to open when it starts. Create a new project based on one of the demo apps. Test Intel XDK using that demo app. If everything works, restart Intel XDK and try it again. If it still works, rename your problem project folder back to its original name and open Intel XDK again (it should now open the sample project you previously opened). You may have to re-select your problem project (Intel XDK should have forgotten that project during the previous session).
    • Clear Intel XDK's program cache directories and files.

      On a Windows machine this can be done using the following on a standard command prompt (administrator is not required):

      > cd %AppData%\..\Local\XDK
      > del *.* /s/q

      To locate the "XDK cache" directory on [OS X*] and [Linux*] systems, do the following:

      $ sudo find / -name global-settings.xdk
      $ cd <dir found above>
      $ sudo rm -rf *

      You might want to save a copy of the "global-settings.xdk" file before you delete that cache directory and copy it back before you restart Intel XDK. Doing so will save you the effort of rebuilding your list of projects. Please refer to this question for information on how to locate the global-settings.xdk file.
    • If you save the "global-settings.xdk" file and restored it in the step above and you're still having hang troubles, try deleting the directories and files above, along with the "global-settings.xdk" file and try it again.
    • Do not store your project directories on a network share (Intel XDK currently has issues with network shares that have not yet been resolved). This includes folders shared between a Virtual machine (VM) guest and its host machine (for example, if you are running Windows* in a VM running on a Mac* host). This network share issue is a known issue with a fix request in place.
    • There have also been issues with running behind a corporate network proxy or firewall. To check them try running Intel XDK from your home network where, presumably, you have a simple NAT router and no proxy or firewall. If things work correctly there then your corporate firewall or proxy may be the source of the problem.
    • Issues with Intel XDK account logins can also cause Intel XDK to hang. To confirm that your login is working correctly, go to the Intel XDK App Center and confirm that you can login with your Intel XDK account. While you are there you might also try deleting the offending project(s) from the App Center.

    If you can reliably reproduce the problem, please send us a copy of the "xdk.log" file that is stored in the same directory as the "global-settings.xdk" file to html5tools@intel.com.

    Is Intel XDK an open source project? How can I contribute to the Intel XDK community?

    No, it is not an open source project. However, it utilizes many open source components that are then assembled into Intel XDK. While you cannot contribute directly to the Intel XDK integration effort, you can contribute to the many open source components that make up Intel XDK.

    The following open source components are the major elements that are being used by Intel XDK:

    • Node-Webkit
    • Chromium
    • Ripple* emulator
    • Brackets* editor
    • Weinre* remote debugger
    • Crosswalk*
    • Cordova*
    • App Framework*

    How do I configure Intel XDK to use a 9-patch PNG for an Android* app splash screen?

    Intel XDK does support the use of 9-patch PNG images for Android* app splash screens. See https://software.intel.com/en-us/xdk/articles/android-splash-screens-using-nine-patch-png to learn how to create a 9-patch PNG image; that article also links to an Intel XDK sample that uses 9-patch PNG images.

    How do I stop AVG from popping up the "General Behavioral Detection" window when Intel XDK is launched?

    You can try adding nw.exe as the app that needs an exception in AVG.

    What do I specify for "App ID" in Intel XDK under Build Settings?

    Your app ID uniquely identifies your app. For example, it can be used to identify your app within Apple's application services, allowing you to use features like in-app purchasing and push notifications.

    Here are some useful articles on how to create an App ID:

    Is it possible to modify the Android Manifest or iOS plist file with the Intel XDK?

    You cannot modify the AndroidManifest.xml file directly with our build system, as it only exists in the cloud. However, you may do so by creating a dummy plugin that only contains a plugin.xml file containing directives that can be used to add lines to the AndroidManifest.xml file during the build process. In essence, you add lines to the AndroidManifest.xml file via a local plugin.xml file. Here is an example of a plugin that does just that:

    <?xml version="1.0" encoding="UTF-8"?>
    <plugin xmlns="http://apache.org/cordova/ns/plugins/1.0"
        id="my-custom-intents-plugin"
        version="1.0.0">
        <name>My Custom Intents Plugin</name>
        <description>Add Intents to the AndroidManifest.xml</description>
        <license>MIT</license>
        <engines>
            <engine name="cordova" version=">=3.0.0" />
        </engines>
        <!-- android -->
        <platform name="android">
            <config-file target="AndroidManifest.xml" parent="/manifest/application">
                <activity android:configChanges="orientation|keyboardHidden|keyboard|screenSize|locale" android:label="@string/app_name" android:launchMode="singleTop" android:name="testa" android:theme="@android:style/Theme.Black.NoTitleBar">
                    <intent-filter>
                        <action android:name="android.intent.action.SEND" />
                        <category android:name="android.intent.category.DEFAULT" />
                        <data android:mimeType="*/*" />
                    </intent-filter>
                </activity>
            </config-file>
        </platform>
    </plugin>

    You can inspect the AndroidManifest.xml created in an APK, using apktool with the following command line:

    $ apktool d my-app.apk
    $ cd my-app
    $ more AndroidManifest.xml

    This technique exploits the config-file element that is described in the Cordova Plugin Specification docs and can also be used to add lines to iOS plist files. See the Cordova plugin documentation link for additional details.

    Here is an example of such a plugin for modifying the iOS plist file, specifically for adding a BIS key to the plist file:

    <?xml version="1.0" encoding="UTF-8"?>
    <plugin xmlns="http://apache.org/cordova/ns/plugins/1.0"
        id="my-custom-bis-plugin"
        version="0.0.2">
        <name>My Custom BIS Plugin</name>
        <description>Add BIS info to iOS plist file.</description>
        <license>BSD-3</license>
        <preference name="BIS_KEY" />
        <engines>
            <engine name="cordova" version=">=3.0.0" />
        </engines>
        <!-- ios -->
        <platform name="ios">
            <config-file target="*-Info.plist" parent="CFBundleURLTypes">
                <array>
                    <dict>
                        <key>ITSAppUsesNonExemptEncryption</key>
                        <true/>
                        <key>ITSEncryptionExportComplianceCode</key>
                        <string>$BIS_KEY</string>
                    </dict>
                </array>
            </config-file>
        </platform>
    </plugin>

    See this forum thread (https://software.intel.com/en-us/forums/intel-xdk/topic/680309) for an example of how to customize the OneSignal plugin's notification sound in an Android app by using a simple custom Cordova plugin. The same technique can be applied to adding custom icons and other assets to your project.

    How can I share my Intel XDK app build?

    You can send a link to your project via an email invite from your project settings page. However, a login to your account is required to access the file behind the link. Alternatively, you can download the build from the build page, onto your workstation, and push that built image to some location from which you can send a link to that image.

    Why does my iOS build fail when I am able to test it successfully on a device and the emulator?

    Common reasons include:

    • The App ID specified in your project settings does not match the one you specified in Apple's developer portal.
    • The provisioning profile does not match the cert you uploaded. Double check with Apple's developer site that you are using the correct and current distribution cert and that the provisioning profile is still active. Download the provisioning profile again and add it to your project to confirm.
    • In Project Build Settings, your App Name is invalid. It should contain only letters, numbers, and spaces.

    How do I add multiple domains in Domain Access?

    Here is the primary doc source for that feature.

    If you need to insert multiple domain references, add the extra references in the intelxdk.config.additions.xml file. This StackOverflow entry provides the basic idea; you can also inspect the intelxdk.config.*.xml files that are automatically generated with each build to see the <access origin="xxx" /> line generated from what you provide in the "Domain Access" field of the "Build Settings" panel on the Projects tab.
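    As a sketch of that approach, the extra <access> lines can be kept in an intelxdk.config.additions.xml file in the project root. The origins below are placeholders, and writing the file from a shell (rather than an editor) is purely illustrative:

```shell
# Hypothetical example: extra <access origin> entries kept in the
# intelxdk.config.additions.xml file in the project root (origins are placeholders).
cat > intelxdk.config.additions.xml <<'EOF'
<access origin="https://api.example.com" />
<access origin="https://cdn.example.com" />
EOF

# Count the access entries we just wrote.
grep -c '<access' intelxdk.config.additions.xml
```

    The lines in this file are added to the generated intelxdk.config.*.xml files during the build, alongside the single <access> line produced from the "Domain Access" field.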

    How do I build more than one app using the same Apple developer account?

    On Apple developer, create a distribution certificate using the "iOS* Certificate Signing Request" key downloaded from Intel XDK Build tab only for the first app. For subsequent apps, reuse the same certificate and import this certificate into the Build tab like you usually would.

    How do I include search and spotlight icons as part of my app?

    Please refer to this article in the Intel XDK documentation. Create an intelxdk.config.additions.xml file in your top level directory (same location as the other intelxdk.*.config.xml files) and add the following lines for supporting icons in Settings and other areas in iOS*.

    <!-- Spotlight Icon -->
    <icon platform="ios" src="res/ios/icon-40.png" width="40" height="40" />
    <icon platform="ios" src="res/ios/icon-40@2x.png" width="80" height="80" />
    <icon platform="ios" src="res/ios/icon-40@3x.png" width="120" height="120" />
    <!-- iPhone Spotlight and Settings Icon -->
    <icon platform="ios" src="res/ios/icon-small.png" width="29" height="29" />
    <icon platform="ios" src="res/ios/icon-small@2x.png" width="58" height="58" />
    <icon platform="ios" src="res/ios/icon-small@3x.png" width="87" height="87" />
    <!-- iPad Spotlight and Settings Icon -->
    <icon platform="ios" src="res/ios/icon-50.png" width="50" height="50" />
    <icon platform="ios" src="res/ios/icon-50@2x.png" width="100" height="100" />

    For more information related to these configurations, visit http://cordova.apache.org/docs/en/3.5.0/config_ref_images.md.html#Icons%20and%20Splash%20Screens.

    For accurate information related to iOS icon sizes, visit https://developer.apple.com/library/ios/documentation/UserExperience/Conceptual/MobileHIG/IconMatrix.html

    NOTE: The iPhone 6 icons will only be available if iOS* 7 or 8 is the target.

    Cordova iOS* 8 support JIRA tracker: https://issues.apache.org/jira/browse/CB-7043

    Does Intel XDK support Modbus TCP communication?

    No, since Modbus is a specialized protocol, you need to write either some JavaScript* or native code (in the form of a plugin) to handle the Modbus transactions and protocol.

    How do I sign an Android* app using an existing keystore?

    New with release 3088 of the Intel XDK, you may now import your existing keystore into Intel XDK using the new certificate manager that is built into the Intel XDK. Please read the initial paragraphs of Managing Certificates for your Intel XDK Account and the section titled "Import an Android Certificate Keystore" in that document, for details regarding how to do this.

    If the above fails, please send an email to html5tools@intel.com requesting help. It is important that you send that email from the email address associated with your Intel XDK account.

    How do I build separately for different Android* versions?

    Under the Projects Panel, you can select the Target Android* version under the Build Settings collapsible panel. You can change this value and build your application multiple times to create numerous versions of your application that are targeted for multiple versions of Android*.

    How do I display the 'Build App Now' button if my display language is not English?

    If your display language is not English and the 'Build App Now' button is proving to be troublesome, you can change your display language to English (the English language pack can be downloaded via a Windows* update). Once you have installed the English language pack, go to Control Panel > Clock, Language and Region > Region and Language > Change Display Language.

    How do I update my Intel XDK version?

    When an Intel XDK update is available, an Update Version dialog box lets you download the update. After the download completes, a similar dialog lets you install it. If you did not download or install an update when prompted (or on older versions), click the package icon next to the orange (?) icon in the upper-right to download or install the update. The installation removes the previous Intel XDK version.

    How do I import my existing HTML5 app into the Intel XDK?

    If your project contains an Intel XDK project file (<project-name>.xdk) you should use the "Open an Intel XDK Project" option located at the bottom of the Projects List on the Projects tab (lower left of the screen, round green "eject" icon, on the Projects tab). This would be the case if you copied an existing Intel XDK project from another system or used a tool that exported a complete Intel XDK project.

    If your project does not contain an Intel XDK project file (<project-name>.xdk) you must "import" your code into a new Intel XDK project. To import your project, use the "Start a New Project" option located at the bottom of the Projects List on the Projects tab (lower left of the screen, round blue "plus" icon, on the Projects tab). This will open the "Samples, Demos and Templates" page, which includes an option to "Import Your HTML5 Code Base." Point to the root directory of your project. The Intel XDK will attempt to locate a file named index.html in your project and will set the "Source Directory" on the Projects tab to point to the directory that contains this file.

    If your imported project did not contain an index.html file, your project may be unstable. In that case, it is best to delete the imported project from the Intel XDK Projects tab ("x" icon in the upper right corner of the screen), rename your "root" or "main" html file to index.html and import the project again. Several components in the Intel XDK depend on the assumption that the main HTML file in your project is named index.html. See Introducing Intel® XDK Development Tools for more details.

    It is highly recommended that your "source directory" be located as a sub-directory inside your "project directory." This ensures that non-source files are not included in your build package when building your application. If the "source directory" and the "project directory" are the same, the result is longer upload times to the build server and unnecessarily large application executables returned by the build system. See the following images for the recommended project file layout.

    I am unable to login to App Preview with my Intel XDK password.

    On some devices you may have trouble entering your Intel XDK login password directly on the device in the App Preview login screen. In particular, sometimes you may have trouble with the first one or two letters getting lost when entering your password.

    Try the following if you are having such difficulties:

    • Reset your password, using the Intel XDK, to something short and simple.

    • Confirm that this new short and simple password works with the XDK (logout and login to the Intel XDK).

    • Confirm that this new password works with the Intel Developer Zone login.

    • Make sure you have the most recent version of Intel App Preview installed on your devices. Go to the store on each device to confirm you have the most recent copy of App Preview installed.

    • Try logging into Intel App Preview on each device with this short and simple password. Check the "show password" box so you can see your password as you type it.

    If the above works, it confirms that you can log into your Intel XDK account from App Preview (because App Preview and the Intel XDK go to the same place to authenticate your login). When the above works, you can go back to the Intel XDK and reset your password to something else, if you do not like the short and simple password you used for the test.

    If you are having trouble logging into any pages on the Intel web site (including the Intel XDK forum), please see the Intel Sign In FAQ for suggestions and contact info. That login system is the backend for the Intel XDK login screen.

    How do I completely uninstall the Intel XDK from my system?

    Take the following steps to completely uninstall the XDK from your Windows system:

    • From the Windows Control Panel, remove the Intel XDK, using the Windows uninstall tool.

    • Then:
      > cd %LocalAppData%\Intel\XDK
      > del *.* /s/q

    • Then:
      > cd %LocalAppData%\XDK
      > copy global-settings.xdk %UserProfile%
      > del *.* /s/q
      > copy %UserProfile%\global-settings.xdk .

    • Then:
      -- Go to xdk.intel.com and select the download link.
      -- Download and install the new XDK.

    To do the same on a Linux or Mac system:

    • On a Linux machine, run the uninstall script, typically /opt/intel/XDK/uninstall.sh.
       
    • Remove the directory into which the Intel XDK was installed.
      -- Typically /opt/intel or your home (~) directory on a Linux machine.
      -- Typically in the /Applications/Intel XDK.app directory on a Mac.
       
    • Then:
      $ find ~ -name global-settings.xdk
      $ cd <result-from-above> (for example ~/Library/Application Support/XDK/ on a Mac)
      $ cp global-settings.xdk ~
      $ rm -Rf *
      $ mv ~/global-settings.xdk .

       
    • Then:
      -- Go to xdk.intel.com and select the download link.
      -- Download and install the new XDK.

    Is there a tool that can help me highlight syntax issues in Intel XDK?

    Yes, you can use the various linting tools that can be added to the Brackets editor to review any syntax issues in your HTML, CSS and JS files. Go to the "File > Extension Manager..." menu item and add the following extensions: JSHint, CSSLint, HTMLHint, XLint for Intel XDK. Then, review your source files by monitoring the small yellow triangle at the bottom of the edit window (a green check mark indicates no issues).

    How do I delete built apps and test apps from the Intel XDK build servers?

    You can manage them by logging into: https://appcenter.html5tools-software.intel.com/csd/controlpanel.aspx. This functionality will eventually be available within Intel XDK after which access to app center will be removed.

    I need help with the App Security API plugin; where do I find it?

    Visit the primary documentation book for the App Security API and see this forum post for some additional details.

    When I install my app or use the Debug tab Avast antivirus flags a possible virus, why?

    If you are receiving a "Suspicious file detected - APK:CloudRep [Susp]" message from Avast anti-virus installed on your Android device it is due to the fact that you are side-loading the app (or the Intel XDK Debug modules) onto your device (using a download link after building or by using the Debug tab to debug your app), or your app has been installed from an "untrusted" Android store. See the following official explanation from Avast:

    Your application was flagged by our cloud reputation system. "Cloud rep" is a new feature of Avast Mobile Security, which flags apks when the following conditions are met:

    1. The file is not prevalent enough; meaning not enough users of Avast Mobile Security have installed your APK.
    2. The source is not an established market (Google Play is an example of an established market).

    If you distribute your app using Google Play (or any other trusted market) your users should not see any warning from Avast.

    Following are some of the Avast anti-virus notification screens you might see on your device. All of these are perfectly normal. They appear because you must enable the installation of "non-market" apps in order to use your device for debugging, and because the App IDs associated with your never-published app (or with the custom debug modules that the Debug tab in the Intel XDK builds and installs on your device) will not be found in an "established" (aka "trusted") market, such as Google Play.

    If you choose to ignore the "Suspicious app activity!" threat you will not receive a threat for that debug module any longer. It will show up in the Avast 'ignored issues' list. Updates to an existing, ignored, custom debug module should continue to be ignored by Avast. However, new custom debug modules (due to a new project App ID or a new version of Crosswalk selected in your project's Build Settings) will result in a new warning from the Avast anti-virus tool.

    How do I add a Brackets extension to the editor that is part of the Intel XDK?

    The number of Brackets extensions provided in the built-in edition of the Brackets editor is limited to ensure stability of the Intel XDK product. Not all extensions are compatible with the edition of Brackets that is embedded within the Intel XDK. Adding incompatible extensions can cause the Intel XDK to quit working.

    Despite this warning, there are useful extensions that have not been included in the editor and which can be added to the Intel XDK. Adding them is temporary: each time you update the Intel XDK (or if you reinstall it) you will have to re-add your Brackets extensions. To add a Brackets extension, use the following procedure:

    • exit the Intel XDK
    • download a ZIP file of the extension you wish to add
    • on Windows, unzip the extension here:
      %LocalAppData%\Intel\XDK\xdk\brackets\b\extensions\dev
    • on Mac OS X, unzip the extension here:
      /Applications/Intel\ XDK.app/Contents/Resources/app.nw/brackets/b/extensions/dev
    • start the Intel XDK

    Note that the locations given above are subject to change with new releases of the Intel XDK.

    Why does my app or game require so many permissions on Android when built with the Intel XDK?

    When you build your HTML5 app using the Intel XDK for Android or Android-Crosswalk you are creating a Cordova app. It may seem like you're not building a Cordova app, but you are. In order to package your app so it can be distributed via an Android store and installed on an Android device, it needs to be built as a hybrid app. The Intel XDK uses Cordova to create that hybrid app.

    A pure Cordova app requires the NETWORK permission; it's needed to "jump" between your HTML5 environment and the native Android environment. Additional permissions will be added by any Cordova plugins you include with your application; which permissions are included is a function of what each plugin does and requires.

    Crosswalk for Android builds also require the NETWORK permission, because the Crosswalk image built by the Intel XDK includes support for Cordova. In addition, current versions of Crosswalk (12 and 14 at the time this FAQ was written) also require NETWORK STATE and WIFI STATE. There is an extra permission in some versions of Crosswalk (WRITE EXTERNAL STORAGE) that is only needed by the shared model library of Crosswalk; we have asked the Crosswalk project to remove this permission in a future Crosswalk version.

    If you are seeing more than the following four permissions in your XDK-built Crosswalk app:

    • android.permission.INTERNET
    • android.permission.ACCESS_NETWORK_STATE
    • android.permission.ACCESS_WIFI_STATE
    • android.permission.WRITE_EXTERNAL_STORAGE

    then you are seeing permissions that have been added by some plugins. Each plugin is different, so there is no hard rule of thumb. The two "default" core Cordova plugins that are added by the Intel XDK blank templates (device and splash screen) do not require any Android permissions.

    BTW: the permission list above comes from a Crosswalk 14 build. Crosswalk 12 builds do not include the last permission; it was added when the Crosswalk project introduced the shared model library option, which started with Crosswalk 13 (the Intel XDK does not support Crosswalk 13 builds).

    How do I make a copy of an existing Intel XDK project?

    If you just need to make a backup copy of an existing project, and do not plan to open that backup copy as a project in the Intel XDK, do the following:

    • Exit the Intel XDK.
    • Copy the entire project directory:
      • on Windows, use File Explorer to "right-click" and "copy" your project directory, then "right-click" and "paste"
      • on Mac use Finder to "right-click" and then "duplicate" your project directory
      • on Linux, open a terminal window, "cd" to the folder that contains your project, and type "cp -a old-project/ new-project/" at the terminal prompt (where "old-project/" is the folder name of your existing project that you want to copy and "new-project/" is the name of the new folder that will contain a copy of your existing project)

    If you want to use an existing project as the starting point of a new project in the Intel XDK, follow the process described below. It ensures that the build system does not confuse the ID in your old project with the one stored in your new project. If you do not follow this procedure you will end up with multiple projects using the same project ID (a special GUID that is stored inside the Intel XDK <project-name>.xdk file in the root directory of your project). Each project in your account must have a unique project ID.

    • Exit the Intel XDK.
    • Make a copy of your existing project using the process described above.
    • Inside the new project that you made (that is, your new copy of your old project), make copies of the <project-name>.xdk file and <project-name>.xdke files and rename those copies to something like project-new.xdk and project-new.xdke (anything you like, just something different than the original project name, preferably the same name as the new project folder in which you are making this new project).
    • Using a TEXT EDITOR (only) (such as Notepad or Sublime or Brackets or some other TEXT editor), open your new "project-new.xdk" file (whatever you named it) and find the projectGuid line, it will look something like this:
      "projectGuid": "a863c382-ca05-4aa4-8601-375f9f209b67",
    • Change the "GUID" to all zeroes, like this: "00000000-0000-0000-000000000000"
    • Save the modified "project-new.xdk" file.
    • Open the Intel XDK.
    • Go to the Projects tab.
    • Select "Open an Intel XDK Project" (the green button at the bottom left of the Projects tab).
    • To open this new project, locate the new "project-new.xdk" file inside the new project folder you copied above.
    • Don't forget to change the App ID in your new project. This is necessary to avoid conflicts with the project you copied from, in the store and when side-loading onto a device.
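    The manual steps above can be sketched as a few shell commands. The project names below are purely illustrative, and the fixture file stands in for a real <project-name>.xdk file, which contains many more fields than just the projectGuid:

```shell
# Demo fixture: a throwaway "old" project with a minimal .xdk file
# (illustrative only; a real .xdk file has many more fields).
mkdir -p old-project
printf '{ "projectGuid": "a863c382-ca05-4aa4-8601-375f9f209b67" }\n' > old-project/old-project.xdk

# Copy the project, rename the project file, and zero out the GUID so the
# Intel XDK build system treats the copy as a brand new project.
cp -a old-project/ new-project/
mv new-project/old-project.xdk new-project/new-project.xdk
sed -i.bak 's/"projectGuid": *"[^"]*"/"projectGuid": "00000000-0000-0000-000000000000"/' new-project/new-project.xdk

grep projectGuid new-project/new-project.xdk
```

    The same editing could of course be done by hand in a text editor, as described above; the sed command simply automates the GUID-zeroing step.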

    My project does not include a www folder. How do I fix it so it includes a www or source directory?

    The Intel XDK HTML5 and Cordova project file structures are meant to mimic a standard Cordova project. In a Cordova (or PhoneGap) project there is a subdirectory (or folder) named www that contains all of the HTML5 source code and asset files that make up your application. For best results, it is advised that you follow this convention of putting your source inside a "source directory" inside of your project folder.

    This situation most commonly happens as the result of exporting a project from an external tool, such as Construct2, or as the result of importing an existing HTML5 web app that you are converting into a hybrid mobile application (e.g., an Intel XDK Cordova app). If you would like to convert an existing Intel XDK project into this format, follow the steps below:

    • Exit the Intel XDK.
    • Copy the entire project directory:
      • on Windows, use File Explorer to "right-click" and "copy" your project directory, then "right-click" and "paste"
      • on Mac use Finder to "right-click" and then "duplicate" your project directory
      • on Linux, open a terminal window, "cd" to the folder that contains your project, and type "cp -a old-project/ new-project/" at the terminal prompt (where "old-project/" is the folder name of your existing project that you want to copy and "new-project/" is the name of the new folder that will contain a copy of your existing project)
    • Create a "www" directory inside the new duplicate project you just created above.
    • Move your index.html and other source and asset files to the "www" directory you just created -- this is now your "source" directory, located inside your "project" directory (do not move the <project-name>.xdk and xdke files and any intelxdk.config.*.xml files, those must stay in the root of the project directory)
    • Inside the new project that you made above (by making a copy of the old project), rename the <project-name>.xdk file and <project-name>.xdke files to something like project-copy.xdk and project-copy.xdke (anything you like, just something different than the original project, preferably the same name as the new project folder in which you are making this new project).
    • Using a TEXT EDITOR (only) (such as Notepad or Sublime or Brackets or some other TEXT editor), open the new "project-copy.xdk" file (whatever you named it) and find the line named projectGuid, it will look something like this:
      "projectGuid": "a863c382-ca05-4aa4-8601-375f9f209b67",
    • Change the "GUID" to all zeroes, like this: "00000000-0000-0000-000000000000"
    • A few lines down find: "sourceDirectory": "",
    • Change it to this: "sourceDirectory": "www",
    • Save the modified "project-copy.xdk" file.
    • Open the Intel XDK.
    • Go to the Projects tab.
    • Select "Open an Intel XDK Project" (the green button at the bottom left of the Projects tab).
    • To open this new project, locate the new "project-copy.xdk" file inside the new project folder you copied above.
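    A minimal sketch of that conversion, using an illustrative throwaway project (the fixture .xdk file below stands in for a real one, which contains many more fields):

```shell
# Demo fixture: a project whose source files sit in the project root (illustrative).
mkdir -p my-project
printf '<html></html>\n' > my-project/index.html
printf '{ "projectGuid": "a863c382-ca05-4aa4-8601-375f9f209b67", "sourceDirectory": "" }\n' > my-project/my-project.xdk

# Move the source into a new www directory; the *.xdk and
# intelxdk.config.*.xml files stay in the project root.
mkdir -p my-project/www
mv my-project/index.html my-project/www/

# Zero the GUID and point sourceDirectory at the new www folder.
sed -i.bak -e 's/"projectGuid": *"[^"]*"/"projectGuid": "00000000-0000-0000-000000000000"/' \
    -e 's/"sourceDirectory": *""/"sourceDirectory": "www"/' my-project/my-project.xdk
```

    As with the copy-project procedure, the sed commands just automate the text edits described above; editing the .xdk file by hand in a text editor works equally well.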

    Can I install more than one copy of the Intel XDK onto my development system?

    Yes, you can install more than one version onto your development system. However, you cannot run multiple instances of the Intel XDK at the same time. Be aware that new releases sometimes change the project file format, so it is a good idea, in these cases, to make a copy of your project if you need to experiment with a different version of the Intel XDK. See the instructions in a FAQ entry above regarding how to make a copy of your Intel XDK project.

    Follow the instructions in this forum post to install more than one copy of the Intel XDK onto your development system.

    On Apple OS X* and Linux* systems, does the Intel XDK need the OpenSSL* library installed?

    Yes. Several features of the Intel XDK require the OpenSSL library, which typically comes pre-installed on Linux and OS X systems. If the Intel XDK reports that it could not find libssl, go to https://www.openssl.org to download and install it.

    I have a web application that I would like to distribute in app stores without major modifications. Is this possible using the Intel XDK?

    Yes, if you have a true web app or “client app” that only uses HTML, CSS and JavaScript, it is usually not too difficult to convert it to a Cordova hybrid application (this is what the Intel XDK builds when you create an HTML5 app). If you rely heavily on PHP or other server scripting languages embedded in your pages you will have more work to do. Because your Cordova app is not associated with a server, you cannot rely on server-based programming techniques; instead, you must rewrite any such code to use RESTful APIs that your app interacts with using, for example, AJAX calls.

    What is the best training approach to using the Intel XDK for a newbie?

    First, become well-versed in the art of client web apps, apps that rely only on HTML, CSS and JavaScript and utilize RESTful APIs to talk to network services. With that you will have mastered 80% of the problem. After that, it is simply a matter of understanding how Cordova plugins are able to extend the JavaScript API for access to features of the platform. For HTML5 training there are many sites providing tutorials. It may also help to read Five Useful Tips on Getting Started Building Cordova Mobile Apps with the Intel XDK, which will help you understand some of the differences between developing for a traditional server-based environment and developing for the Intel XDK hybrid Cordova app environment.

    What is the best platform to start building an app with the Intel XDK? And what are the important differences between the Android, iOS and other mobile platforms?

    There is no single most important difference between the Android, iOS and other platforms. It is important to understand that the HTML5 runtime engine that executes your app varies from platform to platform. Just as there are differences between Chrome, Firefox, Safari and Internet Explorer, there are differences between iOS 9 and iOS 8, Android 4 and Android 5, etc. Android has the most significant differences between vendors and versions of Android. This is one of the reasons the Intel XDK offers the Crosswalk for Android build option: to normalize and update the runtime across Android devices and versions.

    In general, if you can get your app working well on Android (or Crosswalk for Android) first you will generally have fewer issues to deal with when you start to work on the iOS and Windows platforms. In addition, the Android platform has the most flexible and useful debug options available, so it is the easiest platform to use for debugging and testing your app.

    Is my password encrypted and why is it limited to fifteen characters?

    Yes, your password is stored encrypted and is managed by https://signin.intel.com. Your Intel XDK userid and password can also be used to log into the Intel XDK forum as well as the Intel Developer Zone. The Intel XDK itself neither stores nor manages your userid and password.

    The rules regarding allowed userids and passwords are answered on this Sign In FAQ page, where you can also find help on recovering and changing your password.

    Why does the Intel XDK take a long time to start on Linux or Mac?

    ...and why am I getting this error message? "Attempt to contact authentication server is taking a long time. You can wait, or check your network connection and try again."

    At startup, the Intel XDK attempts to automatically determine the proxy settings for your machine. Unfortunately, on some system configurations it is unable to reliably detect your system proxy settings. As an example, you might see something like this image when starting the Intel XDK.

    On some systems you can get around this problem by setting some proxy environment variables and then starting the Intel XDK from a command line that has those environment variables configured. Set them with commands similar to the following:

    $ export no_proxy="localhost,127.0.0.1/8,::1"
    $ export NO_PROXY="localhost,127.0.0.1/8,::1"
    $ export http_proxy=http://proxy.mydomain.com:123/
    $ export HTTP_PROXY=http://proxy.mydomain.com:123/
    $ export https_proxy=http://proxy.mydomain.com:123/
    $ export HTTPS_PROXY=http://proxy.mydomain.com:123/

    IMPORTANT! The name of your proxy server and the port (or ports) that your proxy server requires will be different than those shown in the example above. Please consult with your IT department to find out what values are appropriate for your site. Intel has no way of knowing what configuration is appropriate for your network.

    If you use the Intel XDK in multiple locations (at work and at home), you may have to change the proxy settings before starting the Intel XDK after switching to a new network location. For example, many work networks use a proxy server, but most home networks do not require such a configuration. In that case, you need to be sure to "unset" the proxy environment variables before starting the Intel XDK on a non-proxy network.
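    For example, when moving from a proxy network back to a home network, a sketch like the following clears the variables (assuming a bash-style shell):

    ```shell
    # Clear proxy variables left over from a work-network session
    # before starting the Intel XDK on a network without a proxy.
    unset no_proxy NO_PROXY http_proxy HTTP_PROXY https_proxy HTTPS_PROXY
    ```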

    After you have successfully configured your proxy environment variables, you can start the Intel XDK manually, from the command-line.

    On a Mac, where the Intel XDK is installed in the default location, type the following (from a terminal window that has the above environment variables set):

    $ open /Applications/Intel\ XDK.app/

    On a Linux machine, assuming the Intel XDK has been installed in the ~/intel/XDK directory, type the following (from a terminal window that has the above environment variables set):

    $ ~/intel/XDK/xdk.sh &

    In the Linux case, you will need to adjust the directory name that points to the xdk.sh file. The example above assumes a local install into the ~/intel/XDK directory. Since Linux installations offer more choices for the installation directory, you may need to adjust the path to suit your particular system and install directory.
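    If you do not remember where the Intel XDK was installed, a quick search for the launcher script can help; the directories searched below are just common guesses, not guaranteed install locations:

    ```shell
    # Look for xdk.sh under the home directory and /opt, two common
    # install locations; errors from unreadable directories are hidden.
    find ~ /opt -name xdk.sh 2>/dev/null
    ```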

    How do I generate a P12 file on a Windows machine?

    See these articles:
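    The linked articles are not reproduced here, but as a general sketch, a P12 file can be produced from a private key and a certificate with OpenSSL (which is also available for Windows); the file names below are placeholders, not files the Intel XDK creates for you:

    ```shell
    # Combine a private key (mykey.key) and a certificate (mycert.cer)
    # into a PKCS#12 file; you will be prompted for an export password.
    openssl pkcs12 -export -inkey mykey.key -in mycert.cer -out mycertificate.p12
    ```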

    How do I change the default dir for creating new projects in the Intel XDK?

    You can change the default new project location manually by modifying a field in the global-settings.xdk file. Locate the global-settings.xdk file on your system (the precise location varies as a function of the OS) and find this JSON object inside that file:

    "projects-tab": {"defaultPath": "/Users/paul/Documents/XDK","LastSortType": "descending|Name","lastSortType": "descending|Opened","thirdPartyDisclaimerAcked": true
      },

    The example above came from a Mac. On a Mac the global-settings.xdk file is located in the "~/Library/Application Support/XDK" directory.

    On a Windows machine the global-settings.xdk file is normally found in the "%LocalAppData%\XDK" directory. The part you are looking for will look something like this:

    "projects-tab": {"thirdPartyDisclaimerAcked": false,"LastSortType": "descending|Name","lastSortType": "descending|Opened","defaultPath": "C:\\Users\\paul/Documents"
      },

    Obviously, it's the defaultPath part you want to change.

    BE CAREFUL WHEN YOU EDIT THE GLOBAL-SETTINGS.XDK FILE!! You've been warned...

    Make a backup copy of global-settings.xdk before you start, just in case. Make sure the result is proper JSON when you are done, or the Intel XDK may fail to start or behave erratically.
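    One quick way to check your edit, assuming Python is installed on your system, is to run the file through a JSON parser; the path shown is the default Mac location, so adjust it for your OS:

    ```shell
    # Parse global-settings.xdk; any output other than pretty-printed JSON
    # (i.e., an error message) means the edit broke the file.
    python3 -m json.tool "$HOME/Library/Application Support/XDK/global-settings.xdk"
    ```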

    Where can I find a list of recent and upcoming webinars?

    How can I change the email address associated with my Intel XDK login?

    Login to the Intel Developer Zone with your Intel XDK account userid and password and then locate your "account dashboard." Click the "pencil icon" next to your name to open the "Personal Profile" section of your account, where you can edit your "Name & Contact Info," including the email address associated with your account, under the "Private" section of your profile.

    What network addresses must I enable in my firewall to ensure the Intel XDK will work on my restricted network?

    Normally, access to the external servers that the Intel XDK uses is handled automatically by your proxy server. However, if you are working in an environment that has restricted Internet access and you need to provide your IT department with a list of URLs that you need access to in order to use the Intel XDK, then please provide them with the following list of domain names:

    • appcenter.html5tools-software.intel.com (for communication with the build servers)
    • s3.amazonaws.com (for downloading sample apps and built apps)
    • download.xdk.intel.com (for getting XDK updates)
    • debug-software.intel.com (for using the Test tab weinre debug feature)
    • xdk-feed-proxy.html5tools-software.intel.com (for receiving the tweets in the upper right corner of the XDK)
    • signin.intel.com (for logging into the XDK)
    • sfederation.intel.com (for logging into the XDK)

    Normally this should be handled by your network proxy (if you're on a corporate network) or should not be an issue if you are working on a typical home network.

    I cannot create a login for the Intel XDK, how do I create a userid and password to use the Intel XDK?

    If you have downloaded and installed the Intel XDK but are having trouble creating a login, you can create the login outside the Intel XDK. To do this, go to the Intel Developer Zone and push the "Join Today" button. After you have created your Intel Developer Zone login you can return to the Intel XDK and use that userid and password to login to the Intel XDK. This same userid and password can also be used to login to the Intel XDK forum.

    Installing the Intel XDK on Windows fails with a "Package signature verification failed." message.

    If you receive a "Package signature verification failed" message (see image below) when installing the Intel XDK on your system, it is likely due to one of the following two reasons:

    • Your system does not have a properly installed "root certificate" file, which is needed to confirm that the install package is good.
    • The install package is corrupt and failed the verification step.

    The first case can happen if you are attempting to install the Intel XDK on an unsupported version of Windows. The Intel XDK is only supported on Microsoft Windows 7 and higher. If you attempt to install on Windows Vista (or earlier) you may see this verification error. The workaround is to install the Intel XDK on a Windows 7 or greater machine.

    The second case is likely due to a corruption of the install package during download or due to tampering. The workaround is to re-download the install package and attempt another install.

    If you are installing on a Windows 7 (or greater) machine and you see this message it is likely due to a missing or bad root certificate on your system. To fix this you may need to start the "Certificate Propagation" service. Open the Windows "services.msc" panel and then start the "Certificate Propagation" service. Additional links related to this problem can be found here > https://technet.microsoft.com/en-us/library/cc754841.aspx

    See this forum thread for additional help regarding this issue > https://software.intel.com/en-us/forums/intel-xdk/topic/603992

    Troubles installing the Intel XDK on a Linux or Ubuntu system, which option should I choose?

    Choose the local user option, not root or sudo, when installing the Intel XDK on your Linux or Ubuntu system. This is the most reliable and trouble-free option and is the default installation option. It ensures that the Intel XDK has all the permissions necessary to execute properly on your Linux system. The Intel XDK will be installed in a subdirectory of your home (~) directory.

    Inactive account, login issue, or problem updating an APK in a store: how do I request an account transfer?

    As of June 26, 2015 we migrated all Intel XDK accounts to the more secure intel.com login system (the same login system you use to access this forum).

    We have migrated nearly all active users to the new login system. Unfortunately, there are a few active user accounts that we could not automatically migrate to intel.com, primarily because the intel.com login system does not allow the use of some characters in userids that were allowed in the old login system.

    If you have not used the Intel XDK for a long time prior to June 2015, your account may not have been automatically migrated. If you own an "inactive" account it will have to be manually migrated -- try logging into the Intel XDK with your old userid and password to determine whether it still works. If you find that you cannot login to your existing Intel XDK account, and still need access to your old account, please send a message to html5tools@intel.com and include your userid and the email address associated with that userid, so we can guide you through the steps required to reactivate your old account.

    Alternatively, you can create a new Intel XDK account. If you have submitted an app to the Android store from your old account you will need access to that old account to retrieve the Android signing certificates in order to upgrade that app on the Android store; in that case, send an email to html5tools@intel.com with your old account username and email and new account information.

    Connection Problems? -- Intel XDK SSL certificates update

    On January 26, 2016 we updated the SSL certificates on our back-end systems to SHA2 certificates. The existing certificates were due to expire in February of 2016. We have also disabled support for obsolete protocols.

    If you are experiencing persistent connection issues (since Jan 26, 2016), please post a problem report on the forum and include in your problem report:

    • the operation that failed
    • the version of your XDK
    • the version of your operating system
    • your geographic region
    • and a screen capture

    How do I resolve build failure: "libpng error: Not a PNG file"?  

    If you are experiencing build failures with CLI 5 Android builds, and the detailed error log includes a message similar to the following:

    Execution failed for task ':mergeArmv7ReleaseResources'.
    > Error: Failed to run command: /Developer/android-sdk-linux/build-tools/22.0.1/aapt s -i .../platforms/android/res/drawable-land-hdpi/screen.png -o .../platforms/android/build/intermediates/res/armv7/release/drawable-land-hdpi-v4/screen.png

    Error Code: 42

    Output: libpng error: Not a PNG file

    You need to change the format of your icon and/or splash screen images to PNG format.

    The error message refers to a file named "screen.png" -- which is what each of your splash screen images was renamed to before it was moved into the build project resource directories. Unfortunately, JPG images were supplied for use as splash screen images, not PNG images, so the renamed files were found by the build system to be invalid.

    Convert your splash screen images to PNG format. Renaming JPG images to PNG will not work! You must convert your JPG images into PNG format images using an appropriate image editing tool. The Intel XDK does not provide any such conversion tool.
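    A quick way to check whether an image really is a PNG (and not just a renamed JPG) is the `file` command; the conversion itself can then be done with a tool such as ImageMagick. Both commands assume those tools are installed, and the file names are placeholders:

    ```shell
    # Report the actual format of the image; a renamed JPG will still
    # show up as "JPEG image data" rather than "PNG image data".
    file screen.png

    # Re-encode a JPG as a true PNG using ImageMagick.
    convert splash-original.jpg screen.png
    ```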

    Beginning with Cordova CLI 5, all icons and splash screen images must be supplied in PNG format. This applies to all supported platforms. This is an undocumented "new feature" of the Cordova CLI 5 build system that was implemented by the Apache Cordova project.

    Why do I get a "Parse Error" when I try to install my built APK on my Android device?

    Because you have built an "unsigned" Android APK. You must click the "signed" box in the Android Build Settings section of the Projects tab if you want to install an APK on your device. The only reason you would choose to create an "unsigned" APK is if you need to sign it manually. This is very rare and not the normal situation.

    My converted legacy keystore does not work. Google Play is rejecting my updated app.

    The keystore you converted when you updated to 3088 (now 3240 or later) is the same keystore you were using in 2893. When you upgraded to 3088 (or later) and "converted" your legacy keystore, you re-signed and renamed your legacy keystore and it was transferred into a database to be used with the Intel XDK certificate management tool. It is still the same keystore, but with an alias name and password assigned by you and accessible directly by you through the Intel XDK.

    If you kept the converted legacy keystore in your account following the conversion you can download that keystore from the Intel XDK for safe keeping (do not delete it from your account or from your system). Make sure you keep track of the new password(s) you assigned to the converted keystore.

    There are two problems we have experienced with converted legacy keystores at the time of the 3088 release (April, 2016):

    • Foreign (non-ASCII) characters used in the new alias name and passwords were being corrupted.
    • Final signing of your APK by the build system was being done with a SHA256 signature rather than SHA1.

    Both of the above items have been resolved and should no longer be an issue.

    If you are currently unable to complete a build with your converted legacy keystore (i.e., builds fail when you use the converted legacy keystore but they succeed when you use a new keystore) the first bullet above is likely the reason your converted keystore is not working. In that case we can reset your converted keystore and give you the option to convert it again. You do this by requesting that your legacy keystore be "reset" by filling out this form. For 100% surety during that second conversion, use only 7-bit ASCII characters in the alias name you assign and for the password(s) you assign.

    IMPORTANT: using the legacy certificate to build your Android app is ONLY necessary if you have already published an app to an Android store and need to update that app. If you have never published an app to an Android store using the legacy certificate you do not need to concern yourself with resetting and reconverting your legacy keystore. It is easier, in that case, to create a new Android keystore and use that new keystore.

    If you ARE able to successfully build your app with the converted legacy keystore, but your updated app (in the Google store) does not install on some older Android 4.x devices (typically a subset of Android 4.0-4.2 devices), the second bullet cited above is likely the reason for the problem. The solution, in that case, is to rebuild your app and resubmit it to the store (that problem was a build-system problem that has been resolved).

    How can I have others beta test my app using Intel App Preview?

    Apps that you sync to your Intel XDK account, using the Test tab's green "Push Files" button, can only be accessed by logging into Intel App Preview with the same Intel XDK account credentials that you used to push the files to the cloud. In other words, you can only download and run your app for testing with Intel App Preview if you log into the same account that you used to upload that test app. This restriction applies to downloading your app into Intel App Preview via the "Server Apps" tab, at the bottom of the Intel App Preview screen, or by scanning the QR code displayed on the Intel XDK Test tab using the camera icon in the upper right corner of Intel App Preview.

    If you want to allow others to test your app, using Intel App Preview, it means you must use one of two options:

    • give them your Intel XDK userid and password
    • create an Intel XDK "test account" and provide your testers with that userid and password

    For security's sake, we highly recommend you use the second option (create an Intel XDK "test account").

    A "test account" is simply a second Intel XDK account that you do not plan to use for development or builds. Do not use the same email address for your "test account" as you are using for your main development account. You should use a "throw away" email address for that "test account" (an email address that you do not care about).

    Assuming you have created an Intel XDK "test account" and have instructed your testers to download and install Intel App Preview; have provided them with your "test account" userid and password; and you are ready to have them test:

    • sign out of your Intel XDK "development account" (using the little "man" icon in the upper right)
    • sign into your "test account" (again, using the little "man" icon in the Intel XDK toolbar)
    • make sure you have selected the project that you want users to test, on the Projects tab
    • goto the Test tab
    • make sure "MOBILE" is selected (upper left of the Test tab)
    • push the green "PUSH FILES" button on the Test tab
    • log out of your "test account"
    • log into your development account

    Then, tell your beta testers to log into Intel App Preview with your "test account" credentials and instruct them to choose the "Server Apps" tab at the bottom of the Intel App Preview screen. From there they should see the name of the app you synced using the Test tab and can simply start it by touching the app name (followed by the big blue and white "Launch This App" button). Starting the app this way is actually easier than sending them a copy of the QR code. The QR code is very dense and can be hard to read on some devices, depending on the quality of the device's camera.

    Note that when running your test app inside of Intel App Preview your testers cannot exercise any features associated with third-party plugins, only core Cordova plugins. Thus, you need to ensure that those parts of your app that depend on non-core Cordova plugins have been disabled or have exception handlers to prevent your app from crashing or freezing.

    I'm having trouble making Google Maps work with my Intel XDK app. What can I do?

    There are many reasons that can cause your attempt to use Google Maps to fail. Mostly it is due to the fact that you need to download the Google Maps API (JavaScript library) at runtime to make things work. However, there is no guarantee that your app will have a good network connection, so if you load it the way you are used to doing it in a browser...

    <script src="https://maps.googleapis.com/maps/api/js?key=API_KEY&sensor=true"></script>

    ...you may get yourself into trouble, in an Intel XDK Cordova app. See Loading Google Maps in Cordova the Right Way for an excellent tutorial on why this is a problem and how to deal with it. Also, it may help to read Five Useful Tips on Getting Started Building Cordova Mobile Apps with the Intel XDK, especially item #3, to get a better understanding of why you shouldn't use the "browser technique" you're familiar with.

    An alternative is to use a mapping tool that allows you to include the JavaScript directly in your app, rather than downloading it over the network each time your app starts. Several Intel XDK developers have reported very good luck with the open-source JavaScript library named LeafletJS, which uses OpenStreetMap as its map database source.

    You can also search the Cordova Plugin Database for Cordova plugins that implement mapping features, in some cases using native SDKs and libraries.

    How do I fix "Cannot find the Intel XDK. Make sure your device and intel XDK are on the same wireless network." error messages?

    You can either disable your firewall or allow access through the firewall for the Intel XDK. To allow access through the Windows firewall, go to the Windows Control Panel and search for the Firewall (Control Panel > System and Security > Windows Firewall > Allowed Apps) and enable Node Webkit (nw or nw.exe) through the firewall.

    See the image below (this image is from a Windows 8.1 system).

    Google Services needs my SHA1 fingerprint. Where do I get my app's SHA fingerprint?

    Your app's SHA fingerprint is part of your build signing certificate. Specifically, it is part of the signing certificate that you used to build your app. The Intel XDK provides a way to download your build certificates directly from within the Intel XDK application (see the Intel XDK documentation for details on how to manage your build certificates). Once you have downloaded your build certificate you can use these instructions provided by Google to extract the fingerprint, or simply search the Internet for "extract fingerprint from android build certificate" to find many articles detailing this process.
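    As one possible sketch, assuming a JDK is installed, the `keytool` utility that ships with it can print the fingerprints directly; the keystore file name, alias and password below are placeholders for whatever you assigned when you created or downloaded the certificate:

    ```shell
    # List the certificate details, including SHA1 and SHA256 fingerprints,
    # for the given alias in the downloaded keystore (prompts for the password).
    keytool -list -v -keystore my-app.keystore -alias my-alias
    ```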

    Why am I unable to test or build or connect to the old build server with Intel XDK version 2893?

    This is an Important Note Regarding the use of Intel XDK Versions 2893 and Older!!

    As of June 13, 2016, versions of the Intel XDK released prior to March 2016 (2893 and older) can no longer use the Build tab, the Test tab or Intel App Preview; and can no longer create custom debug modules for use with the Debug and Profile tabs. This change was necessary to improve the security and performance of our Intel XDK cloud-based build system. If you are using version 2893 or older, of the Intel XDK, you must upgrade to version 3088 or greater to continue to develop, debug and build Intel XDK Cordova apps.

    The error message you see below, "NOTICE: Internet Connection and Login Required," when trying to use the Build tab is due to the fact that the cloud-based component used by those older versions of the Intel XDK has been retired and is no longer present. The error message appears to be misleading, but is the easiest way to identify this condition.

    How do I run the Intel XDK on Fedora Linux?

    See the instructions below, copied from this forum post:

    $ sudo find xdk/install/dir -name libudev.so.0
    $ cd dir/found/above
    $ sudo rm libudev.so.0
    $ sudo ln -s /lib64/libudev.so.1 libudev.so.0

    Note the "xdk/install/dir" is the name of the directory where you installed the Intel XDK. This might be "/opt/intel/xdk" or "~/intel/xdk" or something similar. Since the Linux install is flexible regarding the precise installation location you may have to search to find it on your system.

    Once you find that libudev.so file in the Intel XDK install directory you must "cd" to that directory to finish the operations as written above.

    Additional instructions have been provided in the related forum thread; please see that thread for the latest information regarding hints on how to make the Intel XDK run on a Fedora Linux system.

    The Intel XDK generates a path error for my launch icons and splash screen files.

    If you have an older project (created prior to August of 2016 using a version of the Intel XDK older than 3491) you may be seeing a build error indicating that some icon and/or splash screen image files cannot be found. This is likely due to the fact that some of your icon and/or splash screen image files are located within your source folder (typically named "www") rather than in the new package-assets folder. For example, inspecting one of the auto-generated intelxdk.config.*.xml files you might find something like the following:

    <icon platform="windows" src="images/launchIcon_24.png" width="24" height="24"/>
    <icon platform="windows" src="images/launchIcon_434x210.png" width="434" height="210"/>
    <icon platform="windows" src="images/launchIcon_744x360.png" width="744" height="360"/>
    <icon platform="windows" src="package-assets/ic_launch_50.png" width="50" height="50"/>
    <icon platform="windows" src="package-assets/ic_launch_150.png" width="150" height="150"/>
    <icon platform="windows" src="package-assets/ic_launch_44.png" width="44" height="44"/>

    where the first three images are not being found by the build system because they are located in the "www" folder and the last three are being found, because they are located in the "package-assets" folder.

    This problem usually comes about because the UI does not include the appropriate "slots" to hold those images. This results in some "dead" icon or splash screen images inside the <project-name>.xdk file which need to be removed. To fix this, make a backup copy of your <project-name>.xdk file and then, using a CODE or TEXT editor (e.g., Notepad++ or Brackets or Sublime Text or vi, etc.), edit your <project-name>.xdk file in the root of your project folder.

    Inside of your <project-name>.xdk file you will find entries that look like this:

    "icons_": [
      {"relPath": "images/launchIcon_24.png","width": 24,"height": 24
      },
      {"relPath": "images/launchIcon_434x210.png","width": 434,"height": 210
      },
      {"relPath": "images/launchIcon_744x360.png","width": 744,"height": 360
      },

    Find all the entries that are pointing to the problem files and remove those problem entries from your <project-name>.xdk file. Obviously, you need to do this when the XDK is closed and only after you have made a backup copy of your <project-name>.xdk file, just in case you end up with a missing comma. The <project-name>.xdk file is a JSON file and needs to be in proper JSON format after you make changes or it will not be read properly by the XDK when you open it.

    Then move your problem icons and splash screen images to the package-assets folder and reference them from there. Use this technique (below) to add additional icons by using the intelxdk.config.additions.xml file.

    <!-- alternate way to add icons to Cordova builds, rather than using XDK GUI -->
    <!-- especially for adding icon resolutions that are not covered by the XDK GUI -->
    <!-- Android icons and splash screens -->
    <platform name="android">
      <icon src="package-assets/android/icon-ldpi.png" density="ldpi" width="36" height="36" />
      <icon src="package-assets/android/icon-mdpi.png" density="mdpi" width="48" height="48" />
      <icon src="package-assets/android/icon-hdpi.png" density="hdpi" width="72" height="72" />
      <icon src="package-assets/android/icon-xhdpi.png" density="xhdpi" width="96" height="96" />
      <icon src="package-assets/android/icon-xxhdpi.png" density="xxhdpi" width="144" height="144" />
      <icon src="package-assets/android/icon-xxxhdpi.png" density="xxxhdpi" width="192" height="192" />
      <splash src="package-assets/android/splash-320x426.9.png" density="ldpi" orientation="portrait" />
      <splash src="package-assets/android/splash-320x470.9.png" density="mdpi" orientation="portrait" />
      <splash src="package-assets/android/splash-480x640.9.png" density="hdpi" orientation="portrait" />
      <splash src="package-assets/android/splash-720x960.9.png" density="xhdpi" orientation="portrait" />
    </platform>

    Back to FAQs Main
