Channel: Intel Developer Zone Articles

Trend Watch: What to Look for in 2017


A lot of the most innovative technology we saw in 2016 was focused on the integration of the real world and the virtual world—whether it was Pokemon Go*, an augmented reality game using GPS and cameras, or new chatbots that can provide real-time info and help make travel arrangements via messaging platforms. As we move into 2017, we’re going to see even more ways of using technology to connect to the world around us, and we’re going to see conversations between the things themselves—and the software that supports them—become even more sophisticated and complex.

Whether your focus is on consumer apps or you're interested in the B2B space, here are the major tech trends to pay attention to in 2017.

Artificial Intelligence (AI) & Machine Learning

The field of Artificial Intelligence, once confined to the dreams of science fiction, continues to grow and develop, expanding our definition of what’s possible. Encompassing everything from self-driving cars to smart product recommendations to chatbots, AI is essentially the idea of machines that can sense, reason, act—and then adapt based on experience. As we continue to increase our level of connectedness—through more IoT devices and massive amounts of data—there is far more information to work with, and many important advances being made.

The big news in 2016 was that Google and Microsoft added AI services to their clouds, making AI available to many smaller businesses and start-ups. As we move forward into 2017, we expect to see more of these companies integrating AI into their offerings—and as the market grows, we expect far more collaboration. There’s much to be gained by understanding how devices and systems can work together, something smart companies and smart developers will be putting real energy and resources toward. Rather than existing in their own silos, companies and developers will look to collaborate and integrate with each other.

What’s Intel Doing?

Intel has hardware, optimized frameworks, and resources to help with AI development.

Optimized frameworks – Caffe* is one of the most popular community applications for image recognition, and Theano* is designed to help write models for deep learning. Both frameworks have been optimized for use with Intel® architecture. Learn how to install and use these frameworks, and find a number of useful libraries, here.

Hardware - Intel® Xeon Phi™ processor family – These massively multicore processors deliver powerful, highly parallel performance for machine learning and deep learning workloads. Get a brief overview of deep learning using Intel® architectures here, and learn more about the Intel® Xeon Phi™ processor family’s competitive performance for deep learning here.

Learn More:

Virtual Assistants

Virtual assistants aren’t new, and consumers have become increasingly comfortable with them over the last few years—with Siri* always available on their phones and Alexa* sitting on their kitchen counters, ready to answer a question when asked. One of the big changes in 2016 was a push for virtual assistants to become more integrated with other systems and services, so that users can access third-party software or handle more complicated transactions just by interacting with their virtual assistant. In June, Apple finally opened up Siri* to third-party developers, creating a wide new range of possibilities.

As we move into 2017, look for new collaborations, and for digital services to become even more conversational. Instead of typing a search term into Google, the advent of virtual assistants means you can make your request or ask your question the same way you normally talk—“Should I wear a jacket today?”—and your virtual assistant can translate your question into the series of steps or actions that need to be taken: checking the weather forecast, checking your schedule, and checking wardrobe inventory, all while using AI to continually improve the quality and accuracy of results. As virtual assistants become more integrated, the tasks they’re capable of will continue to get more complicated—and ultimately, a lot more helpful.
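
To make the idea concrete, here is a minimal Node.js sketch of how an assistant might map a spoken request to a series of actions. Everything here—the skill names, the lookup table, the crude text normalization—is purely hypothetical; a real assistant would use a full natural-language-understanding layer.

```javascript
// Hypothetical table mapping a normalized utterance to the steps an
// assistant would carry out. A real system would use NLU, not string keys.
const skills = {
  "should i wear a jacket today": [
    "checkWeatherForecast",
    "checkCalendar",
    "checkWardrobe"
  ]
};

function planActions(utterance) {
  // Crudely normalize the request: lowercase, strip punctuation, trim.
  const key = utterance.toLowerCase().replace(/[?.!]/g, "").trim();
  return skills[key] || [];
}

// Prints the three hypothetical steps for the jacket question,
// and an empty list for anything the assistant doesn't recognize.
console.log(planActions("Should I wear a jacket today?"));
console.log(planActions("play some music"));
```

The interesting work in a real assistant happens in the normalization step, which is exactly where the AI-driven improvement described above takes place.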

Internet of Things (IoT)

With IoT, or the Internet of Things, everyday objects in our world continue to grow smarter. No longer just a shoelace, a lightbulb, or an electric socket, objects outfitted with chips and sensors become a way for us to track and understand the world around us—with software that helps make sense of it all by identifying upcoming repair needs in manufacturing, turning off our lights when we leave the room, or encouraging us to get more physical exercise.

In 2017, watch for devices to start to communicate with each other and help each other make decisions, especially when it comes to smart buildings and cities. Really interesting models can be built as more data is collected and more systems are opened to collaboration. For example, if a rain gauge can measure precipitation, and that information can be combined with the latest weather forecast, then a smart building can make the right decision about whether or not the front entry lights should be turned on. Another place to look for opportunities is manufacturing, where sensors and chips can help identify needed repairs before breakdowns happen, saving time and money in the process.
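
As a rough Node.js sketch of the rain-gauge example—with the threshold, the inputs, and the daylight check all invented for illustration—the smart building's decision might combine the local sensor reading with the forecast like this:

```javascript
// Decide whether a smart building should turn its entry lights on by
// combining a local rain-gauge reading with the latest forecast.
// The 1 mm threshold and the 07:00-18:00 daylight window are assumptions.
function shouldLightEntry(rainGaugeMm, forecast, hourOfDay) {
  const dark = hourOfDay < 7 || hourOfDay >= 18;          // crude daylight check
  const wetWeather = rainGaugeMm > 1 || forecast === "rain";
  return dark || wetWeather;                              // dark or wet: lights on
}

console.log(shouldLightEntry(0, "clear", 12));  // midday, dry -> false
console.log(shouldLightEntry(3, "rain", 12));   // raining at noon -> true
```

The point is less the logic itself than where the inputs come from: one is a local IoT sensor, the other an external data service, and the decision only becomes possible when the two systems are opened to each other.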

What’s Intel Doing?

Intel provides a wide range of hardware options—including Intel® Edison and Intel® Curie modules, and the Intel® IoT Gateway for enterprise—in addition to software tools and code samples to get you started, tools for cloud and analytics, and support. Intel also hosts and participates in numerous hackathons and other IoT-related events worldwide.

Learn More:

Virtual Reality (VR)

Virtual Reality was a big topic of conversation in 2016, but it has yet to really move into the mainstream. Look for that to change in 2017. Microsoft will be releasing a new HoloLens this year, and Facebook has previewed new work applications for Oculus Rift*. As we see continued improvements in AI, and more connected devices, immersive experiences like VR will become more connected as well—to the real world, to other devices, and even to other virtual experiences. We’ll also see a move toward more business use-cases.

Much of the promise for VR, or Augmented Reality (AR), has been in consumer applications such as gaming, or events like football games or concerts where you can “be” right in the middle of the action without ever leaving your couch. But there are plenty of opportunities for this technology in the B2B market as well, especially when it comes to training.

Think of any hard-to-access location—whether that’s outer space, the depths of the ocean, or the inside of a person’s body—and then think about how we can train people to work in that environment. Surgeons can use VR to practice surgical techniques, while underwater technicians can learn how to perform complicated repairs. There are also a lot of great industrial applications, such as digital overlays on machinery in factories, which can help provide education and assist in repairs.

In 2017, as VR moves further into the mainstream, look for these more functional and business-related use-cases to really shine.

What’s Intel Doing?

Intel is focused on Merged Reality, which feels less virtual and more real. In this type of experience, you see the real world, but you're also able to see and manipulate digital objects. With no restrictions on where you go next in the experience, you have the power to change the story, or to experience important moments with important people, no matter where they are in the actual world.

Learn More:

Chatbots

Essentially, chatbots are automated conversational agents that you can communicate with on Facebook*, Skype*, and other popular messaging platforms. Instead of going to a retailer’s website to purchase a new belt, you might just send the retailer a message using Facebook Messenger*, letting them know what kind of belt you’re looking for and what size you need. Or you might send a message to Marriott*, letting them know the dates of your upcoming trip and what kind of room you need, or asking about upcoming special deals. Facebook* launched bots in Messenger* in 2016, allowing companies to deliver automated customer support, e-commerce guidance, content, and interactive experiences, and we should see a lot more of that in the next year.

In 2017, look for further integration between messaging apps and other companies—some of these chatbot interactions will happen within messaging platforms, and some within existing product apps. Some companies will want custom chatbots, while others will make use of off-the-shelf versions. Also look for increased functionality within chatbots, like more complex transactions and e-commerce, as well as enhanced AI that will allow chatbots to emulate particular people or personalities, bringing in humor and the “sound” of particular voices.

The following four trends are less developed than the ones we've discussed above—but no less exciting. The focus here is on how we can better support all of the new technology being created, and make it work even better. How will we access and secure the data that’s being collected? How can we find efficiencies, and collaborate to realize the promise of all of these connected devices and systems? Read on to get in on the ground floor.

Blockchain

Bitcoin* was all the rage a few years ago, but in 2017, what we’re most excited about is the underlying technology behind it. So what is blockchain, and why is it important? Essentially, blockchain is a way of distributing a database across many different computers, maintaining a growing list of records, called blocks. Each block is chained to the previous block in a cryptographically secured peer-to-peer network—a system that provides a high level of protection from tampering and revision. This technology can be used to keep track of digital coins, like Bitcoin*, but it has many other potential uses, and can change the way we store and transfer data in a number of different industries. Because the data is recorded and shared by a community, there is not only a higher level of security, but also transparency and trust—each member of the community has their own copy of the record, and updates must be validated collectively. Think about the impact this might have, not just in the financial industry, but in health care, social networks, loyalty and reward programs, contracts, music distribution, asset management, identity verification, title registry, supply chain, and even electronic voting.

There are use-cases to consider in nearly any industry where we want to keep and maintain accurate records. Consider ticketing for events, something that the entertainment industry works hard to manage, usually with the help of third-party vendors charging additional fees. Using blockchain, fans can verify the transfer of ownership from one digital wallet to another, without having to worry that the PDF of the ticket they received might have been sold to multiple other people.

2016 was a year for experimenting with blockchain, and in 2017 we’ll start to see real-world applications. Because the technology is new, and there's not yet a critical mass of assets on blockchain, growth will still be limited—and likely, focused on the financial industry—but we should see multiple networks moving forward with their own standards and protocols, determining best practices and paving the way for future collaborations.

Digital Twin

A digital twin is a software model of a physical thing. By using sensor data to understand the current state of the real thing—and visualize changes as they occur—digital twins provide new opportunities to improve operations and add value. If we were to look at a report containing data, such as the performance of machines in a factory, we would need to review the data and then reconceptualize how the product moves through individual stations in order to understand what's happening—how operations might be improved, or where we might soon anticipate problems. With digital twins, we can actually see the progress of the physical product as it moves through the stations, along with information about its characteristics. Instead of looking at the numbers, we can look at the products in the virtual factory and see the trend lines that indicate a problem is developing.

In some ways, you can think of digital twins as the confluence of AI, IoT and VR/AR. We use data from sensors and devices—whether that's machine readings for a factory, or wearables for a serious athlete in training—then we use VR/AR to visualize the status of the factory or the athlete, and then we use AI to do what AI does best—sense, reason, act and adapt. We have an amazing amount of information being collected by sensors and devices, and digital twins provide a new way to make use of it.
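As a toy Node.js illustration of the idea—mirroring sensor readings into a software model that watches the trend and flags a developing problem—consider the sketch below. The machine name, the temperature field, the five-reading window, and the 80 °C threshold are all invented for illustration.

```javascript
// Toy digital twin of a factory machine: ingest sensor readings into a
// software model and flag a developing problem from the recent trend.
class MachineTwin {
  constructor(id) {
    this.id = id;
    this.temperatures = [];   // rolling window of recent sensor readings
  }
  ingest(reading) {
    this.temperatures.push(reading.temperatureC);
    if (this.temperatures.length > 5) this.temperatures.shift();
  }
  // A high average over the retained window suggests trouble developing,
  // even before any single reading is alarming on its own.
  status() {
    const avg = this.temperatures.reduce((a, b) => a + b, 0) /
                this.temperatures.length;
    return avg > 80 ? 'inspect soon' : 'normal';
  }
}

const twin = new MachineTwin('press-07');
[72, 74, 79, 85, 93].forEach(t => twin.ingest({ temperatureC: t }));
console.log(twin.status());  // average 80.6 -> 'inspect soon'
```

The real value comes from doing this at scale—one twin per machine, fed continuously by IoT sensors, visualized with VR/AR, and analyzed with AI, as described above.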

According to Gartner, within the next five years, hundreds of millions of things will have digital twins, used to plan for equipment service, operate factories, predict equipment failure, increase efficiency, and develop new products. These digital twins will act as the critical connection between data about the physical world, and information contained in the digital world.

Mesh App & Service Architecture (MASA)

As we continue to invent and rely upon new online devices, we’ll need IT systems that allow these devices to talk to each other, and that's where the mesh app and service architecture, or MASA, comes in. MASA will continue to develop alongside these other technologies in 2017 and beyond. By exposing APIs at multiple levels and across traditional boundaries, MASA weaves together web, mobile, desktop and IoT apps, allowing for a continuous, optimized experience.

Essentially, the idea is to link up multiple endpoints in a way that offers a seamless experience. It has to be faster than traditional architecture, and needs to be collaborative across multiple devices and applications, working in and outside of the cloud, and accommodating the rapidly changing needs of users.

Adaptive Security Architecture

As devices get smarter and more connected, we need even better ways to protect data and ensure the security of our systems. Adaptive security architecture is the idea that we can bring the power and learning of AI to the field of security—not just putting safeguards in place, but being able to predict, block/prevent, detect and report—managing threats as they occur, and gaining critical insights for improvement. With ASA, smarter computers and devices will be able to learn how to protect themselves better.

It's more important than ever to have intelligent, flexible strategies for dealing with a very high volume of security data. To be truly adaptive and to provide a high level of security, we can’t afford to simply log data, or alert a human operator. The architecture itself needs to be able to make decisions and respond within seconds, and that's what we'll begin to see as we move into 2017 and ASA is further developed.

As we move into 2017, there’s a lot to look forward to—and a lot that hasn't even been discovered yet. If you're interested in one of these fields, continue to experiment and explore, and check out the Intel® Developer Zone for more articles, tools, and to connect with other developers.


Intel® XDK FAQs - IoT


General IoT FAQs

Connecting your board to the Intel XDK

Resolving runtime issues

Where can I download the Intel XDK?

The Intel XDK main page includes download links for the Linux, Windows, and OS X operating systems.

How do I update the MRAA library on my Intel IoT platforms?

The simplest way to update the mraa library on an Edison or Galileo platform is to use the built-in "Update libraries on board" option, which can be found inside the IoT settings panel on the Develop tab.

Alternatively, on a Yocto Linux image, you can update the current version of mraa by running the following commands from the Yocto Linux root command-line:

# opkg update
# opkg upgrade

If your IoT board is using some other Linux distribution (e.g. a Joule platform), you can manually update the version of mraa on the board using the standard npm install command:

# npm install -g mraa

...or:

$ sudo npm install -g mraa

...for a Linux distribution that does not include a root user (such as Ubuntu).

All command-line upgrade options assume the IoT board has a working Internet connection and that you are logged into the board using either an ssh connection or over a serial connection that provides access to a Linux prompt on your IoT board.

Can the xdk-daemon run on other Linux distributions besides Yocto?

The Intel XDK xdk-daemon is currently (November, 2016) only supported on the Yocto and Ostro Linux distributions. Work is ongoing to provide a version of the xdk-daemon that will run on a wider range of IoT Linux platforms.

How do I connect the Intel XDK to my board without an active Internet connection?

The Intel Edison Board for Arduino supports the use of an RNDIS connection over a direct USB connection, which provides a dedicated network connection and IP address. Other boards can connect to a local network using either a wireless or wired LAN connection. The wired LAN connection may require attaching a USB Ethernet adaptor to the IoT board to provide the necessary physical wired Ethernet connection point. Access to your local network is all that is required to use an IoT device with the Intel XDK; access to the Internet by the IoT board is not a hard requirement, although it can be useful for some tasks.

Most Intel IoT platforms that are running Linux (and Node.js) can be logged into using a USB serial connection. Generally, a root Linux prompt is available via that USB serial connection. This serial Linux prompt can be used to configure your board to connect to a local network (for example, to configure the board's wifi connection) using Linux command-line tools. The specific details of configuring the board's network interface depend on the board and the specific version of Linux running on it. Please see the IoT board's installation and configuration documentation for help with that level of setup.

How do I use a web service API in my IoT project from my main.js?

Your application's main.js file runs on a standard Node.js runtime engine. Just as in a server-based Node.js environment, you can create a simple HTTP server as part of your IoT Node.js app that serves up an index.html to any client that connects. The index.html file should contain references to the JavaScript files that update the HTML DOM elements with the relevant web services data. The index.html (HTML5 application) is served by the HTTP server function in the main.js file, and the web-services-enabled app is then accessed through a browser, via the IoT device's IP address.

See this blog, titled Making a Simple HTTP Server with Node.js – Part III, for more help.

Error: "Cannot find module '/opt/xdk-daemon/current/node-inspector-server/.../debug.node'" message

In some IoT Linux images the xdk-daemon was not compiled correctly, resulting in this error message appearing when a debug session is started. You can work around this issue on an Edison or Galileo platform by using the "Upgrade Intel xdk-daemon on IoT device" option, which can be found in the IoT settings panel on the Develop tab.

Error: "Cannot find module 'mime-types' at Function.Module ..."

This error usually indicates that an npm install did not complete correctly, which can result in a missing dependency at runtime for your IoT Node.js app. The best way to deal with this is:

  1. Remove the node_modules directory in the project folder on your development system.

  2. Switch to another Intel XDK project (if you don't have another project, create a blank project).

  3. Switch back to the problem project.

  4. Click the "Upload" icon on the Develop tab and you should be prompted by a dialog asking if you want to build.

  5. Click the build button presented by the dialog prompt in the previous step.

  6. Wait for a completion of the build, indicated by this message in the console:
    NPM REBUILD COMPLETE![ 0 ] [ 0 ]

Now you should be able to safely run the project without errors.

Error: "Write Failed" messages in the console log, especially on Edison boards

This can be caused by your Edison device running out of disk space on the '/' partition. You can check this condition by logging into the Edison console and running the df command, which should give output similar to this:

# df -h /
Filesystem  Size    Used    Available  Use%  Mounted on
/dev/root   463.9M  453.6M  0          100%  / 

A value of "100%" under the "Use%" column means the partition is full. This can happen due to a large number of logs under the /var/log/journal folder. You can check the size of those logs using the du command:

# cd /var/log/journal
# du -sh *
 11.6M 0ee60c06f3234299b68e994ac392e8ca
 46.4M 167518a920274dfa826af62a7465a014
  5.8M 39b419bfd0fd424c880679810b4eeca2
 46.4M a40519fe5ab148f58701fb9e298920da
  5.8M db87dcad1f624373ba6743e942ebc52e
 34.8M e2bf0a84fab1454b8cdc89d73a5c5a6b 

Removing some or all of the log files should free up the necessary disk space.

Be sure not to delete the /var/log/journal directory itself!

Name/IP address of the board doesn't appear in the device list

See Finding and connecting your board to the Intel XDK IoT Edition.

Network issues when connecting the board to the Intel XDK IoT Edition

If your board is not showing up in the IoT Device list in the Intel XDK IoT Edition, you may have a DNS issue or network conflict that is cached on the board, the Intel XDK IoT Edition, or your system. This section contains workarounds and suggestions to address these connection issues. If you are on a home network or are reasonably sure your network is not the issue, see Finding and connecting your board to the Intel XDK IoT Edition for other possible solutions.

Try the following:

  • Check that your host system is on the same network as your board.
  • Shut down the Intel XDK IoT Edition and reboot your system. Unplug your board from its power supply and plug it back in.
  • If your board is connected to your Wi-Fi network, reconfigure your board's Wi-Fi connection. For the Intel® Edison board, use the configure_edison --wifi command. Also set a password for your device using configure_edison --password, and be sure to provide your login information when connecting to your board.
  • Make sure your LAN or corporate firewall supports TCP/IP Port 22.
  • Also try restarting the XDK daemon. In a serial communication session, enter the command: systemctl restart xdk-daemon

You may need to connect to your board manually under the following circumstances:

  • Your network requires additional login credentials (for example, a university Wi-Fi network).
  • You are using Ethernet over USB for the Intel® Edison board or a direct Ethernet connection for the Intel® Galileo board.

If the problem persists, you may have a local networking issue that may resolve once fewer people are on the network.

"Bonjour is missing - Please install Bonjour!" message on a system with Windows

If you are a Windows user and Bonjour is not installed on your host system, a "Bonjour is missing" message is displayed. Complete the steps in the Install Bonjour section to install it.

Example of a "Bonjour is missing" message

I have Bonjour installed, but the Intel XDK IoT Edition doesn't automatically detect my board

If you have a system with Windows 8.1, consider updating to Bonjour Print Services 3.0, which is included with Apple iTunes*. To download iTunes, see https://www.apple.com/itunes/download/. Note that if you're using a corporate host machine, your firewall may block Bonjour.

If you still have issues connecting with your board, try connecting to your board manually.

"ECONNREFUSED" message

For possible workarounds, see the following: https://communities.intel.com/message/279807#279807

The Intel XDK IoT Edition hangs when I try to connect to my device

From the IoT Device drop-down list, attempt to connect again by selecting your device or Add Manual Connection. Your previous attempt is automatically cancelled and you can try the connection again.

Finding and connecting your board to the Intel XDK IoT Edition

A number of issues may cause difficulty in connecting your board to the Intel XDK IoT Edition:

  • Network or DNS issues. This is more likely to be an issue if your network uses additional login credentials (for example, a university Wi-Fi network) or a corporate firewall, or if you are at an event such as a hackathon, with many users trying to connect with limited network resources. It is less likely to be an issue on a home network.

    For workarounds and suggestions to deal with network issues, see Network issues when connecting your board to the Intel XDK IoT Edition. If you are using an Intel® Edison board, you may also want to try connecting your board directly to your host, as described in Connecting to your Intel® Edison board using Ethernet over USB.
  • Problems with Bonjour: If your host system has Windows, you must install Bonjour to have the Intel XDK IoT Edition automatically detect your board and add it to the IoT Device drop-down list. You may experience issues with Bonjour if you have Windows 8.1, or if you are using a corporate host system. Try manually connecting to your board instead.
  • If you are connecting your board to the Intel XDK IoT Edition using Ethernet over USB or a direct Ethernet connection, or if you are simply not seeing your board in the IoT Device drop-down list, you may need to connect to your board manually.

Manually connecting your board to the Intel XDK IoT Edition

You can manually connect your board to the Intel XDK IoT Edition using the board's IP address. At a high-level, the steps to connect your device are:

  1. Set up your board and connect it to your host system.
  2. Connect your board to the same network as your host.
  3. Create a serial communication session with your board. In this session, use the ifconfig command to find your board's IP address.
  4. Manually connect your board to the Intel XDK IoT Edition. In the Intel XDK IoT Edition, from the IoT Device drop-down list, select Add Manual Connection. Enter your board's IP address, as well as your login information, then click Connect.

The exact steps to connect your board vary depending on your board and what method you use when connecting your board to your network. Whichever method you use, be sure that your board and your host system are connected on the same network.

For guidance to connect your board to the Intel XDK IoT Edition, see the appropriate section:

Connecting to an Intel® Edison board using Wi-Fi

This section assumes that you have already assembled your Intel® Edison board and connected it to your host system. For steps, see Assembling the Intel® Edison board with the Arduino expansion board.

  1. Set up a serial communication session with your board. See the appropriate steps for Windows, Mac OS X, or Linux for detailed instructions.
  2. If you haven't already, enter the command: configure_edison --password and follow the onscreen prompts to set up a login password for your board. Your default user account is root.
  3. Use the command configure_edison --wifi and follow the onscreen prompts to connect your board to your Wi-Fi network and find your board's IP address. For detailed steps, see Connecting your Intel Edison board using Wi-Fi*.
  4. In the Intel XDK IoT Edition, from the IoT Device drop-down list, select Add Manual Connection. Enter your board's IP address, as well as your login information, then click Connect.

Connecting to an Intel® Edison board using Ethernet over USB

This section contains steps to connect your Intel® Edison board directly to your host using Ethernet over USB. It assumes that you have already assembled your Intel® Edison board and connected it to your host system. For steps, see Assembling the Intel® Edison board with the Arduino expansion board. Your host system must have either Windows or Linux to connect to your board using Ethernet over USB.

  1. Refer to Connecting to your Intel® Edison board using Ethernet over USB for steps to set up Ethernet over USB.
  2. In the Intel XDK IoT Edition, from the IoT Device drop-down list, select Add Manual Connection.
  3. In the Address field, enter 192.168.2.15. Enter your board's login information, then click Connect.

Connecting to an Intel® Galileo board using Wi-Fi

This section assumes that you have already assembled your Intel® Galileo board and connected it to your host system. This includes connecting your Wi-Fi adapter to your board. See the appropriate steps for Windows, Mac, or Linux.

  1. Open a serial communication session with your board. See the appropriate steps for Windows, Mac, and Linux.
  2. Connect the board to your Wi-Fi network and find your board's IP address. See the appropriate steps for Windows, Mac, or Linux.
  3. In the Intel XDK IoT Edition, from the IoT Device drop-down list, select Add Manual Connection. Enter your board's IP address, as well as your login information, then click Connect.

Connecting to an Intel® Galileo board using a direct Ethernet connection

This section assumes that you have already assembled your Intel® Galileo board and connected it to your host system. An example of a connected Intel® Galileo board is shown below. For detailed instructions to assemble your board, see the appropriate steps for Windows, Mac, and Linux. Note that you don't have to connect a Wi-Fi adapter to your board or connect your board to a Wi-Fi network.

Example of a connected board

  1. Connect an Ethernet cable to the Ethernet port on your board. Plug the other end in to your host system.
  2. Open a serial communication session with your board. See the appropriate steps for Windows, Mac, and Linux.
  3. Once logged in to your board, enter the command:

    ifconfig

    This will give you the IP address of the device. In the listing, it is called the inet addr. In the example below, the IP address is 10.253.60.19, so to connect to this device you would type 10.253.60.19 into the Address field in the Connect to your IoT Device dialog box.

    If the response does not include an entry for your Ethernet connection, try the command ifup eth0 before trying ifconfig again.

    Example of finding your board's IP address
    Notice that the line above the IP address entry gives you the HWaddr (hardware address) of your interface, which is the same as the MAC address on the piece of paper glued to the connector in the setup photo above.
  5. If the above method for finding your board's IP address does not work for you, another useful tool is arp. Use the arp -a command to list all the connections to a computer together with their MAC addresses. If your board is on the same subnet as your host, you can start by pinging all the machines on the subnet to retrieve their MAC addresses. You can then use arp -a to spot your board from the MAC address on that piece of paper glued to the Ethernet connector.

    Example of running arp -a
  5. In the Intel XDK IoT Edition, from the IoT Device drop-down list, select Add Manual Connection. Enter your board's IP address, as well as your login information, then click Connect.

Setting up a serial terminal on a system with Windows*

This section contains steps to set up serial communication with an Intel® Galileo Gen 1 or Gen 2 board on a system with Windows.

Intel® Galileo Gen 1 board

  1. Open the Device Manager. You should see a new entry for your Intel Galileo board in the Ports (COM & LPT) section. Make a note of the COM#, as shown in the example image below (in this case, COM10); you'll need this information to connect to your board. Note that the name and COM# for your particular board may vary.

    Example of the entry for your board in the Device Manager
  2. Proceed to set up PuTTY.

Intel® Galileo Gen 2 board

  1. Open the Device Manager. You should see a new USB Serial Port entry in the Ports (COM & LPT) section. Make a note of the COM#, as shown in the example image below (in this case, COM21); you'll need this information to connect to your board.

    Example of the USB Serial Port entry in the Device Manager
  2. Proceed to set up PuTTY.

Set up PuTTY

  1. Download the PuTTY terminal emulator: http://the.earth.li/~sgtatham/putty/latest/x86/putty.exe.
  2. Double-click the putty.exe file you downloaded to run it.
  3. Configure the PuTTY menu as follows:
    1. Under Connection type, select Serial.
    2. In the Serial line field, type your COM# (the COM port of your FTDI or 3.5mm to DB-9 adapter).
    3. In the Speed field, type 115200.

      Example of the PuTTY configuration settings

  4. When you see a blank screen, press Enter. You should see a login screen.
  5. At the login prompt, type root and press Enter.
  6. Enter the password for the root account and press Enter. By default, the root password is blank. You should see a terminal prompt.

    Note: You can set the root password by entering the command: passwd.

You have now set up serial communication with your board.

Setting up a serial terminal on a system with Mac*

This section contains steps to set up serial communication with an Intel® Galileo board on a system with Mac* OS X*.

At the end of this section you will have connected to the board through a terminal, and checked whether your firmware is up to date.

  1. Launch the Terminal app, as follows:
    1. Launch Spotlight by pressing Cmd+Space.
    2. Type terminal.
    3. Select the Terminal app.
  2. To list all connected devices, enter the command:

    ls /dev/tty.*

    Be sure to include the .* at the end of the command.
  3. Look for a device that contains cu.usbserial or tty.usbserial. For example, your device may show up as /dev/tty.usbserial-A402YSYU.

    Tip: If you don’t see a usbserial device listed, verify the power and USB connection to your Intel® Galileo board.
  4. Connect to the USB serial device using the Terminal screen utility by typing:

    screen /dev/xx.usbserial-XXXXXXXX 115200 -L

    where /dev/xx.usbserial-XXXXXXXX is replaced by your device’s unique name.

    Using the example above, the command would be: screen /dev/tty.usbserial-A402YSYU 115200 -L

    Note: 115200 indicates the baud rate. Always use 115200. -L turns on output logging so you can see what the results of your commands are.

  5. When you see a blank screen, press Enter. You should see a login screen.
  6. At the login prompt, type root and press Enter.
  7. Enter the password for the root account and press Enter. By default, the root password is blank. You should see a terminal prompt.

    Note: You can set the root password by entering the command: passwd.

You have now set up serial communication with your board.

Setting up a serial terminal on a system with Linux*

This section contains steps to set up serial communication with an Intel® Galileo board on a system with Linux*.

  1. If you haven't already, install the screen shell session manager by entering the following in the Terminal:

    sudo apt-get install screen

  2. Launch the Terminal.
  3. Connect to your board by entering:

    sudo screen /dev/ttyUSB0 115200

  4. You may be asked for your sudo password on the host Linux system. If so, type it in and press Enter.
  5. When you see a blank screen, press Enter. You should see a login screen.
  6. At the login prompt, type root and press Enter. Then enter the password for the root account and press Enter. By default, the root password is blank. You should see a terminal prompt.

    Note: You can set the root password by entering the command: passwd.

You have now set up serial communication with your board.

How do I find the IP address of my board?

  1. In a serial communication session with your board, enter the command:

    ifconfig
  2. Note your IP address, as shown in the image below. The exact entry containing your IP address may vary slightly depending on how you have connected to your board. For details, see the appropriate link below:
    Find your IP address.
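On images where ifconfig reports the address in the older "inet addr:" form (as in the BusyBox-style output this FAQ shows), the address can be pulled out in one pipeline. A sketch, assuming the Ethernet interface is named eth0; adjust the interface name for your connection type:

```shell
# Print only the IPv4 address of eth0 (assumes the older
# "inet addr:" output format shown in this FAQ).
ifconfig eth0 2>/dev/null | grep 'inet addr' | sed 's/.*inet addr:\([0-9.]*\).*/\1/'
```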

Intel XDK IoT Edition won’t upload to the Intel® Edison board/"Error Edison Drive is full" message

If your applications hang during the uploading process or you see the "Error Edison Drive is full" message, the cause is a file system bug in which journal entries are logged without a size limit. You must delete the journal entries and install an update. As a workaround, try configuring systemd (the board's system logger) to set a maximum log file size.

  1. In a serial communication session with your board, open the /etc/systemd/journald.conf file.
  2. Replace the line that reads #SystemMaxFileSize= with SystemMaxFileSize=200K. Note that the # (pound sign) is now gone.
  3. Restart the system logging service or type reboot.
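Steps 1 and 2 can be done with a single sed command. This is a sketch, demonstrated on a scratch file so nothing on the host is modified; on the board, point it at /etc/systemd/journald.conf after making a backup:

```shell
# Uncomment SystemMaxFileSize and cap the journal at 200K in one edit.
conf=$(mktemp)                        # scratch stand-in for journald.conf
printf '#SystemMaxFileSize=\n' > "$conf"
sed -i 's/^#\{0,1\}SystemMaxFileSize=.*/SystemMaxFileSize=200K/' "$conf"
cat "$conf"
```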

Intel XDK IoT Edition crashes

If you get an error message and the Intel XDK IoT Edition crashes with an option to report the issue, the Intel XDK IoT Edition's connection to the board may have been interrupted. For example, this can happen in a hackathon setting with hundreds of phones, laptops, and boards connected to the LAN.

This is a known issue. As a workaround for the Intel® Edison board, try connecting to your board using Ethernet over USB.

Error: "Cannot find mraa" message when trying to run an application

If you see a “Cannot find mraa” message when trying to run an application on your Intel® Edison board, the default image on the board likely does not have the latest libraries to work with the Intel XDK. As a workaround, follow the instructions here in the Running Sample Applications section.

NPM ENOSPC errors on the Intel® Edison board

If you are getting "Error extracting update" or "Error: Command failed: x node_modules/" messages with your Intel® Edison board, as shown below, you may need to flash your board again.

  1. In a terminal communication session with your board, enter the command:

    reboot ota
  2. Log in to your board, then enter the command:

    configure_edison --wifi
  3. Follow the on-screen instructions to reestablish your board's Wi-Fi connection.

Example of errors in the Intel XDK IoT Edition console

My xdk-daemon is corrupted

Click the Manage your daemon/IoT device icon in the bottom right of the Intel XDK, then select Upgrade Intel xdk-daemon on IoT device. Your daemon is updated and should no longer be corrupted.

Connecting directly to your board with SSH

You will not normally need to connect to your board with SSH, but it can be handy if you have some experience working on Unix or Linux and need to troubleshoot. Just remember that it can be very easy to cause damage as the root user.

To connect to your board with a shell that will allow you to manage your board directly, use the ssh command:

ssh -l root ipaddress

where ipaddress is the IP address of your board. Often, this address is 192.168.2.15. For steps to find your board's IP address, see Finding and connecting your board to the Intel® XDK IoT Edition.

Alternatively, if you do not have ssh available as a command, you can download PuTTY and use it to SSH into the board.

Once you have connected to your board, you can perform a number of useful tasks:

Check that the daemon is running

To check if the Intel XDK IoT Edition daemon is running, enter the command:

ps | grep xdk

If the daemon is running, you should see a response similar to the following:

root@edison:~# ps | grep xdk
  287 root      3132 S    {xdk-daemon} /bin/sh /opt/xdk-daemon/xdk-daemon
  292 root     35964 S    /usr/bin/node /opt/xdk-daemon/main.js
  303 root     72224 S    /usr/bin/node /opt/xdk-daemon/current/appDaemon.js
  443 root      2660 S    grep xdk
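A defensive variant of the check above uses the bracket trick: the pattern [x]dk-daemon still matches "xdk-daemon" in the daemon's process lines, but not the literal text of the grep command's own entry in the listing, so the check is true only when the daemon itself is up. A sketch:

```shell
# True only when the daemon itself is running; grep's own ps entry
# contains "[x]dk-daemon", which the pattern does not match.
if ps | grep '[x]dk-daemon' > /dev/null; then
  echo "xdk-daemon is running"
else
  echo "xdk-daemon is not running"
fi
```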

Change the root password

To change the password for the root account, enter the command:

passwd

Follow the onscreen instructions to change your password, as shown in the example below.

root@edison:~# passwd
Changing password for root
Enter the new password (minimum of 5 characters)
Please use a combination of upper and lower case letters and numbers.
New password:
Re-enter new password:
passwd: password changed.

Restart the daemon

There should be no reason you need to restart the daemon unless you are working on the daemon itself; the most convenient way to update the daemon is from the Intel XDK IoT Edition. If you need to do some work on the daemon, however, you can restart it by entering the following command:

root@edison:~# systemctl restart xdk-daemon

Error: "Cannot find module '/opt/xdk-daemon/current/node-inspector-server/.../debug.node’"

If you see this message, you need to upgrade your version of the XDK daemon. To do so, click the Manage your daemon/IoT device icon in the lower right of the Intel XDK IoT Edition, then select Upgrade Intel xdk-daemon on IoT device. Your daemon is updated and the correct module is pulled in.


Intel® Adaptive Display


The Intel Adaptive Display context service is a Windows* service that adapts the screen brightness of a display in response to the amount of ambient light in the environment. The advantages include reduced eye strain in dim environments, increased readability in bright environments, improved sleep, and longer battery life.

Please download the release artifacts from the links below for your evaluation.

Updating the License File after Renewal


When you renew your license, you are entitled to a full year of Premier Support and product updates. If you already have the desired version installed, you don't have to reinstall it. If you wish to download the newest version, you will find a download link in your renewal email. You can also log in to the Intel Registration Center and download the product.

Note: the serial number for your product is not likely to change unless you purchase a product upgrade or receive a free upgrade. If your serial number has changed, be sure to keep the new number for your records.

If you choose to download and install an updated version, be sure to use the correct serial number. If you choose to install using a license file, make sure to use the new license file and not the old one on your system.

Even if you have the latest version already installed you will need to replace the old license file with the new one in order to continue using your product successfully.

Installing the license file using Intel Software Manager

If you have Intel Software Manager on your system you can use it to install the new license and remove the old license quickly and efficiently.

- Open Intel Software Manager
- Navigate to the Licenses tab
- Click the Refresh License icon of the appropriate serial number and then click Refresh

The new license file will be installed in the appropriate directory and the old license file will be removed from your system.

Installing the license file manually

If your product was activated before the renewal, updated license file(s) will be attached to your renewal email and will also be available to you from the Intel Registration Center. If your product was not activated before the renewal, license file(s) will be generated upon activation of the product.

Place the license file "*.lic" in the following directory, making sure not to change the license file name:

  • On Windows*: <installation drive>\Program Files\Common Files\Intel\Licenses
    For example: "c:\Program Files\Common Files\Intel\Licenses"
    Note: If the INTEL_LICENSE_FILE environment variable is defined, copy the file to the directory specified by the environment variable instead.
  • On Linux*: /opt/intel/licenses
  • On Mac OS*: /Users/Shared/Library/Application Support/Intel/Licenses

Note: You will likely need administrative/root privileges to copy the license to the named directory.
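The copy can be scripted to honor INTEL_LICENSE_FILE when it names a directory (when the variable holds a port@host value instead, no file copy is needed). A sketch using scratch stand-in paths created with mktemp; on a real system the destination is one of the platform directories above, and copying there usually requires administrative privileges:

```shell
# Demo: install a license file into the directory named by
# INTEL_LICENSE_FILE, falling back to the default location.
# The mktemp calls create scratch stand-ins for the real paths.
lic=$(mktemp)                            # stands in for your *.lic file
export INTEL_LICENSE_FILE=$(mktemp -d)   # stands in for the licenses dir
dest="${INTEL_LICENSE_FILE:-/opt/intel/licenses}"
cp "$lic" "$dest/"
```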

Remember: keep your licenses directory clean. Old licenses may cause slow checkout times. When you install the new license, be sure to delete any old, unused licenses from the directory.

Floating license

Place the new license file on your server(s) either manually or via Intel License Manager and remove the old license file. You will also need to replace the license files on the client systems if your use model is to have a copy of the license file on each client system. There is no need to replace the license file on the clients if you have specified the port#@host for the license server in the INTEL_LICENSE_FILE variable.
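Pointing clients at the server instead of copying files is a one-line environment setting. A sketch, where the port number 28518 and the host name below are placeholders for your own license server's values:

```shell
# port@host form: clients check licenses out from the server directly,
# so no license file needs to be copied to each client system.
export INTEL_LICENSE_FILE=28518@licserver.example.com
```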

Named-User license

Place the new corresponding license file on each one of your activated systems and remove the old license file. Remember that host ID(s) in the license file must match the host ID of the system it is associated with. If you have multiple activations per serial number you will need to identify the correct license file for each one of your systems. Using the Intel Software Manager is highly recommended to ensure license and system match.

Have questions?

Check out the Licensing FAQ
Or ask in our Intel® Software Development Products Download, Registration & Licensing forum

* If you have a question be sure to start a new forum thread.

Intel® XDK FAQs - General


How can I get started with Intel XDK?

There are plenty of videos and articles that you can go through here to get started. You could also start with some of our demo apps. It may also help to read Five Useful Tips on Getting Started Building Cordova Mobile Apps with the Intel XDK, which will help you understand some of the differences between developing for a traditional server-based environment and developing for the Intel XDK hybrid Cordova app environment.

A prior understanding of how to program using HTML, CSS, and JavaScript* is crucial to using the Intel XDK. The Intel XDK is primarily a tool for visualizing, debugging, and building an app package for distribution.

You can do the following to access our demo apps:

  • Select Project tab
  • Select "Start a New Project"
  • Select "Samples and Demos"
  • Create a new project from a demo

If you have specific questions following that, please post it to our forums.

How do I convert my web app or web site into a mobile app?

The Intel XDK creates Cordova mobile apps (aka PhoneGap apps). Cordova web apps are driven by HTML5 code (HTML, CSS and JavaScript). There is no web server on the mobile device to "serve" the HTML pages in your Cordova web app; the main program resources required by your app are file-based, meaning all of your web app resources are located within the mobile app package and reside on the mobile device. Your app may also require resources from a server. In that case, you will need to connect with that server using AJAX or similar techniques, usually via a collection of RESTful APIs provided by that server. However, your app is not integrated into that server; the two entities are independent and separate.

Many web developers believe they should be able to include PHP or Java code or other "server-based" code as an integral part of their Cordova app, just as they do in a "dynamic web app." This technique does not work in a Cordova web app, because your app does not reside on a server and there is no "backend"; your Cordova web app is a "front-end" HTML5 web app that runs independently of any servers. See the following articles for more information on how to move from writing "multi-page dynamic web apps" to "single-page Cordova web apps":

Can I use an external editor for development in Intel® XDK?

Yes, you can open your files and edit them in your favorite editor. However, note that you must use Brackets* to use the "Live Layout Editing" feature. Also, if you are using App Designer (the UI layout tool in Intel XDK) it will make many automatic changes to your index.html file, so it is best not to edit that file externally at the same time you have App Designer open.

Some popular editors among our users include:

  • Sublime Text* (Refer to this article for information on the Intel XDK plugin for Sublime Text*)
  • Notepad++* for a lightweight editor
  • Jetbrains* editors (Webstorm*)
  • Vim* the editor

How do I get code refactoring capability in Brackets* (the Intel XDK code editor)?

...to be written...

Why doesn’t my app show up in Google* play for tablets?

...to be written...

What is the global-settings.xdk file and how do I locate it?

global-settings.xdk contains information about all your projects in the Intel XDK, along with many of the settings related to panels under each tab (Emulate, Debug etc). For example, you can set the emulator to auto-refresh or no-auto-refresh. Modify this file at your own risk and always keep a backup of the original!

You can locate global-settings.xdk here:

  • Mac OS X*
    ~/Library/Application Support/XDK/global-settings.xdk
  • Microsoft Windows*
    %LocalAppData%\XDK
  • Linux*
    ~/.config/XDK/global-settings.xdk

If you are having trouble locating this file, you can search for it on your system using something like the following:

  • Windows:
    > cd /
    > dir /s global-settings.xdk
  • Mac and Linux:
    $ sudo find / -name global-settings.xdk

When do I use the intelxdk.js, xhr.js and cordova.js libraries?

The intelxdk.js and xhr.js libraries were only required for use with the Intel XDK legacy build tiles (which have been retired). The cordova.js library is needed for all Cordova builds. When building with the Cordova tiles, any references to intelxdk.js and xhr.js libraries in your index.html file are ignored.

How do I get my Android (and Crosswalk) keystore file?

New with release 3088 of the Intel XDK, you may now download your build certificates (aka keystore) using the new certificate manager that is built into the Intel XDK. Please read the initial paragraphs of Managing Certificates for your Intel XDK Account and the section titled "Convert a Legacy Android Certificate" in that document, for details regarding how to do this.

It may also help to review this short, quick overview video (there is no audio) that shows how you convert your existing "legacy" certificates to the "new" format that allows you to directly manage your certificates using the certificate management tool that is built into the Intel XDK. This conversion process is done only once.

If the above fails, please send an email to html5tools@intel.com requesting help. It is important that you send that email from the email address associated with your Intel XDK account.

How do I rename my project that is a duplicate of an existing project?

See this FAQ: How do I make a copy of an existing Intel XDK project?

How do I recover when the Intel XDK hangs or won't start?

  • If you are running Intel XDK on Windows* you must use Windows* 7 or higher. The Intel XDK will not run reliably on earlier versions.
  • Delete the "project-name.xdk" file from the project directory that Intel XDK is trying to open when it starts (it will try to open the project that was open during your last session), then try starting Intel XDK. You will have to "import" your project into Intel XDK again. Importing merely creates the "project-name.xdk" file in your project directory and adds that project to the "global-settings.xdk" file.
  • Rename the project directory Intel XDK is trying to open when it starts. Create a new project based on one of the demo apps. Test Intel XDK using that demo app. If everything works, restart Intel XDK and try it again. If it still works, rename your problem project folder back to its original name and open Intel XDK again (it should now open the sample project you previously opened). You may have to re-select your problem project (Intel XDK should have forgotten that project during the previous session).
  • Clear Intel XDK's program cache directories and files.

    On a Windows machine this can be done using the following on a standard command prompt (administrator is not required):

    > cd %AppData%\..\Local\XDK
    > del *.* /s/q

    To locate the "XDK cache" directory on OS X* and Linux* systems, do the following:

    $ sudo find / -name global-settings.xdk
    $ cd <dir found above>
    $ sudo rm -rf *

    You might want to save a copy of the "global-settings.xdk" file before you delete that cache directory and copy it back before you restart Intel XDK. Doing so will save you the effort of rebuilding your list of projects. Please refer to this question for information on how to locate the global-settings.xdk file.
  • If you save the "global-settings.xdk" file and restored it in the step above and you're still having hang troubles, try deleting the directories and files above, along with the "global-settings.xdk" file and try it again.
  • Do not store your project directories on a network share (the Intel XDK has issues with network shares that have not been resolved). This includes folders shared between a Virtual machine (VM) guest and its host machine (for example, if you are running Windows* in a VM running on a Mac* host). 
  • Some people have issues using the Intel XDK behind a corporate network proxy or firewall. To check for this issue, try running the Intel XDK from your home network where, presumably, you have a simple NAT router and no proxy or firewall. If things work correctly there then your corporate firewall or proxy may be the source of the problem.
  • Issues with Intel XDK account logins can also cause Intel XDK to hang. To confirm that your login is working correctly, go to the Intel login page and confirm that you can login with your Intel XDK account username and password.
  • If you are experiencing login issues, please send an email to html5tools@intel.com from the email address registered to your login account, describing the nature of your account problem and any other details you believe may be relevant.

If you can reliably reproduce the problem, please post a copy of the "xdk.log" file that is stored in the same directory as the "global-settings.xdk" file to the Intel XDK forum. Please ATTACH the xdk.log file to your post using the "Attach Files to Post" link below the forum edit window.
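The cache-clearing steps above can be wrapped so that the "global-settings.xdk" file survives the wipe. A sketch; CACHE_DIR below is a scratch stand-in created with mktemp, so on a real system set it instead to the cache directory you located with the commands earlier in this answer:

```shell
# Back up global-settings.xdk, clear the cache, then restore it.
CACHE_DIR=$(mktemp -d)                       # stand-in for the XDK cache dir
touch "$CACHE_DIR/global-settings.xdk" "$CACHE_DIR/stale-cache.tmp"

cp "$CACHE_DIR/global-settings.xdk" "$CACHE_DIR.bak"  # save settings
rm -rf "$CACHE_DIR"/*                                 # wipe the cache
cp "$CACHE_DIR.bak" "$CACHE_DIR/global-settings.xdk"  # restore settings
```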

Is Intel XDK an open source project? How can I contribute to the Intel XDK community?

No, it is not an open source project. However, it utilizes many open source components that are assembled into the Intel XDK. While you cannot contribute directly to the Intel XDK integration effort, you can contribute to the many open source components that make up the Intel XDK.

The following open source components are the major elements that are being used by Intel XDK:

  • Node-Webkit
  • Chromium
  • Ripple* emulator
  • Brackets* editor
  • Weinre* remote debugger
  • Crosswalk*
  • Cordova*
  • App Framework*

How do I configure Intel XDK to use 9 patch png for Android* apps splash screen?

Intel XDK does support the use of 9 patch png for Android* apps splash screen. You can read up more at https://software.intel.com/en-us/xdk/articles/android-splash-screens-using-nine-patch-png on how to create a 9 patch png image and link to an Intel XDK sample using 9 patch png images.

How do I stop AVG from popping up the "General Behavioral Detection" window when Intel XDK is launched?

You can try adding nw.exe as the app that needs an exception in AVG.

What do I specify for "App ID" in Intel XDK under Build Settings?

Your app ID uniquely identifies your app. For example, it can be used to identify your app within Apple’s application services allowing you to use things like in-app purchasing and push notifications.

Here are some useful articles on how to create an App ID:

Is it possible to modify the Android Manifest or iOS plist file with the Intel XDK?

You cannot modify the AndroidManifest.xml file directly with our build system, as it only exists in the cloud. However, you may do so by creating a dummy plugin that only contains a plugin.xml file containing directives that can be used to add lines to the AndroidManifest.xml file during the build process. In essence, you add lines to the AndroidManifest.xml file via a local plugin.xml file. Here is an example of a plugin that does just that:

<?xml version="1.0" encoding="UTF-8"?>
<plugin xmlns="http://apache.org/cordova/ns/plugins/1.0"
    id="my-custom-intents-plugin"
    version="1.0.0">
    <name>My Custom Intents Plugin</name>
    <description>Add Intents to the AndroidManifest.xml</description>
    <license>MIT</license>
    <engines>
        <engine name="cordova" version=">=3.0.0" />
    </engines>
    <!-- android -->
    <platform name="android">
        <config-file target="AndroidManifest.xml" parent="/manifest/application">
            <activity android:configChanges="orientation|keyboardHidden|keyboard|screenSize|locale"
                android:label="@string/app_name"
                android:launchMode="singleTop"
                android:name="testa"
                android:theme="@android:style/Theme.Black.NoTitleBar">
                <intent-filter>
                    <action android:name="android.intent.action.SEND" />
                    <category android:name="android.intent.category.DEFAULT" />
                    <data android:mimeType="*/*" />
                </intent-filter>
            </activity>
        </config-file>
    </platform>
</plugin>

You can inspect the AndroidManifest.xml created in an APK, using apktool with the following command line:

$ apktool d my-app.apk
$ cd my-app
$ more AndroidManifest.xml

This technique exploits the config-file element that is described in the Cordova Plugin Specification docs and can also be used to add lines to iOS plist files. See the Cordova plugin documentation link for additional details.

Here is an example of such a plugin for modifying the iOS plist file, specifically for adding a BIS key to the plist file:

<?xml version="1.0" encoding="UTF-8"?>
<plugin xmlns="http://apache.org/cordova/ns/plugins/1.0"
    id="my-custom-bis-plugin"
    version="0.0.2">
    <name>My Custom BIS Plugin</name>
    <description>Add BIS info to iOS plist file.</description>
    <license>BSD-3</license>
    <preference name="BIS_KEY" />
    <engines>
        <engine name="cordova" version=">=3.0.0" />
    </engines>
    <!-- ios -->
    <platform name="ios">
        <config-file target="*-Info.plist" parent="CFBundleURLTypes">
            <array>
                <dict>
                    <key>ITSAppUsesNonExemptEncryption</key><true/>
                    <key>ITSEncryptionExportComplianceCode</key>
                    <string>$BIS_KEY</string>
                </dict>
            </array>
        </config-file>
    </platform>
</plugin>

Also see this forum thread (https://software.intel.com/en-us/forums/intel-xdk/topic/680309) for an example of how to customize the OneSignal plugin's notification sound, in an Android app, by way of using a simple custom Cordova plugin. The same technique can be applied to adding custom icons and other assets to your project.

How can I share my Intel XDK app build?

You can send a link to your project via an email invite from your project settings page. However, a login to your account is required to access the file behind the link. Alternatively, you can download the build from the build page, onto your workstation, and push that built image to some location from which you can send a link to that image.

Why does my iOS build fail when I am able to test it successfully on a device and the emulator?

Common reasons include:

  • Your App ID specified in the project settings does not match the one you specified in Apple's developer portal.
  • The provisioning profile does not match the cert you uploaded. Double check with Apple's developer site that you are using the correct and current distribution cert and that the provisioning profile is still active. Download the provisioning profile again and add it to your project to confirm.
  • In Project Build Settings, your App Name is invalid. It should be modified to include only letters, spaces, and numbers.

How do I add multiple domains in Domain Access?

Here is the primary doc source for that feature.

If you need to insert multiple domain references, then you will need to add the extra references in the intelxdk.config.additions.xml file. This StackOverflow entry provides a basic idea and you can see the intelxdk.config.*.xml files that are automatically generated with each build for the <access origin="xxx" /> line that is generated based on what you provide in the "Domain Access" field of the "Build Settings" panel on the Project Tab.

How do I build more than one app using the same Apple developer account?

On the Apple developer site, create a distribution certificate using the "iOS* Certificate Signing Request" key downloaded from the Intel XDK Build tab for the first app only. For subsequent apps, reuse the same certificate and import it into the Build tab like you usually would.

How do I include search and spotlight icons as part of my app?

Please refer to this article in the Intel XDK documentation. Create an intelxdk.config.additions.xml file in your top-level directory (same location as the other intelxdk.*.config.xml files) and add the following lines to support icons in Settings and other areas in iOS*.

<!-- Spotlight Icon -->
<icon platform="ios" src="res/ios/icon-40.png" width="40" height="40" />
<icon platform="ios" src="res/ios/icon-40@2x.png" width="80" height="80" />
<icon platform="ios" src="res/ios/icon-40@3x.png" width="120" height="120" />
<!-- iPhone Spotlight and Settings Icon -->
<icon platform="ios" src="res/ios/icon-small.png" width="29" height="29" />
<icon platform="ios" src="res/ios/icon-small@2x.png" width="58" height="58" />
<icon platform="ios" src="res/ios/icon-small@3x.png" width="87" height="87" />
<!-- iPad Spotlight and Settings Icon -->
<icon platform="ios" src="res/ios/icon-50.png" width="50" height="50" />
<icon platform="ios" src="res/ios/icon-50@2x.png" width="100" height="100" />

For more information related to these configurations, visit http://cordova.apache.org/docs/en/3.5.0/config_ref_images.md.html#Icons%20and%20Splash%20Screens.

For accurate information related to iOS icon sizes, visit https://developer.apple.com/library/ios/documentation/UserExperience/Conceptual/MobileHIG/IconMatrix.html

NOTE: The iPhone 6 icons will only be available if iOS* 7 or 8 is the target.

Cordova iOS* 8 support JIRA tracker: https://issues.apache.org/jira/browse/CB-7043

Does Intel XDK support Modbus TCP communication?

No, since Modbus is a specialized protocol, you need to write either some JavaScript* or native code (in the form of a plugin) to handle the Modbus transactions and protocol.

How do I sign an Android* app using an existing keystore?

New with release 3088 of the Intel XDK, you may now import your existing keystore into Intel XDK using the new certificate manager that is built into the Intel XDK. Please read the initial paragraphs of Managing Certificates for your Intel XDK Account and the section titled "Import an Android Certificate Keystore" in that document, for details regarding how to do this.

If the above fails, please send an email to html5tools@intel.com requesting help. It is important that you send that email from the email address associated with your Intel XDK account.

How do I build separately for different Android* versions?

Under the Projects Panel, you can select the Target Android* version under the Build Settings collapsible panel. You can change this value and build your application multiple times to create numerous versions of your application that are targeted for multiple versions of Android*.

How do I display the 'Build App Now' button if my display language is not English?

If your display language is not English and the 'Build App Now' button is proving to be troublesome, you may change your display language to English, which can be downloaded via a Windows* update. Once you have installed the English language pack, go to Control Panel > Clock, Language and Region > Region and Language > Change Display Language.

How do I update my Intel XDK version?

When an Intel XDK update is available, an Update Version dialog box lets you download the update. After the download completes, a similar dialog lets you install it. If you did not download or install an update when prompted (or on older versions), click the package icon next to the orange (?) icon in the upper-right to download or install the update. The installation removes the previous Intel XDK version.

How do I import my existing HTML5 app into the Intel XDK?

If your project contains an Intel XDK project file (<project-name>.xdk) you should use the "Open an Intel XDK Project" option located at the bottom of the Projects List on the Projects tab (lower left of the screen, round green "eject" icon, on the Projects tab). This would be the case if you copied an existing Intel XDK project from another system or used a tool that exported a complete Intel XDK project.

If your project does not contain an Intel XDK project file (<project-name>.xdk) you must "import" your code into a new Intel XDK project. To import your project, use the "Start a New Project" option located at the bottom of the Projects List on the Projects tab (lower left of the screen, round blue "plus" icon, on the Projects tab). This will open the "Samples, Demos and Templates" page, which includes an option to "Import Your HTML5 Code Base." Point to the root directory of your project. The Intel XDK will attempt to locate a file named index.html in your project and will set the "Source Directory" on the Projects tab to point to the directory that contains this file.

If your imported project did not contain an index.html file, your project may be unstable. In that case, it is best to delete the imported project from the Intel XDK Projects tab ("x" icon in the upper right corner of the screen), rename your "root" or "main" html file to index.html and import the project again. Several components in the Intel XDK depend on the assumption that the main HTML file in your project is named index.html. See Introducing Intel® XDK Development Tools for more details.

It is highly recommended that your "source directory" be located as a sub-directory inside your "project directory." This ensures that non-source files are not included as part of your build package when building your application. If the "source directory" and "project directory" are the same, the result is longer upload times to the build server and unnecessarily large application executable files returned by the build system.
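As a concrete sketch (all names below are examples, not requirements), the recommended layout can be created like this:

```shell
# Hypothetical example of the recommended layout: the source directory
# ("www") is a sub-directory of the project directory, so only files
# under "www" are packaged by the build system.
mkdir -p MyProject/www
touch MyProject/MyProject.xdk               # project file stays in the project root
touch MyProject/intelxdk.config.android.xml # build configuration files also stay in the root
touch MyProject/www/index.html              # app source lives in the source directory
ls -R MyProject
```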

I am unable to login to App Preview with my Intel XDK password.

On some devices you may have trouble entering your Intel XDK login password directly on the device in the App Preview login screen. In particular, sometimes you may have trouble with the first one or two letters getting lost when entering your password.

Try the following if you are having such difficulties:

  • Reset your password, using the Intel XDK, to something short and simple.

  • Confirm that this new short and simple password works with the XDK (logout and login to the Intel XDK).

  • Confirm that this new password works with the Intel Developer Zone login.

  • Make sure you have the most recent version of Intel App Preview installed on your devices. Go to the store on each device to confirm you have the most recent copy of App Preview installed.

  • Try logging into Intel App Preview on each device with this short and simple password. Check the "show password" box so you can see your password as you type it.

If the above works, it confirms that you can log into your Intel XDK account from App Preview (because App Preview and the Intel XDK go to the same place to authenticate your login). When the above works, you can go back to the Intel XDK and reset your password to something else, if you do not like the short and simple password you used for the test.

If you are having trouble logging into any pages on the Intel web site (including the Intel XDK forum), please see the Intel Sign In FAQ for suggestions and contact info. That login system is the backend for the Intel XDK login screen.

How do I completely uninstall the Intel XDK from my system?

Take the following steps to completely uninstall the XDK from your Windows system:

  • From the Windows Control Panel, remove the Intel XDK, using the Windows uninstall tool.

  • Then:
    > cd %LocalAppData%\Intel\XDK
    > del *.* /s/q

  • Then:
    > cd %LocalAppData%\XDK
    > copy global-settings.xdk %UserProfile%
    > del *.* /s/q
    > copy %UserProfile%\global-settings.xdk .

  • Then:
    -- Go to xdk.intel.com and select the download link.
    -- Download and install the new XDK.

To do the same on a Linux or Mac system:

  • On a Linux machine, run the uninstall script, typically /opt/intel/XDK/uninstall.sh.
     
  • Remove the directory into which the Intel XDK was installed.
    -- Typically /opt/intel or your home (~) directory on a Linux machine.
    -- Typically in the /Applications/Intel XDK.app directory on a Mac.
     
  • Then:
    $ find ~ -name global-settings.xdk
    $ cd <result-from-above> (for example ~/Library/Application Support/XDK/ on a Mac)
    $ cp global-settings.xdk ~
    $ rm -Rf *
    $ mv ~/global-settings.xdk .

     
  • Then:
    -- Go to xdk.intel.com and select the download link.
    -- Download and install the new XDK.

Is there a tool that can help me highlight syntax issues in Intel XDK?

Yes, you can use the various linting tools that can be added to the Brackets editor to review any syntax issues in your HTML, CSS and JS files. Go to the "File > Extension Manager..." menu item and add the following extensions: JSHint, CSSLint, HTMLHint, XLint for Intel XDK. Then, review your source files by monitoring the small yellow triangle at the bottom of the edit window (a green check mark indicates no issues).

How do I delete built apps and test apps from the Intel XDK build servers?

You can manage them by logging into: https://appcenter.html5tools-software.intel.com/csd/controlpanel.aspx. This functionality will eventually be available within the Intel XDK, after which access to App Center will be removed.

I need help with the App Security API plugin; where do I find it?

Visit the primary documentation book for the App Security API and see this forum post for some additional details.

When I install my app or use the Debug tab Avast antivirus flags a possible virus, why?

If you are receiving a "Suspicious file detected - APK:CloudRep [Susp]" message from Avast anti-virus installed on your Android device it is due to the fact that you are side-loading the app (or the Intel XDK Debug modules) onto your device (using a download link after building or by using the Debug tab to debug your app), or your app has been installed from an "untrusted" Android store. See the following official explanation from Avast:

Your application was flagged by our cloud reputation system. "Cloud rep" is a new feature of Avast Mobile Security, which flags apks when the following conditions are met:

  1. The file is not prevalent enough; meaning not enough users of Avast Mobile Security have installed your APK.
  2. The source is not an established market (Google Play is an example of an established market).

If you distribute your app using Google Play (or any other trusted market) your users should not see any warning from Avast.

Following are some of the Avast anti-virus notification screens you might see on your device. All of these are perfectly normal; they appear because you must enable the installation of "non-market" apps in order to use your device for debug, and the App IDs associated with your never-published app (or the custom debug modules that the Debug tab in the Intel XDK builds and installs on your device) will not be found in an "established" (aka "trusted") market, such as Google Play.

If you choose to ignore the "Suspicious app activity!" threat you will not receive a threat for that debug module any longer. It will show up in the Avast 'ignored issues' list. Updates to an existing, ignored, custom debug module should continue to be ignored by Avast. However, new custom debug modules (due to a new project App ID or a new version of Crosswalk selected in your project's Build Settings) will result in a new warning from the Avast anti-virus tool.


How do I add a Brackets extension to the editor that is part of the Intel XDK?

The number of Brackets extensions provided in the built-in edition of the Brackets editor is limited, to ensure stability of the Intel XDK product. Not all extensions are compatible with the edition of Brackets that is embedded within the Intel XDK. Adding incompatible extensions can cause the Intel XDK to quit working.

Despite this warning, there are useful extensions that have not been included in the editor and which can be added to the Intel XDK. Adding them is temporary, each time you update the Intel XDK (or if you reinstall the Intel XDK) you will have to "re-add" your Brackets extension. To add a Brackets extension, use the following procedure:

  • exit the Intel XDK
  • download a ZIP file of the extension you wish to add
  • on Windows, unzip the extension here:
    %LocalAppData%\Intel\XDK\xdk\brackets\b\extensions\dev
  • on Mac OS X, unzip the extension here:
    /Applications/Intel\ XDK.app/Contents/Resources/app.nw/brackets/b/extensions/dev
  • start the Intel XDK

Note that the locations given above are subject to change with new releases of the Intel XDK.

Why does my app or game require so many permissions on Android when built with the Intel XDK?

When you build your HTML5 app using the Intel XDK for Android or Android-Crosswalk you are creating a Cordova app. It may seem like you're not building a Cordova app, but you are. In order to package your app so it can be distributed via an Android store and installed on an Android device, it needs to be built as a hybrid app. The Intel XDK uses Cordova to create that hybrid app.

A pure Cordova app requires the NETWORK permission; it is needed to "jump" between your HTML5 environment and the native Android environment. Additional permissions will be added by any Cordova plugins you include with your application; which permissions are included is a function of what each plugin does and requires.

Crosswalk for Android builds also require the NETWORK permission, because the Crosswalk image built by the Intel XDK includes support for Cordova. In addition, current versions of Crosswalk (12 and 14 at the time this FAQ was written) also require NETWORK STATE and WIFI STATE. There is an extra permission in some versions of Crosswalk (WRITE EXTERNAL STORAGE) that is only needed by the shared model library of Crosswalk; we have asked the Crosswalk project to remove this permission in a future Crosswalk version.

If you are seeing more than the following four permissions in your XDK-built Crosswalk app:

  • android.permission.INTERNET
  • android.permission.ACCESS_NETWORK_STATE
  • android.permission.ACCESS_WIFI_STATE
  • android.permission.WRITE_EXTERNAL_STORAGE

then you are seeing permissions that have been added by some plugins. Each plugin is different, so there is no hard rule of thumb. The two "default" core Cordova plugins that are added by the Intel XDK blank templates (device and splash screen) do not require any Android permissions.

BTW: the permission list above comes from a Crosswalk 14 build. Crosswalk 12 builds do not include the last permission; it was added when the Crosswalk project introduced the shared model library option, which started with Crosswalk 13 (the Intel XDK does not support Crosswalk 13 builds).

How do I make a copy of an existing Intel XDK project?

If you just need to make a backup copy of an existing project, and do not plan to open that backup copy as a project in the Intel XDK, do the following:

  • Exit the Intel XDK.
  • Copy the entire project directory:
    • on Windows, use File Explorer to "right-click" and "copy" your project directory, then "right-click" and "paste"
    • on Mac use Finder to "right-click" and then "duplicate" your project directory
    • on Linux, open a terminal window, "cd" to the folder that contains your project, and type "cp -a old-project/ new-project/" at the terminal prompt (where "old-project/" is the folder name of your existing project that you want to copy and "new-project/" is the name of the new folder that will contain a copy of your existing project)

If you want to use an existing project as the starting point of a new project in the Intel XDK, follow the process described below. It ensures that the build system does not confuse the ID in your old project with that stored in your new project. If you do not follow this procedure you will have multiple projects using the same project ID (a special GUID that is stored inside the Intel XDK <project-name>.xdk file in the root directory of your project). Each project in your account must have a unique project ID.

  • Exit the Intel XDK.
  • Make a copy of your existing project using the process described above.
  • Inside the new project that you made (that is, your new copy of your old project), make copies of the <project-name>.xdk file and <project-name>.xdke files and rename those copies to something like project-new.xdk and project-new.xdke (anything you like, just something different than the original project name, preferably the same name as the new project folder in which you are making this new project).
  • Using a TEXT EDITOR (only) (such as Notepad or Sublime or Brackets or some other TEXT editor), open your new "project-new.xdk" file (whatever you named it) and find the projectGuid line, it will look something like this:
    "projectGuid": "a863c382-ca05-4aa4-8601-375f9f209b67",
  • Change the "GUID" to all zeroes, like this: "00000000-0000-0000-0000-000000000000"
  • Save the modified "project-new.xdk" file.
  • Open the Intel XDK.
  • Go to the Projects tab.
  • Select "Open an Intel XDK Project" (the green button at the bottom left of the Projects tab).
  • To open this new project, locate the new "project-new.xdk" file inside the new project folder you copied above.
  • Don't forget to change the App ID in your new project. This is necessary to avoid conflicts with the project you copied from, in the store and when side-loading onto a device.
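For reference, the GUID edit above can also be scripted; a minimal sketch that fabricates an example project file and then zeroes the GUID with sed (the file name and GUID below are placeholders, and on a Mac you would use `sed -i ''` instead of `-i.bak`):

```shell
# Hypothetical: zero out the projectGuid field in a copied project file.
# The file name and GUID are examples only; adjust to match your project.
printf '%s\n' '"projectGuid": "a863c382-ca05-4aa4-8601-375f9f209b67",' > project-new.xdk
sed -i.bak 's/"projectGuid": *"[0-9a-fA-F-]*"/"projectGuid": "00000000-0000-0000-0000-000000000000"/' project-new.xdk
cat project-new.xdk
```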

My project does not include a www folder. How do I fix it so it includes a www or source directory?

The Intel XDK HTML5 and Cordova project file structures are meant to mimic a standard Cordova project. In a Cordova (or PhoneGap) project there is a subdirectory (or folder) named www that contains all of the HTML5 source code and asset files that make up your application. For best results, it is advised that you follow this convention, of putting your source inside a "source directory" inside of your project folder.

This most commonly happens as the result of exporting a project from an external tool, such as Construct2, or as the result of importing an existing HTML5 web app that you are converting into a hybrid mobile application (e.g., an Intel XDK Cordova app). If you would like to convert an existing Intel XDK project into this format, follow the steps below:

  • Exit the Intel XDK.
  • Copy the entire project directory:
    • on Windows, use File Explorer to "right-click" and "copy" your project directory, then "right-click" and "paste"
    • on Mac use Finder to "right-click" and then "duplicate" your project directory
    • on Linux, open a terminal window, "cd" to the folder that contains your project, and type "cp -a old-project/ new-project/" at the terminal prompt (where "old-project/" is the folder name of your existing project that you want to copy and "new-project/" is the name of the new folder that will contain a copy of your existing project)
  • Create a "www" directory inside the new duplicate project you just created above.
  • Move your index.html and other source and asset files to the "www" directory you just created -- this is now your "source" directory, located inside your "project" directory (do not move the <project-name>.xdk and xdke files and any intelxdk.config.*.xml files, those must stay in the root of the project directory)
  • Inside the new project that you made above (by making a copy of the old project), rename the <project-name>.xdk file and <project-name>.xdke files to something like project-copy.xdk and project-copy.xdke (anything you like, just something different than the original project, preferably the same name as the new project folder in which you are making this new project).
  • Using a TEXT EDITOR (only) (such as Notepad or Sublime or Brackets or some other TEXT editor), open the new "project-copy.xdk" file (whatever you named it) and find the line named projectGuid, it will look something like this:
    "projectGuid": "a863c382-ca05-4aa4-8601-375f9f209b67",
  • Change the "GUID" to all zeroes, like this: "00000000-0000-0000-0000-000000000000"
  • A few lines down find: "sourceDirectory": "",
  • Change it to this: "sourceDirectory": "www",
  • Save the modified "project-copy.xdk" file.
  • Open the Intel XDK.
  • Go to the Projects tab.
  • Select "Open an Intel XDK Project" (the green button at the bottom left of the Projects tab).
  • To open this new project, locate the new "project-copy.xdk" file inside the new project folder you copied above.
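As with the GUID step, the sourceDirectory edit can be scripted; a minimal sed sketch over a fabricated example file (the file name is a placeholder, and on a Mac you would use `sed -i ''` instead of `-i.bak`):

```shell
# Hypothetical: set an empty sourceDirectory to "www" in the copied
# project file. "project-copy.xdk" is an example name only.
printf '%s\n' '"sourceDirectory": "",' > project-copy.xdk
sed -i.bak 's/"sourceDirectory": *""/"sourceDirectory": "www"/' project-copy.xdk
cat project-copy.xdk
```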

Can I install more than one copy of the Intel XDK onto my development system?

Yes, you can install more than one version onto your development system. However, you cannot run multiple instances of the Intel XDK at the same time. Be aware that new releases sometimes change the project file format, so it is a good idea, in these cases, to make a copy of your project if you need to experiment with a different version of the Intel XDK. See the instructions in a FAQ entry above regarding how to make a copy of your Intel XDK project.

Follow the instructions in this forum post to install more than one copy of the Intel XDK onto your development system.

On Apple OS X* and Linux* systems, does the Intel XDK need the OpenSSL* library installed?

Yes. Several features of the Intel XDK require the OpenSSL library, which typically comes pre-installed on Linux and OS X systems. If the Intel XDK reports that it could not find libssl, go to https://www.openssl.org to download and install it.

I have a web application that I would like to distribute in app stores without major modifications. Is this possible using the Intel XDK?

Yes, if you have a true web app or “client app” that only uses HTML, CSS and JavaScript, it is usually not too difficult to convert it to a Cordova hybrid application (this is what the Intel XDK builds when you create an HTML5 app). If you rely heavily on PHP or other server scripting languages embedded in your pages you will have more work to do. Because your Cordova app is not associated with a server, you cannot rely on server-based programming techniques; instead, you must rewrite any such code to use RESTful APIs that your app interacts with using, for example, AJAX calls.

What is the best training approach to using the Intel XDK for a newbie?

First, become well-versed in the art of client web apps, apps that rely only on HTML, CSS and JavaScript and utilize RESTful APIs to talk to network services. With that you will have mastered 80% of the problem. After that, it is simply a matter of understanding how Cordova plugins are able to extend the JavaScript API for access to features of the platform. For HTML5 training there are many sites providing tutorials. It may also help to read Five Useful Tips on Getting Started Building Cordova Mobile Apps with the Intel XDK, which will help you understand some of the differences between developing for a traditional server-based environment and developing for the Intel XDK hybrid Cordova app environment.

What is the best platform to start building an app with the Intel XDK? And what are the important differences between the Android, iOS and other mobile platforms?

There is no one most important difference between the Android, iOS and other platforms. It is important to understand that the HTML5 runtime engine that executes your app on each platform will vary as a function of the platform. Just as there are differences between Chrome and Firefox and Safari and Internet Explorer, there are differences between iOS 9 and iOS 8 and Android 4 and Android 5, etc. Android has the most significant differences between vendors and versions of Android. This is one of the reasons the Intel XDK offers the Crosswalk for Android build option, to normalize and update the Android issues.

In general, if you can get your app working well on Android (or Crosswalk for Android) first you will generally have fewer issues to deal with when you start to work on the iOS and Windows platforms. In addition, the Android platform has the most flexible and useful debug options available, so it is the easiest platform to use for debugging and testing your app.

Is my password encrypted and why is it limited to fifteen characters?

Yes, your password is stored encrypted and is managed by https://signin.intel.com. Your Intel XDK userid and password can also be used to log into the Intel XDK forum as well as the Intel Developer Zone. The Intel XDK does not store or manage your userid and password.

The rules regarding allowed userids and passwords are answered on this Sign In FAQ page, where you can also find help on recovering and changing your password.

Why does the Intel XDK take a long time to start on Linux or Mac?

...and why am I getting this error message? "Attempt to contact authentication server is taking a long time. You can wait, or check your network connection and try again."

At startup, the Intel XDK attempts to automatically determine the proxy settings for your machine. Unfortunately, on some system configurations it is unable to reliably detect your system proxy settings, which results in the error message above.

On some systems you can get around this problem by setting some proxy environment variables and then starting the Intel XDK from a command-line that includes those configured environment variables. To do so, set environment variables similar to the following:

$ export no_proxy="localhost,127.0.0.1/8,::1"
$ export NO_PROXY="localhost,127.0.0.1/8,::1"
$ export http_proxy=http://proxy.mydomain.com:123/
$ export HTTP_PROXY=http://proxy.mydomain.com:123/
$ export https_proxy=http://proxy.mydomain.com:123/
$ export HTTPS_PROXY=http://proxy.mydomain.com:123/

IMPORTANT! The name of your proxy server and the port (or ports) that your proxy server requires will be different than those shown in the example above. Please consult with your IT department to find out what values are appropriate for your site. Intel has no way of knowing what configuration is appropriate for your network.

If you use the Intel XDK in multiple locations (at work and at home), you may have to change the proxy settings before starting the Intel XDK after switching to a new network location. For example, many work networks use a proxy server, but most home networks do not require such a configuration. In that case, you need to be sure to "unset" the proxy environment variables before starting the Intel XDK on a non-proxy network.
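For example, before starting the Intel XDK on a home (non-proxy) network, you might clear the variables like this (a sketch; the `no_proxy` values are the same examples used above):

```shell
# Remove proxy settings inherited from a work environment, then make
# sure local addresses bypass any proxy that remains configured.
unset http_proxy HTTP_PROXY https_proxy HTTPS_PROXY
export no_proxy="localhost,127.0.0.1/8,::1"
export NO_PROXY="localhost,127.0.0.1/8,::1"
```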

After you have successfully configured your proxy environment variables, you can start the Intel XDK manually, from the command-line.

On a Mac, where the Intel XDK is installed in the default location, type the following (from a terminal window that has the above environment variables set):

$ open /Applications/Intel\ XDK.app/

On a Linux machine, assuming the Intel XDK has been installed in the ~/intel/XDK directory, type the following (from a terminal window that has the above environment variables set):

$ ~/intel/XDK/xdk.sh &

In the Linux case, you will need to adjust the directory name that points to the xdk.sh file in order to start. The example above assumes a local install into the ~/intel/XDK directory. Since Linux installations have more options regarding the installation directory, you will need to adjust the above to suit your particular system and install directory.

How do I generate a P12 file on a Windows machine?

See these articles:
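One common approach on Windows is to use OpenSSL (for example, the copy bundled with Git for Windows). A hedged sketch using placeholder file names, subject, and password:

```shell
# Hypothetical example: generate a private key and self-signed
# certificate, then bundle them into a PKCS#12 (.p12) file.
# File names, the subject "/CN=Example", and the password "changeit"
# are placeholders only -- substitute your own values.
openssl genrsa -out key.pem 2048
openssl req -new -x509 -key key.pem -out cert.pem -days 365 -subj "/CN=Example"
openssl pkcs12 -export -inkey key.pem -in cert.pem -out mycert.p12 -passout pass:changeit
```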

How do I change the default dir for creating new projects in the Intel XDK?

You can change the default new project location manually by modifying a field in the global-settings.xdk file. Locate the global-settings.xdk file on your system (the precise location varies as a function of the OS) and find this JSON object inside that file:

"projects-tab": {
    "defaultPath": "/Users/paul/Documents/XDK",
    "LastSortType": "descending|Name",
    "lastSortType": "descending|Opened",
    "thirdPartyDisclaimerAcked": true
},

The example above came from a Mac. On a Mac the global-settings.xdk file is located in the "~/Library/Application Support/XDK" directory.

On a Windows machine the global-settings.xdk file is normally found in the "%LocalAppData%\XDK" directory. The part you are looking for will look something like this:

"projects-tab": {
    "thirdPartyDisclaimerAcked": false,
    "LastSortType": "descending|Name",
    "lastSortType": "descending|Opened",
    "defaultPath": "C:\\Users\\paul/Documents"
},

Obviously, it's the defaultPath part you want to change.

BE CAREFUL WHEN YOU EDIT THE GLOBAL-SETTINGS.XDK FILE!! You've been warned...

Make sure the result is proper JSON when you are done, or it may cause your XDK to cough and hack loudly. Make a backup copy of global-settings.xdk before you start, just in case.
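One quick way to confirm the edited file still parses as proper JSON is Python's built-in json.tool. A self-contained sketch (it fabricates an example file; point the command at your real global-settings.xdk instead):

```shell
# Validate that a settings file is still well-formed JSON after editing.
# "global-settings.xdk" below is a fabricated example in the current
# directory, not your real settings file.
printf '%s\n' '{"projects-tab": {"defaultPath": "/tmp/XDK"}}' > global-settings.xdk
python3 -m json.tool global-settings.xdk > /dev/null && echo "valid JSON"
```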

Where can I find a list of recent and upcoming webinars?

How can I change the email address associated with my Intel XDK login?

Login to the Intel Developer Zone with your Intel XDK account userid and password and then locate your "account dashboard." Click the "pencil icon" next to your name to open the "Personal Profile" section of your account, where you can edit your "Name & Contact Info," including the email address associated with your account, under the "Private" section of your profile.

What network addresses must I enable in my firewall to ensure the Intel XDK will work on my restricted network?

Normally, access to the external servers that the Intel XDK uses is handled automatically by your proxy server. However, if you are working in an environment that has restricted Internet access and you need to provide your IT department with a list of URLs that you need access to in order to use the Intel XDK, then please provide them with the following list of domain names:

  • appcenter.html5tools-software.intel.com (for communication with the build servers)
  • s3.amazonaws.com (for downloading sample apps and built apps)
  • download.xdk.intel.com (for getting XDK updates)
  • debug-software.intel.com (for using the Test tab weinre debug feature)
  • xdk-feed-proxy.html5tools-software.intel.com (for receiving the tweets in the upper right corner of the XDK)
  • signin.intel.com (for logging into the XDK)
  • sfederation.intel.com (for logging into the XDK)

Normally this should be handled by your network proxy (if you're on a corporate network) or should not be an issue if you are working on a typical home network.

I cannot create a login for the Intel XDK, how do I create a userid and password to use the Intel XDK?

If you have downloaded and installed the Intel XDK but are having trouble creating a login, you can create the login outside the Intel XDK. To do this, go to the Intel Developer Zone and push the "Join Today" button. After you have created your Intel Developer Zone login you can return to the Intel XDK and use that userid and password to login to the Intel XDK. This same userid and password can also be used to login to the Intel XDK forum.

Installing the Intel XDK on Windows fails with a "Package signature verification failed." message.

If you receive a "Package signature verification failed" message (see image below) when installing the Intel XDK on your system, it is likely due to one of the following two reasons:

  • Your system does not have a properly installed "root certificate" file, which is needed to confirm that the install package is good.
  • The install package is corrupt and failed the verification step.

The first case can happen if you are attempting to install the Intel XDK on an unsupported version of Windows. The Intel XDK is only supported on Microsoft Windows 7 and higher. If you attempt to install on Windows Vista (or earlier) you may see this verification error. The workaround is to install the Intel XDK on a Windows 7 or greater machine.

The second case is likely due to a corruption of the install package during download or due to tampering. The workaround is to re-download the install package and attempt another install.

If you are installing on a Windows 7 (or greater) machine and you see this message it is likely due to a missing or bad root certificate on your system. To fix this you may need to start the "Certificate Propagation" service. Open the Windows "services.msc" panel and then start the "Certificate Propagation" service. Additional links related to this problem can be found here > https://technet.microsoft.com/en-us/library/cc754841.aspx

See this forum thread for additional help regarding this issue > https://software.intel.com/en-us/forums/intel-xdk/topic/603992

Troubles installing the Intel XDK on a Linux or Ubuntu system, which option should I choose?

Choose the local user option, not root or sudo, when installing the Intel XDK on your Linux or Ubuntu system. This is the most reliable and trouble-free option and is the default installation option. It ensures that the Intel XDK has all the proper permissions necessary to execute properly on your Linux system. The Intel XDK will be installed in a subdirectory of your home (~) directory.

Inactive account / login issue / problem updating an APK in the store: how do I request an account transfer?

As of June 26, 2015 we migrated all of Intel XDK accounts to the more secure intel.com login system (the same login system you use to access this forum).

We have migrated nearly all active users to the new login system. Unfortunately, there are a few active user accounts that we could not automatically migrate to intel.com, primarily because the intel.com login system does not allow the use of some characters in userids that were allowed in the old login system.

If you have not used the Intel XDK for a long time prior to June 2015, your account may not have been automatically migrated. If you own an "inactive" account it will have to be manually migrated -- please try logging into the Intel XDK with your old userid and password, to determine if it no longer works. If you find that you cannot login to your existing Intel XDK account, and still need access to your old account, please send a message to html5tools@intel.com and include your userid and the email address associated with that userid, so we can guide you through the steps required to reactivate your old account.

Alternatively, you can create a new Intel XDK account. If you have submitted an app to the Android store from your old account you will need access to that old account to retrieve the Android signing certificates in order to upgrade that app on the Android store; in that case, send an email to html5tools@intel.com with your old account username and email and new account information.

Connection Problems? -- Intel XDK SSL certificates update

On January 26, 2016 we updated the SSL certificates on our back-end systems to SHA2 certificates. The existing certificates were due to expire in February of 2016. We have also disabled support for obsolete protocols.

If you are experiencing persistent connection issues (since Jan 26, 2016), please post a problem report on the forum and include in your problem report:

  • the operation that failed
  • the version of your XDK
  • the version of your operating system
  • your geographic region
  • and a screen capture

How do I resolve build failure: "libpng error: Not a PNG file"?  

If you are experiencing build failures with CLI 5 Android builds, and the detailed error log includes a message similar to the following:

Execution failed for task ':mergeArmv7ReleaseResources'.
> Error: Failed to run command: /Developer/android-sdk-linux/build-tools/22.0.1/aapt s -i .../platforms/android/res/drawable-land-hdpi/screen.png -o .../platforms/android/build/intermediates/res/armv7/release/drawable-land-hdpi-v4/screen.png

Error Code: 42

Output: libpng error: Not a PNG file

You need to change the format of your icon and/or splash screen images to PNG format.

The error message refers to a file named "screen.png" -- which is what each of your splash screens were renamed to before they were moved into the build project resource directories. Unfortunately, JPG images were supplied for use as splash screen images, not PNG images. So the files were renamed and found by the build system to be invalid.

Convert your splash screen images to PNG format. Renaming JPG images to PNG will not work! You must convert your JPG images into PNG format images using an appropriate image editing tool. The Intel XDK does not provide any such conversion tool.

Beginning with Cordova CLI 5, all icons and splash screen images must be supplied in PNG format. This applies to all supported platforms. This is an undocumented "new feature" of the Cordova CLI 5 build system that was implemented by the Apache Cordova project.

Why do I get a "Parse Error" when I try to install my built APK on my Android device?

Because you have built an "unsigned" Android APK. You must click the "signed" box in the Android Build Settings section of the Projects tab if you want to install an APK on your device. The only reason you would choose to create an "unsigned" APK is if you need to sign it manually. This is very rare and not the normal situation.

My converted legacy keystore does not work. Google Play is rejecting my updated app.

The keystore you converted when you updated to 3088 (now 3240 or later) is the same keystore you were using in 2893. When you upgraded to 3088 (or later) and "converted" your legacy keystore, you re-signed and renamed your legacy keystore and it was transferred into a database to be used with the Intel XDK certificate management tool. It is still the same keystore, but with an alias name and password assigned by you and accessible directly by you through the Intel XDK.

If you kept the converted legacy keystore in your account following the conversion you can download that keystore from the Intel XDK for safe keeping (do not delete it from your account or from your system). Make sure you keep track of the new password(s) you assigned to the converted keystore.

There are two problems we have experienced with converted legacy keystores at the time of the 3088 release (April, 2016):

  • Foreign (non-ASCII) characters used in the new alias name and passwords were being corrupted.
  • Final signing of your APK by the build system was being done with RSA256 rather than SHA1.

Both of the above items have been resolved and should no longer be an issue.

If you are currently unable to complete a build with your converted legacy keystore (i.e., builds fail when you use the converted legacy keystore but they succeed when you use a new keystore) the first bullet above is likely the reason your converted keystore is not working. In that case we can reset your converted keystore and give you the option to convert it again. You do this by requesting that your legacy keystore be "reset" by filling out this form. For 100% surety during that second conversion, use only 7-bit ASCII characters in the alias name you assign and for the password(s) you assign.

IMPORTANT: using the legacy certificate to build your Android app is ONLY necessary if you have already published an app to an Android store and need to update that app. If you have never published an app to an Android store using the legacy certificate you do not need to concern yourself with resetting and reconverting your legacy keystore. It is easier, in that case, to create a new Android keystore and use that new keystore.

If you ARE able to successfully build your app with the converted legacy keystore, but your updated app (in the Google store) does not install on some older Android 4.x devices (typically a subset of Android 4.0-4.2 devices), the second bullet cited above is likely the reason for the problem. The solution, in that case, is to rebuild your app and resubmit it to the store (that problem was a build-system problem that has been resolved).

How can I have others beta test my app using Intel App Preview?

Apps that you sync to your Intel XDK account, using the Test tab's green "Push Files" button, can only be accessed by logging into Intel App Preview with the same Intel XDK account credentials that you used to push the files to the cloud. In other words, you can only download and run your app for testing with Intel App Preview if you log into the same account that you used to upload that test app. This restriction applies to downloading your app into Intel App Preview via the "Server Apps" tab, at the bottom of the Intel App Preview screen, or by scanning the QR code displayed on the Intel XDK Test tab using the camera icon in the upper right corner of Intel App Preview.

If you want to allow others to test your app, using Intel App Preview, it means you must use one of two options:

  • give them your Intel XDK userid and password
  • create an Intel XDK "test account" and provide your testers with that userid and password

For security sake, we highly recommend you use the second option (create an Intel XDK "test account"). 

A "test account" is simply a second Intel XDK account that you do not plan to use for development or builds. Do not use the same email address for your "test account" as you are using for your main development account. You should use a "throw away" email address for that "test account" (an email address that you do not care about).

Assuming you have created an Intel XDK "test account" and have instructed your testers to download and install Intel App Preview; have provided them with your "test account" userid and password; and you are ready to have them test:

  • sign out of your Intel XDK "development account" (using the little "man" icon in the upper right)
  • sign into your "test account" (again, using the little "man" icon in the Intel XDK toolbar)
  • make sure you have selected the project that you want users to test, on the Projects tab
  • go to the Test tab
  • make sure "MOBILE" is selected (upper left of the Test tab)
  • push the green "PUSH FILES" button on the Test tab
  • log out of your "test account"
  • log into your development account

Then, tell your beta testers to log into Intel App Preview with your "test account" credentials and instruct them to choose the "Server Apps" tab at the bottom of the Intel App Preview screen. From there they should see the name of the app you synced using the Test tab and can simply start it by touching the app name (followed by the big blue and white "Launch This App" button). Starting the app this way is actually easier than sending them a copy of the QR code. The QR code is very dense and is hard to read with some devices, depending on the quality of the camera in their device.

Note that when running your test app inside of Intel App Preview your testers cannot test any features associated with third-party plugins, only core Cordova plugins. Thus, you need to ensure that those parts of your app that depend on non-core Cordova plugins have been disabled or have exception handlers to prevent your app from either crashing or freezing.

I'm having trouble making Google Maps work with my Intel XDK app. What can I do?

There are many reasons that can cause your attempt to use Google Maps to fail. Mostly it is due to the fact that you need to download the Google Maps API (JavaScript library) at runtime to make things work. However, there is no guarantee that you will have a good network connection, so if you do it the way you are used to doing it, in a browser...

<script src="https://maps.googleapis.com/maps/api/js?key=API_KEY&sensor=true"></script>

...you may get yourself into trouble, in an Intel XDK Cordova app. See Loading Google Maps in Cordova the Right Way for an excellent tutorial on why this is a problem and how to deal with it. Also, it may help to read Five Useful Tips on Getting Started Building Cordova Mobile Apps with the Intel XDK, especially item #3, to get a better understanding of why you shouldn't use the "browser technique" you're familiar with.

An alternative is to use a mapping tool that allows you to include the JavaScript directly in your app, rather than downloading it over the network each time your app starts. Several Intel XDK developers have reported very good luck with the open-source JavaScript library named LeafletJS, which uses OpenStreetMap as its map database source.

You can also search the Cordova Plugin Database for Cordova plugins that implement mapping features, in some cases using native SDKs and libraries.

How do I fix "Cannot find the Intel XDK. Make sure your device and intel XDK are on the same wireless network." error messages?

You can either disable your firewall or allow access through the firewall for the Intel XDK. To allow access through the Windows firewall, go to the Windows Control Panel and search for the Firewall (Control Panel > System and Security > Windows Firewall > Allowed Apps) and enable Node Webkit (nw or nw.exe) through the firewall.

See the image below (this image is from a Windows 8.1 system).

Google Services needs my SHA1 fingerprint. Where do I get my app's SHA fingerprint?

Your app's SHA fingerprint is part of your build signing certificate. Specifically, it is part of the signing certificate that you used to build your app. The Intel XDK provides a way to download your build certificates directly from within the Intel XDK application (see the Intel XDK documentation for details on how to manage your build certificates). Once you have downloaded your build certificate you can use these instructions provided by Google, to extract the fingerprint, or simply search the Internet for "extract fingerprint from android build certificate" to find many articles detailing this process.

Why am I unable to test or build or connect to the old build server with Intel XDK version 2893?

This is an Important Note Regarding the use of Intel XDK Versions 2893 and Older!!

As of June 13, 2016, versions of the Intel XDK released prior to March 2016 (2893 and older) can no longer use the Build tab, the Test tab or Intel App Preview; and can no longer create custom debug modules for use with the Debug and Profile tabs. This change was necessary to improve the security and performance of our Intel XDK cloud-based build system. If you are using version 2893 or older, of the Intel XDK, you must upgrade to version 3088 or greater to continue to develop, debug and build Intel XDK Cordova apps.

The error message you see below, "NOTICE: Internet Connection and Login Required," when trying to use the Build tab is due to the fact that the cloud-based component used by those older versions of the Intel XDK has been retired and is no longer present. The error message appears to be misleading, but it is the easiest way to identify this condition.

How do I run the Intel XDK on Fedora Linux?

See the instructions below, copied from this forum post:

$ sudo find xdk/install/dir -name libudev.so.0
$ cd dir/found/above
$ sudo rm libudev.so.0
$ sudo ln -s /lib64/libudev.so.1 libudev.so.0

Note that "xdk/install/dir" is the name of the directory where you installed the Intel XDK. This might be "/opt/intel/xdk" or "~/intel/xdk" or something similar. Since the Linux install is flexible regarding the precise installation location, you may have to search to find it on your system.

Once you find that libudev.so file in the Intel XDK install directory you must "cd" to that directory to finish the operations as written above.

Additional instructions have been provided in the related forum thread; please see that thread for the latest information regarding hints on how to make the Intel XDK run on a Fedora Linux system.

The Intel XDK generates a path error for my launch icons and splash screen files.

If you have an older project (created prior to August of 2016 using a version of the Intel XDK older than 3491) you may be seeing a build error indicating that some icon and/or splash screen image files cannot be found. This is likely due to the fact that some of your icon and/or splash screen image files are located within your source folder (typically named "www") rather than in the new package-assets folder. For example, inspecting one of the auto-generated intelxdk.config.*.xml files you might find something like the following:

<icon platform="windows" src="images/launchIcon_24.png" width="24" height="24"/>
<icon platform="windows" src="images/launchIcon_434x210.png" width="434" height="210"/>
<icon platform="windows" src="images/launchIcon_744x360.png" width="744" height="360"/>
<icon platform="windows" src="package-assets/ic_launch_50.png" width="50" height="50"/>
<icon platform="windows" src="package-assets/ic_launch_150.png" width="150" height="150"/>
<icon platform="windows" src="package-assets/ic_launch_44.png" width="44" height="44"/>

where the first three images are not being found by the build system because they are located in the "www" folder and the last three are being found, because they are located in the "package-assets" folder.

This problem usually comes about because the UI does not include the appropriate "slots" to hold those images. This results in some "dead" icon or splash screen images inside the <project-name>.xdk file which need to be removed. To fix this, make a backup copy of your <project-name>.xdk file and then, using a CODE or TEXT editor (e.g., Notepad++ or Brackets or Sublime Text or vi, etc.), edit your <project-name>.xdk file in the root of your project folder.

Inside of your <project-name>.xdk file you will find entries that look like this:

"icons_": [
  {"relPath": "images/launchIcon_24.png","width": 24,"height": 24
  },
  {"relPath": "images/launchIcon_434x210.png","width": 434,"height": 210
  },
  {"relPath": "images/launchIcon_744x360.png","width": 744,"height": 360
  },

Find all the entries that are pointing to the problem files and remove those problem entries from your <project-name>.xdk file. Obviously, you need to do this when the XDK is closed and only after you have made a backup copy of your <project-name>.xdk file, just in case you end up with a missing comma. The <project-name>.xdk file is a JSON file and needs to be in proper JSON format after you make changes or it will not be read properly by the XDK when you open it.

Then move your problem icons and splash screen images to the package-assets folder and reference them from there. Use this technique (below) to add additional icons by using the intelxdk.config.additions.xml file.

<!-- alternate way to add icons to Cordova builds, rather than using XDK GUI -->
<!-- especially for adding icon resolutions that are not covered by the XDK GUI -->
<!-- Android icons and splash screens -->
<platform name="android">
    <icon src="package-assets/android/icon-ldpi.png" density="ldpi" width="36" height="36" />
    <icon src="package-assets/android/icon-mdpi.png" density="mdpi" width="48" height="48" />
    <icon src="package-assets/android/icon-hdpi.png" density="hdpi" width="72" height="72" />
    <icon src="package-assets/android/icon-xhdpi.png" density="xhdpi" width="96" height="96" />
    <icon src="package-assets/android/icon-xxhdpi.png" density="xxhdpi" width="144" height="144" />
    <icon src="package-assets/android/icon-xxxhdpi.png" density="xxxhdpi" width="192" height="192" />
    <splash src="package-assets/android/splash-320x426.9.png" density="ldpi" orientation="portrait" />
    <splash src="package-assets/android/splash-320x470.9.png" density="mdpi" orientation="portrait" />
    <splash src="package-assets/android/splash-480x640.9.png" density="hdpi" orientation="portrait" />
    <splash src="package-assets/android/splash-720x960.9.png" density="xhdpi" orientation="portrait" />
</platform>


The New Issue of The Parallel Universe is Out: The Present and Future of OpenMP*


The OpenMP* application programming interface turns 20 this year―and we’re celebrating in the new issue of The Parallel Universe, Intel’s quarterly magazine that explores inroads and innovations in software development.

We asked Michael Klemm (the current CEO of the OpenMP Architecture Review Board, or ARB) and some of his colleagues to give an overview of the newest features in the specification―particularly, enhancements to task-based parallelism and offloading computations to specialized accelerators. Learn why OpenMP remains the gold standard for portable, vendor-neutral parallel programming directives.

This issue’s other hot topics include:

  • Reducing Packing Overhead in Matrix-Matrix Multiplication: Improving performance on multicore and many-core Intel® architectures, particularly for deep neural networks

  • Identify Scalability Problems in Parallel Applications: How to boost scalability for Intel® Xeon and Intel® Xeon Phi™ processors using the new Intel® VTune™ Amplifier memory analysis

  • Vectorization Opportunities for Improved Performance with Intel® AVX-512: Examples of how Intel® compilers can vectorize and speed up loops

  • Intel® Advisor Roofline Analysis: A new way to visualize performance optimization trade-offs

  • Intel-Powered Deep Learning Frameworks: Your path to deeper insights

Read it now >

What to Do When Auto-Vectorization Fails?


Introduction

The following article is a follow-up to, and a detailed analysis of, a problem reported on the Intel® Developer Zone (Intel® DZ) forum [1] dedicated to the Intel® C++ Compiler [2].

An Intel DZ user implemented a simple program as part of a code modernization workshop, and a problem with an inner for-loop was detected. Here is a piece of the code related to the problem:

	...
	for (std::size_t i = 0; i < nb_cluster; ++i) {
		float x = point[k].red - centroid[i].red;
		float y = point[k].green - centroid[i].green;
		float z = point[k].blue - centroid[i].blue;
		float distance = std::pow(x, 2) + std::pow(y, 2) + std::pow(z, 2);
		if (distance < best_distance) {
			best_distance = distance;
			best_centroid = i;
		}
	}
	...

Note: This is the inner for-loop from KmcTestAppV1.cpp that was not auto-vectorized.

The Intel DZ user suspected that the inner for-loop was not auto-vectorized because the variable 'i' was declared with the 'std::size_t' data type, that is, as an unsigned integer type.

The unmodified source code [6] is attached. See KmcTestAppV1.cpp for more details.

Note that this article is not a tutorial on vectorization or parallelization techniques. However, a brief overview of these techniques will be given in the next part of this article.

A brief overview of vectorization and parallelization techniques

Modern software is complex, and in order to achieve peak performance, especially in data-intensive processing, it must fully use the vectorization and parallelization capabilities of modern CPUs, which can have many cores, each with several Logical Processing Units (LPUs) and Vector Processing Units (VPUs).

VPUs allow different operations to be performed on multiple values of a data set simultaneously, and this technique, called vectorization, increases the performance of the processing when compared to the same processing implemented in a scalar, or sequential, way.

Parallelization is another technique, which allows different parts of a data set to be processed at the same time by different LPUs.

When vectorization and parallelization are combined, the performance of the processing can be boosted significantly.

Generic vectorization rules

You need to take into account the following generic rules related to vectorization of source codes:

  • A modern C/C++ compiler needs to be used with vectorization support.
  • Two types of vectorization techniques can be used: auto-vectorization (AV) and explicit vectorization (EV).
  • Only relatively simple inner for-loops can be vectorized.
  • Some inner for-loops cannot be vectorized with AV or EV techniques, because complex C or C++ constructions are used, for example, Standard Template Library classes or C++ operators.
  • It is recommended to review and analyze all cases when a modern C/C++ compiler cannot vectorize inner for-loops.

How an inner for-loop counter variable should be declared

The AV technique is considered the most effective for simple inner for-loops, because no code modifications are required, and AV of modern C/C++ compilers is enabled by default when optimization options 'O2' or 'O3' are used.

In more complex cases EV can be used to force vectorization using intrinsic functions, or vectorization #pragma directives, but it requires some modifications of inner for-loops.

A question can be asked: How should an inner for-loop counter variable be declared?

Two possible declarations can be considered:

Case A - Variable 'i' is declared as 'int'

	...
	for( int i = 0; i < n; i += 1 )
	{
		A[i] = A[i] + B[i];
	}
	...

and

Case B - Variable 'i' is declared as 'unsigned int'

	...
	for( unsigned int i = 0; i < n; i += 1 )
	{
		A[i] = A[i] + B[i];
	}
	...

In Case A the variable 'i' is declared as a signed data type 'int'.

In Case B the variable 'i' is declared as an unsigned data type 'unsigned int'.

Cases A and B could be combined in a simple test program 3 to evaluate vectorization capabilities of a C/C++ compiler:

////////////////////////////////////////////////////////////////////////////////////////////////////
// TestApp.cpp - To generate assembly listings an option '-S' needs to be used.
// Linux:
//		icpc -O3 -xAVX -qopt-report=1 TestApp.cpp -o TestApp.out
//		g++ -O3 -mavx -ftree-vectorizer-verbose=1 TestApp.cpp -o TestApp.out
// Windows:
//		icl   -O3 /QxAVX /Qvec-report=1 TestApp.cpp TestApp.exe
//		g++ -O3 -mavx -ftree-vectorizer-verbose=1 TestApp.cpp -o TestApp.exe

#include <stdio.h>
#include <stdlib.h>
//

////////////////////////////////////////////////////////////////////////////////////////////////////

	typedef float			RTfnumber;

	typedef int				RTiterator;			// Uncomment for Test A
	typedef int				RTinumber;
//	typedef unsigned int	RTiterator;			// Uncomment for Test B
//	typedef unsigned int	RTinumber;

////////////////////////////////////////////////////////////////////////////////////////////////////

	const RTinumber iDsSize = 1024;

////////////////////////////////////////////////////////////////////////////////////////////////////

int main( void )
{
	RTfnumber fDsA[ iDsSize ];
	RTfnumber fDsB[ iDsSize ];

	RTiterator i;

	for( i = 0; i < iDsSize; i += 1 )
		fDsA[i] = ( RTfnumber )( i );
	for( i = 0; i < iDsSize; i += 1 )
		fDsB[i] = ( RTfnumber )( i );

	for( i = 0; i < 16; i += 1 )
		printf( "%4.1f ", fDsA[i] );
	printf( "\n" );
	for( i = 0; i < 16; i += 1 )
		printf( "%4.1f ", fDsB[i] );
	printf( "\n" );

	for( i = 0; i < iDsSize; i += 1 )
		fDsA[i] = fDsA[i] + fDsB[i];			// Line 49

	for( i = 0; i < 16; i += 1 )
		printf( "%4.1f ", fDsA[i] );
	printf( "\n" );

	return ( int )1;
}

It turns out that these two for-loops (see Line 49 in the code sample above) are easily vectorizable [4] (instructions with a 'v' prefix are used, like vmovups, vaddps, and so on), and the Intel C++ Compiler generated identical vectorization reports regardless of how the variable 'i' is declared:

Vectorization report for cases A and B

...
	Begin optimization report for: main()
	Report from: Interprocedural optimizations [ipo]
	INLINE REPORT: (main())
	Report from: Loop nest, Vector & Auto-parallelization optimizations [loop, vec, par]
	LOOP BEGIN at TestApp.cpp(37,2)
		remark #25045: Fused Loops: ( 37 39 )
		remark #15301: FUSED LOOP WAS VECTORIZED
	LOOP END
	LOOP BEGIN at TestApp.cpp(39,2)
	LOOP END
	LOOP BEGIN at TestApp.cpp(42,2)
		remark #25460: No loop optimizations reported
	LOOP END
	LOOP BEGIN at TestApp.cpp(45,2)
		remark #25460: No loop optimizations reported
	LOOP END
	LOOP BEGIN at TestApp.cpp(49,2)
		remark #15300: LOOP WAS VECTORIZED
	LOOP END
	LOOP BEGIN at TestApp.cpp(52,2)
		remark #25460: No loop optimizations reported
	LOOP END
...

The vectorization reports [4] show that the for-loop at Line 49 [3] was vectorized:

	...
	LOOP BEGIN at TestApp.cpp(49,2)
		remark #15300: LOOP WAS VECTORIZED
	LOOP END
	...

However, the Intel C++ Compiler considers these two for-loops as different C language constructions and generates different vectorized binary codes.

Here are the two core pieces of the assembler listings, related to the for-loop at Line 49 [3], for both cases:

Case A - Assembler listing (option '-S' needs to be used when compiling TestApp.cpp)

...
..B1.12:								# Preds ..B1.12 ..B1.11
	vmovups		(%rsp,%rax,4), %ymm0					#50.13
	vmovups		32(%rsp,%rax,4), %ymm2					#50.13
	vmovups		64(%rsp,%rax,4), %ymm4					#50.13
	vmovups		96(%rsp,%rax,4), %ymm6					#50.13
	vaddps		4128(%rsp,%rax,4), %ymm2, %ymm3			#50.23
	vaddps		4096(%rsp,%rax,4), %ymm0, %ymm1			#50.23
	vaddps		4160(%rsp,%rax,4), %ymm4, %ymm5			#50.23
	vaddps		4192(%rsp,%rax,4), %ymm6, %ymm7			#50.23
	vmovups		%ymm1, (%rsp,%rax,4)					#50.3
	vmovups		%ymm3, 32(%rsp,%rax,4)					#50.3
	vmovups		%ymm5, 64(%rsp,%rax,4)					#50.3
	vmovups		%ymm7, 96(%rsp,%rax,4)					#50.3
	addq		$32, %rax#49.2
	cmpq		$1024, %rax								#49.2
	jb			..B1.12						# Prob 99%	#49.2
...

Note: See TestApp.icc.itype.s [5.1] for a complete assembler listing.

Case B - Assembler listing (option '-S' needs to be used when compiling TestApp.cpp)

...
..B1.12:								# Preds ..B1.12 ..B1.11
	lea			8(%rax), %edx							#50.13
	lea			16(%rax), %ecx							#50.13
	lea			24(%rax), %esi							#50.13
	vmovups		(%rsp,%rax,4), %ymm0					#50.13
	vaddps		4096(%rsp,%rax,4), %ymm0, %ymm1			#50.23
	vmovups		%ymm1, (%rsp,%rax,4)					#50.3
	addl		$32, %eax								#49.2
	vmovups		(%rsp,%rdx,4), %ymm2					#50.13
	cmpl		$1024, %eax								#49.2
	vaddps		4096(%rsp,%rdx,4), %ymm2, %ymm3			#50.23
	vmovups		%ymm3, (%rsp,%rdx,4)					#50.3
	vmovups		(%rsp,%rcx,4), %ymm4					#50.13
	vaddps		4096(%rsp,%rcx,4), %ymm4, %ymm5			#50.23
	vmovups		%ymm5, (%rsp,%rcx,4)					#50.3
	vmovups		(%rsp,%rsi,4), %ymm6					#50.13
	vaddps		4096(%rsp,%rsi,4), %ymm6, %ymm7			#50.23
	vmovups		%ymm7, (%rsp,%rsi,4)					#50.3
	jb			..B1.12						# Prob 99%	#49.2
...

Note: See TestApp.icc.utype.s [5.2] for a complete assembler listing.

It is finally clear that the problem where the inner for-loop is not auto-vectorized (see the beginning of the forum posting [1]) is not related to how the variable 'i' is declared, and that something else is affecting the vectorization engine of the Intel C++ Compiler.

In order to pinpoint a root cause of the vectorization problem a question needs to be asked: What compiler messages will be generated when AV or EV techniques cannot be applied?

A small list of some “loop was not vectorized” messages of the Intel C++ Compiler when AV or EV techniques can't be applied is as follows:

...loop was not vectorized: not inner loop.
...loop was not vectorized: existence of vector dependence.
...loop was not vectorized: statement cannot be vectorized.
...loop was not vectorized: unsupported reduction.
...loop was not vectorized: unsupported loop structure.
...loop was not vectorized: vectorization possible but seems inefficient.
...loop was not vectorized: statement cannot be vectorized.
...loop was not vectorized: nonstandard loop is not a vectorization candidate.
...loop was not vectorized: dereference too complex.
...loop was not vectorized: statement cannot be vectorized.
...loop was not vectorized: conditional assignment to a scalar.
...warning #13379: loop was not vectorized with "simd".
...loop skipped: multiversioned.

One message deserves special attention:

...loop was not vectorized: unsupported loop structure.

It is seen in KmcTestAppV1.cpp [6] that the inner for-loop has three parts:

Part 1 - Initialization of x, y, and z variables

...
float x = point[k].red - centroid[i].red;
float y = point[k].green - centroid[i].green;
float z = point[k].blue - centroid[i].blue;
...

Part 2 - Calculation of a distance between points x, y, and z

...
float distance = std::pow(x, 2) + std::pow(y, 2) + std::pow(z, 2);
...

Part 3 - Update of a 'best_distance' variable

...
if (distance < best_distance) {
best_distance = distance;
best_centroid = i;
}
...

Because all these parts are in the same inner for-loop, the Intel C++ Compiler cannot match its structure to a predefined vectorization template. Part 3, with its conditional if-statement, is the root cause of the vectorization problem.

A possible solution of the vectorization problem is to split the inner for-loop into three parts as follows:

...														// Calculate Distance
for( i = 0; i < nb_cluster; i += 1 )
{
	float x = point[k].red - centroid[i].red;
	float y = point[k].green - centroid[i].green;
	float z = point[k].blue - centroid[i].blue;			// Performance improvement: ( x * x ) is
	distance[i] = ( x * x ) + ( y * y ) + ( z * z );	// used instead of std::pow(x, 2), etc
}
														// Best Distance
for( i = 0; i < nb_cluster; i += 1 )
{
	best_distance = ( distance[i] < best_distance ) ? ( float )distance[i] : best_distance;
}
														// Best Centroid
for( i = 0; i < nb_cluster; i += 1 )
{
	cluster[k] = ( distance[i] < best_distance ) ? ( float )i : best_centroid;
}
...

The two most important modifications are related to the conditional if-statement in the for-loop. It was modified from the generic form:

...
if( A < B )
{
	D = val1
	C = val3
}
...

to a form that uses two conditional operators ( ? : ):

...
D = ( A < B ) ? ( val1 ) : ( val2 )
...
C = ( A < B ) ? ( val3 ) : ( val4 )
...

also known as ternary operators. Now a modern C/C++ compiler can match this C language construction to a predefined vectorization template.

Performance evaluation of unmodified and modified source code

A performance evaluation of both versions of the program for 1,000,000 points, 1,000 clusters, and 10 iterations was completed and the results are as follows:

...>KmcTestAppV1.exe
		Time: 111.50

Note: Original version [6].

...>KmcTestAppV2.exe
	Time:  20.48

Note: Optimized and vectorized version [7].

The optimized and vectorized version [7] is about 5.5x faster than the original version of the program (see [1] or [6]). Times are in seconds.

Conclusion

If a modern C/C++ compiler fails to vectorize a for-loop, it is important to evaluate the loop's complexity. In the case of the Intel C++ Compiler, the '-qopt-report=n' option needs to be used (with n greater than 3).

In most cases, a C/C++ compiler cannot vectorize the for-loop because it cannot match its structure to a predefined vectorization template. For example, in the case of the Intel C++ Compiler, the following vectorization messages would be reported:

...loop was not vectorized: unsupported reduction.

or

...loop was not vectorized: unsupported loop structure.

If this is the case, you need to modify the for-loop to simplify its structure, consider EV techniques using #pragma directives, like #pragma simd, or consider reimplementation of the required functionality using intrinsic functions.

About the author

Sergey Kostrov is a highly experienced C/C++ software engineer and Intel® Black Belt. He is an expert in design and implementation of highly portable C/C++ software for embedded and desktop platforms, scientific algorithms and high-performance computing of big data sets.

Downloads

WhatToDoWhenAVFails.zip

List of all files (sources, assembly listings and vectorization reports):

KmcTestAppV1.cpp
KmcTestAppV2.cpp
TestApp.cpp
TestApp.icc.itype.rpt
TestApp.icc.utype.rpt
TestApp.icc.itype.s
TestApp.icc.utype.s

See also

1. Vectorization failed because of unsigned integer?

https://software.intel.com/en-us/forums/intel-c-compiler/topic/698664

2. Intel C++ Compiler forum on Intel DZ:

https://software.intel.com/en-us/forums/intel-c-compiler

3. Test program to demonstrate vectorization of a simple for-loop:

TestApp.cpp

4. Intel C++ Compiler vectorization reports for TestApp.cpp program:

TestApp.icc.itype.rpt

TestApp.icc.utype.rpt

5.1. Complete assembler listing for Case A of TestApp.cpp program:

TestApp.icc.itype.s

5.2. Complete assembler listing for Case B of TestApp.cpp program:

TestApp.icc.utype.s

6. Unmodified source codes (KmcTestAppV1.cpp original)

7. Modified source codes (KmcTestAppV2.cpp optimized and vectorized)

How to import the latest QMSI in ISSM


If you want to evaluate the latest version of QMSI, which is not yet integrated into ISSM, here are the steps you can follow to evaluate it in ISSM. Otherwise, the command line is the other option. Before we start, please make sure your ISSM can build and run applications without any issues.

First, download the QMSI 1.3 package from the GitHub releases page (https://github.com/quark-mcu/qmsi/releases/tag/v1.3.1).

Go to File > Import > C/C++ > Existing Code as Makefile Project. Select the compiler you need for your target or core. Here we choose the GCC IA Compiler, name the project, and browse to the location of the sample in the QMSI 1.3 package. We use LED blinking ("blinky") as the example here.

Now, you can see the “blinky” project in the ISSM IDE.

By default, the image will be built as a release build. You need to set the "BUILD" variable. Right-click the project and go to C/C++ Build > Environment, then add a BUILD variable set to debug.

After that, click the hammer icon to build the application and check the parameters in the console log to make sure the image is built for your requirements, target platform, and so on.

Next, we need to set up the debug configuration manually. For this part, you can simply duplicate the debug configuration of an existing QMSI 1.1 sample.

Then modify the project name and the C/C++ Application location.

Now you can click the bug icon to build and flash the image as usual.


Intel Compiler Version 16 not compatible with recent libc.so.6


Reference Number : DPD200416926, DPD200417021

Version : Intel® C++ Compiler 2016 updates 3 and 4 (Versions 16.0.3 and 16.0.4)

Operating System : Linux*

Problem Description :  An unexpected segmentation fault may be seen at run-time start-up when an application built with these Intel compiler versions is run on a system containing a recent libc.so.6, such as the one in glibc-2.17-157.el7.x86_64 or contained in Red Hat* Enterprise Linux 7.3. It is not seen with older libc versions such as that in Red Hat* Enterprise Linux 7.2.

Cause : This appears to be due to a symbol conflict between libintlc.so.5 from the Intel compiler version 16.0.3 or 16.0.4 and libc.so.6, as described for example at https://bugzilla.redhat.com/show_bug.cgi?id=1377895

Resolution Status : This incompatibility has been addressed in version 17 of the Intel® compiler.

Workaround : For Intel Compiler versions 16.0.3 and 16.0.4, the problem may be worked around by replacing libintlc with the corresponding library from the version 17 compiler, or possibly by preloading libintlc.so.5 from the version 17 compiler.
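As a sketch of the workaround (the install paths and the application name `my_app` are assumptions; adjust them to your own compiler layout):

```shell
# Option 1: preload libintlc.so.5 from the version 17 compiler for one run.
LD_PRELOAD=/opt/intel/compilers_and_libraries_2017/linux/lib/intel64/libintlc.so.5 ./my_app

# Option 2: place the version 17 runtime libraries ahead of the 16.0.x ones
# in the library search path so libintlc is resolved from version 17.
export LD_LIBRARY_PATH=/opt/intel/compilers_and_libraries_2017/linux/lib/intel64:$LD_LIBRARY_PATH
./my_app
```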

 

Intel® Deep Learning SDK Deployment Tool Tutorial


Download PDF [PDF 728 KB]

Revision History

Revision Number | Description | Revision Date
001 | Initial version | December 16

Table of Contents

1 Introduction
  1.1 Related Information
  1.2 Installing Intel® Deep Learning SDK Deployment Tool
  1.3 Conventions and Symbols
  1.4 Introducing the Intel® Deep Learning SDK Deployment Tool
2 Using the Intel® Deep Learning SDK Deployment Tool
  2.1 Typical Usage Model
  2.2 Model Optimizer Overview
    2.2.1 Prerequisites
    2.2.2 Running the Model Optimizer
    2.2.3 Known Issues and Limitations
  2.3 Inference Engine Overview
    2.3.1 Building the Sample Applications
    2.3.2 Running the Sample Applications
3 End-to-end user scenarios
  3.1 Inferring an Image Using the Intel® Math Kernel Library for Deep Neural Networks Plugin

1 Introduction

The Intel® Deep Learning SDK Deployment Tool User Guide provides guidance on how to use the Deployment Tool to optimize trained deep learning models and integrate the inference with application logic using a unified API. See the End-to-End User Scenarios chapter to find usage samples.

This guide does not provide information on the Intel® Deep Learning SDK Training Tool. For this information, see the Intel® Deep Learning SDK Training Tool User Guide.

1.1 Related Information

For more information on SDK requirements, new features, known issues and limitations, refer to the Release Notes document.

1.2 Installing Intel® Deep Learning SDK Deployment Tool

For installation steps please refer to the Intel® Deep Learning SDK Deployment Tool Installation Guide.

1.3 Conventions and Symbols

The following conventions are used in this document.

SDK: Software Development Kit

API: Application Programming Interface

IR: Internal representation of a deep learning network

CNN: Convolutional Neural Network

1.4 Introducing the Intel® Deep Learning SDK Deployment Tool

The Intel® Deep Learning SDK Deployment Tool is a feature of the Intel® Deep Learning SDK, which is a free set of tools for data scientists, researchers, and software developers to develop, train, and deploy deep learning solutions.

With the Intel® Deep Learning SDK Deployment Tool you can:

  • Optimize trained deep learning networks through model compression and weight quantization, which are tailored to end-point device characteristics.
  • Deliver a unified API to integrate inference with application logic.

The Deployment Tool comprises two main components:

Model Optimizer

Model Optimizer is a cross-platform command line tool that:

  • Takes as input a trained network that contains a certain network topology, parameters, and the adjusted weights and biases. The input network is produced using the Caffe* framework.
  • Performs horizontal and vertical fusion of the network layers.
  • Prunes unused branches in the network.
  • Applies weights compression methods.
  • Produces as output an Internal Representation (IR) of the network – a pair of files that describe the whole model:
    • Topology file – an .xml file that describes the network topology.
    • Trained data file – a .bin file that contains the weights and biases binary data.
  • The produced IR is used as an input for the Inference Engine.

Inference Engine

Inference Engine is a runtime which:

  • Takes as input an IR produced by Model Optimizer
  • Optimizes inference execution for target hardware
  • Delivers inference solution with reduced footprint on embedded inference platforms
  • Enables seamless integration with application logic, which eases transition between platforms from Intel® through supporting the same API across a variety of platforms.

2 Using the Intel® Deep Learning SDK Deployment Tool

2.1 Typical Usage Model

The scheme displays the typical usage of the Deployment Tool to perform inference of a trained deep neural network model. You can train a model using the Intel® Deep Learning SDK Training Tool or Caffe* framework.

  • Provide the model in the Caffe* format for Model Optimizer to produce the IR of the model based on the certain network topology, weight and bias values, and other parameters.
  • Test the model in the IR format using the Inference Engine in the target environment. Deployment Tool contains sample Inference Engine applications. For more information, see the Running the Sample Applications section.
  • Integrate the Inference Engine in your application and deploy the model in the target environment.

2.2 Model Optimizer Overview

The Model Optimizer is a cross-platform command line tool that facilitates transition between training and deployment environments.

The Model Optimizer:

  • Converts a trained model from a framework-specific format to a unified framework-independent format (IR). The current version supports conversion of models in Caffe* format only.
  • Can optimize a trained model by removing redundant layers and fusing layers, for instance, Batch Normalization and Convolution layers.

The Model Optimizer takes a trained model in Caffe* format (a .prototxt file with the network topology and a .caffemodel file with the network weights) and outputs a model in the IR format (an .xml file with the network topology and a binary .bin file with the network weights):

The Model Optimizer is also included into distributions of Intel® Deep Learning SDK Training Tool.

2.2.1 Prerequisites

  • The Model Optimizer is distributed as a set of binary files (an executable binary and a set of shared objects) for 64-bit Ubuntu* OS only. The files can reside in any directory with write permissions set.
  • Caffe* framework and all its prerequisites are to be installed. The libcaffe.so shared object must be available.
  • The Model Optimizer works with the Berkeley* community version of Caffe* and the Intel® distribution of Caffe. It may fail to work with other versions of Caffe*.

2.2.2 Running the Model Optimizer

To run the Model Optimizer, perform the following steps.

  • Add the path to the libCaffe.so shared object and the path to the Model Optimizer executable binary to LD_LIBRARY_PATH.
  • Change the current directory to the Model Optimizer bin directory. For example:
  • cd /opt/intel/deep_learning_sdk_2016.1.0.<build_number>/deployment_tools/model_optimizer
  • Run the ./ModelOptimizer command with the desired command line arguments:
  • -w: Path to a binary file with the model weights (.caffemodel file)
  • -i: Generate IR
  • -p: Desired precision (for now, must be FP32, because the MKL-DNN plugin currently supports only FP32)
  • -d: Path to a file with the network topology (.prototxt file)
  • -b: Batch size; an optional parameter, equals the number of CPU cores by default
  • -ms: Mean image values per channel
  • -mf: File with the mean image in the binaryproto format
  • -f: Network normalization factor (for now, must be set to 1, which corresponds to the FP32 precision)

Some models require subtracting the image mean from each image during both training and deployment. There are two available options for subtraction:

  • -ms: subtracts mean values per channel
  • -mf: subtracts the whole mean image

The mean image file should be in the binaryproto format. For the ilsvrc12 dataset, the mean image file can be downloaded with the get_ilsvrc_aux.sh script from Caffe*:

./data/ilsvrc12/get_ilsvrc_aux.sh

Model Optimizer creates a text .xml file and a binary .bin file with a model in the IR format in the Artifacts directory in your current directory.

2.2.3 Known Issues and Limitations

The current version of the Model Optimizer has the following limitations:

  • It is distributed for 64-bit Ubuntu* OS only.
  • It can process models in Caffe* format only.
  • It can process popular image classification network models, including AlexNet, GoogleNet, VGG-16, LeNet, and ResNet-152, and fully convolutional network models like FCN8 that are used for image segmentation. It may fail to support a custom network.

2.3 Inference Engine Overview

The Inference Engine facilitates the deployment of deep learning solutions by delivering a unified API to integrate the inference with application logic.

The current version of the Inference Engine supports inference of popular image classification networks, including LeNet, AlexNet, GoogleNet, VGG-16, VGG-19, and ResNet-152, and fully convolutional networks like FCN8 used for image segmentation.

The Inference Engine package contains headers, libraries, and two sample console applications:

  • Sample Application for Image Classification - The application demonstrates how you can use Inference Engine for inference of popular image classification networks like AlexNet and GoogleNet.
  • Sample Application for Image Segmentation - The application demonstrates how you can use Inference Engine for inference of image segmentation networks like FCN8.

2.3.1 Building the Sample Applications

The recommended build environment is the following:

  • Ubuntu* x86_64 version 14.04 or higher, GCC* version 4.8 or higher
  • CMake* version 2.8 or higher

You can build the sample applications using the CMake file in the samples directory.

  1. In the samples directory create a new directory that will be used for building:
    $ mkdir build
    $ cd build

  2. Run CMake to generate Make files:
    $ cmake <path_to_samples_directory>

  3. Run Make to build the application:
    $ make

2.3.2 Running the Sample Applications

Running the sample application for image classification

Running the application with the -h option shows the usage prompt:

$ ./classification_sample --help
classification_sample [OPTION]
Options:
    -h, --help 		Print a usage message.
    -i "<path1>""<path2>" ..., --images "<path1>""<path2>" ...
   						Path to a folder with images or path to an image files: a .ubyte file for LeNet and a .bmp file for the other networks.
    -m "<path>", --model "<path>" 	Path to an .xml file with a trained model.
    -p "<name>", --plugin "<name>" 	Plugin name. For example MKLDNNPlugin.
    -pp "<path>", --plugin_path  	Path to a plugin folder.
    -ni N, --niter N 	The number of iterations to do inference; 1 by default.
    -l "<path>", --label "<path>" 	Path to a file with labels for a model.
    -nt N, --ntop N 	Number of top results to output; 10 by default.
    -pc, --performance_counts 		Enables printing of performance counts.

The sample commands below demonstrate use of the sample application for image classification to perform inference on an image using a trained AlexNet network and the Intel® Math Kernel Library for Deep Neural Networks (Intel® MKL-DNN) plugin:

$ ./classification_sample -i <path_to_image>/cat.bmp -m <path_to_model>/alexnet.xml -p MKLDNNPlugin -pp /path/to/plugins/directory

By default the application outputs 10 top inference results. Add the --ntop or -nt option to the previous command to modify the number of top output results. For example, to get the top 5 results, you can use the following command:

$ ./classification_sample -i <path_to_image>/cat.bmp -m <path_to_model>/alexnet.xml -p MKLDNNPlugin -nt 5 -pp /path/to/plugins/directory

Running the sample application for image segmentation

Running the application with the -h option shows the usage message:

$ ./segmentation_sample -h
segmentation_sample [OPTION]
Options:
	-h Print a usage message.
	-i "" Path to a .bmp image.
	-m "" Path to an .xml file with a trained model.
	-p "" Plugin name. For example MKLDNNPlugin.

You can use the following command to do inference on an image using a trained FCN8 network:

$ ./segmentation_sample -i /inputImage.bmp -m /fcn8.xml -p MKLDNNPlugin -pp /path/to/plugins/directory

The application outputs a segmented image (out.bmp).

3 End-to-End User Scenarios

3.1 Inferring an Image Using the Intel® Math Kernel Library for Deep Neural Networks Plugin

  1. Configure the Model Optimizer as described above and go to the folder with the binaries:
    cd <path_to_DLSDK>/deployment_tools/model_optimizer

  2. Add to the LD_LIBRARY_PATH variable the path to the libCaffe.so shared object and the Model Optimizer folder:
    export LD_LIBRARY_PATH=${CAFFE_ROOT}/build/lib:
    <path_to_DLSDK>/deployment_tools/model_optimizer

  3. Configure Model Optimizer for the MKL-DNN plugin using the command line arguments listed in the Running the Model Optimizer section.
  4. Run the following command:
    ./ModelOptimizer -w <path_to_network.caffemodel> -i -p FP32 -d
    <path_to_deploy.prototxt> -f 1 -b 1 -ms
    "104.00698793,116.66876762,122.67891434"

  5. On successful completion, the command outputs the IR representation of the model, located here:
    <path_to_DLSDK>/deployment_tools/model_optimizer/bin/Artifacts/<NetworkName>/

  6. Compile the Inference Engine classification sample application as it is described in the Building the Sample Applications chapter.
  7. Go to the compiled binaries:
    cd <path_to_DLSDK>/deployment_tools/inference_engine/bin/intel64/

  8. Infer an image using the trained and optimized model:
    $ ./classification_sample -i <path_to_image>/cat.bmp -m
    <path_to_model>/alexnet.xml -p MKLDNNPlugin -pp /path/to/plugins/directory

Getting Started on Arduino* 101/Genuino* 101 with Intel® System Studio for Microcontrollers using the Flyswatter* 2 JTAG* debugger


 

Prerequisites

Hardware Requirements

Software Requirements

Set up

Hardware Setup

  1. Power the Arduino 101/Genuino 101 board by connecting it to the host PC with a USB-A-B-cable (the green LED will light up).
  2. Power the Flyswatter 2 by connecting it to the host PC with a USB-A-B-cable (the green LED will light up).
  3. [To enable flashing]  Connect JTAG header on the board to the Flyswatter 2 via the ARM-JTAG-20-10 adapter.  Match pin 1 to pin 1 as shown in Diagram A.
  4. [To enable serial]  Connect the USB type A end of the FTDI cable to the host PC and use jumper wires to connect the other end to the board as listed below and pictured in Diagram A.
    1. Connect the Serial Cable RX pin to the Arduino 101's TX pin (Pin 1)
    2. Connect the Serial Cable TX pin to the Arduino 101's RX pin (Pin 0)
    3. Connect the Serial Cable GND pin to one of the Arduino 101's GND pins

Diagram A:

Image of board setup

 

Once you have your board connected, you can verify your setup in the Device Manager by confirming you see these entries (highlighted below for reference):

Device Manager

If you do not see these, you may need to install the FTDI driver. To install the driver see the link in Hardware Requirements section under the FTDI entry.

The USB Serial Port entries are for the serial connection to the board. One of these is the Flyswatter* 2's onboard serial port, which you can use if you have an adapter. The second one is the external FTDI cable we are using. Your system will automatically assign COM port numbers, so check to see which ones you have. The last thing you'll see is the OpenOCD* JTAG entry in the Universal Serial Bus devices category; this is the connection from the Flyswatter 2 to your board that allows you to upload your code.

 

Software Setup

Intel® System Studio for Microcontrollers Update 1

  1. Download Intel® System Studio for Microcontrollers
  2. Run the installer you downloaded and enable Arduino* 101/Genuino* 101 Board Support. Do this by selecting the “Customize” option on the installation summary screen.  See example below:

 Installer option

 

Zephyr Development Environment Setup

Intel® System Studio for Microcontrollers Update 1 provides the Real Time Operating System (RTOS) Zephyr* in the installation package.  This guide uses a Zephyr-based application for the Arduino 101/Genuino 101.  Zephyr works out of the box on Linux* but to run this on Windows, you will need to install the prerequisites shown here:

To install MinGW, visit the site MinGW Home and install the following packages with their installer mingw-get-setup.exe

  • mingw-developer-toolkit
  • mingw32-base
  • msys-base
  • msys-binutils
  • msys-console
  • msys-w32api

Then launch the MSYS console. The installer does not create shortcuts for you, but the script to launch it is located here by default:

 

C:\MinGW\msys\1.0\msys.bat

 

From in this console you will then need to replace the default fstab file as shown:

 

cp /etc/fstab.sample /etc/fstab

 

Then, in the same console run these commands to install the final prerequisites:

 

mingw-get update
mingw-get install libpthread msys-libregex-dev --all-related 

 

Now you are ready to create your first Arduino* 101 / Genuino 101 project with Intel® System Studio for Microcontrollers!

 

Creating your First Application

Follow the steps below to create your Arduino* 101 / Genuino 101 application.

Launch Intel® System Studio for Microcontrollers Update 1

Intel® System Studio for Microcontrollers can be launched from the desktop icon or the start menu entry or by browsing to the install directory and running the launcher.

By default this can be found here:

 

C:\IntelSWTools\ISSM_2016.1.067\iss_mcu_ide_eclipse-launcher.bat

 

Creating a new project

Go to File > New > Intel Project for Microcontroller… and follow the prompts to create a new project for the Arduino* 101/Genuino 101 board. 

Create Project

Note: Missing Arduino Board in the Create a new Project menu

If you did not choose a custom install during the options portion of the installation to select the Arduino* 101/Genuino* 101 it will not show up as a developer board option in the Create a New Project view.

 

If you missed this step when you installed the tools then simply:

1. Re-launch the installer

2. Select “Modify”

Modify Install

Then install the Arduino* 101/Genuino 101 Board support:

Arduino Board Support

 

Customize and select example template

After selecting the Arduino* 101 / Genuino* 101 you may customize the various settings of the project and choose which example template to use.  For this guide we selected the Hello World example template.

Hello World Project

Build the Project

To build the project select the hammer icon.

Debug Build

Note: Build Errors

If you see errors building one of the provided projects then go back to "Zephyr Development Environment Setup on Windows" and make sure each component is installed properly.

 

Flash the project to the Arduino* 101/ Genuino* 101

After the build is complete, flash (upload) your application onto the microcontroller with the (flashing) option in the drop down menu of the debug icon:

Debug Flashing

 

After flashing, the application will halt automatically with a temporary breakpoint at the start.

 

Setup your serial connection through the FTDI cable 

In the bottom right corner of Intel® System Studio for Microcontrollers you will see the “Serial Terminal”.  If the window is closed then you can reopen it from Window > Show view > Serial Terminal …

Serial Terminal

In this window, click the green "+" and select the COM port of the serial cable. If you're not sure which one it is, try replugging the FTDI cable and see which one changes in the Device Manager.

Now click Resume to start running the program:

Resume

You should now see output in the serial terminal window:

Hello World Output

 

Congratulations, your application is now up and running on your board!  You now can build applications and view serial output on the Arduino* 101/Genuino* 101 with Intel® System Studio for Microcontrollers Update 1. 

 

Reference Information

Intel® System Studio for Microcontrollers Product Page

Intel® System Studio 2016 for Microcontrollers User and Reference Guide

Zephyr Environment Setup on Windows

 

*Other names and brands may be claimed as the property of others.

Intel® Unite™ Solution: Introduction and Using Plugins


Contents

Introduction

When you start a meeting in a conference room, do you ever have the following problems?

  • You cannot find a cable to connect to your laptop and the display equipment.
  • Your laptop does not have an available interface.
  • The current display cable isn’t compatible with your computer.
  • Only one person at a time can present.

The Intel® Unite™ solution is an easy-to-use collaboration solution that can help you solve these problems in a traditional meeting room. Using the Intel Unite solution, you can quickly connect to the display in a conference room through a wireless connection. No cables, connectors, or adapters are needed. Both in-room and remote connections are supported. Multiple attendees can share their screens at the same time and easily annotate and transfer files.

Features

The Intel Unite solution has the following features:

  • Fast deployment and fast connection
  • No cables needed
  • Ability to share displays
    • In-room and remote viewing
    • Application sharing
    • Extended display
    • No cable needed for the client side
  • Peer-to-peer sharing
    • An Intel vPro technology-based client can share content with other clients; no hub is needed
  • Annotation capabilities for the presenter and other attendees
  • Split-screen view
    • Up to four users can share content per room display
  • File transfer
  • Support for plugin extensions

Deployment Options and Components

There are two deployment options for the Intel Unite solution.

Enterprise Movement

Option A: Enterprise environment (applicable to enterprise environments that have multiple working and network environments)

Components

Server. The enterprise web server that runs the PIN service and provides an admin portal for administrators to configure the solution, as well as download pages for the client application.

Hub. The specified Mini PC equipped with Intel vPro technology. This PC runs the Intel Unite solution hub application and connects to the display in the conference room. Any client can connect to the hub PC, then start their presentations or view content from the hub display.

Client. The user's device that will connect to the hub. The client can be a Windows* or Mac* device. For client devices, Intel vPro technology is not mandatory. Mobile devices are also supported, and an Android* client application will be provided in the future. The end user controls the hub through the client application.

Plugin. The software components installed on the hub (for example, the Skype for Business plugin and the Guest Access plugin). Plugins enrich the functionality and extend the user experience of the Intel Unite solution.

Option B: Standalone mode (applicable to a single business network environment)

Components

Hub. The specified Mini PC equipped with Intel vPro technology. This PC runs an Intel Unite solution hub application that displays a PIN and hosting plugins. The hub is connected to a display in the conference room.

Client. The user's device that will connect to the hub; it can be a Windows or Mac device. Intel vPro technology is not mandatory for the client device. Mobile devices are also supported, and an Android client application will be provided in the future. The end user controls the hub through the client application.

Plugin. The software components installed on the hub (for example, Skype for Business plugin, Guest Access plugin). The plugin enriches the functionality and extends the user experience of the Intel Unite solution.

Applications for the Intel® Unite™ Solution

A group of applications needs to be installed on each component:

Application that Needs to Be Installed | Component | Remarks
Intel® Unite™ application on the server | Server | Only required for the enterprise environment. Intel provides this application.
Intel® Unite™ application on the hub | Hub | The core component of the Intel Unite solution. Intel provides this application.
Intel® Unite™ client application | Client | Installed on the end-user device. Intel provides this application.
Intel® Unite™ plugin (for example, the Intel Unite plugin for Skype for Business); it can be an installation program or just a .dll file (named xxxPlugin.dll) | Plugin | The plugin is software and must be installed on the hub computer. When the client connects to the hub, the available plugins are displayed on the client side. Intel or a third party provides the plugin.

Hardware and software requirements

You’ll need to meet the following requirements before you start to use the Intel Unite solution.

Server

  • Microsoft Windows Server* 2008 or greater
  • Microsoft Internet Information Services* with SSL enabled - This will require a trusted web server certificate with an internal or public root of trust
  • Microsoft SQL Server* 2008 R2 or greater
  • Microsoft .NET* 4.5 or greater
  • 4 GB RAM
  • 32 GB available storage

Hub

To see the list of supported devices, go to: http://www.intel.com/buy/us/en/audience/unite

  • Microsoft Windows 7, 8, 8.1, or 10
  • Recommended latest patch level
  • Microsoft .NET 4.5 or greater
  • A Mini PC based on the 4th generation Intel® Core™ vPro™ processor or newer (Note: The hub must be an Intel vPro technology-based computer, but not all Intel vPro technology-based computers can be used as the hub. Only an authorized Mini PC can run the hub application.)
  • 4 GB RAM
  • 32 GB available storage

Client

  • Microsoft Windows 7, 8, 8.1, or 10
  • Microsoft .NET 4.5 or greater
  • OS X* 10.10.5 or greater
  • iOS* 9.3 or higher
  • Wired or wireless network connection
  • 1 GB RAM
  • 1 GB available storage

NOTE: A client application for Android will be available in 2017.

Below is the basic hardware combination of the Intel Unite solution.

  • Hub computer with an attached display, audio, microphone, or other devices
  • Client device (laptop or mobile device)

Intel Unite Solution 

Where to get software components

The Intel® Unite™ client software can be downloaded for free here:

https://downloadcenter.intel.com/download/25280/Intel-Unite-app

For other installations (server, hub, plugins, and so on) and related documents, go to: http://msp.intel.com/unite

Running the Intel Unite solution on an authorized PC does not require a license fee. Trying to run the hub on an unauthorized PC will fail. For software download and installation instructions, please contact an Intel Unite solution integrator or technical support.

Getting started

This section discusses how to set up Intel Unite solution 3.0 in standalone mode and how to install and use the plugins. Some common issues will also be discussed. We will install two plugins: Skype for Business and Guest Access. To set up using enterprise mode, please refer to the Intel Unite Solution Enterprise Deployment Instructions, which can be downloaded from http://msp.intel.com/unite.

First, we need to prepare the installation package.

Intel Unite Plugin

You should install the hub application first, and then install the client application and plugins.

Install the Intel® Unite™ Application on hub

  1. Install the hub application on an authorized Intel vPro technology-based Mini PC. For a list of supported PCs, go to http://www.intel.com/buy/us/en/audience/unite.
  2. Select Standalone mode during the installation process.

    Standalone Option
  3. At the next several screens, click Next until the installation is finished. You will see two exe files on the desktop:

    Intel Unite exe files
  4. Launch IntelUnite.exe (by default it is located in C:\Program Files (x86)\Intel\Intel Unite\Hub).
    • Enter the shared key on this hub. The shared key is used to identify the hub, and the client application must use the same shared key as the hub in order to communicate with it.

      Intel Unite Setup
  5. If your PC is not an authorized machine (even if it’s an Intel vPro technology-based computer), when you try to launch the application, you’ll get an error message like the one below:

    Intel Unite error message
  6. After setting up the shared key, you’ll see the following screen.

    Intel Unite configure screen
     

    Selecting Yes means that once you start this computer, it will log on automatically and start the Intel Unite application in full-screen mode, which makes it inconvenient to access other applications or windows. Selecting No means you must open the Intel Unite application manually. If you are just testing or studying the application, or need access to other applications, it is recommended that you choose No. If your computer is ready to be used as an Intel Unite appliance, choose Yes. Once you choose Yes, the following changes will be made to your system:

    1. A new non-administrative user will be created:
      User name: UniteUser
      Password: P@ssw0rd
    2. The computer will be set to automatically log in as UniteUser and start the Intel Unite application when the system boots.
    3. A Windows firewall exception will be added to allow the Intel Unite application.
    4. The power settings will be set to Always On.
      Notice: This can be undone by using the Intel Unite Settings application or uninstalling the Intel® Unite™ software.
  7. After the configuration is finished, launch IntelUnite.exe; it will display in full-screen mode.

    Intel Unite Download Client
  8. Change the hub settings: open “Intel Unite Setting.exe” and adjust each setting as described by its prompt.

    Intel Unite hub settings
  9. The hub setup is finished. Next, connect a large display (such as a monitor or TV) to the hub machine and connect the hub to your enterprise network.
  10. Press ALT+F4 to exit full-screen mode and close the application.

Install the Intel® Unite™ Client Application

  1. Install the Intel® Unite™ client application “Intel Unite Client.mui.msi” on any Windows PC (OS X* and iOS* are also supported, but only the Windows client is discussed here).
  2. In standalone mode, the client device must be on the same subnet as the hub.
  3. When launching for the first time, enter the shared key that was set on the hub.

    Intel® Unite™ shared key
  4. After entering the shared key, you’ll launch the main menu of the Intel Unite client.

    Intel® Unite™ Main Menu
  5. To join a session, enter the PIN displayed on the hub screen.

    Note: if your client fails to connect to the hub, please do the following:

    1. Make sure the client uses the same shared key as the hub.
    2. Verify the PIN.
    3. Make sure the hub and client are in the same subnet in standalone mode; otherwise the connection will fail.
    4. Check the Windows firewall policy; add the Intel Unite application to your allowed app list. For detailed steps, please refer to “Intel_Unite_Firewall_Help_Guide.pdf”, which can be downloaded from http://msp.intel.com/unite.
  6. You’re ready to start using the Intel Unite solution on your client.
    • Quick connection for new attendees.
    • Present content or view content from the hub display.
    • Annotation capabilities by both presenters and attendees.
    • Up to four users can present their content on the hub display.
    • File transfer capabilities.

Install the Skype* for Business Plugin

The Skype for Business plugin is installed and deployed on the hub side. It allows people in a Skype online meeting to join an Intel Unite solution session, and it allows attendees in the Intel Unite solution-enabled meeting room to join and control the Skype online meeting. The Skype online meeting is controlled from the Intel Unite client application, but it actually runs on the hub computer, so the Intel Unite client does not need to install this plugin or the Skype for Business application.

Before installing the Skype for Business plugin, please make sure the software below is installed on the hub side:

  • Intel Unite application hub installed and configured
  • Microsoft Exchange 2010 or greater
  • Skype for Business installed and logged in

If you don’t have a Microsoft Exchange and Skype for Business account, it is recommended that you purchase Office 365* series software: https://products.office.com/en-US/business/compare-office-365-for-business-plans?legRedir=true&CorrelationId=a9d3fc01-6514-425e-91d9-3533b2a597f1

Office 365*

Double-click the installation package “Intel Unite Plugin for Skype for Business Installer.mui.msi”, and then configure the exchange server:

Exchange Server Configuration

The address in this screenshot is the Exchange web server address for Office 365, and the username and password are the Office 365 username and password.

If you are not using Office 365, please find out your exchange server information by doing the following:

  1. Launch Outlook. (Note: Outlook is not required to run on the hub; you can run this on any machine.)
  2. Press and hold the Ctrl key and right-click the Outlook system tray icon.
  3. You will see two new options in the context menu: Connection Status and Test E-mail Auto Configuration.
  4. Click Test E-mail Auto Configuration and then test to check the email server configuration.
  5. In the Results tab note the OOF URL to use as the server URL for the plugin (for example, https://exchange.domain.com/EWS/Exchange.aspx).
  6. To check whether the exchange information is correct, click Test Connection. It must pass the test before it can go to the next step.
  7. Click Next until the installation is finished.
  8. Before using this plugin, set Skype for Business to launch and sign in automatically. Otherwise, once the hub computer is restarted, Skype for Business will not launch and the plugin cannot work.
  9. Begin to use Skype for Business plugin.
    The usage scenario is:
    • The Intel Unite Hub is deployed in an Intel meeting room with the Skype for Business plugin installed. The hub computer has an Outlook* email account, and the Skype for Business account is already signed in on this computer. Ten Intel employees will attend the meeting in this room; they have installed the Intel Unite client application but do not have to have a Skype for Business account.
    • User A is an Intel employee but is out of the Intel office and connects to the Intel network through VPN. User A has a Skype for Business account.
    • User B is an external user at a location outside of Intel; user B has a Skype for Business account.
    • User A, user B, and those in the Intel Unite application hub meeting room (10 in-room participants) will have a meeting online. As user B is an external user, he or she cannot connect to the Unite meeting room directly. In this case, the Skype for Business plugin can help the Intel Unite application hub join the Skype for Business online meeting and connect user B to the meeting, too.

    Join Meeting

    Schedule and join a meeting:

    • Anyone who has a Skype for Business account can schedule the meeting in Microsoft Outlook. But the hub must be invited, and then the plugin can accept or decline this meeting request. Please refer to the figure below before sending out the meeting invitation:

      Join Skype* Meeting
    • If the hub’s calendar doesn’t have conflicts, the plugin will accept this invitation and display the meeting information on the bottom status line 10 minutes in advance of the meeting.
    • Before the meeting starts, the participants in the meeting room open their Intel Unite client application. Anyone can click on the Skype for Business plugin icon. The Join button will be available 10 minutes before the meeting start time. User A can also connect to the meeting room and open the Intel Unite client application to join the meeting.

      Ready to Present
    • Clicking the Join button displays a toast message in the upper-right corner of the hub display indicating that the room is joining the Skype online meeting; within 5 seconds the Skype for Business window should be full screen and in front, and the meeting room has joined the Skype for Business online meeting successfully.
    • User A and user B can join the Skype online meeting by clicking the Join Skype Meeting link in the email body. Although user A can connect to the Intel Unite application hub, after the hub joins the Skype online meeting the hub’s screen displays the Skype online meeting content, and that screen cannot be pushed to the client’s local screen. Since user A is not in the meeting room, he or she must join the Skype online meeting to view its content. User B simply follows the Skype online meeting instructions.

    Operations in the meeting

    • Share content
      An Intel Unite client in the meeting room clicks Start Presentation or Present Application. The hub computer displays the shared content (client to hub through the Intel Unite technology), and after several seconds users A and B see this content in the Skype online meeting window (hub → users A and B through the Skype collaboration server).

      See below the different screens that display for in-room attendee’s personal screen, user B’s screen, and the hub’s attached display:

      Intel® Unite™ Solution Overview

      Intel® Unite™ Solution Overview

      Intel® Unite™ Solution Overview

      User B’s shared content is presented on the hub’s screen, so in-room attendees can view it through the display attached to the hub.
    • Video and audio chat
      If a Skype for Business-compatible camera, microphone, and audio device are attached to the hub computer, the users in the meeting room can chat and talk with users A and B through Skype for Business.

      video and audio chat
    • Leave the meeting.
      Click the Skype for Business plugin icon, and then select Leave or another option. These operations take effect on the hub computer.

      Skype* Plugin

Install the Protected Guest Access Plugin

You can use the Protected Guest Access plugin to add a guest to the Intel Unite solution session when he or she cannot connect to your enterprise network. This plugin allows the guest client to connect to the hub directly without joining the enterprise network: the hub creates an ad hoc access point, and the guest device can download and install the Intel Unite client application and connect to the hub. This plugin is available starting with Intel Unite solution version 3.0 and is installed and deployed on the hub side.

Please make sure the Intel Unite application is deployed on the hub computer before you start to install “Protected Guest Access” plugin.

  1. Double-click the installation package “Intel Unite Plugin for Protected Guest Access.mui.msi” on the hub computer.
  2. Click Next until the installation is finished.
  3. Open UniteSetting.exe on the hub machine. If Verify Certificates on Plugins is set to Yes, please make sure the Guest Access plugin is trusted.

    plugin settings
  4. Start Unite.exe on the hub machine.
  5. Open an Intel Unite client application, connect to the hub, then you will see the Guest Access icon.

    Guest Access Button
  6. Click the Guest Access button, and then Start Guest Access.

    The hub will display the guest access information at the bottom of the status line.

    pin and shared key
  7. On the guest computer, search for the SSID IntelUnite_82M and connect to it. Then browse to http://192.168.173.1/guest and follow the instructions to download the Intel Unite client application.
  8. Start the Intel Unite client application on the guest computer, and then type the PIN displayed on the hub to join the Intel Unite solution session.

By using Intel® Unite™ plugins, ISVs can extend their reach into the conference room and provide rich functions in the meeting room. With plugins integrated, the Intel Unite solution is more powerful and brings users a better experience.

Troubleshooting and Q&As

  1. “ID666666” error when launching on an Intel vPro technology-based PC.
    Solution: The PC you are using is not an authorized PC that can act as an Intel Unite application hub.
  2. The client connects to the hub successfully, but when the client tries to present, the desktop shows a black screen.
    Solution: Please update your display driver to the latest version.
  3. If the hub connects to two monitors, can the client present to both of them? Can the client present to a specified monitor?
    Solution: When the hub is in duplicate mode, the client presents to both monitors and has no display choice. When the hub is in extend mode, the client can present to one of the two monitors, and the user can choose which one.
  4. If there is one hub in meeting room 1 and another hub in meeting room 2, can both hubs display the same content?
    Solution: As of Intel Unite solution version 3.0, this function is not supported. This feature will be supported in the Intel Unite solution version 4.0 in 2017.
  5. The Skype for Business plugin icon does not appear.
    Solution:
    1. Open the Hub Setting.exe - Plugins tab and set Verify the certificates in plugins to No.
    2. Ensure Skype for Business is installed and logged in before the Intel Unite application starts on the hub.
    3. Uninstall/Reinstall the plugin, and use the Test Connection button to ensure you have the correct settings.
  6. The Skype for Business plugin icon appears, but the Join icon does not appear even though a meeting is available at that time.
    Solution:
    1. Verify that the hub is invited in the meeting invitation. If it isn’t, the Join icon will not appear.
    2. Verify that the hub has accepted this invitation. If it didn’t, the Join icon will not appear.
    3. Verify that the Join Skype Meeting link is included in the email message. If it isn’t, the Join icon will not appear.
  7. The Skype for Business plugin Join icon appears, but fails to join the Skype meeting.

    Solution: On the hub computer, manually click the Join Skype Meeting link in the email message to verify whether Internet Explorer* can be opened and connect to the meeting successfully.
  8. After clicking the Guest Access plugin icon, it always says, “There was an error starting Guest Access”.

    Solution: Please do the following:
    1. Disable the firewall to check whether the firewall is preventing the plugin from talking on the local host. If yes, set this program to “Allowed”.
    2. Check whether the Intel Unite Guest Access service is started on the hub. If No, restart this service, and then retry.
  9. Is the Intel Unite solution free?
    There are no software licensing fees from Intel for the Intel Unite solution.
    The software is only available when purchasing authorized Mini PCs from Intel or OEM partners. But there may be a fee if you get the software from a third-party solution integrator.
  10. Does the Intel Unite solution support Linux*?
    No.
  11. How do I check the logs if I encounter an issue?
    1. In a command line, run “regedit”.
    2. Set the registry value HKCU\Software\Intel\Unite\LogFile to c:\temp\log.txt.
    3. Run “Intel Unite.exe /debug” to open a pop-up debug window.
    4. Check the logs in c:\temp\log.txt.
  12. Is Intel vPro technology mandatory to support the Intel Unite solution? How can I add the Intel Unite solution into an OEM product?

    Yes, Intel vPro technology is mandatory. The Intel Unite solution has hardware, software, productivity, and form factor requirements. Please contact Intel support to get further information.
  13. How many Intel Unite solution-enabled meeting rooms are there at Intel? How do I find these rooms?
    Please check for available rooms at: http://fmsitf01eot01.amr.corp.intel.com/uniterooms

References

  1. Intel Unite overview: http://intel.com/unite
  2. Intel Unite applications and plugin SDK: http://www.intel.com/content/www/us/en/support/software/software-applications/intel-unite-app.html
  3. Intel Unite solution community: https://soco.intel.com/groups/it-unite-info

About the Author

Qian, Caihong is application engineer in the Client Computing Enabling Team, Developer Relations Division, Software and Solutions Group. She is responsible for client business enabling such as security technology and Intel Unite solution technology.

Accessing Intel® Media Server Studio for Linux* codecs with FFmpeg


Intel hardware accelerated codecs are now accessible via FFmpeg* on Linux* systems where Intel® Media Server Studio is installed. The same OS and platform requirements apply, as the new additions to FFmpeg are simple wrappers to bridge the APIs.  For more information on Intel Media Server Studio requirements please see the Release Notes and Getting Started Guide.  

To get started:

  1. Install Intel Media Server Studio for Linux.  A free community edition is available from https://software.intel.com/en-us/intel-media-server-studio.
  2. Get the latest FFmpeg source from https://www.ffmpeg.org/download.html.  Intel Quick Sync Video support is available in FFmpeg 2.8 and newer for those who prefer a stable release.  Development is active so anyone needing latest updates and fixes should check the git repository tip.
  3. Configure FFmpeg with "--enable-libmfx --enable-nonfree", build, and install.  This requires copying include files to /opt/intel/mediasdk/include/mfx and adding a libmfx.pc file.  More details below.
  4. Transcode with an accelerated codec such as "-vcodec h264_qsv" on the ffmpeg command line.  Performance boost increases with resolution.
ffmpeg -i in.mp4  -vcodec h264_qsv out_qsv.mp4

 

Additional configure info:

The *_qsv codecs are enabled with "configure --enable-libmfx --enable-nonfree".  

A few additional steps are required for configure to work with Intel Media Server Studio codecs.

1. copy the mediasdk header files to include/mfx

# mkdir /opt/intel/mediasdk/include/mfx
# cp /opt/intel/mediasdk/include/*.h /opt/intel/mediasdk/include/mfx

2. provide a libmfx.pc file.  Same search rules apply as for other pkg-config configurations.  A good place to start is the same directory as an already findable config like libdrm.pc, but the PKG_CONFIG_PATH environment variable can be used to customize the search path.

example libmfx.pc file

prefix=/opt/intel/mediasdk
exec_prefix=${prefix}
libdir=${prefix}/lib/lin_x64
includedir=${prefix}/include

Name: libmfx
Description: Intel Media SDK
Version: 16.4.2
Libs: -L${libdir} -lmfx -lva -lstdc++ -ldl -lva-drm -ldrm
Cflags: -I${includedir} -I/usr/include/libdrm

 

Validation notes:

While this solution is known to work on a wide variety of systems, here are the most tested configurations:

Hardware: Intel® Xeon® Processors and Intel® Core™ Processors with support for Intel® Quick Sync Video

OS: CentOS 7.2 (3.10.0-327.36.3.el7.x86_64)

Software: Gold CentOS install of Intel® Media Server Studio 2017 R2

 

BigDL: Distributed Deep learning on Apache Spark


What is BigDL?

BigDL is a distributed deep learning library for Apache Spark; with BigDL, users can write their deep learning applications as standard Spark programs, which can directly run on top of existing Spark or Hadoop clusters.

  • Rich deep learning support. Modeled after Torch, BigDL provides comprehensive support for deep learning, including numeric computing (via Tensor) and high level neural networks; in addition, users can load pre-trained Caffe or Torch models into Spark programs using BigDL.
  • Extremely high performance. To achieve high performance, BigDL uses Intel® MKL and multi-threaded programming in each Spark task. Consequently, it is orders of magnitude faster than out-of-the-box open source Caffe, Torch, or TensorFlow on a single-node Xeon (i.e., comparable with mainstream GPU).
  • Efficient scale-out. BigDL can efficiently scale out to perform data analytics at "Big Data scale" by leveraging Apache Spark (a lightning-fast distributed data processing framework), as well as efficient implementations of synchronous SGD and all-reduce communication on Spark.

Why BigDL?

You may want to write your deep learning programs using BigDL if:

  • You want to analyze a large amount of data on the same Big Data (Hadoop/Spark) cluster where the data are stored (in, say, HDFS, HBase, Hive, etc.).
  • You want to add deep learning functionalities (either training or prediction) to your Big Data (Spark) programs and/or workflow.
  • You want to leverage existing Hadoop/Spark clusters to run your deep learning applications, which can be then dynamically shared with other workloads (e.g., ETL, data warehouse, feature engineering, classical machine learning, graph analytics, etc.)

BigDL Program Flow Chart

How to use BigDL?

  • To learn how to install and build BigDL (on both Linux and macOS), you can check out the Build Page
  • To learn how to run BigDL programs (as either a local Java program or a Spark program), you can check out the Getting Started Page
  • To try BigDL out on EC2, you can check out the Running on EC2 Page
  • To learn how to create practical neural networks using BigDL in a couple of minutes, you can check out the Tutorials Page
  • For more details, you can check out the Documents Page (including Tutorials, Examples, Programming Guide, etc.)

Get BigDL from Github

Download BigDL

Support

Saffron Technology™ Solutions


Saffron Technology™ provides architecture, tools, and packages--everything you need to solve business problems.

The following documentation is available.

Saffron Application Packages

Our packages of dashboard solutions can either act as stand-alone solutions or be embedded into your current workflow to assist in individual functions.

SaffronStreamline Issues and Defects Resolution (IDR) 1.0 User Guide

SaffronStreamline IDR includes five dashboard solutions that enable end-users to quickly identify possible duplicate issues, workforce redundancies, and skill gaps.

SaffronArchitect

SaffronArchitect is the package configuration and visualization tool. 

SaffronArchitect 1.0 User Guide

Installing and Deploying SaffronArchitect 1.0

SaffronElements

SaffronElements is a robust developer solution that includes APIs, Widgets, and other items that enable you to take advantage of many Saffron Technology™ solutions.

SaffronElements APIs

SaffronElements uses standard REST (Representational State Transfer) APIs to create and call our Thought Processes (THOPs), which are user-defined functions that allow you to tie together various capabilities using a scripting language. 


SaffronElements Widgets

SaffronElements widgets provide visualizations that allow you to analyze query results. Widgets can be customized and embedded in your system. They include items such as bar charts, interactive tables, and heat maps.

Visual Analytics

SaffronElements uses the Tableau® web data visualization tool to visualize output from our APIs.


OpenCL™ Out-of-Order Queue on Intel® Processor Graphics


Introduction

The compute power of Intel® Processor Graphics is continuously growing with each generation. Each generation builds upon the last with increasing compute capacity in both fixed function and programmable execution units (EUs). How do developers tap into this power and utilize the platform to run GPU workloads in the most efficient manner?

This paper details the implementation of out-of-order queues, an OpenCL™ construct that allows independent kernels to execute simultaneously whenever possible and thus keeps all GPU assets fully utilized. It uses simple matrix multiplication (GEMM) kernels and the cl_intel_advanced_motion_estimation extension to illustrate how OpenCL out-of-order queues can boost the performance of an application that has many independent workloads.

GPU Workloads and the OpenCL™ Command Queue

Execution of an OpenCL program occurs in two parts: kernels that execute on one or more OpenCL devices and a host program that executes on the host. The host program defines the context for the kernels and manages their execution. Objects such as memory, programs, and kernels are created within an OpenCL context. The host creates a data structure called a command-queue to coordinate execution of the kernels on the devices. The host places commands into the command-queue, which are then scheduled onto the devices within the context.

The command-queue schedules commands for execution on a device. These execute asynchronously between the host and the device. Commands execute relative to each other in one of two modes:

In-Order Execution: Commands are launched in the order they appear in the command queue and complete in order; a prior command in the queue completes before the following command begins. This serializes the execution order of commands in a queue: for example, kernel A completes before kernel B begins.

Out-of-Order Execution: Commands in an out-of-order queue are not guaranteed any order of execution; any ordering constraints are enforced by the programmer through explicit synchronization commands. For example, when a command waiting for a user event is placed on an out-of-order queue, it will not execute until the event is satisfied, but other commands, even ones placed after it, will start execution if their dependencies are met.

When creating the OpenCL command queue the developer has the option of specifying the order in which the commands will be executed. The default method of operation for an OpenCL command-queue is “in-order”, which means commands will be executed in the order in which they are submitted.

Figure 1: An out-of-order queue enables independent kernels to execute simultaneously whenever possible to keep all GPU assets busy; execution order is not guaranteed.

OpenCL Out-of-Order Queue

The OpenCL standard lets an application configure a command-queue to execute commands out-of-order.

In many cases multiple different kernels could potentially be ready to execute concurrently; in other words, commands placed in the queue may begin and complete execution in any order, so the device can utilize all its hardware assets to the maximum.

Applications can configure the commands enqueued to a command-queue to execute out-of-order by setting the CL_QUEUE_OUT_OF_ORDER_EXEC_MODE_ENABLE property of the command-queue. The property is specified when the command-queue is created with clCreateCommandQueue (or clCreateCommandQueueWithProperties). More details are shown in the section below.

When using the out-of-order execution mode there is no guarantee that the enqueued commands will finish execution in the order that they were queued because there is no guarantee that kernels will be executed in order, that is, based on when the clEnqueueNDRangeKernel calls are made within a command-queue. It is therefore possible that an earlier clEnqueueNDRangeKernel call to execute kernel “A” identified by event “A” may execute and/or finish later than a clEnqueueNDRangeKernel call to execute kernel “B” which was called by the application at a later point in time.

To guarantee a specific order of execution of kernels, a wait on a particular event (in this case event “A”) can be used. The wait for event “A” can be specified in the event_wait_list argument to clEnqueueNDRangeKernel for kernel “B”.

Be aware that if we took a trivial sequence of in-order queue (IOQ) work (ABC) and migrated it to OOQ (A->B->C) with explicit dependencies yielding the same sequence, the results may be worse than an IOQ alone; a future release may similarly optimize such a sequence for OOQs.

Creating an OpenCL Out of Order Command Queue

Creating an out-of-order queue is a straightforward process of setting the cl_command_queue_properties argument to CL_QUEUE_OUT_OF_ORDER_EXEC_MODE_ENABLE when creating the command queue. If set, the commands in the command-queue are executed out-of-order; otherwise, commands are executed in-order.

Refer to OpenCL host API clCreateCommandQueue and clCreateCommandQueueWithProperties for more details.

For example:

cl_command_queue_properties qProperties = CL_QUEUE_OUT_OF_ORDER_EXEC_MODE_ENABLE;
cl_command_queue queue = clCreateCommandQueue(context, deviceIds[0], qProperties, &error);

Note that clCreateCommandQueue is used through OpenCL v1.2 and is deprecated in favor of clCreateCommandQueueWithProperties in OpenCL v2.0 and later.

Avoid the CL_QUEUE_PROFILING_ENABLE property in production as it may severely impact performance or even make concurrent execution impossible.

OpenCL Implementation

Identify independent tasks that can run in parallel and prepare them to execute through one out-of-order command-queue:

double start_exec_time = time_stamp();
for (cl_uint i = 0; i < iterations; i++)
{
    error = clEnqueueNDRangeKernel(oclobjects.queue, executable.kernel, 2, NULL,
                                   global_size, local_size, 0, NULL, NULL);
    error = clEnqueueNDRangeKernel(oclobjects.queue, test_kernel_1, 2, NULL,
                                   test_global_size, test_local_size, 0, NULL, NULL);
    error = clEnqueueNDRangeKernel(oclobjects.queue, test_kernel_2, 2, NULL,
                                   test_global_size, test_local_size, 0, NULL, NULL);
    // (etc.)
}
error = clFinish(oclobjects.queue);
double execution_time = time_stamp() - start_exec_time;

Note: For each stream of commands avoid flushing or blocking operations such as clFlush, clFinish, clWaitForEvents or blocking enqueue commands and manage dependencies with event waitlists.

For example:

cl_event *nd_range_events = new cl_event[streams];
for (int s = 0; s < streams; s++)
{
     error = clEnqueueNDRangeKernel(oclobjects.queue, executable.kernel, 2, NULL,
                                    global_size, local_size, 0, NULL,
                                    &nd_range_events[s]);

     void *mapped = clEnqueueMapBuffer(oclobjects.queue,
                        matrix_buffer,   // cl_mem result buffer (illustrative name)
                        CL_FALSE,        // non-blocking map
                        CL_MAP_READ,
                        0,
                        matrix_memory_size,
                        1, &nd_range_events[s], NULL, &err);
}
error = clFinish(oclobjects.queue);

Applying Out-of-Order Command Queue to General Matrix Multiply OpenCL Kernels

A general matrix multiply (GEMM) sample demonstrates how to efficiently utilize an OpenCL device to perform general matrix multiply operation on two dense square matrices. General matrix multiply is a subroutine that performs matrix multiplication: C = alpha*A*B + beta*C, where A, B and C are dense matrices and alpha and beta are floating point scalar coefficients.

This implementation optimizes trivial matrix multiplication nested loop to utilize the memory cache more efficiently by introducing a well-known practice as tiling (or blocking), where matrices are divided into blocks and the blocks are multiplied separately to maintain better data locality. Most of the GEMM code comes from General Matrix Multiply Sample from Intel® SDK for OpenCL™ applications.

To evaluate the performance of a regular GEMM kernel, by default we calculate C’ = alpha*A*B + beta*C, where A, B, and C are 256 × 256 matrices. The kernel gemm_nn deals with block multiplication; each work item computes a 1 × 128 block, so the global size is 256 × 2. Therefore we get 512 work items total, and we set the local size to 16 × 1, which means there are 32 work groups. Only a few work-items are needed to finish one kernel execution. For example, on a 6th Generation Intel® Core™ i5-6600k processor at 3.50 GHz with Intel® HD Graphics, which contains 24 execution units and can hold 2688 SIMD16 work-items, one execution of the kernel gemm_nn cannot occupy all the computing resources. Therefore, we can get performance benefits from submitting kernels to the out-of-order queue when a single kernel’s execution cannot fill all the EUs.

To demonstrate the effectiveness of out-of-order queues versus in-order queues, we run eight GEMM kernels in one loop to calculate eight different matrix multiplications. In an in-order queue the eight kernels are executed one by one on the GPU, so the EUs cannot be saturated. In an out-of-order queue, multiple executions happen at the same time and occupy all the EUs, which significantly improves performance. We discuss performance in the next section.

Applying Out-of-Order Command Queue with clWaitForEvents to Two GEMM Kernels

In this example, we’d like to submit two GEMM kernels with serialization. For example, we calculate C_1 = alpha*A*B + beta*C in kernel_1, and then calculate C_2 = alpha*A*B + beta*C_1 in kernel_2. Since in-order execution cannot fill all the EUs, we can also get performance benefits from submitting the serialized GEMM kernels to the out-of-order queue. Note that, to get the right result C_2, kernel_2 must be executed strictly after kernel_1. We can use clWaitForEvents, blocking enqueue commands, or event wait lists to manage these dependencies. The experiments were run in both out-of-order and in-order queues; the performance improvement is discussed in the next section.

Applying Out-of-Order Queue to VME and General GEMM Kernels

A real-world example of the effectiveness of out-of-order queues can be demonstrated using a common media workload. The 6th Generation Intel® Core™ i5-6600K processor at 3.50 GHz with Intel HD Graphics has dedicated hardware blocks for video motion estimation (VME) processing along with execution units (EUs) available for general computations. Ideally, an application should utilize the EUs for general-purpose computations while still using the VME engine to operate on the media content in parallel.

In our sample the VME workload and a regular OpenCL kernel are enqueued back to back in out-of-order and in-order queues. Assuming the execution time of these two kernels is comparable, the speedup of an out-of-order queue is observed in the total execution time of the two kernels enqueued back to back and measured together. Each (VME + GEMM) kernel pair is executed 10 times.

Most of the VME code comes from the ocl_motion_estimation_advanced sample from the Intel® Media Server Studio samples (https://software.intel.com/en-us/intel-media-server-studio-support/code-samples).

Performance

Demonstrated below is the performance comparison between an in-order and an out-of-order queue on a 6th Generation Intel® Core™ i5-6600K processor at 3.50 GHz with Intel® HD Graphics, which contains 24 EUs running at 1.15 GHz, on a CentOS* 7.2.1511 platform.

First, we executed eight streams of a single GEMM kernel with matrix size 256x256, back to back in in-order and out-of-order queues, and used Intel® VTune™ Amplifier 2017 to measure the GPU usage, shown separately in Figure 2 and Figure 3.

As Figure 2 shows, in the middle of each kernel’s execution cycle VTune™ Amplifier may report the GPU EU array reaching 100 percent utilization. However, due to the relatively small NDRange sizes and the in-order command queue, GPU EU array usage reaches only about 96 percent at the beginning and end of each stream/kernel. Digging deeper into the EU array usage, we also find that EU Active is only about 57 percent, because of a high percentage of EU Stalled (about 40 percent) and EU Idle (about 3 percent) at the beginning and end of each kernel. This means the in-order command queue leads to underutilization of the GPU.

In Figure 3, in comparison with Figure 2, notice that the average GPU usage in the out-of-order queue is higher because most of the GPU idle periods (the gray lines) are eliminated, especially those between kernel switches.

Figure 2: Intel® VTune™ performance analysis of in-order command queues using the GEMM sample kernel.

Figure 3: Intel® VTune™ performance analysis of out-of-order command queues using the GEMM sample kernel.

We executed GEMM kernels for different matrix sizes; the performance gain of the OOQ is presented in the chart below. It clearly shows that the smaller the matrix, the greater the performance gain over the IOQ, which demonstrates that an out-of-order queue can regain the performance lost in workloads that use only a portion of the available GPU assets.

As before, we executed eight streams of the two serialized GEMM kernels, enqueued back to back in out-of-order and in-order queues; the computing time and GFLOPS are recorded below. As shown, we obtained about a 1.26x performance improvement.
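As a rough guide, GFLOPS figures for GEMM runs like these are typically derived from the classic 2*N^3 flop count per multiplication (lower-order terms for the alpha/beta scaling are ignored here); the exact convention used by the sample may differ.

```python
def gemm_gflops(n, seconds, streams=1):
    """Approximate GFLOPS for 'streams' back-to-back N x N GEMMs,
    using the conventional 2*N^3 multiply-add flop count."""
    flops = 2.0 * n ** 3 * streams
    return flops / seconds / 1e9

# Hypothetical timing, for illustration only:
# eight 256 x 256 GEMM streams finishing in 1 ms.
rate = gemm_gflops(256, 1e-3, streams=8)
```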

In the application “Applying Out-of-Order Queue to VME and General GEMM Kernels”, in order to make the VME and GEMM kernels comparable, the matrix size in the GEMM kernel was increased to (512 * 512). First we measured the EU array activity and computing time of the two kernels separately, listed in the following table. These data are used in the analysis of out-of-order queue performance later.

                   EU Array Active    Execution Time (ms)
VME * 10 times     4.2%               85.73
GEMM * 10 times    71.1%              141.875

Then we executed 10 streams of (VME + GEMM) kernels back to back in out-of-order and in-order queues. Note that VME and GEMM kernels are independent, and there is no clWaitForEvents between kernels. Their GPU usage is measured by VTune™ Amplifier. In Figure 4, the upper part is in the in-order queue and the bottom part is in the out-of-order queue.

In the upper part of Figure 4, we can see that EU Array Active (the green line) alternates between low and high. That’s because VME is executed first, with a short execution time and low EU activity, and then GEMM is executed in order, with a long execution time and high EU activity. The execution time of VME + GEMM in the in-order queue is about 225 ms, almost the same as the sum of the individual VME and GEMM execution times.

In the bottom part of Figure 4, due to out-of-order execution, the periodic change of the EU array disappears. Instead, we measured the lowest EU Array Active, which is about 24 percent, whereas in the in-order queue the lowest EU Active equals that of VME alone, about 4 percent. Moreover, the GPU usage shows fewer dips in the out-of-order queue. This demonstrates that with the out-of-order queue we get a higher average EU Array Active and keep all GPU assets better utilized.

In conclusion, in the (VME + GEMM) kernels application, we obtain about 1.5x performance improvement.

Figure 4: Intel® VTune™ performance analysis using (VME + regular GEMM) kernels.

Caveats/Limitations

  • The out-of-order queue feature is not a default property; the cl_command_queue_properties flag CL_QUEUE_OUT_OF_ORDER_EXEC_MODE_ENABLE must be specified when creating the OpenCL command queue.
  • Be aware of limited hardware resources (barriers, SLM) when expecting parallel execution benefits.
  • When using out-of-order queues, explicitly manage dependencies between enqueues through events and event wait-list arguments, as there is no in-order execution guarantee.
  • Internal optimizations may result in worse performance than using in-order queues when dependencies serialize the kernels. When in doubt, test and implement what works best for your specific workload.
  • When using events, call clReleaseEvent() for every event returned by an enqueue command; otherwise you will have a memory leak.
  • Speedup is observed in the total execution time of multiple kernels enqueued together into the same out-of-order command queue. Particular performance gains vary and depend on the workload and the given hardware configuration.

Conclusion

In this article we demonstrated how to use OpenCL out-of-order queues to improve performance on the 6th Generation Intel® Core™ i5-6600K processor with Intel® HD Graphics. We implemented our sample by running an OpenCL VME kernel and a regular GEMM OpenCL kernel in an out-of-order queue and compared the performance with an in-order queue. When used properly, OpenCL out-of-order queues provide a significant performance boost.

System/Driver/Tool Version

CPU: The 6th Generation Intel® Core™ i5-6600k processor at 3.50 GHz

GPU: Intel® HD Graphics, EU Count: 24, Max EU Thread Count: 7

OpenCL: OpenCL™ 2.0

OS: CENTOS* 7.2.1511 platform

Tool: Intel® VTune™ Amplifier XE 2017

References

  1. clCreateCommandQueue & clCreateCommandQueueWithProperties
  2. Intro to Advanced Motion Estimation Extension for OpenCL™
  3. cl_intel_advanced_motion_estimation
  4. General Matrix Multiply sample
  5. https://software.intel.com/en-us/intel-media-server-studio-support/code-samples
  6. OpenCL™ Optimization Guide for Intel® Processor Graphics

About the Authors

Danyu Bi

  • Software Engineer in the Intel IT Flex Services Group.

Eric Sardella

  • Software Engineer in the Intel Software and Services Group.

Installing and Building MXNet with Intel® MKL


Summary:

MXNet is an open-source deep learning framework that allows you to define, train, and deploy deep neural networks on a wide array of devices, from cloud infrastructure to mobile devices. It is highly scalable, allowing for fast model training, and supports a flexible programming model and multiple languages. MXNet allows you to mix symbolic and imperative programming flavors to maximize both efficiency and productivity. MXNet is built on a dynamic dependency scheduler that automatically parallelizes both symbolic and imperative operations on the fly. A graph optimization layer on top of that makes symbolic execution fast and memory efficient. The latest version of MXNet includes built-in support for the Intel® Math Kernel Library (Intel® MKL) 2017. The latest version of Intel MKL includes optimizations for Intel® Advanced Vector Extensions 2 (Intel® AVX2) and AVX-512 instructions, which are supported in Intel® Xeon® and Intel® Xeon Phi™ processors.

Prerequisites:

Follow the instructions given here.

Building/Installing with MKL:

MXNet can be installed and used with several combinations of development tools and libraries on a variety of platforms. This tutorial provides one such recipe describing steps to build and install MXNet with Intel MKL 2017 on CentOS*- and Ubuntu*-based systems.

1.     Clone the mxnet tree and pull down its submodule dependencies:

git clone https://github.com/dmlc/mxnet.git

cd mxnet

git submodule update --init --recursive

2.     Edit the following lines in make/config.mk, setting them to “1” to enable MKL support.

With these enabled, the build will pull the latest MKL package for you and install it on your system.

USE_MKL2017 = 1

USE_MKL2017_EXPERIMENTAL = 1

3.     Build the mxnet library

NUM_THREADS=$(($(grep 'core id' /proc/cpuinfo | sort -u | wc -l)*2))

make -j $NUM_THREADS

4.     Install the python modules

cd python

python setup.py install

Benchmarks:

A range of standard image classification benchmarks can be found under examples/image-classification.  We’ll focus on running a benchmark meant to test inference across a range of topologies.

Running Inference Benchmark:

The provided benchmark_score.py runs a variety of standard topologies (AlexNet, Inception, ResNet, etc.) at a range of batch sizes and reports img/sec results. Prior to running, set the following environment variables for optimal performance:

export OMP_NUM_THREADS=$(($(grep 'core id' /proc/cpuinfo | sort -u | wc -l)*2))

export KMP_AFFINITY=granularity=fine,compact,1,0

Then run the benchmark by doing:

python benchmark_score.py

If everything is installed correctly, you should see img/sec numbers output for a variety of topologies and batch sizes. For example:

INFO:root:network: alexnet

INFO:root:device: cpu(0)

INFO:root:batch size  1, image/sec: XXX

INFO:root:batch size  2, image/sec: XXX

INFO:root:batch size 32, image/sec: XXX

INFO:root:network: vgg

INFO:root:device: cpu(0)

INFO:root:batch size  1, image/sec: XXX

[How to Fix] MATLAB* Crashes When Linking Another Version of Intel® MKL


1. Issue Description

MATLAB* ships with a specific version of the Intel® Math Kernel Library (Intel® MKL) as its BLAS and LAPACK library. For instance, the default MKL package for MATLAB 2016b is MKL 11.3.1. If you want to link with a later release of MKL (for example, MKL 2017 Update 1), you need to set the MATLAB variables for BLAS and LAPACK to the single dynamic library “mkl_rt.dll”.

Some customers may encounter a program crash when calling a LAPACK/BLAS function after switching to the new version of MKL. If you get a system error like the screenshot below, follow the resolution for this problem.

2. Reason

MATLAB* uses the ILP64 interface layer (64-bit integers), but the MKL single dynamic library (mkl_rt.dll) loads the LP64 interface by default. Thus, you need to set the MKL interface layer to ILP64.

3. Resolution

The example below links with MKL 2017 Update 1. On a Linux* system, first check that you have already sourced the MKL environment script:

> source /opt/intel/compilers_and_libraries_2017.1.132/linux/mkl/bin/mklvars.sh <intel64|ia32>

Then, set the following environment variables to link with the latest version of MKL:

> export BLAS_VERSION=$MKLROOT/lib/intel64/libmkl_rt.so
> export LAPACK_VERSION=$MKLROOT/lib/intel64/libmkl_rt.so
> export MKL_INTERFACE_LAYER=ILP64

If you are using a Windows* system, open a command prompt for the Intel compiler and set:

> set BLAS_VERSION=mkl_rt.dll
> set LAPACK_VERSION=mkl_rt.dll
> set MKL_INTERFACE_LAYER=ILP64

4. Testing

Now launch MATLAB from the same terminal window for testing:

> matlab -desktop

Check that MKL 2017 Update 1 is linked successfully by typing the following in the MATLAB command window:

>> version -blas
ans = Intel(R) Math Kernel Library Version 2017.0.1 Product Build 20161005 for Intel(R) 64 architecture applications, CNR branch AVX2
>> version -lapack
ans = Intel(R) Math Kernel Library Version 2017.0.1 Product Build 20161005 for Intel(R) 64 architecture applications, CNR branch AVX2
Linear Algebra PACKage Version 3.6.0

Test the LU factorization function to check that MKL works without problems:

>> A = [1,2;3,4]
A =
     1     2
     3     4
>> [L,U] = lu(A)
L =
    0.3333    1.0000
    1.0000         0
U =
    3.0000    4.0000
         0    0.6667

You can learn more about using MKL with MATLAB from the following links:
Using Intel® MKL with MATLAB*
Using Intel® Math Kernel Library with MathWorks* MATLAB* on Intel® Xeon Phi™ Coprocessor System
 

Analyzing Open vSwitch* with DPDK Bottlenecks Using Intel® VTune™ Amplifier


Download PDF [PDF 1.4 MB]

Overview

This article is primarily aimed at development engineers working on high-performance computing applications. We will show an example of how we used Intel® VTune™ Amplifier to detect a performance bottleneck in Open vSwitch* (OvS) with Data Plane Development Kit (DPDK), also known as OvS-DPDK. We will also describe how we addressed this performance issue. If you are relatively new to design principles of OvS-DPDK packet processing, we highly recommend reading our previous introductory article for a description of the datapath classifier fundamentals as well as our second article, where we emphasized the overall packet processing pipeline with detailed call graphs.

The primary focus of this article is on Intel® microarchitecture and particularly on the top-down analysis approach.

Introduction

To optimize an application for performance, a developer doesn’t need to be a performance expert, but they should be proficient with their own application. Many aspects come into play when trying to improve application performance, ranging from hardware platform considerations and code design changes to fine-tuning code to leverage microarchitecture features. A deep understanding of the code design and implementation is an essential requirement for an application developer to understand how the application uses the available hardware resources. This can be achieved by acquiring deeper knowledge of the hardware microarchitecture and by using specialized profiling tools like VTune™ Amplifier.

Getting Started: The Top-down Approach

One of the prominent performance tuning methodologies is the top-down approach. This approach has three stages: system tuning at the top, application tuning in the middle, and microarchitecture tuning at the bottom. System tuning involves hardware and operating system tuning, while application tuning includes better design, parallelization, and improving the efficiency of libraries. Microarchitecture tuning is the last stage and involves careful selection of compiler flags, vectorization, and code refactoring with respect to memory/cache optimizations, as well as an understanding of CPU pitfalls, as depicted in Figure 1.

Top-down performance tuning approach.
Figure 1.Top-down performance tuning approach.

The discussions that follow in the next sections will refer entirely to the microarchitecture stage.

Microarchitecture and Micro-operations

Intel® Core™ microarchitecture.
Figure 2.Intel® Core™ microarchitecture.

Figure 2 depicts the 4th generation Intel® Core™ microarchitecture, where the upper green portion is the front end and the lower blue portion is the back end of the pipeline. The front end is where the (macro) instructions, such as ADD and MUL, are fetched and decoded (i.e., translated into smaller operations called micro-operations, or μOps). The back end is where the required operation, the actual computation, is carried out. A detailed description of each block of the pipeline is beyond the scope of this article. Figure 3 depicts the front-end and back-end processor pipeline responsibilities.

Front-end and back-end processor pipeline responsibilities.
Figure 3. Front-end and back-end processor pipeline responsibilities.

Micro-operations (μOps) that successfully complete the whole datapath are considered “retired.” The back end can retire up to four μOps per cycle. μOps pass through the processor pipeline and retire, but occasionally a few μOps that were speculatively fetched get canceled due to branch mis-predictions. A pipeline slot represents the hardware resources needed to process one μOp, meaning that on a CPU core four pipeline slots are available per clock cycle. As mentioned earlier, four μOps can retire per cycle, implying that in theory each μOp should take 0.25 cycles to retire.

At a given point in time, a pipeline slot can be empty or filled with μOps. The pipeline slot is classified in four categories as depicted in Figure 4. More information on pipeline slot classification can be found in this article.

Pipeline slot classifications.
Figure 4.Pipeline slot classifications.

VTune™ Amplifier Tool

The VTune Amplifier collects measurements by leveraging performance monitoring units (PMUs) that are part of the CPU core. The specialized monitoring counters can collect and expose information on the hardware resource consumption. With the help of PMUs, the metrics regarding efficiency of the processed instructions and the caches usage can be measured and retrieved. It’s important to become familiar with the wide range of available metrics like retired instructions; clock ticks; L2, L3 cache statistics; branch mis-predicts; and so on. Also, “uncore” PMU metrics like bytes read/written from/to memory controller and data traffic transferred by Intel® QuickPath Interconnect (Intel® QPI) can be measured.

Below are brief descriptions and the formulas used to calculate the metrics for the different pipeline slot categories. An application is considered front-end bound if the front end delivers fewer than four μOps per cycle while the back end is ready to accept more. This is likely caused by delays in fetching code (caching/ITLB issues) or in decoding instructions. The front-end bound pipeline slots can be classified into sub-categories as depicted in Figure 5.

Formula: IDQ_UOPS_NOT_DELIVERED.CORE / (4 * Clockticks)

Front-end bound sub-categories.
Figure 5.Front-end bound sub-categories.

The application is considered back-end bound if no μOps are delivered due to lack of required resources at the back end of the pipeline. This could be because internal structures are filled with μOps waiting on data. Further back-end bound sub-categories are depicted in Figure 6.

Formula: 1 - (Front-end Bound + Bad speculation + Retiring)

Back-end bound sub-categories.
Figure 6. Back-end bound sub-categories.

As seen in Figure 7, the pipeline slot is classified as bad speculation when the μOps never retire or allocation slots were wasted, due to recovery from branch mis-prediction or machine clears.

Formula: (UOPS_ISSUED.ANY - UOPS_RETIRED.RETIRE_SLOTS + 4* INT_MISC.RECOVERY_CYCLES) / (4* Clockticks)

Bad speculation.
Figure 7.Bad speculation.

As seen in Figure 8, the pipeline slot is classified as retiring if the successfully delivered μOp was eventually retired.

Formula: UOPS_RETIRED.RETIRE_SLOTS / (4 * Clockticks)

Retiring.
Figure 8.Retiring.

The detailed description of each sub-category is beyond the scope of this article. Refer to “CPU Metrics Reference” for in-depth descriptions of each sub-category.
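The four slot-category formulas above can be collected into a single helper. The parameter names below mirror the VTune event names quoted in the formulas; the issue width of 4 slots per cycle matches this microarchitecture.

```python
def pipeline_slots(clockticks, idq_uops_not_delivered, uops_issued,
                   uops_retired_slots, recovery_cycles, width=4):
    """Top-down level-1 breakdown; the four fractions sum to 1.0.

    Implements the formulas quoted in the text:
      front_end = IDQ_UOPS_NOT_DELIVERED.CORE / (width * clockticks)
      bad_spec  = (UOPS_ISSUED.ANY - UOPS_RETIRED.RETIRE_SLOTS
                   + width * INT_MISC.RECOVERY_CYCLES) / (width * clockticks)
      retiring  = UOPS_RETIRED.RETIRE_SLOTS / (width * clockticks)
      back_end  = 1 - (front_end + bad_spec + retiring)
    """
    total = width * clockticks
    front_end = idq_uops_not_delivered / total
    bad_spec = (uops_issued - uops_retired_slots
                + width * recovery_cycles) / total
    retiring = uops_retired_slots / total
    back_end = 1.0 - (front_end + bad_spec + retiring)
    return {"front_end": front_end, "bad_speculation": bad_spec,
            "retiring": retiring, "back_end": back_end}
```

The counter values are placeholders you would read from the PMU; the function only reproduces the arithmetic, not the collection.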

Analyzing OvS-DPDK Bottlenecks

Now that we have discussed the microarchitecture fundamentals, let’s apply the knowledge in analyzing the bottlenecks in OvS-DPDK.

Batching packets by flows

One of the important packet pipeline stages in OvS-DPDK is the flow batching. Each incoming packet is first classified and then batched depending on its matching flow, as depicted in Figure 9.

Packets are grouped depending on the matching flow.
Figure 9. Packets are grouped depending on the matching flow.

The packets queued into a batch are processed with the corresponding list of actions (drop, push/pop VLAN) defined for that flow. To improve packet forwarding performance, packets belonging to the same flow are batched and processed together.

Occasionally, a batch may contain only a few packets. In the worst case, each fetched packet matches a different flow, so each batch contains a single packet. When the corresponding action for the flow is to forward packets to a physical port, transmitting only a few packets at a time is very inefficient, because packet transmission over a DPDK interface incurs expensive MMIO writes.
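The MMIO cost argument can be seen with a back-of-the-envelope model. The cycle counts below are made-up placeholders chosen for illustration, not measured OvS-DPDK data; only the trend matters: a fixed per-burst MMIO cost is amortized over the batch size.

```python
def per_packet_cost(per_packet_cycles, mmio_cycles, batch):
    """Average cycles per packet when one MMIO doorbell write
    is amortized over 'batch' packets."""
    return per_packet_cycles + mmio_cycles / batch

# Hypothetical numbers: 100 cycles of per-packet work, 300 cycles per MMIO write.
worst = per_packet_cost(100, 300, 1)    # every packet hits a different flow
best = per_packet_cost(100, 300, 32)    # a full 32-packet batch
```

With a batch of one, the full MMIO cost lands on every packet; with a full batch it nearly disappears into the per-packet work.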

Figure 10 shows the packet processing call graph. In exact match cache (EMC) processing, a lookup is performed in the EMC for every input packet to retrieve the matching flow. In case of an EMC hit, the packets are queued into batches (see struct packet_batch_per_flow in the OvS-DPDK source code) matching the flow using dp_netdev_queue_batches(). Thereafter, packets are processed in batches for faster processing using packet_batch_per_flow_execute(). If the corresponding action of the flow is to forward the packets to a DPDK port, netdev_send() is invoked, as depicted in Figure 10.

OvS-DPDK call graph for classification, flow batching and forwarding.
Figure 10.OvS-DPDK call graph for classification, flow batching and forwarding.

Benchmark with 64-byte UDP packets

A benchmark was set up with an IXIA traffic generator sending a few dozen unique streams of 64-byte UDP packets. A significant performance drop is observed with tens of different flows in the “PHY2PHY” test case. Note that the flow rules are unique and are set up to match on the source IP address of the packets of the corresponding stream. Example flow rules are shown below; they create four batches into which the packets are queued.

$ ovs-ofctl add-flow br0 in_port=4,dl_type=0x0800,nw_src=2.2.2.1,actions=output:2
$ ovs-ofctl add-flow br0 in_port=4,dl_type=0x0800,nw_src=4.4.4.1,actions=output:2
$ ovs-ofctl add-flow br0 in_port=4,dl_type=0x0800,nw_src=6.6.6.1,actions=output:2
$ ovs-ofctl add-flow br0 in_port=4,dl_type=0x0800,nw_src=8.8.8.1,actions=output:2

With the above flow rule, VTune General Exploration analysis revealed interesting insights into transmission bottlenecks in OvS-DPDK.

VTune Amplifier summary

When the General Exploration analysis is run for 60 seconds, Figure 11 shows a snapshot of the VTune Amplifier summary of how the pipeline slots are occupied. Note that the slots highlighted in pink need attention; they are auto-highlighted by VTune based on the default thresholds for the application category.

Summary of the General Exploration analysis provided by VTune™ Amplifier.
Figure 11.Summary of the General Exploration analysis provided by VTune™ Amplifier.

According to VTune documentation, Table 1 shows the comparison between the expected range of pipeline slots in healthy high-performance computing (HPC) applications with the OvS-DPDK results as output by VTune Amplifier.

Expected vs. measured ranges of pipeline slots.
Table 1.Expected vs. measured ranges of pipeline slots.

As seen in the table, for this particular test case OvS-DPDK is back-end bound and is retiring far fewer instructions (only 35%) than the expected healthy limit of 70%. A bottom-up analysis revealed a few interesting details, as depicted in Figure 12.

  • netdev_dpdk_eth_send() consumed 17% of the total cycles.
  • The cycles per instruction (CPI) rate for this function stands at 4.921, much higher than the theoretical limit of 0.25 and the acceptable value of about 1.0 for HPC applications.
  • This function is entirely back-end bound and is hardly retiring any instructions (<6%).

Summary of the General Exploration analysis provided by VTune™ Amplifier.
Figure 12.Summary of the General Exploration analysis provided by VTune™ Amplifier.

Let’s see in more detail what’s happening in the back-end pipeline by expanding the Back-End Bound column. Figure 13 depicts exactly where the bottleneck is in the back-end pipeline; it points to a significant issue in the L1 cache.

Back-end bound summary.
Figure 13.Back-end bound summary.

The listed functions deal with packet transmission. This implies that the back-end pipeline is suffering long latencies on I/O operations, so the bottleneck is likely due to costly MMIO transactions. That’s especially true when the processed packets (32 at most) match many different flows. In the worst-case scenario, where each packet hits a different flow, an MMIO operation is triggered to transmit each packet from its corresponding batch.

Solution to Mitigate MMIO Cost

In a real scenario, packets coming from a physical port may hit a large number of flows, resulting in very few packets per flow batch. This becomes very inefficient when packets are transmitted over DPDK interfaces. To amortize the cost of MMIO writes, an intermediate queue can be used to queue up to NETDEV_MAX_BURST (i.e., 32) packets and transmit them in a burst with rte_eth_tx_burst(). The intermediate queue is implemented using the netdev_dpdk_eth_tx_queue() function, which queues the packets. Packets are transmitted when one of the conditions below is met.

  • If packet count in txq (txq->count) >= NETDEV_MAX_BURST, invoke netdev_dpdk_eth_tx_burst() to burst packets.
  • After a timeout elapses, any packets waiting in the queue are flushed, regardless of their number.
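The queue/flush decision described above can be sketched as follows. This is a simplified Python model of the logic only; the real implementation is C code inside OvS-DPDK, and the recorded burst list stands in for rte_eth_tx_burst().

```python
NETDEV_MAX_BURST = 32

class TxQueue:
    """Simplified model of the intermediate Tx queue logic."""

    def __init__(self, flush_timeout):
        self.pkts = []
        self.flush_timeout = flush_timeout
        self.last_flush = 0.0
        self.bursts = []  # sizes of transmitted bursts (stand-in for rte_eth_tx_burst)

    def queue_packet(self, pkt, now):
        self.pkts.append(pkt)
        if len(self.pkts) >= NETDEV_MAX_BURST:
            self._burst(now)              # condition 1: queue is full

    def maybe_flush(self, now):
        if self.pkts and now - self.last_flush >= self.flush_timeout:
            self._burst(now)              # condition 2: timeout elapsed

    def _burst(self, now):
        self.bursts.append(len(self.pkts))  # transmit everything in one MMIO write
        self.pkts = []
        self.last_flush = now
```

Either a full batch or an expired timeout triggers a single burst, so each MMIO doorbell write covers as many packets as possible without stranding stragglers in the queue.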

With this solution, an improvement in both back-end and Retiring categories can be observed, as shown in Table 2.

Measured ranges of pipeline slots with and without the patch.
Table 2.Measured ranges of pipeline slots with and without the patch.

Figure 14 shows that after applying the patch, the function netdev_dpdk_eth_send() drastically moved from the top to the very bottom of the list. By mitigating the effect of MMIO latency the measured CPI went from 4.921 down to 1.108.

Effect of MMIO mitigation by the intermediate queue.
Figure 14.Effect of MMIO mitigation by the intermediate queue.

Please note that performance can be further improved with more optimization of the packet transmission path; however, that is beyond the scope of this article. The above is an example of understanding and fixing bottlenecks in an HPC application using VTune Amplifier. In our analysis we used the General Exploration mode and focused on back-end bound metrics. Other analysis types are also available in VTune Amplifier, such as hotspots, advanced hotspots, hpc-performance, memory-access, and locksandwaits, and other metrics are available to show resource consumption.

Conclusions

In this article we described the top-down methodology to analyze and detect performance bottlenecks, and we explained the VTune Amplifier metrics in the context of the microarchitecture. An application example (OvS-DPDK) was used to identify the root cause of the bottlenecks and the steps taken to improve performance.

Additional Information

For any questions, feel free to follow up with the query on the Open vSwitch discussion mailing thread.

Articles

Intel VTune Amplifier: https://software.intel.com/en-us/intel-vtune-amplifier-xe/

OvS-DPDK Datapath Classifier: https://software.intel.com/en-us/articles/ovs-dpdk-datapath-classifier

OvS-DPDK Datapath Classifier – part 2: https://software.intel.com/en-us/articles/ovs-dpdk-datapath-classifier-part-2

About the Authors

Bhanuprakash Bodireddy is a network software engineer at Intel. His work primarily focuses on accelerated software switching solutions in user space running on Intel architecture. His contributions to OvS-DPDK include usability documentation, the Keep-Alive feature, and improving the datapath classifier performance.

Antonio Fischetti is a network software engineer at Intel. His work primarily focuses on accelerated software switching solutions in user space running on Intel architecture. His contributions to OvS-DPDK are mainly focused on improving the datapath classifier performance.

Thought Processes Overview and Examples


Understanding Thought Processes

Saffron Technology™ Thought Processes (THOPs) are user-defined functions that allow you to tie together various SaffronMemoryBase™ and other capabilities using a scripting language. Thought Processes can help perform a wide variety of actions such as: simplifying the execution of complex sequential queries, calling outside applications to be used in conjunction with Saffron Technology reasoning functions, or converting query results into data suitable for UI widgets. THOPs are written in one of the supported script languages (currently, only JavaScript is available).

If you are familiar with other database products, think of Thought Processes as stored procedures. THOPs can be created via the REST API or by using the developer tools in Saffron Admin. After a Thought Process is defined, it becomes callable through the standard REST API.

Currently, SaffronMemoryBase includes two versions of Thought Processes: THOPs 1.0 and THOPs 2.0.

THOPs 1.0

These Thought Processes run synchronously: a THOPs 1.0 process runs via an HTTP GET call from the calling client, which waits for the result.

Synchronous THOP diagram

THOPs 1.0 are called using a GET operation and run on the Tomcat application server. This version is based on the Rhino JavaScript engine with extensions.

Use THOPs 1.0 for the following types of operations:

  • Simple single queries
  • Fast operations 
  • Where asynchronous operation is not necessary

THOPs 2.0

The THOPs 2.0 layer enables Thought Processes to run asynchronously and to have bi-directional communication with the calling client. The processes can be long-running, which is why we refer to them as Deep Thoughts (DT). When initiated by a caller, DTs can run arbitrarily long and can send messages to the client before delivering a final result.

The following is an example of a Deep Thought that can deliver a partial result to the calling client as soon as data is available and deliver the remaining results later:

Deep Thought diagram sending messages to caller

Deep Thoughts can also listen to messages being sent by the caller and reply to those messages individually (as well as return partial or final results).

Here, THOPs can register for messages from the caller. THOPs can also reply to messages or send messages proactively:

Clients start an instance of a Deep Thought (DT) by POSTing to the thopname/result resource. The result of the POST doesn't contain a final result but an Event Bus address to listen on for messages from the DT, as well as a start address. Clients then send a message to the start address to actually start the DT (it lies dormant to give the client a chance to register on the listen address).

A DT can send a (partial) result to the client using the send(data) function; data is any JavaScript object and is turned into JSON during transmission to the client.

A DT can register for messages sent by the client after the DT has started using the on(address, handler) function. address is any string that the client uses to send messages to the DT. The DT's handler is then called with the message as well as a replyHandler; the DT can use the replyHandler to send a message back to the calling client (alternatively, it can use the send function).

As part of default parameters for a DT, a timeout value can be specified after which a DT instance shuts down (and the client will be unable to send messages to the DT).

Use THOPs 2.0 for the following types of operations:

  • complex queries that can't be expressed in a single query
  • business logic when writing apps based on SUIT and THOPS
  • integrating SMB APIs and foreign APIs
  • using a stored procedure in a relational DB
  • deploying code at runtime