
Get Started Installing Parallel Studio XE 2018 - Linux




Step 1. Linux OS Selected


Step 2. Before you Install


Hardware Requirements


 


Step 3. Download & Install 


Step 4. Start Developing




Feature Comparison of the Intel® Joule™ Module and the Intel Atom® Processor E3900 Series


Introduction

This white paper presents selected technical specifications and key features of the Intel® Joule™ module and the Intel Atom® processor E3900 series (announced). It is intended to give high-level information to designers and developers who would like to understand the key similarities and differences between the two. For completeness, information on the Intel® Celeron® processor N3350 and the Intel® Pentium® processor N4200 is also provided in Table 1 and Table 2.

A High Level Comparison

For many embedded applications, the Intel® Joule™ module and the Intel Atom® processor E3900 series (announced), code-named Apollo Lake, offer similar features. Both appear to be ideally suited for moderate computational tasks at low power. Both also have Intel® HD Graphics capabilities that include HDMI output. However, there are some key differences.

The Intel® Joule™ module is a pre-configured device that is meant to be plugged into a carrier board. It has two configuration options, the 570x and the 550x.  Because it is pre-configured, the design work required to take it to market is reduced to creating a carrier board that fits the specific application.  This reduced effort can potentially lower the production cost, while allowing for a faster time to market. 

The Intel Atom® processor E3900 series has a wide range of configuration options available.  As a traditional processor, design work is required to create a board to house the processor as well as the peripherals for the specific application.  This design work can cost more and take longer than developing a board for the Intel® Joule™ module, but for high volumes the overall cost can be less since a custom board may allow for a reduced Bill of Materials.

The major features of each are outlined below.

Features

Intel® Joule™ Module

Rapid prototyping, Accelerated Time to Market
Pre-installed storage and memory along with integrated wireless capabilities allow customers to accelerate time to market (less time making design considerations because the module comes with default features factory loaded).

Wireless Pre-Certification
The Intel® Joule™ module is pre-certified for distribution and sale into more than 80 countries, enabling customers to save the cost (time and money) it takes to acquire certification.

Development Ecosystem
Developers on the Intel® Joule™ platform can take advantage of a vast hardware ecosystem through 3rd party companies such as Gumstix*, DFRobot*, and Seeed* Studio.   More information can be found on the Intel® Joule™ Developer Kit page.

Operating System Options
The Intel® Joule™ module has support for Windows® 10 IoT Core, Ubuntu* Desktop 16.04, Ubuntu Snappy 16, and comes pre-installed with a Reference Linux* OS for IoT.

Intel Atom® Processor E3900 Series

Intel® Time Coordinated Computing (Intel® TCC)
IoT solutions can be made more reliable (consistent and predictable behavior of a system) with Intel® Time Coordinated Computing, a technology which coordinates and synchronizes the clocks of devices across networks (connected devices).

For applications that require a high level of determinism (applications whose behavior needs to be reliably predicted), Intel® TCC can help to resolve latency issues by synchronizing the clocks of connected devices (to within 1 microsecond).

To learn more about Intel® TCC, check out the Powering Industry 4.0 and Smart Manufacturing Transformations solution brief.

Security
The new Intel® Trusted Execution Engine (Intel® TXE) enables you to achieve enhanced silicon-level security. Intel® TXE provides enhanced data and operations protection in some of the most challenging environments (from retail transactions to manufacturing). Intel® TXE is matched with fast cryptographic execution and a number of secure boot features such as Intel® Boot Guard 2.0.

Extended Temperature
To support IoT applications in extreme environments, available SKUs offer operating temperature ranges from -40°C to 85°C and junction temperatures (maximum allowed temperature at the processor die) up to 110 °C.

Note: Designed for automotive and industrial markets, the extended temperature feature is offered by the Intel Atom® processor E3900 series only.

Enhanced Reliability with ECC Memory Options
Dual-channel error-correcting code memory (ECC memory) is available for DDR3L memory type to protect systems with no tolerance for data corruption under any circumstances.

More I/Os
The Intel Atom® processor E3900 series offers an expanded number of I/Os which allows for more connectivity (six USB 3.0 ports and four PCI Express* ports).

Table 1 below shows a side-by-side comparison of the Intel® Joule™ module and the Intel Atom® processor E3900 series. For completeness, information on the Intel® Celeron® processor N3350 and the Intel® Pentium® processor N4200 is also provided.

Comparison of Technical Specifications

Compare the latest generation of Intel Atom®, Pentium®, and Celeron® processors with the Intel® Joule™ 550x and 570x modules.

Product Name | Intel® Celeron® and Pentium® processors | Intel Atom® processor E3900 series | Intel® Joule™ 550x or 570x modules
Status | Launched | Announced | Launched
Recommended Customer Pricing | $107 or $161 | n/a | $149 - $159 or $199 - $209
Processor Number | N3350; N4200 | E3930; E3940; E3950 | n/a
CPU cores | 2 or 4 | 2 or 4 | 4
Processor Base Frequency | 1.1 GHz | 1.3 or 1.6 GHz | 1.5 or 1.7 GHz
Burst frequency | 2.4 or 2.5 GHz | 1.8 or 2.0 GHz | 2.4 GHz on 570x
Max Memory Size | 8 GB | 8 GB | 3 or 4 GB
Memory Types | DDR3L/LPDDR3 or LPDDR4 | DDR3L (ECC and non-ECC) or LPDDR4 | LPDDR4
Flash memory | n/a | Up to 64 GB eMMC | 8 or 16 GB eMMC
Cache | 2 MB | 2 MB | 1 MB
# of USB Ports | 8 (6 USB 3.0) | 8 (6 USB 3.0) | 1 or 2 USB 3.0†
Total # of SATA Ports | 2 | 2 | 0
Max # of PCI Express Lanes | 6 | 6 | 0 or 1†
Graphics Output | eDP/DP/HDMI*/MIPI-DSI | eDP/DP/HDMI/MIPI-DSI | HDMI 1.4B and MIPI-DSI (1x4)
Processor Graphics | Intel® HD Graphics 500 or 505 | Intel® HD Graphics 500 or 505 | Intel® HD Graphics, Gen 9
OS | Linux*; Windows® 10 Enterprise | Windows® 10 Enterprise; Windows® 10 IoT Core; Wind River Linux*; VxWorks*; Android* | Windows® 10 IoT Core; Ubuntu; Reference Linux* OS for IoT
Intel® High Definition Audio Technology | Yes | Yes | No
Operating temperature range | 0°C to 70°C (commercial applications) | -40°C to 85°C (extended temperature range for industrial applications) | 0°C to 70°C
Power Delivery | PMIC / discrete voltage regulator (VR) | PMIC / discrete voltage regulator (VR) | PMIC
Sleep states | S0ix, S3, S4, S5 | S0ix, S3, S4, S5 | S0ix
Security Features | Intel® Trusted Execution Engine; Intel® AES New Instructions | Intel® Trusted Execution Engine; Intel® AES New Instructions | Intel® AES New Instructions
Package Size | 24mm x 31mm | 24mm x 31mm | 24mm x 48mm‡

† The Intel® Joule™ module exposes one dedicated USB 3.0 port and one additional port that can be configured as either USB 3.0 or PCIe*, based on the BIOS loaded. This means the Joule module supports one of two options:

  1. Two USB3.0 ports and zero PCIe ports

  2. One USB3.0 port and one PCIe port

‡ Board form factor

Design Considerations

For those considering starting with an Intel® Joule™ module based design and someday moving to an Intel Atom® processor E3900 series, here are some topics for consideration:

  • Form factor
    The Intel Atom® processor E3900 series board area will probably increase because of a larger SoC package size, larger Power Management IC (PMIC) and Voltage Regulator (VR) solution space, and memory down (i.e. not package-on-package).
  • Performance Differences
    Lower operating frequencies on the latest generation Intel Atom® processor E3900 series and a different cache size per core pair (e.g., 2 MB vs. 1 MB) may affect performance. Memory configuration differences may also have an impact, since the Intel Atom® processor E3900 series has higher peak bandwidth but a lower transfer rate.
  • I/O Interface Limitations
    The Intel Atom® processor E3900 series supports a single LPSS SPI port, compared to the Intel® Joule™ module's two LPSS SPI ports. The Intel® Joule™ module supports USB 2.0 and USB 3.0 OTG, while the Intel Atom® processor E3900 series supports USB 2.0 and USB 3.0 dual-role (it does not support OTG).
  • Component selection, validation and qualification
    Customers and designers will need to consider validation and qualification differences caused by differences in the PMIC and voltage regulator components, memory capacity and speeds, and any other components that may differ between the solutions.
  • Completing design regulatory testing
    A design with the Intel Atom® processor E3900 series will need to go through various types of emissions certifications, safety certifications, and environmental certifications.
  • Driver Compatibility
    Register compatibility and I/O location compatibility from an Intel® Joule™ module to an Intel Atom® processor E3900 series may require driver changes.
  • Additional Features of the Intel Atom® processor E3900 series
    The Intel Atom® processor E3900 series adds some new features and interfaces over the Intel® Joule™ module. Taking advantage of these interfaces and features may extend the design and validation time of a migration, compared to a situation where no new features are added.
  • Wireless Technology
    There is no integrated Wi-Fi and Bluetooth® on the Intel Atom® processor E3900 series.
  • Power Management
    The Intel® Joule™ module does not support traditional PC sleep states (S3, S4, S5), while the Intel Atom® processor E3900 series does.
  • Security
    Secure boot can only be done from the eMMC on the Intel® Joule™ module; on the Intel Atom® processor E3900 series it can only be done through SPI. Intel Atom® processor E3900 series solutions will need a custom BIOS.

Conclusion

This high-level comparison gives a basic understanding of the differences between the Intel® Joule™ module and the Intel Atom® processor E3900 series. For more information, refer to the Additional Resources section below or visit intel.com.

Additional Resources

Appendix

Product SKUs for the latest generation of Intel Atom®, Intel® Pentium®, and Intel® Celeron® processors

Product Name | Intel® Celeron® Processor N3350 | Intel® Pentium® Processor N4200 | Intel Atom® x5-E3930 Processor | Intel Atom® x5-E3940 Processor | Intel Atom® x7-E3950 Processor
Status | Launched | Launched | Announced | Announced | Announced
Recommended Customer Pricing | $107.00 | $161.00 | n/a | n/a | n/a
Processor Number | N3350 | N4200 | E3930 | E3940 | E3950
CPU cores | 2 | 4 | 2 | 4 | 4
Processor Base Frequency | 1.1 GHz | 1.1 GHz | 1.3 GHz | 1.6 GHz | 1.6 GHz
Burst frequency | 2.4 GHz | 2.5 GHz | 1.8 GHz | 1.8 GHz | 2.0 GHz
Max Memory Size | 8 GB | 8 GB | 8 GB | 8 GB | 8 GB
Memory Types | DDR3L/LPDDR3; LPDDR4 | DDR3L/LPDDR3; LPDDR4 | DDR3L (ECC and non-ECC); LPDDR4 | DDR3L (ECC and non-ECC); LPDDR4 | DDR3L (ECC and non-ECC); LPDDR4
Flash memory | n/a | n/a | 64GB eMMC | 64GB eMMC | 64GB eMMC
# of USB Ports | 8 | 8 | 8 | 8 | 8
Total # of SATA Ports | 2 | 2 | 2 | 2 | 2
Max # of PCI Express Lanes | 6 | 6 | 6 | 6 | 6
Graphics Output | eDP/DP/HDMI/MIPI-DSI | eDP/DP/HDMI/MIPI-DSI | eDP/DP/HDMI/MIPI-DSI | eDP/DP/HDMI/MIPI-DSI | eDP/DP/HDMI/MIPI-DSI
Processor Graphics | Intel® HD Graphics 500 | Intel® HD Graphics 505 | Intel® HD Graphics 500 | Intel® HD Graphics 500 | Intel® HD Graphics 505
OS | Linux*, Windows® 10 Enterprise | Linux*, Windows® 10 Enterprise | Windows® 10 Enterprise; Windows® 10 IoT Core; Wind River Linux*; Wind River VxWorks*; Android* | Windows® 10 Enterprise; Windows® 10 IoT Core; Wind River Linux*; Wind River VxWorks*; Android* | Windows® 10 Enterprise; Windows® 10 IoT Core; Wind River Linux*; Wind River VxWorks*; Android*
Intel® High Definition Audio Technology | Yes | Yes | Yes | Yes | Yes
Operating temperature range | 0°C to 70°C | 0°C to 70°C | -40°C to 85°C (extended temperature SKUs for industrial and automotive market segments) | -40°C to 85°C (extended temperature SKUs for industrial and automotive market segments) | -40°C to 85°C (extended temperature SKUs for industrial and automotive market segments)
Power Management | Power delivery: PMIC / discrete VR; Sleep states: S0ix, S3, S4, S5 | Power delivery: PMIC / discrete VR; Sleep states: S0ix, S3, S4, S5 | Power delivery: PMIC / discrete VR; Sleep states: S0ix, S3, S4, S5 | Power delivery: PMIC / discrete VR; Sleep states: S0ix, S3, S4, S5 | Power delivery: PMIC / discrete VR; Sleep states: S0ix, S3, S4, S5
Security Features | Intel® Trusted Execution Engine; Intel® AES New Instructions | Intel® Trusted Execution Engine; Intel® AES New Instructions | Intel® Trusted Execution Engine; Intel® AES New Instructions | Intel® Trusted Execution Engine; Intel® AES New Instructions | Intel® Trusted Execution Engine; Intel® AES New Instructions
Package Size | 24mm x 31mm | 24mm x 31mm | 24mm x 31mm | 24mm x 31mm | 24mm x 31mm

Intel® Joule™ Modules 550x and 570x

Product Name | Intel® Joule™ 550x | Intel® Joule™ 570x
Status | Launched | Launched
Recommended Customer Pricing | $149.00 - $159.00 | $199.00 - $209.00
CPU cores | 4 | 4
Processor | Intel Atom® processor | Intel Atom® processor
Processor Base Frequency | 1.5 GHz | 1.7 GHz
Burst frequency | n/a | 2.4 GHz
Max Memory Size | 3GB | 4GB
Memory Types | LPDDR4 | LPDDR4
Flash Memory | 8GB eMMC | 16GB eMMC
# of USB Ports | 1 or 2 USB 3.0 | 1 or 2 USB 3.0
Total # of SATA Ports | 0 | 0
Max # of PCI Express Lanes | 0 or 1 | 0 or 1
Graphics Output | HDMI 1.4B and MIPI-DSI (1x4) | HDMI 1.4B and MIPI-DSI (1x4)
Processor Graphics | Intel® HD Graphics, Gen 9 | Intel® HD Graphics, Gen 9
OS | Ubuntu | Ubuntu
Intel® High Definition Audio Technology | No | No
Operating Temperature | 0°C to 70°C | 0°C to 70°C
Power Management | Power delivery: PMIC; Sleep states: S0ix | Power delivery: PMIC; Sleep states: S0ix
Security Features | Intel® AES New Instructions | Intel® AES New Instructions
Board Form Factor (Package Size) | 24mm x 48mm | 24mm x 48mm

Intel® IoT Gateway Developer Hub and Software Suite/Pro Software Suite Release Notes


These are the latest release notes for the Intel® IoT Gateway Developer Hub, Intel® IoT Gateway Software Suite, and Intel® IoT Gateway Pro Software Suite. 

Intel® IoT Gateway Developer Hub and Software Suite/Pro Software Suite Release Notes ARCHIVE


Use this ZIP file to access each available version of the release notes for the Intel® IoT Gateway Developer Hub, Intel® IoT Gateway Software Suite, and Intel® IoT Gateway Pro Software Suite, beginning with production version 3.1.0.17 through the currently released version. The release notes include information about the products, new and updated features, compatibility, known issues, and bug fixes.

Digital Retail Vertical - Now Live!


Our newest vertical is now live! This is just the start of the full retail experience. Stay tuned for exciting expansion into mobile retail, responsive retail, analytics, and code samples based on feature partner solutions.


A glimpse into site development:

  1. Content outline (a fast and efficient way to start writing)
  2. Sitemap (planning for the future experience coming over the next 6 months)
  3. Wireframe of the core retail display landing page
  4. The final web experience

Digital Retail Vertical Step by Step

Networking Zone Transformation


Networking has been a staple of the Developer Zone since it went live on December 18, 2014. Back then our groups were DPSA and CSIG. Today, we launched the next generation of networking.

When we first started the transformation, the number one request from the networking team was "not to use any people looking at servers or people looking at monitor" photography. The BC team did a wonderful job of bringing the vision to life.


Networking Transformed


Using Underminer Studios’ MR Configurator Tool to Make Mixed Reality VR Videos


Written by: Timothy Porter, Underminer Studios LLC
Edited by: Alexandria Porter, Underminer Studios LLC

I am Timothy Porter, a pipeline technical artist and efficiency expert. I create unified systems that promote faster, more intuitive, and collaborative workflows for creative projects. With more than nine years in the entertainment industry, a Bachelor’s degree in Computer Animation, and a sharp mind for technical details, I have used lessons learned from early-career gaming roles to develop methodologies for streamlining pipelines, optimizing multiple platforms, and creating tools that make teams stronger and more efficient. I own an outsourcing and bleeding edge tech company, Underminer Studios.

Overview

This article will teach you how to identify VR applications that are mixed reality (MR) ready, and how to enable MR mode in your Unity* VR applications. By the end of this article you will be able to calibrate your experience to have the cleanest and most accurate configuration possible for making MR green screen videos. Since green screen MR can be useful to both developers and content creators (like streamers or YouTubers), the information in this article is presented from both a developer and a user perspective. Keep in mind that some of the information may be more than a content creator needs to begin working in MR immediately; please use the section titles as guides to locate the information relevant to your purposes. Getting an MR experience set up for the first time used to be a painful and tedious process, so Underminer Studios created the Underminer Studios MR Configurator Tool to smooth it out.

Underminer Studios’ MR Configurator Tool

This tool was designed to speed up calibration of your MR setup. This article explains how to use the tool and helps you get the most out of your MR experiences. For VR users and streamers, making MR videos is a great way to show a different perspective to people who aren't wearing a head-mounted display (HMD), while for VR developers, MR videos are a great way to create trailers and show a more comprehensive view of the VR experience.

How does the Underminer Studios MR Configurator Tool help?

This tool automates the configuration of the controller/camera offset, massively reducing setup time compared to the difficult manual process of camera alignment. Without this helper utility, you start with a blank externalcamera.cfg file and manually adjust the x, y, and z offset values to align the virtual camera with the real one. Every time you make a change you must shut down the application, restart it, check alignment while hoping your configuration is correct, and repeat as necessary. This is a tedious and imprecise process. Our helper utility streamlines and automates this alignment process and makes it much easier to calibrate your MR setup. Download and install the executable, then download and follow the documentation in the readme guide.

Though we had many use cases in mind to appeal to a broad audience, inevitably with developers there are always new, uncharted needs. We plan to update the tool periodically, and if you have ideas to improve the tool, please email us at info@underminerstudios.com.

How to use the MR Configurator tool

There is a handy dandy information file that covers, step-by-step, how to do the application setup; it is included in the MrSetUp.pdf file. A direct link is also provided with your install.

What is Mixed Reality?

In this case, mixed reality refers to a person on a green screen background, layered into video from an MR-enabled VR application. This is a great way to show people outside the VR headset what’s happening in the world within. A user can share their VR experience with others and can help create a more social gaming environment. Once you have a VR application that supports MR, all you need is a suitable camera, some green screen material, and an extra Vive* controller to create your very own MR VR experiences.

What’s required?

A powerful machine

Adding MR on top of VR requires a high-end system to handle the inherent stresses these applications create. Without a powerful enough system you can encounter performance lag, which may lower your frame rate and create a less than optimal experience, especially for the user wearing the HMD. A higher-end PC is required to provide the MR experience and avoid those issues. I have provided a list below of optimal system requirements for running an MR experience.

An MR-enabled application

You can either take a previously made project that is MR-enabled or you can make one for yourself. We will cover both in this section.

How to tell if a VR application will work with this method

  1. Config file

    Test to see if a Unity-based VR title supports this MR mode by placing a configuration file named “externalcamera.cfg” into the same directory as the VR executable. For example, say you have a game located at C:\Program Files (x86)\Steam\steamapps\common\APPNAMEHERE\. Just put the file in that folder. Here is an example of a raw config file:

    x=0
    y=0
    z=0
    rx=0
    ry=0
    rz=0
    fov=60
    near=0.01
    far=1000

    Note that there is almost zero chance that this will work appropriately right away. Use our application to configure, or go to the manual configuration section below and follow the instructions.

  2. Connect a third Vive controller

    Connect a third Vive controller (this controller needs to be plugged in via USB, since SteamVR* only supports two wireless controllers at a time). Launch the VR executable, and if you see a four-panel quartered view on the desktop, the app should work for this method of making MR videos. If you don’t see the four-panel quartered view, it’s likely the app wasn’t made in Unity, or doesn’t support this method of MR video. If you created the VR executable, read on for instructions on how to enable this MR mode. If it’s not your application, you will probably have to choose another application for your MR video.

Developers and Users

If you want to do MR setup inside of Unity, go to the Developer section. If you want to learn how to play a readymade MR game, move to the User section. First, we will cover the developer side of things, so if your goal is to make cool MR experiences, start here. Later, we will cover the end user side of things, so if your game already has the development side configured you can start there. We are going to limit this discussion and focus on how to make MR work within Unity and the SteamVR system, since there are many ways to create multiple cameras and green screens, as well as to composite them. I will be using the HTC Vive; I've seen others use the Oculus Rift* and Touch* controllers, but that's outside the scope of this article. Let's jump right in!

Developer side

I will show the current native SteamVR plugin method first. It takes a considerable amount of guesswork out of the system setup and provides you with a quick and high-quality MR setup. The tool provided sits on top of the current system, alleviating some of the manual or tedious sections of the process with automated or helper solutions. If at any time the specific setup or process that your project needs is not covered, there is no reason to chuck it, as the rest of the processes are perfectly self-contained.

Native (built-in) SteamVR MR overview

The team that invented this tool was quite brilliant. Using the idea of clipping planes and the location of the player, the SteamVR setup creates multiple views to allow an MR experience. If you want to use the native plugin and enable this in your game you have two separate choices: +third controller and no third controller. Both require the use of externalcamera.cfg.

Example using externalcamera.cfg

This file goes into the root of your project as externalcamera.cfg. It tells the system how far, in meters, to offset the virtual camera from the tracked controller.

x=0
y=0
z=0
rx=0
ry=0
rz=0
fov=60
near=0.01
far=100
sceneResolutionScale=0.5
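
Since hand-editing this file is error prone, it can help to sanity-check it before launching the game. Below is a small, hypothetical Python helper (not part of SteamVR or the MR Configurator tool, just a convenience sketch) that parses the key=value pairs and flags obvious mistakes:

import sys

EXPECTED = {"x", "y", "z", "rx", "ry", "rz", "fov", "near", "far"}

def read_cfg(path):
    # externalcamera.cfg is a flat list of key=value lines
    values = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and "=" in line:
                key, _, raw = line.partition("=")
                values[key.strip()] = float(raw)
    return values

cfg = read_cfg(sys.argv[1] if len(sys.argv) > 1 else "externalcamera.cfg")
missing = EXPECTED - set(cfg)  # sceneResolutionScale is optional
if missing:
    print("missing keys: " + ", ".join(sorted(missing)))
if cfg.get("near", 0.01) >= cfg.get("far", 100.0):
    print("warning: the near plane should be smaller than the far plane")
print(cfg)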

What setup to use?

The use of a third controller allows the user to move the camera. If you are planning to have a stationary camera, see the No third controller section, below. If your game requires moving the camera, see the +third controller section, below.

No third controller—native SteamVR MR setup

  1. Pull in the extra controller prefab
  2. Set the Index of the SteamVR_Tracked Object (Script) to Device 2.

Users need to set up the externalcamera.cfg covered in the Example using externalcamera.cfg section, above.

Note: This requires always running from the Unity IDE unless you follow the How to not use Unity IDE section, below.

+third controller—native SteamVR MR setup

  1. Pull in the extra controller prefab
  2. Set the Index of the SteamVR_Tracked Object (Script) to Device 3.

This is simple in concept. The only thing you’ll need is the extra controller that is attached via USB to the computer playing the game.

Note: This requires always running from the Unity IDE unless you follow the How to not use Unity IDE section, below.

How to not use Unity IDE—native SteamVR MR setup

Both setups require running the project within the Unity editor. If you want to make a standalone version, there is a bit of extra work to do.

  1. Add the “SteamVR_ExternalCamera” prefab at the root of your hierarchy.
  2. Drag and drop the “Controller (third)” into the [SteamVR] script “Steam VR_Render” – External Camera.
  3. In SteamVR_Render.cs add the following code:
    	void Awake()
    	{
    	#if (UNITY_5_3 || UNITY_5_2 || UNITY_5_1 || UNITY_5_0)
    	    // Older Unity versions need the camera mask object created at runtime.
    	    var go = new GameObject("cameraMask");
    	    go.transform.parent = transform;
    	    cameraMask = go.AddComponent<SteamVR_CameraMask>();
    	#endif
    	    // If an externalcamera.cfg is present, spawn the external camera
    	    // prefab and point it at the config file.
    	    if (System.IO.File.Exists(externalCameraConfigPath))
    	    {
    	        if (externalCamera == null)
    	        {
    	            var prefab = Resources.Load<GameObject>("SteamVR_ExternalCamera");
    	            var instance = Instantiate(prefab);
    	            instance.gameObject.name = "External Camera";

    	            externalCamera = instance.transform.GetChild(0).GetComponent<SteamVR_ExternalCamera>();
    	            externalCamera.configPath = externalCameraConfigPath;
    	            externalCamera.ReadConfig();
    	        }
    	    }
    	}

User side

If you already have a VR-ready system you can jump to the Running MR section, below.

System requirements

I have included both high-end (a) and low-end options (b) below.

Shopping list:

  1. Green screen kit

    a. StudioPRO* 3000W continuous output softbox lighting kit with 10ft x 12ft support system, $383.95 (or similar).
    b. ePhotoInc* 6 x 9 Feet cotton chroma key backdrop, $18.99.

  2. Extra controller

    a. Vive controller, $129.99.
    b. This solution does not need an extra controller, but if you aren’t the developer who created the game, you must keep the camera stationary and do some workarounds as discussed below.

  3. Camera

    a. Panasonic* HC-V770 with accessory kit, $499.99 (or similar camera with HDMI out for the live camera view; a DSLR or mirrorless digital camera will probably work, but be aware that their sensors are not designed to be run for long periods of time and can overheat).

  4. Video capture card

    a. Magewell* XI100DUSB-HDMI USB capture HDMI 3.0 - $299.00 (or similar HDMI capture device).

  5. Computer

    a. You’ve probably already got a VR-capable PC if you’re reading this. Beyond the minimum VR spec, you’ll need a system with enough power to handle the extra work you’re going to ask it to do (running the quartered view at 4x the resolution you intend to record, plus doing the layer capture, green screen chroma key, and MR compositing). A high-end, sixth-gen or later Intel® Core™ i7 processor (for example, 7700K or similar) is recommended.

  6. 4K Monitor

    a. Because of the way MR captures the quartered view window, you’ll want to be able to run that window at 4x the resolution of the final video output. This means you need to be able to run that window at 1440p resolution if you want to record at 720p, since you’re only capturing a quarter of the window at a time. If you want to record at 1080p, you’ll need to run the window at 2160p. For that, you’re going to want a monitor that can handle those resolutions; probably 4K or higher.

A little more about some of the options

  1. Green screen

    You could use outdoor carpet (like AstroTurf*) as a backdrop. It looks like it gets decent results and it should last for a very long time, but anything in a single color should work just fine. Green is recommended, as most systems (OBS*, or the screen capture provided for this tutorial) utilize green as a cut-out or chroma key.

  2. Controller

    If the project was not set up appropriately and requires an extra controller, there is a possible solution involving faking the third controller in software. Using this option is outside the realm of this article, but if you want to try it out you can learn more here.

  3. Camera

    There is a HUGE difference if you go with a real camera versus a webcam. The camcorder option listed above produces great results without being as expensive as a pro camcorder. If you use a still camera (DSLR or mirrorless), be aware that their sensors are often not designed to run constantly due to heat; this is why they often have a 20 or 30 minute limit on video recording. Be careful so you don’t harm your equipment.

  4. Video capture card

    If you are using an external camera, a capture card is required to get the HDMI output of the camera to appear as a usable source to the PC. The one listed above uses USB and is a great all-around capture card. Compared to having an internal card that is tied to a system, the best part of the USB capture card is portability. To do an onsite with publishers, clients, or other developers you can just throw it in a bag, send them a build, and show everyone in the room what is going on in the game. It will allow you to convey the information and ideas quickly.

  5. Computer

    The project we are doing is computationally intensive, so CPU choice is very important. A modern, high-end Intel Core i7 processor, like a 7700K, is well-suited to a project like this because many of the processes are single-thread intensive (like the compositor from SteamVR) and the high single-core performance will really help. Using a quad core or higher CPU can really help with the work of capturing, compositing, and recording your MR video.

Running MR—setup

To view the setup, you only need the .cfg file described below and a game that allows the use of MR. These games include Fantastic Contraption*, Job Simulator*, Space Pirate Trainer*, Zen Blade*, Tilt Brush*, and many more.

Only after fulfilling these two requirements will the setup view appear:

  1. Add a file called externalcamera.cfg in the root of your project.
  2. Have a third controller that is plugged into your system.

Running MR—step by step

Note: These steps will not align the experience with the real world until you configure the .cfg file using the steps below.

  1. Turn off SteamVR

    If SteamVR is running it can interfere with the steps that follow, so it's best to turn it off. If you run into issues later, a restart will always help.

  2. Put an externalcamera.cfg file into the root of your project

    Next, you will need to put the file in the correct location at the root of your project. If you find that your project doesn’t show a four-panel quartered screen, then you will want to verify that root location, after you check the controller.

  3. Set up your green screen and lights

    You will be compositing people into the VR environment. To do this correctly you will need to have a green screen setup to cut the person out of the real world and put them into the VR world.

  4. Connect your camera to your computer / capture card

    The extra overhead of running VR and MR at the same time almost necessitates having a capture card instead of only using a webcam. Also, a capture card lets you pull in video from a secondary camera.

  5. Affix your controller to your camera

    The system always needs to locate the camera, and the way we do that is by having the system track the Vive controller. The config file above provides offset and camera information to the system based on the Vive controller's location relative to the attached camera. The more rigidly the controller is attached, the better.

  6. Unplug any controllers that are attached to your system

    SteamVR gets confused during this process. If you get into the project and realize that the third camera is attached to the wrong controller, unplug the third controller and plug it back in. This should solve the issue.

  7. Turn on SteamVR

    Now that this is ready, we can tell SteamVR to get going.

  8. Turn on the two controllers not attached to the camera

    We only want to turn on the controllers that aren’t attached so they get to the correct places in the SteamVR handset slots. This is a crucial step, so please do this. I also recommend waving the controllers directly at a lighthouse.

  9. Plug into the system the controller that is attached to the camera

    Now that Steam knows where the first two controllers are you can plug in the third controller. As stated before, if you get into the project and realize that the third camera is attached to the wrong controller, unplug the third controller and plug it back in.

  10. Shift and double-mouse click your game of choice

    This allows you to open the project at the highest resolution. For some reason SteamVR also gives preferential treatment to admin-running applications, so this should help.

  11. Choose the desired resolution (4x the resolution at which you want to record)

    This is where a 4K or higher monitor comes in handy. Since you’re only capturing one-fourth of the window (and compositing multiple layers), you’ll need to choose the correct window size here. If you want to record at 720p, choose 2560 x 1440. If you want to record at 1080p, choose 3840 x 2160. You might have to try different recording resolutions, depending on your system performance and the desired quality of the recording.

  12. Open OBS or XSplit*

    Now we are moving on to the live compositing section of the article. Both of these programs are tested for MR compositing, although there are others out there that might work as well.

  13. Add a cut of the upper-left corner and use the upper-right corner as the alpha; label this layer “Foreground”

    This is the part we will composite over people. If you don’t have time to match up the handsets exactly to the VR space, choose a skin for your controller that uses large symbols. This will hide the fact that everything isn’t exactly matched up. Here is a how-to which showcases exactly how to change your controller skins in SteamVR.

  14. Add the video stream from your camera and clear out the background with a chroma filter; label this “Live”

    Putting the live person into the VR environment is the most crucial part of this project. Depending on the program you are using there are a multitude of ways that you can do this. Below is a screenshot for XSplit showcasing that you could also color key out a layer, which will remove a single color from the image.

  15. Add the bottom left; label this layer “Background”

    We will put this layer in the bottom position in whichever program you are using to composite. If the background isn't visible, repeat the step above. (A sketch of how these layers combine appears after this list.)

  16. Turn off your game and configure the config file

    To make the config file, either use our tool outlined in the beginning of this article, or follow the section below. A word of warning: Manual configuration is not only difficult to get right, it’s also a very slow and laborious task. Every time there is a change made you need to restart your program. The average time for setup is about one hour. We have reduced the process to three minutes on average using the MR Configurator tool. It is also more accurate, since the long setup time usually causes people to give up before the config is perfect.
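
For intuition, the layering that OBS or XSplit performs amounts to repeated alpha compositing with the standard "over" operator. Here is a rough Python/NumPy sketch of the idea; it is illustrative only (the streaming software does this internally), and the array contents are placeholders:

import numpy as np

def over(top_rgb, top_alpha, bottom_rgb):
    # standard "over" operator: layer top above bottom using top's alpha
    return top_rgb * top_alpha + bottom_rgb * (1.0 - top_alpha)

h, w = 720, 1280
background = np.zeros((h, w, 3))     # bottom-left quadrant of the game window
live_rgb = np.random.rand(h, w, 3)   # camera feed
live_mask = np.random.rand(h, w, 1)  # 1.0 where the chroma key kept a pixel
fg_rgb = np.ones((h, w, 3))          # upper-left quadrant (VR foreground)
fg_alpha = np.random.rand(h, w, 1)   # upper-right quadrant (foreground alpha)

# composite bottom-up: background, then keyed live video, then VR foreground
frame = over(fg_rgb, fg_alpha, over(live_rgb, live_mask, background))
print(frame.shape)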

How to manually calculate the information in externalcamera.cfg

x=0
y=0
z=0
rx=0
ry=0
rz=0
fov=60
near=0.01
far=100
sceneResolutionScale=0.5

Configuring

Note: Remember that you can use our tool to skip this entire section.

  1. Field of view (FOV) – This must be the vertical FOV

    FOV is the hardest value in the setup; find it first. Most camera manufacturers provide a FOV value for the camera, but this is not the vertical FOV. Most of these techniques come from the camera world. Here is an article on how to find the FOV; see also the pinhole-camera relation after this list.

    Note: The FOV of a camera is dependent on the focal length. Once you have your settings, do not zoom in or out on your camera!

  2. Rotation

    RX, RY, RZ—these are the rotational angles. 0,0,0 would be if the handset was level with the camera. Y+ is up, Z+ is forward, with X+ to the left. As a note, these are in degrees.

  3. Distance

    X, Y, and Z should be done using a tape measure. Remember, these numbers will be in meters.

  4. Test

    Open your game and with OBS or XSplit running, see if things line up. If not, shut down your game and try again.
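
If the manufacturer publishes only the sensor size and focal length, the vertical FOV follows from the standard pinhole-camera relation:

\mathrm{FOV}_{v} = 2 \arctan\left(\frac{h}{2f}\right)

where h is the sensor height and f is the focal length, in the same units. For example, a full-frame sensor (h = 24 mm) behind a 50 mm lens gives 2 arctan(24/100), or roughly 27 degrees.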

Troubleshooting

If your system or game lags, options include lowering the canvas size, lowering the frame rate, using the GPU to encode video, or recording only without streaming. These could also make things worse, depending on the game and your system; with so many different variations it is impractical to give profiles. To change these manually, do the following:

  • Lower the canvas size
  • Lower the frame rate (be careful here; going below 24 fps can introduce further choppiness)
  • Render using the GPU
  • Record only; do not stream

This article is an extension of my skills as a mentor and teacher. Often I am able to lead the path to new and exciting techniques, and I thoroughly enjoy sharing my knowledge with others. Enjoy your MR experiences and share your feedback with me at info@underminerstudios.com.


NERSC Optimizes Application Performance with Roofline Analysis


The National Energy Research Scientific Computing Center (NERSC) is the primary scientific computing facility for the U.S. Department of Energy’s Office of Science. Located at Lawrence Berkeley National Laboratory, the NERSC Center serves more than 6,000 scientists at national laboratories and universities researching a wide range of problems in combustion, climate modeling, fusion energy, materials science, physics, chemistry, computational biology, and other disciplines.

To meet its goals, NERSC needs to optimize its diverse applications for peak performance on Intel® Xeon Phi™ processors. To do that, it uses a roofline analysis model based on Intel® Advisor. The roofline model was originally developed by Sam Williams, a computer scientist in the Computational Research Division at Lawrence Berkeley National Laboratory. Using the model increased application performance by up to 35%.
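
In its simplest form, the roofline model bounds a kernel's attainable performance by both the machine's peak compute rate and its peak memory bandwidth:

P_{\mathrm{attainable}} = \min\left(P_{\mathrm{peak}},\; I \times B_{\mathrm{peak}}\right)

where I is the kernel's arithmetic intensity in FLOPs per byte moved to and from memory. Kernels whose intensity falls below the ridge point P_peak / B_peak are memory bound; those above it are compute bound. Plotting each loop against these ceilings shows developers whether to optimize for data locality or for vectorization.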

Get the whole story in our new case study.

Distributed, Docker*-ized Deep Learning with Intel® Nervana™ technology, Neon*, and Pachyderm*


The recent advances in machine learning and artificial intelligence are amazing! It seems like we see something groundbreaking every day, from self-driving cars to AIs learning complex games. Yet, in order to have real value within a company, data scientists must be able to get their models off their laptops and deployed within the company's data pipelines and infrastructure.

Moreover, data scientists should spend their time focused on improving their machine learning applications. They should not have to spend a great deal of time manually keeping their applications up to date with constantly changing production data. They also shouldn’t have to waste their days trying to retroactively identify and track interesting past behavior.

Docker* and Pachyderm* can help data scientists build, deploy, and update machine learning applications on production clusters, distribute processing across large data sets, and track input and output data throughout a data pipeline. In this post we show how to set up a production-ready machine learning workflow with Intel® Nervana™ technology, Neon*, and Pachyderm.

Intel® Nervana™ technology and Neon*

Intel Nervana technology with Neon is the “world’s fastest deep learning framework.” It is open source (see GitHub*), Python* based, and includes a really nice set of libraries for developing deep learning models.

You can get started developing by checking out the Neon documentation and installing Neon locally:

git clone https://github.com/NervanaSystems/neon.git
cd neon; make

Then you are ready to try out some examples.
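
As a quick smoke test, a minimal multilayer perceptron closely following the mnist_mlp.py example bundled with Neon looks roughly like this (a sketch assuming a CPU backend and a successful MNIST download):

from neon.backends import gen_backend
from neon.callbacks.callbacks import Callbacks
from neon.data import MNIST
from neon.initializers import Gaussian
from neon.layers import Affine, GeneralizedCost
from neon.models import Model
from neon.optimizers import GradientDescentMomentum
from neon.transforms import CrossEntropyMulti, Misclassification, Rectlin, Softmax

# set up a CPU backend and load MNIST
be = gen_backend(backend='cpu', batch_size=128)
dataset = MNIST(path='data/')
train_set = dataset.train_iter
valid_set = dataset.valid_iter

# two fully connected layers: 100 hidden units, then a 10-way softmax output
init = Gaussian(loc=0.0, scale=0.01)
mlp = Model(layers=[Affine(nout=100, init=init, activation=Rectlin()),
                    Affine(nout=10, init=init, activation=Softmax())])

cost = GeneralizedCost(costfunc=CrossEntropyMulti())
optimizer = GradientDescentMomentum(0.1, momentum_coef=0.9)

mlp.fit(train_set, optimizer=optimizer, num_epochs=10, cost=cost,
        callbacks=Callbacks(mlp, eval_set=valid_set))
print('misclassification error: %.1f%%'
      % (mlp.eval(valid_set, metric=Misclassification())[0] * 100))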

Pachyderm*

Pachyderm is an open-source framework that provides data versioning and data pipelining built on containers (specifically, Docker containers). With Pachyderm, you can create language-agnostic data pipelines where the data input and output of each stage of your pipeline are version controlled in Pachyderm; think Git for data. You can view diffs of your data and collaborate with teammates using Pachyderm commits and branches. Also, if your data pipeline generates an unexpected result, you can debug or validate it by understanding the historical processing steps that led to the result (or even reproducing them exactly).

Pachyderm can also easily parallelize your computation by only showing a subset of your data to each container that is part of a Pachyderm pipeline. A single node either sees a slice of each file (a map job) or a whole single file (a reduce job). The data itself can live in an object store of your choice (for example, S3), and Pachyderm smartly assigns different pieces of data to be processed by different containers.

Pachyderm clusters can run on any cloud, but you can also experiment with Pachyderm locally. After installing the Pachyderm CLI tool, Pachctl, and installing and running Minikube*, Pachyderm can be installed locally in a single command:

pachctl deploy local

Using Docker-ized Neon

In order to utilize Neon in a production Pachyderm cluster (or to just easily deploy it), we need to be able to Docker-ize Neon. Docker allows us to package up our machine learning application in a portable image that we can run as a container on any system that has Docker. Thus, it makes our machine learning application portable.

Thankfully, there are already publicly available Docker images for Neon, so let's see how this image works. Assuming you have Docker installed and are running CPU only, you can get Docker-ized Neon by pulling the following image (which is endorsed by the Neon team):

docker pull kaixhin/neon

Then, to experiment with Neon inside Docker interactively, you could run:

docker run -it kaixhin/neon /bin/bash

This will open a bash shell in a running instance of the Neon image (that is, a container), in which you can create and run Python programs that utilize Neon. You can also navigate to the Neon `examples` directory to run the Neon example models inside Docker.

However, let’s say that we already have a Python program `mymodel.py` that utilizes Neon, and we want to run that program in Docker. We want to create a custom Docker image that includes our program. To do this, you would simply create a file called `Dockerfile` that lives with your `mymodel.py` script:

my_project_directory
├── mymodel.py
└── Dockerfile

This `Dockerfile` will tell Docker how to build a custom image that includes both Neon and our custom Neon-based script. For a case where `mymodel.py` only utilizes Neon and the Python standard library, the `Dockerfile` can simply be:

FROM kaixhin/neon
ADD mymodel.py /

Then, to build your custom Docker image, run:

docker build -t mycustomimage .

from the root of your project. The resulting image could then be used to run your Neon model on any machine running Docker by running:

docker run -it mycustomimage python /mymodel.py

Distributing Docker-ized Neon on a Production Cluster

Running Docker-ized Neon as mentioned above is great for portability, but we don’t necessarily want to manually log in to production machines to deploy single instances of our model. We want to integrate our Neon model into a distributed data pipeline running on a production cluster, and ensure that our model can scale over large data sets. This is where Pachyderm can help us.

We are going to implement both model training and inference into a sustainable, production-ready data pipeline. To illustrate this process, we are going to utilize an example LSTM model that predicts the sentiment of movie reviews based on a training data set from IMDB. More information about this model is available.

Training

Pachyderm lets us create data pipelines with Docker containers as the processing stages. These containerized processing stages have versioned data in data repositories as input, and they output to a corresponding data repository. Thus, the input/output data of each processing stage is versioned in Pachyderm.

We will set up one of these pipeline stages to perform the training of our LSTM model. This model pipeline stage will take a labeled training dataset, `labeledTrainData.tsv`, as input, and output a persisted (saved to disk) version of the trained model as a set of model parameters, `imdb.p`, and a model vocab, `imdb.vocab`. The actual processing for this stage uses a Python script, `train.py`, which is already included in the public kaixhin/neon Docker image.

Figure 1

To create the above pipeline, we first need to create the training data repository that will be input to our model pipeline stage:

pachctl create-repo training

We can then confirm that the repository was created by:

pachctl list-repo

Next, we can create the model pipeline stage to process the data in the training repository. To do this, we just need to provide Pachyderm with a JSON pipeline specification that tells Pachyderm how to process the data. In this case, the JSON specification looks like this:

{"pipeline": {"name": "model"
  },"transform": {"image": "kaixhin/neon","cmd": ["python","examples/imdb/train.py","-f","/pfs/training/labeledTrainData.tsv","-e","2","-eval","1","-s","/pfs/out/imdb.p","--vocab_file","/pfs/out/imdb.vocab"
    ]
  },"inputs": [
    {"repo": {"name": "training"
      },"glob": "/"
    }
  ]
}

This may seem complicated, but it really just says a few things: (1) create a pipeline stage called model, (2) use the kaixhin/neon Docker image for this pipeline stage, (3) run the provided Python cmd to process data, and (4) process any data in the input repository training. There are other options that we will ignore for the time being, but these fields are all discussed in the Pachyderm docs.

Once we have this JSON file (saved in `train.json`), creating the above pipeline on a production-ready Pachyderm cluster is as simple as:

pachctl create-pipeline -f train.json

Now, Pachyderm knows that it should perform the above processing, which in our case is training a Neon model on the training data in the training repository. In fact, because Pachyderm is versioning data and knows what data is new, Pachyderm will keep our model output (versioned in a corresponding model repository created by Pachyderm) in sync with the latest updates to our training data. When we commit new training data into the training repository, Pachyderm will automatically update our persisted, trained model in the model repository.

However, we haven’t actually put any data into the training repository yet, so let’s go ahead and do that. Specifically, let’s put a file, `labeledTrainData.tsv`, into the master branch of our training repository (again with Git-like semantics):

pachctl put-file training master labeledTrainData.tsv -c -f labeledTrainData.tsv

Now, when you run `pachctl list-repo`, you will see that data has been added to the training repository. Moreover, if you run `pachctl list-job`, you will see that Pachyderm has started a job to process the training data and output our persisted model to the model repository. Once this job finishes, we have trained our model. We could then re-train our production model any time by pushing new training data into the training repository or updating our pipeline with a new image for the model pipeline stage.

To confirm that our model has been trained, you should see both `imdb.p` and `imdb.vocab` in an output data repository created for the pipeline stage by Pachyderm:

pachctl list-file model master

Note, we actually told Pachyderm what to write out to this repository by saving our files to Pachyderm’s magic `/pfs/out` directory, which specifies the output repository corresponding to our processing stage.
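
To make that convention concrete, here is the rough shape of a pipeline stage's script in Python. This is a hypothetical stand-in (not the actual train.py): it reads whatever Pachyderm mounted from the input repository under /pfs/<repo-name> and writes its results to /pfs/out, which becomes the stage's versioned output:

import os

IN_DIR = "/pfs/training"  # each input repo is mounted at /pfs/<repo-name>
OUT_DIR = "/pfs/out"      # anything written here is committed to the output repo

for name in os.listdir(IN_DIR):
    # trivial placeholder processing: count the lines in each input file
    with open(os.path.join(IN_DIR, name)) as f:
        n_lines = sum(1 for _ in f)
    with open(os.path.join(OUT_DIR, name + ".count"), "w") as out:
        out.write("%d\n" % n_lines)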

Prediction/Inference

Next, we want to add an inference stage to our production pipeline which utilizes our versioned, persisted model. This inference stage will take new movie reviews, which the model hasn’t seen yet as input, and output the inferred sentiment of those movie reviews using our persisted model. This inference will again run a Python script, `auto_inference.py`, that utilizes Neon.

Figure 2

To create this inference pipeline stage, we first need to create the Pachyderm repository that will store and version our input reviews:

pachctl create-repo reviews

Then we can create another JSON specification that will tell Pachyderm how to perform the processing for the inference stage:

{"pipeline": {"name": "inference"
  },"transform": {"image": "dwhitena/neon-inference","cmd": ["python","examples/imdb/auto_inference.py","--model_weights","/pfs/model/imdb.p","--vocab_file","/pfs/model/imdb.vocab","--review_files","/pfs/reviews","--output_dir","/pfs/out"
    ]
  },"parallelism_spec": {"strategy": "CONSTANT","constant": "1"
  },"inputs": [
    {"repo": {"name": "reviews"
      },"glob": "/*"
    },
    {"repo": {"name": "model"
      },"glob": "/"
    }
  ]
}

This is similar to our last JSON specification except, in this case, we have two input repositories (the reviews and the model) and we are using a different Docker image that contains `auto_inference.py`. The Dockerfile for this image can be found here.

To create the inference stage, we simply run:

pachctl create-pipeline -f infer.json

Now whenever new reviews are pushed into the reviews repository, Pachyderm will see these new reviews, perform the inference, and output the inference results to a corresponding inference repository.

In fact, let’s say that we have already pushed a million reviews into the reviews repository. If we push one more review into the repository, Pachyderm understands that only the one review is new and will only update the results for the one new review. There is no need to process all of our data again, because Pachyderm is diff aware and keeps our processing in sync with the latest changes to our data.

Our inference pipeline actually works as both a batch and streaming inference pipeline. You could push many reviews into the reviews repository at periodic times to perform batch inferences on those batches of reviews. However, you could also stream individual reviews into the repository as they are created. Either way, Pachyderm automatically performs the inferences on the new reviews and outputs the results.

Implications

By combining our training and inference into a data pipeline processing versioned data, we have set ourselves up to take advantage of some pretty valuable functionality. At any time, a data scientist or engineer could update the training data set utilized by the model to trigger the creation of a newly persisted, versioned model in our model repository. When the model is updated, any new reviews coming into the reviews repository will be processed with the updated model.

Further, old predictions can be recomputed with the updated model, or new models can be tested on previously versioned input. No more manual updates to historical results or worrying about how to swap out models in production!

Furthermore, although we skipped over it above, each pipeline stage in our Pachyderm pipeline is individually scalable via a parallelism specification. If we were suddenly receiving tens of thousands of reviews at a time, we could adjust the parallelism of our inference pipeline stage by setting:

"parallelism_spec": {"strategy": "CONSTANT","constant": "10"
  }

or by setting our constant to any other number above one. This will tell Pachyderm to spin up multiple workers to perform our inference (10 in the above example), and Pachyderm will automatically split our review data between the workers for parallel processing.

Figure 3

We can focus on developing and improving our models and let Pachyderm worry about distributing our inference on the production cluster. This also keeps our implementations simple and readable. We can scale the Python/Neon scripts we develop on our local machines to production-scale data without having to think about data sharding, complicating our code with frameworks such as Dask*, or even porting our models to another language or framework for production use.

Optimizations

Although Pachyderm takes care of the data sharding, parallelism, and orchestration pieces for us, there are several nice optimizations that we can take advantage of in this pipeline.

First, both our training and inference stages run Python code that imports Neon. As such, we could further optimize our processing, without changing a line of code, by using the Intel® Distribution for Python*. This distribution automatically integrates the powerful Intel® Math Kernel Library (Intel® MKL), Intel® Data Analytics Acceleration Library (Intel® DAAL) and pyDAAL, Intel® MPI Library, and Intel® Threading Building Blocks (Intel® TBB) into core Python packages including NumPy*, SciPy*, and pandas*.

In fact, to take advantage of Intel optimized Python within Pachyderm, we could simply replace our current Neon image with a custom Docker image based on one of the public Intel Python images. An example of such an image could be built from a Dockerfile. We would just need to add a Python script of our choice to the image (as shown here), upload the image to DockerHub (or another registry), and change the name of the referenced image in our Pachyderm pipeline specification.
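As a rough sketch, publishing such a custom image could look like the following (the image and account names are hypothetical placeholders):

docker build -t <dockerhub-user>/neon-intel-python .
docker push <dockerhub-user>/neon-intel-python

After the push, we would point the "image" field of our pipeline specification at the new image name.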

Also, we could choose to deploy our pipeline on systems with Intel® Xeon Phi™ processors. These chips are further optimized for the types of processing involved in deep learning training and inference, which would give our machine learning workflows an additional boost.

Conclusions/Resources

We were able to take a simple, but fast, deep learning implementation in Neon and deploy it on a production cluster. We were also able to distribute the processing across the cluster without having to worry about sharding data and parallelism in our code. Our production pipeline will track versions of our model and keep our processing in sync with updates to our data.

All of the code and configuration for the above data pipeline can be found here.

Pachyderm resources:

Intel Nervana technology with Neon resources:

 
About the Author

Daniel (@dwhitena) is a Ph.D. trained data scientist working with Pachyderm (@pachydermIO). Daniel develops innovative, distributed data pipelines which include predictive models, data visualizations, statistical analyses, and more. He has spoken at conferences around the world (ODSC, Spark Summit, Datapalooza, DevFest Siberia, GopherCon, and more), teaches data science/engineering with Ardan Labs (@ardanlabs), maintains the Go kernel for Jupyter, and is actively helping to organize contributions to various open source data science projects.

Go* for Big Data


Using Intel® Data Analytics Acceleration Library (Intel® DAAL) with the Go* programming language to enable batch, online, and distributed processing

The hottest modern infrastructure projects are powered by Go*, including Kubernetes*, Docker*, Consul*, etcd*, and many more. Go is turning into the go-to language for DevOps, web servers, and microservices. It is easy to learn, easy to deploy, fast, and has a great set of tools for developers.

But as businesses become more data driven, there is a need to integrate computationally intensive algorithms at every level of a company’s infrastructure, including those levels where Go is playing a role. Thus, it’s natural to ask how we might integrate things like machine learning, distributed data transformation, and online data analysis into our blossoming Go-based systems.

One route to providing robust, performant, and scalable data processing within Go is to utilize the Intel® Data Analytics Acceleration Library (Intel® DAAL) within our Go programs. This library already provides batch, online, and distributed algorithms for a host of useful tasks, from clustering and collaborative filtering to neural networks.

Because Go provides a nice way to interface with C/C++, we can pull this functionality into our Go programs without too much trouble. In doing so, we can take advantage of Intel’s optimizations of these libraries for their architectures right out of the box. As shown here, Intel DAAL can be up to seven times faster than Spark* plus MLlib* for certain operations, like principal component analysis. Woah! I would say it’s time we explore how to level up our Go applications with that sort of power.

Installing Intel® DAAL

Intel DAAL is available as open source and can be installed by following these instructions. On my Linux* machine this was as simple as:

  1. Downloading the source code.
  2. Running the install script.
  3. Setting up the necessary environment variables (which can also be done with a provided shell script; see the sketch below).
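For instance, step 3 with the provided script might look like the following on a default Linux install (the install path is an assumption; adjust it to your installation):

```
# Set up the Intel DAAL environment variables for 64-bit Linux.
source /opt/intel/daal/bin/daalvars.sh intel64
```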

Before trying to integrate Intel DAAL into any Go program, it’s a good idea to make sure that everything works normally. You can do this by following the various getting started guides in the Intel DAAL docs. Specifically, these getting started guides provide an example Intel DAAL application for Cholesky decomposition that we will be recreating in Go, below. The raw C++ example of Cholesky decomposition looks like this:

```
/*******************************************************************************
!  Copyright(C) 2014-2017 Intel Corporation. All Rights Reserved.
!
!  The source code, information and material ("Material") contained herein is
!  owned by Intel Corporation or its suppliers or licensors, and title to such
!  Material remains with Intel Corporation or its suppliers or licensors. The
!  Material contains proprietary information of Intel or its suppliers and
!  licensors. The Material is protected by worldwide copyright laws and treaty
!  provisions. No part of the Material may be used, copied, reproduced,
!  modified, published, uploaded, posted, transmitted, distributed or disclosed
!  in any way without Intel's prior express written permission. No license
!  under any patent, copyright or other intellectual property rights in the
!  Material is granted to or conferred upon you, either expressly, by
!  implication, inducement, estoppel or otherwise. Any license under such
!  intellectual property rights must be express and approved by Intel in
!  writing.
!
!  *Third Party trademarks are the property of their respective owners.
!
!  Unless otherwise agreed by Intel in writing, you may not remove or alter
!  this notice or any other notice embedded in Materials by Intel or Intel's
!  suppliers or licensors in any way.
!
!*******************************************************************************
!  Content:
!    Cholesky decomposition sample program.
!******************************************************************************/

#include "daal.h"
#include <iostream>

using namespace daal;
using namespace daal::algorithms;
using namespace daal::data_management;
using namespace daal::services;

const size_t dimension = 3;
double inputArray[dimension * dimension] =
{
    1.0, 2.0, 4.0,
    2.0, 13.0, 23.0,
    4.0, 23.0, 77.0
};

int main(int argc, char *argv[])
{
    /* Create input numeric table from array */
    SharedPtr<NumericTable> inputData = SharedPtr<NumericTable>(new Matrix<double>(dimension, dimension, inputArray));

    /* Create the algorithm object for computation of the Cholesky decomposition using the default method */
    cholesky::Batch<> algorithm;

    /* Set input for the algorithm */
    algorithm.input.set(cholesky::data, inputData);

    /* Compute Cholesky decomposition */
    algorithm.compute();

    /* Get pointer to Cholesky factor */
    SharedPtr<Matrix<double> > factor =
        staticPointerCast<Matrix<double>, NumericTable>(algorithm.getResult()->get(cholesky::choleskyFactor));

    /* Print the first element of the Cholesky factor */
    std::cout << "The first element of the Cholesky factor: "<< (*factor)[0][0];

    return 0;
}
```

Try compiling and running this to make sure your Intel DAAL installation has succeeded. It will also give you a taste of what we will be doing in Go. Any questions or issues with the Intel DAAL installation can be discussed in the Intel DAAL forum (which was a great resource for me while I was getting spun up with Intel DAAL). 
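For reference, a minimal compile line on Linux with g++ might look like the following, assuming daalvars.sh has been sourced so that DAALROOT is set (exact paths and flags depend on your install; consult the Intel DAAL linking documentation):

```
# Compile and link against the sequential Intel DAAL libraries.
g++ cholesky_example.cpp -o cholesky_example \
    -I$DAALROOT/include -L$DAALROOT/lib/intel64_lin \
    -ldaal_core -ldaal_sequential -lpthread -lm
```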

Using Intel DAAL in Go

When utilizing Intel DAAL from within Go, we have a couple of options:

  1. Directly calling Intel DAAL from your Go program via a wrapper function.
  2. Creating a reusable library that wraps specific Intel DAAL functionality.

I will demonstrate both of these options below, and all of the code used can be found here. This is just one example, and, eventually, it would be great to add more Go plus Intel DAAL examples to this repository. As you experiment, please submit your Pull Requests. I’m excited to see what you create!

If you are new to Go you should familiarize yourself a bit before continuing with this tutorial. In fact, you don’t have to install Go locally to start learning. You can take the online Tour of Go and use the Go Playground, and then when you are ready, install Go locally.

Calling Intel DAAL directly from Go

Go actually provides a tool, called cgo, which enables the creation of Go packages that call C code. In this case, we will use cgo to interoperate our Go program with Intel DAAL.

Note: there are various trade-offs for using cgo with your Go programs that are discussed at length across the Internet (in particular, see Dave Cheney’s discussion or this article from Cockroach Labs*). When choosing to use cgo you should consider these costs, or at least be aware of them. In this case, we are saying that we are willing to work with the cgo trade-offs in order to take advantage of the highly optimized and distributed Intel DAAL library, a trade-off that is likely warranted in certain data-intensive or compute-intensive use cases.

To integrate the Intel DAAL Cholesky decomposition functionality in a sample Go program, we will need to create a directory structure that looks like this (in our $GOPATH):

```
cholesky
├── cholesky.go
├── cholesky.hxx
└── cholesky.cxx
```

The cholesky.go file is our Go program that will utilize the Intel DAAL Cholesky decomposition functionality. The cholesky.cxx and cholesky.hxx files are C++ definition/declaration files that include Intel DAAL and will signal to cgo what Intel DAAL functionality we are going to wrap. Let’s take a look at each of these.

First, let’s take a look at the *.cxx file:

```
#include "cholesky.hxx"
#include "daal.h"
#include <iostream>

using namespace daal;
using namespace daal::algorithms;
using namespace daal::data_management;
using namespace daal::services;

int choleskyDecompose(int dimension, double inputArray[]) {

    /* Create input numeric table from array */
    SharedPtr<NumericTable> inputData = SharedPtr<NumericTable>(new Matrix<double>(dimension, dimension, inputArray));

    /* Create the algorithm object for computation of the Cholesky decomposition using the default method */
    cholesky::Batch<> algorithm;

    /* Set input for the algorithm */
    algorithm.input.set(cholesky::data, inputData);

    /* Compute Cholesky decomposition */
    algorithm.compute();

    /* Get pointer to Cholesky factor */
    SharedPtr<Matrix<double> > factor =
        staticPointerCast<Matrix<double>, NumericTable>(algorithm.getResult()->get(cholesky::choleskyFactor));

    /* Return the first element of the Cholesky factor */
    return (*factor)[0][0];
}
```

and the *.hxx file:

```
#ifndef CHOLESKY_H
#define CHOLESKY_H

// __cplusplus gets defined when a C++ compiler processes the file.
// extern "C" is needed so the C++ compiler exports the symbols w/out name issues.
#ifdef __cplusplus
extern "C" {
#endif

int choleskyDecompose(int dimension, double inputArray[]);

#ifdef __cplusplus
}
#endif

#endif
```

These files define a choleskyDecompose wrapper function in C++ that uses the Intel DAAL Cholesky decomposition functionality to compute the Cholesky decomposition of an input matrix and return the first element of the Cholesky factor (similar to what is shown in the Intel DAAL getting started guides). Note that, in this case, our input is a flattened array whose length is the square of the matrix dimension (that is, a 3 x 3 matrix corresponds to an input array of length 9). We need to include extern "C" in our *.hxx file. This lets the C++ compiler that cgo calls know that it needs to export the relevant names defined in our C++ files without name mangling issues.

Once we have the Cholesky decomposition wrapper function defined in our *.cxx and *.hxx files, we can call that function directly from Go. cholesky.go looks like:

```
package main

// #cgo CXXFLAGS: -I$DAALINCLUDE
// #cgo LDFLAGS: -L$DAALLIB -ldaal_core -ldaal_sequential -lpthread -lm
// #include "cholesky.hxx"
import "C"

import (
	"fmt"
	"unsafe"
)

func main() {

	// Define the input matrix as an array.
	inputArray := [9]float64{
		1.0, 2.0, 4.0,
		2.0, 13.0, 23.0,
		4.0, 23.0, 77.0,
	}

	// Get the first Cholesky decomposition factor.
	data := (*C.double)(unsafe.Pointer(&inputArray[0]))
	factor := C.choleskyDecompose(3, data)

	// Output the first Cholesky decomposition factor to stdout.
	fmt.Printf("The first Cholesky decomp. factor is: %d\n", factor)
}
```

Let’s walk through this step by step to understand what is happening. First, we need to tell Go that we want to utilize cgo when we compile our program, and we want to compile with certain flags:

```
// #cgo CXXFLAGS: -I$DAALINCLUDE
// #cgo LDFLAGS: -L$DAALLIB -ldaal_core -ldaal_sequential -lpthread -lm
// #include "cholesky.hxx"
import "C"
```

To use cgo, we need to import "C", which is a pseudo-package telling Go that we are using cgo. If the import of "C" is immediately preceded by a comment, that comment (called the preamble) is used as a header when compiling the C++ parts of the package.

CXXFLAGS and LDFLAGS allow us to specify the compile and linking flags that we want cgo to use during compilation, and we can include our C++ function via // #include "cholesky.hxx". I used Linux with gcc to compile this example, so those flags are reflected above. However, you can follow this guide to determine how you should link your application to Intel DAAL.
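Note that the cgo directives above reference the environment variables DAALINCLUDE and DAALLIB. One way to define them, assuming a default Linux install location (adjust the paths to your own installation), is:

```
# Location of the Intel DAAL headers and 64-bit Linux libraries.
export DAALINCLUDE=/opt/intel/daal/include
export DAALLIB=/opt/intel/daal/lib/intel64_lin
```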

After that, we can write our Go code just as we would with any other program, and access our wrapped function as C.choleskyDecompose():

```
// Define the input matrix as an array.
inputArray := [9]float64{
	1.0, 2.0, 4.0,
	2.0, 13.0, 23.0,
	4.0, 23.0, 77.0,
}

// Get the first Cholesky decomposition factor.
data := (*C.double)(unsafe.Pointer(&inputArray[0]))
factor := C.choleskyDecompose(3, data)

// Output the first Cholesky decomposition factor to stdout.
fmt.Printf("The first Cholesky decomp. factor is: %d\n", factor)
```

One peculiarity here (unique to using cgo) is that we need to convert the pointer to the first element of our float64 array to an unsafe pointer, which can then be explicitly converted to a *C.double pointer (compatible with C++) for our choleskyDecompose function. The unsafe package, as the name implies, allows us to step around the type safety of Go programs.

Ok, great! Now we have a Go program that calls our Intel DAAL Cholesky decomposition function. Let's build and run it. We can do that as usual with go build:

```
$ ls
cholesky.cxx  cholesky.go  cholesky.hxx
$ go build
$ ls
cholesky  cholesky.cxx  cholesky.go  cholesky.hxx
$ ./cholesky
The first Cholesky decomp. factor is: 1
$
```

and we get the expected output! Indeed, the first Cholesky decomposition factor is 1. We have successfully tapped into the power of Intel DAAL directly from Go! However, our Go program does look a little peculiar with the unsafe and C bits, and this is something of a one-off solution. Next, let's cook this functionality into a reusable Go package that we can import just like any other Go package.

Creating a reusable Go package with Intel DAAL

To create a Go package that wraps Intel DAAL functionality, we are going to use a tool called SWIG*. In addition to cgo, Go knows how to call SWIG at build time to compile Go packages that wrap C/C++ functionality. To enable this sort of build, we need to create a directory structure that looks like:

```
choleskylib
├── cholesky.go
├── cholesky.hxx
├── cholesky.cxx
└── cholesky.swigcxx
```

Our *.cxx and *.hxx wrapper files can stay the same. However, we now need to add a *.swigcxx file. This file looks like:

```
%{
#include "cholesky.hxx"
%}

%include "cholesky.hxx"
```

This instructs the SWIG tool to generate wrapping code for our Cholesky function, which allows us to use it as a Go package.

Also, now that we are creating a reusable Go package (not a standalone Go application), the *.go file doesn’t need to include a package main or function main. Rather, it simply needs to define our package name. In this case, let’s call it cholesky, which would mean that cholesky.go looks like:

```
package cholesky

// #cgo CXXFLAGS: -I$DAALINCLUDE
// #cgo LDFLAGS: -L$DAALLIB -ldaal_core -ldaal_sequential -lpthread -lm
import "C"
```

(Again, we supply the compile and linking flags in the preamble.)

Now we can build and install our package locally:

```
$ ls
cholesky.cxx  cholesky.go  cholesky.hxx  cholesky.swigcxx
$ go install
$
```

This builds all of the necessary binaries and libraries that are called when a Go program utilizes this package. Go can see that we have a *.swigcxx file in our directory and, as a result, it will automatically use SWIG to build our package.

Awesome; we now have a Go package that uses Intel DAAL. Let’s see how we would import and use the package:

```
package main

import (
	"fmt"

	"github.com/dwhitena/daal-go/choleskylib"
)

func main() {

	// Define the input matrix as an array.
	inputArray := [9]float64{
		1.0, 2.0, 4.0,
		2.0, 13.0, 23.0,
		4.0, 23.0, 77.0,
	}

	// Get the first Cholesky decomposition factor.
	factor := cholesky.CholeskyDecompose(3, &inputArray[0])

	// Output the first Cholesky decomposition factor to stdout.
	fmt.Printf("The first Cholesky decomp. factor is: %d\n", factor)
}

```

Nice! This looks a lot cleaner compared to our direct wrapping of Intel DAAL. We can import the cholesky package, just like any other Go package, and call our wrapped function as cholesky.CholeskyDecompose(...). Also, SWIG has taken care of all the unsafe business for us. Now we can simply pass the address of the first element of our original float64 array to cholesky.CholeskyDecompose(...).

Similar to any other Go program, this can be compiled and run with go build:

```
$ ls
main.go
$ go build
$ ls
example  main.go
$ ./example
The first Cholesky decomp. factor is: 1
$
```

Yay! The correct answer. We can now utilize this package in any other Go program where we need Cholesky decomposition.

Conclusions/Resources

With Intel DAAL, cgo, and SWIG we were able to integrate optimized Cholesky decomposition right into our Go programs. However, these techniques aren't limited to Cholesky decomposition. You could create Go programs and packages that utilize any of the algorithms implemented in Intel DAAL in the same way. That means you can implement batch, online, and distributed neural networks, clustering, boosting, collaborative filtering, and much more right there in your Go applications.

All of the code used above can be found here.

Go data resources:

Intel DAAL resources:

About the Author

Daniel (@dwhitena) is a Ph.D. trained data scientist working with Pachyderm (@pachydermIO). Daniel develops innovative, distributed data pipelines which include predictive models, data visualizations, statistical analyses, and more. He has spoken at conferences around the world (ODSC, Spark Summit, Datapalooza, DevFest Siberia, GopherCon, and more), teaches data science/engineering with Ardan Labs (@ardanlabs), maintains the Go kernel for Jupyter, and is actively helping to organize contributions to various open source data science projects.

Intel® Parallel Studio XE 2018 Beta


This is a placeholder page for the Intel(R) Parallel Studio XE 2018 Beta

Virtual Reality User Experience Tips from VRMonkey


By Pedro Kayatt, an Intel Software Innovator with a strong passion for VR and games

Abstract

Virtual reality has been hyped for years, and now it is really happening. This is your chance to enter a new development platform, but are you ready for it?

The following guidelines show you how to avoid creating unpleasant sensations when developing for virtual reality. We also provide tested solutions from the people on the frontier of this new medium.

Ready for a New Adventure?

This article aims to cover the best practices of virtual reality. It covers some of the major issues that people have been experiencing and how to solve them.

Our approach consists of rules, so let's start with rule number 1, one of the most important rules of virtual reality (VR): every rule has exceptions. With the advent of so many new VR devices, it is practically impossible to find someone who really knows what works and what doesn't. With that in mind, always try something different.

In other words, there are no real experts in this area yet, only scientists. So try to be a scientist: apply a well-defined methodology and experiment. It is important to try everything over and over again, with the largest number of people you can find.

Remember, everyone is different when it comes to VR. Some people play games running around and making loops and nothing bothers them; other people can feel sick for an entire day just from looking at a scene with the wrong field of view (FOV).

Having said that, let's go to the second rule: never accelerate your player abruptly. Most games are not designed for VR, so they use camera acceleration to convey a sensation of speed through the monitor. If you do that in VR, your body will notice that it is not actually accelerating, and very strong dizziness can overcome you.

Remember that most of what the camera sees appears to the player through the head-mounted display (HMD). If the player is not moving, do not move the camera. This also applies to camera shake, which is commonly used to produce the sensation of being hit or in distress; doing that to a player is very unpleasant. It is like shaking your head after a few drinks.


Figure 1. Speeding up your player may make him feel really nauseous.

Do not forget that when the player uses an HMD he has a better sense of space, so sizes and velocities should follow measures similar to those in real life. For instance, in Half-Life* 2 the normal walking speed is around 17 km/h [1], which greatly differs from a real-world walking speed of around 4 to 5 km/h [2]. Keep that in mind when creating movement inputs for your character.

In fact, all artificial locomotion in VR is tricky. Many mechanics have been proposed, such as using a controller as usual (a gamepad or keyboard) to create continuous movement, which in fact generated the bad reception of the Resident Evil* 7 demo [3]. Other options, like teleport or dash movement, have proven to be better choices.

If a teleport or dash movement takes less than 100 ms, the player will not notice it and will not get motion sickness. This approach is very comfortable, but it can still be disorienting, since the player may be jumping from one place to another. Some solutions are to show the player the direction he is going, and to keep his orientation fixed.

Another great option for movement is to put movement on rails, as in old arcade games like Virtua Cop* and The House of the Dead*. A reference point is also very important in avoiding cybersickness; displaying a helmet or a cockpit interior helps a lot in preventing nausea.

Believe it or not, some researchers have added a virtual nose to make VR experiences more comfortable. Not all VR games should have a nose, but this kind of UI feature makes it easier for the player to focus his attention on something that is not moving around him all the time.

Another common solution is to apply continuous movement; for instance, when the player hits a button he begins moving along just one axis, and keeps moving along only that axis until he turns his head past a boundary angle or hits a button again.

Apart from locomotion, another big problem found in many VR applications is the frame rate. This is so easily measured that even VR stores will not publish games that fail to maintain a frame rate of at least 90 fps.

I believe that with great, low-latency screens and technologies like asynchronous timewarp [4] and asynchronous spacewarp [5] we could have great experiences even at 45 fps; and remember that on mobile devices, like Gear VR* or Daydream*, the target frame rate is 60 fps (a hardware limitation for now).


Figure 2. Example of asynchronous spacewarp, creating a new frame by interpolating between two frames.

Nevertheless, frame rate is a critical matter. Just because a console game looks good enough at 30 fps does not mean that it works in VR. When the frame rate is low, the response to head movement lags and you get a lot of blurring; believe me, you will become sick after too much time exposed to that experience.

Another thing that changes greatly with an HMD is how interfaces should be placed. Today's gaming interfaces spread information all around the screen: maps, ammo, enemies, tutorials, and everything else. That is great when they are on a 50-inch TV five feet away, or even on a 21-inch monitor when playing with mouse and keyboard. But when you are using a headset, the position of these interface elements should be carefully thought out.

In fact, anything too close to your eyes is hard to focus on, so if you want to place a 2D sprite as an interface, try to place it at least two meters away. Or try something more natural: place your interfaces on objects in your virtual world, as in games such as Tomb Raider* and The Division*. This is a more natural approach to interacting with interfaces, and the results are pleasant.

Figure 3. Remember to place virtual objects at a comfortable distance from the viewer.

Not only for interfaces, but for all objects in your virtual world: be careful when putting anything too close to the player. Camera close-ups can be very effective, since VR experiences have stereoscopic vision and the player instinctively tries to avoid objects coming toward him. But try to make this kind of interaction brief; having to focus on something that is very close to you will make you dizzy.

It is important to understand how the body reacts to different stimuli in real life. For instance, we usually have a reaction time of 250 ms for visual input, 170 ms for sound input, and 150 ms for touch input (like controller rumble). When the environment is dark, our reactions are slower, and you can use that in VR.

In other words, dark virtual places are more comfortable than brighter places, since our response time is slower in the dark in real life. Fast audio feedback also increases the level of comfort in an experience, and understanding how to use spatial audio can make all the difference in your VR application.

The Virtual Barbershop, for example, could be called a VR experience built on audio alone. Because it uses binaural audio, it can convey (even through stereo headphones) many different positions for the audio source. The immersion you get from sound like that is much better, and luckily most game engines are ready to create a 3D sound environment.


Figure 4. The difference between binaural audio and stereo audio, which creates a more immersive environment.

Another great hint is to not show the player's body. You may ask, why? There are several reasons. First, your player is no longer a spectator; he is the player! The sense of embodiment is much deeper; for instance, if a girl is playing she will not identify with a flat-chested male body with strong arms.

Apart from that, some amazing new tools are allowing us to play games in many different ways and positions; players may use room-scale VR, hand controllers, or simple gamepads. Imagine the differences between players. How can you profile the player if they can be in such different positions?


Figure 5. Remember, players do not like to follow rules. They will play your game in the position they prefer.

Since we just talked about controllers, consider using the hands as controllers. Several technologies aim to do this; the most popular are the Leap Motion* [6] and the motion controllers of the main VR platforms (Oculus Touch* and the HTC Vive* controllers). They have amazing precision and create a perfect way to interact with the environment.

Some common interaction approaches change drastically with these technologies, but almost every VR user, when facing the virtual world for the first time, immediately tries to reach for objects with their hands.

In fact, it is so natural that it is even simpler than teaching touch controls on smartphones. If you look at videos on the Internet you will see that elderly people (not monkeys) have almost no problems using VR, and they usually try to reach out and interact using their hands.

Have you ever gotten the feeling that the face of some character is strange, almost wrong? That is probably because of the uncanny valley: the way we perceive a human replica as either robot or human, and our strange revulsion toward things that appear nearly human, but not quite right [7].

Why is that important for VR? It is important for every game, but given the performance constraints of VR (the need to maintain at least 90 fps), several graphical details must be sacrificed; usually animations and polygon counts are cut. We can end up with character faces that are not realistic enough to read as healthy humans, and we fall into the uncanny valley. Avoid that!


Figure 6. The uncanny valley [8], and how creepy near-human likenesses can be.

The summary of all these tips is that VR is a science, and as with any science, a scientific approach must be taken. This means that you must experiment as much as possible: make new assumptions, create new hypotheses, and try them in prototypes. Never forget to take notes on what went well and what went badly, try to adapt, and never get too attached to an experiment; just mark it as failed and move on to the next idea.

Conclusion

We are just entering a new world where anything is possible, so do not be boxed in by ideas like, "My game must be first person," or "I should not move my character with analog sticks." Experiment and reach new conclusions. If you want fast results, go with the well-known; if you want to achieve new possibilities, do not be afraid to try. More information can be found in the reference documents [9], [10], [11].

References

[1] E. D. Van Der Spek, The effect of cybersickness on the affective appraisal of virtual environments, Organization, 2007.

[2] N. Carey, Establishing Pedestrian Walking Speeds, no. 503, 2005.

[3] J. Conditt, ‘Resident Evil 7’ in VR is a sweaty, puke-inducing masterpiece, Engadget, 2016. [Online]. https://www.engadget.com/2016/06/15/resident-evil-7-vr-sickness-ps-vr/.

[4] M. Antonov, Asynchronous Timewarp Examined, Oculus, 2015. [Online]. https://developer3.oculus.com/blog/asynchronous-timewarp-examined/. [Accessed: 23-Mar-2017].

[5] D. Beeler, E. Hutchins, and P. Pedriana, Asynchronous Spacewarp, Oculus, 2016. [Online]. https://developer.oculus.com/blog/asynchronous-spacewarp/. [Accessed: 23-Mar-2017].

[6] Leap Motion, Leap Motion Controller, 2015. [Online]. https://www.leapmotion.com.

[7] R. Schwarz, 10 Creepy Examples of the Uncanny Valley. [Online]. http://www.strangerdimensions.com/2013/11/25/10-creepy-examples-uncanny-valley/.

[8] M. Mori, K. F. MacDorman, and N. Kageki, The uncanny valley, IEEE Robot. Autom. Mag., vol. 19, no. 2, pp. 98–100, 2012.

[9] P. O. Luanaigh and R. “fabs” Fabian, The WHY behind the Dos and Don’ts of VR, nDreams. [Online]. http://malideveloper.arm.com/downloads/ARM_Game_Developer_Days/LondonDec15/presentations/nDreams_VR_presentation.pdf.

[10] D. Allen, Ten Do’s and Don’ts to Improve Comfort in VR

[11] M. Rose, The dos and don’ts of designing VR games

Intel(R) Media SDK GStreamer* Getting Started Guide


Intel(R) Media SDK GStreamer* Installation Process

1 Overview

This document provides the system requirements, installation instructions, issues and limitations. System Requirements:

  • Intel(R) Core(TM) processor: Skylake, Broadwell.
  • Fedora* 24 / 25
  • Intel(R) Media Server Studio 2017 R2.

2 Installing Fedora* 24 / 25

2.1 Downloading Fedora*

Go to the Fedora* download site and download Workstation OS image:

Fedora* 24: http://mirror.nodesdirect.com/fedora/releases/24/Workstation/x86_64/iso/Fedora-Workstation-Live-x86_64-24-1.2.iso
Fedora* 25: http://mirror.nodesdirect.com/fedora/releases/25/Workstation/x86_64/iso/Fedora-Workstation-Live-x86_64-25-1.3.iso

2.2 Creating the installation USB

Use an imaging tool like Rufus to create a bootable USB image.

2.3 Installing Fedora* 24 / 25 on the system

For Fedora* 25, log on to the system with the "GNOME on Xorg" option in the GNOME login manager. This is because the default desktop for Fedora* 25 uses Wayland, and the native Wayland backend of the renderer plugin (mfxsink) is not well supported by the Fedora* Wayland desktop. If you do want native Wayland rendering on the Fedora* 25 Wayland desktop, use the Wayland EGL backend in mfxsink instead.
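If you are not sure which session type you are currently running, one quick check (assuming a systemd-based desktop session, as on Fedora*) is:

$ echo $XDG_SESSION_TYPE
x11

A value of "x11" indicates an Xorg session, while "wayland" indicates a Wayland session.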

2.4 Configuring the Fedora* system (optional)

If the system is behind a VPN or proxy, you can use the following method to set up the network proxy for dnf:

vi /etc/dnf/dnf.conf
# Add the following lines:
proxy=http://<proxy address>:<port>

Enable sudo privileges:

$ su
Password:
# vi /etc/sudoers
Find the line:
  root    ALL=(ALL)    ALL
Then add a similar line for the normal user who needs to use sudo; for example, for user "user":
  user    ALL=(ALL)    ALL

2.5 Installing rpm fusion

Fedora* 24:

wget http://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-24.noarch.rpm -e use_proxy=yes -e http_proxy=<proxy_address>:<port>
sudo rpm -ivh rpmfusion-free-release-24.noarch.rpm

Fedora* 25:

wget http://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-25.noarch.rpm -e use_proxy=yes -e http_proxy=<proxy_address>:<port>
sudo rpm -ivh rpmfusion-free-release-25.noarch.rpm

2.6 Updating system

sudo dnf update

3 Installing Intel(R) Media Server Studio 2017

3.1 Downloading Intel(R) Media Server Studio (MSS) 2017 R2 Community Edition

Go to https://software.intel.com/en-us/intel-media-server-studio and download the tar.gz file

3.2 Installing the user-space modules for MSS

Note: Before starting the following command sequence, be aware that the last cp command may reset the system; it may freeze for a while and log out automatically. This is expected; log back in and resume the installation procedure. Create a folder for installation, for example "development", and download the tar file MediaServerStudioEssentials2017R2.tar.gz to this folder.

# cd ~
# mkdir development
# cd development
# tar -vxf MediaServerStudioEssentials2017R2.tar.gz
# cd MediaServerStudioEssentials2017R2/
# tar -vxf SDK2017Production16.5.1.tar.gz
# cd SDK2017Production16.5.1/Generic/
# tar -vxf intel-linux-media_generic_16.5.1-59511_64bit.tar.gz
# sudo cp -rdf etc/* /etc
# sudo cp -rdf opt/* /opt
# sudo cp -rdf lib/* /lib
# sudo cp -rdf usr/* /usr

3.3 Installing the custom kernel module package

3.3.1 Install the build tools

# sudo dnf install kernel-headers kernel-devel bc wget bison ncurses-devel hmaccalc zlib-devel binutils-devel elfutils-libelf-devel rpm-build redhat-rpm-config asciidoc hmaccalc perl-ExtUtils-Embed pesign xmlto audit-libs-devel binutils-devel elfutils-devel elfutils-libelf-devel newt-devel numactl-devel pciutils-devel python-devel zlib-devel mesa-dri-drivers openssl-devel

3.3.2 Download and build the kernel

# cd ~/development
# wget https://www.kernel.org/pub/linux/kernel/v4.x/linux-4.4.tar.xz -e use_proxy=yes -e https_proxy=https://<proxy address>:<port>
# tar -xvf linux-4.4.tar.xz
# cp /opt/intel/mediasdk/opensource/patches/kmd/4.4/intel-kernel-patches.tar.bz2 .
# tar -xjf intel-kernel-patches.tar.bz2
# cd linux-4.4/
# vi patch.sh
(Added: "for i in ../intel-kernel-patches/*.patch; do patch -p1 < $i; done")
# chmod +x patch.sh
# ./patch.sh
# make olddefconfig
# echo "CONFIG_NVM=y">> .config
# echo "CONFIG_NVM_DEBUG=n">> .config
# echo "CONFIG_NVM_GENNVM=n">> .config
# echo "CONFIG_NVM_RRPC=n">> .config
# make -j 8
# sudo make modules_install
# sudo make install

3.3.3 Validate the kernel change

Reboot the system with kernel 4.4 and check the kernel version

# uname -r
4.4.0

3.3.4 Validate the MSDK installation

The vainfo utility should show the Media SDK iHD driver details (installed in /opt/intel/mediasdk) and several codec entry points that indicate the system support for various codec formats.

$ vainfo
libva info: VA-API version 0.99.0
libva info: va_getDriverName() returns 0
libva info: User requested driver 'iHD'
libva info: Trying to open /opt/intel/mediasdk/lib64/iHD_drv_video.so
libva info: Found init function __vaDriverInit_0_32
libva info: va_openDriver() returns 0
vainfo: VA-API version: 0.99 (libva 1.67.0.pre1)
vainfo: Driver version: 16.5.1.59511-ubit
vainfo: Supported profile and entrypoints
 VAProfileH264ConstrainedBaseline: VAEntrypointVLD
 VAProfileH264ConstrainedBaseline: VAEntrypointEncSlice
 VAProfileH264Main : VAEntrypointVLD
 VAProfileH264Main : VAEntrypointEncSlice
 VAProfileH264High : VAEntrypointVLD
 VAProfileH264High : VAEntrypointEncSlice

Prebuilt samples are available for installation smoke testing in MediaSamples_Linux_*.tar.gz

# cd ~/development/MediaServerStudioEssentials2017R2/
# tar -vxf MediaSamples_Linux_2017R2.tar.gz
# cd MediaSamples_Linux_2017R2_b634/samples/_bin/x64/
# ./sample_multi_transcode -i::h264 ../content/test_stream.264 -o::h264 out.264

This test should pass on successful installation.

4 Installing GStreamer*

4.1 Install GStreamer* and corresponding plugins packages

# sudo dnf install gstreamer1 gstreamer1-devel gstreamer1-plugins-base gstreamer1-plugins-base-devel gstreamer1-plugins-good gstreamer1-plugins-ugly gstreamer1-plugins-bad-free gstreamer1-plugins-bad-freeworld gstreamer1-plugins-bad-free-extras gstreamer1-libav gstreamer1-plugins-bad-free-devel gstreamer1-plugins-base-tools

4.2 Validate the installation

# gst-launch-1.0 --version
# gst-launch-1.0 -v fakesrc num-buffers=5 ! fakesink
# gst-play-1.0 sample.mkv

5 Building the GStreamer* MSDK plugin

5.1 Install the GStreamer* MSDK plugin dependencies

# sudo dnf install gcc-c++ glib2-devel libudev-devel libwayland-client-devel libwayland-cursor-devel mesa-libEGL-devel mesa-libGL-devel mesa-libwayland-egl-devel mesa-libGLES-devel libstdc++-devel cmake libXrandr-devel

5.2 Download the GStreamer* MSDK plugin

Go to https://github.com/01org/gstreamer-media-SDK and download the package to a "development" folder.

5.3 Build and install the plugin

# cd development/gstreamer-media-SDK-master/
# mkdir build
# cd build
# cmake .. -DCMAKE_INSTALL_PREFIX=/usr/lib64/gstreamer-1.0/plugins
# make
# sudo make install

5.4 Validate the installation

# gst-inspect-1.0 mfxvpp
# gst-inspect-1.0 mfxdecode
# gst-play-1.0 sample.mkv
# gst-launch-1.0 filesrc location=/path/to/BigBuckBunny_320x180.mp4 ! qtdemux ! h264parse ! mfxdecode ! fpsdisplaysink video-sink=mfxsink

You can go to the following site to download the clip: http://download.blender.org/peach/bigbuckbunny_movies/

Configure SNAP* Telemetry Framework to Monitor Your Data Center


Figure 1. Snap* logo.

Introduction

Would you believe that you can get useful, insightful information on your data center's operations AND provide a cool interface to it that your boss will love—all in the space of an afternoon, and entirely with open source tooling? It's true. Free up a little time to read this, and then maybe free up your afternoon to reap the benefits!

This article shows you how to use Snap* to rapidly select and begin collecting useful measurements, from basic system information to metrics, on sophisticated cloud orchestration.

We'll also show you how to publish that information in ways that are useful to you, as someone who needs true insight into their data center's operations (and, possibly, ways to trigger automation on the basis of it). Finally, we'll show how to publish that information in ways that your management will like, making a useful dashboard with Grafana*.

After that, you'll discover a way to do all of that even faster. Let's get started!

What Is Snap*?

Snap is an open-source telemetry framework. Telemetry is simply information about your data center systems. It covers anything and everything you can collect, from basic descriptive information, to performance counters and statistics, to log file entries.

In the past, this huge stew of information has been difficult to synthesize and analyze together. There were collectors for log files that were separate from collectors for performance data, and so on. Snap unifies the collection of common telemetry as a set of community-provided plugins. It does quite a bit more than that too, but for now let's drill down on collectors, and introduce our demonstration environment.

A Sample Data Center

To keep things simple we'll be working with a small Kubernetes* cluster, consisting of only two hosts. Why Kubernetes? We're aiming at a typical cloud data center. We could have chosen Mesos* or OpenStack* just as easily; Snap has plugins for all of them. Kubernetes just gives us an example to work with. It's important to realize that even if you're running another framework or even something proprietary in your data center, you can still benefit from the system-level plugins. The principles are the same, regardless.

The test environment has two nodes. One is a control node from which we will control and launch Kubernetes Pods; the other is the sole Kubernetes worker node.

We'll be collecting telemetry from both hosts, and the way that Snap is structured makes it easy to extrapolate how to do much larger installations from this smallish example. The nodes are running CentOS* 7.2. This decision was arbitrary; Snap supports most Linux* distributions by distributing binary releases in both RPM* and Debian* packaging formats.

Installation and Initial Setup

We'll need to install Snap, which is quite simple, using the packaging. Complete instructions are available at the home of the Snap repository. We won't repeat all those instructions here, but for simplicity, these are the steps that we took on both of our CentOS 7.2 hosts:

curl -s https://packagecloud.io/install/repositories/intelsdi-x/snap/script.rpm.sh | sudo bash

The point of this step is to set up the appropriate package repository from which to install Snap.

Note: It is not a good idea to run code straight off the Internet as root. This is done here for ease-of-use in an isolated lab environment. You can and should download the script separately, and examine it to be satisfied with exactly what it will do. Alternatively, you can build Snap from source and install it using your own cluster management tools.
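For example, a more cautious version of the step above might look like this (the local file name is just an illustration):

curl -s https://packagecloud.io/install/repositories/intelsdi-x/snap/script.rpm.sh -o snap-repo.sh
less snap-repo.sh    # inspect the script before running it
sudo bash snap-repo.sh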

After that, we can install the package:

     sudo yum install -y snap-telemetry

This step installs the Snap binaries and startup scripts. We’ll make sure the Snap daemon runs on system startup, and that we’re running it now:

     systemctl enable snap-telemetry
     systemctl start snap-telemetry

Now we have the snap daemon (snapteld) up and running. Below is some sample output from systemctl status snap-telemetry. You can also validate that you are able to communicate with the daemon with a command like ‘snaptel plugin list’. For now, you'll just get a message that says ‘No plugins found. Have you loaded a plugin?’ This is fine, and it means you're all set.


Figure 2. Screen capture showing the snap daemon ‘snapteld’ running.
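For example, on a fresh installation, checking communication with the daemon should look like this (the reply is the message quoted above):

$ snaptel plugin list
No plugins found. Have you loaded a plugin?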

Now we're ready to get some plugins!

The Plugin Catalog

The first glimpse of the power of Snap is in taking a look at the Snap plugin catalog. For now, we'll concentrate on the first set of plugins, labeled 'COLLECTORS'. Have a quick browse through that list and you'll see that Snap's capabilities are quite extensive. There are low-level system information collectors like PSUtil* and Meminfo*. There are Intel® processor-specific collectors such as CPU states and Intel® Performance Counter Monitor (Intel® PCM). There are service and application-level collectors such as Apache* and Docker*. There are cluster-level collectors such as the OpenStack services (Keystone*, and so on) and Ceph*.

The first major challenge a Snap installer faces is what NOT to collect! There's so much available that it can quickly add up to a mountain of data. We'll leave those decisions to you, but for our examples, we're going to select three:

  • PSUtil—basic system information, common to any Linux data center.
  • Docker—Information about Docker containers and the Docker system.
  • Kubestate*—Information about Kubernetes clusters.

This selection will give us a good spread of different types of collectors to examine.

Installing Plugins

Installing a plugin is a relatively straightforward process. We'll start with PSUtil. First, look at the plugin catalog and click the release link on the far right of the PSUtil entry, shown here:


Figure 3. The line for PSUtil in the plugin catalog.

On the next page, the most recent release is at the top of the page. We'll copy the link to the binary for the current release of PSUtil for Linux x86_64.


Figure 4. Copying the binary link from the plugin release page.

Now we’ll download that plugin and load it. Paste the URL that you copied above into a pair of commands like the ones below. You should receive some output indicating that the plugin loaded; check that with ‘snaptel plugin list’.
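(A sketch following the same pattern as the Docker plugin below; replace <release> with the release number from the link you copied.)

curl -sfL https://github.com/intelsdi-x/snap-plugin-collector-psutil/releases/download/<release>/snap-plugin-collector-psutil_linux_x86_64 -o snap-plugin-collector-psutil

snaptel plugin load snap-plugin-collector-psutil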

Following the exact same procedure with the Docker plugin will work great:

curl -sfL https://github.com/intelsdi-x/snap-plugin-collector-docker/releases/download/7/snap-plugin-collector-docker_linux_x86_64 -o snap-plugin-collector-docker

snaptel plugin load snap-plugin-collector-docker

The final plugin we are interested in is Kubestate*. Note that the maintainer of Kubestate is not Intel, but Grafana. That means the releases are not maintained in the same GitHub* repository, so the procedure has to change a bit. Fortunately, by examining the documentation of the Kubestate repository, you can easily find the Kubestate release repository.

From there the procedure is exactly the same:

curl -sfL https://github.com/grafana/snap-plugin-collector-kubestate/releases/download/1/snap-plugin-collector-kubestate_linux_x86_64 -o snap-plugin-collector-kubestate

snaptel plugin load snap-plugin-collector-kubestate

If you want to track current code updates, you are more than welcome to build your own binaries and load them instead of the precompiled releases. Most of the plugin repository home pages provide instructions for doing so.

Publishers

We aren't quite ready to collect information just yet with Snap. Let’s take a look at the overall flow of Snap:


Figure 5. Snap workflow.

You can find the collectors we’ve been dealing with easily on the left-hand side.

In the middle are processors, which is another set of plugins available in the plugin catalog. We won't be installing any of these as part of this demonstration, but they are very useful plugins for making your telemetry data genuinely useful to you. Statistical transformations and filtering can help you get a handle on your operational environment, or even trigger automatic responses to loading or other events. Tagging allows you to usher data through to appropriate endpoints; for example, separating data out by customer, in the case of a cloud service provider data center.

Finally, on the right you can find publishers. These plugins allow you to take the collected, processed telemetry and output it to something useful. Note that Snap itself doesn't USE the telemetry. It is all about collecting, processing, and publishing the telemetry data as simply and flexibly as possible, but what is done with it from that point is up to you.

In the simplest case, Snap can publish to a file on the local file system. It can publish to many different databases such as MySQL* or Cassandra*. It can publish to message queues like RabbitMQ*, to feed into automation systems.

For our examples, we're going to use the Graphite* publisher, for three reasons. One is that Graphite itself is a flexible, well-known, and useful package for dealing with lots of metrics. A data center operation can use the information straight out of Graphite to get all kinds of useful insight into their data center operations. The second reason we're using Graphite is that it feeds naturally into Grafana, which will give us a pretty, manager-friendly dashboard. Finally, most of the example tasks (which we'll discuss shortly) that are provided in the plugin repositories are based on a simple file publisher. Using Graphite will involve a bit more complexity and is a more likely real-world use of publisher plugins.

Loading the publisher plugin works exactly the same as the previous plugins: Find the latest binary release, download it, and load it:

curl -sfL https://github.com/intelsdi-x/snap-plugin-publisher-graphite/releases/download/6/snap-plugin-publisher-graphite_linux_x86_64 -o snap-plugin-publisher-graphite

snaptel plugin load snap-plugin-publisher-graphite

Now we have all the plugins we need for our installation:

[plse@cspnuc03 ~]$ snaptel plugin list
NAME             VERSION         TYPE            SIGNED          STATUS          LOADED TIME
psutil           9               collector       false           loaded          Mon, 27 Mar 2017 20:36:05 EDT
docker           7               collector       false           loaded          Mon, 27 Mar 2017 20:39:45 EDT
kubestate        1               collector       false           loaded          Mon, 27 Mar 2017 20:51:17 EDT
graphite         6               publisher       false           loaded          Mon, 27 Mar 2017 21:03:31 EDT
[plse@cspnuc03 ~]$

We're almost ready to tie all this together. First, we need to pick out the metrics we're interested in collecting.

Metrics

Now that you've got all these plugins installed, it's time to select some metrics that you're interested in collecting. Most plugins offer a large listing of metrics; you may not want to take all of them.

To see what's available, you can view the master list of metrics that are available from the plugins you have installed with a single command:

snaptel metric list

The output from this is quite long—234 lines for the three collectors we have loaded, at the time of this writing. Rather than paste all of the output here, we'll just look at a few from each namespace, generated by our three collector plugins.
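If you want to narrow that listing down to a single plugin's namespace, ordinary shell filtering works fine:

snaptel metric list | grep '/intel/psutil'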

PSUtil

There are many metrics available from this package that should be readily identifiable to anyone who runs Linux. Here's the selection we'll go with for our examples:

     /intel/psutil/load/load1
     /intel/psutil/load/load15
     /intel/psutil/load/load5
     /intel/psutil/vm/available
     /intel/psutil/vm/free
     /intel/psutil/vm/used

These look like filesystem paths, but they are not. They are Snap namespace paths. The first element, 'intel', indicates the maintainer of the plugin. The second is the name of the plugin, after which comes the namespaces of the various metrics provided.

The metrics themselves are familiar; the typical, three-value load-average numbers, for 1-minute, 5-minute, and 15-minute averages, and some simple memory usage values.

Docker

For the Docker plugin, there are 150 metrics available at the time of this writing. They run from simple container information to details about network and filesystem usage per-container. For this, we'll take a few of the broader values, as recommended in the task examples provided in the plugin repository.

     /intel/docker/*/spec/*
     /intel/docker/*/stats/cgroups/cpu_stats/*
     /intel/docker/*/stats/cgroups/memory_stats/*

Kubestate

The Kubestate plugin (at /grafanalabs/kubestate in the namespace) provides 34 metrics for tracking Kubernetes information. Since Kubernetes is the top-level application platform, it's worth ensuring that we collect all of them. The full list will show up in the task definition file, below.

All of the metrics lists and documentation (available from the plugin repos) are worth having a closer look to get the insight you need for your workloads.

Now that we've selected some metrics to track, let's actually get some data flowing!

Our First Task

Putting together collector, processor, and publisher processors into an end-to-end workflow is performed by specifying task manifests. These are either JSON or YAML definition files that are loaded into the running Snap daemon to tell it what data to collect, what to do with it, and where to send it. A single Snap daemon can be in charge of many tasks at once, meaning you can run multiple collectors and publish to multiple endpoints. This makes it very easy and flexible to direct telemetry where you need it, when you need it.

All of the plugin repositories generally include some sample manifests to help you get started. We're going to take some of those and extend them just a bit to tie in the Graphite publisher.

For Graphite itself, we've run a simple graphite container image from Docker Hub* on the 'control' node in our setup. Of course, you can use any Graphite service you wish or, of course, try another publisher. (For example, you may be using Elasticsearch*, and there's a publisher plugin for that!)

sudo docker run -d --name graphite --restart=always -p 80:80 -p 2003-2004:2003-2004 -p 2023-2024:2023-2024 -p 8125:8125/udp -p 8126:8126 hopsoft/graphite-statsd

The first task we define will be to collect data from the PSUtil plugin and publish it to our Graphite instance. We'll start with this YAML file. It's saved in our lab to psutil-graphite.yaml.

---
version: 1
schedule:
  type: simple
  interval: 1s
max-failures: 10
workflow:
  collect:
    metrics:"/intel/psutil/load/load1": {}"/intel/psutil/load/load15": {}"/intel/psutil/load/load5": {}"/intel/psutil/vm/available": {}"/intel/psutil/vm/free": {}"/intel/psutil/vm/used": {}
    publish:
    - plugin_name: graphite
      config:
        server: cspnuc02
        port: 2003

The best part about this is that it is straightforward and quite readable. To begin with, we've defined a collection interval of one second in the schedule section. The simple schedule type just means to run at every interval. There are two other types: windowed, which lets you define exactly when the task will start and stop, and cron, which allows you to set a crontab-like entry for when the task will run. There's lots more information on the flexibility of task scheduling in the task documentation.

The ‘max-failures’ value indicates just what you would expect: After 10 consecutive failures of the task, the Snap daemon will disable the task and stop trying to run it.

Finally, the ‘workflow’ section defines our collector and our publisher, what metrics to collect, and defines the Graphite service to connect to as cspnuc02 on port 2003. This is the control node where the Graphite container we're using is running. Starting this task on both machines will ensure that both report into the same Graphite server.

To create the task, we'll run the task creation command:

     snaptel task create -t psutil-graphite.yaml

Assuming all is well, you should see output similar to this:

     [plse@cspnuc03 ~]$ snaptel task create -t psutil-graphite.yaml
     Using task manifest to create task
     Task created
     ID: d067a659-0576-44eb-95fe-0f01f7e33fbf
     Name: Task-d067a659-0576-44eb-95fe-0f01f7e33fbf
     State: Running

The task is running! For now, anyway. There's a couple of ways to keep tabs on your running tasks. One is just to list the installed tasks:

snaptel task list

You'll get a listing like the following that shows how many times the task has completed successfully (HIT), come up empty with no values (MISS), or failed altogether (FAIL). It will also indicate if the task is Running, Stopped, or Disabled.

ID                                     NAME                                        STATE    HIT  MISS  FAIL  CREATED           LAST FAILURE
d067a659-0576-44eb-95fe-0f01f7e33fbf   Task-d067a659-0576-44eb-95fe-0f01f7e33fbf   Running  7K   15    3     3:55PM 3-28-2017  rpc error: code = 2 desc = Error Connecting to graphite at cspnuc02:2003. Error: dial tcp: i/o timeout

Even though you can see a failure here (from the last time the task failed), you can also see that the task is still running: it has 7000 hits, 15 misses, and only 3 failures. Without 10 consecutive failures, the task keeps trying to run.
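
If a task does hit the failure limit and gets disabled, snaptel can stop, start, and re-enable tasks by ID. A sketch using the task ID from our example; consult snaptel task --help for the exact subcommands in your version:

# Stop and restart a running task:
snaptel task stop d067a659-0576-44eb-95fe-0f01f7e33fbf
snaptel task start d067a659-0576-44eb-95fe-0f01f7e33fbf

# Re-enable a task that was disabled after too many consecutive failures:
snaptel task enable d067a659-0576-44eb-95fe-0f01f7e33fbf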

Another way to see your tasks working is to watch them. Take the task ID from the output above and paste it into this command:

snaptel task watch d067a659-0576-44eb-95fe-0f01f7e33fbf

You'll get a continuously updating text-mode view of the values as they stream by:

Figure 6. Output from snaptel task watch.

Now that we've created our first simple task, let's get the other collectors collecting!

Remaining Tasks

For Docker, our YAML looks like this, saved as docker-graphite.yaml:

---
max-failures: 10
schedule:
  interval: 5s
  type: simple
version: 1
workflow:
  collect:
    config:
      /intel/docker:
        endpoint: unix:///var/run/docker.sock
    metrics:
      /intel/docker/*/spec/*: {}
      /intel/docker/*/stats/cgroups/cpu_stats/*: {}
      /intel/docker/*/stats/cgroups/memory_stats/*: {}
    publish:
      -
        config:
          server: cspnuc02
          port: 2003
        plugin_name: graphite

By now you can probably see the general outline of how this works. Note the additional configuration needed for the collector to communicate with the local Docker daemon. In this case we're using a socket; other examples work with a network connection to Docker on a specific port.
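
If the collector can't reach the daemon, the socket path is the first thing to check. A quick sketch follows; the TCP endpoint shown in the comment is a hypothetical example, not part of our lab setup:

# Verify the Docker socket the collector is configured to use exists:
ls -l /var/run/docker.sock

# If your Docker daemon listens on TCP instead, the manifest's endpoint
# value would take a form like this (hypothetical host/port):
#   endpoint: tcp://127.0.0.1:2375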

We'll enable this one the same way on both nodes:

snaptel task create -t docker-graphite.yaml

Finally, we'll enable Kubestate. This one is a bit different from the other two. We don't want to enable it on both nodes, since we're interested in the overall state of the Kubernetes cluster rather than the values on a specific node. Let's enable it on the control node only.

The example task manifest for Kubestate is a JSON file, so our modified version is too. This one is saved as kubestate-graphite.json:

{"version": 1,"schedule": {"type": "simple","interval": "10s"
  },"workflow": {"collect": {"metrics": {"/grafanalabs/kubestate/container/*/*/*/*/limits/cpu/cores": {},"/grafanalabs/kubestate/container/*/*/*/*/limits/memory/bytes": {},"/grafanalabs/kubestate/container/*/*/*/*/requested/cpu/cores": {},"/grafanalabs/kubestate/container/*/*/*/*/requested/memory/bytes": {},"/grafanalabs/kubestate/container/*/*/*/*/status/ready": {},"/grafanalabs/kubestate/container/*/*/*/*/status/restarts": {},"/grafanalabs/kubestate/container/*/*/*/*/status/running": {},"/grafanalabs/kubestate/container/*/*/*/*/status/terminated": {},"/grafanalabs/kubestate/container/*/*/*/*/status/waiting": {},"/grafanalabs/kubestate/deployment/*/*/metadata/generation": {},"/grafanalabs/kubestate/deployment/*/*/spec/desiredreplicas": {},"/grafanalabs/kubestate/deployment/*/*/spec/paused": {},"/grafanalabs/kubestate/deployment/*/*/status/availablereplicas": {},"/grafanalabs/kubestate/deployment/*/*/status/deploynotfinished": {},"/grafanalabs/kubestate/deployment/*/*/status/observedgeneration": {},"/grafanalabs/kubestate/deployment/*/*/status/targetedreplicas": {},"/grafanalabs/kubestate/deployment/*/*/status/unavailablereplicas": {},"/grafanalabs/kubestate/deployment/*/*/status/updatedreplicas": {},"/grafanalabs/kubestate/node/*/spec/unschedulable": {},"/grafanalabs/kubestate/node/*/status/allocatable/cpu/cores": {},"/grafanalabs/kubestate/node/*/status/allocatable/memory/bytes": {},"/grafanalabs/kubestate/node/*/status/allocatable/pods": {},"/grafanalabs/kubestate/node/*/status/capacity/cpu/cores": {},"/grafanalabs/kubestate/node/*/status/capacity/memory/bytes": {},"/grafanalabs/kubestate/node/*/status/capacity/pods": {},"/grafanalabs/kubestate/node/*/status/outofdisk": {},"/grafanalabs/kubestate/pod/*/*/*/status/condition/ready": {},"/grafanalabs/kubestate/pod/*/*/*/status/condition/scheduled": {},"/grafanalabs/kubestate/pod/*/*/*/status/phase/Failed": {},"/grafanalabs/kubestate/pod/*/*/*/status/phase/Pending": {},"/grafanalabs/kubestate/pod/*/*/*/status/phase/Running": {},"/grafanalabs/kubestate/pod/*/*/*/status/phase/Succeeded": {},"/grafanalabs/kubestate/pod/*/*/*/status/phase/Unknown": {}
      },"config": {"/grafanalabs/kubestate": {"incluster": false,"kubeconfigpath": "/home/plse/.kube/config"
        }
      },"process": null,"publish": [
        {"plugin_name": "graphite","config": {"server": "localhost","port": 2003
          }
        }
      ]
    }
  }
}

Again, most of this is quite straightforward. As noted above, we're collecting all metrics available in the namespace, straight from what we would have gotten from snaptel metric list. To configure the collector itself, we tell it that we're not running from within the cluster ("incluster": false), and where to look for information on how to connect to the cluster management ("kubeconfigpath": "/home/plse/.kube/config").

The Kubernetes config file referenced there contains the server name and port to connect to, and the cluster and context to use when conducting queries. Clearly, multiple tasks could be set up to query different clusters and contexts, and route the data as desired. We could even add a tagging processor plugin to tag the data by cluster and deliver it with the tags; that would allow us to split cluster data out by customer, for example.
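
If you want to see which clusters and contexts a given kubeconfig defines before pointing a task at it, kubectl can show you. A sketch, assuming kubectl is installed on the control node:

# List the contexts defined in the kubeconfig the collector will use:
kubectl config get-contexts --kubeconfig /home/plse/.kube/config

# Show the server endpoint for the currently selected context:
kubectl config view --kubeconfig /home/plse/.kube/config --minify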

Also note that here the server for Graphite is ‘localhost’ since only this one node needs to access the service. We could have used the hostname here as well; it works either way.

Creating the task is the same as for the others:

snaptel task create -t kubestate-graphite.json

Once we're satisfied that our tasks are up and running, we can go take a look at them with Graphite's native tools.

Real-World Deployments

Let's pause here for a quick look at how to make these kinds of settings permanent, as well as how to integrate Snap's tooling into a real-world environment.

The Snap daemon has several methods of configuration. We've been doing everything via the command-line interface, but none of it is persistent. If we rebooted our nodes right now, the Snap daemon would start up, but it wouldn't load the plugins and tasks we've defined for it.

To make that happen, you would want to use the guidelines at Snap Daemon Configuration.

We won't get into the specifics here, but suffice it to say that /etc/snap/snapteld.conf can be set up as either a YAML or JSON file containing more or less the same information that our task manifests did. This file is sufficient to install plugins and run tasks at boot time. It also defines many defaults about the way Snap runs, so you can tune the daemon to collect properly without imposing too much of its own overhead on your servers.
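
As a rough sketch only (the key names below are illustrative; check the Snap daemon configuration documentation for the authoritative schema), a minimal YAML configuration might be written like this:

# Write a minimal snapteld.conf -- illustrative values and key names,
# to be verified against the Snap daemon configuration docs:
sudo tee /etc/snap/snapteld.conf > /dev/null <<'EOF'
---
log_level: 3
control:
  plugin_trust_level: 0
  auto_discover_path: /opt/snap/plugins:/opt/snap/tasks
EOF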

You may also have been wondering about loading plugins and how secure that process is. The default installation method we've used here sets plugin_trust_level to 0 in the /etc/snap/snapteld.conf configuration file, which means the plugins we've been downloading and installing haven't been checked for integrity by the daemon.

Snap uses GPG* keys and signing methods to allow you to sign and trust binaries in your deployment. The instructions for doing this are at Snap Plugin Signing. Again, it is beyond the scope of this article to examine this system deeply, but we strongly advise that any production deployments integrate with the plugin signing system, and that signatures are distributed independently of the plugins. This should not be an unusual model for deploying software in most data centers (although the work will probably run past our single afternoon).
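
At a high level, the flow looks like the sketch below: sign the plugin binary with a detached GPG signature, distribute the signature separately, and supply it when loading the plugin. Treat the snaptel invocation as an approximation and verify it against the plugin signing docs:

# Generate a detached, ASCII-armored signature for a plugin binary:
gpg --armor --detach-sign snap-plugin-collector-psutil

# Load the plugin together with its signature (approximate invocation;
# see the Snap plugin signing documentation for your version):
snaptel plugin load snap-plugin-collector-psutil snap-plugin-collector-psutil.asc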

Examining the Data

The Graphite container that we ran earlier exposes a useful web interface from the server. If we connect to it, we'll find that the available metrics are listed on the left-hand side of the resulting page.

The headings by hostname are the ones that we're interested in here (the others are defaults provided by the Graphite and StatsD* container system we've started). In the screenshot below I've expanded some of the items so you can get a feel for where the metrics come out in Graphite.

Figure 7. Metrics in the Graphite homepage.

From here it would be quite simple to construct Graphite-based graphs that offer interesting and useful data about your servers. For example, Figure 8 shows a graph that we constructed that looks at the Kubernetes node, and combines information about system load, Docker containers, and Kubernetes pods over a 30-minute period.

You can see from here when a new pod was launched and then removed. The purple line that shoots to 1.0 and drops again is for one of the containers in the pod; it didn't exist before spawning, and ceased to exist afterwards.

The green line is one-minute load average on the worker node, and the red line is Docker memory utilization on the same node.

Figure 8. A simple Graphite chart.

This is a simple example, just to give an idea of what can be generated; the point is that it took very little time to construct a graph with viable information. A little experimentation with your specific environment and workloads will almost certainly reveal more useful data and uses in a very short time!

Making a Nice Dashboard

From here, it's a relatively simple matter to make some nice, boss-friendly charts out of our pile of Graphite data. First, we'll load up another couple of quick containers on the control host: one to store our Grafana data, and one to run the tool itself:

sudo docker run -d -v /var/lib/grafana --name grafana-storage busybox:latest
sudo docker run -d -p 3000:3000 --name=grafana --volumes-from grafana-storage grafana/grafana

Now you've got a Grafana installation running on port 3000 of the node these commands were run on. We'll bring it up in a browser and log in with the default credentials of admin for both Username and Password. You will arrive at the Home dashboard, and the next task on the list is to add a data source. Naturally, we'll add our Graphite installation:

Figure 9. Adding a data source in Grafana.
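
If you prefer to script this step, Grafana also exposes an HTTP API for managing data sources. A sketch, assuming the default admin/admin credentials and our lab host names:

# Add the Graphite data source through Grafana's HTTP API:
curl -s -u admin:admin -H "Content-Type: application/json" \
  -X POST http://cspnuc02:3000/api/datasources \
  -d '{"name":"graphite","type":"graphite","url":"http://cspnuc02:80","access":"proxy"}'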

Once that's set up, we can return to the Home dashboard using the pull-down menu from the Grafana logo in the upper-left corner. Select Dashboards and Home. From there, click New Dashboard. You'll get a default, empty graph on that page. Click the Panel Title, then Edit, and you can add metrics, change the title, and so on.

Adding metrics works the same here as it did on the Graphite screen, more or less. A little time spent exploring can give you something reasonably nice, as shown below. In an afternoon you could generate a very cool dashboard for your data sources!

Here's a quick dashboard we did with our one-minute load average chart and a chart of memory utilization of all Kubernetes containers in the cluster. The containers come into being as the Pod is deployed, which is why there is no data for them on the first part of the graph. We can also see that the initial deployment of the Pod was quickly halted and re-spawned between 17:35 and 17:40.

Figure 10. A simple dashboard in Grafana.

This particular view may or may not be useful to you; the point is that generating information that is useful to you is quick and simple.

Once Again, But Faster!

By now we've explored a lot of Snap's potential, but one area we haven't covered much is its extensibility. The plugin framework and open source tooling allow it to be extended quite easily by anyone interested.

For the example we used here, a Kubernetes setup, it turns out there is a nice extension designed to plug directly into Snap, with a full set of metrics and dashboards already available: the Grafana Kubernetes app. It runs all the components directly in your Kubernetes cluster instead of outside it, the way we've done in this article.

You can find it on the Grafana Kubernetes app page.

Besides prepared plugin chains like this one, other areas of extension are possible as well. For example, schedulers beyond the basic three (simple, windowed, cron) can be slotted in with relative ease. And of course, new collector and publisher plugins are always welcome!

Summary

In this article we've introduced you to Snap, an extensible and easy-to-use framework for collecting data center telemetry. We've talked about the kinds of system, application, and cluster-level data that Snap can collect (including Intel® architecture-specific counters like CPU states and Intel PCM). We've demonstrated some common ways to put plugins together to produce usable data for data center operators and, furthermore, create good-looking graphs with Grafana. We've shown you how to install, configure, add plugins, schedule and create tasks, and check the health of the running tasks.

We've also had a short discussion on how to take it to the next level and deploy Snap for real with signed plugins and persistent state. Finally, we've shown that Snap is easily extended to deliver the telemetry you need, to where you need it.

We hope you take an afternoon to try it out and see what it can do for you!

About the Author

Jim Chamings is a Sr. Software Engineer at Intel Corporation, who focuses on enabling cloud technology for Intel’s Developer Relations Division. He’d be happy to hear from you about this article at: jim.chamings@intel.com.


Intel® XDK FAQs - General


How can I get started with Intel XDK?

There are plenty of videos and articles that you can go through here to get started. You could also start with some of our demo apps. It may also help to read Five Useful Tips on Getting Started Building Cordova Mobile Apps with the Intel XDK, which will help you understand some of the differences between developing for a traditional server-based environment and developing for the Intel XDK hybrid Cordova app environment.

Having prior understanding of how to program using HTML, CSS and JavaScript* is crucial to using the Intel XDK. The Intel XDK is primarily a tool for visualizing, debugging and building an app package for distribution.

You can do the following to access our demo apps:

  • Select Project tab
  • Select "Start a New Project"
  • Select "Samples and Demos"
  • Create a new project from a demo

If you have specific questions after going through those resources, please post them to our forums.

Do I need to use the Intel XDK to complete the HTML5 from W3C Xseries Course?

It is not required that you use the Intel XDK to complete the HTML5 from W3C Xseries course. There is nothing in the course that requires the Intel XDK. 

All that is needed to complete the course is the free Brackets HTML5 editor. Whenever the course refers to using the "Live Layout" feature of the Intel XDK, use the "Live Preview" feature in Brackets instead. The Intel XDK "Live Layout" feature is directly derived from, and is nearly identical to, the Brackets "Live Preview" feature.

For additional help, see this Intel XDK forum post and this Intel XDK forum thread.

Error contacting remote build servers.

If you consistently see this error message while using the Build tab, and you are logged into the Intel XDK, it is likely due to using an obsolete and unsupported version of the Intel XDK. Check your Intel XDK version number (four digit number in the upper-right corner of the Intel XDK) and review the Intel XDK Release Notes for information regarding which versions are currently supported.

You must upgrade to a new version of the Intel XDK to resolve this issue.

NOTICE: Internet connection and login are required.

If you have successfully logged into the Intel XDK, but you are seeing the error message in the image below, when using the Build or Test tab, it may be due to an obsolete and unsupported version of the Intel XDK. Please check your Intel XDK version number (four digit number in the upper-right corner of the Intel XDK) and review the Intel XDK Release Notes for information regarding which versions are currently supported.

Otherwise, please review this FAQ for help creating an Intel XDK login.

I cannot login to the Intel XDK, how do I create a userid and password to use the Intel XDK?

If you have downloaded and installed the Intel XDK but are having trouble creating a userid and password, you can create your login credentials outside of the Intel XDK. To do this, go to the Intel Developer Zone and push the "Join Today" button. After you have created your Intel Developer Zone login you can return to the Intel XDK and use that userid and password to login to the Intel XDK. This same userid and password can also be used to login to the Intel XDK forum.

I cannot login to App Preview with my Intel XDK password.

On some devices you may have trouble entering your Intel XDK login password directly on the device in the App Preview login screen. In particular, sometimes you may have trouble with the first one or two letters getting lost when entering your password.

Try the following if you are having such difficulties:

  • Reset your password, using the Intel login page, to something short and simple. (If you do not have an Intel XDK userid, go to the Intel XDK registration page to create one.)

  • Confirm that this new short and simple password works with the XDK (logout and login to the Intel XDK).

  • Confirm that this new password works with the Intel Developer Zone login.

  • Make sure you have the most recent version of Intel App Preview installed on your devices. Go to the store on each device to confirm you have the most recent copy of App Preview installed.

  • Try logging into Intel App Preview on each device with this short and simple password. Check the "show password" box so you can see your password as you type it.

If the above works, it confirms that you can log into your Intel XDK account from App Preview (because App Preview and the Intel XDK use the same technique to authenticate your login). Once the above works, you can reset your password to something else if you do not like the short and simple password you used for the test.

If you are having trouble logging into any pages on the Intel web site (including the Intel XDK forum), please see the Intel Sign In FAQ for suggestions and contact info. That login system is the backend for the Intel XDK login screen.

How can I change the email address associated with my Intel XDK login?

Login to the Intel Developer Zone with your Intel XDK account userid and password and then locate your "account dashboard." Click the "pencil icon" next to your name to open the "Personal Profile" section of your account, where you can edit your "Name & Contact Info," including the email address associated with your account, under the "Private" section of your profile.

Inactive account/login issue/problem updating an APK in store, How do I request account transfer?

As of June 26, 2015 we migrated all of Intel XDK accounts to the more secure intel.com login system (the same login system you use to access this forum).

We have migrated nearly all active users to the new login system. Unfortunately, there are a few active user accounts that we could not automatically migrate to intel.com, primarily because the intel.com login system does not allow the use of some characters in userids that were allowed in the old login system.

If you have not used the Intel XDK for a long time prior to June 2015, your account may not have been automatically migrated. If you own an "inactive" account it will have to be manually migrated -- try logging into the Intel XDK with your old userid and password to determine whether it still works. If you find that you cannot login to your existing Intel XDK account, and still need access to it, please send a message to html5tools@intel.com and include your userid and the email address associated with that userid, so we can guide you through the steps required to reactivate your old account.

Alternatively, you can create a new Intel XDK account. If you have submitted an app to the Android store from your old account you will need access to that old account to retrieve the Android signing certificates in order to upgrade that app on the Android store; in that case, send an email to html5tools@intel.com with your old account username and email and new account information.

I lost my project, how do I download my project source code from the Intel XDK servers?

We do not store your projects on our servers for any significant period of time, just long enough to perform a build or send for testing on App Preview. Your source code is located inside of the APK and IPA files you built. You will have to recreate the project settings, but you have all of the source if you have the APK (or IPA or Windows build). Rename the APK you have to a ZIP, for example from "my-app.apk" to "my-app.apk.zip" and then unzip that file using your favorite archive tool. For example, the contents of an APK based on the "hello-cordova" sample:
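
One way to extract and list those contents, as a sketch for a Mac or Linux system (the assets/www path assumes a standard Cordova Android APK layout):

$ cp my-app.apk my-app.apk.zip
$ unzip my-app.apk.zip -d my-app-contents
$ ls my-app-contents/assets/www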

NOTE: the cordova-js-src folder was added by Cordova; it is not part of the original source for this sample project. Likewise, the cordova.js and cordova_plugins.js files were added by Cordova. The remaining files and folders within the www folder were copied directly from the original project's www folder.

You can start a new project using the blank template and copy the source code from inside the APK's www folder into that project's www folder. You can also see which plugins were included in the APK by inspecting the plugins folder or inspecting the cordova_plugins.js file that was added to the APK. At the very end of the cordova_plugins.js file is a list of plugins that were added and the specific versions of those plugins. For example, from the APK above, that is based on the "hello-cordova" sample, the last lines from the cordova_plugins.js file:

module.exports.metadata =
// TOP OF METADATA
{"cordova-plugin-crosswalk-webview": "1.5.0","cordova-plugin-device-orientation": "1.0.3","cordova-plugin-device": "1.1.2","cordova-plugin-compat": "1.1.0","cordova-plugin-geolocation": "2.2.0","cordova-plugin-inappbrowser": "1.4.0","cordova-plugin-splashscreen": "3.2.2","cordova-plugin-dialogs": "1.2.1","cordova-plugin-statusbar": "2.1.3","cordova-plugin-file": "4.2.0","cordova-plugin-media": "2.3.0","cordova-plugin-device-motion": "1.2.1","cordova-plugin-vibration": "2.1.1","cordova-plugin-whitelist": "1.2.2"
};

NOTE: in the list above, the cordova-plugin-whitelist and cordova-plugin-crosswalk-webview plugins were added automatically by the Intel XDK and, likewise, will be added automatically by the Intel XDK Cordova export tool, so you do not need to add these two plugins to your rebuilt project.

If you were using Crosswalk, you may see an xwalk-command-line file in the APK; the contents of that file are the Crosswalk initialization commands that were provided. For example, from this same sample app:

xwalk --ignore-gpu-blacklist --disable-pull-to-refresh-effect 

Beyond that, you can inspect the AndroidManifest.xml file to find a few other things, like the version numbers. For example, if you have Android Studio installed on your system, you can use the aapt command to inspect the contents of your APK. The most useful items are the version codes and the package name, as shown below:

$ aapt list -a my-app.apk | fgrep -i version
    ...lines deleted for clarity...
    A: android:versionCode(0x0101021b)=(type 0x10)0x1c
    A: android:versionName(0x0101021c)="16.5.16" (Raw: "16.5.16")
    A: platformBuildVersionCode=(type 0x10)0x17 (Raw: "23")
    A: platformBuildVersionName="6.0-2704002" (Raw: "6.0-2704002")
      A: android:minSdkVersion(0x0101020c)=(type 0x10)0xe
      A: android:targetSdkVersion(0x01010270)=(type 0x10)0x15

$ aapt list -a my-app.apk | fgrep package
Package Group 0 id=0x7f packageCount=1 name=xdk.intel.hellocordova
    A: package="xdk.intel.hellocordova" (Raw: "xdk.intel.hellocordova")

How do I convert my web app or web site into a mobile app?

The Intel XDK creates Cordova mobile apps (aka PhoneGap apps). Cordova web apps are driven by HTML5 code (HTML, CSS and JavaScript). There is no web server in the mobile device to "serve" the HTML pages in your Cordova web app, the main program resources required by your Cordova web app are file-based, meaning all of your web app resources are located within the mobile app package and reside on the mobile device. Your app may also require resources from a server. In that case, you will need to connect with that server using AJAX or similar techniques, usually via a collection of RESTful APIs provided by that server. However, your app is not integrated into that server, the two entities are independent and separate.

Many web developers believe they should be able to include PHP or Java code or other "server-based" code as an integral part of their Cordova app, just as they do in a "dynamic web app." This technique does not work in a Cordova web app, because your app does not reside on a server, there is no "backend"; your Cordova web app is a "front-end" HTML5 web app that runs independent of any servers. See the following articles for more information on how to move from writing "multi-page dynamic web apps" to "single-page Cordova web apps":

Can I use an external editor for development in Intel® XDK?

Yes, you can open your files and edit them in your favorite editor. However, note that you must use Brackets* to use the "Live Layout Editing" feature. Also, if you are using App Designer (the UI layout tool in Intel XDK) it will make many automatic changes to your index.html file, so it is best not to edit that file externally at the same time you have App Designer open.

Some popular editors among our users include:

  • Sublime Text* (Refer to this article for information on the Intel XDK plugin for Sublime Text*)
  • Notepad++* for a lightweight editor
  • Jetbrains* editors (Webstorm*)
  • Vim* the editor

Where are the global-settings.xdk and xdk.log files?

global-settings.xdk contains information about all your projects in the Intel XDK, along with many of the settings related to panels under each tab (Emulate, Debug etc). For example, you can set the emulator to auto-refresh or no-auto-refresh. Modify this file at your own risk and always keep a backup of the original!

The xdk.log file contains logged data generated by the Intel XDK while it is running. Sometimes technical support will ask for a copy of this file in order to get additional information to engineering regarding problems you may be having with the Intel XDK. 

Both files are located in the same directory on your development system. Unfortunately, the precise location of these files varies with the specific version of the Intel XDK. You can find the global-settings.xdk and the xdk.log using the following command-line searches:

  • From a Windows cmd.exe session:
    > cd /
    > dir /s global-settings.xdk
     
  • From a Mac and Linux bash or terminal session:
    $ sudo find / -name global-settings.xdk

When do I use the intelxdk.js, xhr.js and cordova.js libraries?

The intelxdk.js and xhr.js libraries were only required for use with the Intel XDK legacy build tiles (which have been retired). The cordova.js library is needed for all Cordova builds. When building with the Cordova tiles, any references to intelxdk.js and xhr.js libraries in your index.html file are ignored.

How do I get my Android (and Crosswalk) keystore file?

New with release 3088 of the Intel XDK, you may now download your build certificates (aka keystore) using the new certificate manager that is built into the Intel XDK. Please read the initial paragraphs of Managing Certificates for your Intel XDK Account and the section titled "Convert a Legacy Android Certificate" in that document, for details regarding how to do this.

It may also help to review this short, quick overview video (there is no audio) that shows how you convert your existing "legacy" certificates to the "new" format that allows you to directly manage your certificates using the certificate management tool that is built into the Intel XDK. This conversion process is done only once.

If the above fails, please send an email to html5tools@intel.com requesting help. It is important that you send that email from the email address associated with your Intel XDK account.

How do I rename my project that is a duplicate of an existing project?

See this FAQ: How do I make a copy of an existing Intel XDK project?

How do I recover when the Intel XDK hangs or won't start?

  • If you are running Intel XDK on Windows* you must use Windows* 7 or higher. The Intel XDK will not run reliably on earlier versions.
  • Delete the "project-name.xdk" file from the project directory that Intel XDK is trying to open when it starts (it will try to open the project that was open during your last session), then try starting Intel XDK. You will have to "import" your project into Intel XDK again. Importing merely creates the "project-name.xdk" file in your project directory and adds that project to the "global-settings.xdk" file.
  • Rename the project directory Intel XDK is trying to open when it starts. Create a new project based on one of the demo apps. Test Intel XDK using that demo app. If everything works, restart Intel XDK and try it again. If it still works, rename your problem project folder back to its original name and open Intel XDK again (it should now open the sample project you previously opened). You may have to re-select your problem project (Intel XDK should have forgotten that project during the previous session).
  • Clear Intel XDK's program cache directories and files.

    On a Windows machine this can be done using the following on a standard command prompt (administrator is not required):

    > cd %AppData%\..\Local\XDK
    > del *.* /s/q

    To locate the "XDK cache" directory on [OS X*] and [Linux*] systems, do the following:

    $ sudo find / -name global-settings.xdk
    $ cd <dir found above>
    $ sudo rm -rf *

    You might want to save a copy of the "global-settings.xdk" file before you delete that cache directory and copy it back before you restart Intel XDK. Doing so will save you the effort of rebuilding your list of projects. Please refer to this question for information on how to locate the global-settings.xdk file.
  • If you save the "global-settings.xdk" file and restored it in the step above and you're still having hang troubles, try deleting the directories and files above, along with the "global-settings.xdk" file and try it again.
  • Do not store your project directories on a network share (the Intel XDK has issues with network shares that have not been resolved). This includes folders shared between a Virtual machine (VM) guest and its host machine (for example, if you are running Windows* in a VM running on a Mac* host). 
  • Some people have issues using the Intel XDK behind a corporate network proxy or firewall. To check for this issue, try running the Intel XDK from your home network where, presumably, you have a simple NAT router and no proxy or firewall. If things work correctly there then your corporate firewall or proxy may be the source of the problem.
  • Issues with Intel XDK account logins can also cause Intel XDK to hang. To confirm that your login is working correctly, go to the Intel login page and confirm that you can login with your Intel XDK account username and password.
  • If you are experiencing login issues, please send an email to html5tools@intel.com from the email address registered to your login account, describing the nature of your account problem and any other details you believe may be relevant.

If you can reliably reproduce the problem, please post a copy of the "xdk.log" file that is stored in the same directory as the "global-settings.xdk" file to the Intel XDK forum. Please ATTACH the xdk.log file to your post using the "Attach Files to Post" link below the forum edit window.

Is Intel XDK an open source project? How can I contribute to the Intel XDK community?

No, it is not an open source project. However, it utilizes many open source components that are assembled into the Intel XDK. While you cannot contribute directly to the Intel XDK integration effort, you can contribute to the many open source components that make it up.

The following open source components are the major elements that are being used by Intel XDK:

  • Node-Webkit
  • Chromium
  • Ripple* emulator
  • Brackets* editor
  • Weinre* remote debugger
  • Crosswalk*
  • Cordova*
  • App Framework*

How do I configure Intel XDK to use 9 patch png for Android* apps splash screen?

The Intel XDK does support the use of 9-patch png images for the Android* app splash screen. See https://software.intel.com/en-us/xdk/articles/android-splash-screens-using-nine-patch-png for instructions on creating a 9-patch png image and a link to an Intel XDK sample that uses them.

How do I stop AVG from popping up the "General Behavioral Detection" window when Intel XDK is launched?

You can try adding nw.exe as the app that needs an exception in AVG.

What do I specify for "App ID" in Intel XDK under Build Settings?

Your app ID uniquely identifies your app. For example, it can be used to identify your app within Apple’s application services allowing you to use things like in-app purchasing and push notifications.

Here are some useful articles on how to create an App ID:

Is it possible to modify the Android Manifest or iOS plist file with the Intel XDK?

You cannot modify the AndroidManifest.xml file directly with our build system, as it only exists in the cloud. However, you may do so by creating a dummy plugin that only contains a plugin.xml file containing directives that can be used to add lines to the AndroidManifest.xml file during the build process. In essence, you add lines to the AndroidManifest.xml file via a local plugin.xml file. Here is an example of a plugin that does just that:

<?xml version="1.0" encoding="UTF-8"?><plugin xmlns="http://apache.org/cordova/ns/plugins/1.0" id="my-custom-intents-plugin" version="1.0.0"><name>My Custom Intents Plugin</name><description>Add Intents to the AndroidManifest.xml</description><license>MIT</license><engines><engine name="cordova" version=">=3.0.0" /></engines><!-- android --><platform name="android"><config-file target="AndroidManifest.xml" parent="/manifest/application"><activity android:configChanges="orientation|keyboardHidden|keyboard|screenSize|locale" android:label="@string/app_name" android:launchMode="singleTop" android:name="testa" android:theme="@android:style/Theme.Black.NoTitleBar"><intent-filter><action android:name="android.intent.action.SEND" /><category android:name="android.intent.category.DEFAULT" /><data android:mimeType="*/*" /></intent-filter></activity></config-file></platform></plugin>

You can inspect the AndroidManifest.xml created in an APK, using apktool with the following command line:

$ apktool d my-app.apk
$ cd my-app
$ more AndroidManifest.xml

This technique exploits the config-file element that is described in the Cordova Plugin Specification docs and can also be used to add lines to iOS plist files. See the Cordova plugin documentation link for additional details.

Here is an example of such a plugin for modifying the iOS plist file, specifically for adding a BIS key to the plist file:

<?xml version="1.0" encoding="UTF-8"?><plugin
    xmlns="http://apache.org/cordova/ns/plugins/1.0"
    id="my-custom-bis-plugin"
    version="0.0.2"><name>My Custom BIS Plugin</name><description>Add BIS info to iOS plist file.</description><license>BSD-3</license><preference name="BIS_KEY" /><engines><engine name="cordova" version=">=3.0.0" /></engines><!-- ios --><platform name="ios"><config-file target="*-Info.plist" parent="CFBundleURLTypes"><array><dict><key>ITSAppUsesNonExemptEncryption</key><true/><key>ITSEncryptionExportComplianceCode</key><string>$BIS_KEY</string></dict></array></config-file></platform></plugin>

And see this forum thread (https://software.intel.com/en-us/forums/intel-xdk/topic/680309) for an example of how to customize the OneSignal plugin's notification sound in an Android app by way of a simple custom Cordova plugin. The same technique can be applied to adding custom icons and other assets to your project.

How can I share my Intel XDK app build?

You can send a link to your project via an email invite from your project settings page. However, a login to your account is required to access the file behind the link. Alternatively, you can download the build from the build page, onto your workstation, and push that built image to some location from which you can send a link to that image.

Why does my iOS build fail when I am able to test it successfully on a device and the emulator?

Common reasons include:

  • The App ID specified in your project settings does not match the one you specified in Apple's developer portal.
  • The provisioning profile does not match the cert you uploaded. Double check with Apple's developer site that you are using the correct and current distribution cert and that the provisioning profile is still active. Download the provisioning profile again and add it to your project to confirm.
  • In Project Build Settings, your App Name is invalid. It should be modified to include only letters, spaces, and numbers.

How do I add multiple domains in Domain Access?

Here is the primary doc source for that feature.

If you need to insert multiple domain references, then you will need to add the extra references in the intelxdk.config.additions.xml file. This StackOverflow entry provides a basic idea and you can see the intelxdk.config.*.xml files that are automatically generated with each build for the <access origin="xxx" /> line that is generated based on what you provide in the "Domain Access" field of the "Build Settings" panel on the Project Tab.
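
As a hedged illustration, the extra references added to intelxdk.config.additions.xml would be <access> elements along these lines (the origins shown are hypothetical placeholders):

$ cat >> intelxdk.config.additions.xml <<'EOF'
<!-- hypothetical extra domain references; replace with your own servers -->
<access origin="https://api.example.com" />
<access origin="https://cdn.example.com" subdomains="true" />
EOF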

How do I build more than one app using the same Apple developer account?

On the Apple developer site, create a distribution certificate using the "iOS* Certificate Signing Request" key downloaded from the Intel XDK Build tab; do this only for the first app. For subsequent apps, reuse the same certificate and import it into the Build tab as you usually would.

How do I include search and spotlight icons as part of my app?

Please refer to this article in the Intel XDK documentation. Create an intelxdk.config.additions.xml file in your top level directory (same location as the other intelxdk.*.config.xml files) and add the following lines to support icons in Settings and other areas of iOS*.

<!-- Spotlight Icon --><icon platform="ios" src="res/ios/icon-40.png" width="40" height="40" /><icon platform="ios" src="res/ios/icon-40@2x.png" width="80" height="80" /><icon platform="ios" src="res/ios/icon-40@3x.png" width="120" height="120" /><!-- iPhone Spotlight and Settings Icon --><icon platform="ios" src="res/ios/icon-small.png" width="29" height="29" /><icon platform="ios" src="res/ios/icon-small@2x.png" width="58" height="58" /><icon platform="ios" src="res/ios/icon-small@3x.png" width="87" height="87" /><!-- iPad Spotlight and Settings Icon --><icon platform="ios" src="res/ios/icon-50.png" width="50" height="50" /><icon platform="ios" src="res/ios/icon-50@2x.png" width="100" height="100" />

For more information related to these configurations, visit http://cordova.apache.org/docs/en/3.5.0/config_ref_images.md.html#Icons%20and%20Splash%20Screens.

For accurate information related to iOS icon sizes, visit https://developer.apple.com/library/ios/documentation/UserExperience/Conceptual/MobileHIG/IconMatrix.html

NOTE: The iPhone 6 icons will only be available if iOS* 7 or 8 is the target.

Cordova iOS* 8 support JIRA tracker: https://issues.apache.org/jira/browse/CB-7043

Does Intel XDK support Modbus TCP communication?

No, since Modbus is a specialized protocol, you need to write either some JavaScript* or native code (in the form of a plugin) to handle the Modbus transactions and protocol.

How do I sign an Android* app using an existing keystore?

New with release 3088 of the Intel XDK, you may now import your existing keystore into Intel XDK using the new certificate manager that is built into the Intel XDK. Please read the initial paragraphs of Managing Certificates for your Intel XDK Account and the section titled "Import an Android Certificate Keystore" in that document, for details regarding how to do this.

If the above fails, please send an email to html5tools@intel.com requesting help. It is important that you send that email from the email address associated with your Intel XDK account.

How do I build separately for different Android* versions?

Under the Projects Panel, you can select the Target Android* version under the Build Settings collapsible panel. You can change this value and build your application multiple times to create numerous versions of your application that are targeted for multiple versions of Android*.

How do I display the 'Build App Now' button if my display language is not English?

If your display language is not English and the 'Build App Now' button is proving troublesome, you can change your display language to English; the English language pack can be downloaded via a Windows* update. Once you have installed it, proceed to Control Panel > Clock, Language and Region > Region and Language > Change Display Language.

How do I update my Intel XDK version?

When an Intel XDK update is available, an Update Version dialog box lets you download the update. After the download completes, a similar dialog lets you install it. If you did not download or install an update when prompted (or on older versions), click the package icon next to the orange (?) icon in the upper-right to download or install the update. The installation removes the previous Intel XDK version.

How do I import my existing HTML5 app into the Intel XDK?

If your project contains an Intel XDK project file (<project-name>.xdk) you should use the "Open an Intel XDK Project" option located at the bottom of the Projects List on the Projects tab (lower left of the screen, round green "eject" icon, on the Projects tab). This would be the case if you copied an existing Intel XDK project from another system or used a tool that exported a complete Intel XDK project.

If your project does not contain an Intel XDK project file (<project-name>.xdk) you must "import" your code into a new Intel XDK project. To import your project, use the "Start a New Project" option located at the bottom of the Projects List on the Projects tab (lower left of the screen, round blue "plus" icon, on the Projects tab). This will open the "Samples, Demos and Templates" page, which includes an option to "Import Your HTML5 Code Base." Point to the root directory of your project. The Intel XDK will attempt to locate a file named index.html in your project and will set the "Source Directory" on the Projects tab to point to the directory that contains this file.

If your imported project did not contain an index.html file, your project may be unstable. In that case, it is best to delete the imported project from the Intel XDK Projects tab ("x" icon in the upper right corner of the screen), rename your "root" or "main" html file to index.html and import the project again. Several components in the Intel XDK depend on the assumption that the main HTML file in your project is named index.html. See Introducing Intel® XDK Development Tools for more details.

It is highly recommended that your "source directory" be located as a sub-directory inside your "project directory." This ensures that non-source files are not included as part of your build package when building your application. If the "source directory" and "project directory" are the same, the result is longer upload times to the build server and unnecessarily large application executables returned by the build system. See the following images for the recommended project file layout.

How do I completely uninstall the Intel XDK from my system?

Take the following steps to completely uninstall the XDK from your Windows system:

The steps below assume you installed into the "default" location. Version 3900 (and later) installs the user data files one level deeper, but using the locations specified will still find the saved user information and node-webkit cache files. If you did not install in the "default" location you will have to find the location you did install into and remove the files mentioned here from that location.

  • From the Windows Control Panel, remove the Intel XDK, using the Windows uninstall tool.

  • Then:
    > cd %LocalAppData%\Intel\XDK
    > rmdir /s /q .

  • Then:
    > cd %LocalAppData%\XDK
    > copy global-settings.xdk %UserProfile%
    > rmdir /s /q .
    > copy %UserProfile%\global-settings.xdk .

  • Then:
    -- Goto xdk.intel.com and select the download link.
    -- Download and install the new XDK.

If the Intel XDK is still listed as an app in the Windows Control Panel "Uninstall or change a program" list, find this entry in your registry (using regedit):

HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Uninstall

Delete any sub-entries that refer to the Intel XDK. For example, a 3900 install will have this sub-key:

ARP_for_prd_xdk_0.0.3900

Use the following methods on a Linux or a Mac system:

  • On a Linux machine, run the uninstall script, typically /opt/intel/XDK/uninstall.sh.
     
  • Remove the directory into which the Intel XDK was installed.
    -- Typically /opt/intel or your home (~) directory on a Linux machine.
    -- Typically in the /Applications/Intel XDK.app directory on a Mac.
     
  • Then:
    $ find ~ -name global-settings.xdk
    $ cd <result-from-above> (for example ~/Library/Application Support/XDK/ on a Mac)
    $ cp global-settings.xdk ~
    $ rm -Rf *
    $ mv ~/global-settings.xdk .

     
  • Then:
    -- Goto xdk.intel.com and select the download link.
    -- Download and install the new XDK.

Is there a tool that can help me highlight syntax issues in Intel XDK?

Yes, you can use the various linting tools that can be added to the Brackets editor to review any syntax issues in your HTML, CSS and JS files. Go to the "File > Extension Manager..." menu item and add the following extensions: JSHint, CSSLint, HTMLHint, XLint for Intel XDK. Then, review your source files by monitoring the small yellow triangle at the bottom of the edit window (a green check mark indicates no issues).

How do I delete built apps and test apps from the Intel XDK build servers?

You can manage them by logging into: https://appcenter.html5tools-software.intel.com/csd/controlpanel.aspx. This functionality will eventually be available within Intel XDK after which access to app center will be removed.

I need help with the App Security API plugin; where do I find it?

Visit the primary documentation book for the App Security API and see this forum post for some additional details.

When I install my app or use the Debug tab Avast antivirus flags a possible virus, why?

If you are receiving a "Suspicious file detected - APK:CloudRep [Susp]" message from the Avast anti-virus installed on your Android device, it is because you are side-loading the app (or the Intel XDK Debug modules) onto your device (using a download link after building, or by using the Debug tab to debug your app), or because your app was installed from an "untrusted" Android store. See the following official explanation from Avast:

Your application was flagged by our cloud reputation system. "Cloud rep" is a new feature of Avast Mobile Security, which flags apks when the following conditions are met:

  1. The file is not prevalent enough; meaning not enough users of Avast Mobile Security have installed your APK.
  2. The source is not an established market (Google Play is an example of an established market).

If you distribute your app using Google Play (or any other trusted market) your users should not see any warning from Avast.

Following are some of the Avast anti-virus notification screens you might see on your device. All of these are perfectly normal; they appear because you must enable the installation of "non-market" apps in order to use your device for debugging, and because the App IDs associated with your never-published app (or the custom debug modules that the Debug tab in the Intel XDK builds and installs on your device) will not be found in an "established" (aka "trusted") market, such as Google Play.

If you choose to ignore the "Suspicious app activity!" threat you will not receive a threat for that debug module any longer. It will show up in the Avast 'ignored issues' list. Updates to an existing, ignored, custom debug module should continue to be ignored by Avast. However, new custom debug modules (due to a new project App ID or a new version of Crosswalk selected in your project's Build Settings) will result in a new warning from the Avast anti-virus tool.

How do I add a Brackets extension to the editor that is part of the Intel XDK?

The number of Brackets extensions provided in the built-in edition of the Brackets editor is limited, to ensure stability of the Intel XDK product. Not all extensions are compatible with the edition of Brackets that is embedded within the Intel XDK, and adding incompatible extensions can cause the Intel XDK to quit working.

Despite this warning, there are useful extensions that have not been included in the editor and which can be added to the Intel XDK. Adding them is temporary: each time you update the Intel XDK (or reinstall it) you will have to re-add your Brackets extensions. To add a Brackets extension, use the following procedure:

  • exit the Intel XDK
  • download a ZIP file of the extension you wish to add
  • on Windows, unzip the extension here:
    %LocalAppData%\Intel\XDK\xdk\brackets\b\extensions\dev
  • on Mac OS X, unzip the extension here:
    /Applications/Intel\ XDK/Intel\ XDK.app/Contents/Resources/app.nw/brackets/b/extensions/dev
  • start the Intel XDK

Note that the locations given above are subject to change with new releases of the Intel XDK.

Why does my app or game require so many permissions on Android when built with the Intel XDK?

When you build your HTML5 app using the Intel XDK for Android or Android-Crosswalk you are creating a Cordova app. It may seem like you're not building a Cordova app, but you are. In order to package your app so it can be distributed via an Android store and installed on an Android device, it needs to be built as a hybrid app. The Intel XDK uses Cordova to create that hybrid app.

A pure Cordova app requires the NETWORK permission; it's needed to "jump" between your HTML5 environment and the native Android environment. Additional permissions will be added by any Cordova plugins you include with your application; which permissions are included is a function of what each plugin does and requires.

Crosswalk for Android builds also require the NETWORK permission, because the Crosswalk image built by the Intel XDK includes support for Cordova. In addition, current versions of Crosswalk (12 and 14 at the time this FAQ was written) also require the NETWORK STATE and WIFI STATE permissions. There is an extra permission in some versions of Crosswalk (WRITE EXTERNAL STORAGE) that is only needed by the shared model library of Crosswalk; we have asked the Crosswalk project to remove this permission in a future Crosswalk version.

If you are seeing more than the following four permissions in your XDK-built Crosswalk app:

  • android.permission.INTERNET
  • android.permission.ACCESS_NETWORK_STATE
  • android.permission.ACCESS_WIFI_STATE
  • android.permission.WRITE_EXTERNAL_STORAGE

then you are seeing permissions that have been added by some plugins. Each plugin is different, so there is no hard rule of thumb. The two "default" core Cordova plugins that are added by the Intel XDK blank templates (device and splash screen) do not require any Android permissions.

BTW: the permission list above comes from a Crosswalk 14 build. Crosswalk 12 builds do not include the last permission; it was added when the Crosswalk project introduced the shared model library option, which started with Crosswalk 13 (the Intel XDK does not support Crosswalk 13 builds).
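
To check exactly which permissions your built APK ended up requesting, you can use the aapt tool from the Android SDK (the same tool used in the APK-inspection FAQ entry above):

$ aapt dump permissions my-app.apk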

How do I make a copy of an existing Intel XDK project?

If you just need to make a backup copy of an existing project, and do not plan to open that backup copy as a project in the Intel XDK, do the following:

  • Exit the Intel XDK.
  • Copy the entire project directory:
    • on Windows, use File Explorer to "right-click" and "copy" your project directory, then "right-click" and "paste"
    • on Mac use Finder to "right-click" and then "duplicate" your project directory
    • on Linux, open a terminal window, "cd" to the folder that contains your project, and type "cp -a old-project/ new-project/" at the terminal prompt (where "old-project/" is the folder name of your existing project that you want to copy and "new-project/" is the name of the new folder that will contain a copy of your existing project)

If you want to use an existing project as the starting point of a new project in the Intel XDK, follow the process described below. It ensures that the build system does not confuse the ID in your old project with the one stored in your new project. If you do not follow this procedure you will have multiple projects using the same project ID (a special GUID that is stored inside the Intel XDK <project-name>.xdk file in the root directory of your project). Each project in your account must have a unique project ID.

  • Exit the Intel XDK.
  • Make a copy of your existing project using the process described above.
  • Inside the new project that you made (that is, your new copy of your old project), make copies of the <project-name>.xdk file and <project-name>.xdke files and rename those copies to something like project-new.xdk and project-new.xdke (anything you like, just something different than the original project name, preferably the same name as the new project folder in which you are making this new project).
  • Using a TEXT EDITOR (only) (such as Notepad or Sublime or Brackets or some other TEXT editor), open your new "project-new.xdk" file (whatever you named it) and find the projectGuid line, it will look something like this:
    "projectGuid": "a863c382-ca05-4aa4-8601-375f9f209b67",
  • Change the "GUID" to all zeroes, like this: "00000000-0000-0000-0000-000000000000"
  • Save the modified "project-new.xdk" file.
  • Open the Intel XDK.
  • Goto the Projects tab.
  • Select "Open an Intel XDK Project" (the green button at the bottom left of the Projects tab).
  • To open this new project, locate the new "project-new.xdk" file inside the new project folder you copied above.
  • Don't forget to change the App ID in your new project. This is necessary to avoid conflicts with the project you copied from, in the store and when side-loading onto a device.

My project does not include a www folder. How do I fix it so it includes a www or source directory?

The Intel XDK HTML5 and Cordova project file structures are meant to mimic a standard Cordova project. In a Cordova (or PhoneGap) project there is a subdirectory (or folder) named www that contains all of the HTML5 source code and asset files that make up your application. For best results, it is advised that you follow this convention, of putting your source inside a "source directory" inside of your project folder.

This most commonly happens as the result of exporting a project from an external tool, such as Construct2, or as the result of importing an existing HTML5 web app that you are converting into a hybrid mobile application (e.g., an Intel XDK Cordova app). If you would like to convert an existing Intel XDK project into this format, follow the steps below:

  • Exit the Intel XDK.
  • Copy the entire project directory:
    • on Windows, use File Explorer to "right-click" and "copy" your project directory, then "right-click" and "paste"
    • on Mac use Finder to "right-click" and then "duplicate" your project directory
    • on Linux, open a terminal window, "cd" to the folder that contains your project, and type "cp -a old-project/ new-project/" at the terminal prompt (where "old-project/" is the folder name of your existing project that you want to copy and "new-project/" is the name of the new folder that will contain a copy of your existing project)
  • Create a "www" directory inside the new duplicate project you just created above.
  • Move your index.html and other source and asset files to the "www" directory you just created -- this is now your "source" directory, located inside your "project" directory (do not move the <project-name>.xdk and .xdke files or any intelxdk.config.*.xml files; those must stay in the root of the project directory)
  • Inside the new project that you made above (by making a copy of the old project), rename the <project-name>.xdk and <project-name>.xdke files to something like project-copy.xdk and project-copy.xdke (anything you like, just something different from the original project name, preferably the same name as the new project folder).
  • Using a TEXT EDITOR (only) (such as Notepad or Sublime or Brackets or some other TEXT editor), open the new "project-copy.xdk" file (whatever you named it) and find the projectGuid line; it will look something like this:
    "projectGuid": "a863c382-ca05-4aa4-8601-375f9f209b67",
  • Change the GUID to all zeroes, like this: "00000000-0000-0000-0000-000000000000"
  • A few lines down, find: "sourceDirectory": "",
  • Change it to this: "sourceDirectory": "www",
  • Save the modified "project-copy.xdk" file.
  • Open the Intel XDK.
  • Go to the Projects tab.
  • Select "Open an Intel XDK Project" (the green button at the bottom left of the Projects tab).
  • To open this new project, locate the new "project-copy.xdk" file inside the new project folder you copied above.
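
A Linux command-line sketch of the same conversion, assuming GNU sed and that your source files are index.html plus css/, js/ and images/ folders (all names are illustrative; adjust them to your project):

$ cp -a old-project/ new-project/
$ cd new-project/
$ mkdir www
$ mv index.html css js images www/    # move source and asset files only; *.xdk, *.xdke and intelxdk.config.*.xml stay in the root
$ mv old-project.xdk project-copy.xdk
$ mv old-project.xdke project-copy.xdke
$ # zero the GUID and point the project at the new source directory
$ sed -i -e 's/"projectGuid": "[^"]*"/"projectGuid": "00000000-0000-0000-0000-000000000000"/' -e 's/"sourceDirectory": ""/"sourceDirectory": "www"/' project-copy.xdk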

Can I install more than one copy of the Intel XDK onto my development system?

Yes, you can install more than one version onto your development system. However, you cannot run multiple instances of the Intel XDK at the same time. Be aware that new releases sometimes change the project file format, so it is a good idea, in these cases, to make a copy of your project if you need to experiment with a different version of the Intel XDK. See the instructions in a FAQ entry above regarding how to make a copy of your Intel XDK project.

Follow the instructions in this forum post to install more than one copy of the Intel XDK onto your development system.

On Apple OS X* and Linux* systems, does the Intel XDK need the OpenSSL* library installed?

Yes. Several features of the Intel XDK require the OpenSSL library, which typically comes pre-installed on Linux and OS X systems. If the Intel XDK reports that it could not find libssl, go to https://www.openssl.org to download and install it.
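
On Linux you can quickly check whether a libssl shared library is already visible to the dynamic loader; this is a sketch, and the exact version printed will vary by distribution:

$ ldconfig -p | grep libssl    # no output means the loader cannot find a libssl library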

I have a web application that I would like to distribute in app stores without major modifications. Is this possible using the Intel XDK?

Yes, if you have a true web app or “client app” that only uses HTML, CSS and JavaScript, it is usually not too difficult to convert it to a Cordova hybrid application (this is what the Intel XDK builds when you create an HTML5 app). If you rely heavily on PHP or other server scripting languages embedded in your pages you will have more work to do. Because your Cordova app is not associated with a server, you cannot rely on server-based programming techniques; instead, you must rewrite any such code to use RESTful APIs that your app interacts with using, for example, AJAX calls.

What is the best training approach to using the Intel XDK for a newbie?

First, become well-versed in the art of client web apps, apps that rely only on HTML, CSS and JavaScript and utilize RESTful APIs to talk to network services. With that you will have mastered 80% of the problem. After that, it is simply a matter of understanding how Cordova plugins are able to extend the JavaScript API for access to features of the platform. For HTML5 training there are many sites providing tutorials. It may also help to read Five Useful Tips on Getting Started Building Cordova Mobile Apps with the Intel XDK, which will help you understand some of the differences between developing for a traditional server-based environment and developing for the Intel XDK hybrid Cordova app environment.

What is the best platform to start building an app with the Intel XDK? And what are the important differences between the Android, iOS and other mobile platforms?

There is no single most important difference between the Android, iOS and other platforms. It is important to understand that the HTML5 runtime engine that executes your app will vary as a function of the platform. Just as there are differences between Chrome and Firefox and Safari and Internet Explorer, there are differences between iOS 9 and iOS 8 and Android 4 and Android 5, etc. Android has the most significant differences between vendors and versions. This is one of the reasons the Intel XDK offers the Crosswalk for Android build option: it normalizes and updates the HTML5 runtime across Android devices and versions.

In general, if you get your app working well on Android (or Crosswalk for Android) first, you will have fewer issues to deal with when you start to work on the iOS and Windows platforms. In addition, the Android platform has the most flexible and useful debug options available, so it is the easiest platform to use for debugging and testing your app.

Is my password encrypted and why is it limited to fifteen characters?

Yes, your password is stored encrypted and is managed by https://signin.intel.com. Your Intel XDK userid and password can also be used to log into the Intel XDK forum as well as the Intel Developer Zone. The Intel XDK itself does not store or manage your userid and password.

The rules regarding allowed userids and passwords are covered on this Sign In FAQ page, where you can also find help on recovering and changing your password.

Why does the Intel XDK take a long time to start on Linux or Mac?

...and why am I getting this error message? "Attempt to contact authentication server is taking a long time. You can wait, or check your network connection and try again."

At startup, the Intel XDK attempts to automatically determine the proxy settings for your machine. Unfortunately, on some system configurations it is unable to reliably detect your system proxy settings. As an example, you might see something like this image when starting the Intel XDK.

On some systems you can get around this problem by setting some proxy environment variables and then starting the Intel XDK from a command line that includes those configured environment variables. Set the environment variables with commands similar to the following:

$ export no_proxy="localhost,127.0.0.1/8,::1"
$ export NO_PROXY="localhost,127.0.0.1/8,::1"
$ export http_proxy=http://proxy.mydomain.com:123/
$ export HTTP_PROXY=http://proxy.mydomain.com:123/
$ export https_proxy=http://proxy.mydomain.com:123/
$ export HTTPS_PROXY=http://proxy.mydomain.com:123/

IMPORTANT! The name of your proxy server and the port (or ports) that your proxy server requires will be different from those shown in the example above. Please consult with your IT department to find out what values are appropriate for your site. Intel has no way of knowing what configuration is appropriate for your network.

If you use the Intel XDK in multiple locations (at work and at home), you may have to change the proxy settings before starting the Intel XDK after switching to a new network location. For example, many work networks use a proxy server, but most home networks do not require such a configuration. In that case, you need to be sure to "unset" the proxy environment variables before starting the Intel XDK on a non-proxy network.
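
For example, before starting the Intel XDK on a non-proxy home network, you might clear the variables that were set above:

$ unset no_proxy NO_PROXY http_proxy HTTP_PROXY https_proxy HTTPS_PROXY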

After you have successfully configured your proxy environment variables, you can start the Intel XDK manually, from the command-line.

On a Mac, where the Intel XDK is installed in the default location, type the following (from a terminal window that has the above environment variables set):

$ open /Applications/Intel\ XDK.app/

On a Linux machine, assuming the Intel XDK has been installed in the ~/intel/XDK directory, type the following (from a terminal window that has the above environment variables set):

$ ~/intel/XDK/xdk.sh &

In the Linux case, you will need to adjust the directory name that points to the xdk.sh file to match your installation. Since Linux installations have more options regarding the installation directory, adjust the path above to suit your particular system and install directory.

How do I generate a P12 file on a Windows machine?

See these articles:
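
In brief, and only as a rough sketch: assuming you have OpenSSL installed on your Windows machine, a private key (mykey.key) and an Apple-issued development certificate downloaded as developer_certificate.cer (all file names here are illustrative), the conversion to a P12 file looks like this:

$ openssl x509 -inform der -in developer_certificate.cer -out developer_certificate.pem    # convert DER certificate to PEM
$ openssl pkcs12 -export -inkey mykey.key -in developer_certificate.pem -out mycertificate.p12    # bundle key + certificate into a P12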

How do I change the default dir for creating new projects in the Intel XDK?

You can change the default new project location manually by modifying a field in the global-settings.xdk file. Locate the global-settings.xdk file on your system (the precise location varies as a function of the OS) and find this JSON object inside that file:

"projects-tab": {"defaultPath": "/Users/paul/Documents/XDK","LastSortType": "descending|Name","lastSortType": "descending|Opened","thirdPartyDisclaimerAcked": true
  },

The example above came from a Mac. On a Mac the global-settings.xdk file is located in the "~/Library/Application Support/XDK" directory.

On a Windows machine the global-settings.xdk file is normally found in the "%LocalAppData%\XDK" directory. The part you are looking for will look something like this:

"projects-tab": {"thirdPartyDisclaimerAcked": false,"LastSortType": "descending|Name","lastSortType": "descending|Opened","defaultPath": "C:\\Users\\paul/Documents"
  },

Obviously, it's the defaultPath part you want to change.
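
Keep in mind that global-settings.xdk is a JSON file, so backslashes in a Windows path must be escaped. For example (an illustrative path):

"defaultPath": "C:\\Users\\paul\\Documents\\XDK-projects"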

BE CAREFUL WHEN YOU EDIT THE GLOBAL-SETTINGS.XDK FILE!! You've been warned...

Make sure the result is proper JSON when you are done, or it may cause your XDK to cough and hack loudly. Make a backup copy of global-settings.xdk before you start, just in case.

Where can I find a list of recent and upcoming webinars?

What network addresses must I enable in my firewall to ensure the Intel XDK will work on my restricted network?

Normally, access to the external servers that the Intel XDK uses is handled automatically by your proxy server. However, if you are working in an environment that has restricted Internet access and you need to provide your IT department with a list of URLs that you need access to in order to use the Intel XDK, then please provide them with the following list of domain names:

  • appcenter.html5tools-software.intel.com (for communication with the build servers)
  • s3.amazonaws.com (for downloading sample apps and built apps)
  • download.xdk.intel.com (for getting XDK updates)
  • xdk-feed-proxy.html5tools-software.intel.com (for receiving the tweets in the upper right corner of the XDK)
  • signin.intel.com (for logging into the XDK)
  • sfederation.intel.com (for logging into the XDK)

Normally this should be handled by your network proxy (if you're on a corporate network) or should not be an issue if you are working on a typical home network.
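
If you want to confirm that the firewall rules are in place, a quick reachability check from a terminal might look like the following sketch (an HTTP status code printed, rather than a timeout, indicates the host is reachable):

$ for h in appcenter.html5tools-software.intel.com download.xdk.intel.com signin.intel.com; do curl -s -o /dev/null -w "%{http_code}  $h\n" "https://$h/"; done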

Installing the Intel XDK on Windows fails with a "Package signature verification failed." message.

If you receive a "Package signature verification failed" message (see image below) when installing the Intel XDK on your system, it is likely due to one of the following two reasons:

  • Your system does not have a properly installed "root certificate" file, which is needed to confirm that the install package is good.
  • The install package is corrupt and failed the verification step.

The first case can happen if you are attempting to install the Intel XDK on an unsupported version of Windows. The Intel XDK is only supported on Microsoft Windows 7 and higher. If you attempt to install on Windows Vista (or earlier) you may see this verification error. The workaround is to install the Intel XDK on a Windows 7 or greater machine.

The second case is likely due to a corruption of the install package during download or due to tampering. The workaround is to re-download the install package and attempt another install.

If you are installing on a Windows 7 (or greater) machine and you see this message it is likely due to a missing or bad root certificate on your system. To fix this you may need to start the "Certificate Propagation" service. Open the Windows "services.msc" panel and then start the "Certificate Propagation" service. Additional links related to this problem can be found here > https://technet.microsoft.com/en-us/library/cc754841.aspx

See this forum thread for additional help regarding this issue > https://software.intel.com/en-us/forums/intel-xdk/topic/603992

Trouble installing the Intel XDK on a Linux or Ubuntu system: which option should I choose?

Choose the local user option, not root or sudo, when installing the Intel XDK on your Linux or Ubuntu system. This is the most reliable and trouble-free option and is the default installation option. It ensures that the Intel XDK has all the permissions necessary to execute properly on your Linux system. The Intel XDK will be installed in a subdirectory of your home (~) directory.

Connection Problems? -- Intel XDK SSL certificates update

On January 26, 2016 we updated the SSL certificates on our back-end systems to SHA2 certificates. The existing certificates were due to expire in February of 2016. We have also disabled support for obsolete protocols.

If you are experiencing persistent connection issues (since Jan 26, 2016), please post a problem report on the forum and include in your problem report:

  • the operation that failed
  • the version of your XDK
  • the version of your operating system
  • your geographic region
  • and a screen capture

How do I resolve build failure: "libpng error: Not a PNG file"?  

If you are experiencing build failures with CLI 5 Android builds, and the detailed error log includes a message similar to the following:

Execution failed for task ':mergeArmv7ReleaseResources'.
> Error: Failed to run command: /Developer/android-sdk-linux/build-tools/22.0.1/aapt s -i .../platforms/android/res/drawable-land-hdpi/screen.png -o .../platforms/android/build/intermediates/res/armv7/release/drawable-land-hdpi-v4/screen.png

Error Code: 42

Output: libpng error: Not a PNG file

You need to change your icon and/or splash screen images to PNG format.

The error message refers to a file named "screen.png" -- which is what each of your splash screens was renamed to before being moved into the build project resource directories. Unfortunately, JPG images were supplied for use as splash screen images, not PNG images, so the renamed files were found by the build system to be invalid.

Convert your splash screen images to PNG format. Renaming JPG images to PNG will not work! You must convert your JPG images into PNG format images using an appropriate image editing tool. The Intel XDK does not provide any such conversion tool.
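
For example, with the open-source ImageMagick tool installed (one possible tool among many), a true format conversion looks like this:

$ convert splash.jpg splash.png    # re-encodes the image data as PNG; renaming the file alone does not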

Beginning with Cordova CLI 5, all icons and splash screen images must be supplied in PNG format. This applies to all supported platforms. This is an undocumented "new feature" of the Cordova CLI 5 build system that was implemented by the Apache Cordova project.

Why do I get a "Parse Error" when I try to install my built APK on my Android device?

Because you have built an "unsigned" Android APK. You must click the "signed" box in the Android Build Settings section of the Projects tab if you want to install an APK on your device. The only reason you would choose to create an "unsigned" APK is if you need to sign it manually. This is very rare and not the normal situation.

My converted legacy keystore does not work. Google Play is rejecting my updated app.

The keystore you converted when you updated to 3088 (now 3240 or later) is the same keystore you were using in 2893. When you upgraded to 3088 (or later) and "converted" your legacy keystore, you re-signed and renamed your legacy keystore and it was transferred into a database to be used with the Intel XDK certificate management tool. It is still the same keystore, but with an alias name and password assigned by you and accessible directly by you through the Intel XDK.

If you kept the converted legacy keystore in your account following the conversion you can download that keystore from the Intel XDK for safe keeping (do not delete it from your account or from your system). Make sure you keep track of the new password(s) you assigned to the converted keystore.

There are two problems we have experienced with converted legacy keystores at the time of the 3088 release (April, 2016):

  • Foreign (non-ASCII) characters used in the new alias name and passwords were being corrupted.
  • Final signing of your APK by the build system was being done with SHA256 rather than SHA1.

Both of the above items have been resolved and should no longer be an issue.

If you are currently unable to complete a build with your converted legacy keystore (i.e., builds fail when you use the converted legacy keystore but succeed when you use a new keystore), the first bullet above is likely the reason your converted keystore is not working. In that case we can reset your converted keystore and give you the option to convert it again. You do this by requesting that your legacy keystore be "reset" by filling out this form. To be completely safe during that second conversion, use only 7-bit ASCII characters in the alias name and password(s) you assign.

IMPORTANT: using the legacy certificate to build your Android app is ONLY necessary if you have already published an app to an Android store and need to update that app. If you have never published an app to an Android store using the legacy certificate you do not need to concern yourself with resetting and reconverting your legacy keystore. It is easier, in that case, to create a new Android keystore and use that new keystore.

If you ARE able to successfully build your app with the converted legacy keystore, but your updated app (in the Google store) does not install on some older Android 4.x devices (typically a subset of Android 4.0-4.2 devices), the second bullet cited above is likely the reason for the problem. The solution, in that case, is to rebuild your app and resubmit it to the store (that problem was a build-system problem that has been resolved).

How can I have others beta test my app using Intel App Preview?

Apps that you sync to your Intel XDK account, using the Test tab's green "Push Files" button, can only be accessed by logging into Intel App Preview with the same Intel XDK account credentials that you used to push the files to the cloud. In other words, you can only download and run your app for testing with Intel App Preview if you log into the same account that you used to upload that test app. This restriction applies to downloading your app into Intel App Preview via the "Server Apps" tab, at the bottom of the Intel App Preview screen, or by scanning the QR code displayed on the Intel XDK Test tab using the camera icon in the upper right corner of Intel App Preview.

If you want to allow others to test your app, using Intel App Preview, it means you must use one of two options:

  • give them your Intel XDK userid and password
  • create an Intel XDK "test account" and provide your testers with that userid and password

For security's sake, we highly recommend you use the second option (create an Intel XDK "test account").

A "test account" is simply a second Intel XDK account that you do not plan to use for development or builds. Do not use the same email address for your "test account" as you are using for your main development account. You should use a "throw away" email address for that "test account" (an email address that you do not care about).

Assuming you have created an Intel XDK "test account" and have instructed your testers to download and install Intel App Preview; have provided them with your "test account" userid and password; and you are ready to have them test:

  • sign out of your Intel XDK "development account" (using the little "man" icon in the upper right)
  • sign into your "test account" (again, using the little "man" icon in the Intel XDK toolbar)
  • make sure you have selected the project that you want users to test, on the Projects tab
  • go to the Test tab
  • make sure "MOBILE" is selected (upper left of the Test tab)
  • push the green "PUSH FILES" button on the Test tab
  • log out of your "test account"
  • log into your development account

Then, tell your beta testers to log into Intel App Preview with your "test account" credentials and instruct them to choose the "Server Apps" tab at the bottom of the Intel App Preview screen. From there they should see the name of the app you synced using the Test tab and can simply start it by touching the app name (followed by the big blue and white "Launch This App" button). Starting the app this way is actually easier than sending them a copy of the QR code. The QR code is very dense and can be hard to read on some devices, depending on the quality of the device's camera.

Note that when running your test app inside of Intel App Preview your testers cannot test any features associated with third-party plugins, only core Cordova plugins. Thus, you need to ensure that those parts of your app that depend on non-core Cordova plugins have been disabled or have exception handlers to prevent your app from crashing or freezing.

I'm having trouble making Google Maps work with my Intel XDK app. What can I do?

There are many reasons that can cause your attempt to use Google Maps to fail. Mostly it is due to the fact that you need to download the Google Maps API (JavaScript library) at runtime to make things work. However, there is no guarantee that you will have a good network connection, so if you do it the way you are used to doing it, in a browser...

<script src="https://maps.googleapis.com/maps/api/js?key=API_KEY&sensor=true"></script>

...you may get yourself into trouble, in an Intel XDK Cordova app. See Loading Google Maps in Cordova the Right Way for an excellent tutorial on why this is a problem and how to deal with it. Also, it may help to read Five Useful Tips on Getting Started Building Cordova Mobile Apps with the Intel XDK, especially item #3, to get a better understanding of why you shouldn't use the "browser technique" you're familiar with.

An alternative is to use a mapping tool that allows you to include the JavaScript directly in your app, rather than downloading it over the network each time your app starts. Several Intel XDK developers have reported very good luck with the open-source JavaScript library named LeafletJS, which uses OpenStreetMap as its map database source.

You can also search the Cordova Plugin Database for Cordova plugins that implement mapping features, in some cases using native SDKs and libraries.

How do I fix "Cannot find the Intel XDK. Make sure your device and intel XDK are on the same wireless network." error messages?

You can either disable your firewall or allow access through the firewall for the Intel XDK. To allow access through the Windows firewall, go to the Windows Control Panel and search for the Firewall (Control Panel > System and Security > Windows Firewall > Allowed Apps) and enable Node Webkit (nw or nw.exe) through the firewall.

See the image below (this image is from a Windows 8.1 system).

Google Services needs my SHA1 fingerprint. Where do I get my app's SHA fingerprint?

Your app's SHA fingerprint is part of your build signing certificate; specifically, it is part of the signing certificate that you used to build your app. The Intel XDK provides a way to download your build certificates directly from within the Intel XDK application (see the Intel XDK documentation for details on how to manage your build certificates). Once you have downloaded your build certificate you can use these instructions provided by Google to extract the fingerprint, or simply search the Internet for "extract fingerprint from android build certificate" to find many articles detailing this process.
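
As a sketch, assuming you have the JDK installed and a downloaded keystore named my-release.keystore with alias my-alias (both names illustrative), the keytool utility prints the certificate fingerprints, including SHA1:

$ keytool -list -v -keystore my-release.keystore -alias my-alias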

Why am I unable to test or build or connect to the old build server with Intel XDK version 2893?

This is an Important Note Regarding the use of Intel XDK Versions 2893 and Older!!

As of June 13, 2016, versions of the Intel XDK released prior to March 2016 (2893 and older) can no longer use the Build tab, the Test tab or Intel App Preview; and can no longer create custom debug modules for use with the Debug and Profile tabs. This change was necessary to improve the security and performance of our Intel XDK cloud-based build system. If you are using version 2893 or older, of the Intel XDK, you must upgrade to version 3088 or greater to continue to develop, debug and build Intel XDK Cordova apps.

The error message you see below, "NOTICE: Internet Connection and Login Required," when trying to use the Build tab is due to the fact that the cloud-based component used by those older versions of the Intel XDK has been retired and is no longer present. The error message appears to be misleading, but it is the easiest way to identify this condition.

How do I run the Intel XDK on Fedora Linux?

See the instructions below, copied from this forum post:

$ sudo find xdk/install/dir -name libudev.so.0
$ cd dir/found/above
$ sudo rm libudev.so.0
$ sudo ln -s /lib64/libudev.so.1 libudev.so.0

Note the "xdk/install/dir" is the name of the directory where you installed the Intel XDK. This might be "/opt/intel/xdk" or "~/intel/xdk" or something similar. Since the Linux install is flexible regarding the precise installation location you may have to search to find it on your system.

Once you find that libudev.so file in the Intel XDK install directory you must "cd" to that directory to finish the operations as written above.

Additional instructions have been provided in the related forum thread; please see that thread for the latest information regarding hints on how to make the Intel XDK run on a Fedora Linux system.

The Intel XDK generates a path error for my launch icons and splash screen files.

If you have an older project (created prior to August of 2016 using a version of the Intel XDK older than 3491) you may be seeing a build error indicating that some icon and/or splash screen image files cannot be found. This is likely due to the fact that some of your icon and/or splash screen image files are located within your source folder (typically named "www") rather than in the new package-assets folder. For example, inspecting one of the auto-generated intelxdk.config.*.xml files you might find something like the following:

<icon platform="windows" src="images/launchIcon_24.png" width="24" height="24"/><icon platform="windows" src="images/launchIcon_434x210.png" width="434" height="210"/><icon platform="windows" src="images/launchIcon_744x360.png" width="744" height="360"/><icon platform="windows" src="package-assets/ic_launch_50.png" width="50" height="50"/><icon platform="windows" src="package-assets/ic_launch_150.png" width="150" height="150"/><icon platform="windows" src="package-assets/ic_launch_44.png" width="44" height="44"/>

where the first three images are not found by the build system because they are located in the "www" folder, while the last three are found because they are located in the "package-assets" folder.

This problem usually comes about because the UI does not include the appropriate "slots" to hold those images. This results in some "dead" icon or splash screen images inside the <project-name>.xdk file which need to be removed. To fix this, make a backup copy of your <project-name>.xdk file and then, using a CODE or TEXT editor (e.g., Notepad++ or Brackets or Sublime Text or vi, etc.), edit your <project-name>.xdk file in the root of your project folder.

Inside of your <project-name>.xdk file you will find entries that look like this:

"icons_": [
  {"relPath": "images/launchIcon_24.png","width": 24,"height": 24
  },
  {"relPath": "images/launchIcon_434x210.png","width": 434,"height": 210
  },
  {"relPath": "images/launchIcon_744x360.png","width": 744,"height": 360
  },

Find all the entries that are pointing to the problem files and remove those problem entries from your <project-name>.xdk file. Obviously, you need to do this when the XDK is closed and only after you have made a backup copy of your <project-name>.xdk file, just in case you end up with a missing comma. The <project-name>.xdk file is a JSON file and needs to be in proper JSON format after you make changes or it will not be read properly by the XDK when you open it.

Then move your problem icons and splash screen images to the package-assets folder and reference them from there. Use this technique (below) to add additional icons by using the intelxdk.config.additions.xml file.

<!-- alternate way to add icons to Cordova builds, rather than using XDK GUI -->
<!-- especially for adding icon resolutions that are not covered by the XDK GUI -->
<!-- Android icons and splash screens -->
<platform name="android">
    <icon src="package-assets/android/icon-ldpi.png" density="ldpi" width="36" height="36" />
    <icon src="package-assets/android/icon-mdpi.png" density="mdpi" width="48" height="48" />
    <icon src="package-assets/android/icon-hdpi.png" density="hdpi" width="72" height="72" />
    <icon src="package-assets/android/icon-xhdpi.png" density="xhdpi" width="96" height="96" />
    <icon src="package-assets/android/icon-xxhdpi.png" density="xxhdpi" width="144" height="144" />
    <icon src="package-assets/android/icon-xxxhdpi.png" density="xxxhdpi" width="192" height="192" />
    <splash src="package-assets/android/splash-320x426.9.png" density="ldpi" orientation="portrait" />
    <splash src="package-assets/android/splash-320x470.9.png" density="mdpi" orientation="portrait" />
    <splash src="package-assets/android/splash-480x640.9.png" density="hdpi" orientation="portrait" />
    <splash src="package-assets/android/splash-720x960.9.png" density="xhdpi" orientation="portrait" />
</platform>

Upgrading to the latest version of the Intel XDK results in a build error with existing projects.

Some users have reported that by creating a new project, adding their plugins to that new project, and then copying the www folder from the old project to the new project, they were able to resolve this issue. Obviously, you also need to update your Build Settings in the new project to match those from the old project.

Back to FAQs Main

Intel® Trace Analyzer and Collector 2018 Beta Release Notes for Windows* OS


Overview

Intel® Trace Analyzer and Collector is a powerful tool for analyzing MPI applications. It consists of two parts:

  • Intel® Trace Collector is a low-overhead tracing library that performs event-based tracing in applications at runtime. It collects data about the application's MPI and serial or OpenMP* regions, and can trace a custom set of functions. The product is completely thread safe and integrates with C/C++, FORTRAN and multithreaded processes, with and without MPI. Additionally, it can check for MPI programming and system errors. (A minimal tracing invocation is sketched after this list.)
  • Intel® Trace Analyzer is a GUI-based tool that provides a convenient way to monitor application activities gathered by the Intel Trace Collector. You can view the desired level of detail, quickly identify performance hotspots and bottlenecks, and analyze their causes.
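
For example, with the Intel MPI Library installed, an existing MPI application can often be traced simply by adding the -trace option at launch (a sketch; myApp.exe is illustrative):

> mpiexec -trace -n 4 myApp.exe

This produces a trace file that can then be opened in the Intel Trace Analyzer.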

To receive technical support and updates, you need to register your product copy. See Technical Support below.

What's New

Intel® Trace Analyzer and Collector 2018 Beta

  • MPI Performance Snapshot is no longer a part of Intel Trace Analyzer and Collector and is available as a separate product. See http://www.intel.com/performance-snapshot for details.
  • Removed the macOS* support.
  • Documentation has been removed from the product package and is now available online.

Intel® Trace Analyzer and Collector 2017 Update 2

  • Enhancements in function color selection on timelines.

Intel® Trace Analyzer and Collector 2017 Update 1

  • Added zooming support with a mouse wheel on timelines.
  • Deprecated support for the ITF format.

Intel® Trace Analyzer and Collector 2017

Key Features

  • Advanced GUI: user-friendly interface, high-level scalability, support of STF trace data
  • Aggregation and Filtering: detailed views of runtime behavior grouped by functions or processes
  • Fail-Safe Tracing: improved functionality on prematurely terminated applications with deadlock detection
  • Intel® MPI Library Interface: support of tracing on internal MPI states, support of MPI-IO
  • Correctness Checking: check for MPI and system errors at runtime (including distributed memory checking)
  • ROMIO*: extended support of MPI-2 standard parallel file I/O
  • Comparison feature: compare two trace files and/or two regions (in one or two trace files)
  • Command line interface for the Intel Trace Analyzer

System Requirements

Hardware Requirements

  • Systems based on the Intel® 64 architecture, in particular:
    • Intel® Core™ processor family
    • Intel® Xeon® E5 v4 processor family recommended
    • Intel® Xeon® E7 v3 processor family recommended
    • 2nd Generation Intel® Xeon Phi™ Processor (formerly code named Knights Landing)
  • 1 GB of RAM per core (2 GB recommended)
  • 1 GB of free hard disk space

Software Requirements

  • Operating systems:
    • Microsoft* Windows Server* 2008, 2008 R2, 2012, 2012 R2, 2016
    • Microsoft* Windows* 7, 8.x, 10
  • MPI implementations:
    • Intel® MPI Library 5.0 or newer
  • Compilers:
    • Intel® C++/Fortran Compiler 15.0 or newer (required for OpenMP* support)
    • Microsoft* Visual Studio* Compilers 2013, 2015, 2017

Known Issues and Limitations

  • Tracing of MPI applications that include MPI_Comm_spawn function calls is not supported.
  • Intel® Trace Analyzer may get into an undefined state if too many files are opened at the same time.
  • In some cases, symbol information may appear incorrectly in the Intel® Trace Analyzer if you discarded symbol information from object files.
  • MPI Correctness Checking is available with the Intel® MPI Library only.

Technical Support

Every purchase of an Intel® Software Development Product includes a year of support services, which provides priority customer support at our Online Support Service Center web site, http://www.intel.com/supporttickets.

In order to get support you need to register your product in the Intel® Registration Center. If your product is not registered, you will not receive priority support.

Intel® Trace Analyzer and Collector 2018 Beta Release Notes for Linux* OS


Overview

Intel® Trace Analyzer and Collector is a powerful tool for analyzing MPI applications. It consists of two parts:

  • Intel® Trace Collector is a low-overhead tracing library that performs event-based tracing in applications at runtime. It collects data about the application's MPI and serial or OpenMP* regions, and can trace a custom set of functions. The product is completely thread safe and integrates with C/C++, FORTRAN and multithreaded processes, with and without MPI. Additionally, it can check for MPI programming and system errors. (A compile-and-run tracing sketch follows this list.)
  • Intel® Trace Analyzer is a GUI-based tool that provides a convenient way to monitor application activities gathered by the Intel Trace Collector. You can view the desired level of detail, quickly identify performance hotspots and bottlenecks, and analyze their causes.
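
For example, on Linux an application can be instrumented at link time with the Intel MPI compiler wrappers (a sketch; source and executable names are illustrative):

$ mpiicc -trace myApp.c -o myApp    # link against the Intel Trace Collector library
$ mpirun -n 4 ./myApp               # running the app writes an .stf trace for the Trace Analyzer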

To receive technical support and updates, you need to register your product copy. See Technical Support below.

What's New

Intel® Trace Analyzer and Collector 2018 Beta

  • Added support for OpenSHMEM* applications.
  • MPI Performance Snapshot is no longer a part of Intel Trace Analyzer and Collector and is available as a separate product. See http://www.intel.com/performance-snapshot for details.
  • Removed the macOS* support.
  • Removed support for the Intel® Xeon Phi™ coprocessor (code named Knights Corner).
  • Documentation has been removed from the product package and is now available online.

Intel® Trace Analyzer and Collector 2017 Update 2

  • Enhancements in function color selection on timelines.

Intel® Trace Analyzer and Collector 2017 Update 1

  • Added zooming support with a mouse wheel on timelines.
  • Deprecated support for the ITF format.

Intel® Trace Analyzer and Collector 2017

  • Introduced an OTF2 to STF converter otf2-to-stf (preview feature).
  • Introduced a new library for collecting MPI load imbalance (libVTim).
  • Introduced a new API function VT_registerprefixed.
  • Custom plug-in framework is now removed.
  • All product samples are moved online to https://software.intel.com/en-us/product-code-samples.

Key Features

  • Advanced GUI: user-friendly interface, high-level scalability, support of STF and OTF2 trace data
  • Aggregation and Filtering: detailed views of runtime behavior grouped by functions or processes
  • Fail-Safe Tracing: improved functionality on prematurely terminated applications with deadlock detection
  • Intel® MPI Library Interface: support of tracing on internal MPI states, support of MPI-IO
  • Correctness Checking: check for MPI and system errors at runtime (including distributed memory checking)
  • ROMIO*: extended support of MPI-2 standard parallel file I/O
  • Comparison feature: compare two trace files and/or two regions (in one or two trace files)
  • Command line interface for the Intel Trace Analyzer

System Requirements

Hardware Requirements

  • Systems based on the Intel® 64 architecture, in particular:
    • Intel® Core™ processor family
    • Intel® Xeon® E5 v4 processor family recommended
    • Intel® Xeon® E7 v3 processor family recommended
    • 2nd Generation Intel® Xeon Phi™ Processor (formerly code named Knights Landing)
  • 1 GB of RAM per core (2 GB recommended)
  • 1 GB of free hard disk space

Software Requirements

  • Operating systems:
    • Red Hat* Enterprise Linux* 6, 7
    • Fedora* 23, 24
    • CentOS* 6, 7
    • SUSE* Linux Enterprise Server* 11, 12
    • Ubuntu* LTS 14.04, 16.04
    • Debian* 7, 8
  • MPI implementations:
    • Intel® MPI Library 5.0 or newer
  • Compilers:
    • Intel® C++/Fortran Compiler 15.0 or newer (required for OpenMP* support)
    • GNU*: C, C++, Fortran 77 3.3 or newer, Fortran 95 4.4.0 or newer

Known Issues and Limitations

  • Static Intel® Trace Collector libraries require Intel® MPI Library 5.0 or newer.
  • Tracing of MPI applications that include MPI_Comm_spawn function calls is not supported.
  • Intel® Trace Analyzer may get into an undefined state if too many files are opened at the same time.
  • In some cases, symbol information may appear incorrectly in the Intel® Trace Analyzer if you discarded symbol information from object files.
  • MPI Correctness Checking is available with the Intel® MPI Library only.
  • Intel® Trace Analyzer requires libpng 1.2.x (libpng12.so), otherwise the Intel Trace Analyzer GUI cannot be started.
  • Intel® Trace Analyzer and Collector does not support Fortran applications or libraries compiled with the -nounderscore option. Only functions with one or two underscores at the end of the name are supported. See details on Fortran naming conventions at https://gcc.gnu.org/onlinedocs/gcc-4.9.2/gfortran/Naming-conventions.html

Technical Support

Every purchase of an Intel® Software Development Product includes a year of support services, which provides priority customer support at our Online Support Service Center web site, http://www.intel.com/supporttickets.

In order to get support you need to register your product in the Intel® Registration Center. If your product is not registered, you will not receive priority support.

Intel® Trace Analyzer and Collector 2018 Beta Release Notes


This page provides the Release Notes for the Intel® Trace Analyzer and Collector 2018 Beta. Please use the table below to select the version needed.

Intel® Trace Analyzer and Collector 2018 Beta:
  • Linux* OS: English Release Notes
  • Windows* OS: English Release Notes

Intel® MPI Library 2018 Beta Release Notes for Windows* OS


Overview

Intel® MPI Library is a multi-fabric message passing library based on ANL* MPICH3* and OSU* MVAPICH2*.

Intel® MPI Library implements the Message Passing Interface, version 3.1 (MPI-3.1) specification. The library is thread-safe and provides MPI standard-compliant multi-threading support.

To receive technical support and updates, you need to register your product copy. See Technical Support below.

Product Contents

  • The Intel® MPI Library Runtime Environment (RTO) contains the tools you need to run programs, including the scalable process management system (Hydra), supporting utilities, and dynamic libraries.
  • The Intel® MPI Library Development Kit (SDK) includes all of the Runtime Environment components and compilation tools: compiler wrapper scripts (mpicc, mpiicc, etc.), include files and modules, static libraries, debug libraries, and test codes. (A minimal compile-and-run sketch follows this list.)
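
As a minimal sketch of the SDK workflow (from a Windows command prompt where the Intel MPI Library and compiler environments have been set up; the file name is illustrative):

> mpiicc myapp.c
> mpiexec -n 4 myapp.exe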

What's New

Intel® MPI Library 2018 Beta

  • Documentation has been removed from the product and is now available online.

Intel® MPI Library 2017 Update 2

  • Added an environment variable I_MPI_HARD_FINALIZE.

Intel® MPI Library 2017 Update 1

  • Added support for topology-aware collective communication algorithms (I_MPI_ADJUST family).
  • Deprecated support for cross-OS launches.

Intel® MPI Library 2017

  • Support for the MPI-3.1 standard.
  • Removed the SMPD process manager.
  • Removed the SSHM support.
  • Deprecated support for the Intel® microarchitectures older than the generation codenamed Sandy Bridge.
  • Bug fixes and performance improvements.
  • Documentation improvements.

Key Features

  • MPI-1, MPI-2.2 and MPI-3.1 specification conformance.
  • MPICH ABI compatibility.
  • Support for any combination of the following network fabrics:
    • RDMA-capable network fabrics through DAPL*, such as InfiniBand* and Myrinet*.
    • Sockets, for example, TCP/IP over Ethernet*, Gigabit Ethernet*, and other interconnects.
  • (SDK only) Support for Intel® 64 architecture clusters using:
    • Intel® C++/Fortran Compiler 14.0 and newer.
    • Microsoft* Visual C++* Compilers.
  • (SDK only) C, C++, Fortran 77, and Fortran 90 language bindings.
  • (SDK only) Dynamic linking.

System Requirements

Hardware Requirements

  • Systems based on the Intel® 64 architecture, in particular:
    • Intel® Core™ processor family
    • Intel® Xeon® E5 v4 processor family recommended
    • Intel® Xeon® E7 v3 processor family recommended
  • 1 GB of RAM per core (2 GB recommended)
  • 1 GB of free hard disk space

Software Requirements

  • Operating systems:
    • Microsoft* Windows Server* 2008, 2008 R2, 2012, 2012 R2, 2016
    • Microsoft* Windows* 7, 8.x, 10
  • (SDK only) Compilers:
    • Intel® C++/Fortran Compiler 15.0 or newer
    • Microsoft* Visual Studio* Compilers 2013, 2015, 2017
  • Batch systems:
    • Microsoft* Job Scheduler
    • Altair* PBS Pro* 9.2 or newer
  • Recommended InfiniBand* software:
    • Windows* OpenFabrics* (WinOF*) 2.0 or newer
    • Windows* OpenFabrics* Enterprise Distribution (winOFED*) 3.2 RC1 or newer for Microsoft* Network Direct support
    • Mellanox* WinOF* Rev 4.40 or newer
  • Additional software:
    • The memory placement functionality for NUMA nodes requires the libnuma.so library and the numactl utility to be installed. numactl should include numactl, numactl-devel and numactl-libs.

Known Issues and Limitations

  • Cross-OS runs using ssh from a Windows* host fail. Two workarounds exist:
    • Create a symlink on the Linux* host that looks identical to the Windows* path to pmi_proxy.
    • Start hydra_persist on the Linux* host in the background (hydra_persist &) and use -bootstrap service from the Windows* host. This requires that the Hydra service also be installed and started on the Windows* host.
  • Support for Fortran 2008 is not implemented in Intel® MPI Library for Windows*.
  • Enabling statistics gathering may result in increased time in MPI_Finalize.
  • In order to run a mixed OS job (Linux* and Windows*), all binaries must link to the same single- or multithreaded MPI library. The single- and multithreaded libraries are incompatible with each other and should not be mixed. Note that the pre-compiled binaries for the Intel® MPI Benchmarks are inconsistent (the Linux* version links to the multithreaded library, the Windows* version to the single-threaded one) and as such, at least one must be rebuilt to match the other.
  • If communication between two existing MPI applications is established using the process attachment mechanism, the library does not check whether the same fabric has been selected for each application. This situation may cause unexpected application behavior. Set the I_MPI_FABRICS variable to the same values for each application to avoid this issue (a sketch follows this list).
  • If your product redistributes the mpitune utility, provide the msvcr71.dll library to the end user.
  • The Hydra process manager has some known limitations such as:
    • stdin redirection is not supported for the -bootstrap service option.
    • Signal handling support is restricted. This can result in processes hanging in memory if an MPI job terminates incorrectly.
    • Cleaning up the environment after an abnormal MPI job termination by means of mpicleanup utility is not supported.
  • ILP64 is not supported by MPI modules for Fortran 2008.
  • When using the -mapall option, if some of the network drives require a password and it is different from the user password, the application launch may fail.
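
For example, to force two applications onto the same fabric selection, set the variable identically in both environments before launching (a sketch; the fabric value shown is illustrative):

> set I_MPI_FABRICS=shm:tcp
> mpiexec -n 4 myapp.exe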

Technical Support

Every purchase of an Intel® Software Development Product includes a year of support services, which provides priority customer support at our Online Support Service Center web site, http://www.intel.com/supporttickets.

In order to get support you need to register your product in the Intel® Registration Center. If your product is not registered, you will not receive priority support.
