
IoT Path-to-Product: How to Build an Intelligent Vending Machine


This Internet of Things (IoT) path-to-product project is part of a series of articles that describe how to develop a commercial IoT solution from the initial idea stage, through prototyping and refinement, to a viable product. It uses the Grove* IoT Commercial Developer Kit, with the prototype built on an Intel® Next Unit of Computing (NUC) Kit DE3815TYKHE small-form-factor PC and an Arduino* 101 board.

This document demonstrates how to build the prototype and how to deploy the same technologies with an Intel® IoT Gateway and industrial sensors. The project requires no special equipment or deep expertise, and it is intended to illustrate a generalized prototyping phase for IoT projects.

Note: Known in the US as “Arduino* 101,” this board is known elsewhere as “Genuino* 101.” It is referred to throughout the rest of this document as the “Arduino 101” board.

This article is a how-to on building an intelligent vending machine prototype. For the story behind its development, see IoT Path-to-Product: The Making of an Intelligent Vending Machine.

Visit GitHub for this project's latest code samples and documentation.

 

Introduction

The completed intelligent vending machine is shown in Figure 1. From this exercise, developers will learn to do the following:

Figure 1. Completed intelligent vending machine.
  • Connect to the Intel NUC Kit DE3815TYKHE.
  • Interface with the IO and sensor repository for the NUC using MRAA and UPM from the Intel® IoT Developer Kit, a complete hardware and software solution to help developers explore IoT and implement innovative projects.
  • Run this code sample in Intel® System Studio, an IDE for creating applications that interact with sensors and actuators, enabling a quick start for developing software for Intel IoT platforms.
  • Set up and connect to cloud services using the Microsoft Azure* platform, which provides cloud analytics and a communications hub for various parts of the solution.

What it does

This project enables you to simulate the following functionality of an intelligent vending machine:

  • Vend multiple products
  • Track inventory levels of products in the vending machine
  • Send alerts when the inventory levels fall below a preset threshold
  • Identify power start-up and malfunctions 
  • Notify when the vending machine door is open
  • Display coil status
  • Monitor and send alerts on internal temperature of the machine
  • Provide a visual log of past maintenance, inventory, and events
  • Provide a companion app for purchasing products

How it works

This intelligent vending machine prototype uses sensors to trigger a variety of actions:

  • Buttons simulate the selection of specific products from the vending machine
  • Stepper motor turns to dispense product 
  • Red and green LEDs indicate failure and OK status
  • Temperature sensor monitors interior of vending machine
  • LCD displays vending machine status

All data flows occur through the cloud as an intermediary. For example, if a customer selects a product using the mobile application, the selection is passed to the cloud and then to the vending machine itself. This approach enables inventory levels to be maintained in the cloud. 

Similarly, data related to product offerings, pricing, and maintenance events is centrally managed in the cloud, enabling comprehensive trend analysis and reporting. Maintenance alerts, for example, can be sent to service personnel as well as to the administration application for tracking purposes. Note that while the design of the intelligent vending machine prototype allows for these capabilities, they are not implemented as part of this demo.

Set up the Intel NUC Kit DE3815TYKHE

This section gives instructions for installing the Intel® IoT Gateway Software Suite on the NUC.

Note: If you have acquired a Grove IoT Commercial Developer Kit, the Intel IoT Gateway Software Suite comes pre-installed on the NUC.

  1. Create an account on the Intel® IoT Platform Marketplace if you do not already have one.
  2. Download the Intel IoT Gateway Software Suite and follow the instructions received by email to download the image file.
  3. Unzip the archive and write the .img file to a 4GB USB drive.

    On Microsoft Windows*, you can use a tool like Win32 Disk Imager*: https://sourceforge.net/projects/win32diskimager.
    On Linux*, use sudo dd if=GatewayOS.img of=/dev/sdX bs=4M; sync, where sdX is your USB drive.

  4. Unplug the USB drive from your system and plug it into the NUC, along with a monitor, keyboard, and power cable.
  5. Turn on the NUC and enter the BIOS by pressing F2 at boot time.
  6. Boot from the USB drive, as follows:

    a. From the Advanced menu, select Boot.
    b. From Boot Configuration, under OS Selection, select Linux.
    c. Under Boot Devices, make sure the USB check box is selected.
    d. Save the changes and reboot.
    e. Press F10 to enter the boot selection menu and select the USB drive.

  7. Log in to the system with root:root.
  8. Install Wind River Linux on local storage with the following command:
    ~# deploytool -d /dev/mmcblk0 --lvm 0 --reset-media -F

    Note: Due to the limited size of the local storage drive, we recommend against setting a recovery partition. You can return to the factory image by using the USB drive again.

  9. Use the poweroff command to shut down your gateway. Then, unplug the USB drive, and turn the gateway back on to boot from the local storage device.
  10. Plug in an Ethernet cable and use the command ifconfig eth0 to find the IP address assigned to your gateway (assuming you have a proper network setup).
    You can now use your gateway remotely from your development machine if you are on the same network as the gateway. If you would like to use the Intel® IoT Gateway Developer Hub instead of the command line, enter the IP address into your browser and go through the first-time setup.
  11. Use the Intel® IoT Gateway Developer Hub to update the MRAA and UPM repositories to the latest versions from the official repository (https://01.org). You can achieve the same result by entering these commands:
     ~# smart update
     ~# smart upgrade
     ~# smart install upm
  12. Use the following commands to install Java* 8 support (after executing the previous commands). These remove the precompiled OpenJDK* 7 and install OpenJDK* 8, which works with MRAA and UPM:
    ~# smart remove openjdk-bin
    ~# smart install openjdk-8-jre
  13. Plug in an Arduino 101 board and reboot the NUC. The Firmata* sketch is flashed onto the Arduino 101, and you are now ready to use MRAA and UPM with it.
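
To verify the setup, you can run a quick sensor read. The sketch below is illustrative only (not part of the project code): it uses the Node.js bindings for MRAA and UPM included with the gateway image, and the tty device path and subplatform pin offset are assumptions that may need adjusting for your hardware.

    var mraa = require('mraa');
    // Attach the Arduino 101 running Firmata as an MRAA subplatform
    mraa.addSubplatform(mraa.GENERIC_FIRMATA, "/dev/ttyACM0");

    var groveSensor = require('jsupm_grove');
    // Subplatform pins start at offset 512, so 512 corresponds to A0
    var temp = new groveSensor.GroveTemp(512);
    console.log(temp.name() + ": " + temp.value() + " degrees Celsius");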

Set up the Arduino* 101 board

Setup instructions for the Arduino* 101 board are available at https://www.arduino.cc/en/Guide/Arduino101

Connect other components

This section covers making the connections from the NUC to the rest of the hardware components. The bill of materials for the prototype is summarized in Table 1, and the assembly of those components is illustrated in Figure 2.

Table 1. Intelligent vending machine prototype components.

 

Base System

  • Intel® NUC Kit DE3815TYKHE: http://www.intel.com/content/www/us/en/support/boards-and-kits/intel-nuc-kits/intel-nuc-kit-de3815tykhe.html
  • Arduino* 101 Board: https://www.arduino.cc/en/Main/ArduinoBoard101
  • USB Type A to Type B Cable: for connecting the Arduino 101 board to the NUC

Components from Grove* Starter Kit Plus IoT Edition

  • Base Shield V2: http://www.seeedstudio.com/depot/Base-Shield-V2-p-1378.html
  • Gear Stepper Motor with Driver: http://www.seeedstudio.com/depot/Gear-Stepper-Motor-with-Driver-p-1685.html
  • Button Module: http://www.seeedstudio.com/depot/Grove-Button-p-766.html
  • Temperature Sensor Module: http://www.seeedstudio.com/depot/Grove-Temperature-Sensor-p-774.html
  • Green LED: http://www.seeedstudio.com/depot/Grove-Green-LED-p1144.html
  • Red LED: http://www.seeedstudio.com/depot/Grove-Red-LED-p-1142.html
  • Touch Sensor Module: http://www.seeedstudio.com/depot/Grove-Touch-Sensor-p-747.html
  • LCD with RGB Backlight Module: http://www.seeedstudio.com/depot/Grove-LCD-RGB-Backlight-p-1643.html

 
Figure 2. Intelligent vending machine proof of concept prototype.

 

Install Intel® System Studio

Intel® System Studio is a plug-in for Eclipse* that allows you to connect to, update, and program IoT projects on an Intel NUC or other compatible board. It helps you write applications in C, C++, and Java, and provides two libraries specially designed for the Intel® IoT Developer Kit:

  • MRAA is a low-level library that offers a translation from the input/output interfaces to the pins available on your IoT board.

  • UPM is a sensor library with multiple language support that utilizes MRAA. UPM allows you to conveniently use or create sensor representations for your projects.

Install on Windows*

Note: Some files in the compressed installer use extended path names, which 7-Zip* supports, so use only 7-Zip software to extract the installer file.

  1. Download the 7-Zip software from http://www.7-zip.org/download.html.
  2. Right-click on the downloaded executable and select Run as administrator.
  3. Click Next and follow the instructions in the installation wizard to install the application.
  4. Using 7-Zip, extract the installer file.

Warning: Be sure to extract the installer file to a folder location that does not include any spaces in the path name. For example, the folder C:\My Documents\ISS will not work, while C:\Document\ISS will.

Install on Linux*

  1. Download the Intel® System Studio installer file for Linux*.
  2. Open a new Terminal window.
  3. Navigate to the directory that contains the installer file.
  4. Enter the command: tar -jxvf file to extract the tar.bz2 file, where file is the name of the installer file. For example, tar -jxvf iss-iot-linux.tar.bz2. The command to enter may vary slightly depending on the name of your installer file.

Install on Mac* OS X*

  1. Download the Intel System Studio installer file for Mac* OS X*.
  2. Open a new Terminal window.
  3. Navigate to the directory that contains the installer file.
  4. Enter the command: tar -jxvf file to extract the tar.bz2 file, where file is the name of the installer file. For example, tar -jxvf iss-iot-mac.tar.bz2. The command to enter may vary slightly depending on the name of your installer file.

Note: If you see a message that says "iss-iot-launcher can’t be opened because it is from an unidentified developer", right-click the file and select Open with. Select the Terminal app. In the dialog box that opens, click Open.

Launch Intel® System Studio

  1. Navigate to the directory you extracted the contents of the installer file to.
  2. Launch Intel System Studio as follows:
  • On Windows*, double-click iss-iot-launcher.bat to launch Intel System Studio. 
  • On Linux*, run iss-iot-launcher.sh.
  • On Mac* OS X*, run iss-iot-launcher.

Note: Using the iss-iot-launcher file (instead of the Intel® System Studio executable) launches Intel System Studio with all the necessary environment settings. Use the iss-iot-launcher file every time you start Intel® System Studio.

Install Microsoft* Azure* components

The Azure* cloud maintains information about product inventory on intelligent vending machines in the network, keeps track of the events received from vending machines, and could provide future functionality to analyze this data and trigger responses to various conditions (e.g., low inventory or mechanical failure).

Implement the Azure* C++ API

Connecting to Azure* using the C++ API requires compilation of all the following libraries to build the Casablanca project:

Create a web app in Azure* 

Compile Boost

wget -O boost_1_58_0.tar.gz 'http://downloads.sourceforge.net/project/boost/boost/1.58.0/boost_1_58_0.tar.gz?r=http%3A%2F%2Fsourceforge.net%2Fprojects%2Fboost%2Ffiles%2Fboost%2F1.58.0%2F&ts=1438814166&use_mirror=iweb'
tar xzvf boost_1_58_0.tar.gz
cd boost_1_58_0
./bootstrap.sh
./b2

Compile Casablanca

Clone Casablanca (originally hosted on CodePlex; the project now lives on GitHub):

git clone https://github.com/Microsoft/cpprestsdk

Compile Casablanca by following the Linux build instructions at:

https://casablanca.codeplex.com/wikipage?title=Setup%20and%20Build%20on%20Linux&referringTitle=Documentation

Then clone the Azure* storage client library:

git clone https://github.com/Azure/azure-storage-cpp.git

SQLite3 installation and table initialization

SQLite3 IPK package installation:

root@galileo:~# opkg install sqlite3
Installing sqlite3 (3:3.8.6.0-r0.0) on root.
Downloading http://iotdk.intel.com/repos/1.5/iotdk/i586/sqlite3_3.8.6.0-r0.0_i586.ipk.
Configuring sqlite3.

Products database creation and initialization:

root@galileo:~# sqlite3 Vending_Prototype/products.sqlite3
SQLite version 3.8.6 2014-08-15 11:46:33
Enter ".help" for usage hints.
sqlite> create table products(name varchar(255) primary key, price smallint, quantity smallint);
sqlite> insert into products values('Coke',150,2);
sqlite> insert into products values('Pepsi',130,3);

Events database creation and initialization:

root@galileo:~# sqlite3 Vending_Prototype/events.sqlite3
SQLite version 3.8.6 2014-08-15 11:46:33
Enter ".help" for usage hints.
sqlite>
sqlite> create table events(time INT, type smallint, key varchar(255), value smallint);

SQLite3 Node.js package installation:

npm install sqlite3
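
As a minimal sketch (illustrative only), the package can then be used from Node.js to read the inventory created above:

var sqlite3 = require('sqlite3').verbose();
var db = new sqlite3.Database('Vending_Prototype/products.sqlite3');

// Print each row from the products table created above
db.each('SELECT name, price, quantity FROM products', function (err, row) {
  if (err) throw err;
  console.log(row.name + ': ' + row.quantity + ' in stock at price ' + row.price);
});

db.close();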

Azure* installation

npm install azure-storage
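
As a hedged sketch of how an event could be pushed to the cloud with this package (the storage account name, key, and table name below are placeholders, not values from this project):

var azure = require('azure-storage');
var entGen = azure.TableUtilities.entityGenerator;

// Placeholder credentials: substitute your storage account name and access key
var tableSvc = azure.createTableService('myaccount', 'myaccesskey');

tableSvc.createTableIfNotExists('events', function (err) {
  if (err) throw err;
  var entity = {
    PartitionKey: entGen.String('vending-machine-1'),
    RowKey: entGen.String(Date.now().toString()),
    type: entGen.Int32(1),  // event type, mirroring the local events table
    value: entGen.Int32(0)
  };
  tableSvc.insertEntity('events', entity, function (err) {
    if (err) throw err;
    console.log('event stored in Azure');
  });
});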

Conclusion

As this how-to document demonstrates, IoT developers can build prototypes with gateway, administrative, mobile, and cloud software functionality at relatively low cost and without specialized skill sets. Using the Grove* IoT Commercial Developer Kit and an Arduino* 101 board, project teams can conduct rapid prototyping to test the viability of IoT concepts as part of the larger path-to-product process.

Visit GitHub for this project's latest code samples and documentation.


What's New? OpenCL™ Runtime 16.1.1 (CPU only)


The 16.1.1 release update includes:

  • Fix for the known incompatibility issue with the CPU Kernel Debugger from the Intel® SDK for OpenCL™ Applications 2016 R2 and the CPU only runtime package version 16.1.
  • Performance optimizations:
    • Compiler vectorizer heuristic tuning for a set of workloads
    • Workgroup fusion optimization improvements
    • Performance enhancements of the vload()/vstore() built-in functions
  • Fix for the issue reported on the forum (https://software.intel.com/en-us/comment/1844607#comment-1844607): vectorizer produces incorrect code on SSE42 architectures when using the samplerless read_imagef() built-in function with image2d_t and int2 coordinates as arguments.
  • cl_khr_gl_sharing extension was disabled due to incompatibility with the Microsoft* Basic Display Adapter. To use this extension, please install OpenCL Driver for Intel® Iris™ Graphics and HD Graphics for Windows* OS from https://software.intel.com/en-us/articles/opencl-drivers#iris. The driver package includes the OpenCL Runtime package for CPUs.
  • Due to a performance bug, the Threading Building Blocks (TBB) library was downgraded from version 4.2 (interface version 7001, Oct 2 2013) to version 4.2 (interface version 7005, Jun 1 2014).

 

Case Study - A Very Good Year: Sommely and Intel® IoT Technology Power Smart Inventory Management for Wine Collectors


Asset tracking is a challenge that seems tailor-made for an Internet of Things solution. Imagine sensors connected to a tiny IoT processor that can be mounted on shipping crates, or on individual objects lining the shelves of a retail operation or private home. The sensors collect data–location, temperature, movement, sales or consumption details, for example–and pass it into the cloud where that data can be sliced, diced, and fed into a recommendation engine. Tapping into the engine then helps someone make an informed decision based on what’s trending, under what circumstances the product will be consumed, and what to expect when they try it for themselves. 

Intel had been exploring that premise, looking at ways in which Intel® Edison™ technology and Intel® Quark™ processors and microcontrollers could be used to prototype just such a system.

Recognizing that a Portland, Oregon-based design company was focusing on the same opportunity, Intel reached out to Uncorked Studios. “We brought them to an invitation-only development day,” Intel IoT innovation manager Shashi Jain explained. “Our goal was very specific. We wanted to collaborate with them, to demonstrate how Intel IoT technology can be used to quickly take an idea from concept to prototype, and ultimately to market.”

A connoisseur’s collection fitted with Sommely caps.

For Uncorked Studios, that idea was a product called Sommely (from sommelier, or wine steward). With Sommely, the luxury wine market served as the vector for proving that a smart asset-tracking system for wine enthusiasts could scale from the domestic wine rack to restaurants, wineries, and beyond.

Uncorked Studios

The aptly named Uncorked Studios is a product design and development company, which, for the six years of its existence, has been focused on “the relationship between digital environments and the physical world.” The company’s team of 42 has created smart-home and wearable products in collaboration with LEGO*, Google*, adidas*, and Samsung*, among others. Working with Intel, Uncorked helped design the multi-camera array used in the Intel® RealSense™ launch.

Sommely is Uncorked Studios’ first foray into the wine and asset-tracking market.

The Sommely Experience

As a smart inventory-management system for wine collectors, Sommely uses several Intel IoT components to keep a running count of what’s in a particular wine collection. The system can also make smart recommendations by drawing on crowd-sourced data to suggest what to drink, when a bottle is ready to drink, and the food it pairs well with.

To accomplish that, a mobile-friendly website communicates with a gateway or a hub that’s connected to the Internet via WiFi. The hub also communicates with individual caps fitted to each of the bottles in a wine collection. The caps hold batteries, a radio that talks to the hub, sensors, and LED lights.

“Say it’s a Tuesday night, and you’ve just ordered pizza,” Marcelino Alvarez, the founder and CEO of Uncorked Studios, explained. “You don’t want to accidentally open a bottle that’s rare, or too expensive. You just want something that pairs well with your pepperoni and mushroom pizza, so you ask the app and the corresponding caps light up, letting you do a visual search through your inventory.”

Along similar lines, the Sommely app gives users the ability to choose wines for special occasions, based on criteria that includes the food the wine is being paired with, the characteristic elements you’re looking for (light and crisp, big and jammy, etc.), and whether a particular bottle is ready to be enjoyed now or if it needs more time in the cellar. The system will also warn when a bottle is past its prime.

The crowd-sourced component, like Sommely itself, is a work-in-progress. Alvarez’s goal is for their app to “play nice with other apps”. “We’re not competing with apps like Delectable Wines or Vivino, so we could partner with them.” He envisions getting data that includes a large sampling of what wine lovers are drinking at any given moment, along with data such as industry and consumer reviews and tasting notes. Sommely could aggregate and present that information in ways that encourage curiosity and exploration.

What does Sommely have that other systems don’t? Traditional, analog inventory systems that use things like spreadsheets and post-it notes, or tags that hang around the neck of each bottle, have drawbacks. “For example, they require a lot of individual attention,” Alvarez pointed out. “You need to write down drink-by dates, assuming you even have that information.” Another disadvantage is that manual systems don’t automatically update when you drink a bottle, making it easy to lose track of what’s in the collection.

“Even if you’re diligent about keeping track, you’re inevitably going to forget you drank something. A single misplaced bottle can send you back to square one. If you have 40, 50, or 100 bottles, that might only take you an hour,” Alvarez said. “But if you’ve got 500 bottles, or 1,000 bottles, there are other considerations. For example, sizable collections are more likely to include special bottles–gifts, rare and expensive vintages, and so on.”

How Sommely Works

A low-power, 32-bit  Intel® Quark™ microcontroller D1000 resides in each Sommely bottle cap. Uncorked Studios’ engineers leaned heavily on the fine-grained power management features of the Intel Quark microcontroller D1000. Standby mode and fast wake-times helped maximize battery life, which was crucial for a ‘set it and forget it’ solution expected to function for long periods. 

Low-power Intel Quark microcontroller D1000s are tiny enough to fit in a bottle cap. 

The gateway itself uses an Intel NUC and a full Ubuntu* Linux* distribution that enables remote system updates. Currently, the caps and gateway communicate using Bluetooth® Low Energy (BLE) technology. BLE is tailored for low-energy IoT devices. “We decided on BLE for a couple of reasons,” Alvarez said. “When prototyping Sommely, we piggybacked some of our engineering efforts with those of a software engineering team at Intel that was working on a BLE solution. That helped us get from concept to prototype much faster than expected.”

As convenient as BLE seemed, Alvarez doesn’t expect it to be part of the final product. Uncorked Studios is still working with Intel to define an active RF or RFID solution, because the initial Sommely prototype showed that scaling beyond 100 bottles to 500 or more could prove challenging. “With a larger collection, BLE interference could make it challenging to pair everything, so we’ll probably build a custom RF stack for Sommely.”

The gateway’s Intel NUC enables discovery of the BLE caps and maintains their connection state over time by periodically scanning and re-scanning, as well as taking Received Signal Strength Indication (RSSI) measurements. The browser-based system interface draws on a library of collection and search queries, and the design provides an upstream connection to WiFi. The Uncorked Studios team has been using Intel-powered Dell* Venue tablets to test-drive Sommely.

Frictionless User Interaction

A lot of technology has gone into Sommely, and, Alvarez says, “we’re looking at ways to create frictionless interaction with the system. We want to keep it simple.” According to Alvarez, the physical components, the context in which they’re being used, and even the BIOS all posed design challenges.

For starters, the gateway needed to blend in. After all, wine enthusiasts want the focus of attention to be on the bottles in their collection, not on a high-tech telecommunications device. As a result, the Intel Quark microcontroller D1000 and the battery are hidden within the Sommely bottle caps, and the gateway is housed in an Intel NUC.

In early trials, at runtime the Dell Venue tablet appears to interact directly with the bottle caps themselves. “That’s because we limited the perceptual distance from the transaction between the tablet, the caps, and the hub. The end user won’t know that we're running Linux, a full BLE stack, and a WiFi network stack with a node.js web server. It feels like you go to the system, press a button, and see LEDs light up.”

The Sommely bottle caps supply user feedback through a ring of three LEDs. That presented the UX team with another challenge: developing an intuitive user experience for a device that doesn’t have the traditional elements of a user interface–there’s no screen to display things on other than what’s available through the web app.

Inside each Sommely bottle cap sit three LEDs positioned 120 degrees apart. They change color to give visual feedback.

“One of the interesting things that emerged with IoT is the trend of having a mobile app that acts like an interface to a device that itself provides user feedback on a more intimate level,” Alvarez said. “That holds true for Sommely. Our web app is just a way for people to control an interface to wine bottles, where the bottles themselves become unique elements of that interface. The Sommely caps fit over the actual wine bottle caps, and respond to input and output from the web app. You can pick up a bottle, tap a button on the Sommely cap, and get feedback through them, as well as display relevant information on the app.”

Sommely bottle caps deliver feedback by lighting up.

UX design decisions were based on context. “Where the bottles would be, relative to the tablet; what happens when you’re out of sight,” Alvarez said. “How do you make the app for the interface something that gets out of the way rather than being in your face? You don't want to be scrolling through an infinite series of menus to just find one wine bottle.”

All of that translated to a solution in which the Intel Quark microcontroller D1000 embedded in each cap responds to user push-button inputs. “We implemented a custom ping-pong soft PWM based on the sample code provided with the Intel SDK to drive the LED ring,” Alvarez explained. “The BLE client state machine converts inbound BLE requests to color, brightness, and off/on. Responses to button pushes can be customized based on a variety of queries. If a user asks to see wines ranging from expensive to less expensive, the color palette spans green to yellow, but for a ‘is it ready to drink right now?’ query, the feedback is a more nuanced color palette.”

Batteries - Maximizing Performance

Battery life was crucial, so the system enters power-saving states whenever possible. “We exploited the halt and standby states in our firmware to put the SoC into power-saving mode whenever it's not needed. The MCU is idle during dead clocks in our PWM states, and whenever the LED ring is off. As a result, we've got the Intel Quark D1000 in standby almost all of the time, which translates to battery life approaching one year under our current pattern of intermittent use.”

The web app notifies users when batteries need to be replaced, but Alvarez cautions against using Sommely caps to run DJ-style light shows. “There will be people who do that, but we don’t want to encourage it. We’d like to implement conductive charging or an RF harvest state. With enough harvesting, we might be able to extend battery life by a couple years. But those are v2 or v3 features.”

Sommely lets users make smart choices by drawing data from the cloud. The app’s backend is hosted on Amazon Web Services. 

With a target audience that holds on to wine for anywhere from a year to three years, Alvarez knows how important extended battery life is. “Cost is the primary issue holding us back. We’re looking at ways to gracefully send new caps to customers as batteries near end-of-life.”

Some wine enthusiasts believe it’s important to annually rotate the bottles in their collections a quarter-turn. Alvarez says they considered using the accelerometer in their sensors and the light ring in the bottle caps to tell users when to turn bottles. “The three LEDs are 120 degrees apart, so we could have lit an LED to signal when it’s time to rotate a bottle, and then turned the LED off when the quarter turn was complete.”

Intel Inside the Software Stack

The Uncorked Studios engineers turned to a number of Intel software development tools when coding Sommely. The gateway runs a custom MEAN stack implementation, and Intel® System Studio for Microcontrollers was used to code the Intel Quark microcontroller D1000 firmware.

“Having sample code for the Intel Quark D1000 firmware made a huge difference for us when it came to some of the trickier portions of the system, power management specifically,” Alvarez said.

As a result of the relationships established during the invitation-only developer day, Intel hardware and software engineers helped the Uncorked Studios team with the Intel Quark D1000 firmware. “We were able to get up and running on the Edison NUC very quickly,” Alvarez said. “With the Intel Quark D1000, Intel engineers were able to look at our code and give us input that helped us resolve issues. When we’ve had challenges or questions, they’ve been right there for us.”

More importantly, Alvarez said, “Intel’s Marc Alexander was an advocate. He saw the potential in Sommely from a broader perspective. He saw beyond the immediate implication of how many chips they might sell, and understood that Sommely was an exciting IoT use case with a lot of potential. If we didn't have believers within Intel, or access to domain experts within various aspects of IoT and the startup world, we wouldn't be where we are today.”

Next Steps

Sommely is a work-in-progress. At the time of writing, enhancements to its power management stack are top-of-mind for Alvarez. “If we could harvest energy and not have to replace batteries, that would be a leaps-and-bounds improvement. I think power management is a challenge all IoT devices are going to face at some point.”

Alvarez envisions solutions ranging from the utilitarian to the glamorous. “We have a few ideas for how we could recharge the caps using either existing induction charging technologies as well as some new approaches that we’re excited to prototype. We think it’s a space where we can continue to partner with Intel to solve a much larger industry challenge.”

In Summary

High-value inventory management, keeping track of valuable things using IoT technology, offers solutions to a lot of challenges: tracking tools in a factory, keeping tabs on a collection of Star Wars figures, Barbie dolls, antiques, and other rare collectibles being shipped across town or across borders, or monitoring the day-to-day life of a wine collection.

Uncorked Studios was in lean, startup mode with Sommely when they started working with Intel IoT technology. Intel hardware and software engineers helped the Uncorked team quickly prototype the smart asset-management solution. The lessons they’ve learned so far, and possible new use cases, hold great potential, especially when scaled to support larger inventories.

“We've looked at different contextual experiences that go beyond one-cap-one-bottle,” Alvarez said. “What could a restaurant or winery do with Sommely? Is there a SKU that just sits on top of a case that holds 12 individual units representing the 12 bottles within? Maybe those units detach and go on the bottles when you crack open the case? We also considered transferring ownership of a bottle and keeping that history in the cap itself. It’d be like an electronic manifest for the bottle. We have thought through a number of scenarios, but aren’t ready to implement them.”

Whatever the future holds for Sommely, it shows how Intel fosters innovation through collaboration with startups. By helping Uncorked Studios overcome technical hurdles and scale Sommely from one to many sensors/caps, Intel gained valuable insights. Those insights, in turn, helped the Intel team refine and enhance Intel’s IoT hardware and software solutions.

If you’ve got a burning desire to change the world with IoT technology, but you’re having technical difficulties, drop by the Intel Developer Zone. Our domain experts, and the developer community at large, might be able to lend a hand.

 

Boost JavaScript* Performance by Exploiting Vectorization using SIMD.js


As JavaScript* applications become more sophisticated, developers are increasingly looking for ways to optimize performance. Single Instruction Multiple Data (SIMD) operations enable you to process multiple data items at the same time when “data-parallelism,” the mutual independence of data, exists. In the past, these operations have been limited to low-level languages and languages that can map closely to the architecture, such as C/C++. Using SIMD.js, these operations are now available to use directly from JavaScript code. This enables JavaScript developers to easily exploit the hardware capabilities of the underlying architecture to significantly improve the performance of code that can benefit from data parallelism. Developers can also easily translate SIMD-optimized algorithms from languages like C/C++ to JavaScript.

In this article, we provide examples of SIMD operations, show you how to enable SIMD.js in Microsoft Edge* and ChakraCore*, and provide tips for writing JavaScript code that will avoid performance cliffs.

Understanding SIMD

Since 1997, Intel has been adding instructions to its processors to perform the same operation on multiple data items in parallel. These SIMD operations can accelerate performance in applications such as rendering calculations, 3D object manipulations, encryption, physics engines, compression/decompression algorithms, and image processing.

As a basic example of how SIMD works, consider adding two arrays so that C[i] = A[i] + B[i] for the entire dataset. A simple solution is to iterate over each pair of elements and perform the addition sequentially. The processor has a SIMD operation, though, that can enable you to perform an addition on multiple independent data chunks at the same time. If you process four data items at the same time, the process could be made up to four times faster. Essentially, the array is divided up into smaller fixed-size arrays, sometimes referred to as vectors.

SIMD operations are commonly used in C/C++, and in certain cases the compiler can automatically vectorize the code. The GCC* and Clang* compilers provide “vector_size” and “ext_vector_type” attributes to auto-vectorize C/C++ code that has data parallelism. The Intel® C++ Compiler and Intel® Fortran Compiler offer the “#pragma SIMD” directive for trying to auto-vectorize loops. Array notation is another Intel-specific language extension in Intel C++ Compilers that enables users to point to the data parallel code that the compiler should try to auto-vectorize. In other cases, intrinsics can be added in the source code to explicitly indicate where vectorization should be used. Intrinsics are language constructs that map directly to sequences of instructions on the underlying architecture. Because intrinsics are specific to a particular architecture, they should only be used if other approaches are not possible.

Intel® architecture supports a range of SIMD operation sets, starting with MMX™ technology (introduced in 1997), going through Intel® Streaming SIMD Extensions (Intel® SSE, Intel® SSE2, Intel® SSE3, Intel® SSE4.1, and Intel® SSE4.2) and Supplemental Streaming SIMD Extensions 3, and most recently with Intel® Advanced Vector Extensions, Intel® Advanced Vector Extensions 2, and Intel® Advanced Vector Extensions 512. SIMD operations are supported on both Intel® Core™ and Intel® Atom™ processors today.

Why create SIMD.js?

Over the years, improvements in compilers and managed runtimes have significantly reduced the performance gap between JavaScript and native apps. One limitation, though, is that JavaScript is a managed language and so does not have direct access to the hardware. The execution engine abstracts away the hardware details, preventing access to the SIMD operations.

JavaScript expressions map less closely to the architecture than C expressions and the JavaScript language is dynamically typed, both of which make it difficult for a compiler to automatically vectorize loops.

To enable JavaScript programmers to use SIMD, Intel has worked with Mozilla, Google, Microsoft, and ARM to bring SIMD.js to JavaScript. SIMD.js is a set of JavaScript extension APIs that expose the SIMD operations of the underlying architecture. This makes it possible for existing C/C++ programs with intrinsics to be easily translated (by hand) to JavaScript and run in the browser, or for new JavaScript code to be written using SIMD operations.

SIMD.js: A simple array addition example

SIMD.js has been designed to cover the common overlap between major infrastructures supporting SIMD instructions: Intel SSE2 and ARM Neon*. Architectures that do not support the SIMD operations are still supported by SIMD.js, but they will not be able to match the performance of a SIMD architecture.

Table 1 shows some example JavaScript code for adding two arrays, and Table 2 shows how the same can be achieved using SIMD.js. SIMD.js supports 128-bit vectors, and since we are operating on 32-bit integer values, we can perform four addition operations at the same time. This potentially accelerates the computation by a factor of four.

/* Set up the integer arrays A, B, C */

var A = new Int32Array(size);
var B = new Int32Array(size);
var C = new Int32Array(size);

for (var i = 0; i < size; i++)
{
  C[i] = A[i] + B[i];
}

Table 1: JavaScript* scalar/sequential array addition.

/* Set up the integer arrays A, B, C */

var A = new Int32Array(size);
var B = new Int32Array(size);
var C = new Int32Array(size);

/* Note that i increases by 4 each iteration */

for (var i = 0; i < size; i += 4)
{
  /* load vector of four integers from A, starting at index i, into variable x */
  var x = SIMD.Int32x4.load(A, i);

  /* load vector of four integers from B, starting at index i, into variable y */
  var y = SIMD.Int32x4.load(B, i);

  /* SIMD addition operation on vectors x and y */
  var z = SIMD.Int32x4.add(x, y);

  /* SIMD operation to store the vector z results in array C */
  SIMD.Int32x4.store(C, i, z);
}

Table 2: SIMD.js example of array addition.

SIMD.js availability and performance

SIMD.js is currently at stage 3 of the TC39 approval process, and on track to be part of ECMAScript* 2017. As a result of a close collaboration between Intel engineers and Microsoft, SIMD.js is available on the Windows® 10 Microsoft Edge browser as an experimental feature for developers to try. It is also included in ChakraCore, the open source version of the browser's engine.

Figures 1 and 2 show the impact of SIMD.js in the latest Microsoft Edge browser. The figures show that the number of meshes (or 3D images) that can be supported at 60 frames per second more than doubles from 63 to 133 when using SIMD.js. This demo, called Skinning SIMD.js, was hand-written in asm.js and SIMD.js. (Asm.js code is usually cross-compiled and rarely written by hand, but in this case the module was simple enough to be handwritten.)

Skinning SIMD Demo

Enabling SIMD.js

You can use SIMD.js in ChakraCore (which we have used for preparing this paper) or in the Microsoft Edge browser.

Enabling SIMD.js in Microsoft ChakraCore

ChakraCore is the core part of Chakra, the high-performance JavaScript engine that powers the Microsoft Edge browser and Windows applications written in HTML/CSS/JavaScript. ChakraCore also supports the JavaScript Runtime (JSRT) APIs, which allow you to easily embed ChakraCore in independent applications. ChakraCore is currently verified to be working on Windows platforms. For details on how to get, build, and use ChakraCore, see https://github.com/Microsoft/ChakraCore.

ChakraCore can run directly from the command line or as part of node.js. ChakraCore has the latest SIMD.js features and optimizations that are yet to be merged into Microsoft Edge. To run SIMD.js code from the command line, make sure to include the following flags “-simdjs -simd128typespec”.

Enabling SIMD.js in Microsoft Edge

Microsoft Edge ships as part of Windows 10. SIMD.js is available in the Microsoft Windows RedStone1 (RS1) update. For the latest features, install the latest version of the Windows 10 release. To enable SIMD.js in the Microsoft Edge browser:

  1. Navigate to “about:flags”.
  2. Under “JavaScript”, check “Enable experimental JavaScript features”.
  3. Restart the browser.

Writing performant SIMD.js code for ChakraCore

SIMD.js offers an extensive set of data types (see Table 3) and operations that are all geared towards boosting performance. There is a lot of flexibility in how the SIMD.js operations could be used. For that reason, JavaScript guarantees that every valid use will work, but does not guarantee the performance. The speed will depend on the implementation of both the JavaScript engine and your own code. In some cases, there could be hidden performance cliffs when using SIMD.js.

This section explains how SIMD.js is implemented in ChakraCore and offers some guidelines for writing SIMD.js code that works with the Full JIT (just-in-time compiler) for optimal performance.

Supported data types in SIMD.js:

  • Float32x4 (32-bit float x 4 data items)
  • Int32x4 (32-bit integer x 4 data items)
  • Int16x8 (16-bit integer x 8 data items)
  • Int8x16 (8-bit integer x 16 data items)
  • Uint32x4 (32-bit unsigned x 4 data items)
  • Uint16x8 (16-bit unsigned x 8 data items)
  • Uint8x16 (8-bit unsigned x 16 data items)
  • Bool32x4 (32-bit bool x 4 data items)
  • Bool16x8 (16-bit bool x 8 data items)
  • Bool8x16 (8-bit bool x 16 data items)

Table 3: SIMD.js supports a range of data types.

Understanding the three versions of SIMD.js in ChakraCore

There are three versions of SIMD.js implemented in ChakraCore:

  • Runtime library. To improve start-up times, ChakraCore starts to interpret the code without compilation using the runtime library. The runtime library is also used by the ChakraCore interpreter and as a fallback when the Full JIT fails to optimize SIMD.js operations, whether that is due to ineffective or ambiguous code, or incompatible hardware. The runtime library is unoptimized, so it guarantees the code will execute but does not have any of the performance gain associated with using SIMD operations. Any performant code should spend as little time as possible in the library. The SIMD.js runtime library may offer a lower performance than code that does not use SIMD.js, so some developers may choose to offer a sequential version of their code for times when SIMD cannot be used.
  • Full JIT. The Full JIT is a type-specializing compiler that attempts to bridge the gap between the high-level data types used in JavaScript and the low-level data types used by the architecture. It will, for example, attempt to make numbers 32-bit integers (rather than JavaScript's default 64-bit floats) where possible to improve vectorization. Where this causes correctness issues, the implementation falls back to the runtime library. Writing efficient SIMD.js code that targets this compiler is the focus of this paper.
  • Asm.js. Asm.js is a strict subset of JavaScript that utilizes JavaScript syntax to distinguish between integer and floating point types and provides other rules that ensure highly effective ahead-of-time compilation, enabling near-native performance. Asm.js code is usually created by compiling from C/C++ using the Emscripten LLVM* compiler. SIMD.js is part of the Asm.js specification and is used either when translating SIMD intrinsics or if the translating compiler does auto-vectorization (something that Emscripten currently supports). Asm.js is not designed to be written by hand, and developers should write their apps in C/C++ and compile to JavaScript to use this implementation. The details of this are outside the scope of this paper. For more information see http://asmjs.org/spec/latest/ and http://kripken.github.io/mloc_emscripten_talk/.

Tip #1: Use SIMD for hot code

Chakra is a multi-tiered execution engine. To achieve a good start-up time, the code runs first through the unoptimized Runtime library. If the code runs a few times, the interpreter will recognize it is repeating code and run it through the Full JIT. Only the Full JIT will carry out SIMD.js optimizations that yield a performance boost. For that reason, it is advisable to only use SIMD.js for code in your applications that is repeated often or consumes a lot of processor time (hot code).

Avoid using SIMD.js in start-up code or loops/functions that don’t run for a long time; otherwise the Full JIT might not kick in, and you may see a degradation of performance compared to sequential code when your SIMD.js code runs in the Runtime library. Some performance testing and code tweaking might be required to achieve performance improvements.

One caveat for this tip: it is important to move the initialization of SIMD constants out into cold sections, to keep SIMD constants from being constructed over and over again.
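
For example, a sketch of that pattern (the names here are illustrative, not from the original text):

// Cold start-up code: the constant vector is constructed exactly once
var SCALE = SIMD.Float32x4.splat(2.0);

// Hot function: A and B are Float32Arrays whose length is a multiple of 4
function scaleArray(A, B, size) {
  for (var i = 0; i < size; i += 4) {
    var x = SIMD.Float32x4.load(A, i);
    SIMD.Float32x4.store(B, i, SIMD.Float32x4.mul(x, SCALE));
  }
}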

Tip #2: Explicitly convert strings to numbers

The SIMD constructors and splat operations are type-generic, so they accept both strings and numbers, as shown in Table 4. However, string arguments will not be optimized by the SIMD APIs. As a result, execution will fall back to the Runtime library, causing a loss of performance.

To guarantee performance, always use arguments of Number or Bool types. If your code depends on non-Number types, introduce the conversion explicitly in your code, as shown in Table 5.

The majority of SIMD operations throw a TypeError on unexpected types, except for a few that expect any JavaScript type and coerce it to a Number or Bool.

/* Set up variables */
var s  = myString; // string
var n  = myVar;    // could be null

/* Set up f4 as vector of s,s,n,n */
var f4 = SIMD.Float32x4(s, s, n, n);

/* After splat, i4 = vector of s,s,s,s */
var i4 = SIMD.Int32x4.splat(s);

Table 4: SIMD constructor with arbitrary types.

var s  = Number(myString); // explicitly converted to number
var n  = Number(myVar); // explicitly converted to number

var f4 = SIMD.Float32x4(s, s, n, n);
var i4 = SIMD.Int32x4.splat(s);

Table 5: SIMD constructors with explicit coercions.

Tip #3: Optimize vector lane access

There are fast instructions that can be used to extract one of the data items (or lanes) from a vector. However, the Full JIT implementation is not able to handle variable index values in these commands, so execution will fall back to the Runtime library in that case. To avoid that situation, use integer literals for lane indices to remove the uncertainty so the compiler can optimize it. Tables 6 and 7 show how code should be rewritten to ensure Full JIT execution.

This guideline also applies to shuffle and swizzle commands, which can be used to rearrange the order of the lanes.

/* function accepts a vector of Int32x4 and puts it in v */
function SumAllLanes(v)
{
  var sum = 0;
  for (var i = 0; i < 4; i++)
  {
    /* Extract lane i from vector v */
    sum += SIMD.Int32x4.extractLane(v, i);
  }
  return sum;
}

Table 6: Extract lane with variable lane index.

function SumAllLanes(v)
{
  var sum = 0;
  /* Use literals to extract each lane for Full JIT optimization */
  sum += SIMD.Int32x4.extractLane(v, 0);
  sum += SIMD.Int32x4.extractLane(v, 1);
  sum += SIMD.Int32x4.extractLane(v, 2);
  sum += SIMD.Int32x4.extractLane(v, 3);

  return sum;
}

Table 7: Extract lane with literals.
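
The same rewriting applies to swizzle; a brief sketch with literal lane indices:

/* Reverse the lanes of a vector using literal indices */
var v = SIMD.Int32x4(1, 2, 3, 4);
var r = SIMD.Int32x4.swizzle(v, 3, 2, 1, 0); // yields Int32x4(4, 3, 2, 1)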

Tip #4: Consistently define variables

If the argument types for SIMD operations do not match or are ambiguous, the Full JIT will decide that a TypeError exception is likely to occur and will not type-specialize, passing execution back to the Runtime library.

Table 8 shows an example of this in practice. In this case, the programmer knows that condition1 == condition2 (if one is true, so is the other, always). The compiler can’t know or infer that, though. In the second if condition, the compiler will be conservative and assume that x could be a Number or Float32x4, because the previous condition creates that possibility. Passing a Number argument to a Float32x4.add operation will cause a TypeError, so the Full JIT will not optimize.

Additionally, y is not defined on every execution path, so in the second if condition, y could be Undefined or Float32x4. Again, that would cause the Full JIT to give up on optimizing.

To avoid these cases, follow these guidelines:

  1. Always define your variables on all execution paths. Never leave SIMD variables undefined. In Table 8, for example, where x is defined as 0, y should also be defined.
  2. Avoid assigning values of different types to a variable. In Table 8, defining x as a Float32x4 and as Number creates uncertainty that prevents optimization.
  3. If (2) is not possible, try to only mix valid SIMD types. For example, using Int32x4 and Float32x4 types would be okay, but don’t mix strings and SIMD types.
  4. If (2) and (3) are not possible, guard any SIMD code using ambiguous variables with a check operation. Table 9 shows an example of how to do that. The check operation will enable the compiler to confirm that a variable is of the expected type, so it can continue optimizing. If a variable is not of the right type, a TypeError is thrown and execution reverts to the Runtime library. Including the check ensures that the Full JIT can attempt optimization. Without it, the uncertainty would result in execution falling immediately back to the Runtime library. There may be a small overhead associated with using a check, but there is a drastic slowdown if code that could be optimized executes in the Runtime library instead.
function Compute()
{
  var x, y;
  if (condition1)
  {
     x = SIMD.Float32x4(1,2,3,4);
     y = SIMD.Float32x4(-1,-2,-3,-4);
  }
  else
  {
     x = 0;
  }

  /* developer knows that condition1 == condition2 always */
  if (condition2)
  {

    /* add vector x to itself. So (1,2,3,4) -> (2,4,6,8) */
    x = SIMD.Float32x4.add(x, x);
    y = SIMD.Float32x4.add(y, y);
  }
}

Table 8: Example of polymorphic variables.

function Compute()
{
  var x, y;
  if (condition1)
  {
     x = SIMD.Float32x4(1,2,3,4);
     y = SIMD.Float32x4(-1,-2,-3,-4);
  }
  else
  {
     x = 0;
  }

  /* developer knows that condition1 == condition2 always */
  if (condition2)
  {
    /* check x is Float32x4 */
    x = SIMD.Float32x4.check(x);

    /* add vector x to itself. So (1,2,3,4) -> (2,4,6,8) */
    x = SIMD.Float32x4.add(x, x);

    y = SIMD.Float32x4.check(y);
    y = SIMD.Float32x4.add(y, y);
  }
}

Table 9: Using check operations to ensure SIMD code optimization.

Conclusion

We hope this paper has helped you produce better and faster code, and has shown how SIMD operations can improve the performance of data-intensive work in JavaScript. Although the performance gain may vary depending on implementation, the coding patterns and SIMD use cases presented here will also apply to other browsers.

References

SIMD.js specification page: http://tc39.github.io/ecmascript_simd/

ChakraCore GitHub* Repo: https://github.com/Microsoft/ChakraCore

Node.js on ChakraCore: https://github.com/nodejs/node-chakracore

Skinning SIMD.js demo: http://huningxin.github.io/skinning_simd/

Intel® XDK FAQs - Cordova


How do I set app orientation?

You set the orientation under the Build Settings section of the Projects tab.

To control the orientation of an iPad you may need to create a simple plugin that contains a single plugin.xml file like the following:

<config-file target="*-Info.plist" parent="UISupportedInterfaceOrientations~ipad" overwrite="true">
  <string></string>
</config-file>
<config-file target="*-Info.plist" parent="UISupportedInterfaceOrientations~ipad" overwrite="true">
  <array>
    <string>UIInterfaceOrientationPortrait</string>
  </array>
</config-file>

Then add the plugin as a local plugin using the plugin manager on the Projects tab.

HINT: To import the plugin.xml file you created above, you must select the folder that contains the plugin.xml file; you cannot select the plugin.xml file itself using the import dialog, because a typical plugin consists of many files, not just a single plugin.xml. The plugin you created based on the instructions above requires only a single file; it is an atypical plugin.

Alternatively, you can use this plugin: https://github.com/yoik/cordova-yoik-screenorientation. Import it as a third-party Cordova* plugin using the plugin manager with the following information:

  • cordova-plugin-screen-orientation
  • specify a version (e.g. 1.4.0) or leave blank for the "latest" version

Or, you can reference it directly from its GitHub repo: https://github.com/yoik/cordova-yoik-screenorientation.

To use the screen orientation plugin referenced above, you must add some JavaScript code to your app to call the additional JavaScript API that the plugin provides. Simply adding the plugin will not automatically fix your orientation; you must add some code to your app that takes care of this. See the plugin's GitHub repo for details on how to use that API.
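
As a hedged sketch of what that code typically looks like (the exact function name depends on the plugin version, so check the plugin's documentation):

document.addEventListener('deviceready', function () {
  // Older plugin releases expose screen.lockOrientation();
  // newer releases use screen.orientation.lock('landscape') instead.
  screen.lockOrientation('landscape');
}, false);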

Is it possible to create a background service using Intel XDK?

Background services require the use of specialized Cordova* plugins that need to be created specifically for your needs. Intel XDK does not support development or debug of plugins, only the use of them as "black boxes" with your HTML5 app. Background services can be accomplished using Java on Android or Objective C on iOS. If a plugin that backgrounds the functions required already exists (for example, this plugin for background geo tracking), Intel XDK's build system will work with it.

How do I send an email from my App?

You can use the Cordova* email plugin, or use a web intent (PhoneGap* and Cordova* 3.x).

How do you create an offline application?

You can use the technique described here by creating an offline.appcache file and then setting it up to store the files that are needed to run the program offline. Note that offline applications need to be built using the Cordova* or Legacy Hybrid build options.
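
A minimal sketch of an offline.appcache manifest (the file names are placeholders for your app's own assets):

CACHE MANIFEST
# Files listed here are cached for offline use
index.html
css/app.css
js/app.js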

How do I work with alarms and timed notifications?

Unfortunately, alarms and notifications are advanced subjects that require a background service. This cannot be implemented in HTML5 and can only be done in native code by using a plugin. Background services require the use of specialized Cordova* plugins that need to be created specifically for your needs. Intel XDK does not support the development or debug of plugins, only the use of them as "black boxes" with your HTML5 app. Background services can be accomplished using Java on Android or Objective C on iOS. If a plugin that backgrounds the functions required already exists (for example, this plugin for background geo tracking) the Intel XDK's build system will work with it.

How do I get a reliable device ID?

You can use the Phonegap/Cordova* Unique Device ID (UUID) plugin for Android*, iOS* and Windows* Phone 8.

How do I implement In-App purchasing in my app?

There is a Cordova* plugin for this. A tutorial on its implementation can be found here. There is also a sample in Intel XDK called 'In App Purchase' which can be downloaded here.

How do I install custom fonts on devices?

Fonts can be treated as an asset included with your app, private to the app and not shared with other apps on the device, just like images and CSS files. (It is possible to share some files between apps using, for example, the SD card space on an Android* device.) If you include the font files as assets in your application, there is no download time to consider; they are part of your app and already exist on the device after installation.
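
For example, a bundled font can be referenced from your CSS with @font-face (the font family and file path below are placeholders):

@font-face {
  font-family: 'MyAppFont';
  src: url('fonts/MyAppFont.ttf') format('truetype'); /* asset bundled with the app */
}
body { font-family: 'MyAppFont', sans-serif; }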

How do I access the device's file storage?

You can use HTML5 local storage, and this is a good article to get started with. Alternatively, there is a Cordova* file plugin for that.
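
A minimal sketch of HTML5 local storage (the key name is illustrative):

// Persist a small piece of data across app launches
localStorage.setItem('lastProduct', 'Coke');
var last = localStorage.getItem('lastProduct'); // returns the string 'Coke'
localStorage.removeItem('lastProduct');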

Why aren't AppMobi* push notification services working?

This seems to be an issue on AppMobi's end and can only be addressed by them. PushMobi is only available in the "legacy" container. AppMobi* has not developed a Cordova* plugin, so it cannot be used in the Cordova* build containers. Thus, it is not available with the default build system. We recommend that you consider using the Cordova* push notification plugin instead.

How do I configure an app to run as a service when it is closed?

If you want a service to run in the background you'll have to write a service, either by creating a custom plugin or writing a separate service using standard Android* development tools. The Cordova* system does not facilitate writing services.

How do I dynamically play videos in my app?

  1. Download the Javascript and CSS files from https://github.com/videojs and include them in your project file.
  2. Add references to them into your index.html file.
  3. Add a panel 'main1' that will be playing the video. This panel will be launched when the user clicks on the video in the main panel.

     
    <div class="panel" id="main1" data-appbuilder-object="panel" style="">
      <video id="example_video_1" class="video-js vjs-default-skin" controls="controls" preload="auto" width="200" poster="camera.png" data-setup="{}">
        <source src="JAIL.mp4" type="video/mp4">
        <p class="vjs-no-js">To view this video please enable JavaScript*, and consider upgrading to a web browser that <a href="http://videojs.com/html5-video-support/" target="_blank">supports HTML5 video</a></p>
      </video>
      <a onclick="runVid3()" href="#" class="button" data-appbuilder-object="button">Back</a>
    </div>
  4. When the user clicks on the video, the click event sets the 'src' attribute of the video element to what the user wants to watch.

     
function runVid2() {
    // Point the video element at the file the user selected, then open
    // the player panel.
    document.getElementsByTagName("video")[0].setAttribute("src", "appdes.mp4");
    $.ui.loadContent("#main1", true, false, "pop");
}
  5. The 'main1' panel opens waiting for the user to click the play button.

NOTE: The video does not play in the emulator and so you will have to test using a real device. The user also has to stop the video using the video controls. Clicking on the back button results in the video playing in the background.

How do I design my Cordova* built Android* app for tablets?

This page lists a set of guidelines to follow to make your app of tablet quality. If your app fulfills the criteria for tablet app quality, it can be featured in Google* Play's "Designed for tablets" section.

How do I resolve icon related issues with Cordova* CLI build system?

Ensure icon sizes are properly specified in the intelxdk.config.additions.xml file. For example, if you are targeting iOS* 6, you need to manually specify the icon sizes that iOS* 6 uses.

<icon platform="ios" src="images/ios/72x72.icon.png" width="72" height="72" />
<icon platform="ios" src="images/ios/57x57.icon.png" width="57" height="57" />

The build system does not add these icons automatically, so you have to include them in the additions file.

For more information on adding build options using intelxdk.config.additions.xml, visit: /en-us/html5/articles/adding-special-build-options-to-your-xdk-cordova-app-with-the-intelxdk-config-additions-xml-file

Is there a plugin I can use in my App to share content on social media?

Yes, you can use the PhoneGap Social Sharing plugin for Android*, iOS* and Windows* Phone.
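As a rough sketch, a share call with that plugin typically looks like the following (the namespace and argument order reflect common versions of the plugin, and the values are placeholders; check the plugin's README for the exact API):

window.plugins.socialsharing.share(
    "Check out this app!",        // message text
    null,                         // subject (optional)
    null,                         // image or file (optional)
    "https://example.com");       // link (optional)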

Iframe does not load in my app. Is there an alternative?

Yes, you can use the inAppBrowser plugin instead.

Why are intel.xdk.istablet and intel.xdk.isphone not working?

Those properties are quite old and are based on the legacy AppMobi* system. An alternative is to detect the viewport size instead. You can get the user's screen size using the screen.width and screen.height properties (refer to this article for more information) and control the actual view of the webview by using the viewport meta tag (this page has several examples). You can also look through this forum thread for a detailed discussion of the topic.
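For example, a rough viewport-based check (the 600-pixel breakpoint is an assumption; tune it for the devices you target):

function isTablet() {
    // Most phones have a short screen side below roughly 600 CSS pixels.
    var shortSide = Math.min(screen.width, screen.height);
    return shortSide >= 600;
}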

How do I enable security in my app?

We recommend using the App Security API, a collection of JavaScript APIs for hybrid HTML5 application developers. It enables developers, even those who are not security experts, to take advantage of the security properties and capabilities supported by the platform. The API collection is available as a Cordova plugin (JavaScript API and middleware), supported on Windows*, Android*, and iOS*.
For more details please visit: https://software.intel.com/en-us/app-security-api.

For enabling it, please select the App Security plugin on the plugins list of the Project tab and build your app as a Cordova Hybrid app. After adding the plugin, you can start using it simply by calling its API. For more details about how to get started with the App Security API plugin, please see the relevant sample app articles at: https://software.intel.com/en-us/xdk/article/my-private-photos-sample and https://software.intel.com/en-us/xdk/article/my-private-notes-sample.

Why does my build fail with Admob plugins? Is there an alternative?

Intel XDK does not support the library project that was newly introduced in the com.google.playservices@21.0.0 plugin. Admob plugins depend on "com.google.playservices", which adds the Google* Play Services JAR to the project. Version 19.0.0 of "com.google.playservices" is a simple JAR file that works quite well, but version 21.0.0 uses a new feature that pulls in a whole library project. That works when built locally with the Cordova CLI, but fails when using the Intel XDK.

To stay compatible with the Intel XDK, change the Admob plugin's dependency to "com.google.playservices@19.0.0".

Why does the intel.xdk.camera plugin fail? Is there an alternative?

There seem to be some general issues with the camera plugin on iOS*. An alternative is to use the Cordova* camera plugin instead, and change the version to 0.3.3.

How do I resolve Geolocation issues with Cordova?

Give this app a try; it contains lots of useful comments and console log messages. However, use the Cordova 0.3.10 version of the geo plugin instead of the Intel XDK geo plugin. The Intel XDK buttons in the sample app will not work in a built app because the Intel XDK geo plugin is not included; they will, however, partially work in the Emulator and the Debug tab. If you test on a real device without the Intel XDK geo plugin selected, you should be able to see what is and is not working on your device. Note that the Intel XDK geo plugin cannot be used in the same build as the Cordova geo plugin. Do not use the Intel XDK geo plugin, as it will be discontinued.

Geo fine might not work because of the following reasons:

  1. Your device does not have a GPS chip
  2. It is taking a long time to get a GPS lock (if you are indoors)
  3. The GPS on your device has been disabled in the settings

Geo coarse is the safest bet to quickly get an initial reading. It produces a reading based on a variety of inputs; it is usually not as accurate as geo fine, but it is generally accurate enough to know what town you are in and your approximate location within that town. Geo coarse also primes the geo cache so there is something to read when you request a geo fine reading. Ensure your code can handle situations where you get no geo data at all: there is no guarantee you will get a geo fine reading in a reasonable period of time, or ever, because success with geo fine depends on many parameters that are typically outside of your control.
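For illustration, a minimal sketch using the standard navigator.geolocation API (the geo plugins expose this same interface); enableHighAccuracy: false requests a coarse reading, true requests a fine one:

var options = { enableHighAccuracy: false, timeout: 10000, maximumAge: 60000 };
navigator.geolocation.getCurrentPosition(
    function (position) {
        console.log("lat: " + position.coords.latitude +
                    ", lon: " + position.coords.longitude);
    },
    function (error) {
        // Always handle the failure case; a fix is never guaranteed.
        console.error("geo error (" + error.code + "): " + error.message);
    },
    options);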

Is there an equivalent Cordova* plugin for intel.xdk.player.playPodcast? If so, how can I use it?

Yes, there is and you can find the one that best fits the bill from the Cordova* plugin registry.

To make this work you will need to do the following:

  • Detect your platform (you can use uaparser.js or you can do it yourself by inspecting the user agent string)
  • Include the plugin only on the Android* platform and use <video> on iOS*.
  • Create conditional code to do what is appropriate for the platform detected

You can force a plugin to be part of an Android* build by adding it manually into the additions file. To see what the basic directives are to include a plugin manually:

  1. Include it using the "import plugin" dialog, perform a build and inspect the resulting intelxdk.config.android.xml file.
  2. Then remove it from your Project tab settings, copy the directive from that config file and paste it into the intelxdk.config.additions.xml file. Prefix that directive with <!-- +Android* -->.

More information is available here and this is what an additions file can look like:

<preference name="debuggable" value="true" />
<preference name="StatusBarOverlaysWebView" value="false" />
<preference name="StatusBarBackgroundColor" value="#000000" />
<preference name="StatusBarStyle" value="lightcontent" />
<!-- -iOS* --><intelxdk:plugin intelxdk:value="nl.nielsad.cordova.wifiscanner" />
<!-- -Windows*8 --><intelxdk:plugin intelxdk:value="nl.nielsad.cordova.wifiscanner" />
<!-- -Windows*8 --><intelxdk:plugin intelxdk:value="org.apache.cordova.statusbar" />
<!-- -Windows*8 --><intelxdk:plugin intelxdk:value="https://github.com/EddyVerbruggen/Flashlight-PhoneGap-Plugin" />

This sample forces plugins that were included with the "import plugin" dialog to be excluded from the platforms indicated in the prefix comments. Using conditional code plus one or more appropriate plugins, you can include a given plugin only in the Android* build.
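As a rough sketch of the conditional-code approach (MyPodcastPlugin is a hypothetical name; substitute the global object exposed by the podcast plugin you selected):

var isAndroid = /android/i.test(navigator.userAgent);

function playPodcast(url) {
    if (isAndroid && window.MyPodcastPlugin) {
        MyPodcastPlugin.play(url);              // hypothetical plugin call on Android
    } else {
        var v = document.getElementsByTagName("video")[0];
        v.setAttribute("src", url);             // fall back to the media element on iOS
        v.play();
    }
}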

How do I display a webpage in my app without leaving my app?

The most effective way to do so is by using inAppBrowser.
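With the inAppBrowser plugin added to your project, a minimal sketch looks like this (the URL is a placeholder):

var ref = window.open("https://example.com", "_blank", "location=yes");
// "_blank" opens the page in the in-app browser view on top of your app;
// call ref.close() later to dismiss it and return to your page.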

Does Cordova* media have callbacks in the emulator?

While Cordova* media objects have proper callbacks when using the Debug tab on a device, the emulator does not report state changes back to the Media object; this functionality has not been implemented yet. Under emulation, the Media object is implemented by creating an <audio> tag in the program under test. The <audio> tag emits a number of events, and these could be captured and turned into status callbacks on the Media object.

Why does the Cordova version number not match the Projects tab's Build Settings CLI version number, the Emulate tab, App Preview and my built app?

This is due to the difficulty in keeping different components in sync and is compounded by the version numbering convention that the Cordova project uses to distinguish build tool versions (the CLI version) from platform versions (the Cordova target-specific framework version) and plugin versions.

The CLI version you specify in the Projects tab's Build Settings section is the "Cordova CLI" version that the build system uses to build your app. Each version of the Cordova CLI tools come with a set of "pinned" Cordova platform framework versions, which are tied to the target platform.

NOTE: the specific Cordova platform framework versions shown below are subject to change without notice.

Our Cordova CLI 4.1.2 build system was "pinned" to: 

  • cordova-android@3.6.4 (Android Cordova platform version 3.6.4)
  • cordova-ios@3.7.0 (iOS Cordova platform version 3.7.0)
  • cordova-windows@3.7.0 (Cordova Windows platform version 3.7.0)

Our Cordova CLI 5.1.1 build system is "pinned" to:

  • cordova-android@4.1.1 (as of March 23, 2016)
  • cordova-ios@3.8.0
  • cordova-windows@4.0.0

Our Cordova CLI 5.4.1 build system is "pinned" to: 

  • cordova-android@5.0.0
  • cordova-ios@4.0.1
  • cordova-windows@4.3.1

Our Cordova CLI 6.2.0 build system is "pinned" to: 

  • cordova-android@5.1.1
  • cordova-ios@4.1.1
  • cordova-windows@4.3.2

Our CLI 6.2.0 build system is nearly identical to a standard Cordova CLI 6.2.0 installation. A standard 6.2.0 installation differs slightly from our build system because it specifies the cordova-ios@4.1.0 and cordova-windows@4.3.1 platform versions. There are no differences in the cordova-android platform versions.

Our CLI 5.4.1 build system really should be called "CLI 5.4.1+" because the platform versions it uses are closer to the "pinned" versions in the Cordova CLI 6.0.0 release than those "pinned" in the original CLI 5.4.1 release.

Our CLI 5.1.1 build system was deprecated as of August 2, 2016, and will be retired with an upcoming fall 2016 release of the Intel XDK. It is highly recommended that you upgrade your apps to build with Cordova CLI 6.2.0 as soon as possible.

The Cordova platform framework version you get when you build an app does not equal the CLI version number in the Build Settings section of the Projects tab; it equals the Cordova platform framework version that is "pinned" to our build system's CLI version (see the list of pinned versions, above).

Technically, the target-specific Cordova platform frameworks can be updated [independently] for a given version of CLI tools. In some cases, our build system may use a Cordova platform version that is later than the version that was "pinned" to that version of the CLI when it was originally released by the Cordova project (that is, the Cordova platform versions originally specified by the Cordova CLI x.y.z links above).

You may see Cordova platform version differences in the Simulate tab, App Preview and your built app due to:

  • The Simulate tab uses one specific Cordova framework version. We try to make sure that the version of the Cordova platform it uses closely matches the current default Intel XDK version of Cordova CLI.

  • App Preview is released independently of the Intel XDK and, therefore, may use a different platform version than what you will see reported by the Simulate tab or your built app. Again, we try to release App Preview so it matches the version of the Cordova framework that is considered to be the default version for the Intel XDK at the time App Preview is released; but since the various tools are not always released in perfect sync, that is not always possible.

  • Your app is built with a "pinned" Cordova platform version, which is determined by the Cordova CLI version you specified in the Projects tab's Build Settings section. There are always at least two different CLI versions available in the Intel XDK build system.

  • For those versions of Crosswalk that were built with the Intel XDK CLI 4.1.2 build system, the cordova-android framework version was determined by the Crosswalk project, not by the Intel XDK build system.

  • When building an Android-Crosswalk app with Intel XDK CLI 5.1.1 and later, the cordova-android framework version equals the "pinned" cordova-android platform version for that CLI version (see lists above).

Do these Cordova platform framework version numbers matter? Occasionally, yes, but normally, not that much. There are some issues that come up that are related to the Cordova platform version, but they tend to be rare. The majority of the bugs and compatibility issues you will experience in your app have more to do with the versions and mix of Cordova plugins you choose to use and the HTML5 webview runtime on your test devices. See When is an HTML5 Web App a WebView App? for more details about what a webview is and how the webview affects your app.

The "default version" of CLI that the Intel XDK build system uses is rarely the most recent version of the Cordova CLI tools distributed by the Cordova project. There is always a lag between Cordova project releases and our ability to incorporate those releases into our build system and other Intel XDK components. In addition, we are not able to provide every CLI release that is made available by the Cordova project.

How do I add a third party plugin?

Please follow the instructions on this doc page to add a third-party plugin: Adding Plugins to Your Intel® XDK Cordova* App. A plugin is not included as part of your app until you add it this way; check the build log to confirm that it was successfully added to your build.

How do I make an AJAX call that works in my browser work in my app?

Please follow the instructions in this article: Cordova CLI 4.1.2 Domain Whitelisting with Intel XDK for AJAX and Launching External Apps.

I get an "intel is not defined" error, but my app works in Test tab, App Preview and Debug tab. What's wrong?

When your app runs in the Test tab, App Preview or the Debug tab the intel.xdk and core Cordova functions are automatically included for easy debug. That is, the plugins required to implement those APIs on a real device are already included in the corresponding debug modules.

When you build your app you must include the plugins that correspond to the APIs you are using in your build settings. This means you must enable the Cordova and/or XDK plugins that correspond to those APIs. Go to the Projects tab and ensure that the plugins you need are selected in your project's plugin settings. See Adding Plugins to Your Intel® XDK Cordova* App for additional details.

How do I target my app for use only on an iPad or only on an iPhone?

There is an undocumented feature in Cordova that should help you (the Cordova project provided this feature but failed to document it for the rest of the world). If you use the appropriate preference in the intelxdk.config.additions.xml file you should get what you need:

<preference name="target-device" value="tablet" />     <!-- Installs on iPad, not on iPhone -->
<preference name="target-device" value="handset" />    <!-- Installs on iPhone; iPad installs in a zoomed view and doesn't fill the entire screen -->
<preference name="target-device" value="universal" />  <!-- Installs on iPhone and iPad correctly -->

If you need info regarding the additions.xml file, see the blank template or this doc file: Adding Intel® XDK Cordova Build Options Using the Additions File.

Why does my build fail when I try to use the Cordova* Capture Plugin?

The Cordova* Capture plugin has a dependency on the File plugin. Please make sure you have both plugins selected on the Projects tab.

How can I pinch and zoom in my Cordova* app?

For now, using the viewport meta tag is the only option to enable pinch and zoom; however, its behavior is unpredictable in different webviews. Testing a few sample apps has led us to believe that this feature works better on Crosswalk for Android. You can test this by building the Hello Cordova sample app for both Android and Crosswalk for Android; pinch and zoom will work only on the latter, even though both use:

<meta name="viewport" content="width=device-width, initial-scale=1, user-scalable=yes, minimum-scale=1, maximum-scale=2">

Please visit the following pages to get a better understanding of when to build with Crosswalk for Android:

http://blogs.intel.com/evangelists/2014/09/02/html5-web-app-webview-app/

https://software.intel.com/en-us/xdk/docs/why-use-crosswalk-for-android-builds

Another, device-oriented, approach is to enable pinch and zoom by turning on Android* accessibility gestures.

How do I make my Android application use the fullscreen so that the status and navigation bars disappear?

The Cordova* fullscreen plugin can be used to do this. For example, in your initialization code, call AndroidFullScreen.immersiveMode(null, null);.

You can get this third-party plugin here: https://github.com/mesmotronic/cordova-fullscreen-plugin
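A minimal sketch of that initialization, assuming the plugin above has been added to your project:

document.addEventListener("deviceready", function () {
    // Hide both the status and navigation bars (Android immersive mode).
    AndroidFullScreen.immersiveMode(null, null);
}, false);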

How do I add XXHDPI and XXXHDPI icons to my Android or Crosswalk application?

The Cordova CLI 4.1.2 build system will support this feature, but our 4.1.2 build system (and the 2170 version of the Intel XDK) does not handle the XX and XXX sizes directly. Use this workaround until these sizes are supported directly:

  • copy your XX and XXX icons into your source directory (usually named www)
  • add the following lines to your intelxdk.config.additions.xml file
  • see this Cordova doc page for some more details

Assuming your icons and splash screen images are stored in the "pkg" directory inside your source directory (your source directory is usually named www), add lines similar to these into your intelxdk.config.additions.xml file (the precise names of your png files may differ from what is shown here):

<!-- for adding xxhdpi and xxxhdpi icons on Android -->
<icon platform="android" src="pkg/xxhdpi.png" density="xxhdpi" />
<icon platform="android" src="pkg/xxxhdpi.png" density="xxxhdpi" />
<splash platform="android" src="pkg/splash-port-xhdpi.png" density="port-xhdpi"/>
<splash platform="android" src="pkg/splash-land-xhdpi.png" density="land-xhdpi"/>

The precise names of your PNG files are not important, but the "density" designations are very important, and the respective resolutions of your PNG files must be consistent with Android requirements. The density parameters specify the respective "res-drawable-*dpi" directories that will be created in your APK for use by the Android system. NOTE: the splash screen lines are shown for reference only; you do not need to use this technique for splash screens.

You can continue to insert the other icons into your app using the Intel XDK Projects tab.

Which plugin is the best to use with my app?

We are not able to track all the plugins out there, so we generally cannot give you a "this is better than that" evaluation of plugins. Check the Cordova plugin registry to see which plugins are most popular and check Stack Overflow to see which are best supported; also, check the individual plugin repos to see how well the plugin is supported and how frequently it is updated. Since the Cordova platform and the mobile platforms continue to evolve, those that are well-supported are likely to be those that have good activity in their repo.

Keep in mind that the XDK builds Cordova apps, so whichever plugins you find being supported and working best with other Cordova (or PhoneGap) apps would likely be your "best" choice.

See Adding Plugins to Your Intel® XDK Cordova* App for instructions on how to include third-party plugins with your app.

What are the rules for my App ID?

The precise App ID naming rules vary as a function of the target platform (e.g., Android, iOS, Windows). Unfortunately, the App ID naming rules are further restricted by the Apache Cordova project and sometimes change with updates to the Cordova project. The Cordova project is the underlying technology that your Intel XDK app is based upon; when you build an Intel XDK app you are building an Apache Cordova app.

CLI 5.1.1 has more restrictive App ID requirements than previous versions of Apache Cordova (the CLI version refers to Apache Cordova CLI release versions). In this case, the Apache Cordova project decided to set limits on acceptable App IDs to equal the minimum set for all platforms. We hope to eliminate this restriction in a future release of the build system, but for now (as of the 2496 release of the Intel XDK), the current requirements for CLI 5.1.1 are:

  • Each section of the App ID must start with a letter
  • Each section can only consist of letters, numbers, and the underscore character
  • Each section cannot be a Java keyword
  • The App ID must consist of at least 2 sections (each section separated by a period ".").
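For example, under these rules (hypothetical IDs):

  • com.mycompany.myapp (valid)
  • com.mycompany.my_app2 (valid: letters, numbers, and underscores)
  • com.1company.myapp (invalid: a section starts with a number)
  • myapp (invalid: only one section)
  • com.new.myapp (invalid: "new" is a Java keyword)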

 

iOS /usr/bin/codesign error: certificate issue for iOS app?

If you are getting an iOS build fail message in your detailed build log that includes a reference to a signing identity error you probably have a bad or inconsistent provisioning file. The "no identity found" message in the build log excerpt, below, means that the provisioning profile does not match the distribution certificate that was uploaded with your application during the build phase.

Signing Identity:     "iPhone Distribution: XXXXXXXXXX LTD (Z2xxxxxx45)"
Provisioning Profile: "MyProvisioningFile"
                      (b5xxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxe1)

    /usr/bin/codesign --force --sign 9AxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxA6 --resource-rules=.../MyApp/platforms/ios/build/device/MyApp.app/ResourceRules.plist --entitlements .../MyApp/platforms/ios/build/MyApp.build/Release-iphoneos/MyApp.build/MyApp.app.xcent .../MyApp/platforms/ios/build/device/MyApp.app
9AxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxA6: no identity found
Command /usr/bin/codesign failed with exit code 1

** BUILD FAILED **


The following build commands failed:
    CodeSign build/device/MyApp.app
(1 failure)

The excerpt shown above will appear near the very end of the detailed build log. The unique number patterns in this example have been replaced with "xxxx" strings for security reasons. Your actual build log will contain hexadecimal strings.

iOS Code Sign error: bundle ID does not match app ID?

If you are getting an iOS build fail message in your detailed build log that includes a reference to a "Code Sign error" you may have a bad or inconsistent provisioning file. The "Code Sign" message in the build log excerpt, below, means that the bundle ID you specified in your Apple provisioning profile does not match the app ID you provided to the Intel XDK to upload with your application during the build phase.

Code Sign error: Provisioning profile does not match bundle identifier: The provisioning profile specified in your build settings (MyBuildSettings) has an AppID of my.app.id which does not match your bundle identifier my.bundleidentifier.
CodeSign error: code signing is required for product type 'Application' in SDK 'iOS 8.0'

** BUILD FAILED **

The following build commands failed:
    Check dependencies
(1 failure)
Error code 65 for command: xcodebuild with args: -xcconfig,...

The message above translates into: "the bundle ID you entered in the project settings of the XDK does not match the bundle ID (app ID) that you created on Apple's developer portal and then used to create a provisioning profile."

iOS build error?

If your iOS build fails with Error code 65 and xcodebuild in the error log, most likely there are issues with your certificate and provisioning profile. Sometimes Xcode gives a specific error, such as "Provisioning profile does not match bundle identifier," and other times something like "Code Sign error: No codesigning identities found: No code signing identities". The root of these issues is not providing the correct certificate (P12 file) and/or provisioning profile, or a mismatch between the P12 file and the provisioning profile. Make sure your P12 and provisioning profile are correct: the provisioning profile has to be generated using the certificate you used to create the P12 file. Also, the app ID you provide in the XDK build settings has to match the app ID created on the Apple Developer portal, and the same app ID has to be used when creating the provisioning profile.

Please follow these steps to generate the P12 file.

  1. Create a .csr file from Intel XDK (do not close the dialog box to upload .cer file)
  2. Click on the link Apple Developer Portal from the dialog box (do not close the dialog box in XDK)
  3. Upload .csr on Apple Developer Portal
  4. Generate certificate on Apple developer portal
  5. Download .cer file from the Developer portal
  6. Come back to the XDK dialog box where you left off in step 1 and press Next. Select the .cer file that you got in step 5 and generate the .P12 file
  7. Create an appID on Apple Developer Portal
  8. Generate a Provisioning Profile on Apple Developer Portal using the certificate you generated in step 4 and appID created in step 7
  9. Provide the same appID (step 7), P12 (step 6) and Provisioning profile (step 8) in Intel XDK Build Settings 

A few things to check before you build:

  1. Make sure your certificate has not expired
  2. The app ID you created on the Apple Developer portal matches the app ID you provided in the XDK build settings
  3. You are using the provisioning profile that is associated with the certificate you are using to build the app
  4. Apple allows only three active certificates; if you need to create a new one, revoke one of the older certificates first

This App Certificate Management video shows how to create a P12 and a provisioning profile; the P12 creation part starts at 16:45. Please follow the process for creating a P12 and generating a provisioning profile as shown in the video, or follow this Certificate Management document.

What are plugin variables used for? Why do I need to supply plugin variables?

Some plugins require details that are specific to your app or your developer account, for example, to authorize your app as one that belongs to you, the developer, so services can be properly routed to the service provider. The precise reasons depend on the specific plugin and its function.

What happened to the Intel XDK "legacy" build options?

On December 14, 2015 the Intel XDK legacy build options were retired and are no longer available to build apps. The legacy build option is based on three year old technology that predates the current Cordova project. All Intel XDK development efforts for the past two years have been directed at building standard Apache Cordova apps.

Many of the intel.xdk legacy APIs that were supported by the legacy build options have been migrated to standard Apache Cordova plugins and published as open source plugins. The API details for these plugins are available in the README.md files in the respective 01.org GitHub repos. Additional details regarding the new Cordova implementations of the intel.xdk legacy APIs are available in the doc page titled Intel XDK Legacy APIs.

Standard Cordova builds do not require the use of the "intelxdk.js" and "xhr.js" phantom scripts. Only the "cordova.js" phantom script is required to successfully build Cordova apps. If you have been including "intelxdk.js" and "xhr.js" in your Cordova builds, they have been quietly ignored. You should remove references to these files from your "index.html" file; leaving them in will do no harm, but it results in a warning that the respective script file cannot be found at runtime.

The Emulate tab will continue to support some legacy intel.xdk APIs that are NOT supported in the Cordova builds (only those intel.xdk APIs that are supported by the open source plugins are available to a Cordova built app, and only if you have included the respective intel.xdk plugins). This Emulate tab discrepancy will be addressed in a future release of the Intel XDK.

More information can be found in this forum post > https://software.intel.com/en-us/forums/intel-xdk/topic/601436.

Which build files do I submit to the Windows Store and which do I use for testing my app on a device?

There are two things you can do with the build files generated by the Intel XDK Windows build options: side-load your app onto a real device (for testing) or publish your app in the Windows Store (for distribution). Microsoft has changed the files you use for these purposes with each release of a new platform. As of December 2015, the packages you might see in a build, and their uses, are:

  • appx works best for side-loading and can also be used to publish your app.
  • appxupload is preferred for publishing your app; it will not work for side-loading.
  • appxbundle works for both publishing and side-loading, but is not preferred.
  • xap is for legacy Windows Phone; it works for both publishing and side-loading.

In essence: XAP (WP7) was superseded by APPXBUNDLE (Win8 and WP8.0), which was superseded by APPX (Win8/WP8.1/UAP), which has been supplemented with APPXUPLOAD. APPX and APPXUPLOAD are the preferred formats. For more information regarding these file formats, see Upload app packages on the Microsoft developer site.

Side-loading a Windows Phone app onto a real device, over USB, requires a Windows 8+ development system (see Side-Loading Windows* Phone Apps for complete instructions). If you do not have a physical Windows development machine, you can use a virtual Windows machine or use the Windows Store Beta testing and targeted distribution technique to get your app onto real test devices.

Side-loading a Windows tablet app onto a Windows 8 or Windows 10 laptop or tablet is simpler. Extract the contents of the ZIP file that you downloaded from the Intel XDK build system, open the "*_Test" folder inside the extracted folder, and run the PowerShell script (ps1 file) contained within that folder on the test machine (the machine that will run your app). The ps1 script file may need to request a "developer certificate" from Microsoft before it will install your test app onto your Windows test system, so your test machine may require a network connection to successfully side-load your Windows app.

The side-loading process may not overwrite an existing side-loaded app with the same ID. To be sure your test app side-loads properly, it is best to uninstall the old version of your app before side-loading a new version onto your test system.

How do I implement local storage or SQL in my app?

See this summary of local storage options for Cordova apps written by Josh Morony, A Summary of Local Storage Options for PhoneGap Applications.

How do I prevent my app from auto-completing passwords?

Use the Ionic Keyboard plugin and set the spellcheck attribute to false.
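As a rough sketch, you can also set the relevant attributes directly on your fields at startup (the selector is an assumption about your markup; adjust it to match your form):

var fields = document.querySelectorAll("input[type=password]");
for (var i = 0; i < fields.length; i++) {
    fields[i].setAttribute("spellcheck", "false");   // the setting recommended above
    fields[i].setAttribute("autocomplete", "off");   // standard HTML companion attribute
}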

Why does my PHP script not run in my Intel XDK Cordova app?

Your XDK app is not a page on a web server; you cannot use dynamic web server techniques because there is no web server associated with your app to which you can pass off PHP scripts and similar actions. When you build an Intel XDK app you are building a standalone Cordova client web app, not a dynamic server web app. You need to create a RESTful API on your server that you can then call from your client (the Intel XDK Cordova app) and pass and return data between the client and server through that RESTful API (usually in the form of a JSON payload).
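For illustration, a minimal sketch of the client side of that pattern (the endpoint URL and JSON fields are hypothetical; substitute your own server's RESTful API):

var xhr = new XMLHttpRequest();
xhr.open("GET", "https://api.example.com/products?category=toys");
xhr.onload = function () {
    if (xhr.status === 200) {
        var products = JSON.parse(xhr.responseText);   // server returns a JSON payload
        console.log("Received " + products.length + " products");
    } else {
        console.error("Server returned status " + xhr.status);
    }
};
xhr.send();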

Please see this StackOverflow post and this article by Ray Camden, a longtime developer of the Cordova development environment and Cordova apps, for some useful background.

Following is a lightly edited recommendation from an Intel XDK user:

I came from php+mysql web development. My first attempt at an Intel XDK Cordova app was to create a set of php files to query the database and give me the JSON. It was a simple job, but totally insecure.

Then I found dreamfactory.com, an open source software that automatically creates the REST API functions from several databases, SQL and NoSQL. I use it a lot. You can start with a free account to develop and test and then install it in your server. Another possibility is phprestsql.sourceforge.net, this is a library that does what I tried to develop by myself. I did not try it, but perhaps it will help you.

And finally, I'm using PouchDB and CouchDB, "a database for the web." It is not SQL, but it is very useful and easy if you need to develop a mobile app with only a few tables. It will also work with a lot of tables, but for a simple database it is an easy place to start.

I strongly recommend that you start to learn these new ways to interact with databases; you will need to invest some time, but it is the way to go. Do not try to use MySQL and PHP the old-fashioned way. You can get it to work, but at some point you may get stuck.

Why doesn’t my Cocos2D game work on iOS?

This is an issue with Cocos2D and is not a reflection of our build system. As an interim solution, we have modified the CCBoot.js file for compatibility with iOS and App Preview. You can view an example of this modification in this CCBoot.js file from the Cocos2d-js 3.1 Scene GUI sample. The update has been applied to all cocos2D templates and samples that ship with Intel XDK. 

The fix involves two line changes (the generic Cocos2D fix) and one additional line (to make it work in App Preview on iOS devices):

Generic cocos2D fix -

1. Inside the loadTxt function, xhr.onload should be defined as

xhr.onload = function () {
    if (xhr.readyState == 4)
        xhr.responseText != "" ? cb(null, xhr.responseText) : cb(errInfo);
};

instead of

xhr.onload = function () {
    if (xhr.readyState == 4)
        xhr.status == 200 ? cb(null, xhr.responseText) : cb(errInfo);
};

  2. The condition inside the _loadTxtSync function should be changed to 

if (!xhr.readyState == 4 || (xhr.status != 200 || xhr.responseText != "")) {

instead of 

if (!xhr.readyState == 4 || xhr.status != 200) {

 

App Preview fix -

Add this line inside of _loadTxtSync, after the xhr.open call:

xhr.setRequestHeader("iap_isSyncXHR", "true");

How do I change the alias of my Intel XDK Android keystore certificate?

You cannot change the alias name of your Android keystore within the Intel XDK, but you can download the existing keystore, change the alias on that keystore and upload a new copy of the same keystore with a new alias.

Use the following procedure:

  • Download the converted legacy keystore from the Intel XDK (the one with the bad alias).

  • Locate the keytool app on your system (this assumes that you have a Java runtime installed on your system). On Windows, this is likely to be located at %ProgramFiles%\Java\jre8\bin (you might have to adjust the value of jre8 in the path to match the version of Java installed on your system). On Mac and Linux systems it is probably located in your path (in /usr/bin).

  • Change the alias of the keystore using this command (see the keytool -changealias -help command for additional details):

keytool -changealias -alias "existing-alias" -destalias "new-alias" -keypass keypass -keystore /path/to/keystore -storepass storepass
  • Import this new keystore into the Intel XDK using the "Import Existing Keystore" option in the "Developer Certificates" section of the "person icon" located in the upper right corner of the Intel XDK.

What causes "The connection to the server was unsuccessful. (file:///android_asset/www/index.html)" error?

See this forum thread for some help with this issue. This error is most likely due to errors retrieving assets over the network or long delays associated with retrieving those assets.

How do I manually sign my Android or Crosswalk APK file with the Intel XDK?

To sign an app manually, you must build your app by "deselecting" the "Signed" box in the Build Settings section of the Android tab on the Projects tab.

Follow these Android developer instructions to manually sign your app. The instructions assume you have Java installed on your system (for the jarsigner and keytool utilities). You may have to locate and install the zipalign tool separately (it is not part of Java) or download and install Android Studio.

These two sections of the Android developer Signing Your Applications article are also worth reading:

Why should I avoid using the additions.xml file? Why should I use the Plugin Management Tool in the Intel XDK?

Intel XDK (version 2496 and later) includes a Plugin Management Tool that simplifies adding and managing Cordova plugins. We urge all users of existing or upgraded projects to manage their plugins with this tool. If you were using the intelxdk.config.additions.xml file to manage plugins in the past, you should remove those entries and add all plugins with the Plugin Management Tool instead.

Why you should be using the Plugin Management Tool:

  • It can now manage plugins from all sources. Popular plugins have been added to the Featured plugins list. Third-party plugins can be added from the Cordova Plugin Registry, a Git repo, or your file system.

  • Consistency: Unlike previous versions of the Intel XDK, plugins you add are now stored as a part of your project on your development system after they are retrieved by the Intel XDK and copied to your plugins directory. These plugin files are delivered, along with your source code files, to the Intel XDK cloud-based build server. This change ensures greater consistency between builds, because you always build with the plugin version that was retrieved by the Intel XDK into your project. It also provides better documentation of the components that make up your Cordova app, because the plugins are now part of your project directory. This is also more consistent with the way a standard Cordova CLI project works.

  • Convenience: In the past, the only way to add a third-party plugin that required parameters was to include it in the intelxdk.config.additions.xml file, so that the build system would add it to your project. This is no longer recommended. The new Plugin Management Tool automatically parses the plugin.xml file and prompts you, from within the XDK, to supply any plugin variables.

    When a plugin is added via the Plugin Management Tool, a plugin entry is added to the project file and the plugin source is downloaded to the plugins directory, making for a more stable project. After a build, the build system automatically generates config XML files in your project directory that include a complete summary of plugins and variable values.

  • Correctness of Debug Module: Intel XDK now provides remote on-device debugging for projects with third-party plugins by building a custom debug module from your project plugins directory. It does not write to or read from the intelxdk.config.additions.xml file; the only time that file is used is during a build. This means the debug module is not aware of plugins added via the intelxdk.config.additions.xml file, so adding plugins that way should be avoided. Here is a useful article for understanding Intel XDK Build Files.

  • Editing Plugin Sources: There are a few cases where you may want to modify plugin code to fix a bug in a plugin, or add console.log messages to a plugin's sources to help debug your application's interaction with the plugin. To accomplish these goals you can edit the plugin sources in the plugins directory. Your modifications will be uploaded along with your app sources when you build your app using the Intel XDK build server and when a custom debug module is created by the Debug tab.

How do I fix this "unknown error: cannot find plugin.xml" when I try to remove or change a plugin?

Removing or changing a plugin in your project sometimes generates an "unknown error: cannot find plugin.xml" message.

This is not a common problem, but if it does happen it means a file in your plugin directory is probably corrupt (usually one of the JSON files found inside the plugins folder at the root of your project folder).

The simplest fix is to:

  • make a list of ALL of your plugins (especially the plugin ID and version number)
  • exit the Intel XDK
  • delete the entire plugins directory inside your project
  • restart the Intel XDK

The XDK should detect that all of your plugins are missing and attempt to reinstall them. If it does not automatically re-install all or some of your plugins, reinstall them manually from the list you saved in step one.

NOTE: if you re-install your plugins manually, you can use the third-party plugin add feature of the plugin management system and specify the plugin ID to retrieve each plugin from the Cordova plugin registry. If you leave the version number blank, the latest version of the plugin available in the registry will be retrieved by the Intel XDK.

Why do I get a "build failed: the plugin contains gradle scripts" error message?

You will see this error message in your Android build log summary whenever you include a Cordova plugin that includes a gradle script in your project. Gradle scripts add extra Android build instructions that are needed by the plugin.

The current Intel XDK build system does not allow the use of plugins that contain gradle scripts because they present a security risk to the build system and your Intel XDK account. An unscrupulous user could use a gradle-enabled plugin to do harmful things with the build server. We are working on a build system that will ensure the necessary level of security to allow for gradle scripts in plugins, but until that time we cannot support plugins that include gradle scripts.

The error message in your build summary log will state that the plugin contains gradle scripts.

In some cases the plugin gradle script can be removed, but only if you manually modify the plugin to implement whatever the gradle script was doing automatically. In some cases this can be done easily (for example, the gradle script may be building a JAR library file for the plugin), but sometimes the plugin is not easily modified to remove the need for the gradle script. Exactly what needs to be done to the plugin depends on the plugin and the gradle script.

You can find out more about Cordova plugins and gradle scripts by reading this section of the Cordova documentation. In essence, if a Cordova plugin includes a build-extras.gradle file in the plugin's root folder, or if it contains one or more lines similar to the following, inside the plugin.xml file:

<framework src="some.gradle" custom="true" type="gradleReference" />

it means that the plugin contains gradle scripts and will be rejected by the Intel XDK build system.

How does one remove gradle dependencies for plugins that use Google Play Services (esp. push plugins)?

Our Android (and Crosswalk) CLI 5.1.1 and CLI 5.4.1 build systems include a fix for an issue in the standard Cordova build system; the fix allows some Cordova plugins to be used with the Intel XDK build system without their included gradle script.

This fix only works with Cordova plugins that include a gradle script for one and only one purpose: to set the value of applicationId in the Android build project files (such a gradle script copies the value of the App ID from your project's Build Settings, on the Projects tab, into this special project build variable).

Using the phonegap-plugin-push plugin as an example, this Cordova plugin contains a gradle script named push.gradle, which has been added to the plugin and looks like this:

import java.util.regex.Pattern

def doExtractStringFromManifest(name) {
    def manifestFile = file(android.sourceSets.main.manifest.srcFile)
    def pattern = Pattern.compile(name + "=\"(.*?)\"")
    def matcher = pattern.matcher(manifestFile.getText())
    matcher.find()
    return matcher.group(1)
}

android {
    sourceSets {
        main {
            manifest.srcFile 'AndroidManifest.xml'
        }
    }

    defaultConfig {
        applicationId = doExtractStringFromManifest("package")
    }
}

All this gradle script does is insert your app's "package ID" (the "App ID" in your app's Build Settings) into a variable called applicationId for use by the build system. It is needed, in this example, by the Google Play Services library to ensure that calls through the Google Play Services API can be matched to your app. Without the proper App ID, the Google Play Services library cannot distinguish between multiple apps on an end user's device that are using the Google Play Services library.

The phonegap-plugin-push plugin is being used only as an example for this article. Other Cordova plugins exist that can be used by applying the same technique (e.g., the pushwoosh-phonegap-plugin will also work with this technique). It is important that you first determine that only one gradle script is being used by the plugin of interest, and that this one gradle script is used for only one purpose: to set the applicationId variable.

How does this help you and what do you do?

To use a plugin that includes a single gradle script designed to set the applicationId variable with the Intel XDK build system:

  • Download a ZIP of the plugin version you want to use from that plugin's git repo.

    IMPORTANT: be sure to download a released version of the plugin; the "head" of the git repo may be "under construction." Some plugin authors make it easy to identify a specific version, some do not. Be aware and choose carefully when you clone a git repo!

  • Unzip that plugin onto your local hard drive.

  • Remove the <framework> line that references the gradle script from the plugin.xml file.

  • Add the modified plugin into your project as a "local" plugin.

In this example, you will be prompted to define a variable that the plugin also needs. If you know that variable's name (it's called SENDER_ID for this plugin), you can add it in advance using the "+" icon in the plugin variables section and avoid the prompt. If the plugin add was successful, you'll find an entry for the plugin in the Projects tab.

If you are curious, you can inspect the AndroidManifest.xml file that is included inside your built APK file (you'll have to use a tool like apktool to extract and reconstruct it from your APK file). You should see a line whose value matches your App ID; in this example, the App ID was io.cordova.hellocordova.

If you instead see the default App ID for the Google Play Services library, it means something went wrong: that default App ID will cause collisions on end-user devices when multiple apps that use Google Play Services share it.

There is no Entitlements.plist file, how do I add Universal Links to my iOS app?

The Intel XDK project does not provide access to an Entitlements.plist file. If you are using Cordova CLI locally you would have the ability to add such a file into the CLI platform build directories located in the CLI project folder. Because the Intel XDK build system is cloud-based, your Intel XDK project folders do not include these build directories.

A workaround has been identified by an Intel XDK customer (Keith T.) and is detailed in this forum post.

Why do I get a "signed with different certificate" error when I update my Android app in the Google Play Store?

If you submitted an app to the Google Play Store using a version of the Intel XDK prior to version 3088 (prior to March of 2016), you need to use your "converted legacy" certificate when you build your app in order for the Google Play Store to accept an update to your app. The error message you receive from the store will state that your APK is signed with a different certificate than the previous version.

When using version 3088 (or later) of the Intel XDK, you are given the option to convert your existing Android certificate, that was automatically created for your Android builds with an older version of the Intel XDK, into a certificate for use with the new version of the Intel XDK. This conversion process is a one-time event. After you've successfully converted your "legacy Android certificate" you will never have to do this again.

Please see the following links for more details.

Back to FAQs Main

Coming Soon - the Intel® Quark™ SE Microcontroller C1000 Developer Kit


Coming soon: the Intel® Quark™ SE Microcontroller C1000 Developer Kit. Based on the Intel® Quark™ SE microcontroller C1000, the developer kit features a small form-factor board which contains, among other things, flash storage, a Bluetooth* Low Energy (BLE) module with an integrated antenna, an 802.15.4 transceiver with an on-board antenna, and a 6-axis compass/accelerometer with a temperature sensor. A USB connection enables programming and debugging (JTAG) of the development platform.

Software support comes via the open source Intel® Quark™ Microcontroller Software Interface (Intel® QMSI) board support package, featuring all required drivers, sample applications, and support for the Zephyr Project* RTOS. In addition, Intel® System Studio for Microcontrollers provides an Eclipse*-based IDE for developing, optimizing, and debugging applications. Features include the Intel® Compiler for Intel® Quark™ Microcontrollers, the GNU* Compiler Collection (GCC), Intel® Integrated Performance Primitives for Microcontrollers, the Intel® QMSI board support package, and the Zephyr Project* RTOS.

Stay tuned for updates!

Why Should You Care About Machine Learning?


Machine learning has been around for a while, so even if you haven’t worked on it as a developer, you’re probably very familiar with it as a consumer. When you add something to your cart in Amazon, and see a list of other recommended products that you might also like—that's an example of machine learning. Essentially, machine learning is the development of computer programs that can learn and create their own rules, based on data.

Developing machine learning applications is different from developing standard applications. Instead of writing code that solves a specific problem, machine learning developers create algorithms that can take in data and then build their own logic based on that data. In the Amazon example, data about customer behavior and sales is used to determine which products you're most likely to also be interested in. It isn't looking at a 1:1 relationship between what's in your cart and another specific product—like something a marketer or salesperson recommended selling together—instead, it takes into account all of the existing data, from all visits and all sales, and uses that to predict behavior and determine recommendations that make sense. New products—and new data—are always being input, so the recommendation results are continuously adjusting and improving.

Why should you care about machine learning now? With the current increase in IoT and connected devices, we now have access to so much more data—and along with it, an increased need to manage and understand what we know.

Also, because so many different industries are starting to rely on machine learning, you have a great opportunity as a developer to learn how it works and how it might bring value to your product.
 

Types of Machine Learning Algorithms

There are four main types of machine learning:

Supervised – The training data consist of labeled inputs and known outcomes, which the machine studies until it can apply the label on its own. For example, to create a face detection algorithm, you might provide images of landscapes, people, animals, buildings, and so on, with their respective labels until the machine could reliably recognize a face in an unlabeled image.

Unsupervised – The machine analyzes unlabeled data and categorizes it based on similarities it has identified. So, you might provide the same photos as in the above example, but without their labels. The machine would still be able to cluster images based on shared characteristics (the sharp lines of a cityscape vs. the round shape of a face, for example)—but it would not be able to say that that round shape is a “face.” These programs are used to identify groupings within data sets that may be difficult or impossible for a human to see.

Semi-supervised – A combination of the above, used when there is a large amount of data but only some of it is labeled. Unsupervised learning techniques might be used to group and cluster the unlabeled data, while supervised learning techniques can be used to predict labels for it.

Reinforcement learning – Uses simple reward data to train the machine on ideal behavior within a specific context.
 

Faster than We Can Do by Hand

The biggest advantage of machine learning is that it allows us to do things much more quickly than we'd otherwise be able to. It can't solve problems that a human being couldn't also solve, but it can take in a huge amount of data and very quickly build connections and predictions based on it. That becomes even more important as we continue to expand the amount of data we're generating through IoT and connected devices. Think of a smart outlet or a step counter—or really, anything in your life that generates data—and then think about how much data it's able to generate on a daily basis. And then multiply that by every person who owns that product. The more connected we are, the more information there is; machine learning allows us to identify important patterns and insights at a speed that humans simply can't.
 

What’s the Market?

Any industry with access to data can benefit from a greater understanding of what that data means—whether that's a manufacturing plant trying to anticipate repairs, or the makers of a driverless car. One current example of how industries are using machine learning follows.

Hello, My Name Is Chatbot: Current Trend

This year, Facebook Messenger launched with chatbots, making it possible for companies and consumers to engage using bots. Essentially, this means that when a customer visits your Facebook page, they can hit Message as if sending a direct message, and interact right away with AI that can help them make decisions and learn about products. With each interaction, the chatbot improves. Specific transactions can also take place directly—click on the car icon in Messenger to request a ride from Uber, for example.

These chatbots not only send text, but also images and call-to-action buttons—which means that they can handle automated customer service, e-commerce assistance, and even content. As accuracy continues to improve, this begins to look a lot like an automated concierge, allowing consumers to quickly and easily get the information and service they're looking for. This is part of a bigger trend, sometimes called "conversational commerce," which taps into the popularity of mobile messaging apps and the increasing power of AI—where the future of shopping happens in a chat window.
 

A Few Places to Get You Started

One of the best ways to learn more about machine learning is to look for groups in your area. There are also a lot of resources online. Here are some links to get you started:

Machine learning is a huge topic, with a rich history, and a lot of things for you to consider. It’s also a topic that we’re very interested in, so check back here for more exploration of this topic in the future.

IoT Path-To-Product: The Making of a Connected Transportation Solution


To demonstrate a rapid path-to-product edge IoT solution for the transportation sector, a proof of concept was created using the Grove* IoT Commercial Developer Kit. That prototype was scaled to an industrial solution using an Intel® IoT Gateway, industrial sensors, and Intel® System Studio. This solution monitors the temperature within a truck’s refrigerated cargo area, as well as the open or closed status of the cargo doors. The gateway generates events based on changes to those statuses, to support end-user functionality on a tablet PC application.

Figure 1. The finished product demonstration with custom trailer housing.

 

The core opportunity associated with the Internet of Things (IoT) lies in adding intelligence and connectivity to everyday devices, harnessing information and putting it to use in ways that add value. Monitoring the status of a refrigerated semi-truck trailer hauling perishable goods is a simple example. Alerting the driver when the temperature passes outside a pre-set range or when cargo doors are opened unexpectedly can help avoid financial losses. An IoT solution to monitor and track these aspects of a semi-truck trailer could therefore be a viable commercial product.

Intel undertook a development project to investigate this and other opportunities associated with building a connected transportation solution. The project was presented as a demonstration at Intel® Developer Forum in 2015 and again in 2016. This document recounts the course of the project development effort, to help drive inquiry, invention, and innovation for the Internet of Things.

For a how to for this project, see IoT Path-to-Product: How to Build a Connected Transportation Solution.

Visit GitHub for this project's latest code samples and documentation.

Introduction

The goal of this project was to build a functional prototype and then to transition that proof of concept into an industrial-grade solution for scalable deployment as a commercial product. Rapid prototyping is facilitated by using the Grove IoT Commercial Developer Kit, which consists of an Intel® NUC system, the Intel® IoT Gateway Software Suite, and sensors and components from the Grove Starter Kit Plus (manufactured by Seeed). The project also uses the Arduino* 101 board. Hardware used in the prototype stage of this project is illustrated in Figure 2, and specifications are given in Table 1.

Note: Known in the United States as “Arduino 101,” this board is known elsewhere as “Genuino* 101.” It is referred to throughout the rest of this document as the “Arduino 101” board.

Table 1. Prototype hardware used in connected transportation project

 

Intel® NUC Kit DE3815TYKHE
  • Processor: Intel® Atom™ Processor E3815 (512K Cache, 1.46 GHz)
  • Memory: 8 GB DDR3L-1066 SODIMM (max)
  • Networking/IO: Integrated 10/100/1000 LAN
  • Dimensions: 190 mm x 116 mm x 40 mm

Arduino* 101 Board
  • Microcontroller: Intel® Curie™ Compute Module @ 32 MHz
  • Memory: 196 KB flash memory, 24 KB SRAM
  • IO: 14 digital I/O pins, 6 analog I/O pins
  • Dimensions: 68.6 mm x 53.4 mm

Figure 2. Intel® NUC Kit DE3815TYKHE and Arduino* 101 board.

The course of this project demonstrates the value of the path-to-product approach: it allows a prototype to be built with a relatively small investment of time and effort, followed by an efficient transition to a commercially viable solution. Using a precompiled OS along with RPM packages eliminates unnecessary downloads, OS customization, and the hunt for the libraries needed to bring a project to life.

This project was devised to contribute to innovations around solutions for similar use cases being produced and marketed. While this project was designed to provide only basic functionality, its design is flexible and extensible enough that a variety of features could be added. In particular, the project could be expanded in the future to include web connectivity, cloud capabilities, remote monitoring, and other components.

In the project’s earliest stages, the team listed potential features for the prototype and the product. A sample of these included rear-door status (open or closed), temperature of the trailer, alarms based on the state of the door and temperature, an online application to view data, and in-cab monitoring of information. To demonstrate the viability of creating a robust solution while maintaining simplicity and low cost, the team elected to limit the bill of materials for the prototype phase to just the contents of the Grove IoT Commercial Developer Kit.

Creating the Prototype Proof of Concept

To allow for separation of duties and efficient progress, the team divided the solution into three primary areas of effort:

  • User interface (UI). Part of the team began working on the actual production UI layout and design, looking ahead to the production stages of the project.
  • Application business logic. Part of the team began working on the logic for the prototype application, while also recognizing that changes would be needed as the project progressed toward the commercial solution.
  • Prototype sensor solution. Part of the team began to create the configuration of sensors for the solution, utilizing the UPM/MRAA libraries for rapid development.

This approach of separating the project into discrete segments allowed the team to progress through the prototype phase more rapidly than it otherwise could have, taking best advantage of the skill sets available within the team. In particular, while the user interface was not strictly required in the early phases of the project, it was expected to require the most development time of the three areas listed above; beginning it as early as possible therefore allowed it to be well underway by the time it was needed later in the project.

In terms of the application logic, the team was able to look ahead to the expected final functional prototype and make early decisions with the commercial solution in mind. Overall, the team expected the operation of the door sensor to be relatively simple, allowing greater attention to the proper use of a temperature sensor at prototype and then commercial scale.

By utilizing the sensors in the Grove Starter Kit Plus, we were able to rapidly create a prototype with a functional sensor environment that the UI team could work with. This approach enabled layout and design elements to come to life quickly and provided a future framework for the final functional use case. The prototype configuration, with the Intel NUC, Arduino 101 board, and sensors, is illustrated in Figure 3. The bill of materials is given in Table 2.

Figure 3. Developer kit with selected sensors enabled.

Table 2. Connected transportation prototype components.

 

Base System
  • Intel® NUC Kit DE3815TYKHE: http://www.intel.com/content/www/us/en/support/boards-and-kits/intel-nuc-kits/intel-nuc-kit-de3815tykhe.html
  • Arduino* 101 Board: https://www.arduino.cc/en/Main/ArduinoBoard101
  • USB Type A to Type B Cable (connects the Arduino 101 board to the NUC)

Components from Grove* IoT Commercial Developer Kit
  • Base Shield V2: http://www.seeedstudio.com/depot/Base-Shield-V2-p-1378.html
  • Touch Sensor Module: http://www.seeedstudio.com/depot/Grove-Touch-Sensor-p-747.html
  • Button Module: http://www.seeedstudio.com/depot/Grove-Button-p-766.html
  • Temperature Sensor Module: http://www.seeedstudio.com/depot/Grove-Temperature-Sensor-p-774.html
  • Buzzer Module: http://www.seeedstudio.com/depot/Grove-Buzzer-p-768.html
  • Red LED: http://www.seeedstudio.com/depot/Grove-Red-LED-p-1142.html
  • LCD with RGB Backlight Module: http://www.seeedstudio.com/depot/Grove-LCD-RGB-Backlight-p-1643.html
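
As a rough illustration of how these components come together through the UPM Java bindings, here is a minimal polling sketch. It is a sketch only: the pin numbers are assumptions, and when the Arduino 101 board is attached as an MRAA Firmata subplatform, pin numbers are offset by 512 (so D4 becomes 516).

import upm_grove.GroveButton;
import upm_grove.GroveTemp;
import upm_i2clcd.Jhd1313m1;

public class PrototypeSketch {
    static {
        // UPM's Java bindings load their native libraries explicitly.
        System.loadLibrary("javaupm_grove");
        System.loadLibrary("javaupm_i2clcd");
    }

    public static void main(String[] args) throws InterruptedException {
        GroveButton door = new GroveButton(4);        // simulates the cargo door (pin assumed)
        GroveTemp temp = new GroveTemp(0);            // analog temperature sensor (pin assumed)
        Jhd1313m1 lcd = new Jhd1313m1(0, 0x3E, 0x62); // LCD with RGB backlight on I2C bus 0

        while (true) {
            int celsius = temp.value();               // rounded degrees Celsius
            boolean doorOpen = door.value() != 0;
            lcd.setCursor(0, 0);
            lcd.write(String.format("Temp: %dC ", celsius));
            lcd.setCursor(1, 0);
            lcd.write(doorOpen ? "Door: open  " : "Door: closed");
            Thread.sleep(1000);                       // poll once per second
        }
    }
}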

Use Case

The use case was built and displayed through an administration application to support the following scenario:

 
  1. Press button to start the use case (simulating opening the door):

    a.    Sets the alert threshold to ambient temperature +5 degrees.
    b.    Solid red LED lights up in cab.
    c.    LCD displays current temperature and door status (open), as shown in Figure 4.

    Figure 4. Showing door status.

     

  2. Touch the temperature sensor to raise its reading five degrees (simulating a rise in ambient temperature):

    a.    Buzzer sounds.
    b.    Red LED blinks continuously.
    c.    LCD turns red and displays actual temperature and door status (open), as shown in Figure 5.

    Figure 5. Showing high temperature status.

     

  3. Touch sensor to acknowledge alert (buzzer turns off).
  4. Press button to close the door:

    a.    Red LED continues to blink until temperature passes below threshold.
    b.    LCD displays temperature and door status (closed).
    c.    When temperature passes below threshold, blinking red LED turns off, solid green LED lights up, and LCD turns green.
    d.    LCD displays temperature and door status (closed).

Simulation

This simulation demonstrates how monitoring temperature changes and alerting the driver when the temperature becomes critical can reduce the potential loss of temperature-sensitive cargo, as illustrated in Figures 6 and 7.

Figure 6. Log file showing events.
Figure 7. Base online view as envisioned.

Target Commercial Solution

With an operational prototype based on the Intel IoT Developer Kit and Grove IoT Commercial Developer Kit, it was necessary to determine how to proceed to create a commercial solution. Table 3 outlines how the components used in the prototype phase could be transitioned to a production solution.

Table 3. Components in prototype versus production solution.

 

Prototype -> Production Solution
  • Buzzer: Grove Kit Buzzer -> alarm on phone (customer application)
  • LCD: Grove LCD panel -> screen on phone (customer application)
  • LED (red): Grove Kit LED -> light on phone (customer application)
  • Button: Grove Kit Button -> industrial magnetic sensor with paired magnet
  • Touch sensor: Grove Kit Touch Sensor -> touch on phone (customer application)
  • Temperature sensor: Grove Kit Temp Sensor -> commercial temperature sensor
  • Heat source: person's finger -> 20-watt halogen puck light
  • Gateway: Intel® NUC and Arduino 101 board -> Intel® IoT Gateway

In addition, there are many commercial gateways available, with design differences making them suited to various industries and use cases. A key consideration for this project was a broad range of I/O options, to support both current and future functionality, specifically for connecting sensors to provide a data feed.

An Intel® IoT Gateway was chosen as the gateway device for the product portion of this project, as shown in Figure 8. The processing power and I/O functions were deemed sufficient for the presented commercial usage.

A wired Modbus temperature sensor was chosen to provide a reliable connection to obtain temperature readings every several seconds. All communications on devices were performed via direct wiring or via Ethernet. Standard MRAA/UPM libraries were maintained throughout the process without any modifications.

The gateway acts as a web server, storing data as well as making calls to the temperature sensor to keep the data fresh. The Java UPM library uses libmodbus to read the Comet* temperature sensor and send periodic updates to the Tomcat* web server.
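
As a hedged sketch of that polling loop, the following uses the T3311's UPM Java binding; the serial device path and Modbus address are assumptions to match against the actual wiring, and the real project additionally serves the readings through Tomcat.

import upm_t3311.T3311;

public class TemperaturePoller {
    static {
        System.loadLibrary("javaupm_t3311"); // native UPM binding for the Comet T3311
    }

    public static void main(String[] args) throws InterruptedException {
        // Device path and Modbus slave address are assumptions.
        T3311 sensor = new T3311("/dev/ttyUSB0", 1);

        while (true) {
            sensor.update();                 // refresh readings over Modbus
            System.out.printf("Cargo temperature: %.1f C%n", sensor.getTemperature());
            Thread.sleep(5000);              // poll every few seconds
        }
    }
}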

Figure 8. Gateway installed as part of demo with temperature sensor.

Transferring Code to the Gateway

Typically, ramping up to a commercial gateway involves having to revamp code so that it is compatible with whichever services are available on the system. In this case, the coding on the prototype was all performed in Java*, HTML, and JavaScript*, making the transition to a commercial solution relatively simple. The code transition was simplified by the use of the same MRAA/UPM libraries in both phases of the project.

Mapping Grove Sensors to Industrial Sensors

Using MRAA and UPM libraries can help jumpstart a project. The following steps cover porting the app to the commercial product solution (a sensor-abstraction sketch follows the steps):

  1. Target desired industrial hardware:

    a.    Determine whether the hardware requires additional libraries or application support.
    b.    If needed, integrate libraries and software and create OS layers for software deployment.

  2. Once commercial product hardware is successfully integrated into the prototype solution, remove the code that is no longer needed:

    a.    Utilize existing layers created during the prototype phase to install solution dependencies.
    b.    Make changes as needed for new hardware.

  3. Take new and old layers and build into production runtime.
  4. Complete all installation and testing on production hardware.
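
To keep step 2 small, one option is to hide each sensor behind a common interface, so that only the construction site changes between phases. The following Java sketch is hypothetical; the interface name, pin number, and device path are all illustrative:

// Hypothetical abstraction: both sensor generations satisfy one interface,
// so the application logic is untouched when the hardware is swapped.
interface TemperatureSource {
    float readCelsius();
}

// Prototype phase: Grove analog temperature sensor (pin is illustrative).
class GroveTempSource implements TemperatureSource {
    private final upm_grove.GroveTemp sensor = new upm_grove.GroveTemp(0);
    public float readCelsius() { return sensor.value(); }
}

// Production phase: Comet T3311 over Modbus (device path is an assumption).
class T3311Source implements TemperatureSource {
    private final upm_t3311.T3311 sensor = new upm_t3311.T3311("/dev/ttyUSB0", 1);
    public float readCelsius() { sensor.update(); return sensor.getTemperature(); }
}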

Customer Application

The base customer application, shown in Figures 9 through 12, was created to replace the functionality of the Grove LCD, LED, buzzer, and touch sensor that the driver would interact with. In the production solution, the customer application would reside on the mobile device carried by the driver, allowing for easy notification of and response to alerts. The customer application is quite simple in this example but could easily be expanded. It has two status indicators, for temperature and door status; when an alert is raised, an alert indicator becomes active and an acknowledge button appears for clearing it.

Figure 9. Main status screen.
Figure 10. Status showing an alert.
Figure 11. Showing a full alert and acknowledge button active.
Figure 12. Initial setup screen finding IP address of gateway.

Conclusion

This exercise demonstrates use of the Grove IoT Commercial Developer Kit to rapidly develop a prototype. With wide-ranging libraries, the ease of use of the Developer Kit simplifies the development process while also providing high compatibility for commercialization of the product. Scaling up to a commercial gateway was quite easy, as the team was able to directly copy code and have it function immediately.

More Information


IoT Path-To-Product: How to Build a Connected Transportation Solution


This Internet of Things (IoT) path-to-product project is part of a series that portrays how to develop a commercial IoT solution from the initial idea stage, through prototyping and refinement to create a viable product. It uses the Grove* IoT Commercial Developer Kit, with the prototype built on an Intel® Next Unit of Computing (Intel® NUC) Kit DE3815TYKHE small-form-factor PC and Arduino* 101 board.

This document demonstrates how to build a prototype and utilize these same technologies in deploying an Intel® IoT Gateway and industrial sensors. It does not require special equipment or deep expertise, and as such, it is intended to be instructive toward developing IoT projects in general.

Note: Known in the US as “Arduino* 101,” this board is known elsewhere as “Genuino* 101.” It is referred to throughout the rest of this document as the “Arduino 101” board.

Building the Prototype

From this exercise, developers will learn to do the following:

  • Connect to the Intel® NUC Kit DE3815TYKHE.
  • Interface with the I/O and sensor repository for the Intel® NUC using MRAA and UPM from the Intel® IoT Developer Kit, a complete hardware and software solution to help developers explore IoT and implement innovative projects.
  • Run the Java* code sample in Intel® System Studio IoT Edition, an IDE for creating applications that interact with sensors and actuators, enabling a quick start for developing software for Intel® IoT platforms.

To see the making of this project, see IoT Path-to-Product: The Making of a Connected Transportation Solution.

Visit GitHub for this project's latest code samples and documentation.

What it Does

This project simulates the following parts of a transportation monitoring solution:

  • Door. The door can be opened or closed; when it is opened, the driver is signaled that something might be wrong.
  • Temperature. The temperature inside the truck is monitored. The data is logged, and above a certain threshold, an alarm is raised.
  • Alarm. Under certain conditions, an alarm is raised. The alarm can be canceled by pressing the touch button or when the parameters of the system return to normal.
  • Display. Displays the status of the system, temperature, and door status.

How it Works

This connected-transportation application operates based on the following sensor data:

  • Open/closed status of the truck door
  • Temperature of the truck interior
  • Events: open/close door, change temperature, set temperature threshold, trigger/stop alarm

All data is forwarded to the web interface, which can be used to monitor the status of the truck.
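
As a minimal sketch of the alarm rule described above (class and method names are hypothetical, not taken from the project source):

// Hypothetical sketch of the alarm rule; thresholds and names are illustrative.
class AlarmLogic {
    private final float threshold;
    private boolean acknowledged = false;

    AlarmLogic(float ambientCelsius) {
        this.threshold = ambientCelsius + 5.0f;   // threshold = ambient + 5 degrees, per the use case
    }

    boolean alarmActive(float currentCelsius) {
        if (currentCelsius <= threshold) {
            acknowledged = false;                 // parameters back to normal: clear the alarm
            return false;
        }
        return !acknowledged;                     // above threshold and not yet muted
    }

    void acknowledge() {
        acknowledged = true;                      // touch event mutes the alarm
    }
}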

Set up the Intel® NUC Kit DE3815TYKHE


This section gives instructions for installing the Intel® IoT Gateway Software Suite on the Intel NUC.

Note: Due to the limited size of the local storage drive, we recommend against setting a recovery partition; you can always return to the factory image by using the USB drive again. Once setup is complete, you can use your gateway remotely from your development machine, provided both are on the same network. If you would like to use the Intel® IoT Gateway Developer Hub instead of the command line, enter the gateway's IP address into your browser and go through the first-time setup.

Note: If you are on an Intel network, you need to set up a proxy server.

  1. Create an account on the Intel® IoT Platform Marketplace if you do not already have one.
  2. Download the Intel® IoT Gateway Software Suite and follow the instructions received by e-mail to download the image file.
  3. Unzip the archive and write the .img file to a 4 GB USB drive:

    On Microsoft Windows*, you can use a tool like Win32 Disk Imager*: https://sourceforge.net/projects/win32diskimager.
    On Linux*, use sudo dd if=GatewayOS.img of=/dev/sdX bs=4M; sync, where sdX is your USB drive.

  4. Unplug the USB drive from your system and plug it into the Intel NUC along with a monitor, keyboard, and power cable.
  5. Turn on the Intel® NUC and enter the BIOS by pressing F2 at boot time.
  6. Boot from the USB drive:

    a. From the Advanced menu, select Boot.
    b. From Boot Configuration, under OS Selection, select Linux.
    c. Under Boot Devices, make sure the USB check box is selected.
    d. Save the changes and reboot.
    e. Press F10 to enter the boot selection menu and select the USB drive.

  7. Log into the system with root:root.
  8. Install Wind River Linux on local storage:
    ~# deploytool d /dev/mmcblk0 lvm 0 resetmedia -F
  9. Use the poweroff command to shut down your gateway, unplug the USB drive, and turn your gateway back on to boot from the local storage device.
  10. Plug in an Ethernet cable and use the ifconfig eth0 command to find the IP address assigned to your gateway (assuming you have a proper network setup).
  11. Use the Intel® IoT Gateway Developer Hub to update the MRAA and UPM repositories to the latest versions from the official repository (https://01.org). You can achieve the same result by entering the following commands:
     ~# smart update
     ~# smart upgrade
     ~# smart install upm
  12. Plug in an Arduino* 101 board and reboot the Intel® NUC. The Firmata* sketch is flashed onto Arduino* 101, and you are now ready to use MRAA and UPM with it.

Set up the Arduino* 101 Board

Setup instructions for the Arduino* 101 board are available at https://www.arduino.cc/en/Guide/Arduino101

Connect other Components

This section covers making the connections from the Intel® NUC to the rest of the hardware components. The bill of materials for the prototype is summarized in Table 1, and the assembly of those components is illustrated in Figure 1.

Table 1. Connected transportation prototype components.

 

Base System
  • Intel® NUC Kit DE3815TYKHE
  • Arduino* 101 Board (sensor hub)
  • USB Type A to Type B Cable (connects the Arduino 101 board to the NUC)

Components from Grove* IoT Commercial Developer Kit
  • Base Shield V2
  • Touch Sensor Module (alarm mute)
  • Button Module (door toggle)
  • Temperature Sensor Module (monitors temperature)
  • Buzzer Module (alarm)
  • Red LED (alarm status light)
  • LCD with RGB Backlight Module (status display)

Figure 1. Connected transportation proof of concept prototype.

How to Build the Product

From this exercise, developers will learn how to do the following:

  • Connect to the Dell iSeries Wyse* 3290 IoT Gateway.
  • Interface with the I/O and sensor repository using MRAA and UPM from the Intel® IoT Developer Kit, a complete hardware and software solution to help developers explore IoT and implement innovative projects.
  • Run the code sample in Intel® System Studio IoT Edition, an IDE for creating applications that interact with sensors and actuators, enabling a quick start for developing software for Intel® IoT Platforms.

Visit GitHub for this project's latest code samples and documentation.

What it Does

This project simulates the following parts of a transportation monitoring solution:

  • Door. The door can be closed or opened, in which case the driver is signaled that something might be wrong.
  • Temperature. The temperature inside the truck is monitored. The data is logged, and above a certain threshold, an alarm is raised.
  • Alarm. Under certain conditions, an alarm is raised. The alarm status can be monitored and canceled through the customer application.
  • Display. Displays the status of the truck on the customer application.

How it Works

This transportation application operates based on the following sensor data:

  • Open/closed status of the truck door
  • Temperature of the truck interior
  • Events: open/close door, change temperature, set temperature threshold, trigger/stop alarm

All data is forwarded to the admin application, which can be used to monitor the status of the truck.

Set up the Dell iSeries Wyse* 3290 IoT Gateway

This section gives instructions for installing the Intel® IoT Gateway Software Suite on the Dell Wyse* 3290.

Note: If you are on an Intel network, you need to set up a proxy server.

  1. Create an account on the Intel® IoT Platform Marketplace if you do not already have one.
  2. Order the Intel® IoT Gateway Software Suite, and then follow the instructions you will receive by email to download the image file.
  3. Unzip the archive, and then write the .img file to a 4 GB USB drive:

    •    On Microsoft Windows, you can use a tool like Win32 Disk Imager: https://sourceforge.net/projects/win32diskimager
    •    On Linux, use sudo dd if=GatewayOS.img of=/dev/sdX bs=4M; sync, where sdX is your USB drive.

  4. Unplug the USB drive from your system, and then plug it into the Dell Wyse* 3290 along with a monitor, keyboard, and power cable.
  5. Turn on the Dell Wyse* 3290, and then enter the BIOS by pressing F2 at boot time.
  6. Boot from the USB drive:

    a.    On the Advanced tab, make sure Boot from USB is enabled.
    b.    On the Boot tab, put the USB drive first in the order of the boot devices.
    c.    Save the changes, and then reboot the system.

  7. Log in to the system with root:root.
  8. Install Wind River* Linux on local storage: 
    ~# deploytool d /dev/mmcblk0 lvm 0 resetmedia -F
  9. Use the poweroff command to shut down your gateway, unplug the USB drive, and then turn your gateway back on to boot from the local storage device.
  10. Plug in an Ethernet cable, and then use the ifconfig eth0 command to find the IP address assigned to your gateway (assuming you have a proper network setup).
  11. Use the Intel® IoT Gateway Developer Hub to update the MRAA and UPM repositories to the latest versions from the official repository (https://01.org). You can achieve the same result by entering the following commands:
    ~# smart update
    ~# smart upgrade
    ~# smart install upm
  12. Connect the FTDI* UMFT4222EV expansion board through a USB cable.
  13. Connect the Comet* T3311 temperature sensor to the serial port.

Connect other Components

This section covers making the connections from the Dell Wyse* 3290 to the rest of the hardware components. The bill of materials for the product version of the connected transportation project is summarized in Table 2, and the assembly of those components is shown in Figure 2.

Table 2. Transportation product components.

 

Base System
  • Dell iSeries Wyse* 3290 IoT Gateway
  • FTDI UMFT4222EV
  • USB Type A to Type Micro-B Cable (connects the UMFT4222EV board to the gateway)

Sensors and other Components
  • Comet T3311 (temperature sensor)
  • Grove* SPDT Relay (30A) (fan/light control)
  • Magnetic Switch (door sensor)
  • 10uF Capacitor (optional)
  • 5V DC Lightbulb
  • 5V DC Fan

Figure 2. Assembled connected-transportation product.

 

How to Set up the Program

  1. To begin, clone the Path to Product repository with Git* on your computer as follows:
    $ git clone https://github.com/intel-iot-devkit/path-to-product.git
  2. Alternatively, the source can be downloaded from https://github.com/intel-iot-devkit/path-to-product. Once the .zip file is downloaded, uncompress it, and then use the files in the directory for this example.

Adding the Program to Intel® System Studio IoT Edition

Note: The following screenshots are from the Alarm Clock sample; however, the technique for adding the program is the same, just with different source files and jars.

  1. Open Intel® System Studio IoT Edition. It will start by asking for a workspace directory. Choose one and then click OK.
  2. In Intel® System Studio IoT Edition, select File -> New -> Intel(R) IoT Java Project.
  3. Give the project the name “Transportation Demo” and then click Next.
  4. You now need to connect to your Intel® NUC from your computer to send code to it. Choose a name for the connection and enter the IP address of the Intel® NUC in the Target Name field. You can also search for it using the Search Target button. Click Finish when you are done.
  5. You have successfully created an empty project. You now need to copy the source files and the config file to the project. Drag all of the files from your Git repository's “src” folder into the new project's src folder in Intel® System Studio IoT Edition. Make sure the previously auto-generated main class is overwritten.
    The project uses the following external jars: commons-cli-1.3.1.jar, tomcat-embed-core.jar, and tomcat-embed-logging-juli.jar. These can be found in the Maven Central Repository. Create a “jars” folder in the project's root directory, and copy all needed jars into this folder. In Intel® System Studio IoT Edition, select all jar files in the “jars” folder, and then right-click -> Build Path -> Add to Build Path.
  6. Now you need to add the UPM jar files relevant to this specific sample. Right-click the project's root -> Build Path -> Configure Build Path. On the Java Build Path -> Libraries tab, click Add External JARs...
    For this sample, you will need the following jars:

    •    upm_buzzer.jar
    •    upm_grove.jar
    •    upm_i2clcd.jar
    •    upm_t3311.jar
    •    upm_ttp223.jar
    •    mraa.jar

  7. The jars can be found at the IoT Devkit installation root path\iss-iot-win\devkit-x86\sysroots\i586-poky-linux\usr\lib\java.
  8. Afterwards, copy the www folder to the home directory on the target platform using scp or WinSCP. Create a new Run configuration in Eclipse for the project for the Java Application. Set the Main Class as com.intel.pathtoproduct.JavaONEDemoMulti in the Main tab. Then, in the Arguments tab, set the program arguments (a parsing sketch follows this list):
  • For the devkit version (with Intel® NUC): -config devkit -webapp <path/to/www/folder> -firmata
  • For the commercial version (with Dell Wyse* 3290): -config commercial -webapp <path/to/www/folder>
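
Since the project bundles commons-cli-1.3.1.jar, a hedged sketch of how such flags could be parsed looks like this; the option names match the run configuration above, but the class itself is illustrative, not the project's actual entry point.

import org.apache.commons.cli.CommandLine;
import org.apache.commons.cli.DefaultParser;
import org.apache.commons.cli.Options;
import org.apache.commons.cli.ParseException;

public class ArgSketch {
    public static void main(String[] args) throws ParseException {
        // Option names taken from the run configuration above; parsing code is illustrative.
        Options options = new Options();
        options.addOption("config", true, "devkit or commercial");
        options.addOption("webapp", true, "path to the www folder");
        options.addOption("firmata", false, "use the Arduino 101 Firmata subplatform");

        CommandLine cmd = new DefaultParser().parse(options, args);
        String config = cmd.getOptionValue("config", "devkit");
        String webapp = cmd.getOptionValue("webapp");
        boolean firmata = cmd.hasOption("firmata");
        System.out.printf("config=%s webapp=%s firmata=%b%n", config, webapp, firmata);
    }
}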

Running without an IDE

Download the repo directly to the target platform and run the start_devkit.sh or start_commercial.sh scripts.

Conclusion

As this how-to document demonstrates, IoT developers can build prototypes at relatively low cost and without specialized skill sets. Using the Grove* IoT Commercial Developer Kit and an Arduino* 101 board, project teams can conduct rapid prototyping to test the viability of IoT concepts as part of the larger path-to-product process.

More Information

Distributed Training of Deep Networks on Amazon Web Services* (AWS)


Download Document

Ravi Panchumarthy (Intel), Thomas “Elvis” Jones (AWS), Andres Rodriguez (Intel), Joseph Spisak (Intel)

Deep neural networks are capable of amazing levels of representation power, resulting in state-of-the-art accuracy in areas such as computer vision, speech recognition, natural language processing, and various data analytics domains. Deep networks require large amounts of computation to train, and the time to train is often days or weeks. Intel is optimizing popular frameworks such as Caffe*, TensorFlow*, Theano*, and others to significantly improve performance and reduce the overall time to train on a single node. In addition, Intel is adding or enhancing multinode distributed training capabilities in these frameworks to share the computational requirements across multiple nodes and further reduce the time to train. A workload that previously required days can now be trained in a matter of hours. Read more about this.

Amazon Web Services* (AWS) Virtual Private Cloud (VPC) provides a great environment to facilitate multinode distributed deep network training. AWS and Intel partnered to create a simple set of scripts for creating clusters that allows developers to easily deploy and train deep networks, leveraging the scale of AWS. In this article, we provide the steps to set up the AWS CloudFormation* environment to train deep networks using the Caffe* framework.

AWS CloudFormation Setup

The following steps create a VPC that has an Elastic Compute Cloud (EC2) t2.micro instance as the AWS CloudFormation cluster (cfncluster) controller. The cfncluster controller is then used to create a cluster composed of a master EC2 instance and a number of compute EC2 instances within the VPC.

Steps to deploy AWS CloudFormation and cfncluster

  1. Use the AWS Management Console to launch AWS CloudFormation (Figure 1).


    Figure 1. CloudFormation in Amazon Web Services

  2. Click Create Stack.
  3. In the section labeled Choose a template (Figure 2), select Specify an Amazon S3 template URL, and then enter https://s3.amazonaws.com/caffecfncluster/1.0/intelcaffe_cfncluster.template. Click Next.


    Figure 2. Entering the template URL.

  4. Give the Stack a name, such as myFirstStack. Under Select a key pair, find the key pair you just named (follow these instructions if you need to create a key pair). Leave the rest of the Parameters as they are. Click Next.
  5. Enter a Key (for example, name) and a Value (for example, cfnclustercaffe).
    Note that you can give any names to the key and value; the value does not have to match the key pair from the previous step.
  6. Click Next.
  7. Review the stack, check the acknowledgement box, and then click Create. Creating the stacks will take a few minutes. Wait until the status of all three created stacks is CREATE_COMPLETE.
  8. The template used in Step 3 calls two other nested templates, creating a VPC with an EC2 t2.micro instance (Figure 3). Select the stack with the EC2 instance, and then select Resources. Click the Physical ID of the cfnclusterMaster.


    Figure 3. Selecting the Physical ID from the Resources tab.

  9. This will take you to AWS EC2 console (Figure 4). Under Description, note the VPC ID and the Subnet ID as you’ll need them in a later step. Right-click on the instance, select Connect and follow the instructions.


    Figure 4. AWS EC2 console.

  10. Once you ssh into the instance, prepare to modify the cluster’s configuration with the following commands:

    cd .cfncluster
    cp config.edit_this_cfncluster_config config
    vi config

  11. Follow the comments in the config file (opened with the final command in Step 10) to fill in the appropriate information.

    Note that while the master node is not labeled as a compute node, it also acts as one. Therefore, if the total number of nodes to be used in training is 32, then choose queue_size = 31 compute nodes.

    • Use the VPC ID and Subnet ID obtained in Step 9.
    • The latest custom_ami to use should be ami-77aa6117; this article will be updated when newer AMIs are provided.
  12. Launch a cluster with the command cfncluster create <vpc_name_chosen_in_config_file>. This will launch more AWS CloudFormation templates. You can see them via the AWS CloudFormation page in the AWS Management Console.

Sample Scripts to Train a Few Popular Networks

After the AWS CloudFormation setup is complete, if you configured the size of the cluster to be N, there will be N+1 instances created (one master node and N compute nodes). Note that the master node is also treated as a compute node. The created cluster has a drive shared among all N+1 instances. The instances contain intelcaffe, Intel® Math Kernel Library (Intel® MKL), and sample scripts to train CIFAR-10 and GoogLeNet.

To start training a sample network, log in to the master node and configure the provided scripts: CIFAR-10 (~/scripts/aws_ic_mn_run_cifar.sh) and GoogLeNet (~/scripts/aws_ic_mn_run_googlenet.sh). Both scripts have the following variables, which need to be edited before running:

# Set stackname_tag to the VPC name prefixed with cfncluster-. For example:
# cfncluster-myvpc-name. The VPC name is the same as the value for vpc_settings.
stackname_tag=cfncluster-
num_instances=
aws_region=us-west-2

There are a few other configurable variables for further customization in both ~/scripts/aws_ic_mn_run_cifar.sh and ~/scripts/aws_ic_mn_run_googlenet.sh.

To run CIFAR-10 training, after editing the above mentioned variables in the script, run:

cd ~/scripts/
./aws_ic_mn_run_cifar.sh

To run GoogLeNet training, after editing the above mentioned variables in the script, run:

cd ~/scripts/
./aws_ic_mn_run_googlenet.sh

The script aws_ic_mn_run_cifar.sh creates a hosts file (~/hosts.aws) by querying and retrieving instance information based on the stackname_tag variable. It then updates the solver and train_val prototxt files. The script starts the data server, which provides data to the compute nodes; there is a little overhead on the master node, since the data server runs alongside its compute work. After the data server is launched, the distributed training is launched using the mpirun command.

The script aws_ic_mn_run_googlenet.sh creates a hosts file (~/hosts.aws) by querying and retrieving instance information based on the stackname_tag variable. Unlike the CIFAR-10 example, where the data server provides the data, in GoogLeNet training each worker reads its own data. The script creates separate solver and train_val prototxt files and a train.txt file for each worker, and then launches the job using the mpirun command.

Notices

Intel and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries. *Other names and brands may be claimed as the property of others. © Intel Corporation.

Intel technologies’ features and benefits depend on system configuration and may require enabled hardware, software or service activation. Performance varies depending on system configuration. No computer system can be absolutely secure. Check with your system manufacturer or retailer or learn more at intel.com.

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products.

For more information go to http://www.intel.com/performance.

API without Secrets: Introduction to Vulkan* Part 5


Download  [PDF 723KB]


Go to: API without Secrets: Introduction to Vulkan* Part 4


Table of Contents

Tutorial 5: Staging Resources – Copying Data Between Buffers

Tutorial 5: Staging Resources – Copying Data between Buffers

In this part of the tutorial we will focus on improving performance. At the same time, we will prepare for the next tutorial, in which we introduce images and descriptors (shader resources). Using the knowledge we gather here, it will be easier for us to follow the next part and squeeze as much performance as possible from our graphics hardware.

What are “staging resources” or “staging buffers”? They are intermediate or temporary resources used to transfer data from an application (CPU) to a graphics card’s memory (GPU). We need them to increase our application’s performance.

In Part 4 of the tutorial we learned how to use buffers, bind them to a host‑visible memory, map this memory, and transfer data from the CPU to the GPU. This approach is easy and convenient for us, but we need to know that host‑visible parts of a graphics card’s memory aren’t the most efficient. Typically, they are much slower than the parts of the memory that are not directly accessible to the application (cannot be mapped by an application). This causes our application to execute in a sub-optimal way.

One solution to this problem is to always use device-local memory for all resources involved in a rendering process. But as device-local memory isn’t accessible for an application, we cannot directly transfer any data from the CPU to such memory. That’s why we need intermediate, or staging, resources.

In this part of the tutorial we will bind the buffer with vertex attribute data to the device-local memory. And we will use the staging buffer to mediate the transfer of data from the CPU to the vertex buffer.

Again, only the differences between this tutorial and the previous tutorial (Part 4) are described.

Creating Rendering Resources

This time I have moved rendering resource creation to the beginning of our code. Later we will need to record and submit a command buffer that transfers data from the staging resource to the vertex buffer. I have also refactored the rendering resource creation code, replacing multiple loops with a single loop in which we create all the resources that compose a virtual frame.

bool Tutorial05::CreateRenderingResources() {
  if( !CreateCommandPool( GetGraphicsQueue().FamilyIndex, &Vulkan.CommandPool ) ) {
    return false;
  }

  for( size_t i = 0; i < Vulkan.RenderingResources.size(); ++i ) {
    if( !AllocateCommandBuffers( Vulkan.CommandPool, 1, &Vulkan.RenderingResources[i].CommandBuffer ) ) {
      return false;
    }

    if( !CreateSemaphore( &Vulkan.RenderingResources[i].ImageAvailableSemaphore ) ) {
      return false;
    }

    if( !CreateSemaphore( &Vulkan.RenderingResources[i].FinishedRenderingSemaphore ) ) {
      return false;
    }

    if( !CreateFence( VK_FENCE_CREATE_SIGNALED_BIT, &Vulkan.RenderingResources[i].Fence ) ) {
      return false;
    }
  }
  return true;
}

bool Tutorial05::CreateCommandPool( uint32_t queue_family_index, VkCommandPool *pool ) {
  VkCommandPoolCreateInfo cmd_pool_create_info = {
    VK_STRUCTURE_TYPE_COMMAND_POOL_CREATE_INFO,       // VkStructureType                sType
    nullptr,                                          // const void                    *pNext
    VK_COMMAND_POOL_CREATE_RESET_COMMAND_BUFFER_BIT | // VkCommandPoolCreateFlags       flags
    VK_COMMAND_POOL_CREATE_TRANSIENT_BIT,
    queue_family_index                                // uint32_t                       queueFamilyIndex
  };

  if( vkCreateCommandPool( GetDevice(), &cmd_pool_create_info, nullptr, pool ) != VK_SUCCESS ) {
    std::cout << "Could not create command pool!"<< std::endl;
    return false;
  }
  return true;
}

bool Tutorial05::AllocateCommandBuffers( VkCommandPool pool, uint32_t count, VkCommandBuffer *command_buffers ) {
  VkCommandBufferAllocateInfo command_buffer_allocate_info = {
    VK_STRUCTURE_TYPE_COMMAND_BUFFER_ALLOCATE_INFO,   // VkStructureType                sType
    nullptr,                                          // const void                    *pNext
    pool,                                             // VkCommandPool                  commandPool
    VK_COMMAND_BUFFER_LEVEL_PRIMARY,                  // VkCommandBufferLevel           level
    count                                             // uint32_t                       bufferCount
  };

  if( vkAllocateCommandBuffers( GetDevice(), &command_buffer_allocate_info, command_buffers ) != VK_SUCCESS ) {
    std::cout << "Could not allocate command buffer!"<< std::endl;
    return false;
  }
  return true;
}

bool Tutorial05::CreateSemaphore( VkSemaphore *semaphore ) {
  VkSemaphoreCreateInfo semaphore_create_info = {
    VK_STRUCTURE_TYPE_SEMAPHORE_CREATE_INFO,          // VkStructureType                sType
    nullptr,                                          // const void*                    pNext
    0                                                 // VkSemaphoreCreateFlags         flags
  };

  if( vkCreateSemaphore( GetDevice(), &semaphore_create_info, nullptr, semaphore ) != VK_SUCCESS ) {
    std::cout << "Could not create semaphore!"<< std::endl;
    return false;
  }
  return true;
}

bool Tutorial05::CreateFence( VkFenceCreateFlags flags, VkFence *fence ) {
  VkFenceCreateInfo fence_create_info = {
    VK_STRUCTURE_TYPE_FENCE_CREATE_INFO,              // VkStructureType                sType
    nullptr,                                          // const void                    *pNext
    flags                                             // VkFenceCreateFlags             flags
  };

  if( vkCreateFence( GetDevice(), &fence_create_info, nullptr, fence ) != VK_SUCCESS ) {
    std::cout << "Could not create a fence!"<< std::endl;
    return false;
  }
  return true;
}
1.Tutorial05.cpp

First we create a command pool, indicating that command buffers allocated from it will be short-lived. In our case, all command buffers will be submitted only once before being rerecorded.

Next we iterate over an arbitrarily chosen number of virtual frames. In this code example, the number of virtual frames is three. Inside the loop, for each virtual frame, we allocate one command buffer, create two semaphores (one for image acquisition and a second to indicate that frame rendering is done), and create a fence. Framebuffer creation is done inside a drawing function, just before command buffer recording.

This is the same set of rendering resources used in Part 4, where you can find a more thorough explanation of what is going on in the code. I will also skip render pass and graphics pipeline creation. They are created in exactly the same way they were created previously. Since nothing has changed here, we will jump directly to buffer creation.

Buffer creation

Here is our general code used for buffer creation:

VkBufferCreateInfo buffer_create_info = {
  VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO,             // VkStructureType                sType
  nullptr,                                          // const void                    *pNext
  0,                                                // VkBufferCreateFlags            flags
  buffer.Size,                                      // VkDeviceSize                   size
  usage,                                            // VkBufferUsageFlags             usage
  VK_SHARING_MODE_EXCLUSIVE,                        // VkSharingMode                  sharingMode
  0,                                                // uint32_t                       queueFamilyIndexCount
  nullptr                                           // const uint32_t                *pQueueFamilyIndices
};

if( vkCreateBuffer( GetDevice(), &buffer_create_info, nullptr, &buffer.Handle ) != VK_SUCCESS ) {
  std::cout << "Could not create buffer!"<< std::endl;
  return false;
}

if( !AllocateBufferMemory( buffer.Handle, memoryProperty, &buffer.Memory ) ) {
  std::cout << "Could not allocate memory for a buffer!"<< std::endl;
  return false;
}

if( vkBindBufferMemory( GetDevice(), buffer.Handle, buffer.Memory, 0 ) != VK_SUCCESS ) {
  std::cout << "Could not bind memory to a buffer!"<< std::endl;
  return false;
}

return true;

2.Tutorial05.cpp, function CreateBuffer()

The code is wrapped into a CreateBuffer() function, which accepts the buffer’s usage, size, and requested memory properties. To create a buffer we need to prepare a variable of type VkBufferCreateInfo. It is a structure that contains the following members:

  • sType – Standard type of the structure. Here it should be equal to VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO.
  • pNext – Pointer reserved for extensions.
  • flags – Parameter describing additional properties of the buffer. Right now we can only specify that the buffer can be backed by a sparse memory.
  • size – Size of the buffer (in bytes).
  • usage – Bitfield indicating intended usages of the buffer.
  • sharingMode – Queue sharing mode.
  • queueFamilyIndexCount – Number of different queue families that will access the buffer in case of a concurrent sharing mode.
  • pQueueFamilyIndices – Array with indices of all queue families that will access the buffer when concurrent sharing mode is used.

Right now we are not interested in binding a sparse memory. We do not want to share the buffer between different device queues, so sharingMode, queueFamilyIndexCount, and pQueueFamilyIndices parameters are irrelevant. The most important parameters are size and usage. We are not allowed to use a buffer in a way that is not specified during buffer creation. Finally, we need to create a buffer that is large enough to contain our data.

To create a buffer we call the vkCreateBuffer() function, which when successful stores the buffer handle in a variable we provided the address of. But creating a buffer is not enough. A buffer, after creation, doesn’t have any storage. We need to bind a memory object (or part of it) to the buffer to back its storage. Or, if we don’t have any memory objects, we need to allocate one.

Each buffer’s usage may have a different memory requirement, which is relevant when we want to allocate a memory object and bind it to the buffer. Here is a code sample that allocates a memory object for a given buffer:

VkMemoryRequirements buffer_memory_requirements;
vkGetBufferMemoryRequirements( GetDevice(), buffer, &buffer_memory_requirements );

VkPhysicalDeviceMemoryProperties memory_properties;
vkGetPhysicalDeviceMemoryProperties( GetPhysicalDevice(), &memory_properties );

for( uint32_t i = 0; i < memory_properties.memoryTypeCount; ++i ) {
  if( (buffer_memory_requirements.memoryTypeBits & (1 << i)) &&
      ((memory_properties.memoryTypes[i].propertyFlags & property) == property) ) {

    VkMemoryAllocateInfo memory_allocate_info = {
      VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO,       // VkStructureType                sType
      nullptr,                                      // const void                    *pNext
      buffer_memory_requirements.size,              // VkDeviceSize                   allocationSize
      i                                             // uint32_t                       memoryTypeIndex
    };

    if( vkAllocateMemory( GetDevice(), &memory_allocate_info, nullptr, memory ) == VK_SUCCESS ) {
      return true;
    }
  }
}
return false;

3.Tutorial05.cpp, function AllocateBufferMemory()

Similarly to the code in Part 4, we first check what the memory requirements for a given buffer are. After that we check the properties of a memory available in a given physical device. It contains information about the number of memory heaps and their capabilities.

Next we iterate over each available memory type and check if it is compatible with the requirement queried for a given buffer. We also check if a given memory type supports our additional, requested properties, for example, whether a given memory type is host-visible. When we find a match, we fill in a VkMemoryAllocateInfo structure and call a vkAllocateMemory() function.

The allocated memory object is then bound to our buffer, and from now on we can safely use this buffer in our application.

Vertex Buffer Creation

The first buffer we want to create is a vertex buffer. It stores data for vertex attributes that are used during rendering. In this example we store position and color for four vertices of a quad. The most important change from the previous tutorial is the use of a device-local memory instead of a host-visible memory. Device-local memory is much faster, but we can’t copy any data directly from the application to device-local memory. We need to use a staging buffer, from which we copy data to the vertex buffer.

We also need to specify two different usages for this buffer. The first is a vertex buffer usage, which means that we want to use the given buffer as a vertex buffer from which data for the vertex attributes will be fetched. The second is transfer dst usage, which means that we will copy data to this buffer. It will be used as a destination of any transfer (copy) operation.

The code that creates a buffer with all these requirements looks like this:

const std::vector<float>& vertex_data = GetVertexData();

Vulkan.VertexBuffer.Size = static_cast<uint32_t>(vertex_data.size() * sizeof(vertex_data[0]));
if( !CreateBuffer( VK_BUFFER_USAGE_VERTEX_BUFFER_BIT | VK_BUFFER_USAGE_TRANSFER_DST_BIT, VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT, Vulkan.VertexBuffer ) ) {
  std::cout << "Could not create vertex buffer!"<< std::endl;
  return false;
}

return true;

4.Tutorial05.cpp, function CreateVertexBuffer()

At the beginning we get the vertex data (hard-coded in a GetVertexData() function) to check how much space we need to hold values for all our vertices. After that we call a CreateBuffer() function presented earlier to create a vertex buffer and bind a device-local memory to it.

Staging Buffer Creation

Next we can create an intermediate staging buffer. This buffer is not used during the rendering process so it can be bound to a slower, host-visible memory. This way we can map it and copy data directly from the application. After that we can copy data from the staging buffer to any other buffer (or even image) that is bound to device-local memory. This way all resources that are used for rendering purposes are bound to the fastest available memory. We just need additional operations for the data transfer.

Here is a code that creates a staging buffer:

Vulkan.StagingBuffer.Size = 4000;
if( !CreateBuffer( VK_BUFFER_USAGE_TRANSFER_SRC_BIT, VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT, Vulkan.StagingBuffer ) ) {
  std::cout << "Could not create staging buffer!"<< std::endl;
  return false;
}

return true;

5.Tutorial05.cpp, function CreateStagingBuffer()

We will copy data from this buffer to other resources, so we must specify the transfer src usage for it (it will be used as a source for transfer operations). We would also like to map it, to be able to copy data directly from the application; for this we need to use host-visible memory, which is why we specify this memory property. The buffer's size is chosen arbitrarily, but it should be large enough to hold the vertex data. In real-life scenarios we should try to reuse the staging buffer as many times as possible, so its size should be big enough to cover most of the data transfer operations in our application. Of course, if we want to do many transfer operations at the same time, we have to create multiple staging buffers.

Copying Data between Buffers

We have created two buffers: one for the vertex attributes data and the other to act as an intermediate buffer. Now we need to copy data from the CPU to the GPU. To do this we need to map the staging buffer and acquire a pointer that we can use to upload data to the graphics hardware’s memory. After that we need to record and submit a command buffer that will copy the vertex data from the staging buffer to the vertex buffer. And as all of our command buffers used for virtual frames and rendering are marked as short lived, we can safely use one of them for this operation.

First let’s see what our data for vertex attributes looks like:

static const std::vector<float> vertex_data = {
  -0.7f, -0.7f, 0.0f, 1.0f,
  1.0f, 0.0f, 0.0f, 0.0f,
  //
  -0.7f, 0.7f, 0.0f, 1.0f,
  0.0f, 1.0f, 0.0f, 0.0f,
  //
  0.7f, -0.7f, 0.0f, 1.0f,
  0.0f, 0.0f, 1.0f, 0.0f,
  //
  0.7f, 0.7f, 0.0f, 1.0f,
  0.3f, 0.3f, 0.3f, 0.0f
};

return vertex_data;

6.Tutorial05.cpp, function GetVertexData()

It is a simple, hard-coded array of floating point values. Data for each vertex contains four components for position attribute and four components for color attribute. As we render a quad, we have four pairs of such attributes.

Here is the code that copies data from the application to the staging buffer and after that from the staging buffer to the vertex buffer:

// Prepare data in a staging buffer
const std::vector<float>& vertex_data = GetVertexData();

void *staging_buffer_memory_pointer;
if( vkMapMemory( GetDevice(), Vulkan.StagingBuffer.Memory, 0, Vulkan.VertexBuffer.Size, 0, &staging_buffer_memory_pointer) != VK_SUCCESS ) {
  std::cout << "Could not map memory and upload data to a staging buffer!"<< std::endl;
  return false;
}

memcpy( staging_buffer_memory_pointer, &vertex_data[0], Vulkan.VertexBuffer.Size );

VkMappedMemoryRange flush_range = {
  VK_STRUCTURE_TYPE_MAPPED_MEMORY_RANGE,            // VkStructureType                        sType
  nullptr,                                          // const void                            *pNext
  Vulkan.StagingBuffer.Memory,                      // VkDeviceMemory                         memory
  0,                                                // VkDeviceSize                           offset
  Vulkan.VertexBuffer.Size                          // VkDeviceSize                           size
};
vkFlushMappedMemoryRanges( GetDevice(), 1, &flush_range );

vkUnmapMemory( GetDevice(), Vulkan.StagingBuffer.Memory );

// Prepare command buffer to copy data from staging buffer to a vertex buffer
VkCommandBufferBeginInfo command_buffer_begin_info = {
  VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO,      // VkStructureType                        sType
  nullptr,                                          // const void                            *pNext
  VK_COMMAND_BUFFER_USAGE_ONE_TIME_SUBMIT_BIT,      // VkCommandBufferUsageFlags              flags
  nullptr                                           // const VkCommandBufferInheritanceInfo  *pInheritanceInfo
};

VkCommandBuffer command_buffer = Vulkan.RenderingResources[0].CommandBuffer;

vkBeginCommandBuffer( command_buffer, &command_buffer_begin_info);

VkBufferCopy buffer_copy_info = {
  0,                                                // VkDeviceSize                           srcOffset
  0,                                                // VkDeviceSize                           dstOffset
  Vulkan.VertexBuffer.Size                          // VkDeviceSize                           size
};
vkCmdCopyBuffer( command_buffer, Vulkan.StagingBuffer.Handle, Vulkan.VertexBuffer.Handle, 1, &buffer_copy_info );

VkBufferMemoryBarrier buffer_memory_barrier = {
  VK_STRUCTURE_TYPE_BUFFER_MEMORY_BARRIER,          // VkStructureType                        sType;
  nullptr,                                          // const void                            *pNext
  VK_ACCESS_MEMORY_WRITE_BIT,                       // VkAccessFlags                          srcAccessMask
  VK_ACCESS_VERTEX_ATTRIBUTE_READ_BIT,              // VkAccessFlags                          dstAccessMask
  VK_QUEUE_FAMILY_IGNORED,                          // uint32_t                               srcQueueFamilyIndex
  VK_QUEUE_FAMILY_IGNORED,                          // uint32_t                               dstQueueFamilyIndex
  Vulkan.VertexBuffer.Handle,                       // VkBuffer                               buffer
  0,                                                // VkDeviceSize                           offset
  VK_WHOLE_SIZE                                     // VkDeviceSize                           size
};
vkCmdPipelineBarrier( command_buffer, VK_PIPELINE_STAGE_TRANSFER_BIT, VK_PIPELINE_STAGE_VERTEX_INPUT_BIT, 0, 0, nullptr, 1, &buffer_memory_barrier, 0, nullptr );

vkEndCommandBuffer( command_buffer );

// Submit command buffer and copy data from staging buffer to a vertex buffer
VkSubmitInfo submit_info = {
  VK_STRUCTURE_TYPE_SUBMIT_INFO,                    // VkStructureType                        sType
  nullptr,                                          // const void                            *pNext
  0,                                                // uint32_t                               waitSemaphoreCount
  nullptr,                                          // const VkSemaphore                     *pWaitSemaphores
  nullptr,                                          // const VkPipelineStageFlags            *pWaitDstStageMask;
  1,                                                // uint32_t                               commandBufferCount
  &command_buffer,                                  // const VkCommandBuffer                 *pCommandBuffers
  0,                                                // uint32_t                               signalSemaphoreCount
  nullptr                                           // const VkSemaphore                     *pSignalSemaphores
};

if( vkQueueSubmit( GetGraphicsQueue().Handle, 1, &submit_info, VK_NULL_HANDLE ) != VK_SUCCESS ) {
  return false;
}

vkDeviceWaitIdle( GetDevice() );

return true;

7.Tutorial05.cpp, function CopyVertexData()

At the beginning, we get vertex data and map the staging buffer's memory by calling the vkMapMemory() function. During the call, we specify the handle of the memory that is bound to the staging buffer and the size of the data we want to map (here, the size of the vertex data). This gives us a pointer that we can use in an ordinary memcpy() function to copy data from our application to graphics hardware.

Next we flush the mapped memory to tell the driver which parts of the memory object were modified. We can specify multiple ranges of memory if needed; here we have one memory area that should be flushed, so we describe it with a variable of type VkMappedMemoryRange and call the vkFlushMappedMemoryRanges() function. After that we unmap the memory, although we don't have to: we could keep the pointer for later use, and this should not affect the performance of our application.
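
For reference, here is a minimal sketch of that map/copy/flush/unmap sequence, assuming the GetDevice() helper and the Vulkan.StagingBuffer/Vulkan.VertexBuffer fields used in the listing above, with vertex_data standing in for the application's vertex array:

// Map the staging buffer's memory to get a CPU-visible pointer
void *staging_buffer_memory_pointer;
if( vkMapMemory( GetDevice(), Vulkan.StagingBuffer.Memory, 0, Vulkan.VertexBuffer.Size, 0, &staging_buffer_memory_pointer ) != VK_SUCCESS ) {
  return false;
}

// Copy the application's data into the staging buffer
memcpy( staging_buffer_memory_pointer, vertex_data, Vulkan.VertexBuffer.Size );

// Tell the driver which range of the memory object was modified
VkMappedMemoryRange flush_range = {
  VK_STRUCTURE_TYPE_MAPPED_MEMORY_RANGE,            // VkStructureType                        sType
  nullptr,                                          // const void                            *pNext
  Vulkan.StagingBuffer.Memory,                      // VkDeviceMemory                         memory
  0,                                                // VkDeviceSize                           offset
  Vulkan.VertexBuffer.Size                          // VkDeviceSize                           size
};
vkFlushMappedMemoryRanges( GetDevice(), 1, &flush_range );

// Optional: we could keep the pointer instead of unmapping
vkUnmapMemory( GetDevice(), Vulkan.StagingBuffer.Memory );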

Next we start preparing the command buffer. We fill a VkCommandBufferBeginInfo structure, specifying that the buffer will be submitted only once before being reset, and provide it to the vkBeginCommandBuffer() function.

Now we perform the copy operation. First a variable of type VkBufferCopy is created. It contains the following fields:

  • srcOffset – Offset in bytes in a source buffer from which we want to copy data.
  • dstOffset – Offset in bytes in a destination buffer into which we want to copy data.
  • size – Size of the data (in bytes) we want to copy.

We copy data from the beginning of the staging buffer to the beginning of the vertex buffer, so we specify zero for both offsets. The size of the vertex buffer was calculated from the hard-coded vertex data, so we copy the same number of bytes. To copy data from one buffer to another, we call the vkCmdCopyBuffer() function.

Setting a Buffer Memory Barrier

We have recorded a copy operation, but that's not all. From now on we will not use the buffer as a target for transfer operations but as a vertex buffer. We need to tell the driver that the type of the buffer's memory access will change: from now on it will serve as a source of data for vertex attributes. To do this we set a memory barrier, similarly to what we did earlier for swapchain images.

First we prepare a variable of type VkBufferMemoryBarrier, which contains the following members:

  • sType – Standard structure type, here set to VK_STRUCTURE_TYPE_BUFFER_MEMORY_BARRIER.
  • pNext – Parameter reserved for extensions.
  • srcAccessMask – Types of memory operations that were performed on this buffer before the barrier.
  • dstAccessMask – Memory operations that will be performed on a given buffer after the barrier.
  • srcQueueFamilyIndex – Index of a queue family that accessed the buffer before.
  • dstQueueFamilyIndex – Queue family that will access the buffer from now on.
  • buffer – Handle to the buffer for which we set up a barrier.
  • offset – Memory offset from the start of the buffer (from the memory’s base offset bound to the buffer).
  • size – Size of the buffer’s memory area for which we want to setup a barrier.

As you can see, we can set up a barrier for a specific range of a buffer's memory. But here we do it for the whole buffer, so we specify an offset of 0 and the special VK_WHOLE_SIZE value for the size. We don't want to transfer ownership between different queue families, so we use the VK_QUEUE_FAMILY_IGNORED value for both srcQueueFamilyIndex and dstQueueFamilyIndex.

The most important parameters are srcAccessMask and dstAccessMask. We have copied data from the staging buffer to the vertex buffer, so before the barrier the vertex buffer was used as a destination for transfer operations and its memory was written to. That's why we specify VK_ACCESS_MEMORY_WRITE_BIT for the srcAccessMask field. After the barrier, the buffer will be used only as a source of data for vertex attributes, so for the dstAccessMask field we specify VK_ACCESS_VERTEX_ATTRIBUTE_READ_BIT.

To set up the barrier we call the vkCmdPipelineBarrier() function, and to finish command buffer recording we call vkEndCommandBuffer(). Next, for all of the above operations to execute, we submit the command buffer by calling the vkQueueSubmit() function.

Normally during command buffer submission we would provide a fence, which is signaled once all transfer operations (and the whole command buffer) are finished. But here, for the sake of simplicity, we call vkDeviceWaitIdle() and wait for all operations executed on the device to finish. Once they complete, we have successfully transferred the data to the device-local memory and can use the vertex buffer without worrying about performance loss.
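
Where the fence-based variant is preferred, a minimal sketch might look like the following, reusing the submit_info from the listing above; the transfer_fence name is illustrative and not part of the tutorial's code:

// Create an unsignaled fence for this submission
VkFenceCreateInfo fence_create_info = {
  VK_STRUCTURE_TYPE_FENCE_CREATE_INFO,              // VkStructureType                        sType
  nullptr,                                          // const void                            *pNext
  0                                                 // VkFenceCreateFlags                     flags
};

VkFence transfer_fence;
if( vkCreateFence( GetDevice(), &fence_create_info, nullptr, &transfer_fence ) != VK_SUCCESS ) {
  return false;
}

// Pass the fence instead of VK_NULL_HANDLE when submitting
if( vkQueueSubmit( GetGraphicsQueue().Handle, 1, &submit_info, transfer_fence ) != VK_SUCCESS ) {
  return false;
}

// Wait only for this submission (timeout in nanoseconds), not the whole device
vkWaitForFences( GetDevice(), 1, &transfer_fence, VK_TRUE, 1000000000 );
vkDestroyFence( GetDevice(), transfer_fence, nullptr );

This keeps the wait scoped to the transfer itself instead of draining every queue on the device.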

Tutorial05 Execution

The results of the rendering operations are exactly the same as in Part 4: we render a quad with a different color in each corner: red, green, dark gray, and blue. The quad adjusts its size (and aspect) to match the window's size and shape.

Cleaning Up

In this part of the tutorial, I have also refactored the cleaning code. We have created two buffers, each with a separate memory object. To avoid code redundancy, I prepared a buffer cleaning function:

if( buffer.Handle != VK_NULL_HANDLE ) {
  vkDestroyBuffer( GetDevice(), buffer.Handle, nullptr );
  buffer.Handle = VK_NULL_HANDLE;
}

if( buffer.Memory != VK_NULL_HANDLE ) {
  vkFreeMemory( GetDevice(), buffer.Memory, nullptr );
  buffer.Memory = VK_NULL_HANDLE;
}

8. Tutorial05.cpp, function DestroyBuffer()

This function checks whether a given buffer was successfully created, and if so calls the vkDestroyBuffer() function. It also frees the memory associated with the buffer through a vkFreeMemory() call. The DestroyBuffer() function is called in the destructor, which also releases all other resources relevant to this part of the tutorial:

if( GetDevice() != VK_NULL_HANDLE ) {
  vkDeviceWaitIdle( GetDevice() );

  DestroyBuffer( Vulkan.VertexBuffer );

  DestroyBuffer( Vulkan.StagingBuffer );

  if( Vulkan.GraphicsPipeline != VK_NULL_HANDLE ) {
    vkDestroyPipeline( GetDevice(), Vulkan.GraphicsPipeline, nullptr );
    Vulkan.GraphicsPipeline = VK_NULL_HANDLE;
  }

  if( Vulkan.RenderPass != VK_NULL_HANDLE ) {
    vkDestroyRenderPass( GetDevice(), Vulkan.RenderPass, nullptr );
    Vulkan.RenderPass = VK_NULL_HANDLE;
  }

  for( size_t i = 0; i < Vulkan.RenderingResources.size(); ++i ) {
    if( Vulkan.RenderingResources[i].Framebuffer != VK_NULL_HANDLE ) {
      vkDestroyFramebuffer( GetDevice(), Vulkan.RenderingResources[i].Framebuffer, nullptr );
    }
    if( Vulkan.RenderingResources[i].CommandBuffer != VK_NULL_HANDLE ) {
      vkFreeCommandBuffers( GetDevice(), Vulkan.CommandPool, 1, &Vulkan.RenderingResources[i].CommandBuffer );
    }
    if( Vulkan.RenderingResources[i].ImageAvailableSemaphore != VK_NULL_HANDLE ) {
      vkDestroySemaphore( GetDevice(), Vulkan.RenderingResources[i].ImageAvailableSemaphore, nullptr );
    }
    if( Vulkan.RenderingResources[i].FinishedRenderingSemaphore != VK_NULL_HANDLE ) {
      vkDestroySemaphore( GetDevice(), Vulkan.RenderingResources[i].FinishedRenderingSemaphore, nullptr );
    }
    if( Vulkan.RenderingResources[i].Fence != VK_NULL_HANDLE ) {
      vkDestroyFence( GetDevice(), Vulkan.RenderingResources[i].Fence, nullptr );
    }
  }

  if( Vulkan.CommandPool != VK_NULL_HANDLE ) {
    vkDestroyCommandPool( GetDevice(), Vulkan.CommandPool, nullptr );
    Vulkan.CommandPool = VK_NULL_HANDLE;
  }
}

9. Tutorial05.cpp, destructor

First we wait for all operations performed by the device to finish. Next we destroy the vertex and staging buffers. After that we destroy all other resources in the order opposite to their creation: the graphics pipeline, the render pass, and the resources for each virtual frame, which consist of a framebuffer, a command buffer, two semaphores, and a fence. Finally we destroy the command pool from which the command buffers were allocated.

Conclusion

In this tutorial we used the recommended technique for transferring data from the application to the graphics hardware. It gives the best performance for the resources involved in the rendering process while retaining the ability to map and copy data from the application into the staging buffer. The only extra work is recording and submitting an additional command buffer that transfers data from one buffer to another.

Using staging buffers is recommended for more than just copying data between buffers; we can use the same approach to copy data from a buffer to images. The next part of the tutorial will show how to do this by presenting descriptors, descriptor sets, and descriptor layouts, which are another big part of the Vulkan API.

OpenGL* Performance Tips: Avoid OpenGL Calls that Synchronize CPU and GPU


Introduction

To get the highest level of performance from OpenGL*, you want to avoid calls that force synchronization between the CPU and the GPU. This article covers several of those calls and describes ways to avoid using them. It is accompanied by a C++ example application that shows the effect of some of these calls on rendering performance. While this article refers to graphical game development, the concepts apply to all applications that use OpenGL 4.3 and higher. The sample code is written in C++ and is designed for Windows* 8.1 and Windows* 10 devices.

Requirements

The following are required to build and run the example application:

  • A computer with a 6th generation Intel® Core™ processor (code-named Skylake)
  • OpenGL 4.3 or higher
  • Microsoft Visual Studio* 2013 or newer

Avoid OpenGL Calls that Synchronize CPU and GPU

OpenGL contains a variety of calls that force synchronization between the CPU and the GPU, stalling the CPU until the GPU has completed its work. Some of them, such as glFinish(), synchronize implicitly; others use sync objects, which are explicitly designed to coordinate activity between the GPU and the application. Either way, the stall hurts overall performance, so avoid these calls whenever possible.

The OpenGL wiki describes sync objects at https://www.opengl.org/wiki/Sync_Object, but here is a summary of ways to avoid this issue:

  • Avoid glReadPixels() or glFinish(), which force synchronization between the CPU and GPU. If you need to use glReadPixels(), do so in conjunction with Pixel Buffer Objects (a sketch of this pattern appears at the end of this section).
  • Use glFlush() with caution; if you must synchronize between contexts, use Sync Objects instead.
  • Avoid updating resources that are used by the GPU. It is better to create static resources when the application starts and not modify them later. Whenever possible, create vertex buffer objects as static (GL_STATIC_DRAW).
  • In particular, do not call glBufferSubData()/glTexImage*() if there are queued commands that access a given VBO/texture; limit the chances of simultaneous read/write access to resources.
  • Use immutable versions of API calls to create buffers and textures. For example, use API calls like glBufferStorage() and glTexStorage*().
  • Update buffers without GPU/CPU synchronization issues by creating a pool of bigger buffers with glBufferStorage() and permanently mapping them with glMapBufferRange() and the GL_MAP_PERSISTENT_BIT flag (glMapBuffer() cannot create a persistent mapping). The application can then iterate over the individual buffers with increasing offsets, providing new chunks of data.
  • Use glBindBufferRange() for uniform buffer objects to bind new chunks of data at the current offset. For vertex buffer objects, access newly copied chunks of data with the first parameter (for glDrawArrays()) or the indices/baseVertex parameters (for glDrawElementsBaseVertex()). Increase the initial number of buffers in the pool if the oldest buffer submitted for GPU consumption is still in use. Monitor the progress of the GPU by guarding the buffers with Sync Objects (see the sketch after this list).
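
Below is a minimal sketch of the buffer-pool technique from the last two items. It assumes an OpenGL 4.4 context (or the ARB_buffer_storage extension), and names such as kChunkSize, frame_index, and vertex_data are illustrative rather than part of this sample's code:

// Pool of chunks inside one immutable, persistently mapped buffer
const GLsizeiptr kChunkSize  = 64 * 1024;
const GLsizei    kChunkCount = 3;   // triple-buffer so CPU writes never wait on GPU reads

GLuint vbo;
glGenBuffers( 1, &vbo );
glBindBuffer( GL_ARRAY_BUFFER, vbo );

// Immutable storage that may stay mapped while the GPU reads from it
const GLbitfield map_flags = GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT | GL_MAP_COHERENT_BIT;
glBufferStorage( GL_ARRAY_BUFFER, kChunkSize * kChunkCount, nullptr, map_flags );
char *mapped = static_cast<char*>(
    glMapBufferRange( GL_ARRAY_BUFFER, 0, kChunkSize * kChunkCount, map_flags ) );

// Each frame: write new data into the next chunk at an increasing offset, then
// source the draw (or glBindBufferRange for UBOs) from that offset
GLintptr write_offset = ( frame_index % kChunkCount ) * kChunkSize;
memcpy( mapped + write_offset, vertex_data, vertex_data_size );

Triple buffering is a common pool size because it lets the CPU fill one chunk while the GPU consumes another, with a spare in between; Sync Objects can confirm the oldest chunk is free before reusing it.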

The example application demonstrates the effects of three different OpenGL calls that cause the CPU and GPU to synchronize: glReadPixels(), glFlush(), and glFinish(). These are compared against unsynchronized rendering. The current performance for each approach is displayed in a console window in milliseconds-per-frame and frames-per-second. Pressing the spacebar cycles between the methods so you can compare the effects; when switching, the application animates the image as a visual indicator of the change.
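
The first item in the list above recommends pairing glReadPixels() with Pixel Buffer Objects. Here is a minimal sketch of that asynchronous readback pattern, with width and height as illustrative placeholders:

// Create a pack PBO large enough for one RGBA frame
GLuint pbo;
glGenBuffers( 1, &pbo );
glBindBuffer( GL_PIXEL_PACK_BUFFER, pbo );
glBufferData( GL_PIXEL_PACK_BUFFER, width * height * 4, nullptr, GL_STREAM_READ );

// With a pack PBO bound, glReadPixels only enqueues a GPU-side copy and
// returns immediately instead of stalling the CPU
glReadPixels( 0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, nullptr );

// A frame or two later the transfer is usually complete; map without stalling
const void *pixels = glMapBuffer( GL_PIXEL_PACK_BUFFER, GL_READ_ONLY );
if( pixels ) {
    // ... consume the pixel data ...
    glUnmapBuffer( GL_PIXEL_PACK_BUFFER );
}
glBindBuffer( GL_PIXEL_PACK_BUFFER, 0 );

Delaying the map by a frame is what hides the transfer latency; mapping immediately after the read would reintroduce the stall.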

Intel Skylake Processor Graphics

6th generation Intel® Core™ processors provide superior two- and three-dimensional graphics performance, reaching up to 1152 GFLOPS. Their multicore architecture improves performance and increases the number of instructions per clock cycle.

The 6th generation Intel Core processors offer a number of all-new benefits over previous generations and provide significant boosts to overall computing horsepower and visual performance. Sample enhancements include a GPU that, coupled with the CPU's added computing muscle, provides up to 40 percent better graphics performance over prior Intel® Processor Graphics. 6th generation Intel Core processors have been redesigned to offer higher-fidelity visual output, higher-resolution video playback, and more seamless responsiveness for systems with lower power usage. With support for 4K video playback and extended overclocking, it is ideal for game developers.

GPU memory access includes atomic min, max, and compare-and-exchange for 32-bit floating-point values in either shared local memory or global memory. The new architecture also offers a performance improvement for back-to-back atomics to the same address. Tiled resources include support for large, partially resident (sparse) textures and buffers. Reading unmapped tiles returns zero, and writes to them are discarded. There are also new shader instructions for clamping LOD and obtaining operation status. There is now support for larger texture and buffer sizes. For example, you can use up to 128k × 128k × 8B mipmapped 2D textures.

Bindless resources increase the number of dynamic resources a shader may use, from about 256 to 2,000,000 when supported by the graphics API. This change reduces the overhead associated with updating binding tables and provides more flexibility to programmers.

Execution units (EUs) have improved native 16-bit floating-point support as well. This enhanced floating-point support leads to both power and performance benefits when using half precision.

Display features further offer multiplane overlay options with hardware support to scale, convert, color correct, and composite multiple surfaces at display time. Surfaces can additionally come from separate swap chains using different update frequencies and resolutions (for example, full-resolution GUI elements composited on top of up-scaled, lower-resolution frame renders) to provide significant enhancements.

Its architecture supports GPUs with up to three slices (providing 72 EUs). This architecture also offers increased power gating and clock domain flexibility, creating a powerful game delivery system.

Building and Running the Application

Follow these steps to compile and run the example application.

  1. Download the ZIP file containing the source code for the example application, and then unpack it into a working directory.
  2. Open the lesson6_gpuCpuSynchronization/lesson6.sln file by double-clicking it to start Microsoft Visual Studio 2013.
  3. Select <Build>/<Build Solution> to build the application.
  4. Upon successful build you can run the example from within Visual Studio.

Once the application is running, a main window opens and you will see an image. The console window shows which method was used to render it, along with the current milliseconds-per-frame and frames-per-second. Pressing the spacebar cycles between the methods so you can compare the performance difference. Pressing ESC exits the application.

Code Highlights

The application exercises three calls that force synchronization, as well as the unsynchronized approach. The various combinations are stored in an array created during the initialization phase.

// Array of structures, one item for each option we're testing
#define I(x) { options::x, #x }
struct options {
    enum { NONE, READPIXELS, FLUSH, FINISH, nOPTS } option;
    const char* optionStr;
} options[]
{
    I(NONE),
    I(READPIXELS),
    I(FLUSH),
    I(FINISH),
};

To test this, the application creates a vertex and a fragment shader, and loads a texture into VRAM.

// compile and link the shaders into a program, make it active
    vShader = compileShader(vertexShader, GL_VERTEX_SHADER);
    fShader = compileShader(fragmentShader, GL_FRAGMENT_SHADER);
    program = createProgram({ vShader, fShader });
    offset = glGetUniformLocation(program, "offset");                            GLCHK;
    texUnit = glGetUniformLocation(program, "texUnit");                          GLCHK;
    glUseProgram(program);                                                       GLCHK;

    // configure texture unit
    glActiveTexture(GL_TEXTURE0);                                                GLCHK;
    glUniform1i(texUnit, 0);                                                     GLCHK;

    // create and configure the textures
    glGenTextures(1, &texture);                                                  GLCHK;
    glBindTexture(GL_TEXTURE_2D, texture);                                       GLCHK;
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);                GLCHK;
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);                GLCHK;
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);           GLCHK;
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);           GLCHK;

    // load texture image
    GLuint w, h;
    std::vector<GLubyte> img;
    if (lodepng::decode(img, w, h, "sample.png"))
        __debugbreak();

    // upload the image to vram
    glBindTexture(GL_TEXTURE_2D, texture);                                       GLCHK;
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0, GL_RGBA,
                 GL_UNSIGNED_BYTE, &img[0]);                                     GLCHK;

Called once for each screen refresh, the display() method first checks whether we are switching between methods (that is, animating). If not, it uses the option currently selected in the options array; switching from method to method walks through this array.

void display()
{
    // attributeless rendering
    glClear(GL_COLOR_BUFFER_BIT);                                               GLCHK;
    glBindTexture(GL_TEXTURE_2D, texture);                                      GLCHK;
    if (animating) {
        glUniform1f(offset, animation);                                         GLCHK;
    } else {
        glUniform1f(offset, 0.f);                                               GLCHK;
    }
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);                                      GLCHK;
    if (!animating)
    switch (options[selector].option) {
    case options::NONE:       break;
    case options::READPIXELS: glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE,&buffer[0]);  GLCHK;  break;
    case options::FLUSH:      glFlush();                                                        GLCHK;  break;
    case options::FINISH:     glFinish();                                                       GLCHK;  break;
    }
    glutSwapBuffers();
}

Each time a video frame is drawn, the performance output is updated in the console and the application checks whether the spacebar or ESC has been pressed. Pressing the spacebar causes the application to move through the non-synchronizing and synchronizing calls; pressing ESC exits the application. When switching, the performance measurements are reset and the image animates as a visual indicator that something changed. If no key was pressed, the next frame is rendered.

// GLUT idle function.  Called once per video frame.  Calculate and print timing
// reports and handle console input.
void idle()
{
    // Calculate performance
    static unsigned __int64 skip;
    if (++skip < 512) return;
    static unsigned __int64 start;
    if (!start && !QueryPerformanceCounter((PLARGE_INTEGER)&start))
        __debugbreak();
    unsigned __int64 now;
    if (!QueryPerformanceCounter((PLARGE_INTEGER)&now))
        __debugbreak();
    unsigned __int64 us = elapsedUS(now, start), sec = us / 1000000;
    static unsigned __int64 animationStart;
    static unsigned __int64 cnt;
    ++cnt;

    // We're either animating
    if (animating)
    {
        float sec = elapsedUS(now, animationStart) / 1000000.f;
        if (sec < 1.f) {
            animation = (sec < 0.5f ? sec : 1.f - sec) / 0.5f;
        }
        else {
            animating = false;
            selector = (selector + 1) % options::nOPTS; skip = 0;
            cnt = start = 0;
            print();
        }
    }

    // Or measuring
    else if (sec >= 2)
    {
        printf("frames rendered = %I64u, uS = %I64u, fps = %f,
               milliseconds-per-frame = %fn", cnt, us, cnt * 1000000. / us,
               us / (cnt * 1000.));
        if (swap) {
            animating = true; animationStart = now; swap = false;
        } else {
            cnt = start = 0;
        }
    }

    // Get input from the console too.
    HANDLE h = GetStdHandle(STD_INPUT_HANDLE); INPUT_RECORD r[128]; DWORD n;
    if (PeekConsoleInput(h, r, 128, &n) && n)
        if (ReadConsoleInput(h, r, n, &n))
            for (DWORD i = 0; i < n; ++i)
                if (r[i].EventType == KEY_EVENT && r[i].Event.KeyEvent.bKeyDown)
                    keyboard(r[i].Event.KeyEvent.uChar.AsciiChar, 0, 0);

    // Ask for another frame
    glutPostRedisplay();
}

Closing

Depending upon the game you are developing, it may not be possible to avoid calls that cause synchronization between the CPU and the GPU, especially if your application needs to interact with the pixels on the screen in some fashion or synchronize between different contexts. In general, it is best to avoid synchronization to get the most performance out of your system. This article has covered some of the calls that cause synchronization and suggested alternative approaches.

By combining this technique with the advantages of the 6th generation Intel Core processors, graphic game developers can ensure their games perform the way they were designed.

References

An Overview of the 6th generation Intel® Core™ processor (code-named Skylake)

Graphics API Developer’s Guide for 6th Generation Intel® Core™ Processors

About the Author

Praveen Kundurthy works in the Intel® Software and Services Group. He has a master’s degree in Computer Engineering. His main interests are mobile technologies, Microsoft Windows, and game development.

Code Sample: Access Control in JavaScript* for Intel® Joule™ development board


This code sample illustrates creating a simple alarm system using the development platform and an assortment of extensible sensors.

Once completed, the system displays the alarm status on the connected display. Users can also interact with the alarm via a web interface that allows them to enable or disable the alarm, as well as examine stored alarm data.

Source files and documentation are located on GitHub: https://github.com/intel-iot-devkit/joule-code-samples/tree/master/access-control-js

Code Sample: Doorbell in JavaScript* for Intel® Joule™ development platform

Code Sample: Exploring C++ on the Intel® Joule™ development platform


Code Sample: Earthquake Detector in JavaScript* for Intel® Joule™ development platform

Code Sample: Bluetooth* LE station in JavaScript* with Intel® Joule™ development platform


This code sample measures weather data using a development platform along with extensible sensors. Accelerometer, temperature, and humidity data are recorded and stored to the IBM* Bluemix* IoT cloud using a TI SensorTag.

Once set up, users can visualize the data using a web-based interface.

Source files and documentation are located on GitHub: https://github.com/intel-iot-devkit/joule-code-samples/tree/master/ble-scan-js

IBM* Bluemix Cloud* Quickstart in JavaScript for the Intel® Joule™ development platform

What is the Intel® Joule™ Module?


Intel presents its newest offering to the hardware-hacking, thing-making, and general hobbyist markets: the Intel® Joule™ module. With a quad-core Intel® Atom™ processor running at 1.7 GHz, 4 GB of PoP LPDDR4 RAM, Intel® HD Graphics, 5.0 GHz and 2.4 GHz Wi-Fi*, and Bluetooth* LE support, the Intel® Joule™ module is easily one of the most powerful innovator-focused system-on-module (SoM) products to date.

The Intel® Joule™ module, on its expansion board, is capable of booting from USB drives, microSD* flash memory cards, and its own internal eMMC storage. It runs a Reference Linux* OS for IoT by default, and Intel has preinstalled many common development tools and software packages, including Java* and Python*, as well as the Libmraa* and UPM libraries. It is capable of running other desktop operating systems (OSs) with ease, with support for Windows 10 IoT Core* and Ubuntu Core (Snappy)* coming later this year.

I’ve personally been using the Intel® Joule™ module for a couple of months and have been working alongside a team of other individuals to develop a series of C++ code samples. In this time, I’ve become well acquainted with its features. The Intel® Joule™ module is a very innovative platform, and I’ve been very impressed with it so far. Now that it has been properly released, I’m excited to see how it expands and impacts the industry.

I foresee the Intel® Joule™ module in everything from land-based robots, to underwater sensor buoys, to remote-controlled drones flying through the air. The possibilities are endless.

Check out our full set of code samples on our repo.

Visit GitHub | Intel® Joule™ Module Resources

Comparing the Intel® Joule™ Module and the Intel® Edison Module


Overview

The Intel® Joule™ module is the newest addition to a line of powerful, multipurpose development boards from Intel. This small package contains a quad-core Intel® Atom™ processor clocked at an impressive 1.7 GHz, 4 GB of LPDDR4 RAM, a dual-band Wi-Fi* antenna, Bluetooth*, and Intel® HD Graphics. These specs make the Intel® Joule™ module more powerful than any development board previously created by Intel, allowing it to do things that its predecessors, the Intel® Edison platform and Intel® Galileo platform, could not, from advanced computer vision to machine learning.

To get a better feel for the Intel® Joule™ module, let's compare it with the Intel® Edison module. We'll start with a quick hardware comparison before moving on to a software comparison, then a size evaluation, and finally an appraisal of usability.
Hardware

Quick Comparison

Intel® Joule™ Module                      | Intel® Edison Module
------------------------------------------+--------------------------------------
1.5 or 1.7 GHz Intel® Atom™ processor     | 500 MHz Intel® Atom™ processor
3 or 4 GB of LPDDR4 RAM                   | 1 GB of DDR3 RAM
8 or 16 GB of eMMC flash                  | 4 GB of eMMC flash
802.11ac Wi-Fi* with MIMO                 | Dual-band 5.0 GHz and 2.4 GHz Wi-Fi*
Bluetooth*                                | Bluetooth*
Intel® HD Graphics processing unit        | No graphics processing unit

As you can see, the Intel® Joule™ module has an extremely powerful processor for its size: a quad-core processor providing four execution threads. The Intel® Edison module, on the other hand, uses a dual-core Intel® Atom™ processor clocked at 500 MHz, so it offers half the execution threads at less than one third of the clock speed of the Intel® Joule™ module. In practice, this means the Intel® Joule™ module can process more input in a given amount of time. For instance, with a set of sensors taking in information, the Intel® Edison platform might have to forward the readings to a gateway or desktop for proper processing, while an Intel® Joule™ module could process the data locally and send only the results wherever they need to go.

In addition, the Intel® Joule™ module's graphics processor allows it to do several things the Intel® Edison module simply cannot: it can process video and images on board, extracting information from them for use elsewhere, and it provides a graphical user interface, which can make the system easier and faster to use overall.

When it comes to memory, the Intel® Edison module has only 1 GB of DDR3 RAM and 4 GB of eMMC flash memory, which means that only about 2 GB of storage remain after the operating system is installed. This limits how much you can install on the device, as well as how much memory any one program can rely on while running. The Intel® Joule™ module, on the other hand, comes with either 8 or 16 GB of eMMC flash memory, depending on what you need, as well as 4 GB of LPDDR4 RAM. This increased memory means that a single Intel® Joule™ platform can be used in more than one application simultaneously.

All of this means that the Intel® Joule™ module can handle a much larger workload than the Intel® Edison module, and it can handle many smaller workloads at once, even graphics-processing threads. All in all, the Intel® Joule™ module is simply more powerful, able to process more information in a shorter period and to process graphics as well.

Software

The software differences between the Intel® Joule™ platform and the Intel® Edison platform are minimal, because the development environment is similar, with support for both the Intel® XDK and the Intel® System Studio IoT Edition. The Intel® Edison platform uses a version of Linux known as Poky, which gives the device a terminal-only interface and forces the user to work with a Linux terminal to do anything.

In contrast, the Intel® Joule™ platform comes with a reference Linux for IoT. This flavor of Linux conforms a bit more to a "classical" Linux distro: like Linux Mint or Ubuntu, it offers a root-access terminal, which can be output via HDMI or over a serial interface (using a client such as PuTTY). You can use the terminal to get down and dirty with the board and work with all of the Linux commands you know and love.

This flavor also offers a classic desktop in the form of the user-friendly X interface, which allows people who are new to Linux to work with a more entry-level graphical environment. While some interaction with the terminal is still required, it is minimized in this mode.

Physical Size

The Intel® Edison compute module measures a meager 35.5 by 25 millimeters, with a thickness of 2.9 millimeters. This small size allows the Intel® Edison module to be used in many mobile applications, taking its power on the go.

The Intel® Joule™ module is slightly larger at 48 by 24 millimeters, with a height of 3.5 millimeters. This increase in volume of roughly 1,450 cubic millimeters provides the Intel® Joule™ compute module with better temperature and power management, as well as better Wi-Fi* and Bluetooth*. Though the bigger package means it might be harder to build into a wearable, the extra power the compute module packs could prove very useful.

Usability

The Intel® Edison platform is relatively mature, so it can be a very useful tool, and many people are already comfortable working with it; however, getting to that level of comfort presents a steep learning curve. If you don't know how to use a Linux terminal, you might get stuck rather quickly. Even taking into consideration the various tools developed specifically to make the Intel® Edison platform more user-friendly, working knowledge of Linux is required to use it fully.

The Intel® Joule™ platform overcomes that issue by offering more ways to work with the device, including the previously mentioned X interface. Introducing newcomers to the reference Linux distribution through an easy-to-use graphical interface makes the learning curve less steep, while the OS still provides the classical terminal access that someone who knows the Intel® Edison platform might prefer. This makes the Intel® Joule™ platform far easier to learn than the Intel® Edison platform. In addition, support for Windows 10 IoT Core* and Ubuntu Core (Snappy)* is coming later this year.

The Uses for the Intel® Joule™ Platform

The Intel® Joule™ platform packs a more powerful punch in a similarly sized package compared to the Intel® Edison module, but you might be asking, "What can I do with it?" The answer is, almost anything you want. You can create better, faster wearable technology, embed it into manufacturing systems, build drones, or use it for face-recognition applications. The possibilities are endless.

I personally have been working with this platform for two months as of the writing of this article. The power that this development platform can deliver has given me plenty of ideas on what the Intel® Joule™ module can be used for. I can’t wait to see the community that will blossom from this device, and what they will achieve!

If the Intel® Edison platform and its predecessor, the Intel® Galileo platform, have taught us anything, it's that people push these technologies to their upper limits; the Intel® Joule™ module will be no different. It's exciting to think of what people are going to make with this new device, and if its ancestors are anything to go by, we're going to see some impressive stuff.

What will you make?

To give you some ideas to get started, check out our code samples written specifically for the Intel® Joule™ module.

Visit GitHub
