Channel: Intel Developer Zone Articles

Driving Software Simplicity with Wind River Helix* Chassis


Wind River Helix* Cockpit

Cockpit is an open source, Linux* runtime platform based on the Yocto Project*. Compatible with GENIVI* specifications, Cockpit is designed to let you quickly develop and validate rich in-vehicle infotainment (IVI) apps, telematics, and automotive instrument cluster systems. These systems are preintegrated with advanced connectivity and security features.

As a versatile platform for embedded software, Cockpit supports a variety of industry hardware and user-friendly human–machine interface tools. It provides a framework in which to develop complex IVI systems, significantly reducing development time. Cockpit also gives you access to Wind River Helix* App Cloud, a cloud-based software development environment that helps you develop IoT apps when multiple development centers in various locations are involved.

Open Standards-Based Foundation for the In-Vehicle Infotainment Market

Cockpit is ideal for building apps for remote vehicle tracking, automatic roadside assistance, integrated digital cockpit, web radio, or other automotive systems, such as instrument panels or center displays. It gives you a solid foundation on top of which you can add functionality, specific user interfaces (UIs), and other value-added options (Figure 1). The templates, tools, and methods help you create embedded products based on custom Linux-based systems regardless of the underlying hardware architecture. As a result, you have maximum flexibility to use your hardware of choice.

Figure 1. Wind River* Helix* Cockpit architecture

Key features of Cockpit include:

  • Connectivity framework: This framework consists of a UI and connectivity links for external (cloud) and in-vehicle communications. Optional connectivity components include iPod*, Apple* CarPlay*, MirrorLink*, Google AAP, Wi-Fi*, and Bluetooth*.
  • Firmware and software over-the-air (OTA) management: Cockpit supports OTA management to wirelessly manage and update both firmware and software throughout the product life cycle.
  • Flexible platform: Cockpit supports multiple hardware and board support packages (BSPs). This comprehensive, commercially supported development toolchain includes runtime observation, a debugger, memory profiling, a BitBake build system, and Wind River Workbench.
  • Long-term support: The secure Linux base allows future extensions and updates to keep up with the evolution in IoT technologies, protocols, and product offerings.
  • Built-in security: Security Profile for Wind River Linux along with secure boot, device authentication, and other runtime security checks guarantee secure data handling.

Figure 2. Wind River* Helix* Chassis layout diagram

Wind River Helix Drive

Drive is built on VxWorks* 653 3.0 Multi-core Edition, a widely deployed real-time operating system (RTOS) in industrial, defense, automotive, aerospace, and other safety- and security-critical applications. With VxWorks 653 Multi-core Edition, you get improved performance, scalability, and a future-proof operating environment for the IoT. Drive provides you with an International Organization for Standardization (ISO) 26262–certified platform for safety-critical automotive applications, from piloted and highly automated driving to advanced driver assistance systems (ADAS). Let’s look at some of the key benefits of using Drive.

Safety

For ADAS and autonomous driving functions, the standards for safety are stringent. Drive allows you to develop and integrate multiple apps with different safety criticality into a single hardware platform. Separation and isolation of these applications are maintained in compliance with ISO 26262.

Drive supports the latest multi-core processors and provides robust partitioning that enables DO-178C certification. It also gives you the following benefits:

  • The multicore-enabled scheduler can support a variety of guest operating systems. Apps can run in parallel, so effective compute capacity is increased. Space and time partitioning for each core is also ensured.
  • The two-level virtual machine architecture significantly improves performance and lowers jitter.
  • The number of partitions can scale up to 255.
  • Drive can support multiple safety levels simultaneously.

Streamlined AUTomotive Open System ARchitecture Software Component Integration

Drive conforms to AUTomotive Open System ARchitecture (AUTOSAR) development methodologies for software modules. It supports standardized connectivity and functional interfaces to other automotive software components, enabling faster integration and interoperability.

Robust Security for the Connected Car

Safety- and security-critical automotive systems must prevent the injection and execution of malicious code into the system. Drive has multiple features in place to provide malware protection:

  • It allows only authenticated (signed) binaries to run.
  • Drive enforces secure boot (in conformance with ISO/IEC 15408, the Common Criteria standard) using the Trusted Platform Module on Intel® platforms and ARM* TrustZone*.

The secure boot function verifies binaries at every stage of the boot process. If a component fails to pass signature verification, the boot process will stop.

Connectivity

Drive uses the Data Distribution Service (DDS), a real-time, low-latency middleware protocol and application programming interface (API) standard from the Object Management Group, to provide data connectivity among safety- and security-critical applications. It uses the Socket Controller Area Network (SocketCAN) to provide a uniform interface for opening multiple sockets at the same time to listen for and send frames to CAN identifiers. With the Wind River Certified Network Stack (an embedded TCP/User Datagram Protocol/IP version 4 network stack with multicast), Drive supports a BSD socket API, enabling easy migration of networking software from VxWorks and Linux platforms.

Summary

Wind River Helix Chassis, with Cockpit and Drive, is designed to simplify your software development process and innovation for connected vehicles. Leveraging a time-tested RTOS and built-in security capabilities with standards-based and certified tools, templates, and methods, you can now develop IVI, telematics, and other apps faster and with guaranteed safety and security compliance. An open source architecture allows for greater flexibility, and you can build apps compatible with multiple hardware platforms. The result is the ability to innovate and implement value-added functionality for safer, more efficient connected cars and a better driving experience.


Intel® XDK FAQs - Cordova


How do I set app orientation?

You set the orientation under the Build Settings section of the Projects tab.

To control the orientation of an iPad you may need to create a simple plugin that contains a single plugin.xml file like the following:

<config-file target="*-Info.plist" parent="UISupportedInterfaceOrientations~ipad" overwrite="true">
    <string></string>
</config-file>
<config-file target="*-Info.plist" parent="UISupportedInterfaceOrientations~ipad" overwrite="true">
    <array>
        <string>UIInterfaceOrientationPortrait</string>
    </array>
</config-file>

Then add the plugin as a local plugin using the plugin manager on the Projects tab.

HINT: to import the plugin.xml file you created above, you must select the folder that contains it; you cannot select the plugin.xml file itself in the import dialog, because a typical plugin consists of many files, not just a single plugin.xml. The plugin you created with the instructions above is atypical in that it requires only a single file.

Alternatively, you can use this plugin: https://github.com/yoik/cordova-yoik-screenorientation. Import it as a third-party Cordova* plugin using the plugin manager with the following information:

  • cordova-plugin-screen-orientation
  • specify a version (e.g. 1.4.0) or leave blank for the "latest" version

Or, you can reference it directly from its GitHub repo (linked above).

To use the screen orientation plugin referenced above you must add some JavaScript code to your app that calls the JavaScript API the plugin provides. Simply adding the plugin will not automatically fix your orientation; you must add code to your app that takes care of this. See the plugin's GitHub repo for details on how to use that API.
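
As a minimal sketch of that idea: older (1.x) releases of the plugin exposed a screen.lockOrientation() function, while newer releases of cordova-plugin-screen-orientation implement the standard screen.orientation.lock() API instead, so check your plugin's docs for the exact name. The screen object is passed in as a parameter here purely so the pattern can be exercised outside a webview.

```javascript
// Sketch: lock the app to portrait once the plugin is available.
// Assumes the 1.x screen-orientation plugin API (screen.lockOrientation);
// newer plugin versions use screen.orientation.lock() instead.
function lockPortrait(screenObj) {
    // screenObj is the global `screen` object on a real device.
    if (screenObj && typeof screenObj.lockOrientation === "function") {
        screenObj.lockOrientation("portrait");
        return true;
    }
    return false; // plugin not present (e.g. running in a desktop browser)
}

// On a device, call lockPortrait(screen) after the "deviceready" event fires.
```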

Is it possible to create a background service using Intel XDK?

Background services require the use of specialized Cordova* plugins that need to be created specifically for your needs. Intel XDK does not support development or debug of plugins, only the use of them as "black boxes" with your HTML5 app. Background services can be accomplished using Java on Android or Objective C on iOS. If a plugin that backgrounds the functions required already exists (for example, this plugin for background geo tracking), Intel XDK's build system will work with it.

How do I send an email from my App?

You can use the Cordova* email plugin or use the web intent plugin (PhoneGap* and Cordova* 3.x).

How do you create an offline application?

You can use the technique described here by creating an offline.appcache file and then setting it up to store the files that are needed to run the program offline. Note that offline applications need to be built using the Cordova* or Legacy Hybrid build options.
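
As a sketch (the file names below are hypothetical), an offline.appcache manifest simply lists the files to cache for offline use; each HTML page must also reference it via the manifest attribute, e.g. <html manifest="offline.appcache">:

```
CACHE MANIFEST
# v1 -- change this comment to force clients to re-download the cached files

CACHE:
index.html
css/app.css
js/app.js
images/logo.png

NETWORK:
*
```

The NETWORK: wildcard allows requests for uncached resources to go to the network when a connection is available.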

How do I work with alarms and timed notifications?

Unfortunately, alarms and notifications are advanced subjects that require a background service. This cannot be implemented in HTML5 and can only be done in native code by using a plugin. Background services require the use of specialized Cordova* plugins that need to be created specifically for your needs. Intel XDK does not support the development or debug of plugins, only the use of them as "black boxes" with your HTML5 app. Background services can be accomplished using Java on Android or Objective C on iOS. If a plugin that backgrounds the functions required already exists (for example, this plugin for background geo tracking) the Intel XDK's build system will work with it.

How do I get a reliable device ID?

You can use the Phonegap/Cordova* Unique Device ID (UUID) plugin for Android*, iOS* and Windows* Phone 8.

How do I implement In-App purchasing in my app?

There is a Cordova* plugin for this. A tutorial on its implementation can be found here. There is also a sample in Intel XDK called 'In App Purchase' which can be downloaded here.

How do I install custom fonts on devices?

Fonts can be treated as assets included with your app: like images and CSS files, they are private to the app and not shared with other apps on the device. (It is possible to share some files between apps using, for example, the SD card space on an Android* device.) If you include the font files as assets in your application, there is no download time to consider; they are part of your app and already exist on the device after installation.
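
As a hedged illustration (the font name and file path are hypothetical), bundling a font asset is typically just a CSS @font-face rule pointing at the file inside your app's source directory:

```css
/* Hypothetical example: MyFont.ttf is bundled in the app's fonts/ directory */
@font-face {
    font-family: "MyFont";
    src: url("fonts/MyFont.ttf") format("truetype");
}

body {
    font-family: "MyFont", sans-serif; /* falls back to sans-serif if the file is missing */
}
```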

How do I access the device's file storage?

You can use HTML5 local storage and this is a good article to get started with. Alternatively, there is a Cordova* file plugin for that.

Why aren't AppMobi* push notification services working?

This seems to be an issue on AppMobi's end and can only be addressed by them. PushMobi is only available in the "legacy" container. AppMobi* has not developed a Cordova* plugin, so it cannot be used in the Cordova* build containers. Thus, it is not available with the default build system. We recommend that you consider using the Cordova* push notification plugin instead.

How do I configure an app to run as a service when it is closed?

If you want a service to run in the background you'll have to write a service, either by creating a custom plugin or writing a separate service using standard Android* development tools. The Cordova* system does not facilitate writing services.

How do I dynamically play videos in my app?

  1. Download the JavaScript and CSS files from https://github.com/videojs and include them in your project file.
  2. Add references to them into your index.html file.
  3. Add a panel 'main1' that will be playing the video. This panel will be launched when the user clicks on the video in the main panel.

     
    <div class="panel" id="main1" data-appbuilder-object="panel" style="">
        <video id="example_video_1" class="video-js vjs-default-skin" controls="controls" preload="auto" width="200" poster="camera.png" data-setup="{}">
            <source src="JAIL.mp4" type="video/mp4">
            <p class="vjs-no-js">To view this video please enable JavaScript*, and consider upgrading to a web browser that <a href="http://videojs.com/html5-video-support/" target="_blank">supports HTML5 video</a></p>
        </video>
        <a onclick="runVid3()" href="#" class="button" data-appbuilder-object="button">Back</a>
    </div>
  4. When the user clicks on the video, the click event sets the 'src' attribute of the video element to what the user wants to watch.

     
    function runVid2() {
        document.getElementsByTagName("video")[0].setAttribute("src", "appdes.mp4");
        $.ui.loadContent("#main1", true, false, "pop");
    }
  5. The 'main1' panel opens waiting for the user to click the play button.

NOTE: The video does not play in the emulator and so you will have to test using a real device. The user also has to stop the video using the video controls. Clicking on the back button results in the video playing in the background.

How do I design my Cordova* built Android* app for tablets?

This page lists a set of guidelines to follow to make your app of tablet quality. If your app fulfills the criteria for tablet app quality, it can be featured in Google* Play's "Designed for tablets" section.

How do I resolve icon related issues with Cordova* CLI build system?

Ensure icon sizes are properly specified in the intelxdk.config.additions.xml file. For example, if you are targeting iOS 6, you need to manually specify the icon sizes that iOS* 6 uses.

<icon platform="ios" src="images/ios/72x72.icon.png" width="72" height="72" />
<icon platform="ios" src="images/ios/57x57.icon.png" width="57" height="57" />

These icon entries are not generated automatically by the build system, so you will have to include them in the additions file.

For more information on adding build options using intelxdk.config.additions.xml, visit: /en-us/html5/articles/adding-special-build-options-to-your-xdk-cordova-app-with-the-intelxdk-config-additions-xml-file

Is there a plugin I can use in my App to share content on social media?

Yes, you can use the PhoneGap Social Sharing plugin for Android*, iOS* and Windows* Phone.

Iframe does not load in my app. Is there an alternative?

Yes, you can use the inAppBrowser plugin instead.

Why are intel.xdk.istablet and intel.xdk.isphone not working?

Those properties are quite old and are based on the legacy AppMobi* system. An alternative is to detect the viewport size instead. You can get the user's screen size using the screen.width and screen.height properties (refer to this article for more information) and control the actual view of the webview by using the viewport meta tag (this page has several examples). You can also look through this forum thread for a detailed discussion.
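
A minimal sketch of the screen-size approach: classify the device by its shorter screen dimension in CSS pixels. The 600px cutoff below is a common heuristic, not an official rule, so tune it for your app.

```javascript
// Rough substitute for intel.xdk.istablet: classify by the shorter screen
// dimension. The 600px threshold is an assumption, not an official rule.
function isTablet(width, height) {
    var shorter = Math.min(width, height);
    return shorter >= 600;
}

// On a device you would call: isTablet(screen.width, screen.height)
```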

How do I enable security in my app?

We recommend using the App Security API, a collection of JavaScript APIs for hybrid HTML5 application developers. It enables developers, even those who are not security experts, to take advantage of the security properties and capabilities supported by the platform. The API collection is available to developers in the form of a Cordova plugin (JavaScript API and middleware), supported on Windows, Android, and iOS.
For more details please visit: https://software.intel.com/en-us/app-security-api.

For enabling it, please select the App Security plugin on the plugins list of the Project tab and build your app as a Cordova Hybrid app. After adding the plugin, you can start using it simply by calling its API. For more details about how to get started with the App Security API plugin, please see the relevant sample app articles at: https://software.intel.com/en-us/xdk/article/my-private-photos-sample and https://software.intel.com/en-us/xdk/article/my-private-notes-sample.

Why does my build fail with Admob plugins? Is there an alternative?

Intel XDK does not support the library project that was introduced in the com.google.playservices@21.0.0 plugin. Admob plugins depend on "com.google.playservices", which adds the Google* Play services jar to the project. The "com.google.playservices@19.0.0" plugin is a simple jar file that works quite well, but "com.google.playservices@21.0.0" uses a new feature to include a whole library project. It works if built locally with the Cordova CLI, but fails when using Intel XDK.

To remain compatible with Intel XDK, change the admob plugin's dependency to "com.google.playservices@19.0.0".

Why does the intel.xdk.camera plugin fail? Is there an alternative?

There seem to be some general issues with the camera plugin on iOS*. An alternative is to use the Cordova camera plugin instead, and change the version to 0.3.3.

How do I resolve Geolocation issues with Cordova?

Give this app a try; it contains lots of useful comments and console log messages. However, use version 0.3.10 of the Cordova geo plugin instead of the Intel XDK geo plugin. Do not use the Intel XDK geo plugin: it cannot be used in the same build as the Cordova geo plugin, and it will be discontinued. The Intel XDK buttons in the sample app will not work in a built app because the Intel XDK geo plugin is not included, although they will partially work in the Emulator and Debug tabs. If you test on a real device without the Intel XDK geo plugin selected, you should be able to see what is and is not working on your device.

Geo fine might not work because of the following reasons:

  1. Your device does not have a GPS chip
  2. It is taking a long time to get a GPS lock (if you are indoors)
  3. The GPS on your device has been disabled in the settings

Geo coarse is the safest bet to quickly get an initial reading. It derives a reading from a variety of inputs; it is usually less accurate than geo fine, but generally accurate enough to know what town you are in and your approximate location within it. Geo coarse will also prime the geo cache so there is something to read when you try to get a geo fine reading. Ensure your code can handle situations where you get no geo data at all: there is no guarantee you will get a geo fine reading, or get one in a reasonable period of time. Success with geo fine is highly dependent on many parameters that are typically outside of your control.
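
The coarse-then-fine pattern can be sketched with the standard navigator.geolocation API (which the Cordova geo plugin implements). The geolocation object is passed in as a parameter purely so the sketch can be exercised outside a webview; the timeout values are illustrative assumptions.

```javascript
// Sketch: request a quick coarse fix first, then a high-accuracy fix.
// `geo` is navigator.geolocation on a real device. Always supply an error
// handler -- a fine (GPS) fix may never arrive (indoors, GPS disabled, etc.).
function locate(geo, onPosition, onError) {
    // Coarse: fast, cell/Wi-Fi based, primes the geo cache.
    geo.getCurrentPosition(onPosition, onError, {
        enableHighAccuracy: false, maximumAge: 60000, timeout: 5000
    });
    // Fine: GPS-based; may time out, so the error handler matters here.
    geo.getCurrentPosition(onPosition, onError, {
        enableHighAccuracy: true, maximumAge: 0, timeout: 30000
    });
}

// On a device: locate(navigator.geolocation, showPos, showErr);
```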

Is there an equivalent Cordova* plugin for intel.xdk.player.playPodcast? If so, how can I use it?

Yes, there is and you can find the one that best fits the bill from the Cordova* plugin registry.

To make this work you will need to do the following:

  • Detect your platform (you can use uaparser.js or you can do it yourself by inspecting the user agent string)
  • Include the plugin only on the Android* platform and use <video> on iOS*.
  • Create conditional code to do what is appropriate for the platform detected
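
The detection step above can be sketched with a simple user-agent check; this is a deliberately minimal heuristic (uaparser.js is more robust):

```javascript
// Minimal user-agent platform check -- a sketch, not a complete detector.
function detectPlatform(userAgent) {
    if (/Android/i.test(userAgent)) return "android";
    if (/iPhone|iPad|iPod/i.test(userAgent)) return "ios";
    return "other";
}

// Then branch: use the podcast plugin on Android, a <video> element on iOS.
```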

You can force a plugin to be part of an Android* build by adding it manually into the additions file. To see what the basic directives are to include a plugin manually:

  1. Include it using the "import plugin" dialog, perform a build and inspect the resulting intelxdk.config.android.xml file.
  2. Then remove it from your Project tab settings, copy the directive from that config file and paste it into the intelxdk.config.additions.xml file. Prefix that directive with <!-- +Android* -->.

More information is available here and this is what an additions file can look like:

<preference name="debuggable" value="true" />
<preference name="StatusBarOverlaysWebView" value="false" />
<preference name="StatusBarBackgroundColor" value="#000000" />
<preference name="StatusBarStyle" value="lightcontent" />
<!-- -iOS* --><intelxdk:plugin intelxdk:value="nl.nielsad.cordova.wifiscanner" />
<!-- -Windows*8 --><intelxdk:plugin intelxdk:value="nl.nielsad.cordova.wifiscanner" />
<!-- -Windows*8 --><intelxdk:plugin intelxdk:value="org.apache.cordova.statusbar" />
<!-- -Windows*8 --><intelxdk:plugin intelxdk:value="https://github.com/EddyVerbruggen/Flashlight-PhoneGap-Plugin" />

This sample forces a plugin included with the "import plugin" dialog to be excluded from the platforms shown. You can include it only in the Android* platform by using conditional code and one or more appropriate plugins.

How do I display a webpage in my app without leaving my app?

The most effective way to do so is by using inAppBrowser.

Does Cordova* media have callbacks in the emulator?

While Cordova* media objects have proper callbacks when using the debug tab on a device, the emulator doesn't report state changes back to the Media object. This functionality has not been implemented yet. Under emulation, the Media object is implemented by creating an <audio> tag in the program under test. The <audio> tag emits a bunch of events, and these could be captured and turned into status callbacks on the Media object.
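
As a sketch of that idea (this is not shipped emulator functionality), the bridge could map <audio> tag events to Media-style status callbacks; the numeric codes below mirror the Cordova Media API status constants:

```javascript
// Sketch: bridge <audio> tag events to Media-style status callbacks.
// Status codes follow the Cordova Media API constants.
var MEDIA_RUNNING = 2, MEDIA_PAUSED = 3, MEDIA_STOPPED = 4;

function bridgeAudioEvents(audioEl, onStatus) {
    // audioEl only needs addEventListener, so a stub works for testing.
    audioEl.addEventListener("playing", function () { onStatus(MEDIA_RUNNING); });
    audioEl.addEventListener("pause",   function () { onStatus(MEDIA_PAUSED); });
    audioEl.addEventListener("ended",   function () { onStatus(MEDIA_STOPPED); });
}
```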

Why does the Cordova version number not match the Projects tab's Build Settings CLI version number, the Emulate tab, App Preview and my built app?

This is due to the difficulty in keeping different components in sync and is compounded by the version numbering convention that the Cordova project uses to distinguish build tool versions (the CLI version) from platform versions (the Cordova target-specific framework version) and plugin versions.

The CLI version you specify in the Projects tab's Build Settings section is the "Cordova CLI" version that the build system uses to build your app. Each version of the Cordova CLI tools comes with a set of "pinned" Cordova platform framework versions, which are tied to the target platform.

NOTE: the specific Cordova platform framework versions shown below are subject to change without notice.

Our Cordova CLI 4.1.2 build system was "pinned" to: 

  • cordova-android@3.6.4 (Android Cordova platform version 3.6.4)
  • cordova-ios@3.7.0 (iOS Cordova platform version 3.7.0)
  • cordova-windows@3.7.0 (Cordova Windows platform version 3.7.0)

Our Cordova CLI 5.1.1 build system is "pinned" to:

  • cordova-android@4.1.1 (as of March 23, 2016)
  • cordova-ios@3.8.0
  • cordova-windows@4.0.0

Our Cordova CLI 5.4.1 build system is "pinned" to: 

  • cordova-android@5.0.0
  • cordova-ios@4.0.1
  • cordova-windows@4.3.1

Our Cordova CLI 6.2.0 build system is "pinned" to: 

  • cordova-android@5.1.1
  • cordova-ios@4.1.1
  • cordova-windows@4.3.2

Our CLI 6.2.0 build system is nearly identical to a standard Cordova CLI 6.2.0 installation. A standard 6.2.0 installation differs slightly from our build system in that it specifies the cordova-ios@4.1.0 and cordova-windows@4.3.1 platform versions. There are no differences in the cordova-android platform versions.

Our CLI 5.4.1 build system really should be called "CLI 5.4.1+" because the platform versions it uses are closer to the "pinned" versions in the Cordova CLI 6.0.0 release than those "pinned" in the original CLI 5.4.1 release.

Our CLI 5.1.1 build system has been deprecated as of August 2, 2016, and will be retired in an upcoming fall 2016 release of the Intel XDK. It is highly recommended that you upgrade your apps to build with Cordova CLI 6.2.0 as soon as possible.

The Cordova platform framework version you get when you build an app does not equal the CLI version number in the Build Settings section of the Projects tab; it equals the Cordova platform framework version that is "pinned" to our build system's CLI version (see the list of pinned versions, above).

Technically, the target-specific Cordova platform frameworks can be updated [independently] for a given version of CLI tools. In some cases, our build system may use a Cordova platform version that is later than the version that was "pinned" to that version of the CLI when it was originally released by the Cordova project (that is, the Cordova platform versions originally specified by the Cordova CLI x.y.z links above).

You may see Cordova platform version differences in the Simulate tab, App Preview and your built app due to:

  • The Simulate tab uses one specific Cordova framework version. We try to make sure that the version of the Cordova platform it uses closely matches the current default Intel XDK version of Cordova CLI.

  • App Preview is released independently of the Intel XDK and, therefore, may use a different platform version than what you will see reported by the Simulate tab or your built app. Again, we try to release App Preview so it matches the version of the Cordova framework that is considered to be the default version for the Intel XDK at the time App Preview is released; but since the various tools are not always released in perfect sync, that is not always possible.

  • Your app is built with a "pinned" Cordova platform version, which is determined by the Cordova CLI version you specified in the Projects tab's Build Settings section. There are always at least two different CLI versions available in the Intel XDK build system.

  • For those versions of Crosswalk that were built with the Intel XDK CLI 4.1.2 build system, the cordova-android framework version was determined by the Crosswalk project, not by the Intel XDK build system.

  • When building an Android-Crosswalk app with Intel XDK CLI 5.1.1 and later, the cordova-android framework version equals the "pinned" cordova-android platform version for that CLI version (see lists above).

Do these Cordova platform framework version numbers matter? Occasionally, yes, but normally, not that much. There are some issues that come up that are related to the Cordova platform version, but they tend to be rare. The majority of the bugs and compatibility issues you will experience in your app have more to do with the versions and mix of Cordova plugins you choose to use and the HTML5 webview runtime on your test devices. See When is an HTML5 Web App a WebView App? for more details about what a webview is and how the webview affects your app.

The "default version" of CLI that the Intel XDK build system uses is rarely the most recent version of the Cordova CLI tools distributed by the Cordova project. There is always a lag between Cordova project releases and our ability to incorporate those releases into our build system and other Intel XDK components. In addition, we are not able to provide every CLI release that is made available by the Cordova project.

How do I add a third party plugin?

Please follow the instructions on this doc page to add a third-party plugin: Adding Plugins to Your Intel® XDK Cordova* App. Note that the plugin is not copied into your project sources; it is included at build time, and you will see it in the build log if it was successfully added to your build.

How do I make an AJAX call that works in my browser work in my app?

Please follow the instructions in this article: Cordova CLI 4.1.2 Domain Whitelisting with Intel XDK for AJAX and Launching External Apps.

I get an "intel is not defined" error, but my app works in Test tab, App Preview and Debug tab. What's wrong?

When your app runs in the Test tab, App Preview or the Debug tab the intel.xdk and core Cordova functions are automatically included for easy debug. That is, the plugins required to implement those APIs on a real device are already included in the corresponding debug modules.

When you build your app you must include the plugins that correspond to the APIs you are using in your build settings. This means you must enable the Cordova and/or XDK plugins that correspond to those APIs. Go to the Projects tab and ensure that the plugins you need are selected in your project's plugin settings. See Adding Plugins to Your Intel® XDK Cordova* App for additional details.
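
A related runtime pitfall is calling plugin APIs before they are loaded: defer all intel.xdk.* and Cordova plugin calls until the "deviceready" event has fired. A minimal sketch of the pattern, with document passed in as a parameter purely so it can be exercised outside a webview:

```javascript
// Sketch: never call plugin APIs until Cordova fires "deviceready".
// `doc` is the global `document` object in a real app.
function whenDeviceReady(doc, onReady) {
    doc.addEventListener("deviceready", onReady, false);
}

// In a real app:
// whenDeviceReady(document, function () {
//     // safe to use intel.xdk.* and Cordova plugin APIs here
// });
```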

How do I target my app for use only on an iPad or only on an iPhone?

There is an undocumented feature in Cordova that should help you (the Cordova project provided this feature but failed to document it for the rest of the world). If you use the appropriate preference in the intelxdk.config.additions.xml file you should get what you need:

<preference name="target-device" value="tablet" />     <!-- Installs on iPad, not on iPhone -->
<preference name="target-device" value="handset" />    <!-- Installs on iPhone; iPad installs in a zoomed view and doesn't fill the entire screen -->
<preference name="target-device" value="universal" />  <!-- Installs on iPhone and iPad correctly -->

If you need info regarding the additions.xml file, see the blank template or this doc file: Adding Intel® XDK Cordova Build Options Using the Additions File.

Why does my build fail when I try to use the Cordova* Capture Plugin?

The Cordova* Capture plugin has a dependency on the File plugin. Please make sure you have both plugins selected on the Projects tab.

How can I pinch and zoom in my Cordova* app?

For now, using the viewport meta tag is the only option to enable pinch and zoom. However, its behavior is unpredictable in different webviews. Testing a few sample apps has led us to believe that this feature works better in Crosswalk for Android. You can test this by building the Hello Cordova sample app for both Android and Crosswalk for Android. Pinch and zoom will work only on the latter, even though both include:

<meta name="viewport" content="width=device-width, initial-scale=1, user-scalable=yes, minimum-scale=1, maximum-scale=2">

Please visit the following pages to get a better understanding of when to build with Crosswalk for Android:

http://blogs.intel.com/evangelists/2014/09/02/html5-web-app-webview-app/

https://software.intel.com/en-us/xdk/docs/why-use-crosswalk-for-android-builds

Another device oriented approach is to enable it by turning on Android accessibility gestures.

How do I make my Android application use the fullscreen so that the status and navigation bars disappear?

The Cordova* fullscreen plugin can be used to do this. For example, in your initialization code, call AndroidFullScreen.immersiveMode(null, null);.

You can get this third-party plugin from here https://github.com/mesmotronic/cordova-fullscreen-plugin

How do I add XXHDPI and XXXHDPI icons to my Android or Crosswalk application?

The Cordova CLI 4.1.2 build system supports this feature, but our 4.1.2 build system (and the 2170 version of the Intel XDK) does not handle the XX and XXX sizes directly. Use this workaround until those sizes are supported directly:

  • copy your XX and XXX icons into your source directory (usually named www)
  • add the following lines to your intelxdk.config.additions.xml file
  • see this Cordova doc page for some more details

Assuming your icons and splash screen images are stored in the "pkg" directory inside your source directory (your source directory is usually named www), add lines similar to these into your intelxdk.config.additions.xml file (the precise names of your png files may differ from what is shown here):

<!-- for adding xxhdpi and xxxhdpi icons on Android -->
<icon platform="android" src="pkg/xxhdpi.png" density="xxhdpi" />
<icon platform="android" src="pkg/xxxhdpi.png" density="xxxhdpi" />
<splash platform="android" src="pkg/splash-port-xhdpi.png" density="port-xhdpi"/>
<splash platform="android" src="pkg/splash-land-xhdpi.png" density="land-xhdpi"/>

The precise names of your PNG files are not important, but the "density" designations are very important and, of course, the respective resolutions of your PNG files must be consistent with Android requirements. Those density parameters specify the respective "res-drawable-*dpi" directories that will be created in your APK for use by the Android system. NOTE: the splash screen entries are shown only for illustration; you do not need to use this technique for splash screens.

You can continue to insert the other icons into your app using the Intel XDK Projects tab.

Which plugin is the best to use with my app?

We are not able to track all the plugins out there, so we generally cannot give you a "this is better than that" evaluation of plugins. Check the Cordova plugin registry to see which plugins are most popular and check Stack Overflow to see which are best supported; also, check the individual plugin repos to see how well the plugin is supported and how frequently it is updated. Since the Cordova platform and the mobile platforms continue to evolve, those that are well-supported are likely to be those that have good activity in their repo.

Keep in mind that the XDK builds Cordova apps, so whichever plugins you find being supported and working best with other Cordova (or PhoneGap) apps would likely be your "best" choice.

See Adding Plugins to Your Intel® XDK Cordova* App for instructions on how to include third-party plugins with your app.

What are the rules for my App ID?

The precise App ID naming rules vary as a function of the target platform (e.g., Android, iOS, Windows). Unfortunately, the App ID naming rules are further restricted by the Apache Cordova project and sometimes change with updates to the Cordova project. The Cordova project is the underlying technology that your Intel XDK app is based upon; when you build an Intel XDK app you are building an Apache Cordova app.

CLI 5.1.1 has more restrictive App ID requirements than previous versions of Apache Cordova (the CLI version refers to Apache Cordova CLI release versions). In this case, the Apache Cordova project decided to set limits on acceptable App IDs to equal the minimum set for all platforms. We hope to eliminate this restriction in a future release of the build system, but for now (as of the 2496 release of the Intel XDK), the current requirements for CLI 5.1.1 are:

  • Each section of the App ID must start with a letter
  • Each section can only consist of letters, numbers, and the underscore character
  • Each section cannot be a Java keyword
  • The App ID must consist of at least 2 sections (each section separated by a period ".").
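The rules above can be sanity-checked with a small validation function. This is a hypothetical helper, not part of the Intel XDK, and its Java keyword list is only a partial sample for illustration:

```javascript
// Hypothetical helper (not part of the Intel XDK): checks an App ID
// against the CLI 5.1.1 rules listed above.
var JAVA_KEYWORDS = [
    "abstract", "case", "class", "for", "int", "new",
    "package", "return", "static", "switch" // partial list, for illustration only
];

function isValidAppId(appId) {
    var sections = appId.split(".");
    if (sections.length < 2) return false;              // at least 2 sections
    return sections.every(function (section) {
        if (!/^[A-Za-z][A-Za-z0-9_]*$/.test(section)) { // starts with a letter;
            return false;                               // letters, numbers, underscore only
        }
        return JAVA_KEYWORDS.indexOf(section) === -1;   // no Java keywords
    });
}
```

For example, "com.example.myapp" passes, while "myapp" (only one section), "com.2fast" (section starts with a digit), and "com.new.app" ("new" is a Java keyword) are all rejected.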

 

iOS /usr/bin/codesign error: certificate issue for iOS app?

If you are getting an iOS build fail message in your detailed build log that includes a reference to a signing identity error, you probably have a bad or inconsistent provisioning file. The "no identity found" message in the build log excerpt, below, means that the provisioning profile does not match the distribution certificate that was uploaded with your application during the build phase.

Signing Identity:     "iPhone Distribution: XXXXXXXXXX LTD (Z2xxxxxx45)"
Provisioning Profile: "MyProvisioningFile"
                      (b5xxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxe1)

    /usr/bin/codesign --force --sign 9AxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxA6 --resource-rules=.../MyApp/platforms/ios/build/device/MyApp.app/ResourceRules.plist --entitlements .../MyApp/platforms/ios/build/MyApp.build/Release-iphoneos/MyApp.build/MyApp.app.xcent .../MyApp/platforms/ios/build/device/MyApp.app
9AxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxA6: no identity found
Command /usr/bin/codesign failed with exit code 1

** BUILD FAILED **


The following build commands failed:
    CodeSign build/device/MyApp.app
(1 failure)

The excerpt shown above will appear near the very end of the detailed build log. The unique number patterns in this example have been replaced with "xxxx" strings for security reasons. Your actual build log will contain hexadecimal strings.

iOS Code Sign error: bundle ID does not match app ID?

If you are getting an iOS build fail message in your detailed build log that includes a reference to a "Code Sign error" you may have a bad or inconsistent provisioning file. The "Code Sign" message in the build log excerpt, below, means that the bundle ID you specified in your Apple provisioning profile does not match the app ID you provided to the Intel XDK to upload with your application during the build phase.

Code Sign error: Provisioning profile does not match bundle identifier: The provisioning profile specified in your build settings (MyBuildSettings) has an AppID of my.app.id which does not match your bundle identifier my.bundleidentifier.
CodeSign error: code signing is required for product type 'Application' in SDK 'iOS 8.0'

** BUILD FAILED **

The following build commands failed:
    Check dependencies
(1 failure)
Error code 65 for command: xcodebuild with args: -xcconfig,...

The message above translates into "the bundle ID you entered in the project settings of the XDK does not match the bundle ID (app ID) that you created on Apple's developer portal and then used to create a provisioning profile."

iOS build error?

If your iOS build is failing with Error code 65 and Xcodebuild in the error log, there are most likely issues with your certificate and provisioning profile. Sometimes Xcode gives specific errors such as "Provisioning profile does not match bundle identifier" and other times something like "Code Sign error: No codesigning identities found: No code signing identities". The root of these issues is not providing the correct certificate (P12 file) and/or provisioning profile, or a mismatch between the P12 and the provisioning profile. You have to make sure your P12 and provisioning profile are correct: the provisioning profile has to be generated using the certificate you used to create the P12 file. Also, the app ID you provide in the XDK Build Settings has to match the app ID created on the Apple Developer portal, and the same app ID has to be used when creating the provisioning profile.

Please follow these steps to generate the P12 file.

  1. Create a .csr file from the Intel XDK (do not close the dialog box that asks you to upload the .cer file)
  2. Click the Apple Developer Portal link in the dialog box (do not close the dialog box in the XDK)
  3. Upload the .csr file on the Apple Developer Portal
  4. Generate a certificate on the Apple Developer Portal
  5. Download the .cer file from the Developer Portal
  6. Return to the XDK dialog box where you left off in step 1 and press Next; select the .cer file from step 5 and generate the P12 file
  7. Create an App ID on the Apple Developer Portal
  8. Generate a provisioning profile on the Apple Developer Portal using the certificate you generated in step 4 and the App ID created in step 7
  9. Provide the same App ID (step 7), P12 (step 6), and provisioning profile (step 8) in the Intel XDK Build Settings

A few things to check before you build:

  1. Make sure your certificate has not expired
  2. The App ID you created on the Apple Developer portal matches the App ID you provided in the XDK Build Settings
  3. You are using a provisioning profile that is associated with the certificate you are using to build the app
  4. Apple allows only three active certificates; if you need to create a new one, revoke one of the older certificates first

This App Certificate Management video shows how to create a P12 and a provisioning profile; the P12 creation part starts at 16:45. Please follow the process for creating a P12 and generating a provisioning profile as shown in the video, or follow this Certificate Management document.

What are plugin variables used for? Why do I need to supply plugin variables?

Some plugins require details that are specific to your app or your developer account; for example, to authorize your app as one that belongs to you, the developer, so services can be properly routed to the service provider. The precise reasons depend on the specific plugin and its function.

What happened to the Intel XDK "legacy" build options?

On December 14, 2015, the Intel XDK legacy build options were retired and are no longer available to build apps. The legacy build option is based on three-year-old technology that predates the current Cordova project. All Intel XDK development efforts for the past two years have been directed at building standard Apache Cordova apps.

Many of the intel.xdk legacy APIs that were supported by the legacy build options have been migrated to standard Apache Cordova plugins and published as open source plugins. The API details for these plugins are available in the README.md files in the respective 01.org GitHub repos. Additional details regarding the new Cordova implementations of the intel.xdk legacy APIs are available in the doc page titled Intel XDK Legacy APIs.

Standard Cordova builds do not require the use of the "intelxdk.js" and "xhr.js" phantom scripts. Only the "cordova.js" phantom script is required to successfully build Cordova apps. If you have been including "intelxdk.js" and "xhr.js" in your Cordova builds, they have been quietly ignored. You should remove references to these files from your "index.html" file; leaving them in will do no harm, but it results in a warning that the respective script file cannot be found at runtime.

The Emulate tab will continue to support some legacy intel.xdk APIs that are NOT supported in the Cordova builds (only those intel.xdk APIs that are supported by the open source plugins are available to a Cordova built app, and only if you have included the respective intel.xdk plugins). This Emulate tab discrepancy will be addressed in a future release of the Intel XDK.

More information can be found in this forum post > https://software.intel.com/en-us/forums/intel-xdk/topic/601436.

Which build files do I submit to the Windows Store and which do I use for testing my app on a device?

There are two things you can do with the build files generated by the Intel XDK Windows build options: side-load your app onto a real device (for testing) or publish your app in the Windows Store (for distribution). Microsoft has changed the files you use for these purposes with each release of a new platform. As of December 2015, the packages you might see in a build, and their uses, are:

  • appx works best for side-loading, and can also be used to publish your app.
  • appxupload is preferred for publishing your app; it will not work for side-loading.
  • appxbundle will work for both publishing and side-loading, but is not preferred.
  • xap is for legacy Windows Phone; works for both publishing and side-loading.

In essence: XAP (WP7) was superseded by APPXBUNDLE (Win8 and WP8.0), which was superseded by APPX (Win8/WP8.1/UAP), which has been supplemented with APPXUPLOAD. APPX and APPXUPLOAD are the preferred formats. For more information regarding these file formats, see Upload app packages on the Microsoft developer site.
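The uses above can be captured in a small lookup table; this is only a sketch mirroring the list, not an Intel XDK API:

```javascript
// Which Intel XDK Windows build packages can be side-loaded vs. published,
// mirroring the list above (xap is for legacy Windows Phone).
var windowsPackages = {
    appx:       { sideload: true,  publish: true  }, // works best for side-loading
    appxupload: { sideload: false, publish: true  }, // preferred for publishing
    appxbundle: { sideload: true,  publish: true  }, // works for both, not preferred
    xap:        { sideload: true,  publish: true  }  // legacy Windows Phone
};
```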

Side-loading a Windows Phone app onto a real device, over USB, requires a Windows 8+ development system (see Side-Loading Windows* Phone Apps for complete instructions). If you do not have a physical Windows development machine you can use a virtual Windows machine or use the Window Store Beta testing and targeted distribution technique to get your app onto real test devices.

Side-loading a Windows tablet app onto a Windows 8 or Windows 10 laptop or tablet is simpler. Extract the contents of the ZIP file that you downloaded from the Intel XDK build system, open the "*_Test" folder inside the extracted folder, and run the PowerShell script (ps1 file) contained within that folder on the test machine (the machine that will run your app). The ps1 script file may need to request a "developer certificate" from Microsoft before it will install your test app onto your Windows test system, so your test machine may require a network connection to successfully side-load your Windows app.

The side-loading process may not overwrite an existing side-loaded app with the same ID. To be sure your test app properly side-loads, it is best to uninstall the old version of your app before side-loading a new version on your test system.

How do I implement local storage or SQL in my app?

See this summary of local storage options for Cordova apps written by Josh Morony, A Summary of Local Storage Options for PhoneGap Applications.
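As a minimal sketch of the simplest option, the standard localStorage API (available in Cordova webviews) stores key/value strings, and objects can be round-tripped through JSON. The in-memory fallback below is an assumption added only so the sketch also runs outside a webview:

```javascript
// Minimal key/value persistence sketch: uses window.localStorage when present,
// with an in-memory fallback so this also runs outside a webview.
var store = (typeof localStorage !== "undefined") ? localStorage : (function () {
    var mem = {};
    return {
        setItem: function (k, v) { mem[k] = String(v); },
        getItem: function (k) { return mem.hasOwnProperty(k) ? mem[k] : null; }
    };
}());

function saveObject(key, obj) {
    store.setItem(key, JSON.stringify(obj)); // localStorage stores strings only
}

function loadObject(key) {
    var raw = store.getItem(key);
    return raw === null ? null : JSON.parse(raw); // null when the key is absent
}
```

For larger data sets or queries, see the SQL-based options in the article above.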

How do I prevent my app from auto-completing passwords?

Use the Ionic Keyboard plugin and set the spellcheck attribute to false.

Why does my PHP script not run in my Intel XDK Cordova app?

Your XDK app is not a page on a web server; you cannot use dynamic web server techniques because there is no web server associated with your app to which you can pass off PHP scripts and similar actions. When you build an Intel XDK app you are building a standalone Cordova client web app, not a dynamic server web app. You need to create a RESTful API on your server that you can then call from your client (the Intel XDK Cordova app) and pass and return data between the client and server through that RESTful API (usually in the form of a JSON payload).

Please see this StackOverflow post and this article by Ray Camden, a longtime developer of the Cordova development environment and Cordova apps, for some useful background.
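A minimal sketch of the client side of this pattern follows; the endpoint URL and the { "users": [...] } payload shape are hypothetical, and the pure JSON-parsing step is split out from the XMLHttpRequest transport:

```javascript
// Sketch of calling a RESTful API from a Cordova app instead of server-side PHP.
// The endpoint URL and the { "users": [...] } payload shape are hypothetical.
function parseUserList(responseText) {
    var data = JSON.parse(responseText); // the server returns a JSON payload
    return data.users || [];
}

function fetchUsers(url, onSuccess, onError) {
    var xhr = new XMLHttpRequest();      // available in the Cordova webview
    xhr.open("GET", url);
    xhr.onload = function () {
        if (xhr.status === 200) {
            onSuccess(parseUserList(xhr.responseText));
        } else {
            onError(new Error("HTTP " + xhr.status));
        }
    };
    xhr.onerror = function () { onError(new Error("network error")); };
    xhr.send();
}
```

The server-side half of the RESTful API (the part that talks to MySQL) stays on your server, behind the URL you pass to fetchUsers.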

Following is a lightly edited recommendation from an Intel XDK user:

I came from php+mysql web development. My first attempt at an Intel XDK Cordova app was to create a set of php files to query the database and give me the JSON. It was a simple job, but totally insecure.

Then I found dreamfactory.com, an open source software that automatically creates the REST API functions from several databases, SQL and NoSQL. I use it a lot. You can start with a free account to develop and test and then install it in your server. Another possibility is phprestsql.sourceforge.net, this is a library that does what I tried to develop by myself. I did not try it, but perhaps it will help you.

And finally, I'm using PouchDB and CouchDB, "a database for the web." It is not SQL, but is very useful and easy if you need to develop a mobile app with only a few tables. It will also work with a lot of tables, but for a simple database it is an easy place to start.

I strongly recommend that you start to learn these new ways to interact with databases; you will need to invest some time, but it is the way to go. Do not try to use MySQL and PHP the old-fashioned way; you can get it to work, but at some point you may get stuck.

Why doesn’t my Cocos2D game work on iOS?

This is an issue with Cocos2D and is not a reflection of our build system. As an interim solution, we have modified the CCBoot.js file for compatibility with iOS and App Preview. You can view an example of this modification in this CCBoot.js file from the Cocos2d-js 3.1 Scene GUI sample. The update has been applied to all cocos2D templates and samples that ship with Intel XDK. 

The fix involves two line changes (for the generic cocos2D fix) and one additional line (for it to work in App Preview on iOS devices):

Generic cocos2D fix -

1. Inside the loadTxt function, xhr.onload should be defined as

xhr.onload = function () {
    if(xhr.readyState == 4)
        xhr.responseText != "" ? cb(null, xhr.responseText) : cb(errInfo);
    };

instead of

xhr.onload = function () {
    if(xhr.readyState == 4)
        xhr.status == 200 ? cb(null, xhr.responseText) : cb(errInfo);
    };

2. The condition inside _loadTxtSync function should be changed to 

if (!xhr.readyState == 4 || (xhr.status != 200 || xhr.responseText != "")) {

instead of 

if (!xhr.readyState == 4 || xhr.status != 200) {

 

App Preview fix -

Add this line inside loadTxtSync, after _xhr.open:

xhr.setRequestHeader("iap_isSyncXHR", "true");

How do I change the alias of my Intel XDK Android keystore certificate?

You cannot change the alias name of your Android keystore within the Intel XDK, but you can download the existing keystore, change the alias on that keystore and upload a new copy of the same keystore with a new alias.

Use the following procedure:

  • Download the converted legacy keystore from the Intel XDK (the one with the bad alias).

  • Locate the keytool app on your system (this assumes that you have a Java runtime installed on your system). On Windows, this is likely to be located at %ProgramFiles%\Java\jre8\bin (you might have to adjust the value of jre8 in the path to match the version of Java installed on your system). On Mac and Linux systems it is probably located in your path (in /usr/bin).

  • Change the alias of the keystore using this command (see the keytool -changealias -help command for additional details):

keytool -changealias -alias "existing-alias" -destalias "new-alias" -keypass keypass -keystore /path/to/keystore -storepass storepass
  • Import this new keystore into the Intel XDK using the "Import Existing Keystore" option in the "Developer Certificates" section of the "person icon" located in the upper right corner of the Intel XDK.

What causes "The connection to the server was unsuccessful. (file:///android_asset/www/index.html)" error?

See this forum thread for some help with this issue. This error is most likely due to errors retrieving assets over the network or long delays associated with retrieving those assets.

How do I manually sign my Android or Crosswalk APK file with the Intel XDK?

To sign an app manually, you must build your app by "deselecting" the "Signed" box in the Build Settings section of the Android tab on the Projects tab:

Follow these Android developer instructions to manually sign your app. The instructions assume you have Java installed on your system (for the jarsigner and keytool utilities). You may have to locate and install the zipalign tool separately (it is not part of Java) or download and install Android Studio.

These two sections of the Android developer Signing Your Applications article are also worth reading:

Why should I avoid using the additions.xml file? Why should I use the Plugin Management Tool in the Intel XDK?

Intel XDK (2496 and up) now includes a Plugin Management Tool that simplifies adding and managing Cordova plugins. We urge all users to manage the plugins in existing or upgraded projects using this tool. If you were using the intelxdk.config.additions.xml file to manage plugins in the past, you should remove those entries and use the Plugin Management Tool to add all plugins instead.

Why you should be using the Plugin Management Tool:

  • It can now manage plugins from all sources. Popular plugins have been added to the Featured plugins list. Third-party plugins can be added from the Cordova Plugin Registry, a Git repo, or your file system.

  • Consistency: Unlike previous versions of the Intel XDK, plugins you add are now stored as a part of your project on your development system after they are retrieved by the Intel XDK and copied to your plugins directory. These plugin files are delivered, along with your source code files, to the Intel XDK cloud-based build server. This change ensures greater consistency between builds, because you always build with the plugin version that was retrieved by the Intel XDK into your project. It also provides better documentation of the components that make up your Cordova app, because the plugins are now part of your project directory. This is also more consistent with the way a standard Cordova CLI project works.

  • Convenience: In the past, the only way to add a third-party plugin that required parameters was to include it in the intelxdk.config.additions.xml file. This plugin would then be added to your project by the build system. This is no longer recommended. The new Plugin Management Tool automatically parses the plugin.xml file and prompts you to add any plugin variables from within the XDK.

    When a plugin is added via the Plugin Management Tool, a plugin entry is added to the project file and the plugin source is downloaded to the plugins directory, making for a more stable project. After a build, the build system automatically generates config XML files in your project directory that include a complete summary of plugins and variable values.

  • Correctness of Debug Module: Intel XDK now provides remote on-device debugging for projects with third-party plugins by building a custom debug module from your project plugins directory. It does not write to or read from the intelxdk.config.additions.xml file; the only time this file is used is during a build. This means the debug module is not aware of any plugin added via the intelxdk.config.additions.xml file, so adding plugins that way should be avoided. Here is a useful article for understanding Intel XDK Build Files.

  • Editing Plugin Sources: There are a few cases where you may want to modify plugin code to fix a bug in a plugin, or add console.log messages to a plugin's sources to help debug your application's interaction with the plugin. To accomplish these goals you can edit the plugin sources in the plugins directory. Your modifications will be uploaded along with your app sources when you build your app using the Intel XDK build server and when a custom debug module is created by the Debug tab.

How do I fix this "unknown error: cannot find plugin.xml" when I try to remove or change a plugin?

Removing or changing a plugin in your project sometimes generates an "unknown error: cannot find plugin.xml" message.

This is not a common problem, but if it does happen it means a file in your plugin directory is probably corrupt (usually one of the JSON files found inside the plugins folder at the root of your project folder).

The simplest fix is to:

  • make a list of ALL of your plugins (especially the plugin ID and version number, see image below)
  • exit the Intel XDK
  • delete the entire plugins directory inside your project
  • restart the Intel XDK

The XDK should detect that all of your plugins are missing and attempt to reinstall them. If it does not automatically re-install all or some of your plugins, then reinstall them manually from the list you saved in step one (see the image below for the important data that documents your plugins).

NOTE: if you re-install your plugins manually, you can use the third-party plugin add feature of the plugin management system to specify the plugin ID to get your plugins from the Cordova plugin registry. If you leave the version number blank, the latest version of the plugin that is available in the registry will be retrieved by the Intel XDK.

Why do I get a "build failed: the plugin contains gradle scripts" error message?

You will see this error message in your Android build log summary whenever you include a Cordova plugin that includes a gradle script in your project. Gradle scripts add extra Android build instructions that are needed by the plugin.

The current Intel XDK build system does not allow the use of plugins that contain gradle scripts because they present a security risk to the build system and your Intel XDK account. An unscrupulous user could use a gradle-enabled plugin to do harmful things with the build server. We are working on a build system that will ensure the necessary level of security to allow gradle scripts in plugins, but until then we cannot support plugins that include gradle scripts.

The error message in your build summary log will look like the following:

In some cases the plugin gradle script can be removed, but only if you manually modify the plugin to implement whatever the gradle script was doing automatically. In some cases this can be done easily (for example, the gradle script may be building a JAR library file for the plugin), but sometimes the plugin is not easily modified to remove the need for the gradle script. Exactly what needs to be done to the plugin depends on the plugin and the gradle script.

You can find out more about Cordova plugins and gradle scripts by reading this section of the Cordova documentation. In essence, if a Cordova plugin includes a build-extras.gradle file in the plugin's root folder, or if it contains one or more lines similar to the following, inside the plugin.xml file:

<framework src="some.gradle" custom="true" type="gradleReference" />

it means that the plugin contains gradle scripts and will be rejected by the Intel XDK build system.
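Before adding a downloaded plugin, you can quickly check its plugin.xml text for such a reference. This is a hypothetical helper, not an Intel XDK feature:

```javascript
// Hypothetical helper: returns true if a plugin.xml's text contains a
// <framework ... type="gradleReference" ...> element, which means the plugin
// includes gradle scripts and will be rejected by the Intel XDK build system.
function referencesGradle(pluginXmlText) {
    return /<framework\b[^>]*type\s*=\s*"gradleReference"/.test(pluginXmlText);
}
```

Remember to also look for a build-extras.gradle file in the plugin's root folder, which this text check does not cover.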

How does one remove gradle dependencies for plugins that use Google Play Services (esp. push plugins)?

Our Android (and Crosswalk) CLI 5.1.1 and CLI 5.4.1 build systems include a fix for an issue in the standard Cordova build system that allows some Cordova plugins to be used with the Intel XDK build system without their included gradle script!

This fix only works with those Cordova plugins that include a gradle script for one and only one purpose: to set the value of applicationID in the Android build project files (such a gradle script copies the value of the App ID from your project's Build Settings, on the Projects tab, to this special project build variable).

Using phonegap-plugin-push as an example, this Cordova plugin contains a gradle script named push.gradle, which looks like this:

import java.util.regex.Pattern

def doExtractStringFromManifest(name) {
    def manifestFile = file(android.sourceSets.main.manifest.srcFile)
    def pattern = Pattern.compile(name + "=\"(.*?)\"")
    def matcher = pattern.matcher(manifestFile.getText())
    matcher.find()
    return matcher.group(1)
}

android {
    sourceSets {
        main {
            manifest.srcFile 'AndroidManifest.xml'
        }
    }

    defaultConfig {
        applicationId = doExtractStringFromManifest("package")
    }
}

All this gradle script is doing is inserting your app's "package ID" (the "App ID" in your app's Build Settings) into a variable called applicationID for use by the build system. It is needed, in this example, by the Google Play Services library to ensure that calls through the Google Play Services API can be matched to your app. Without the proper App ID, the Google Play Services library cannot distinguish between multiple apps on an end user's device that are using the Google Play Services library, for example.

The phonegap-plugin-push is being used as an example for this article. Other Cordova plugins exist that can also be used by applying the same technique (e.g., the pushwoosh-phonegap-plugin will also work using this technique). It is important that you first determine that only one gradle script is being used by the plugin of interest and that this one gradle script is used for only one purpose: to set the applicationID variable.

How does this help you and what do you do?

To use a plugin with the Intel XDK build system that includes a single gradle script designed to set the applicationID variable:

  • Download a ZIP of the plugin version you want to use (e.g. version 1.6.3) from that plugin's git repo.

    IMPORTANT: be sure to download a released version of the plugin; the "head" of the git repo may be "under construction". Some plugin authors make it easy to identify a specific version, some do not; be aware and careful when choosing what you clone from a git repo!

  • Unzip that plugin onto your local hard drive.

  • Remove the <framework> line that references the gradle script from the plugin.xml file.

  • Add the modified plugin into your project as a "local" plugin (see the image below).

In this example, you will be prompted to define a variable that the plugin also needs. If you know that variable's name (it's called SENDER_ID for this plugin), you can add it using the "+" icon in the image above, and avoid the prompt. If the plugin add was successful, you'll find something like this in the Projects tab:

If you are curious, you can inspect the AndroidManifest.xml file that is included inside your built APK file (you will have to use a tool like apktool to extract and reconstruct it from your APK file). You should see something like the following highlighted line, which should match your App ID; in this example, the App ID was io.cordova.hellocordova:

If you see the following App ID, it means something went wrong. This is the default App ID for the Google Play Services library that will cause collisions on end-user devices when multiple apps that are using Google Play Services use this same default App ID:


Intel® Parallel Computing Center at Computational Fluid Dynamic department (ONERA)


Principal Investigators:

Alain Refloch joined ONERA in 1990 and was in charge of user support for scientific computation. He founded the 'Software Engineering and HPC' unit in 2000, became project leader of the CEDRE software in 2003 (see reference paper), and joined the Computational Fluid Dynamics and AeroAcoustics Department. A. Refloch has been a member of the scientific council of ORAP since 2009.

He was the co-organizer of the International Workshop on High Performance Computing – Computational Fluid Dynamics (HPC-CFD) in Energy/Transport Domains (16th IEEE High Performance Computing and Communication in Paris and ISC'15 in Frankfurt). Today he is Special Advisor for HPC.

Ivan Mary obtained his PhD in 1999 at Paris-Orsay University in the field of numerical methods for CFD. He joined ONERA in 2000 with the mission to develop methods and software enabling efficient Large Eddy Simulation (LES) and Direct Numerical Simulation (DNS) of turbulent flow around complex configurations. He has supervised around 10 PhD students during the last 15 years in the fields of numerical methods, fluid dynamics, and turbulence modelling. Since 2011, he has focused a large part of his work on HPC (re-engineering, coarse-grain OpenMP parallelization, and vectorization), because this is a crucial point for unsteady computations of turbulent flows. Since 2015, he has been in charge of the FAST demonstrator, which must provide the HPC basis of the next-generation elsA software.

Description:

The ONERA CFD department has developed and supported fluid dynamics software for decades, both for its own research and for industrial partners in the aeronautical domain. Today the elsA software, developed at ONERA since 1997, is one of the major CFD tools used by Airbus, Eurocopter, and Safran. In their design services, it is used extensively to optimize airplane performance (noise or energy-consumption reduction, safety improvement). Due to environmental constraints, noise reduction in the vicinity of airports has become a major challenge for aircraft manufacturers. The noise radiated during the landing phase is due to turbulent vortices generated by the landing gear and the flaps on the wings, which act like powerful whistles. Numerical simulation of the generated noise requires handling the complex, detailed geometry of landing gear or flaps and solving billions of unknowns at each time step, in order to describe the time evolution of turbulent vortices over millions of time steps and compute just a few seconds of physical time.

Therefore, HPC capabilities, (re)meshing of complex geometries, and multiphysics coupling (noise generation and propagation) are crucial for the software to obtain a solution in a reasonable time. For these reasons, a demonstrator named FAST (Flexible Aerodynamic Solver Technology) has been under development for the past one to two years to prepare a major evolution of elsA in the coming years. This demonstrator aims to provide a software architecture and numerical techniques that allow better flexibility, extensibility, and efficiency, in order to perform simulations out of reach of current CFD tools. Building on previous expertise, the services required by CFD simulations (pre/post-processing, boundary conditions, solvers, coupling, etc.) are provided by different Python modules in FAST, while the CFD General Notation System (CGNS) standard is adopted as the data model, and also for the implementation of this data model, to facilitate interoperability between modules. To improve flexibility in meshing complex geometrical details, an automatic Cartesian grid generator, immersed boundary conditions, and the Chimera technique will be employed during the present Intel® PCC project to compute the noise generated by the LAGOON landing gear configuration (Lagoon). Thanks to code modernization (memory access, vectorization, etc.), we aim to reduce the CPU cost of this kind of computation by at least one order of magnitude on current Intel® Xeon® and future Intel® Xeon Phi™ processors.

Complex Structures of Dynamic Stall by LES

"Complex Structures of Dynamic Stall by LES" by Ivan Mary (ONERA)

Related websites:

http://www.onera.fr
http://elsa.onera.fr
http://elsa.onera.fr/Cassiopee
http://www.hpctoday.com/state-of-the-art/processor-evolution-what-to-prepare-application-codes-for

Using Libraries in Your IoT Project


Functions and Libraries

Functions are blocks of code that focus on specific tasks. There are two main types of functions: those packaged in a library and those you define in your own code. Programming languages such as C++, Python*, and Java*, and runtimes such as Node.js*, provide built-in functions that you can call at any time. Functions go by different names in different programming languages, such as method, subroutine, and procedure.

Libraries are collections of functions. For example, sqrt(x) is a mathematical function declared in the C++ math.h (or <cmath>) header; it computes the square root of a given number. The same library includes many other functions, such as logarithms (log10) and exponentials (exp). For Internet of Things (IoT) projects, two libraries are particularly useful: Libmraa* (MRAA) and Useful Packages & Modules (UPM). When working with hardware, you need tools to communicate with the various parts of your board and its connected sensors. These libraries come with definitions for the various sensors and are compatible across multiple boards.
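To make the idea concrete, here is a minimal Python sketch of calling the same kinds of math-library functions mentioned above, instead of reimplementing them yourself:

```python
import math

# Call standard-library functions rather than writing your own.
print(math.sqrt(16))    # square root of 16 -> 4.0
print(math.log10(100))  # base-10 logarithm of 100 -> 2.0
print(math.exp(0))      # e raised to the power 0 -> 1.0
```

The caller never needs to know how the square root is computed; supplying the required input is enough to get the desired output.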

Libmraa*

If you want to use Libmraa*, you must first install it on your Intel® Galileo or Intel® Edison board. The library enables you to interact with the board through the functions it provides. Libmraa has defined configurations for each board and its components. It provides an application programming interface (API) for interfacing with low-level peripherals such as general-purpose input/output (GPIO), pulse-width modulation (PWM), Bluetooth* low energy, inter-integrated circuit (I2C), serial peripheral interface (SPI), and Universal Asynchronous Receiver/Transmitter (UART). The physical pins of chips and sensors map to Libmraa pin numbers, so you don’t need to know the details of how communication between the various components happens: the library takes care of which board and breakout pins are connected. Figure 1 illustrates the pins located on the Intel Edison board, and Table 1 lists a small subset of them. Note that the I2C pin is physical pin J17-pin 8, which MRAA maps to pin 7, and the PWM pin shown is physical pin J18-pin 7, which MRAA maps to pin 20. When you program an MRAA pin, it communicates with its mapped physical pin on the board. You can see the full breakout board and pins in the paper Intel® Edison Breakout Board.

Figure 1. The Intel® Edison board, with a subset of its pins

Intel® Edison board

Table 1. Subset of pins on the Intel® Edison board

Pin            Signal            Description
J17 - pin 1    GP182_PWM2        GPIO capable of PWM output
J17 - pin 5    GP135, UART2_TX   GPIO, UART2 transmit output
J17 - pin 8    GP20, I2C1_SDA    GPIO, I2C1 data open collector
J18 - pin 7    GP12_PWM0         GPIO capable of PWM output
J18 - pin 8    GP183_PWM3        GPIO capable of PWM output
J18 - pin 12   GP129, UART1_RTS  GPIO, UART1 ready-to-send output
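To illustrate the physical-to-MRAA mapping described above, here is a small Python sketch. The dictionary and helper function are hypothetical (not part of Libmraa); the two entries are the mappings given in the text (J17-pin 8 → MRAA 7, J18-pin 7 → MRAA 20):

```python
# Illustrative only: a toy physical-pin -> MRAA-pin lookup table.
# The two entries below come from the article; this is not a full map.
PHYSICAL_TO_MRAA = {
    "J17-pin 8": 7,   # GP20, I2C1_SDA
    "J18-pin 7": 20,  # GP12_PWM0
}

def mraa_pin(physical_pin):
    """Return the MRAA pin number for a physical breakout pin."""
    return PHYSICAL_TO_MRAA[physical_pin]

print(mraa_pin("J17-pin 8"))  # -> 7
```

In real code, Libmraa performs this translation internally; you only supply the MRAA pin number.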

Within Libmraa are multiple API classes, each with numerous functions that you can use for your IoT projects. Here is a small list of those functions:

GPIO class. GPIO interface to Libmraa; functions include:

  • mraa_gpio_init_raw function
  • mraa_gpio_context
  • mraa_gpio_dir

I2C class. I2C to Libmraa; functions include:

  • mraa_i2c_init
  • mraa_i2c_read

AIO class. Analog input/output (AIO) interface to Libmraa; functions include:

  • mraa_aio_init
  • mraa_aio_set_bit

PWM class. PWM interface to Libmraa; functions include:

  • mraa_pwm_init_raw
  • mraa_pwm_period_ms

SPI class. SPI to Libmraa; functions include:

  • mraa_spi_write
  • mraa_spi_transfer_buf
  • mraa_spi_bit_per_word

UART class. UART to Libmraa; functions include:

  • mraa_uart_set_baudrate
  • mraa_uart_set_flowcontrol
  • mraa_uart_set_timeout

COMMON class. Defines the basic shared values for Libmraa; functions include:

  • mraa_adc_raw_bits
  • mraa_get_i2c_bus_count
  • mraa_get_platform_type

For a full list and description of functions that each class supports, see mraa documentation.

To use all the functionality that MRAA provides, you must include mraa.h in your code. Figure 2 shows a C example for connecting an LED to D5 on your board and using GPIO functions. The outcome is a blinking LED; the sleep function controls how often the LED will be in the On and Off states.

Figure 2. C example using MRAA and general-purpose input/output

C example using MRAA
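MRAA also ships Python bindings, so the same blink logic can be sketched in Python. The snippet below is a hedged sketch, not the figure’s exact code: it falls back to a stub Gpio class when the mraa module is not installed, so it can run without hardware. Pin 5 and the sleep interval are assumptions taken from the description above.

```python
import time

try:
    import mraa  # real bindings, available on the board
except ImportError:
    class _StubGpio:
        """Minimal stand-in so the sketch runs without hardware."""
        def __init__(self, pin):
            self.pin, self.state = pin, 0
        def dir(self, _direction):
            pass  # a real Gpio configures the pin direction here
        def write(self, value):
            self.state = value
    class mraa:  # mimic the module's names used below
        DIR_OUT = 1
        Gpio = _StubGpio

led = mraa.Gpio(5)     # LED connected to D5
led.dir(mraa.DIR_OUT)  # configure the pin as an output

state = 0
for _ in range(4):       # blink a few times
    state = 1 - state    # toggle between On (1) and Off (0)
    led.write(state)
    time.sleep(0.05)     # controls how long the LED stays On or Off
```

On a real board the loop would run indefinitely and the sleep duration sets the blink rate.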

Useful Packages & Modules

UPM provides software drivers for a variety of commonly used sensors and actuators. Drivers enable the hardware to communicate with the operating system. UPM is a level above MRAA and assists with object control of elements such as RGB LCDs and temperature sensors. The list of supported sensors is vast and includes accelerometers, buttons, color sensors, gas sensors, global positioning system (GPS) receivers, radio-frequency identification (RFID) readers, and servos. See the full list.

Figure 3 illustrates how you can use the UPM library to read the input of a button. First, include the header grove.hpp, which lets you use the upm::GroveButton class to create an object for the button and later call its name() and value() functions to obtain the desired values from the sensor. See additional UPM examples.

Figure 3. Useful Packages & Modules example, with definitions for functions and libraries

Useful Packages & Modules
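UPM likewise provides Python bindings. Here is a hedged Python sketch of the same button read, with a stub class so it runs off-board; the digital pin number (0) is an assumption, not taken from the figure:

```python
try:
    from upm import pyupm_grove as grove  # real UPM bindings on the board
except ImportError:
    class _StubButton:
        """Stand-in for the UPM GroveButton so the sketch runs anywhere."""
        def __init__(self, pin):
            self._pin = pin
        def name(self):
            return "Button"
        def value(self):
            return 0  # pretend the button is not pressed
    class grove:  # mimic the module name used below
        GroveButton = _StubButton

button = grove.GroveButton(0)  # button wired to digital pin 0
print("%s value is %d" % (button.name(), button.value()))
```

On real hardware, value() returns 1 while the button is pressed and 0 otherwise.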

Integrated development environments (IDEs) such as Eclipse* and Intel® XDK IoT Edition come with MRAA and UPM as integrated libraries. The supported IDEs are available for download from Intel® IoT Developer Kit Integrated Development Environments. When using Eclipse, you can choose to create a new IoT project that will include a couple of tools not available in other types of projects (Figure 4).

Figure 4. Creating a new Internet of Things project in Eclipse*

Creating new IoT project

The IoT Sensor Support tool, shown in Figure 5, contains the list of sensors and actuators, with a description and picture of the hardware. Simply select all the hardware you will include in your project, and the tool automatically adds the related libraries to your code. For example, if you will be using the MMA7455 accelerometer, the tool appends #include <mma7455.h> to your current .cpp code.

Figure 5. Internet of Things sensors and libraries included in Eclipse*

IoT sensors and libraries

Libmraa and UPM work hand-in-hand to provide the tools you need to interact easily with the elements in your project. They remove the hassle of knowing the pins and requirements of each piece of hardware, packaging all the specifics into friendly, easy-to-use libraries. From prototyping an automatic pet feeder to creating a wearable that uses GPS, an accelerometer, and an RGB LCD to track your movements, you can develop your projects with the help of MRAA and UPM.

Both libraries are open source projects, with dedicated GitHub* repositories (see the upm and mraa repositories). Developers are welcome to assist with enhancing and writing new APIs and functions, and reviewing documentation. Contributors are required to follow a coding and documentation style. Thanks to the open source community’s efforts, new functions and libraries are added frequently for various coding languages.

Summary

Libraries and their functions help with cleaner, more efficient code that you can reuse. For built-in functions, you don’t need to worry about how the computation is performed: As long as you implement the function with the required input, it will generate the desired output. For your IoT projects, Libmraa and UPM are essential libraries.

For More Information

Getting Started with Galileo and Arduino


 

This article describes how to get started with the Intel® Galileo board and the Arduino* IDE.

If you prefer developing with Java*, JavaScript*, or C++, see “Programming Options,” below.

Hardware requirements

  • An Intel® Galileo Gen 1 or Gen 2 board.
  • One power supply for the Galileo board, either 5V DC (for Gen 1) or 7–15V DC (for Gen 2). Your power supply should be included in the packaging along with your board.
  • One micro-USB cable (Micro B to Type A).
  • A Windows*, Mac* OS X*, or Linux* computer.

Set up the board

  1. If you have a micro-SD card inserted into your board, remove it.
  2. Plug in the power supply to your board. Always plug in the power supply before the USB cable.
  3. Plug in the micro-USB cable to your board's USB client port. Plug the other end in to your computer.

On the Intel® Galileo Gen 1 board, your setup should look like this:

Intel® Galileo Gen 1 board

On the Intel® Galileo Gen 2 board, your setup should look like this:

Intel® Galileo Gen 2 board

Next Steps

Your next steps depend on the OS you are using: 64-bit Windows*, 32-bit Windows*, OS X*, or Linux*.

For Mac* OS X* or Linux*

You can skip to the next section; there’s no further setup required.

For 64-bit Windows*

To install the drivers required by the Arduino* IDE, run the Galileo Windows 64-bit Arduino installer, which is here: https://software.intel.com/en-us/iot/hardware/galileo/downloads, listed under Installer.

The installer includes a step to create a micro SD card. This step is optional; create a micro SD card only if you want to set up Wi-Fi driver support and sketch persistence on the board.

When finished, continue to the next section.

For 32-bit Windows*

To install the drivers required by the Arduino IDE, follow the instructions to install and run the Firmware Updater tool, listed here: https://software.intel.com/en-us/installing-drivers-and-updating-firmware-for-arduino-windows.

When finished, continue to the next section.

Installing the Arduino* IDE

Install the Arduino IDE for your OS from the appropriate section on this page: https://software.intel.com/en-us/get-started-arduino-install.

When finished, continue to the next section.

Blinking the board’s LED with the Arduino* IDE

To blink the LED on your Galileo board, run the Blink example sketch in the Arduino IDE, as described in the Running Arduino section on this page: https://software.intel.com/en-us/get-started-arduino-blink.

When the LED blinks, you’re done, and have successfully set up your Galileo board.

Programming options

You can program your Galileo board using C++ or Java* with the Intel® System Studio IoT Edition, or using JavaScript with the Intel® XDK IoT Edition.

The Intel® System Studio IoT Edition is a plugin for Eclipse* that supports Java and C/C++ projects. It allows you to connect to, update, and program IoT projects on the Galileo board, as well as add sensors to a project, take advantage of the example code provided with the Intel System Studio IoT Edition, and more.

The Intel System Studio IoT Edition provides two libraries, specially designed for the Intel IoT Developer Kit:

  • MRAA is a low-level library that offers a translation from the input/output interfaces to the pins available on your Galileo board.
  • UPM is a sensor library that utilizes MRAA and provides multiple-language support. UPM allows you to conveniently use or create sensor representations for your projects.

The Intel® XDK IoT Edition supports embedded JavaScript projects with Node.js*, and provides a comprehensive, cross-platform development environment for developing and building hybrid HTML5 mobile and web apps.

Make a bootable micro SD card

A bootable micro SD card is required for using the Galileo board with the Intel® System Studio IoT Edition or the Intel® XDK IoT Edition.

Depending on your OS, follow the instructions for making a bootable micro SD card.

Intel XDK Supported Systems


Intel® XDK - Release - 2016 August 2, v3491

Intel XDK is a cross platform development environment.
Develop applications on Microsoft Windows*, Apple OS X* and Ubuntu Linux*.
Build apps to target Android*, iOS*, Windows or Web Apps.
Access device capabilities with Apache Cordova* APIs.
Choose Crosswalk Project web runtime on Android for an updated webview.

Supported Versions

Development systems

OS                   Version
Microsoft Windows    Desktop 7, 8, and 10 [3]
Apple OS X           10.7 (Lion) or newer [1]
Ubuntu Linux         14.04 (Trusty Tahr) or newer [2]

 

App Targets

OS         Version
Android    API Level 14 (Ice Cream Sandwich 4.0.1) or newer [4]
iOS        6.0 to 9.2 [5]
Windows    Windows 10 UAP [6], Windows 8 [7], and Windows Phone 8.1 [8]
Web App    Browser support for HTML5 [9]

 

Crosswalk runtime

Crosswalk Project runtime, Android Embedded build    Release 14, 15, or 16 (=> Chromium versions 43, 44, or 45) [12]
Crosswalk Project runtime, Android Shared build      Release 17 (=> Chromium version 46) [12]

 

Cordova API: CLI and "pinned" framework [13]

Cordova CLI                Android    iOS      Windows
5.1.1 [11] [Deprecated]    4.1.1      3.8.0    4.0.0
5.4.1 [10a,b]              5.0.0      4.0.1    4.3.1
6.2.0 [14]                 5.1.1      4.1.1    4.3.2

 

References:

[1] Apple OS X versions- https://en.wikipedia.org/wiki/OS_X#Versions

[2] Ubuntu Linux versions- https://wiki.ubuntu.com/Releases

[3] Windows versions- https://en.wikipedia.org/wiki/List_of_Microsoft_Windows_versions#Client_versions

[4] Android Versions dashboard - https://source.android.com/source/build-numbers.html

[5] iOS versions - https://en.wikipedia.org/wiki/IOS_version_history

[6] Windows 10 Universal App Platform - https://en.wikipedia.org/wiki/Universal_Windows_Platform

[7] Windows 8 - http://windows.microsoft.com/en-us/windows-8/apps-windows-store-tutorial

[8] Windows Phone 8.1 - https://www.windowsphone.com/en-US/features-8-1

[9] HTML5 Browser Support, Relies on Browser support of HTML5 features, see http://caniuse.com/

[10a] Apache Cordova version 5.4.0 - https://cordova.apache.org/docs/en/5.4.0/guide/overview/index.html

[10b] Apache Cordova version 5.4.1 RN - http://cordova.apache.org/news/2015/11/24/tools-release.html

[11] Apache Cordova version 5.1.1 - https://cordova.apache.org/docs/en/5.1.1/guide/overview/index.html

[12] Crosswalk Release and Chromium Version - https://github.com/crosswalk-project/crosswalk-website/wiki/Release-dates

[13] FAQ, Understanding Cordova CLI and Pinned versions https://software.intel.com/en-us/xdk/faqs/cordova#cordova-version

[14] Apache Cordova version 6.2.1  - '6.x, Latest',  https://cordova.apache.org/docs/en/latest/index.html

 

Intel® Showcases New Talent; Drexel University Punches Through The Competition


Download PDF

Each year, the Intel® University Games Showcase (IUGS) is the place to be to get a first look at some of the most innovative interactive entertainment being developed today. Held in conjunction with the Game Developers Conference (GDC), the competition is intense, with teams from the top ten academic game-developer programs in the U.S. competing for recognition, bragging rights, and $35,000 in hardware prizes.

The third annual showcase, held earlier this year, saw the Best Gameplay award go to Mirrors of Grimaldi, a local multiplayer game developed by the talented 51st and Fire team from Drexel University. A supportive audience, comprising nearly 500 industry professionals, students, the press, and media influencers, was on hand to cheer and encourage. The competition was judged by leading lights from the games industry.

For the Drexel team, the key to winning their category was harnessing the benefits of ideation to come up with a fresh and novel approach to a familiar game concept, namely surviving an attack by a horde of henchmen. This article details the team’s journey from initial concept, through development, to a fully-realized, award-winning game.

Reflecting On Grimaldi

At first glance, Mirrors of Grimaldi looks like a conventional, four-player, split-screen game. (The name, incidentally, references Joseph Grimaldi, a popular English actor of the Regency era who singlehandedly defined our modern image of a clown). Using a medieval, demonic carnival as the backdrop, players are attacked by swarms of evil minions that refuse to die. Your only option to stay alive is to punch your attackers out of your screen and into an opponent’s. But then something unexpected happens.


Mirrors of Grimaldi’s dynamic split screen

As your character’s health waxes or wanes, your screen size expands or shrinks in proportion. This leads to an intriguing dynamic. Becoming weaker risks having the screen collapse around your character, ejecting you from the game. But, conversely, a smaller screen also makes it easier to punch an enemy minion into a neighboring screen, threatening one of your opponents. “We tried to develop gameplay mechanics that use varying screen sizes not just as a feature, but as the principal mechanic of the game,” explained Andrew Lichtsinn, producer of Mirrors of Grimaldi.

In effect, the screen becomes an integral component of the game–“friend or foe” as Lichtsinn describes it–directly affecting how players position themselves, and how they assess threats both on and off their individual screens.

The team that would become 51st and Fire originally coalesced towards the end of the spring term of 2015. After batting several ideas around throughout the summer, the team officially formed in September as part of their senior project in the Digital Media Program. Before reaching the IUGS, however, they had to first get past their fellow classmates. “Drexel hosts an internal competition every year preceding the Games Showcase,” explained Dr. Jichen Zhu, Assistant Professor in the Digital Media Program. Having competed the previous year as a junior, Lichtsinn made it a goal to reach the IUGS in 2016.

Initially consisting of a core group of six members, including a producer, art director, and programmers and artists, the team further reached out to animator Alison Friedlander, as well as programmer Alex Hollander from the College of Computing and Informatics. The team further consulted with an experienced sound designer. “It was a very interdisciplinary team,” noted Zhu.


Drexel’s 51st and Fire team members: (standing from left) Andrew Lichtsinn, Alison Friedlander, Patrick Bastian, Boyd Fox, Evan Freed, Tom Trahey (front, from left) Steven Yaffe and Alex Hollander. They are joined here by Dr Jichen Zhu (standing, far right)

Ideating Innovation

Ideation was key to developing the core ideas behind Mirrors of Grimaldi’s innovative gameplay. The process began with a series of brainstorming sessions, during which time the team developed more than a dozen roughly hewn game ideas, that they would then share with Zhu. Most failed to impress. But at one point, someone on the team proposed a split-screen approach that was quickly mocked up in Adobe* Photoshop*. “It wasn’t even close to how the game would eventually appear, but Professor Zhu reacted so positively that we knew we were onto something,” recalled Lichtsinn.

From the beginning, Zhu underscored the need to constantly evaluate project scope, especially given the relatively short development window available to the team. At the same time, the team was aware that adopting a split-screen scheme meant more than just creating interesting “visual eye candy,” as Lichtsinn described it. As the ideation evolved, Lichtsinn found everybody on the team contributing core concepts, and helping to develop the game organically.

For instance, one person came up with the idea of punching minions between screens, while someone else hit upon the notion that the minions should never expire. Other team members then suggested random global events, which automatically activate whenever screens haven’t fluctuated enough over a period of time. “The game was a conglomerate of a lot of brainstorming in front of a big whiteboard,” explained Lichtsinn.

Interestingly, Lichtsinn attributes the strength of the gameplay to the fact that there wasn’t a preconceived story idea or theme. “Professor Zhu repeatedly stressed initially keeping story out of the gameplay so we wouldn’t feel constrained,” recalled Lichtsinn. When it came time to craft the surrounding narrative, the idea of a demonic or creepy carnival was immediately popular with everybody on the team. From this, the hall of mirrors grew as a natural metaphor, as the four players essentially progress through mirrored, parallel environments.

In the early stages of the design process, the biggest hurdle turned out to be gesture controls. First and foremost, the team wanted the main action within the game—punching minions—to be an intuitive, fun, and challenging gesture, instead of a button-press. At the same time, they didn’t want a game that had a slippery slope when players start to lose. The solution was to have players charge a punch by rotating the stick, and then flicking in the direction of the punch; larger screens would then necessitate heftier punches with longer charging times. “We wanted players who were winning to have to work a bit harder to keep their lead,” explained Lichtsinn.

For the most part, ideation proceeded smoothly, with few disagreements. “We designed the essence of the game early enough that there wasn’t much disagreement about features,” recalled Lichtsinn. The more contentious questions were about the art, with several competing preferences. For instance, early in development, the team had the idea of using four different character styles, each representing a typical profession of the period. “We ended up having to cut that, because we simply didn't have enough time,” lamented Lichtsinn.

In many cases, final decisions were made either by Lichtsinn or art director Evan Freed. More complex decisions, however, went to a team vote—though that didn’t happen very often. The most notable instance was over the issue of whether the characters should carry weapons, or just punch with their fists. “That one had to go to a vote,” recalled Lichtsinn. “But since our team could fit into a small room, we never got gridlocked, or had to stop production because of competing ideas.”

Crafting With Unity

There was even less contention in the choice of development tools: the Drexel team selected the Unity* engine and used C# for all the programming. “For us, it was a pretty easy choice,” noted Lichtsinn. Not only was a free version of Unity available—Unreal* Engine didn’t offer a similar edition at the time—but since the programmers already had considerable experience with Unity and C#, they felt confident about hitting milestones on time. Moreover, Unity allowed the team to implement a feature and then play it in the editor without forcing a compile, significantly speeding development.

The team adopted an agile system with two-week sprints, and followed each full build with a comprehensive play-test. “We would open the door and invite passing students to try the game,” remembered Lichtsinn. The team also took advantage of a weekly meeting of gaming enthusiasts in Philadelphia called Philly Dev Night, which is affiliated with the Philly Game Forge community. “We collected a lot of data there about what people enjoyed, and what needed improvement,” noted Lichtsinn. “That informed our decisions about what to include in upcoming sprints.”


Mirrors of Grimaldi with four parallel environments

Early on, the team identified rendering as a potential roadblock. While most games only need to render a single camera view, Mirrors of Grimaldi essentially needed to draw four environments at the same time, all while maintaining an acceptable frame-rate. The solution was to narrow the range of textures to four atlases, allowing the game to only have to load four texture files, significantly enhancing performance. Another challenge involved the minion enemies. A strictly AI-driven approach across four environments risked overloading the CPU. Instead, the team adopted a commonly used strategy of pre-rendering much of the AI-based behavior when the minions were offscreen.
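The offscreen-throttling idea is a general pattern. Here is a hedged Python sketch of it; the class name, method names, and the update-skip interval are invented for illustration and are not taken from the team's Unity/C# code:

```python
class Minion:
    """Toy enemy whose expensive AI update we want to throttle."""
    def __init__(self):
        self.ticks = 0  # counts how many full AI updates ran

    def full_ai_update(self):
        self.ticks += 1  # expensive pathfinding/decision logic would go here

def update_minions(minions, frame, offscreen_interval=10):
    """Run full AI every frame on-screen, but only every Nth frame off-screen."""
    for minion, on_screen in minions:
        if on_screen or frame % offscreen_interval == 0:
            minion.full_ai_update()

# Simulate 100 frames: the off-screen minion does ~1/10th of the work.
on, off = Minion(), Minion()
minions = [(on, True), (off, False)]
for frame in range(100):
    update_minions(minions, frame)
print(on.ticks, off.ticks)  # -> 100 10
```

The same effect can be achieved in Unity by lowering the update rate of AI components whose renderers are not visible to any camera.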

These solutions, coupled with custom graphic optimizations done entirely within the Unity framework, were all it took to make Mirrors of Grimaldi shine on the benchmark Intel® graphics-powered laptop. (These computers were supplied by Intel to all participants in the competition.) But the team didn’t stop there, going so far as to test against what Lichtsinn described simply as “a really old laptop” that was unearthed by a team member. “Almost everything we did was developed in-house, allowing our programmers to focus on optimization as needed,” explained Lichtsinn. “This allowed us to make sure that the game could run on as many platforms as possible, even on relatively old systems.”

During all this, time was an ever-present issue; both while readying for the internal competition and when approaching the IUGS. Initially, there was a mad dash to get as much art and features into the game as possible to show to the faculty. “But then, just as we were able to let out a sigh of relief for winning at Drexel, we realized that we had only three more weeks until we had to do it again at the IUGS,” recalled Lichtsinn.

The key to success was identifying the essential elements that showed Mirrors of Grimaldi’s truest value. “I think we definitely hit all those points,” summarized Lichtsinn. “The team really focused and re-focused, making sure the unique gameplay was always at the center of the entire experience,” added Zhu.

Taking It To The Next Level

For the Drexel team, winning the Best Gameplay category at the 2016 Intel University Games Showcase was deeply gratifying. “It meant a lot, and we were all pretty shocked when it happened,” said Lichtsinn. “We felt that we had come up with a great idea, but you never really know until you actually build the game, and see people playing it and having fun.”

The team continues to polish and enhance Mirrors of Grimaldi, and recently made it available on Steam* Greenlight. The plan is to complete final development in 2017, with new maps and game modes, among other features, for distribution in the Steam Store, pricing the game competitively to spur interest. At that point, based on feedback, the team could consider any of several options, including porting to other platforms such as Sony* Playstation* or Microsoft* Xbox*.

In the meantime, the team members take pleasure in knowing that they were able to come up with a stylish game that’s not only fun to play, but also brings something new to the genre. “Most everyone gets at least a little startled when they first see the dynamic split-screen starting to move. That’s great,” enthused Lichtsinn.

SIDEBAR

Tips From The Drexel Team For Pushing The Innovation Envelope

  • Encourage all team members, irrespective of role, to participate in the ideation process.
  • Think outside the box, literally and figuratively, when developing within a well-established genre.
  • When developing new features, consider how to make the enhancements integral to the overall mechanics and gameplay.
  • Games are products, and products have deadlines. Pay close attention to how new features affect the scope of the product.
  • Choose a development environment that matches your team’s skills, and is in sync with your development methodologies.
  • Don’t be afraid to try to startle and delight your audience.

 

Resources

Intel University Games Showcase 2016 Overview

Intel University Games Showcase 2016 Results

Drexel University Digital Media Program Web Site

Mirrors of Grimaldi Steam Greenlight page

51st and Fire Web Site

Unity Engine Web Site

Intel® Parallel Studio XE 2017 Composer Edition BETA Fortran - Debug Solutions Release Notes


This page provides the current Release Notes for the Debug Solutions from Intel® Parallel Studio XE 2017 Composer Edition BETA Update 1 for Fortran Linux*, Windows* and OS X* products.

To get product updates, log in to the Intel® Software Development Products Registration Center.

For questions or technical support, visit Intel® Software Products Support.

For the top-level Release Notes, visit:

Table of Contents:

Change History

This section highlights important changes from the previous product version and changes in product updates.

Changes since Intel® Parallel Studio XE 2017 Composer Edition BETA

  • Fortran Expression Evaluator (FEE):
    • Added support for displaying extended types that are parameterized
    • Added the ability in FEE to change the format in which a value is displayed in the debugger windows (e.g., Watch, Immediate) by using format specifiers. The supported format specifiers are "x", "s", "d", "o", "c", "e", "g", and "f".
    • Display of array dimensions in the "Value" column of the Watch and Locals views and in array tooltips.
    • Added support for viewing a particular element of a data structure across an array of such structures (e.g., students(1:100:1)%name). Value assignment to this type of expression is not supported, though.
    • Modified to display character variable data that contain nulls. Editing of null-containing strings is disabled in the Watch and Locals views. Editing in the memory window is still possible.

Changes since Intel® Parallel Studio XE 2016 Composer Edition

  • Simplified Eclipse* plug-in
  • Support for Intel® Xeon Phi™ coprocessor & processor X200 offload debugging
  • Shipping GNU* Project Debugger (GDB) 7.10 (except for Intel® Debugger for Heterogeneous Compute 2017)
  • Improved Fortran Variable Length Array support for GNU* Project Debugger

Product Contents

  • Linux*:
    • GNU* Project Debugger (GDB) 7.10:
      Command line for host CPU and Intel® Xeon Phi™ coprocessor, and Eclipse* IDE plugin for offload enabled applications.
  • OS X*:
    • GNU* Project Debugger (GDB) 7.10:
      Command line for CPU only.
  • Windows*:
    • Intel® Debugger Extension for Intel® Many Integrated Core Architecture (Intel® MIC Architecture)
    • Fortran Expression Evaluator (FEE) as extension to debugger of Microsoft Visual Studio* 

GNU* GDB

This section summarizes the changes, new features, customizations and known issues related to the GNU* GDB provided with Intel® Parallel Studio XE 2017 Composer Edition.
 

Features

GNU* GDB provided with Intel® Parallel Studio XE 2017 Composer Edition and above is based on GDB 7.10 with additional enhancements provided by Intel. This debugger replaces the Intel® Debugger from previous releases. In addition to features found in GDB 7.10, there are several other new features:
  • Intel® Processor Trace (Intel® PT) support for 5th generation Intel® Core™ Processors:
    (gdb) record btrace pt
  • Support for Intel® Many Integrated Core Architecture (Intel® MIC Architecture) of Intel® Xeon Phi™ coprocessor X100
  • Support for Intel® Xeon Phi™ coprocessor & processor X200
  • Support for Intel® Transactional Synchronization Extensions (Intel® TSX) (Linux* and OS X*)
  • Register support for Intel® Memory Protection Extensions (Intel® MPX) and Intel® Advanced Vector Extensions 512 (Intel® AVX-512)
  • Data Race Detection (pdbx):
    Detect and locate data races for applications threaded using POSIX* thread (pthread) or OpenMP* models
  • Branch Trace Store (btrace):
    Record branches taken in the execution flow to backtrack easily after events like crashes, signals, exceptions, etc.
All features are available for Linux*, but only Intel® TSX is supported for OS X*.
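As an illustration, a typical branch-trace session might look like the following (a hedged sketch; the function names shown are placeholders and the exact output varies by GDB build and target):

```
(gdb) record btrace
(gdb) continue
...
(gdb) record function-call-history
1  main
2  compute
...
```

The recorded history lets you backtrack through the execution flow after a crash or signal without re-running the program.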
 

Using GNU* GDB

GNU* GDB provided with Intel® Parallel Studio XE 2017 Composer Edition comes in different versions:
  • IA-32/Intel® 64 debugger:
    Debug applications natively on IA-32 or Intel® 64 systems with gdb-ia on the command line.
    A standard Eclipse* IDE can be used for this as well if a graphical user interface is desired.
  • Intel® Xeon Phi™ coprocessor debugger (only for Linux*):
    Debug applications remotely on Intel® Xeon Phi™ coprocessor systems. The debugger will run on a host system and a debug agent (gdbserver) on the coprocessor.
    There are two options:
    • Use the command line version of the debugger with gdb-mic.
      This only works for native Intel® Xeon Phi™ coprocessor X100 applications. For Intel® Xeon Phi™ coprocessor & processor X200 use gdb-ia.
      A standard Eclipse* IDE can be used for this as well if a graphical user interface is desired.
    • Use an Eclipse* IDE plugin shipped with Intel® Parallel Studio XE 2017 Composer Edition.
      This works only for offload enabled Intel® Xeon Phi™ coprocessor applications. Instructions on how to use GNU* GDB can be found in the Documentation section.
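For the remote command line workflow, the debug agent runs on the coprocessor while the debugger runs on the host. A session sketch for a native X100 application (host name, port, and application paths are hypothetical):

$ ssh mic0 gdbserver :2000 /tmp/myapp
$ gdb-mic /path/to/myapp
(gdb) target remote mic0:2000
(gdb) continue

For an X200 target, gdb-ia would be used on the host instead, as noted above.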

Documentation

The documentation for the provided GNU* GDB can be found here:
<install-dir>/documentation_2017/en/debugger/gdb-ia/gdb.pdf
<install-dir>/documentation_2017/en/debugger/gdb-mic/gdb.pdf
<install-dir>/documentation_2017/en/debugger/ps2016/get_started.htm

The latter is available online as well:

Known Issues and Changes

Not found: libncurses.so.5

On some systems (e.g., Fedora 24 and 25), the GNU* GDB version provided by Intel fails to start due to a missing libncurses.so.5. Please install the package ncurses-compat-libs, which provides the missing library.

Not found: libtinfo.so.5

On some systems (e.g., SLES 11 SP3), the GNU* GDB version provided by Intel fails to start due to a missing libtinfo.so.5. If a package for libtinfo is not available, the following workaround can be applied:

$ sudo ln -s <path>/libncurses.so.5.6 <path>/libtinfo.so.5

As <path>, use the location of the system's ncurses library.
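To determine <path>, the dynamic linker cache can be queried, for example:

$ ldconfig -p | grep libncurses.so.5
        libncurses.so.5 (libc6,x86-64) => /usr/lib64/libncurses.so.5

In this example output, /usr/lib64 would be used as <path>; the exact location and library file name vary by distribution.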

Safely ending offload debug sessions

To avoid issues like orphan processes or stale debugger windows when ending offload applications, manually end the debugging session before the application reaches its exit code. The following procedure is recommended for terminating a debug session:
  1. Manually stop the debug session before the application reaches its exit code.
  2. When stopped, press the red stop button in the toolbar of the Intel® MIC Architecture-side debugger first. This ends the offloaded part of the application.
  3. Next, do the same for the CPU-side debugger.
  4. The link between the two debuggers is kept alive: the Intel® MIC Architecture-side debugger stays connected to the debug agent, and the application remains loaded in the CPU-side debugger, including all breakpoints that have been set.
  5. At this point, both debugger windows can safely be closed.

Intel® MIC Architecture-side debugger asserts on setting source directories

Setting source directories in GNU* GDB might lead to an assertion.
Resolution:
The assertion should not affect debugger operation. To avoid it entirely, do not use source directory settings; the debugger will prompt you to browse for files it cannot locate automatically.

Debugger and debugged application required to be located on local drive (OS X* only)

In order to use the provided GNU* GDB (gdb-ia), it has to be installed on a local drive. As such, the entire Intel® Parallel Studio XE 2017 package has to be installed locally. Any application being debugged needs to be located on a local drive as well. This is a general requirement inherent to GNU GDB on OS X*.

Debugging Fortran applications with Eclipse* IDE plugin for Intel® Xeon Phi™ coprocessor

If the Eclipse* IDE plugin for the Intel® Xeon Phi™ coprocessor is used for debugging Fortran applications, evaluation of arrays in the locals window might be incorrect. The underlying CDT applies the C/C++ syntax with brackets to arrays to retrieve their contents. This does not work for Fortran.
Solution: Use a fully qualified Fortran expression to retrieve the contents of arrays (e.g. with array sections like array(1:10)).
Intel® Debugger Extension for Intel® Many Integrated Core Architecture

This section summarizes new features, changes, usage, and known issues related to the Intel® Debugger Extension. This debugger extension only supports code targeting Intel® Many Integrated Core Architecture (Intel® MIC Architecture).
 

Features

  • Support for both native Intel® Xeon Phi™ coprocessor applications and host applications with offload extensions
  • Debug multiple Intel® Xeon Phi™ coprocessors at the same time (with offload extension)

Using the Intel® Debugger Extension

The Intel® Debugger Extension is a plug-in for the Microsoft Visual Studio* IDE. It transparently enables debugging of projects defined by that IDE. Applications for Intel® Xeon Phi™ coprocessors can be either loaded and executed or attached to. This extension supports debugging of offload enabled code, using:
  • Microsoft Visual Studio* 2012
  • Microsoft Visual Studio* 2013
  • Microsoft Visual Studio* 2015

Documentation

The full documentation for the Intel® Debugger Extension can be found here:
<install-dir>\documentation_2017\en\debugger\ps2017\get_started.htm

This is available online as well:

Known Issues and Limitations

  • Disassembly window cannot be scrolled outside of 1024 bytes from the starting address within an offload section.
  • Handling of exceptions from the Intel® MIC Architecture application is not supported.
  • Starting an Intel® MIC Architecture native application is not supported. You can attach to a currently running application, though.
  • The Thread Window in Microsoft Visual Studio* offers context menu actions to Freeze, Thaw and Rename threads. These context menu actions are not functional when the thread is on an Intel® Xeon Phi™ coprocessor.
  • Setting a breakpoint immediately before an offload section sets a breakpoint at the first statement of the offload section. This only holds if there is no host statement between the set breakpoint and the offload section. This is normal Microsoft Visual Studio* breakpoint behavior but might become more visible with interwoven code from host and Intel® Xeon Phi™ coprocessor. The superfluous breakpoint for the offload section can be manually disabled (or removed) if desired.
  • Only Intel® 64 applications containing offload sections can be debugged with the Intel® Debugger Extension for Intel® Many Integrated Core Architecture.
  • Stepping out of an offload section does not step back into the host code. It rather continues execution without stopping (unless another event occurs). This is intended behavior.
  • The functionality “Set Next Statement” is not working within an offload section.
  • If breakpoints have been set for an offload section in a project already, starting the debugger might show bound breakpoints without addresses. Those do not have an impact on functionality.
  • For offload sections, breakpoints with the hit-count conditions “break when the hit count is equal to” and “break when the hit count is a multiple of” do not work.
  • The following options in the Disassembly window do not work within offload sections: “Show Line Numbers”, “Show Symbol Names” and “Show Source Code”
  • Evaluating variables declared outside the offload section shows wrong values.
  • Please consult the Output (Debug) window for detailed reporting. It names unimplemented features (see above) and provides additional information on configuration problems in a debugging session. You can open the window in Microsoft Visual Studio* via menu Debug->Windows->Output.
  • When debugging an offload-enabled application and a variable assignment is entered in the Immediate Window, the debugger may hang if assignments read memory locations before writing to them (for example, x=x+1). Please do not use the Immediate Window for changing variable values for offload-enabled applications.
  • Depending on the debugger extensions provided by Intel, the behavior (for example, run control) and output (for example, disassembly) could differ from what is experienced with the Microsoft Visual Studio* debugger. This is because of the different debugging technologies implemented by each and should not have a significant impact to the debugging experience.

Fortran Expression Evaluator (FEE) for debugging Fortran applications with Microsoft Visual Studio*

Fortran Expression Evaluator (FEE) is a plug-in for Microsoft Visual Studio* that is installed with Intel® Visual Fortran Compiler. It extends the standard debugger in Microsoft Visual Studio* IDE by handling Fortran expressions. There is no other change in usability.

Known Issues and Limitations

Microsoft Visual Studio 2013 Shell* does not work

To enable FEE with Microsoft Visual Studio 2013 Shell, you need to move both files ForIntrinsics.dll and ForOps11.dll from:

<Program Files (x86) Directory>\Microsoft Visual Studio 12.0\Common7\IDE\Remote Debugger\x64

to:

<Program Files Directory>\Microsoft Visual Studio 12.0\Common7\IDE\Remote Debugger\x64

After that, restart your Microsoft Visual Studio 2013 Shell to use FEE. This will be fixed in a future update release.

Conditional breakpoints limited

Conditional breakpoints that contain expressions with allocatable variables are not supported for Microsoft Visual Studio 2012* or later.

Debugging might fail when only Microsoft Visual Studio 2013/2015* is installed

For some FEE functionality the Microsoft Visual Studio 2012* libraries are required. One solution is to install Microsoft Visual Studio 2012* in addition to Microsoft Visual Studio 2013/2015*. An alternative is to install the "Visual C++ Redistributable for Microsoft Visual Studio 2012 Update 4" found here.
If you installed Intel® Parallel Studio XE 2017 on a system without any Microsoft Visual Studio* version available, a Microsoft Visual Studio 2013* Shell (incl. libraries) is installed. FEE might not work in that environment; please additionally install the redistributable package mentioned above to enable it. A future update will solve this problem for the installation of the shell.

Debugging mixed language programs with Fortran does not work

To enable debugging Fortran code called from a .NET managed code application in Visual Studio 2012 or later, unset the following configuration:
Menu Tools->Options, under section Debugging->General, clear the "Managed C++ Compatibility Mode" or "Use Managed Compatibility Mode" check box

For any managed code application, one must also check the project property Debug > Enable unmanaged code debugging.

Native edit and continue

With Microsoft Visual Studio 2015*, Fortran debugging of mixed code applications is enabled if "native edit and continue" is enabled for the C/C++ part of the code. In earlier versions this is not supported.

FEE truncates entries in locals window

To increase debugging performance, the maximum number of locals queried by the debug engine is limited with Intel® Parallel Studio XE 2016 and later releases. If a location in the source code has more than that number of locals, they are truncated and a note is shown:

Note: Too many locals! For performance reasons the list got cut after 500 entries!

The threshold can be controlled via the environment variable FEE_MAX_LOCALS. Specify a positive value for the new threshold (default is 500). A value of -1 can be used to turn off truncation entirely (restores previous behavior) - but at the cost of slower debug state transitions. In order to take effect, Microsoft Visual Studio* needs to be restarted.
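For example, to raise the threshold to a (hypothetical) value of 1000, set the variable in the environment from which Microsoft Visual Studio* is started:

> set FEE_MAX_LOCALS=1000
> devenv

Starting Visual Studio from that command prompt picks up the new threshold.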

Problem with debugging C# applications

If Microsoft Visual Studio 2015* is used, debugging of C# applications might cause problems; for example, evaluations like watches won't work. If you experience such issues, try enabling "Managed Compatibility Mode". More details on how to enable it can be found here:
http://blogs.msdn.com/b/visualstudioalm/archive/2013/10/16/switching-to-managed-compatibility-mode-in-visual-studio-2013.aspx

The problem is known and will be fixed with a future version.

Attributions

This product includes software developed at:

GDB – The GNU* Project Debugger

Copyright Free Software Foundation, Inc., 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.

This program is free software; you can redistribute it and/or modify it under the terms and conditions of the GNU General Public License, version 2, as published by the Free Software Foundation.

This program is distributed in the hope it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.

GNU* Free Documentation License

Version 1.3, 3 November 2008

 

Copyright © 2000, 2001, 2002, 2007, 2008 Free Software Foundation, Inc. <http://fsf.org/>

 

Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed.

 

0. PREAMBLE

The purpose of this License is to make a manual, textbook, or other functional and useful document "free" in the sense of freedom: to assure everyone the effective freedom to copy and redistribute it, with or without modifying it, either commercially or noncommercially. Secondarily, this License preserves for the author and publisher a way to get credit for their work, while not being considered responsible for modifications made by others.

This License is a kind of "copyleft", which means that derivative works of the document must themselves be free in the same sense. It complements the GNU General Public License, which is a copyleft license designed for free software.

We have designed this License in order to use it for manuals for free software, because free software needs free documentation: a free program should come with manuals providing the same freedoms that the software does. But this License is not limited to software manuals; it can be used for any textual work, regardless of subject matter or whether it is published as a printed book. We recommend this License principally for works whose purpose is instruction or reference.

 

1. APPLICABILITY AND DEFINITIONS

This License applies to any manual or other work, in any medium, that contains a notice placed by the copyright holder saying it can be distributed under the terms of this License. Such a notice grants a world-wide, royalty-free license, unlimited in duration, to use that work under the conditions stated herein. The "Document", below, refers to any such manual or work. Any member of the public is a licensee, and is addressed as "you". You accept the license if you copy, modify or distribute the work in a way requiring permission under copyright law.

A "Modified Version" of the Document means any work containing the Document or a portion of it, either copied verbatim, or with modifications and/or translated into another language.

A "Secondary Section" is a named appendix or a front-matter section of the Document that deals exclusively with the relationship of the publishers or authors of the Document to the Document's overall subject (or to related matters) and contains nothing that could fall directly within that overall subject. (Thus, if the Document is in part a textbook of mathematics, a Secondary Section may not explain any mathematics.) The relationship could be a matter of historical connection with the subject or with related matters, or of legal, commercial, philosophical, ethical or political position regarding them.

The "Invariant Sections" are certain Secondary Sections whose titles are designated, as being those of Invariant Sections, in the notice that says that the Document is released under this License. If a section does not fit the above definition of Secondary then it is not allowed to be designated as Invariant. The Document may contain zero Invariant Sections. If the Document does not identify any Invariant Sections then there are none.

The "Cover Texts" are certain short passages of text that are listed, as Front-Cover Texts or Back-Cover Texts, in the notice that says that the Document is released under this License. A Front-Cover Text may be at most 5 words, and a Back-Cover Text may be at most 25 words.

A "Transparent" copy of the Document means a machine-readable copy, represented in a format whose specification is available to the general public, that is suitable for revising the document straightforwardly with generic text editors or (for images composed of pixels) generic paint programs or (for drawings) some widely available drawing editor, and that is suitable for input to text formatters or for automatic translation to a variety of formats suitable for input to text formatters. A copy made in an otherwise Transparent file format whose markup, or absence of markup, has been arranged to thwart or discourage subsequent modification by readers is not Transparent. An image format is not Transparent if used for any substantial amount of text. A copy that is not "Transparent" is called "Opaque".

Examples of suitable formats for Transparent copies include plain ASCII without markup, Texinfo input format, LaTeX input format, SGML or XML using a publicly available DTD, and standard-conforming simple HTML, PostScript or PDF designed for human modification. Examples of transparent image formats include PNG, XCF and JPG. Opaque formats include proprietary formats that can be read and edited only by proprietary word processors, SGML or XML for which the DTD and/or processing tools are not generally available, and the machine-generated HTML, PostScript or PDF produced by some word processors for output purposes only.

The "Title Page" means, for a printed book, the title page itself, plus such following pages as are needed to hold, legibly, the material this License requires to appear in the title page. For works in formats which do not have any title page as such, "Title Page" means the text near the most prominent appearance of the work's title, preceding the beginning of the body of the text.

The "publisher" means any person or entity that distributes copies of the Document to the public.

A section "Entitled XYZ" means a named subunit of the Document whose title either is precisely XYZ or contains XYZ in parentheses following text that translates XYZ in another language. (Here XYZ stands for a specific section name mentioned below, such as "Acknowledgements", "Dedications", "Endorsements", or "History".) To "Preserve the Title" of such a section when you modify the Document means that it remains a section "Entitled XYZ" according to this definition.

The Document may include Warranty Disclaimers next to the notice which states that this License applies to the Document. These Warranty Disclaimers are considered to be included by reference in this License, but only as regards disclaiming warranties: any other implication that these Warranty Disclaimers may have is void and has no effect on the meaning of this License.

 

2. VERBATIM COPYING

You may copy and distribute the Document in any medium, either commercially or noncommercially, provided that this License, the copyright notices, and the license notice saying this License applies to the Document are reproduced in all copies, and that you add no other conditions whatsoever to those of this License. You may not use technical measures to obstruct or control the reading or further copying of the copies you make or distribute. However, you may accept compensation in exchange for copies. If you distribute a large enough number of copies you must also follow the conditions in section 3.

You may also lend copies, under the same conditions stated above, and you may publicly display copies.

 

3. COPYING IN QUANTITY

If you publish printed copies (or copies in media that commonly have printed covers) of the Document, numbering more than 100, and the Document's license notice requires Cover Texts, you must enclose the copies in covers that carry, clearly and legibly, all these Cover Texts: Front-Cover Texts on the front cover, and Back-Cover Texts on the back cover. Both covers must also clearly and legibly identify you as the publisher of these copies. The front cover must present the full title with all words of the title equally prominent and visible. You may add other material on the covers in addition. Copying with changes limited to the covers, as long as they preserve the title of the Document and satisfy these conditions, can be treated as verbatim copying in other respects.

If the required texts for either cover are too voluminous to fit legibly, you should put the first ones listed (as many as fit reasonably) on the actual cover, and continue the rest onto adjacent pages.

If you publish or distribute Opaque copies of the Document numbering more than 100, you must either include a machine-readable Transparent copy along with each Opaque copy, or state in or with each Opaque copy a computer-network location from which the general network-using public has access to download using public-standard network protocols a complete Transparent copy of the Document, free of added material. If you use the latter option, you must take reasonably prudent steps, when you begin distribution of Opaque copies in quantity, to ensure that this Transparent copy will remain thus accessible at the stated location until at least one year after the last time you distribute an Opaque copy (directly or through your agents or retailers) of that edition to the public.

It is requested, but not required, that you contact the authors of the Document well before redistributing any large number of copies, to give them a chance to provide you with an updated version of the Document.

 

4. MODIFICATIONS

You may copy and distribute a Modified Version of the Document under the conditions of sections 2 and 3 above, provided that you release the Modified Version under precisely this License, with the Modified Version filling the role of the Document, thus licensing distribution and modification of the Modified Version to whoever possesses a copy of it. In addition, you must do these things in the Modified Version:

A. Use in the Title Page (and on the covers, if any) a title distinct from that of the Document, and from those of previous versions (which should, if there were any, be listed in the History section of the Document). You may use the same title as a previous version if the original publisher of that version gives permission.

B. List on the Title Page, as authors, one or more persons or entities responsible for authorship of the modifications in the Modified Version, together with at least five of the principal authors of the Document (all of its principal authors, if it has fewer than five), unless they release you from this requirement.

C. State on the Title page the name of the publisher of the Modified Version, as the publisher.

D. Preserve all the copyright notices of the Document.

E. Add an appropriate copyright notice for your modifications adjacent to the other copyright notices.

F. Include, immediately after the copyright notices, a license notice giving the public permission to use the Modified Version under the terms of this License, in the form shown in the Addendum below.

G. Preserve in that license notice the full lists of Invariant Sections and required Cover Texts given in the Document's license notice.

H. Include an unaltered copy of this License.

I. Preserve the section Entitled "History", Preserve its Title, and add to it an item stating at least the title, year, new authors, and publisher of the Modified Version as given on the Title Page. If there is no section Entitled "History" in the Document, create one stating the title, year, authors, and publisher of the Document as given on its Title Page, then add an item describing the Modified Version as stated in the previous sentence.

J. Preserve the network location, if any, given in the Document for public access to a Transparent copy of the Document, and likewise the network locations given in the Document for previous versions it was based on. These may be placed in the "History" section. You may omit a network location for a work that was published at least four years before the Document itself, or if the original publisher of the version it refers to gives permission.

K. For any section Entitled "Acknowledgements" or "Dedications", Preserve the Title of the section, and preserve in the section all the substance and tone of each of the contributor acknowledgements and/or dedications given therein.

L. Preserve all the Invariant Sections of the Document, unaltered in their text and in their titles. Section numbers or the equivalent are not considered part of the section titles.

M. Delete any section Entitled "Endorsements". Such a section may not be included in the Modified Version.

N. Do not retitle any existing section to be Entitled "Endorsements" or to conflict in title with any Invariant Section.

O. Preserve any Warranty Disclaimers.

If the Modified Version includes new front-matter sections or appendices that qualify as Secondary Sections and contain no material copied from the Document, you may at your option designate some or all of these sections as invariant. To do this, add their titles to the list of Invariant Sections in the Modified Version's license notice. These titles must be distinct from any other section titles.

You may add a section Entitled "Endorsements", provided it contains nothing but endorsements of your Modified Version by various parties—for example, statements of peer review or that the text has been approved by an organization as the authoritative definition of a standard.

You may add a passage of up to five words as a Front-Cover Text, and a passage of up to 25 words as a Back-Cover Text, to the end of the list of Cover Texts in the Modified Version. Only one passage of Front-Cover Text and one of Back-Cover Text may be added by (or through arrangements made by) any one entity. If the Document already includes a cover text for the same cover, previously added by you or by arrangement made by the same entity you are acting on behalf of, you may not add another; but you may replace the old one, on explicit permission from the previous publisher that added the old one.

The author(s) and publisher(s) of the Document do not by this License give permission to use their names for publicity for or to assert or imply endorsement of any Modified Version.

 

5. COMBINING DOCUMENTS

You may combine the Document with other documents released under this License, under the terms defined in section 4 above for modified versions, provided that you include in the combination all of the Invariant Sections of all of the original documents, unmodified, and list them all as Invariant Sections of your combined work in its license notice, and that you preserve all their Warranty Disclaimers.

The combined work need only contain one copy of this License, and multiple identical Invariant Sections may be replaced with a single copy. If there are multiple Invariant Sections with the same name but different contents, make the title of each such section unique by adding at the end of it, in parentheses, the name of the original author or publisher of that section if known, or else a unique number. Make the same adjustment to the section titles in the list of Invariant Sections in the license notice of the combined work.

In the combination, you must combine any sections Entitled "History" in the various original documents, forming one section Entitled "History"; likewise combine any sections Entitled "Acknowledgements", and any sections Entitled "Dedications". You must delete all sections Entitled "Endorsements".

 

6. COLLECTIONS OF DOCUMENTS

You may make a collection consisting of the Document and other documents released under this License, and replace the individual copies of this License in the various documents with a single copy that is included in the collection, provided that you follow the rules of this License for verbatim copying of each of the documents in all other respects.

You may extract a single document from such a collection, and distribute it individually under this License, provided you insert a copy of this License into the extracted document, and follow this License in all other respects regarding verbatim copying of that document.

 

7. AGGREGATION WITH INDEPENDENT WORKS

A compilation of the Document or its derivatives with other separate and independent documents or works, in or on a volume of a storage or distribution medium, is called an "aggregate" if the copyright resulting from the compilation is not used to limit the legal rights of the compilation's users beyond what the individual works permit. When the Document is included in an aggregate, this License does not apply to the other works in the aggregate which are not themselves derivative works of the Document.

If the Cover Text requirement of section 3 is applicable to these copies of the Document, then if the Document is less than one half of the entire aggregate, the Document's Cover Texts may be placed on covers that bracket the Document within the aggregate, or the electronic equivalent of covers if the Document is in electronic form. Otherwise they must appear on printed covers that bracket the whole aggregate.

 

8. TRANSLATION

Translation is considered a kind of modification, so you may distribute translations of the Document under the terms of section 4. Replacing Invariant Sections with translations requires special permission from their copyright holders, but you may include translations of some or all Invariant Sections in addition to the original versions of these Invariant Sections. You may include a translation of this License, and all the license notices in the Document, and any Warranty Disclaimers, provided that you also include the original English version of this License and the original versions of those notices and disclaimers. In case of a disagreement between the translation and the original version of this License or a notice or disclaimer, the original version will prevail.

If a section in the Document is Entitled "Acknowledgements", "Dedications", or "History", the requirement (section 4) to Preserve its Title (section 1) will typically require changing the actual title.

 

9. TERMINATION

You may not copy, modify, sublicense, or distribute the Document except as expressly provided under this License. Any attempt otherwise to copy, modify, sublicense, or distribute it is void, and will automatically terminate your rights under this License.

 

However, if you cease all violation of this License, then your license from a particular copyright holder is reinstated (a) provisionally, unless and until the copyright holder explicitly and finally terminates your license, and (b) permanently, if the copyright holder fails to notify you of the violation by some reasonable means prior to 60 days after the cessation.

Moreover, your license from a particular copyright holder is reinstated permanently if the copyright holder notifies you of the violation by some reasonable means, this is the first time you have received notice of violation of this License (for any work) from that copyright holder, and you cure the violation prior to 30 days after your receipt of the notice.

Termination of your rights under this section does not terminate the licenses of parties who have received copies or rights from you under this License. If your rights have been terminated and not permanently reinstated, receipt of a copy of some or all of the same material does not give you any rights to use it.

 

10. FUTURE REVISIONS OF THIS LICENSE

The Free Software Foundation may publish new, revised versions of the GNU Free Documentation License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. See http://www.gnu.org/copyleft/.

Each version of the License is given a distinguishing version number. If the Document specifies that a particular numbered version of this License "or any later version" applies to it, you have the option of following the terms and conditions either of that specified version or of any later version that has been published (not as a draft) by the Free Software Foundation. If the Document does not specify a version number of this License, you may choose any version ever published (not as a draft) by the Free Software Foundation. If the Document specifies that a proxy can decide which future versions of this License can be used, that proxy's public statement of acceptance of a version permanently authorizes you to choose that version for the Document.

 

11. RELICENSING

"Massive Multiauthor Collaboration Site" (or "MMC Site") means any World Wide Web server that publishes copyrightable works and also provides prominent facilities for anybody to edit those works. A public wiki that anybody can edit is an example of such a server. A "Massive Multiauthor Collaboration" (or "MMC") contained in the site means any set of copyrightable works thus published on the MMC site.

"CC-BY-SA" means the Creative Commons Attribution-Share Alike 3.0 license published by Creative Commons Corporation, a not-for-profit corporation with a principal place of business in San Francisco, California, as well as future copyleft versions of that license published by that same organization.

"Incorporate" means to publish or republish a Document, in whole or in part, as part of another Document.

An MMC is "eligible for relicensing" if it is licensed under this License, and if all works that were first published under this License somewhere other than this MMC, and subsequently incorporated in whole or in part into the MMC, (1) had no cover texts or invariant sections, and (2) were thus incorporated prior to November 1, 2008.

The operator of an MMC Site may republish an MMC contained in the site under CC-BY-SA on the same site at any time before August 1, 2009, provided the MMC is eligible for relicensing.

 

Disclaimer and Legal Information

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS. NO LICENSE, EXPRESS OR IMPLIED, BY ESTOPPEL OR OTHERWISE, TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT. EXCEPT AS PROVIDED IN INTEL'S TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS, INTEL ASSUMES NO LIABILITY WHATSOEVER, AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY, RELATING TO SALE AND/OR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE, MERCHANTABILITY, OR INFRINGEMENT OF ANY PATENT, COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT.
UNLESS OTHERWISE AGREED IN WRITING BY INTEL, THE INTEL PRODUCTS ARE NOT DESIGNED NOR INTENDED FOR ANY APPLICATION IN WHICH THE FAILURE OF THE INTEL PRODUCT COULD CREATE A SITUATION WHERE PERSONAL INJURY OR DEATH MAY OCCUR.
Intel may make changes to specifications and product descriptions at any time, without notice. Designers must not rely on the absence or characteristics of any features or instructions marked "reserved" or "undefined." Intel reserves these for future definition and shall have no responsibility whatsoever for conflicts or incompatibilities arising from future changes to them. The information here is subject to change without notice. Do not finalize a design with this information.
The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications. Current characterized errata are available on request.
Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order.
Copies of documents which have an order number and are referenced in this document, or other Intel literature, may be obtained by calling 1-800-548-4725, or go to:  http://www.intel.com/design/literature.htm

Intel processor numbers are not a measure of performance. Processor numbers differentiate features within each processor family, not across different processor families. Go to:
http://www.intel.com/products/processor_number/

MPEG-1, MPEG-2, MPEG-4, H.261, H.263, H.264, MP3, DV, VC-1, MJPEG, AC3, AAC, G.711, G.722, G.722.1, G.722.2, AMRWB, Extended AMRWB (AMRWB+), G.167, G.168, G.169, G.723.1, G.726, G.728, G.729, G.729.1, GSM AMR, GSM FR are international standards promoted by ISO, IEC, ITU, ETSI, 3GPP and other organizations. Implementations of these standards, or the standard enabled platforms may require licenses from various entities, including Intel Corporation.

BunnyPeople, Celeron, Celeron Inside, Centrino, Centrino Inside, Cilk, Core Inside, i960, Intel, the Intel logo, Intel AppUp, Intel Atom, Intel Atom Inside, Intel Core, Intel Inside, Intel Inside logo, Intel NetBurst, Intel NetMerge, Intel NetStructure, Intel SingleDriver, Intel SpeedStep, Intel Sponsors of Tomorrow., the Intel Sponsors of Tomorrow. logo, Intel StrataFlash, Intel Viiv, Intel vPro, Intel XScale, InTru, the InTru logo, InTru soundmark, Itanium, Itanium Inside, MCS, MMX, Moblin, Pentium, Pentium Inside, skoool, the skoool logo, Sound Mark, The Journey Inside, vPro Inside, VTune, Xeon, and Xeon Inside are trademarks of Intel Corporation in the U.S. and other countries.

* Other names and brands may be claimed as the property of others.

Microsoft, Windows, Visual Studio, Visual C++, and the Windows logo are trademarks, or registered trademarks of Microsoft Corporation in the United States and/or other countries.

Java is a registered trademark of Oracle and/or its affiliates.

Copyright (C) 2008–2016, Intel Corporation. All rights reserved.


Intel® Parallel Studio XE 2017 Composer Edition BETA C++ - Debug Solutions Release Notes


This page provides the current Release Notes for the Debug Solutions from Intel® Parallel Studio XE 2017 Composer Edition BETA Update 1 for C++ Linux*, Windows* and OS X* products.

To get product updates, log in to the Intel® Software Development Products Registration Center.

For questions or technical support, visit Intel® Software Products Support.

For the top-level Release Notes, visit:


Change History

This section highlights important changes from the previous product version and in product updates.

Changes since Intel® Parallel Studio XE 2016 Composer Edition

  • Simplified Eclipse* plug-in
  • Support for Intel® Xeon Phi™ coprocessor & processor X200 offload debugging
  • Shipping GNU* Project Debugger (GDB) 7.10 (except for Intel® Debugger for Heterogeneous Compute 2017)

Product Contents

This section lists the individual Debug Solutions components for each supported host OS. Not all components are available for all host OSes.

  • Linux*:
    • GNU* Project Debugger (GDB) 7.10:
      Command line for host CPU and Intel® Xeon Phi™ coprocessor & processor, and Eclipse* IDE plugin for offload enabled applications.
    • Intel® Debugger for Heterogeneous Compute 2017
  • OS X*:
    • GNU* Project Debugger (GDB) 7.10:
      Command line for CPU only.
  • Windows*:
    • Intel® Debugger Extension for Intel® Many Integrated Core Architecture (Intel® MIC Architecture)

GNU* GDB

This section summarizes the changes, new features, customizations and known issues related to the GNU* GDB provided with Intel® Parallel Studio XE 2017 Composer Edition.
 

Features

GNU* GDB provided with Intel® Parallel Studio XE 2017 Composer Edition and above is based on GDB 7.10 with additional enhancements provided by Intel. This debugger replaces the Intel® Debugger from previous releases. In addition to features found in GDB 7.10, there are several other new features:
  • Intel® Processor Trace (Intel® PT) support for 5th generation Intel® Core™ Processors:
    (gdb) record btrace pt
  • Support for Intel® Many Integrated Core Architecture (Intel® MIC Architecture) of Intel® Xeon Phi™ coprocessor X100
  • Support for Intel® Xeon Phi™ coprocessor & processor X200
  • Support for Intel® Transactional Synchronization Extensions (Intel® TSX) (Linux* & OS X*)
  • Register support for Intel® Memory Protection Extensions (Intel® MPX) and Intel® Advanced Vector Extensions 512 (Intel® AVX-512)
  • Data Race Detection (pdbx):
    Detect and locate data races for applications threaded using POSIX* thread (pthread) or OpenMP* models
  • Branch Trace Store (btrace):
    Record branches taken in the execution flow to backtrack easily after events like crashes, signals, exceptions, etc.
  • Pointer Checker:
    Assist in finding pointer issues if compiled with Intel® C++ Compiler and having Pointer Checker feature enabled (see Intel® C++ Compiler documentation for more information)
  • Improved Intel® Cilk™ Plus Support:
    Serialized execution of Intel® Cilk™ Plus parallel applications can be turned on and off during a debug session using the following command:
    (gdb) set cilk-serialization [on|off]
All features are available for Linux*, but only Intel® TSX is supported for OS X*.
 

Using GNU* GDB

GNU* GDB provided with Intel® Parallel Studio XE 2017 Composer Edition comes in different versions:
  • IA-32/Intel® 64 debugger:
    Debug applications natively on IA-32 or Intel® 64 systems with gdb-ia on the command line.
    A standard Eclipse* IDE can be used for this as well if a graphical user interface is desired.
  • Intel® Xeon Phi™ coprocessor & processor debugger (only for Linux*):
    Debug applications remotely on Intel® Xeon Phi™ coprocessor systems. The debugger will run on a host system and a debug agent (gdbserver) on the coprocessor.
    There are two options:
    • Use the command line version of the debugger with gdb-mic.
      This only works for native Intel® Xeon Phi™ coprocessor X100 applications. For Intel® Xeon Phi™ coprocessor & processor X200 use gdb-ia.
      A standard Eclipse* IDE can be used for this as well if a graphical user interface is desired.
    • Use an Eclipse* IDE plugin shipped with Intel® Parallel Studio XE 2017 Composer Edition.
      This works only for offload enabled Intel® Xeon Phi™ coprocessor & processor applications. Instructions on how to use GNU* GDB can be found in the Documentation section.
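As a sketch of the remote command-line workflow described above (the application name, target hostname, and port are illustrative, not taken from these notes):

```
# On the Intel® Xeon Phi™ coprocessor (target), start the debug agent:
target$ gdbserver :2000 ./my_native_app

# On the host, connect with the provided debugger:
host$ gdb-mic ./my_native_app
(gdb) target remote mic0:2000
(gdb) continue
```

For Intel® Xeon Phi™ coprocessor & processor X200 targets, substitute gdb-ia for gdb-mic as noted above.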

Documentation

The documentation for the provided GNU* GDB can be found here:
<install-dir>/documentation_2017/en/debugger/gdb-ia/gdb.pdf
<install-dir>/documentation_2017/en/debugger/gdb-mic/gdb.pdf
<install-dir>/documentation_2017/en/debugger/ps2017/get_started.htm

The latter is available online as well:

Known Issues and Changes

Not found: libncurses.so.5

On some systems (e.g. Fedora 24 and 25), using the GNU* GDB version provided by Intel fails due to a missing libncurses.so.5. Please install the package ncurses-compat-libs, which provides the missing library.

Not found: libtinfo.so.5

On some systems (e.g. SLES 11 SP3), using the GNU* GDB version provided by Intel fails due to a missing libtinfo.so.5. If a package for libtinfo is not available, the following workaround can be applied:

$ sudo ln -s <path>/libncurses.so.5.6 <path>/libtinfo.so.5

As <path>, use the location of the system's ncurses library.

Safely ending offload debug sessions

To avoid issues like orphan processes or stale debugger windows when ending offload applications, manually end the debugging session before the application reaches its exit code. The following procedure is recommended for terminating a debug session:
  1. Manually stop a debug session before the application reaches the exit-code.
  2. When stopped, press the red stop button in the toolbar of the Intel® MIC Architecture-side debugger first. This will end the offloaded part of the application.
  3. Next, do the same for the CPU-side debugger.
  4. The link between the two debuggers will be kept alive. The Intel® MIC Architecture-side debugger will stay connected to the debug agent and the application will remain loaded in the CPU-side debugger, including all breakpoints that have been set.
  5. At this point, both debugger windows can safely be closed.

Intel® MIC Architecture-side debugger asserts on setting source directories

Setting source directories in the GNU* GDB might lead to an assertion.
Resolution:
The assertion should not affect debugger operation. To avoid it entirely, do not use source directory settings. The debugger will prompt you to browse for files it cannot locate automatically.
 

Accessing _Cilk_shared variables in the debugger

Writing to a shared variable in an offloaded section from within the CPU-side debugger, before the CPU-side debuggee has accessed that variable, may result in loss of the written value, display of a wrong value, or an application crash.

Consider the following code snippet:

_Cilk_shared bool is_active;
_Cilk_shared void my_target_func() {
  // Accessing "is_active" from the debugger *could* lead to unexpected
  // results here, e.g. a lost write or outdated data being read.
  is_active = true;
  // Accessing "is_active" (read or write) from the debugger at this
  // point is considered safe, e.g. the correct value is displayed.
}

Debugger and debugged application required to be located on local drive (OS X* only)

In order to use the provided GNU* GDB (gdb-ia), it has to be installed on a local drive. As such, the entire Intel® Parallel Studio XE 2017 package has to be installed locally. Any application that is being debugged needs to be located on a local drive as well. This is a general requirement that’s inherent to GNU GDB with OS X*.
 

Intel® Debugger for Heterogeneous Compute 2017

Features

The version of Intel® Debugger for Heterogeneous Compute 2017 provided as part of Intel® Parallel Studio XE 2017 Composer Edition uses GDB version 7.6. It provides the following features:

  • Debugging applications containing offload enabled code to Intel® Graphics Technology
  • Eclipse* IDE integration

The provided documentation (<install-dir>/documentation_2017/en/debugger/ps2017/get_started.htm) contains more information.

Requirements

For Intel® Debugger for Heterogeneous Compute 2017, the following is required:

  • Hardware
    • A dedicated host system is required because the target system stops its GPU while debugging; the target can therefore no longer provide visual feedback.
    • Network connection (TCP/IP) between host and target system.
    • 4th generation Intel® Core™ processor or later with Intel® Graphics Technology up to GT3 for the target system.
  • Software

Documentation

The documentation can be found here:
<install-dir>/documentation_2017/en/debugger/gdb-igfx/gdb.pdf
<install-dir>/documentation_2017/en/debugger/ps2017/get_started.htm
 

Known Issues and Limitations

No call-stack

There is currently no provision for call-stack display. This will be addressed in a future version of the debugger.

Un-interruptible threads

Due to hardware limitations, it is not possible to interrupt a running thread. This may cause intermittent side effects while debugging, where the debugger displays incorrect register and variable values for these threads. It may also surface as SIGTRAP messages when breakpoints are removed while other threads are running.

Evaluation of expressions with side-effects

The debugger does not evaluate expressions that contain assignments which read memory locations before writing to them (e.g. x = x + 1). Please do not use such assignments when evaluating expressions.
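As an illustrative session (hypothetical variable and values), read the current value first and then assign the computed result directly, rather than asking the debugger to evaluate the read-then-write form:

```
(gdb) print x = x + 1    # avoid: reads x before writing it
(gdb) print x            # instead, read the current value first...
(gdb) print x = 42       # ...then assign the computed result directly
```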

 
Intel® Debugger Extension for Intel® Many Integrated Core Architecture (Intel® MIC Architecture)

This section summarizes new features and changes, usage, and known issues related to the Intel® Debugger Extension. This debugger extension only supports code targeting Intel® Many Integrated Core Architecture (Intel® MIC Architecture).
 

Features

  • Support for both native Intel® Xeon Phi™ coprocessor applications and host applications with offload extensions
  • Debug multiple Intel® Xeon Phi™ coprocessors at the same time (with offload extension)

Using the Intel® Debugger Extension

The Intel® Debugger Extension is a plug-in for the Microsoft Visual Studio* IDE. It transparently enables debugging of projects defined by that IDE. Applications for Intel® Xeon Phi™ coprocessors can be either loaded and executed or attached to. This extension supports debugging of offload enabled code, using:
  • Microsoft Visual Studio* 2012
  • Microsoft Visual Studio* 2013
  • Microsoft Visual Studio* 2015

Documentation

The full documentation for the Intel® Debugger Extension can be found here:
<install-dir>\documentation_2017\en\debugger\ps2017\get_started.htm

This is available online as well:

Known Issues and Limitations

  • Using conditional breakpoints for offload sections might stall the debugger. If a conditional breakpoint is created within an offload section, the debugger might hang when hitting it and evaluating the condition. This is currently being analyzed and will be resolved in a future version of the product.
  • Data breakpoints are not yet supported within offload sections.
  • Disassembly window cannot be scrolled outside of 1024 bytes from the starting address within an offload section.
  • Handling of exceptions from the Intel® MIC Architecture application is not supported.
  • Changing breakpoints while the application is running does not work. The changes will appear to be in effect but they are not applied.
  • Starting an Intel® MIC Architecture native application is not supported. You can attach to a currently running application, though.
  • The Thread Window in Microsoft Visual Studio* offers context menu actions to Freeze, Thaw and Rename threads. These context menu actions are not functional when the thread is on an Intel® Xeon Phi™ coprocessor.
  • Setting a breakpoint right before an offload section sets a breakpoint at the first statement of the offload section. This is only true if there is no statement for the host between the set breakpoint and the offload section. This is normal Microsoft Visual Studio* breakpoint behavior but might become more visible with interleaved code from the host and the Intel® Xeon Phi™ coprocessor. The superfluous breakpoint for the offload section can be manually disabled (or removed) if desired.
  • Only Intel® 64 applications containing offload sections can be debugged with the Intel® Debugger Extension for Intel® Many Integrated Core Architecture.
  • Stepping out of an offload section does not step back into the host code. It rather continues execution without stopping (unless another event occurs). This is intended behavior.
  • The functionality “Set Next Statement” is not working within an offload section.
  • If breakpoints have been set for an offload section in a project already, starting the debugger might show bound breakpoints without addresses. Those do not have an impact on functionality.
  • For offload sections, setting breakpoints by address or within the Disassembly window won’t work.
  • For offload sections, using breakpoints with the following conditions of hit counts do not work: “break when the hit count is equal to” and “break when the hit count is a multiple of”.
  • The following options in the Disassembly window do not work within offload sections: “Show Line Numbers”, “Show Symbol Names” and “Show Source Code”
  • Evaluating variables declared outside the offload section shows wrong values.
  • Please consult the Output (Debug) window for detailed reporting. It will name unimplemented features (see above) or provide additional information on configuration problems in a debugging session. You can open the window in Microsoft Visual Studio* via menu Debug->Windows->Output.
  • When debugging an offload-enabled application and a variable assignment is entered in the Immediate Window, the debugger may hang if assignments read memory locations before writing to them (for example, x=x+1). Please do not use the Immediate Window for changing variable values for offload-enabled applications.
  • Depending on the debugger extensions provided by Intel, the behavior (for example, run control) and output (for example, disassembly) could differ from what is experienced with the Microsoft Visual Studio* debugger. This is because of the different debugging technologies implemented by each and should not have a significant impact to the debugging experience.

Attributions

This product includes software developed at:

GDB – The GNU* Project Debugger

Copyright Free Software Foundation, Inc., 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.

This program is free software; you can redistribute it and/or modify it under the terms and conditions of the GNU General Public License, version 2, as published by the Free Software Foundation.

This program is distributed in the hope it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.

GNU* Free Documentation License

Version 1.3, 3 November 2008

 

Copyright © 2000, 2001, 2002, 2007, 2008 Free Software Foundation, Inc. <http://fsf.org/>

 

Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed.

 

0. PREAMBLE

The purpose of this License is to make a manual, textbook, or other functional and useful document "free" in the sense of freedom: to assure everyone the effective freedom to copy and redistribute it, with or without modifying it, either commercially or noncommercially. Secondarily, this License preserves for the author and publisher a way to get credit for their work, while not being considered responsible for modifications made by others.

This License is a kind of "copyleft", which means that derivative works of the document must themselves be free in the same sense. It complements the GNU General Public License, which is a copyleft license designed for free software.

We have designed this License in order to use it for manuals for free software, because free software needs free documentation: a free program should come with manuals providing the same freedoms that the software does. But this License is not limited to software manuals; it can be used for any textual work, regardless of subject matter or whether it is published as a printed book. We recommend this License principally for works whose purpose is instruction or reference.

 

1. APPLICABILITY AND DEFINITIONS

This License applies to any manual or other work, in any medium, that contains a notice placed by the copyright holder saying it can be distributed under the terms of this License. Such a notice grants a world-wide, royalty-free license, unlimited in duration, to use that work under the conditions stated herein. The "Document", below, refers to any such manual or work. Any member of the public is a licensee, and is addressed as "you". You accept the license if you copy, modify or distribute the work in a way requiring permission under copyright law.

A "Modified Version" of the Document means any work containing the Document or a portion of it, either copied verbatim, or with modifications and/or translated into another language.

A "Secondary Section" is a named appendix or a front-matter section of the Document that deals exclusively with the relationship of the publishers or authors of the Document to the Document's overall subject (or to related matters) and contains nothing that could fall directly within that overall subject. (Thus, if the Document is in part a textbook of mathematics, a Secondary Section may not explain any mathematics.) The relationship could be a matter of historical connection with the subject or with related matters, or of legal, commercial, philosophical, ethical or political position regarding them.

The "Invariant Sections" are certain Secondary Sections whose titles are designated, as being those of Invariant Sections, in the notice that says that the Document is released under this License. If a section does not fit the above definition of Secondary then it is not allowed to be designated as Invariant. The Document may contain zero Invariant Sections. If the Document does not identify any Invariant Sections then there are none.

The "Cover Texts" are certain short passages of text that are listed, as Front-Cover Texts or Back-Cover Texts, in the notice that says that the Document is released under this License. A Front-Cover Text may be at most 5 words, and a Back-Cover Text may be at most 25 words.

A "Transparent" copy of the Document means a machine-readable copy, represented in a format whose specification is available to the general public, that is suitable for revising the document straightforwardly with generic text editors or (for images composed of pixels) generic paint programs or (for drawings) some widely available drawing editor, and that is suitable for input to text formatters or for automatic translation to a variety of formats suitable for input to text formatters. A copy made in an otherwise Transparent file format whose markup, or absence of markup, has been arranged to thwart or discourage subsequent modification by readers is not Transparent. An image format is not Transparent if used for any substantial amount of text. A copy that is not "Transparent" is called "Opaque".

Examples of suitable formats for Transparent copies include plain ASCII without markup, Texinfo input format, LaTeX input format, SGML or XML using a publicly available DTD, and standard-conforming simple HTML, PostScript or PDF designed for human modification. Examples of transparent image formats include PNG, XCF and JPG. Opaque formats include proprietary formats that can be read and edited only by proprietary word processors, SGML or XML for which the DTD and/or processing tools are not generally available, and the machine-generated HTML, PostScript or PDF produced by some word processors for output purposes only.

The "Title Page" means, for a printed book, the title page itself, plus such following pages as are needed to hold, legibly, the material this License requires to appear in the title page. For works in formats which do not have any title page as such, "Title Page" means the text near the most prominent appearance of the work's title, preceding the beginning of the body of the text.

The "publisher" means any person or entity that distributes copies of the Document to the public.

A section "Entitled XYZ" means a named subunit of the Document whose title either is precisely XYZ or contains XYZ in parentheses following text that translates XYZ in another language. (Here XYZ stands for a specific section name mentioned below, such as "Acknowledgements", "Dedications", "Endorsements", or "History".) To "Preserve the Title" of such a section when you modify the Document means that it remains a section "Entitled XYZ" according to this definition.

The Document may include Warranty Disclaimers next to the notice which states that this License applies to the Document. These Warranty Disclaimers are considered to be included by reference in this License, but only as regards disclaiming warranties: any other implication that these Warranty Disclaimers may have is void and has no effect on the meaning of this License.

 

2. VERBATIM COPYING

You may copy and distribute the Document in any medium, either commercially or noncommercially, provided that this License, the copyright notices, and the license notice saying this License applies to the Document are reproduced in all copies, and that you add no other conditions whatsoever to those of this License. You may not use technical measures to obstruct or control the reading or further copying of the copies you make or distribute. However, you may accept compensation in exchange for copies. If you distribute a large enough number of copies you must also follow the conditions in section 3.

You may also lend copies, under the same conditions stated above, and you may publicly display copies.

 

3. COPYING IN QUANTITY

If you publish printed copies (or copies in media that commonly have printed covers) of the Document, numbering more than 100, and the Document's license notice requires Cover Texts, you must enclose the copies in covers that carry, clearly and legibly, all these Cover Texts: Front-Cover Texts on the front cover, and Back-Cover Texts on the back cover. Both covers must also clearly and legibly identify you as the publisher of these copies. The front cover must present the full title with all words of the title equally prominent and visible. You may add other material on the covers in addition. Copying with changes limited to the covers, as long as they preserve the title of the Document and satisfy these conditions, can be treated as verbatim copying in other respects.

If the required texts for either cover are too voluminous to fit legibly, you should put the first ones listed (as many as fit reasonably) on the actual cover, and continue the rest onto adjacent pages.

If you publish or distribute Opaque copies of the Document numbering more than 100, you must either include a machine-readable Transparent copy along with each Opaque copy, or state in or with each Opaque copy a computer-network location from which the general network-using public has access to download using public-standard network protocols a complete Transparent copy of the Document, free of added material. If you use the latter option, you must take reasonably prudent steps, when you begin distribution of Opaque copies in quantity, to ensure that this Transparent copy will remain thus accessible at the stated location until at least one year after the last time you distribute an Opaque copy (directly or through your agents or retailers) of that edition to the public.

It is requested, but not required, that you contact the authors of the Document well before redistributing any large number of copies, to give them a chance to provide you with an updated version of the Document.

 

4. MODIFICATIONS

You may copy and distribute a Modified Version of the Document under the conditions of sections 2 and 3 above, provided that you release the Modified Version under precisely this License, with the Modified Version filling the role of the Document, thus licensing distribution and modification of the Modified Version to whoever possesses a copy of it. In addition, you must do these things in the Modified Version:

A. Use in the Title Page (and on the covers, if any) a title distinct from that of the Document, and from those of previous versions (which should, if there were any, be listed in the History section of the Document). You may use the same title as a previous version if the original publisher of that version gives permission.

B. List on the Title Page, as authors, one or more persons or entities responsible for authorship of the modifications in the Modified Version, together with at least five of the principal authors of the Document (all of its principal authors, if it has fewer than five), unless they release you from this requirement.

C. State on the Title page the name of the publisher of the Modified Version, as the publisher.

D. Preserve all the copyright notices of the Document.

E. Add an appropriate copyright notice for your modifications adjacent to the other copyright notices.

F. Include, immediately after the copyright notices, a license notice giving the public permission to use the Modified Version under the terms of this License, in the form shown in the Addendum below.

G. Preserve in that license notice the full lists of Invariant Sections and required Cover Texts given in the Document's license notice.

H. Include an unaltered copy of this License.

I. Preserve the section Entitled "History", Preserve its Title, and add to it an item stating at least the title, year, new authors, and publisher of the Modified Version as given on the Title Page. If there is no section Entitled "History" in the Document, create one stating the title, year, authors, and publisher of the Document as given on its Title Page, then add an item describing the Modified Version as stated in the previous sentence.

J. Preserve the network location, if any, given in the Document for public access to a Transparent copy of the Document, and likewise the network locations given in the Document for previous versions it was based on. These may be placed in the "History" section. You may omit a network location for a work that was published at least four years before the Document itself, or if the original publisher of the version it refers to gives permission.

K. For any section Entitled "Acknowledgements" or "Dedications", Preserve the Title of the section, and preserve in the section all the substance and tone of each of the contributor acknowledgements and/or dedications given therein.

L. Preserve all the Invariant Sections of the Document, unaltered in their text and in their titles. Section numbers or the equivalent are not considered part of the section titles.

M. Delete any section Entitled "Endorsements". Such a section may not be included in the Modified Version.

N. Do not retitle any existing section to be Entitled "Endorsements" or to conflict in title with any Invariant Section.

O. Preserve any Warranty Disclaimers.

If the Modified Version includes new front-matter sections or appendices that qualify as Secondary Sections and contain no material copied from the Document, you may at your option designate some or all of these sections as invariant. To do this, add their titles to the list of Invariant Sections in the Modified Version's license notice. These titles must be distinct from any other section titles.

You may add a section Entitled "Endorsements", provided it contains nothing but endorsements of your Modified Version by various parties—for example, statements of peer review or that the text has been approved by an organization as the authoritative definition of a standard.

You may add a passage of up to five words as a Front-Cover Text, and a passage of up to 25 words as a Back-Cover Text, to the end of the list of Cover Texts in the Modified Version. Only one passage of Front-Cover Text and one of Back-Cover Text may be added by (or through arrangements made by) any one entity. If the Document already includes a cover text for the same cover, previously added by you or by arrangement made by the same entity you are acting on behalf of, you may not add another; but you may replace the old one, on explicit permission from the previous publisher that added the old one.

The author(s) and publisher(s) of the Document do not by this License give permission to use their names for publicity for or to assert or imply endorsement of any Modified Version.

 

5. COMBINING DOCUMENTS

You may combine the Document with other documents released under this License, under the terms defined in section 4 above for modified versions, provided that you include in the combination all of the Invariant Sections of all of the original documents, unmodified, and list them all as Invariant Sections of your combined work in its license notice, and that you preserve all their Warranty Disclaimers.

The combined work need only contain one copy of this License, and multiple identical Invariant Sections may be replaced with a single copy. If there are multiple Invariant Sections with the same name but different contents, make the title of each such section unique by adding at the end of it, in parentheses, the name of the original author or publisher of that section if known, or else a unique number. Make the same adjustment to the section titles in the list of Invariant Sections in the license notice of the combined work.

In the combination, you must combine any sections Entitled "History" in the various original documents, forming one section Entitled "History"; likewise combine any sections Entitled "Acknowledgements", and any sections Entitled "Dedications". You must delete all sections Entitled "Endorsements".

 

6. COLLECTIONS OF DOCUMENTS

You may make a collection consisting of the Document and other documents released under this License, and replace the individual copies of this License in the various documents with a single copy that is included in the collection, provided that you follow the rules of this License for verbatim copying of each of the documents in all other respects.

You may extract a single document from such a collection, and distribute it individually under this License, provided you insert a copy of this License into the extracted document, and follow this License in all other respects regarding verbatim copying of that document.

 

7. AGGREGATION WITH INDEPENDENT WORKS

A compilation of the Document or its derivatives with other separate and independent documents or works, in or on a volume of a storage or distribution medium, is called an "aggregate" if the copyright resulting from the compilation is not used to limit the legal rights of the compilation's users beyond what the individual works permit. When the Document is included in an aggregate, this License does not apply to the other works in the aggregate which are not themselves derivative works of the Document.

If the Cover Text requirement of section 3 is applicable to these copies of the Document, then if the Document is less than one half of the entire aggregate, the Document's Cover Texts may be placed on covers that bracket the Document within the aggregate, or the electronic equivalent of covers if the Document is in electronic form. Otherwise they must appear on printed covers that bracket the whole aggregate.

 

8. TRANSLATION

Translation is considered a kind of modification, so you may distribute translations of the Document under the terms of section 4. Replacing Invariant Sections with translations requires special permission from their copyright holders, but you may include translations of some or all Invariant Sections in addition to the original versions of these Invariant Sections. You may include a translation of this License, and all the license notices in the Document, and any Warranty Disclaimers, provided that you also include the original English version of this License and the original versions of those notices and disclaimers. In case of a disagreement between the translation and the original version of this License or a notice or disclaimer, the original version will prevail.

If a section in the Document is Entitled "Acknowledgements", "Dedications", or "History", the requirement (section 4) to Preserve its Title (section 1) will typically require changing the actual title.

 

9. TERMINATION

You may not copy, modify, sublicense, or distribute the Document except as expressly provided under this License. Any attempt otherwise to copy, modify, sublicense, or distribute it is void, and will automatically terminate your rights under this License.

 

However, if you cease all violation of this License, then your license from a particular copyright holder is reinstated (a) provisionally, unless and until the copyright holder explicitly and finally terminates your license, and (b) permanently, if the copyright holder fails to notify you of the violation by some reasonable means prior to 60 days after the cessation.

Moreover, your license from a particular copyright holder is reinstated permanently if the copyright holder notifies you of the violation by some reasonable means, this is the first time you have received notice of violation of this License (for any work) from that copyright holder, and you cure the violation prior to 30 days after your receipt of the notice.

Termination of your rights under this section does not terminate the licenses of parties who have received copies or rights from you under this License. If your rights have been terminated and not permanently reinstated, receipt of a copy of some or all of the same material does not give you any rights to use it.

 

10. FUTURE REVISIONS OF THIS LICENSE

The Free Software Foundation may publish new, revised versions of the GNU Free Documentation License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. See http://www.gnu.org/copyleft/.

Each version of the License is given a distinguishing version number. If the Document specifies that a particular numbered version of this License "or any later version" applies to it, you have the option of following the terms and conditions either of that specified version or of any later version that has been published (not as a draft) by the Free Software Foundation. If the Document does not specify a version number of this License, you may choose any version ever published (not as a draft) by the Free Software Foundation. If the Document specifies that a proxy can decide which future versions of this License can be used, that proxy's public statement of acceptance of a version permanently authorizes you to choose that version for the Document.

 

11. RELICENSING

"Massive Multiauthor Collaboration Site" (or "MMC Site") means any World Wide Web server that publishes copyrightable works and also provides prominent facilities for anybody to edit those works. A public wiki that anybody can edit is an example of such a server. A "Massive Multiauthor Collaboration" (or "MMC") contained in the site means any set of copyrightable works thus published on the MMC site.

"CC-BY-SA" means the Creative Commons Attribution-Share Alike 3.0 license published by Creative Commons Corporation, a not-for-profit corporation with a principal place of business in San Francisco, California, as well as future copyleft versions of that license published by that same organization.

"Incorporate" means to publish or republish a Document, in whole or in part, as part of another Document.

An MMC is "eligible for relicensing" if it is licensed under this License, and if all works that were first published under this License somewhere other than this MMC, and subsequently incorporated in whole or in part into the MMC, (1) had no cover texts or invariant sections, and (2) were thus incorporated prior to November 1, 2008.

The operator of an MMC Site may republish an MMC contained in the site under CC-BY-SA on the same site at any time before August 1, 2009, provided the MMC is eligible for relicensing.

 

Disclaimer and Legal Information

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS. NO LICENSE, EXPRESS OR IMPLIED, BY ESTOPPEL OR OTHERWISE, TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT. EXCEPT AS PROVIDED IN INTEL'S TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS, INTEL ASSUMES NO LIABILITY WHATSOEVER, AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY, RELATING TO SALE AND/OR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE, MERCHANTABILITY, OR INFRINGEMENT OF ANY PATENT, COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT.
UNLESS OTHERWISE AGREED IN WRITING BY INTEL, THE INTEL PRODUCTS ARE NOT DESIGNED NOR INTENDED FOR ANY APPLICATION IN WHICH THE FAILURE OF THE INTEL PRODUCT COULD CREATE A SITUATION WHERE PERSONAL INJURY OR DEATH MAY OCCUR.
Intel may make changes to specifications and product descriptions at any time, without notice. Designers must not rely on the absence or characteristics of any features or instructions marked "reserved" or "undefined." Intel reserves these for future definition and shall have no responsibility whatsoever for conflicts or incompatibilities arising from future changes to them. The information here is subject to change without notice. Do not finalize a design with this information.
The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications. Current characterized errata are available on request.
Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order.
Copies of documents which have an order number and are referenced in this document, or other Intel literature, may be obtained by calling 1-800-548-4725, or go to:  http://www.intel.com/design/literature.htm

Intel processor numbers are not a measure of performance. Processor numbers differentiate features within each processor family, not across different processor families. Go to:
http://www.intel.com/products/processor_number/

MPEG-1, MPEG-2, MPEG-4, H.261, H.263, H.264, MP3, DV, VC-1, MJPEG, AC3, AAC, G.711, G.722, G.722.1, G.722.2, AMRWB, Extended AMRWB (AMRWB+), G.167, G.168, G.169, G.723.1, G.726, G.728, G.729, G.729.1, GSM AMR, GSM FR are international standards promoted by ISO, IEC, ITU, ETSI, 3GPP and other organizations. Implementations of these standards, or the standard enabled platforms may require licenses from various entities, including Intel Corporation.

BunnyPeople, Celeron, Celeron Inside, Centrino, Centrino Inside, Cilk, Core Inside, i960, Intel, the Intel logo, Intel AppUp, Intel Atom, Intel Atom Inside, Intel Core, Intel Inside, Intel Inside logo, Intel NetBurst, Intel NetMerge, Intel NetStructure, Intel SingleDriver, Intel SpeedStep, Intel Sponsors of Tomorrow., the Intel Sponsors of Tomorrow. logo, Intel StrataFlash, Intel Viiv, Intel vPro, Intel XScale, InTru, the InTru logo, InTru soundmark, Itanium, Itanium Inside, MCS, MMX, Moblin, Pentium, Pentium Inside, skoool, the skoool logo, Sound Mark, The Journey Inside, vPro Inside, VTune, Xeon, and Xeon Inside are trademarks of Intel Corporation in the U.S. and other countries.

* Other names and brands may be claimed as the property of others.

Microsoft, Windows, Visual Studio, Visual C++, and the Windows logo are trademarks, or registered trademarks of Microsoft Corporation in the United States and/or other countries.

Java is a registered trademark of Oracle and/or its affiliates.

Copyright (C) 2008–2016, Intel Corporation. All rights reserved.

How to Debug Fortran Coarray Applications on Windows


When a Fortran coarray application is started under the Visual Studio* debugger on Windows, the current debug window does not have control over the images running the application. This article presents a method for getting debug control over a selected instance of the program ("image" in the language's terminology).

Each image is running in a separate process, but there is no external indication of which process corresponds to which image. The Visual Studio debugger has the ability to attach control to another process, but you need a way to know which process is which image and to have all the images wait until you have set the desired breakpoints and resume execution.

The attached source is a module called Coarray_Debugging, containing a subroutine called Enable_Coarray_Debugging. Save this source to your local disk and add it to your project as a source file.

Downloadapplication/octet-streamCoarray_Debugging_Mod.f90

At the beginning of your main program's source file, add the line:

use Coarray_Debugging

This goes after the PROGRAM statement (if any) and before any IMPLICIT statements or other specification statements. If you have other USE statements, it can go in with them in any order.

Next, before the first executable line in your program add the line:

call Enable_Coarray_Debugging

This subroutine makes all images wait until you indicate that they should proceed. Build your application. Before starting the program, open Coarray_Debugging_Mod.f90 and set a breakpoint on the line with the call to SLEEPQQ.

When you set the breakpoint by clicking in the gray bar to the left of the line, a red circle appears. When the breakpoint is hit later, the circle will also contain a yellow arrow.

Start the program under the debugger (Debug > Start Debugging or press F5).

The program will call Enable_Coarray_Debugging, which displays one line per image in the Output pane. Each line contains the image number, process name, and process ID.

In the run shown here, the process name is "coarray_mcpi.exe" (the name of the executable): image 1 is process ID 48764, image 2 is 47176, and so on. We are going to debug image 2, which is process ID 47176.

In Visual Studio, select Debug > Attach to Process...

The Attach dialog will appear, listing all of the processes on your system. Scroll until you find the process ID that matches the one you want. In this case we want 47176.

Once the correct process has been selected, click the Attach button. You should now see the debugger stop at the breakpoint you set earlier.

At this point you can set breakpoints in other parts of the program as normal. Note that a process not currently attached to the debugger will not stop at breakpoints, but all processes remain waiting inside Enable_Coarray_Debugging.

The Enable_Coarray_Debugging subroutine declares a scalar logical coarray variable, Global_Debug_Flag, which image 1 initializes to .FALSE. All images then enter a wait loop, checking whether image 1's Global_Debug_Flag has become .TRUE. To preserve system responsiveness, the loop calls SLEEPQQ to wait 500 milliseconds between checks.

A local variable Local_Debug_Flag is checked inside the loop. This variable is also initialized to .FALSE. Once you have set all the breakpoints, use the debugger's Locals window to change the value of Local_Debug_Flag to .TRUE..

Double-click on the .FALSE. value and change it to .TRUE. Then continue execution in the debugger (click Continue or press F5). 

All images will now resume execution. The image you attached to will stop at any breakpoints you have set, and you can examine local variables. Note that coarray variables may not display properly in the debugger and changing their value in the Locals window typically has no effect.

You can go back to the Attach to Process... dialog and attach to other image processes. If breakpoints have been set, they will now stop in that process. If you have attached to multiple processes, you can switch among them using the Process control toward the upper left of the debugger window.

We find, however, that switching among processes in a coarray application may cause unexpected errors and recommend against it.

If you have questions specifically about this article, you can ask below. For all other questions, or for a faster response, please ask in our user forum.

Sensor to Cloud: Connecting Intel® NUC and Arduino 101* to Microsoft Azure* IoT Hub


Introduction

This article will show you how to use an Intel® Next Unit Computing (NUC) device to connect sensors on an Arduino 101* board to the Microsoft Azure* IoT Hub. You'll learn how to read real-time sensor data from the Arduino 101 board, view it locally on the Intel® NUC, and send it to the Azure IoT Hub, where the data can be stored, visualized, and processed in the cloud. To do all this, you use Node‑RED* on the NUC to create processing flows that perform the input, processing, and output functions that drive your application.

Setup and Prerequisites

  • Intel® NUC connected to the Internet
  • Arduino 101 board connected to the Intel® NUC through USB
  • Seeed Studio Grove* Base Shield attached to the Arduino 101 board and switched to 3V3 VCC
  • Grove sensors connected to the Base Shield: light on A1, rotary encoder on A2, button on D4, green LED on D5, buzzer on D6, and relay on D7
  • An active Azure cloud account
  • The packagegroup-cloud-azure package installed on the Intel® NUC

Read Sensors and Display Data on the Intel® IoT Gateway Developer Hub

Log in to the Intel® NUC’s Intel® IoT Gateway Developer Hub by entering the Intel® NUC’s IP address in your browser and using gwuser as the default user name and password. You’ll see basic information about the Intel® NUC, including its model number, version, Ethernet address, and network connectivity status.

Click the Sensors icon, and then click Manage Sensors to open the Node‑RED canvas, where you’ll see Sheet 1 with a default flow for an RH-USB sensor. You won’t use the RH-USB sensor for this project, so drag a box around the entire flow and delete it. You’re left with a blank canvas.

Along the left side of the Node-RED screen, you see a series of nodes. These are the building blocks for creating a Node‑RED application on the Intel® NUC. For this application, you’ll use the nodes shown in Table 1.

Table 1. Nodes used in the sample application

  • Read button presses
  • On/off LED indicator
  • Measure light level
  • Format chart display on the Intel® NUC device
  • Measure rotary position
  • Send data to the Intel® NUC’s Message Queuing Telemetry Transport (MQTT) chart listener
  • Relay open/closed
  • Send data to the Azure IoT Hub

Drag nodes onto the canvas and arrange them as shown in Figure 1. You will need multiple copies of some of the nodes. Use your mouse to connect wires between the nodes as shown.

Note: You’ll use the azureiothub node later; don’t include it now.

Figure 1. Arranging nodes on the Node‑RED canvas

When you first place nodes on the canvas, they are in a default state. You must configure them before they’ll work. To do so, double-click them, and then set parameters in their configuration panels.

Double-click each node on the canvas and set its parameters as shown in Table 2. In some cases, the Name field is left blank (the node keeps its default name). Pin numbers correspond to the Base Shield jack to which the sensor or actuator is connected.

Table 2. Nodes and their parameters

  • Grove Button: Platform: Firmata, Pin: D4, Interval (ms): 1000
  • Grove Light: Platform: Firmata, Pin: A1, Unit: Raw Value, Interval (ms): 1000
  • Grove Rotary: Platform: Firmata, Pin: A2, Unit: Absolute Raw, Interval (ms): 1000
  • Grove LED: Platform: Firmata, Pin: D5, Mode: Output
  • Grove Relay (upper): Platform: Firmata, Pin: D7
  • Grove Relay (lower): Name: Grove Buzzer, Platform: Firmata, Pin: D6 (you use this node to control the buzzer)
  • chart tag connected to Grove Button: Title: Button, Type: Status Text
  • chart tag connected to Grove Light: Title: Light, Type: Gauge, Units: RAW
  • chart tag connected to Grove Rotary: Title: Rotary, Type: Gauge, Units: RAW
  • mqtt: Server: localhost:1883, Topic: /sensors, Name: Charts

Verify your settings and wiring connections, and then click Deploy to deploy your changes and make them active on the Intel® NUC. After deploying the flow, you should see a data display toward the top of the Intel® IoT Gateway Developer Hub, with live values for Rotary, Light, and Button (Figure 2). Turning the rotary knob and covering the light sensor should make the numbers change up and down; pressing the button should turn on the LED, sound the buzzer, and energize the relay.

Figure 2. The deployed Intel® NUC in the Intel® IoT Gateway Developer Hub

Create the Microsoft Azure* IoT Hub

Before you can send sensor data to the Azure IoT Hub, you must create an Azure IoT Hub in your Azure cloud account. Log in to Azure, and then navigate to the Dashboard. To create an Azure IoT Hub, follow these steps:

  1. Click New > Internet of Things > IoT Hub.
  2. Set the parameters to match Table 3.

    Table 3. Parameters for your Microsoft Azure IoT Hub

      • Name: iothub-3982 (your Azure IoT Hub name must be unique within Azure; try different names until you find one that’s available)
      • Pricing and scale tier: F1 - Free (use the free tier for this application)
      • Resource group: MyIOT (create a new group)
      • Subscription: Pay-As-You-Go
      • Location: East US (pick a location in your geographic region)

  3. Select Pin to dashboard, and then click Create. Azure IoT Hub is deployed to your Azure account and appears on your Dashboard after a few minutes.
  4. After it has been deployed, find the iothubowner Connection string--primary key, which is a text string that you’ll need later.
  5. Click Dashboard > iothub-3982 > Settings > Shared access policies > iothubowner, and look for Connection string--primary key under Shared access keys. The string is complex, so copy it for use in the next step.

Create a Device Identity in Azure IoT Hub

Before a device can communicate with Azure IoT Hub, it must have a device identity in the Azure IoT Hub Device Identity Registry, which is a list of devices authorized to interact with your Azure IoT Hub instance.

You create and manage device identities through Representational State Transfer (REST) application programming interfaces (APIs) that Azure IoT Hub provides. There are different ways to use the REST APIs; in this guide, you’ll use an Azure open source command-line tool called iothub-explorer, which is available on GitHub. iothub-explorer is a Node.js* application, so you need Node.js 4.x or later installed on your computer to use it.

Use these shell commands to install iothub-explorer on your computer and create a device identity for the Intel® NUC. Make sure that you have the iothubowner Connection string--primary key string (found earlier) ready to paste:

Install the program: npm install -g iothub-explorer
Verify that it runs: iothub-explorer help

Next, create and register a new device named intelnuc using the iothubowner Connection string--primary key you copied earlier. Run this shell command using your own iothubowner Connection string--primary key string inside the quotation marks:

iothub-explorer "HostName=iothub-3982.azure-devices.net;SharedAccessKeyName=iothubowner;SharedAccessKey=Be0w9Zew0909LLAKeiseXsdf/adfe239EODo9iwee9w=" create intelnuc --connection-string

When the device has been created, you’ll see the message “Created device intelnuc” and a list of parameters for the newly created device. Locate the connectionString parameter, which will have a long text string value next to it that starts with HostName=. You’ll copy and paste this <device> connectionString value in the next task.
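The device connection string is a semicolon-separated list of Key=Value pairs (HostName, DeviceId, SharedAccessKey). As a sketch of what those parts look like, the snippet below pulls apart a fabricated string in the same format as the examples in this article; the key value is made up:

```javascript
// Parse an Azure IoT Hub device connection string into its parts.
// The sample string is fabricated, following the documented
// HostName=...;DeviceId=...;SharedAccessKey=... format.
function parseConnectionString(cs) {
  const parts = {};
  for (const pair of cs.split(";")) {
    const idx = pair.indexOf("=");            // keys contain no "="; base64 key values may
    parts[pair.slice(0, idx)] = pair.slice(idx + 1);
  }
  return parts;
}

const sample =
  "HostName=iothub-3982.azure-devices.net;DeviceId=intelnuc;SharedAccessKey=Be0w9Zew0909LLAKeiseXsdf/adfe239EODo9iwee9w=";
const parts = parseConnectionString(sample);
console.log(parts.HostName);  // iothub-3982.azure-devices.net
console.log(parts.DeviceId);  // intelnuc
```

Splitting on the first "=" only (rather than every "=") matters because the base64-encoded SharedAccessKey value can itself end in "=".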

Send Data to Azure IoT Hub

Next, add an Azure IoT Hub output node to the Intel® NUC’s Node‑RED flow to send your data to Azure IoT Hub. Complete the following steps:

  1. In the IoT Gateway Developer Hub, drag an azureiothub output node onto the canvas.
    When the node is dropped onto the canvas, its name changes to Azure IoT Hub.
  2. Connect a wire from the output of Grove Rotary to the input of Azure IoT Hub.
  3. Double-click the Azure IoT Hub node, and set the following parameters:
    Name: Azure IoT Hub
    Protocol: amqp
  4. For the Connection String property, paste in the <device> connectionString text string value you copied from the output of iothub-explorer. Make sure that Protocol remains set to amqp. After pasting the connectionString value, the node’s configuration panel should look like Figure 3.

    Figure 3. Parameters for the azureiothub node

  5. Click Ok, and then click Deploy to deploy your updated flow to the Intel® NUC.
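The flow as wired sends the raw rotary reading straight to the hub. If you want to shape the payload first, a Node-RED function node can be wired between Grove Rotary and the Azure IoT Hub node. The sketch below is illustrative only: the field names are made up, and it assumes the azureiothub node forwards whatever it finds in msg.payload as the message body.

```javascript
// Body of a hypothetical Node-RED "function" node placed between
// Grove Rotary and the Azure IoT Hub node. In Node-RED the body
// receives msg and returns it; wrapping it in a named function here
// just makes it runnable outside the editor.
function formatRotary(msg) {
  msg.payload = JSON.stringify({
    deviceId: "intelnuc",        // the identity registered with iothub-explorer
    sensor: "rotary",
    value: Number(msg.payload),  // raw reading delivered by the Grove Rotary node
    ts: new Date().toISOString()
  });
  return msg;
}

// Quick local check with a sample reading:
const out = formatRotary({ payload: "512" });
console.log(out.payload);
```

Structuring the payload as JSON this way makes it easier for downstream Azure services to pick individual fields out of each event.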

At this point, the data values for your Grove Rotary sensor should be flowing to Azure IoT Hub once per second, which is the rate set in the Grove Rotary node. To view the transmission events in Azure IoT Hub, go to your Azure cloud account and navigate to Dashboard > iothub-3982. Look at the Usage tile. The number of transmission messages should be increasing at a rate corresponding to one message per second (Figure 4).

Figure 4. The number of transmission messages should increase by one per second.

If no messages are flowing, you’ll see 0 messages and 0 devices in the Usage tile. To view the actual event messages using iothub-explorer, run the following shell command using your own iothubowner Connection string--primary key:

iothub-explorer "HostName=iothub-3982.azure-devices.net;SharedAccessKeyName=iothubowner;SharedAccessKey=Be0w9Zew0909LLAKeiseXsdf/adfe239EODo9iwee9w=" monitor-events intelnuc

Note: When you’re done testing this application, be sure to stop your Node‑RED flow (for example, by turning off the Intel® NUC or removing the wire between the Grove Rotary sensor and Azure IoT Hub and redeploying the flow) to preserve the remaining messages in your Free Tier allotment for the Azure IoT Hub instance. Otherwise, the Node‑RED application will consume them as it continues to run.
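The note above is worth quantifying. At one message per second, the flow far outruns a free hub's daily quota. The arithmetic below assumes the F1 tier allows 8,000 messages per day (treat that figure as an assumption and check the current limit in the Azure portal):

```javascript
// Rough message-budget math for the flow above.
// ASSUMPTION: the F1 (free) tier allows 8,000 messages per day;
// verify the current quota in the Azure portal before relying on it.
const msgsPerSecond = 1;                          // Grove Rotary interval = 1000 ms
const msgsPerDay = msgsPerSecond * 60 * 60 * 24;
const freeTierQuota = 8000;

console.log(msgsPerDay);                          // 86400
console.log(msgsPerDay > freeTierQuota);          // true

const hoursUntilExhausted = freeTierQuota / (msgsPerSecond * 3600);
console.log(hoursUntilExhausted.toFixed(1));      // "2.2"
```

Under that assumption, a flow left running would exhaust the daily free allotment in a little over two hours, which is why stopping it after testing matters.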

Where to Go From Here

This application provides a foundation for connecting your Arduino 101 board and Intel® NUC to Azure IoT Hub. From here, you would typically wire up other sensors and send their data to Azure IoT Hub, then build more complex applications that listen to Azure IoT Hub messages and store, process, and visualize the sensor data.


OpenCL™ Drivers and Runtimes for Intel® Architecture


Some of the OpenCL* drivers and runtimes below are also provided as part of larger Intel products, such as Intel® Media Server Studio.

 

Packages Available

Installation of a relevant runtime or driver enables OpenCL applications to run on a target hardware set.

By downloading a package from this page, you accept the End User License Agreement.

The packages fall into the following categories:

  • CPU+GPU+SDK (also available in Intel® Media Server Studio)
  • GPU driver packages
  • CPU-only, runtime-only packages
  • Deprecated packages

 


Intel® SDK for OpenCL™ Applications 2016 R2 for Linux* (64 bit)

This is a standalone release for customers who do not need integration with the Intel® Media Server Studio (MSS).  It provides all components needed to run and compile OpenCL applications. 

Visit https://software.intel.com/en-us/intel-opencl to download the version for your platform. For details check out the Release Notes.

Intel® SDK for OpenCL™ Applications 2016 R2 for Windows* (64 bit)

This is a standalone release for customers who do not need integration with the Intel® Media Server Studio (MSS).  The Windows* graphics driver contains the driver and runtime library components necessary to run OpenCL applications. This package provides components for OpenCL development. 

Visit https://software.intel.com/en-us/intel-opencl to download the version for your platform. For details check out Release Notes.


OpenCL™ 2.0 Driver for Intel® HD, Iris™, and Iris™ Pro Graphics for Linux* (64-bit)

The intel-opencl-2.0-2.0 driver for Linux is an intermediate release preceding Intel® SDK for OpenCL™ Applications 2016 R2 for Linux*. It provides access to the general-purpose, parallel compute capabilities of Intel® graphics for OpenCL applications as a standalone package.

Intel has validated the intel-opencl-2.0-2.0 driver on CentOS 7.2 with the following 64-bit kernel:

  • Linux 4.4 kernel patched for OpenCL 2.0

Supported OpenCL devices:

  • Intel Graphics (GPU)

For detailed information please see the Release Notes.


OpenCL™ Driver for Intel® Iris™ and Intel® HD Graphics for Windows* OS (64-bit and 32-bit)

The Intel® Graphics driver includes components needed to run OpenCL* and Intel® Media SDK applications on processors with Intel® Iris™ Graphics or Intel® HD Graphics on Windows* OS.

You can use the Intel Driver Update Utility to automatically detect and update your drivers and software.  Using the latest available graphics driver for your processor is usually recommended.


See also Identifying your Intel® Graphics Controller.

Supported OpenCL devices:

  • Intel Graphics (GPU)
  • CPU

For the full list of Intel® Architecture processors with OpenCL support on Intel Graphics under Windows*, refer to the Release Notes.

 

 


OpenCL™ Runtime for Intel® Core™ and Intel® Xeon® Processors

This runtime software package adds OpenCL CPU device support on systems with Intel Core and Intel Xeon processors.

Supported OpenCL devices:

  • CPU

Latest release (16.1.1)

Previous Runtimes (16.1)

Previous Runtimes (15.1)

For the full list of supported Intel Architecture processors, refer to the OpenCL™ Runtime Release Notes.

 


Deprecated Releases

Note: These releases are no longer maintained or supported by Intel.

OpenCL™ Runtime 14.2 for Intel® CPU and Intel® Xeon Phi™ Coprocessors

This runtime software package adds OpenCL support to Intel Core and Xeon processors and Intel Xeon Phi coprocessors.

Supported OpenCL devices:

  • Intel Xeon Phi Coprocessor
  • CPU

Available Runtimes

For the full list of supported Intel Architecture processors, refer to the OpenCL™ Runtime Release Notes.

Jumpstart your IoT Innovation - Intel System Studio 2016 for Microcontrollers Update 1 is Now Available


What’s New: Support for Intel® Quark™ SE Microcontroller C1000 and Intel® Curie™ Module

We just released Intel® System Studio 2016 for Microcontrollers Update 1. What’s cool about it is additional support for the upcoming Intel® Quark™ SE Microcontroller C1000 and for the already available Intel® Curie™ module found in the Arduino/Genuino* 101. Developers can use the updated Intel System Studio for Microcontrollers tool suite to create amazing “Things” on these Intel Quark microcontroller platforms.

Listed below are just a few of this release’s top new capabilities. 

  • Support for Intel® Quark™ SE microcontroller C1000, and the Arduino/Genuino* 101 board with Intel Curie module

  • Support for Zephyr project* RTOS and code samples to jumpstart your development

  • Updated Intel® Quark™ Microcontroller software interface and code samples to make your development easier

  • New Intel® C Compiler for Microcontrollers (LLVM-based), optimized for resource-constrained environments and performance

  • More optimized Intel® Integrated Performance Primitives for Microcontrollers library functions for digital signal processing (DSP)

  • Simplified IDE workflow to make it even easier to start your IoT development

Download Now: Linux* | Windows*

Learn more at the Intel® System Studio for Microcontrollers site.

HOW-TO INTEL® IOT TECHNOLOGY CODE SAMPLES: CLOSE CALL REPORTER IN JAVA*


Introduction

This close call fleet driving reporter application is part of a series of how-to Intel® IoT code sample exercises using the Intel® IoT Developer Kit, Intel® Edison development platform, cloud platforms, APIs, and other technologies.

From this exercise, developers will learn how to:

  • Connect the Intel® Edison development platform, a computing platform designed for prototyping and producing IoT and wearable computing products.
  • Interface with the Intel® Edison platform IO and sensor repository using MRAA and UPM from the Intel® IoT Developer Kit, a complete hardware and software solution to help developers explore the IoT and implement innovative projects.
  • Run this code sample in Intel® XDK IoT Edition, an IDE for creating applications that interact with sensors and actuators, enabling a quick start for developing software for the Intel® Edison or Intel® Galileo board.
  •  Store the close-call data using Azure Redis Cache* from Microsoft Azure*, Redis Store* from IBM Bluemix*, or ElastiCache* using Redis* from Amazon Web Services* (AWS), different cloud services for connecting IoT solutions including data analysis, machine learning, and a variety of productivity tools to simplify the process of connecting your sensors to the cloud and getting your IoT project up and running quickly.

What it is

Using an Intel® Edison board, this project lets you create a close-call fleet driving reporter that:

  • monitors the Grove* IR Distance Interrupter.
  • monitors the Grove GPS.
  • keeps track of close calls and logs them using cloud-based data storage.

How it works

This close-call reporter system monitors the direction in which the Grove* IR Distance Interrupter is pointed.

It also keeps track of the GPS position of the Intel® Edison board, updating the position frequently to ensure accurate data.

If a close call is detected (that is, the Grove IR Distance Interrupter is tripped), the Intel® Edison board, if configured, notifies the Intel® IoT Examples Data store running in your own Microsoft Azure* account.
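
The detection-and-report flow above can be sketched in plain Java. This is a hypothetical, hardware-free illustration only: the actual sample reads the sensor through the UPM RFR359F class, and the consecutive-reading threshold used here is an assumed value, not taken from the sample.

```java
// Hypothetical sketch of the close-call decision logic; sensor readings are
// passed in as booleans so it runs without the Grove hardware.
public class CloseCallDetector {
    private int consecutive = 0;
    private final int threshold;

    public CloseCallDetector(int threshold) {
        this.threshold = threshold;
    }

    /** Feed one IR interrupter reading; returns true once enough
     *  consecutive detections have been seen to count as a close call. */
    public boolean update(boolean objectDetected) {
        consecutive = objectDetected ? consecutive + 1 : 0;
        return consecutive >= threshold;
    }

    public static void main(String[] args) {
        CloseCallDetector detector = new CloseCallDetector(2);
        for (boolean reading : new boolean[] {true, true, false}) {
            System.out.println(reading + " -> close call: " + detector.update(reading));
        }
    }
}
```

Requiring a couple of consecutive detections is one simple way to avoid logging a close call on a single noisy reading; the real sample may react to a single trip of the interrupter instead.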

Hardware requirements

Grove* Starter Kit Plus or Grove* Transportation & Safety Kit containing:

  1. Intel® Edison platform with an Arduino breakout board
  2. Grove IR Distance Interrupter (http://iotdk.intel.com/docs/master/upm/node/classes/rfr359f.html)
  3. Grove GPS (http://iotdk.intel.com/docs/master/upm/node/classes/ublox6.html)

Software requirements

  1. Intel® XDK IoT Edition
  2. Microsoft Azure*, IBM Bluemix*, or AWS account (optional)

How to set up

To begin, clone the How-To Intel® IoT Code Samples repository with Git* on your computer as follows:

$ git clone https://github.com/intel-iot-devkit/how-to-code-samples.git

Want to download a .zip file? In your web browser, go to https://github.com/intel-iot-devkit/how-to-code-samples and click the Download ZIP button at the lower right. Once the .zip file is downloaded, uncompress it, and then use the files in the directory for this example.

Adding the program to Intel® System Studio IoT Edition

** The following screenshots are from the Alarm Clock sample; however, the technique for adding the program is the same, just with different source files and JARs.

Open Intel® System Studio IoT Edition. It starts by asking for a workspace directory; choose one, and then click OK.

In Intel® System Studio IoT Edition, select File -> New -> Intel(R) IoT Java Project:

Give the project the name "CloseCallReporter" and then click Next.

You now need to connect to your Intel® Edison board from your computer to send code to it. Choose a name for the connection and enter the IP address of the Intel® Edison board in the "Target Name" field. You can also search for it using the "Search Target" button. Click Finish when you are done.

You have successfully created an empty project. You now need to copy the source files and the config file into the project: drag all of the files from your Git repository's "src" folder into the new project's src folder in Intel® System Studio IoT Edition. Make sure the previously auto-generated main class is overwritten.

The project uses the following external JAR: gson-2.6.1, which can be found in the Maven Central Repository. Create a "jars" folder in the project's root directory, and copy all needed JARs into this folder. In Intel® System Studio IoT Edition, select all of the JAR files in the "jars" folder, and then right-click -> Build path -> Add to build path.

Now you need to add the UPM JAR files relevant to this specific sample. Right-click the project's root -> Build path -> Configure build path. On the Java Build Path 'Libraries' tab, click "Add External JARs...".

For this sample, you will need the following JARs:

  1. upm_ublox6.jar
  2. upm_rfr359f.jar

The JARs can be found under the IoT Devkit installation root path at \iss-iot-win\devkit-x86\sysroots\i586-poky-linux\usr\lib\java.

Connecting the Grove* sensors

You need to have a Grove* Shield connected to an Arduino*-compatible breakout board to plug all the Grove devices into the Grove Shield. Make sure you have the tiny VCC switch on the Grove Shield set to 5V.

  1. Plug one end of a Grove cable into the Grove IR Distance Interrupter, and connect the other end to the D2 port on the Grove Shield.
  2. Plug one end of a Grove cable into the Grove GPS, and connect the other end to the UART port on the Grove Shield.

Data store server setup

Optionally, you can store the data generated by this sample program in a backend database deployed using Microsoft Azure*, IBM Bluemix*, or AWS*, along with Node.js* and a Redis* data store.

For information on how to set up your own cloud data server, go to:

https://github.com/intel-iot-devkit/intel-iot-examples-datastore

Configuring the example

To configure the example for the optional data store, change the SERVER and AUTH_TOKEN keys in the config.properties file to the server URL and authentication token that correspond to your own data store server setup. For example:

  SERVER=http://mySite.azurewebsites.net/logger/close-call-reporter
  AUTH_TOKEN=myPassword
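
A config.properties file in this key=value format can be read with the standard java.util.Properties class; the following is a minimal sketch of that step (the sample's own loader may be structured differently):

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.Properties;

// Minimal sketch: parse key=value configuration text the way
// java.util.Properties reads a config.properties file.
public class ConfigExample {
    public static Properties load(String text) {
        Properties props = new Properties();
        try {
            props.load(new StringReader(text));
        } catch (IOException e) { // cannot occur for an in-memory reader
            throw new RuntimeException(e);
        }
        return props;
    }

    public static void main(String[] args) {
        Properties props = load("SERVER=http://mySite.azurewebsites.net/logger/close-call-reporter\n"
                + "AUTH_TOKEN=myPassword\n");
        System.out.println("SERVER = " + props.getProperty("SERVER"));
        System.out.println("AUTH_TOKEN = " + props.getProperty("AUTH_TOKEN"));
    }
}
```

In a real project you would load the file itself, for example with `props.load(new FileInputStream("config.properties"))`.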

Preparing the Intel® Edison board before running the project

In order for the sample to run, you will need to copy some files to the Intel® Edison board. This can be done using SCP through SSH.

The files need to be copied from the sample repository:
JAR files: the external libraries in the project need to be copied to "/usr/lib/java"

Running the program using Intel® System Studio IoT Edition

When you're ready to run the example, make sure you have saved all the files.

Click the Run icon on the toolbar of Intel® System Studio IoT Edition. This runs the code on the Intel® Edison board.

Determining the IP address of the Intel® Edison board

You can determine what IP address the Intel® Edison board is connected to by running the following command:

ip addr show | grep wlan

You will see output similar to the following:

3: wlan0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
    inet 192.168.1.13/24 brd 192.168.1.255 scope global wlan0

The IP address is shown next to inet. In the example above, the IP address is 192.168.1.13.
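
If your own code needs the board's address (for example, to build the data store URL), the address can be pulled out of that `ip addr show` output with a small regular expression. This is an illustrative helper, not part of the sample:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Illustrative helper: extract the IPv4 address from `ip addr show` output.
public class InetParser {
    private static final Pattern INET = Pattern.compile("inet\\s+(\\d+\\.\\d+\\.\\d+\\.\\d+)/");

    public static String ipFrom(String ipAddrOutput) {
        Matcher m = INET.matcher(ipAddrOutput);
        return m.find() ? m.group(1) : null; // null when no inet line is present
    }

    public static void main(String[] args) {
        String line = "    inet 192.168.1.13/24 brd 192.168.1.255 scope global wlan0";
        System.out.println(ipFrom(line)); // prints 192.168.1.13
    }
}
```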

IMPORTANT NOTICE: This software is sample software. It is not designed or intended for use in any medical, life-saving or life-sustaining systems, transportation systems, nuclear systems, or for any other mission-critical application in which the failure of the system could lead to critical injury or death. The software may not be fully tested and may contain bugs or errors; it may not be intended or suitable for commercial release. No regulatory approvals for the software have been obtained, and therefore software may not be certified for use in certain countries or environments.

HOW-TO INTEL® IOT TECHNOLOGY CODE SAMPLES: BLE SCAN BRACELET IN JAVA*


Introduction

This Bluetooth* Low Energy (BLE) scan bracelet application is part of a series of how-to Intel® Internet of Things (IoT) code sample exercises using the Intel® IoT Developer Kit, Intel® Edison development platform, cloud platforms, APIs, and other technologies.

From this exercise, developers will learn how to:

  • Connect the Intel® Edison development platform, a computing platform designed for prototyping and producing IoT and wearable computing products.
  • Interface with the Intel® Edison platform IO and sensor repository using MRAA and UPM from the Intel® IoT Developer Kit, a complete hardware and software solution to help developers explore the IoT and implement innovative projects.
  • Run this code sample in Intel® System Studio IoT Edition. Intel® System Studio IoT Edition lets you create and test applications on Intel®-based IoT platforms.
  • Store detected BLE devices using Azure Redis Cache* from Microsoft Azure*, Redis Store* from IBM Bluemix*, or ElastiCache* using Redis* from Amazon Web Services* (AWS), different cloud services for connecting IoT solutions including data analysis, machine learning, and a variety of productivity tools to simplify the process of connecting your sensors to the cloud and getting your IoT project up and running quickly.

What it is

Using an Intel® Edison board, this project lets you create a BLE scan bracelet that:

  • searches for BLE devices that come within its scanning range.
  • displays information about detected devices using the OLED display.
  • keeps track of detected devices, using cloud-based data storage.

How it works

This BLE scanner bracelet uses a Xadow* expansion board for the Intel® Edison platform and the OLED display included in the Xadow kit.

With these components, we'll make a simple BLE scanner that displays information on the OLED display when BLE-equipped devices enter or exit its scanning range.

Optionally, all data can be stored using the Intel® IoT Examples Data store running in your own Microsoft Azure* account.
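
The enter/exit bookkeeping described above amounts to comparing each scan's set of device addresses against the previous scan's. A hypothetical sketch of that logic (not the sample's actual classes, which scan through the TinyB BLE library):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch: track which BLE device addresses enter or exit
// scanning range between successive scan passes.
public class BleScanTracker {
    private Set<String> present = new HashSet<>();

    /** Devices seen in this scan but not the previous one. */
    public Set<String> entered(Set<String> seen) {
        Set<String> result = new HashSet<>(seen);
        result.removeAll(present);
        return result;
    }

    /** Devices seen in the previous scan but missing from this one. */
    public Set<String> exited(Set<String> seen) {
        Set<String> result = new HashSet<>(present);
        result.removeAll(seen);
        return result;
    }

    /** Record this scan as the new baseline. */
    public void commit(Set<String> seen) {
        present = new HashSet<>(seen);
    }

    public static void main(String[] args) {
        BleScanTracker tracker = new BleScanTracker();
        Set<String> scan = new HashSet<>(Arrays.asList("AA:BB:CC:DD:EE:01"));
        System.out.println("entered: " + tracker.entered(scan));
        tracker.commit(scan);
        System.out.println("exited after empty scan: " + tracker.exited(new HashSet<>()));
    }
}
```

The entered/exited sets are what the bracelet would format onto the OLED display.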

Hardware requirements

Xadow* Starter Kit containing:

  1. Intel® Edison platform with a Xadow* expansion board
  2. Xadow* - OLED display (http://iotdk.intel.com/docs/master/upm/node/classes/ssd1308.html)

Software requirements

  1. Intel® System Studio IoT Edition
  2.  Microsoft Azure*, IBM Bluemix*, or AWS account (optional)

How to set up

To begin, clone the How-To Intel IoT Code Samples repository with Git* on your computer as follows:

$ git clone https://github.com/intel-iot-devkit/how-to-code-samples.git

To download a .zip file, in your web browser, go to https://github.com/intel-iot-devkit/how-to-code-samples and click the Download ZIP button at the lower right. Once the .zip file is downloaded, uncompress it, and then use the files in the directory for this example.

Adding the program to Intel® System Studio IoT Edition

** The following screenshots are from the Alarm Clock sample; however, the technique for adding the program is the same, just with different source files and JARs.

Open Intel® System Studio IoT Edition. It starts by asking for a workspace directory; choose one, and then click OK.

In Intel® System Studio IoT Edition, select File -> New -> Intel(R) IoT Java Project:

Give the project the name "BleScanBracelet" and then click Next.

You now need to connect to your Intel® Edison board from your computer to send code to it. Choose a name for the connection and enter the IP address of the Intel® Edison board in the "Target Name" field. You can also search for it using the "Search Target" button. Click Finish when you are done.

You have successfully created an empty project. You now need to copy the source files and the config file into the project: drag all of the files from your Git repository's "src" folder into the new project's src folder in Intel® System Studio IoT Edition. Make sure the previously auto-generated main class is overwritten.

The project uses the following external JARs: gson-2.6.1 and joda-time-2.9.2. These can be found in the Maven Central Repository. Create a "jars" folder in the project's root directory, and copy all needed JARs into this folder. In Intel® System Studio IoT Edition, select all of the JAR files in the "jars" folder, and then right-click -> Build path -> Add to build path.

Now you need to add the UPM JAR files relevant to this specific sample. Right-click the project's root -> Build path -> Configure build path. On the Java Build Path 'Libraries' tab, click "Add External JARs...".

For this sample, you will need the following JARs:

  1. upm_i2clcd.jar
  2. tinyb.jar

The JARs can be found under the IoT Devkit installation root path at \iss-iot-win\devkit-x86\sysroots\i586-poky-linux\usr\lib\java.

Set up the Intel® Edison board for BLE development

To set up the Intel® Edison board for BLE, run the following command:

rfkill unblock bluetooth

Connecting the Xadow* components

You need to have a Xadow* expansion board connected to the Intel® Edison board to plug in all the Xadow devices.

Plug one end of a Xadow connector into the Xadow OLED, and connect the other end to one of the side connectors on the Xadow* expansion board.

Data store server setup

Optionally, you can store the data generated by this sample program in a backend database deployed using Microsoft Azure*, IBM Bluemix*, or AWS*, along with Node.js* and a Redis* data store.

For information on how to set up your own cloud data server, go to:

https://github.com/intel-iot-devkit/intel-iot-examples-datastore

Configuring the example

To configure the example for the optional data store, change the SERVER and AUTH_TOKEN keys in the config.properties file to the server URL and authentication token that correspond to your own data store server setup. For example:

  SERVER=http://mySite.azurewebsites.net/logger/fire-alarm
  AUTH_TOKEN=myPassword

Preparing the Intel® Edison board before running the project

In order for the sample to run, you will need to copy some files to the Intel® Edison board. This can be done using SCP through SSH. The files need to be copied from the sample repository:

JAR files: the external libraries in the project need to be copied to "/usr/lib/java"

Running the program using Intel® System Studio IoT Edition

When you're ready to run the example, make sure you have saved all the files.

Click the Run icon on the toolbar of Intel® System Studio IoT Edition. This runs the code on the Intel® Edison board.

Determining the IP address of the Intel® Edison board

You can determine what IP address the Intel® Edison board is connected to by running the following command:

ip addr show | grep wlan

You will see output similar to the following:

3: wlan0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
    inet 192.168.1.13/24 brd 192.168.1.255 scope global wlan0

The IP address is shown next to inet. In the example above, the IP address is 192.168.1.13.

IMPORTANT NOTICE: This software is sample software. It is not designed or intended for use in any medical, life-saving or life-sustaining systems, transportation systems, nuclear systems, or for any other mission-critical application in which the failure of the system could lead to critical injury or death. The software may not be fully tested and may contain bugs or errors; it may not be intended or suitable for commercial release. No regulatory approvals for the software have been obtained, and therefore software may not be certified for use in certain countries or environments.


HOW-TO INTEL® IOT TECHNOLOGY CODE SAMPLES: ALARM CLOCK IN JAVA*


Introduction

This smart alarm clock application is part of a series of how-to Intel® Internet of Things (IoT) code sample exercises using the Intel® IoT Developer Kit, Intel® Edison development platform, cloud platforms, APIs, and other technologies.

From this exercise, developers will learn how to:

  • Connect the Intel® Edison development platform, a computing platform designed for prototyping and producing IoT and wearable computing products.
  • Interface with the Intel® Edison platform IO and sensor repository using MRAA and UPM from the Intel® IoT Developer Kit, a complete hardware and software solution to help developers explore the IoT and implement innovative projects.
  • Run this code sample in Intel® System Studio IoT Edition. Intel® System Studio lets you create and test applications on Intel-based IoT platforms.
  • Set up a web application server to set the alarm time and store this alarm data using Azure Redis Cache* from Microsoft Azure*, Redis Store* from IBM Bluemix*, or ElastiCache* using Redis* from Amazon Web Services* (AWS), different cloud services for connecting IoT solutions including data analysis, machine learning, and a variety of productivity tools to simplify the process of connecting your sensors to the cloud and getting your IoT project up and running quickly.
  • Invoke the services of the Weather Underground* API for accessing weather data.

What it is

Using an Intel® Edison board, this project lets you create a smart alarm clock that:

  • can be accessed with your mobile phone via the built-in web interface to set the alarm time.
  • displays live weather data on the LCD.
  • keeps track of how long it takes you to wake up each morning, using cloud-based data storage.

How it works

This smart alarm clock has a number of useful features. Set the alarm using a web page served directly from the Intel® Edison board, using your mobile phone. When the alarm goes off, the buzzer sounds, and the LCD indicates it’s time to get up. The rotary dial can be used to adjust the brightness of the display.

In addition, the smart alarm clock can access daily weather data via the Weather Underground* API and use it to change the color of the LCD. Optionally, all data can also be stored using the Intel IoT Examples Data store running in your own Microsoft Azure* account.
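
Measuring "how long it takes you to wake up" reduces to the elapsed time between the alarm firing and the button press that stops it. A minimal sketch, using java.time rather than the Joda-Time library the sample depends on:

```java
import java.time.Duration;
import java.time.Instant;

// Minimal sketch: seconds elapsed between the alarm firing and the
// button press that stops it.
public class WakeTimer {
    public static long secondsToWake(Instant alarmFired, Instant buttonPressed) {
        return Duration.between(alarmFired, buttonPressed).getSeconds();
    }

    public static void main(String[] args) {
        Instant fired = Instant.parse("2016-07-01T07:00:00Z");
        Instant pressed = Instant.parse("2016-07-01T07:01:30Z");
        System.out.println(secondsToWake(fired, pressed) + " seconds to wake"); // 90 seconds to wake
    }
}
```

This is the value that would be posted to the optional data store each morning.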

Hardware requirements

Grove* Starter Kit Plus containing:

Software requirements

  1. Intel® System Studio IoT Edition
  2. Microsoft Azure*, IBM Bluemix*, or AWS account (optional)
  3. Weather Underground* API key

How to set up

To begin, clone the How-To Intel IoT Code Samples repository with Git* on your computer as follows:

$ git clone https://github.com/intel-iot-devkit/how-to-code-samples.git

To download a .zip file, in your web browser go to https://github.com/intel-iot-devkit/how-to-code-samples and click the Download ZIP button at the lower right. Once the .zip file is downloaded, uncompress it, and then use the files in the directory for this example.

Adding the program to Intel® System Studio IoT Edition

** The following screenshots are from the Alarm Clock sample; however, the technique for adding the program is the same, just with different source files and JARs.

Open Intel® System Studio IoT Edition. It starts by asking for a workspace directory; choose one, and then click OK.

In Intel® System Studio IoT Edition, select File -> New -> Intel(R) IoT Java Project:

Give the project the name "AlarmClock" and then click Next.

You now need to connect to your Intel® Edison board from your computer to send code to it. Choose a name for the connection and enter the IP address of the Intel® Edison board in the "Target Name" field. You can also search for it using the "Search Target" button. Click Finish when you are done.

You have successfully created an empty project. You now need to copy the source files and the config file into the project: drag all of the files from your Git repository's "src" folder into the new project's src folder in Intel® System Studio IoT Edition. Make sure the previously auto-generated main class is overwritten.

The project uses the following external JARs: gson-2.6.1, jetty-all-9.3.7.v20160115-uber, and joda-time-2.9.2. These can be found in the Maven Central Repository. Create a "jars" folder in the project's root directory, and copy all needed JARs into this folder. In Intel® System Studio IoT Edition, select all of the JAR files in the "jars" folder, and then right-click -> Build path -> Add to build path.

Now you need to add the UPM JAR files relevant to this specific sample. Right-click the project's root -> Build path -> Configure build path. On the Java Build Path 'Libraries' tab, click "Add External JARs...".

For this sample, you will need the following JARs:

  1. upm_buzzer.jar
  2. upm_grove.jar
  3. upm_i2clcd.jar

The JARs can be found under the IoT Devkit installation root path at \iss-iot-win\devkit-x86\sysroots\i586-poky-linux\usr\lib\java.

Connecting the Grove* sensors

You need to have a Grove* Shield connected to an Arduino*-compatible breakout board to plug all the Grove devices into the Grove Shield. Make sure the tiny VCC switch on the Grove Shield is set to 5V.

  1. Plug one end of a Grove cable into the Grove Rotary Analog Sensor, and then connect the other end to the A0 port on the Grove Shield.
  2. Plug one end of a Grove cable into the Grove Button, and then connect the other end to the D4 port on the Grove Shield.
  3. Plug one end of a Grove cable into the Grove Buzzer, and then connect the other end to the D5 port on the Grove Shield.
  4. Plug one end of a Grove cable into the Grove RGB LCD, and then connect the other end to any of the I2C ports on the Grove Shield.

Weather Underground* API key

To optionally fetch the real-time weather information, you need to get an API key from the Weather Underground* website:

http://www.wunderground.com/weather/api

You cannot retrieve weather conditions without obtaining a Weather Underground API key first. You can still run the example, but without the weather data.

Pass your Weather Underground API key to the sample program by modifying the WEATHER_API_KEY key in the config.properties file as follows:

  WEATHER_API_KEY="YOURAPIKEY"

Data store server setup

Optionally, you can store the data generated by this sample program in a backend database deployed using Microsoft Azure*, IBM Bluemix*, or AWS*, along with Node.js* and a Redis* data store.

For information on how to set up your own cloud data server, go to:

https://github.com/intel-iot-devkit/intel-iot-examples-datastore

Configuring the example

To configure the example for the optional weather data, change the WEATHER_API_KEY and LOCATION keys in the config.properties file as follows:

  WEATHER_API_KEY: "YOURAPIKEY"
  LOCATION: "San_Francisco"

To configure the example for the optional Microsoft Azure* data store, change the SERVER and AUTH_TOKEN keys in the config.properties file as follows:

  SERVER: "http://intel-examples.azurewebsites.net/logger/alarm-clock"
  AUTH_TOKEN: "s3cr3t"

To configure the example for both the weather data and the Microsoft Azure* data store, change the WEATHER_API_KEY, LOCATION, SERVER, and AUTH_TOKEN keys in the config.properties file as follows:

  WEATHER_API_KEY: "YOURAPIKEY"
  LOCATION: "San_Francisco"
  SERVER: "http://intel-examples.azurewebsites.net/logger/alarm-clock"
  AUTH_TOKEN: "s3cr3t"

Preparing the Intel® Edison board before running the project

In order for the sample to run you will need to copy some files to the Intel® Edison board. This can be done using SCP through SSH.

Two sorts of files need to be copied from the sample repository:

  1. JAR files: the external libraries in the project need to be copied to "/usr/lib/java"
  2. Web files: the files within the site_contents folder need to be copied to "/var/AlarmClock"


Running the program using Intel® System Studio IoT Edition

When you're ready to run the example, make sure you have saved all the files.

Click the Run icon on the toolbar of Intel® System Studio IoT Edition. This runs the code on the Intel Edison board.

You will see output similar to the following when the program is running.

Setting the alarm

The alarm is set using a single-page web interface served directly from the Intel® Edison board while the sample program is running.

The web server runs on port 8080, so if the Intel® Edison board is connected to Wi-Fi* on 192.168.1.13, the address to browse to if you are on the same network is http://192.168.1.13:8080.

Determining the IP address of the Intel® Edison board

You can determine what IP address the Intel® Edison board is connected to by running the following command:

ip addr show | grep wlan

You will see output similar to the following:

3: wlan0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
    inet 192.168.1.13/24 brd 192.168.1.255 scope global wlan0

The IP address is shown next to inet. In the example above, the IP address is 192.168.1.13.

IMPORTANT NOTICE: This software is sample software. It is not designed or intended for use in any medical, life-saving or life-sustaining systems, transportation systems, nuclear systems, or for any other mission-critical application in which the failure of the system could lead to critical injury or death. The software may not be fully tested and may contain bugs or errors; it may not be intended or suitable for commercial release. No regulatory approvals for the software have been obtained, and therefore software may not be certified for use in certain countries or environments.

HOW-TO INTEL® IOT TECHNOLOGY CODE SAMPLES: AIR QUALITY SENSOR IN JAVA*


Introduction

This air quality monitor application is part of a series of how-to Intel® Internet of Things (IoT) code sample exercises using the Intel® IoT Developer Kit, Intel® Edison development platform, cloud platforms, APIs, and other technologies.

From this exercise, developers will learn how to:

  • Connect the Intel® Edison development platform, a computing platform designed for prototyping and producing IoT and wearable computing products.
  • Interface with the Intel® Edison platform IO and sensor repository using MRAA and UPM from the Intel® IoT Developer Kit, a complete hardware and software solution to help developers explore the IoT and implement innovative projects.
  • Run this code sample in Intel® System Studio IoT Edition. Intel® System Studio IoT Edition lets you create and test applications on Intel-based IoT platforms.
  • Store air quality data using Azure Redis Cache* from Microsoft Azure*, Redis Store* from IBM Bluemix*, or ElastiCache* using Redis* from Amazon Web Services* (AWS), different cloud services for connecting IoT solutions including data analysis, machine learning, and a variety of productivity tools to simplify the process of connecting your sensors to the cloud and getting your IoT project up and running quickly.

What it is

Using an Intel® Edison board, this project lets you create an air quality reporter that:

  • continuously checks the air quality for airborne contaminants.
  • sounds an audible warning when the air quality is unhealthy.
  • stores a record of each time the air quality sensor detects contaminants, using cloud-based data storage.

How it works

This shop air quality monitor uses the air quality sensor to continuously keep track of airborne contaminants.

If the sensor detects one of several different gases and the detected level exceeds a defined threshold, it makes a sound through the speaker to indicate a warning.

Also, optionally, the monitor stores the air quality data using the Intel® IoT Examples Data Store running in your own Microsoft Azure* account.
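
The warning decision above is a simple threshold check on the sensor reading. A hardware-free sketch of that logic follows; the threshold value is an assumed illustration, not the sample's calibrated figure:

```java
// Hypothetical sketch of the air quality warning logic: readings above a
// fixed threshold trigger the speaker. The threshold is illustrative only.
public class AirQualityMonitor {
    public static final int WARNING_THRESHOLD = 200; // assumed raw sensor value

    public static boolean isUnhealthy(int reading) {
        return reading > WARNING_THRESHOLD;
    }

    public static void main(String[] args) {
        for (int reading : new int[] {50, 120, 250}) {
            System.out.println(reading + (isUnhealthy(reading) ? " -> sound warning" : " -> ok"));
        }
    }
}
```

In the real sample, the reading comes from the UPM TP401 air quality sensor class, and the warning is played through the Grove Speaker.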

Hardware requirements

Grove* Home Automation Kit containing:

  1. Intel® Edison platform with an Arduino* breakout board
  2. Grove* Air Quality Sensor (http://iotdk.intel.com/docs/master/upm/node/classes/tp401.html)
  3. Grove Speaker (http://iotdk.intel.com/docs/master/upm/node/classes/grovespeaker.html)

Software requirements

  1. Intel® System Studio IoT Edition
  2. Microsoft Azure*, IBM Bluemix*, or AWS account (optional)

How to set up

To begin, clone the How-To Intel IoT Code Samples repository with Git* on your computer as follows:

$ git clone https://github.com/intel-iot-devkit/how-to-code-samples.git

To download a .zip file, in your web browser, go to https://github.com/intel-iot-devkit/how-to-code-samples and click the Download ZIP button at the lower right. Once the .zip file is downloaded, uncompress it, and then use the files in the directory for this example.

Adding the program to Intel® System Studio IoT Edition

** The following screenshots are from the Alarm Clock sample; however, the technique for adding the program is the same, just with different source files and JARs.

Open Intel® System Studio IoT Edition. It starts by asking for a workspace directory; choose one, and then click OK.

In Intel® System Studio IoT Edition, select File -> New -> Intel(R) IoT Java Project:

Give the project the name "AirQualitySensor" and then click Next.

You now need to connect to your Intel® Edison board from your computer to send code to it. Choose a name for the connection and enter the IP address of the Intel® Edison board in the "Target Name" field. You can also search for it using the "Search Target" button. Click Finish when you are done.

You have successfully created an empty project. You now need to copy the source files and the config file into the project: drag all of the files from your Git repository's "src" folder into the new project's src folder in Intel® System Studio IoT Edition. Make sure the previously auto-generated main class is overwritten.

The project uses the following external JAR: gson-2.6.1, which can be found in the Maven Central Repository. Create a "jars" folder in the project's root directory, and copy all needed JARs into this folder. In Intel® System Studio IoT Edition, select all of the JAR files in the "jars" folder, and then right-click -> Build path -> Add to build path.

Now you need to add the UPM JAR files relevant to this specific sample. Right-click the project's root -> Build path -> Configure build path. On the Java Build Path 'Libraries' tab, click "Add External JARs...".

For this sample, you will need the following JARs:

  1. upm_grovespeaker.jar
  2. upm_gas.jar

The JARs can be found under the IoT Devkit installation root path at \iss-iot-win\devkit-x86\sysroots\i586-poky-linux\usr\lib\java.

Connecting the Grove* sensors

You need to have a Grove* Shield connected to an Arduino-compatible breakout board to plug all the Grove devices into the Grove Shield. Make sure you have the tiny VCC switch on the Grove Shield set to 5V.

  1. Plug one end of a Grove cable into the Grove Air Quality Sensor, and connect the other end to the A0 port on the Grove Shield.
  2. Plug one end of a Grove cable into the Grove Speaker, and connect the other end to the D5 port on the Grove Shield.

Data store server setup

Optionally, you can store the data generated by this sample program in a backend database deployed using Microsoft Azure*, IBM Bluemix*, or AWS, along with Node.js, and a Redis* data store.

For information on how to set up your own cloud data server, go to:

https://github.com/intel-iot-devkit/intel-iot-examples-datastore

Configuring the example

To configure the example for the optional data store, change the SERVER and AUTH_TOKEN keys in the config.properties file to the server URL and authentication token that correspond to your own data store server setup. For example:

  SERVER=http://intel-examples.azurewebsites.net/logger/air-quality
  AUTH_TOKEN=s3cr3t
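A config.properties file in this form can be read with the standard java.util.Properties API. The following self-contained sketch shows the lookup; the class name ConfigExample is invented for illustration and is not part of the sample:

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.Properties;

public class ConfigExample {
    // Parse java.util.Properties-formatted text (KEY=value lines) into a Properties object.
    public static Properties load(String text) {
        Properties props = new Properties();
        try {
            props.load(new StringReader(text));
        } catch (IOException e) {
            throw new RuntimeException(e); // cannot happen with a StringReader
        }
        return props;
    }

    public static void main(String[] args) {
        String text = "SERVER=http://intel-examples.azurewebsites.net/logger/air-quality\n"
                    + "AUTH_TOKEN=s3cr3t\n";
        Properties props = load(text);
        System.out.println(props.getProperty("SERVER"));
        System.out.println(props.getProperty("AUTH_TOKEN"));
    }
}
```

In the actual sample the text would come from a FileReader over config.properties rather than a string.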

Preparing the Intel® Edison board before running the project

In order for the sample to run, you will need to copy some files to the Intel® Edison board. This can be done using SCP through SSH. The following files need to be copied from the sample repository:

Jar files: the external libraries in the project need to be copied to "/usr/lib/java"

Running the program using Intel® System Studio IoT Edition

When you're ready to run the example, make sure you have saved all the files.

Click the Run icon on the toolbar of Intel® System Studio IoT Edition. This runs the code on the Intel® Edison board.

Determining the IP address of the Intel® Edison board

You can determine what IP address the Intel® Edison board is connected to by running the following command:

ip addr show | grep wlan

You will see output similar to the following:

3: wlan0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
    inet 192.168.1.13/24 brd 192.168.1.255 scope global wlan0

The IP address is shown next to inet. In the example above, the IP address is 192.168.1.13.
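If you want to pick the address out of that output programmatically, a small parser like the one below could be used. The IpAddrParser class and its regular expression are illustrative additions, not part of the sample:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class IpAddrParser {
    // Extract the IPv4 address from an "ip addr show" line such as:
    //   "    inet 192.168.1.13/24 brd 192.168.1.255 scope global wlan0"
    private static final Pattern INET = Pattern.compile("inet\\s+(\\d+\\.\\d+\\.\\d+\\.\\d+)/\\d+");

    public static String parseInet(String line) {
        Matcher m = INET.matcher(line);
        return m.find() ? m.group(1) : null;
    }

    public static void main(String[] args) {
        String line = "    inet 192.168.1.13/24 brd 192.168.1.255 scope global wlan0";
        System.out.println(parseInet(line));
    }
}
```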

IMPORTANT NOTICE: This software is sample software. It is not designed or intended for use in any medical, life-saving or life-sustaining systems, transportation systems, nuclear systems, or for any other mission-critical application in which the failure of the system could lead to critical injury or death. The software may not be fully tested and may contain bugs or errors; it may not be intended or suitable for commercial release. No regulatory approvals for the software have been obtained, and therefore software may not be certified for use in certain countries or environments.

NFV Performance Optimization for Virtualized Customer Premises Equipment

Paul Veitch
BT Research & Innovation
Ipswich, UK
paul.veitch@bt.com
Tommy Long
Intel Shannon
County Clare, Ireland
thomas.long@intel.com
Paul Hutchison
Brocade
Bracknell, UK
paul.hutchison@brocade.com

Abstract. A key consideration for real-world network functions virtualization solutions is the ability to provide predictable and guaranteed performance for customer traffic. Although many proof-of-concepts have focused on maximizing network throughput, an equally important—indeed in some use cases a more important—performance metric is latency. This paper describes a testbed at BT's Adastral Park laboratories based on the Intel® Open Network Platform architecture, devised to characterize the performance of a virtual Customer Premises Equipment setup.

The test results highlight significant performance improvements in terms of reduction of latency and jitter using an optimized setup that incorporates the Data Plane Development Kit. For example, the average latency was reduced by between 38 percent and 74 percent (depending on the packet profile), while maximum latency was reduced by up to a factor of six. Such insights into performance optimization will be an essential component to enable intelligent workload placement and orchestration of resources in both current networks and future 5G deployments.

I. Introduction

Network functions virtualization (NFV) is rapidly moving from laboratory testbeds to production environments and trials involving real-world customers [1]. Standardization efforts continue apace via the European Telecommunications Standards Institute Industry Specification Group (ETSI ISG), where a key consideration is performance benchmarking and best practice [2]. Although network throughput is a key performance metric, this paper addresses the problem from a different angle, namely latency sensitivity. In particular, the virtual Customer Premises Equipment (vCPE) use case for enterprises involves a mix of virtualized network functions (VNFs) that typically reside at the customer premises. In the ETSI use case document [3], this is referred to as VE-CPE (Figure 1).

Examples of network functions at customer premises that can run as VNFs on a standard x86 server with a hypervisor include, but are not limited to, routers, firewalls, session border controllers, and WAN accelerators. The branch sites will often require modest WAN bandwidth compared with the hub sites, that is, a WAN link of tens or hundreds of Mbit/s rather than >=1 Gbit/s. From a performance perspective, therefore, there is less emphasis on maximizing throughput in a branch site vCPE implementation and a greater emphasis on ensuring that latency and jitter are kept to a minimum.

Figure 1: Virtual Enterprise CPE (VE-CPE) use case including branch locations.

Most corporate networks connecting branch sites across a WAN infrastructure will involve some proportion of Voice-over-IP (VoIP) traffic, and this will have much more stringent performance targets in terms of latency/jitter to ensure predictable and guaranteed performance. Even if voice-related network functions such as Session-Border Controllers (SBCs) have been implemented as “non-NFV” hardware appliances, if other functions at the customer premise that carry end-user traffic have been virtualized—an obvious example is the Customer Edge (CE) router—it is vital to ensure that the performance of the NFV infrastructure has been suitably “tuned” to provide some level of predictability for latency and jitter performance. This will provide a clearer view of the contribution that the NFV infrastructure components make to the overall performance characterization of latency-sensitive applications.

Section II explains the Intel® Open Network Platform (Intel® ONP) and the Data Plane Development Kit (DPDK), while Section III outlines the approach to testing a vCPE setup in terms of latency/jitter performance characterization. The actual test results are detailed in Section IV, followed by conclusions in Section V and recommended further work in Section VI.

II. Open Network Platform and Data Plane Development Kit

The way in which telecoms operators define and implement NFV solutions for specific use cases such as vCPE depends on a number of factors, including cost, technical criteria, and ensuring vendor interoperability. Combining these criteria results in increased motivation towards open-source solutions for NFV, for example those that leverage the use of Kernel-based Virtual Machine (KVM) hypervisor technology with Open vSwitch* (OVS), and open management tools such as OpenStack*. Intel ONP combines a number of such open source "ingredients" to produce a modular architectural framework for NFV [4].

From a performance perspective, one of the key components of the Intel ONP architecture is the DPDK, which can be used to maximize performance for VNFs running on the KVM hypervisor. Figure 2(a) shows a simple overview of the standard Open vSwitch, while Figure 2(b) depicts the Open vSwitch with DPDK. In the standard OVS, packets forwarded between Network Interface Controllers (NICs) traverse the kernel space data path of the virtual switch, which consists of a simple flow table indicating what to do with the packets that are received. Only the first packet in a flow needs to go to the user space of the virtual switch (via the "slow path"), because it does not match any entry in the simple table in the kernel data path. After the user space of the OVS handles the first packet in the flow, it updates the flow table in the kernel space so that subsequent packets in the flow are not sent to the user space. In this way, the kernel flow table holds entries only for active flows, and the number of packets that need to traverse the computationally expensive user space path is minimized.
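The kernel flow-table caching described above can be sketched as a simple cache: the first packet of a flow misses the table, is classified on the expensive path, and installs an entry that all later packets of the flow hit. This is an illustrative model only (the FlowCacheSketch class and its method names are invented and bear no relation to actual OVS code):

```java
import java.util.HashMap;
import java.util.Map;

public class FlowCacheSketch {
    // Illustrative model: flow key -> action, mimicking the kernel flow table.
    private final Map<String, String> kernelTable = new HashMap<>();
    private int slowPathHits = 0;

    // Stand-in for the expensive user-space classification (the OVS "slow path").
    private String classifyInUserSpace(String flowKey) {
        slowPathHits++;
        return "forward";
    }

    public String handlePacket(String flowKey) {
        String action = kernelTable.get(flowKey);
        if (action == null) {                 // first packet of the flow: table miss
            action = classifyInUserSpace(flowKey);
            kernelTable.put(flowKey, action); // install an entry for later packets
        }
        return action;                        // subsequent packets stay on the fast path
    }

    public int slowPathHits() { return slowPathHits; }

    public static void main(String[] args) {
        FlowCacheSketch sw = new FlowCacheSketch();
        for (int i = 0; i < 1000; i++) {
            sw.handlePacket("10.0.0.1->10.0.0.2:5060");
        }
        System.out.println("slow path consulted " + sw.slowPathHits() + " time(s) for 1000 packets");
    }
}
```

Only the first of the 1000 packets takes the slow path; the rest are resolved from the cached entry.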

Figure 2: Overview of (a) Open vSwitch* and (b) Data Plane Development Kit vSwitch.

In the Open vSwitch with DPDK model (Figure 2(b)), the main forwarding plane (sometimes called the "fast path") is in the user space of the OVS and uses DPDK. One of the key differences with this architecture is the fact that the NICs are now driven by Poll Mode Drivers (PMDs), meaning incoming packets are continuously polled rather than being interrupt-driven in an asynchronous fashion. Initial packets in a flow are sent to another module in the user space, analogous to the slow-path handling of first packets in the standard OVS case.
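The difference between interrupt-driven and poll mode receive can be illustrated with a toy busy-poll loop. The RxQueue interface and drain helper below are invented for illustration and have no relation to the actual DPDK API:

```java
import java.util.ArrayDeque;
import java.util.List;

public class PollModeSketch {
    // Contrast: interrupt-driven I/O sleeps until an event arrives; a poll mode
    // driver spins, asking the receive queue for packets on every iteration.
    interface RxQueue { String poll(); }   // returns a packet, or null if the queue is empty

    public static int drain(RxQueue q, int iterations) {
        int received = 0;
        for (int i = 0; i < iterations; i++) {
            String pkt = q.poll();          // non-blocking: no interrupt, no sleep
            if (pkt != null) received++;    // most iterations may find nothing
        }
        return received;
    }

    public static void main(String[] args) {
        ArrayDeque<String> backing = new ArrayDeque<>(List.of("p1", "p2", "p3"));
        int n = drain(backing::poll, 10);
        System.out.println("received " + n + " packets in 10 polls");
    }
}
```

The constant spinning trades CPU cycles for latency: packets are picked up as soon as they appear, without waiting for an interrupt to be raised and serviced.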

Figure 3 depicts the actual traffic flows from OVS to the guest virtual machine (VM), which in the context of this paper is a virtual router function. In the case of standard OVS, the OVS forwarding is completed in the kernel space (Figure 3(a)), while for the OVS with DPDK model, the OVS forwarding is completed in the user space (Figure 3(b)), and the guest VM’s virtio queues are mapped to the OVS DPDK and hence can be read/written directly by the OVS. This “user space to user space” path should prove more performant than the “kernel-based” traffic path. Note that for both architectures, the guest VM can use either the DPDK or standard Linux* drivers. In the tests described later, the high-performance scenario uses a VNF with DPDK drivers.

Figure 3: Traffic flows: (a) Open vSwitch* and (b) Data Plane Development Kit vSwitch.

In theory, the Open vSwitch with DPDK should enable improved performance over the standard OVS model. However it is important to conduct specific tests to validate this practically. The next section describes the testbed setup, with the actual results explained in the subsequent section.

III. Overview of testbed

The key components of the testbed are shown in Figure 4.

Figure 4: Baseline and high-performance testbeds (specific hardware details shown relate to compute nodes that are “systems-under-test”).

There were two reference testbed platforms used for the purposes of exploring and comparing the impact of high-performance tuning such as DPDK on latency/jitter test results. Each testbed comprises a single OpenStack controller node and a corresponding compute node, built using the OpenStack "Kilo" release. Essentially the compute node and associated guest VNFs running on the hypervisor represent the "systems-under-test" (SUT).

The baseline setup uses an Intel® Xeon® processor E5-2680 (code-named Sandy Bridge) and does not include any BIOS optimizations. In contrast, the high-performance setup uses an Intel® Xeon® processor E5-2697 v3 (code-named Haswell) and includes certain BIOS tuning, such as "maximize performance versus power" and disablement of C-states and P-states. The baseline uses the standard kernel data path, whereas the high-performance setup uses the OVS DPDK data path. Although both testbeds use Fedora* 21 as the base OS, the baseline uses a standard non-real-time kernel (3.18), whereas the high-performance setup uses the Linux Real-Time Kernel (3.14) with a tuned configuration (isolation of vSwitch and VM cores from the host OS, disabling Security-Enhanced Linux, using idle polling, and selecting the Time-Stamp Counter (TSC) clock source). The baseline setup uses "vanilla" OpenStack settings to spin up the VM and assign network resources. In contrast, the high-performance setup is more finely tuned to allow dedicated CPUs to be pinned for the vSwitch and VNFs respectively. The high-performance setup also ensures that the CPUs and memory from the same socket are used for the VNFs, and the specific socket in use is the one that connects directly to the physical NIC interfaces of the server.

In both testbeds, the actual VNF used in all tests was a Brocade 5600* virtual router R3.5, and a Spirent Test Center C1* load-testing appliance was used for running test traffic. In the high-performance scenario, the virtual router uses DPDK drivers. As shown in Figure 5, both single VNF and dual VNF service chain permutations were tested.

Figure 5: Systems-under-test: (a) single virtual router and (b) two virtual routers in series service chain.

The test cases were set up to ensure “safe” operational throughput for low-end branch offices (<=100Mbit/s) such that no adverse impact on the latency/jitter measurements would occur. The following packet profiles were used for all tests:

  • 64-byte frames (bidirectional load of 25 Mbps)
  • 256-byte frames (bidirectional load of 25 Mbps)
  • “iMix” blend of frame sizes in a realistic representation (bidirectional load of 50 Mbps)
  • 1500-byte frames (bidirectional load of 100 Mbps)
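For intuition, the offered frame rates implied by these loads can be computed directly. This sketch simply divides the offered load by the frame size in bits and deliberately ignores Ethernet preamble and interframe gap overhead, so it is an approximation:

```java
public class FrameRate {
    // Frames per second implied by a load in Mbit/s, ignoring preamble/IFG overhead.
    public static double framesPerSecond(double mbps, int frameBytes) {
        return (mbps * 1_000_000.0) / (frameBytes * 8);
    }

    public static void main(String[] args) {
        System.out.printf("64-byte frames  @  25 Mbps: %.0f fps%n", framesPerSecond(25, 64));
        System.out.printf("256-byte frames @  25 Mbps: %.0f fps%n", framesPerSecond(25, 256));
        System.out.printf("1500-byte frames @ 100 Mbps: %.0f fps%n", framesPerSecond(100, 1500));
    }
}
```

Small frames at the same bit rate mean many more packets per second for the switch and VNF to process, which is why the 64-byte case is the most demanding.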

The test equipment uses a signature with a timestamp to determine the latency between frames. This signature is at the end of the payload next to the FCS (Frame Check Sequence), and has a timestamp, sequence numbers, and stream ID. Jitter is defined here as the time difference between two arriving frames in the same stream. Hence this is a measure of packet delay variation. The same traffic load comprising a single traffic flow was generated to the SUT in each direction, and the results described in the following section capture the “worst-case” metrics observed for a particular direction (that is, the cited values of latency, jitter, and so on are for a single direction only and not round-trip values). It is also worth noting that the test results displayed are runtime in the sense that results are cleared after ~20 seconds, and then allowed to run for the designated duration to ensure that the first packets in a flow taking the slow path do not distort results.
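The latency and jitter definitions above can be made concrete with a short sketch: per-frame one-way latency is receive timestamp minus transmit timestamp, and jitter is the difference between consecutive frames' latencies (a packet delay variation measure). This is illustrative only; the Spirent hardware computes these internally from the embedded signatures:

```java
public class LatencyJitter {
    // Maximum one-way latency (rx - tx) over a stream, timestamps in microseconds.
    public static double maxLatency(double[] tx, double[] rx) {
        double max = 0;
        for (int i = 0; i < tx.length; i++) max = Math.max(max, rx[i] - tx[i]);
        return max;
    }

    // Maximum difference between consecutive frames' latencies (delay variation).
    public static double maxJitter(double[] tx, double[] rx) {
        double max = 0;
        for (int i = 1; i < tx.length; i++) {
            double prev = rx[i - 1] - tx[i - 1];
            double cur = rx[i] - tx[i];
            max = Math.max(max, Math.abs(cur - prev));
        }
        return max;
    }

    public static void main(String[] args) {
        double[] tx = {0, 100, 200, 300};
        double[] rx = {40, 142, 238, 341};   // per-frame latencies: 40, 42, 38, 41 us
        System.out.println("max latency (us): " + maxLatency(tx, rx));
        System.out.println("max jitter  (us): " + maxJitter(tx, rx));
    }
}
```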

IV. Test results

A. Mixed Tests

The average one-way latency measured over 5-minute durations for the four different packet profile scenarios is shown for the single VNF setup in Figure 6 and the dual VNF setup in Figure 7.

Figure 6: Average latency results in microseconds for a single virtualized network function (lower is better).

The results for average latency across the range of packet profiles clearly show significantly improved performance (lower average latency) in the high-performance setup compared with the "vanilla" baseline testbed. For the single VNF tests, the average latency is reduced by between 38 percent and 74 percent, while in the dual VNF scenario the degree of reduction is between 34 percent and 66 percent. As would be expected, the dual VNF case involves higher overall latency results for both testbeds due to more packet switches between the VNF instances and through the virtual switches within the hypervisor. Note that zero packet loss was observed for these tests.

Figure 7: Average latency results in microseconds for two virtualized network functions (lower is better).

B. Focus on 256-Byte Packet Tests

It is instructive to explore in more detail the results for a specific packet profile scenario. For example, the 256-byte packet tests are closely representative of VoIP frames generated using the Real-Time Transport Protocol (RTP) with G.711 encoding [5]. Figure 8 shows the minimum, average, and maximum one-way latency values for both single and dual VNF scenarios using 256-byte packets.

Figure 8: Detailed latency results in microseconds for 256-byte packets.

Figure 9: Detailed jitter results in microseconds for 256-byte packets.

Figure 9 shows the corresponding average and maximum one-way jitter values. The maximum values of latency and jitter are important for gauging worst-case performance characterization for both testbeds. Crucially, the maximum latency is reduced in the high-performance setup by factors of 6 and 7.4 for the single and dual VNF cases, respectively. The maximum jitter meanwhile is reduced by factors of 24 and 8.5 for the single and dual VNF cases, respectively. Note that zero packet loss was observed for these tests.

As well as assessing performance over short fixed duration intervals, it is important to understand the potential drift in performance over longer periods. Figure 10 shows the results for 5-minute maximum latency tests compared to 16-hour tests carried out using 256-byte packets and the single VNF setup. In essence, the test result highlights the similar performance improvement achieved using the optimized (that is, high-performance) versus non-optimized (baseline) setup: the maximum latency is reduced by a factor of 5 for the 16-hour test and a factor of 6 for the 5-minute test. The key point to note, however, is that the maximum latency values are significantly higher in the 16-hour tests, which can be attributed to very occasional system interrupt events (that is, housekeeping tasks) that affect only a very small number of test packets. Despite this, the value of 2-msec maximum one-way latency for the 16-hour duration/256-byte packet test on the high-performance setup is still comfortably within the one-way transmission target of 150 msec for voice traffic, as specified in ITU-T Recommendation G.114 [6]. In other words, the 2-msec worst-case contribution added by the vCPE setup only amounts to 1.3% of the overall one-way recommended budget for latency. Indeed, even the non-optimized baseline setup's 9.95-msec maximum one-way latency is only 6.6% of this budget.

Figure 10: Soak test latency results in microseconds (16-hour compared to 5-minute tests).
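The budget percentages quoted above follow from simple arithmetic; this sketch reproduces them, taking the 150-msec figure as the G.114 one-way target cited in the text:

```java
public class LatencyBudget {
    // Share (in percent) of a one-way latency budget consumed by a measured maximum latency.
    public static double percentOfBudget(double maxLatencyMs, double budgetMs) {
        return 100.0 * maxLatencyMs / budgetMs;
    }

    public static void main(String[] args) {
        // 150 ms is the ITU-T G.114 one-way transmission target for voice.
        System.out.printf("optimized setup : %.1f%% of budget%n", percentOfBudget(2.0, 150.0));   // ~1.3%
        System.out.printf("baseline setup  : %.1f%% of budget%n", percentOfBudget(9.95, 150.0));  // ~6.6%
    }
}
```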

V. Summary and Conclusions

This paper has demonstrated fine-tuning of a virtual CPE infrastructure platform based on a KVM hypervisor. Specifically, leveraging some of the components of the Intel ONP architecture, such as the DPDK, can provide significant improvements in performance over a baseline (that is, non-optimized) setup. For the single VNF tests, the average one-way latency is reduced by between 38 percent and 74 percent, while in the dual VNF scenario, the degree of reduction is between 34 percent and 66 percent. For the more VoIP-representative test case using 256-byte packets, the maximum latency is reduced in the high-performance setup by factors of 6 and 7.4 for the single and dual VNF cases, respectively, while maximum jitter is reduced by factors of 24 and 8.5 for the single and dual VNF cases, respectively.

Based on the experimental results, it can be concluded that performance tuning of NFV infrastructure for latency-sensitive applications such as VoIP achieves better and more deterministic overall performance in terms of latency and jitter than a baseline (that is, non-optimized) setup. Whether a network operator decides to implement such optimizations will be driven largely by the required mix of VNFs being supported on the vCPE infrastructure, and the degree to which SLAs for performance metrics such as latency and jitter must be associated with the services underpinned by the VNFs. In practical terms, actual target SLAs used by network operators will cite performance targets within the scope of the operator's own backbone/transit network domain, and will vary according to geography. IP packet round-trip values of 30 msec to 40 msec for Europe, 40 msec to 50 msec for North America, and up to 90 msec for trans-Atlantic links are typical examples of targets for average latency.

If network operators do opt for optimized NFV setups using capabilities such as DPDK, rather than a vanilla out-of-the-box solution, they need to be aware of the possible impact on higher-layer orchestration solutions, which will need clearer visibility of underlying infrastructure parameters and settings to ensure VNFs with specific performance requirements are provisioned and managed accordingly. The experiments presented in this paper can be viewed as a foundation to help advance the understanding of such performance optimizations, equally applicable to current networks, and future 5G infrastructures.

VI. Future Work

Further research topics of interest include the following:

  • Consideration of “layers of optimization” and their individual contributions to the overall optimization: hardware choices (specifically, the contribution of the Intel® Xeon® processor E5-2680 versus the Intel® Xeon® processor E5-2697 v3 to the differences in latencies), BIOS settings (for example, P-state enablement to allow utilization of Enhanced Intel® SpeedStep® Technology for improved power/load efficiency), Real-Time Kernel tuning options (for example, the “no hertz kernel” and read-copy-update/RCU polling), hypervisor settings, and VNF setup all contribute to the architecture. Therefore, clearer visibility of the possible optimizations and their effect at each layer should be assessed.
  • Similar tests can be considered to perform a performance characterization based on a richer suite of diverse VNF types, including VoIP-specific VNFs.
  • Test analytics can be further refined to assess profiling and frequency distributions of packet latency and jitter performance.
  • Further analysis of the impact of NFV optimizations on higher-level management: making an orchestrator aware of underlying resources and the ability to leverage specific fine-tuning of NFV infrastructure using capabilities like DPDK, adds complexity to the management solution, but makes it possible to customize the allocation of latency-sensitive VNFs onto the most suitable NFV infrastructure.

As is evident, there are a number of interesting challenges and problems yet to be addressed in this space.

References

1. J. Stradling. “Global WAN Update: Leading Players and Major Trends,” Current Analysis Advisory Report, Sept 2015.

2. “Network Functions Virtualization Performance & Portability Best Practices,” ETSI ISG Specification GS NFV-PER001, V1.1.1. June, 2014.

3. “Network Functions Virtualisation Use Cases,” ETSI ISG Specification GS NFV001, V1.1.1. October 2013.

4. “Intel® Open Network Platform Server (Release 1.5),” Release Notes, November 2015.

5. “NFV Performance benchmarking for vCPE,” Network Test Report, Overture Networks, May 2015.

6. “ITU-T Recommendation G.114: One-Way Transmission Time,” May 2003.

How-To Intel® IoT Technology Code Samples: Access Control in Java*


Introduction

This access control system application is part of a series of how-to Intel Internet of Things (IoT) code sample exercises using the Intel® IoT Developer Kit, Intel® Edison development platform, cloud platforms, APIs, and other technologies.

From this exercise, developers will learn how to:

  • Connect the Intel® Edison development platform, a computing platform designed for prototyping and producing IoT and wearable computing products.
  • Interface with the Intel® Edison platform IO and sensor repository using MRAA and UPM from the Intel® IoT Developer Kit, a complete hardware and software solution to help developers explore the IoT and implement innovative projects.
  • Run this code sample in Intel® System Studio IoT Edition. Intel® System Studio lets you create and test applications on Intel®-based IoT platforms.
  • Set up a web application server that lets users enter the access code to disable the alarm system, and store this alarm data using Azure Redis Cache* from Microsoft Azure*, Redis Store* from IBM Bluemix*, or ElastiCache* using Redis* from Amazon Web Services* (AWS). These cloud services for connecting IoT solutions include data analysis, machine learning, and a variety of productivity tools to simplify the process of connecting your sensors to the cloud and getting your IoT project up and running quickly.

What it is

Using an Intel® Edison board, this project lets you create a smart access control system that:

  • monitors a motion sensor to detect when a person is in an area that requires authorization.
  • can be accessed with your mobile phone via the built-in web interface to disable the alarm.
  • keeps track of access using cloud-based data storage.

How it works

This access control system provides the following user flow: 

  1. Passive infrared (PIR) motion sensor looks for motion.
  2. User sets off the motion detector and has 30 seconds to enter the correct code in the browser.
  3. If the user fails to enter the code in the given time, the alarm goes off.
  4. If the user enters the correct code, the system waits for 30 seconds before allowing the user to pass.
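The flow above amounts to a small state machine. The following is an illustrative sketch only; the class, state names, and tick method are invented and are not the sample's actual code:

```java
public class AccessControlFlow {
    // Illustrative state machine for the user flow described above.
    public enum State { LOOKING_FOR_MOTION, WAITING_FOR_CODE, ALARM, ACCESS_GRANTED }

    private static final long WINDOW_MS = 30_000;   // 30-second window to enter the code
    private State state = State.LOOKING_FOR_MOTION;
    private long motionAtMs;

    // Called when the PIR sensor reports motion (step 1/2).
    public void motionDetected(long nowMs) {
        if (state == State.LOOKING_FOR_MOTION) {
            state = State.WAITING_FOR_CODE;
            motionAtMs = nowMs;
        }
    }

    // Called periodically, and whenever a code is submitted from the web interface.
    public State tick(long nowMs, String submittedCode, String expectedCode) {
        if (state == State.WAITING_FOR_CODE) {
            if (submittedCode != null && submittedCode.equals(expectedCode)) {
                state = State.ACCESS_GRANTED;        // step 4: correct code entered
            } else if (nowMs - motionAtMs > WINDOW_MS) {
                state = State.ALARM;                 // step 3: time expired, sound the alarm
            }
        }
        return state;
    }

    public static void main(String[] args) {
        AccessControlFlow flow = new AccessControlFlow();
        flow.motionDetected(0);
        System.out.println(flow.tick(10_000, null, "4321"));     // still within the window
        System.out.println(flow.tick(20_000, "4321", "4321"));   // correct code submitted
    }
}
```

In the real sample, the timestamps would come from the system clock and the submitted code from the web server; here they are passed in explicitly so the logic is testable.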

Additionally, various events (looking-for-motion, motion-detected, invalid-code, etc.) are logged. Optionally, all data can be stored using the Intel® IoT Examples data store running in your own Microsoft Azure* account.

Hardware requirements

Grove* Starter Kit Plus containing:

Software requirements

  1. Intel® System Studio IoT Edition
  2. Microsoft Azure*, IBM Bluemix*, or AWS account (optional)

How to set up

To begin, clone the How-To Intel IoT Code Samples repository with Git* on your computer as follows:

$ git clone https://github.com/intel-iot-devkit/how-to-code-samples.git

To download a .zip file, in your web browser, go to https://github.com/intel-iot-devkit/how-to-code-samples and click the Download ZIP button at the lower right. Once the .zip file is downloaded, uncompress it, and then use the files in the directory for this example.

Adding the program to Intel® System Studio IoT Edition

Note: The following screenshots are from the Alarm Clock sample; however, the technique for adding the program is the same, just with different source files and jars.

Open Intel® System Studio IoT Edition. It will start by asking for a workspace directory; choose one, and then click OK.

In Intel® System Studio IoT Edition, select File -> New -> Intel® IoT Java Project:

Give the project the name "AccessControl" and then click Next.

You now need to connect to your Intel® Edison board from your computer to send code to it. Choose a name for the connection and enter the IP address of the Intel® Edison board in the "Target Name" field. You can also try to search for it using the "Search Target" button. Click Finish when you are done.

You have successfully created an empty project. You now need to copy the source files and the config file to the project. Drag all of the files from your Git repository's "src" folder into the new project's src folder in Intel® System Studio IoT Edition. Make sure the previously auto-generated main class is overwritten.

The project uses the following external jars: gson-2.6.1, jetty-all-9.3.7.v20160115-uber, and joda-time-2.9.2. These can be found in the Maven Central Repository. Create a "jars" folder in the project's root directory, and copy all needed jars into this folder. In Intel® System Studio IoT Edition, select all the jar files in the "jars" folder, and then right-click -> Build path -> Add to build path.

Now you need to add the UPM jar files relevant to this specific sample. Right-click the project's root -> Build path -> Configure build path. On the Java Build Path 'Libraries' tab, click "Add External JARs...".

For this sample you will need the following jars:

  1. upm_i2clcd.jar
  2. upm_biss0001.jar

The jars can be found at the IoT Devkit installation root path\iss-iot-win\devkit-x86\sysroots\i586-poky-linux\usr\lib\java

Connecting the Grove* sensors

You need to have a Grove* Shield connected to an Arduino*-compatible breakout board to plug all the Grove devices into the Grove Shield. Make sure the tiny VCC switch on the Grove* Shield is set to 5V.

  1. Plug one end of a Grove cable into the Grove PIR Motion Sensor, and then connect the other end to the D4 port on the Grove Shield.
  2. Plug one end of a Grove cable into the Grove RGB LCD, and connect the other end to any of the I2C ports on the Grove Shield.

Data store server setup

Optionally, you can store the data generated by this sample program in a backend database deployed using Microsoft Azure*, IBM Bluemix*, or AWS*, along with Node.js, and a Redis* data store.

For information on how to set up your own cloud data server, go to:

https://github.com/intel-iot-devkit/intel-iot-examples-datastore

Configuring the example

To configure the example for the optional data store, change the SERVER and AUTH_TOKEN keys in the config.properties file to the server URL and authentication token that correspond to your own data store server setup. For example:

  SERVER: "http://intel-examples.azurewebsites.net/logger/access-control"
  AUTH_TOKEN: "s3cr3t"

To configure the required access code to be used for the example app, change the CODE key in the config.properties file to whatever you want to use. For example:

  CODE: "4321"

Preparing the Intel® Edison board before running the project

In order for the sample to run, you will need to copy some files to the Intel® Edison board. Two sorts of files need to be copied from the sample repository:

  1. Jar files: external libraries in the project need to be copied to "/usr/lib/java"
  2. web files: files within site_contents folder need to be copied to "/var/AccessControl"

Running the program using Intel® System Studio IoT Edition

When you're ready to run the example, make sure you have saved all the files.

Click the Run icon on the toolbar of Intel® System Studio IoT Edition. This runs the code on the Intel® Edison board.

You will see output similar to the following when the program is running.

Stopping the alarm

The alarm is set using a single-page web interface served directly from the Intel® Edison board while the sample program is running.

The web server runs on port 8080, so if the Intel® Edison board is connected to Wi-Fi* on 192.168.1.13, the address to browse to if you are on the same network is http://192.168.1.13:8080.

Determining the IP address of the Intel® Edison board

You can determine what IP address the Intel® Edison board is connected to by running the following command:

ip addr show | grep wlan

You will see output similar to the following:

3: wlan0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
    inet 192.168.1.13/24 brd 192.168.1.255 scope global wlan0

The IP address is shown next to inet. In the example above, the IP address is 192.168.1.13.

IMPORTANT NOTICE: This software is sample software. It is not designed or intended for use in any medical, life-saving or life-sustaining systems, transportation systems, nuclear systems, or for any other mission-critical application in which the failure of the system could lead to critical injury or death. The software may not be fully tested and may contain bugs or errors; it may not be intended or suitable for commercial release. No regulatory approvals for the software have been obtained, and therefore software may not be certified for use in certain countries or environments.

 

IoT Path-to-Product: The Making of an Intelligent Vending Machine


To demonstrate a rapid path-to-product IoT solution for retail using cloud data analytics, a proof of concept was created using the Intel® IoT Developer Kit and Grove* IoT Commercial Developer Kit that was scaled to an industrial solution using an Intel® IoT Gateway, industrial sensors, the Intel® IoT Gateway Software Suite, Intel® System Studio, and Microsoft Azure* cloud services. This solution monitors the inventory, product sales, and maintenance of a vending machine. The gateway gathers data from temperature sensors, stepper motors, a coil switch, and a product-purchasing application for edge data analytics.

This article contains an overview of the creation of the Intelligent Vending Machine Prototype. For a how-to, see IoT Path-to-Product: How to Build an Intelligent Vending Machine.

Visit GitHub for this project's latest code samples and documentation.

 

A key vector of opportunity related to the Internet of Things (IoT) springs from adding intelligence to everyday devices, as a means to improve their operation as well as the efficiency and effectiveness of the business operations associated with them. For example, vending machines are ubiquitous, and the familiar, common machines with coin or currency acceptors represent a significant potential revenue stream for retailers of various types. It’s no wonder that the scope of goods available in vending machines has grown dramatically in recent years, including consumer electronics being commonly sold in airports and other facilities.

Figure 1. Completed intelligent vending machine.

Vending machines have the advantage over other retail points of presence that they operate 24 hours per day, 365 days per year, without the requirement for human cashiers. They also give distributors significant control where it would otherwise not be possible, such as in public spaces and office buildings. At the same time, however, vending machines require regular service: frequently to replenish the product being sold and less frequently for scheduled and unscheduled maintenance. 

Giving the owners of vending machines greater insight into the status of each unit in their vending fleet has the potential to improve the efficiency of that service effort. Intel® undertook a development project to investigate this and other opportunities associated with building an intelligent vending machine. The completed device is shown in Figure 1. This project drew inspiration in large part from the existing solution blueprint for intelligent vending machines from Intel and ADLINK Technologies. This document recounts the course of that project development effort. It begins with an abstract description of a structured project methodology, divided into phases, and then recounts the process of the project’s development in detail, phase by phase. 

Interested parties can use this narrative to retrace the steps taken by the Intel® project team in developing the intelligent vending machine. Perhaps more importantly, however, it can be generalized as a set of guidelines to address the needs of other types of projects. Intel® makes this document freely available and encourages the use of this methodology and process to drive inquiry, invention, and innovation for the Internet of Things.


Methodology

By its nature, IoT embraces open-ended innovation, with endless diversity of projects to add intelligence to objects from the simple to the complex, and from the mundane to the exotic. At the same time, every project builds on experience the industry has gained from IoT projects that have gone before, and best practices suggest structural elements in common among IoT projects in general.

To take advantage of those commonalities and help increase the chances of success during development, Intel has developed a structured approach to IoT project development. It consists of a six-phase model that guides the entire path to product, which begins with the first glimmer of an idea and follows through until the final solution is in commercial use. It is intended to be general enough that it can be adapted to the needs of any IoT project.

Initiation phases (1-3)

The first three phases of the project methodology are investigative. They focus on ideation and assessing the potential of the project to solve a given problem, in preparation for ultimately leading to a commercially viable product. As such, these phases value brainstorming and proof of concept over rigorously addressing finer points of design. 

Rapid prototyping is facilitated by using the Grove* IoT Commercial Developer Kit, which consists of an Intel® NUC system, Intel® IoT Gateway Software Suite, and the Grove* Starter Kit Plus (manufactured by Seeed). The project also uses the Arduino* 101 board.

Note: Known in the United States as “Arduino 101,” this board is known elsewhere as “Genuino* 101.” It is referred to throughout the rest of this document as the “Arduino 101” board.

  • Phase 1: Define the opportunity. The first step of an IoT project is to identify the problem or opportunity that the project will address. Documentation at this stage should identify the opportunity itself, the value of the solution (to end users as well as the organizations that build and implement it), and limitations of the project concept, including design challenges and constraints.
  • Phase 2: Design a proof of concept to take advantage of the opportunity. The initial solution design proposes a practical approach to building the solution, including hardware, software, and network elements. The design must address the design challenges and constraints identified in Phase 1 to the extent that is possible before building a prototype, including due consideration of factors such as cost and security.
  • Phase 3: Build and refine the proof of concept. The solution prototype is based on the design established in Phase 2, making and documenting changes as needed. Design changes based on shortcomings and additional opportunities identified during testing should be documented as part of this phase.

Completion phases (4-6)

The last three phases of the project methodology proceed only after the decision has been made to go forward with productizing the solution. As such, these phases are explicitly concerned with hardening the solution in terms of stability, security, and manageability, preparing it for mass production, and monetizing it to realize its commercial potential.

The completion phases of the project involve shifting the solution to industrial-grade sensors and related components, building it out using a commercial-grade gateway, and finalizing the feature set.

  • Phase 4: Produce a solid beta version. Once the project has been approved as a viable solution to develop toward production, the next step is to produce a product-oriented version that is explicitly intended to finalize the design. This version represents a significant investment in resources, including commercial-grade sensors and other components, as well as a commercial IoT gateway.
  • Phase 5: Evaluate functionality and add features. The completed beta version of the solution is tested to verify that it functions correctly according to the design parameters. As part of the testing process, the project team also identifies additional features and functionality and incorporates them into the solution to make it more robust and valuable to end users.
  • Phase 6: Finalize design and move into production. Once the product is feature-complete, the team hardens the solution by adding advanced manageability and security features, as well as optimizing the design as needed to enhance factors such as marketability and efficiency of manufacturing. The production user interface (UI) for the solution is finalized. This phase also includes final planning for merchandising and marketing the solution before moving into full production.

Phase 1: Defining the opportunity 

While traditional vending machines represent lucrative revenue streams, they are woefully inefficient. Each machine must be serviced on a regular basis by a human attendant to replenish the machine’s stock of product. This task is typically handled by assigning machines to regular routes that are followed by personnel in trucks. 

To understand the inherent inefficiency in this approach, consider the activity along a route that includes a high-rise office building. Here, the attendant pulls up in front of the building and has the choice either to guess what will be needed in the machines up on the 15th and 20th floors, bring the product up, and then make another round trip to bring the rest of what was needed, or else to make a dedicated inventory trip with a notepad in hand. Either approach takes needless time and effort that costs the vending company money. 

Moreover, distributors must seek a balance between dispatching too many trips by attendants (wasting payroll hours) or too few (leaving machines depleted of stock and missing out on revenue). The situation becomes even more problematic because the distributor must depend to some degree on end-customers to report when a machine is out of order.

Project initiators at Intel determined that an intelligent vending machine was viable as the basis for a project to demonstrate IoT capabilities and the project methodology described in this document. That core group identified the skill sets likely to be required during the project, including project management, programming, cloud architecture, and documentation. Based on that list, the core group formed the full project team, drawn mostly from Intel employees, with a small number of external personnel included to round out the team's expertise.

The full project team’s first order of business was to quantify the potential opportunity associated with the project, as the basis for the initial prototype design. The core opportunity for this use case was identified as enabling a vending machine to intelligently monitor its level of product inventory and its operational status, and to be able to report that information back through an IoT gateway to the cloud.

The team elected to integrate cloud resources for the data store and administrative functionality. The goal of this approach was to facilitate a fully connected and scalable solution that optimizes operations using an overall view of a fleet of vending machines. The key value of the cloud approach lies in its potential for analytics, which could predict sales to optimize the supply chain across many distributed machines. Analytics could also be used to improve the efficiency of the personnel who replenish the inventory in the machines and perform unscheduled mechanical maintenance.

Phase 2: Designing the proof-of-concept prototype 

The project team determined that, for this project to be as useful as possible for the developer community, it should be based on readily available parts and technologies. Based on that decision, it was decided to limit the bill of materials to the Grove* IoT Commercial Developer Kit, Intel® IoT Developer Kit, and Intel® IoT Gateway Software Suite (https://software.intel.com/en-us/iot/hardware/gateways/wind-river), using software technologies that are widely used in the industry and available at low or no cost, with the use of free open-source software (FOSS) wherever practical.

To accelerate the prototype stage and reduce its complexity, the team elected to build the local portion of the prototype as a bench model that would consist of the compute platform and sensors, without incorporating an actual vending machine, although such a device would be added at a future stage of the project.

Prototype hardware selection

The Intel® NUC Kit DE3815TYKHE small-form-factor PC was chosen for this project. This platform is pictured in Figure 2, and its high-level specifications are given in Table 1. In addition to its robust performance, the team felt that, as Intel’s most recently introduced hardware platform specifically targeting IoT, it was a forward-looking choice for this demonstration project. Based on the Intel® Atom™ processor E3815, the Intel NUC offers a fanless thermal solution, 4 GB of onboard flash storage (and SATA connectivity for additional storage), as well as a wide range of I/O. The Intel NUC is conceived as a highly compact and customizable device that provides capabilities at the scale of a desktop PC.

To simplify the process of interfacing with sensors, the team elected to take advantage of the Arduino* ecosystem using the Arduino* 101 board, also shown in Figure 2, with specifications given in Table 1. This board makes the Intel NUC both hardware and pin compatible with Arduino shields, in keeping with the open-source ideals of the project team. While Bluetooth* is not used in the current iteration of the project, the board does have that functionality, which the team is considering for future use.

Figure 2. Intel® NUC Kit DE3815TYKHE and Arduino* 101 board.

 


Table 1. Prototype hardware used in intelligent vending project.

 

Intel® NUC Kit DE3815TYKHE:

  • Processor: Intel® Atom™ processor E3815 (512K cache, 1.46 GHz)
  • Memory: 8 GB DDR3L-1066 SODIMM (max)
  • Networking/IO: integrated 10/100/1000 LAN
  • Dimensions: 190 mm x 116 mm x 40 mm

Arduino* 101 Board:

  • Microcontroller: Intel® Curie™ Compute Module @ 32 MHz
  • Memory: 196 KB flash memory, 24 KB SRAM
  • IO: 14 digital I/O pins, 6 analog I/O pins
  • Dimensions: 68.6 mm x 53.4 mm

For the sensors and other components needed in the creation of the prototype, the team chose the Grove* Starter Kit for Arduino* (manufactured by Seeed Studio), which is based on the Grove* Starter Kit Plus used in the Grove* IoT Commercial Developer Kit. This collection of components is available at low cost, and because it is a pre-selected set of parts, it reduces the effort required to identify and procure the bill of materials for IoT prototypes in general. Selection of sensors and other components for the prototype (detailed in the following section) is guided by the following key data:

  • Internal temperature of the machine
  • Inventory levels of each vendable item in the machine
  • Door open or closed status
  • Detection of a jam in the vending coil

Prototype software specification

For the prototype OS, the team considered Yocto Linux* as well as the Intel® IoT Gateway Software Suite. Yocto Linux supports the project's ideal of using free open-source software (FOSS), and it offers a high degree of flexibility, robust control over the source code, and the ability to create a custom lightweight embedded OS tailored to the needs of the system. The Intel IoT Gateway Software Suite, on the other hand, provides an out-of-the-box implementation with no customization required. The team identified that out-of-the-box simplicity as a best practice for prototype development, and so the Intel IoT Gateway Software Suite was chosen as the OS for the prototype.

The following applications were identified to be developed as part of the solution:

  • Control application will run on the vending machine itself, gathering data from sensors and handling operation of the electromechanical aspects of the solution (e.g., turning the vending coils) as well as data exchange with both human users (e.g., customers and administrators) and with the cloud.
  • Administration application will operate on a PC or tablet and allow for a detailed view into the operation of the vending machine, including events, status, and logs, as well as access to the cloud data and analytics. This application will also support regular maintenance.
  • Customer application will operate on a smartphone or other mobile device, enabling a customer to purchase products from the machine. 
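The three applications exchange information through the cloud as events. The following minimal sketch shows what one such event object might look like; the field names and the makeEvent helper are illustrative assumptions, not the project's actual schema.

```javascript
// Hypothetical shape of an event exchanged between the control,
// administration, and customer applications via the cloud.
// All field names here are illustrative, not the project's schema.
function makeEvent(type, payload) {
  return {
    type,                          // e.g. "dispense", "temperature_alert"
    machineId: "vend-001",         // identifies the machine within a fleet
    timestamp: new Date().toISOString(),
    payload,                       // event-specific data
  };
}

// A customer purchase might produce a "dispense" event for the machine:
const dispense = makeEvent("dispense", { tray: 2, product: "t-shirt" });
console.log(dispense.type, dispense.payload.tray);
```

JSON-style events like this keep the three applications loosely coupled: each one only needs to agree on the event vocabulary, not on any shared code.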

Phase 3: Building and refining the proof-of-concept prototype

Using the Intel NUC Kit DE3815TYKHE, the Arduino 101 board, and the Grove Starter Kit Plus IoT Edition, the team developed the proof-of-concept prototype illustrated in Figure 3 to simulate a simple vending machine that dispenses two products. It includes a 2x16-character LCD display that shows product names and price information, two product-selection buttons, a stepper motor to dispense products, and two LEDs (green and red) that show the machine status. It also includes a temperature sensor and a “fault detection” button. Once the buy button in the customer application is pressed, the product is dispensed; for simplicity, payment-processing hardware was left out of the prototype.

Figure 3. Intelligent vending machine proof of concept prototype.

Prototype Hardware Implementation

The bill of materials for the prototype is summarized in Table 2.

Table 2. Intelligent vending machine prototype components.

 

Base System:

  • Intel® NUC Kit DE3815TYKHE: http://www.intel.com/content/www/us/en/support/boards-and-kits/intel-nuc-kits/intel-nuc-kit-de3815tykhe.html
  • Arduino* 101 Board: https://www.arduino.cc/en/Main/ArduinoBoard101
  • USB Type A to Type B Cable: for connecting the Arduino 101 board to the NUC

Components from Grove* Starter Kit Plus IoT Edition:

  • Base Shield V2: http://www.seeedstudio.com/depot/Base-Shield-V2-p-1378.html
  • Gear Stepper Motor with Driver: http://www.seeedstudio.com/depot/Gear-Stepper-Motor-with-Driver-p-1685.html
  • Button Module: http://www.seeedstudio.com/depot/Grove-Button-p-766.html
  • Temperature Sensor Module: http://www.seeedstudio.com/depot/Grove-Temperature-Sensor-p-774.html
  • Green LED: http://www.seeedstudio.com/depot/Grove-Green-LED-p1144.html
  • Red LED: http://www.seeedstudio.com/depot/Grove-Red-LED-p-1142.html
  • LCD with RGB Backlight Module: http://www.seeedstudio.com/depot/Grove-LCD-RGB-Backlight-p-1643.html
  • Touch Sensor: http://seeedstudio.com/depot/Grove-Touch-Sensor-p-747.html

Prototype Software Implementation

The control application used in the proof-of-concept prototype was written in C++. It also uses a Node.js component to access the Azure cloud, which is used to exchange events with the mobile and administration applications. Such events include, for example, temperature alerts and product-dispense requests. The mobile application was written in JavaScript* for use in a web browser, avoiding the need to port the application to multiple smartphone platforms.

The development environment used to develop the software for this solution was Intel® System Studio, a plug-in for the Eclipse* IDE that facilitates connecting to the NUC and developing applications in C/C++.

In addition, the development of this solution used Libmraa, a C/C++ library that allows for direct access to I/O on the NUC, as well as Firmata, which allows for programmatic interaction with the Arduino development environment, taking advantage of Arduino’s hardware abstraction capabilities. Abstracting Firmata using Libmraa enables greater programmatic control of I/O on the NUC, simplifying the process of gathering data from sensors. UPM provides the specific function calls that are used to access sensors.
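At its core, the control application runs a poll-and-report loop over the sensors. The sketch below shows that pattern in plain JavaScript with the hardware read stubbed out; in the real project the reading would come from a UPM sensor call, and readTemperature, RANGE, and the event names here are assumptions for illustration.

```javascript
// Allowed internal temperature range in degrees C (illustrative values).
const RANGE = { min: 2, max: 8 };

// Poll one reading and report an event describing it. The
// readTemperature argument is a stand-in for the real UPM call.
function checkTemperature(readTemperature, onEvent) {
  const t = readTemperature();
  if (t < RANGE.min || t > RANGE.max) {
    onEvent({ type: "temperature_out_of_range", value: t });
  } else {
    onEvent({ type: "temperature_ok", value: t });
  }
  return t;
}

// Example with a fake sensor that reports 10 degrees C:
checkTemperature(() => 10, (e) => console.log(e.type, e.value));
```

In the control application, a check like this would run on a timer, with the resulting events written to the local events store for later reporting.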

Phase 4: Producing a Solid Beta Version

With a proof of concept prototype up and running, the team turned its attention to creating the production version of the intelligent vending machine. The solution as a whole was conceived as including the following main parts:

  • Vending machine, which dispenses products and communicates data back through the gateway. This part of the solution is complex and custom-built, including a variety of sensors and related components.
  • Gateway, which is purchased as a commercial product based on Intel architecture and implemented using custom-developed software.
  • Administration and Customer application, implemented in JavaScript, which is used to control the solution as a whole and to generate and access cloud-based analytics.
  • Cloud analytics, based on Microsoft Azure, which allow for the development of insights to improve business processes, based on usage data from the vending machine over time.

Selecting Vending Machine Components

An early effort in the completion phases of the project involved selecting the specific components that would make up the final solution in production.

Vending Machine Device Procurement

Whereas the team had elected to create the proof-of-concept prototype as a board-level simulation of a vending machine, the production version was to be an actual, functioning vending machine. The team investigated having a custom machine purpose-built, as well as purchasing a used machine that could be retrofitted for the purposes of this project. Ultimately, a custom machine was selected in order to support selling the widest possible range of products. The initial specification for the custom vending machine and a picture of the machine itself at an early stage of fabrication are shown in Figure 4.


Vending Machine Model Specification

Custom tabletop vending machine for a variety of small products packed in boxes, blister packs, or bags. Each of three coil-driven vend trays will be configured for a different size product:

  • 3” close pitch coil for 12-14 small blister packs
  • 4” medium pitch coil for 9-12 medium size boxes or packages
  • large 5-6” coil for 6-8 larger packages such as t-shirts

The coils will be driven by one stepper motor per coil and will drop product into a single-wide tray at the bottom of the machine. There will be a plexiglass window to view vend selections and optional cutouts for Intel's choice of flatscreen and/or keypad.

Machine body will be powder-coated steel, trays will be aluminum, and vend coils will be plated steel. Front of machine will open to refill product, and rear will open to install and service vending mechanism. 

Approximate dimensions of the machine will be 24-30” deep x 36” tall x 30” wide. The overall target weight is under 70 lbs.

A customer application allows for purchasing products.


Figure 4. Vending machine specification and photo of device during fabrication.

Other key decisions to be made at this stage included the choice of industrial-grade sensors, an Intel® architecture-based commercial gateway, a fully supported production OS, a cloud service for data storage and analytics, and software technologies for the administration and customer application.

Sensors and Related Components Selection

Industrial-grade sensors and related components to replace those from the Grove Starter Kit that were used in the proof of concept prototype are detailed in Table 3.

Table 3. Production intelligent vending machine components.

  • Vending Machine Model: custom-fabricated, including:

      • Chassis with a hinged front door and removable back panel
      • Removable tray with three coils for dispensing products
      • Three stepper motors (one per coil), each equipped with switches for sensing a full coil rotation
      • Removable tray for electronic parts

  • Dell Wyse* IoT Gateway: https://iotsolutionsalliance.intel.com/solutions-directory/dell-iseries-wyse-3290
  • USB Type A to Micro-USB Type B Cable: connects the I2C/GPIO controller to the gateway
  • 12V 5A Power Supply: for the stepper motor driver board
  • UMFT4222EV USB to I2C/GPIO Controller: http://www.mouser.com/new/ftdi/ftdiumft4222ev/
  • PCA9555-Based GPIO Expander: http://www.elecfreaks.com/store/iic-gpio-module-p-692.html
  • SparkFun Quadstepper Motor Driver Board: https://www.sparkfun.com/products/retired/10507
  • AM2315 Temperature and Humidity Sensor: https://www.adafruit.com/product/1293
  • Grove LCD with RGB Backlight Module: http://www.seeedstudio.com/depot/Grove-LCDRGB-Backlight-p-1643.html
  • Red LED Panel Mount Indicator: http://www.mouser.com/ProductDetail/VCC/CNX714C200FVW
  • White LED Panel Mount Indicator: http://www.mouser.com/ProductDetail/VCC/CNX714C900FVW

Gateway Selection

Factors in selecting the gateway to be used in the product version of the intelligent vending machine included the following:

  • Robust compute resources to ensure smooth performance without errors due to bogging down during operation, particularly considering the need for communication with the cloud as part of normal usage.
  • Ready commercial availability was clearly needed so the project could proceed on schedule. While some members of the team expressed preference for the Vantron VT-M2M-QK gateway, difficulty in obtaining that device in a timely manner disqualified it from use in the project.

Ultimately, the Dell iSeries Wyse 3290 IoT Gateway, specifications of which are summarized in Table 4, was chosen for implementation in the product phase of this project. That gateway provides the needed performance for present and foreseeable functionality, as well as ready availability (potentially in large quantities) for hypothetical distribution of the vending machine as a commercial product.

Table 4. Gateway specifications for intelligent vending machine product phase.

 

Dell iSeries Wyse* 3290 IoT Gateway:

  • Processor: Intel® Celeron® processor N2807 (1 MB cache, up to 2.16 GHz)
  • Memory: 4 GB DDR3 RAM, 1600 MHz
  • Networking: LAN: 1 x 10/100/1000 BASE-T; WLAN: 802.11a/b/g/n/ac; PAN: Bluetooth* 4.0 Low Energy
  • Dimensions: 69 mm x 197.5 mm x 117 mm
  • Weight: 2.34 kg

Continuing to use Intel IoT Gateway Software Suite (which the prototype was already based on) was a straightforward decision, particularly because the gateway is pre-validated for that OS. Moreover, Intel NUCs and gateways can both run Intel IoT Gateway Software Suite, simplifying the process of porting software elements from the prototype to the product version of the intelligent vending machine model. Likewise, the other core software components such as Intel System Studio and the libraries used in the prototype were held constant to simplify the transition to the product phase.

Online Operation

The system includes the software running on the IoT gateway, the Azure cloud, and a server-side application, as illustrated in Figure 5.

Figure 5. Intelligent vending machine topology: online operation.

 

IoT Gateway Software Implementation

The IoT gateway software consists of three parts:

  • Control application is implemented in C++ using the IoT Developer Kit libraries libmraa and libupm; it performs the following tasks:

      • Checks for mechanical failures and reports failure/no-failure events to the local database.
      • Monitors for temperature fluctuations outside the allowed range, reporting events when the temperature goes out of and returns to that range.
      • Checks for product-selection events generated by the customer application, which sends a “dispense” event to the machine through the cloud.

  • Local DB is used for inter-process communication between the control application and the DB daemon. The local SQLite database uses the file $HOME/Vending_Prototype/events.sqlite3, which contains the “events” table with the events to be reported to the cloud. The events table is replicated both ways, to and from the machine.
  • DB daemon is implemented using Node.js; it sends reported events bi-directionally between the local database and the cloud.
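The daemon's local-to-cloud direction can be sketched as follows; the SQLite table is replaced by an in-memory array and the cloud call by a stub, so only the replication pattern itself is shown (field names are illustrative).

```javascript
// Stand-in for unreported rows from the local "events" table.
const localEvents = [
  { id: 1, type: "dispense", reported: false },
  { id: 2, type: "coil_jam", reported: false },
];

// Push each unreported event to the cloud and mark it reported so it
// is not sent again. Returns the number of events pushed.
function syncToCloud(events, sendToCloud) {
  let pushed = 0;
  for (const e of events) {
    if (!e.reported) {
      sendToCloud(e);     // in the real daemon, an Azure call
      e.reported = true;
      pushed++;
    }
  }
  return pushed;
}

console.log("pushed:", syncToCloud(localEvents, () => {}));
```

The cloud-to-machine direction works the same way in reverse, delivering events such as “dispense” requests into the local table for the control application to act on.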

Azure Cloud Implementation

The Azure cloud maintains information about product inventory for the intelligent vending machine, keeps track of the events received from the customer app and the vending machine, and provides functionality to analyze this data and trigger responses to various conditions (e.g., low inventory or mechanical failure). The primary cloud analytics functions are as follows:

  • If a product is out of stock, that information is sent to the cloud and an alert displays in the admin app for the user. 
  • If the vending machine internal temperature reaches above or below a preset threshold, that information is sent to the cloud for analysis. An alert displays in the admin app for the user. 
  • If any of the three vending machine coils functions improperly, that information is sent to the cloud for analysis. An alert displays in the admin app for the user. 
  • If the vending machine tray is pulled out, a red “Machine Opened” status is shown on the LCD and LED. Once the tray is pushed back in, a green “Machine is ready” status is shown.

The admin app provides views for home, setup, log history, inventory status, and alert details.

Phase 5: Finalizing Design and Releasing to Production

The project team tasked with developing this solution was engineering-centric, and so producing first-rate UIs for the final product was somewhat outside the team’s core competency. Therefore, the team engaged an external resource on a contract basis for that purpose. The UI provider participated in regular team meetings and dedicated meetings with the core software-development team.

During those discussions, the UIs were refined to incorporate additional capabilities and functionality. For example, the team added color coding and options for switching between Fahrenheit and Celsius temperatures to the administration application UI. Functionality was added to the customer application UI asking users to verify that they wish to make the purchase before the transaction is made, along with other minor refinements.

Admin Application 

The administration application UI, shown in Figure 6, is designed to operate on a tablet computer and to provide administrative functionality on the intelligent vending machine. 

Figure 6. Intelligent vending machine administration application UI.

The administration application UI incorporates the following primary elements:

  1. Menu system contains a “home” button to go to the home screen (shown in the figure), an “About” button that opens a screen with information about the software, a “Setup” button that provides hardware-setup details (including placement and connectivity of sensors), a “Log” button to access an events log that tracks purchases, alerts, and maintenance, and an “Alert” button that provides information about active maintenance alerts, including the type and time of occurrence for each alert.
  2. Inventory panel reflects inventory levels that are set within the cloud, using color coding to indicate those levels: dark blue for levels above two thirds of capacity, lighter blue for levels one third to two thirds, and orange for levels below one third. Clicking on the panel generates a detailed inventory window that displays exact inventory quantities, which tray the item is in, and price for each item.
  3. Temperature module is a dual-threshold radial temperature graph, with display of the machine’s current internal temperature selectable as Fahrenheit or Celsius. The white bar represents the acceptable temperature range; if the temperature goes outside that range, the system generates an alert. The software polls the temperature and updates the UI every few seconds.
  4. Coil status module reports on the status of the vending coils and motors, indicating if there is any malfunction, such as a jam or electrical failure.
  5. Vending unit module provides visual information about the presence and location of error conditions, as well as the door open/closed status.
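The inventory panel's color coding described above reduces to a small threshold function. This sketch uses assumed color names, which may differ from the actual UI palette.

```javascript
// Map an inventory level to the panel color described in the text:
// above two thirds of capacity, between one third and two thirds,
// and below one third. Color names are illustrative.
function inventoryColor(quantity, capacity) {
  const level = quantity / capacity;
  if (level > 2 / 3) return "dark-blue";
  if (level >= 1 / 3) return "light-blue";
  return "orange";
}

console.log(inventoryColor(10, 12)); // well stocked
console.log(inventoryColor(2, 12));  // needs replenishing
```

Computing the fraction rather than comparing raw counts lets the same rule serve trays of different capacities.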

Customer Application 

The customer application, shown in Figure 7, is designed to operate on a mobile device, allowing customers to interact with the vending machine in order to make a purchase. 

Figure 7. Intelligent vending machine customer application.

The customer application incorporates the following primary elements:

  • Status pane indicates whether the machine is ready for an order, and it also functions as a shopping cart, displaying the list of items the user has selected, pending sale. When items are added to the shopping cart, a “Buy” button appears that indicates the sale total; clicking that button completes the purchase by sending the order information to the cloud and, upon receipt of confirmation from the cloud, dispensing the item and updating the inventory count.
  • Ordering pane contains a selection button for each product in the machine; when clicked, the button adds the item to the shopping cart list in the status pane. Each product button is accompanied by fields that display the amount of inventory in stock as well as the price of the item.

Completed Production Vending Machine

The assembled intelligent vending machine, with the gateway, sensors, and other components installed, is shown in Figure 9.

Figure 9. Fully assembled intelligent vending machine.

 

Phase 6: Evaluating Functionality and Adding Features

Once the actual product version of the vending machine was operational, several team members began to identify possible future functionality that could be built into the product.

Enhancing Cloud Analytics

The team identified the opportunity to enhance cloud-analytics functionality using the Microsoft Power BI service, a cloud-hosted business intelligence and analytics service integrated with Microsoft Azure, along with Power BI Desktop. These capabilities offer data-visualization enhancements for the intelligent vending solution.

Enhancing Event Notification Data Flows

During the evaluation phase, the team identified possibilities to automate certain aspects of the machine’s operation using event notifications in conjunction with Azure analytics. Specifically, future enhancements based on the following data flows were identified:

  • Inventory. If the quantity of a product drops to two units, a future enhancement could cause a notification to be sent to the cloud for analysis, and an alert could be sent to the administration application as a notification to reorder stock. This sequence could be repeated if the quantity drops to zero, and notification could also be sent to the machine display and the mobile application, indicating that the item is out of stock.
  • Maintenance. If the machine malfunctions (e.g., the coil fails to make a full turn, the temperature goes outside preset limits, etc.), a future enhancement could cause a notification to be sent to the cloud for analysis, and service personnel could be notified. An alert could also be sent to the administration application to monitor the status of the service call.
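The proposed inventory flow amounts to simple threshold rules evaluated as events arrive. In this sketch the recipients and message strings are assumptions for illustration, not the project's actual notification targets.

```javascript
// Decide who should be notified for a given remaining quantity:
// at two units or fewer, alert the administration app to reorder;
// at zero, additionally tell the machine display and the customer
// app that the item is out of stock. Names are illustrative.
function inventoryNotifications(quantity) {
  const notices = [];
  if (quantity <= 2) notices.push({ to: "admin", msg: "reorder stock" });
  if (quantity === 0) {
    notices.push({ to: "display", msg: "out of stock" });
    notices.push({ to: "customer-app", msg: "out of stock" });
  }
  return notices;
}

console.log(inventoryNotifications(2)); // reorder alert only
```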

Conclusion

Tracing the path to product during the development of the intelligent vending machine is intended as a pattern for teams to consider as they build their own solutions. Beginning with an ideation phase and rapid prototyping on low-cost equipment and a simplified physical model allows projects to take off quickly. Decisions can therefore be made early about the potential viability of the project, when a relatively small investment in time and money has been made.

This project also suggests a model for thinking about cloud analytics in IoT solutions. Rather than focusing just on opportunities for big-data insights, this implementation reveals how the cloud can function foremost as a communication nexus and centralized data store. At the same time, the cloud data provides substantial opportunities for generating business intelligence to optimize supply chains, increase maintenance efficiency, and enhance profitability.

More Information


