
Unreal Engine* 4: Blueprint CPU Optimizations for Cloth Simulations


Download [PDF 838 KB]

Download the Code Sample


Cloth Simulations

Realistic cloth movement can bring a great amount of visual immersion into a game. Using PhysX* Clothing* is one way to achieve this without the need for hand animation. Incorporating these simulations into Unreal Engine* 4 is easy, but because they tax the CPU, it is good to understand their performance characteristics and how to optimize them.

Disabling Cloth Simulation

Cloth simulations in Unreal run whenever the cloth is in the level, whether it can be seen or not, so optimization is needed to prevent this wasted work. Do not rely on the Disable Cloth setting for optimizing simulated cloth; it only works in the construction script and has no effect while the game is in play.

Unreal Physics Stats

To get a better understanding of cloth simulation and its effect on a game and system, we can use the console command Stat PHYSICS in Unreal.

After entering Stat PHYSICS at the command line, the physics table overlay appears (Figure 1). To remove it, just enter the same command into the console.


Figure 1. Physics overlay table.

While there is a lot of information available, for the purposes of this paper we need only worry about the first two entries: Cloth Total and Cloth Sim.

Cloth Total represents the total number of cloth draws within the scene, and Cloth Sim (simulation) represents the number of active cloth meshes currently simulated. Keeping these two numbers within a reasonable level to your target platform helps prevent a loss of frame rate due to the CPU being loaded down with processing cloth. By adding an increasing number of cloth meshes to the level, the number of simulations the CPU can handle at once becomes apparent.

Level of Detail

When creating a skeletal mesh and attaching an apex cloth file to it, the cloth simulation is always tied to Level of Detail (LOD) 0 of that mesh. If the mesh is ever switched off of LOD 0, the cloth simulation no longer takes place. Using this to our advantage, we can create a LOD 1 that is identical to LOD 0 in every way (minus the apex cloth file), and use it as a switch for whether we want the cloth simulation (Figure 2).


Figure 2. Level of Detail information.

Boolean Switch

Now that we have a switch, we can set up a simple blueprint to control it. By creating an event (or function), we can branch with a Boolean switch between simulating the cloth (LOD 0) and not simulating it (LOD 1). This event could be called when a trigger is entered, to begin simulating the cloth meshes in the next area, and again when the player leaves that area, to stop those simulations; any number of methods could work, depending on the game level.


Figure 3. Switch blueprint.

Occlusion Culling Switch

If a more automated approach is desired, Occlusion Culling can be used as the switching variable. To do this, call the “Was Recently Rendered” function, and attach its return to the switch branch (Figure 4). This will stop the cloth simulation when the actor is no longer rendered.


Figure 4. The ”Was Recently Rendered” function in the switch blueprint.

The problem with this method is the simulation reset that occurs when the simulation is switched back on. If the cloth mesh looks drastically different while simulated, the player will always see this transition. To reduce the chance of this happening, the bounds of the mesh can be increased in the import settings. However, this also means intentionally rendering objects that cannot be seen by the player, so make sure it is worthwhile in terms of the game's rendering demands.

A level design approach to solving this issue would include making sure all dynamically capable cloth meshes (such as flags) are placed in the same direction as the wind.

It may be possible to program a method in C++ that saves the position of every vertex of the cloth simulation and translates the mesh back into that position when the simulation is turned back on. That could be a very taxing method, depending on the data structure used and the number of cloth simulations in the level.


Figure 5. Cloth Simulations without Occlusion Culling switch.


Figure 6. Cloth Simulations with Occlusion Culling switch.

Combination/Set Piece Switch

If the level happens to have a very dynamic set piece that is important enough to always look its best, an additional branch that uses a Boolean switch can be attached to the actor; in Figure 7 we call it "Optimize Cloth?".


Figure 7. Set Piece switch.

With this new switch, importance can be given to certain cloth meshes that should always be simulated by switching their “Optimize Cloth?” value to false.

Using a Set Piece Switch

In Figure 8 below, three cloth meshes are flags that turn away and point backwards relative to their starting position. It takes a few seconds for this motion to look natural, but because these flags really sell the fact that they are not hand animated, I set them as set pieces (Optimize Cloth? = false) so they are always simulated.


Figure 8. Complex flags used with set piece switches.


Building an Arcade Cabinet with Skull Canyon


Hi, I’m Bela Messex, one half of Buddy System, a bedroom studio based in Los Angeles and maker of the game Little Bug.

Why an Arcade Cabinet?

My co-developer and I come from worlds where DIY wasn’t a marketable aesthetic, but a natural and necessary creative path. Before we met and found ourselves in video game design, we made interactive sculpture, zines, and comics. We’ve long been interested in ways to blend digital games with physical interaction, and while this can take many forms, a straightforward route was to house our debut game, Little Bug, in a custom arcade cabinet. As it turns out, doing so was painless, fun, and easy; and at events like Fantastic Arcade and IndieCade, it provided a unique interaction that really drew attendees.

The Plan

To start off, I rendered a design in Unity complete with Image Effects, Animations and completely unrealistic lighting… If only real life were like video games, but at least I now had a direction.

The Components

This worked for us and could be a good starting point for you, but you might want to tailor it to your game’s unique needs.

  • Intel NUC Skull Canyon.
  • 2 arcade joysticks.
  • 3 arcade buttons.
  • 2 generic PC joystick boards with wires included.
  • 4’ x 8’ MDF panel.
  • 24” monitor.
  • 8” LED accent light.
  • Power Strip.
  • Power Drill.
  • Nail gun and wood glue.
  • Screws of varying sizes and springs.
  • 6” piano hinge.
  • Velcro strips.
  • Zip ties.
  • Black spray paint and multicolored paint markers.
  • Semi opaque plexi.

Building the Cabinet

When I was making sculptures, I mainly welded, so I asked my friend Paul for some help measuring and cutting the MDF panels. We did this by designing our shapes on the spot with a jigsaw, pencil, and basic drafting tools. Here is Paul in his warehouse studio with the soon to be cabinet.

We attached the cut pieces with glue and a nail gun, but you could use screws if you need a little more strength. Notice the hinge in the front; this was Paul’s idea and ended up being a lifesaver later on when I needed to install the buttons and joysticks. Next to the paint can is a foot pedal we made specifically for Little Bug’s unique controls: two joysticks and a button used simultaneously. On a gamepad this dual-stick setup is no problem, but translated to two full-sized arcade joysticks, both hands are occupied, so how do you press that button? Solution: use your foot!

After painting the completed frame, it was time for the fun part: installing the electronics. I used a cheap ($15) kit that includes six buttons, a joystick, a USB controller board, and all the wiring. After hundreds of plays, it’s all still working great. Notice the LED above the screen that lights up the marquee for a classic arcade feel.

Once the NUC was installed in the back via velcro strips, I synced the buttons and joysticks inside the Unity inspector and created a new build specifically designed for the cabinet. Little Bug features hand drawn sprites, so we drew on all of the exterior designs with paint markers to keep that look coherent. The Marquee was made by stenciling painter’s tape with spray paint.

The Joy of Arcade

There is really nothing like watching players interact with a game you’ve made. Even though Little Bug itself is the same, the interaction is now fundamentally different, and as game designers it has been mesmerizing to watch people play it in this new way. The compact size and performance of the NUC was perfect for creating experiences like this, and it’s worked so well I’m already drawing up plans for more games in the same vein.

Thermal Management Overview for Intel® Joule™ Developer Kit

Intel Unite Use Case Guide for Audio/Video Conferencing Plugins


Intel Unite is a solution designed to simplify user connections within the conference room. Because Intel Unite technology does not include integrated video capture or audio, the expectation is for cloud audio/visual collaboration companies to provide those services, both inside and outside the corporate firewall.

With Intel Unite, the collaboration room becomes more user friendly: it is a dongle-free environment in which users connect to the in-room screen over the corporate Wi-Fi network. Individual connections are made using the 6-digit PIN code that the Intel Unite hub displays on the in-room screen.

Audio/Video Conferencing Plugins

The Intel Unite plugin comes into play for a cloud-based audio/video conferencing solution by providing a dongle-free environment within the collaboration room, while the conferencing solution provides screen sharing outside the conference room. This effectively makes the hub a meeting participant that can share its own screen or view content shared by participants outside the collaboration room. The plugin allows the Intel Unite hub application to control the A/V collaboration tool's features, such as session connect/disconnect, volume control, and sharing controls.

Suggested Integration

Meeting Invite Handling

How the collaboration software is set up will ultimately determine how meeting invites are handled. The actual connection to the meeting can be automated or manual.

Invite handling should be as simple as possible, such as sending a meeting notice to the conference room PC and allowing that PC to connect to the collaboration session as required. This can be an app-based method, or an email/calendar-based method such as the one the Skype* for Business plugin uses.

Sharing content in Unite and AV conferencing session

Participants joining the Unite session with the 6-digit PIN will be seen by the AV conferencing session as a single participant. If someone shares to the Unite hub, the hub will share that content out to the AV solution.

AV Control Requirements for plugin

Plugin requirements will include (if applicable) controls for existing room devices, such as:

  • Room Camera control off/on
  • Room microphones mute
  • Room speakers mute
  • Session Disconnect
  • Auto disconnect from AV Session when last Unite User disconnects
  • Audio Controls +, -, mute
  • Sharing Controls
  • Any Optional AV Control that may be needed

Reactively join conference room to an ad-hoc AV conferencing session (reactive action)

  • Ability to "answer" in the room from a connected, Unite-enabled client

Toast behavior

  • View toast of upcoming meetings scheduled for the hub on the room display
  • Indicate when someone has joined or left the AV conferencing session (not the Intel Unite session)

Custom screen layouts for multi-screen rooms

  • Toggle different visual layouts on displays
  • Camera feed(s) on one display, content sharing on another
  • Different camera feeds on different displays, content sharing on another
  • Mirrored displays: camera feed(s) + content on each display

Support for hardware button behavior

  • Ability to work with speaker/mic/hub hardware on the meeting room table: support 'Answer' and 'Hang up' button actions to join and end scheduled meetings with a button press, where the room hub/PC is an invited resource.

Summary

Intel Unite allows for a dongle-free environment for collaboration room sharing, while the cloud A/V collaboration tool provides the audio/video conferencing services beyond the room.

Improving the Performance of Principal Component Analysis with Intel® Data Analytics Acceleration Library


Have you ever tried to access a website and had to wait a long time before you could access it or not been able to access it at all? If so, that website might be falling victim to what is called a Denial of Service1 (DoS) attack. DoS attacks occur when an attacker floods a network with information like spam emails, causing the network to be so busy handling that information that it is unable to handle requests from other users.

To prevent a spam email DoS attack, a network needs to be able to identify spam ("garbage") emails and filter them out. One way to do this is to compare an email's pattern with those in a library of email spam signatures. Incoming patterns that match those in the library are labeled as attacks. Since spam emails come in many shapes and forms, there is no way to build a library that stores every possible pattern. To increase the chance of identifying spam emails, there needs to be a method of restructuring the data in a way that makes it simpler to analyze.

This article discusses an unsupervised2 machine-learning3 algorithm called principal component analysis4 (PCA) that can be used to simplify the data. It also describes how Intel® Data Analytics Acceleration Library (Intel® DAAL)5 helps optimize this algorithm to improve the performance when running it on systems equipped with Intel® Xeon® processors.

What is Principal Component Analysis?

PCA is a popular data analysis method. It is used to reduce the complexity of the data without losing its important properties, making the data easier to visualize and analyze. Reducing the complexity of the data means reducing the original dimensions to fewer dimensions while preserving the important features of the original datasets. PCA is normally used as a preprocessing step for machine-learning algorithms like K-means6, resulting in simpler modeling and thus better performance.

Figures 1–3 illustrate how the PCA algorithm works. To simplify the problem, let’s limit the scope to two-dimensional space.


Figure 1. Original dataset layout.

Figure 1 shows the objects of the dataset. We want to find the direction where the variance is maximal.


Figure 2. The mean and the direction with maximum variance.

Figure 2 shows the mean of the dataset and the direction with maximum variance. The direction with the maximal variance is called the first principal component.


Figure 3. Finding the next principal component.

Figure 3 shows the next principal component. The next principal component is the direction where the variance is the second largest. Note that the second direction is orthogonal to the first.

Figures 4–6 show how the PCA algorithm is used to reduce the dimensions.


Figure 4. Re-orientating the graph.

Figure 4 shows the new graph after rotating it so that the axis (P1) corresponding to the first principal component becomes a horizontal axis.


Figure 5. Projecting the objects to the P1 axis.

Figure 5 shows the objects projected onto the P1 axis.


Figure 6. Reducing from two dimensions to one dimension.

Figure 6 shows the effect of using PCA to reduce from two dimensions (P1 and P2) to one dimension (P1) based on the maximal variance. Similarly, the same concept is applied to multi-dimensional datasets to reduce their dimensions while still maintaining much of their characteristics, by dropping the dimensions with lower variances.
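The geometric steps in Figures 1–6 can be sketched in NumPy: eigendecompose the covariance matrix of a centered two-dimensional dataset, check that the two principal directions are orthogonal, and project onto the first one. This is a minimal illustration of the idea with a made-up dataset, not the Intel DAAL API:

```python
import numpy as np

# A made-up 2-D dataset with strongly correlated coordinates
rng = np.random.default_rng(0)
x = rng.normal(size=200)
data = np.column_stack([x, 0.8 * x + 0.1 * rng.normal(size=200)])

# Center the data, then eigendecompose its covariance matrix
centered = data - data.mean(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(centered, rowvar=False))

pc1 = eigvecs[:, -1]  # direction of maximal variance (first principal component)
pc2 = eigvecs[:, -2]  # second principal component, orthogonal to the first
print(abs(pc1 @ pc2) < 1e-9)   # True: the directions are orthogonal

# Reduce from two dimensions (P1 and P2) to one (P1) by projection
reduced = centered @ pc1
print(reduced.shape)           # (200,)

# The dropped direction carried little of the variance
print(eigvals[-1] / eigvals.sum() > 0.9)  # True
```

Note that np.linalg.eigh returns eigenvalues in ascending order, so the last column of eigvecs corresponds to the direction of maximal variance.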

Information about PCA mathematical representation can be found at references 7 and 8.

Applications of PCA

PCA applications include the following:

  • Detecting DoS and network probe attacks
  • Image compression
  • Pattern recognition
  • Analyzing medical imaging

Pros and Cons of PCA

The following lists some of the advantages and disadvantages of PCA.

  • Pros
    • Fast algorithm
    • Shows the maximal variance of the data
    • Reduces the dimension of the origin data
    • Removes noise.
  • Cons
    • Non-linear structure is hard to model with PCA

Intel® Data Analytics Acceleration Library

Intel DAAL is a library consisting of many basic building blocks that are optimized for data analytics and machine learning. These basic building blocks are highly optimized for the latest features of the latest Intel® processors. More about Intel DAAL can be found at reference 5.

The next section shows how to use PCA with PyDAAL, the Python* API of Intel DAAL. To install PyDAAL, follow the instructions in reference 9.

Using the PCA Algorithm in Intel Data Analytics Acceleration Library

To invoke the PCA algorithm in Python10 using Intel DAAL, do the following steps:

  1. Import the necessary packages using the commands from and import
    1. Import the necessary functions for loading the data by issuing the following command:
      from daal.data_management import HomogenNumericTable
    2. Import the PCA algorithm using the following command:
      import daal.algorithms.pca as pca
    3. Import numpy for calculation:
      import numpy as np
  2. Import the createSparseTable function to create a numeric table to store input data read from a file.
    from utils import createSparseTable
  3. Load the data into the data set object declared above.
    dataTable = createSparseTable(dataFileName)
    where dataFileName is the name of the input .csv data file.
  4. Create an algorithm object for PCA using the correlation method.
    pca_alg = pca.Batch_Float64CorrelationDense()
    Note: to use the SVD (singular value decomposition) method instead, create the algorithm object as follows (keeping the pca_alg name, so that the pca module is not shadowed):
    pca_alg = pca.Batch_Float64SvdDense()
  5. Set the input for the algorithm.
    pca_alg.input.setDataset(pca.data, dataTable)
  6. Compute the results.
    result = pca_alg.compute()
    The results can be retrieved using the following commands:
    result.get(pca.eigenvalues)
    result.get(pca.eigenvectors)
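The steps above require a PyDAAL installation. As a rough cross-check of what the correlation method computes, the same quantities (eigenvalues and eigenvectors of the correlation matrix, sorted by decreasing variance) can be obtained in plain NumPy. This is an illustrative equivalent with made-up data, not the Intel DAAL API:

```python
import numpy as np

def pca_correlation(data):
    """PCA by the correlation method: eigendecompose the correlation
    matrix of the dataset and sort by decreasing eigenvalue."""
    corr = np.corrcoef(data, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(corr)   # ascending order
    order = np.argsort(eigvals)[::-1]         # largest variance first
    return eigvals[order], eigvecs[:, order]

# Made-up dataset: 500 observations of 3 variables, two of them correlated
rng = np.random.default_rng(42)
data = rng.normal(size=(500, 3))
data[:, 2] = data[:, 0] + 0.01 * data[:, 2]

eigenvalues, eigenvectors = pca_correlation(data)
# The eigenvalues of a correlation matrix sum to the number of variables
print(round(eigenvalues.sum(), 6))  # 3.0
```

Because two of the three columns are nearly identical, the first eigenvalue dominates, which is exactly the situation in which dropping the low-variance components loses little information.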

Conclusion

PCA is one of the simplest unsupervised machine-learning algorithms that is used to reduce the dimensions of a dataset. Intel DAAL contains an optimized version of the PCA algorithm. With Intel DAAL, you don’t have to worry about whether your applications will run well on systems equipped with future generations of Intel Xeon processors. Intel DAAL will automatically take advantage of new features in new Intel Xeon processors. All you need to do is link your applications to the latest version of Intel DAAL.

References

1. Denial of service attacks

2. Unsupervised learning

3. Wikipedia – machine learning

4. Principal component analysis

5. Introduction to Intel DAAL

6. K-means algorithm

7. Principal component analysis for machine learning

8. Principal component analysis tutorial

9. How to install Intel’s distribution for Python

10. Python website

Intel® XDK FAQs - Debug & Test


Why is the Debug tab being deprecated and removed from the Intel XDK?

The Debug tab is being retired because, as previously announced and noted in the release notes, future editions of the Intel XDK will focus on the development of IoT (Internet of Things) apps and IoT mobile companion apps. Since we introduced the Intel XDK IoT Edition in September of 2014, the need for accessible IoT app development tools has increased dramatically. At the same time, HTML5 mobile app development tools have matured significantly. Given the maturity of the free open-source HTML5 mobile app development tools, we feel you are best served by using those tools directly.

Similar reasoning applies to the hosted weinre server (on the Test tab) and the Live Development Pane on the Develop tab.

How do I do "rapid debugging" with remote CDT (or weinre or remote Web Inspector) in a built app?

Attempting to debug a built mobile app (with weinre, remote CDT or Safari Web Inspector) seems like a difficult or impossible task. There are, in fact, many things you can do with a built app that do not require rebuilding and reinstalling your app between each source code change.

You can continue to use the Simulate tab for debugging that does not depend on third-party plugins. Then switch to debugging a built app when you need to deal with third-party plugin issues that cannot be resolved using the Simulate tab. The best place to start is with a built Android app installed directly on-device, which provides full JavaScript and CSS debugging, by way of remote Chrome* DevTools*. For those who have access to a Mac, it is also possible to use remote web inspector with Safari to debug a built iOS app. Alternatively, you can use weinre to debug a built app by installing a weinre server directly onto your development system. For additional help on using weinre locally, watch Using WEINRE to Debug an Intel® XDK Cordova* App (beginning at about 14:30 in the video).

The interactive JavaScript console is your "best friend" when debugging with remote CDT, remote Web Inspector or weinre in a built app. Watch this video from ~19:30 for a technique that shows how to modify code during your debug session, without requiring a rebuild and reinstall of your app, via the JavaScript debug console. The video demonstrates this technique using weinre, but the same technique can also be used with a CDT console or a Web Inspector console.

Likewise, use the remote CDT CSS editor to try manipulating CSS rules in order to figure out how best to style your UI. Or, use the Simulate tab or the Brackets* Live Preview feature. The Brackets Live Preview feature utilizes your desktop browser to provide a feature similar to Intel XDK Live Layout Editing. If you use the Google* Chrome browser with Brackets Live Preview you can use the Chrome device emulation feature to simulate a variety of customizable device viewports.

The Intel XDK is not generating a debug module or is not starting my debug module.

There are a variety of things that can go wrong when attempting to use the Debug tab:

  • your test device cannot be seen by the Debug tab
  • the debug module build fails
  • the debug module builds, but fails to install onto your test device
  • the debug module builds and installs, but fails to "auto-start" on your test device
  • your test device has run out of memory or storage and needs to be cleared
  • there is a problem with the adb server on your development system

Other problems may also arise, but the above list represents the most common. Search this FAQ and the forum for solutions to these problems. Also, see the Debug tab documentation for some help with installing and configuring your system to use the adb debug driver with your device.

What are the requirements for Testing on Wi-Fi?

  1. Both Intel XDK and App Preview mobile app must be logged in with the same user credentials.
  2. Both devices must be on the same subnet.

Note: Your computer's Security Settings may be preventing Intel XDK from connecting with devices on your network. Double check your settings for allowing programs through your firewall. At this time, testing on Wi-Fi does not work within virtual machines.

How do I configure app preview to work over Wi-Fi?

  1. Ensure that both Intel XDK and App Preview mobile app are logged in with the same user credentials and are on the same subnet
  2. Launch App Preview on the device
  3. Log into your Intel XDK account
  4. Select "Local Apps" to see a list of all the projects in Intel XDK Projects tab
  5. Select desired app from the list to run over Wi-Fi

Note: Ensure the app source files are referenced from the right source directory. If it isn't, on the Projects Tab, change the 'source' directory so it is the same as the 'project' directory and move everything in the source directory to the project directory. Remove the source directory and try to debug over local Wi-Fi.

How do I clear app preview cache and memory?

[Android*] Simply kill the app running on your device as an Active App on Android* by swiping it away after clicking the "Recent" button in the navigation bar. Alternatively, you can clear data and cache for the app from under Settings App > Apps > ALL > App Preview.

[iOS*] Double-tap the Home button, then swipe the app away.

[Windows*] You can use the Windows* Cache Cleaner app to do so.

What are the Android* devices supported by App Preview?

We officially only support and test Android* 4.x and higher, although you can use Cordova for Android* to build for Android* 2.3 and above. For older Android* devices, you can use the build system to build apps and then install and run them on the device to test. To help in your testing, you can include the weinre script tag from the Test tab in your app before you build your app. After your app starts up, you should see the Test tab console light up when it sees the weinre script tag contact the device (push the "begin debugging on device" button to see the console). Remember to remove the weinre script tag before you build for the store.

What do I do if Intel XDK stops detecting my Android* device?

Conflicts between different versions of adb can cause device detection issues. 

Ensure that all applications, such as Eclipse, Chrome, Firefox, Android Studio and other Android mobile development tools are not running on your workstation. Exit the Intel XDK and kill all adb processes that are running on your workstation. Restart the Intel XDK only after you have killed all instances of adb on your workstation. 

You can scan your disk for copies of adb using the following command lines:

[Linux*/OS X*]:

$ sudo find / -name adb -type f 

[Windows*]:

> cd \
> dir /s adb.exe

For more information regarding Android* USB debug, visit the Intel XDK documentation on debugging and testing.

How do I debug an app that contains third party Cordova plugins?

See the Debug and Test Overview doc page for a more complete overview of your debug options.

When using the Test tab with Intel App Preview your app will not include any third-party plugins, only the "core" Cordova plugins.

The Emulate tab will load the JavaScript layer of your third-party plugins, but does not include a simulation of the native code part of those plugins, so it will present you with a generic "return" dialog box to allow you to execute code associated with third-party plugins.

When debugging Android devices with the Debug tab, the Intel XDK creates a custom debug module that is then loaded onto your USB-connected Android device, allowing you to debug your app AND its third-party Cordova plugins. When using the Debug tab with an iOS device only the "core" Cordova plugins are available in the debug module on your USB-connected iOS device.

If the solutions above do not work for you, then your best bet for debugging an app that contains a third-party plugin is to build it and debug the built app installed and running on your device. 

[Android*]

1) For Crosswalk* or Cordova for Android* build, create an intelxdk.config.additions.xml file that contains the following lines:

<!-- Change the debuggable preference to true to build a remote CDT debuggable app for -->
<!-- Crosswalk* apps on Android* 4.0+ devices and Cordova apps on Android* 4.4+ devices. -->
<preference name="debuggable" value="true" />
<!-- Change the debuggable preference to false before you build for the store. -->

and place it in the root directory of your project (in the same location as your other intelxdk.config.*.xml files). Note that this will only work with Crosswalk* on Android* 4.0 or newer devices or, if you use the standard Cordova for Android* build, on Android* 4.4 or greater devices.

2) Build the Android* app

3) Connect your device to your development system via USB and start app

4) Start Chrome on your development system and type "chrome://inspect" in the Chrome URL bar. You should see your app in the list of apps and tabs presented by Chrome; push the "inspect" link to get a full remote CDT session to your built app. Be sure to close the Intel XDK before you do this; sometimes there is interference between the version of adb used by Chrome and that used by the Intel XDK, which can cause a crash. You might have to kill the adb process before you start Chrome (after you exit the Intel XDK).

[iOS*]

Refer to the instructions on the updated Debug tab docs for on-device debugging. We do not yet have the ability to build a development version of your iOS* app, so you cannot use this technique to build iOS* apps. However, you can include the weinre script from the Test tab in your iOS* app when you build it and use the Test tab to remotely access your built iOS* app. This works best if you include a lot of console.log messages.

[Windows* 8]

You can use the Test tab, which provides a weinre script. Include it in the app that you build, run the app, and connect to the weinre server to work with the console.

Alternatively, you can use App Center to setup and access the weinre console (go here and use the "bug" icon).

Another approach is to write console.log messages to a <textarea> screen on your app. See either of these apps for an example of how to do that:

Why does my device show as offline on Intel XDK Debug?

“Media” mode is the default USB connection mode, but for some unidentified reason it frequently fails to work over USB on Windows* machines. Configure the USB connection mode on your device as "Camera" instead of "Media".

What do I do if my remote debugger does not launch?

You can try the following to have your app run on the device via debug tab:

  • Place the intelxdk.js library before the </body> tag
  • Place your app specific JavaScript files after it
  • Place the call to initialize your app in the device ready event function

Why do I get an "error installing App Preview Crosswalk" message when trying to debug on device?

You may be running into a RAM or storage problem on your Android device; as in, not enough RAM available to load and install the special App Preview Crosswalk app (APX) that must be installed on your device. See this site (http://www.devicespecifications.com) for information regarding your device. If your device has only 512 MB of RAM, which is a marginal amount for use with the Intel XDK Debug tab, you may have difficulties getting APX to install.

You may have to do one or all of the following:

  • remove as many apps from RAM as possible before installing APX (rebooting the device is the simplest approach)
  • make sure there is sufficient storage space in your device (uninstall any unneeded apps on the device)
  • install APX by hand

The last step is the hardest, but only if you are uncomfortable with the command-line:

  1. while attempting to install APX (above) the XDK downloaded a copy of the APK that must be installed on your Android device
  2. find that APK that contains APX
  3. install that APK manually onto your Android device using adb

To find the APK, on a Mac:

$ cd ~/Library/Application\ Support/XDK
$ find . -name '*.apk'

To find the APK, on a Windows machine:

> cd %LocalAppData%\XDK
> dir /s *.apk

For each version of Crosswalk that you have attempted to use (via the Debug tab), you will find a copy of the APK file (but only if you have attempted to use the Debug tab and the XDK has successfully downloaded the corresponding version of APX). You should find something similar to:

./apx_download/12.0/AppAnalyzer.apk

following the searches, above. Notice the directory that specifies the Crosswalk version (12.0 in this example). The file named AppAnalyzer.apk is APX and is what you need to install onto your Android device.

Before you install onto your Android device, you can double-check to see if APX is already installed:

  • find "Apps" or "Applications" in your Android device's "settings" section
  • find "App Preview Crosswalk" in the list of apps on your device (there can be more than one)

If you found one or more App Preview Crosswalk apps on your device, you can see which versions they are by using adb at the command-line (this assumes, of course, that your device is connected via USB and you can communicate with it using adb):

  1. type adb devices at the command-line to confirm you can see your device
  2. type adb shell 'pm list packages -f' at the command-line
  3. search the output for the word app_analyzer

The specific version(s) of APX installed on your device end with a version ID. For example: com.intel.app_analyzer.v12 means you have APX for Crosswalk 12 installed on your device.
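That version suffix can also be pulled out programmatically. The small helper below is a sketch for illustration only (the function name is ours, not part of any Intel tool):

```javascript
// Extract the Crosswalk version number from an APX package ID such as
// "com.intel.app_analyzer.v12". Returns null when the package ID has
// no trailing version suffix. Hypothetical helper for illustration.
function apxVersion(packageId) {
  var match = /\.v(\d+)$/.exec(packageId);
  return match ? parseInt(match[1], 10) : null;
}
```

For example, apxVersion('com.intel.app_analyzer.v12') yields 12, while an ID with no suffix yields null.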

To install a copy of APX manually, cd to the directory containing the version of APX you want to install and then use the following adb command:

$ adb install AppAnalyzer.apk

If you need to remove the v12 copy of APX, due to crowding of available storage space, you can remove it using the following adb command:

$ adb uninstall com.intel.app_analyzer.v12

or

$ adb shell am start -a android.intent.action.DELETE -d package:com.intel.app_analyzer.v12

The second command uses an Android intent to remove the app; you'll have to respond to an uninstall request on the Android device's screen. See this SO issue for details. Obviously, if you want to uninstall a different version of APX, specify the package ID corresponding to that version of APX.

Why is Chrome remote debug not working with my Android or Crosswalk app?

For a detailed discussion regarding how to use Chrome on your desktop to debug an app running on a USB-connected device, please read this doc page Remote Chrome* DevTools* (CDT).

Check to be sure the following conditions have been met:

  • The version of Chrome on your desktop is greater than or equal to the version of the Chrome webview in which you are debugging your app.

    For example, Crosswalk 12 uses the Chrome 41 webview, so you must be running Chrome 41 or greater on your desktop to successfully attach a remote Chrome debug session to an app built with Crosswalk 12. The native Chrome webview in an Android 4.4.2 device is Chrome 30, so your desktop Chrome must be greater than or equal to Chrome version 30 to debug an app that is running on that native webview.
  • Your Android device is running Android 4.4 or higher, if you are trying to remote debug an app running in the device's native webview, and it is running Android 4.0 or higher if you are trying to remote debug an app running Crosswalk.

    When debugging against the native webview, remote debug with Chrome requires that the remote webview is also Chrome; this is not guaranteed to be the case if your Android device does not include a license for Google services. Some manufacturers do not have a license agreement with Google for distribution of the Google services on their devices and, therefore, may not include Chrome as their native webview, even if they are an Android 4.4 or greater device.
  • Your app has been built to allow for remote debug.

    Within the intelxdk.config.additions.xml file you must include this line: <preference name="debuggable" value="true" /> to build your app for remote debug. Without this option your app cannot be attached to for remote debug by Chrome on your desktop.
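The version rule in the first bullet boils down to a simple major-version comparison. Here is a sketch (function names are ours, for illustration):

```javascript
// Return the major version from a Chrome version string like "41.0.2272.76".
function majorVersion(versionString) {
  return parseInt(versionString.split('.')[0], 10);
}

// Desktop Chrome can attach to a remote webview only when its major
// version is greater than or equal to the webview's major version.
function canRemoteDebug(desktopChrome, webviewChrome) {
  return majorVersion(desktopChrome) >= majorVersion(webviewChrome);
}
```

So a Chrome 41 desktop can attach to a Crosswalk 12 app (Chrome 41 webview), but a Chrome 30 desktop cannot.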

How do I detect if my code is running in the Emulate tab?

In the obsolete intel.xdk APIs there is a property you can test to detect whether your app is running within the Emulate tab or on a device. That property is intel.xdk.isxdk. A simple alternative is to perform the following test:

if( window.tinyHippos )

If the test passes (the result is true) you are executing in the Emulate tab.
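Wrapped in a function, the test might look like this (a sketch; the function name is ours):

```javascript
// Returns true when running inside the Emulate tab. The Ripple-based
// emulator injects window.tinyHippos; on a real device it is undefined.
function isEmulateTab(win) {
  return !!(win && win.tinyHippos);
}
```

In the browser you would call isEmulateTab(window); passing the object in explicitly keeps the helper testable outside a browser.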

Never ending "Transferring your project files to the Testing Device" message from Debug tab; results in no Chrome DevTools debug console.

This is a known issue but a resolution for the problem has not yet been determined. If you find yourself facing this issue you can do the following to help resolve it.

On a Windows machine, exit the Intel XDK and open a "command prompt" window:

> cd %LocalAppData%\XDK\
> rmdir cdt_depot /s/q

On a Mac or Linux machine, exit the Intel XDK and open a "terminal" window:

$ find ~ -name global-settings.xdk
$ cd <location-found-above>
$ rm -Rf cdt_depot

Restart the Intel XDK and try the Debug tab again. This procedure is deleting the cached copies of the Chrome DevTools that were retrieved from the corresponding App Preview debug module that was installed on your test device.

One observation that causes this problem is the act of removing one device from your USB and attaching a new device for debug. A workaround that helps sometimes, when switching between devices, is to:

  • switch to the Develop tab
  • close the XDK
  • detach the old device from the USB
  • attach the new device to your USB
  • restart the XDK
  • switch to the Debug tab

Can you integrate the iOS Simulator as a testing platform for Intel XDK projects?

The iOS simulator only runs on Apple Macs, and we're trying to make the Intel XDK accessible to developers on the most popular platforms: Windows, Mac and Linux. Additionally, the iOS simulator requires a specially built version of your app; you can't just load an IPA onto it for simulation.

What is the purpose of having only a partial emulation or simulation in the Emulate tab?

There's no purpose behind it; it's simply difficult to emulate or simulate every feature and quirk of every device.

Not everyone can afford hardware for testing, especially iOS devices; what can I do?

You can buy a used iPod and that works quite well for testing iOS apps. Of course, the screen is smaller and there is no compass or phone feature, but just about everything else works like an iPhone. If you need to do a lot of iOS testing it is worth the investment. A new iPod costs $200 in the US. Used ones should cost less than that. Make sure you get one that can run iOS 8.

Is testing on Crosswalk on a virtual Android device inside VirtualBox good enough?

When you run the Android emulator you are running on a fictitious device, but it is a better emulation than what you get with the iOS simulator and the Intel XDK Emulate tab. The Crosswalk webview further abstracts the system, so you get a very good simulation of a real device. However, considering how inexpensive and easy Android devices are to obtain, we highly recommend you use a real device (with the Debug tab); it will be much faster and even more accurate than using the Android emulator.

Why isn't the Intel XDK emulation as good as running on a real device?

Because the Intel XDK Emulate tab is a Chromium browser, so what you get is the behavior inside that Chromium browser along with some conveniences that make it appear to be a hybrid device. It's poorly named as an emulator, but that was the name given to it by the original Ripple Emulator project. What it is most useful for is simulating most of the core Cordova APIs and your basic application logic. After that, it's best to use real devices with the Debug tab.

Why doesn't my custom splash screen show in the emulator or App Preview?

Ensure the splash screen plugin is selected. Custom splash screens only get displayed on a built app. The emulator and app preview will always use Intel XDK splash screens. Please refer to the 9-Patch Splash Screen sample for a better understanding of how splash screens work.

Is there a way to detect if my program has stopped due to using uninitialized variable or an undefined method call?

This is where the remote debug features of the Debug tab are extremely valuable. Using a remote CDT (or remote Safari with a Mac and iOS device) are the only real options for finding such issues. WEINRE and the Test tab do not work well in that situation because when the script stops WEINRE stops.

Why doesn't the Intel XDK go directly to Debug assuming that I have a device connected via USB?

We are working on streamlining the debug process. There are still obstacles that need to be overcome to ensure the process of connecting to a device over USB is painless.

Can a custom debug module that supports USB debug with third-party plugins be built for iOS devices, or only for Android devices?

The Debug tab, for remote debug over USB can be used with both Android and iOS devices. Android devices work best. However, at this time, debugging with the Debug tab and third-party plugins is only supported with Android devices (running in a Crosswalk webview). We are working on making the iOS option also support debug with third-party plugins, like what you currently get with Android.

Why does my Android debug session not start when I'm using the Debug tab?

Some Android devices include a feature that prevents some applications and services from auto-starting, as a means of conserving power and maximizing available RAM. On Asus devices, for example, there is an app called the "Auto-start Manager" that manages apps that include a service that needs to start when the Android device starts.

If this is the case on your test device, you need to enable the Intel App Preview application as an app that is allowed to auto-start. See the image below for an example of the Asus Auto-start Manager:

Another thing you can try is manually starting Intel App Preview on your test device before starting a debug session with the Debug tab.

How do I share my app for testing in App Preview?

The only way to retrieve a list of apps in App Preview is to login. If you do not wish to share your credentials, you can create an alternate account and push your app to the cloud using App Preview and share that account's credentials, instead.

I am trying to use Live Layout Editing but I get a message saying Chrome is not installed on my system.

The Live Layout Editing feature of the Intel XDK is built on top of the Brackets Live Preview feature. Most of the issues you may experience with Live Layout Editing can be addressed by reviewing this Live Preview Isn't Working FAQ from the Brackets Troubleshooting wiki. In particular, see the section regarding using Chrome with Live Preview.

My AJAX or XHR or Angular $http calls are returning an incorrect return code in App Preview.

Some versions of App Preview include an XHR override library that is designed to deal with issues related to loading file:// URLs outside of the local app filesystem (this is something that is unique to App Preview). Unfortunately, this override appears to cause problems with the return codes for some AJAX, XHR and Angular $http calls. This XHR special handling code can be disabled by adding a "data-noxhrfix" property to your app's <head> tag, in your app's index.html file. For example:

<!DOCTYPE html><html><head data-noxhrfix><meta charset="UTF-8">
...

This override should only apply to situations where the result status is zero and the responseURL is not empty.
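The condition in the last sentence can be expressed directly. This hypothetical helper mirrors the described check (it is not the actual App Preview code):

```javascript
// The App Preview XHR override should only rewrite a response when the
// result status is zero and the responseURL is non-empty.
function overrideApplies(status, responseURL) {
  return status === 0 && responseURL !== '';
}
```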

Back to FAQs Main

Some Methodologies to Optimize Your VR Application's Power on the Intel® Platform


As VR becomes a popular consumer product, more and more VR content is coming out. Recent investigation shows that many users love VR devices without wires, such as all-in-one (AIO) or mobile devices. Because these devices are not charging while in use, developers need to take special care with application power.

For details, please see the attachments.

How AsiaInfo ADB* Improves Performance with Intel® Xeon® Processor-Based Systems


Background

Supporting high online transaction volumes in real time, especially at peak time, can be challenging for telecom and financial services. To ensure uninterrupted service and a good customer experience, telecom and financial companies are constantly looking for ways to improve their services by enhancing their applications and systems.

AsiaInfo [1] ADB* is a scalable online transaction processing [2] database targeted at high-performance, mission-critical businesses such as online charging service [3] (OCS). AsiaInfo ADB provides high performance, high availability, and scalability by clustering multiple servers.

This article describes how AsiaInfo ADB was able to take advantage of features like Intel® Advanced Vector Extensions 2 (Intel® AVX2) [4] and Intel® Transactional Synchronization Extensions (Intel® TSX) [5], as well as faster Intel® Solid State Drives, to improve its performance when running on systems equipped with the latest generation of Intel® Xeon® processors.

AsiaInfo ADB on Intel® Xeon® Processor-Based Systems

AsiaInfo engineers modified the ADB code by replacing the self-implemented spin lock with pthread_rwlock_wrlock from the GNU* C library [6] (glibc). The function pthread_rwlock_wrlock can be configured to enable or disable Intel TSX with an environment variable. With the new ADB version using the glibc lock and Intel TSX enabled, performance improves as shown in Figure 1, compared to the original ADB version using the self-implemented lock.

Customers with limited disk space that cannot be expanded can enable the compression function. The ADB data compression function saves disk space by compressing data before writing it to disk. This function is CPU intensive and impacts database performance. To reduce that impact, AsiaInfo engineers rewrote the ADB compression module using Intel AVX2 intrinsic instructions.

New Intel Xeon processors like the Intel® Xeon® processor E7 v4 family provide more cores (24 compared to 18) and a larger cache (60 MB compared to 45 MB) than the previous-generation Intel® Xeon® processor E7 v3 family. More cores and a larger cache allow more transactions to be served within the same amount of time.

The next section shows how we tested the AsiaInfo ADB workload to compare performance between the current-generation Intel Xeon processor E7 v4 family and the previous-generation Intel Xeon processor E7 v3 family.

Performance Test Procedure

We performed tests on two platforms. One system was equipped with the Intel® Xeon® processor E7-8890 v3 and the other with the Intel® Xeon® processor E7-8890 v4. We wanted to see how Intel TSX, Intel AVX2, and faster solid state drives (SSDs) affect performance.

Test Configuration

System equipped with the quad-socket Intel Xeon processor E7-8890 v4

  • System: Preproduction
  • Processors: Intel Xeon processor E7-8890 v4 @2.2 GHz
  • Cache: 60 MB
  • Cores: 24
  • Memory: 256 GB DDR4-1600 LV DIMM
  • SSD: Intel® SSD DC S3700 Series, Intel SSD DC P3700 Series

System equipped with the quad-socket Intel Xeon processor E7-8890 v3

  • System: Preproduction
  • Processors: Intel Xeon processor E7-8890 v3 @2.5 GHz
  • Cache: 45 MB
  • Cores: 18
  • Memory: 256 GB DDR4-1600 LV DIMM
  • SSD: Intel SSD DC S3700 Series, Intel SSD DC P3700 Series

Operating system:

  • Ubuntu* 15.10 - kernel 4.2

Software:

  • Glibc 2.21

Application:

  • ADB v1.1
  • AsiaInfo ADB OCS ktpmC workload

Test Results

Intel® Transactional Synchronization Extensions
Figure 1: Comparison between the application using the Intel® Xeon® processor E7-8890 v3 and the Intel® Xeon® processor E7-8890 v4 when Intel® Transactional Synchronization Extensions is enabled.

Figure 1 shows that the performance improved by 22 percent with Intel TSX enabled when running the application on systems equipped with Intel Xeon processor E7-8890 v4 compared to that of the Intel Xeon processor E7-8890 v3.

 

Intel® Advanced Vector Extensions 2
Figure 2: Performance improvement using Intel® Advanced Vector Extensions 2.

Figure 2 shows the data compression module performance improved by 34 percent when Intel AVX2 is enabled. This test was performed on the Intel® Xeon® processor E7-8890 v4.

 

Performance comparison between different Intel® SSDs
Figure 3: Performance comparison between different Intel® SSDs.

Figure 3 shows the performance improvement of the application using faster Intel SSDs. In this test case, replacing the Intel SSD DC S3700 Series with the Intel® SSD DC P3700 Series gained 58 percent in performance. Again, this test was performed on the Intel® Xeon® processor E7-8890 v4.

 

Conclusion

AsiaInfo ADB gains more performance by taking advantage of Intel TSX and Intel AVX2, as well as better platform capabilities such as more cores and a larger cache, resulting in improved customer experiences.

References

  1. AsiaInfo company information
  2. Online transaction processing
  3. Online charging system
  4. Intel AVX2
  5. Intel TSX
  6. GNU Library C

Lindsays TEST article Zero Theme


class="button-cta"

Class Collapse List (3up)

Here is a caption for all of the 3-up images - it is inside the outer div end tag.

 :

Images in a Plain P with a Space Between Each

  
Here is a caption for all of the 3-up images - it is inside the p end tag, preceded by a break.

 

Image Floats

Image class="floatLeft"

 

Image class="floatRight"

 

Image class="half-float-left"

 

Image class="half-float-right"

 

Other float classes:
class="one-third-float-left"
class="one-third-float-right"
class="one-quarter-float-left"
class="one-quarter-float-right"

Clear floats with p class="clearfix"

 

Text types - h1 thru h4 are their own tags; remainder are in a <p>.

H1 Header

H2 Header

H3 Header

H3 Grey (class="grey-heading")

H4 Header

Strong Text

Italic Text

Plain Text

SuperscriptText

SubscriptText

 


 

This paragraph is style="margin-left:.5in;" - or is it?

 

Lists

Standard Unordered

  • Item 1
    • Sub-list (ul)
    • Sub 2
  • Item 2
    1. Sub-list (ol)
    2. Sub 2
  • Item 3

Standard Ordered

  1. Item 1
    • Sub-list (ul)
    • Sub 2
  2. Item 2
    1. Sub-list (ol)
    2. Sub 2
  3. Item 3

Special styles - do they work?

None

  • First (style="list-style-type: none;")
  • Second
  • Third
  • Fourth

Lower Alpha

  1. Arizona (ol style="list-style-type:lower-alpha)
    1. Phoenix
    2. Tucson
  2. Florida
  3. Hawaii

Lower Roman

  1. Alpha (ol style="list-style-type:lower-roman;")
  2. Bravo
  3. Charlie

Other Start

  1. Red (ol start="4")
  2. Blue
  3. Green

 

Inline Code

THE OAK AND THE REEDS

An Oak that grew on the bank of a river was uprooted by a severe gale of wind, and thrown across the stream.

It fell among some Reeds growing by the water, and said to them, "How is it that you, who are so frail and slender, have managed to weather the storm, whereas I, with all my strength, have been torn up by the roots and hurled into the river?""You were stubborn," came the reply, "and fought against the storm, which proved stronger than you: but we bow and yield to every breeze, and thus the gale passed harmlessly over our heads."

 

Sample Code Blocks

class="brush:cpp;"

float depthBuffer  = DepthBuffer.Sample( SAMPLER0, screenPosUV ).r;
float div          = Near/(Near-Far);
float depth        = (Far*div)/(div-depthBuffer);
uint  indexAtDepth = uint(totalRaymarchCount * (depth-zMax)/(zMin-zMax));

 

class="brush:java;"

  azureTable.setDefaultClient({
    accountUrl: 'https://' + this.accountName + '.table.core.windows.net/',
    accountName: this.accountName,
    accountKey: this.config.accessKey,
    timeout: 10000
  });

 

class="brush:plain;"

float depthBuffer  = DepthBuffer.Sample( SAMPLER0, screenPosUV ).r;
float div          = Near/(Near-Far);
float depth        = (Far*div)/(div-depthBuffer);
uint  indexAtDepth = uint(totalRaymarchCount * (depth-zMax)/(zMin-zMax));

 

Image Width Test

 

Tables

class="no-alternate" (also has style="width: 100%;")

BaltimoreLondonParisTokyo
Maryland Crab CakesFish and Chips with Mushy PeasBoeuf BourgogneTonkatsu
National AquariumTate ModernEiffel TowerKitanomaru Park

class="all-grey"

BaltimoreLondonParisTokyo
Maryland Crab CakesFish and Chips with Mushy PeasBoeuf BourgogneTonkatsu
National AquariumTate ModernEiffel TowerKitanomaru Park

class="grey-alternating-rows"

BaltimoreLondonParisTokyo
Maryland Crab CakesFish and Chips with Mushy PeasBoeuf BourgogneTonkatsu
National AquariumTate ModernEiffel TowerKitanomaru Park

class="alt-col" (this format is the default, when a table has no class)

BaltimoreLondonParisTokyo
Maryland Crab CakesFish and Chips with Mushy PeasBoeuf BourgogneTonkatsu
National AquariumTate ModernEiffel TowerKitanomaru Park

Lindsays TEST article IDZone


class="button-cta"

Yes, you can embed a YouTube video in an article! Remember to use the "http://www.youtube.com/watch?v=" URL, and not "https" or "youtu.be" formats.

How about a larger size of the same video? NO. iFrame not available in an article.

 

Class Collapse List (2up)


Figure 1. Data set layout.

Figure 1 shows the objects of the data set are all over the space.


Figure 2. Initial positions of the centroids.

Figure 2 shows the initial positions of the centroids. In general, these initial positions are chosen randomly, preferably as far apart from each other as possible.


Figure 3. New positions of the centroids after one iteration.

Figure 3 shows the new positions of the centroids. Note that the two lower centroids are re-adjusted to be closer to the two lower chunks of objects.


Figure 4. New positions of the centroids in subsequent iterations.

Figure 4 shows the new positions of the centroids after many iterations. Note that the positions of the centroids don’t vary too much compared to those in Figure 3. Since the positions of the centroids are stabilized, the algorithm will stop running and consider those positions final.


Figure 5.

Figure 5 shows that the data set has been grouped into three separate clusters.
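The centroid update illustrated in Figures 2 through 4 can be sketched in a few lines. This toy implementation (1-D points and illustrative data, not production code) performs one assign-and-update iteration:

```javascript
// One k-means iteration: assign each point to its nearest centroid,
// then move each centroid to the mean of its assigned points.
function kmeansStep(points, centroids) {
  var sums = centroids.map(function () { return { total: 0, count: 0 }; });
  points.forEach(function (p) {
    var best = 0;
    for (var i = 1; i < centroids.length; i++) {
      if (Math.abs(p - centroids[i]) < Math.abs(p - centroids[best])) best = i;
    }
    sums[best].total += p;
    sums[best].count += 1;
  });
  // A centroid with no assigned points stays where it is.
  return sums.map(function (s, i) {
    return s.count ? s.total / s.count : centroids[i];
  });
}
```

Running kmeansStep repeatedly until the centroids stop moving reproduces the convergence shown in Figure 4.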

 

Class Collapse List (3up)

Here is a caption for all of the 3-up images - it is inside the outer div end tag.

 :

Images in a Plain P with a Space Between Each

  
Here is a caption for all of the 3-up images - it is inside the p end tag, preceded by a break.

 

Class Collapse List (4up)

Here is a caption for all of the 4-up images - it is inside the outer div end tag.

 

Figure 3 (Left) - Level Up Winners demonstrating on the Razer Blade Stealth, (Right) 4 person local multiplayer on Core i7 powered Intel NUC (codenamed Skull Canyon)

Around the show floor, game devs took advantage of our Demo Depot Rentals program, which offers game devs equipment and on-site support at very competitive rates. In addition to the nearly dozen studios around the floor with our rental hardware, our sponsorship of Indie MegaBooth loaned in about 40 TVs and helped offset the cost of the booth for some deserving teams.

Figure 4 (Upper Left) - Interabang, Supertype and High Horse all showing their games in The MIX space. (Upper Right) - SMG showing Death Squared in MegaBooth (Lower Left) Surprise Attack showing as part of the PAX AUS Roadshow on the 6th floor (Lower Right) Vlambeer showed all 12 of their games

 

Image Floats

Image class="floatLeft"

 

Image class="floatRight"

 

Image class="half-float-left"

 

Image class="half-float-right"

 

Other float classes:
class="one-third-float-left"
class="one-third-float-right"
class="one-quarter-float-left"
class="one-quarter-float-right"

Clear floats with p class="clearfix"

 

Text types - h1 thru h4 are their own tags; remainder are in a <p>.

H1 Header

H2 Header

H3 Header

H3 Grey (class="grey-heading")

H4 Header

Strong Text

Italic Text

Plain Text

SuperscriptText

SubscriptText

 


 

Text styles

This is Intel Clear class in the paragraph.

Can I specify Intel Clear font in a span? WYSIWYG says Yes.

Can I specify Intel Clear font in the paragraph? WYSIWYG says Yes.

Can I specify Courier New font in a span? WYSIWYG says Yes.

Can I specify Courier New font in the paragraph? WYSIWYG says Yes.

Can I specify 44px font size in a span? WYSIWYG says Yes.

Can I specify 44px font size in the paragraph? WYSIWYG says Yes.

Can I specify 400% font size in a span? WYSIWYG says Yes.

Can I specify 400% font size in the paragraph? WYSIWYG says Yes.

Can I specify #ff1493 font color in a span? WYSIWYG says Yes.

Can I specify #ff1493 font color in the paragraph? WYSIWYG says No.

Can I specify deeppink font color in a span? WYSIWYG says Yes.

Can I specify #ff1493 deeppink font color in the paragraph? WYSIWYG says No.

Can I specify style="margin-left:.5in;" in a span? WYSIWYG says Yes.

Can I specify style="margin-left:.5in;" in the paragraph? WYSIWYG says Yes.

 

Lists

Standard Unordered

  • Item 1
    • Sub-list (ul)
    • Sub 2
  • Item 2
    1. Sub-list (ol)
    2. Sub 2
  • Item 3

Standard Ordered

  1. Item 1
    • Sub-list (ul)
    • Sub 2
  2. Item 2
    1. Sub-list (ol)
    2. Sub 2
  3. Item 3

Special styles - do they work?

None

  • First (style="list-style-type: none;")
  • Second
  • Third
  • Fourth

Lower Alpha

  1. Arizona (ol style="list-style-type:lower-alpha)
    1. Phoenix
    2. Tucson
  2. Florida
  3. Hawaii

Lower Roman

  1. Alpha (ol style="list-style-type:lower-roman;")
  2. Bravo
  3. Charlie

Other Start

  1. Red (ol start="4")
  2. Blue
  3. Green

 

Inline Code

THE OAK AND THE REEDS

An Oak that grew on the bank of a river was uprooted by a severe gale of wind, and thrown across the stream.

It fell among some Reeds growing by the water, and said to them, "How is it that you, who are so frail and slender, have managed to weather the storm, whereas I, with all my strength, have been torn up by the roots and hurled into the river?""You were stubborn," came the reply, "and fought against the storm, which proved stronger than you: but we bow and yield to every breeze, and thus the gale passed harmlessly over our heads."

 

Sample Code Blocks

class="brush:cpp;"

float depthBuffer  = DepthBuffer.Sample( SAMPLER0, screenPosUV ).r;
float div          = Near/(Near-Far);
float depth        = (Far*div)/(div-depthBuffer);
uint  indexAtDepth = uint(totalRaymarchCount * (depth-zMax)/(zMin-zMax));

 

class="brush:java;"

  azureTable.setDefaultClient({
    accountUrl: 'https://' + this.accountName + '.table.core.windows.net/',
    accountName: this.accountName,
    accountKey: this.config.accessKey,
    timeout: 10000
  });

 

class="brush:plain;"

float depthBuffer  = DepthBuffer.Sample( SAMPLER0, screenPosUV ).r;
float div          = Near/(Near-Far);
float depth        = (Far*div)/(div-depthBuffer);
uint  indexAtDepth = uint(totalRaymarchCount * (depth-zMax)/(zMin-zMax));

 

Image Width Test

 

Tables

class="no-alternate" (also has style="width: 100%;")

BaltimoreLondonParisTokyo
Maryland Crab CakesFish and Chips with Mushy PeasBoeuf BourgogneTonkatsu
National AquariumTate ModernEiffel TowerKitanomaru Park

class="all-grey"

BaltimoreLondonParisTokyo
Maryland Crab CakesFish and Chips with Mushy PeasBoeuf BourgogneTonkatsu
National AquariumTate ModernEiffel TowerKitanomaru Park

class="grey-alternating-rows"

BaltimoreLondonParisTokyo
Maryland Crab CakesFish and Chips with Mushy PeasBoeuf BourgogneTonkatsu
National AquariumTate ModernEiffel TowerKitanomaru Park

class="alt-col" (this format is the default, when a table has no class)

BaltimoreLondonParisTokyo
Maryland Crab CakesFish and Chips with Mushy PeasBoeuf BourgogneTonkatsu
National AquariumTate ModernEiffel TowerKitanomaru Park

Back to Pain Points: Marketing Your Enterprise B2B App


When it comes to marketing your B2B enterprise app, a lot of the groundwork has already been laid. Unlike a consumer app, which may be developed with consumer insights in mind, but still in somewhat of a vacuum—you’ve already been working closely with a select group of potential customers, so you should have a good idea of what they need to hear to move forward. Your understanding of pain points led to the development of a solid product plan—and then the creation of your proof of concept helped refine the product and further strengthen your customer relationships. With all of that insight and information, you’re now ready to scale the product to its final version—and market it to real, paying customers. To get those customers on board, you’ll need to convince them that your product solves a real need, helping workers be more efficient and improving the company’s bottom line.

How to Find and Reach Out to Potential Customers

Some of your potential customers are already familiar to you. Those initial interviewees and proof-of-concept partners will hopefully be ready and eager to convert into long-term customers. But now that you have a solid product, you’re also ready to scale, and that means extending your reach and selling your app to a bigger group, reaching new customers who haven’t yet heard about your product and how it can help them.

  1. Compile a list of organizations that fit your target market.
  2. Do research to find names and contact information for decision makers representing both customer types—users and check writers.
  3. Reach out to those people in any way you can—ask for personal introductions when possible, but also make cold calls and send emails.
  4. Remember that with enterprise B2B, face to face interactions are key. Try to schedule in-person meetings whenever possible.

 

Tip:

Try targeting your efforts to people at the Director level. Directors are great because once they buy in, they can take your product up to C level and down to users, becoming your key sponsor within the company.

Craft Your Messaging

As we’ve said, the main focus of your marketing efforts should be on the pain points, and how your product can provide solutions. But how you approach this will also depend on how well-accepted the pain points you’ve identified are. In other words, is this a universally-acknowledged issue? If so, you’ll need to explain why and how you’ve addressed this issue best. However, if it’s something that’s not as well-accepted or well-understood, you’ll need to start with a fair bit of education around what this pain point is and why they should be looking for ways to solve it.

For example, let’s say that you’ve created a new, simple but highly secure file sharing app designed to help creatives share large files with partners and clients. If it was a few years ago, when organizations weren't particularly savvy to the risks of employees putting company files on a public cloud, then you would've needed to start by educating them about the need for a secure system. However, if you were launching this product today, you'd be able to jump right into why your product’s security features are better than the competition’s.

You will also need to craft distinct messages for your two audiences. As in our example of an e-commerce portal for marketers—when you talk to your key user, or e-commerce analyst, you'll connect with them on the problem of using multiple programs to update inventory and track sales, and explain how your product will make this process more efficient. For the check writer, you'll want to focus on how your app increases sales and improves margins.

Be Open to New Insights

Just because you’ve done careful work in the first two phases, that doesn’t mean that you won’t discover new pain points now. As you talk to more people, you may discover that you need to hone your message for certain verticals, or that a different pain point is really more relevant. You might also determine that there are some new features that should be added to the next version of the product.

Marketing your enterprise B2B app based on pain points means that you're talking to potential customers about the things that really matter to them. You aren't selling them a slick new technology, shiny but unnecessary, and you aren't trying to force a one-size-fits-all solution that doesn't address their specific business needs. When you sit down in a meeting with someone from your contact list, you'll be able to demonstrate that you've been listening, that you understand the issues they face in their business—and that you've created the best product to bring their business forward.

Building Your First ReconOS* App


ReconOS* provides a powerful Operating System for Recon Jet Pro*. This document walks through the steps of developing an app, from downloading the required tools to running the app for the first time on Recon Jet Pro.

This downloadable PDF contains an overview of the resources you'll need to develop apps for ReconOS*, an Operating System that provides a powerful platform compatible with Android API level 16. Once you've familiarized yourself with the reference app and with the Recon SDK UI components, you should be comfortable enough to create your own Jet Pro app.

Setting Up Your Environment for Recon Jet*


Before you can begin developing your first apps for ReconOS*, you must have the proper environment set up.

This document contains instructions on how to set up Android Debug Bridge (ADB) – a versatile command line tool that lets you communicate with connected Recon Jet* devices – on Windows & Macintosh computers.

Using the Intel® Edison module to control robots


Intro

The land of tomorrow is slowly becoming the land of today. Robots might one day take over the world, but for now we still start by remotely controlling them with an internet connection. In this article we will create an HTML page that allows the user to control their robot’s movements using MQTT to send commands, while also remotely viewing the surroundings with a webcam mounted on the robot.  This remote control will lay a foundation that can be used as a stepping stone when adding features in the future.

The DFRobot Devastator Robot fully assembled

Figure 1: The DFRobot Devastator Robot fully assembled

For the purposes of this article, we used the DFRobot* Devastator Robot. You can find it here: https://www.dfrobot.com/index.php?route=product/product&search=edison&description=true&product_id=1379. It is a tank style robot with two motors that also comes with a number of sensors and a camera. However, this article and code can be easily used with other comparable microcontroller robots, and even adapted for other projects.

If you want to learn more about the DFRobot Devastator Robot, see my colleague's article here: https://software.intel.com/en-us/articles/overview-of-intel-edison-based-robotics-platform

Note: The Devastator’s battery pack is only 9V with 6 AA batteries, which is not sufficient once all the motors and the camera are mounted for this project. An additional 2-AA battery pack must be connected in series with the 9V pack to supply the 12V needed for the purposes of this article.

Setting up the Camera

The Devastator Robot kit comes with a USB camera and a pan tilt kit which need to be assembled and attached to the base. If creating your own DIY robot using the Intel® Edison module and a USB camera, note that the camera must be UVC compliant to ensure that it is compatible with the module’s USB drivers. For a list of UVC compliant devices see this webpage here: http://www.ideasonboard.org/uvc/#devices.

On the Devastator Robot, plug the USB webcam into the micro OTG USB port using the OTG USB adapter and plug another cable into the other micro USB port from your computer.

To ensure the USB webcam is working, type the following over a serial connection (for example, using PuTTY):

ls -l /dev/video0

A line similar to this one should appear:

crw-rw---- 1 root video 81, 0 May  6 22:36 /dev/video0

Otherwise, this line will appear, indicating the camera was not found:

ls: cannot access /dev/video0: No such file or directory

Next, install the mjpeg-streamer library on the Intel® Edison module.

Do this by adding the following lines to base-feeds.conf:

echo "src/gz all http://repo.opkg.net/edison/repo/all
src/gz edison http://repo.opkg.net/edison/repo/edison
src/gz core2-32 http://repo.opkg.net/edison/repo/core2-32">> /etc/opkg/base-feeds.conf

Update the repository index:

opkg update

And install:

opkg install mjpg-streamer 

To start the stream:

mjpg_streamer -i "input_uvc.so -n -f 10 -r 400x400" -o "output_http.so -p 8080 -w ./www"

Note that the frame rate (-f 10) and size (400x400) have been limited to reduce power and computational demands and help ensure an up-to-date image without the stream freezing.

To view the stream while on the same Wi-Fi* network, replace ‘localhost’ in the following URL with the IP address of the Intel® Edison module: http://localhost:8080/?action=stream

A still image of the feed can also be viewed by replacing ‘localhost’ in the following URL with the IP address of the Intel® Edison module: http://localhost:8080/?action=snapshot.
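As a quick sanity check from another machine on the same Wi-Fi network, you can pull a single frame with curl. The IP address below is only a placeholder; substitute your module's actual address.

```shell
# Fetch one frame from the snapshot endpoint (placeholder IP address).
curl -s -o snapshot.jpg 'http://192.168.1.104:8080/?action=snapshot'
# 'file' should report JPEG image data if the camera is streaming.
file snapshot.jpg
```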

To make the camera feed visible from outside the Wi-Fi network, you will need to configure your router properly. Detailed instructions can be found here: https://ipcamnetwork.wordpress.com/2010/09/23/acessing-your-camera-from-the-internet/

 

We also need the camera to start up at boot time so it is ready for use. First, create a script in the /home/root/ directory.

vi cameraScript.sh

Add the following to the script.

#!/bin/sh
mjpg_streamer -i "input_uvc.so -n -f 10 -r 400x400" -o "output_http.so -p 8080 -w ./www"

Make the script executable.

chmod +x /home/root/cameraScript.sh

And make it start up on boot. Note that update-rc.d expects the script to live in /etc/init.d, so copy it there first:

cp /home/root/cameraScript.sh /etc/init.d/
update-rc.d cameraScript.sh defaults

To make sure it works, reboot the Intel® Edison module. (This step assumes it is also configured to connect to Wi-Fi).

reboot
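After the reboot, a sketch of how you might confirm the stream came back up, run over the serial console (the curl check assumes curl is installed on your image):

```shell
# Confirm the streamer process restarted after the reboot.
ps | grep '[m]jpg_streamer'
# If curl is available (an assumption), confirm the HTTP endpoint
# answers; a 200 here means a frame was served.
curl -s -o /dev/null -w '%{http_code}\n' 'http://localhost:8080/?action=snapshot'
```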

HTML WebPage

To remotely control the robot, we create a simple HTML webpage, which ties easily into the webcam stream and is compatible across a broad range of platforms. On your computer, place an index.html file and a script.js file in the same directory. Double-clicking index.html opens the webpage in your browser.

Screenshot of the HTML page with webcam view and control buttons

Figure 2: Screenshot of the HTML page with webcam view and control buttons

The index.html file code is below. As the robot is using MQTT and we need some custom methods, we include the mqttws31.js and our own script.js. We also need an object to contain the web stream and some buttons to control the robot.

<!DOCTYPE html><html>

  <head>
    <title>NodeJS Starter Application</title>
    <meta charset="utf-8">
    <meta http-equiv="X-UA-Compatible" content="IE=edge">
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <link rel="stylesheet" href="stylesheets/style.css">
 <script src="https://cdnjs.cloudflare.com/ajax/libs/paho-mqtt/1.0.1/mqttws31.js" type="text/javascript"></script>
 <script src="script.js"></script>
  </head><body>
  <div>
    <object type="text/html" data="http://192.168.1.104:8080/?action=stream" width="640px" height="480px" style="overflow:auto;border:5px ridge blue">
    </object>
 </div>
  <div>
  <input onclick="client.connect(options);" type="button" value="Connect" id="connect"></input>
  <input onclick="publish('forward','dfrobotControl',2);moveForward();" type="button" value="Forward" id="forwardButton"></input>
  <input onclick="publish('reverse','dfrobotControl',2);moveReverse();" type="button" value="Reverse" id="reverseButton"></input>
  <input onclick="publish('left','dfrobotControl',2);moveLeft();" type="button" value="Left" id="leftButton"></input>
  <input onclick="publish('right','dfrobotControl',2);moveRight();" type="button" value="Right" id="rightButton"></input>
  <input onclick="publish('stop','dfrobotControl',2);moveStop();" type="button" value="Stop" id="stopButton"></input>
  <input onclick="client.disconnect();" type="button" value="Disconnect" id="disconnect"></input><div id="messages"></div>
  </div></body></html>

Code Sample 1: index.html file

The script.js will highlight the current command button and also handle the MQTT connect and subscribe logic.

Take note of the client instance line in the script.js:

var client = new Paho.MQTT.Client("broker.hivemq.com", 8000, "clientId");

We are using broker.hivemq.com as our MQTT broker. It is free to use, and HiveMQ also allows you to set up your own broker using their services if needed. The clientId must be unique for each client, so be sure to change it to something else; appending some random characters and numbers works well. The topic the robot uses for subscribing and publishing is dfrobotControl, but you can change it to suit your own purposes.
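Before wiring the page to the robot, you can sanity-check the broker and topic from any machine that has the Mosquitto command-line clients installed (an assumption; on Ubuntu they come from the mosquitto-clients package):

```shell
# In one terminal, watch everything published under the control topic...
mosquitto_sub -h broker.hivemq.com -t 'dfrobotControl/#' -v
# ...and in another terminal, publish a test command, as the
# Forward button on the page would.
mosquitto_pub -h broker.hivemq.com -t dfrobotControl -m forward
```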

function moveForward()
{
 clearButtons();
 document.getElementById("forwardButton").style.color = "red";
};
function moveReverse()
{
 clearButtons();
 document.getElementById("reverseButton").style.color = "red";
};
function moveLeft()
{
 clearButtons();
 document.getElementById("leftButton").style.color = "red";
};
function moveRight()
{
 clearButtons();
 document.getElementById("rightButton").style.color = "red";
};
function moveStop()
{
 clearButtons();
 document.getElementById("stopButton").style.color = "red";
};
function clearButtons()
{
 document.getElementById("forwardButton").style.color = "black";
 document.getElementById("reverseButton").style.color = "black";
 document.getElementById("leftButton").style.color = "black";
 document.getElementById("rightButton").style.color = "black";
 document.getElementById("stopButton").style.color = "black";
};


// Create a client instance
 var client = new Paho.MQTT.Client("broker.hivemq.com", 8000, "clientId");

// called when the client loses its connection
 client.onConnectionLost = function (responseObject) {
     alert("connection lost: " + responseObject.errorMessage);
 };

 //Connect Options
 var options = {
     timeout: 3,
     //Gets Called if the connection has successfully been established
     onSuccess: function () {
         alert("Connected");
   //and subscribe to the topic
   client.subscribe('dfrobotControl/#', {qos: 2});
   alert('Subscribed');
     },
     //Gets Called if the connection could not be established
     onFailure: function (message) {
         alert("Connection failed: " + message.errorMessage);
     }
 };

 //Publish the message to the topic
 var publish = function (payload, topic, qos) {
     var message = new Paho.MQTT.Message(payload);
     message.destinationName = topic;
     message.qos = qos;
     client.send(message);
 
}

Code Sample 2: script.js file for the HTML page

Arduino Sketch

The Devastator uses the Romeo board for the Intel® Edison module, which by default is programmable with the Arduino* IDE. In the Arduino sketch we need to subscribe to the MQTT topic to receive the commands and then act on them. The webcam is already taken care of, but we will set a static IP address for the Intel® Edison module and connect it to the Wi-Fi* network. Once set up this way, the webcam IP will always be the same for the webpage.

The front LEDs are used as status indicators. When the Wi-Fi connection is successful, the robot’s right LED will illuminate. When MQTT successfully connects and subscribes, the robot’s left LED will illuminate.

#include <SPI.h>
#include <WiFi.h>
#include <PubSubClient.h>
#include <DFRobot.h>
#include <IIC1.h>

// WiFi Login
IPAddress ip(192, 168, 1, 104);   
char ssid[] = "wifiname";      //  your network SSID (name)
char pass[] = "wifipassword";             // your network password
const char* server = "broker.mqttdashboard.com";

// WiFi connection
WiFiClient wifiClient;
int status = WL_IDLE_STATUS;          // the Wifi radio's status

#define leftLEDPin 7
#define rightLEDPin 10

DFrobotEdison MotorLeft;
DFrobotEdison MotorRight;

The next step in the code is the callback method for when a message is published to the MQTT topic. The sketch is actively listening for a message to arrive and once it does, the appropriate command can be called.

void callback(char* topic, byte* payload, unsigned int length) {
  Serial.print("Message arrived [");
  Serial.print(topic);
  Serial.print("] ");
  String message= "";
  for (int i=0;i<length;i++) {
    Serial.print((char)payload[i]);
    message= message + (char)payload[i];
  }
  Serial.println();
  if (message.equals("forward") == 1) {
    Serial.println("Forward!!!");
    motorForward();
  }
  if (message.equals("reverse") == 1) {
    motorBackward();
    Serial.println("reverse!!!");
  }
  if (message.equals("left") == 1) {
    motorLeft();
    Serial.println("left!!!");
  }
  if (message.equals("right") == 1) {
    motorRight();
    Serial.println("right!!!");
  }
  if (message.equals("stop") == 1) {
    motorStop();
  }
}

// PubSub Client.
PubSubClient client(server, 1883, callback , wifiClient);

When connecting to the MQTT server, remember to choose a unique clientID for the Intel® Edison module to use and change the topic as well if necessary. Accomplish this by updating the client.connect("clientID ")) and client.subscribe("dfrobotControl") lines.

void reconnectMQTT() {
  // Loop until we're reconnected
  digitalWrite(leftLEDPin, LOW);
  motorStop();
  while (!client.connected()) {
    Serial.print("Attempting MQTT connection...");
    // Attempt to connect
    if (client.connect("clientID ")) {
      Serial.println("connected");
      // ... and resubscribe
      client.subscribe("dfrobotControl");
      digitalWrite(leftLEDPin, HIGH);
    } else {
      Serial.print("failed, rc=");
      Serial.print(client.state());
      Serial.println(" try again in 5 seconds");
      // Wait 5 seconds before retrying
      delay(5000);
    }
  }
}

void reconnectWiFi() {
  // Loop until we're reconnected
  digitalWrite(rightLEDPin, LOW);
  digitalWrite(leftLEDPin, LOW);
  motorStop();
  status = WiFi.begin(ssid, pass);
  while(!(status == WL_CONNECTED)){
     Serial.println("WiFi Failed!");
     status = WiFi.begin(ssid, pass);
     delay(5000);
  }
  Serial.println("WiFi Connected!");
   digitalWrite(rightLEDPin, HIGH);
}

void setup()
{
  Serial.begin(9600);
 
  Serial.println("Init the sensor");       

  //Initialize Pin Mode
  pinMode(rightLEDPin, OUTPUT);              
  pinMode(leftLEDPin,OUTPUT);      
  digitalWrite(rightLEDPin, LOW);           
  digitalWrite(leftLEDPin, LOW);           

  //Initialize Motor Drivers
  MotorLeft.begin(M2);
  MotorRight.begin(M1);

  WiFi.config(ip);
  status = WiFi.begin(ssid, pass);
  while(!(status == WL_CONNECTED)){
     Serial.println("WiFi Failed!");
     status = WiFi.begin(ssid, pass);
     delay(5000);
  }
  Serial.println("WiFi Connected!");
   digitalWrite(rightLEDPin, HIGH);
}

void loop()
{
 client.loop();
if (!client.connected()) {
    reconnectMQTT();
 }
 if(WiFi.status()!= WL_CONNECTED){
   reconnectWiFi();
 }
 delay(1000);
 
}

And finally the methods to control the robot’s movement.

void motorBackward()
{
  motorStop();
  MotorLeft.setDirection(ANTICLOCKWISE);
  MotorRight.setDirection(ANTICLOCKWISE);
  for(int i=0;i<150;i+=10)
  {
    MotorLeft.setSpeed(i);
    MotorRight.setSpeed(i);
    delay(20);
  }
}


void motorForward()
{
  motorStop();
  MotorLeft.setDirection(CLOCKWISE);
  MotorRight.setDirection(CLOCKWISE);
  for(int i=0;i<150;i+=10)
  {
    MotorLeft.setSpeed(i);
    MotorRight.setSpeed(i);
    delay(20);
  }
}

inline void motorLeft()
{
  motorStop();
  MotorLeft.setDirection(ANTICLOCKWISE);
  MotorRight.setDirection(CLOCKWISE);
  for(int i=0;i<150;i+=10)
  {
    MotorLeft.setSpeed(i);
    MotorRight.setSpeed(i);
    delay(20);
  }
}

inline void motorRight()
{
  motorStop();
  MotorLeft.setDirection(CLOCKWISE);
  MotorRight.setDirection(ANTICLOCKWISE);
  for(int i=0;i<150;i+=10)
  {
    MotorLeft.setSpeed(i);
    MotorRight.setSpeed(i);
    delay(20);
  }
}


inline void motorStop()
{
  MotorLeft.setSpeed(0);
  MotorRight.setSpeed(0);
  delay(50);
}

Code Sample 3: Arduino sketch to control the robot

Summary

Now our robot can be remotely controlled through the Internet with a simple webpage using MQTT and with the webcam we can see where it is going. From here we can use it as a base to build more features into our robot, like area mapping or panning and tilting the camera.

 

About the author

Whitney Foster is a software engineer at Intel in the Software Solutions Group working on scale enabling projects for Internet of Things.

 

Notices

No license (express or implied, by estoppel or otherwise) to any intellectual property rights is granted by this document.

Intel disclaims all express and implied warranties, including without limitation, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement, as well as any warranty arising from course of performance, course of dealing, or usage in trade.

This document contains information on products, services and/or processes in development. All information provided here is subject to change without notice. Contact your Intel representative to obtain the latest forecast, schedule, specifications and roadmaps.

The products and services described may contain defects or errors known as errata which may cause deviations from published specifications. Current characterized errata are available on request.

Copies of documents which have an order number and are referenced in this document may be obtained by calling 1-800-548-4725 or by visiting www.intel.com/design/literature.htm.

Intel, the Intel logo, and Intel RealSense are trademarks of Intel Corporation in the U.S. and/or other countries.

 

*Other names and brands may be claimed as the property of others

**This sample source code is released under the Intel Sample Source Code License Agreement.

© 2016 Intel Corporation.

 

Intel Premier Support Legacy Status Update


Current Status:

We are still experiencing problems with the Intel® Premier Support Legacy Tool. We are sorry for the inconvenience. 

In the meantime please use our Developer Forums to post your issues and they will be answered in a timely manner. You can use the private message capability to convey any information you would like to keep confidential.

We will give another update by 2 pm Pacific time, December 13, 2016.


Installing Android Things* on Intel® Edison Kit for Arduino


This document describes how to setup your Intel® Edison Kit for Arduino with Android Things*.

Android Things is an open-source operating system from Google that can run on a wide variety of development boards, including the Intel Edison device. Android Things is based on Android* and the Linux kernel. For more information about Android Things, see https://developer.android.com/things.
 

CSharp Application with Intel Software Guard Extension


C# Application with Intel Software Guard Extension

Although enclaves must be 100 percent native code, and the enclave bridge functions must be 100 percent native code with C (and not C++) linkage, it is possible, indirectly, to make an ECALL into an enclave from .NET and to make an OCALL from an enclave into a .NET object.

Mixing Managed Code and Native Code with C++/CLI

Microsoft Visual Studio* 2005 and later offers three options for calling unmanaged code from managed code:

  • Platform Invocation Services, commonly referred to by developers as P/Invoke:
    • P/Invoke is good for calling simple C functions in a DLL, which makes it a reasonable choice for interfacing with enclaves, but writing P/Invoke wrappers and marshaling data can be difficult and error-prone.
  • COM:
    • COM is more flexible than P/Invoke, but it is also more complicated; that additional complexity is unnecessary for interfacing with the C bridge functions required by enclaves.
  • C++/CLI:
    • C++/CLI offers significant convenience by allowing the developer to mix managed and unmanaged code in the same module, creating a mixed-mode assembly which can in turn be linked to modules comprised entirely of either managed or native code.
    • Data marshaling in C++/CLI is also fairly easy: for simple data types it is done automatically through direct assignment, and helper methods are provided for more complex types such as arrays and strings.
    • Data marshaling is, in fact, so painless in C++/CLI that developers often refer to the programming model as IJW (an acronym for “it just works”).
    • The trade-off for this convenience is that there can be a small performance penalty due to the extra layer of functions, and it does require that you produce an additional DLL when interfacing with Intel SGX enclaves.

Please find the detailed information in the PDF; I have also shared sample code.

Android Things* Developer Preview Now Available on the Intel® Edison board


Today Google released their developer preview of Android Things*, previously known as Project Brillo*. Android Things is Google’s open-source, Android*-based, Internet of Things (IoT) operating system. This first iteration provides developers an opportunity to test key features of the platform, which include:

  • Android developer framework that includes Android Studio and other familiar Android tools to develop, run and debug code
  • Android framework APIs to simplify access to peripheral interfaces, support libraries for common hardware peripherals, and Android extensions to support devices with zero or more displays 
  • Ability to use Google Play* Services on the device, giving access to many popular Google APIs for authentication, cloud and voice services

According to Google, future iterations will incorporate Google’s Weave platform, Google Cloud Platform*, over-the-air updates from Google and the Google IoT Developer Console.

Intel has been involved with Google’s team from the beginning. Intel delivered the first Brillo-compliant starter board, the Intel® Edison Kit for Arduino*, which was featured in Google’s IoT Tech Awards, a global research grant spanning 83 projects selected to experiment with Google and partner IoT technologies. Dr. Max Senges, Google Research Program lead says: “We received very positive feedback from academic researchers, and an overwhelming majority said the experiments laid the foundation for their R&D efforts and they plan to continue to use our products in their research. We want to thank Intel for their contribution and are looking forward to continuing this partnership.”

Deploying Android Things on Intel® architecture combines the power of Android with the performance of Intel architecture to scale IoT projects from proof-of-concept to product reality. Because the firmware is Android-based, developers benefit from working with familiar Android frameworks, languages and tools. Access to Google Play services and over-the-air security updates further simplify and accelerate development and increase security, allowing developers to focus on creating products and the user experience.

Google used the Intel Edison system-on-module (SOM) as a reference device for further development of Android Things. With the launch of Android Things, we added Android Things support for Intel Edison on different expansion boards such as the Intel Edison Kit with Breakout Board  and the SparkFun blocks for Intel Edison. Support for the Intel® Joule™ Compute Module is coming soon.

Intel Edison for Android Things is a compute module that works with a variety of expansion boards, which can be tailored to different application domains. The Intel Edison Breakout Board is a small form factor board that’s just slightly larger than the module itself and provides a minimal set of features and easy access to the GPIO. The Intel Edison Kit for Arduino includes a larger expansion board that allows the Intel Edison module to interface and access the open source Arduino shields, libraries and resources. The module can also be used with the SparkFun’s Blocks series for further customization. These expansion options enable quick adoption, ease of use, and are well suited for developers interested in quick prototyping and making fast time-to-market IoT solutions.

The Intel Joule compute module is an advanced, high performance technology that offers high-end computing, large memory and 4K video capture and display. Once validated, developers will be able to use the Intel Joule platform to build out an embedded system or prototype in the areas of robotics, drones, industrial machine vision, augmented reality and more.

“We are only beginning to see the technological revolution made possible by the Internet of Things,” says Sameer Sharma, GM of Intel’s New Market Developments, Internet of Things Group. “I believe Android Things will be a catalyst in the widespread adoption of IoT in both consumer and enterprise applications. The OS will allow companies to quickly bring new, life-changing products to the market.” Excited at the possibilities presented by Android Things, Sharma notes that while “Android changed the way we use and think about our mobile devices, Android Things has the potential to do the same with the billions of connected IoT devices that will come to market, including products for home, building, and industrial settings.”

Visit our page to get started with Google’s Android Things on Intel architecture. 

Flashing the Zephyr* Application Using a JTAG Adapter on the Arduino 101* (branded Genuino 101* outside the U.S.) on Ubuntu* under VMware*


Introduction

The Zephyr Project* is a small, open source real-time operating system (RTOS) for the Internet of Things (IoT). It’s ideal for low-power, small-memory-footprint devices such as the Arduino 101* (branded Genuino 101* outside the U.S.). The Arduino 101 is a learning and development platform that uses a low-power Intel® Curie™ module powered by the Intel® Quark™ SE SoC, which contains a single-core 32 MHz x86 processor and a 32 MHz ARC processor. This guide demonstrates the steps to flash a Zephyr application onto the x86 and ARC processors of an Arduino 101 platform, using Ubuntu* in a VMware* workstation and a JTAG adapter. The JTAG method enables engineers to perform advanced development and debugging on the Arduino 101 platform through a small number of dedicated pins.

Hardware Components

The hardware components used in this project are listed below:

Setting up VMware Workstation on Ubuntu*

Go to the VMware website to download and install the latest VMware Workstation Player for Windows*. Browse to the Ubuntu website to download the latest version of Ubuntu Desktop. Open VMware and create a new virtual machine using the downloaded Ubuntu image. Check the virtualization settings in your computer’s BIOS to ensure they are enabled; otherwise VMware will not work.

Setting up Development Environment for Zephyr* on Ubuntu* under VMware*

Ensure the Ubuntu OS is up to date and install dependent Ubuntu packages.

  • sudo apt-get update
  • sudo apt-get install git make gcc gcc-multilib g++ libc6-dev-i386 g++-multilib python3-ply

Install the Zephyr Software Development Kit (SDK) and run the installation binary.

Export the Zephyr SDK environment variables. ZEPHYR_SDK_INSTALL is where the Zephyr SDK is installed.

  • export ZEPHYR_GCC_VARIANT=zephyr
  • export ZEPHYR_SDK_INSTALL=/home/nnle/zephyr-sdk

Save the Zephyr SDK environment variables for later use in new sessions.

  • vi  ~/.zephyrrc
  • export ZEPHYR_GCC_VARIANT=zephyr
  • export ZEPHYR_SDK_INSTALL=/home/nnle/zephyr-sdk

Building the Application for Arduino 101* (branded Genuino 101* outside the U.S.)

Clone the Zephyr repository onto the Ubuntu machine and check out the v1.5.0 tag.

  • git clone https://gerrit.zephyrproject.org/r/zephyr && cd zephyr && git checkout tags/v1.5.0

Navigate to the Zephyr project directory and set the project environment variables.

  • cd zephyr
  • source zephyr-env.sh

Navigate to the example project and build the project for the Arduino 101 board.

  • cd $ZEPHYR_BASE/samples/hello_world/microkernel
  • make BOARD=arduino_101

Connecting FlySwatter2 to Arduino 101* (branded Genuino 101* outside the U.S.)

The first step is to connect the ARM micro JTAG connector to the FlySwatter2. Then, connect the FlySwatter2 to the Arduino 101 micro JTAG connector. The small white dot beside the micro JTAG header on the Arduino 101 indicates the location of pin one. Insert the cable so it matches the dot. For more information on locating the micro JTAG connector on the Arduino 101, visit https://www.zephyrproject.org/doc/1.4.0/board/arduino_101.html.

Figure 1: Connect FlySwatter2 to Arduino 101 using ARM Micro JTAG Connector.

A Hardware Abstraction Layer (HAL) allows the computer operating system to interact with a hardware device. Add your username to the plugdev group so that you have permission to control the FlySwatter2.

  • sudo usermod -a -G plugdev $USERNAME

Grant members of the plugdev group permission to control the FlySwatter2.

  • sudo vi /etc/udev/rules.d/99-openocd.rules
  • # TinCanTools FlySwatter2
  • ATTRS{idVendor}=="0403", ATTRS{idProduct}=="6010", MODE="664", GROUP="plugdev"

Reload udev rules to apply the new udev rule without restarting the system.

  • sudo udevadm control --reload-rules
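To confirm the rule took effect without replugging the adapter, you can retrigger udev and look for the FlySwatter2's FTDI IDs on the USB bus (the IDs below are the same ones used in the udev rule above):

```shell
# Re-run udev rules for devices that are already plugged in.
sudo udevadm trigger
# The FlySwatter2 should enumerate with FTDI vendor/product ID 0403:6010.
lsusb | grep -i '0403:6010'
```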

Connect the FlySwatter2 to the computer with a standard USB A-plug-to-B-plug cable, then verify the adapter was detected.

  • dmesg | grep FTDI

In the Ubuntu window, the FTDI USB Serial device should be displayed as below.

 FlySwatter2 on Ubuntu.

Figure 2: FlySwatter2 on Ubuntu.

The FlySwatter2 device should be connected to the Ubuntu as a removable device.

Figure 3: FlySwatter2 pop up message on virtual machine.

Backup and Restore Factory Reset

To back up the Arduino 101 factory settings, connect the FlySwatter2 to the Arduino 101 platform and the computer. Navigate to the Zephyr project directory, set the project environment variables, and then run the backup script. The script saves the original flash into two files (A101_BOOT.bin and A101_OS.bin) in your Zephyr project directory.

  • cd zephyr
  • source zephyr-env.sh
  • ./boards/arduino_101/support/arduino_101_backup.sh

Similarly, to restore the Arduino 101 factory settings, ensure the FlySwatter2 is properly connected to the Arduino 101 platform and the computer, and then run the load script. It flashes the two files (A101_BOOT.bin and A101_OS.bin) from your Zephyr project directory back onto the board.

  • cd zephyr
  • source zephyr-env.sh
  • ./boards/arduino_101/support/arduino_101_load.sh

Flashing the Zephyr* Image into the ARC Processor

Navigate to the Zephyr project and build the binary image. The board type for the ARC processor is arduino_101_sss_factory.

  • cd zephyr
  • source zephyr-env.sh
  • cd $ZEPHYR_BASE/samples/hello_world/microkernel
  • make pristine
  • make BOARD=arduino_101_sss_factory

Ensure the FlySwatter2 is connected to the Arduino 101 platform and the computer, and then flash the image.

  • make BOARD=arduino_101_sss_factory flash

Flashing the Zephyr* Image onto the Intel® Quark™ SE SoC

Navigate to the Zephyr project and build the binary image. The board type for the X86 processor is arduino_101_factory.

  • cd zephyr
  • source zephyr-env.sh
  • cd $ZEPHYR_BASE/samples/hello_world/microkernel
  • make pristine
  • make BOARD=arduino_101_factory

Ensure the FlySwatter2 is connected to the Arduino 101 platform and the computer, and then flash the image.

  • make BOARD=arduino_101_factory flash

The hello_world image should be successfully flashed onto the X86 processor.

Figure 5: Flash image successfully.

Summary

We have described how to build and flash the Zephyr hello_world image onto the ARC or X86 processor of the Arduino 101 platform on Ubuntu in VMware for experimentation and testing purposes. To experiment with different Zephyr projects on Arduino 101, such as heart rate monitor, go to https://gerrit.zephyrproject.org/r/#/admin/projects. To receive more information about the Intel Curie module and Zephyr project, sign up at https://software.intel.com/en-us/iot/hardware/curie.

Helpful References

About the Author

Nancy Le is a software engineer at Intel Corporation in the Software and Services Group working on Intel® Atom™ processor scale-enabling and IoT projects.

 

 

Link Aggregation Configuration and Usage in Open vSwitch* with DPDK


Download PDF  [1914 KB]

This article explains the link aggregation feature of Data Plane Development Kit (DPDK) ports on Open vSwitch* (OVS), and shows how to configure them. Link aggregation can be used for high availability, traffic load balancing and extending the link capacity using multiple links/ports. Link aggregation combines multiple network connections in parallel in order to increase the throughput beyond what a single connection could sustain, and to provide redundancy in case one of the links should fail. Link aggregation support for OVS-DPDK is available in OVS 2.4 and later.

Test Environment

OVS-DPDK link aggregation test setup

Figure 1: OVS-DPDK link aggregation test setup

The test setup uses two hypervisors (physical host machines), both running OVS 2.6 with DPDK 16.07 and QEMU 2.6. The VMs (VM1 and VM2, respectively) running on each hypervisor are connected to a bridge named br0. The two hypervisors are connected to each other using an aggregated link consisting of two physical interfaces named dpdk0 and dpdk1. The member ports (dpdk0, dpdk1) on each host must have the same link properties, such as speed and bandwidth, to form an aggregated link. However, the port names need not be the same on both hosts. The VMs in each hypervisor can reach each other via the aggregated ports between the host machines.

Link Aggregation in OVS with DPDK

At the time of writing, OVS considers each member port in an aggregated port as an independent OpenFlow* port. When a user issues the following command to see the available OpenFlow ports in OVS-DPDK, the member ports are displayed separately, without any bond interface information.

ovs-ofctl show br0

This makes it impossible to program OpenFlow rules on bond ports, and also limits OVS to operating only in the NORMAL action mode. In the NORMAL action mode, OVS operates like a traditional MAC-learning switch.

The following link aggregation modes are supported in OVS with DPDK.

Active-Backup (Active/Standby)

Active/standby failover mode, in which one of the member ports in the aggregated link is active and all others are on standby. One MAC address (the MAC address of the active link) is used as the MAC address of the aggregated link.

Note: No traffic load balancing is offered in this mode.

balance-slb

Load balances traffic based on source MAC and VLAN. This mode uses a simple hashing algorithm on source MAC and VLAN to choose the port in an aggregated link to forward the traffic. It is a simple static link aggregation, similar to mode-2 bonds in the Linux* bonding driver1.

balance-tcp

The preferred load-balancing mode. It uses the 5-tuple (source and destination IP, source and destination port, protocol) to balance traffic across the ports in an aggregated link. This mode is similar to mode-4 bonds in the Linux bonding driver1. It uses the Link Aggregation Control Protocol (LACP)2 for signaling/controlling the link aggregation between switches. LACP offers high resiliency for link failure detection and additional diagnostic information about the bond. Note that balance-tcp can be less performant than balance-slb due to the overhead of hashing on more header fields.
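The difference between the two load-balancing modes comes down to which header fields feed the hash that selects a member port. The sketch below illustrates this idea only; it is not the actual hash function OVS uses, and the field layout and CRC32 hash are assumptions for illustration:

```python
import zlib

def slb_member(src_mac: str, vlan: int, n_members: int) -> int:
    """balance-slb style: hash source MAC and VLAN to pick a member port."""
    key = f"{src_mac.lower()}/{vlan}".encode()
    return zlib.crc32(key) % n_members

def tcp_member(src_ip: str, dst_ip: str, src_port: int,
               dst_port: int, proto: str, n_members: int) -> int:
    """balance-tcp style: hash the full 5-tuple to pick a member port."""
    key = f"{src_ip}/{dst_ip}/{src_port}/{dst_port}/{proto}".encode()
    return zlib.crc32(key) % n_members

# Two TCP streams between the same hosts differ only in source port,
# so balance-tcp can spread them across members, while balance-slb
# (same source MAC and VLAN) would place them on the same member.
m1 = tcp_member("10.0.0.1", "10.0.0.5", 40001, 9000, "tcp", 2)
m2 = tcp_member("10.0.0.1", "10.0.0.5", 40002, 9000, "tcp", 2)
print(m1, m2)  # each is 0 or 1; they may or may not differ
```

Because balance-tcp hashes more fields per packet than balance-slb, it distributes flows more evenly but costs more CPU per packet, which matches the performance note above.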

Link Aggregation Configuration and Testing

The test setup uses two identical host machines with the following configuration:

Hardware: Intel® Xeon® processor E5-2695 V3 product family, Intel® Server Board S2600WT2, and Intel® 82599ES 10-G SFI/SFP+ (rev 01) NIC.

Software: Ubuntu* 16.04, Kernel version 4.2.0-42-generic, OVS 2.6, DPDK 16.07, and QEMU 2.6.

To test the configuration, make sure iPerf* is installed on both VMs. iPerf can be run in client mode or server mode.

To set up the link aggregation, run the following commands on each hypervisor (physical host 1 and physical host 2):

  • Create a bridge named br0 of type netdev:
ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev
  • Create a link aggregation port using the DPDK physical ports dpdk0 and dpdk1:
ovs-vsctl add-bond br0 dpdkbond1 dpdk0 dpdk1 \

-- set Interface dpdk0 type=dpdk \

-- set Interface dpdk1 type=dpdk
  • Add the vhost-user port of the VM to the bridge:
ovs-vsctl add-port br0 vhost-user1 -- set Interface vhost-user1 type=dpdkvhostuser
  • Delete all the flows and set the bridge to NORMAL forwarding mode:
ovs-ofctl del-flows br0

ovs-ofctl add-flow br0 actions=NORMAL
  • Start the VM on each hypervisor using the vhost-user interface vhost-user1.

Active-Backup

  • The default mode of a link aggregation (bond) interface is active-backup. To set the mode explicitly:
ovs-vsctl set port dpdkbond1 bond_mode=active-backup
  • Verify the bond interface configuration by using:
ovs-appctl bond/show
  • Assign IP addresses to the VNIC interfaces of VM1 and VM2:
ip addr flush eth0

ip addr add <ip-addr> dev eth0

In this example, 10.0.0.1/24 and 10.0.0.5/24 are the <ip-addr> for VM1 and VM2, respectively.

ip link set dev eth0 up
  • Run iPerf server on VM1 in UDP mode at port 8080:
iperf -s -u -p 8080
  • Run iPerf client on VM2 in UDP mode at port 8080:
iperf -c 10.0.0.1 -u -p 8080

After 10 seconds, the client shows a series of results for traffic between VM1 and VM2 similar to those in Figure 2, though the numbers may vary.

iPerf client on VM2 in Active-Backup mode

Figure 2: Screenshot of iPerf client on VM2 in Active-Backup mode

Only the active port in the bond interface is used for traffic forwarding. The OpenFlow port numbers assigned to dpdk0 and dpdk1 are port: 1 and port: 2, respectively. In this example, the statistics show all traffic on dpdk1 (port: 2, the active port); the small number of packets on dpdk0 (port: 1) are related to link negotiation.

OpenFlow port statistics on physical host-1

Figure 3: OpenFlow port statistics on physical host-1 in Active-Backup mode

balance-slb

  • Set the link aggregation mode to balance-slb:
ovs-vsctl set port dpdkbond1 bond_mode=balance-slb
  • Verify the bond interface configuration by using:
ovs-appctl bond/show
  • Create two VLAN logical interfaces on the VNIC port of each VM; the balance-slb load balances the traffic based on the VLAN and source MAC address:
ip link add link eth0 name eth0.10 type vlan id 10

ip link add link eth0 name eth0.20 type vlan id 20
  • Assign IP addresses to the VNIC interfaces of VM1 and VM2:
ip addr flush eth0

ip addr flush eth0.10

ip addr add <ip-addr1> dev eth0.10


10.0.0.1/24 and 10.0.0.5/24 are the <ip-addr1> for VM1 and VM2, respectively, for the logical interface eth0.10.

ip addr flush eth0.20

ip addr add <ip-addr2> dev eth0.20


20.0.0.1/24 and 20.0.0.5/24 are the <ip-addr2> for VM1 and VM2, respectively, for the logical interface eth0.20.

ip link set dev eth0.10 up

ip link set dev eth0.20 up
  • Run iPerf server on VM2 in UDP mode at port 8080:
iperf -s -u -p 8080
  • Run two iPerf client streams on VM1 in UDP mode at port 8080:
iperf -c 10.0.0.5 -u -p 8080 -b 1G

iperf -c 20.0.0.5 -u -p 8080 -b 1G

In this example each stream uses a separate port in the bond interface for the traffic. The port stats show the same.

OpenFlow port statistics balance-slb mode

Figure 4: OpenFlow port statistics at physical host-1 in balance-slb mode

balance-tcp

  • Set link aggregation mode to balance-tcp and enable LACP:
ovs-vsctl set port dpdkbond1 bond_mode=balance-tcp

ovs-vsctl set port dpdkbond1 lacp=active


Disabling LACP causes the balance-tcp bond interface to fall back to the default mode (active-backup). To disable LACP on the bond interface:

ovs-vsctl set port dpdkbond1 lacp=off
  • Verify the bond interface configuration by using:
ovs-appctl bond/show
  • Assign IP addresses to the VNIC interfaces of VM1 and VM2:
ip addr flush eth0

ip addr add <ip-addr> dev eth0


In this example, 10.0.0.1/24 and 10.0.0.5/24 are the <ip-addr> for VM1 and VM2, respectively.

ip link set dev eth0 up
  • Run an iPerf server instance on VM2 in TCP mode at port 9000; logical interfaces are not needed here because load balancing is performed at layer 4:
iperf -s -p 9000
  • Run two iPerf client instances on VM1 in TCP mode:
iperf -c 10.0.0.5 -p 9000

iperf -c 10.0.0.5 -p 9000

The two independent TCP streams are load balanced between two ports in the bond interface as the iPerf client uses different source ports for each stream.
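This per-flow behavior is also what preserves packet ordering: every packet of a given TCP stream carries the same 5-tuple, so it always maps to the same member port. The sketch below illustrates that property with an assumed CRC32 hash; it is not the actual OVS hash function:

```python
import zlib

def member_for(five_tuple: tuple, n_members: int = 2) -> int:
    """Pick a bond member by hashing a flow's 5-tuple (illustration only)."""
    return zlib.crc32(repr(five_tuple).encode()) % n_members

# The two iPerf streams to the same server differ only in source port...
flow1 = ("10.0.0.1", 40001, "10.0.0.5", 9000, "tcp")
flow2 = ("10.0.0.1", 40002, "10.0.0.5", 9000, "tcp")

# ...and every packet of a given flow hashes to the same member port,
# so per-flow packet ordering is preserved across the bond.
assert all(member_for(flow1) == member_for(flow1) for _ in range(1000))
print(member_for(flow1), member_for(flow2))  # each 0 or 1; may differ
```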

iPerf server on VM2 in balance-tcp mode

Figure 5: Screenshot of iPerf server on VM2 in balance-tcp mode.

The statistics of bond member ports (highlighted in Figure 6) show that the streams are balanced between the ports.

OpenFlow port statistics on physical host-1 in balance-tcp mode

Figure 6: OpenFlow port statistics on physical host-1 in balance-tcp mode

Additional Configuration and Display Options for Link Aggregation

  • Setting LACP mode to passive/off:
ovs-vsctl set port dpdkbond1 lacp=passive

ovs-vsctl set port dpdkbond1 lacp=off
  • Setting LACP behavior to fall back to bond_mode=active-backup if LACP negotiation fails:
ovs-vsctl set port dpdkbond1 other_config:lacp-fallback-ab=true
  • Setting the LACP negotiation transmit interval to either fast (every 1 second) or slow (every 30 seconds); the default is slow:
ovs-vsctl set port dpdkbond1 other_config:lacp-time=fast

ovs-vsctl set port dpdkbond1 other_config:lacp-time=slow
  • Setting the number of milliseconds a port must be up before it is activated in the bond interface:
ovs-vsctl set port dpdkbond1 other_config:bond_updelay=1000
  • Setting the interval in milliseconds at which flows are rebalanced between bond member ports; set to zero to disable:
ovs-vsctl set port dpdkbond1 other_config:bond-rebalance-interval=10000
  • To display the bond interface configuration details:
ovs-appctl bond/show

ovs-appctl bond/show dpdkbond1

The following bond interface information is displayed for the given test setup in a balance-tcp mode.

‘bond show’ on physical host in balance-tcp mode

Figure 7: ‘bond show’ on physical host in balance-tcp mode

Summary

Link aggregation is a useful method for combining multiple links to form a single (aggregated) link. The main features of link aggregation are:

  • Increased link capacity/bandwidth
  • Traffic load balancing
  • High availability (automatic failover in the event of link failure)

OVS-DPDK offers three modes of link aggregation:

  • Active/Standby (Active-Backup)
    No load balancing is offered in this mode and only one of the member ports is active/used at a time.
  • balance-slb
    A static load-balancing mode: traffic is load balanced between member ports based on source MAC and VLAN.
  • balance-tcp
    This is the preferred bonding mode. It offers traffic load balancing based on 5-tuple header fields. LACP must be enabled at both endpoints to use this mode. The aggregated link falls back to the default mode (active-backup) in the event of LACP negotiation failure.

About the Author

Sugesh Chandran is a network software engineer with Intel. His work is primarily focused on accelerated software switching solutions in the user space running on Intel® architecture. His contributions to Open vSwitch with DPDK include tunneling acceleration and enabling hardware acceleration in OVS-DPDK.

Additional Information

  1. Link Aggregation Wiki
  2. Link Aggregation Control Protocol Wiki
  3. Linux Bonding Documentation
  4. Open vSwitch Documentation and Manpages
  5. Open vSwitch with DPDK Installation guide