
Intel® XDK FAQs - General


How can I get started with Intel XDK?

There are plenty of videos and articles that you can go through here to get started. You could also start with some of our demo apps. It may also help to read Five Useful Tips on Getting Started Building Cordova Mobile Apps with the Intel XDK, which will help you understand some of the differences between developing for a traditional server-based environment and developing for the Intel XDK hybrid Cordova app environment.

Having prior understanding of how to program using HTML, CSS and JavaScript* is crucial to using the Intel XDK. The Intel XDK is primarily a tool for visualizing, debugging and building an app package for distribution.

You can do the following to access our demo apps:

  • Select Project tab
  • Select "Start a New Project"
  • Select "Samples and Demos"
  • Create a new project from a demo

If you have specific questions after that, please post them to our forums.

Can I use an external editor for development in Intel® XDK?

Yes, you can open your files and edit them in your favorite editor. However, note that you must use Brackets* to use the "Live Layout Editing" feature. Also, if you are using App Designer (the UI layout tool in Intel XDK) it will make many automatic changes to your index.html file, so it is best not to edit that file externally at the same time you have App Designer open.

Some popular editors among our users include:

  • Sublime Text* (Refer to this article for information on the Intel XDK plugin for Sublime Text*)
  • Notepad++* for a lightweight editor
  • Jetbrains* editors (Webstorm*)
  • Vim* (the editor)

How do I get code refactoring capability in Brackets* (the Intel XDK code editor)?

...to be written...

Why doesn’t my app show up in Google* Play for tablets?

...to be written...

What is the global-settings.xdk file and how do I locate it?

global-settings.xdk contains information about all your projects in the Intel XDK, along with many of the settings related to panels under each tab (Emulate, Debug etc). For example, you can set the emulator to auto-refresh or no-auto-refresh. Modify this file at your own risk and always keep a backup of the original!

You can locate global-settings.xdk here:

  • Mac OS X*
    ~/Library/Application Support/XDK/global-settings.xdk
  • Microsoft Windows*
    %LocalAppData%\XDK\global-settings.xdk
  • Linux*
    ~/.config/XDK/global-settings.xdk

If you are having trouble locating this file, you can search for it on your system using something like the following:

  • Windows:
    > cd /
    > dir /s global-settings.xdk
  • Mac and Linux:
    $ sudo find / -name global-settings.xdk

When do I use the intelxdk.js, xhr.js and cordova.js libraries?

The intelxdk.js and xhr.js libraries were only required for use with the Intel XDK legacy build tiles (which have been retired). The cordova.js library is needed for all Cordova builds. When building with the Cordova tiles, any references to intelxdk.js and xhr.js libraries in your index.html file are ignored.
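For reference, a minimal index.html for a Cordova build references cordova.js with a single script tag. The sketch below is only illustrative (the js/app.js file name is a placeholder for your own code), and the cordova.js file itself is added at build time, so you typically do not keep a copy of it in your source directory:

<!DOCTYPE html>
<html>
  <head>
    <title>My App</title>
  </head>
  <body>
    <!-- cordova.js is supplied by the Cordova build; just reference it -->
    <script src="cordova.js"></script>
    <!-- your application code (placeholder file name) -->
    <script src="js/app.js"></script>
  </body>
</html>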

How do I get my Android (and Crosswalk) keystore file?

New with release 3088 of the Intel XDK, you may now download your build certificates (aka keystore) using the new certificate manager that is built into the Intel XDK. Please read the initial paragraphs of Managing Certificates for your Intel XDK Account and the section titled "Convert a Legacy Android Certificate" in that document, for details regarding how to do this.

It may also help to review this short, quick overview video (there is no audio) that shows how you convert your existing "legacy" certificates to the "new" format that allows you to directly manage your certificates using the certificate management tool that is built into the Intel XDK. This conversion process is done only once.

If the above fails, please send an email to html5tools@intel.com requesting help. It is important that you send that email from the email address associated with your Intel XDK account.

How do I rename my project that is a duplicate of an existing project?

See this FAQ: How do I make a copy of an existing Intel XDK project?

How do I recover when the Intel XDK hangs or won't start?

  • If you are running Intel XDK on Windows* it must be Windows* 7 or higher. It will not run reliably on earlier versions.
  • Delete the "project-name.xdk" file from the project directory that Intel XDK is trying to open when it starts (it will try to open the project that was open during your last session), then try starting Intel XDK. You will have to "import" your project into Intel XDK again. Importing merely creates the "project-name.xdk" file in your project directory and adds that project to the "global-settings.xdk" file.
  • Rename the project directory Intel XDK is trying to open when it starts. Create a new project based on one of the demo apps. Test Intel XDK using that demo app. If everything works, restart Intel XDK and try it again. If it still works, rename your problem project folder back to its original name and open Intel XDK again (it should now open the sample project you previously opened). You may have to re-select your problem project (Intel XDK should have forgotten that project during the previous session).
  • Clear Intel XDK's program cache directories and files.

    On a Windows machine this can be done using the following on a standard command prompt (administrator is not required):

    > cd %AppData%\..\Local\XDK
    > del *.* /s/q

    To locate the "XDK cache" directory on Mac OS X* and Linux* systems, do the following:

    $ sudo find / -name global-settings.xdk
    $ cd <dir found above>
    $ sudo rm -rf *

    You might want to save a copy of the "global-settings.xdk" file before you delete that cache directory and copy it back before you restart Intel XDK. Doing so will save you the effort of rebuilding your list of projects. Please refer to this question for information on how to locate the global-settings.xdk file.
  • If you saved the "global-settings.xdk" file and restored it in the step above, and you are still having hang troubles, try deleting the directories and files above, along with the "global-settings.xdk" file, and try again.
  • Do not store your project directories on a network share (Intel XDK currently has issues with network shares that have not yet been resolved). This includes folders shared between a Virtual machine (VM) guest and its host machine (for example, if you are running Windows* in a VM running on a Mac* host). This network share issue is a known issue with a fix request in place.
  • There have also been issues with running behind a corporate network proxy or firewall. To check them try running Intel XDK from your home network where, presumably, you have a simple NAT router and no proxy or firewall. If things work correctly there then your corporate firewall or proxy may be the source of the problem.
  • Issues with Intel XDK account logins can also cause Intel XDK to hang. To confirm that your login is working correctly, go to the Intel XDK App Center and confirm that you can login with your Intel XDK account. While you are there you might also try deleting the offending project(s) from the App Center.

If you can reliably reproduce the problem, please send us a copy of the "xdk.log" file that is stored in the same directory as the "global-settings.xdk" file to html5tools@intel.com.

Is Intel XDK an open source project? How can I contribute to the Intel XDK community?

No, it is not an open source project. However, it utilizes many open source components that are then assembled into the Intel XDK. While you cannot contribute directly to the Intel XDK integration effort, you can contribute to the many open source components that make up the Intel XDK.

The following open source components are the major elements that are being used by Intel XDK:

  • Node-Webkit
  • Chromium
  • Ripple* emulator
  • Brackets* editor
  • Weinre* remote debugger
  • Crosswalk*
  • Cordova*
  • App Framework*

How do I configure the Intel XDK to use a 9-patch PNG for an Android* app splash screen?

The Intel XDK does support the use of 9-patch PNG images for Android* app splash screens. See https://software.intel.com/en-us/xdk/articles/android-splash-screens-using-nine-patch-png for details on how to create a 9-patch PNG image and for a link to an Intel XDK sample that uses 9-patch PNG images.

How do I stop AVG from popping up the "General Behavioral Detection" window when Intel XDK is launched?

You can try adding nw.exe as the app that needs an exception in AVG.

What do I specify for "App ID" in Intel XDK under Build Settings?

Your app ID uniquely identifies your app. For example, it can be used to identify your app within Apple’s application services allowing you to use things like in-app purchasing and push notifications.

Here are some useful articles on how to create an App ID:

Is it possible to modify the Android Manifest or iOS plist file with the Intel XDK?

You cannot modify the AndroidManifest.xml file directly with our build system, as it only exists in the cloud. However, you may do so by creating a dummy plugin that only contains a plugin.xml file containing directives that can be used to add lines to the AndroidManifest.xml file during the build process. In essence, you add lines to the AndroidManifest.xml file via a local plugin.xml file. Here is an example of a plugin that does just that:

<?xml version="1.0" encoding="UTF-8"?>
<plugin xmlns="http://apache.org/cordova/ns/plugins/1.0"
        id="my-custom-intents-plugin"
        version="1.0.0">
    <name>My Custom Intents Plugin</name>
    <description>Add Intents to the AndroidManifest.xml</description>
    <license>MIT</license>
    <engines>
        <engine name="cordova" version=">=3.0.0" />
    </engines>
    <!-- android -->
    <platform name="android">
        <config-file target="AndroidManifest.xml" parent="/manifest/application">
            <activity android:configChanges="orientation|keyboardHidden|keyboard|screenSize|locale"
                      android:label="@string/app_name"
                      android:launchMode="singleTop"
                      android:name="testa"
                      android:theme="@android:style/Theme.Black.NoTitleBar">
                <intent-filter>
                    <action android:name="android.intent.action.SEND" />
                    <category android:name="android.intent.category.DEFAULT" />
                    <data android:mimeType="*/*" />
                </intent-filter>
            </activity>
        </config-file>
    </platform>
</plugin>

You can inspect the AndroidManifest.xml created in an APK, using apktool with the following command line:

$ apktool d my-app.apk
$ cd my-app
$ more AndroidManifest.xml

This technique exploits the config-file element that is described in the Cordova Plugin Specification docs and can also be used to add lines to iOS plist files. See the Cordova plugin documentation link for additional details.

Here is an example of such a plugin for modifying the iOS plist file, specifically for adding a BIS key to the plist file:

<?xml version="1.0" encoding="UTF-8"?>
<plugin xmlns="http://apache.org/cordova/ns/plugins/1.0"
        id="my-custom-bis-plugin"
        version="0.0.2">
    <name>My Custom BIS Plugin</name>
    <description>Add BIS info to iOS plist file.</description>
    <license>BSD-3</license>
    <preference name="BIS_KEY" />
    <engines>
        <engine name="cordova" version=">=3.0.0" />
    </engines>
    <!-- ios -->
    <platform name="ios">
        <config-file target="*-Info.plist" parent="CFBundleURLTypes">
            <array>
                <dict>
                    <key>ITSAppUsesNonExemptEncryption</key>
                    <true/>
                    <key>ITSEncryptionExportComplianceCode</key>
                    <string>$BIS_KEY</string>
                </dict>
            </array>
        </config-file>
    </platform>
</plugin>

How can I share my Intel XDK app build?

You can send a link to your project via an email invite from your project settings page. However, a login to your account is required to access the file behind the link. Alternatively, you can download the build from the build page, onto your workstation, and push that built image to some location from which you can send a link to that image.

Why does my iOS build fail when I am able to test it successfully on a device and the emulator?

Common reasons include:

  • The App ID specified in your project settings does not match the one you specified in Apple's developer portal.
  • The provisioning profile does not match the cert you uploaded. Double check with Apple's developer site that you are using the correct and current distribution cert and that the provisioning profile is still active. Download the provisioning profile again and add it to your project to confirm.
  • In the project's Build Settings, your App Name is invalid. It should contain only letters, numbers and spaces.

How do I add multiple domains in Domain Access?

Here is the primary doc source for that feature.

If you need to insert multiple domain references, you will need to add the extra references in the intelxdk.config.additions.xml file. This StackOverflow entry provides the basic idea; you can also inspect the intelxdk.config.*.xml files that are automatically generated with each build to see the <access origin="xxx" /> line that is generated from what you provide in the "Domain Access" field of the "Build Settings" panel on the Projects tab, as illustrated in the sketch below.
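For illustration only (the domain names below are placeholders, not values from the documentation), the additional lines in intelxdk.config.additions.xml follow the same pattern as that generated <access origin="xxx" /> line, one entry per domain:

<!-- extra Domain Access entries, one per domain (placeholder domains) -->
<access origin="https://api.example.com" />
<access origin="https://cdn.example.com" />
<access origin="https://example.org" subdomains="true" />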

How do I build more than one app using the same Apple developer account?

On the Apple developer site, create a distribution certificate using the "iOS* Certificate Signing Request" key downloaded from the Intel XDK Build tab; do this only for the first app. For subsequent apps, reuse the same certificate and import it into the Build tab as you usually would.

How do I include search and spotlight icons as part of my app?

Please refer to this article in the Intel XDK documentation. Create an intelxdk.config.additions.xml file in your top-level directory (the same location as the other intelxdk.config.*.xml files) and add the following lines to support icons in Settings and other areas of iOS*.

<!-- Spotlight Icon -->
<icon platform="ios" src="res/ios/icon-40.png" width="40" height="40" />
<icon platform="ios" src="res/ios/icon-40@2x.png" width="80" height="80" />
<icon platform="ios" src="res/ios/icon-40@3x.png" width="120" height="120" />

<!-- iPhone Spotlight and Settings Icon -->
<icon platform="ios" src="res/ios/icon-small.png" width="29" height="29" />
<icon platform="ios" src="res/ios/icon-small@2x.png" width="58" height="58" />
<icon platform="ios" src="res/ios/icon-small@3x.png" width="87" height="87" />

<!-- iPad Spotlight and Settings Icon -->
<icon platform="ios" src="res/ios/icon-50.png" width="50" height="50" />
<icon platform="ios" src="res/ios/icon-50@2x.png" width="100" height="100" />

For more information related to these configurations, visit http://cordova.apache.org/docs/en/3.5.0/config_ref_images.md.html#Icons%20and%20Splash%20Screens.

For accurate information related to iOS icon sizes, visit https://developer.apple.com/library/ios/documentation/UserExperience/Conceptual/MobileHIG/IconMatrix.html

NOTE: The iPhone 6 icons will only be available if iOS* 7 or 8 is the target.

Cordova iOS* 8 support JIRA tracker: https://issues.apache.org/jira/browse/CB-7043

Does Intel XDK support Modbus TCP communication?

No, since Modbus is a specialized protocol, you need to write either some JavaScript* or native code (in the form of a plugin) to handle the Modbus transactions and protocol.

How do I sign an Android* app using an existing keystore?

New with release 3088 of the Intel XDK, you may now import your existing keystore into Intel XDK using the new certificate manager that is built into the Intel XDK. Please read the initial paragraphs of Managing Certificates for your Intel XDK Account and the section titled "Import an Android Certificate Keystore" in that document, for details regarding how to do this.

If the above fails, please send an email to html5tools@intel.com requesting help. It is important that you send that email from the email address associated with your Intel XDK account.

How do I build separately for different Android* versions?

Under the Projects Panel, you can select the Target Android* version under the Build Settings collapsible panel. You can change this value and build your application multiple times to create numerous versions of your application that are targeted for multiple versions of Android*.

How do I display the 'Build App Now' button if my display language is not English?

If your display language is not English and the 'Build App Now' button is proving to be troublesome, you can change your display language to English (the English language pack can be downloaded via Windows* Update). Once you have installed the English language pack, go to Control Panel > Clock, Language and Region > Region and Language > Change Display Language.

How do I update my Intel XDK version?

When an Intel XDK update is available, an Update Version dialog box lets you download the update. After the download completes, a similar dialog lets you install it. If you did not download or install an update when prompted (or on older versions), click the package icon next to the orange (?) icon in the upper-right to download or install the update. The installation removes the previous Intel XDK version.

How do I import my existing HTML5 app into the Intel XDK?

If your project contains an Intel XDK project file (<project-name>.xdk) you should use the "Open an Intel XDK Project" option located at the bottom of the Projects List on the Projects tab (lower left of the screen, round green "eject" icon, on the Projects tab). This would be the case if you copied an existing Intel XDK project from another system or used a tool that exported a complete Intel XDK project.

If your project does not contain an Intel XDK project file (<project-name>.xdk) you must "import" your code into a new Intel XDK project. To import your project, use the "Start a New Project" option located at the bottom of the Projects List on the Projects tab (lower left of the screen, round blue "plus" icon, on the Projects tab). This will open the "Samples, Demos and Templates" page, which includes an option to "Import Your HTML5 Code Base." Point to the root directory of your project. The Intel XDK will attempt to locate a file named index.html in your project and will set the "Source Directory" on the Projects tab to point to the directory that contains this file.

If your imported project did not contain an index.html file, your project may be unstable. In that case, it is best to delete the imported project from the Intel XDK Projects tab ("x" icon in the upper right corner of the screen), rename your "root" or "main" html file to index.html and import the project again. Several components in the Intel XDK depend on the assumption that the main HTML file in your project is named index.html. See Introducing Intel® XDK Development Tools for more details.

It is highly recommended that your "source directory" be located as a sub-directory inside your "project directory." This ensures that non-source files are not included as part of your build package when building your application. If the "source directory" and "project directory" are the same, it results in longer upload times to the build server and unnecessarily large application executable files returned by the build system. See the following images for the recommended project file layout.

I am unable to login to App Preview with my Intel XDK password.

On some devices you may have trouble entering your Intel XDK login password directly on the device in the App Preview login screen. In particular, sometimes you may have trouble with the first one or two letters getting lost when entering your password.

Try the following if you are having such difficulties:

  • Reset your password, using the Intel XDK, to something short and simple.

  • Confirm that this new short and simple password works with the XDK (logout and login to the Intel XDK).

  • Confirm that this new password works with the Intel Developer Zone login.

  • Make sure you have the most recent version of Intel App Preview installed on your devices. Go to the store on each device to confirm you have the most recent copy of App Preview installed.

  • Try logging into Intel App Preview on each device with this short and simple password. Check the "show password" box so you can see your password as you type it.

If the above works, it confirms that you can log into your Intel XDK account from App Preview (because App Preview and the Intel XDK go to the same place to authenticate your login). When the above works, you can go back to the Intel XDK and reset your password to something else, if you do not like the short and simple password you used for the test.

How do I completely uninstall the Intel XDK from my system?

Take the following steps to completely uninstall the XDK from your Windows system:

  • From the Windows Control Panel, remove the Intel XDK, using the Windows uninstall tool.

  • Then:
    > cd %LocalAppData%\Intel\XDK
    > del *.* /s/q

  • Then:
    > cd %LocalAppData%\XDK
    > copy global-settings.xdk %UserProfile%
    > del *.* /s/q
    > copy %UserProfile%\global-settings.xdk .

  • Then:
    -- Go to xdk.intel.com and select the download link.
    -- Download and install the new XDK.

To do the same on a Linux or Mac system:

  • On a Linux machine, run the uninstall script, typically /opt/intel/XDK/uninstall.sh.
     
  • Remove the directory into which the Intel XDK was installed.
    -- Typically /opt/intel or your home (~) directory on a Linux machine.
    -- Typically in the /Applications/Intel XDK.app directory on a Mac.
     
  • Then:
    $ find ~ -name global-settings.xdk
    $ cd <result-from-above> (for example ~/Library/Application Support/XDK/ on a Mac)
    $ cp global-settings.xdk ~
    $ rm -Rf *
    $ mv ~/global-settings.xdk .

     
  • Then:
    -- Go to xdk.intel.com and select the download link.
    -- Download and install the new XDK.

Is there a tool that can help me highlight syntax issues in Intel XDK?

Yes, you can use the various linting tools that can be added to the Brackets editor to review any syntax issues in your HTML, CSS and JS files. Go to the "File > Extension Manager..." menu item and add the following extensions: JSHint, CSSLint, HTMLHint, XLint for Intel XDK. Then, review your source files by monitoring the small yellow triangle at the bottom of the edit window (a green check mark indicates no issues).

How do I delete built apps and test apps from the Intel XDK build servers?

You can manage them by logging into https://appcenter.html5tools-software.intel.com/csd/controlpanel.aspx. This functionality will eventually be available within the Intel XDK, after which access to the App Center will be removed.

I need help with the App Security API plugin; where do I find it?

Visit the primary documentation book for the App Security API and see this forum post for some additional details.

When I install my app or use the Debug tab Avast antivirus flags a possible virus, why?

If you are receiving a "Suspicious file detected - APK:CloudRep [Susp]" message from the Avast anti-virus software installed on your Android device, it is because you are side-loading the app (or the Intel XDK Debug modules) onto your device (using a download link after building, or by using the Debug tab to debug your app), or because your app has been installed from an "untrusted" Android store. See the following official explanation from Avast:

Your application was flagged by our cloud reputation system. "Cloud rep" is a new feature of Avast Mobile Security, which flags apks when the following conditions are met:

  1. The file is not prevalent enough; meaning not enough users of Avast Mobile Security have installed your APK.
  2. The source is not an established market (Google Play is an example of an established market).

If you distribute your app using Google Play (or any other trusted market) your users should not see any warning from Avast.

Following are some of the Avast anti-virus notification screens you might see on your device. All of these are perfectly normal; they appear because you must enable the installation of "non-market" apps in order to use your device for debug, and because the App IDs associated with your never-published app (or with the custom debug modules that the Debug tab in the Intel XDK builds and installs on your device) will not be found in an "established" (aka "trusted") market, such as Google Play.

If you choose to ignore the "Suspicious app activity!" threat you will not receive a threat for that debug module any longer. It will show up in the Avast 'ignored issues' list. Updates to an existing, ignored, custom debug module should continue to be ignored by Avast. However, new custom debug modules (due to a new project App ID or a new version of Crosswalk selected in your project's Build Settings) will result in a new warning from the Avast anti-virus tool.

  

  

How do I add a Brackets extension to the editor that is part of the Intel XDK?

The number of Brackets extensions provided in the built-in edition of the Brackets editor is limited, to ensure stability of the Intel XDK product. Not all extensions are compatible with the edition of Brackets that is embedded within the Intel XDK, and adding incompatible extensions can cause the Intel XDK to quit working.

Despite this warning, there are useful extensions that have not been included in the editor and which can be added to the Intel XDK. Adding them is temporary: each time you update the Intel XDK (or reinstall it) you will have to re-add your Brackets extensions. To add a Brackets extension, use the following procedure:

  • exit the Intel XDK
  • download a ZIP file of the extension you wish to add
  • on Windows, unzip the extension here:
    %LocalAppData%\Intel\XDK\xdk\brackets\b\extensions\dev
  • on Mac OS X, unzip the extension here:
    /Applications/Intel\ XDK.app/Contents/Resources/app.nw/brackets/b/extensions/dev
  • start the Intel XDK

Note that the locations given above are subject to change with new releases of the Intel XDK.

Why does my app or game require so many permissions on Android when built with the Intel XDK?

When you build your HTML5 app using the Intel XDK for Android or Android-Crosswalk you are creating a Cordova app. It may seem like you're not building a Cordova app, but you are. In order to package your app so it can be distributed via an Android store and installed on an Android device, it needs to be built as a hybrid app. The Intel XDK uses Cordova to create that hybrid app.

A pure Cordova app requires the NETWORK permission; it is needed to "jump" between your HTML5 environment and the native Android environment. Additional permissions will be added by any Cordova plugins you include with your application; which permissions are included is a function of what each plugin does and requires.

Crosswalk for Android builds also require the NETWORK permission, because the Crosswalk image built by the Intel XDK includes support for Cordova. In addition, current versions of Crosswalk (12 and 14 at the time this FAQ was written) also require NETWORK STATE and WIFI STATE. There is an extra permission in some versions of Crosswalk (WRITE EXTERNAL STORAGE) that is only needed by the shared model library of Crosswalk; we have asked the Crosswalk project to remove this permission in a future Crosswalk version.

If you are seeing more than the following four permissions in your XDK-built Crosswalk app:

  • android.permission.INTERNET
  • android.permission.ACCESS_NETWORK_STATE
  • android.permission.ACCESS_WIFI_STATE
  • android.permission.WRITE_EXTERNAL_STORAGE

then you are seeing permissions that have been added by some plugins. Each plugin is different, so there is no hard rule of thumb. The two "default" core Cordova plugins that are added by the Intel XDK blank templates (device and splash screen) do not require any Android permissions.

BTW: the permission list above comes from a Crosswalk 14 build. Crosswalk 12 builds do not include the last permission; it was added when the Crosswalk project introduced the shared model library option, which started with Crosswalk 13 (the Intel XDK does not support Crosswalk 13 builds).
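For illustration only (this is not one of the default plugins, and the permission shown is just an example), a plugin typically requests an extra Android permission in its plugin.xml using the same config-file mechanism shown in the AndroidManifest example earlier in this FAQ:

<platform name="android">
    <config-file target="AndroidManifest.xml" parent="/manifest">
        <!-- example only: a hypothetical camera plugin might add this -->
        <uses-permission android:name="android.permission.CAMERA" />
    </config-file>
</platform>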

How do I make a copy of an existing Intel XDK project?

If you just need to make a backup copy of an existing project, and do not plan to open that backup copy as a project in the Intel XDK, do the following:

  • Exit the Intel XDK.
  • Copy the entire project directory:
    • on Windows, use File Explorer to "right-click" and "copy" your project directory, then "right-click" and "paste"
    • on Mac use Finder to "right-click" and then "duplicate" your project directory
    • on Linux, open a terminal window, "cd" to the folder that contains your project, and type "cp -a old-project/ new-project/" at the terminal prompt (where "old-project/" is the folder name of your existing project that you want to copy and "new-project/" is the name of the new folder that will contain a copy of your existing project)

If you want to use an existing project as the starting point of a new project in the Intel XDK, follow the process described below. It ensures that the build system does not confuse the ID in your old project with the one stored in your new project. If you do not follow this procedure you will have multiple projects using the same project ID (a special GUID that is stored inside the Intel XDK <project-name>.xdk file in the root directory of your project). Each project in your account must have a unique project ID.

  • Exit the Intel XDK.
  • Make a copy of your existing project using the process described above.
  • Inside the new project that you made (that is, your new copy of your old project), make copies of the <project-name>.xdk file and <project-name>.xdke files and rename those copies to something like project-new.xdk and project-new.xdke (anything you like, just something different than the original project name, preferably the same name as the new project folder in which you are making this new project).
  • Using a TEXT EDITOR (only) (such as Notepad or Sublime or Brackets or some other TEXT editor), open your new "project-new.xdk" file (whatever you named it) and find the projectGuid line, it will look something like this:
    "projectGuid": "a863c382-ca05-4aa4-8601-375f9f209b67",
  • Change the "GUID" to all zeroes, like this: "00000000-0000-0000-0000-000000000000"
  • Save the modified "project-new.xdk" file.
  • Open the Intel XDK.
  • Go to the Projects tab.
  • Select "Open an Intel XDK Project" (the green button at the bottom left of the Projects tab).
  • To open this new project, locate the new "project-new.xdk" file inside the new project folder you copied above.
  • Don't forget to change the App ID in your new project. This is necessary to avoid conflicts with the project you copied from, in the store and when side-loading onto a device.

My project does not include a www folder. How do I fix it so it includes a www or source directory?

The Intel XDK HTML5 and Cordova project file structures are meant to mimic a standard Cordova project. In a Cordova (or PhoneGap) project there is a subdirectory (or folder) named www that contains all of the HTML5 source code and asset files that make up your application. For best results, it is advised that you follow this convention, of putting your source inside a "source directory" inside of your project folder.

This most commonly happens as the result of exporting a project from an external tool, such as Construct2, or as the result of importing an existing HTML5 web app that you are converting into a hybrid mobile application (e.g., an Intel XDK Cordova app). If you would like to convert an existing Intel XDK project into this format, follow the steps below:

  • Exit the Intel XDK.
  • Copy the entire project directory:
    • on Windows, use File Explorer to "right-click" and "copy" your project directory, then "right-click" and "paste"
    • on Mac use Finder to "right-click" and then "duplicate" your project directory
    • on Linux, open a terminal window, "cd" to the folder that contains your project, and type "cp -a old-project/ new-project/" at the terminal prompt (where "old-project/" is the folder name of your existing project that you want to copy and "new-project/" is the name of the new folder that will contain a copy of your existing project)
  • Create a "www" directory inside the new duplicate project you just created above.
  • Move your index.html and other source and asset files to the "www" directory you just created -- this is now your "source" directory, located inside your "project" directory (do not move the <project-name>.xdk and xdke files and any intelxdk.config.*.xml files, those must stay in the root of the project directory)
  • Inside the new project that you made above (by making a copy of the old project), rename the <project-name>.xdk file and <project-name>.xdke files to something like project-copy.xdk and project-copy.xdke (anything you like, just something different than the original project, preferably the same name as the new project folder in which you are making this new project).
  • Using a TEXT EDITOR (only) (such as Notepad or Sublime or Brackets or some other TEXT editor), open the new "project-copy.xdk" file (whatever you named it) and find the line named projectGuid, it will look something like this:
    "projectGuid": "a863c382-ca05-4aa4-8601-375f9f209b67",
  • Change the "GUID" to all zeroes, like this: "00000000-0000-0000-0000-000000000000"
  • A few lines down find: "sourceDirectory": "",
  • Change it to this: "sourceDirectory": "www",
  • Save the modified "project-copy.xdk" file.
  • Open the Intel XDK.
  • Go to the Projects tab.
  • Select "Open an Intel XDK Project" (the green button at the bottom left of the Projects tab).
  • To open this new project, locate the new "project-copy.xdk" file inside the new project folder you copied above.

Can I install more than one copy of the Intel XDK onto my development system?

Yes, you can install more than one version onto your development system. However, you cannot run multiple instances of the Intel XDK at the same time. Be aware that new releases sometimes change the project file format, so it is a good idea, in these cases, to make a copy of your project if you need to experiment with a different version of the Intel XDK. See the instructions in a FAQ entry above regarding how to make a copy of your Intel XDK project.

Follow the instructions in this forum post to install more than one copy of the Intel XDK onto your development system.

On Apple OS X* and Linux* systems, does the Intel XDK need the OpenSSL* library installed?

Yes. Several features of the Intel XDK require the OpenSSL library, which typically comes pre-installed on Linux and OS X systems. If the Intel XDK reports that it could not find libssl, go to https://www.openssl.org to download and install it.

I have a web application that I would like to distribute in app stores without major modifications. Is this possible using the Intel XDK?

Yes, if you have a true web app or “client app” that only uses HTML, CSS and JavaScript, it is usually not too difficult to convert it to a Cordova hybrid application (this is what the Intel XDK builds when you create an HTML5 app). If you rely heavily on PHP or other server scripting languages embedded in your pages you will have more work to do. Because your Cordova app is not associated with a server, you cannot rely on server-based programming techniques; instead, you must rewrite any such code to use RESTful APIs that your app interacts with using, for example, AJAX calls.
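For example, a page that previously built a list server-side with PHP would instead request the data from a RESTful endpoint with an AJAX call. Here is a minimal sketch, assuming a hypothetical endpoint that returns a JSON array and a <ul id="list"> element in your page:

// Fetch data from a REST API instead of relying on server-side scripting.
var xhr = new XMLHttpRequest();
xhr.open('GET', 'https://api.example.com/items');   // hypothetical REST endpoint
xhr.onload = function () {
    if (xhr.status === 200) {
        var items = JSON.parse(xhr.responseText);   // assumes a JSON array like [{"name": "..."}]
        document.getElementById('list').innerHTML = items.map(function (item) {
            return '<li>' + item.name + '</li>';
        }).join('');
    }
};
xhr.send();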

What is the best training approach to using the Intel XDK for a newbie?

First, become well-versed in the art of client web apps, apps that rely only on HTML, CSS and JavaScript and utilize RESTful APIs to talk to network services. With that you will have mastered 80% of the problem. After that, it is simply a matter of understanding how Cordova plugins are able to extend the JavaScript API for access to features of the platform. For HTML5 training there are many sites providing tutorials. It may also help to read Five Useful Tips on Getting Started Building Cordova Mobile Apps with the Intel XDK, which will help you understand some of the differences between developing for a traditional server-based environment and developing for the Intel XDK hybrid Cordova app environment.

What is the best platform to start building an app with the Intel XDK? And what are the important differences between the Android, iOS and other mobile platforms?

There is no one most important difference between the Android, iOS and other platforms. It is important to understand that the HTML5 runtime engine that executes your app on each platform will vary as a function of the platform. Just as there are differences between Chrome and Firefox and Safari and Internet Explorer, there are differences between iOS 9 and iOS 8 and Android 4 and Android 5, etc. Android has the most significant differences between vendors and versions of Android. This is one of the reasons the Intel XDK offers the Crosswalk for Android build option: it normalizes and updates the webview your app runs in across Android devices and versions.

In general, if you can get your app working well on Android (or Crosswalk for Android) first you will generally have fewer issues to deal with when you start to work on the iOS and Windows platforms. In addition, the Android platform has the most flexible and useful debug options available, so it is the easiest platform to use for debugging and testing your app.

Is my password encrypted and why is it limited to fifteen characters?

Yes, your password is stored encrypted and is managed by https://signin.intel.com. Your Intel XDK userid and password can also be used to log into the Intel XDK forum as well as the Intel Developer Zone. The Intel XDK itself does not store or manage your userid and password.

The rules regarding allowed userids and passwords are answered on this Sign In FAQ page, where you can also find help on recovering and changing your password.

Why does the Intel XDK take a long time to start on Linux or Mac?

...and why am I getting this error message? "Attempt to contact authentication server is taking a long time. You can wait, or check your network connection and try again."

At startup, the Intel XDK attempts to automatically determine the proxy settings for your machine. Unfortunately, on some system configurations it is unable to reliably detect your system proxy settings. As an example, you might see something like this image when starting the Intel XDK.

On some systems you can get around this problem by setting some proxy environment variables and then starting the Intel XDK from a command line that has those environment variables configured. Set those environment variables similar to the following:

$ export no_proxy="localhost,127.0.0.1/8,::1"
$ export NO_PROXY="localhost,127.0.0.1/8,::1"
$ export http_proxy=http://proxy.mydomain.com:123/
$ export HTTP_PROXY=http://proxy.mydomain.com:123/
$ export https_proxy=http://proxy.mydomain.com:123/
$ export HTTPS_PROXY=http://proxy.mydomain.com:123/

IMPORTANT! The name of your proxy server and the port (or ports) that your proxy server requires will be different than those shown in the example above. Please consult with your IT department to find out what values are appropriate for your site. Intel has no way of knowing what configuration is appropriate for your network.

If you use the Intel XDK in multiple locations (at work and at home), you may have to change the proxy settings before starting the Intel XDK after switching to a new network location. For example, many work networks use a proxy server, but most home networks do not require such a configuration. In that case, you need to be sure to "unset" the proxy environment variables before starting the Intel XDK on a non-proxy network.

After you have successfully configured your proxy environment variables, you can start the Intel XDK manually, from the command-line.

On a Mac, where the Intel XDK is installed in the default location, type the following (from a terminal window that has the above environment variables set):

$ open /Applications/Intel\ XDK.app/

On a Linux machine, assuming the Intel XDK has been installed in the ~/intel/XDK directory, type the following (from a terminal window that has the above environment variables set):

$ ~/intel/XDK/xdk.sh &

In the Linux case, you will need to adjust the directory name that points to the xdk.sh file in order to start the Intel XDK. The example above assumes a local install into the ~/intel/XDK directory; since Linux installations have more options regarding the installation directory, adjust the path to suit your particular system and install directory.

How do I generate a P12 file on a Windows machine?

See these articles:

How do I change the default dir for creating new projects in the Intel XDK?

You can change the default new project location manually by modifying a field in the global-settings.xdk file. Locate the global-settings.xdk file on your system (the precise location varies as a function of the OS) and find this JSON object inside that file:

"projects-tab": {
    "defaultPath": "/Users/paul/Documents/XDK",
    "LastSortType": "descending|Name",
    "lastSortType": "descending|Opened",
    "thirdPartyDisclaimerAcked": true
},

The example above came from a Mac. On a Mac the global-settings.xdk file is located in the "~/Library/Application Support/XDK" directory.

On a Windows machine the global-settings.xdk file is normally found in the "%LocalAppData%\XDK" directory. The part you are looking for will look something like this:

"projects-tab": {
    "thirdPartyDisclaimerAcked": false,
    "LastSortType": "descending|Name",
    "lastSortType": "descending|Opened",
    "defaultPath": "C:\\Users\\paul/Documents"
},

Obviously, it's the defaultPath part you want to change.

BE CAREFUL WHEN YOU EDIT THE GLOBAL-SETTINGS.XDK FILE!! You've been warned...

Make sure the result is proper JSON when you are done, or it may cause your XDK to cough and hack loudly. Make a backup copy of global-settings.xdk before you start, just in case.
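For example, after the edit the Windows entry might look something like this (the defaultPath value shown is purely illustrative; use your own folder and keep the doubled backslashes):

"projects-tab": {
    "thirdPartyDisclaimerAcked": false,
    "LastSortType": "descending|Name",
    "lastSortType": "descending|Opened",
    "defaultPath": "D:\\MyProjects\\XDK"
},

Again, the path above is just an example; the rest of the object should be left exactly as you found it in your own file.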

Where can I find a list of recent and upcoming webinars?

How can I change the email address associated with my Intel XDK login?

Login to the Intel Developer Zone with your Intel XDK account userid and password and then locate your "account dashboard." Click the "pencil icon" next to your name to open the "Personal Profile" section of your account, where you can edit your "Name & Contact Info," including the email address associated with your account, under the "Private" section of your profile.

What network addresses must I enable in my firewall to ensure the Intel XDK will work on my restricted network?

Normally, access to the external servers that the Intel XDK uses is handled automatically by your proxy server. However, if you are working in an environment that has restricted Internet access and you need to provide your IT department with a list of URLs that you need access to in order to use the Intel XDK, then please provide them with the following list of domain names:

  • appcenter.html5tools-software.intel.com (for communication with the build servers)
  • s3.amazonaws.com (for downloading sample apps and built apps)
  • download.xdk.intel.com (for getting XDK updates)
  • debug-software.intel.com (for using the Test tab weinre debug feature)
  • xdk-feed-proxy.html5tools-software.intel.com (for receiving the tweets in the upper right corner of the XDK)
  • signin.intel.com (for logging into the XDK)
  • sfederation.intel.com (for logging into the XDK)

Normally this should be handled by your network proxy (if you're on a corporate network) or should not be an issue if you are working on a typical home network.

I cannot create a login for the Intel XDK, how do I create a userid and password to use the Intel XDK?

If you have downloaded and installed the Intel XDK but are having trouble creating a login, you can create the login outside the Intel XDK. To do this, go to the Intel Developer Zone and push the "Join Today" button. After you have created your Intel Developer Zone login you can return to the Intel XDK and use that userid and password to login to the Intel XDK. This same userid and password can also be used to login to the Intel XDK forum.

Installing the Intel XDK on Windows fails with a "Package signature verification failed." message.

If you receive a "Package signature verification failed" message (see image below) when installing the Intel XDK on your system, it is likely due to one of the following two reasons:

  • Your system does not have a properly installed "root certificate" file, which is needed to confirm that the install package is good.
  • The install package is corrupt and failed the verification step.

The first case can happen if you are attempting to install the Intel XDK on an unsupported version of Windows. The Intel XDK is only supported on Microsoft Windows 7 and higher. If you attempt to install on Windows Vista (or earlier) you may see this verification error. The workaround is to install the Intel XDK on a Windows 7 or greater machine.

The second case is likely due to a corruption of the install package during download or due to tampering. The workaround is to re-download the install package and attempt another install.

If you are installing on a Windows 7 (or greater) machine and you see this message it is likely due to a missing or bad root certificate on your system. To fix this you may need to start the "Certificate Propagation" service. Open the Windows "services.msc" panel and then start the "Certificate Propagation" service. Additional links related to this problem can be found here > https://technet.microsoft.com/en-us/library/cc754841.aspx

See this forum thread for additional help regarding this issue > https://software.intel.com/en-us/forums/intel-xdk/topic/603992

Troubles installing the Intel XDK on a Linux or Ubuntu system, which option should I choose?

Choose the local user option, not root or sudo, when installing the Intel XDK on your Linux or Ubuntu system. This is the most reliable and trouble-free option and is the default installation option. It ensures that the Intel XDK has all the permissions necessary to execute properly on your Linux system. The Intel XDK will be installed in a subdirectory of your home (~) directory.

Inactive account / login issue / problem updating an APK in the store: how do I request an account transfer?

As of June 26, 2015 we migrated all of Intel XDK accounts to the more secure intel.com login system (the same login system you use to access this forum).

We have migrated nearly all active users to the new login system. Unfortunately, there are a few active user accounts that we could not automatically migrate to intel.com, primarily because the intel.com login system does not allow the use of some characters in userids that were allowed in the old login system.

If you have not used the Intel XDK for a long time prior to June 2015, your account may not have been automatically migrated. If you own an "inactive" account it will have to be manually migrated -- please try logging into the Intel XDK with your old userid and password, to determine if it no longer works. If you find that you cannot login to your existing Intel XDK account, and still need access to your old account, please send a message to html5tools@intel.com and include your userid and the email address associated with that userid, so we can guide you through the steps required to reactivate your old account.

Alternatively, you can create a new Intel XDK account. If you have submitted an app to the Android store from your old account you will need access to that old account to retrieve the Android signing certificates in order to upgrade that app on the Android store; in that case, send an email to html5tools@intel.com with your old account username and email and new account information.

Connection Problems? -- Intel XDK SSL certificates update

On January 26, 2016 we updated the SSL certificates on our back-end systems to SHA2 certificates. The existing certificates were due to expire in February of 2016. We have also disabled support for obsolete protocols.

If you are experiencing persistent connection issues (since Jan 26, 2016), please post a problem report on the forum and include in your problem report:

  • the operation that failed
  • the version of your XDK
  • the version of your operating system
  • your geographic region
  • and a screen capture

How do I resolve build failure: "libpng error: Not a PNG file"?  

If you are experiencing build failures with CLI 5 Android builds, and the detailed error log includes a message similar to the following:

Execution failed for task ':mergeArmv7ReleaseResources'.
> Error: Failed to run command: /Developer/android-sdk-linux/build-tools/22.0.1/aapt s -i .../platforms/android/res/drawable-land-hdpi/screen.png -o .../platforms/android/build/intermediates/res/armv7/release/drawable-land-hdpi-v4/screen.png

Error Code: 42

Output: libpng error: Not a PNG file

then you need to change the format of your icon and/or splash screen images to PNG.

The error message refers to a file named "screen.png" -- which is what each of your splash screens is renamed to before it is moved into the build project resource directories. In this case, JPG images were supplied for use as splash screen images rather than PNG images, so the renamed files were found by the build system to be invalid.

Convert your splash screen images to PNG format. Renaming JPG images to PNG will not work! You must convert your JPG images into PNG format images using an appropriate image editing tool. The Intel XDK does not provide any such conversion tool.
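For example, if you have the ImageMagick command-line tools installed (mentioned only as one possible option; it is not part of the Intel XDK), a JPG can be converted into a real PNG like this:

$ convert screen.jpg screen.png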

Beginning with Cordova CLI 5, all icons and splash screen images must be supplied in PNG format. This applies to all supported platforms. This is an undocumented "new feature" of the Cordova CLI 5 build system that was implemented by the Apache Cordova project.

Why do I get a "Parse Error" when I try to install my built APK on my Android device?

Because you have built an "unsigned" Android APK. You must click the "signed" box in the Android Build Settings section of the Projects tab if you want to install an APK on your device. The only reason you would choose to create an "unsigned" APK is if you need to sign it manually. This is very rare and not the normal situation.

 

Back to FAQs Main


Using Vulkan graphics API to render a cloud of animated particles in Stardust Application


Download PDF [774 KB]

Download Code Sample

Vulkan™ API overview

Vulkan™ is the new generation, open standard API for high-efficiency access to graphics and compute on modern GPUs. This API is designed from the ground up to provide applications direct control over GPU acceleration for maximized performance and predictability. Vulkan is a unified specification that minimizes driver overhead and enables multithreaded GPU command preparation for optimal graphics and compute performance on diverse mobile, desktop, console, and embedded platforms.

The Vulkan API uses so-called command buffer objects to record and send work to the GPU. Recorded commands include commands to draw, commands to change GPU state, and so on. To record and execute command buffers, an application calls the appropriate Vulkan API functions.

Stardust demo overview

The Stardust sample application uses the Vulkan graphics API to efficiently render a cloud of animated particles. To highlight Vulkan’s low CPU overhead and multithreading capabilities, particles are rendered using 200,000 draw calls. The demo does not use instancing; each draw call uses a different region of a vertex buffer, so each draw could be a unique piece of geometry. Furthermore, all CPU cores are used for draw call preparation to ensure that graphics commands are generated and delivered to the GPU as fast as possible. The per-core CPU load graph is displayed in the upper-right corner.

GPU work generation and submission

Stardust sends 2,000,000 point primitives with 200,000 10-point draw calls to the GPU each frame. Please note that this is an exaggerated, extreme scenario that is intended only to demonstrate how thin and fast the Vulkan driver is. Despite so many draw calls, the CPU load is quite low as shown on the green graphs in the image above.

Intel’s Stardust demo records and submits several command buffers each frame. All those command buffers together include 200,000 draw calls and other GPU commands. Command buffers are recorded in parallel by N CPU worker threads, where N is equal to the number of logical CPU cores present on a machine. Each worker thread records one command buffer that includes a subset of all draw calls and other commands. After all worker threads complete, the main thread submits all of the command buffers with one API call. In that way CPU load is distributed evenly across all cores, and the GPU is fed with commands as fast as possible.


GPU work generation and submission on a four-core CPU.

Particle animation

The position of each particle is computed entirely on the GPU in the vertex shader stage. The following steps are performed to compute particle positions:

  1. A per-particle input “seed” value is used to compute the initial particle position.
  2. A “matrix seed” value is used to compute two sets of one rotation, one translation, and one scaling transformation matrix. All three matrices in each set are then concatenated and linearly time-interpolated to form one final transformation matrix.
  3. The initial particle position is multiplied by the computed matrix and then additional non-linear transformations are performed. This step is repeated a number of times. A 1D texture coordinate is also computed in this step.
  4. The particle position computed in the previous step is multiplied by the concatenated view and projection matrices.

Animation of the particle cloud is achieved by updating the “matrix seed” value periodically on the CPU.

Particle coloring

To achieve nice color effects, the particles are rasterized with additive blending enabled. Particle color is computed in the fragment shader by sampling and interpolating two “palette” textures. Interpolation between two color samples is based on the time to get a smooth color change.

Intel® XDK FAQs - Crosswalk


How do I play audio with different playback rates?

Here is a code snippet that allows you to specify playback rate:

var myAudio = new Audio('/path/to/audio.mp3');  // create an audio element from your source file
myAudio.playbackRate = 1.5;                     // 1.0 is normal speed; set the rate before playing
myAudio.play();

Why are Intel XDK Android Crosswalk build files so large?

When your app is built with Crosswalk it will be a minimum of 15-18MB in size because it includes a complete web browser (the Crosswalk runtime or webview) for rendering your app instead of the built-in webview on the device. Despite the additional size, this is the preferred solution for Android, because the built-in webviews on the majority of Android devices are inconsistent and poorly performing.

See these articles for more information:

Why is the size of my installed app much larger than the apk for a Crosswalk application?

This is because the APK is a compressed image; once installed, it occupies more space because it has been decompressed. Also, when your Crosswalk app starts running on your device it creates some data files for caching purposes, which increase the installed size of the application.

Why does my Android Crosswalk build fail with the com.google.playservices plugin?

The Intel XDK Crosswalk build system used with CLI 4.1.2 Crosswalk builds does not support the library project format that was introduced in the "com.google.playservices@21.0.0" plugin. Use "com.google.playservices@19.0.0" instead.

Why does my app fail to run on some devices?

There are some Android devices in which the GPU hardware/software subsystem does not work properly. This is typically due to poor design or improper validation by the manufacturer of that Android device. Your problem Android device probably falls under this category.

How do I stop "pull to refresh" from resetting and restarting my Crosswalk app?

See the code posted in this forum thread for a solution: /en-us/forums/topic/557191#comment-1827376.

An alternate solution is to add the following lines to your intelxdk.config.additions.xml file:

<!-- disable reset on vertical swipe down -->
<intelxdk:crosswalk xwalk-command-line="--disable-pull-to-refresh-effect" />

Which versions of Crosswalk are supported and why do you not support version X, Y or Z?

The specific versions of Crosswalk that are offered via the Intel XDK are based on what the Crosswalk project releases and the timing of those releases relative to Intel XDK build system updates. This is one of the reasons you do not see every version of Crosswalk supported by our Android-Crosswalk build system.

With the September 2015 release of the Intel XDK, the method used to build embedded Android-Crosswalk versions changed to the "pluggable" webview Cordova build system. This new build system was implemented with the help of the Cordova project and became available with their release of the Cordova Android 4.0 framework (coincident with their Cordova CLI 5 release). With this change to the Cordova Android framework and the Cordova CLI build system, we can now adapt more quickly to new releases of the Crosswalk project. Supporting previous Crosswalk releases required updating a special build system that was forked from the Cordova Android project; the "pluggable" webview approach lets us use the standard Cordova build system instead, because the Crosswalk library is now included as a "pluggable" component.

The "old" method of building Android-Crosswalk APKs relies on a "forked" version of the Cordova Android 3.6.3 framework and is used when you select CLI 4.1.2 in the Project tab's build settings page. Only Crosswalk versions 7, 10, 11, 12 and 14 are supported by the Intel XDK when using this build setting.

Selecting CLI 5.1.1 in the build settings generates a "pluggable" webview app built with the Cordova Android 4.1.0 framework. As of the latest update to this FAQ, the CLI 5.1.1 build system supports Crosswalk 15. Future releases of the Intel XDK and the build system will support higher versions of Crosswalk and the Cordova Android framework.

In both cases above, the net result (when performing an "embedded" build) is two processor architecture-specific APKs: one for use on x86 devices and one for use on ARM devices. The version codes of those APKs are modified to ensure that both can be uploaded to the Android store under the same app name, ensuring that the appropriate APK is automatically delivered to the matching device (i.e., the x86 APK is delivered to Intel-based Android devices and the ARM APK is delivered to ARM-based Android devices).

For more information regarding Crosswalk and the Intel XDK, please review these documents:

How do I prevent my Crosswalk app from auto-completing passwords?

Use the Ionic Keyboard plugin and set the spellcheck attribute to false.

How can I improve the performance of my Construct2 game build with Crosswalk?

Beginning with the Intel XDK CLI 5.1.1 build system you must add the --ignore-gpu-blacklist option to your intelxdk.config.additions.xml file if you want the additional performance this option provides to blacklisted devices. See this forum post for additional details.

If you are a Construct2 game developer, please read this blog by another Construct2 game developer regarding how to configure your game for proper Crosswalk performance > How to build optimized Intel XDK Crosswalk app properly?<

Also, you can experiment with the CrosswalkAnimatable option in your intelxdk.config.additions.xml file (details regarding the CrosswalkAnimatable option are available in this Crosswalk Project wiki post: Android SurfaceView vs TextureView).

<!-- Controls configuration of Crosswalk-Android "SurfaceView" or "TextureView" -->
<!-- Default is SurfaceView if >= CW15 and TextureView if <= CW14 -->
<!-- Option can only be used with Intel XDK CLI5+ build systems -->
<!-- SurfaceView is preferred, TextureView should only be used in special cases -->
<!-- Enable Crosswalk-Android TextureView by setting this option to true -->
<preference name="CrosswalkAnimatable" value="false" />

See Chromium Command-Line Options for Crosswalk Builds with the Intel XDK for some additional tools that can be used to modify the Crosswalk's webview runtime parameters, especially the --ignore-gpu-blacklist option.

Why does the Google store refuse to publish my Crosswalk app?

For full details, please read Android and Crosswalk Cordova Version Code Issues. For a summary, read this FAQ.

There is a change to the version code handling by the Crosswalk and Android build systems based on Cordova CLI 5.0 and later. This change was implemented by the Apache Cordova project. This new version of Cordova CLI automatically modifies the android:versionCode when building for Crosswalk and Android. Because our CLI 5.1.1 build system is now more compatible with standard Cordova CLI, this change results in a discrepancy in the way your android:versionCode is handled when building for Crosswalk (15) or Android with CLI 5.1.1 when compared to building with CLI 4.1.2.

If you have never published an app to an Android store this change will have little or no impact on you. This change might affect attempts to side-load an app onto a device, in which case the simplest solution is to uninstall the previously side-loaded app before installing the new app.

Here's what Cordova CLI 5.1.1 (Cordova-Android 4.x) is doing with the android:versionCode number (which you specify in the App Version Code field within the Build Settings section of the Projects tab):

Cordova-Android 4.x (Intel XDK CLI 5.1.1 for Crosswalk or Android builds) does this:

  • multiplies your android:versionCode by 10

then, if you are doing a Crosswalk (15) build:

  • adds 2 to the android:versionCode for ARM builds
  • adds 4 to the android:versionCode for x86 builds

otherwise, if you are performing a standard Android build (non-Crosswalk):

  • adds 0 to the android:versionCode if the Minimum Android API is < 14
  • adds 8 to the android:versionCode if the Minimum Android API is 14-19
  • adds 9 to the android:versionCode if the Minimum Android API is > 19 (i.e., >= 20)

If you HAVE PUBLISHED a Crosswalk app to an Android store this change may impact your ability to publish a newer version of your app! In that case, if you are building for Crosswalk, add 6000 (six with three zeroes) to your existing App Version Code field in the Crosswalk Build Settings section of the Projects tab. If you have only published standard Android apps in the past and are still publishing only standard Android apps you should not have to make any changes to the App Version Code field in the Android Builds Settings section of the Projects tab.

The workaround described above only applies to Crosswalk CLI 5.1.1 and later builds!

When you build a Crosswalk app with CLI 4.1.2 (which uses Cordova-Android 3.6) you will get the old Intel XDK behavior, where 60000 and 20000 (six with four zeroes and two with four zeroes) are added to the android:versionCode for Crosswalk builds and no change is made to the android:versionCode for standard Android builds. A short sketch of the CLI 5.1.1 arithmetic appears after the note below.

NOTE:

  • Android API 14 corresponds to Android 4.0
  • Android API 19 corresponds to Android 4.4
  • Android API 20 corresponds to Android 4.4W; Android API 21 corresponds to Android 5.0
  • CLI 5.1.1 (Cordova-Android 4.x) does not allow building for Android 2.x or Android 3.x
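
For clarity, here is a minimal sketch (in C, for illustration only; the function name is hypothetical) of the CLI 5.1.1 arithmetic described above:

#include <stdio.h>

/* Hypothetical helper that mirrors the CLI 5.1.1 (Cordova-Android 4.x)
   version code arithmetic described above: the App Version Code is
   multiplied by 10, then an architecture offset is added for Crosswalk
   builds or a minimum-API offset for standard Android builds. */
static int cli5_version_code(int app_version_code, int crosswalk, int x86, int min_api)
{
    int code = app_version_code * 10;
    if (crosswalk)
        code += x86 ? 4 : 2;      /* Crosswalk: +2 for ARM, +4 for x86 */
    else if (min_api > 19)
        code += 9;                /* standard Android, API >= 20 */
    else if (min_api >= 14)
        code += 8;                /* standard Android, API 14-19 */
    return code;                  /* standard Android, API < 14: +0 */
}

int main(void)
{
    /* App Version Code 36 entered in the Build Settings UI */
    printf("Crosswalk ARM:   %d\n", cli5_version_code(36, 1, 0, 14)); /* 362 */
    printf("Crosswalk x86:   %d\n", cli5_version_code(36, 1, 1, 14)); /* 364 */
    printf("Android, API 19: %d\n", cli5_version_code(36, 0, 0, 19)); /* 368 */
    return 0;
}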

Why is my Crosswalk app generating errno 12 Out of memory errors on some devices?

If you are using the WebGL 2D canvas APIs and your app crashes on some devices because you added the --ignore-gpu-blacklist flag to your intelxdk.config.additions.xml file, you may need to also add the --disable-accelerated-2d-canvas flag. Using the --ignore-gpu-blacklist flag enables the use of the GPU in some problem devices, but can then result in problems with some GPUs that are not blacklisted. The --disable-accelerated-2d-canvas flag allows those non-blacklisted devices to operate properly in the presence of WebGL 2D canvas APIs and the --ignore-gpu-blacklist flag.

You likely have this problem if your app crashes after running for a few seconds with an error like the following:

<gsl_ldd_control:364>: ioctl fd 46 code 0xc00c092f (IOCTL_KGSL_GPMEM_ALLOC) failed: errno 12 Out of memory <ioctl_kgsl_sharedmem_alloc:1176>: ioctl_kgsl_sharedmem_alloc: FATAL ERROR : (null).

See Chromium Command-Line Options for Crosswalk Builds with the Intel XDK for additional info regarding the --ignore-gpu-blacklist flag and other Chromium option flags.

Construct2 Tutorial: How to use AdMob and IAP plugins with Crosswalk and the Intel XDK.

See this tutorial on the Scirra tutorials site > How to use AdMob and IAP official plugins on Android-Crosswalk/XDK < written by Construct2 developer Kyatric.

Also, see this blog written by a Construct2 game developer regarding how to build a Construct2 app that uses the Appodeal ad plugin with the Intel XDK > How to fix the build error with Intel XDK and Appodeal? <.

What is the correct "Target Android API" value that I should use when building for Crosswalk on Android?

The "Target Android API" value (aka android-targetSdkVersion), found in the Build Settings section of the Projects tab, is the version of Android that your app and the libraries associated with your app are tested against. It DOES NOT represent the maximum level of Android onto which you can install and run your app. When building a Crosswalk app you should set this value to the value recommended by the Crosswalk project.

The recommended "Target Android API" levels for Crosswalk on Android apps are:

  • 18 for Crosswalk 1 thru Crosswalk 4
  • 19 for Crosswalk 5 thru Crosswalk 10
  • 21 for Crosswalk 11 thru Crosswalk 18

As of release 3088 of the Intel XDK, the recommended value for your android-targetSdkVersion is 21. In previous versions of the Intel XDK the recommended value was 19. If you have it set to a higher number (such as 23), we recommend that you change your setting to 21.

Can I build my app with a version of Crosswalk that is not listed in the Intel XDK Build Settings UI?

As of release 3088 of the Intel XDK, it is possible to build your Crosswalk for Android app using versions of the Crosswalk library that are not listed in the Project tab's Build Settings section. You can override the value that is selected in the Build Settings UI by adding a line to the intelxdk.config.additions.xml file.

NOTE: The process described below is for experts only! By using this process you are effectively disabling the Crosswalk version that is selected in the Build Settings UI and you are overriding the version of Crosswalk that will be used when you build a custom debug module with the Debug tab.

When building a Crosswalk for Android application with CLI 5.x and higher, the Cordova Crosswalk Webview Plugin is used to add the Crosswalk webview library to the build package (the APK). That plugin effectively "includes" the specified Crosswalk library when the app is built. The version of the Crosswalk library selected in the Build Settings UI is reflected by a line in the Android build config file, similar to the following:

<intelxdk:crosswalk version="16"/>

The line above is added automatically to the intelxdk.config.android.xml file by the Intel XDK. If you attempt to change lines in the Android build config file they will be overwritten by the Intel XDK each time you use the Build tab (perform a build) or the Test tab. In order to modify (or override) this line in the Android config file you need to add a line to the intelxdk.config.additions.xml file.

The precise line you include in the intelxdk.config.additions.xml file depends on the version of the Crosswalk library you want to include. 

<!-- Set the Crosswalk embedded library to something other than those listed in the UI. -->
<!-- In practice use only one; multiple examples are shown for illustration. -->
<preference name="xwalkVersion" value="17+"/>
<preference name="xwalkVersion" value="14.43.343.24" />
<preference name="xwalkVersion" value="org.xwalk:xwalk_core_library_beta:18+"/>

The first example line in the code snippet above asks the Intel XDK to build with the "last" or "latest" version of the Crosswalk 17 release library (the '+' character means "last available" for the specified version). The second example requests an explicit version of Crosswalk 14 when building the app (e.g., version 14.43.343.24). The third example shows how to request the "latest" version of Crosswalk 18 from the Crosswalk beta Maven repository.

NOTE: only one such "xwalkVersion" preference tag should be used. If you include more than one "xwalkVersion" only the last one specified in the intelxdk.config.additions.xml file will be used.

The specific versions of Crosswalk that you can use can be determined by reviewing the Crosswalk Maven repositories: one for released Crosswalk libraries and one for beta versions of the Crosswalk library.

Not all Crosswalk libraries are guaranteed to work with your built app, especially the beta versions of the Crosswalk library. There may be library dependencies on the specific version of the Cordova Crosswalk Webview Plugin or the Cordova-Android framework. If a library does not work, select a different version.

Detailed instructions on the preference tag being used here are available in the Crosswalk Webview Plugin README.md documentation.

If you are curious when a specific version of Chromium will be supported by Crosswalk, please see the Crosswalk Release Dates wiki published by the Crosswalk Project.


What's New? OpenCL™ Runtime 16.1 (CPU only)


The 16.1 release includes:

  • Support for Intel® Core™ 6th generation and Xeon® v4 processors (former Intel microarchitecture codename Broadwell)
  • Support for OpenCL™ 2.0 specification
  • Improved cross-CPU support of pre-compiled kernel binaries in the Runtime:
    • Enables loading pre-generated kernel binaries, which saves OpenCL program build time (a minimal loading sketch follows this list). For more information, see https://software.intel.com/en-us/node/540584
    • Enables generating a JIT binary for a target CPU model with the Intel® SDK for OpenCL™ - Offline Compiler. For more information, see https://software.intel.com/en-us/node/539388
  • Bug and memory leak fixes.
  • Compiler infrastructure was updated to LLVM version 3.6.2
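
Below is a minimal sketch of how an application might load such a pre-generated binary, assuming a context and device have already been created and that the binary bytes were previously saved using clGetProgramInfo with CL_PROGRAM_BINARY_SIZES and CL_PROGRAM_BINARIES; the helper name is illustrative and error handling is omitted:

#include <CL/cl.h>

/* Minimal sketch: build a program from a pre-generated kernel binary
   instead of compiling from source. Assumes ctx and dev were created
   earlier and that binary/binary_size were previously obtained with
   clGetProgramInfo(CL_PROGRAM_BINARY_SIZES / CL_PROGRAM_BINARIES). */
cl_program load_prebuilt_program(cl_context ctx, cl_device_id dev,
                                 const unsigned char *binary, size_t binary_size)
{
    cl_int binary_status, err;
    cl_program prog = clCreateProgramWithBinary(ctx, 1, &dev, &binary_size,
                                                &binary, &binary_status, &err);
    /* Still required, but much cheaper than a full source compilation. */
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    return prog;
}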

Improve the Security of Android* Applications using Hooking Techniques: Part 2


Download PDF [PDF 1.2 MB]

Study of the PIC Code in libtest_PIC.so

If the object is compiled in PIC mode, relocation is implemented differently. As the section information of libtest_PIC.so shown in Figure 17 indicates, the printf() relocation information is located in two relocation sections: .rel.dyn and .rel.plt. Two new relocation types, R_386_GLOB_DAT and R_386_JMP_SLOT, are used, and the absolute 32-bit address of the substituted function should be written at these offset addresses.

Figure 17: Relocation section of libtest_PIC.so

Figure 18 shows the assembly code of the function libtest2(), which is compiled in PIC mode. The entry addresses of printf(), marked in red, are specified in the relocation sections .rel.dyn and .rel.plt shown in Figure 17.

Figure 18: Disassembled code of libtest2(), compiled with the -PIC parameter

Figure 19: Working flow of 'printf("libtest2: 1st call to the original printf()\n");'

Figure 20: Working flow of 'global_printf2("libtest2: global_printf2()\n");'

Figure 21: Working flow of 'local_printf("libtest2: local_printf()\n");'

From Figures 19-21, it can be seen that, when working with the dynamic library generated with the -PIC parameter, the code in libtest2() jumps to the addresses stored at offset addresses 0x1fe0, 0x2010, and 0x2000, which are the entrances to printf().

Hook Solution

If the hook module wants to intercept the calls to printf() and redirect them to another function, it should write the redirected function's address to the offset addresses of the symbol ‘printf’ defined in the relocation sections, after the linker has loaded the dynamic library into memory.

To replace the call of the printf() function with a call to the redirected hooked_printf() function, as shown in the software flow diagram in Figure 22, a hook function should be implemented between the dlopen() and libtest() calls. The hook function first gets the offset address of the symbol printf (0x1fe0) from the relocation section named .rel.dyn. The hook function then writes the absolute address of the hooked_printf() function to that offset address. After that, when the code in libtest2() calls printf(), it will enter hooked_printf() instead.
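
The following is a simplified sketch of that final patching step only, assuming the library's load address and the symbol's relocation offset (0x1fe0 in this example) have already been obtained by parsing the .rel.dyn/.rel.plt sections; the function name is illustrative and error handling is minimal:

#include <stdint.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

/* Simplified sketch of the final patching step only: write the address of
   the replacement function into the relocation entry of the target symbol.
   'base' is the load address of the shared library and 'reloc_offset' is
   the offset found in .rel.dyn or .rel.plt (0x1fe0 for printf above). */
static int patch_reloc_entry(uintptr_t base, uintptr_t reloc_offset, void *replacement)
{
    uintptr_t entry = base + reloc_offset;
    uintptr_t page  = entry & ~((uintptr_t)getpagesize() - 1);

    /* The target page may be mapped read-only, so make it writable first. */
    if (mprotect((void *)page, getpagesize(), PROT_READ | PROT_WRITE) != 0)
        return -1;

    memcpy((void *)entry, &replacement, sizeof(replacement));
    return 0;
}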

Figure 22: Example of how the hook function intercepts the call to printf() and reroutes the call to hooked_printf(). The original function calling process is described in Figure 21.

To cover all the possible cases listed previously, the complete flow chart of the hook function is shown in Figure 23, and the changed part of the main() function is depicted in Figure 24.

Figure 23: The flow chart of the ELF hook module

Figure 24: Code in main() after hooking

The output of the program is shown in Figure 25. You can see that when libtest1()/libtest2() are called the first time, printf() is called inside those functions. When the two functions are called again, after the hook functions have executed, the calls to printf() are redirected to the hooked_printf() function, which appends the string “is HOOKED” to the end of the normal printed string. Figure 26 shows the program running flow after hooking; compared with the original flow shown in Figure 8, hooked_printf() has been injected into libtest1() and libtest2().

Figure 25: Output of the test program; printf() has been hooked

Figure 26: The running flow of the test project after hooking

Case Study – a Hook-Based Protection Scheme in Android

Based on the studies of the hooking technique in the previous sections, we developed a plug-in to help Android application developers improve the security of their applications. Developers need to add only one Android native library to their projects and add one line of Java code to load this native library at start-up time. This library then injects protection code into the other third-party libraries in the application. The protection code helps encrypt the local file input/output streams, and it bypasses the function __android_log_print() to prevent leaking user privacy by printing debugging information through Logcat.

To verify the effectiveness of the protection plug-in, we wrote an Android application to simulate an application that contains a third-party library. In the test application, the third-party library does two things:

  1. When an external Java instruction calls the functions in the library, it will print some information by calling __android_log_print().
  2. In the library, the code creates a file (/sdcard/data.dat) to save data in local storage without encryption, then reads it back and prints it on the screen. This action is to simulate the application trying to save some sensitive information in the local file system.

Figures 27-30 compare the screenshots of the test program, the output of Logcat, and the content of the saved file in the device’s local file system before and after hooking.

Figure 27: The Android* platform is Teclast X89HD, Android 4.2.2

Figure 28: App output - no change after hooking

Figure 29: Logcat output - empty after hooking

Figure 30: Local file ‘data.dat’ at /sdcard has been encrypted after hooking

As the figures show, the running flow of the program after hooking is the same as the one without hooking. However, the Logcat cannot catch any output from the native library after hooking. Further, the content of the local file is no longer stored in a plain text format.

The plug-in helps the test application improve security against malicious attacks that collect information via Logcat, as well as offline attacks on the local file system.

Conclusion

The hooking technique can be used in many development scenarios, providing seamless security protection to Android applications. Hook-based protection schemes can not only be used on Android, but can also be extended to other operating systems such as Windows*, embedded Linux, or operating systems designed for Internet of Things (IoT) devices. This approach can significantly reduce the development cycle as well as maintenance costs. Developers can develop their own hook-based security scheme or use the professional third-party security solutions available on the market.

References

Redirecting functions in shared ELF libraries
Apriorit Inc, Anthony Shoumikhin, 25 Jul 2013
http://www.codeproject.com/Articles/70302/Redirecting-functions-in-shared-ELF-libraries

x86 API Hooking Demystified
Jurriaan Bremer
http://jbremer.org/x86-api-hooking-demystified/

Android developer guide
http://developer.android.com/index.html

Android Open Source Project
https://source.android.com/

About the Author

Jianjun Gu is a senior application engineer in the Intel Software and Solutions Group (SSG), Developer Relations Division, Mobile Enterprise Enabling team. He focuses on the security and manageability of enterprise applications.

Process and Thread Affinity for Intel® Xeon Phi™ Processors x200


The Intel® MPI Library and OpenMP* runtime libraries can create affinities between processes or threads, and hardware resources. This affinity keeps an MPI process or OpenMP thread from migrating to a different hardware resource, which can have a dramatic effect on the execution speed of a program.

Hardware Threading

The Intel® Xeon Phi™ processor x200 (code-named Knights Landing) supports up to four hardware thread contexts per core. Two cores sharing a single level 2 cache comprise one tile, as in Figure 1.


Figure 1: An Intel® Xeon Phi™ processor x200 tile has two cores, four vector processing units, a 1 MB cache shared by the two cores, and a cache home agent.

Additional hardware threads help hide latencies. While one hardware thread is stalled, another can be scheduled on the core. The optimal number of hardware threads an application uses per core or per tile depends on the application; some applications may benefit from executing only one thread per tile. For all examples in this paper, the Intel Xeon Phi processor is assumed to have 34 tiles (68 cores).

OpenMP Thread Affinity

OpenMP separates allocating hardware resources from pinning threads to the hardware resources.

Intel compilers support both OpenMP 4 affinity settings (as of version 13.0) and the Intel OpenMP runtime extensions. The following settings are used to allocate hardware resources and pin OpenMP threads to hardware resources.

 

  • Allocate hardware threads: OMP_PLACES (OpenMP* 4 affinity) or KMP_PLACE_THREADS (Intel OpenMP runtime extensions)
  • Pin OpenMP threads to hardware threads: OMP_PROC_BIND (OpenMP* 4 affinity) or KMP_AFFINITY (Intel OpenMP runtime extensions)

Thread Affinity Using Intel OpenMP Runtime Extensions

KMP_PLACE_THREADS controls allocation of hardware resources. An OpenMP application may be assigned a number of cores and a number of threads per core. The letter C indicates cores, and T indicates threads. For example, 68c,4t specifies four threads per core on 68 cores, and 34c,2t specifies two threads per core on 34 cores.

KMP_AFFINITY controls how OpenMP threads are bound to resources. Common choices are COMPACT, SCATTER, and BALANCED. The granularity can be set to CORE or THREAD. The affinity choices are illustrated in Figure 2, Figure 3, and Figure 4.


Figure 2: KMP_AFFINITY=compact


Figure 3: KMP_AFFINITY=balanced


Figure 4: KMP_AFFINITY=scatter

A full explanation of KMP_PLACE_THREADS and KMP_AFFINITY is available in the Thread Affinity Interface section of the Intel compiler documentation. For Intel Xeon Phi processors x200, the Intel compilers default to KMP_AFFINITY=compact. For Intel® Xeon® processors, the default setting is KMP_AFFINITY=none.

The following examples demonstrate how to pin OpenMP threads to a specific number of threads per tile or core on Intel Xeon Phi processors x200 using Intel OpenMP runtime extensions on a Linux* system.

The default affinity KMP_AFFINITY=compact is assumed.

1 thread per tile

KMP_AFFINITY=proclist=[0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66],explicit

1 thread per core

KMP_PLACE_THREADS=1T

2 threads per core

KMP_PLACE_THREADS=2T

3 threads per core

KMP_PLACE_THREADS=3T

4 threads per core

Default, no additional setting needed

Tip: Use the KMP_AFFINITY VERBOSE modifier to see how threads are mapped to OS processors. This modifier also shows how the OS processors map to physical cores.
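
Another way to check the mapping is a small test program; the following sketch (assuming a Linux system, since it uses sched_getcpu()) prints where each OpenMP thread runs under the settings above:

#define _GNU_SOURCE
#include <omp.h>
#include <sched.h>
#include <stdio.h>

/* Small check program: run it under the KMP_* settings above and confirm
   that each OpenMP thread lands on the expected OS processor. */
int main(void)
{
    #pragma omp parallel
    {
        #pragma omp critical
        printf("thread %2d of %2d on OS processor %3d\n",
               omp_get_thread_num(), omp_get_num_threads(), sched_getcpu());
    }
    return 0;
}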

 

The same settings work when undersubscribing the cores.

1 thread per tile

KMP_AFFINITY=proclist=[0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66],explicit OMP_NUM_THREADS=4

1 thread per core

KMP_PLACE_THREADS=1T OMP_NUM_THREADS=8

2 threads per core

KMP_PLACE_THREADS=2T OMP_NUM_THREADS=16

3 threads per core

KMP_PLACE_THREADS=3T OMP_NUM_THREADS=24

4 threads per core

OMP_NUM_THREADS=32

Thread Affinity Using OpenMP 4 Affinity

Version 4 of the OpenMP standard introduced affinity settings controlled by OMP_PLACES and OMP_PROC_BIND environment variables. OMP_PLACES specifies hardware resources. The value can be either an abstract name describing a list of places or (uncommonly) an explicit list of places. Choices are CORES or THREADS. OMP_PROC_BIND controls how OpenMP threads are bound to resources. Common values for OMP_PROC_BIND include CLOSE and SPREAD.

The following examples show how to run an OpenMP threaded application using one to four hardware threads per core using OpenMP 4 affinity.

1 thread per tile

OMP_PROC_BIND=spread OMP_PLACES=threads OMP_NUM_THREADS=34

1 thread per core

OMP_PROC_BIND=spread OMP_PLACES=threads OMP_NUM_THREADS=68

2 threads per core

OMP_PROC_BIND=spread OMP_PLACES=threads OMP_NUM_THREADS=136

3 threads per core

OMP_PROC_BIND=spread OMP_PLACES=threads OMP_NUM_THREADS=204

4 threads per core

OMP_PROC_BIND=close OMP_PLACES=threads

 

These examples show how to undersubscribe the cores using OpenMP 4 affinity.

1 thread per tile

OMP_PROC_BIND=spread OMP_PLACES="threads(32)" OMP_NUM_THREADS=4

1 thread per core

OMP_PROC_BIND=spread OMP_PLACES="threads(32)" OMP_NUM_THREADS=8

2 threads per core

OMP_PROC_BIND=close OMP_PLACES="cores(8)" OMP_NUM_THREADS=16

3 threads per core

OMP_PROC_BIND=close OMP_PLACES="cores(8)" OMP_NUM_THREADS=24

4 threads per core

OMP_PROC_BIND=close OMP_PLACES=threads OMP_NUM_THREADS=32

Nested Thread Affinity Using OpenMP 4 Affinity Settings

When an application has more than one level of OpenMP threading, additional values are specified for OMP_PROC_BIND and OMP_NUM_THREADS. The following example executes nested threads using one hardware thread per core; for additional hardware threads, increase the second value of OMP_NUM_THREADS to 4, 6, or 8. A minimal nested code sketch follows the settings below.

OMP_NESTED=1

OMP_MAX_ACTIVE_LEVELS=2

KMP_HOT_TEAMS=1

KMP_HOT_TEAMS_MAX_LEVEL=2

OMP_NUM_THREADS=34,2

OMP_PROC_BIND=spread,spread

OMP_PLACES=cores
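
The following minimal sketch shows the two-level structure these settings drive; it simply reports the outer and inner thread numbers and assumes the environment variables above are set:

#include <omp.h>
#include <stdio.h>

/* Sketch of a two-level nested region intended to run under the settings
   above (OMP_NUM_THREADS=34,2  OMP_PROC_BIND=spread,spread  OMP_PLACES=cores):
   34 outer threads, each spawning an inner team of 2 threads. */
int main(void)
{
    #pragma omp parallel            /* outer level */
    {
        int outer = omp_get_thread_num();
        #pragma omp parallel        /* inner level */
        {
            #pragma omp critical
            printf("outer %2d / inner %d\n", outer, omp_get_thread_num());
        }
    }
    return 0;
}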

MPI Library Affinity

MPI library affinity is controlled by environment variable I_MPI_PIN_PROCESSOR_LIST. A list may be an explicit list of logical processors or a processor set defined by keywords. Common keywords include ALL, ALLCORES, GRAIN, and SHIFT.

  • ALL specifies all logical processors, including the hardware threads.
  • ALLCORES specifies the physical cores.
  • GRAIN specifies the pinning granularity.
  • SHIFT specifies the granularity of the round-robin scheduling in GRAIN units.

The following are examples of how to run an MPI executable with one rank per tile, and one, two, or four ranks per core.

1 rank per tile

mpirun -perhost 34 -env I_MPI_PIN_PROCESSOR_LIST all:shift=cache2

1 rank per core

mpirun -perhost 68 -env I_MPI_PIN_PROCESSOR_LIST allcores

2 ranks per core

mpirun -perhost 136 -env I_MPI_PIN_PROCESSOR_LIST all:grain=2,shift=2

4 ranks per core

mpirun -perhost 272 -env I_MPI_PIN_PROCESSOR_LIST all

Tips:

  • Set I_MPI_DEBUG to 4 or higher to see how ranks are mapped to OS processors.
  • Intel MPI cpuinfo utility shows how the OS processors map to physical caches.

Intel® MPI Library Interoperability with OpenMP

Intel® MPI and OpenMP affinity settings may be combined for hybrid execution. When using all cores, specifying one to four hardware threads per core is straightforward in the following examples, using either OpenMP runtime extensions or OpenMP 4 affinity.

Intel MPI/OpenMP affinity examples using Intel OpenMP runtime extensions, for Intel Xeon Phi processors x200:

1 thread per core

mpirun -env KMP_PLACE_THREADS 1T

2 threads per core

mpirun -env KMP_PLACE_THREADS 2T

3 threads per core

mpirun -env KMP_PLACE_THREADS 3T

4 threads per core

Default, no extra settings needed

 

Intel MPI/OpenMP affinity examples using OpenMP 4 affinity, for Intel Xeon Phi processors x200:

1 thread per tile

mpirun -env OMP_PROC_BIND spread -env OMP_PLACES threads -env OMP_NUM_THREADS 8

1 thread per core

mpirun -env OMP_PROC_BIND spread -env OMP_PLACES threads -env OMP_NUM_THREADS 17

2 threads per core

mpirun -env OMP_PROC_BIND spread -env OMP_PLACES threads -env OMP_NUM_THREADS 34

4 threads per core

mpirun -env OMP_PROC_BIND close -env OMP_PLACES threads

 

Intel MPI also provides an environment variable, I_MPI_PIN_DOMAIN, for use with executables launching both MPI ranks and OpenMP threads. The variable is used to define a number of non-overlapping subsets of logical processors, binding one MPI rank to each of these domains. An explicit domain binding is especially useful for undersubscribing the cores. The following examples run a hybrid MPI/OpenMP executable on fewer than the 68 cores of the Intel Xeon Phi processor x200.

1 thread per core on 2 quadrants

mpirun -perhost 2 -env I_MPI_PIN_DOMAIN 68 -env KMP_PLACE_THREADS 1T

12 ranks, 1 rank per tile, 2 threads per core

mpirun -perhost 12 -env I_MPI_PIN_DOMAIN 8 -env KMP_PLACE_THREADS 2T

Tip: When I_MPI_PIN_DOMAIN is set, I_MPI_PIN_PROCESSOR_LIST is ignored.
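
A small hybrid check program can confirm the resulting placement; the sketch below (assuming Linux and an MPI compiler wrapper with OpenMP enabled) prints the OS processor used by each rank and thread:

#define _GNU_SOURCE
#include <mpi.h>
#include <omp.h>
#include <sched.h>
#include <stdio.h>

/* Hybrid placement check: launch with one of the mpirun commands above and
   verify where each MPI rank's OpenMP threads land. */
int main(int argc, char **argv)
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    #pragma omp parallel
    {
        #pragma omp critical
        printf("rank %3d  thread %2d  OS processor %3d\n",
               rank, omp_get_thread_num(), sched_getcpu());
    }

    MPI_Finalize();
    return 0;
}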

Future

The Intel MPI library and Intel OpenMP runtime extensions will be extended in 2016 and 2017 to simplify placing processes and threads on Intel Xeon Phi processor x200 NUMA domains.

Conclusion

The Intel MPI Library and OpenMP runtimes provide mechanisms to bind MPI ranks and OpenMP threads to specific processors. Our examples showed how to experiment with different core and hardware thread configurations on Intel® Xeon Phi™ processor x200 (code-named Knights Landing). Following the examples, we can discover whether an application performs best using from one to four hardware threads per core, and we can look for optimal combinations of MPI ranks and OpenMP threads.

More Information

Intel® Fortran Compiler User and Reference Guide, Thread Affinity Interface, https://software.intel.com/en-us/compiler_15.0_ug_f

Intel® C++ Compiler User and Reference Guide, Thread Affinity Interface, https://software.intel.com/en-us/compiler_15.0_ug_c

OpenMP* 4.0 Complete Specifications, http://openmp.org

Intel® MPI Library Developer Reference for Linux* OS, Process Pinning, https://software.intel.com/en-us/intel-mpi-library/documentation

Intel® MPI Library Developer Reference for Linux* OS, Interoperability with OpenMP* API, https://software.intel.com/en-us/mpi-refman-lin-html

Using Nested Parallelism In OpenMP, https://software.intel.com/en-us/videos/using-nested-parallelism-in-openmp

Beginning Hybrid MPI/OpenMP Development, https://software.intel.com/en-us/articles/beginning-hybrid-mpiopenmp-development

Intel® Parallel Computing Center at the University of California


Principal Investigators:

Dr. Laura Carrington is an expert in high performance computing. Her work has resulted in many publications in HPC application performance modeling, analysis of accelerators (e.g., FPGAs, GPUs, CPUs) for scientific workloads, tools in performance analysis (e.g., processor, memory, and network simulators), benchmarking, workload analysis, and energy-efficient computing. At UCSD, she is the director of the Performance, Modeling, and Characterization (PMaC) Lab. She is in charge of the energy-efficiency thrust for the DoE SciDAC-3 Institute for Sustained Performance, Energy, and Resilience (SUPER), as well as PI on a number of other awards that support the lab.

Description:

In the converging world of HPC and Big Data, and with the emergence of new storage technology, data movement is becoming a critical aspect of performance and energy efficiency.  In addition, Intel’s upcoming technologies, like 3D XPoint® technology and the Intel® Xeon Phi™ coprocessor (Knights Landing), introduce complex memory sub-systems with new degrees of configuration and heterogeneity. Leveraging the different types of memory available efficiently is an optimization process that must be tackled both during the system design and in the application development. The Intel Parallel Computing Center (Intel® PCC) at PMaC lab, will focus on developing a framework to enable optimal data placement for large scale HPC and Big Data applications on systems with complex memory sub-systems based primarily on 3D XPoint memory, though applicable to the Intel® Xeon Phi coprocessor. The framework’s main component is the Advanced DAta Movement Analysis Toolkit (ADAMANT)1, which automates capturing data movement metrics for each data structure/data object allocated within a large HPC/Big Data application. The metrics include simulated data movement metrics for each data object, and for multiple memory configurations and data layouts. These metrics capture the data movement and potential impact of re-configuring the memory sub-system and/or changing the data structure’s placement within the existing configuration. The modeling component of the framework includes performance models that produce predictions based on the data structures layout in memory, and on the simulated data movement metrics for a series of configurations and data layouts. With these models, the performance optimization problem is reduced to a search for the best configuration, and the framework can inform the user of the optimal data object layout for that configuration. This insight into the performance and the behavior of an application enables users to devise optimal data layouts even on complex memory sub-systems.

The Intel® PCC at PMaC Lab will continue to harden the framework built upon ADAMANT and the features to automate the data layout and configurations optimization strategies for 3D XPoint memory. In addition, we will continue to apply ADAMANT to a series of scientific applications of interest to SDSC and the research community. Finally, we will develop adaptors that enable the interoperability of the framework with Intel performance analysis tools.

1http://www.sdsc.edu/pmac/tools/adamant.html
3D XPoint is a trademark of Intel Corporation or its subsidiaries in the U.S. and/or other countries.

Related websites:

PMaC Lab Home page: http://www.sdsc.edu/pmac/
Laura Carrington, Ph.D. home page: http://users.sdsc.edu/~lcarring/
ADAMANT: http://www.sdsc.edu/pmac/tools/adamant.html

Sharp's New Innovative Security Camera is built with Intel® Architecture & Media Software

Sharp security camera at NAB
Sharp previews its QG-B20C camera used for digital security surveillance at the Intel booth at NAB Show, April 18-21. The camera runs on an Intel® Celeron® processor (N3160) and utilizes the Intel® Media SDK for hardware-accelerated encoding.

With digital surveillance and security concerns now an everyday part of life, SHARP has unveiled a new omnidirectional, wireless, intelligent, digital security surveillance camera to better meet these needs. Built with an Intel® Celeron® processor (N3160) and SHARP 12 megapixel image sensors, and utilizing the Intel® Media SDK for hardware-accelerated encoding, the QG-B20C camera can capture video in 4Kx3K resolution, provide all-around views, and is armed with intelligent automatic detection functions.

As demand for digital security and surveillance solutions and the Internet of Things (IoT) grows, top camera systems used in this space incorporate multi-channel streaming, real-time software-based analytics, event-triggered alerts, and more. Sharp combined its expertise in security products and camera sensors with the powerful, efficient Intel Celeron processor and Intel® Quick Sync Video-enabled graphics processors to create its new video surveillance QG-B20C camera, which delivers fast video capture, encoding, and smart analytics.

“This is an excellent example of Intel visual compute technologies in use that help advance the digital security industry for smarter cities and safer communities. Intel’s hardware and software building blocks coupled with Sharp’s innovative camera represent one of the endless opportunities that both enterprise and consumers will experience with high-definition video content,” says Jeff McVeigh, Intel Software and Services Group vice president and Visual Computing Products general manager.

The SHARP QG-B20C camera’s advanced functions include all-around video capture and H.264/AVC encoding and streaming in 4Kx3K (4,064 x 3,048) resolution at 15 FPS. With Intel architecture inside, the QG-B20C camera can perform various computer vision analytics, such as:

  • Intrusion detection (in 8 distinct areas)
  • Objects left behind/removed
  • Line crossing/loitering (15 distinct detection areas)
  • Queue length/crowd detection (people counting, congestion, etc.)
  • Parking space/lot monitoring (100 distinct areas)
  • And more

The camera can trigger video recordings and provide electronic notifications when certain actions are detected, and detection can be enabled or disabled per time of day or day of the week (see Figure 1). Intel architecture helps the QG-B20C camera perform these computationally intensive applications “at the edge” to decrease alert and response latency times, reduce network bandwidth and video storage costs, and enhance overall system performance. Sharp also has a new wireless access point product, the QX-C300, powered by Intel® PUMA™ 6.

Figure 1: Sharp's QG-B20C camera's image analysis capabilities for detection and alerts when an object is left behind or removed.

 

“We launched the new business of IoT, Smart Network Business, with a new type of wireless access point, QX-C300. At the same time, SHARP is providing not only devices, but also solution services together as the Smart Network Business. QG-B20C is the second product for this business. Using Intel architecture, media accelerators, and media software helps us bring these new innovative products to market faster and with stronger competitive capabilities,” says Shingo Fueta, Sharp Smart Network Business Promotion Department manager.

The Sharp QG-B20C camera was just previewed this month at NAB Show, the world’s largest media and broadcasting industry event. Learn more at the Sharp site.


Strata+Hadoop World Presentations from the Intel Booth


If you missed any of Intel’s Big Data Analytics sessions from Strata + Hadoop World, San Jose, CA (March 29-31, 2016), or would like to view them again, view them here.

Chuck Freedman, Chief Developer Advocate at Intel, talks about the benefits of the Trusted Analytics Platform and how to get started.

 

Open Source Project: Intel Data Analytics Acceleration Library (DAAL)


We have created a data analytics acceleration project on GitHub to help accelerate data analytics applications. To create this project, we have open sourced the Intel® Data Analytics Acceleration Library (Intel® DAAL), a high performance analytics (for "Big Data") library for x86 and x86-64.

Intel DAAL helps accelerate big data analytics by providing highly optimized algorithmic building blocks for all data analysis stages (preprocessing, transformation, analysis, modeling, validation, and decision making) for batch, online and distributed processing modes of computation. It’s designed for use with popular data platforms including Hadoop*, Spark*, R, and Matlab* for highly efficient data access. Intel DAAL is available for Linux*, OS X* and Windows* and is licensed with the Apache 2.0 license. The DAAL project is available on github for download, feedback and contributions.

Intel DAAL has benefited from customer feedback since its initial release in 2015. Following a year of intense feedback and additional development as a full product, we are excited to introduce it as a very solid open source project ready for use and participation. Intel DAAL remains an integral part of Intel's software developer tools and is backed by Intel with support and future development investments.


ACCELERATE DATA ANALYTICS

The Intel Data Analytics Acceleration Library (Intel DAAL) is a library delivering high performance machine learning and data analytics algorithms. Intel DAAL is an essential component of Intel’s overall machine learning solution, including the Intel® Xeon® Processor E7 Family, the Trusted Analytics Platform, and Intel® Xeon Phi™ Processors (Knights Landing). Intel DAAL works with a wide selection of data platforms and programming languages including Hadoop, Spark, Python, Java, and C++. Intel DAAL was first released in 2015 without source code to give us time to evolve some interfaces on our path to open sourcing this year. We appreciate the many users who have given feedback and encouraged us to get where we are today. Previous versions of Intel DAAL required separate installation of the Intel Math Kernel Library (Intel MKL) and Intel Integrated Performance Primitives (Intel IPP). The latest version of Intel DAAL comes with the necessary binary parts of Intel MKL (for BLAS and LAPACK) as well as Intel IPP (compression and decompression), so the tremendous performance of these key routines is available automatically with no additional downloads needed. To make the most of multicore and many-core parallelism, and for superior threading interoperability, the threading in Intel DAAL relies on the open source project known as "TBB" (Intel® Threading Building Blocks).


EXPERIENCE PERFORMANCE

In the exciting and rapidly evolving data analytics market, this key Intel performance library can really boost performance. At the Intel Developer Forum in 2015, Capital One discussed significant acceleration (over 200X - see slide 26) as an early user of Intel DAAL. We've seen numerous examples across many industries, in the product's first year, of substantial performance improvements using Intel DAAL - it is definitely worth a try!

Many more details about the product are available on the product page, including some benchmarking data related to the potential performance gains when using DAAL.


SPEEDING TOWARD 2017 - JOIN US!

DAAL is currently speeding toward a "2017" release (expected in late Q3 2016) in conjunction with Intel's award winning Intel Parallel Studio suite of developer tools.  Precompiled binaries with installers are available for free as part of the beta program. Registration for the beta is available at tinyurl.com/ipsbeta2017.

The open source project feeds the product; there are no features held exclusively for the product version. The only difference when purchased is that Intel's Premier support is included for the entire product.

Support for all users of Intel DAAL is available through the online Intel DAAL forum.

Interview with Martin Hall (Director of Marketing & Business Development) on Big Data & Analytics at Intel


In this theCUBE interview, Martin Hall, Director of Marketing & Business Development, Big Data & Analytics at Intel, discusses the benefits of Open Source software and how Trusted Analytics Platform (TAP) makes it easier for data scientists and developers to deploy big data analytics projects. theCUBE is the leading live internet interview show covering enterprise technology and innovation.

View complete interview (YouTube)

Trials & Triumphs of Market Validation – A Case Study


Every so often, we come across an article that sums up an idea so well, we just have to share it - this article is just that. Kaloyan Yankulo gives us an inside look at how he went about validating his idea for HeadReach with a small test group, committing minimal time and resources.

Here are a couple of key takeaways we found worth sharing:

  • It doesn't cost a lot to validate your idea! Yankulo prices out his approach to give you an idea of how he did it. But there are a number of other free resources available to use throughout the validation process.
     
  • It's important to listen to your customers when they say they do or don't want something. If you get negative feedback, take it into consideration, pivot your idea and go back to your customers to validate the new concept.
     
  • Be creative in your approach to validating your idea! You don't always need an MVP or working prototype to test the market. Leverage discussion groups, wireframes, proof of concepts, etc. when talking to your customers.
     
  • Validate your pricing model while you're at it. Not only is the idea validation stage the time to perfect your offering, but it's also a great opportunity to test out your price point. Start your pricing higher than you plan to, test the price point and adjust from there. You never know if your idea might be worth more than you think.
     
  • Keep your marketing simple. A minimalistic, clean and concise single landing page is more than enough to highlight a single call-to-action, share an example of your idea, and get feedback.
     

Read the original article for the full story, and then let us know: What trials and triumphs have you faced when validating your ideas?

We'd love to hear about it! Leave us a comment below.

Intel® Xeon® Processor E5-2600 V4 Product Family Technical Overview


Executive Summary

The Intel® Xeon® processor E5-2600 v4 product family, code-named Broadwell EP, is a two-socket platform based on Intel’s most recent microarchitecture. Intel uses a “tick-tock” model for its processor generations. This new generation is a “tick” based on 14 nm process technology. Major architecture changes take place on a “tock,” while minor architecture changes and a die shrink occur on a “tick.”

Figure 1: “Tick-Tock” model.

In addition to a die shrink, an increase in processor cores, an increase in memory bandwidth, and power enhancements, Broadwell has many new features compared to the previous-generation Haswell EP microarchitecture (Intel® Xeon® processor E5-2600 v3 product family). These features include architecture improvements with a Gather Index Table, the Translation Lookaside Buffer (TLB), the Instruction Set Architecture (ISA), floating point instructions, and Intel® Transactional Synchronization Extensions (Intel® TSX), as well as new virtualization, cryptographic, and security enhancements.

Introduction

This paper discusses the new features available in the Intel Xeon processor E5-2600 v4 product family compared to the Intel Xeon processor E5-2600 v3 product family. It also describes what developers need to do to take advantage of these new features.

Intel Xeon Processor E5-2600 V4 Product Family Microarchitecture Overview

Figure 2: Overview of the Intel® Xeon® processor E5-2600 v4 product family microarchitecture.

The Intel Xeon processor E5-2600 v4 product family provides up to 22 cores, which bring additional computing power to the table compared to the 18 cores of its predecessor. Additional improvements include expanded last level cache (LLC), faster 2400 MHz DDR4 memory, support for 3D LRDIMMs, improved data integrity with detection of DDR4 bus faults during a write, a reduced Thermal Design Power (TDP), hardware-managed power-states, new RDSEED instruction, end-to-end data protection for transaction layer packets for the PCIe* I/O subsystem, new virtual technologies, and more. 

Table 1: Generational comparison of the Intel® Xeon® processor E5-2600 v4 product family to the Intel® Xeon® processor E5-2600 v3 product family.

Intel Xeon Processor E5-2600 V4 Product Family Feature Overview

The rest of this paper discusses some of the new features in the Intel Xeon processor E5-2600 v4 product family. These features provide additional performance improvements, new capabilities, security enhancements, and virtualization enhancements.  

Table 2: New features and technologies of the Intel® Xeon® processor E5-2600 v4 product family.

Intel® Advanced Vector Extensions (Intel® AVX) Optimization

Intel AVX workloads run at lower base and maximum turbo processor frequencies than non-Intel AVX workloads. On Haswell, in workloads that use a mixture of Intel AVX and non-Intel AVX code, all the cores on a processor socket are limited to the lower processor frequencies of the Intel AVX workload. Broadwell improves this situation by allowing non-Intel AVX code to run at its optimum processor frequency within a mixed workload.

Figure 3: Processor frequency comparisons for non-Intel® Advanced Vector Extensions (Intel® AVX), Intel AVX, and mixed workloads.

Improved Floating Point Operations

Broadwell introduces several improvements to floating point operations, including a reduction in the latency of the vector floating point multiply operations MULPS and MULPD from five cycles to three cycles. There have also been latency reductions for the floating point divide operations DIVSS, DIVSD, DIVPS, and DIVPD. This benefits workloads that require precision when dividing large floating point numbers, such as some financial and scientific calculations. The latency reduction is possible due to the Radix-1024 divider, which has been increased in size, providing the ability to compute 10 bits in each step.

A new split scalar operation provides the ability for scalar divides to be split into two segments and processed simultaneously, improving throughput. See below for a multi-generation comparison of the benefit of the split operation on Broadwell versus Nehalem (Intel® Xeon® Processor 5500 Series), Sandy Bridge (Intel® Xeon® processor E5-2600 product family), Ivy Bridge (Intel® Xeon® processor E5-2600 v2 product family), and Haswell (Intel® Xeon® processor E5-2600 v3 product family).

Table 3: Generational comparison of latency and throughput (cycles) for floating point divide operations.

No recompilation is required to take advantage of these enhancements, allowing immediate benefits for existing code that already utilizes these types of operations. The Intel® Compiler 14.1+ and GCC 4.7+ support these instructions for those who want to gain access to additional benefits provided by Broadwell.

Translation Lookaside Buffer

TLB improvements include an increase in buffer size from 1K to 1.5K entries and a native 16-entry array that handles 1 GB page translations, which helps in situations with large code or data footprints that have locality. The Branch Prediction Unit target array has been increased from 8 ways to 10 ways, along with other improvements that help with address prediction for branches and returns. Included is a “bottomless” return stack buffer (RSB), which uses an indirect predictor to predict the return address if the return stack underflows. Lastly, store-to-load forwarding benefits from an increase in the out-of-order scheduler size from 60 entries to 64.

Instruction Set Architecture (ISA) changes include micro-op reductions for several instructions, which speed up cryptography performance. The ADC, CMOV, and PCLMULQDQ instructions have each been reduced to one micro-op. The ADC instruction is helpful for emulating large-number arithmetic, the CMOV instruction performs a conditional move, and the PCLMULQDQ instruction helps with cryptographic and hashing operations. The VCVTPS2PH (memory form) instruction has been reduced from 4 micro-ops to 3 micro-ops.

No recompilation is required to take advantage of these enhancements, allowing immediate benefits for existing code that already utilizes these types of operations. The Intel Compiler 14.1+ and GCC 4.7+ support these instructions for those who want to gain access to additional benefits provided by Broadwell.
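
PCLMULQDQ is usually reached through cryptographic libraries, but it can also be invoked directly via the carry-less multiply intrinsic; a minimal sketch (compile with a PCLMULQDQ-capable flag such as -mpclmul) is shown below:

#include <wmmintrin.h>   /* PCLMULQDQ intrinsic */

/* Carry-less (polynomial) multiply of the low 64-bit lanes of a and b:
   the building block of GCM/GHASH and some CRC implementations that
   benefit from the single-micro-op PCLMULQDQ on Broadwell. */
static __m128i clmul_lo(__m128i a, __m128i b)
{
    return _mm_clmulepi64_si128(a, b, 0x00);
}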

Improved Gather Operation

Broadwell adds additional hardware capability with a gather index table (GIT) to improve performance. The GIT provides storage for full-width indices near the address generation unit. A special load grabs the correct index, simplifying index handling, and loaded elements are merged directly into the destination. These improvements require approximately 60 percent fewer micro-ops than the previous generation of silicon, significantly reducing overhead, and can reduce the latency of the gather operation by approximately 60 percent.

No recompilation is required to take advantage of this new feature, allowing immediate benefits for existing code that already utilizes these types of operations. The Intel Compiler 14.1+ and GCC 4.7+ support these instructions for those who want to gain access to additional benefits provided by Broadwell. A small intrinsics sketch follows the figure below.

Figure 4: Gather index table conceptual block diagram.
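
Gathers are typically generated by the compiler when vectorizing indexed loads, but they can also be written directly with AVX2 intrinsics; the following small sketch (compile with AVX2 enabled) shows the kind of operation the gather index table accelerates:

#include <immintrin.h>

/* Gather eight floats from 'table' at the positions given in 'idx'.
   Code like this (or compiler-vectorized indexed loads) maps to the
   VGATHER instructions that the gather index table accelerates. */
static __m256 gather8(const float *table, const int *idx)
{
    __m256i vindex = _mm256_loadu_si256((const __m256i *)idx);
    return _mm256_i32gather_ps(table, vindex, 4);   /* scale = sizeof(float) */
}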

Intel® Transactional Synchronization Extensions

This technology was previously introduced on the Intel® Xeon® processor E7 v3 family and is now available on the Intel Xeon processor E5-2600 v4 product family. Intel TSX provides a set of instruction set extensions that allow programmers to specify regions of code for transactional synchronization. Programmers can use these extensions to achieve the performance of fine-grain locking while actually programming using coarse-grain locks.

Intel TSX provides two software interfaces. The first, called Hardware Lock Elision (HLE), is a legacy-compatible instruction set extension (comprising the XACQUIRE and XRELEASE prefixes) used to specify transactional regions. HLE is compatible with the conventional lock-based programming model. Software written using the HLE hints can run both on legacy hardware without Intel TSX and on new hardware with Intel TSX. The second, called Restricted Transactional Memory (RTM), is a new instruction set interface (comprising the XBEGIN, XEND, and XABORT instructions) that allows programmers to define transactional regions in a more flexible manner than is possible with HLE. Unlike the HLE extensions, but like most new instruction set extensions, the RTM instructions generate an undefined instruction exception (#UD) on older processors that do not support RTM. RTM also requires the programmer to provide an alternate code path for a transactional execution that is not successful.

Figure 5: Lock boundaries for critical sections of code for a given thread, and how the lock appears free throughout from the perspective of the hash table.

For an overview on Intel TSX, see Transactional Synchronization in Haswell. The Intel® Architecture Instruction Set Extensions Programming Reference describes these extensions in detail and outlines various programming considerations to get the most out of them. 
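
As an illustration only, the following sketch uses the RTM intrinsics from immintrin.h with a simple spinlock fallback (compile with RTM support enabled, for example -mrtm with GCC); real code should follow the programming considerations in the references above:

#include <immintrin.h>
#include <stdatomic.h>

static atomic_int fallback_lock;   /* 0 = free, 1 = held */
static long counter;

/* Sketch of an RTM transactional region with a spinlock fallback path.
   The transaction reads the fallback lock so that it aborts if another
   thread is currently inside the non-transactional path. */
void increment_counter(void)
{
    if (_xbegin() == _XBEGIN_STARTED) {
        if (atomic_load_explicit(&fallback_lock, memory_order_relaxed))
            _xabort(0xff);             /* lock held: fall back below */
        counter++;                     /* transactional update */
        _xend();
        return;
    }
    /* Fallback path: transaction aborted or RTM not available. */
    int expected = 0;
    while (!atomic_compare_exchange_weak(&fallback_lock, &expected, 1))
        expected = 0;
    counter++;
    atomic_store(&fallback_lock, 0);
}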

Intel Xeon Processor E5-2600 v4 Product Family Features

Virtual Technology Enhancements with Cache Monitoring Technology, Cache Allocation Technology, and Memory Bandwidth Monitoring

Intel® Resource Director Technology (Intel® RDT) is a set of technologies designed to help monitor and manage shared resources. See Optimize Resource Utilization with Intel® Resource Director Technology for an animation illustrating the key principles behind Intel RDT. Haswell introduced a new Cache Monitoring Technology (CMT) feature. Broadwell provides further expansion of virtual technology with Cache Allocation Technology (CAT), Memory Bandwidth Monitoring (MBM), and Code and Data Prioritization (CDP). These new features help to address the lack of hardware support for the operating system or the Virtual Machine Manager (VMM) to deal with shared resources on the server. Chapters 17.15 and 17.16 in volume 3 of the Intel® 64 and IA-32 Architectures Software Developer’s Manual (SDM) cover programming details on CMT, CAT, MBM, and CDP.

Figure 6: Cache and memory bandwidth monitoring and enforcement vectors.

Cache Monitoring Technology allows for monitoring of the Last Level Cache on a per-thread, application, or virtual machine (VM) basis. Misbehaving threads can be isolated to increase performance. On Haswell, the information gleaned via CMT could be used by a scheduler to migrate a problematic thread, application, or VM. With Broadwell, CAT makes this process easier. For more detailed information, see Section 17.15 in volume 3 of the Intel® 64 and IA-32 Architectures Software Developer’s Manual (SDM). Using this feature requires enabling at the OS or VMM level, and the Intel® Virtualization Technology (Intel® VT) for IA-32, Intel® 64 and Intel® Architecture (Intel® VT-x) feature must be enabled at the BIOS level. For instructions on setting Intel VT-x, refer to your OEM BIOS guide.

Figure 7: Generational comparison with and without Cache Allocation Technology.

Cache Allocation Technology allows the OS to specify how much cache space an application can utilize on a per-thread, application, or VM basis, allowing the VMM or OS scheduler to make changes based on policy enforcement. This feature can be beneficial in a multi-tenant environment when a VM is causing a lot of thrash in the cache: the VMM or OS can migrate this “noisy neighbor” to a different location where it may have less of an impact on other VMs. CAT introduces a new capability to manage the processor LLC based on pre-defined levels of service, independent of the OS or VMM. A QoS mask can be used to provide 16 different levels of enforcement to limit the amount of cache that a thread can consume. The CPUID instruction is used for enumeration of cache allocation functionality. IA32_L3_QOS_MASK_n is a model-specific register used to configure a class of service, and IA32_PQR_ASSOC is a model-specific register used to associate a core/thread/application with a configuration. For more detailed information, see Section 17.16 in volume 3 of the Intel® 64 and IA-32 Architectures Software Developer’s Manual (SDM). Using this feature requires enabling at the OS or VMM level, and the Intel VT-x feature must be enabled at the BIOS level. For instructions on setting Intel VT-x, refer to your OEM BIOS guide.
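
As a conceptual sketch only (in production the OS or VMM interfaces should manage these registers), the following program shows how the two MSRs named above could be programmed from user space through the Linux msr driver (modprobe msr, run as root). The MSR indices are taken from the SDM (IA32_PQR_ASSOC is 0xC8F and the IA32_L3_QOS_MASK_n array starts at 0xC90); the CPU number, class of service, and capacity bitmask are purely illustrative, and error handling is minimal for brevity.

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

#define IA32_PQR_ASSOC     0xC8F
#define IA32_L3_QOS_MASK0  0xC90

static int wrmsr(int cpu, uint32_t msr, uint64_t value)
{
    char path[64];
    snprintf(path, sizeof(path), "/dev/cpu/%d/msr", cpu);
    int fd = open(path, O_WRONLY);
    if (fd < 0)
        return -1;
    int ok = pwrite(fd, &value, sizeof(value), msr) == sizeof(value);
    close(fd);
    return ok ? 0 : -1;
}

int main(void)
{
    int cpu = 2;   /* logical CPU to constrain (illustrative) */
    int cos = 1;   /* class of service to use (illustrative)  */

    /* Give COS 1 a small contiguous capacity bitmask, e.g. four ways. */
    if (wrmsr(cpu, IA32_L3_QOS_MASK0 + cos, 0x000F) != 0)
        perror("wrmsr mask");

    /* Associate the CPU with COS 1: the COS field lives in bits 63:32. */
    if (wrmsr(cpu, IA32_PQR_ASSOC, (uint64_t)cos << 32) != 0)
        perror("wrmsr pqr");

    return 0;
}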

Figure 8: Generation-to-generation comparison with and without Cache Allocation Technology.

Memory Bandwidth Monitoring enables the OS or VMM to monitor memory bandwidth on a per-core or per-thread basis, allowing the OS or VMM to make scheduling decisions. An example of this situation is when one core is being heavily utilized by two applications, while another core is being underutilized by two other applications. With Memory Bandwidth Monitoring, the OS or VMM now has the ability to schedule a VM or an application to a different core to balance out memory bandwidth utilization. In Figure 9, high memory bandwidth applications are competing for the same resource. The OS or VMM can move one of the high-bandwidth applications to another resource to balance out the load. For more detailed information, see Section 17.15 in volume 3 of the Intel® 64 and IA-32 Architectures Software Developer’s Manual (SDM). Using this feature requires enabling at the OS or VMM level, and the Intel VT-x feature must be enabled at the BIOS level. For instructions on setting Intel VT-x, refer to your OEM BIOS guide.

Figure 9: Generation-to-generation comparison with and without Memory Bandwidth Monitoring.

Code and Data Prioritization technology is an extension of CAT. CDP enables isolation and separate prioritization of code and data fetches to the L3 cache in a software-configurable manner, which can enable workload prioritization and tuning of cache capacity to the characteristics of the workload. CDP extends CAT by providing separate code and data masks per Class of Service (COS). This can assist with optimizing the relationship between the last level cache and a given workload. For more detailed information, see Section 17.16.2 in volume 3 of the Intel® 64 and IA-32 Architectures Software Developer’s Manual (SDM). Using this feature requires enabling at the OS or VMM level, and the Intel VT-x feature must be enabled at the BIOS level. For instructions on setting Intel VT-x, refer to your OEM BIOS guide.

Figure 10: Capacity bitmasks allow for separation of the code and data.

Cryptographic Enhancements

ADCX (unsigned integer add with carry) and ADOX (unsigned integer add with overflow) have been introduced for Asymmetric Crypto Assist1, in addition to faster ADC/SBB instructions (no recompilation is required for the ADC/SBB benefits). ADCX and ADOX are extensions of the ADC (add with carry) instruction for use in large integer arithmetic, greater than 64 bits. The performance improvements come from supporting two parallel carry chains at the same time. ADOX/ADCX can be combined with MULX for additional performance improvements with public key encryption such as RSA. Large integer arithmetic is also used for Elliptic Curve Cryptography (ECC) and Diffie-Hellman (DH) Key Exchange. Beyond cryptography, there are many use cases in complex research and high-performance computing. The demand for this functionality is high enough to warrant a number of commonly used optimized libraries, such as the GNU Multiple Precision (GMP) library used by applications like Mathematica; see New Instructions Supporting Large Integer Arithmetic on Intel® Architecture Processors. Taking advantage of these new instructions requires an updated software library and recompilation (Intel Compiler 14.1+ and GCC 4.7+). For more information about these instructions, see the Intel® 64 and IA-32 Architectures Software Developer’s Manual.

1Intel® processors do not contain crypto algorithms, but support math functionality that accelerates the sub-operations.
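
A minimal sketch (not from the original article) of multi-precision addition using the ADX intrinsic _addcarryx_u64 is shown below. It uses a single carry chain for brevity, whereas optimized big-number code interleaves two independent chains (one on CF via ADCX, one on OF via ADOX) to expose the instruction-level parallelism described above. Compile with, for example, gcc -madx.

#include <immintrin.h>
#include <stdio.h>

/* Add two 256-bit integers stored as four 64-bit limbs, least significant first. */
static void add256(const unsigned long long a[4],
                   const unsigned long long b[4],
                   unsigned long long r[4])
{
    unsigned char carry = 0;
    for (int i = 0; i < 4; ++i)
        carry = _addcarryx_u64(carry, a[i], b[i], &r[i]);
}

int main(void)
{
    unsigned long long a[4] = { ~0ULL, ~0ULL, 0, 0 };  /* 2^128 - 1 */
    unsigned long long b[4] = { 1, 0, 0, 0 };
    unsigned long long r[4];
    add256(a, b, r);        /* expected result: 2^128 */
    printf("%016llx %016llx %016llx %016llx\n", r[3], r[2], r[1], r[0]);
    return 0;
}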

RDSEED

The RDSEED instruction is intended for seeding a Pseudorandom Number Generator (PRNG) of arbitrary width, which can be useful when you want to create stronger cryptography keys. If you do not need to seed another PRNG, use the RDRAND instruction instead. For more information see Table 4, Figure 11, and The Difference Between RDRAND and RDSEED.

Table 4: RDSEED and RDRAND compliance and source information.

Instruction | Source                                                  | NIST Compliance
RDRAND      | Cryptographically secure pseudorandom number generator  | SP 800-90A
RDSEED      | Non-deterministic random bit generator                  | SP 800-90B & C (drafts)

Figure 11: RDSEED and RDRAND conceptual block diagram.

The Intel® Compiler 15+ and GCC 4.8+ support RDSEED.

RDSEED loads a hardware-generated random value and stores it in the destination register. The random value is generated from an Enhanced NRBG (Non-deterministic Random Bit Generator) that is compliant with NIST SP 800-90B and NIST SP 800-90C in the XOR construction mode.

In order for the hardware design to meet its security goals, the random number generator continuously tests itself and the random data it is generating. The self-test hardware detects runtime failures in the random number generator circuitry or statistically anomalous data occurring by chance and flags the resulting data as bad. In such extremely rare cases, the RDSEED instruction will return no data instead of bad data.

Intel C/C++ Compiler Intrinsic Equivalent:

RDSEED int _rdseed16_step( unsigned short * );

RDSEED int _rdseed32_step( unsigned int * );

RDSEED int _rdseed64_step( unsigned __int64 * );

As with RDRAND, RDSEED will avoid any OS- or library-enabling dependencies and can be used directly by any software at any protection level or processor state.

For more information see section 7.3.17.2 of the Intel® 64 and IA-32 Architectures Software Developer’s Manual (SDM).
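
The sketch below (not from the original article) shows one way to use the _rdseed64_step intrinsic with the retry behavior described above; compile with, for example, gcc -mrdseed, and run on RDSEED-capable hardware.

#include <immintrin.h>
#include <stdio.h>

/* Retry because RDSEED can legitimately return "no data" under heavy demand. */
static int get_seed64(unsigned long long *seed)
{
    for (int tries = 0; tries < 100; ++tries) {
        if (_rdseed64_step(seed))
            return 1;          /* carry flag was set: the value is valid */
        _mm_pause();           /* brief back-off before retrying */
    }
    return 0;                  /* give up; the caller must handle failure */
}

int main(void)
{
    unsigned long long seed;
    if (get_seed64(&seed))
        printf("seed = 0x%016llx\n", seed);
    else
        printf("RDSEED returned no data\n");
    return 0;
}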

Supervisor Mode Access Prevention (SMAP)

Supervisor Mode Access Prevention (SMAP) is a new CPU-based mechanism for user-mode address-space protection. It extends the protection previously provided by Supervisor Mode Execution Prevention (SMEP). SMEP prevents supervisor mode execution from user pages, while SMAP prevents unintended supervisor mode accesses to data on user pages. There are legitimate instances where the OS needs to access user pages, and SMAP does provide support for those situations: the STAC and CLAC instructions set and clear the EFLAGS.AC flag to temporarily permit such accesses.

Figure 12: SMAP conceptual diagram.

Snoop Modes

The ability to maintain the coherence of shared resource data stored in multiple caches has become more difficult over time with the evolving complexity of the microarchitecture. Home agents and cache agents work together to maintain coherence of the memory data and cache lines between processor sockets.

Broadwell offers four different snoop modes: a reintroduction of Home Snoop with Directory and Opportunistic Snoop Broadcast (HS with DIR + OSB), previously available on Ivy Bridge, and the three snoop modes that were available on Haswell: Early Snoop, Home Snoop, and Cluster on Die (COD) Mode. Table 5 maps the memory bandwidth and latency trade-offs that vary across the different modes. Most workloads will find that Home Snoop with Directory and Opportunistic Snoop Broadcast is the best choice.

Table 5: Comparison of Snoop Mode Characteristics. Higher is better for memory bandwidth. Lower is better for memory latency.
*Depends on the directory state. Clean directory – low latency; Dirty directory – high latency

+Local latencies are snoop bound

Early Snoop mode always uses the cache agent to generate the snoop request. The request is broadcast to the other cache agents, which creates a lot of traffic. Although memory latency is better compared to Home Snoop mode, the memory bandwidth is worse due to the amount of broadcast traffic. 

Home Snoop mode always uses the home agent of the memory controller to generate the snoop request. This method creates higher local memory latencies, but because the snoop is not broadcast by the caching agent there is less snoop traffic. This means more memory bandwidth is available compared to Early Snoop mode.

Home Snoop with Directory and Opportunistic Snoop Broadcast combines multiple features and generally will be the best snoop mode for most workloads. Each home agent has a small cache that holds the directory state of migratory cache lines. The home agent will speculatively snoop the remote socket in parallel with the directory read. This enables low cache-to-cache latencies, low memory latencies and higher memory bandwidth. It is also used to minimize directory lookup overhead for non-temporal writes. 

Cluster on Die Mode (COD) was introduced with Haswell and is found on processors with 10 or more cores. COD supplies two home agents that provide a second NUMA node per processor socket. For highly optimized NUMA workloads where latency is more important than sharing data across the caching agents, COD can help reduce average latencies for LLC hits and local memory accesses. Because the number of hardware threads is split between two home agents, this can also lead to higher memory bandwidth. The affinity decisions based on the number of NUMA nodes are owned by the OS or VMM.

Refer to your OEM BIOS guide for instructions on setting this feature. Typically it will be a selectable option in the Advanced CPU or QPI menus of your BIOS. 

Posted Interrupts

Posted Interrupts enable efficient co-migration of interrupts with virtual processors, avoiding the need for a VM-exit. When a sequence of external interrupts is sent to the VM, they are treated like a posted write and stored in memory. Because posted interrupts are directly supported by the hardware, there is a reduction in the number of VM-exits that occur compared to using software to resolve the interrupt. Posted interrupts are also complementary to APIC Virtualization, further improving virtual-interrupt performance.

Figure 13: Comparison of software-based interrupt handling against APIC Virtualization, which was introduced on Ivy Bridge, and lastly with Broadwell, which provides additional support with posted interrupt support in the hardware.

Refer to your OEM BIOS guide for instructions on setting the Intel VT-x feature in your BIOS.  Contact your VMM provider to verify support of this feature. 

Page Modification Logging

In Haswell, the EPT Accessed and Dirty bits were implemented in hardware to reduce the number of VM-exits. This enabled more efficient live migration of VMs and fault tolerance. Broadwell adds Page Modification Logging to keep track of these events. This can help reduce overhead through rapid checkpointing of fault-tolerance-based VMs. It can also help maintain availability for critical workloads, while providing prioritization in a mixed workload environment.

Refer to your OEM BIOS guide for instructions on setting the Intel VT-x feature in your BIOS. Contact your VMM provider to verify support of this feature. 

HWPM – Hardware Power Management

Broadwell introduces Hardware Power Management (HWPM), a new optional processor power management feature in the hardware that frees the OS from making decisions about processor frequency. HWPM allows the platform to provide information on all available constraints, allowing the hardware to choose the optimal operating point. Operating independently, the hardware uses information that is not available to software and is able to make a more optimized decision about P-states and C-states. For example, the performance profile improves latency response when demand changes, while the energy profile delivers optimal energy efficiency and potentially provides power savings (see Figure 14).

Figure 14: Comparison of performance and power of HWPM versus without HWPM.

Note: Performance and power differences between HWPM and without HWPM may vary based on workload.

Refer to your OEM BIOS guide for instructions on setting this feature. Typically it will be a selectable option in the power profile menu of the BIOS labeled HWPM OOB. 

Intel® Processor Trace

Intel® Processor Trace (Intel® PT) is an exciting feature supported on Broadwell that can be enormously helpful in debugging, because it exposes an accurate and detailed trace of activity with triggering and filtering capabilities to help with isolating the tracing that matters.

Intel PT provides the context around all kinds of events. Performance profilers can use Intel PT to discover the root causes of “response-time” issues—performance issues that affect the quality of execution, if not the overall runtime. 

Further, the complete tracing provided by Intel PT enables a much deeper view into execution than has previously been commonly available; for example, loop behavior, from entry and exit down to specific backedges and loop tripcounts, is easy to extract and report.

Debuggers can use it to reconstruct the code flow that led to the current location, whether that is a crash site, a breakpoint, a watchpoint, or simply the instruction following a function call that was just stepped over. They may even allow navigating in the recorded execution history via reverse stepping commands.

Another important use case is debugging stack corruptions. When the call stack has been corrupted, normal frame unwinding usually fails or may not produce reliable results. Intel PT can be used to reconstruct the stack back trace based on actual CALL and RET instructions.

Operating systems could include Intel PT into core files. This would allow debuggers to not only inspect the program state at the time of the crash, but also to reconstruct the control flow that led to the crash. It is also possible to extend this to the whole system to debug kernel panics and other system hangs. Intel PT can trace globally so that when an OS crash occurs, the trace can be saved as part of an OS crash dump mechanism and then used later to reconstruct the failure.

Intel PT can also help to narrow down data races in multi-threaded operating systems and user program code. It can log the execution of all threads with a rough time indication. While it is not precise enough to detect data races automatically, it can give enough information to aid in the analysis.

To utilize Intel PT you need Intel® VTune™ Amplifier version 2015 Update 1 or greater.

For more information, see Debug and fine-grain profiling with Intel processor trace, given by Beeman Strong, Senior, and Processor tracing by James Reinders.

Intel® Node Manager 

Intel® Node Manager is a core set of power management features that provide a smart way to optimize and manage power, cooling, and compute resources in the data center. This server management technology extends component instrumentation to the platform level and can be used to make the most of every watt consumed in the data center. First, Intel Node Manager reports vital platform information, such as power, temperature, and resource utilization using standards-based, out-of-band communications. Second, it provides fine-grained controls to limit platform power in compliance with IT policy. This feature can be found across Intel’s product segments, including Broadwell, providing consistency within the data center.

To use this feature you must enable the BMC LAN and the associated BMC user configuration at the BIOS level, which should be available under the server management menu. The Programmer’s Reference Kit is simple to use and requires no additional external libraries to compile or run. All that is needed is a C/C++ compiler; then run the configuration and compilation scripts.

Resources

Intel® Xeon® Processor E5 Family

Intel® 64 and IA-32 Architectures Software Developer’s Manual (SDM)

Intel® Resource Director Technology (Intel® RDT)

Optimize Resource Utilization with Intel® Resource Director Technology

Intel® Resource Director Technology Features in Intel® Xeon® Processors E5 v4

Benefits of Intel Cache Monitoring Technology in the Intel® Xeon® Processor E5 v3 Family

Cache Allocation Technology Improves Real-Time Performance

Intel’s Cache Monitoring Technology Software-Visible Interfaces

Intel's Cache Monitoring Technology: Use Models and Data

Intel's Cache Monitoring Technology: Software Support and Tools

The Difference Between RDRAND and RDSEED

Processor tracing by James Reinders

Debug and fine-grain profiling with Intel processor trace given by Beeman Strong, Senior

Intel® Node Manager Website

Intel® Node Manager Programmer’s Reference Kit

Open Source Reference Kit for Intel® Node Manager

How to set up Intel® Node Manager

Haswell Cryptographic Performance

Intel® Performance Counter Monitor a better way to measure CPU utilization

Intel® Memory Latency Checker (MLC) a Linux* tool available for measuring the DRAM latency on your system

Intel® VTune™ Amplifier 2016 a rich set of performance insight into hotspots, threading, locks & waits, OpenCL bandwidth and more, with profiler to visualize results

The Intel® Xeon® Processor-Based Server Refresh Savings Estimator
 

MAGIX takes Video Editing to a New Level by Providing HEVC to Broad Users


MAGIX's Video Pro X delivers Intel HEVC encoding through Intel® Media Server Studio

 

While elite video pros have access to high-powered video production applications with bells and whistles traditionally available only to enterprise, MAGIX has taken a broader approach, unveiling its latest version of Video Pro X (Figure 1), a video editing software that sets new standards for semi-professional video production for a broader set of users. Optimized with Intel® Media Server Studio, MAGIX Video Pro X delivers Intel HEVC encoding to prosumers and semi-pros to help alleviate a bandwidth-constrained internet where millions of videos are shared and distributed.

Figure 1: MAGIX Video Pro X for semi-professional video production

 

Video takes up a massive―and growing―share of Internet traffic. And meeting consumers’ demands for higher online video quality pushes the envelope even more. 

One solution to make those demands more manageable is the HEVC standard (also known as H.265), which delivers huge gains in compression efficiency over H.264/AVC, currently the most commonly used standard. Depending on the testing scenarios, Intel's HEVC GPU-accelerated encoder can deliver 43% better compression for the same quality as 2-pass x264-medium.1, 2 While video is the largest and fastest growing category of Internet traffic, it also consumes much more internet bandwidth than other content formats. The massive gains in compression efficiency that HEVC provides are therefore valuable not only for major online video streaming providers, but also for general internet users and the growing audiences that create and share videos online every day.

With the launch of 6th generation Intel® Core™ processors in September last year, Intel's platforms support both hardware-based HEVC decoding and encoding. Since then, MAGIX has worked closely with Intel to optimize its video production software with Intel Media Server Studio's Professional Edition for access to hardware acceleration and graphics processor capabilities, Intel's HEVC codec, and expert-grade performance and advanced visual quality analyzers.

MAGIX technical experts elaborated that thanks to the high quality of Intel's media software product and easy integration, MAGIX was able to incorporate this extremely efficient compression technology in Video Pro X, making it a premier, semi-pro video editing software with the competitive advantage of providing hardware-accelerated HEVC encoding. The production software is also a great tool for the import, editing and export of 4K/UHD videos.

"Through integrating Intel’s HEVC decoder and encoder that is a part of Intel® Media Server Studio Professional Edition, we put the power in our customer's hands to use the benefits of better compression rate of next gen codec that allows to deliver high quality video with less bandwidth," Sven Kardelke, MAGIX Chief Product Officer Video/Photo.

"We’re working with many industry leaders to help bring their solutions to the marketplace, and MAGIX Video Pro X is an innovative example of a new video editing software solution that supports HEVC. Optimized with Media Server Studio, it’s one of the newest prosumer software products available, enabling individuals to create, edit, and share their own broadcast-ready 4K content online in compressed formats. This promises to have a huge effect on improving video viewer experiences via the internet, where it is so bandwidth constrained today," said Jeff McVeigh, Intel Software and Services Group vice president and Visual Computing Products general manager.

All in all, it’s a great step forward in taking video editing to a new level.  

 


1 Intel Media Server Studio HEVC Codec Scores Fast Transcoding Title

Remote Power Management of Intel® Active Management Technology (Intel® AMT) Devices with InstantGo


Download Document

Introduction

InstantGo, also known as Connected Standby, creates a low OS power state that must be handled differently from how remote power management was handled in the past. This article provides information on how to support the InstantGo feature.

How to support Remote Power Management of Intel® Active Management Technology (Intel® AMT) enabled devices with InstantGo

InstantGo, formerly known as Windows* Connected Standby, is a Microsoft power-connectivity standard for Windows* 8 and 10. This hardware and software specification defines low power levels while maintaining network connectivity and allows for a quick startup (500 milliseconds). InstantGo replaces the S3 power state.

To verify whether a system supports InstantGo, type “powercfg /a” at a command prompt. If you have InstantGo, you’ll see Standby (Connected) listed as an option.

Intel AMT and InstantGo

Intel AMT added support for InstantGo in version 10.0, but the manufacturer must enable the feature.

How are Intel AMT and InstantGo related? Intel AMT has to properly handle the various power states by communicating with the firmware; however, in this case the OS, not the hardware, controls the low power state.

Intel AMT and InstantGo prerequisites

The only platforms fully compatible with InstantGo run Windows* 8.1 or newer with Intel AMT 10.0 or later. To remotely determine if a device OS is in a low power state, use the OSPowerSavingState method.

One way of determining the Intel AMT version is to inspect the CIM_SoftwareIdentity.VersionString property as shown in the Get Core Version Use Case.

Remote Verification of Device Power State

In the past, in order to verify the power state we looked at the hardware power state using the CIM_AssociatedPowerManagementService.PowerState method. Now, when a system is in the low OS power state of InstantGo, the hardware power state will return S0 (OS powered on). You now need to make an additional query for the OSPowerSavingState in order to determine if the OS is in full or low power mode.

The Power Management Work Flow in Intel AMT

Previous work flow for Power-On operations

  1. Query for Intel AMT Power State 
  2. If system is in s0 (power on), do nothing
  3. If system is in s3, s4 or s5, then issue a power on command using the Change Power State API

Current recommendation to properly handle Intel AMT Devices with InstantGo

  1. Query for Intel AMT Power State 
  2. If system is in s3, s4 or s5 then issue a power on command using the Change Power State API
  3. If a system is in s0 (power on) then:
    • If Intel AMT version is 9.0 and below, do nothing
    • If Intel AMT version is 10.0 and above, query the OSPowerSavingState method
      1. If OSPowerSavingState is full power, do nothing
      2. If OSPowerSavingState is in a low power state, wake up the system to full power using the RequestOSPowerSavingState method (see the sketch below).
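
The numbered workflow above can be expressed as the following sketch. The helper functions are hypothetical stubs (they are not part of the Intel AMT SDK) standing in for the WS-Management calls named in the steps; only the decision logic is the point here.

#include <stdio.h>

/* Hypothetical stubs wrapping the queries named above. */
static int  amt_major_version(void)      { return 10; } /* from CIM_SoftwareIdentity.VersionString            */
static int  amt_hw_state_is_s0(void)     { return 1;  } /* CIM_AssociatedPowerManagementService.PowerState == S0 */
static int  amt_os_in_low_power(void)    { return 1;  } /* result of the OSPowerSavingState query              */
static void amt_power_on(void)           { puts("Change Power State: power on"); }
static void amt_request_full_power(void) { puts("RequestOSPowerSavingState: full power"); }

/* Wake a managed device, taking InstantGo into account. */
static void wake_device(void)
{
    if (!amt_hw_state_is_s0()) {          /* S3, S4 or S5: a normal power-on suffices */
        amt_power_on();
        return;
    }
    /* Hardware reports S0, but on Intel AMT 10+ the OS may still be idling
       in InstantGo; only then are the extra query and wake-up needed. */
    if (amt_major_version() >= 10 && amt_os_in_low_power())
        amt_request_full_power();
}

int main(void)
{
    wake_device();
    return 0;
}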

There is also a sample PowerShell Script demonstrating this available for download. The script has 4 basic sections:

  1. Establishes the connection and identifies the Intel AMT Version
  2. Queries the Intel AMT device’s current power state (hardware) – Note: script assumes Intel AMT 10 and is in InstantGo low power mode
  3. Queries for the OS Power State
  4. Wakes up the Device

For information on running PowerShell scripts with the Intel® vPro™ module please refer to the Intel AMT SDK and related Intel AMT Implementation and Reference Guide.

Additional Resources

Summary

As more devices support InstantGo, integration of this technology with remote power management methodology will become critical. You want to avoid cases where devices are detected in the powered-on (S0) state when the system is actually running at a lower power state. Fortunately, supporting InstantGo technology isn’t a difficult task; it takes just a few additional steps to determine the actual power state.

About the Author

Joe Oster has been active at Intel around Intel® vPro™ technology and Intel AMT technology since 2006. He is passionate about technology and is a MSP/SMB technology advocate. When not working, he is a Dad and spends time working on his family farm or flying drones and RC Aircraft.


Innovative Media Solutions Showcase


New, Inventive Media Products Made Possible with Intel Media Software Tools

With Intel media software tools, media/video solutions providers can create inspiring, innovative new products that capitalize on next gen capabilities like HEVC, high-dynamic range (HDR) content delivery, video security solutions with smart analytics, and more. Check these out. Envision how your company can use Intel's advanced media tools to re-invent new solutions for the media and broadcasting industry.


    Mobile Viewpoint Delivers HEVC HDR Live Broadcasting

    Mobile Viewpoint recently announced a new bonding transmitter that delivers HEVC (H.265) HDR video running on the latest 6th generation Intel® processors, and that uses the Intel® Media Server Studio Professional Edition to optimize HEVC compression and quality. And for broadcast-quality video, Intel’s graphics-accelerated codec enabled Mobile Viewpoint to develop a hardware platform that combines low-power hardware-accelerated encoding and transmission. The new HEVC-enabled software will be used in Mobile Viewpoint's Wireless Multiplex Terminal (WMT) AGILE high-dynamic range (HDR) back-of-the-camera solutions and in its 19-inch FLEX IO encoding and O2 decoding products. The results: fast, high-quality video broadcasting on-the-go so the world can stay better informed of fast-changing news and events. Read more.

     


    Sharp's New Innovative Security Camera is built with Intel® Architecture & Media Software

    With digital surveillance and security concerns now an everyday part of life, SHARP unveiled a new omnidirectional, wireless, intelligent digital security surveillance camera to better meet these needs. Built with an Intel® Celeron® processor (N3160) and SHARP 12-megapixel image sensors, and utilizing the Intel® Media SDK for hardware-accelerated encoding, the QG-B20C camera can capture video in 4Kx3K resolution, provide all-around views, and is armed with many intelligent automatic detection functions. Read more.

     

    MAGIX takes Video Editing to a New Level by Providing HEVC to Broad Users

    While elite video pros have access to high-powered video production applications with bells and whistles traditionally available only to enterprise, MAGIX has taken a broader approach unveiling its latest version of Video Pro X, a video editing software that sets new standards for semi-professional video production for widespread users. Optimized with Intel Media Server Studio, MAGIX Video Pro X provides Intel HEVC encoding to prosumers and semi-pros to help alleviate a bandwidth-constrained internet where millions of videos are shared and distributed. Read more.

     


    New JPEG2000 Codec Now Native for Intel Media Server Studio

    Comprimato recently worked with Intel on providing the best video encoding technology as part of Intel Media Server Studio by providing a plug-in for the software, which delivers high quality, low latency JPEG2000 encoding. The result is a powerful encoding option available to Media Server Studio users so that they can transcode JPEG2000 contained in IMF, AS02 or MXF OP1a files to distribution formats like AVC/H.264 and HEVC/H.265, and enable software-defined processes of IP video streams in broadcast applications. By using Intel Media Server Studio to access hardware-acceleration and programmable graphics in Intel GPUs, encoding can run super fast. This is a vital benefit because fast media processing significantly reduces latency in the connection, which is particularly important in live broadcasting. Read more.

     

    SPB TV AG Showcases Innovative Mobile TV/On-demand Transcoder enabled by Intel

    Unveiled at Mobile World Congress (MWC) 2016, SPB TV AG showed its innovative single-platform product line at the event, which included a new SPB TV Astra transcoder powered by Intel. SPB TV Astra is a professional solution for fast, high-quality processing of linear TV broadcast and on-demand video streams from a single head-end to any mobile, desktop or home device. The transcoder uses Intel® Core™ i7 processors with media accelerators and delivers high-density transcoding via Intel Media Server Studio. “We are delighted that our collaboration with Intel ensures faster and high quality transcoding, making our new product performance remarkable,” said CEO of SPB TV AG Kirill Filippov. Read more.

     

    SURF Communications collaborates with Intel for NFV & WebRTC all-inclusive platforms

    Also at MWC 2016, SURF Communication Solutions announced SURF ORION-HMP* and SURF MOTION-HMP*, the next building blocks of the SURF-HMP™ family. The new SURF-HMP architecture delivers fast, high-quality media acceleration - facilitating up to 4K video resolutions and ultra-high capacity HD voice and video processing - running on Intel® processors with integrated graphics, and optimized by Media Server Studio. SURF-HMP is flexibly architected to meet the requirements of evolving and large-scale deployments, is driven by a powerful processing engine that supports all major video and voice codecs and protocols in use, and delivers a multitude of applications such as transcoding, conferencing/mixing, MRF, playout, recording, messaging, video surveillance, encryption and more. Read more.

     


    More about Intel Media Software Tools

    Intel Media Server Studio - Provides an Intel® Media SDK, runtimes, graphics drivers, media/audio codecs, and advanced performance and quality analysis tools to help video solution providers deliver fast, high-density media transcoding.

    Intel Media SDK - A cross-platform API for developing client and media applications for Windows*. Achieve fast video playback, encode, processing, media format conversion, and video conferencing. Accelerate RAW video and image processing. Get audio decode/encode support.

    Accelerating Media Processing: Which Media Software Tool do I use? English | Chinese

    QoS Configuration and usage for Open vSwitch* with DPDK


    The goal of this article is to configure an egress policer Quality-of-Service (QoS) instance for a Data Plane Development Kit (DPDK) interface on Open vSwitch* (OVS) with DPDK. This article was written with network admin users in mind who wish to use QoS to guarantee performance for DPDK port types in their Open vSwitch server deployment.

    Note: At the time of writing, QoS for OVS with DPDK is only available on the OVS master branch. Users can download the OVS master branch as a zip here. Installation steps for OVS with DPDK are available here.

    Test Environment

    Figure 1: Test Environment

    Note: Both the host and the virtual machines (VMs) used in this setup run Fedora 23 Server 64bit with Linux* kernel 4.4.6. Each VM has a virtual NIC that is connected to the vSwitch bridge via a DPDK vhost user interface. The vnic appears as a Linux kernel device (for example, "ens0") in the VM OS. Ensure there is connectivity between the VMs (for example, ping VM2 from VM1).

    QoS in OVS with DPDK

    Before we configure QoS we need to understand its place and how it interacts with traffic in the vSwitch. When QoS is configured in OVS with DPDK it operates only on egress traffic transmitted from a port on the vSwitch.

    A list of supported QoS types for a given port (for example, vhost-user2) can be obtained with the following command.

    ovs-appctl -t ovs-vswitchd qos/show-types vhost-user2

    Currently OVS with DPDK supports only one QoS type, though this may change over time as new QoS types are supported. The call above would return the following:

    QoS type: egress-policer

    Egress policer is a QoS type supported by OVS with DPDK. An egress policer simply drops packets once a certain transmission rate is surpassed on the interface (a token bucket implementation). For a physical device it will drop traffic that is to be transmitted out of the host via a NIC. For a virtual interface, that is, DPDK vhost-user, it will drop traffic that is transmitted to the guest from the vSwitch, in effect limiting the reception rate of the traffic for the guest on that port. Figure 2 below provides an illustration of this.

    Figure 2: Egress policer QoS configured for vhost-user port

    QoS Configuration and Testing

    To test the configuration, make sure iPerf is installed on both VMs. Users should ensure that the rpm version matches the guest OS version; in this case the Fedora 64bit rpm should be used. If using a package manager such as ‘dnf’ on Fedora 23, then the user can install iPerf automatically with the following command:

    dnf install iperf

    iPerf can be run in client mode or server mode. In this example, we will run the iPerf client on VM1 and the iPerf server on VM2.

    Test Case without QoS Configured

    From VM2, run the following to deploy an iPerf server in UDP mode on port 8080:

    iperf -s -u -p 8080

    From VM1, run the following to deploy an iPerf client in UDP mode on port 8080 with a transmission bandwidth of 100Mbps:

    iperf -c 7.7.7.2 -u -p 8080 -b 100m

    This will cause VM1 to attempt to transmit UDP traffic at a rate of 100Mbps to VM2. After 10 seconds, this will output a series of values. Run these commands before QoS is configured, and you will see results similar to Figure 3 below. Note we are interested in the "Bandwidth" column in the server report.

    Figure 3: Output without QoS configured.

    The figures above indicate that a bandwidth of 100Mbps was attained between the VMs.

    Test Case with Egress Policer QoS Type Configured

    Now an egress policer will be configured on vhost-user2 to police traffic at a rate of 10Mbps with the following command:

    ovs-vsctl set port vhost-user2 qos=@newqos -- --id=@newqos create qos type=egress-policer other-config:cir=1250000 other-config:cbs=2048

    The relevant parameters are explained below:

    • ‘type=egress-policer’: The QoS type to set on the port; in this case ‘egress-policer’.
    • ‘other-config:cir’: Committed Information Rate, the maximum rate (in bytes per second) that the port should be allowed to send (see the worked example after this list).
    • ‘other-config:cbs’: Committed Burst Size, measured in bytes, representing the depth of the token bucket. At a minimum it should be set to the expected largest packet size.
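
    As a rough worked check of the values used above (assuming cir counts bytes per second, as described): a 10 Mbps limit is 10,000,000 bits/s / 8 = 1,250,000 bytes/s, which is exactly the cir value passed to the command, and cbs=2048 bytes comfortably covers a single standard 1500-byte Ethernet frame.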

    Repeat the iPerf UDP bandwidth test, and now you will see something similar to Figure 4 below.

    Figure 4: Output with QoS configured

    Note that the attainable bandwidth with QoS configured is now 9.81Mbps rather than 100Mbps. iPerf has sent UDP traffic at the 100 Mbits/sec bandwidth from its client on VM1 to its server on VM2; however, the traffic has been policed on the vSwitch vhost-user2 port's transmission path via QoS. This has limited the traffic received at the iPerf server on VM2 to ~10Mbits/sec.

    It should be noted that if using TCP traffic, the CBS parameter should be set to a sizable fraction of the CIR; a general rule of thumb is > 10%. This is because TCP reacts poorly when packets are dropped, which causes issues with packet retransmission.

    The current QoS configuration for vhost-user2 can be examined with:

    ovs-appctl -t ovs-vswitchd qos/show vhost-user2

    To remove the QoS configuration from vhost-user2, use:

    ovs-vsctl -- destroy QoS vhost-user2 -- clear Port vhost-user2 qos

    Conclusion

    In this article, we have shown a simple use case where traffic is transmitted between two VMs over Open vSwitch with DPDK configured with a QoS egress policer. We have demonstrated utility commands to show the supported QoS types, configure QoS on a given DPDK port, examine the current QoS configuration details, and finally clear a QoS configuration from a port.

    Additional Information

    For more details on QoS usage, types, parameters, and so on, users should refer to the QoS sections detailed in vswitch.xml and ovs-vswitchd.conf.db.

    Have a question? Feel free to follow up with the query on the Open vSwitch discussion mailing thread.

    To learn more about Open vSwitch with DPDK readers are encouraged to check out the following videos on Intel Network Builders University.

    Open vSwitch with DPDK Architectural Deep Dive

    DPDK Open vSwitch: Accelerating the Path to the Guest

    About the Author

    Ian Stokes is a network software engineer with Intel. His work is primarily focused on accelerated software switching solutions in user space running on Intel Architecture. His contributions to Open vSwitch with DPDK include the OVS DPDK QoS API and egress/ingress policer solutions.

    How to Install Intel® Media Server Studio in VT-d Virtualization


    Intel® Graphics Virtualization Technology (Intel® GVT) is an extension of Intel® Virtualization Technology (Intel® VT) for Directed I/O (Intel® VT-d). It provides hardware-accelerated I/O virtualization support for Intel® HD Graphics and is a GPU virtualization solution. Intel GVT allows a virtual machine (VM) to use GPU functionality through I/O calls.

    Only the "-d" (direct, or "pass-through") form of virtualization is supported. This means that only one VM at a time can use the GPU; however, other VMs can perform work that does not require the GPU.

    To learn more about generic graphics access to the GPU (not Media SDK or OpenCL), refer to Intel GVT and KVMGT.

    Terminology

    Host: the machine on which virtualization actually takes place; the Intel Media Server Studio generic package is installed on this system.
    VM: a virtual machine, a specific emulation of a computer system; the Intel Media Server Studio Gold configuration is installed on this system.
    Remote OS: the operating system used to access the host and the VMs from a remote system.

    The following steps describe how to enable Intel Media Server Studio in an Intel VT-d environment. This includes setting up the host and the VM, as well as verifying the installation.

     

    1. Set Up the Host

    1.0 Hardware Requirements

    • Host: You need a CPU that supports Intel® Media Server Studio. You can find this information in the Release Notes for Linux* here. In addition, make sure the CPU supports Intel VT-d: go to http://ark.intel.com and search with your CPU model as the keyword. In this article we use a GIGABYTE* GB-BXi7-4770R as the host.
    • Remote OS: To access the host remotely, you can use Microsoft* Windows* or Linux*. In this article we use PuTTY running on Windows 8.1 to access the host remotely.

    1.1 Configure the BIOS on the Host

    The following settings should be enabled in the BIOS: the VT-d option, and the legacy boot option (using the UEFI boot option may cause the VM to fail to start).

    1.2 Configure the Host Operating System

    1. Set up the host system with the "Server with GUI" option

    During the CentOS* 7.1 installation, change the default option under "Software Selection" > "Base Environment" to "Server with GUI".

    2. Configure a proxy, if your host requires a proxy server

    Add the following line to /etc/yum.conf:
    proxy=<proxy-server-ip>:<proxy-port>

    3. Set the environment variables needed by wget, if a proxy is required

    Add the following lines to ~/.bashrc to get wget access:
    export http_proxy=proxy-server-ip:proxy-port
    export https_proxy=proxy-server-ip:proxy-port
    export ftp_proxy=proxy-server-ip:proxy-port

    4. Install the prerequisite and dependent packages:

    # yum groupinstall "Development Tools"
    # yum install -y uuid-c++-devel lzo-devel gcc gcc-c++ git patch texinfo python-devel acpica-tools libuuid-devel ncurses-devel glib2 glib2-devel libaio-devel openssl-devel yajl-devel glibc-devel glibc-devel.i686 pixman-devel libpciaccess-devel pciutils-devel pciutils-devel-static pciutils-libs
    # yum install -y tigervnc tigervnc-server SDL.i686 bridge-utils
    # wget http://mirror.centos.org/centos/6/os/x86_64/Packages/dev86-0.16.17-15.1.el6.x86_64.rpm
    # rpm -Uvh dev86-0.16.17-15.1.el6.x86_64.rpm

    2. Set Up Dom0

    2.1 Install the Intel® Media Server Studio Generic Package

    1. Install the user-mode driver and components

    (For more information, see the Intel Media Server Studio getting started guide.)

    $ mkdir driver-setup
    $ tar xf SDK*/Generic/intel-linux-media-ocl_generic*.tar.gz -C driver-setup
    $ cd driver-setup
    (as root)
    # ./install_media.sh
    # reboot

    2. Install the kernel-mode driver (KMD)

    As a regular user, perform the following steps:

    $ wget https://www.kernel.org/pub/linux/kernel/v3.x/linux-3.14.5.tar.xz
    $ tar -xJf linux-3.14.5.tar.xz
    $ cp  /opt/intel/mediasdk/opensource/patches/kmd/3.14.5/intel-kernel-patches.tar.bz2 .
    $ tar -xvjf intel-kernel-patches.tar.bz2
    $ cd linux-3.14.5
    $ for i in ../intel-kernel-patches/*.patch; do patch -p1 < $i; done
    $ cp config-3.14.5-VTd.txt .config  ## refer note for config-3.14.5-VTd.txt
    $ make -j 8
    (as root)
    # make modules_install
    # make install
    # grub2-mkconfig -o /boot/grub2/grub.cfg
    # reboot      ## ==> new kernel 3.14.5

    2.2 Install Dom0 on Xen

    1. Build Xen from source

    $ git clone git://xenbits.xen.org/xen.git
    $ cd xen
    $ git checkout -b FOR-4.5.0 RELEASE-4.5.0
    $ ./configure
    $ make dist
    (as root)
    # make install

    2. Set up the daemons and dynamic libraries

    (as root)
    # chkconfig --level 35 xencommons on
    # chkconfig --level 35 xendomains on
    
    # cd /etc/ld.so.conf.d/
    # cat > libxen-x86_64.conf << EOF
    /usr/local/lib
    EOF
    # ldconfig -v
    # reboot

    3. Configure GRUB

    a) Edit /etc/grub.d/40_custom and copy the boot entry for the new kernel from /boot/grub2/grub.cfg.

    #!/bin/sh
    exec tail -n +3 $0
    # This file provides an easy way to add custom menu entries.  Simply type the
    # menu entries you want to add after this comment.  Be careful not to change
    # the 'exec tail' line above.
    menuentry 'CentOS Linux 7 (Core), with Linux 3.14.5-MSSr6+' --class rhel fedora --class gnu-linux --class gnu --class os --unrestricted $menuentry_id_option 'gnulinux-3.14.5-MSSr6+-advanced-62f16e28-adcb-4a47-925a-f4813854416f' {
        load_video
        insmod gzio
        insmod part_msdos
        insmod ext2
        set root='hd0,msdos3'
        if [ x$feature_platform_search_hint = xy ]; then
          search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos3 --hint-efi=hd0,msdos3 --hint-baremetal=ahci0,msdos3 --hint='hd0,msdos3'  62f16e28-adcb-4a47-925a-f4813854416f
        else
          search --no-floppy --fs-uuid --set=root 62f16e28-adcb-4a47-925a-f4813854416f
        fi
        linux16 /boot/vmlinuz-3.14.5-MSSr6+ root=UUID=62f16e28-adcb-4a47-925a-f4813854416f ro rd.lvm.lv=centos/swap crashkernel=auto rhgb quiet
        initrd16 /boot/initramfs-3.14.5-MSSr6+.img
    }

    b) Insert the following line, which sets the Dom0 memory and virtual CPU core allocation

    multiboot /boot/xen.gz dom0_mem=2048M iommu=1 dom0_max_vcpus=2 dom0_vcpus_pin=1

    c) Replace "linux16" and "initrd16" with "module", as shown below

    menuentry 'CentOS Linux 7 (Core), with Linux 3.14.5-MSSr6+ Xen Hypervisor' --class rhel fedora --class gnu-linux --class gnu --class os --unrestricted $menuentry_id_option 'gnulinux-3.14.5-MSSr6+-advanced-62f16e28-adcb-4a47-925a-f4813854416f' {
        load_video
        insmod gzio
        insmod part_msdos
        insmod ext2
        set root='hd0,msdos3'
        if [ x$feature_platform_search_hint = xy ]; then
          search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos3 --hint-efi=hd0,msdos3 --hint-baremetal=ahci0,msdos3 --hint='hd0,msdos3'  62f16e28-adcb-4a47-925a-f4813854416f
        else
          search --no-floppy --fs-uuid --set=root 62f16e28-adcb-4a47-925a-f4813854416f
        fi
    
        multiboot /boot/xen.gz dom0_mem=2048M iommu=1 dom0_max_vcpus=8 dom0_vcpus_pin=1
        module /boot/vmlinuz-3.14.5-MSSr6+ root=UUID=62f16e28-adcb-4a47-925a-f4813854416f ro rd.lvm.lv=centos/swap crashkernel=auto rhgb quiet
        module /boot/initramfs-3.14.5-MSSr6+.img
    }
    

    d) Update the GRUB menu

    (as root)
    # grub2-mkconfig -o /boot/grub2/grub.cfg

    e) Reboot and select 'CentOS Linux 7 (Core), with Linux 3.14.5-MSSr6+ Xen Hypervisor' in the GRUB menu.

    (as root)
    # reboot

    4. Check the Xen environment

    $ lsmod | grep xen
    xen_pciback            55503  0
    xen_netback            36536  0
    xen_blkback            37265  0
    xen_gntalloc           13626  0
    xen_gntdev             18675  2
    xen_evtchn             13033  1
    xenfs                  12978  1
    xen_privcmd            13243  4 xenfs
    
    (as root)
    # xl list
    Name                                        ID   Mem VCPUs      State   Time(s)
    Domain-0                                     0  7852     8     r-----      54.4
    
    # xl info
    host                   : centos71.hsw.zhou
    release                : 3.14.5-MSSr6+
    version                : #1 SMP Tue Sep 1 13:36:53 CST 2015
    machine                : x86_64
    ……
    

    2.3 Check the Media SDK on Dom0

    $ DISPLAY=:0.0 vainfo
    libva info: VA-API version 0.35.0
    libva info: va_getDriverName() returns 0
    libva info: User requested driver 'iHD'
    libva info: Trying to open /opt/intel/mediasdk/lib64/iHD_drv_video.so
    libva info: Found init function __vaDriverInit_0_32
    libva info: va_openDriver() returns 0
    vainfo: VA-API version: 0.35 (libva 1.3.1)
    vainfo: Driver version: 16.4.2.1.39163-ubit
    vainfo: Supported profile and entrypoints
    ……
    
    $ ./sample_multi_transcode_drm -hw -i::h264 input.h264 -o::h264 output.h264 -angle 180 -opencl
    $ ./sample_encode_drm h264  -hw -i test.yuv -o test.h264 -w 720 -h 480 -la -lad 20
    

    CAUTION: If you run media functions on Dom0 and Dom0 goes down for some reason, all VMs will fail.

    3. Set Up DomU

    1. Boot the host and select the "Linux 3.14.5-MSSr6+ Xen Hypervisor" menu entry

    2. Create and configure the virtual bridge xenbr0

    a) Look for the following file name: /etc/sysconfig/network-scripts/ifcfg-

    b) Enable xenbr0

    (as root)
    # br0.sh <NIC> xenbr0        ## br0.sh refer to 7.1

    c) Verify xenbr0

    $ ifconfig xenbr0
    xenbr0: flags=4419<UP,BROADCAST,RUNNING,PROMISC,MULTICAST>  mtu 1500
            inet 10.239.10.16  netmask 255.255.255.0  broadcast 10.239.10.255
            ether fc:aa:14:db:cf:0d  txqueuelen 0  (Ethernet)
            RX packets 136  bytes 14838 (14.4 KiB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 47  bytes 9236 (9.0 KiB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

    3. Create the virtual machine

    a) Create a virtual hard disk image

    # dd if=/dev/zero of=./image30g.img seek=10000 bs=3M count=1

    b) Create a configuration file for the VM; /etc/xen/xlexample.hvm is an example you can refer to

    $ cat centos71-no-vtd.hvm
    kernel = "hvmloader"
    builder = 'hvm'
    memory = 3072
    vcpus = 6
    cpus = "4,5,6,7"
    name = "centos-vm1"
    vif = [ 'mac=fc:aa:14:db:cf:4e, bridge=xenbr0, model=e1000' ]
    disk = [ 'file:/home/lmsdk/setup-vtd/vm/image30g.img,hda,w', 'file:/home/lmsdk/setup-vtd/vm/CentOS-7-x86_64-DVD-1503-01.iso,hdc:cdrom,r' ]
    xen_platform_pci=1
    device_model_version='qemu-xen'
    sdl=0
    vnc=1
    vnclisten='0.0.0.0'
    vncpasswd='intel123'
    serial='pty'
    tsc_mode="default"
    usb=1
    keymap='en-us'

    c) Start the VM and install CentOS 7.1

    (as root)
    # xl create centos71-no-vtd.hvm
    # vncviewer :0   # input password (intel123) setting in VM configure file

    During OS installation, under "Software" > "Software Selection" > "Base Environment", select "Server with GUI".

    d) Install Intel Media Server Studio using the Gold configuration. For installation instructions, refer to media_server_studio_getting_started_guide.pdf.

    e) Shut down the VM; do not test the media functions yet

    (as root)
    # xl list
    Name                                        ID   Mem VCPUs      State   Time(s)
    Domain-0                                     0  2048     8     r-----     977.1
    centos-vm1                                   3  3072     6     -b----      39.8
    
    # xl destroy 3
    

    4. Set Up VGA PCI Pass-Through Mode

    1. Log in to the host using PuTTY and make sure the host is logged in to the desktop

    2. Configure VGA PCI pass-through

    (as root)
    # xl pci-assignable-add 0000:00:02.0@1c,msitranslate=1
    # xl pci-assignable-list
    0000:00:02.0

    3. Modify the VM configuration to use Intel® GVT

    $ cat centos71.hvm
    kernel = "hvmloader"
    builder = 'hvm'
    memory = 2048
    vcpus = 2
    cpus = ["4", "5"]
    name = "centos-vm1"
    gfx_passthru=1
    pci=['00:02.0@1c,msitranslate=1']
    vif = [ 'mac=fc:aa:14:db:cf:4e, bridge=xenbr0, model=e1000' ]
    disk = [ 'file:image30g.img,hda,w', 'file:CentOS-7-x86_64-DVD-1503-01.iso,hdc:cdrom,r' ]
    xen_platform_pci=1
    device_model_version='qemu-xen-traditional'
    device_model_override='/usr/local/lib/xen/bin/qemu-dm'
    sdl=0
    #vnc=1
    #vnclisten='0.0.0.0'
    #vncpasswd='intel123'
    serial='pty'
    tsc_mode=0
    usb=1
    keymap='en-us'

    4. Start the VM with Intel® GVT

    (as root)
    # xl create centos71.hvm

    5. Verify Intel® GVT in the VM

    1. Verify the graphics PCI device

    $ lspci -nn | grep VGA
    00:02.0 VGA compatible controller [0300]: Intel Corporation Device [8086:0d22] (rev 08)
    

    2. Refer to the "Verify correct installation" section in the Intel Media Server Studio R6 Getting Started Guide and verify Intel Media Server Studio as the root user.

    3. Verify OpenCL and the look-ahead bitrate control method in the VM

    (as root)
    # DISPLAY=:0.0 ./sample_multi_transcode_drm -hw -i::h264 test.h264 -o::h264 output.h264 -angle 180 -opencl
    # DISPLAY=:0.0 ./sample_encode_drm h264  -hw -i test.yuv -o test-la-out.h264 -w 720 -h 480 -la -lad 20

    6. Scripts for Setting Up Intel® GVT

    1. Run br0.sh to set up the virtual bridge

    #!/bin/bash
    if [ $# != 2 ] ; then
        echo "Usage: `basename $0`  "
        echo "       NIC: find it via ifconfig"
        echo "       BRIDGE: xenbr0 or br0"
        exit 1;
    fi
    
    eth0=$1
    br0=$2
    
    ifconfig $eth0 down                         # close eth0 first
    brctl addbr $br0                            # add virtual bridge br0
    brctl addif $br0 $eth0                      # add interface eth0 on br0
    brctl stp $br0 off                          # only on bridge, so close generate tree protocol
    brctl setfd $br0 1                          # set br0 delay time of forward
    brctl sethello $br0 1                       # set br0 hello time
    ifconfig $br0 0.0.0.0 promisc up            # open br0
    ifconfig $eth0 0.0.0.0 promisc up           # open eth0
    dhclient $br0                               # br0 get IP address from DHCP server
    brctl show $br0                             # show virtual bridge list
    brctl showstp $br0                          # show interface information on br0
    

    7. Troubleshooting

    The following article and materials can help you resolve installation issues.

    How to Install and Deploy Intel® Media Server Studio on CentOS* 7.1


    CentOS* 7.1 with its default kernel is treated as a primary operating system for the current release of Intel Media Server Studio. For more information, see the Getting Started Guide.

    The build_kernel_rpm_CentOS.sh script uses the following kernel: kernel-3.10.0-229.1.2.el7.src.rpm. However, there are many other, newer kernels; see vault.centos.org for a list of the latest kernels.

    The following sections describe the details of installing Intel Media Server Studio on kernels from the official CentOS* site other than the Intel Media Server Studio Gold kernel version (kernel-3.10.0-229.11.1.el7.src.rpm).

    1 Installing Intel Media Server Studio on CentOS* 7.1

    The following sections describe the steps to install Intel Media Server Studio on the CentOS 7.1 OS.

    1.1 Install the User-Mode Graphics Driver (UMD)

    (as regular user)
    $ tar -xvzf mediaserverstudio*.tar.gz
    $ cd mediaserverstudio*
    $ tar -xvzf SDK*.tar.gz
    $ cd SDK*/CentOS
    $ tar -xvzf install_scripts*.tar.gz
    
    (as root)
    # ./install_sdk_UMD_CentOS.sh
    

    For more information, see the Getting Started Guide.

    1.2 Install the Kernel-Mode Graphics Driver (KMD)

    The following sections describe three different options for installing the kernel-mode graphics driver (KMD).

    Via the kernel source
    Follow the steps below to install the Intel Media Server Studio kernel-mode driver (KMD) from the kernel source.

    1. Set up the rpm build environment

    (as regular user)
    $ mkdir -p ~/rpmbuild/{BUILD,BUILDROOT,RPMS,SOURCES,SPECS,SRPMS}
    $ echo '%_topdir %(echo $HOME)/rpmbuild'> ~/.rpmmacros

    2. Install the kernel source rpm package

    (as regular user)
    $ wget http://vault.centos.org/7.1.1503/updates/Source/SPackages/kernel-3.10.0-229.11.1.el7.src.rpm
    $ rpm -i kernel-3.10.0-229.11.1.el7.src.rpm 2>&1 | grep -v mock
    $ cd ~/rpmbuild/SPECS
    $ rpmbuild -bp --target=$(uname -m) kernel.spec

    3. Apply the Intel Media Server Studio kernel patches

    (as regular user)
    $ cd ~/rpmbuild/BUILD/kernel-3.10.0-229.11.1.el7
    $ cp /opt/intel/mediasdk/opensource/patches/kmd/3.14.5/intel-kernel-patches.tar.bz2 .
    $ tar -xvjf intel-kernel-patches.tar.bz2
    $ cd linux-3.10.0-229.11.1.el7.centos.x86_64
    $ for i in ../intel-kernel-patches/*.patch; do patch -p1 < $i; done

    Note: You need to resolve any patch conflicts and rejections caused by applying the patches.

    4. Build and install the patched kernel:

    (as regular user)
    $ make oldconfig
    $ sed -i 's/CONFIG_LOCALVERSION=""/CONFIG_LOCALVERSION="-MSSr6"/g' .config
    $ make -j8
    
    (as root)
    # make modules_install
    # make install
    # reboot ==> new kernel

    5. Create source and binary rpm packages so you can easily install on other machines (optional)

    (as regular user)
    $ make rpm-pkg

    6. Verify the installation

    See "Verifying correct installation" in media_server_studio_getting_started_guide.pdf.

    Via rpmbuild

    1. Upgrade to the new kernel

    (as root)
    # yum update kernel-3.10.0-229.11.1.el7   ## upgrade to new kernel

    2. Apply the kernel patches to the kernel source

    See steps 1 through 3 in the "Via the kernel source" section above.

    3. Build and install the patched kernel

    (as regular user)
    $ cd ~/rpmbuild/SPECS
    $ sed -i "s#%define specrelease 229.11.1%{?dist}#%define specrelease 229.11.1%{?dist}.MSSr6.39163%{?dist}#" kernel.spec
    $ rpmbuild -ba --target=$(uname -m) kernel.spec
    
    (as root)
    # cd ~/rpmbuild/RPMS/x86_64
    # rpm -Uvh kernel-3.10.*.rpm
    # reboot
    

    Note: The source rpm package can be found in ~/rpmbuild/SRPMS.

    4. Verify that the installation succeeded, then continue

    See "Verifying correct installation" in media_server_studio_getting_started_guide.pdf.

    Via build_kernel_rpm_CentOS.sh

    1. Upgrade to the new kernel

    (As root)
    # yum update kernel-3.10.0-229.11.1.el7   ## upgrade to new kernel

    2. Modify build_kernel_rpm_CentOS.sh

    -CENTOS7_LATEST_KER_BID=229.1.2
    +CENTOS7_LATEST_KER_BID=229.11.1

    3. Build and install the patched kernel

    (As a regular user)
    $ cp build_kernel_rpm_CentOS.sh /MSS
    $ cd /MSS
    $ ./build_kernel_rpm*.sh 6 ## 6 for R6 version
    
    (as root)
    # cd /MSS/rpmbuild/RPMS/x86_64
    # rpm -Uvh kernel-3.10.*.rpm
    # reboot
    

    Note: If this method does not succeed, see steps 3 and 4 in the "Via the kernel source" section.

    1.3 Troubleshooting

    1.3.1 Fix patch rejections and install the new KMD

    The Intel® graphics driver on Linux* consists of three important kernel modules (drm.ko, drm_kms_helper.ko, and i915.ko). Normally, the Intel Media Server Studio kernel patches apply to these kernel modules. If applying the Intel Media Server Studio kernel patches to the CentOS* 7.1 source fails, you will see rejected files (*.rej) in the kernel source tree. To fix the problem, you need to modify them manually and merge the changes into the kernel source. You also need to fix any compile and link problems during the kernel rebuild.

    Then re-run the following commands:

    $ ./sample_multi_transcode_drm -hw -i::h264 test.h264 -o::h264 output.h264 -angle 180 -opencl
    $ ./sample_encode_drm h264  -hw -i test.yuv -o test-la-out.h264 -w 720 -h 480 -la -lad 20

    If the commands run successfully, the kernel patches are ready to use. If the commands fail, you need to debug the KMD code. You can build DRM and i915 separately with the following commands.

    (As a regular user)
    $ cp /boot/config-`uname -r` .config
    $ make oldconfig
    $ make menuconfig
    $ make prepare
    $ make modules_prepare
    $ make M=drivers/gpu/drm

    You can install DRM and i915 separately with the following commands.

    (As root)
    # cp drivers/gpu/drm/drm.ko /lib/modules/`uname -r`/kernel/drivers/gpu/drm
    # cp drivers/gpu/drm/drm_kms_helper.ko /lib/modules/`uname -r`/kernel/drivers/gpu/drm
    # cp drivers/gpu/drm/i915/i915.ko /lib/modules/`uname -r`/kernel/drivers/gpu/drm/i915
    # mkinitrd -f -v /boot/initrd-$(uname -r).img $(uname -r)
    # reboot

    1.3.2 Verifying the new KMD with the Intel Media Server Studio sample applications

    See the "Samples and Tutorials" section in media_server_studio_getting_started_guide.pdf and run the following commands to test the KMD.

    $ ./sample_multi_transcode_drm -hw -i::h264 test.h264 -o::h264 output.h264 -angle 180 -opencl
    $ ./sample_encode_drm h264  -hw -i test.yuv -o test-la-out.h264 -w 720 -h 480 -la -lad 20

    1.3.3 Frequently asked questions

    • Q: Can I install Intel Media Server Studio on my own CentOS* 7.1 kernel? If so, how?
      A: Yes. After fixing the patches, Intel Media Server Studio should run on your kernel.
    • Q: If the installation fails, how do I roll back to the previous kernel?
      A: Try selecting a previous kernel from the grub menu. If the grub menu is not available, run "make install" for the previous kernel and, before rebooting, confirm that the grub menu contains the kernel you want to boot; see the sketch below.
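
    A minimal sketch for inspecting and selecting an earlier grub entry on CentOS* 7.1 (the config path assumes a BIOS install, and the entry index is an assumption; pick the index of the previous kernel from the listing):

    (as root)
    # awk -F\' '/^menuentry / {print $2}' /boot/grub2/grub.cfg   ## list boot entries; the first entry is index 0
    # grub2-set-default 1                                        ## hypothetical index; choose the previous kernel's entry
    # reboot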

    2 Deploying Intel® Media Server Studio on CentOS* 7.1

    This section describes how to test and deploy Intel Media Server Studio on CentOS* 7.1.

    1. Deploy the Intel Media Server Studio UMD

    Use the binary runtime packages rather than the development packages. You can inspect these packages with "rpm -qlp *.rpm"; an install sketch follows the listing below.

    (as regular user)
    $ tar -xvzf mediaserverstudio*.tar.gz
    $ cd mediaserverstudio*
    $ tar -xvzf SDK*.tar.gz
    $ cd SDK*/CentOS
    $ find -name "*.rpm" | grep -v devel ## list runtime rpm package
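
    A minimal sketch for installing only the runtime packages listed above (it assumes, as in the listing, that the development packages are the ones whose names contain "devel"):

    (as root)
    # rpm -Uvh $(find . -name "*.rpm" | grep -v devel)   ## install the runtime rpm packages, skipping the devel rpms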

    2. Deploy the Intel Media Server Studio KMD

    Create the kernel binary rpm package:

    (as regular user)
    $ cd /MSS
    $ ./build_kernel_rpm*.sh 6 ## 6 for R6 version

    Note: use the kernel binary rpm package created by build_kernel_rpm_CentOS.sh; more details can be found in media_server_studio_getting_started_guide.pdf. To produce the kernel source rpm package instead, switch the rpmbuild option in the script from -bb to -bs and run it again:

    (as regular user)
    $ cd /MSS
    $ sed -i 's/\-bb/\-bs/' build_kernel_rpm_CentOS.sh
    $ ./build_kernel_rpm*.sh 6 ## 6 for R6 version

    Note: if you run into problems with the kernel binary rpm package, rebuild it from the kernel source rpm package, which can be found at /MSS/rpmbuild/SRPMS/kernel-3.10.0-229.1.2.39163.MSSr6.el7.centos.src.rpm.

    3. Rebuild the kernel binary rpm package from the kernel source rpm package

    (as regular user)
    $ mkdir -p ~/rpmbuild/{BUILD,BUILDROOT,RPMS,SOURCES,SPECS,SRPMS}
    $ echo '%_topdir %(echo $HOME)/rpmbuild'> ~/.rpmmacros
    $ rpmbuild --rebuild --target=$(uname -m) --with firmware --without debug --without debuginfo --without perf --without tools kernel-3.10.0-229.1.2.39163.MSSr6.el7.centos.src.rpm

    4. Conclusion

    We recommend including the following three components when deploying Intel Media Server Studio. You can run your own script to install these components on CentOS* 7.1; a minimal sketch follows the list.

    • The UMD runtime rpm packages, without the development rpm packages.
    • The kernel binary rpm package.
    • The kernel source rpm package.
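
    A minimal deployment sketch that ties the three components together (the package locations follow the paths used earlier in this article and are assumptions about your environment):

    (as root)
    # cd SDK*/CentOS
    # rpm -Uvh $(find . -name "*.rpm" | grep -v devel)       ## 1. UMD runtime packages, without the devel rpms
    # rpm -Uvh /MSS/rpmbuild/RPMS/x86_64/kernel-3.10.*.rpm   ## 2. kernel binary rpm built by build_kernel_rpm_CentOS.sh
    # cp /MSS/rpmbuild/SRPMS/kernel-3.10.*.src.rpm /root/    ## 3. keep the kernel source rpm available for rebuilds
    # reboot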
