
Connecting an Intel® IoT Gateway to Amazon Web Services*


This guide will walk you through adding the IoT Cloud repository to your Intel® IoT Gateway and adding support for Amazon Web Services* so you can begin developing applications for this platform in your programming language of choice.

Prerequisites

  • Intel® IoT Gateway Technology running IDP 3.1 or above with internet access
  • A development device (e.g., laptop) on the same network as the Intel® IoT Gateway
  • Terminal access to the Intel® IoT Gateway from your development device
  • Amazon Web Services account: https://aws.amazon.com/ 

Please see the following documentation for setting up your Intel® IoT Gateway: https://software.intel.com/en-us/node/633284

Adding the IoT Cloud repository to your Intel® IoT Gateway


1. Access the console on your gateway using either a monitor and keyboard connected directly, or SSH (recommended).

2. Add the GPG key for the cloud repository using the following command:
rpm --import http://iotdk.intel.com/misc/iot_pub.key

3. On your development device (e.g., laptop) open a web browser and load the IoT Gateway Developer Hub interface by entering the IP address of your gateway in the address bar.
Tip: You can find your gateway’s IP address using the ifconfig command.

4. Log in to the IoT Gateway Developer Hub interface using your credentials. The default login and password are both root.


 
5. Add the IoT Cloud repository.

6. Go to the Packages section and click the Add Repo + button.

7. Populate the fields with the following information and click Add Repository:

Name: IoT_Cloud
URL: http://iotdk.intel.com/repos/iot-cloud/wrlinux7/rcpl13

8. Finally, click the Update Repositories button to update the package list.

Adding AWS* support to your Intel® IoT Gateway

1. Click the Add Packages + button to bring up the list of packages you can install.

2. Search for cloud-aws using the search box at the top of the package window. Click the Install button next to the packagegroup-cloud-aws entry.

Set up your user in the AWS* console

1. In a browser, navigate to the AWS* console at https://console.aws.amazon.com and log in to your AWS account.

2. Assign the AWSIoTFullAccess policy to your user.


 
3. Click on your account name in the top right corner of the console and select Security Credentials from the drop-down list.

If you get the popup message above, select Continue to Security Credentials.

4. Select Users from the left-hand panel to get a list of all users in your AWS account. If there are no users listed click the Create New Users button, enter the usernames you would like to create and click Create. Your AWS users should then be listed as above. 

5. Click on your user to show a summary page. Select the Permissions tab and click Attach Policy.

6. Scroll down through the list of policies until you find AWSIoTFullAccess. Select this policy and click Attach Policy to add this policy to your user.

7. Create an access key for your device.

8. Back on the user summary screen, select the Security Credentials tab and click on Create Access Key.

At this point, a window will appear showing your unique access key pair. The Secret Access Key will not be shown again once this window is closed; if you lose it, you will need to generate a new access key.

Warning: Do not close this window before completing the next section!

Configuring your gateway

Tip: It is recommended that you use SSH to connect to your gateway, or access the command line through the Intel Developer Hub interface to make copying access keys easier. If you are accessing the command line of your gateway directly, using a monitor and keyboard, you will need to manually enter the access key and secret access key in the next section.

1. Add your user credentials to the gateway.

Enter the following command to add your user credentials to the gateway:
aws configure
When prompted, enter the following information (an example session is shown after this list):

  • AWS Access Key ID: The Access Key ID you just generated.
  • AWS Secret Access Key: The Secret Access Key that pairs with the Access Key ID you just generated.
  • Default region name: The region to use (e.g. eu-west-1); see http://docs.aws.amazon.com/general/latest/gr/rande.html#iot_region for the list of regions.
  • Default output format: The default is fine, so hit Enter to continue.
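
A typical session looks like the following (the key values shown here are placeholders, not real credentials):

aws configure
AWS Access Key ID [None]: AKIAXXXXXXXXXXXXXXXX
AWS Secret Access Key [None]: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Default region name [None]: eu-west-1
Default output format [None]: json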

2. Create a thing.

Enter the following commands on your gateway to create an associated thing in your AWS* instance:

aws iot create-thing --thing-name gateway-test-01

If adding the thing is successful, you will get output similar to the following.
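
The exact values will differ, but the JSON response typically has this shape (the account ID here is a placeholder):

{
    "thingArn": "arn:aws:iot:eu-west-1:123456789012:thing/gateway-test-01",
    "thingName": "gateway-test-01"
}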

3. Create a permissive policy.

Enter the following command to create a new policy in your AWS instance:

aws iot create-policy --policy-name gateway-policy --policy-document '{ "Version": "2012-10-17", "Statement": [{ "Effect": "Allow", "Action":["iot:*"], "Resource": ["*"] }] }'

If the policy is successfully added, the console output should be similar to the following.
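
As a rough sketch, the response echoes the policy you created, with fields like these (placeholder account ID):

{
    "policyName": "gateway-policy",
    "policyArn": "arn:aws:iot:eu-west-1:123456789012:policy/gateway-policy",
    "policyDocument": "{ \"Version\": \"2012-10-17\", ... }",
    "policyVersionId": "1"
}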

4. Create keys and a certificate for the thing.

Enter the following commands on your gateway to create keys and certificates to communicate with AWS*:

wget -O rootCA.pem https://www.symantec.com/content/en/us/enterprise/verisign/roots/VeriSign-Class%203-Public-Primary-Certification-Authority-G5.pem

aws iot create-keys-and-certificate --set-as-active --certificate-pem-outfile cert.pem --private-key-outfile privkey.pem

 
You should get output similar to the following, followed by a lot of JSON data. For the next step, we need to know only the certificateArn value, which is at the beginning of the console output.
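
For reference, the start of the response looks like this (truncated; the ARN shown is the one used in the example below):

{
    "certificateArn": "arn:aws:iot:eu-west-1:681450608718:cert/122c86b84c6e0b919353882c03ca37385855897e16804438a20d44b3f9934cb3",
    ...
}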

5. Attach the policy to the certificate.

You now need to attach the thing certificate we just generated to the policy you created earlier. Do this with the following command:

aws iot attach-principal-policy --policy-name ${POLICY_NAME} --principal ${CERTIFICATE_ARN}

Be sure to enter the policy name you entered above (e.g. gateway-policy) and the certificateArn from the previous step. For example:

aws iot attach-principal-policy --policy-name gateway-policy --principal arn:aws:iot:eu-west-1:681450608718:cert/122c86b84c6e0b919353882c03ca37385855897e16804438a20d44b3f9934cb3

6. Check the device in the AWS* IoT Console.

In your browser navigate to the AWS* console home screen by clicking on the AWS icon in the top-left of the page. In the top right-hand corner check that the region you configured your gateway with is selected (e.g. Ireland) and then select the AWS IoT service from the list.

Your AWS IoT dashboard should now contain the thing, policy, and certificate you just configured on your gateway.

Sending data to AWS* IoT service using Python

Now that your gateway is configured, you are ready to begin sending data to AWS IoT. There are a number of Python samples included which you can use for testing.

Monitor gateway communication in the AWS* console

1. From your AWS IoT console select MQTT Client near the top-right of the page.

2. In the MQTT Client window, enter the thing name for your gateway which you assigned earlier (e.g. gateway-test-01) and click Connect.

The Connection status indicator will turn green and say Connected if AWS is able to communicate with your gateway.


 
3. Select Subscribe to topic from the MQTT Client Actions.

4. In the Subscription topic field, enter sdk/test/Python and click Subscribe.

Messages received from the gateway will now appear in the message box on the left.

Send messages from the gateway

1. Enter the following command to get the endpoint to send messages to AWS:

aws iot describe-endpoint

This will return the endpointAddress, which we will need for the next step, so copy the address shown in quotes.
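
The output is a small JSON document; the endpoint shown here is the one used in the working example later in this guide:

{
    "endpointAddress": "a1gx5hswnkj6kf.iot.eu-west-1.amazonaws.com"
}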

2. Navigate to the directory that contains the AWS samples:

cd /usr/share/awsiotsdk/samples/python/

3. Run the sample using the following command:

python basicPubSub.py -e [ENDPOINT] -r [ROOTCA_PATH] -c [CERT_PATH] -k [PRIVATE_KEY_PATH]
  • ENDPOINT: This is the endpoint address you discovered in the previous step.
  • ROOTCA_PATH: This is the path to the rootCA.pem file you downloaded earlier.
  • CERT_PATH: This is the path to the cert.pem file you generated earlier.
  • PRIVATE_KEY_PATH: This is the path to the privkey.pem file you generated earlier.

All of the certificates and keys should have been downloaded or created in the same path. By default this will be /root or $HOME unless you changed directory after logging into the gateway.

Below is a working example:

python basicPubSub.py -e a1gx5hswnkj6kf.iot.eu-west-1.amazonaws.com -r $HOME/rootCA.pem -c $HOME/cert.pem -k $HOME/privkey.pem

If the sample app is running correctly, you will start seeing console output indicating that messages are being sent on the sdk/test/Python topic.

To verify this, head back to your browser and take a look in the message window. You should see new messages being displayed similar to those in the screenshot below.

Your gateway is now connected to AWS* IoT and able to send and receive data.


Intel® Software Guard Extensions Tutorial Series: Part 3, Designing for Intel® SGX


In Part 3 of the Intel® Software Guard Extensions (Intel® SGX) tutorial series we’ll talk about how to design an application with Intel SGX in mind. We’ll take the concepts that we reviewed in Part 1, and apply them to the high-level design of our sample application, the Tutorial Password Manager, laid out in Part 2. We’ll look at the overall structure of the application and how it is impacted by Intel SGX and create a class model that will prepare us for the enclave design and integration.

You can find the list of all of the published tutorials in the article Introducing the Intel® Software Guard Extensions Tutorial Series.

While we won’t be coding up enclaves or enclave interfaces just yet, there is source code provided with this installment. The non-Intel SGX version of the application core, without its user interface, is available for download. It comes with a small test program, a console application written in C#, and a sample password vault file.

Designing for Enclaves

This is the general approach we’ll follow for designing the Tutorial Password Manager for Intel SGX:

  1. Identify the application’s secrets.
  2. Identify the providers and consumers of those secrets.
  3. Determine the enclave boundary.
  4. Tailor the application components for the enclave.

Identify the Application’s Secrets

The first step in designing an application for Intel SGX is to identify the application’s secrets.

A secret is anything that is not meant to be known or seen by others. Only the user or the application for which it is intended should have access to a secret, and it should not be exposed to other users or applications regardless of their privilege level. Potential secrets can include financial information, medical records, personally identifiable information, identity data, licensed media content, passwords, and encryption keys.

In the Tutorial Password Manager, there are several items that are immediately identifiable as secrets, shown in Table 1.


Secret

The user’s account passwords
The user’s account logins
The user’s master password or passphrase
The master key for the password vault
The encryption key for the account database

Table 1: Preliminary list of application secrets.


These are the obvious choices, but we’re going to expand this list by including all of the user’s account information and not just their logins. The revised list is shown in Table 2.


Secret

The user’s account passwords
The user’s account information
The user’s master password or passphrase
The master key for the password vault
The encryption key for the account database

Table 2: Revised list of application secrets.

Even without revealing the passwords, the account information is valuable to attackers. Exposing this data in the password manager leaks valuable clues to those with malicious intent. With this data, they can choose to launch attacks against the services themselves, perhaps using social engineering or password reset attacks, to obtain access to the owner’s account because they know exactly what to target.

Identify the Providers and Consumers of the Application’s Secrets

Once the application’s secrets have been identified, the next step is to determine their origins and destinations.

In the current version of Intel SGX, the enclave code is not encrypted, which means that anyone with access to the application files can disassemble and inspect it. By definition, something cannot be a secret if it is open to inspection, and that means that secrets should never be statically compiled into enclave code. An application’s secrets must originate from outside its enclaves and be loaded into them at runtime. In Intel SGX terminology, this is referred to as provisioning secrets into the enclave.

When a secret originates from a component outside of the Trusted Compute Base (TCB), it is important to minimize its exposure to untrusted code. (One of the main reasons why remote attestation is such a valuable component of Intel SGX is that it allows a service provider to establish a trusted relationship with an Intel SGX application, and then derive an encryption key that can be used to provision encrypted secrets to the application that only the trusted enclave on that client system can decrypt.) Similar care must be taken when a secret is exported out of an enclave. As a general rule, an application’s secrets should not be sent to untrusted code without first being encrypted inside of the enclave.

Unfortunately for the Tutorial Password Manager application, we do need to send secrets into and out of the enclave, and those secrets will have to exist in clear text at some point. The end user will be entering his or her account information and password via a keyboard or touchscreen, and recalling it at a future time as needed. Their account passwords will need to be shown on the screen, and even copied to the Windows* clipboard on request. These are core requirements for a password manager application to be useful.

What that means for us is that we can’t completely eliminate the attack surface: we can only minimize it, and we’ll need some mitigation strategy for dealing with secrets when they exist outside the enclave in plain text.

Secret                                      Source                     Destination

The user’s account passwords                User input*                User interface*
                                            Password vault file        Clipboard*
                                                                       Password vault file

The user’s account information              User input*                User interface*
                                            Password vault file        Password vault file

The user’s master password or passphrase    User input                 Key derivation function

The master key for the password vault       Key derivation function    Database key crypto

The encryption key for the password         Random generation          Password vault crypto
database                                    Password vault file        Password vault file

Table 3: Application secrets, their sources, and their destinations. Potential security risks are denoted with an asterisk (*).

Table 3 adds the sources and destinations for the Tutorial Password Manager’s secrets. Potential problems—areas where secrets may be exposed to untrusted code—are denoted with an asterisk (*).

Determine the Enclave Boundary

Once the secrets have been identified, it’s time to determine the boundary for the enclave. Start by looking at the data flow of secrets through the application’s core components. The enclave boundary should:

  • Encompass the minimum set of critical components that act on your application’s secrets.
  • Completely contain as many secrets as is feasible.
  • Minimize the interactions with, and dependencies on, untrusted code.

The data flows and chosen enclave boundary for the Tutorial Password Manager application are shown in Figure 1.

Figure 1: Data flow for secrets in the Tutorial Password Manager.

Here, the application secrets are depicted as circles, with blue circles representing secrets that will exist in plain text (unencrypted) at some point during the application’s execution and green circles representing secrets that are encrypted by the application. The enclave boundary has been drawn around the encryption and decryption routines, the key derivation function (KDF) and the random number generator. This does several things for us:

  1. The database/vault key, which is used to encrypt some of our application’s secrets (account information and passwords), is generated within the enclave and is never sent outside of it in clear text.
  2. The master key is derived from the user’s passphrase inside the enclave, and used to encrypt and decrypt the database/vault key. The master key is ephemeral and is never sent outside the enclave in any form.
  3. The database/vault key, account information, and account passwords are encrypted inside the enclave using encryption keys that are not visible to untrusted code (see #1 and #2).

Unfortunately, we have issues with unencrypted secrets crossing the enclave boundary that we simply can’t avoid. At some point during the Tutorial Password Manager’s execution, a user will have to enter a password on the keyboard or copy a password to the Windows clipboard. These are insecure channels that can’t be placed inside the enclave, and the operations are absolutely necessary for the functioning of the application. This is potentially a huge problem, which is compounded by the decision to build the application on top of a managed code base.

Protecting Secrets Outside the Enclave

There are no complete solutions for securing unencrypted secrets outside the enclave, only mitigation strategies that reduce the attack surface. The best we can do is minimize the amount of time that this information exists in a form that is easily compromised.

Here is some general advice for handling sensitive data in untrusted code:

  • Zero-fill your data buffers when you are done with them. Be sure to use functions such as SecureZeroMemory (Windows) and memzero_explicit (Linux) that are guaranteed to not be optimized out by the compiler.
  • Do not use the C++ standard template library (STL) containers to store sensitive data. The STL containers have their own memory management, which makes it difficult to ensure that the memory allocated to an object is securely wiped when the object is deleted. (By using custom allocators you can address this issue for some containers.)
  • When working with managed code such as .NET, or languages that feature automatic memory management, use storage types that are specifically designed for holding secure data. Other storage types are at the mercy of the garbage collector and just-in-time compilation, and may not be cleared or freed on demand (if at all).
  • If you must place data on the clipboard be sure to clear it after a short length of time. In particular, don’t allow it to remain there after the application has exited.

For the Tutorial Password Manager project, we have to work with both native and managed code. In native code, we’ll allocate wchar_t and char buffers, and use SecureZeroMemory to wipe them clean before freeing them. In the managed code space, we’ll employ .NET’s SecureString class.
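
As a minimal sketch of that native-code pattern (the buffer and length names here are illustrative, not taken from the project source):

// Illustrative only: wipe a native password buffer before freeing it so the
// plain-text secret does not linger on the heap.
wchar_t *wpass = new wchar_t[wpass_len];
// ... fill and use wpass ...
SecureZeroMemory(wpass, wpass_len * sizeof(wchar_t)); // not optimized out
delete[] wpass;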

When sending a SecureString to unmanaged code, we’ll use the helper functions from System::Runtime::InteropServices to marshal the data. 

using namespace System::Runtime::InteropServices;

LPWSTR PasswordManagerCore::M_SecureString_to_LPWSTR(SecureString ^ss)
{
	IntPtr wsp = IntPtr::Zero;

	if (!ss) return NULL;

	wsp = Marshal::SecureStringToGlobalAllocUnicode(ss);
	return (wchar_t *) wsp.ToPointer();
}

When marshaling data in the other direction, from native code to managed code, we have two methods. If the SecureString object already exists, we’ll use the Clear and AppendChar methods to set the new value from the wchar_t string.

password->Clear();
for (int i = 0; i < wpass_len; ++i) password->AppendChar(wpass[i]);

When creating a new SecureString object, we’ll use the constructor form that creates a SecureString from an existing wchar_t string.

try {
	name = gcnew SecureString(wname, (int) wcslen(wname));
	login = gcnew SecureString(wlogin, (int) wcslen(wlogin));
	url = gcnew SecureString(wurl, (int) wcslen(wurl));
}
catch (...) {
	rv = NL_STATUS_ALLOC;
}

Our password manager also supports transferring passwords to the Windows clipboard. The clipboard is an insecure storage space that can potentially be accessed by other users and for this reason Microsoft recommends that sensitive data never be placed on there. The point of a password manager, though, is to make it possible for users to create strong passwords that they do not have to remember. It also makes it possible to create lengthy passwords consisting of randomly generated characters which would be difficult to type by hand. The clipboard provides much needed convenience in exchange for some measure of risk.

To mitigate this risk, we need to take some extra precautions. The first is to ensure that the clipboard is emptied when the application exits. This is accomplished in the destructor in one of our native objects.

PasswordManagerCoreNative::~PasswordManagerCoreNative(void)
{
	if (!OpenClipboard(NULL)) return;
	EmptyClipboard();
	CloseClipboard();
}

We’ll also set up a clipboard timer. When a password is copied to the clipboard, we set a timer for 15 seconds and execute a function to clear the clipboard when it fires. If a timer is already running, meaning a new password was placed on the clipboard before the old one expired, that timer is cancelled and the new one takes its place.

void PasswordManagerCoreNative::start_clipboard_timer()
{
	// Use the default Timer Queue

	// Stop any existing timer
	if (timer != NULL) DeleteTimerQueueTimer(NULL, timer, NULL);

	// Start a new timer
	if (!CreateTimerQueueTimer(&timer, NULL, (WAITORTIMERCALLBACK)clear_clipboard_proc,
		NULL, CLIPBOARD_CLEAR_SECS * 1000, 0, 0)) return;
}

static void CALLBACK clear_clipboard_proc(PVOID param, BOOLEAN fired)
{
	if (!OpenClipboard(NULL)) return;
	EmptyClipboard();
	CloseClipboard();
}

Tailor the Application Components for the Enclave

With the secrets identified and the enclave boundary drawn, it’s time to structure the application while taking the enclave into account. There are significant restrictions on what can be done inside of an enclave, and these restrictions will mandate which components live inside the enclave, which live outside of it, and, when porting an existing application, which ones may need to be split in two.

The biggest restriction that impacts the Tutorial Password Manager is that enclaves cannot perform any I/O operations. The enclave can’t read from the keyboard or write to the display so all of our secrets—passwords and account information—must be marshaled into and out of the enclave. It also can’t read from or write to the vault file: the components that parse the vault file must be separated from components that perform the physical I/O. That means we are going to have to marshal more than just our secrets across the enclave boundary: we have to marshal the file contents as well.

Figure 2: Class diagram for the Tutorial Password Manager.

Figure 2 shows the basic class diagram for the application core (excluding the user interface), including which classes serve as the sources and destinations for our secrets. Note that the PasswordManagerCore class is considered the source and destination for secrets which must interact with the GUI in this diagram for simplicity’s sake. Table 4 briefly describes each class and its purpose.

Class                        Type                 Function

PasswordManagerCore          Managed              Interacts with the C# graphical user interface (GUI) and marshals data to the native layer.

PasswordManagerCoreNative    Native, Untrusted    Interacts with the managed PasswordManagerCore class. Also responsible for converting between Unicode and multibyte character data (this will be discussed in more detail in Part 4).

VaultFile                    Managed              Reads and writes the vault file.

Vault                        Native, Enclave      Stores the password vault data in AccountRecord members. Deserializes the vault file on reads, and reserializes it for writing.

AccountRecord                Native, Enclave      Stores the account information and password for each account in the user’s password vault.

Crypto                       Native, Enclave      Performs cryptographic functions.

DRNG                         Native, Enclave      Interface to the random number generator.

Table 4: Class descriptions.

Note that we had to split the handling of the vault file into two pieces: one that does the physical I/O, and one that stores its contents once they are read and parsed. We also had to add serialization and deserialization methods to the Vault object as intermediate sources and destinations for our secrets. All of this is necessary because the VaultFile class can’t know anything about the structure of the vault file itself, since that would require access to cryptographic functions that are located inside the enclave.

We’ve also drawn a dotted line connecting the PasswordManagerCoreNative class to the Vault class. As you might recall from Part 2, enclaves can only link to C functions. These two C++ classes cannot directly communicate with one another: they must use an intermediary, which is denoted by the Bridge Functions box.

The Non-Intel® Software Guard Extensions Code Path

The diagram in Figure 2 is for the Intel SGX code path. The PasswordManagerCoreNative class cannot link directly to the Vault class because the latter is inside the enclave. In the non-Intel SGX code path, however, there is no such restriction: PasswordManagerCoreNative can directly contain a member of class Vault. This is the only shortcut we’ll take in the application design for the non-Intel SGX code path. To simplify the enclave integration, the non-enclave code path will still separate the vault processing into the Vault and VaultFile classes.

Another key difference between the two code paths is that the cryptographic functions in the Intel SGX path will come from the Intel SGX SDK. The non-Intel SGX code path can’t use these functions, so they will draw upon Microsoft’s Cryptography Next Generation* API (CNG). That means we have to maintain two distinct copies of the Crypto class: one for use in enclaves and one for use in untrusted space. We’ll have to do the same with the DRNG class, too, since the Intel SGX code path will call sgx_read_rand instead of using the RDRAND intrinsic.

Sample Code

As mentioned in the introduction, there is sample code provided with this part for you to download. The attached archive includes the source code for the Tutorial Password Manager core DLL, prior to enclave integration. In other words, this is the non-Intel SGX version of the application core. There is no user interface provided, but we have included a rudimentary test application written in C# that runs through a series of test operations. It executes two test suites: one that creates a new vault file and performs various operations on it, and one that acts on a reference vault file that is included with the source distribution. As written, the test application expects the test vault to be located in your Documents folder, though you can change this in the TestSetup class if needed.

This source code was developed in Microsoft Visual Studio* Professional 2013 per the requirements stated in the introduction to the tutorial series. It does not require the Intel SGX SDK at this point, though you will need a system that supports Intel® Data Protection Technology with Secure Key.

Coming Up Next

In part 4 of the tutorial we’ll develop the enclave and the bridge functions. Stay tuned!

Find the list of all the published tutorials in the article Introducing the Intel® Software Guard Extensions Tutorial Series.

Intel® Media Server Studio Release notes


The release notes for Intel Media Server Studio include important information such as system requirements, what's new, a feature table, and known issues since the previous release. Below is a list of release notes from the last few releases, to help track new features and supported system requirements.

Version/What's New Blog        SDK Release Notes    Comment

Media Server Studio 2017       Windows* | Linux*    Supports select SKUs of 5th & 6th generation Intel® Xeon® & Core™ processors (codenames Broadwell & Skylake)

Media Server Studio 2016       Windows | Linux      Supports select SKUs of 4th & 5th generation Intel® Xeon® & Core™ processors (codenames Haswell & Broadwell)

Media Server Studio 2015 R6    Windows | Linux      Supports select SKUs of 4th & 5th generation Intel® Xeon® & Core™ processors (codenames Haswell & Broadwell)

    For the latest documents, getting started guide, and release notes, check the Media Server Studio documents webpage. If you have any issues, connect with us at the Intel media forum.

     

    Case Study: Many-Fermion Dynamics using Intel® Xeon Phi™ Processors


    Many-Fermion Dynamics, nuclear (MFDn), is a configuration interaction (CI) code for nuclear structure calculations. It is a platform-independent Fortran 90 code using a hybrid MPI/OpenMP* programming model, and is being used on current supercomputers, such as Edison at NERSC, for ab initio calculations of atomic nuclei using realistic nucleon-nucleon and three-nucleon forces. A calculation consists of generating a many-body basis space, constructing the many-body Hamiltonian matrix in this basis, obtaining the lowest eigenpairs, and calculating a set of observables from those eigenpairs. Key computational challenges for MFDn include effectively using the available aggregate memory, efficient construction of the matrix, and efficient sparse matrix-vector products used in the solution of the eigenvalue problem.

    To accurately capture this need, NERSC developed a test code which uses representative data for production calculations on 5,000 Intel® Xeon Phi™ processor nodes (approximately half the size of Cori at NERSC) using over 80 GB of memory per node.

    Check out the entire paper: http://www.nersc.gov/users/computational-systems/cori/application-porting-and-performance/application-case-studies/mfdn-case-study/

    Case Study: BerkeleyGW using Intel® Xeon Phi™ Processors


    BerkeleyGW is a materials science application for calculating the excited-state properties of materials, such as band gaps, band structures, absorption spectroscopy, photoemission spectroscopy, and more. It requires as input the Kohn-Sham orbitals and energies from a DFT code like Quantum ESPRESSO, PARATEC, PARSEC, etc. Like such DFT codes, it is heavily dependent on FFTs, dense linear algebra, and tensor contraction type operations similar in nature to those found in quantum chemistry applications.

    The target science application for the Cori timeframe is to study realistic interfaces in organic photovoltaics. Such systems require 1000+ atoms and a considerable amount of vacuum, which contributes to the computational complexity. GW calculations generally scale as the number of atoms to the fourth power (with the vacuum space roughly counting as having more atoms). This is a problem 2-5 times bigger than has been done in the past. Therefore, successfully completing these runs on Cori requires not only taking advantage of the compute capabilities of the Intel® Xeon Phi™ processor architecture but also improving the scalability of the code in order to reach full-machine capability.

    Check out the entire paper: http://www.nersc.gov/users/computational-systems/cori/application-porting-and-performance/application-case-studies/berkeleygw-case-study/

    Lessons Learned

    1. Optimal performance for this code required restructuring to enable optimal thread scaling, vectorization and improved data reuse.

    2. Long loops are best for vectorization. In the limit of long loops, the effects of loop peeling and remainders can be neglected.

    3. There are many coding practices that prevent compiler auto-vectorization of code. The use of profilers and compiler reports can greatly aid in producing vectorizable code.

    4. The absence of L3 cache on Intel® Xeon Phi™ architectures makes data locality even more important than on traditional Intel® Xeon® architectures.

    5. Optimization is a continuous process. The limiting factor in code performance may shift among IO/communication, memory bandwidth, latency, and CPU clock speed as you continue to optimize.

     

    Intel® IPP Memory Function ippMalloc/Free FAQ


    Introduction

    The Intel® Integrated Performance Primitives (Intel® IPP) is a cross-architecture software library that provides a broad range of library functions for image processing, signal processing, data compression, cryptography, and computer vision, as well as math support routines for such processing capabilities. Intel® IPP is optimized for the wide range of Intel microprocessors.

    One of the key advantages within Intel® IPP is performance coming through highly optimized functions and resource management including memory management. This paper covers the different flavors of those functions in IPP dealing with memory allocation, deallocation, and alignment. After reading this article you should be able to use the correct memory allocation functions for your specific needs. Further documentation on Intel® IPP can be found at Intel® Integrated Performance Primitives – Documentation.


    What are the Intel® IPP memory functions?

    The Intel® IPP provides easy to use functions for pointer alignment, memory allocation and deallocation:

    Function: void* ippAlignPtr(void* ptr, int alignBytes);
    Purpose:  Aligns a pointer.
    Notes:    Can align to 2/4/8/16/…

    Function: void* ippMalloc(int length);
    Purpose:  64-byte aligned memory allocation.
    Notes:    Can only free memory with ippFree.

    Function: void ippFree(void* ptr);
    Purpose:  Frees memory allocated by ippMalloc.
    Notes:    Can only free memory allocated by ippMalloc.

    Function: Ipp<datatype>* ippsMalloc_<datatype>(int len);
    Purpose:  64-byte aligned memory allocation (32-bit length) for signal elements of different data types.
    Notes:    Limited to memory blocks of up to 2 GB. Can only free memory with ippsFree.

    Function: Ipp<datatype>* ippsMalloc_<datatype>_L(IppSizeL len);
    Purpose:  Platform-aware 64-byte aligned memory allocation for signal elements of different data types.
    Notes:    Available since IPP 2017. Can only free memory with ippsFree.

    Function: void ippsFree(void* ptr);
    Purpose:  Frees memory allocated by ippsMalloc.
    Notes:    Can only free memory allocated by ippsMalloc.

    Function: Ipp<datatype>* ippiMalloc_<mod>(int widthPixels, int heightPixels, int* pStepBytes);
    Purpose:  64-byte aligned memory allocation for images where every line of the image is padded with zeros.
    Notes:    Limited to memory blocks of up to 2 GB. Can only free memory with ippiFree.

    Function: Ipp<datatype>* ippiMalloc_<mod>_L(IppSizeL widthPixels, IppSizeL heightPixels, IppSizeL* pStepBytes);
    Purpose:  Platform-aware 64-byte aligned memory allocation for images where every line of the image is padded with zeros.
    Notes:    Available since IPP 2017. Can only free memory with ippiFree.

    Function: void ippiFree(void* ptr);
    Purpose:  Frees memory allocated by ippiMalloc.
    Notes:    Can only free memory allocated by ippiMalloc.


    How are Intel IPP memory functions different from the standard malloc and free functions?

    Intel IPP memory functions align the memory to a 64-byte boundary for optimal performance on Intel architecture.


    How do I call ippsMalloc or ippiMalloc?

    Please take a look at the examples in the manual.

    void func_malloc(void)
    {
        Ipp8u* pBuf = ippsMalloc_8u(8*sizeof(Ipp8u));
        if (NULL == pBuf)
        {
            // not enough memory: handle the error and return
            return;
        }
        // ... use the buffer ...
        ippsFree(pBuf);   // free with the matching IPP function
    }
    

    Example 3-2: Using the function ippsMalloc.

    An ippiMalloc example is available in the Source Code Examples from the Intel IPP book* (including the whole example project).


    What is the maximum amount of memory that can be allocated by Intel IPP functions?

    We expect that there are no restrictions on the amount of memory that can be allocated other than those imposed by the user's operating system and system hardware. For memory allocations beyond 2 GB on 64-bit systems, the platform-aware functions (e.g. ippsMalloc_<datatype>_L, ippiMalloc_<mod>_L) have to be used instead. These functions are available since IPP 2017. It is good practice to use the platform-aware functions.


    How do I deallocate memory?

    Use the corresponding Intel IPP memory deallocation functions: ippFree, ippsFree and ippiFree. These functions cannot be used to free memory allocated by standard functions like malloc or calloc; nor can the memory allocated by the Intel IPP malloc functions be freed by the standard function free.


    What is the advantage of using ipp*Malloc functions?

    Intel IPP functions can perform better on aligned data. By calling an ipp*Malloc function to allocate data, the data is aligned for optimal performance on your Intel processor.


    Why should memory be aligned?

    Processor pipeline stalls can occur when memory accesses cross a cache-line boundary. Memory buffers are aligned in order to minimize such occurrences. Please check the Intel® Developer Zone to access the Software Optimization Guides for your Intel processor.


    How is the image stride or step used when calling ippiMalloc?

    Image stride or step is the number of bytes in one row of the image. The ippiMalloc function takes the width and height of the image in pixels as arguments and returns the memory stride in bytes and a pointer to the memory. The size of the memory allocated is not known until after the function call; then it can be calculated as:

    memSize = stride * height;
    Examples:
    • The function ippiMalloc_8u_C3 first sets the stride to the smallest multiple of 64 that is greater than or equal to width * numChannels. Then the function allocates stride * height bytes.
    • The function ippiMalloc_32f_C3, which allocates 32-bit floating-point pixels, sets the stride to the smallest multiple of 64 that is greater than or equal to width * numChannels * bytesPerChannel.
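
    As a brief sketch of the allocation and size calculation (the image dimensions here are arbitrary):

    int stepBytes = 0;
    // Allocate a 640 x 480, 3-channel, 8-bit image; IPP returns the stride.
    Ipp8u* pImg = ippiMalloc_8u_C3(640, 480, &stepBytes);
    if (pImg != NULL)
    {
        int memSize = stepBytes * 480;  // total bytes allocated: stride * height
        // ... process the image ...
        ippiFree(pImg);                 // must be freed with ippiFree
    }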

    Is there any difference between ippMalloc/ippFree() and ippsMalloc/ippsFree() functions?

    Basically, there is no difference between these functions: both allocate an aligned buffer (ippMalloc allocates an array of bytes, while, for example, ippsMalloc_16s allocates an array of short integers), and both provide the same alignment for the allocated buffer.
    It is recommended that you use the corresponding free functions: ippFree for ippMalloc and ippsFree for all ippsMalloc variants. To get more information about these functions, refer to the Intel® Integrated Performance Primitives Developer Reference documentation.


    Is there any requirement for Intel® IPP functions to use memory only allocated through ippMalloc?

    No, Intel® IPP functions do not require that memory be allocated only with Intel® IPP functions. You may also use CRT malloc or any other memory manager API. In that case, you have to take care of alignment yourself if you want to maximize computation speed.
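
    For example, a brief sketch of manual alignment with ippAlignPtr (the buffer size here is arbitrary):

    // Over-allocate by 63 bytes so a 64-byte aligned pointer fits inside the block.
    int len = 1024;
    void* pRaw = malloc(len + 63);
    if (pRaw != NULL)
    {
        Ipp8u* pAligned = (Ipp8u*)ippAlignPtr(pRaw, 64);
        // ... pass pAligned (not pRaw) to Intel IPP functions ...
        free(pRaw);   // free the original pointer, not the aligned one
    }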

     


    * Other names and brands may be claimed as the property of others.

    Microsoft, Windows, and the Windows logo are trademarks, or registered trademarks of Microsoft Corporation in the United States and/or other countries.

    Optimization Notice

    Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice.

    Notice revision #20110804

    Copyright © 2002-2016, Intel Corporation. All rights reserved.

    Deploying applications with Intel® IPP DLLs


    Introduction

    The Intel® Integrated Performance Primitives (Intel® IPP) is a cross-architecture software library that provides a broad range of library functions for image processing, signal processing, data compression, cryptography, and computer vision, as well as math support routines for such processing capabilities. Intel® IPP is optimized for the wide range of Intel microprocessors.

    One of the key advantages within Intel® IPP is performance. The performance advantage comes through per processor architecture optimized functions, compiled into one single library. Intel® IPP functions are “dispatched” at run-time. The “dispatcher” chooses which of these processor-specific optimized libraries to use when the application makes a call into the IPP library. This is done to maximize each function’s use of the underlying vector instructions and other architecture-specific features.

    This paper covers application deployment with Intel® IPP dynamic-link libraries (DLLs). It is important to understand processor detection and library dispatching so that software redistribution is problem free. Additionally, you want to consider two key factors when it comes to DLLs:

    1. Selection of an appropriate DLL linking model.
    2. The location for the DLLs on the target system.

    This document explains how the Intel® IPP dynamic libraries work and discusses these important considerations. For information on all Intel® IPP linking models, please refer to the document Intel® IPP Linkage Models – Quick Reference Guide. Further documentation on Intel® IPP can be found at Intel® Integrated Performance Primitives – Documentation.

    Version Information
    This document applies to Intel® IPP 2017.xx.xxx for Windows* running 32-bit and 64-bit applications but concepts can also be applied to other operating systems supported by Intel® IPP.

    Library Location
    Intel® IPP is also a key component of Intel® Parallel Studio XE and Intel® System Studio. The IPP libraries of Parallel Studio can be found in redist directory. For default installation on Windows*, the path to the libraries is set to ’C:\Program Files (x86)\IntelSWTools\compilers_and_libraries_x.xx.xxx\<target_os>’ where ‘x.xx.xxx’ designates the version installed (on certain systems, instead of ‘Program Files (x86)’, the directory name is ‘Program Files’). For convenience <ipp directory> will be used instead throughout this paper.


    Note: Please verify that your license permits redistribution before distributing the Intel® IPP DLLs. Any software source code included with this product is furnished under a software license and may only be used or copied in accordance with the terms of that license. Please see the Intel® Software Products End User License Agreement for license definitions and restrictions on the library.
     


    Key Concepts

    Library Dispatcher
    Every Intel® IPP function has many binary implementations, each performance-optimized for a specific target CPU. These processor-specific functions are contained in separate DLLs. The name of each DLL carries an identification code that denotes its target processor. For example, a 32-bit Intel processor with SSE4.2 support requires the image processing library named ippip8.dll, where ‘p8’ is the CPU identification code for 32-bit SSE4.2.

    IA-32 Intel® architecture    Intel® 64 architecture    Meaning

    px                           mx                        Generic code optimized for processors with Intel® Streaming SIMD Extensions (Intel® SSE)
    w7                           my                        Optimized for processors with Intel SSE2
    s8                           n8                        Optimized for processors with Supplemental Streaming SIMD Extensions 3 (SSSE3)
    -                            m7                        Optimized for processors with Intel SSE3
    p8                           y8                        Optimized for processors with Intel SSE4.2
    g9                           e9                        Optimized for processors with Intel® Advanced Vector Extensions (Intel® AVX) and Intel® Advanced Encryption Standard New Instructions (Intel® AES-NI)
    h9                           l9                        Optimized for processors with Intel® Advanced Vector Extensions 2 (Intel® AVX2)
    -                            k0                        Optimized for processors with Intel® Advanced Vector Extensions 512 (Intel® AVX-512)
    -                            n0                        Optimized for processors with Intel® Advanced Vector Extensions 512 (Intel® AVX-512) for Intel® Many Integrated Core Architecture (Intel® MIC Architecture)

    Table 1: CPU Identification Codes Associated with Processor-Specific Libraries

    When the first Intel® IPP function call occurs in the application, the application searches the system path for an Intel® IPP dispatcher library. The dispatcher library identifies the system processor and invokes the function version that has the best performance on the target CPU. This process does not add overhead because the dispatcher connects to an entry point of an optimized function only once during application initialization. This allows your code to call optimized functions without worrying about the processor on which the code will execute.
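
    One way to confirm which optimized code path the dispatcher selected is to query the library version information. Below is a minimal sketch for the signal processing domain (assuming the Name, Version, and targetCpu fields of the IppLibraryVersion structure):

    #include <stdio.h>
    #include "ipps.h"

    int main(void)
    {
        // Reports the dispatched library for this CPU, e.g. a "p8" code path.
        const IppLibraryVersion* v = ippsGetLibVersion();
        printf("%s %s, targetCpu: %s\n", v->Name, v->Version, v->targetCpu);
        return 0;
    }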

    Dynamic Linking
    Dynamic-link libraries are loaded when an application runs. Simply link the application to the Intel® IPP stub libraries located in the <ipp directory>\ipp\lib\ia32 or <ipp directory>\ipp\lib\intel64 folder; these load the dispatcher libraries and link to the correct entry points. Ensure that the dispatcher DLLs and the processor-specific DLLs are on the system path. In the diagram below, the application links to ipps.lib, and ipps.dll automatically loads ippsv8.dll at run time.


    Figure 1: Processor-Specific Dispatching

    Dynamic linking is useful if many Intel® IPP functions are called in the application. Most applications are good candidates for this model.

    Building a Custom DLL
    In addition to dynamic linking, Intel® IPP provides a tool called the Intel® IPP Custom Library Tool for developers to create their own DLL. This tool can be found under <ipp directory>\ipp\tools\custom_library_tool. It links selected Intel® IPP functions into a new, separate DLL and generates an import library to which the application can link. A custom DLL is useful if the application uses a limited set of functions. The custom DLL must be distributed with the application. Intel® IPP supports two dynamic linking options; refer to Table 2 below to choose which dynamic linking model best suits the application.

    Feature              Dynamic Linking                  Custom DLL

    Processor Updates    Automatic                        Recompile and redistribute
    Optimization         All processors                   All processors
    Build                Link to stub static libraries    Build and link to a separate import library which dispatches a separate DLL
    Function Naming      Regular names                    Regular names
    Total Binary Size    Large                            Small
    Executable Size      Smallest                         Smallest
    Kernel Mode          No                               No

    Table 2: Dynamic Linking Models

    For detailed information on how to build and link to a custom DLL, unzip the example package files under <ipp directory>\ipp\examples and look at the core examples under components\examples_core\ipp_custom_dll.

    Threading and Multi-core Support
    Intel continues the deprecation of internal threading that was started in Intel® IPP 7.1. Internal (inside a primitive) threading is significantly less effective than external (at the application level) threading. For threading Intel® IPP functions, external threading is recommended, which gives a significant performance gain on multi-processor and multi-core systems. A good starting point on how to develop code for external threading can be found here.


    Linking the Application

    The Intel® IPP can be compiled with Microsoft Visual Studio* and Intel® C++ Compiler. Instructions for configuring Microsoft Visual Studio to link to the Intel® IPP libraries can be found in the Getting Started with Intel® Integrated Performance Primitives document.


    Deploying the Application

    The Intel® IPP dispatcher and processor-specific DLLs, located in <ipp directory>\redist\ia32\ipp or <ipp directory>\redist\intel64\ipp, or a custom DLL must be distributed with the application software. The Intel® IPP core functions library, ippcore.dll must also be distributed.

    When distributing a custom DLL, it is best to create a distinct naming scheme to avoid conflicts and for tracking purposes. This is also important because custom DLLs must be recompiled and redistributed to include new processor optimizations not available in previous Intel® IPP versions.

    On Microsoft Windows*, the system PATH variable holds a list of folder locations that is searched for executable files. When the application is invoked, the Intel® IPP DLLs need to be located in a folder that is listed in the PATH variable. Choose a location for the Intel® IPP DLLs and custom DLLs on the target system so that the application can easily find them. Possible distribution locations include %SystemDrive%\WINDOWS\system32, the application folder, or any other folder on the target system. Table 3 below compares these options.

     

    Location                           System PATH                                                      Permissions

    %SystemDrive%\WINDOWS\system32     This folder is listed on the system PATH by default.             Administrator permissions may be required to copy files to this folder.
    Application Folder or Subfolder    Windows will first check the application folder for the DLLs.    Special permissions may be required.
    Other Folder                       Add this directory to the system PATH.                           Special permissions may be required.

    Table 3: Intel® IPP DLL Location

    In all cases, the application must be able to find the location of the Intel® IPP DLLs and custom DLLs in order to run properly.

    The Intel® IPP provides a convenient method to performance optimize a 32-bit or Intel 64-bit application for the latest processors. Application and DLL distribution requires developers to do the following:

    1. Choose the appropriate DLL linking model.
      • Dynamic linking: the application is linked to stub libraries. At runtime, dispatcher DLLs detect the target processor and dispatch processor-specific DLLs. The dispatcher and processor-specific DLLs are to be distributed with the application.
      • Custom DLL: the application is linked to a custom import library. At runtime, the custom DLL is invoked. The custom DLL is to be distributed with the application.
    2. Determine the best location for the Intel® IPP DLLs on the end-user system.
      • %SystemDrive%\WINDOWS\system32
      • Application folder or subfolder
      • Other folder

    * Other names and brands may be claimed as the property of others.

    Microsoft, Windows, and the Windows logo are trademarks, or registered trademarks of Microsoft Corporation in the United States and/or other countries.

    Optimization Notice

    Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice.

    Notice revision #20110804

    Copyright © 2002-2016, Intel Corporation. All rights reserved.

    Understanding CPU Dispatching in the Intel® IPP Libraries


    Introduction

    The Intel® Integrated Performance Primitives (Intel® IPP) is a cross-architecture software library that provides a broad range of library functions for image processing, signal processing, data compression, cryptography, and computer vision, as well as math support routines for such processing capabilities. Intel® IPP is optimized for the wide range of Intel microprocessors.

    One of the key advantages within Intel® IPP is performance. The performance advantage comes through per processor architecture optimized functions, compiled into one single library. Intel® IPP functions are “dispatched” at run-time. The “dispatcher” chooses which of these processor-specific optimized libraries to use when the application makes a call into the IPP library. This is done to maximize each function’s use of the underlying vector instructions and other architecture-specific features.

    This paper covers CPU dispatching of the Intel® IPP library in more detail. After reading this article you will understand how CPU dispatching works and which libraries are needed for which processor architecture. Further documentation on Intel® IPP can be found at Intel® Integrated Performance Primitives – Documentation.


    Dispatcher

    Dispatching refers to the process of detecting CPU features at run-time and then selecting the Intel® IPP optimized library set that corresponds to your CPU. For example, in the <ipp directory>\ia32\ipp directory, the ippip8.dll library file contains the 32-bit optimized image processing libraries for processors with Intel® SSE4.2; ‘ippi’ refers to the image processing library, ‘p8’ refers to 32-bit SSE4.2 architecture.

    Note: You can build custom processor-specific libraries that do not require the dispatcher, but that is outside the scope of this article. Please read this IPP linkage models article for information on how to build custom versions of the Intel® IPP library.

    In the general case, the “dispatcher” identifies the run-time processor only once, at library initialization time. It sets an internal table or variable that directs your calls to the internal functions that match your architecture. For example, ippsCopy_8u() may have multiple implementations stored in the library, with each version optimized for a specific Intel® processor architecture. Thus, the p8_ippsCopy_8u() version of ippsCopy_8u() is called by the dispatcher when running on an Intel processor with Intel® SSE4.2 on IA-32, because it is optimized for that processor architecture.

    Note: IPP architectures generally correspond to SIMD (MMX, SSE, AES, etc.) instruction sets.


    Initializing the IPP Dispatcher

    The process of identifying the specific processor being used, and initialization of the dispatcher, should be performed before making any calls into the IPP library. If you are using a dynamic link library this process is handled automatically when the dynamic link library is initialized. However, if you are using a static library you must perform this step manually. See this article on the ipp*Init*() functions for more information on how to do this.
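
    A minimal sketch of that manual initialization step when linking statically (the ippInit() entry point is described in the referenced article):

    #include "ippcore.h"

    int main(void)
    {
        // Detect the CPU and set up dispatching before any other IPP call.
        IppStatus status = ippInit();
        if (status != ippStsNoErr)
        {
            // initialization failed; handle the error
        }
        // ... call Intel IPP functions as usual ...
        return 0;
    }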

    The following table lists all the architecture codes defined by the Intel® IPP library through version 8.2 of the product. Note that some of these IPP architectures have been deprecated and are no longer supported in the current version of the product. Deprecated architectures are identified in the “Notes” column of the table.

    IA-32 Intel® architecture    Intel® 64 architecture    Meaning

    px                           mx                        Generic code optimized for processors with Intel® Streaming SIMD Extensions (Intel® SSE)
    w7                           my                        Optimized for processors with Intel SSE2
    s8                           n8                        Optimized for processors with Supplemental Streaming SIMD Extensions 3 (SSSE3)
    -                            m7                        Optimized for processors with Intel SSE3
    p8                           y8                        Optimized for processors with Intel SSE4.2
    g9                           e9                        Optimized for processors with Intel® Advanced Vector Extensions (Intel® AVX) and Intel® Advanced Encryption Standard New Instructions (Intel® AES-NI)
    h9                           l9                        Optimized for processors with Intel® Advanced Vector Extensions 2 (Intel® AVX2)
    -                            k0                        Optimized for processors with Intel® Advanced Vector Extensions 512 (Intel® AVX-512)
    -                            n0                        Optimized for processors with Intel® Advanced Vector Extensions 512 (Intel® AVX-512) for Intel® Many Integrated Core Architecture (Intel® MIC Architecture)

    Table 1: CPU Identification Codes Associated with Processor-Specific Libraries

    For non-Intel based processors support, please see the article titled Use Intel® IPP on Intel or Compatible AMD* Processors.


    P8/Y8 Internal Run-Time Dispatcher

    Within the 32-bit 'p8' and equivalent 64-bit 'y8' architectures there is an additional "run-time" dispatching mechanism, a kind of mini-dispatcher. The Nehalem (Intel® Core i7) and Westmere processor families add additional SIMD instructions beyond those defined by SSE4.1. The Nehalem processor family adds the SSE4.2 SIMD instructions and the Westmere family adds AES-NI.

    Creating two additional internal versions of the IPP library for the SSE4.2 and AES-NI instructions would be very space inefficient, so they are bundled as part of the SSE4.1 library. When you call a function that includes, for example, AES-NI optimizations, an additional jump directs your call to the AES-NI version within the p8/y8 library. Because the enhancements affect the optimization of only a small number of IPP functions, this additional overhead occurs infrequently and only when your application is executing on a p8/y8 architecture processor.


    Processor Architecture Table

    The following table is copied from the article Intel® Compiler Options for Intel® SSE and Intel® AVX generation (SSE2, SSE3, SSSE3, ATOM_SSSE3, SSE4.1, SSE4.2, ATOM_SSE4.2, AVX, AVX2, AVX-512) and processor-specific optimizations, which describes some compiler architecture options. It contains a list of Intel processors showing which processors support which vector instructions. For the latest table please refer to the original article; it is updated on a regular basis. Please note that the behavior of the Intel Compiler SIMD dispatcher described in that article does not apply to the Intel® IPP library.

    Note: The Intel® IPP library dispatching mechanism behaves differently from the one in the Intel Compiler products, and may also behave differently from other Intel library products.

    Additional information regarding dispatching and how it relates to non-Intel processors can be found here. How to identify your specific processor is described here. To correlate a processor family name with an Intel CPU brand name, use the ark.intel.com web site.

    COMMON-AVX512
        A future Intel® Processor
    MIC-AVX512
        The Intel® Xeon Phi™ processor x200 product family
    CORE-AVX512
        A future Intel® Processor
    CORE-AVX2
        4th Generation Intel® Core™ Processors
        5th Generation Intel® Core™ Processors
        6th Generation Intel® Core™ Processors
        Intel® Xeon® Processor E7 v3 Family
        Intel® Xeon® Processor E5 v3 Family
        Intel® Xeon® Processor E3 v3 Family
        Intel® Xeon® Processor E7 v4 Family
        Intel® Xeon® Processor E5 v4 Family
        Intel® Xeon® Processor E3 v4 Family
    CORE-AVX-I
        3rd Generation Intel® Core™ i7 Processors
        3rd Generation Intel® Core™ i5 Processors
        3rd Generation Intel® Core™ i3 Processors
        Intel® Xeon® Processor E7 v2 Family
        Intel® Xeon® Processor E5 v2 Family
        Intel® Xeon® Processor E3 v2 Family
    AVX
        2nd Generation Intel® Core™ i7 Processors
        2nd Generation Intel® Core™ i5 Processors
        2nd Generation Intel® Core™ i3 Processors
        Intel® Xeon® Processor E5 Family
        Intel® Xeon® Processor E3 Family
    SSE4.2
        Previous Generation Intel® Core™ i7 Processors
        Previous Generation Intel® Core™ i5 Processors
        Previous Generation Intel® Core™ i3 Processors
        Intel® Xeon® 55XX series
        Intel® Xeon® 56XX series
        Intel® Xeon® 75XX series
        Intel® Xeon® Processor E7 Family
    ATOM_SSE4.2
        Intel® Atom™ processors that support Intel® SSE4.2 instructions
    SSE4.1
        Intel® Xeon® 74XX series
        Quad-Core Intel® Xeon® 54XX, 33XX series
        Dual-Core Intel® Xeon® 52XX, 31XX series
        Intel® Core™ 2 Extreme 9XXX series
        Intel® Core™ 2 Quad 9XXX series
        Intel® Core™ 2 Duo 8XXX series
        Intel® Core™ 2 Duo E7200
    SSSE3
        Quad-Core Intel® Xeon® 73XX, 53XX, 32XX series
        Dual-Core Intel® Xeon® 72XX, 53XX, 51XX, 30XX series
        Intel® Core™ 2 Extreme 7XXX, 6XXX series
        Intel® Core™ 2 Quad 6XXX series
        Intel® Core™ 2 Duo 7XXX (except E7200), 6XXX, 5XXX, 4XXX series
        Intel® Core™ 2 Solo 2XXX series
        Intel® Pentium® dual-core processor E2XXX, T23XX series
    ATOM_SSSE3
        Intel® Atom™ processors
    SSE3
        Dual-Core Intel® Xeon® 70XX, 71XX, 50XX Series
        Dual-Core Intel® Xeon® processor (ULV and LV) 1.66, 2.0, 2.16
        Dual-Core Intel® Xeon® 2.8
        Intel® Xeon® processors with SSE3 instruction set support
        Intel® Core™ Duo
        Intel® Core™ Solo
        Intel® Pentium® dual-core processor T21XX, T20XX series
        Intel® Pentium® processor Extreme Edition
        Intel® Pentium® D
        Intel® Pentium® 4 processors with SSE3 instruction set support
    SSE2
        Intel® Xeon® processors
        Intel® Pentium® 4 processors
        Intel® Pentium® M
    IA32
        Intel® Pentium® III Processor
        Intel® Pentium® II Processor
        Intel® Pentium® Processor

    Table 2:  Intel Processors Associated with Specific CPU Vector Instructions


    * Other names and brands may be claimed as the property of others.

    Microsoft, Windows, and the Windows logo are trademarks, or registered trademarks of Microsoft Corporation in the United States and/or other countries.

     

    Optimization Notice

    Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice.

    Notice revision #20110804

    Copyright © 2002-2016, Intel Corporation. All rights reserved.


    Configuration File Format


    NOTE: This article is referenced by the articles “Silent Installation Guide for Intel® Parallel Studio XE for OS X*” and “Silent Installation Guide for Intel® Parallel Studio XE for Linux*”.

    A few comments on the directives inside the silent install configuration file:

    ACCEPT_EULA=accept

    • This directive tells the install program that the invoking user has agreed to the End User License Agreement or EULA.  This is a mandatory option and MUST be set to 'accept'. If this is not present in the configuration file, the installation will not complete.  By using the silent installation program you are accepting the EULA.
    • The EULA is in a plain text file in the same directory as the installer.  It has file name "license".  Read this before proceeding as using the silent installer means you have read and agree to the EULA.  If you have questions, go to our user forum: https://software.intel.com/en-us/forums/intel-software-development-products-download-registration-licensing

    INSTALL_MODE=RPM

    • This directive tells the install program that the RPM method should be used to install the software.  This will only work if the install user is "root" or has full root privileges and your distribution supports RPM for package management.  In some cases, where the operating system of the target system does not support RPM, or if the install program detects that the version of RPM supported by the operating system is flawed or otherwise incompatible with the install program, the installation will proceed but will switch to non-RPM mode automatically.  This is the case for certain legacy operating systems (e.g. SLES9) and for operating systems that provide an RPM utility but do not use RPM to store or manage system-installed operating system infrastructure (e.g. Ubuntu, Debian).  Thus, Ubuntu and Debian users should set this to INSTALL_MODE=NONRPM.

    • If you do not want to use RPM, this line should read "INSTALL_MODE=NONRPM".  In this case, the products will be installed to the same location, but instead of storing product information in the system's RPM database, the Intel product install information will be stored in a flat file called "intel_sdp_products.db", usually stored in /opt/intel (or in $HOME/intel for non-root users).  To override this default, use the configuration file directive NONRPM_DB_DIR.

    NONRPM_DB_DIR

    • If INSTALL_MODE=NONRPM the directive NONRPM_DB_DIR can be used to override the default directory for the installation database.  The default is /opt/intel or in $HOME/intel for non-root users.  The format for this directive is:
    • NONRPM_DB_DIR=/path/to/your/db/directory

    ACTIVATION=exist_lic

    • This directive tells the install program to look for an existing license during the install process.  This is the preferred method for silent installs.  Take the time to register your serial number and get a license file (see below).  Having a license file on the system simplifies the process.  In addition, as an administrator it is good practice to know WHERE your licenses are saved on your system.  License files are plain text files with a .lic extension.  By default these are saved in /opt/intel/licenses which is searched by default.  If you save your license elsewhere, perhaps under an NFS folder, set environment variable INTEL_LICENSE_FILE to the full path to your license file prior to starting the installation or use the configuration file directive ACTIVATION_LICENSE_FILE to specify the full pathname to the license file.
    • Options for ACTIVATION are { exist_lic, license_file, server_lic, serial_number, trial_lic }
    • exist_lic directs the installer to search for a valid license on the server.  Searches will utilize the environment variable INTEL_LICENSE_FILE, search the default license directory /opt/intel/licenses, or use the ACTIVATION_LICENSE_FILE directive to find a valid license file.
    • license_file is similar to exist_lic but directs the installer to use ACTIVATION_LICENSE_FILE to find the license file.
    • server_lic is similar to exist_lic and license_file but tells the installer that this is a client installation and a floating license server will be contacted to activate the product.  This option will contact your floating license server on your network to retrieve the license information.  BEFORE using this option make sure your client is correctly set up for your network, including all networking, routing, name service, and firewall configuration.  Ensure that your client has direct access to your floating license server and that firewalls are set up to allow TCP/IP access for the 2 license server ports.  server_lic will use INTEL_LICENSE_FILE containing a port@host format OR a client license file.  The formats for these are described here https://software.intel.com/en-us/articles/licensing-setting-up-the-client-floating-license
    • serial_number directs the installer to use directive ACTIVATION_SERIAL_NUMBER for activation.  This method requires the installer to contact an external Intel activation server over the Internet to confirm your serial number.  Due to user and company firewalls, this is the most complex and hence most error-prone of the available activation methods.  We highly recommend using a license file or license server for activation instead.
    • trial_lic is used only if you do not have an existing license and intend to temporarily evaluate the compiler.  This method creates a temporary trial license in Trusted Storage on your system.
    • No license file but you have a serial number?  If you have only a serial number, please visit https://registrationcenter.intel.com to register your serial number.  As part of registration, you will receive email with an attached license file.  If your serial is already registered and you need to retrieve a license file, read this:  https://software.intel.com/en-us/articles/how-do-i-manage-my-licenses
    • Save the license file in /opt/intel/licenses/ directory, or in your preferred directory and set INTEL_LICENSE_FILE environment variable to this non-default location.  If you have already registered your serial number but have lost the license file, revisit https://registrationcenter.intel.com and click on the hyperlinked product name to get to a screen where you can cut and paste or mail yourself a copy of your registered license file.
    • Still confused about licensing?  Go to our licensing FAQS page https://software.intel.com/en-us/articles/licensing-faq

    ACTIVATION_LICENSE_FILE

    • This directive instructs the installer where to find your named-user or client license.  The format is:
    • ACTIVATION_LICENSE_FILE=/use/a/path/to/your/licensefile.lic  where licensefile.lic is the name of your license file.

    CONTINUE_WITH_OPTIONAL_ERROR

    • This directive controls behavior when the installer encounters an "optional" error.  These errors are non-fatal and will not prevent the installation from proceeding if the user has set CONTINUE_WITH_OPTIONAL_ERROR=yes.  Examples of optional errors include an unrecognized or unsupported Linux distribution or version, or certain prerequisites for a product that cannot be found at the time of installation (such as a supported Java runtime, or missing 32-bit development libraries for a 32-bit tool installation).  Fatal errors found during installation will cause the installer to abort with appropriate messages printed.
    • CONTINUE_WITH_OPTIONAL_ERROR=yes directs the installer to ignore non-fatal installation issues and continue with the installation.
    • CONTINUE_WITH_OPTIONAL_ERROR=no directs the installer to abort with appropriate warning messages for the non-fatal error found during the installation.

    PSET_INSTALL_DIR

    • This directive specifies the target directory for the installation.  The Intel Compilers default to /opt/intel for installation target.  Set this directive to the root directory for the final compiler installation.

    CONTINUE_WITH_INSTALLDIR_OVERWRITE

    • Determines the behavior of the installer if PSET_INSTALL_DIR already contains an existing installation of this specific compiler version. The Intel compiler allows co-existence of multiple versions on a system.  This directive does not affect that behavior; each version of the compiler has a unique installation structure that does not overwrite other versions.  This directive dictates behavior when the SAME VERSION is already installed in PSET_INSTALL_DIR.
    • CONTINUE_WITH_INSTALLDIR_OVERWRITE=yes directs the installer to overwrite the existing compiler version of the SAME VERSION
    • CONTINUE_WITH_INSTALLDIR_OVERWRITE=no directs the installer to exit if an existing compiler installation of the SAME VERSION already exists in PSET_INSTALL_DIR

    COMPONENTS

    • A typical compiler package contains multiple sub-packages, such as MKL, IPP, TBB, Debugger, etc.  This directive allows the user to control which sub-packages to install.
    • COMPONENTS=DEFAULTS directs the installer to install the pre-determined default packages for the compiler (recommended setting).  The defaults may not include some sub-packages deemed non-essential or special purpose.  An example is the cluster components of MKL, which are only needed in a distributed memory installation.  If you're not sure of the defaults you can do a trial installation of the compiler in interactive mode and select CUSTOMIZE installation to see and select components.
    • COMPONENTS=ALL directs the installer to install all packages for the compiler.
    • COMPONENTS=<pattern> allows the user to specify which components to install.  The components vary by compiler version and package.  The components should be enclosed in double-quotes and semi-colon separated.  For a list of components, grep for the <Abbr> tags in <installation directory>/uninstall/mediaconfig.xml, such as this:
    • cd <compiler root>/uninstall
    • grep Abbr mediaconfig.xml
    • Note that the list may have close to or over 100 components.

    PSET_MODE

    • Sets the installer mode.  The installer can install, remove, modify, or repair an installation.
    • PSET_MODE=install directs the installer to perform an installation
    • PSET_MODE=remove directs the installer to remove a previous installation.  If multiple versions of the compiler are installed, the installer removes the most recent installation.  This information is kept in the RPM database or the non-rpm database depending on the mode used for the installation.
    • PSET_MODE=modify allows the user to redo an installation.  The most common scenario is to overwrite an existing installation with more COMPONENTS set or unset.
    • PSET_MODE=repair directs the installer to retry an installation again, checking for missing or damaged files, directories, and symbolic links, permissions, etc.

    CLUSTER_INSTALL_AUTOMOUNT (optional)

    • This directive is only needed for installation of the Intel(R) Parallel Studio XE 2017 Cluster Edition product.  For Composer and Professional Editions leave this directive commented out.
    • CLUSTER_INSTALL_AUTOMOUNT=yes tells the installer to only perform the main package installation on a cluster head node or admin node, in a directory that is remote-mounted on all the cluster compute nodes.  This prevents the cluster installation from replicating all files on all nodes.  The head or admin node has the tools installed, whereas the compute nodes assume PSET_INSTALL_DIR is remote-mounted; hence they do not need a full installation, just a few symbolic links and other small changes as necessary.
    • CLUSTER_INSTALL_AUTOMOUNT=no directs the installer to use CLUSTER_INSTALL_MACHINES_FILES to find all cluster nodes and perform local installations as if those nodes were stand-alone servers.  This requires additional time and replicates files on all nodes.

    CLUSTER_INSTALL_MACHINES_FILE (optional)

    • This directive is only needed for installation of the Intel(R) Parallel Studio XE 2017 Cluster Edition product.  For Composer and Professional Editions leave this directive commented out.
    • This directive instructs the installer where to find the machines file for a cluster installation.  The machines file is any text file with the names of all the cluster hosts on which to install the compiler.  The work performed on each host depends on CLUSTER_INSTALL_AUTOMOUNT (see above)
    • CLUSTER_INSTALL_MACHINES_FILE=/your/path/to/your/machinefile/machinefile.txt

    PHONEHOME_SEND_USAGE_DATA

    • This directive guides the installer in the user's intent for the optional Intel Software Improvement Program.  This setting determines whether or not the compiler periodically sends customer usage information back to Intel.  The intent is for Intel to gather information on what compiler options are being used, amongst other information.  More information on the Intel Software Improvement Program can be found here: https://software.intel.com/en-us/articles/software-improvement-program.
    • PHONEHOME_SEND_USAGE_DATA=no directs the installer to configure the compiler to not send usage data back to the Intel Software Improvement Program.
    • PHONEHOME_SEND_USAGE_DATA=yes directs the installer to configure the compiler to send usage data back to the Intel Software Improvement Program.  Setting this to YES is your consent to opt-into this program.

    SIGNING_ENABLED

    • Directs the installer whether or not to check RPM digital signatures.  Checking signatures is recommended.  It allows the installer to find data corruption from such things as incomplete downloads of compiler packages or damaged RPMs.
    • SIGNING_ENABLED=yes directs the installer to check RPM digital signatures.
    • SIGNING_ENABLED=no directs the installer to skip the checking of RPM digital signatures.
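
    Putting the directives together, a minimal example configuration file might look like the following (the values shown are illustrative; include only the directives that apply to your installation):

        ACCEPT_EULA=accept
        INSTALL_MODE=RPM
        ACTIVATION=exist_lic
        PSET_INSTALL_DIR=/opt/intel
        PSET_MODE=install
        COMPONENTS=DEFAULTS
        CONTINUE_WITH_OPTIONAL_ERROR=yes
        CONTINUE_WITH_INSTALLDIR_OVERWRITE=no
        PHONEHOME_SEND_USAGE_DATA=no
        SIGNING_ENABLED=yes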

    Jefferson Lab - Thomas Jefferson National Accelerator Facility


    Jefferson Lab

    Principal Investigators:

    Balint Joo

    Balint Joo is a Computational Scientist working at Jefferson Lab on Lattice QCD calculations. He is a co-author and maintainer of the Chroma code for LQCD calculations, and is one of the original authors of the QPhiX library for Intel® Xeon® processors & Intel® Xeon Phi™ coprocessors.  His current main interests are enabling QCD calculations on new architectures, developing and integrating implementations of new algorithms into Chroma, and working to enable successful QCD calculations on upcoming clusters and extreme scale systems. 

     

    Chip Watson

    Chip Watson is the head of Scientific Computing at Jefferson Lab, a group that also encompasses High Performance Computing at the lab.  He has been the LQCD Computing architect for a number of Lattice QCD computing resources at the lab, optimizing the configurations of a succession of clusters including conventional x86 machines, GPU-accelerated machines, and most recently an Intel® Xeon Phi™ coprocessor (Knights Corner) cluster. He also leads the deployment and operation of resources for detector simulation and analysis of experimental physics data from the laboratory.

    Description:

    The Intel® Parallel Computing Center (Intel® PCC) at Jefferson Lab will develop and optimize codes for Lattice Quantum Chromodynamics (LQCD) calculations. Our focus is to enable the freely available Chroma LQCD code, to work with the greatest efficiency on current and future generations of Intel® Xeon® processors and Intel® Xeon Phi™ processor family systems, including the forthcoming Cori system at the National Energy Research Scientific Computing Center (NERSC) based on the future Intel® Xeon Phi™ processor family, and future systems such as the announced Aurora system at Argonne Leadership Computing Facility (ALCF). These optimizations will be made public, benefiting the worldwide community of Chroma users and other LQCD practitioners.

    Due to its layered and modular software architecture, the modernization of Chroma is primarily achieved through developing its sub-component layers. We will focus on two of these, notably the QDP++ data parallel domain-specific productivity layer on which Chroma is built, and QPhiX, which is an open source library of lattice QCD solvers developed in partnership with Intel® Parallel Computing Labs. We will extend QPhiX with additional improved solvers, and investigate design trade-offs which can arise when the problems being solved reside in different levels of the memory hierarchy. An efficient, portable implementation of the QDP++ sub-layer will be provided by the QDP-JIT/LLVM package.

    In addition we will continue to engage in active collaboration with our partners at Intel® Parallel Computing Labs, the Lattice QCD software community in the US and worldwide, within the U.S. Department of Energy SciDAC program, and engage with other high performance computing centers to share lessons learned and experiences gained during these code-modernization efforts. We will form partnerships with local universities in joint research activities, to further understand the best ways to exploit new multi-core architectures for LQCD calculations and to train the next generation of computational scientists.

    The research and development activities outlined above are funded through the U. S. Department of Energy, Offices of Nuclear Physics, High Energy Physics and Advanced Scientific Computing Research and are supported via partner institutions, for example the NERSC Exascale Application Program.

    Thomas Jefferson National Accelerator Facility (Jefferson Lab) is one of 17 national laboratories funded by the U.S. Department of Energy. The lab also receives support from the City of Newport News and the Commonwealth of Virginia. The lab’s primary mission is to conduct basic research of the atom's nucleus using the lab’s unique particle accelerator, known as the Continuous Electron Beam Accelerator Facility (CEBAF). Jefferson Lab also conducts a variety of research using its Free Electron Laser, which is based on the same electron-accelerating technology used in CEBAF. In addition to its science mission, the lab provides programs designed to help educate the next generation in science and technology, and to engage the public. Managing and operating the lab for DOE is Jefferson Science Associates, LLC. JSA is a limited liability company created by Southeastern Universities Research Association and PAE Applied Technologies.

    Related websites:

    http://www.jlab.org

    http://jeffersonlab.github.io/qphix/

    http://jeffersonlab.github.io/qphix-codegen

    http://jeffersonlab.github.io/chroma/

    Silent Installation Guide for Intel® Parallel Studio XE for Linux*


    NOTE: The configuration file format for this guide can be found in the “Configuration File Format” article.

    This guide provides details and instructions for the silent installation of the Intel® Parallel Studio XE 2017 Composer Editions for Linux*. The silent install method allows the user to perform a command line installation of an entire package with no need to answer prompts or make product selections. Refer to the Intel® Parallel Studio XE 2017 Release Notes for general installation advice.

    Below are the steps needed to install the Intel® Parallel Studio XE 2017 Composer Edition for Linux* in silent mode. There are two independent methods available, “Custom Configuration Method” and “Copy and Repeat Method”.

    Copy and Repeat Method

    The “copy and repeat” method allows you to complete the same installation multiple times. A typical scenario involves installing across multiple unique systems. Under this method, options meeting your particular needs/selections are saved in a configuration file during a first-time interactive installation. The resulting configuration file is then used on another system to repeat the same installation but in a “silent” manner.

             1. Extract the package tar file in a temporary directory and then create a new configuration file by running the install.sh script with the --duplicate option, specifying the name and location of the configuration file to be created.

                    a. cd /tmp          

                    b. tar -zxvf parallel_studio_xe_2017_initial_release.tgz  

                    c. cd parallel_studio_xe_2017_initial_release        

                    d. ./install.sh --duplicate /tmp/parallel_studio_xe_2017_initial_release/silent.cfg                                                                         

            2. To "repeat" the installation in a "silent" manner on a different system, copy the configuration file to             the new system, extract the package tar file in a temporary directory and then run the installation               and provide the configuration file.

                   a. cd /tmp

                   b. tar -zxvf parallel_studio_xe_2017_initial_release.tgz

                   c. cd parallel_studio_xe_2017_initial_release

                   d. ./install.sh --silent /tmp/silent.cfg

    Custom Configuration Method

    The “Custom Configuration” method allows you to complete silent installations using custom configuration settings from a configuration file.

    1. Ensure a valid Intel license file with world-readable permissions exists in the standard Intel license file directory, /opt/intel/licenses.
    2. Create or edit an existing configuration file. Refer to the Configuration File Format article for information on configuration file fields. Refer to the “Copy and Repeat Method” section for information about creating a configuration file.
    3. Extract the package tar file in a temporary directory and then run the installation script with the --silent option and specifying the custom configuration file. Refer to the example shown below.

    Example:  

                        a. cd /tmp

                        b. tar -zxvf parallel_studio_xe_2017_initial_release.tgz

                        c. cd parallel_studio_xe_2017_initial_release

                        d. ./install.sh --silent /tmp/parallel_studio_xe_2017_initial_release/silent.cfg

    Errors Installing Visual Studio 2013 Shell on Windows 10


    When installing an Intel Parallel Studio XE product including Fortran onto a system where no supported Microsoft Visual Studio is installed, Microsoft Visual Studio 2013 Shell will be installed to support Fortran. (For more information on Visual Studio requirements for Fortran see Intel C++ and Fortran Compilers for Windows* - Required Microsoft* development software)

    When installing onto a Windows 10 system, the Visual Studio 2013 Shell install may fail. The Intel installer will end with an error display similar to this, saying "Setup Wizard ended prematurely":

    Look at the "Module name" in the message. If it is "vs_isoshell.exe" or "vs_intshelladditional.exe", it is related to the Visual Studio 2013 Shell install. This article suggests several possibilities for recovering from this error.

    The first thing to do is to run the failing component installer manually. Look in C:\Users\<yourname>\Downloads\Intel\parallel_studio_xe_2017_setup\installs\cmp_and_libs\ and you will see one or more subfolders corresponding to the package ID of the install you are using, for example, "109". Under the package ID number for the current install, look in ww_vsshell_2013_isolated if the failing module is vs_isoshell.exe, or ww_vsshell_2013_integrated if the failing module is vs_intshelladditional.exe. In the respective folders you will find the executable that failed.

    Right-click on the EXE file, select "Run as administrator", and see if you can install the component that way. It will probably fail and give a more detailed error message. For example, when running vs_isoshell.exe:

    In this case it tells you that the computer needs to be restarted in order to complete a previous Windows Update. If you see this message, restart the computer and try running vs_isoshell.exe again.

    When the failing module is vs_intshelladditional.exe, you may see a different error "Windows Program Compatibility is on. Turn it off and then try setup again." In this case, the suggested steps are:

    • Go to Settings (in the Start menu), Updates. Check for Windows Updates and install all available updates. Restart the PC and try running vs_intshelladditional.exe again.
    • Log in to the computer using the local Administrator account. This is disabled by default; to enable it, see How to enable the hidden Windows 10 administrator account (this link is not hosted by Intel), and try running the EXE again.

    In most cases, one of these steps will allow the Shell to install. If successful, you should then rerun the full Intel Parallel Studio XE installer.

    If you need more assistance with this problem, please see Intel Software Support.

    Intel® IPP ZLIB Coding Functions


    1. Overview

    ZLIB is a lossless data compression method and software library by Jean-loup Gailly and Mark Adler, initially released in 1995; it has become the de facto standard for lossless data compression. ZLIB is an inherent part of almost all operating systems based on Linux*, including Android, as well as OS X* and versions for embedded and mobile platforms. Many applications, including software packages such as HTTP servers, use ZLIB as one (and sometimes the only) data compression method.

    The Intel® Integrated Performance Primitives (Intel® IPP) library has provided functionality supporting and optimizing the ZLIB library since Intel® IPP version 5.2. Unlike other ZLIB implementations, the Intel® IPP functions for ZLIB optimize not only the data compression part but decompression operations too.

    This article describes how Intel® IPP supports ZLIB, the Intel® IPP for ZLIB distribution model, and recent changes in Intel® IPP ZLIB functionality in version 2017, and provides performance data obtained on different Intel® platforms.

    2. ZLIB and Intel® IPP Implementation

    The distribution model of Intel® IPP for ZLIB is as follows:

    • The Intel® IPP library package provides files for source code patching for all ZLIB major versions, from 1.2.5.3 to 1.2.8. These patches should be applied to the ZLIB source code files downloaded from the ZLIB repositories at zlib.net (latest ZLIB version), or zlib.net/fossils for previous versions of ZLIB;
    • After the patch file is applied, the source code contains a set of conditional compilation constructs based on the WITH_IPP definition. For example (from file deflate.c):
    send_bits(s, (STATIC_TREES<<1)+last, 3);
    #if !defined(WITH_IPP)
    compress_block(s, (const ct_data *)static_ltree,(const ct_data *)static_dtree);
    #else
    {
      IppStatus status;
      status = ippsDeflateHuff_8u( (const Ipp8u*)s->l_buf, (const Ipp16u*)s->d_buf,
                         (Ipp32u)s->last_lit, (Ipp16u*)&s->bi_buf, (Ipp32u*)&s->bi_valid,
                         (IppDeflateHuffCode*)static_ltree, (IppDeflateHuffCode*)static_dtree,
                         (Ipp8u*)s->pending_buf, (Ipp32u*)&s->pending );
     Assert( ippStsNoErr == status, "ippsDeflateHuff_8u returned a bad status" );
    }
    send_code(s, END_BLOCK, static_ltree);
    #endif

    So, when a source code file is compiled without the WITH_IPP definition, the original ZLIB library is built. If the “-DWITH_IPP” compiler option is used, the Intel® IPP-enabled ZLIB library is produced. Of course, several other compiler/linker options are required to build ZLIB with IPP (see the sketch below).
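
    As a rough sketch only (the authoritative build steps are in the readme.html shipped with the Intel® IPP package, and the exact library names, paths, and options depend on your Intel® IPP version and platform), a Linux* build with Intel® IPP enabled follows this general pattern:

        # compile the patched ZLIB sources with the IPP code paths enabled
        gcc -O2 -fPIC -DWITH_IPP -I"$IPPROOT/include" -c *.c
        # link against the IPP data compression, signal processing, and core libraries
        gcc -shared -o libz.so *.o -L"$IPPROOT/lib/intel64" -lippdc -lipps -lippcore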

    Intel® IPP library has the following functions to support ZLIB functionality:
    Common functions:

    • ippsAdler32_8u,
    • ippsCRC32_8u

    For compression (deflate):

    • ippsDeflateLZ77Fast_8u,
    • ippsDeflateLZ77Fastest_8u,
    • ippsDeflateLZ77Slow_8u,
    • ippsDeflateHuff_8u,
    • ippsDeflateDictionarySet_8u,
    • ippsDeflateUpdateHash_8u

    For decompression (inflate):

    • ippsInflateBuildHuffTable,
    • ippsInflate_8u.

    Six source code files are patched in the ZLIB source code tree to call the optimized Intel® IPP functions:

    • adler32.c,
    • crc32.c,
    • deflate.c,
    • inflate.c,
    • inftrees.h,
    • trees.c.

    In general, the most compute intensive parts of ZLIB code are substituted with Intel® IPP function calls, all common/service parts of ZLIB remain intact.

    3. What’s New in Intel® IPP 2017 Implementation of ZLIB

    Intel® IPP 2017 adds some significant enhancements to the ZLIB optimization code, including faster CPU-specific optimizations, a new “fastest” compression level with the best compression performance, deflate parameters tuning support, and additional compression levels:

    3.1 CPU-Specific Optimizations

    Intel® IPP 2017 functions provide additional optimizations for new Intel® platforms. For particular ZLIB needs, the Intel® IPP 2017 library contains the following optimizations:

    • Checksum computing using modern Intel® CPU instructions;
    • Hash table operation using modern Intel® CPU instructions;
    • Huffman tables generation functionality;
    • Huffman tables decomposition during inflating;
    • Additional optimization of pattern matching algorithms (new in Intel® IPP 2017)

    3.2 New Fastest Compression Level

    The Intel® IPP 2017 for ZLIB implementation introduces a brand new compression level with the best compression performance. This is achieved by simplifying pattern matching, thus slightly decreasing the compression ratio.
    The new compression level, called “fastest”, has the numeric code -2 to distinguish it from the ZLIB “default” compression level (Z_DEFAULT_COMPRESSION = -1).
    The decrease in compression ratio can be seen in the following table:

    Data Compression Corpus    Ratio / Performance* (MBytes/s), level “fast” (1)    Ratio / Performance* (MBytes/s), level “fastest” (-2)
    Large Calgary              2.80 / 86                                            2.10 (-0.70) / 197 (+111)
    Canterbury                 3.09 / 107                                           2.26 (-0.83) / 294 (+187)
    Large (3 files)            3.10 / 97                                            2.01 (-1.09) / 209 (+112)
    Silesia                    2.80 / 89                                            2.16 (-0.64) / 194 (+105)

    Note: "Compression ratio” in the table above is geometric mean of ratios of uncompressed file sizes to compressed file sizes, “performance” is number of input data megabytes compressed per second measured on Intel® Xeon® processor E5-2680 v3, 2.5 GHz, single thread.

    3.3 Deflate Parameters Tuning

    To give additional freedom in tuning data compression parameters, Intel® IPP 2017 for ZLIB activates the original deflateTune function:

            ZEXTERN int ZEXPORT deflateTune OF((z_streamp strm, int good_length, int max_lazy,
                                            int nice_length, int max_chain));

    The purpose and usage of the function parameters is the same as in the original ZLIB deflate algorithm. The modified deflate function itself loads the pattern matching parameters from the configuration_table array in deflate.c, which holds pre-defined sets for each compression level.

    3.4 Additional Compression Levels

    The deflateTune function parameters give the freedom to modify the compression search algorithm to obtain the best trade-off between compression ratio and compression performance for particular customer needs. Nevertheless, finding an optimal parameter set is not straightforward, because the actual behavior of the compression functionality depends highly on input data specifics.

    The Intel® IPP team has run several experiments with different data and fixed some parameter sets as additional compression levels. The level values and input data characteristics are in the table below.

    Additional compression levels    Input data
    11-19                            General data (text documents, binary files) of large size (greater than 1 MB)
    21-29                            Highly-compressible data (database tables, text documents with repeating phrases, large uncompressed pictures like BMPs, PPMs)

    These sets are stored in the configuration_table array in the file deflate.c. The effect on compression ratio within levels 11 to 19 is the same as within the original levels 1 to 9; that is, a higher level provides better compression. You may use these sets, or discover your own.

    4. Getting Started With Intel® IPP 2017 ZLIB

    The process of preparing the Intel® IPP boosted Zlib library is described in the readme.html file provided with the Intel® IPP “components” package. It explains how to download the Zlib source code files from its site, how to un-archive and patch the source code files, and how to build Intel® IPP-enabled Zlib for different needs (static or dynamic Zlib libraries, statically or dynamically linked to Intel® IPP).

    5. Usage Notes for Intel® IPP ZLIB Functions

    5.1 Using the "Fastest" Compression Level

    In order to obtain better compression performance while keeping ZLIB (deflate) compatibility, the new “fastest” compression method is implemented. It is a light-weight compression, which:

    • Doesn’t look back in the dictionary to find a better match;
    • Doesn’t collect input stream statistics for better Huffman-based coding.

    This method corresponds to compression level “-2” and can be used as follows:

           z_stream str_deflate;
           str_deflate.zalloc = NULL;
           str_deflate.zfree  = NULL;
           str_deflate.opaque = NULL;  /* zalloc, zfree and opaque must be set before deflateInit */
           deflateInit(&str_deflate, -2);

    The output (compressed) stream, generated with “fastest” compression is fully compatible with “deflate” standard and can be decompressed using regular ZLIB.

    5.2 Tuning Compression Level

    In the Intel® IPP 2017 product, ZLIB-related functions use a table of substring matching parameters to control compression ratio and performance. This table, defined as configuration_table in the deflate.c file, contains sets of four values: max_chain, good_length, nice_match, and max_lazy. These values are described in the table below:

    Value          Description
    max_chain      Maximum number of searches in the dictionary for a better (higher matching length) substring match. Reasonable value range is 1-8192.
    good_length    If a substring of this or greater length is matched in the dictionary, the maximum number of searches for this particular input string is reduced fourfold. Reasonable value range is 4-258.
    nice_match     If a substring of this or greater length is matched in the dictionary, the search is stopped. Reasonable value range is 4-258.
    max_lazy       If this or a wider substring is found in the dictionary:
                   • For the fast compression method (compression levels 1 to 4), the hash table is not updated;
                   • For the slow compression method (levels 5 to 9), the search algorithm does not check nearest input data for a better match.

    Note: the final results of compression ratio and performance depend highly on input data specifics.

    The actual values of the parameters are shown in the table below:

    Compression level    Deflate function    max_chain    good_length    nice_match    max_lazy
    1                    Fast                4            8              8             8
    2                    Fast                4            16             16            9
    3                    Fast                4            16             16            12
    4                    Fast                48           32             32            16
    5                    Slow                32           8              32            16
    6                    Slow                128          8              256           16
    7                    Slow                144          8              256           16
    8                    Slow                192          32             258           128
    9                    Slow                256          32             258           258

    These values were chosen to give compression ratios similar to the original open-source ZLIB on standard data compression collections. You can try your own combinations of matching values using the deflateTune ZLIB function. For example, to change the max_chain value from 128 to 64 (and thus speed up compression with some compression ratio degradation), you can do the following:

        z_stream str_deflate;
        str_deflate.zalloc = NULL;
        str_deflate.zfree  = NULL;
        str_deflate.opaque = NULL;
        deflateInit(&str_deflate, Z_DEFAULT_COMPRESSION);
        /* deflateTune(strm, good_length, max_lazy, nice_length, max_chain) */
        deflateTune(&str_deflate, 8, 26, 256, 64);
        …
        deflateEnd(&str_deflate);

    Note that the string matching parameters remain changed for all subsequent compression operations (ZLIB deflate calls) with the str_deflate object, until it is destroyed or re-initialized with a deflateReset function call.

    5.3 Using Additional Compression Levels

    Some input data sets for compression have specifics: for example, the input data can be long, or it can be highly compressible.
    For such data we introduced additional compression levels, which are in fact calls to the same “fast” or “slow” compression functions, but with different sets of string matching values (a usage sketch follows the list below). The new compression levels are the following:

    • From 11 to 19 – compression levels for big input data buffers (1 Mbyte and longer);
    • From 21 to 29 – compression levels for highly-compressible data (compression ratio of 30x and more).
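
    Selecting one of these levels uses the ordinary deflateInit call; below is a minimal sketch (same stream setup as the “fastest” example above; level 16 is chosen here purely as an illustration):

        z_stream str_deflate;
        str_deflate.zalloc = NULL;
        str_deflate.zfree  = NULL;
        str_deflate.opaque = NULL;
        deflateInit(&str_deflate, 16);  /* levels 11-19: large input buffers */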

    For example, for levels 6 and 16 on the “Large” data compression corpus on an Intel® Xeon® processor E5-2680 v3, the geomean results are:

    Level    Ratio    Compression Performance (Mbyte/sec)
    6        3.47     17.7
    16       3.46     19.9

    For levels 6 and 26 on some synthetic highly-compressible data on an Intel® Xeon® processor E5-2680 v3, the geomean results are:

    Level    Ratio    Compression Performance (Mbyte/sec)
    6        218      768
    26       218      782

    Note: These levels are “experimental” and don’t guarantee improvements on all input data.

    Redistributable Libraries for Intel® Parallel Studio XE 2017 for C++ and Fortran Windows*


    Overview

    This article contains links to the redistributable installation packages for Intel® Parallel Studio XE 2017 Composer Edition for C++ Windows* and Intel® Parallel Studio XE 2017 Composer Edition for Fortran Windows*.

    If you are looking for other versions, please go to Redistributable Libraries by Version.

    The redistributable packages are for the end users who use applications that are built with Intel Compilers. Please note that there is one redistributable package for every compiler update. Make sure you download and install the one recommended by the application vendor.

    OS requirement for redistributable packages

    Please read the Release Notes of the update for supported OS distributions:

    Installation instructions

    The installation program of the redistributable package will guide you through the installation. You will need to accept the EULA and the installation will install all the libraries to the fixed directory: [Common Files]\Intel\Shared Libraries\

    The installation creates a new env-var "INTEL_DEV_REDIST" with the value of the above installation directory, and the PATH env-var is updated with [INTEL_DEV_REDIST]\redist\[ia32|intel64]\compiler and [INTEL_DEV_REDIST]\redist\[ia32|intel64]\mpirt (for Fortran packages). The "redist\intel64" directory is added only on 64-bit systems. See below for more information on PATH changes.

    Additionally, on 64-bit systems there is another subfolder, [INTEL_DEV_REDIST]\compiler\lib\mic, with redistributable libraries for the Intel® Many Integrated Core (Intel® MIC) architecture, and an environment variable MIC_LD_LIBRARY_PATH is set to this location.

    If you wish to install the redistributable package "silently", so that no output is presented to the user, run the executable with the following options added to the command line, for example:

    >> ww_icl_redist_msi_2017.0.109.msi /quiet /qn

    System PATH Environment Variable Changes

    Installation of the redistributable libraries, in either MSI or MSM form, adds folders containing the installed DLLs to the system PATH environment variable. Microsoft Windows has a limit on the total size of the value of PATH; in versions later than Windows 7 the limit is 4095 characters. This limit applies not only to the system-wide definition, but to the length as modified by any batch files or scripts run. If the length is exceeded, the value of PATH can be truncated, and this can cause Windows or some applications to operate improperly.

    If you are concerned that PATH may get truncated, you can prevent the redistributable installer from modifying PATH, but then it is your responsibility to make sure that the proper folders are named in PATH when programs built using the Intel compilers are executed.

    • If you are using the MSI installer, use the command line and add the parameter NO_UPDATE_PATH=yes. For example:
      msiexec /i ww_icl_redist_msi_2017.0.109.msi NO_UPDATE_PATH=yes
    • If you are using the MSM merge module, set the update property NO_UPDATE_PATH=yes in the installer properties.

    Testing your Installation:

    After installation of the Intel redistributable libraries AND the prerequisite Microsoft Visual C++ redistributables or Visual Studio with C++ tools and libraries, try to run your Intel-compiled binary. If there are any issues, please try to determine the missing DLLs or libraries using a tool such as Dependency Walker.

    Links to the redistributable packages

    Intel® Parallel Studio XE 2017 Composer Edition for C++ Windows*
        RTM: Redistributable library package
    Intel® Parallel Studio XE 2017 Composer Edition for Fortran Windows*
        RTM: Redistributable library package

    References

    Have Questions?

    Please consult the Intel User Forums:

     

    Redistributable Libraries for Intel® Parallel Studio XE 2017 Composer Edition for C++ and Fortran Linux*


    Overview

    This article contains links to the redistributable installation packages for Intel® Parallel Studio XE 2017 Composer Edition for C++ Linux* and Intel® Parallel Studio XE 2017 Composer Edition for Fortran Linux*.

    If you are looking for other versions, please go to Redistributable Libraries by Version.

    The redistributable packages are for the end users who use applications that are built with Intel Compilers. Please note that there is one redistributable package for every compiler update. Make sure you download and install the one recommended by the application vendor.

    OS requirement for redistributable packages

    Please read the Release Notes of the update for supported OS distributions:

    Installation instructions

    First of all, use the following command to untar the .tgz file:
           $ tar -xzvf l_comp_lib_2017.0.098_comp.cpp_redist.tgz

    To start the installation, run the following shell command:
           $ . ./l_comp_lib_2017.0.098_comp.cpp_redist/install.sh

    The installation shell program (install.sh) of the redistributable package will guide you through the installation. You will need to accept the EULA; the installation will install all the libraries to the directory shown below, though you can change the installation directory.

    For the redistributable package, the default installation directory is
            $HOME/intel/

    Links to the redistributable packages

    Intel® Parallel Studio XE 2017 Composer Edition for C++ Linux*
        RTM: Redistributable library package
    Intel® Parallel Studio XE 2017 Composer Edition for Fortran Linux*
        RTM: Redistributable library package

    References

    Have Questions?

    Please consult the Intel User Forums:


    Redistributable Libraries for Intel® Parallel Studio XE 2017 Composer Edition for C++ and Fortran OS X*


    Overview

    This article contains links to the redistributable installation packages for Intel® Parallel Studio XE 2017 Composer Edition for C++ OS X* and Intel® Parallel Studio XE 2017 Composer Edition for Fortran OS X*. For other versions, please go to Redistributable Libraries by Version.

    The redistributable packages are for the end users who use applications that are built with Intel Compilers. Please note that there is one redistributable package for every compiler update. Make sure you download and install the one recommended by the application vendor.

    OS requirement for redistributable packages

    Please read the Release Notes of the update for supported OS distributions:

    Installation instructions

    The installation is easy. Just double-click on the downloaded file, e.g. "m_comp_lib_icc_redist_2017.0.102.dmg", and a new folder opens containing the file "m_comp_lib_icc_redist_2017.0.102.pkg".

    To start the installation, double-click on the file "m_comp_lib_icc_redist_2017.0.102.pkg". It will then guide you through the installation of the redistributable libraries.

    You will need to accept the EULA. The default installation directory is: $HOME/intel/redist

    Links to the redistributable packages

    Intel® Parallel Studio XE 2017 Composer Edition for C++ OS X*
        RTM: Redistributable library package
    Intel® Parallel Studio XE 2017 Composer Edition for Fortran OS X*
        RTM: Redistributable library package

    References

    Have Questions?

    Please consult the Intel User Forums:

    What's New? Intel® Threading Building Blocks 2017


    One of the best-known C++ threading libraries, Intel® Threading Building Blocks (Intel® TBB), was recently updated to the new 2017 release. The updated version contains several key new features compared to the previous 4.4 release. Some of them were already released in Intel® TBB 4.4 updates.

    Licensing

    Like Intel® TBB 2.0, Intel® TBB 2017 brings both technical improvements and greater openness: the switch to an Apache* 2.0 license should enable it to take root in more environments, while continuing to simplify effective use of multicore hardware.

    Parallel algorithms

    static_partitioner

    Intel® TBB 2017 has expanded the set of partitioners with tbb::static_partitioner. It can be used in tbb::parallel_for and tbb::parallel_reduce to split the work uniformly among workers. The work is initially split into chunks of approximately equal size. The number of chunks is determined at runtime to minimize the overhead of work splitting while providing enough tasks for the available workers. Whether these chunks may be further split is unspecified. This reduces the overhead involved when the work is originally well-balanced. However, it limits available parallelism and, therefore, might result in performance loss for non-balanced workloads.
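
    As an illustration, here is a minimal sketch of tbb::static_partitioner with tbb::parallel_for (the vector size and loop body are arbitrary choices for this example):

        #include "tbb/parallel_for.h"
        #include "tbb/blocked_range.h"
        #include "tbb/partitioner.h"
        #include <vector>

        int main() {
            std::vector<float> a(1000000, 1.0f);
            // The range is split into roughly equal chunks up front,
            // minimizing splitting overhead for well-balanced work.
            tbb::parallel_for(
                tbb::blocked_range<size_t>(0, a.size()),
                [&](const tbb::blocked_range<size_t>& r) {
                    for (size_t i = r.begin(); i != r.end(); ++i)
                        a[i] *= 2.0f;
                },
                tbb::static_partitioner());
            return 0;
        }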

    Tasks

    Added the tbb::task_arena::max_concurrency() method, returning the maximal number of threads that can work inside an arena. The amount of concurrency reserved for application threads at tbb::task_arena construction can be set to any value between 0 and the arena concurrency limit.

    The namespace tbb::this_task_arena collects functionality for querying the arena in which the current task is executed. It has been extended with new functionality (a small query sketch follows this list):

    • In previous releases, the static method tbb::task_arena::current_thread_index() was used to get the current thread slot index in the current arena. It is now deprecated, and the functionality has moved to tbb::this_task_arena; use the tbb::this_task_arena::current_thread_index() function instead.
    • Added tbb::this_task_arena::max_concurrency(), which returns the maximum number of threads that can work on the current arena.
    • (Preview Feature) Use the tbb::this_task_arena::isolate() function to isolate execution of a group of tasks or an algorithm from other tasks submitted to the scheduler.
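
    A minimal query sketch (assuming Intel® TBB 2017 headers, and a thread that is executing inside some arena):

        #include "tbb/task_arena.h"
        #include <iostream>

        int main() {
            // Query properties of the arena the calling thread currently works in.
            std::cout << "current slot: "
                      << tbb::this_task_arena::current_thread_index() << "\n"
                      << "arena concurrency: "
                      << tbb::this_task_arena::max_concurrency() << "\n";
            return 0;
        }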

    Memory Allocation

    Improved dynamic memory allocation replacement on Windows* OS to skip DLLs for which replacement cannot be done, instead of aborting.

    For 64-bit platforms, quadrupled the worst-case limit on the amount of memory the Intel® TBB allocator can handle.

    Intel® TBB no longer performs dynamic replacement of memory allocation functions for Microsoft Visual Studio 2008 and earlier versions.

    Flow Graph

    async_node

     

    Now a fully supported feature.

    The tbb::flow::async_node is re-implemented using the tbb::flow::multifunction_node template, which makes it possible to specify a concurrency level for the node.

    The class template tbb::flow::async_node allows users to coordinate with an activity that is serviced outside of the Intel® TBB thread pool. If your flow graph application needs to communicate with a separate thread, runtime, or device, tbb::flow::async_node might be helpful. It has interfaces to commit results back, maintaining two-way asynchronous communication between an Intel® TBB flow graph and an external computing entity. The tbb::flow::async_node class was a preview feature in Intel® TBB 4.4.

     

    async_msg

    Since Intel® TBB 4.4 Update 3, a special tbb::flow::async_msg message type has been available to support communication between the flow graph and external asynchronous activities.

    opencl_node

    Support for streaming workloads to external computing devices was significantly reworked in Intel® TBB 2017 and is introduced as a preview feature. The Intel® TBB flow graph can now be used as a composability layer for heterogeneous computing.

    A class template tbb::flow::streaming_node was added to the flow graph API. It allows a flow graph to offload computations to other devices through streaming or offloading APIs. The “streaming” concept uses several abstractions, such as StreamFactory to produce instances of computational environments, kernel to encapsulate a computing routine, and device_selector to access a particular device.

    The following example shows a simple OpenCL* kernel invocation.

    File sqr.cl

    __kernel
    void Sqr( __global float *b2, __global float *b3   )
    {
        const int index = get_global_id(0);
        b3[index] = b2[index]*b2[index];
    }
    

    File opencl_test.cpp

    #define TBB_PREVIEW_FLOW_GRAPH_NODES 1
    #define TBB_PREVIEW_FLOW_GRAPH_FEATURES 1
    
    #include <cmath>
    #include <iterator>
    #include <vector>
    #include "tbb/flow_graph_opencl_node.h"
    using namespace tbb::flow;
    
    bool opencl_test()   {
       opencl_graph g;
       const int N = 1 * 1024 * 1024;
       opencl_buffer<float>  b2( g, N ), b3( g, N );
       std::vector<float>  v2( N ), v3( N );
    
       auto i2 = b2.access<write_only>();
       for ( int i = 0; i < N; ++i ) {
            i2[i] = v2[i] = float( i );
       }
       // Create an OpenCL program
       opencl_program<> p( g, PathToFile("sqr.cl") ) ;
       // Create an OpenCL computation node with kernel "Sqr"
       opencl_node <tuple<opencl_buffer<float>, opencl_buffer<float>>> k2( g, p.get_kernel( "Sqr" ) );
       // define iteration range
       k2.set_range( {{ N },{ 16 }} );
       // initialize input and output buffers
       k2.try_put( std::tie( b2, b3 ) );
       // run the flow graph computations
       g.wait_for_all();
    
        // validation
        auto o3 = b3.access<read_only>();
        bool comp_result = true;
        for ( int i = 0; i < N; ++i ) {
            comp_result = comp_result && ( std::fabs( o3[i] - v2[i] * v2[i] ) < 0.1e-7 );
        }
        return comp_result;
     }
    

     

    Some other improvements in the Intel® TBB flow graph

    • Removed a few cases of excessive user data copying in the flow graph.
    • Reworked tbb::flow::split_node to eliminate unnecessary overheads.

    Important note: Internal layout of some flow graph nodes has changed; recompilation is recommended for all binaries that use the flow graph.

     

    Python

Intel® TBB 2017 provides an experimental module that unlocks additional performance for multi-threaded Python programs by enabling threading composability between two or more thread-enabled libraries.

Threading composability can accelerate programs by avoiding inefficient thread allocation (oversubscription), which occurs when there are more active software threads than available hardware resources.

The biggest improvement is achieved when a task pool such as ThreadPool from the standard library, or a library like Dask or Joblib (used in multi-threading mode), executes tasks that call compute-intensive functions of NumPy/SciPy/pyDAAL, which in turn are parallelized using Intel® Math Kernel Library (Intel® MKL) and/or Intel® TBB.

The module implements a Pool class with the standard interface on top of Intel® TBB, which can be used to replace Python’s ThreadPool. Thanks to the monkey-patching technique implemented in the Monkey class, no source code changes are needed in order to unlock additional speedups.
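
For illustration, a minimal sketch of both approaches. The Pool and Monkey class names come from the description above; the module name (assumed here to be tbb) may differ depending on how the module is packaged and installed:

import tbb  # assumption: the experimental module is importable under this name

# Pool implements the standard thread-pool interface on top of Intel TBB:
pool = tbb.Pool()
print(pool.map(lambda x: x * x, range(8)))

# Monkey patches the standard ThreadPool, so existing code needs no changes:
with tbb.Monkey():
    from multiprocessing.pool import ThreadPool
    print(ThreadPool().map(lambda x: x * x, range(8)))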

For more details see: Unleash parallel performance of Python programs

    Miscellaneous

    • Added TBB_USE_GLIBCXX_VERSION macro to specify the version of GNU libstdc++ when it cannot be properly recognized, e.g. when used with Clang on Linux* OS.
    • Added support for C++11 move semantics to the argument of tbb::parallel_do_feeder::add() method.
    • Added C++11 move constructor and assignment operator to tbb::combinable class template.

     

    Samples

All examples for the commercial version of the library have moved online: https://software.intel.com/en-us/product-code-samples. Examples are available as a standalone package or as part of the Intel® Parallel Studio XE or Intel® System Studio Online Samples packages.

Added a graph/stereo example to demonstrate tbb::flow::async_msg and tbb::flow::opencl_node.

     

You can download the latest Intel® TBB version from http://threadingbuildingblocks.org and https://software.intel.com/en-us/articles/intel-tbb.

    Announcing Intel® Parallel Studio 2017


    The new 2017 version of Intel® Parallel Studio XE is here—and it's better than ever. The premier suite for parallel applications gives you new software development tools to boost application performance—including Python* code, machine learning applications, and more—while simplifying the design, debugging, and tuning of parallel code.


    Intel Parallel Studio XE comes in three editions. Get the one created for your development needs:

    Composer Edition: Includes Intel® C++ and Fortran Compilers, performance libraries, parallel models, and Intel® Distribution for Python*.

    Professional Edition: Includes everything in the Composer Edition, plus performance profiling, a memory and thread debugger, and design tools to simplify adding threading and vectorization.

    Cluster Edition: Includes everything in the Professional Edition, plus an MPI library, MPI profiling and error-checking tools, and an advanced cluster diagnostic expert system in a tool.

    Learn more or get a free 30-day evaluation.

    Intel® XDK FAQs - Cordova


    How do I set app orientation?

    You set the orientation under the Build Settings section of the Projects tab.

To control the orientation of an iPad you may need to create a simple plugin that contains a single plugin.xml file like the following:

<config-file target="*-Info.plist" parent="UISupportedInterfaceOrientations~ipad" overwrite="true">
    <string></string>
</config-file>
<config-file target="*-Info.plist" parent="UISupportedInterfaceOrientations~ipad" overwrite="true">
    <array>
        <string>UIInterfaceOrientationPortrait</string>
    </array>
</config-file>

    Then add the plugin as a local plugin using the plugin manager on the Projects tab.

HINT: to import the plugin.xml file you created above, you must select the folder that contains the plugin.xml file; you cannot select the plugin.xml file itself in the import dialog, because a typical plugin consists of many files, not a single plugin.xml. The plugin you created from the instructions above requires only a single file, so it is an atypical plugin.

    Alternatively, you can use this plugin: https://github.com/yoik/cordova-yoik-screenorientation. Import it as a third-party Cordova* plugin using the plugin manager with the following information:

    • cordova-plugin-screen-orientation
    • specify a version (e.g. 1.4.0) or leave blank for the "latest" version

Or you can reference it directly from its GitHub repo (the URL above).

To use the screen orientation plugin referenced above you must add some JavaScript code to your app that calls the additional JavaScript API provided by this plugin. Simply adding the plugin will not automatically fix your orientation; you must add code to your app that takes care of this. See the plugin's GitHub repo for details on how to use that API.
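
For example, a minimal sketch using the lockOrientation call this plugin exposes (check the plugin's repo for the exact API in the version you import):

document.addEventListener("deviceready", function () {
    screen.lockOrientation("portrait");    // lock the app to portrait
    // screen.unlockOrientation();         // call later to allow rotation again
}, false);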

    Is it possible to create a background service using Intel XDK?

Background services require the use of specialized Cordova* plugins that need to be created specifically for your needs. The Intel XDK does not support development or debug of plugins, only the use of them as "black boxes" with your HTML5 app. Background services can be accomplished using Java on Android or Objective-C on iOS. If a plugin already exists that performs the required functions in the background (for example, this plugin for background geo tracking), the Intel XDK's build system will work with it.

    How do I send an email from my App?

    You can use the Cordova* email plugin or use web intent - PhoneGap* and Cordova* 3.X.

    How do you create an offline application?

You can use the technique described here: create an offline.appcache file and set it up to store the files that are needed to run the program offline. Note that offline applications need to be built using the Cordova* or Legacy Hybrid build options.
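
A minimal sketch of an offline.appcache manifest (the file names are placeholders for your own assets); reference it from your html tag as <html manifest="offline.appcache">:

CACHE MANIFEST
# v1 - change this comment to force clients to re-download the cached files
index.html
js/app.js
css/style.css

NETWORK:
*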

    How do I work with alarms and timed notifications?

Unfortunately, alarms and notifications are advanced subjects that require a background service. This cannot be implemented in HTML5 and can only be done in native code by using a plugin. Background services require the use of specialized Cordova* plugins that need to be created specifically for your needs. The Intel XDK does not support the development or debug of plugins, only the use of them as "black boxes" with your HTML5 app. Background services can be accomplished using Java on Android or Objective-C on iOS. If a plugin already exists that performs the required functions in the background (for example, this plugin for background geo tracking), the Intel XDK's build system will work with it.

    How do I get a reliable device ID?

    You can use the Phonegap/Cordova* Unique Device ID (UUID) plugin for Android*, iOS* and Windows* Phone 8.

    How do I implement In-App purchasing in my app?

    There is a Cordova* plugin for this. A tutorial on its implementation can be found here. There is also a sample in Intel XDK called 'In App Purchase' which can be downloaded here.

    How do I install custom fonts on devices?

Fonts can be included with your app as an asset, just like the images and CSS files that are private to the app; they are not shared among other apps on the device. (It is possible to share some files between apps using, for example, the SD card space on an Android* device.) If you include the font files as assets in your application there is no download time to consider: they are part of your app and already exist on the device after installation.
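
For example, a minimal sketch that registers a bundled font with CSS (the file path and family name are placeholders; the path is relative to your app's www folder):

@font-face {
    font-family: "MyAppFont";          /* the name referenced below */
    src: url("fonts/MyAppFont.ttf");   /* the font file bundled as an app asset */
}
body {
    font-family: "MyAppFont", sans-serif;
}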

    How do I access the device's file storage?

You can use HTML5 local storage; this is a good article to get started with. Alternatively, there is a Cordova* file plugin for that.

Why aren't AppMobi* push notification services working?

    This seems to be an issue on AppMobi's end and can only be addressed by them. PushMobi is only available in the "legacy" container. AppMobi* has not developed a Cordova* plugin, so it cannot be used in the Cordova* build containers. Thus, it is not available with the default build system. We recommend that you consider using the Cordova* push notification plugin instead.

    How do I configure an app to run as a service when it is closed?

    If you want a service to run in the background you'll have to write a service, either by creating a custom plugin or writing a separate service using standard Android* development tools. The Cordova* system does not facilitate writing services.

    How do I dynamically play videos in my app?

1. Download the JavaScript and CSS files from https://github.com/videojs and include them in your project file.
2. Add references to them in your index.html file.
3. Add a panel 'main1' that will play the video. This panel will be launched when the user clicks on the video in the main panel.

       
<div class="panel" id="main1" data-appbuilder-object="panel" style="">
    <video id="example_video_1" class="video-js vjs-default-skin" controls="controls" preload="auto" width="200" poster="camera.png" data-setup="{}">
        <source src="JAIL.mp4" type="video/mp4">
        <p class="vjs-no-js">To view this video please enable JavaScript*, and consider upgrading to a web browser that <a href="http://videojs.com/html5-video-support/" target="_blank">supports HTML5 video</a></p>
    </video>
    <a onclick="runVid3()" href="#" class="button" data-appbuilder-object="button">Back</a>
</div>
    4. When the user clicks on the video, the click event sets the 'src' attribute of the video element to what the user wants to watch.

       
function runVid2(){
      document.getElementsByTagName("video")[0].setAttribute("src","appdes.mp4");
      $.ui.loadContent("#main1",true,false,"pop");
}
    5. The 'main1' panel opens waiting for the user to click the play button.

    NOTE: The video does not play in the emulator and so you will have to test using a real device. The user also has to stop the video using the video controls. Clicking on the back button results in the video playing in the background.

    How do I design my Cordova* built Android* app for tablets?

    This page lists a set of guidelines to follow to make your app of tablet quality. If your app fulfills the criteria for tablet app quality, it can be featured in Google* Play's "Designed for tablets" section.

    How do I resolve icon related issues with Cordova* CLI build system?

Ensure icon sizes are properly specified in the intelxdk.config.additions.xml file. For example, if you are targeting iOS 6, you need to manually specify the icon sizes that iOS* 6 uses.

    <icon platform="ios" src="images/ios/72x72.icon.png" width="72" height="72" /><icon platform="ios" src="images/ios/57x57.icon.png" width="57" height="57" />

These icons are not added automatically by the build system, so you will have to include them in the additions file.

    For more information on adding build options using intelxdk.config.additions.xml, visit: /en-us/html5/articles/adding-special-build-options-to-your-xdk-cordova-app-with-the-intelxdk-config-additions-xml-file

    Is there a plugin I can use in my App to share content on social media?

    Yes, you can use the PhoneGap Social Sharing plugin for Android*, iOS* and Windows* Phone.

    Iframe does not load in my app. Is there an alternative?

    Yes, you can use the inAppBrowser plugin instead.

    Why are intel.xdk.istablet and intel.xdk.isphone not working?

Those properties are quite old and are based on the legacy AppMobi* system. An alternative is to detect the viewport size instead. You can get the user's screen size using the screen.width and screen.height properties (refer to this article for more information) and control the actual view of the webview by using the viewport meta tag (this page has several examples). You can also look through this forum thread for a detailed discussion on the same.
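
For example, a minimal sketch of the viewport-size approach (the 600-pixel threshold is an assumption; pick a breakpoint that suits your layout):

// Use the shorter screen dimension so the result is rotation-independent.
var isTablet = Math.min(screen.width, screen.height) >= 600;
var isPhone  = !isTablet;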

    How do I enable security in my app?

We recommend using the App Security API, a collection of JavaScript APIs for hybrid HTML5 application developers. It enables developers, even those who are not security experts, to take advantage of the security properties and capabilities supported by the platform. The API collection is available to developers in the form of a Cordova plugin (JavaScript API and middleware), supported on the following operating systems: Windows, Android, and iOS.
    For more details please visit: https://software.intel.com/en-us/app-security-api.

To enable it, select the App Security plugin in the plugin list of the Projects tab and build your app as a Cordova Hybrid app. After adding the plugin, you can start using it simply by calling its API. For more details about how to get started with the App Security API plugin, please see the relevant sample app articles at: https://software.intel.com/en-us/xdk/article/my-private-photos-sample and https://software.intel.com/en-us/xdk/article/my-private-notes-sample.

    Why does my build fail with Admob plugins? Is there an alternative?

The Intel XDK does not support the library project that was introduced in the com.google.playservices@21.0.0 plugin. Admob plugins depend on "com.google.playservices", which adds the Google* Play Services jar to the project. The "com.google.playservices@19.0.0" plugin is a simple jar file that works quite well, but "com.google.playservices@21.0.0" uses a new feature to include a whole library project. It works if built locally with the Cordova CLI, but fails when using the Intel XDK.

To remain compatible with the Intel XDK, the dependency of the Admob plugin should be changed to "com.google.playservices@19.0.0".

    Why does the intel.xdk.camera plugin fail? Is there an alternative?

There seem to be some general issues with the camera plugin on iOS*. An alternative is to use the Cordova camera plugin instead, and change the version to 0.3.3.

    How do I resolve Geolocation issues with Cordova?

Give this app a try; it contains lots of useful comments and console log messages. However, use the Cordova 0.3.10 version of the geo plugin instead of the Intel XDK geo plugin. The Intel XDK buttons in the sample app will not work in a built app because the Intel XDK geo plugin is not included; however, they will partially work in the Emulator and Debug tabs. If you test it on a real device without the Intel XDK geo plugin selected, you should be able to see what is working and what is not on your device. There is a problem with the Intel XDK geo plugin: it cannot be used in the same build with the Cordova geo plugin. Do not use the Intel XDK geo plugin, as it will be discontinued.

Geo fine might not work for any of the following reasons:

    1. Your device does not have a GPS chip
    2. It is taking a long time to get a GPS lock (if you are indoors)
    3. The GPS on your device has been disabled in the settings

Geo coarse is the safest bet to quickly get an initial reading. It will produce a reading based on a variety of inputs; it is usually not as accurate as geo fine, but generally accurate enough to know what town you are located in and your approximate location in that town. Geo coarse will also prime the geo cache so there is something to read when you try to get a geo fine reading. Ensure your code can handle situations where you might not be getting any geo data, as there is no guarantee you'll be able to get a geo fine reading at all, or in a reasonable period of time; success with geo fine is highly dependent on many parameters that are typically outside of your control.
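
For example, a minimal sketch using the standard Cordova/W3C geolocation API, where enableHighAccuracy: false requests a coarse reading and true requests a fine one:

function onGeoSuccess(pos) {
    console.log("lat:", pos.coords.latitude, "lon:", pos.coords.longitude);
}
function onGeoError(err) {
    console.log("geo error (" + err.code + "): " + err.message);
}
// Coarse reading first: fast, and it primes the geo cache.
navigator.geolocation.getCurrentPosition(onGeoSuccess, onGeoError,
    { enableHighAccuracy: false, timeout: 10000, maximumAge: 60000 });
// Fine reading: may be slow or fail entirely, so always handle the error case.
navigator.geolocation.getCurrentPosition(onGeoSuccess, onGeoError,
    { enableHighAccuracy: true, timeout: 30000, maximumAge: 0 });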

    Is there an equivalent Cordova* plugin for intel.xdk.player.playPodcast? If so, how can I use it?

    Yes, there is and you can find the one that best fits the bill from the Cordova* plugin registry.

    To make this work you will need to do the following:

    • Detect your platform (you can use uaparser.js or you can do it yourself by inspecting the user agent string)
    • Include the plugin only on the Android* platform and use <video> on iOS*.
    • Create conditional code to do what is appropriate for the platform detected

    You can force a plugin to be part of an Android* build by adding it manually into the additions file. To see what the basic directives are to include a plugin manually:

    1. Include it using the "import plugin" dialog, perform a build and inspect the resulting intelxdk.config.android.xml file.
    2. Then remove it from your Project tab settings, copy the directive from that config file and paste it into the intelxdk.config.additions.xml file. Prefix that directive with <!-- +Android* -->.

    More information is available here and this is what an additions file can look like:

<preference name="debuggable" value="true" />
<preference name="StatusBarOverlaysWebView" value="false" />
<preference name="StatusBarBackgroundColor" value="#000000" />
<preference name="StatusBarStyle" value="lightcontent" />
<!-- -iOS* -->
<intelxdk:plugin intelxdk:value="nl.nielsad.cordova.wifiscanner" />
<!-- -Windows*8 -->
<intelxdk:plugin intelxdk:value="nl.nielsad.cordova.wifiscanner" />
<!-- -Windows*8 -->
<intelxdk:plugin intelxdk:value="org.apache.cordova.statusbar" />
<!-- -Windows*8 -->
<intelxdk:plugin intelxdk:value="https://github.com/EddyVerbruggen/Flashlight-PhoneGap-Plugin" />

This sample forces a plugin that was included with the "import plugin" dialog to be excluded from the platforms shown (each <!-- -platform --> comment excludes the directive that follows it from that platform). You can then include the plugin only on the Android* platform by combining these directives with conditional code and one or more appropriate plugins.

    How do I display a webpage in my app without leaving my app?

    The most effective way to do so is by using inAppBrowser.

    Does Cordova* media have callbacks in the emulator?

    While Cordova* media objects have proper callbacks when using the debug tab on a device, the emulator doesn't report state changes back to the Media object. This functionality has not been implemented yet. Under emulation, the Media object is implemented by creating an <audio> tag in the program under test. The <audio> tag emits a bunch of events, and these could be captured and turned into status callbacks on the Media object.

    Why does the Cordova version number not match the Projects tab's Build Settings CLI version number, the Emulate tab, App Preview and my built app?

    This is due to the difficulty in keeping different components in sync and is compounded by the version numbering convention that the Cordova project uses to distinguish build tool versions (the CLI version) from platform versions (the Cordova target-specific framework version) and plugin versions.

The CLI version you specify in the Projects tab's Build Settings section is the "Cordova CLI" version that the build system uses to build your app. Each version of the Cordova CLI tools comes with a set of "pinned" Cordova platform framework versions, which are tied to the target platform.

    NOTE: the specific Cordova platform framework versions shown below are subject to change without notice.

    Our Cordova CLI 4.1.2 build system was "pinned" to: 

    • cordova-android@3.6.4 (Android Cordova platform version 3.6.4)
    • cordova-ios@3.7.0 (iOS Cordova platform version 3.7.0)
    • cordova-windows@3.7.0 (Cordova Windows platform version 3.7.0)

    Our Cordova CLI 5.1.1 build system is "pinned" to:

    • cordova-android@4.1.1 (as of March 23, 2016)
    • cordova-ios@3.8.0
    • cordova-windows@4.0.0

    Our Cordova CLI 5.4.1 build system is "pinned" to: 

    • cordova-android@5.0.0
    • cordova-ios@4.0.1
    • cordova-windows@4.3.1

    Our Cordova CLI 6.2.0 build system is "pinned" to: 

    • cordova-android@5.1.1
    • cordova-ios@4.1.1
    • cordova-windows@4.3.2

Our CLI 6.2.0 build system is nearly identical to a standard Cordova CLI 6.2.0 installation. A standard 6.2.0 installation differs slightly from our build system because it specifies the cordova-ios@4.1.0 and cordova-windows@4.3.1 platform versions. There are no differences in the cordova-android platform versions.

    Our CLI 5.4.1 build system really should be called "CLI 5.4.1+" because the platform versions it uses are closer to the "pinned" versions in the Cordova CLI 6.0.0 release than those "pinned" in the original CLI 5.4.1 release.

Our CLI 5.1.1 build system has been deprecated as of August 2, 2016, and will be retired in an upcoming fall 2016 release of the Intel XDK. It is highly recommended that you upgrade your apps to build with Cordova CLI 6.2.0 as soon as possible.

    The Cordova platform framework version you get when you build an app does not equal the CLI version number in the Build Settings section of the Projects tab; it equals the Cordova platform framework version that is "pinned" to our build system's CLI version (see the list of pinned versions, above).

    Technically, the target-specific Cordova platform frameworks can be updated [independently] for a given version of CLI tools. In some cases, our build system may use a Cordova platform version that is later than the version that was "pinned" to that version of the CLI when it was originally released by the Cordova project (that is, the Cordova platform versions originally specified by the Cordova CLI x.y.z links above).

    You may see Cordova platform version differences in the Simulate tab, App Preview and your built app due to:

    • The Simulate tab uses one specific Cordova framework version. We try to make sure that the version of the Cordova platform it uses closely matches the current default Intel XDK version of Cordova CLI.

    • App Preview is released independently of the Intel XDK and, therefore, may use a different platform version than what you will see reported by the Simulate tab or your built app. Again, we try to release App Preview so it matches the version of the Cordova framework that is considered to be the default version for the Intel XDK at the time App Preview is released; but since the various tools are not always released in perfect sync, that is not always possible.

    • Your app is built with a "pinned" Cordova platform version, which is determined by the Cordova CLI version you specified in the Projects tab's Build Settings section. There are always at least two different CLI versions available in the Intel XDK build system.

    • For those versions of Crosswalk that were built with the Intel XDK CLI 4.1.2 build system, the cordova-android framework version was determined by the Crosswalk project, not by the Intel XDK build system.

    • When building an Android-Crosswalk app with Intel XDK CLI 5.1.1 and later, the cordova-android framework version equals the "pinned" cordova-android platform version for that CLI version (see lists above).

    Do these Cordova platform framework version numbers matter? Occasionally, yes, but normally, not that much. There are some issues that come up that are related to the Cordova platform version, but they tend to be rare. The majority of the bugs and compatibility issues you will experience in your app have more to do with the versions and mix of Cordova plugins you choose to use and the HTML5 webview runtime on your test devices. See When is an HTML5 Web App a WebView App? for more details about what a webview is and how the webview affects your app.

    The "default version" of CLI that the Intel XDK build system uses is rarely the most recent version of the Cordova CLI tools distributed by the Cordova project. There is always a lag between Cordova project releases and our ability to incorporate those releases into our build system and other Intel XDK components. In addition, we are not able to provide every CLI release that is made available by the Cordova project.

    How do I add a third party plugin?

Please follow the instructions on this doc page to add a third-party plugin: Adding Plugins to Your Intel® XDK Cordova* App. You will see the plugin in the build log if it was successfully added to your build; if it does not appear there, it is not being included as part of your app.

    How do I make an AJAX call that works in my browser work in my app?

    Please follow the instructions in this article: Cordova CLI 4.1.2 Domain Whitelisting with Intel XDK for AJAX and Launching External Apps.

    I get an "intel is not defined" error, but my app works in Test tab, App Preview and Debug tab. What's wrong?

    When your app runs in the Test tab, App Preview or the Debug tab the intel.xdk and core Cordova functions are automatically included for easy debug. That is, the plugins required to implement those APIs on a real device are already included in the corresponding debug modules.

When you build your app you must include the plugins that correspond to the APIs you are using in your build settings. This means you must enable the Cordova and/or XDK plugins that correspond to those APIs. Go to the Projects tab and ensure that the plugins you need are selected in your project's plugin settings. See Adding Plugins to Your Intel® XDK Cordova* App for additional details.

    How do I target my app for use only on an iPad or only on an iPhone?

There is an undocumented feature in Cordova that should help you (the Cordova project provided this feature but failed to document it for the rest of the world). If you use the appropriate preference in the intelxdk.config.additions.xml file you should get what you need:

<preference name="target-device" value="tablet" />     <!-- Installs on iPad, not on iPhone -->
<preference name="target-device" value="handset" />    <!-- Installs on iPhone; iPad installs in a zoomed view and doesn't fill the entire screen -->
<preference name="target-device" value="universal" />  <!-- Installs on iPhone and iPad correctly -->

    If you need info regarding the additions.xml file, see the blank template or this doc file: Adding Intel® XDK Cordova Build Options Using the Additions File.

    Why does my build fail when I try to use the Cordova* Capture Plugin?

The Cordova* Capture plugin has a dependency on the File plugin. Please make sure you have both plugins selected on the Projects tab.

    How can I pinch and zoom in my Cordova* app?

For now, using the viewport meta tag is the only option for enabling pinch and zoom. However, its behavior is unpredictable in different webviews. Testing a few sample apps has led us to believe that this feature works better on Crosswalk for Android. You can test this by building the Hello Cordova sample app for Android and for Crosswalk for Android; pinch and zoom will work only on the latter, though both builds contain:

    <meta name="viewport" content="width=device-width, initial-scale=1, user-scalable=yes, minimum-scale=1, maximum-scale=2">.

    Please visit the following pages to get a better understanding of when to build with Crosswalk for Android:

    http://blogs.intel.com/evangelists/2014/09/02/html5-web-app-webview-app/

    https://software.intel.com/en-us/xdk/docs/why-use-crosswalk-for-android-builds

    Another device oriented approach is to enable it by turning on Android accessibility gestures.

    How do I make my Android application use the fullscreen so that the status and navigation bars disappear?

The Cordova* fullscreen plugin can be used to do this. For example, in your initialization code, include this function call: AndroidFullScreen.immersiveMode(null, null);

You can get this third-party plugin from https://github.com/mesmotronic/cordova-fullscreen-plugin.
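
Putting both pieces together, a minimal sketch of the initialization code (the immersiveMode call is taken from the answer above):

document.addEventListener("deviceready", function () {
    // Hide the status and navigation bars (Android immersive mode).
    AndroidFullScreen.immersiveMode(null, null);
}, false);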

    How do I add XXHDPI and XXXHDPI icons to my Android or Crosswalk application?

The Cordova CLI 4.1.2 build system supports this feature, but our 4.1.2 build system (and the 2170 version of the Intel XDK) does not handle the XX and XXX sizes directly. Use this workaround until those sizes are supported directly:

    • copy your XX and XXX icons into your source directory (usually named www)
    • add the following lines to your intelxdk.config.additions.xml file
    • see this Cordova doc page for some more details

Assuming your icons and splash screen images are stored in the "pkg" directory inside your source directory (your source directory is usually named www), add lines similar to these into your intelxdk.config.additions.xml file (the precise names of your png files may be different than what is shown here):

<!-- for adding xxhdpi and xxxhdpi icons on Android -->
<icon platform="android" src="pkg/xxhdpi.png" density="xxhdpi" />
<icon platform="android" src="pkg/xxxhdpi.png" density="xxxhdpi" />
<splash platform="android" src="pkg/splash-port-xhdpi.png" density="port-xhdpi"/>
<splash platform="android" src="pkg/splash-land-xhdpi.png" density="land-xhdpi"/>

    The precise names of your PNG files are not important, but the "density" designations are very important and, of course, the respective resolutions of your PNG files must be consistent with Android requirements. Those density parameters specify the respective "res-drawable-*dpi" directories that will be created in your APK for use by the Android system. NOTE: splash screen references have been added for reference, you do not need to use this technique for splash screens.

    You can continue to insert the other icons into your app using the Intel XDK Projects tab.

    Which plugin is the best to use with my app?

    We are not able to track all the plugins out there, so we generally cannot give you a "this is better than that" evaluation of plugins. Check the Cordova plugin registry to see which plugins are most popular and check Stack Overflow to see which are best supported; also, check the individual plugin repos to see how well the plugin is supported and how frequently it is updated. Since the Cordova platform and the mobile platforms continue to evolve, those that are well-supported are likely to be those that have good activity in their repo.

    Keep in mind that the XDK builds Cordova apps, so whichever plugins you find being supported and working best with other Cordova (or PhoneGap) apps would likely be your "best" choice.

    See Adding Plugins to Your Intel® XDK Cordova* App for instructions on how to include third-party plugins with your app.

    What are the rules for my App ID?

The precise App ID naming rules vary as a function of the target platform (e.g., Android, iOS, Windows, etc.). Unfortunately, the App ID naming rules are further restricted by the Apache Cordova project and sometimes change with updates to the Cordova project. The Cordova project is the underlying technology that your Intel XDK app is based upon; when you build an Intel XDK app you are building an Apache Cordova app.

    CLI 5.1.1 has more restrictive App ID requirements than previous versions of Apache Cordova (the CLI version refers to Apache Cordova CLI release versions). In this case, the Apache Cordova project decided to set limits on acceptable App IDs to equal the minimum set for all platforms. We hope to eliminate this restriction in a future release of the build system, but for now (as of the 2496 release of the Intel XDK), the current requirements for CLI 5.1.1 are:

    • Each section of the App ID must start with a letter
    • Each section can only consist of letters, numbers, and the underscore character
    • Each section cannot be a Java keyword
    • The App ID must consist of at least 2 sections (each section separated by a period ".").

     

    iOS /usr/bin/codesign error: certificate issue for iOS app?

    If you are getting an iOS build fail message in your detailed build log that includes a reference to a signing identity error you probably have a bad or inconsistent provisioning file. The "no identity found" message in the build log excerpt, below, means that the provisioning profile does not match the distribution certificate that was uploaded with your application during the build phase.

    Signing Identity:     "iPhone Distribution: XXXXXXXXXX LTD (Z2xxxxxx45)"
    Provisioning Profile: "MyProvisioningFile"
                          (b5xxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxe1)
    
        /usr/bin/codesign --force --sign 9AxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxA6 --resource-rules=.../MyApp/platforms/ios/build/device/MyApp.app/ResourceRules.plist --entitlements .../MyApp/platforms/ios/build/MyApp.build/Release-iphoneos/MyApp.build/MyApp.app.xcent .../MyApp/platforms/ios/build/device/MyApp.app
    9AxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxA6: no identity found
    Command /usr/bin/codesign failed with exit code 1
    
    ** BUILD FAILED **
    
    
    The following build commands failed:
        CodeSign build/device/MyApp.app
    (1 failure)
    

    The excerpt shown above will appear near the very end of the detailed build log. The unique number patterns in this example have been replaced with "xxxx" strings for security reasons. Your actual build log will contain hexadecimal strings.

    iOS Code Sign error: bundle ID does not match app ID?

    If you are getting an iOS build fail message in your detailed build log that includes a reference to a "Code Sign error" you may have a bad or inconsistent provisioning file. The "Code Sign" message in the build log excerpt, below, means that the bundle ID you specified in your Apple provisioning profile does not match the app ID you provided to the Intel XDK to upload with your application during the build phase.

    Code Sign error: Provisioning profile does not match bundle identifier: The provisioning profile specified in your build settings (MyBuildSettings) has an AppID of my.app.id which does not match your bundle identifier my.bundleidentifier.
    CodeSign error: code signing is required for product type 'Application' in SDK 'iOS 8.0'
    
    ** BUILD FAILED **
    
    The following build commands failed:
        Check dependencies
    (1 failure)
    Error code 65 for command: xcodebuild with args: -xcconfig,...
    

The message above translates into: "the bundle ID you entered in the project settings of the XDK does not match the bundle ID (app ID) that you created on Apple's developer portal and then used to create a provisioning profile."

    iOS build error?

If your iOS build fails with Error code 65 and xcodebuild in the error log, most likely there are issues with your certificate and provisioning profile. Sometimes Xcode gives specific errors such as "Provisioning profile does not match bundle identifier" and other times something like "Code Sign error: No codesigning identities found: No code signing identities". The root of the issue is not providing the correct certificate (P12 file) and/or provisioning profile, or a mismatch between the P12 and the provisioning profile. You have to make sure your P12 and provisioning profile are correct: the provisioning profile has to be generated using the certificate you used to create the P12 file. Also, the app ID you provide in the XDK build settings has to match the app ID created on the Apple Developer portal, and that same app ID has to be used when creating the provisioning profile.

    Please follow these steps to generate the P12 file.

    1. Create a .csr file from Intel XDK (do not close the dialog box to upload .cer file)
    2. Click on the link Apple Developer Portal from the dialog box (do not close the dialog box in XDK)
    3. Upload .csr on Apple Developer Portal
    4. Generate certificate on Apple developer portal
    5. Download .cer file from the Developer portal
    6. Come back to XDK dialog box where you left off from step 1, press Next. Select .cer file that you got from step 5 and generate .P12 file
    7. Create an appID on Apple Developer Portal
    8. Generate a Provisioning Profile on Apple Developer Portal using the certificate you generated in step 4 and appID created in step 7
    9. Provide the same appID (step 7), P12 (step 6) and Provisioning profile (step 8) in Intel XDK Build Settings 

    Few things to check before you build:  

1. Make sure your certificate has not expired
2. The appID you created on the Apple Developer portal matches the appID you provided in the XDK build settings
3. You are using a provisioning profile that is associated with the certificate you are using to build the app
4. Apple allows only three active certificates; if you need to create a new one, revoke one of the older certificates and create a new one.

This App Certificate Management video shows how to create a P12 and a provisioning profile; the P12 creation part starts at 16:45. Please follow the process for creating a P12 and generating a provisioning profile as shown in the video, or follow this Certificate Management document.

    What are plugin variables used for? Why do I need to supply plugin variables?

Some plugins require details that are specific to your app or your developer account; for example, to authorize your app as one that belongs to you, the developer, so that services can be properly routed to the service provider. The precise reasons depend on the specific plugin and its function.

    What happened to the Intel XDK "legacy" build options?

    On December 14, 2015 the Intel XDK legacy build options were retired and are no longer available to build apps. The legacy build option is based on three year old technology that predates the current Cordova project. All Intel XDK development efforts for the past two years have been directed at building standard Apache Cordova apps.

    Many of the intel.xdk legacy APIs that were supported by the legacy build options have been migrated to standard Apache Cordova plugins and published as open source plugins. The API details for these plugins are available in the README.md files in the respective 01.org GitHub repos. Additional details regarding the new Cordova implementations of the intel.xdk legacy APIs are available in the doc page titled Intel XDK Legacy APIs.

    Standard Cordova builds do not require the use of the "intelxdk.js" and "xhr.js" phantom scripts. Only the "cordova.js" phantom script is required to successfully build Cordova apps. If you have been including "intelxdk.js" and "xhr.js" in your Cordova builds they have been quietly ignored. You should remove references to these files from your "index.html" file; leaving them in will do no harm, it simply results in a warning that the respective script file cannot be found at runtime.
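
For reference, a sketch of the script references in index.html after the cleanup described above; only the cordova.js phantom script remains:

<!-- keep: the only phantom script a Cordova build needs -->
<script src="cordova.js"></script>
<!-- remove: legacy phantom scripts, quietly ignored by Cordova builds -->
<!-- <script src="intelxdk.js"></script> -->
<!-- <script src="xhr.js"></script> -->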

    The Emulate tab will continue to support some legacy intel.xdk APIs that are NOT supported in the Cordova builds (only those intel.xdk APIs that are supported by the open source plugins are available to a Cordova built app, and only if you have included the respective intel.xdk plugins). This Emulate tab discrepancy will be addressed in a future release of the Intel XDK.

    More information can be found in this forum post > https://software.intel.com/en-us/forums/intel-xdk/topic/601436.

    Which build files do I submit to the Windows Store and which do I use for testing my app on a device?

    There are two things you can do with the build files generated by the Intel XDK Windows build options: side-load your app onto a real device (for testing) or publish your app in the Windows Store (for distribution). Microsoft has changed the files you use for these purposes with each release of a new platform. As of December, 2015, the packages you might see in a build, and their uses, are:

    • appx works best for side-loading, and can also be used to publish your app.
• appxupload is preferred for publishing your app; it will not work for side-loading.
    • appxbundle will work for both publishing and side-loading, but is not preferred.
    • xap is for legacy Windows Phone; works for both publishing and side-loading.

    In essence: XAP (WP7) was superseded by APPXBUNDLE (Win8 and WP8.0), which was superseded by APPX (Win8/WP8.1/UAP), which has been supplemented with APPXUPLOAD. APPX and APPXUPLOAD are the preferred formats. For more information regarding these file formats, see Upload app packages on the Microsoft developer site.

Side-loading a Windows Phone app onto a real device, over USB, requires a Windows 8+ development system (see Side-Loading Windows* Phone Apps for complete instructions). If you do not have a physical Windows development machine you can use a virtual Windows machine or use the Windows Store Beta testing and targeted distribution technique to get your app onto real test devices.

    Side-loading a Windows tablet app onto a Windows 8 or Windows 10 laptop or tablet is simpler. Extract the contents of the ZIP file that you downloaded from the Intel XDK build system, open the "*_Test" folder inside the extracted folder, and run the PowerShell script (ps1 file) contained within that folder on the test machine (the machine that will run your app). The ps1 script file may need to request a "developer certificate" from Microsoft before it will install your test app onto your Windows test system, so your test machine may require a network connection to successfully side-load your Windows app.

The side-loading process may not overwrite an existing side-loaded app with the same ID. To be sure your test app side-loads properly, it is best to uninstall the old version of your app before side-loading a new version onto your test system.

    How do I implement local storage or SQL in my app?

    See this summary of local storage options for Cordova apps written by Josh Morony, A Summary of Local Storage Options for PhoneGap Applications.

    How do I prevent my app from auto-completing passwords?

    Use the Ionic Keyboard plugin and set the spellcheck attribute to false.
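
On the markup side, a minimal sketch (spellcheck is standard HTML; autocomplete, and the Safari/iOS-specific autocorrect attribute, are extra hardening you may also want):

<input type="password" spellcheck="false" autocomplete="off" autocorrect="off">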

    Why does my PHP script not run in my Intel XDK Cordova app?

Your XDK app is not a page on a web server; you cannot use dynamic web server techniques, because there is no web server associated with your app to which you can pass off PHP scripts and similar actions. When you build an Intel XDK app you are building a standalone Cordova client web app, not a dynamic server web app. You need to create a RESTful API on your server that you can call from your client (the Intel XDK Cordova app), passing and returning data between the client and server through that RESTful API (usually in the form of a JSON payload).

    Please see this StackOverflow post and this article by Ray Camden, a longtime developer of the Cordova development environment and Cordova apps, for some useful background.
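
For example, a minimal sketch of the client side of such a RESTful API (the URL and the JSON shape are hypothetical placeholders for your own server's API):

var xhr = new XMLHttpRequest();
xhr.open("GET", "https://example.com/api/items");     // your server's REST endpoint
xhr.onload = function () {
    if (xhr.status === 200) {
        var items = JSON.parse(xhr.responseText);     // JSON payload from the server
        console.log(items);
    }
};
xhr.send();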

    Following is a lightly edited recommendation from an Intel XDK user:

    I came from php+mysql web development. My first attempt at an Intel XDK Cordova app was to create a set of php files to query the database and give me the JSON. It was a simple job, but totally insecure.

    Then I found dreamfactory.com, an open source software that automatically creates the REST API functions from several databases, SQL and NoSQL. I use it a lot. You can start with a free account to develop and test and then install it in your server. Another possibility is phprestsql.sourceforge.net, this is a library that does what I tried to develop by myself. I did not try it, but perhaps it will help you.

And finally, I'm using PouchDB and CouchDB, "a database for the web." It is not SQL, but is very useful and easy if you need to develop a mobile app with only a few tables. It will also work with a lot of tables, but for a simple database it is an easy place to start.

I strongly recommend that you start to learn these new ways to interact with databases; you will need to invest some time, but it is the way to go. Do not try to use MySQL and PHP the old-fashioned way; you can get it to work, but at some point you may get stuck.

    Why doesn’t my Cocos2D game work on iOS?

    This is an issue with Cocos2D and is not a reflection of our build system. As an interim solution, we have modified the CCBoot.js file for compatibility with iOS and App Preview. You can view an example of this modification in this CCBoot.js file from the Cocos2d-js 3.1 Scene GUI sample. The update has been applied to all cocos2D templates and samples that ship with Intel XDK. 

The fix involves two line changes (for the generic Cocos2D fix) and one additional line (for it to work in App Preview on iOS devices):

    Generic cocos2D fix -

    1. Inside the loadTxt function, xhr.onload should be defined as

    xhr.onload = function () {
        if(xhr.readyState == 4)
            xhr.responseText != "" ? cb(null, xhr.responseText) : cb(errInfo);
        };

    instead of

    xhr.onload = function () {
        if(xhr.readyState == 4)
            xhr.status == 200 ? cb(null, xhr.responseText) : cb(errInfo);
        };

    2. The condition inside _loadTxtSync function should be changed to 

    if (!xhr.readyState == 4 || (xhr.status != 200 || xhr.responseText != "")) {

    instead of 

    if (!xhr.readyState == 4 || xhr.status != 200) {

     

    App Preview fix -

Add this line inside the _loadTxtSync function, after the xhr.open call:

    xhr.setRequestHeader("iap_isSyncXHR", "true");

    How do I change the alias of my Intel XDK Android keystore certificate?

    You cannot change the alias name of your Android keystore within the Intel XDK, but you can download the existing keystore, change the alias on that keystore and upload a new copy of the same keystore with a new alias.

    Use the following procedure:

    • Download the converted legacy keystore from the Intel XDK (the one with the bad alias).

    • Locate the keytool app on your system (this assumes that you have a Java runtime installed on your system). On Windows, this is likely to be located at %ProgramFiles%\Java\jre8\bin (you might have to adjust the value of jre8 in the path to match the version of Java installed on your system). On Mac and Linux systems it is probably located in your path (in /usr/bin).

    • Change the alias of the keystore using this command (see the keytool -changealias -help command for additional details):

    keytool -changealias -alias "existing-alias" -destalias "new-alias" -keypass keypass -keystore /path/to/keystore -storepass storepass
    • Import this new keystore into the Intel XDK using the "Import Existing Keystore" option in the "Developer Certificates" section of the "person icon" located in the upper right corner of the Intel XDK.

    What causes "The connection to the server was unsuccessful. (file:///android_asset/www/index.html)" error?

    See this forum thread for some help with this issue. This error is most likely due to errors retrieving assets over the network or long delays associated with retrieving those assets.

    How do I manually sign my Android or Crosswalk APK file with the Intel XDK?

    To sign an app manually, you must build your app by "deselecting" the "Signed" box in the Build Settings section of the Android tab on the Projects tab:

    Follow these Android developer instructions to manually sign your app. The instructions assume you have Java installed on your system (for the jarsigner and keytool utilities). You may have to locate and install the zipalign tool separately (it is not part of Java) or download and install Android Studio.

These two sections of the Android developer Signing Your Applications article are also worth reading.

    Why should I avoid using the additions.xml file? Why should I use the Plugin Management Tool in the Intel XDK?

    Intel XDK (2496 and up) now includes a Plugin Management Tool that simplifies adding and managing Cordova plugins. We urge all users to manage their plugins from existing or upgraded projects using this tool. If you were using intelxdk.config.additions.xml file to manage plugins in the past, you should remove them and use the Plugin Management Tool to add all plugins instead.

    Why you should be using the Plugin Management Tool:

• It can now manage plugins from all sources. Popular plugins have been added to the Featured plugins list. Third-party plugins can be added from the Cordova Plugin Registry, a Git repo, or your file system.

    • Consistency: Unlike previous versions of the Intel XDK, plugins you add are now stored as a part of your project on your development system after they are retrieved by the Intel XDK and copied to your plugins directory. These plugin files are delivered, along with your source code files, to the Intel XDK cloud-based build server. This change ensures greater consistency between builds, because you always build with the plugin version that was retrieved by the Intel XDK into your project. It also provides better documentation of the components that make up your Cordova app, because the plugins are now part of your project directory. This is also more consistent with the way a standard Cordova CLI project works.

• Convenience: In the past, the only way to add a third-party plugin that required parameters was to include it in the intelxdk.config.additions.xml file; the plugin would then be added to your project by the build system. This is no longer recommended. The new Plugin Management Tool automatically parses the plugin.xml file and prompts you to add any plugin variables from within the XDK.

      When a plugin is added via the Plugin Management Tool, a plugin entry is added to the project file and the plugin source is downloaded to the plugins directory making a more stable project. After a build, the build system automatically generates config xml files in your project directory that includes a complete summary of plugins and variable values.

• Correctness of Debug Module: The Intel XDK now provides remote on-device debugging for projects with third-party plugins by building a custom debug module from your project's plugins directory. It does not write to or read from the intelxdk.config.additions.xml file; the only time this file is used is during a build. This means the debug module is not aware of plugins added via the intelxdk.config.additions.xml file, so adding plugins via that file should be avoided. Here is a useful article for understanding Intel XDK Build Files.

    • Editing Plugin Sources: There are a few cases where you may want to modify plugin code to fix a bug in a plugin, or add console.log messages to a plugin's sources to help debug your application's interaction with the plugin. To accomplish these goals you can edit the plugin sources in the plugins directory. Your modifications will be uploaded along with your app sources when you build your app using the Intel XDK build server and when a custom debug module is created by the Debug tab.

    How do I fix this "unknown error: cannot find plugin.xml" when I try to remove or change a plugin?

Removing or changing a plugin in your project sometimes generates this error. It is not a common problem, but if it does happen it means a file in your plugin directory is probably corrupt (usually one of the json files found inside the plugins folder at the root of your project folder).

    The simplest fix is to:

• make a list of ALL of your plugins (especially the plugin ID and version number)
    • exit the Intel XDK
    • delete the entire plugins directory inside your project
    • restart the Intel XDK

The XDK should detect that all of your plugins are missing and attempt to reinstall them. If it does not automatically re-install all or some of your plugins, reinstall them manually from the list you saved in step one.

NOTE: if you re-install your plugins manually, you can use the third-party plugin add feature of the plugin management system and specify the plugin ID to get your plugins from the Cordova plugin registry. If you leave the version number blank, the latest version of the plugin available in the registry will be retrieved by the Intel XDK.

    Why do I get a "build failed: the plugin contains gradle scripts" error message?

    You will see this error message in your Android build log summary whenever you include a Cordova plugin that includes a gradle script in your project. Gradle scripts add extra Android build instructions that are needed by the plugin.

The current Intel XDK build system does not allow the use of plugins that contain gradle scripts because they present a security risk to the build system and your Intel XDK account. An unscrupulous user could use a gradle-enabled plugin to do harmful things with the build server. We are working on a build system that will ensure the necessary level of security to allow gradle scripts in plugins, but until that time we cannot support plugins that include gradle scripts.


    In some cases the plugin gradle script can be removed, but only if you manually modify the plugin to implement whatever the gradle script was doing automatically. In some cases this can be done easily (for example, the gradle script may be building a JAR library file for the plugin), but sometimes the plugin is not easily modified to remove the need for the gradle script. Exactly what needs to be done to the plugin depends on the plugin and the gradle script.

    You can find out more about Cordova plugins and gradle scripts by reading this section of the Cordova documentation. In essence, if a Cordova plugin includes a build-extras.gradle file in the plugin's root folder, or if it contains one or more lines similar to the following, inside the plugin.xml file:

    <framework src="some.gradle" custom="true" type="gradleReference" />

    it means that the plugin contains gradle scripts and will be rejected by the Intel XDK build system.

    How does one remove gradle dependencies for plugins that use Google Play Services (esp. push plugins)?

    Our Android (and Crosswalk) CLI 5.1.1 and CLI 5.4.1 build systems include a fix for an issue in the standard Cordova build system that allows some Cordova plugins to be used with the Intel XDK build system without their included gradle script!

    This fix only works with those Cordova plugins that include a gradle script for one and only one purpose: to set the value of applicationID in the Android build project files (such a gradle script copies the value of the App ID from your project's Build Settings, on the Projects tab, to this special project build variable).

Using the phonegap-plugin-push as an example, this Cordova plugin contains a gradle script named push.gradle, which has been added to the plugin and looks like this:

    import java.util.regex.Pattern
    
    def doExtractStringFromManifest(name) {
        def manifestFile = file(android.sourceSets.main.manifest.srcFile)
        def pattern = Pattern.compile(name + "=\"(.*?)\"")
        def matcher = pattern.matcher(manifestFile.getText())
        matcher.find()
        return matcher.group(1)
    }
    
    android {
        sourceSets {
            main {
                manifest.srcFile 'AndroidManifest.xml'
            }
        }
    
        defaultConfig {
            applicationId = doExtractStringFromManifest("package")
        }
    }

All this gradle script does is insert your app's "package ID" (the "App ID" in your app's Build Settings) into a variable called applicationID for use by the build system. It is needed, in this example, by the Google Play Services library to ensure that calls through the Google Play Services API can be matched to your app. Without the proper App ID the Google Play Services library cannot distinguish between multiple apps on an end user's device that are using the Google Play Services library.

    The phonegap-plugin-push is being used as an example for this article. Other Cordova plugins exist that can also be used by applying the same technique (e.g., the pushwoosh-phonegap-plugin will also work using this technique). It is important that you first determine that only one gradle script is being used by the plugin of interest and that this one gradle script is used for only one purpose: to set the applicationID variable.

    How does this help you and what do you do?

    To use a plugin with the Intel XDK build system that includes a single gradle script designed to set the applicationID variable:

    • Download a ZIP of the plugin version you want to use from that plugin's git repo.

IMPORTANT: be sure to download a released version of the plugin; the "head" of the git repo may be "under construction". Some plugin authors make it easy to identify a specific version, some do not; be aware and choose carefully when you clone a git repo!

    • Unzip that plugin onto your local hard drive.

    • Remove the <framework> line that references the gradle script from the plugin.xml file.

    • Add the modified plugin to your project as a "local" plugin.
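
    For reference, a minimal sketch of the line to remove, based on the push.gradle example above (the src value and the surrounding platform section vary from plugin to plugin):

    <platform name="android">
        ...
        <!-- delete this line; it references the plugin's gradle script -->
        <framework src="push.gradle" custom="true" type="gradleReference" />
        ...
    </platform>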

    In this example, you will be prompted to define a variable that the plugin also needs. If you know that variable's name (it is called SENDER_ID for this plugin), you can add it in advance using the "+" icon in the plugin management UI and avoid the prompt. If the plugin was added successfully, it will be listed with your other plugins on the Projects tab.
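
    The prompt is triggered by an install-time variable declared in the plugin's plugin.xml. As a sketch (assuming the plugin declares the variable with a standard Cordova <preference> element, which may differ between plugin versions), the declaration looks like this:

    <!-- a plugin declares an install-time variable; the Intel XDK
         prompts for its value when the plugin is added -->
    <preference name="SENDER_ID" />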

    If you are curious, you can inspect the AndroidManifest.xml file that is included inside your built APK file (you will have to use a tool like apktool to extract and reconstruct it from your APK file). You should see your App ID embedded in the manifest; in this example, the App ID was io.cordova.hellocordova.
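
    As a sketch of what to look for (this assumes the GCM permission entry added by the push plugin; other plugins embed the App ID in other manifest entries), the reconstructed manifest should contain a line similar to:

    <!-- the App ID is embedded in the GCM permission name;
         it should match the App ID from your Build Settings -->
    <permission
        android:name="io.cordova.hellocordova.permission.C2D_MESSAGE"
        android:protectionLevel="signature" />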

    If the manifest instead shows a default App ID rather than your own, something went wrong. A default App ID causes collisions on end-user devices when multiple apps that use Google Play Services ship with the same default value.

    There is no Entitlements.plist file, how do I add Universal Links to my iOS app?

    The Intel XDK project does not provide access to an Entitlements.plist file. If you were using Cordova CLI locally, you would be able to add such a file to the CLI platform build directories located in the CLI project folder. Because the Intel XDK build system is cloud-based, your Intel XDK project folders do not include these build directories.

    A workaround has been identified by an Intel XDK customer (Keith T.) and is detailed in this forum post.

    Why do I get a "signed with different certificate" error when I update my Android app in the Google Play Store?

    If you submitted an app to the Google Play Store using a version of the Intel XDK prior to version 3088 (prior to March of 2016), you need to use your "converted legacy" certificate when you build your app in order for the Google Play Store to accept an update to your app. The error message you receive will indicate that your update was signed with a different certificate than the one on record.

    When using version 3088 (or later) of the Intel XDK, you are given the option to convert the existing Android certificate that was automatically created for your Android builds by an older version of the Intel XDK into a certificate for use with the new version of the Intel XDK. This conversion is a one-time event; after you have successfully converted your "legacy Android certificate" you will never have to do this again.


    How do I add [image, audio, etc.] resources to the platform section of my Cordova project with the Intel XDK?

    See this forum thread for a specific example, which is summarized below.

    If you are using a Cordova plugin that suggests that you "add a file to the resource directory" or "make a modification to the manifest file or plist file," you may need to add a small custom plugin to your application. This is because the Cordova project that builds and packages your app is located in the Intel XDK cloud-based build system. Your development system contains only a partial, prototype Cordova project; the real Cordova project is created on demand, when you build your application with the Intel XDK build system. Parts of your local prototype Cordova project are sent to the cloud to build your application: your source files (normally located in the "www" folder), your plugins folder, your build configuration files, your provisioning files, and your icon and splash screen files (located in the package-assets folder). Any other folders and files located in your project folder are used strictly for local simulation and testing tasks and are not used by the cloud-based build system.

    To modify a manifest or plist file, see this FAQ. To create a local plugin that you can use to add resources to your Cordova cloud-based project, see the following instructions and the forum post mentioned at the beginning of this FAQ.

    • Create a folder to hold your special plugin, either in the root of your project or outside of your project. Do NOT create this folder inside the plugins folder of your project; that is a destination folder, not a source folder.

    • Create a plugin.xml file in the root of your special plugin. This file contains the instructions the build system needs to add your resources.

    • Add the resources to the appropriate location in your new plugin folder.

    The Cordova plugin.xml documentation may also be helpful; a minimal sketch of such a plugin follows.
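
    As a minimal sketch of such a resource-only plugin (the plugin id, file name, and target path below are hypothetical examples; replace them with the resource your plugin of interest requires), the plugin.xml might look like this, using the standard Cordova <resource-file> element:

    <?xml version="1.0" encoding="UTF-8"?>
    <!-- minimal sketch of a local, resource-only plugin;
         the id, file name, and target path are hypothetical -->
    <plugin xmlns="http://apache.org/cordova/ns/plugins/1.0"
            id="my-resource-plugin" version="1.0.0">
        <name>My Resource Plugin</name>
        <platform name="android">
            <!-- copies beep.wav from the plugin folder into the res/raw
                 directory of the cloud-built Cordova project -->
            <resource-file src="src/android/beep.wav" target="res/raw/beep.wav" />
        </platform>
    </plugin>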


    Intel® Math Kernel Library (Intel® MKL) 2017 System Requirements


    Please see the Intel® Math Kernel Library (Intel® MKL) pages available online for the latest information; the requirements below apply to the Intel MKL 2017 release.

    Operating System Requirements

    The Intel MKL 2017 release supports the IA-32 and Intel® 64 architectures. For a complete explanation of these architecture names please read the following article:

    Intel Architecture Platform Terminology for Development Tools

    The lists below pertain only to the system requirements necessary to support developing applications with Intel MKL. Please review the hardware and software system requirements for your compiler (gcc*, Microsoft* Visual Studio*, or Intel® Compiler Pro), in the documentation provided with that product, to determine the minimum development system requirements for your compiler.

    Supported operating systems

    • Windows 10* (IA-32 / Intel® 64)
    • Windows 8* (IA-32 / Intel® 64)
    • Windows 8.1* (IA-32 / Intel® 64)
    • Windows 7* SP1 (IA-32 / Intel® 64)
    • Windows HPC Server 2016 (Intel® 64)
    • Windows HPC Server 2012 (Intel® 64)
    • Windows HPC Server 2008 R2 (Intel® 64)
    • Windows Embedded 10 (IA-32 / Intel® 64)
    • Windows Embedded 8.x (IA-32 / Intel® 64)
    • Windows Embedded 7 (IA-32 / Intel® 64)
    • Red Hat* Enterprise Linux* 6 (IA-32 / Intel® 64)
    • Red Hat* Enterprise Linux* 7 (IA-32 / Intel® 64)
    • Red Hat* Enterprise Linux* 7.5 (IA-32 / Intel® 64)
    • Red Hat Fedora* core 25 (IA-32 / Intel® 64)
    • Red Hat Fedora* core 24 (IA-32 / Intel® 64)
    • SUSE Linux Enterprise Server* 11 SP2
    • SUSE Linux Enterprise Server* 12
    • OpenSuse 13.2 
    • CentOS 7.1
    • Debian* 7 (IA-32 / Intel® 64)
    • Debian* 8 (IA-32 / Intel® 64)
    • Ubuntu* 14.04 LTS (IA-32/Intel® 64)
    • Ubuntu* 15.04 (IA-32/Intel® 64)
    • Ubuntu* 15.10 (IA-32/Intel® 64)
    • Ubuntu* 16.04 LTS (IA-32/Intel® 64)
    • WindRiver Linux 6
    • WindRiver Linux 7
    • WindRiver Linux 8
    • Tizen 3.6
    • Yocto 1.7
    • Yocto 1.8
    • Yocto 2.0
    • OS X* 10.11 (Xcode 6.x) and OS X* 10.12 (Xcode 6.x) (Intel® 64)

    Note: Intel® MKL is expected to work on many more Linux distributions as well. Let us know if you have trouble with the distribution you use.

    Supported C/C++ and Fortran compilers for Windows*:

    • Intel® Fortran Composer XE 2017 for Windows* OS
    • Intel® Fortran Composer XE 2016 for Windows* OS
    • Intel® Fortran Composer XE 2015 for Windows* OS
    • Intel® Visual Fortran Compiler 15.0 for Windows* OS
    • Intel® Visual Fortran Compiler 16.0 for Windows* OS
    • Intel® Visual Fortran Compiler 17.0 for Windows* OS
    • Intel® C++ Composer XE 2017 for Windows* OS
    • Intel® C++ Composer XE 2016 for Windows* OS
    • Intel® C++ Composer XE 2015 for Windows* OS
    • Intel® C++ Compiler 15.0 for Windows* OS
    • Intel® C++ Compiler 16.0 for Windows* OS
    • Intel® C++ Compiler 17.0 for Windows* OS
    • Microsoft Visual Studio* 2015 - help file and environment integration
    • Microsoft Visual Studio* 2013 - help file and environment integration
    • Microsoft Visual Studio* 2012 - help file and environment integration

    Supported C/C++ and Fortran compilers for Linux*:

    • Intel® Fortran Composer XE 2017 for Linux* OS
    • Intel® Fortran Composer XE 2016 for Linux* OS
    • Intel® Fortran Composer XE 2015 for Linux* OS
    • Intel® Fortran Compiler 15.0 for Linux* OS
    • Intel® Fortran Compiler 16.0 for Linux* OS
    • Intel® Fortran Compiler 17.0 for Linux* OS
    • Intel® C++ Composer XE 2017 for Linux* OS
    • Intel® C++ Composer XE 2016 for Linux* OS
    • Intel® C++ Composer XE 2015 for Linux* OS
    • Intel® C++ Compiler 15.0 for Linux* OS
    • Intel® C++ Compiler 16.0 for Linux* OS
    • Intel® C++ Compiler 17.0 for Linux* OS
    • GNU Compiler Collection 4.9 and later
    • PGI* Compiler version 2015
    • PGI* Compiler version 2016

    Note: Using the latest version of the Intel® Manycore Platform Software Stack (Intel® MPSS) is recommended on Intel® MIC Architecture. It is available from the Intel® Software Development Products Registration Center at http://registrationcenter.intel.com as part of your Intel® Parallel Studio XE for Linux* registration.

    Supported C/C++ and Fortran compilers for OS X*:

    • Intel® Fortran Compiler 15.0 for OS X*
    • Intel® Fortran Compiler 16.0 for OS X*
    • Intel® Fortran Compiler 17.0 for OS X*
    • Intel® C++ Compiler 15.0 for OS X*
    • Intel® C++ Compiler 16.0 for OS X*
    • Intel® C++ Compiler 17.0 for OS X*
    • Mac OS* Clang/LLVM compiler

    MPI implementations that Intel® MKL for Windows* OS has been validated against:

    • Intel® MPI Library Version 5.1 (Intel® 64) (http://www.intel.com/go/mpi)
    • Intel® MPI Library Version 2017 (Intel® 64) (http://www.intel.com/go/mpi)
    • MPICH2 version 1.5 (http://www-unix.mcs.anl.gov/mpi/mpich)
    • MS MPI, CCE or HPC 2012 on Intel® 64 (http://www.microsoft.com/downloads)
    • OpenMPI 1.8.x (Intel® 64) (http://www.open-mpi.org)

    MPI implementations that Intel® MKL for Linux* OS has been validated against:

    • Intel® MPI Library Version 5.1 (Intel® 64) (http://www.intel.com/go/mpi)
    • Intel® MPI Library Version 2017 (Intel® 64) (http://www.intel.com/go/mpi)
    • MPICH2 version 1.5 (Intel® 64) (http://www-unix.mcs.anl.gov/mpi/mpich)
    • MPICH version 3.1  (http://www-unix.mcs.anl.gov/mpi/mpich)
    • MPICH version 3.2  (http://www-unix.mcs.anl.gov/mpi/mpich)
    • Open MPI 1.8.x (Intel® 64) (http://www.open-mpi.org)

    Note: Instructions for using MPI and for linking can be found in the User's Guide in the doc directory of Intel MKL.

    Other tools supported for use with example source code:

    • uBLAS examples: Boost C++ library, version 1.x.x
    • JAVA examples: J2SE* SDK 1.4.2, JDK 5.0 and 6.0 from Sun Microsystems, Inc.

    Note: Parts of Intel® MKL have Fortran interfaces and data structures, while other parts have C interfaces and C data structures. The User Guide in the doc directory contains advice on how to link to Intel® MKL with different compilers and from different programming languages.

    Deprecation Notices:

    • Support for all IA-32 MPI implementations has been dropped
    • Visual Studio* 2008 is not supported
      • Support for Visual Studio 2008 has been removed
    • Windows XP* is not supported
      • Support for Windows XP has been removed
    • Windows Server 2003* and Windows Vista* are not supported
      • Support has been removed for installation and use on Windows Server 2003 and Windows Vista. Intel recommends migrating to a newer version of these operating systems
    • Visual Studio* 2012 support is deprecated

