Intel Developer Zone Articles


Using Intel® Omni-Path Architecture


This document shows how to install the Intel® Omni-Path Host Fabric Interface (Intel® OP HFI) card and the Intel® Omni-Path Architecture (Intel® OPA) software on the host. It also describes how to verify the link and port status, perform the necessary steps to bring the stack up, and run a test on the fabric.

Introduction to Intel® Omni-Path Architecture

Intel OPA is the latest generation of Intel's high-performance fabric technology. It adds new capabilities to enhance high-performance computing performance, scalability, and quality of service. The Intel OPA components include the Intel OP HFI, which provides fabric connectivity; switches that connect a scalable number of endpoints; copper and optical cables; and a Fabric Manager (FM) that identifies all nodes, provides centralized provisioning, and monitors fabric resources. Intel OP HFI is the Intel OPA interface card that provides host-to-switch connectivity. An Intel OP HFI can also connect directly to another HFI (back-to-back connectivity).

This paper focuses on how to install the Intel OP HFI, configure the IP over Fabric, and test the fabric using a prebuilt program. Two systems, each equipped with a preproduction Intel® Xeon® E5 processor, were used in this example. Both systems were running Red Hat Enterprise Linux* 7.2 and were equipped with Gigabit Ethernet adapters connected through a Gigabit Ethernet router.

Intel® Omni-Path Host Fabric Interface

Intel OP HFI is a standard PCIe* card that interfaces with a router or another HFI. There are two current models of Intel OP HFI: PCIe x16, which supports 100 Gbps, and PCIe x8, which supports 56 Gbps. Designed for low latency and high bandwidth, the adapter can be configured with up to 8 data Virtual Lanes plus one management Virtual Lane, and the MTU size is configurable as 2, 4, 6, 8, or 10 KB.
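
Once the host software described later in this document is installed, one quick way to confirm the active Virtual Lane and MTU configuration of a port is to filter the opaportinfo output (a hedged check; the full output format is shown later in this article):

# opaportinfo | grep -E 'VLs Active|MTU  Supported'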

Below is the picture of an Intel OP HFI PCIe x16 that was used in this test:

 PCIe x16

Hardware Installation

Two Intel® Xeon® processor-based servers running Red Hat Enterprise Linux 7.2 were used. They have Gigabit Ethernet adapters and are connected through a router. Their IP addresses were 10.23.3.27 and 10.23.3.148. In this example, we will install an Intel OP HFI PCIe x16 card on each server and use an Intel OPA cable to connect them in a back-to-back configuration.

First, power down the systems and then install an Intel OP HFI in an x16 PCIe slot in each system. Connect the two Intel OP HFIs with the Intel OPA cable and power up the systems. Verify that the solid-green LED of the Intel OP HFI is on; this indicates the Intel OP HFI link status is activated.

Intel OP HFI Link

Next, verify that the OS detects the Intel OP HFI by using the lspci command:

# lspci -vv | grep Omni
18:00.0 Fabric controller: Intel Corporation Omni-Path HFI Silicon 100 Series [discrete] (rev 11)
        Subsystem: Intel Corporation Omni-Path HFI Silicon 100 Series [discrete]

The first field of the output (18:00.0) shows the PCI slot number, the second field shows the device class ("Fabric controller"), and the last field shows the device name, which identifies the Intel OP HFI.

To verify the Intel OP HFI speed, type "lspci -vv" to display more details and search for the previous slot (18:00.0):

# lspci -vv

...................................................

18:00.0 Fabric controller: Intel Corporation Omni-Path HFI Silicon 100 Series [discrete] (rev 11)
        Subsystem: Intel Corporation Omni-Path HFI Silicon 100 Series [discrete]
        Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr+ Stepping- SERR+ FastB2B- DisINTx+
        Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
        Latency: 0
        Interrupt: pin A routed to IRQ 38
        NUMA node: 0
        Region 0: Memory at a0000000 (64-bit, non-prefetchable) [size=64M]
        Expansion ROM at <ignored> [disabled]
        Capabilities: [40] Power Management version 3
                Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot-,D3cold-)
                Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
        Capabilities: [70] Express (v2) Endpoint, MSI 00
                DevCap: MaxPayload 256 bytes, PhantFunc 0, Latency L0s unlimited, L1 <8us
                        ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset+ SlotPowerLimit 0.000W
                DevCtl: Report errors: Correctable+ Non-Fatal+ Fatal+ Unsupported+
                        RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+ FLReset-
                        MaxPayload 256 bytes, MaxReadReq 512 bytes
                DevSta: CorrErr- UncorrErr- FatalErr- UnsuppReq- AuxPwr- TransPend-
                LnkCap: Port #0, Speed 8GT/s, Width x16, ASPM L1, Exit Latency L0s <4us, L1 <64us
                        ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
                LnkCtl: ASPM Disabled; RCB 64 bytes Disabled- CommClk+
                        ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
                LnkSta: Speed 8GT/s, Width x16, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
                DevCap2: Completion Timeout: Range ABCD, TimeoutDis+, LTR-, OBFF Not Supported
                DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis-, LTR-, OBFF Disabled
                LnkCtl2: Target Link Speed: 8GT/s, EnterCompliance- SpeedDis-
                         Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
                         Compliance De-emphasis: -6dB
                LnkSta2: Current De-emphasis Level: -3.5dB, EqualizationComplete+, EqualizationPhase1+
                         EqualizationPhase2+, EqualizationPhase3+, LinkEqualizationRequest-

...................................................

LnkCap and LnkSta indicate a speed of 8 GT/s at width x16, which is PCIe Gen3. A Gen3 x16 link provides roughly 8 GT/s × 16 lanes × 128/130 ≈ 126 Gbps of raw bandwidth, which comfortably carries the 100 Gbps Intel OPA link rate; this confirms that the optimal speed for this Intel OP HFI is used.
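
You can also limit the lspci output to just this device by passing its slot address (the slot number 18:00.0 comes from the output above):

# lspci -vv -s 18:00.0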

Host Software Installation

The Intel® Omni-Path Fabric host software is available in two versions:

  • Basic bundle: This software is usually installed on the compute nodes.
  • Intel® Omni-Path Fabric Suite (IFS): A superset of the Basic bundle, usually installed on a head/management node. It additionally contains the following packages:
    • FastFabric: A collection of utilities for installation, testing, and monitoring the fabric
    • Intel® Omni-Path Fabric Manager
    • FMGUI: Intel's Fabric Manager GUI

Each fabric requires at least one FM; however, if multiple FMs coexist, only one FM is the master FM and the others are standby FMs. The FM identifies all nodes, switches, and routers; it assigns local IDs, maintains routing tables for all nodes, and scans for changes and reprograms the fabric automatically. It is recommended to reserve 500 MB of host memory on compute nodes and 1 GB per FM instance. The following figure shows all components of the Fabric Host Software Stack (source: "Intel® Omni-Path Fabric Host Software User Guide, Rev. 5.0").

Host Software Stack Components

Both versions can be downloaded from https://downloadcenter.intel.com/search?keyword=Omni-Path. You need to install the required OS RPMs before installing the host software; refer to Section 1.1.1.1, "OS RPMs Installation Prerequisites," in the document "Intel® Omni-Path Fabric Software Installation Guide (Rev. 5.0)".

In this example, we download the IFS package IntelOPA-IFS.RHEL72-x86_64.10.3.0.0.81.tgz, extract the package, and then run the installation script on both machines (both machines run the same OS and the same IFS package):

# tar -xvf IntelOPA-IFS.RHEL72-x86_64.10.3.0.0.81.tgz
# cd IntelOPA-IFS.RHEL72-x86_64.10.3.0.0.81/
# ./INSTALL -a
Installing All OPA Software
Determining what is installed on system...
-------------------------------------------------------------------------------
Preparing OFA 10_3_0_0_82 release for Install...
...
A System Reboot is recommended to activate the software changes
Done Installing OPA Software.
Rebuilding boot image with "/usr/bin/dracut -f"...done.

This requires a reboot on the host:

# reboot

After the system is rebooted, you load the Intel OP HFI driver, run lsmod, and then check for Intel OPA modules:

# modprobe hfi1
# lsmod | grep hfi1
hfi1                  633634  1
rdmavt                 57992  1 hfi1
ib_mad                 51913  5 hfi1,ib_cm,ib_sa,rdmavt,ib_umad
ib_core                98787  14 hfi1,rdma_cm,ib_cm,ib_sa,iw_cm,xprtrdma,ib_mad,ib_ucm,rdmavt,ib_iser,ib_umad,ib_uverbs,ib_ipoib,ib_isert
i2c_algo_bit           13413  2 ast,hfi1
i2c_core               40582  6 ast,drm,hfi1,ipmi_ssif,drm_kms_helper,i2c_algo_bit

For installation errors or additional information, you can refer to the /var/log/opa.log file.

Post Installation

To configure IP over Fabric from the Intel OPA software, run the script again:

# ./INSTALL

Then choose option 2) Reconfigure OFA IP over IB. In this example, we configure the IP address (IPoFabric) of the host as 192.168.100.101. You can verify the IP over Fabric Interface:

# more /etc/sysconfig/network-scripts/ifcfg-ib0
DEVICE=ib0
BOOTPROTO=static
IPADDR=192.168.100.101
BROADCAST=192.168.100.255
NETWORK=192.168.100.0
NETMASK=255.255.255.0
ONBOOT=yes
NM_CONTROLLED=no
CONNECTED_MODE=yes
MTU=65520

Bring the IP over Fabric Interface up, and then verify its IP address:

# ifup ib0
# ifconfig ib0
ib0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 65520
        inet 192.168.100.101  netmask 255.255.255.0  broadcast 192.168.100.255
        inet6 fe80::211:7501:179:311  prefixlen 64  scopeid 0x20<link>
Infiniband hardware address can be incorrect! Please read BUGS section in ifconfig(8).
        infiniband 80:00:00:02:FE:80:00:00:00:00:00:00:00:00:00:00:00:00:00:00  txqueuelen 256  (InfiniBand)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 16  bytes 2888 (2.8 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

Note that common information is captured in the /var/log/messages file and Intel OPA-related information is captured in /var/log/opa.log.

So far, we have installed the host software package on one host (10.23.3.27). We repeat the same procedure on the second host (10.23.3.148) and configure the IP over Fabric with 192.168.100.102 on that host. Verify that you can ping 192.168.100.102 from the first server:

# ping 192.168.100.102
PING 192.168.100.102 (192.168.100.102) 56(84) bytes of data.
64 bytes from 192.168.100.102: icmp_seq=1 ttl=64 time=1.34 ms
64 bytes from 192.168.100.102: icmp_seq=2 ttl=64 time=0.303 ms
64 bytes from 192.168.100.102: icmp_seq=3 ttl=64 time=0.253 ms
^C

And from the second server, verify that you can ping the IP over Fabric interface:

# ping 192.168.100.101
PING 192.168.100.101 (192.168.100.101) 56(84) bytes of data.
64 bytes from 192.168.100.101: icmp_seq=1 ttl=64 time=0.024 ms
64 bytes from 192.168.100.101: icmp_seq=2 ttl=64 time=0.043 ms
64 bytes from 192.168.100.101: icmp_seq=3 ttl=64 time=0.023 ms
^C

The opainfo command can be used to verify the fabric:

# opainfo
hfi1_0:1                           PortGUID:0x0011750101790311
   PortState:     Init (LinkUp)
   LinkSpeed      Act: 25Gb         En: 25Gb
   LinkWidth      Act: 4            En: 4
   LinkWidthDnGrd ActTx: 4  Rx: 4   En: 1,2,3,4
   LCRC           Act: 14-bit       En: 14-bit,16-bit,48-bit
   Xmit Data:                  0 MB Pkts:                    0
   Recv Data:                  0 MB Pkts:                    0
   Link Quality: 5 (Excellent)

This confirms that the Intel OP HFI speed is 100 Gbps (4 lanes × 25 Gbps).

Next, you need to enable the Intel OPA Fabric Manager service on one host and start it. Note that the FM must go through many steps, including physical subnet establishment, subnet discovery, information gathering, LID assignment, path establishment, port configuration, switch configuration, and subnet activation.

# opaconfig -E opafm
# service opafm start
Redirecting to /bin/systemctl start  opafm.service

You can query the status of the Intel OPA Fabric Manager at any time:

# service opafm status
Redirecting to /bin/systemctl status  opafm.service
● opafm.service - OPA Fabric Manager
   Loaded: loaded (/usr/lib/systemd/system/opafm.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2017-01-09 23:42:41 EST; 21min ago
  Process: 6758 ExecStart=/usr/lib/opa-fm/bin/opafmd -D (code=exited, status=0/SUCCESS)
 Main PID: 6759 (opafmd)
   CGroup: /system.slice/opafm.service
           ├─6759 /usr/lib/opa-fm/bin/opafmd -D
           └─6760 /usr/lib/opa-fm/runtime/sm -e sm_0

# opainfo
hfi1_0:1                           PortGID:0xfe80000000000000:0011750101790311
   PortState:     Active
   LinkSpeed      Act: 25Gb         En: 25Gb
   LinkWidth      Act: 4            En: 4
   LinkWidthDnGrd ActTx: 4  Rx: 4   En: 3,4
   LCRC           Act: 14-bit       En: 14-bit,16-bit,48-bit       Mgmt: True
   LID: 0x00000001-0x00000001       SM LID: 0x00000001 SL: 0
   Xmit Data:                  0 MB Pkts:                   21
   Recv Data:                  0 MB Pkts:                   22
   Link Quality: 5 (Excellent)

The port state is "Active", which indicates the normal operating state of a fully functional link. To display Intel OP HFI port information and to monitor the link quality, run the opaportinfo command:

# opaportinfo
Present Port State:
Port 1 Info
   Subnet:   0xfe80000000000000       GUID: 0x0011750101790311
   LocalPort:     1                 PortState:        Active
   PhysicalState: LinkUp
   OfflineDisabledReason: None
   IsSMConfigurationStarted: True   NeighborNormal: True
   BaseLID:       0x00000001        SMLID:            0x00000001
   LMC:           0                 SMSL:             0
   PortType: Unknown                LimtRsp/Subnet:     32 us, 536 ms
   M_KEY:    0x0000000000000000     Lease:       0 s  Protect: Read-only
   LinkWidth      Act: 4            En: 4             Sup: 1,2,3,4
   LinkWidthDnGrd ActTx: 4  Rx: 4   En: 3,4           Sup: 1,2,3,4
   LinkSpeed      Act: 25Gb         En: 25Gb          Sup: 25Gb
   PortLinkMode   Act: STL          En: STL           Sup: STL
   PortLTPCRCMode Act: 14-bit       En: 14-bit,16-bit,48-bit Sup: 14-bit,16-bit,48-bit
   NeighborMode   MgmtAllowed:  No  FWAuthBypass: Off NeighborNodeType: HFI
   NeighborNodeGuid:   0x00117501017444e0   NeighborPortNum:   1
   Capability:    0x00410022: CN CM APM SM
   Capability3:   0x0008: SS
   SM_TrapQP: 0x0  SA_QP: 0x1
   IPAddr IPV6/IPAddr IPv4:  ::/0.0.0.0
   VLs Active:    8+1
   VL: Cap 8+1   HighLimit 0x0000   PreemptLimit 0x0000
   VLFlowControlDisabledMask: 0x00000000   ArbHighCap: 16  ArbLowCap: 16
   MulticastMask: 0x0    CollectiveMask: 0x0
   P_Key Enforcement: In: Off Out: Off
   MulticastPKeyTrapSuppressionEnabled:  0   ClientReregister  0
   PortMode ActiveOptimize: Off PassThru: Off VLMarker: Off 16BTrapQuery: Off
   FlitCtrlInterleave Distance Max:  1  Enabled:  1
     MaxNestLevelTxEnabled: 0  MaxNestLevelRxSupported: 0
   FlitCtrlPreemption MinInitial: 0x0000 MinTail: 0x0000 LargePktLim: 0x00
     SmallPktLimit: 0x00 MaxSmallPktLimit 0x00 PreemptionLimit: 0x00
   PortErrorActions: 0x172000: CE-UVLMCE-BCDCE-BTDCE-BHDR-BVLM
   BufferUnits:VL15Init 0x0110 VL15CreditRate 0x00 CreditAck 0x0 BufferAlloc 0x3
   MTU  Supported: (0x6) 8192 bytes
   MTU  Active By VL:
   00: 8192 01:    0 02:    0 03:    0 04:    0 05:    0 06:    0 07:    0
   08:    0 09:    0 10:    0 11:    0 12:    0 13:    0 14:    0 15: 2048
   16:    0 17:    0 18:    0 19:    0 20:    0 21:    0 22:    0 23:    0
   24:    0 25:    0 26:    0 27:    0 28:    0 29:    0 30:    0 31:    0
   StallCnt/VL:  0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
   HOQLife VL[00,07]: 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00
   HOQLife VL[08,15]: 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00
   HOQLife VL[16,23]: 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00
   HOQLife VL[24,31]: 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00
   ReplayDepth Buffer 0x80; Wire 0x0c
   DiagCode: 0x0000    LedEnabled: Off
   LinkDownReason: None    NeighborLinkDownReason: None
   OverallBufferSpace: 0x0880
   Violations    M_Key: 0         P_Key: 0        Q_Key: 0

To display information about the Intel OP HFI:

# hfi1_control -i
Driver Version: 0.9-294
Driver SrcVersion: A08826F35C95E0E8A4D949D
Opa Version: 10.3.0.0.81
0: BoardId: Intel Omni-Path Host Fabric Interface Adapter 100 Series
0: Version: ChipABI 3.0, ChipRev 7.17, SW Compat 3
0: ChipSerial: 0x00790311
0,1: Status: 5: LinkUp 4: ACTIVE
0,1: LID=0x1 GUID=0011:7501:0179:0311


Before running the MPI test, stop the firewall on both hosts so that traffic between the nodes is not blocked:

# systemctl stop firewalld
# systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled; vendor preset: enabled)
   Active: inactive (dead) since Thu 2017-01-12 21:35:18 EST; 1s ago
  Process: 137597 ExecStart=/usr/sbin/firewalld --nofork --nopid $FIREWALLD_ARGS (code=exited, status=0/SUCCESS)
 Main PID: 137597 (code=exited, status=0/SUCCESS)

Jan 12 21:34:13 ebi2s28c01.jf.intel.com firewalld[137597]: 2017-01-12 21:34:1...
Jan 12 21:35:17 ebi2s28c01.jf.intel.com systemd[1]: Stopping firewalld - dyna...
Jan 12 21:35:18 ebi2s28c01.jf.intel.com systemd[1]: Stopped firewalld - dynam...
Hint: Some lines were ellipsized, use -l to show in full.

Finally, you need to set up password-less Secure Shell (SSH) access for the MPI test. When running a program on the target machine, SSH normally requires the target password to log on and execute the program. To enable this transaction without manual intervention, enable SSH login without a password. To do this, first generate a pair of authentication keys on the host without entering a passphrase:

[host-device ~]$ ssh-keygen -t rsa

Then append the host machine's new public key to the target machine's authorized keys using the ssh-copy-id command:

[host-device ~]$ ssh-copy-id <user>@192.168.100.102
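
You can confirm that password-less login works by running a simple remote command; if the key was copied correctly, no password prompt appears (the user name is a placeholder):

[host-device ~]$ ssh <user>@192.168.100.102 hostname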

Running an Intel® MPI Benchmarks Program

In this section, we use a benchmark program to observe the IP over Fabric performance. Note that the numbers are for reference and illustration purposes only, as the tests were run on preproduction servers.

On both systems, Intel® Parallel Studio 2017 Update 1 was installed. First, we run the Sendrecv benchmark between the servers using the TCP protocol. Sendrecv is a parallel transfer benchmark in the Intel® MPI Benchmarks suite (IMB-MPI1), which is included with Intel Parallel Studio. To use the TCP protocol in this benchmark, specify "-genv I_MPI_FABRICS shm:tcp":

[root@ebi2s28c01 ~]# mpirun -genv I_MPI_FABRICS shm:tcp -host ebi2s28c01 -n 1 /opt/intel/impi/2017.1.132/bin64/IMB-MPI1 Sendrecv : -host ebi2s28c02 -n 1 /opt/intel/impi/2017.1.132/bin64/IMB-MPI1
Source Parallel Studio
Intel(R) Parallel Studio XE 2017 Update 1 for Linux*
Copyright (C) 2009-2016 Intel Corporation. All rights reserved.
#------------------------------------------------------------
#    Intel (R) MPI Benchmarks 2017, MPI-1 part
#------------------------------------------------------------
# Date                  : Fri Jan 13 21:02:36 2017
# Machine               : x86_64
# System                : Linux
# Release               : 3.10.0-327.el7.x86_64
# Version               : #1 SMP Thu Oct 29 17:29:29 EDT 2015
# MPI Version           : 3.1
# MPI Thread Environment:


# Calling sequence was:

# /opt/intel/impi/2017.1.132/bin64/IMB-MPI1 Sendrecv

# Minimum message length in bytes:   0
# Maximum message length in bytes:   4194304
#
# MPI_Datatype                   :   MPI_BYTE
# MPI_Datatype for reductions    :   MPI_FLOAT
# MPI_Op                         :   MPI_SUM
#
#

# List of Benchmarks to run:

# Sendrecv

#-----------------------------------------------------------------------------
# Benchmarking Sendrecv
# #processes = 2
#-----------------------------------------------------------------------------
       #bytes #repetitions  t_min[usec]  t_max[usec]  t_avg[usec]   Mbytes/sec
            0         1000        63.55        63.57        63.56         0.00
            1         1000        62.30        62.32        62.31         0.03
            2         1000        53.16        53.17        53.16         0.08
            4         1000        69.16        69.16        69.16         0.12
            8         1000        63.27        63.27        63.27         0.25
           16         1000        62.46        62.47        62.46         0.51
           32         1000        57.75        57.76        57.76         1.11
           64         1000        62.57        62.60        62.58         2.04
          128         1000        45.21        45.23        45.22         5.66
          256         1000        45.04        45.08        45.06        11.36
          512         1000        50.28        50.28        50.28        20.37
         1024         1000        60.76        60.78        60.77        33.69
         2048         1000        81.36        81.38        81.37        50.33
         4096         1000       121.30       121.37       121.33        67.50
         8192         1000       140.51       140.63       140.57       116.50
        16384         1000       232.06       232.14       232.10       141.16
        32768         1000       373.63       373.74       373.69       175.35
        65536          640       799.55       799.92       799.74       163.86
       131072          320      1473.76      1474.09      1473.92       177.83
       262144          160      2806.43      2808.14      2807.28       186.70
       524288           80      6031.64      6033.80      6032.72       173.78
      1048576           40      9327.35      9330.27      9328.81       224.77
      2097152           20     19665.44     19818.81     19742.13       211.63
      4194304           10     50839.90     52294.80     51567.35       160.41


# All processes entering MPI_Finalize

Next, edit the /etc/hosts file and add aliases for the IP over Fabric addresses above (192.168.100.101, 192.168.100.102).

# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.23.3.27  ebi2s28c01
10.23.3.148 ebi2s28c02
192.168.100.101 ebi2s28c01-opa
192.168.100.102 ebi2s28c02-opa

To use Intel OPA, users can specify "-genv I_MPI_FABRICS shm:tmi -genv I_MPI_TMI_PROVIDER psm2" or just "-PSM2". Note that PSM2 stands for Intel® Performance Scaled Messaging 2, a high-performance vendor-specific protocol that provides a low-level communications interface for the Intel® Omni-Path family of products. By using Intel OPA, the performance increases significantly:

[root@ebi2s28c01 ~]# mpirun -PSM2 -host ebi2s28c01-opa -n 1 /opt/intel/impi/2017.1.132/bin64/IMB-MPI1 Sendrecv : -host ebi2s28c02-opa -n 1 /opt/intel/impi/2017.1.132/bin64/IMB-MPI1
Source Parallel Studio
Intel(R) Parallel Studio XE 2017 Update 1 for Linux*
Copyright (C) 2009-2016 Intel Corporation. All rights reserved.
#------------------------------------------------------------
#    Intel (R) MPI Benchmarks 2017, MPI-1 part
#------------------------------------------------------------
# Date                  : Fri Jan 27 22:31:23 2017
# Machine               : x86_64
# System                : Linux
# Release               : 3.10.0-327.el7.x86_64
# Version               : #1 SMP Thu Oct 29 17:29:29 EDT 2015
# MPI Version           : 3.1
# MPI Thread Environment:


# Calling sequence was:

# /opt/intel/impi/2017.1.132/bin64/IMB-MPI1 Sendrecv

# Minimum message length in bytes:   0
# Maximum message length in bytes:   4194304
#
# MPI_Datatype                   :   MPI_BYTE
# MPI_Datatype for reductions    :   MPI_FLOAT
# MPI_Op                         :   MPI_SUM
#
#

# List of Benchmarks to run:

# Sendrecv

#-----------------------------------------------------------------------------
# Benchmarking Sendrecv
# #processes = 2
#-----------------------------------------------------------------------------
       #bytes #repetitions  t_min[usec]  t_max[usec]  t_avg[usec]   Mbytes/sec
            0         1000         1.09         1.09         1.09         0.00
            1         1000         1.05         1.05         1.05         1.90
            2         1000         1.07         1.07         1.07         3.74
            4         1000         1.08         1.08         1.08         7.43
            8         1000         1.02         1.02         1.02        15.62
           16         1000         1.24         1.24         1.24        25.74
           32         1000         1.21         1.22         1.22        52.63
           64         1000         1.26         1.26         1.26       101.20
          128         1000         1.20         1.20         1.20       213.13
          256         1000         1.31         1.31         1.31       392.31
          512         1000         1.36         1.36         1.36       754.03
         1024         1000         2.16         2.16         2.16       948.54
         2048         1000         2.38         2.38         2.38      1720.91
         4096         1000         2.92         2.92         2.92      2805.56
         8192         1000         3.90         3.90         3.90      4200.97
        16384         1000         8.17         8.17         8.17      4008.37
        32768         1000        10.44        10.44        10.44      6278.62
        65536          640        17.15        17.15        17.15      7641.97
       131072          320        21.90        21.90        21.90     11970.32
       262144          160        32.62        32.62        32.62     16070.33
       524288           80        55.30        55.30        55.30     18961.18
      1048576           40        99.05        99.05        99.05     21172.45
      2097152           20       187.19       187.30       187.25     22393.31
      4194304           10       360.39       360.39       360.39     23276.25


# All processes entering MPI_Finalize

The graph below summarizes the results when running the benchmark using TCP and Intel OPA. The x-axis represents the message length in bytes, and the y-axis represents the throughput (Mbytes/sec).

Sendrecv Benchmark Chart

Summary

This document showed how the Intel OP HFI cards were installed on two systems and connected back-to-back with an Intel OPA cable. The Intel Omni-Path Fabric host software was then installed on each host. All the configuration and verification steps needed to bring up the necessary services were shown in detail. Finally, a simple Intel MPI Benchmarks test was run to illustrate the benefit of using Intel OPA.


Introduction to Software Defined Visualization


Software Defined Visualization (SDVis) is an open source initiative from Intel and industry collaborators to improve the visual fidelity, performance, and efficiency of prominent visualization solutions, with a particular emphasis on supporting the rapidly growing "Big Data" usage, from workstations through HPC supercomputing clusters, without the memory limitations and cost of GPU-based solutions.

 

Enhance existing applications using the following high-performing parallel software rendering libraries:

 

Embree 

Ray Tracing Kernel Library

The target users of Embree are graphics application engineers who want to improve the performance of their applications by leveraging Embree's optimized ray tracing kernels. The kernels are optimized for photo-realistic rendering on the latest Intel® processors, with support for SSE, AVX, AVX2, and AVX-512.


OSPRay 

A Ray Tracing Based Rendering Engine for High-Fidelity Visualization

OSPRay is an open source, scalable, and portable ray tracing engine for high-performance, high-fidelity visualization on Intel® Architecture CPUs. It is released under the permissive Apache 2.0 license.


OpenSWR 

OpenGL Software Rasterizer

OpenSWR is a high-performance, highly scalable, OpenGL-compatible software rasterizer that allows the use of unmodified visualization software, making it possible to work with datasets when GPU hardware isn't available or is limiting. OpenSWR is completely CPU based and runs on anything from laptops to workstations to compute nodes in HPC systems.
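
If your Mesa* build includes the swr driver, one way to confirm that OpenSWR is in use is to select it through the GALLIUM_DRIVER environment variable and inspect the reported renderer (a minimal sketch; it assumes the glxinfo utility is installed):

GALLIUM_DRIVER=swr glxinfo | grep "OpenGL renderer"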

 

Intel® Parallel Computing Center Collaborators

Texas Advanced Computer Center (TACC)
Kitware, Inc.
University of Tennessee
University of Oregon
University of Utah
The Stephen Hawking Centre for Theoretical Cosmology, University of Cambridge
Hartree Centre STFC, Daresbury
 

Intel Node FAQ


The following are key questions and answers covering common steps for using the Intel® Xeon Phi™ processor-powered Remote Access Cluster for Artificial Intelligence. If you've been granted node access to this system to test projects and build out machine learning or deep learning models, the FAQs below will help you get started on the system.

How do I get access to my Intel node?
You will receive an email from Colfax International that provides a link to the instructions you need to follow in order to set up your system. The instructions are simple to follow and should only take about 10 minutes. It is important that you carefully read each section that pertains to you so that you avoid any confusion or the need to redo the setup process.
How do I install system packages?
You can install system packages using the “sudo yum” command.
Does my Intel node have Intel® Distribution for Python* installed?
To check if your node has Intel Distribution for Python* installed, look in the “/opt/intel” directory. There should be “intelpython*” directories in “/opt/intel.” The installed version is determined by the remainder of the directory name. For example, intelpython27 means that this directory contains Intel Distribution for Python* 2.7.
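
For example, a quick way to list the installed Intel Distribution for Python* versions on a node:

ls -d /opt/intel/intelpython*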
How can I use Intel® Distribution for Python* and Pip instead of the default system Python* and Pip?

You can add "intel-python" and "intel-pip" aliases in the ".bashrc" file in your home directory. The following examples show how to add aliases for Intel Distribution for Python* 2.7 and its corresponding Pip. To use an alias with any other Intel Distribution for Python*, replace the version digits (that is, '27') with the installed version you would like to use.

To add an “intel-python” alias:

alias intel-python="/opt/intel/intelpython27/bin/python"

To add an “intel-pip” alias:

alias intel-pip="/opt/intel/intelpython27/bin/pip"
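
After adding the aliases, reload your shell configuration and confirm that they resolve to the Intel builds (a quick check; the exact version strings will differ on your node):

source ~/.bashrc
intel-python --version
intel-pip --version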

How do I install Intel® Software Optimization for Theano*?

Each Intel node should already have Intel Software Optimization for Theano* in “/opt/theano”. To use this Intel Software Optimization for Theano* version, use the following commands.

NOTE: If you have not installed these prerequisite packages, please install them using the following command.

        pip install nose-parameterized Pygments requests docutils snowballstemmer alabaster Jinja2 imagesize pytz babel Sphinx --user --no-deps

	# create an installs directory if one does not already exist
	[[ -d ~/.installs/node_copies ]] || mkdir -p ~/.installs/node_copies
	# remove the previous installation of theano if it exists
	rm -rf ~/.installs/node_copies/intel-theano
	# copy Intel Theano from the opt directory
	cp -r /opt/theano ~/.installs/node_copies/intel-theano
	# install Intel Theano using Pip
	cd ~/.installs/node_copies/intel-theano
	python setup.py build
        pip install . --user
        cp theanorc_icc_mkl ~/.theanorc
        echo "Test if Intel Theano is installed by running python and importing theano"
        echo "For more information, see the Theano README.md or Installation files"
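
One quick way to perform the test suggested above (the reported version depends on the copy of Theano in /opt/theano):

        python -c "import theano; print(theano.__version__)"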
	
How do I install Keras*?

A version of the Keras deep learning library is in the "/opt/" directory. To install this version of Keras, make sure that Theano or TensorFlow is installed first, then use the following commands.

        # install Keras dependencies.
        pip install h5py --user --no-deps
        # create an installs directory if one does not already exist
        [[ -d ~/.installs/node_copies ]] || mkdir -p ~/.installs/node_copies
        # remove the previous installation of keras if it exists
        rm -rf ~/.installs/node_copies/keras
        # copy Keras from the opt directory
        cp -r /opt/keras ~/.installs/node_copies/keras
        # install Keras using Pip
        cd ~/.installs/node_copies/keras
        python setup.py build
        pip install . --user
        echo "Test if Keras is installed by running python and importing keras"
        echo "For more information, see the Keras README.md or Installation files"
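
Similarly, you can verify the Keras installation from the command line (a minimal check):

        python -c "import keras; print(keras.__version__)"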
	

Demo: Software Defined Visualization Using Intel® Xeon Phi™ Processor


In this demo we showcase the use of the Intel® Xeon Phi™ processor to do a 3D visualization of a tumor in a human brain. This can help advance research in the medical field by enabling precise detection and removal of something like a tumor in a human brain.

More information

The tool used for visualization is Paraview, with OSPRay as the rendering library.

Pre-requisites

Intel® Xeon Phi™ processor system with CentOS 7.2 Linux* (internet enabled)

Open a terminal in your work area directory and follow the steps below:

  1. Create directory for the demo

    mkdir Intel_brain_demo

  2. Change directory

    cd Intel_brain_demo

  3. Create two directories under this

    mkdir paraview
    mkdir ospray

  4. Access the files from Dropbox:

    https://www.dropbox.com/s/wj0qp1clxv5xssv/SC_2016_BrainDemo.tar.gz?dl=0

  5. Copy the Paraview and Ospray tar files into the respective directories you created in steps above

    mv SC_2016_BrainDemo/paraview_sc_demo.tgz paraview/
    mv SC_2016_BrainDemo/ospray.tgz ospray/

  6. Untar each of the *.tgz files in its respective directory

    tar -xzvf *.tgz

  7. Set the library path

    export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:<…../Intel_brain_demo/ospray/install/lib64>

  8. Optional step: set the QT_GRAPHICSSYSTEM variable, only if ParaView doesn't load normally

    export QT_GRAPHICSSYSTEM=gtk

  9. Change directory to paraview/install where the binaries are

    cd paraview/install

  10. Run Paraview

    ./bin/paraview

  11. Once Paraview loads

    Select File/Load State

  12. Then load the brain_demo.pvsm state file from the SC_2016_BrainDemo archive that you downloaded in the step above

  13. It will then ask you to load VTK files. Click the "..." button to select the appropriate *tumor1.vtk file, then the *tumor2.vtk file, and then the *Tumor1.vtk file, in that order, on your local machine. Then click OK.

  14. An Output Messages pop-up window will appear with warnings. Ignore the warnings and click Close, and you should see something like the following:

  15. Now you can go to File/Save State and save this state. Every time you load ParaView, you can load this state file to skip the previous step of having to locate the data files.
  16. Then, on the Properties tab on the left side, enable OSPRay for every view (RenderView1, RenderView2, and RenderView3) by selecting each view and clicking Enable Ospray.

  17. Once you do that, the images for all three views should look like the following:

  18. You can also rotate the views and see how they look.

A few issues and how to resolve them

If OpenGL is missing, install Mesa for OpenGL:

sudo yum -y install mesa-libGL
sudo yum -y install mesa-libGL-devel

If you see a libQtGui.so.4 error, install the qt-x11 package:

yum -y install qt-x11

Acknowledgements

Special thanks to Carson Brownlee and James Jeffers from Intel Corporation for all their contributions and support. Without their efforts, it wouldn’t have been possible to get this demo running.

References

  1. http://www.intel.com/content/www/us/en/processors/xeon/xeon-phi-detail.html
  2. https://software.intel.com/en-us/blogs/Intel-Parallel-Studio-XE-2016
  3. https://gitlab.kitware.com/carson/paraview
  4. https://gitlab.kitware.com/carson/vtk
  5. http://www.ospray.org
  6. http://www.ospray.org/getting_ospray.html
  7. http://dap.xeonphi.com
  8. https://ispc.github.io/downloads.html
  9. https://www.threadingbuildingblocks.org
  10. https://en.wikipedia.org/wiki/Software_rendering

Intel Distribution for Python - development environment setup for Jupyter Notebook and PyCharm


 

This article describes how to set up the Intel® Distribution for Python* development environment. Two typical development environments are described: the first is a Jupyter notebook, and the second is PyCharm. This article contains details on setting up the Intel® Distribution for Python* in both development environments.

1. What is Intel® Distribution for Python*

Intel® Distribution for Python* gives you ready access to tools and techniques for high performance to supercharge all your Python applications on modern Intel platforms. Python is known for simplicity and speed of development; the Intel® Distribution for Python* addresses its fundamental performance challenge using libraries such as the Intel® Math Kernel Library (Intel® MKL), Intel® Threading Building Blocks, and the Intel® Data Analytics Acceleration Library.

More details are available at:

https://software.intel.com/en-us/intel-distribution-for-python

https://software.intel.com/en-us/intel-distribution-for-python/details

2. Install Intel® Distribution for Python* (IDP)

(1) Install Anaconda

https://www.continuum.io/downloads

(2) Run command prompt as admin in Windows and enter the following commands in the window to download / install Intel Python via Anaconda.

 

conda update conda
conda config --add channels intel
conda create -n idp3 intelpython3_full python=3
conda create -n idp2 intelpython2_full python=2

(3) To run Jupyter notebook for IDP2 or IDP3, enter the following commands in the window.

activate idp2
jupyter notebook

activate idp3
jupyter notebook

(4) The Jupyter notebook opens in the browser; go to New > Python 2 or Python 3.

(5) Enter the following Python commands to check the Python version (press SHIFT+ENTER at the end of the input):

import sys
sys.version

You can see the Intel® Distribution for Python* version.
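
If you also want to confirm that NumPy in the active environment is built against Intel® MKL, one hedged check is to print its build configuration from a command prompt in the activated environment (the output format varies by NumPy version):

python -c "import numpy; numpy.show_config()"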

 

3. Setting up Intel® Distribution for Python* for the PyCharm IDE

(1) Install Pycharm IDE  (https://www.jetbrains.com/pycharm/download)

(2) New project > Add Local

(3) Enter the path of the Intel Distribution for Python interpreter

 Normally, it is C:\ProgramData\Anaconda2\envs\idp2\python.exe

(4) Enter the Python commands for checking the Python version and run them. You can then verify that the correct Intel Distribution for Python version is set.

 

References :

https://software.intel.com/en-us/articles/using-intel-distribution-for-python-with-anaconda

 

Recipe: Building and running NEMO* on Intel® Xeon Phi™ Processors


About NEMO*

The NEMO* (Nucleus for European Modelling of the Ocean) numerical solutions framework encompasses models of ocean, sea ice, tracers, and biochemistry equations and their related physics. It also incorporates the pre- and post-processing tools and the interface to other components of the Earth System. NEMO allows several ocean-related components of the Earth System to work together or separately, and also allows for two-way nesting via AGRIF software. It is interfaced with the remaining components of the Earth System package (atmosphere, land surfaces, and so on) via the OASIS coupler.

This recipe shows the performance advantages of using the Intel® Xeon Phi™ processor 7250.

NEMO 3.6 is the current stable version.

Downloading the Code

  1. Download the NEMO source code from the official NEMO repository (you should register at www.nemo-ocean.eu ):

    svn co -r 6939 http://forge.ipsl.jussieu.fr/nemo/svn/branches/2015/nemo_v3_6_STABLE/NEMOGCM nemo

  2. Download the XIOS IO server from the official XIOS repository:

    svn co -r 703 http://forge.ipsl.jussieu.fr/ioserver/svn/XIOS/branchs/xios-1.0 xios

  3. If your system already has NetCDF libraries with Fortran bindings installed and they link with the NEMO and XIOS binaries, go to the section "Building XIOS for the Intel Xeon Processor". Otherwise, download the additional libraries; for example:
  4. NetCDF-Fortran from https://github.com/Unidata/netcdf-fortran/archive/netcdf-fortran-4.2.tar.gz

Building Additional Libraries for the Intel® Xeon® Processor

  1. First, choose a directory for your experiments, such as “~/NEMO-BDW”:
    export base="~/NEMO-BDW"
  2. Create a directory and copy all required libraries in $base:
    mkdir -p $base/libraries
  3. Unpack the tarball files in $base/libraries/src.
  4. To build an Intel® Advanced Vector Extensions 2 (Intel® AVX2) version of libraries, set:
    export arch="-xCORE-AVX2"
  5. Set the following environment variables:
    export PREFIX=$base/libraries
    export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:${PREFIX}/lib
    export CFLAGS="-I$PREFIX/include -L$PREFIX/lib -O3 -g -traceback -openmp ${arch} -fPIC"
    export CPPFLAGS=$CFLAGS
    export CXXFLAGS=$CFLAGS
    export FFFLAGS=$CFLAGS
    export FCFLAGS=$CFLAGS
    export LDFLAGS="-L$PREFIX/lib -openmp ${arch} -fPIC"
    export FC=mpiifort
    export CXX=mpiicc
    export CC=mpiicc
    export CPP="icc -E"
  6. Build szip:
    cd $base/libraries/src/szip-2.1
    ./configure --prefix=$PREFIX
    make -j 4
    make install
  7. Build zlib:
    cd $base/libraries/src/zlib-1.2.8
    ./configure --prefix=$PREFIX
    make -j 4
    make install
  8. Build HDF5:
    cd $base/libraries/src/hdf5-1.8.12
    ./configure --with-zlib=$PREFIX --prefix=$PREFIX --enable-fortran --with-szlib=$PREFIX --enable-hl
    make
    make install
  9. Build CURL:
    cd $base/libraries/src/curl-7.42.1
    ./configure --prefix=$PREFIX
    make -j 4
    make install
  10. Build NetCDF:
    cd $base/libraries/src/netcdf-4.3.3
    export LIBS=" -lhdf5_hl -lhdf5 -lz -lsz -lmpi"
    export LD_FLAGS+=" -L$PREFIX/lib"
    ./configure --prefix=$PREFIX
    make
    make install
  11. Build NetCDF Fortran wrapper:
    cd $base/libraries/src/netcdf-fortran-4.2/
    export LIBS=""
    export CFLAGS="$CFLAGS -lnetcdf"
    export CPPFLAGS=$CFLAGS
    export CXXFLAGS=$CFLAGS
    export FFFLAGS=$CFLAGS
    export FCFLAGS=$CFLAGS
    export FC=ifort
    export CXX=mpiicc
    export CC=mpiicc
    export LDFLAGS+=" -L$I_MPI_ROOT/lib64/"
    ./configure --prefix=$PREFIX
    make
    make install

Building XIOS for the Intel Xeon Processor

  1. Copy XIOS source code to $base/xios
  2. Create files:
    $base/xios/arch/arch-ifort_linux.env
    $base/xios/arch/arch-ifort_linux.fcm
    $base/xios/arch/arch-ifort_linux.path
  3. Add the following lines to the $base/xios/arch/arch-ifort_linux.env file:
    export NETCDF_INC_DIR=$base/libraries/include
    export NETCDF_LIB_DIR=$base/libraries/lib
    export HDF5_INC_DIR=$base/libraries/include
    export HDF5_LIB_DIR=$base/libraries/lib
  4. Add the following lines to the $base/xios/arch/arch-ifort_linux.fcm file:
    %NCDF_INC            -I$base/libraries/include
    %NCDF_LIB            -L$base/libraries/lib -lnetcdff -lnetcdf -lhdf5 -lcurl -lz -lsz
    %FC                  mpiifort
    %FCFLAGS             -O3 -g -traceback -xCORE-AVX2 -I$base/libraries/include -L$base/libraries/lib
    %FFLAGS              -O3 -g -traceback -xCORE-AVX2 -I$base/libraries/include -L$base/libraries/lib
    %LD                  mpiifort
    %FPPFLAGS            -P -C -traditional
    %LDFLAGS             -O3 -g -traceback -xCORE-AVX2 -I$base/libraries/include -L$base/libraries/lib
    %AR                  ar
    %ARFLAGS             -r
    %MK                  gmake
    %USER_INC            %NCDF_INC_DIR
    %USER_LIB            %NCDF_LIB_DIR
    
    %MAKE                gmake
    %BASE_LD        -lstdc++ -lifcore -lintlc
    %LINKER         mpiifort -nofor-main
    %BASE_INC       -D__NONE__
    %CCOMPILER      mpiicc
    %FCOMPILER      mpiifort
    %CPP            cpp
    %FPP            cpp -P
    
    %BASE_CFLAGS    -O3 -g -traceback -xCORE-AVX2 -I$base/libraries/include -L$base/libraries/lib
    %PROD_CFLAGS    -O3 -g -traceback -xCORE-AVX2 -I$base/libraries/include -L$base/libraries/lib
    %DEV_CFLAGS    -O3 -g -traceback -xCORE-AVX2 -I$base/libraries/include -L$base/libraries/lib
    %DEBUG_CFLAGS  -O3 -g -traceback -xCORE-AVX2 -I$base/libraries/include -L$base/libraries/lib
    %BASE_FFLAGS   -O3 -g -traceback -xCORE-AVX2 -I$base/libraries/include -L$base/libraries/lib
    %PROD_FFLAGS    -O3 -g -traceback -xCORE-AVX2 -I$base/libraries/include -L$base/libraries/lib
    %DEV_FFLAGS    -O3 -g -traceback -xCORE-AVX2 -I$base/libraries/include -L$base/libraries/lib
    %DEBUG_FFLAGS   -O3 -g -traceback -xCORE-AVX2 -I$base/libraries/include -L$base/libraries/lib
  5. Add the following lines to the $base/xios/arch/arch-ifort_linux.path file:
    NETCDF_INCDIR="-I $NETCDF_INC_DIR"
    NETCDF_LIBDIR="-L $NETCDF_LIB_DIR"
    NETCDF_LIB="-lnetcdff -lnetcdf -lcurl"
    MPI_INCDIR=""
    MPI_LIBDIR=""
    MPI_LIB=""
    HDF5_INCDIR="-I $HDF5_INC_DIR"
    HDF5_LIBDIR="-L $HDF5_LIB_DIR"
    HDF5_LIB="-lhdf5_hl -lhdf5 -lz -lcurl"
  6. Change directory to $base/xios and execute the following command:
    ./make_xios --full --prod --arch ifort_linux

Building NEMO for the Intel Xeon Processor and Preparing Workloads

  1. Copy NEMO source code to $base/nemo
  2. Apply the following patch to file $base/nemo/NEMOGCM/NEMO/OPA_SRC/nemogcm.F90:
    @@ -116,6 +116,7 @@
           !!              Madec, 2008, internal report, IPSL.
           !!----------------------------------------------------------------------
           INTEGER ::   istp       ! time step index
    +DOUBLE PRECISION :: mpi_wtime, sstart, send
           !!----------------------------------------------------------------------
           !
     #if defined key_agrif
    @@ -163,18 +164,19 @@
     #if defined key_agrif
               CALL Agrif_Regrid()
     #endif
    -
              DO WHILE ( istp <= nitend .AND. nstop == 0 )
    +sstart = mpi_wtime()
     #if defined key_agrif
                 CALL stp                         ! AGRIF: time stepping
     #else
                 CALL stp( istp )                 ! standard time stepping
     #endif
    +send=mpi_wtime()
    +print *, "Step ", istp, " - " , send-sstart , "s."
                 istp = istp + 1
                 IF( lk_mpp )   CALL mpp_max( nstop )
              END DO
     #endif
    -
           IF( lk_diaobs   )   CALL dia_obs_wri
           !
           IF( ln_icebergs )   CALL icb_end( nitend )
  3. Create the file $base/nemo/ARCH/arch-mpiifort_linux.fcm and add the following lines:
    %NCDF_INC            -I/$base/libraries/include
    %NCDF_LIB            -L$base/libraries/lib -lnetcdff -lnetcdf -lz -lcurl -lhdf5_hl -lhdf5 -lz -lcurl
    %CPP                 icc -E
    %FC                  mpiifort
    %FCFLAGS          -r8 -g -traceback -qopenmp -O3 -xCORE-AVX2 -g -traceback
    %FFLAGS             -r8 -g -traceback -qopenmp -O3 -xCORE-AVX2 -g -traceback
    %LD                  mpiifort
    %FPPFLAGS            -P -C -traditional
    %LDFLAGS             -lstdc++ -lifcore -O3 -xCORE-AVX2 -g -traceback
    %AR                  ar
    %ARFLAGS             -r
    %MK                  gmake
    %XIOS_INC            -I$base/xios/inc
    %XIOS_LIB            -L$base/xios/lib -lxios
    %USER_INC            %NCDF_INC %XIOS_INC
    %USER_LIB            %NCDF_LIB %XIOS_LIB
  4. Build the binary for the GYRE workload:
    cd $base/nemo/NEMOGCM/CONFIG
    ./makenemo -n GYRE -m mpiifort_linux -j 4
  5. Create a sandbox directory for the GYRE runs:
    1.  mkdir -p $base/nemo/gyre-exp
       cp -r $base/nemo/NEMOGCM/CONFIG/GYRE/BLD/bin/nemo.exe $base/nemo/gyre-exp
       cp -r $base/nemo/NEMOGCM/CONFIG/GYRE/EXP00/* $base/nemo/gyre-exp
     2. Switch off creating mesh files by changing "nn_msh" to 0 in the namelist_ref file
     3. Enable benchmark mode by changing "nn_bench" to 1 in the namelist_ref file.
     4. Set the following parameters in the "&namcfg" section:
       jp_cfg = 70
       jpidta = 2102
       jpjdta = 1402
       jpkdta = 31
       jpiglo = 2102
       jpjglo = 1402
    5. Switch off using the IO server in the iodef.xml file (“using_server = false”)
  6. Build a binary for the ORCA025 workload:
    1. Change  “$base/nemo/NEMOGCM/CONFIG/ORCA2_LIM3/cpp_ORCA2_LIM3.fcm” content to “bld::tool::fppkeys key_trabbl key_vvl key_dynspg_ts key_ldfslp key_traldf_c2d key_traldf_eiv key_dynldf_c3d key_zdfddm key_zdftmx key_mpp_mpi key_zdftke key_lim3 key_iomput”
    2. Change the line “ORCA2_LIM3 OPA_SRC LIM_SRC_3 NST_SRC” to “ORCA2_LIM3 OPA_SRC LIM_SRC_3” in file $base/nemo/NEMOGCM/CONFIG/cfg.txt
    3. ./makenemo -n ORCA2_LIM3 -m mpiifort_linux -j 4
  7. Go to the Barcelona Supercomputing Center page (in Spanish), and in section 9 locate the paragraph "PREGUNTAS Y RESPUESTAS:", which gives a path to the FTP server and the credentials to log in.
  8. Download the BenchORCA025L75.tar.gz file from directory Benchmarks_aceptacion/NEMO/
  9. Extract the contents of the tarball file to $base/nemo/orca-exp
  10. Copy the NEMO binary to the sandbox directory:
    cp $base/nemo/NEMOGCM/CONFIG/ORCA2_LIM3/BLD/bin/nemo.exe $base/nemo/orca-exp
  11. Edit the file $base/nemo/orca-exp/iodef.xml and add the following lines into the “<context id="xios">    <variable_definition>” section:
    <variable id="min_buffer_size" type="int">994473778</variable><variable id="buffer_size" type="int">994473778</variable> 
  12. In the file namelist_ref in section “&namrun” set the following variables:
    nn_itend     =   10
    nn_stock    =    10
    nn_write    =    10
  13. Copy the $base/nemo/NEMOGCM/CONFIG/SHARED/namelist_ref file to $base/nemo/orca-exp
  14. Switch off using the IO server in the iodef.xml file (“using_server = false”)
  15. To build the KNL binaries change “-xCORE-AVX2” to “-xMIC-AVX512”, change $base to another directory, and do all of the steps again.

Running the GYRE Workload with the Intel Xeon Processor

  1. Go to $base/nemo/gyre-exp
  2. Source the environment variables for the compiler and the Intel® MPI Library:
    source /opt/intel/compiler/latest/bin/compilervars.sh intel64
    source /opt/intel/impi/latest/bin/compilervars.sh intel64
  3. Add libraries to LD_LIBRARY_PATH:
    export LD_LIBRARY_PATH=$base/libraries/lib/:$LD_LIBRARY_PATH
  4. Set additional variables for the Intel MPI Library:
    export I_MPI_FABRICS=shm:tmi
    export I_MPI_PIN_CELL=core
  5. Run NEMO:
    mpiexec.hydra -genvall -f <hostfile> -n <number of ranks> -perhost <ppn> ./nemo.exe
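
For example, a hypothetical run on two 64-core nodes listed in a hostfile named hosts (the hostfile name, rank count, and processes per node are illustrative values, not part of the original recipe):

    mpiexec.hydra -genvall -f hosts -n 128 -perhost 64 ./nemo.exe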

Running the ORCA025 Workload with the Intel Xeon Processor

  1. Go to $base/nemo/orca-exp
  2. Source the environment variables for the compiler and the Intel MPI Library:
    source /opt/intel/compiler/latest/bin/compilervars.sh intel64
    source /opt/intel/impi/latest/bin/compilervars.sh intel64
  3. Add libraries to LD_LIBRARY_PATH:
    export LD_LIBRARY_PATH=$base/libraries/lib/:$LD_LIBRARY_PATH
  4. Set additional variables for Intel MPI Library:
    export I_MPI_FABRICS=shm:tmi
    export I_MPI_PIN_CELL=core
  5. Run NEMO:
    mpiexec.hydra -genvall -f <hostfile> -n <number of ranks> -perhost <ppn> ./nemo.exe
  6. If you are faced with hangs while the application is running you can run NEMO with the XIOS server in detached mode:
    1. Copy xios_server.exe from $base/xios/bin to $base/nemo/orca-exp
    2. Edit iodef.xml file and set “using_server = true”
    3. mpiexec.hydra -genvall -f <hostfile> -n <number of ranks> -perhost <ppn> ./nemo.exe : -n 2 ./xios_server.exe

Building Additional Libraries for the Intel® Xeon Phi™ Processor

  1. First, choose a directory for your experiments, such as "~/NEMO-KNL":
    export base="~/NEMO-KNL"
  2. Create the directory and copy all required libraries in $base:
    mkdir -p $base/libraries
  3. Unpack the tarball files in $base/libraries/src
  4. To build an Intel® Advanced Vector Extensions 512 (Intel® AVX-512) version of the libraries, set:
    export arch="-xMIC-AVX512"
  5. Set the following environment variables:
     export PREFIX=$base/libraries
     export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:${PREFIX}/lib
     export CFLAGS="-I$PREFIX/include -L$PREFIX/lib -O3 -g -traceback -openmp ${arch} -fPIC"
     export CPPFLAGS=$CFLAGS
     export CXXFLAGS=$CFLAGS
     export FFFLAGS=$CFLAGS
     export FCFLAGS=$CFLAGS
     export LDFLAGS="-L$PREFIX/lib -openmp ${arch} -fPIC"
     export FC=mpiifort
     export CXX=mpiicc
     export CC=mpiicc
     export CPP="icc -E"
  6. Build szip:
     cd $base/libraries/src/szip-2.1
     ./configure --prefix=$PREFIX
     make -j 4
     make install
  7. Build zlib:
    cd $base/libraries/src/zlib-1.2.8
    ./configure --prefix=$PREFIX
    make -j 4
    make install
  8. Build HDF5:
    cd $base/libraries/src/hdf5-1.8.12
    ./configure --with-zlib=$PREFIX --prefix=$PREFIX --enable-fortran --with-szlib=$PREFIX --enable-hl
    make
    make install
  9. Build CURL:
    cd $base/libraries/src/curl-7.42.1
    ./configure --prefix=$PREFIX
    make -j 4
    make install
  10. Build NetCDF:
    cd $base/libraries/src/netcdf-4.3.3
    export LIBS=" -lhdf5_hl -lhdf5 -lz -lsz -lmpi"
    export LD_FLAGS+=" -L$PREFIX/lib"
    ./configure --prefix=$PREFIX
    make
    make install
  11. Build the NetCDF Fortran wrapper:
    cd $base/libraries/src/netcdf-fortran-4.2/
    export LIBS=""
    export CFLAGS="$CFLAGS -lnetcdf"
    export CPPFLAGS=$CFLAGS
    export CXXFLAGS=$CFLAGS
    export FFFLAGS=$CFLAGS
    export FCFLAGS=$CFLAGS
    export FC=ifort
    export CXX=mpiicc
    export CC=mpiicc
    export LDFLAGS+=" -L$I_MPI_ROOT/lib64/"
    ./configure --prefix=$PREFIX
    make
    make install

Building XIOS for the Intel Xeon Phi Processor

  1. Copy XIOS source code to $base/xios
  2. Create files:
    $base/xios/arch/arch-ifort_linux.env
    $base/xios/arch/arch-ifort_linux.fcm
    $base/xios/arch/arch-ifort_linux.path
  3. Add the following lines to the $base/xios/arch/arch-ifort_linux.env file:
    export NETCDF_INC_DIR=$base/libraries/include
    export NETCDF_LIB_DIR=$base/libraries/lib
    export HDF5_INC_DIR=$base/libraries/include
    export HDF5_LIB_DIR=$base/libraries/lib
  4. Add the following lines to the $base/xios/arch/arch-ifort_linux.fcm file:
    %NCDF_INC            -I$base/libraries/include
    %NCDF_LIB            -L$base/libraries/–ib -lnetc–ff -lnet–df -lh–f5 -lc–rl –lz -lsz
    %FC                  mpiifort
    %FCFLAGS             –O3–-g -traceback –xMIC-AVX–12 -I$base/libraries/incl–de -L$base/libraries/lib
    %FFLAGS              –O3–-g -traceb–ck - xMIC-AVX–12 -I$base/libraries/incl–de -L$base/libraries/lib
    %LD                  mpiifort
    %FPPFLAGS           –-P–-C -traditional
    %LDFLAGS             –O3–-g -traceb–ck - xMIC-AVX–12 -I$base/libraries/incl–de -L$base/libraries/lib
    %AR                  ar
    %ARFLAGS             -r
    %MK                  gmake
    %USER_INC            %NCDF_INC_DIR
    %USER_LIB            %NCDF_LIB_DIR
    
    %MAKE                gmake
    %BASE_LD        -lstdc++ -lifc–re -lintlc
    %LINKER         mpiif–rt -nofor-main
    %BASE_INC       -D__NONE__
    %CCOMPILER      mpiicc
    %FCOMPILER      mpiifort
    %CPP            cpp
    %FPP            –pp -P
    
    %BASE_CFLAGS    –O3–-g -traceb–ck - xMIC-AVX512-I$base/libraries/incl–de -L$base/libraries/lib
    %PROD_CFLAGS   –O3–-g -traceb–ck - xMIC-AVX–12 -I$base/libraries/incl–de -L$base/libraries/lib
    %DEV_CFLAGS    –O3–-g -traceb–ck - xMIC-AVX–12 -I$base/libraries/incl–de -L$base/libraries/lib
    %DEBUG_CFL–GS –O3–-g -traceb–ck - xMIC-AVX–12 -I$base/libraries/incl–de -L$base/libraries/lib
    %BASE_FFLAGS   –O3–-g -traceb–ck - xMIC-AVX–12 -I$base/libraries/incl–de -L$base/libraries/lib
    %PROD_FFLAGS    –O3–-g -traceb–ck - xMIC-AVX–12 -I$base/libraries/incl–de -L$base/libraries/lib
    %DEV_FFLAGS    –O3–-g -traceb–ck - xMIC-AVX–12 -I$base/libraries/incl–de -L$base/libraries/lib
    %DEBUG_FFLAGS   –O3–-g -traceb–ck - xMIC-AVX–12 -I$base/libraries/incl–de -L$base/libraries/lib
  5. Add the following lines to the $base/xios/arch/arch-ifort_linux.path file:
    NETCDF_INCDIR="-I $NETCDF_INC_DIR"
    NETCDF_LIBDIR="-L $NETCDF_LIB_DIR"
    NETCDF_LIB="-lnetcdff -lnetcdf -lcurl"
    MPI_INCDIR=""
    MPI_LIBDIR=""
    MPI_LIB=""
    HDF5_INCDIR="-I $HDF5_INC_DIR"
    HDF5_LIBDIR="-L $HDF5_LIB_DIR"
    HDF5_LIB="-lhdf5_hl -lhdf5 -lz -lcurl"
  6. Change the directory to $base/xios and execute the following command:
    ./make_xios --full --prod --arch ifort_linux

Building NEMO for the Intel Xeon Phi Processor and Preparing Workloads

  1. Copy the NEMO source code to $base/nemo
  2. Apply the following patch to the file $base/nemo/NEMOGCM/NEMO/OPA_SRC/nemogcm.F90:
    @@ -116,6 +116,7 @@
           !!              Madec, 2008, internal report, IPSL.
           !!----------------------------------------------------------------------
           INTEGER ::   istp       ! time step index
    +DOUBLE PRECISION :: mpi_wtime, sstart, send
           !!----------------------------------------------------------------------
           !
     #if defined key_agrif
    @@ -163,18 +164,19 @@
     #if defined key_agrif
               CALL Agrif_Regrid()
     #endif
    -
              DO WHILE ( istp <= nitend .AND. nstop == 0 )
    +sstart = mpi_wtime()
     #if defined key_agrif
                 CALL stp                         ! AGRIF: time stepping
     #else
                 CALL stp( istp )                 ! standard time stepping
     #endif
    +send=mpi_wtime()
    +print *, "Step ", istp, " - ", send-sstart, "s."
                 istp = istp + 1
                 IF( lk_mpp )   CALL mpp_max( nstop )
              END DO
     #endif
    -
           IF( lk_diaobs   )   CALL dia_obs_wri
           !
           IF( ln_icebergs )   CALL icb_end( nitend )
  3. Create the file $base/nemo/ARCH/arch-mpiifort_linux.fcm and add the following lines:
    %NCDF_INC            -I$base/libraries/include
    %NCDF_LIB            -L$base/libraries/lib -lnetcdff -lnetcdf -lz -lcurl -lhdf5_hl -lhdf5 -lz -lcurl
    %CPP                 icc -E
    %FC                  mpiifort
    %FCFLAGS             -r8 -g -traceback -qopenmp -O3 -xMIC-AVX512
    %FFLAGS              -r8 -g -traceback -qopenmp -O3 -xMIC-AVX512
    %LD                  mpiifort
    %FPPFLAGS            -P -C -traditional
    %LDFLAGS             -lstdc++ -lifcore -O3 -xMIC-AVX512 -g -traceback
    %AR                  ar
    %ARFLAGS             -r
    %MK                  gmake
    %XIOS_INC            -I$base/xios/inc
    %XIOS_LIB            -L$base/xios/lib -lxios
    %USER_INC            %NCDF_INC %XIOS_INC
    %USER_LIB            %NCDF_LIB %XIOS_LIB
  4. Build the binary for the GYRE workload:
    cd $base/nemo/NEMOGCM/CONFIG
    ./makenemo -n GYRE -m mpiifort_linux -j 4
  5. Create a sandbox directory for the GYRE runs:
    1. mkdir -p $base/nemo/gyre-exp
      cp -r $base/nemo/NEMOGCM/CONFIG/GYRE/BLD/bin/nemo.exe $base/nemo/gyre-exp
      cp -r $base/nemo/NEMOGCM/CONFIG/GYRE/EXP00/* $base/nemo/gyre-exp
    2. Switch off creating mesh files by changing “nn_msh” to 0 in the namelist_ref file
    3. Enable benchmark mode by changing “nn_bench” to 1 in the namelist_ref  file.
    4. Set the following parameters in the “&namcfg” section:
      jp_cfg = 70
      jpidta = 2102
      jpjdta = 1402
      jpkdta = 31
      jpiglo = 2102
      jpjglo = 1402
    5. Switch off using the IO server in the iodef.xml file (“using_server = false”)
  6. Build the binary for ORCA025 workload:
    1. Change  $base/nemo/NEMOGCM/CONFIG/ORCA2_LIM3/cpp_ORCA2_LIM3.fcm content to “bld::tool::fppkeys key_trabbl key_vvl key_dynspg_ts key_ldfslp key_traldf_c2d key_traldf_eiv key_dynldf_c3d key_zdfddm key_zdftmx key_mpp_mpi key_zdftke key_lim3 key_iomput”
    2. Change line “ORCA2_LIM3 OPA_SRC LIM_SRC_3 NST_SRC” to “ORCA2_LIM3 OPA_SRC LIM_SRC_3” in the file $base/nemo/NEMOGCM/CONFIG/cfg.txt 
    3. ./makenemo -n ORCA2_LIM3 -m mpiifort_linux -j 4
  7. Go to the Barcelona Supercomputing Center site (in Spanish), and in section 9 locate the paragraph "PREGUNTAS Y RESPUESTAS:" ("Questions and Answers"), which gives the path to the FTP server and the credentials to log in.
  8. Download the BenchORCA025L75.tar.gz file from the Benchmarks_aceptacion/NEMO/ directory
  9. Extract the contents of the tarball file to $base/nemo/orca-exp
  10. Copy the NEMO binary to the sandbox directory:
    cp $base/nemo/NEMOGCM/CONFIG/ORCA2_LIM3/BLD/bin/nemo.exe $base/nemo/orca-exp
  11. Edit the file $base/nemo/orca-exp/iodef.xml and add the following lines into the “<context id="xios">    <variable_definition>” section:
    <variable id="min_buffer_size" type="int">994473778</variable><variable id="buffer_size" type="int">994473778</variable>
  12. In the file namelist_ref in section “&namrun” set the following variables:
    nn_itend    =  10
    nn_stock    =    10
    nn_write    =    10
  13. Copy the $base/nemo/NEMOGCM/CONFIG/SHARED/namelist_ref file to the $base/nemo/orca-exp directory
  14. Switch off using the IO server in the iodef.xml file (“using_server = false”)
  15. To build the KNL binaries, change the "-xCORE-" compiler flags to "-xMIC-AVX512", change $base to another directory, and do all of the steps again.

Running the GYRE Workload with the Intel Xeon Phi Processor

  1. Go to $base/nemo/gyre-exp
  2. Source the environment variables for the compiler and Intel MPI Library:
    source /opt/intel/compiler/latest/bin/compilervars.sh intel64
    source /opt/intel/impi/latest/bin/compilervars.sh intel64
  3. Add the libraries to LD_LIBRARY_PATH:
    export LD_LIBRARY_PATH=$base/libraries/lib/:$LD_LIBRARY_PATH
  4. Set additional variables for Intel MPI Library:
    export I_MPI_FABRICS=shm:tmi
    export I_MPI_PIN_CELL=core
  5. Run NEMO:
    mpiexec.hydra -genvall -f <hostfile> -n <number of ranks> -perhost <ppn> ./nemo.exe

Running the ORCA025 Workload with the Intel Xeon Phi Processor

  1. Go to $base/nemo/orca-exp
  2. Source environment variables for the compiler and Intel MPI Library:
    source /opt/intel/compiler/latest/bin/compilervars.sh intel64
    source /opt/intel/impi/latest/bin/compilervars.sh intel64
  3. Add libraries to LD_LIBRARY_PATH:
    export LD_LIBRARY_PATH=$base/libraries/lib/:$LD_LIBRARY_PATH
  4. Set additional variables for the Intel MPI Library:
    export I_MPI_FABRICS=shm:tmi
    export I_MPI_PIN_CELL=core
  5. Run NEMO:
    mpiexec.hydra -genvall -f <hostfile> -n <number of ranks> -perhost <ppn> ./nemo.exe
  6. If you are faced with hangs while the application is running, you can run NEMO with the XIOS server in detached mode:
    1. Copy xios_server.exe from $base/xios/bin to $base/nemo/orca-exp
    2. Edit iodef.xml file and set “using_server = true”
    3. mpiexec.hydra -genvall -f <hostfile> -n <number of ranks> -perhost <ppn> ./nemo.exe : -n 2 ./xios_server.exe

Configuring Test Systems

CPU

  • Intel® Xeon® processor system: Dual-socket Intel® Xeon® processor E5-2697 v4, 2.3 GHz (turbo OFF), 18 cores/socket, 36 cores, 72 threads (HT on)
  • Intel® Xeon Phi™ processor system: Intel® Xeon Phi™ processor 7250, 68 cores, 136 threads, 1400 MHz core freq. (turbo OFF), 1700 MHz uncore freq.

RAM

  • Intel® Xeon® processor system: 128 GB (8 x 16 GB) DDR4 2400 MHz DIMMs
  • Intel® Xeon Phi™ processor system: 96 GB (6 x 16 GB) DDR4 2400 MHz RDIMMs

Cluster File System

  • Both systems: Intel® Enterprise Edition for Lustre* software (Intel® EE for Lustre* software), SSD (136 TB storage)

Interconnect

  • Both systems: Intel® Omni-Path Architecture (Intel® OPA) Si 100 series

OS / Kernel / IB stack

  • Both systems: Oracle Linux* server release 7.2; kernel 3.10.0-229.20.1.el6.x86_64.knl2; OFED version 10.2.0.0.158_72

  • NEMO configuration: V3.6 r6939 with XIOS 1.0 r703, Intel® Parallel Studio XE 17.0.0.098, Intel MPI Library 2017 for Linux*
  • MPI configuration:
    • I_MPI_FABRICS=shm:tmi
    • I_MPI_PIN_CELL=core

Performance Results for the Intel Xeon Processor and Intel Xeon Phi Processor

    1. Time of second step for GYRE workload:

# nodes    Intel® Xeon® processor    Intel® Xeon Phi™ processor
1          6.546229                  3.642156
2          3.011352                  2.075075
4          1.326501                  0.997129
8          0.640632                  0.492369
16         0.321378                  0.284348

    2. Time of second step for ORCA workload:

# nodes    Intel® Xeon® processor    Intel® Xeon Phi™ processor
2          5.764083
4          2.642725                  2.156876
8          1.305238                  1.0546
16         0.67725                   0.643372

Intel® XDK FAQs - Cordova

$
0
0

How do I set app orientation?

You set the orientation under the Build Settings section of the Projects tab.

To control the orientation of an iPad you may need to create a simple plugin that contains a single plugin.xml file like the following:

<config-file target="*-Info.plist" parent="UISupportedInterfaceOrientations~ipad" overwrite="true"><string></string></config-file><config-file target="*-Info.plist" parent="UISupportedInterfaceOrientations~ipad" overwrite="true"><array><string>UIInterfaceOrientationPortrait</string></array></config-file> 

Then add the plugin as a local plugin using the plugin manager on the Projects tab.

HINT: to import the plugin.xml file you created above, you must select the folder that contains the plugin.xml file; you cannot select the plugin.xml file itself in the import dialog, because a typical plugin consists of many files, not a single plugin.xml. The plugin you created based on the instructions above requires only a single file, so it is an atypical plugin.

Alternatively, you can use this plugin: https://github.com/yoik/cordova-yoik-screenorientation. Import it as a third-party Cordova* plugin using the plugin manager with the following information:

  • cordova-plugin-screen-orientation
  • specify a version (e.g. 1.4.0) or leave blank for the "latest" version

Or, you can reference it directly from its GitHub repo.

To use the screen orientation plugin referenced above you must add some JavaScript code to your app to manipulate the additional JavaScript API that is provided by this plugin. Simply adding the plugin will not automatically fix your orientation, you must add some code to your app that takes care of this. See the plugin's GitHub repo for details on how to use that API.
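
For example, a minimal sketch of that JavaScript, assuming the plugin exposes the W3C-style screen.orientation.lock API used by recent plugin versions (older 1.x releases expose screen.lockOrientation instead); check the plugin's README for the exact API in the version you import:

document.addEventListener('deviceready', function () {
    // Lock the UI to portrait once Cordova is ready; use 'landscape' if preferred.
    if (screen.orientation && screen.orientation.lock) {
        screen.orientation.lock('portrait');      // newer plugin versions (W3C-style API)
    } else if (screen.lockOrientation) {
        screen.lockOrientation('portrait');       // older 1.x plugin versions
    }
}, false);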

Is it possible to create a background service using Intel XDK?

Background services require the use of specialized Cordova* plugins that need to be created specifically for your needs. Intel XDK does not support development or debug of plugins, only the use of them as "black boxes" with your HTML5 app. Background services can be accomplished using Java on Android or Objective C on iOS. If a plugin that backgrounds the functions required already exists (for example, this plugin for background geo tracking), Intel XDK's build system will work with it.

How do I send an email from my App?

You can use the Cordova* email plugin or use web intent - PhoneGap* and Cordova* 3.X.
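
As an illustration only, here is a minimal sketch that assumes the commonly used email-composer plugin, which exposes cordova.plugins.email.open; the exact API depends on the plugin you choose, so check its README:

document.addEventListener('deviceready', function () {
    // Open a pre-filled draft in the device's mail client (addresses and text are placeholders).
    cordova.plugins.email.open({
        to:      ['support@example.com'],
        subject: 'Feedback from my app',
        body:    'Hello from my Cordova app!'
    });
}, false);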

How do you create an offline application?

You can use the technique described here by creating an offline.appcache file and then setting it up to store the files that are needed to run the program offline. Note that offline applications need to be built using the Cordova* or Legacy Hybrid build options.

How do I work with alarms and timed notifications?

Unfortunately, alarms and notifications are advanced subjects that require a background service. This cannot be implemented in HTML5 and can only be done in native code by using a plugin. Background services require the use of specialized Cordova* plugins that need to be created specifically for your needs. Intel XDK does not support the development or debug of plugins, only the use of them as "black boxes" with your HTML5 app. Background services can be accomplished using Java on Android or Objective C on iOS. If a plugin that backgrounds the functions required already exists (for example, this plugin for background geo tracking) the Intel XDK's build system will work with it.

How do I get a reliable device ID?

You can use the Phonegap/Cordova* Unique Device ID (UUID) plugin for Android*, iOS* and Windows* Phone 8.
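
A minimal sketch, assuming the plugin exposes the window.plugins.uniqueDeviceID.get entry point (verify against the version you import):

document.addEventListener('deviceready', function () {
    window.plugins.uniqueDeviceID.get(
        function (uuid) { console.log('Device ID: ' + uuid); },        // success callback
        function (err)  { console.log('Could not read ID: ' + err); }  // error callback
    );
}, false);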

How do I implement In-App purchasing in my app?

There is a Cordova* plugin for this. A tutorial on its implementation can be found here. There is also a sample in Intel XDK called 'In App Purchase' which can be downloaded here.

How do I install custom fonts on devices?

Fonts can be considered an asset that is included with your app and not shared among other apps on the device, just like images and CSS files that are private to the app. It is possible to share some files between apps using, for example, the SD card space on an Android* device. If you include the font files as assets in your application then there is no download time to consider. They are part of your app and already exist on the device after installation.

How do I access the device's file storage?

You can use HTML5 local storage and this is a good article to get started with. Alternatively, there is a Cordova* file plugin for that.
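
For simple key/value data, HTML5 local storage is often enough; a minimal sketch:

// Persist a small settings object between app launches.
var settings = { theme: 'dark', volume: 0.8 };
localStorage.setItem('settings', JSON.stringify(settings));

// Later (for example, on the next launch), read it back.
var saved = JSON.parse(localStorage.getItem('settings') || '{}');
console.log('Restored theme: ' + saved.theme);

For larger or file-like data, use the Cordova* file plugin mentioned above.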

Why isn't AppMobi* push notification services working?

This seems to be an issue on AppMobi's end and can only be addressed by them. PushMobi is only available in the "legacy" container. AppMobi* has not developed a Cordova* plugin, so it cannot be used in the Cordova* build containers. Thus, it is not available with the default build system. We recommend that you consider using the Cordova* push notification plugin instead.

How do I configure an app to run as a service when it is closed?

If you want a service to run in the background you'll have to write a service, either by creating a custom plugin or writing a separate service using standard Android* development tools. The Cordova* system does not facilitate writing services.

How do I dynamically play videos in my app?

  1. Download the Javascript and CSS files from https://github.com/videojs and include them in your project file.
  2. Add references to them into your index.html file.
  3. Add a panel 'main1' that will be playing the video. This panel will be launched when the user clicks on the video in the main panel.

     
    <div class="panel" id="main1" data-appbuilder-object="panel" style=""><video id="example_video_1" class="video-js vjs-default-skin" controls="controls" preload="auto" width="200" poster="camera.png" data-setup="{}"><source src="JAIL.mp4" type="video/mp4"><p class="vjs-no-js">To view this video please enable JavaScript*, and consider upgrading to a web browser that <a href=http://videojs.com/html5-video-support/ target="_blank">supports HTML5 video</a></p></video><a onclick="runVid3()" href="#" class="button" data-appbuilder-object="button">Back</a></div>
  4. When the user clicks on the video, the click event sets the 'src' attribute of the video element to what the user wants to watch.

     
    function runVid2(){
          document.getElementsByTagName("video")[0].setAttribute("src","appdes.mp4");
          $.ui.loadContent("#main1",true,false,"pop");
    }
  5. The 'main1' panel opens waiting for the user to click the play button.

NOTE: The video does not play in the emulator and so you will have to test using a real device. The user also has to stop the video using the video controls. Clicking on the back button results in the video playing in the background.

How do I design my Cordova* built Android* app for tablets?

This page lists a set of guidelines to follow to make your app of tablet quality. If your app fulfills the criteria for tablet app quality, it can be featured in Google* Play's "Designed for tablets" section.

How do I resolve icon related issues with Cordova* CLI build system?

Ensure icon sizes are properly specified in the intelxdk.config.additions.xml. For example, if you are targeting iOS 6, you need to manually specify the icon sizes that iOS* 6 uses.

<icon platform="ios" src="images/ios/72x72.icon.png" width="72" height="72" /><icon platform="ios" src="images/ios/57x57.icon.png" width="57" height="57" />

These are not included by the build system by default, so you will have to include them in the additions file.

For more information on adding build options using intelxdk.config.additions.xml, visit: /en-us/html5/articles/adding-special-build-options-to-your-xdk-cordova-app-with-the-intelxdk-config-additions-xml-file

Is there a plugin I can use in my App to share content on social media?

Yes, you can use the PhoneGap Social Sharing plugin for Android*, iOS* and Windows* Phone.

Iframe does not load in my app. Is there an alternative?

Yes, you can use the inAppBrowser plugin instead.

Why are intel.xdk.istablet and intel.xdk.isphone not working?

Those properties are quite old and are based on the legacy AppMobi* system. An alternative is to detect the viewport size instead. You can get the user's screen size using the screen.width and screen.height properties (refer to this article for more information) and control the actual view of the webview by using the viewport meta tag (this page has several examples). You can also look through this forum thread for a detailed discussion on the same.
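
A minimal sketch of the viewport-size approach (the 600 px threshold is an assumption, not a standard; tune it for your layouts):

// Rough phone/tablet check based on the shorter screen dimension, in CSS pixels.
var shortSide = Math.min(screen.width, screen.height);
var isTablet  = shortSide >= 600;   // assumed cut-off, adjust as needed
console.log(isTablet ? 'Probably a tablet' : 'Probably a phone');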

How do I enable security in my app?

We recommend using the App Security API. App Security API is a collection of JavaScript API for Hybrid HTML5 application developers. It enables developers, even those who are not security experts, to take advantage of the security properties and capabilities supported by the platform. The API collection is available to developers in the form of a Cordova plugin (JavaScript API and middleware), supported on the following operating systems: Windows, Android & iOS.
For more details please visit: https://software.intel.com/en-us/app-security-api.

For enabling it, please select the App Security plugin on the plugins list of the Project tab and build your app as a Cordova Hybrid app. After adding the plugin, you can start using it simply by calling its API. For more details about how to get started with the App Security API plugin, please see the relevant sample app articles at: https://software.intel.com/en-us/xdk/article/my-private-photos-sample and https://software.intel.com/en-us/xdk/article/my-private-notes-sample.

Why does my build fail with Admob plugins? Is there an alternative?

Intel XDK does not support the library project that has been newly introduced in the com.google.playservices@21.0.0 plugin. Admob plugins are dependent on "com.google.playservices", which adds Google* play services jar to project. The "com.google.playservices@19.0.0" is a simple jar file that works quite well but the "com.google.playservices@21.0.0" is using a new feature to include a whole library project. It works if built locally with Cordova CLI, but fails when using Intel XDK.

To keep compatible with Intel XDK, the dependency of admob plugin should be changed to "com.google.playservices@19.0.0".

Why does the intel.xdk.camera plugin fail? Is there an alternative?

There seem to be some general issues with the camera plugin on iOS*. An alternative is to use the Cordova camera plugin instead and change the version to 0.3.3.

How do I resolve Geolocation issues with Cordova?

Give this app a try; it contains lots of useful comments and console log messages. However, use the Cordova 0.3.10 version of the geo plugin instead of the Intel XDK geo plugin. The Intel XDK buttons on the sample app will not work in a built app because the Intel XDK geo plugin is not included. However, they will partially work in the Emulator and Debug. If you test it on a real device, without the Intel XDK geo plugin selected, you should be able to see what is working and what is not on your device. There is a problem with the Intel XDK geo plugin: it cannot be used in the same build with the Cordova geo plugin. Do not use the Intel XDK geo plugin, as it will be discontinued.

Geo fine might not work because of the following reasons:

  1. Your device does not have a GPS chip
  2. It is taking a long time to get a GPS lock (if you are indoors)
  3. The GPS on your device has been disabled in the settings

Geo coarse is the safest bet to quickly get an initial reading. It will get a reading based on a variety of inputs; it is usually not as accurate as geo fine, but it is generally accurate enough to know what town you are located in and your approximate location in that town. Geo coarse will also prime the geo cache so there is something to read when you try to get a geo fine reading. Ensure your code can handle situations where you might not be getting any geo data, as there is no guarantee you'll be able to get a geo fine reading at all, or in a reasonable period of time. Success with geo fine is highly dependent on a lot of parameters that are typically outside of your control.
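
A minimal sketch of that coarse-then-fine strategy using the standard Cordova/W3C geolocation API:

function onPosition(pos) {
    console.log('lat=' + pos.coords.latitude + ', lon=' + pos.coords.longitude +
                ', accuracy=' + pos.coords.accuracy + 'm');
}
function onError(err) {
    console.log('geolocation error: ' + err.message);   // always handle the no-data case
}

// Quick low-accuracy ("coarse") reading first; cached values are acceptable.
navigator.geolocation.getCurrentPosition(onPosition, onError,
    { enableHighAccuracy: false, timeout: 10000, maximumAge: 60000 });

// Then request a high-accuracy ("fine") reading, which may take much longer or fail.
navigator.geolocation.getCurrentPosition(onPosition, onError,
    { enableHighAccuracy: true, timeout: 30000, maximumAge: 0 });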

Is there an equivalent Cordova* plugin for intel.xdk.player.playPodcast? If so, how can I use it?

Yes, there is; you can find the one that best fits the bill in the Cordova* plugin registry.

To make this work you will need to do the following:

  • Detect your platform (you can use uaparser.js or you can do it yourself by inspecting the user agent string, as in the sketch after this list)
  • Include the plugin only on the Android* platform and use <video> on iOS*.
  • Create conditional code to do what is appropriate for the platform detected
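
A minimal sketch of the platform-detection step (user-agent sniffing is an approximation; uaparser.js is more robust):

var ua        = navigator.userAgent;
var isAndroid = /Android/i.test(ua);
var isIOS     = /iPhone|iPad|iPod/i.test(ua);

if (isAndroid) {
    // Call the podcast/player plugin here (only included in the Android build).
    console.log('Android detected: use the native player plugin.');
} else if (isIOS) {
    // Fall back to a plain <video> element on iOS.
    console.log('iOS detected: use the <video> tag.');
}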

You can force a plugin to be part of an Android* build by adding it manually into the additions file. To see what the basic directives are to include a plugin manually:

  1. Include it using the "import plugin" dialog, perform a build and inspect the resulting intelxdk.config.android.xml file.
  2. Then remove it from your Project tab settings, copy the directive from that config file and paste it into the intelxdk.config.additions.xml file. Prefix that directive with <!-- +Android* -->.

More information is available here and this is what an additions file can look like:

<preference name="debuggable" value="true" /><preference name="StatusBarOverlaysWebView" value="false" /><preference name="StatusBarBackgroundColor" value="#000000" /><preference name="StatusBarStyle" value="lightcontent" /><!-- -iOS* --><intelxdk:plugin intelxdk:value="nl.nielsad.cordova.wifiscanner" /><!-- -Windows*8 --><intelxdk:plugin intelxdk:value="nl.nielsad.cordova.wifiscanner" /><!-- -Windows*8 --><intelxdk:plugin intelxdk:value="org.apache.cordova.statusbar" /><!-- -Windows*8 --><intelxdk:plugin intelxdk:value="https://github.com/EddyVerbruggen/Flashlight-PhoneGap-Plugin" />

This sample forces a plugin included with the "import plugin" dialog to be excluded from the platforms shown. You can include it only in the Android* platform by using conditional code and one or more appropriate plugins.

How do I display a webpage in my app without leaving my app?

The most effective way to do so is by using inAppBrowser.
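
A minimal sketch using the inAppBrowser plugin's open call (the URL is a placeholder):

document.addEventListener('deviceready', function () {
    // Open an external page in an in-app browser window instead of leaving the app.
    var ref = cordova.InAppBrowser.open('https://example.com', '_blank', 'location=yes');
    // Later, ref.close() returns the user to your app.
}, false);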

Does Cordova* media have callbacks in the emulator?

While Cordova* media objects have proper callbacks when using the debug tab on a device, the emulator doesn't report state changes back to the Media object. This functionality has not been implemented yet. Under emulation, the Media object is implemented by creating an <audio> tag in the program under test. The <audio> tag emits a bunch of events, and these could be captured and turned into status callbacks on the Media object.
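
For reference, a minimal Media usage sketch (the file path is a placeholder); the status callback fires on a real device but, as noted above, not in the emulator:

document.addEventListener('deviceready', function () {
    var clip = new Media('sounds/ping.mp3',                                   // placeholder path
        function ()       { console.log('playback finished'); },             // success
        function (err)    { console.log('media error: ' + JSON.stringify(err)); },
        function (status) { console.log('media status changed: ' + status); });
    clip.play();
}, false);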

Why does the Cordova version number not match the Projects tab's Build Settings CLI version number, the Emulate tab, App Preview and my built app?

This is due to the difficulty in keeping different components in sync and is compounded by the version numbering convention that the Cordova project uses to distinguish build tool versions (the CLI version) from platform versions (the Cordova target-specific framework version) and plugin versions.

The CLI version you specify in the Projects tab's Build Settings section is the "Cordova CLI" version that the build system uses to build your app. Each version of the Cordova CLI tools come with a set of "pinned" Cordova platform framework versions, which are tied to the target platform.

NOTE: the specific Cordova platform framework versions shown below are subject to change without notice.

Our Cordova CLI 4.1.2 build system was "pinned" to: 

  • cordova-android@3.6.4 (Android Cordova platform version 3.6.4)
  • cordova-ios@3.7.0 (iOS Cordova platform version 3.7.0)
  • cordova-windows@3.7.0 (Cordova Windows platform version 3.7.0)

Our Cordova CLI 5.1.1 build system is "pinned" to:

  • cordova-android@4.1.1 (as of March 23, 2016)
  • cordova-ios@3.8.0
  • cordova-windows@4.0.0

Our Cordova CLI 5.4.1 build system is "pinned" to: 

  • cordova-android@5.0.0
  • cordova-ios@4.0.1
  • cordova-windows@4.3.1

Our Cordova CLI 6.2.0 build system is "pinned" to: 

  • cordova-android@5.1.1
  • cordova-ios@4.1.1
  • cordova-windows@4.3.2

Our CLI 6.2.0 build system is nearly identical to a standard Cordova CLI 6.2.0 installation. A standard 6.2.0 installation differs slightly from our build system because it specifies the cordova-ios@4.1.0 and cordova-windows@4.3.1 platform versions. There are no differences in the cordova-android platform versions. 

Our CLI 5.4.1 build system really should be called "CLI 5.4.1+" because the platform versions it uses are closer to the "pinned" versions in the Cordova CLI 6.0.0 release than those "pinned" in the original CLI 5.4.1 release.

Our CLI 5.1.1 build system has been deprecated, as of August 2, 2016 and will be retired with an upcoming fall, 2016 release of the Intel XDK. It is highly recommended that you upgrade your apps to build with Cordova CLI 6.2.0, ASAP.

The Cordova platform framework version you get when you build an app does not equal the CLI version number in the Build Settings section of the Projects tab; it equals the Cordova platform framework version that is "pinned" to our build system's CLI version (see the list of pinned versions, above).

Technically, the target-specific Cordova platform frameworks can be updated [independently] for a given version of CLI tools. In some cases, our build system may use a Cordova platform version that is later than the version that was "pinned" to that version of the CLI when it was originally released by the Cordova project (that is, the Cordova platform versions originally specified by the Cordova CLI x.y.z links above).

You may see Cordova platform version differences in the Simulate tab, App Preview and your built app due to:

  • The Simulate tab uses one specific Cordova framework version. We try to make sure that the version of the Cordova platform it uses closely matches the current default Intel XDK version of Cordova CLI.

  • App Preview is released independently of the Intel XDK and, therefore, may use a different platform version than what you will see reported by the Simulate tab or your built app. Again, we try to release App Preview so it matches the version of the Cordova framework that is considered to be the default version for the Intel XDK at the time App Preview is released; but since the various tools are not always released in perfect sync, that is not always possible.

  • Your app is built with a "pinned" Cordova platform version, which is determined by the Cordova CLI version you specified in the Projects tab's Build Settings section. There are always at least two different CLI versions available in the Intel XDK build system.

  • For those versions of Crosswalk that were built with the Intel XDK CLI 4.1.2 build system, the cordova-android framework version was determined by the Crosswalk project, not by the Intel XDK build system.

  • When building an Android-Crosswalk app with Intel XDK CLI 5.1.1 and later, the cordova-android framework version equals the "pinned" cordova-android platform version for that CLI version (see lists above).

Do these Cordova platform framework version numbers matter? Occasionally, yes, but normally, not that much. There are some issues that come up that are related to the Cordova platform version, but they tend to be rare. The majority of the bugs and compatibility issues you will experience in your app have more to do with the versions and mix of Cordova plugins you choose to use and the HTML5 webview runtime on your test devices. See When is an HTML5 Web App a WebView App? for more details about what a webview is and how the webview affects your app.

The "default version" of CLI that the Intel XDK build system uses is rarely the most recent version of the Cordova CLI tools distributed by the Cordova project. There is always a lag between Cordova project releases and our ability to incorporate those releases into our build system and other Intel XDK components. In addition, we are not able to provide every CLI release that is made available by the Cordova project.

How do I add a third party plugin?

Please follow the instructions on this doc page to add a third-party plugin: Adding Plugins to Your Intel® XDK Cordova* App -- this plugin is not being included as part of your app. You will see it in the build log if it was successfully added to your build.

How do I make an AJAX call that works in my browser work in my app?

Please follow the instructions in this article: Cordova CLI 4.1.2 Domain Whitelisting with Intel XDK for AJAX and Launching External Apps.

I get an "intel is not defined" error, but my app works in Test tab, App Preview and Debug tab. What's wrong?

When your app runs in the Test tab, App Preview or the Debug tab the intel.xdk and core Cordova functions are automatically included for easy debug. That is, the plugins required to implement those APIs on a real device are already included in the corresponding debug modules.

When you build your app you must include the plugins that correspond to the APIs you are using in your build settings. This means you must enable the Cordova and/or XDK plugins that correspond to the APIs you are using. Go to the Projects tab and insure that the plugins you need are selected in your project's plugin settings. See Adding Plugins to Your Intel® XDK Cordova* App for additional details.

How do I target my app for use only on an iPad or only on an iPhone?

There is an undocumented feature in Cordova that should help you (the Cordova project provided this feature but failed to document it for the rest of the world). If you use the appropriate preference in the intelxdk.config.additions.xml file you should get what you need:

<preference name="target-device" value="tablet" />     <!-- Installs on iPad, not on iPhone --><preference name="target-device" value="handset" />    <!-- Installs on iPhone, iPad installs in a zoomed view and doesn't fill the entire screen --><preference name="target-device" value="universal" />  <!-- Installs on iPhone and iPad correctly -->

If you need info regarding the additions.xml file, see the blank template or this doc file: Adding Intel® XDK Cordova Build Options Using the Additions File.

Why does my build fail when I try to use the Cordova* Capture Plugin?

The Cordova* Capture plugin has a dependency on the File plugin. Please make sure you have both plugins selected on the Projects tab.

How can I pinch and zoom in my Cordova* app?

For now, using the viewport meta tag is the only option to enable pinch and zoom. However, its behavior is unpredictable in different webviews. Testing a few sample apps has led us to believe that this feature works better on Crosswalk for Android. You can test this by building the Hello Cordova sample app for Android and for Crosswalk for Android. Pinch and zoom will work only on the latter, even though they both have:

<meta name="viewport" content="width=device-width, initial-scale=1, user-scalable=yes, minimum-scale=1, maximum-scale=2">.

Please visit the following pages to get a better understanding of when to build with Crosswalk for Android:

http://blogs.intel.com/evangelists/2014/09/02/html5-web-app-webview-app/

https://software.intel.com/en-us/xdk/docs/why-use-crosswalk-for-android-builds

Another device oriented approach is to enable it by turning on Android accessibility gestures.

How do I make my Android application use the fullscreen so that the status and navigation bars disappear?

The Cordova* fullscreen plugin can be used to do this. For example, in your initialization code, include this function AndroidFullScreen.immersiveMode(null, null);.

You can get this third-party plugin from here https://github.com/mesmotronic/cordova-fullscreen-plugin
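
A minimal sketch of the initialization code described above, guarded so it only runs when the plugin is present:

document.addEventListener('deviceready', function () {
    // Hide the status and navigation bars once the plugin is available.
    if (window.AndroidFullScreen) {
        AndroidFullScreen.immersiveMode(null, null);
    }
}, false);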

How do I add XXHDPI and XXXHDPI icons to my Android or Crosswalk application?

The Cordova CLI 4.1.2 build system will support this feature, but our 4.1.2 build system (and the 2170 version of the Intel XDK) does not handle the XX and XXX sizes directly. Use this workaround until these sizes are supported directly:

  • copy your XX and XXX icons into your source directory (usually named www)
  • add the following lines to your intelxdk.config.additions.xml file
  • see this Cordova doc page for some more details

Assuming your icons and splash screen images are stored in the "pkg" directory inside your source directory (your source directory is usually named www), add lines similar to these into your intelxdk.config.additions.xml file (the precise name of your png files may be different than what is shown here):

<!-- for adding xxhdpi and xxxhdpi icons on Android --><icon platform="android" src="pkg/xxhdpi.png" density="xxhdpi" /><icon platform="android" src="pkg/xxxhdpi.png" density="xxxhdpi" /><splash platform="android" src="pkg/splash-port-xhdpi.png" density="port-xhdpi"/><splash platform="android" src="pkg/splash-land-xhdpi.png" density="land-xhdpi"/>

The precise names of your PNG files are not important, but the "density" designations are very important and, of course, the respective resolutions of your PNG files must be consistent with Android requirements. Those density parameters specify the respective "res-drawable-*dpi" directories that will be created in your APK for use by the Android system. NOTE: splash screen references have been added for reference, you do not need to use this technique for splash screens.

You can continue to insert the other icons into your app using the Intel XDK Projects tab.

Which plugin is the best to use with my app?

We are not able to track all the plugins out there, so we generally cannot give you a "this is better than that" evaluation of plugins. Check the Cordova plugin registry to see which plugins are most popular and check Stack Overflow to see which are best supported; also, check the individual plugin repos to see how well the plugin is supported and how frequently it is updated. Since the Cordova platform and the mobile platforms continue to evolve, those that are well-supported are likely to be those that have good activity in their repo.

Keep in mind that the XDK builds Cordova apps, so whichever plugins you find being supported and working best with other Cordova (or PhoneGap) apps would likely be your "best" choice.

See Adding Plugins to Your Intel® XDK Cordova* App for instructions on how to include third-party plugins with your app.

What are the rules for my App ID?

The precise App ID naming rules vary as a function of the target platform (eg., Android, iOS, Windows, etc.). Unfortunately, the App ID naming rules are further restricted by the Apache Cordova project and sometimes change with updates to the Cordova project. The Cordova project is the underlying technology that your Intel XDK app is based upon; when you build an Intel XDK app you are building an Apache Cordova app.

CLI 5.1.1 has more restrictive App ID requirements than previous versions of Apache Cordova (the CLI version refers to Apache Cordova CLI release versions). In this case, the Apache Cordova project decided to set limits on acceptable App IDs to equal the minimum set for all platforms. We hope to eliminate this restriction in a future release of the build system, but for now (as of the 2496 release of the Intel XDK), the current requirements for CLI 5.1.1 are:

  • Each section of the App ID must start with a letter
  • Each section can only consist of letters, numbers, and the underscore character
  • Each section cannot be a Java keyword
  • The App ID must consist of at least 2 sections (each section separated by a period ".").

 

iOS /usr/bin/codesign error: certificate issue for iOS app?

If you are getting an iOS build fail message in your detailed build log that includes a reference to a signing identity error you probably have a bad or inconsistent provisioning file. The "no identity found" message in the build log excerpt, below, means that the provisioning profile does not match the distribution certificate that was uploaded with your application during the build phase.

Signing Identity:     "iPhone Distribution: XXXXXXXXXX LTD (Z2xxxxxx45)"
Provisioning Profile: "MyProvisioningFile"
                      (b5xxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxe1)

    /usr/bin/codesign --force --sign 9AxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxA6 --resource-rules=.../MyApp/platforms/ios/build/device/MyApp.app/ResourceRules.plist --entitlements .../MyApp/platforms/ios/build/MyApp.build/Release-iphoneos/MyApp.build/MyApp.app.xcent .../MyApp/platforms/ios/build/device/MyApp.app
9AxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxA6: no identity found
Command /usr/bin/codesign failed with exit code 1

** BUILD FAILED **


The following build commands failed:
    CodeSign build/device/MyApp.app
(1 failure)

The excerpt shown above will appear near the very end of the detailed build log. The unique number patterns in this example have been replaced with "xxxx" strings for security reasons. Your actual build log will contain hexadecimal strings.

iOS Code Sign error: bundle ID does not match app ID?

If you are getting an iOS build fail message in your detailed build log that includes a reference to a "Code Sign error" you may have a bad or inconsistent provisioning file. The "Code Sign" message in the build log excerpt, below, means that the bundle ID you specified in your Apple provisioning profile does not match the app ID you provided to the Intel XDK to upload with your application during the build phase.

Code Sign error: Provisioning profile does not match bundle identifier: The provisioning profile specified in your build settings (MyBuildSettings) has an AppID of my.app.id which does not match your bundle identifier my.bundleidentifier.
CodeSign error: code signing is required for product type 'Application' in SDK 'iOS 8.0'

** BUILD FAILED **

The following build commands failed:
    Check dependencies
(1 failure)
Error code 65 for command: xcodebuild with args: -xcconfig,...

The message above translates into "the bundle ID you entered in the project settings of the XDK does not match the bundle ID (app ID) that you created on Apples developer portal and then used to create a provisioning profile."

iOS build error?

If your iOS build is failing with Error code 65 with Xcodebuild in the error log, most likely there are issues with your certificate and provisioning profile. Sometimes Xcode gives specific errors such as “Provisioning profile does not match bundle identifier” and other times something like "Code Sign error: No codesigning identities found: No code signing identities". The root of these issues comes from not providing the correct certificate (P12 file) and/or provisioning profile, or a mismatch between the P12 and the provisioning profile. You have to make sure your P12 and provisioning profile are correct. The provisioning profile has to be generated using the certificate you used to create the P12 file. Also, the app ID you provide in the XDK build settings has to match the app ID created on the Apple Developer portal, and the same app ID has to be used when creating the provisioning profile. 

Please follow these steps to generate the P12 file.

  1. Create a .csr file from Intel XDK (do not close the dialog box to upload .cer file)
  2. Click on the link Apple Developer Portal from the dialog box (do not close the dialog box in XDK)
  3. Upload .csr on Apple Developer Portal
  4. Generate certificate on Apple developer portal
  5. Download .cer file from the Developer portal
  6. Come back to XDK dialog box where you left off from step 1, press Next. Select .cer file that you got from step 5 and generate .P12 file
  7. Create an appID on Apple Developer Portal
  8. Generate a Provisioning Profile on Apple Developer Portal using the certificate you generated in step 4 and appID created in step 7
  9. Provide the same appID (step 7), P12 (step 6) and Provisioning profile (step 8) in Intel XDK Build Settings 

A few things to check before you build:

  1.  Make sure your certificate has not expired
  2. The appID you created on Apple developer portal matches with the appID you provided in the XDK build settings
  3. You are using a provisioning profile that is associated with the certificate you are using to build the app
  4. Apple allows only 3 active certificates; if you need to create a new one, revoke one of the older certificates and create a new one.

This App Certificate Management video shows how to create a P12 and a provisioning profile; the P12 creation part is at 16:45 min. Please follow the process for creating a P12 and generating a provisioning profile as shown in the video, or follow this Certificate Management document.

What are plugin variables used for? Why do I need to supply plugin variables?

Some plugins require details that are specific to your app or your developer account; for example, to authorize your app as one that belongs to you, the developer, so that services can be properly routed to the service provider. The precise reasons depend on the specific plugin and its function.

What happened to the Intel XDK "legacy" build options?

On December 14, 2015 the Intel XDK legacy build options were retired and are no longer available to build apps. The legacy build option is based on three year old technology that predates the current Cordova project. All Intel XDK development efforts for the past two years have been directed at building standard Apache Cordova apps.

Many of the intel.xdk legacy APIs that were supported by the legacy build options have been migrated to standard Apache Cordova plugins and published as open source plugins. The API details for these plugins are available in the README.md files in the respective 01.org GitHub repos. Additional details regarding the new Cordova implementations of the intel.xdk legacy APIs are available in the doc page titled Intel XDK Legacy APIs.

Standard Cordova builds do not require the use of the "intelxdk.js" and "xhr.js" phantom scripts. Only the "cordova.js" phantom script is required to successfully build Cordova apps. If you have been including "intelxdk.js" and "xhr.js" in your Cordova builds they have been quietly ignored. You should remove references to these files from your "index.html" file; leaving them in will do no harm, it simply results in a warning that the respective script file cannot be found at runtime.

The Emulate tab will continue to support some legacy intel.xdk APIs that are NOT supported in the Cordova builds (only those intel.xdk APIs that are supported by the open source plugins are available to a Cordova built app, and only if you have included the respective intel.xdk plugins). This Emulate tab discrepancy will be addressed in a future release of the Intel XDK.

More information can be found in this forum post > https://software.intel.com/en-us/forums/intel-xdk/topic/601436.

Which build files do I submit to the Windows Store and which do I use for testing my app on a device?

There are two things you can do with the build files generated by the Intel XDK Windows build options: side-load your app onto a real device (for testing) or publish your app in the Windows Store (for distribution). Microsoft has changed the files you use for these purposes with each release of a new platform. As of December, 2015, the packages you might see in a build, and their uses, are:

  • appx works best for side-loading, and can also be used to publish your app.
  • appxupload is preferred for publishing your app, it will not work for side-loading.
  • appxbundle will work for both publishing and side-loading, but is not preferred.
  • xap is for legacy Windows Phone; works for both publishing and side-loading.

In essence: XAP (WP7) was superseded by APPXBUNDLE (Win8 and WP8.0), which was superseded by APPX (Win8/WP8.1/UAP), which has been supplemented with APPXUPLOAD. APPX and APPXUPLOAD are the preferred formats. For more information regarding these file formats, see Upload app packages on the Microsoft developer site.

Side-loading a Windows Phone app onto a real device, over USB, requires a Windows 8+ development system (see Side-Loading Windows* Phone Apps for complete instructions). If you do not have a physical Windows development machine you can use a virtual Windows machine or use the Window Store Beta testing and targeted distribution technique to get your app onto real test devices.

Side-loading a Windows tablet app onto a Windows 8 or Windows 10 laptop or tablet is simpler. Extract the contents of the ZIP file that you downloaded from the Intel XDK build system, open the "*_Test" folder inside the extracted folder, and run the PowerShell script (ps1 file) contained within that folder on the test machine (the machine that will run your app). The ps1 script file may need to request a "developer certificate" from Microsoft before it will install your test app onto your Windows test system, so your test machine may require a network connection to successfully side-load your Windows app.

The side-loading process may not over-write an existing side-loaded app with the same ID. To be sure your test app properly side-loads, it is best to uninstall the old version of your app before side-loading a new version on your test system.

How do I implement local storage or SQL in my app?

See this summary of local storage options for Cordova apps written by Josh Morony, A Summary of Local Storage Options for PhoneGap Applications.

How do I prevent my app from auto-completing passwords?

Use the Ionic Keyboard plugin and set the spellcheck attribute to false.

Why does my PHP script not run in my Intel XDK Cordova app?

Your XDK app is not a page on a web server; you cannot use dynamic web server techniques because there is no web server associated with your app to which you can pass off PHP scripts and similar actions. When you build an Intel XDK app you are building a standalone Cordova client web app, not a dynamic server web app. You need to create a RESTful API on your server that you can then call from your client (the Intel XDK Cordova app) and pass and return data between the client and server through that RESTful API (usually in the form of a JSON payload).

Please see this StackOverflow post and this article by Ray Camden, a longtime developer of the Cordova development environment and Cordova apps, for some useful background.
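
As a sketch of the client side of that pattern (the endpoint URL and JSON shape are placeholders):

document.addEventListener('deviceready', function () {
    var xhr = new XMLHttpRequest();
    xhr.open('GET', 'https://api.example.com/items?user=42');   // placeholder REST endpoint
    xhr.onload = function () {
        if (xhr.status === 200) {
            var items = JSON.parse(xhr.responseText);            // JSON payload from your server
            console.log('received ' + items.length + ' items');
        } else {
            console.log('server returned HTTP ' + xhr.status);
        }
    };
    xhr.onerror = function () { console.log('network error'); };
    xhr.send();
}, false);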

Following is a lightly edited recommendation from an Intel XDK user:

I came from php+mysql web development. My first attempt at an Intel XDK Cordova app was to create a set of php files to query the database and give me the JSON. It was a simple job, but totally insecure.

Then I found dreamfactory.com, an open source software that automatically creates the REST API functions from several databases, SQL and NoSQL. I use it a lot. You can start with a free account to develop and test and then install it in your server. Another possibility is phprestsql.sourceforge.net, this is a library that does what I tried to develop by myself. I did not try it, but perhaps it will help you.

And finally, I'm using PouchDB and CouchDB, "a database for the web." It is not SQL, but it is very useful and easy if you need to develop a mobile app with only a few tables. It will also work with a lot of tables, but for a simple database it is an easy place to start.

I strongly recommend that you start to learn these new ways to interact with databases; you will need to invest some time, but it is the way to go. Do not try to use MySQL and PHP the old-fashioned way; you can get it to work, but at some point you may get stuck.

Why doesn’t my Cocos2D game work on iOS?

This is an issue with Cocos2D and is not a reflection of our build system. As an interim solution, we have modified the CCBoot.js file for compatibility with iOS and App Preview. You can view an example of this modification in this CCBoot.js file from the Cocos2d-js 3.1 Scene GUI sample. The update has been applied to all cocos2D templates and samples that ship with Intel XDK. 

The fix involves two lines changes (for generic cocos2D fix) and one additional line (for it to work on App Preview on iOS devices):

Generic cocos2D fix -

1. Inside the loadTxt function, xhr.onload should be defined as

xhr.onload = function () {
    if(xhr.readyState == 4)
        xhr.responseText != "" ? cb(null, xhr.responseText) : cb(errInfo);
    };

instead of

xhr.onload = function () {
    if(xhr.readyState == 4)
        xhr.status == 200 ? cb(null, xhr.responseText) : cb(errInfo);
    };

2. The condition inside _loadTxtSync function should be changed to 

if (!xhr.readyState == 4 || (xhr.status != 200 || xhr.responseText != "")) {

instead of 

if (!xhr.readyState == 4 || xhr.status != 200) {

 

App Preview fix -

Add this line inside of _loadTxtSync after xhr.open:

xhr.setRequestHeader("iap_isSyncXHR", "true");

How do I change the alias of my Intel XDK Android keystore certificate?

You cannot change the alias name of your Android keystore within the Intel XDK, but you can download the existing keystore, change the alias on that keystore and upload a new copy of the same keystore with a new alias.

Use the following procedure:

  • Download the converted legacy keystore from the Intel XDK (the one with the bad alias).

  • Locate the keytool app on your system (this assumes that you have a Java runtime installed on your system). On Windows, this is likely to be located at %ProgramFiles%\Java\jre8\bin (you might have to adjust the value of jre8 in the path to match the version of Java installed on your system). On Mac and Linux systems it is probably located in your path (in /usr/bin).

  • Change the alias of the keystore using this command (see the keytool -changealias -help command for additional details):

keytool -changealias -alias "existing-alias" -destalias "new-alias" -keypass keypass -keystore /path/to/keystore -storepass storepass
  • Import this new keystore into the Intel XDK using the "Import Existing Keystore" option in the "Developer Certificates" section of the "person icon" located in the upper right corner of the Intel XDK.

What causes "The connection to the server was unsuccessful. (file:///android_asset/www/index.html)" error?

See this forum thread for some help with this issue. This error is most likely due to errors retrieving assets over the network or long delays associated with retrieving those assets.

How do I manually sign my Android or Crosswalk APK file with the Intel XDK?

To sign an app manually, you must build your app by "deselecting" the "Signed" box in the Build Settings section of the Android tab on the Projects tab:

Follow these Android developer instructions to manually sign your app. The instructions assume you have Java installed on your system (for the jarsigner and keytool utilities). You may have to locate and install the zipalign tool separately (it is not part of Java) or download and install Android Studio.

These two sections of the Android developer Signing Your Applications article are also worth reading:

Why should I avoid using the additions.xml file? Why should I use the Plugin Management Tool in the Intel XDK?

Intel XDK (2496 and up) now includes a Plugin Management Tool that simplifies adding and managing Cordova plugins. We urge all users to manage their plugins from existing or upgraded projects using this tool. If you were using intelxdk.config.additions.xml file to manage plugins in the past, you should remove them and use the Plugin Management Tool to add all plugins instead.

Why you should be using the Plugin Management Tool:

  • It can now manage plugins from all sources. Popular plugins have been added to the Featured plugins list. Third party plugins can be added from the Cordova Plugin Registry, Git Repo and your file system.

  • Consistency: Unlike previous versions of the Intel XDK, plugins you add are now stored as a part of your project on your development system after they are retrieved by the Intel XDK and copied to your plugins directory. These plugin files are delivered, along with your source code files, to the Intel XDK cloud-based build server. This change ensures greater consistency between builds, because you always build with the plugin version that was retrieved by the Intel XDK into your project. It also provides better documentation of the components that make up your Cordova app, because the plugins are now part of your project directory. This is also more consistent with the way a standard Cordova CLI project works.

  • Convenience: In the past, the only way to add a third party plugin that required parameters was to include it in the intelxdk.config.additions.xml file. This plugin would then be added to your project by the build system. This is no longer recommended. With the new Plugin Management Tool, it automatically parses the plugin.xml file and prompts to add any plugin variables from within the XDK.

    When a plugin is added via the Plugin Management Tool, a plugin entry is added to the project file and the plugin source is downloaded to the plugins directory making a more stable project. After a build, the build system automatically generates config xml files in your project directory that includes a complete summary of plugins and variable values.

  • Correctness of Debug Module: Intel XDK now provides remote on-device debugging for projects with third party plugins by building a custom debug module from your project plugins directory. It does not write or read from the intelxdk.config.additions.xml and the only time this file is used is during a build. This means the debug module is not aware of your plugin added via the intelxdk.config.additions.xml file and so adding plugins via intelxdk.config.additions.xml file should be avoided. Here is a useful article for understanding Intel XDK Build Files.

  • Editing Plugin Sources: There are a few cases where you may want to modify plugin code to fix a bug in a plugin, or add console.log messages to a plugin's sources to help debug your application's interaction with the plugin. To accomplish these goals you can edit the plugin sources in the plugins directory. Your modifications will be uploaded along with your app sources when you build your app using the Intel XDK build server and when a custom debug module is created by the Debug tab.

How do I fix this "unknown error: cannot find plugin.xml" when I try to remove or change a plugin?

Removing or changing a plugin in your project sometimes generates an "unknown error: cannot find plugin.xml" message.

This is not a common problem, but if it does happen it means a file in your plugin directory is probably corrupt (usually one of the json files found inside the plugins folder at the root of your project folder).

The simplest fix is to:

  • make a list of ALL of your plugins (esp. the plugin ID and version number, see image below)
  • exit the Intel XDK
  • delete the entire plugins directory inside your project
  • restart the Intel XDK

The XDK should detect that all of your plugins are missing and attempt to reinstall them. If it does not automatically re-install all or some of your plugins, then reinstall them manually from the list you saved in step one (see the image below for the important data that documents your plugins).

NOTE: if you re-install your plugins manually, you can use the third-party plugin add feature of the plugin management system to specify the plugin id to get your plugins from the Cordova plugin registry. If you leave the version number blank the latest version of the plugin that is available in the registry will be retrieved by the Intel XDK.

Why do I get a "build failed: the plugin contains gradle scripts" error message?

You will see this error message in your Android build log summary whenever you include a Cordova plugin that includes a gradle script in your project. Gradle scripts add extra Android build instructions that are needed by the plugin.

The current Intel XDK build system does not allow the use of plugins that contain gradle scripts because they present a security risk to the build system and your Intel XDK account. An unscrupulous user could use a gradle-enabled plugin to do harmful things with the build server. We are working on a build system that will insure the necessary level of security to allow for gradle scripts in plugins, but until that time, we cannot support those plugins that include gradle scripts.

The error message in your build summary log will look like the following:

In some cases the plugin's gradle script can be removed, but only if you manually modify the plugin to implement whatever the gradle script was doing automatically. Sometimes this is easy (for example, the gradle script may simply be building a JAR library file for the plugin), but sometimes the plugin cannot easily be modified to remove the need for the gradle script. Exactly what needs to be done depends on the plugin and the gradle script.

You can find out more about Cordova plugins and gradle scripts by reading this section of the Cordova documentation. In essence, if a Cordova plugin includes a build-extras.gradle file in the plugin's root folder, or if it contains one or more lines similar to the following, inside the plugin.xml file:

<framework src="some.gradle" custom="true" type="gradleReference" />

it means that the plugin contains gradle scripts and will be rejected by the Intel XDK build system.

How does one remove gradle dependencies for plugins that use Google Play Services (esp. push plugins)?

Our Android (and Crosswalk) CLI 5.1.1 and CLI 5.4.1 build systems include a fix for an issue in the standard Cordova build system that allows some Cordova plugins to be used with the Intel XDK build system without their included gradle script!

This fix only works with those Cordova plugins that include a gradle script for one and only one purpose: to set the value of applicationID in the Android build project files (such a gradle script copies the value of the App ID from your project's Build Settings, on the Projects tab, to this special project build variable).

Using the phonegap-plugin-push as an example, this Cordova plugin contains a gradle script named push.gradle, that has been added to the plugin and looks like this:

import java.util.regex.Pattern

def doExtractStringFromManifest(name) {
    def manifestFile = file(android.sourceSets.main.manifest.srcFile)
    def pattern = Pattern.compile(name + "=\"(.*?)\"")
    def matcher = pattern.matcher(manifestFile.getText())
    matcher.find()
    return matcher.group(1)
}

android {
    sourceSets {
        main {
            manifest.srcFile 'AndroidManifest.xml'
        }
    }

    defaultConfig {
        applicationId = doExtractStringFromManifest("package")
    }
}

All this gradle script does is insert your app's "package ID" (the "App ID" in your app's Build Settings) into a variable called applicationID for use by the build system. It is needed, in this example, by the Google Play Services library to ensure that calls through the Google Play Services API can be matched to your app. Without the proper App ID, the Google Play Services library cannot distinguish between the multiple apps on an end user's device that use that library.

The phonegap-plugin-push is used here only as an example. Other Cordova plugins can be handled the same way (e.g., the pushwoosh-phonegap-plugin also works with this technique). It is important that you first confirm that the plugin of interest uses only one gradle script and that this script serves only one purpose: to set the applicationID variable.

How does this help you and what do you do?

To use a plugin with the Intel XDK build system that includes a single gradle script designed to set the applicationID variable:

  • Download a ZIP of the plugin version you want to use from that plugin's git repo.

    IMPORTANT: be sure to download a released version of the plugin; the "head" of the git repo may be "under construction." Some plugin authors make it easy to identify a specific version, some do not, so be aware and choose carefully when you clone a git repo!

  • Unzip that plugin onto your local hard drive.

  • Remove the <framework> line that references the gradle script from the plugin.xml file (a sketch of this edit appears after this list).

  • Add the modified plugin into your project as a "local" plugin (see the image below).
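
For reference, here is a minimal sketch of what the edit in the "Remove the <framework> line" step might look like inside the plugin's plugin.xml file. The push.gradle name is taken from the phonegap-plugin-push example above; the exact attributes and file name may differ for other plugins:

<platform name="android">
    <!-- remove (or comment out) the gradle reference so the Intel XDK build system will accept the plugin -->
    <!-- <framework src="push.gradle" custom="true" type="gradleReference" /> -->

    <!-- leave the rest of the plugin's Android configuration untouched -->
</platform>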

In this example, you will be prompted to define a variable that the plugin also needs. If you know that variable's name (it's called SENDER_ID for this plugin), you can add it using the "+" icon in the image above, and avoid the prompt. If the plugin add was successful, you'll find something like this in the Projects tab:

If you are curious, you can inspect the AndroidManifest.xml file that is included inside your built APK file (you'll have to use a tool like apktool to extract and reconstruct it from your APK file). You should see something like the following highlighted line, which should match your App ID; in this example, the App ID was io.cordova.hellocordova:

If you see the following App ID, it means something went wrong. This is the default App ID for the Google Play Services library that will cause collisions on end-user devices when multiple apps that are using Google Play Services use this same default App ID:

There is no Entitlements.plist file, how do I add Universal Links to my iOS app?

The Intel XDK project does not provide access to an Entitlements.plist file. If you are using Cordova CLI locally you would have the ability to add such a file into the CLI platform build directories located in the CLI project folder. Because the Intel XDK build system is cloud-based, your Intel XDK project folders do not include these build directories.

A workaround has been identified by an Intel XDK customer (Keith T.) and is detailed in this forum post.

Why do I get a "signed with different certificate" error when I update my Android app in the Google Play Store?

If you submitted an app to the Google Play Store using a version of the Intel XDK prior to version 3088 (prior to March of 2016), you need to use your "converted legacy" certificate when you build your app in order for the Google Play Store to accept an update to your app. The error message you receive will look something like the following:

When using version 3088 (or later) of the Intel XDK, you are given the option to convert your existing Android certificate, that was automatically created for your Android builds with an older version of the Intel XDK, into a certificate for use with the new version of the Intel XDK. This conversion process is a one-time event. After you've successfully converted your "legacy Android certificate" you will never have to do this again.

Please see the following links for more details.

How do I add [image, audio, etc.] resources to the platform section of my Cordova project with the Intel XDK?

See this forum thread for a specific example, which is summarized below.

If you are using a Cordova plugin that suggests that you "add a file to the resource directory" or "make a modification to the manifest file or plist file" you may need to add a small custom plugin to your application. This is because the Cordova project that builds and packages your app is located in the Intel XDK cloud-based build system. Your development system contains only a partial, prototype Cordova project. The real Cordova project is created on demand, when you build your application with the Intel XDK build system. Parts of your local prototype Cordova project are sent to the cloud to build your application: your source files (normally located in the "www" folder), your plugins folder, your build configuration files, your provisioning files, and your icon and splash screen files (located in the package-assets folder). Any other folders and files located in your project folder are strictly used for local simulation and testing tasks and are not used by the cloud-based build system.

To modify a manifest or plist file, see this FAQ. To create a local plugin that you can use to add resources to your Cordova cloud-based project, see the following instructions and the forum post mentioned at the beginning of this FAQ.

  • Create a folder to hold your special plugin, either in the root of your project or outside of your project. Do NOT create this folder inside the plugins folder of your project; that is a destination folder, not a source folder.

  • Create a plugin.xml file in the root of your special plugin. This file will contain the instructions that are needed to add the resources needed for the build system.

  • Add the resources to the appropriate location in your new plugin folder.

These Cordova plugin.xml instructions may also be helpful.
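
For illustration, here is a minimal sketch of such a resource plugin, assuming a hypothetical sound file named beep.wav that you want copied into the Android res/raw directory (the plugin ID, file names and target path are placeholders; the exact target path can vary with the cordova-android version used by the build system):

<?xml version="1.0" encoding="UTF-8"?>
<plugin xmlns="http://apache.org/cordova/ns/plugins/1.0" id="my-resource-plugin" version="1.0.0">
    <name>My Resource Plugin</name>
    <description>Copy extra resource files into the cloud-built Cordova project.</description>
    <license>MIT</license>
    <!-- android: copy the file from this plugin's folder into the Android res/raw directory at build time -->
    <platform name="android">
        <resource-file src="res/android/beep.wav" target="res/raw/beep.wav" />
    </platform>
</plugin>

Place the beep.wav file at res/android/beep.wav inside the plugin folder so the src path above resolves correctly, then add the plugin to your project as a "local" plugin.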

Why is my APK so big when building my Crosswalk app with the Cordova Build Package option?

This is due to the following line in the exported config.xml file:

<preference name="xwalkMultipleApk" value="false" />

This line causes the build to generate a "multiple architecture" Crosswalk APK file, meaning that both the x86 and the ARM Crosswalk libraries are included in your APK file. This is done because PhoneGap Build will only return a single APK file when it performs a build, unlike the XDK cloud build system, which returns a ZIP file that contains two APK files (one for each architecture).
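
For comparison, this is what the preference looks like when it is set to produce separate per-architecture APKs (same preference name as in the exported config.xml shown above):

<preference name="xwalkMultipleApk" value="true" />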

Removing the xwalkMultipleApk preference from the config.xml (or changing its value to "true", as shown above) will generate two APK files, similar to what the XDK build system does (except they won't be bundled in a ZIP file). Unfortunately, this is not obvious when you look at the messages produced by the build system:

BUILD SUCCESSFUL

Total time: 39.289 secs
Built the following apk(s):
	/Users/username/Downloads/tmp/platforms/android/build/outputs/apk/android-debug.apk

But if you "cd" to this folder: "platforms/android/build/outputs/apk/" (within your build project) you will see something like the following:

$ cd platforms/android/build/outputs/apk/
$ ls -al
total 395904
-rw-r--r--  1 username  staff    24M Mar 27 10:09 android-armv7-debug-unaligned.apk
-rw-r--r--  1 username  staff    24M Mar 27 10:09 android-armv7-debug.apk
-rw-r--r--  1 username  staff    46M Mar 27 10:06 android-debug-unaligned.apk
-rw-r--r--  1 username  staff    46M Mar 27 10:06 android-debug.apk
-rw-r--r--  1 username  staff    27M Mar 27 10:10 android-x86-debug-unaligned.apk
-rw-r--r--  1 username  staff    27M Mar 27 10:10 android-x86-debug.apk

In this example, three APK images were created (ignoring the "unaligned" files): two with a single-architecture lib folder and one with a multi-architecture lib folder (the exact results of your build may vary as a function of the Cordova CLI version, the cordova-android framework version and the Crosswalk library version):

$ unzip -l android-debug.apk | grep " lib/"
 36566392  03-27-2017 10:06   lib/armeabi-v7a/libxwalkcore.so
     5192  03-27-2017 10:06   lib/armeabi-v7a/libxwalkdummy.so
 54864996  03-27-2017 10:06   lib/x86/libxwalkcore.so
     5188  03-27-2017 10:06   lib/x86/libxwalkdummy.so

$ unzip -l android-armv7-debug.apk | grep " lib/"
 36566392  03-27-2017 10:09   lib/armeabi-v7a/libxwalkcore.so
     5192  03-27-2017 10:09   lib/armeabi-v7a/libxwalkdummy.so

$ unzip -l android-x86-debug.apk | grep " lib/"
 54864996  03-27-2017 10:09   lib/x86/libxwalkcore.so
     5188  03-27-2017 10:09   lib/x86/libxwalkdummy.so


Intel® XDK FAQs - General


How can I get started with Intel XDK?

There are plenty of videos and articles that you can go through here to get started. You could also start with some of our demo apps. It may also help to read Five Useful Tips on Getting Started Building Cordova Mobile Apps with the Intel XDK, which will help you understand some of the differences between developing for a traditional server-based environment and developing for the Intel XDK hybrid Cordova app environment.

Having prior understanding of how to program using HTML, CSS and JavaScript* is crucial to using the Intel XDK. The Intel XDK is primarily a tool for visualizing, debugging and building an app package for distribution.

You can do the following to access our demo apps:

  • Select Project tab
  • Select "Start a New Project"
  • Select "Samples and Demos"
  • Create a new project from a demo

If you have specific questions following that, please post them to our forums.

Do I need to use the Intel XDK to complete the HTML5 from W3C Xseries?

It is not required that you use the Intel XDK to complete the HTML5 from W3C Xseries course. There is nothing in the course that requires the Intel XDK. 

All that is needed to complete the course is the free Brackets HTML5 editor. Whenever the course refers to using the "Live Layout" feature of the Intel XDK, use the "Live Preview" feature in Brackets, instead. The Intel XDK "Live Layout" feature is directly derived from, and is nearly identical to, the Brackets "Live Preview" feature.

For additional help, see this Intel XDK forum post and this Intel XDK forum thread.

How do I convert my web app or web site into a mobile app?

The Intel XDK creates Cordova mobile apps (aka PhoneGap apps). Cordova web apps are driven by HTML5 code (HTML, CSS and JavaScript). There is no web server on the mobile device to "serve" the HTML pages in your Cordova web app; the main program resources required by your Cordova web app are file-based, meaning all of your web app resources are located within the mobile app package and reside on the mobile device. Your app may also require resources from a server. In that case, you will need to connect with that server using AJAX or similar techniques, usually via a collection of RESTful APIs provided by that server. However, your app is not integrated into that server; the two entities are independent and separate.

Many web developers believe they should be able to include PHP or Java code or other "server-based" code as an integral part of their Cordova app, just as they do in a "dynamic web app." This technique does not work in a Cordova web app, because your app does not reside on a server and there is no "backend"; your Cordova web app is a "front-end" HTML5 web app that runs independently of any servers. See the following articles for more information on how to move from writing "multi-page dynamic web apps" to "single-page Cordova web apps":

Can I use an external editor for development in Intel® XDK?

Yes, you can open your files and edit them in your favorite editor. However, note that you must use Brackets* to use the "Live Layout Editing" feature. Also, if you are using App Designer (the UI layout tool in Intel XDK) it will make many automatic changes to your index.html file, so it is best not to edit that file externally at the same time you have App Designer open.

Some popular editors among our users include:

  • Sublime Text* (Refer to this article for information on the Intel XDK plugin for Sublime Text*)
  • Notepad++* for a lightweight editor
  • Jetbrains* editors (Webstorm*)
  • Vim* the editor

How do I get code refactoring capability in Brackets* (the Intel XDK code editor)?

...to be written...

Why doesn’t my app show up in Google* play for tablets?

...to be written...

Where are the global-settings.xdk and xdk.log files?

global-settings.xdk contains information about all your projects in the Intel XDK, along with many of the settings related to panels under each tab (Emulate, Debug etc). For example, you can set the emulator to auto-refresh or no-auto-refresh. Modify this file at your own risk and always keep a backup of the original!

The xdk.log file contains logged data generated by the Intel XDK while it is running. Sometimes technical support will ask for a copy of this file in order to get additional information to engineering regarding problems you may be having with the Intel XDK. 

Both files are located in the same directory on your development system. Unfortunately, the precise location of these files varies with the specific version of the Intel XDK. You can find the global-settings.xdk and the xdk.log using the following command-line searches:

  • From a Windows cmd.exe session:
    > cd /
    > dir /s global-settings.xdk
     
  • From a Mac and Linux bash or terminal session:
    $ sudo find / -name global-settings.xdk

When do I use the intelxdk.js, xhr.js and cordova.js libraries?

The intelxdk.js and xhr.js libraries were only required for use with the Intel XDK legacy build tiles (which have been retired). The cordova.js library is needed for all Cordova builds. When building with the Cordova tiles, any references to intelxdk.js and xhr.js libraries in your index.html file are ignored.

How do I get my Android (and Crosswalk) keystore file?

New with release 3088 of the Intel XDK, you may now download your build certificates (aka keystore) using the new certificate manager that is built into the Intel XDK. Please read the initial paragraphs of Managing Certificates for your Intel XDK Account and the section titled "Convert a Legacy Android Certificate" in that document, for details regarding how to do this.

It may also help to review this short, quick overview video (there is no audio) that shows how you convert your existing "legacy" certificates to the "new" format that allows you to directly manage your certificates using the certificate management tool that is built into the Intel XDK. This conversion process is done only once.

If the above fails, please send an email to html5tools@intel.com requesting help. It is important that you send that email from the email address associated with your Intel XDK account.

How do I rename my project that is a duplicate of an existing project?

See this FAQ: How do I make a copy of an existing Intel XDK project?

How do I recover when the Intel XDK hangs or won't start?

  • If you are running Intel XDK on Windows* you must use Windows* 7 or higher. The Intel XDK will not run reliably on earlier versions.
  • Delete the "project-name.xdk" file from the project directory that Intel XDK is trying to open when it starts (it will try to open the project that was open during your last session), then try starting Intel XDK. You will have to "import" your project into Intel XDK again. Importing merely creates the "project-name.xdk" file in your project directory and adds that project to the "global-settings.xdk" file.
  • Rename the project directory Intel XDK is trying to open when it starts. Create a new project based on one of the demo apps. Test Intel XDK using that demo app. If everything works, restart Intel XDK and try it again. If it still works, rename your problem project folder back to its original name and open Intel XDK again (it should now open the sample project you previously opened). You may have to re-select your problem project (Intel XDK should have forgotten that project during the previous session).
  • Clear Intel XDK's program cache directories and files.

    On a Windows machine this can be done using the following on a standard command prompt (administrator is not required):

    > cd %AppData%\..\Local\XDK
    > del *.* /s/q

    To locate the "XDK cache" directory on OS X* and Linux* systems, do the following:

    $ sudo find / -name global-settings.xdk
    $ cd <dir found above>
    $ sudo rm -rf *

    You might want to save a copy of the "global-settings.xdk" file before you delete that cache directory and copy it back before you restart Intel XDK. Doing so will save you the effort of rebuilding your list of projects. Please refer to this question for information on how to locate the global-settings.xdk file.
  • If you saved the "global-settings.xdk" file and restored it in the step above and you are still having hang troubles, try deleting the directories and files above, along with the "global-settings.xdk" file, and try again.
  • Do not store your project directories on a network share (the Intel XDK has issues with network shares that have not been resolved). This includes folders shared between a Virtual machine (VM) guest and its host machine (for example, if you are running Windows* in a VM running on a Mac* host). 
  • Some people have issues using the Intel XDK behind a corporate network proxy or firewall. To check for this issue, try running the Intel XDK from your home network where, presumably, you have a simple NAT router and no proxy or firewall. If things work correctly there then your corporate firewall or proxy may be the source of the problem.
  • Issues with Intel XDK account logins can also cause Intel XDK to hang. To confirm that your login is working correctly, go to the Intel login page and confirm that you can login with your Intel XDK account username and password.
  • If you are experiencing login issues, please send an email to html5tools@intel.com from the email address registered to your login account, describing the nature of your account problem and any other details you believe may be relevant.

If you can reliably reproduce the problem, please post a copy of the "xdk.log" file that is stored in the same directory as the "global-settings.xdk" file to the Intel XDK forum. Please ATTACH the xdk.log file to your post using the "Attach Files to Post" link below the forum edit window.

Is Intel XDK an open source project? How can I contribute to the Intel XDK community?

No, it is not an open source project. However, it utilizes many open source components that are then assembled into the Intel XDK. While you cannot contribute directly to the Intel XDK integration effort, you can contribute to the many open source components that make up the Intel XDK.

The following open source components are the major elements that are being used by Intel XDK:

  • Node-Webkit
  • Chromium
  • Ripple* emulator
  • Brackets* editor
  • Weinre* remote debugger
  • Crosswalk*
  • Cordova*
  • App Framework*

How do I configure Intel XDK to use 9 patch png for Android* apps splash screen?

Intel XDK does support the use of 9-patch png images for Android* app splash screens. You can read more at https://software.intel.com/en-us/xdk/articles/android-splash-screens-using-nine-patch-png about how to create a 9-patch png image, and find a link to an Intel XDK sample that uses 9-patch png images.

How do I stop AVG from popping up the "General Behavioral Detection" window when Intel XDK is launched?

You can try adding nw.exe as the app that needs an exception in AVG.

What do I specify for "App ID" in Intel XDK under Build Settings?

Your App ID uniquely identifies your app. For example, it can be used to identify your app within Apple's application services, allowing you to use things like in-app purchasing and push notifications.
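
For illustration, an App ID is normally written in reverse-domain form. In an exported Cordova config.xml it ends up as the id attribute of the widget element; the value below is a hypothetical example:

<!-- excerpt from an exported config.xml; the App ID is the widget id attribute -->
<widget id="com.mycompany.myapp" version="1.0.0" xmlns="http://www.w3.org/ns/widgets">
    ...
</widget>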

Here are some useful articles on how to create an App ID:

Is it possible to modify the Android Manifest or iOS plist file with the Intel XDK?

You cannot modify the AndroidManifest.xml file directly with our build system, as it only exists in the cloud. However, you may do so by creating a dummy plugin that only contains a plugin.xml file containing directives that can be used to add lines to the AndroidManifest.xml file during the build process. In essence, you add lines to the AndroidManifest.xml file via a local plugin.xml file. Here is an example of a plugin that does just that:

<?xml version="1.0" encoding="UTF-8"?>
<plugin xmlns="http://apache.org/cordova/ns/plugins/1.0" id="my-custom-intents-plugin" version="1.0.0">
    <name>My Custom Intents Plugin</name>
    <description>Add Intents to the AndroidManifest.xml</description>
    <license>MIT</license>
    <engines>
        <engine name="cordova" version=">=3.0.0" />
    </engines>
    <!-- android -->
    <platform name="android">
        <config-file target="AndroidManifest.xml" parent="/manifest/application">
            <activity android:configChanges="orientation|keyboardHidden|keyboard|screenSize|locale" android:label="@string/app_name" android:launchMode="singleTop" android:name="testa" android:theme="@android:style/Theme.Black.NoTitleBar">
                <intent-filter>
                    <action android:name="android.intent.action.SEND" />
                    <category android:name="android.intent.category.DEFAULT" />
                    <data android:mimeType="*/*" />
                </intent-filter>
            </activity>
        </config-file>
    </platform>
</plugin>

You can inspect the AndroidManifest.xml created in an APK, using apktool with the following command line:

$ apktool d my-app.apk
$ cd my-app
$ more AndroidManifest.xml

This technique exploits the config-file element that is described in the Cordova Plugin Specification docs and can also be used to add lines to iOS plist files. See the Cordova plugin documentation link for additional details.

Here is an example of such a plugin for modifying the iOS plist file, specifically for adding a BIS key to the plist file:

<?xml version="1.0" encoding="UTF-8"?>
<plugin xmlns="http://apache.org/cordova/ns/plugins/1.0" id="my-custom-bis-plugin" version="0.0.2">
    <name>My Custom BIS Plugin</name>
    <description>Add BIS info to iOS plist file.</description>
    <license>BSD-3</license>
    <preference name="BIS_KEY" />
    <engines>
        <engine name="cordova" version=">=3.0.0" />
    </engines>
    <!-- ios -->
    <platform name="ios">
        <config-file target="*-Info.plist" parent="CFBundleURLTypes">
            <array>
                <dict>
                    <key>ITSAppUsesNonExemptEncryption</key>
                    <true/>
                    <key>ITSEncryptionExportComplianceCode</key>
                    <string>$BIS_KEY</string>
                </dict>
            </array>
        </config-file>
    </platform>
</plugin>

Also see this forum thread (https://software.intel.com/en-us/forums/intel-xdk/topic/680309) for an example of how to customize the OneSignal plugin's notification sound in an Android app by using a simple custom Cordova plugin. The same technique can be applied to adding custom icons and other assets to your project.

How can I share my Intel XDK app build?

You can send a link to your project via an email invite from your project settings page. However, a login to your account is required to access the file behind the link. Alternatively, you can download the build from the build page, onto your workstation, and push that built image to some location from which you can send a link to that image.

Why does my iOS build fail when I am able to test it successfully on a device and the emulator?

Common reasons include:

  • The App ID specified in your project settings does not match the one you specified in Apple's developer portal.
  • The provisioning profile does not match the cert you uploaded. Double-check on Apple's developer site that you are using the correct and current distribution cert and that the provisioning profile is still active. Download the provisioning profile again and add it to your project to confirm.
  • In Project Build Settings, your App Name is invalid. It should be modified to include only alphabetic characters, spaces and numbers.

How do I add multiple domains in Domain Access?

Here is the primary doc source for that feature.

If you need to insert multiple domain references, then you will need to add the extra references in the intelxdk.config.additions.xml file. This StackOverflow entry provides a basic idea and you can see the intelxdk.config.*.xml files that are automatically generated with each build for the <access origin="xxx" /> line that is generated based on what you provide in the "Domain Access" field of the "Build Settings" panel on the Project Tab.
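
For example, a minimal sketch of the extra lines you might place in the intelxdk.config.additions.xml file to whitelist two additional domains (the domain names below are placeholders):

<!-- additional whitelisted domains, beyond the one entered in the Domain Access field -->
<access origin="https://api.example.com" />
<access origin="https://cdn.example.org" />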

How do I build more than one app using the same Apple developer account?

On the Apple developer site, create a distribution certificate using the "iOS* Certificate Signing Request" key downloaded from the Intel XDK Build tab for the first app only. For subsequent apps, reuse the same certificate and import it into the Build tab as you normally would.

How do I include search and spotlight icons as part of my app?

Please refer to this article in the Intel XDK documentation. Create an intelxdk.config.additions.xml file in your top-level directory (same location as the other intelxdk.config.*.xml files) and add the following lines to support icons in Settings and other areas in iOS*.

<!-- Spotlight Icon -->
<icon platform="ios" src="res/ios/icon-40.png" width="40" height="40" />
<icon platform="ios" src="res/ios/icon-40@2x.png" width="80" height="80" />
<icon platform="ios" src="res/ios/icon-40@3x.png" width="120" height="120" />
<!-- iPhone Spotlight and Settings Icon -->
<icon platform="ios" src="res/ios/icon-small.png" width="29" height="29" />
<icon platform="ios" src="res/ios/icon-small@2x.png" width="58" height="58" />
<icon platform="ios" src="res/ios/icon-small@3x.png" width="87" height="87" />
<!-- iPad Spotlight and Settings Icon -->
<icon platform="ios" src="res/ios/icon-50.png" width="50" height="50" />
<icon platform="ios" src="res/ios/icon-50@2x.png" width="100" height="100" />

For more information related to these configurations, visit http://cordova.apache.org/docs/en/3.5.0/config_ref_images.md.html#Icons%20and%20Splash%20Screens.

For accurate information related to iOS icon sizes, visit https://developer.apple.com/library/ios/documentation/UserExperience/Conceptual/MobileHIG/IconMatrix.html

NOTE: The iPhone 6 icons will only be available if iOS* 7 or 8 is the target.

Cordova iOS* 8 support JIRA tracker: https://issues.apache.org/jira/browse/CB-7043

Does Intel XDK support Modbus TCP communication?

No, since Modbus is a specialized protocol, you need to write either some JavaScript* or native code (in the form of a plugin) to handle the Modbus transactions and protocol.

How do I sign an Android* app using an existing keystore?

New with release 3088 of the Intel XDK, you may now import your existing keystore into Intel XDK using the new certificate manager that is built into the Intel XDK. Please read the initial paragraphs of Managing Certificates for your Intel XDK Account and the section titled "Import an Android Certificate Keystore" in that document, for details regarding how to do this.

If the above fails, please send an email to html5tools@intel.com requesting help. It is important that you send that email from the email address associated with your Intel XDK account.

How do I build separately for different Android* versions?

Under the Projects Panel, you can select the Target Android* version under the Build Settings collapsible panel. You can change this value and build your application multiple times to create numerous versions of your application that are targeted for multiple versions of Android*.

How do I display the 'Build App Now' button if my display language is not English?

If your display language is not English and the 'Build App Now' button is proving to be troublesome, you can change your display language to English; the English language pack can be downloaded via a Windows* update. Once you have installed the English language pack, go to Control Panel > Clock, Language and Region > Region and Language > Change Display Language.

How do I update my Intel XDK version?

When an Intel XDK update is available, an Update Version dialog box lets you download the update. After the download completes, a similar dialog lets you install it. If you did not download or install an update when prompted (or on older versions), click the package icon next to the orange (?) icon in the upper-right to download or install the update. The installation removes the previous Intel XDK version.

How do I import my existing HTML5 app into the Intel XDK?

If your project contains an Intel XDK project file (<project-name>.xdk) you should use the "Open an Intel XDK Project" option located at the bottom of the Projects List on the Projects tab (lower left of the screen, round green "eject" icon, on the Projects tab). This would be the case if you copied an existing Intel XDK project from another system or used a tool that exported a complete Intel XDK project.

If your project does not contain an Intel XDK project file (<project-name>.xdk) you must "import" your code into a new Intel XDK project. To import your project, use the "Start a New Project" option located at the bottom of the Projects List on the Projects tab (lower left of the screen, round blue "plus" icon, on the Projects tab). This will open the "Samples, Demos and Templates" page, which includes an option to "Import Your HTML5 Code Base." Point it to the root directory of your project. The Intel XDK will attempt to locate a file named index.html in your project and will set the "Source Directory" on the Projects tab to point to the directory that contains this file.

If your imported project did not contain an index.html file, your project may be unstable. In that case, it is best to delete the imported project from the Intel XDK Projects tab ("x" icon in the upper right corner of the screen), rename your "root" or "main" html file to index.html and import the project again. Several components in the Intel XDK depend on the assumption that the main HTML file in your project is named index.html. See Introducing Intel® XDK Development Tools for more details.

It is highly recommended that your "source directory" be located as a sub-directory inside your "project directory." This ensures that non-source files are not included as part of your build package when building your application. If the "source directory" and "project directory" are the same, the result is longer upload times to the build server and unnecessarily large application executable files returned by the build system. See the following images for the recommended project file layout.

I am unable to login to App Preview with my Intel XDK password.

On some devices you may have trouble entering your Intel XDK login password directly on the device in the App Preview login screen. In particular, sometimes you may have trouble with the first one or two letters getting lost when entering your password.

Try the following if you are having such difficulties:

  • Reset your password, using the Intel XDK, to something short and simple.

  • Confirm that this new short and simple password works with the XDK (logout and login to the Intel XDK).

  • Confirm that this new password works with the Intel Developer Zone login.

  • Make sure you have the most recent version of Intel App Preview installed on your devices. Go to the store on each device to confirm you have the most recent copy of App Preview installed.

  • Try logging into Intel App Preview on each device with this short and simple password. Check the "show password" box so you can see your password as you type it.

If the above works, it confirms that you can log into your Intel XDK account from App Preview (because App Preview and the Intel XDK go to the same place to authenticate your login). When the above works, you can go back to the Intel XDK and reset your password to something else, if you do not like the short and simple password you used for the test.

If you are having trouble logging into any pages on the Intel web site (including the Intel XDK forum), please see the Intel Sign In FAQ for suggestions and contact info. That login system is the backend for the Intel XDK login screen.

How do I completely uninstall the Intel XDK from my system?

Take the following steps to completely uninstall the XDK from your Windows system:

The steps below assume you installed into the "default" location. Version 3900 (and later) installs the user data files one level deeper, but using the locations specified will still find the saved user information and node-webkit cache files. If you did not install in the "default" location you will have to find the location you did install into and remove the files mentioned here from that location.

  • From the Windows Control Panel, remove the Intel XDK, using the Windows uninstall tool.

  • Then:
    > cd %LocalAppData%\Intel\XDK
    > rmdir /s /q .

  • Then:
    > cd %LocalAppData%\XDK
    > copy global-settings.xdk %UserProfile%
    > rmdir /s /q .
    > copy %UserProfile%\global-settings.xdk .

  • Then:
    -- Go to xdk.intel.com and select the download link.
    -- Download and install the new XDK.

If the Intel XDK is still listed as an app in the Windows Control Panel "Uninstall or change a program" list, find this entry in your registry (using regedit):

HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Uninstall

Delete any sub-entries that refer to the Intel XDK. For example, a 3900 install will have this sub-key:

ARP_for_prd_xdk_0.0.3900

Use the following methods on a Linux or a Mac system:

  • On a Linux machine, run the uninstall script, typically /opt/intel/XDK/uninstall.sh.
     
  • Remove the directory into which the Intel XDK was installed.
    -- Typically /opt/intel or your home (~) directory on a Linux machine.
    -- Typically in the /Applications/Intel XDK.app directory on a Mac.
     
  • Then:
    $ find ~ -name global-settings.xdk
    $ cd <result-from-above> (for example ~/Library/Application Support/XDK/ on a Mac)
    $ cp global-settings.xdk ~
    $ rm -Rf *
    $ mv ~/global-settings.xdk .

     
  • Then:
    -- Go to xdk.intel.com and select the download link.
    -- Download and install the new XDK.

Is there a tool that can help me highlight syntax issues in Intel XDK?

Yes, you can use the various linting tools that can be added to the Brackets editor to review any syntax issues in your HTML, CSS and JS files. Go to the "File > Extension Manager..." menu item and add the following extensions: JSHint, CSSLint, HTMLHint, XLint for Intel XDK. Then, review your source files by monitoring the small yellow triangle at the bottom of the edit window (a green check mark indicates no issues).

How do I delete built apps and test apps from the Intel XDK build servers?

You can manage them by logging into: https://appcenter.html5tools-software.intel.com/csd/controlpanel.aspx. This functionality will eventually be available within Intel XDK after which access to app center will be removed.

I need help with the App Security API plugin; where do I find it?

Visit the primary documentation book for the App Security API and see this forum post for some additional details.

When I install my app or use the Debug tab Avast antivirus flags a possible virus, why?

If you are receiving a "Suspicious file detected - APK:CloudRep [Susp]" message from Avast anti-virus installed on your Android device it is due to the fact that you are side-loading the app (or the Intel XDK Debug modules) onto your device (using a download link after building or by using the Debug tab to debug your app), or your app has been installed from an "untrusted" Android store. See the following official explanation from Avast:

Your application was flagged by our cloud reputation system. "Cloud rep" is a new feature of Avast Mobile Security, which flags apks when the following conditions are met:

  1. The file is not prevalent enough; meaning not enough users of Avast Mobile Security have installed your APK.
  2. The source is not an established market (Google Play is an example of an established market).

If you distribute your app using Google Play (or any other trusted market) your users should not see any warning from Avast.

Following are some of the Avast anti-virus notification screens you might see on your device. All of these are perfectly normal. They appear because you must enable the installation of "non-market" apps in order to use your device for debugging, and because the App IDs associated with your never-published app, or with the custom debug modules that the Debug tab in the Intel XDK builds and installs on your device, will not be found in an "established" (aka "trusted") market, such as Google Play.

If you choose to ignore the "Suspicious app activity!" threat you will not receive a threat for that debug module any longer. It will show up in the Avast 'ignored issues' list. Updates to an existing, ignored, custom debug module should continue to be ignored by Avast. However, new custom debug modules (due to a new project App ID or a new version of Crosswalk selected in your project's Build Settings) will result in a new warning from the Avast anti-virus tool.

  

  

How do I add a Brackets extension to the editor that is part of the Intel XDK?

The number of Brackets extensions provided in the built-in edition of the Brackets editor is limited, to ensure stability of the Intel XDK product. Not all extensions are compatible with the edition of Brackets that is embedded within the Intel XDK. Adding incompatible extensions can cause the Intel XDK to quit working.

Despite this warning, there are useful extensions that have not been included in the editor and which can be added to the Intel XDK. Adding them is temporary; each time you update the Intel XDK (or if you reinstall the Intel XDK) you will have to "re-add" your Brackets extension. To add a Brackets extension, use the following procedure:

  • exit the Intel XDK
  • download a ZIP file of the extension you wish to add
  • on Windows, unzip the extension here:
    %LocalAppData%\Intel\XDK\xdk\brackets\b\extensions\dev
  • on Mac OS X, unzip the extension here:
    /Applications/Intel\ XDK.app/Contents/Resources/app.nw/brackets/b/extensions/dev
  • start the Intel XDK

Note that the locations given above are subject to change with new releases of the Intel XDK.

Why does my app or game require so many permissions on Android when built with the Intel XDK?

When you build your HTML5 app using the Intel XDK for Android or Android-Crosswalk you are creating a Cordova app. It may seem like you're not building a Cordova app, but you are. In order to package your app so it can be distributed via an Android store and installed on an Android device, it needs to be built as a hybrid app. The Intel XDK uses Cordova to create that hybrid app.

A pure Cordova app requires the NETWORK permission; it is needed to "jump" between your HTML5 environment and the native Android environment. Additional permissions will be added by any Cordova plugins you include with your application; which permissions are included is a function of what each plugin does and requires.
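
As an illustration of how a plugin pulls in a permission, a plugin's plugin.xml typically contains a config-file directive similar to this hypothetical sketch (the camera permission is just a placeholder):

<platform name="android">
    <!-- a plugin that needs the camera adds the corresponding permission to AndroidManifest.xml at build time -->
    <config-file target="AndroidManifest.xml" parent="/manifest">
        <uses-permission android:name="android.permission.CAMERA" />
    </config-file>
</platform>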

Crosswalk for Android builds also require the NETWORK permission, because the Crosswalk image built by the Intel XDK includes support for Cordova. In addition, current versions of Crosswalk (12 and 14 at the time this FAQ was written) also require NETWORK STATE and WIFI STATE. There is an extra permission in some versions of Crosswalk (WRITE EXTERNAL STORAGE) that is only needed by the shared model library of Crosswalk; we have asked the Crosswalk project to remove this permission in a future Crosswalk version.

If you are seeing more than the following five permissions in your XDK-built Crosswalk app:

  • android.permission.INTERNET
  • android.permission.ACCESS_NETWORK_STATE
  • android.permission.ACCESS_WIFI_STATE
  • android.permission.INTERNET
  • android.permission.WRITE_EXTERNAL_STORAGE

then you are seeing permissions that have been added by some plugins. Each plugin is different, so there is no hard rule of thumb. The two "default" core Cordova plugins that are added by the Intel XDK blank templates (device and splash screen) do not require any Android permissions.

BTW: the permission list above comes from a Crosswalk 14 build. Crosswalk 12 builds do not include the last permission; it was added when the Crosswalk project introduced the shared model library option, which started with Crosswalk 13 (the Intel XDK does not support 13 builds).

How do I make a copy of an existing Intel XDK project?

If you just need to make a backup copy of an existing project, and do not plan to open that backup copy as a project in the Intel XDK, do the following:

  • Exit the Intel XDK.
  • Copy the entire project directory:
    • on Windows, use File Explorer to "right-click" and "copy" your project directory, then "right-click" and "paste"
    • on Mac use Finder to "right-click" and then "duplicate" your project directory
    • on Linux, open a terminal window, "cd" to the folder that contains your project, and type "cp -a old-project/ new-project/" at the terminal prompt (where "old-project/" is the folder name of your existing project that you want to copy and "new-project/" is the name of the new folder that will contain a copy of your existing project)

If you want to use an existing project as the starting point of a new project in the Intel XDK, follow the process described below. It ensures that the build system does not confuse the ID stored in your old project with the ID stored in your new project. If you do not follow this procedure you will end up with multiple projects using the same project ID (a special GUID that is stored inside the Intel XDK <project-name>.xdk file in the root directory of your project). Each project in your account must have a unique project ID.

  • Exit the Intel XDK.
  • Make a copy of your existing project using the process described above.
  • Inside the new project that you made (that is, your new copy of your old project), make copies of the <project-name>.xdk file and <project-name>.xdke files and rename those copies to something like project-new.xdk and project-new.xdke (anything you like, just something different than the original project name, preferably the same name as the new project folder in which you are making this new project).
  • Using a TEXT EDITOR (only) (such as Notepad or Sublime or Brackets or some other TEXT editor), open your new "project-new.xdk" file (whatever you named it) and find the projectGuid line, it will look something like this:
    "projectGuid": "a863c382-ca05-4aa4-8601-375f9f209b67",
  • Change the "GUID" to all zeroes, like this: "00000000-0000-0000-000000000000"
  • Save the modified "project-new.xdk" file.
  • Open the Intel XDK.
  • Go to the Projects tab.
  • Select "Open an Intel XDK Project" (the green button at the bottom left of the Projects tab).
  • To open this new project, locate the new "project-new.xdk" file inside the new project folder you copied above.
  • Don't forget to change the App ID in your new project. This is necessary to avoid conflicts with the project you copied from, in the store and when side-loading onto a device.

My project does not include a www folder. How do I fix it so it includes a www or source directory?

The Intel XDK HTML5 and Cordova project file structures are meant to mimic a standard Cordova project. In a Cordova (or PhoneGap) project there is a subdirectory (or folder) named www that contains all of the HTML5 source code and asset files that make up your application. For best results, it is advised that you follow this convention of putting your source inside a "source directory" within your project folder.

This most commonly happens as the result of exporting a project from an external tool, such as Construct2, or as the result of importing an existing HTML5 web app that you are converting into a hybrid mobile application (e.g., an Intel XDK Cordova app). If you would like to convert an existing Intel XDK project into this format, follow the steps below:

  • Exit the Intel XDK.
  • Copy the entire project directory:
    • on Windows, use File Explorer to "right-click" and "copy" your project directory, then "right-click" and "paste"
    • on Mac use Finder to "right-click" and then "duplicate" your project directory
    • on Linux, open a terminal window, "cd" to the folder that contains your project, and type "cp -a old-project/ new-project/" at the terminal prompt (where "old-project/" is the folder name of your existing project that you want to copy and "new-project/" is the name of the new folder that will contain a copy of your existing project)
  • Create a "www" directory inside the new duplicate project you just created above.
  • Move your index.html and other source and asset files to the "www" directory you just created -- this is now your "source" directory, located inside your "project" directory (do not move the <project-name>.xdk and xdke files and any intelxdk.config.*.xml files, those must stay in the root of the project directory)
  • Inside the new project that you made above (by making a copy of the old project), rename the <project-name>.xdk file and <project-name>.xdke files to something like project-copy.xdk and project-copy.xdke (anything you like, just something different than the original project, preferably the same name as the new project folder in which you are making this new project).
  • Using a TEXT EDITOR (only) (such as Notepad or Sublime or Brackets or some other TEXT editor), open the new "project-copy.xdk" file (whatever you named it) and find the line named projectGuid, it will look something like this:
    "projectGuid": "a863c382-ca05-4aa4-8601-375f9f209b67",
  • Change the "GUID" to all zeroes, like this: "00000000-0000-0000-000000000000"
  • A few lines down find: "sourceDirectory": "",
  • Change it to this: "sourceDirectory": "www",
  • Save the modified "project-copy.xdk" file.
  • Open the Intel XDK.
  • Go to the Projects tab.
  • Select "Open an Intel XDK Project" (the green button at the bottom left of the Projects tab).
  • To open this new project, locate the new "project-copy.xdk" file inside the new project folder you copied above.

Can I install more than one copy of the Intel XDK onto my development system?

Yes, you can install more than one version onto your development system. However, you cannot run multiple instances of the Intel XDK at the same time. Be aware that new releases sometimes change the project file format, so it is a good idea, in these cases, to make a copy of your project if you need to experiment with a different version of the Intel XDK. See the instructions in a FAQ entry above regarding how to make a copy of your Intel XDK project.

Follow the instructions in this forum post to install more than one copy of the Intel XDK onto your development system.

On Apple OS X* and Linux* systems, does the Intel XDK need the OpenSSL* library installed?

Yes. Several features of the Intel XDK require the OpenSSL library, which typically comes pre-installed on Linux and OS X systems. If the Intel XDK reports that it could not find libssl, go to https://www.openssl.org to download and install it.

I have a web application that I would like to distribute in app stores without major modifications. Is this possible using the Intel XDK?

Yes, if you have a true web app or “client app” that only uses HTML, CSS and JavaScript, it is usually not too difficult to convert it to a Cordova hybrid application (this is what the Intel XDK builds when you create an HTML5 app). If you rely heavily on PHP or other server scripting languages embedded in your pages you will have more work to do. Because your Cordova app is not associated with a server, you cannot rely on server-based programming techniques; instead, you must rewrite any such code to use RESTful APIs that your app interacts with using, for example, AJAX calls.

What is the best training approach to using the Intel XDK for a newbie?

First, become well-versed in the art of client web apps, apps that rely only on HTML, CSS and JavaScript and utilize RESTful APIs to talk to network services. With that you will have mastered 80% of the problem. After that, it is simply a matter of understanding how Cordova plugins are able to extend the JavaScript API for access to features of the platform. For HTML5 training there are many sites providing tutorials. It may also help to read Five Useful Tips on Getting Started Building Cordova Mobile Apps with the Intel XDK, which will help you understand some of the differences between developing for a traditional server-based environment and developing for the Intel XDK hybrid Cordova app environment.

What is the best platform to start building an app with the Intel XDK? And what are the important differences between the Android, iOS and other mobile platforms?

There is no single most important difference between the Android, iOS and other platforms. It is important to understand that the HTML5 runtime engine that executes your app on each platform will vary as a function of the platform. Just as there are differences between Chrome and Firefox and Safari and Internet Explorer, there are differences between iOS 9 and iOS 8 and Android 4 and Android 5, etc. Android has the most significant differences between vendors and versions of Android. This is one of the reasons the Intel XDK offers the Crosswalk for Android build option: it normalizes and updates the HTML5 runtime across Android devices and versions.

In general, if you can get your app working well on Android (or Crosswalk for Android) first you will generally have fewer issues to deal with when you start to work on the iOS and Windows platforms. In addition, the Android platform has the most flexible and useful debug options available, so it is the easiest platform to use for debugging and testing your app.

Is my password encrypted and why is it limited to fifteen characters?

Yes, your password is stored encrypted and is managed by https://signin.intel.com. Your Intel XDK userid and password can also be used to log into the Intel XDK forum as well as the Intel Developer Zone. The Intel XDK itself does not store or manage your userid and password.

The rules regarding allowed userids and passwords are answered on this Sign In FAQ page, where you can also find help on recovering and changing your password.

Why does the Intel XDK take a long time to start on Linux or Mac?

...and why am I getting this error message? "Attempt to contact authentication server is taking a long time. You can wait, or check your network connection and try again."

At startup, the Intel XDK attempts to automatically determine the proxy settings for your machine. Unfortunately, on some system configurations it is unable to reliably detect your system proxy settings. As an example, you might see something like this image when starting the Intel XDK.

On some systems you can get around this problem by setting some proxy environment variables and then starting the Intel XDK from a command-line session that includes those configured environment variables. To set those environment variables, use commands similar to the following:

$ export no_proxy="localhost,127.0.0.1/8,::1"
$ export NO_PROXY="localhost,127.0.0.1/8,::1"
$ export http_proxy=http://proxy.mydomain.com:123/
$ export HTTP_PROXY=http://proxy.mydomain.com:123/
$ export https_proxy=http://proxy.mydomain.com:123/
$ export HTTPS_PROXY=http://proxy.mydomain.com:123/

IMPORTANT! The name of your proxy server and the port (or ports) that your proxy server requires will be different than those shown in the example above. Please consult with your IT department to find out what values are appropriate for your site. Intel has no way of knowing what configuration is appropriate for your network.

If you use the Intel XDK in multiple locations (at work and at home), you may have to change the proxy settings before starting the Intel XDK after switching to a new network location. For example, many work networks use a proxy server, but most home networks do not require such a configuration. In that case, you need to be sure to "unset" the proxy environment variables before starting the Intel XDK on a non-proxy network.

After you have successfully configured your proxy environment variables, you can start the Intel XDK manually, from the command-line.

On a Mac, where the Intel XDK is installed in the default location, type the following (from a terminal window that has the above environment variables set):

$ open /Applications/Intel\ XDK.app/

On a Linux machine, assuming the Intel XDK has been installed in the ~/intel/XDK directory, type the following (from a terminal window that has the above environment variables set):

$ ~/intel/XDK/xdk.sh &

In the Linux case, you will need to adjust the directory name that points to the xdk.sh file in order to start. The example above assumes a local install into the ~/intel/XDK directory. Since Linux installations have more options regarding the installation directory, you will need to adjust the above to suit your particular system and install directory.

How do I generate a P12 file on a Windows machine?

See these articles:

How do I change the default dir for creating new projects in the Intel XDK?

You can change the default new project location manually by modifying a field in the global-settings.xdk file. Locate the global-settings.xdk file on your system (the precise location varies as a function of the OS) and find this JSON object inside that file:

"projects-tab": {"defaultPath": "/Users/paul/Documents/XDK","LastSortType": "descending|Name","lastSortType": "descending|Opened","thirdPartyDisclaimerAcked": true
  },

The example above came from a Mac. On a Mac the global-settings.xdk file is located in the "~/Library/Application Support/XDK" directory.

On a Windows machine the global-settings.xdk file is normally found in the "%LocalAppData%\XDK" directory. The part you are looking for will look something like this:

"projects-tab": {"thirdPartyDisclaimerAcked": false,"LastSortType": "descending|Name","lastSortType": "descending|Opened","defaultPath": "C:\\Users\\paul/Documents"
  },

Obviously, it's the defaultPath part you want to change.

BE CAREFUL WHEN YOU EDIT THE GLOBAL-SETTINGS.XDK FILE!! You've been warned...

Make sure the result is proper JSON when you are done, or it may cause your XDK to cough and hack loudly. Make a backup copy of global-settings.xdk before you start, just in case.

Where can I find a list of recent and upcoming webinars?

How can I change the email address associated with my Intel XDK login?

Login to the Intel Developer Zone with your Intel XDK account userid and password and then locate your "account dashboard." Click the "pencil icon" next to your name to open the "Personal Profile" section of your account, where you can edit your "Name & Contact Info," including the email address associated with your account, under the "Private" section of your profile.

What network addresses must I enable in my firewall to ensure the Intel XDK will work on my restricted network?

Normally, access to the external servers that the Intel XDK uses is handled automatically by your proxy server. However, if you are working in an environment that has restricted Internet access and you need to provide your IT department with a list of URLs that you need access to in order to use the Intel XDK, then please provide them with the following list of domain names:

  • appcenter.html5tools-software.intel.com (for communication with the build servers)
  • s3.amazonaws.com (for downloading sample apps and built apps)
  • download.xdk.intel.com (for getting XDK updates)
  • xdk-feed-proxy.html5tools-software.intel.com (for receiving the tweets in the upper right corner of the XDK)
  • signin.intel.com (for logging into the XDK)
  • sfederation.intel.com (for logging into the XDK)

Normally this should be handled by your network proxy (if you're on a corporate network) or should not be an issue if you are working on a typical home network.

I cannot create a login for the Intel XDK, how do I create a userid and password to use the Intel XDK?

If you have downloaded and installed the Intel XDK but are having trouble creating a login, you can create the login outside the Intel XDK. To do this, go to the Intel Developer Zone and push the "Join Today" button. After you have created your Intel Developer Zone login you can return to the Intel XDK and use that userid and password to login to the Intel XDK. This same userid and password can also be used to login to the Intel XDK forum.

Installing the Intel XDK on Windows fails with a "Package signature verification failed." message.

If you receive a "Package signature verification failed" message (see image below) when installing the Intel XDK on your system, it is likely due to one of the following two reasons:

  • Your system does not have a properly installed "root certificate" file, which is needed to confirm that the install package is good.
  • The install package is corrupt and failed the verification step.

The first case can happen if you are attempting to install the Intel XDK on an unsupported version of Windows. The Intel XDK is only supported on Microsoft Windows 7 and higher. If you attempt to install on Windows Vista (or earlier) you may see this verification error. The workaround is to install the Intel XDK on a Windows 7 or greater machine.

The second case is likely due to a corruption of the install package during download or due to tampering. The workaround is to re-download the install package and attempt another install.

If you are installing on a Windows 7 (or greater) machine and you see this message it is likely due to a missing or bad root certificate on your system. To fix this you may need to start the "Certificate Propagation" service. Open the Windows "services.msc" panel and then start the "Certificate Propagation" service. Additional links related to this problem can be found here > https://technet.microsoft.com/en-us/library/cc754841.aspx

See this forum thread for additional help regarding this issue > https://software.intel.com/en-us/forums/intel-xdk/topic/603992

Troubles installing the Intel XDK on a Linux or Ubuntu system, which option should I choose?

Choose the local user option, not root or sudo, when installing the Intel XDK on your Linux or Ubuntu system. This is the most reliable and trouble-free option and is the default installation option. It ensures that the Intel XDK has all the permissions necessary to execute properly on your Linux system. The Intel XDK will be installed in a subdirectory of your home (~) directory.

Inactive account, login issue, or problem updating an APK in the store: how do I request an account transfer?

As of June 26, 2015 we migrated all of Intel XDK accounts to the more secure intel.com login system (the same login system you use to access this forum).

We have migrated nearly all active users to the new login system. Unfortunately, there are a few active user accounts that we could not automatically migrate to intel.com, primarily because the intel.com login system does not allow the use of some characters in userids that were allowed in the old login system.

If you have not used the Intel XDK for a long time prior to June 2015, your account may not have been automatically migrated. If you own an "inactive" account it will have to be manually migrated -- please try logging into the Intel XDK with your old userid and password, to determine if it no longer works. If you find that you cannot login to your existing Intel XDK account, and still need access to your old account, please send a message to html5tools@intel.com and include your userid and the email address associated with that userid, so we can guide you through the steps required to reactivate your old account.

Alternatively, you can create a new Intel XDK account. If you have submitted an app to the Android store from your old account you will need access to that old account to retrieve the Android signing certificates in order to upgrade that app on the Android store; in that case, send an email to html5tools@intel.com with your old account username and email and new account information.

Connection Problems? -- Intel XDK SSL certificates update

On January 26, 2016 we updated the SSL certificates on our back-end systems to SHA2 certificates. The existing certificates were due to expire in February of 2016. We have also disabled support for obsolete protocols.

If you are experiencing persistent connection issues (since Jan 26, 2016), please post a problem report on the forum and include in your problem report:

  • the operation that failed
  • the version of your XDK
  • the version of your operating system
  • your geographic region
  • and a screen capture

How do I resolve build failure: "libpng error: Not a PNG file"?  

If you are experiencing build failures with CLI 5 Android builds, and the detailed error log includes a message similar to the following:

Execution failed for task ':mergeArmv7ReleaseResources'.> Error: Failed to run command: /Developer/android-sdk-linux/build-tools/22.0.1/aapt s -i .../platforms/android/res/drawable-land-hdpi/screen.png -o .../platforms/android/build/intermediates/res/armv7/release/drawable-land-hdpi-v4/screen.png

Error Code: 42

Output: libpng error: Not a PNG file

You need to change the format of your icon and/or splash screen images to PNG format.

The error message refers to a file named "screen.png" -- which is what each of your splash screen images was renamed to before being moved into the build project resource directories. Unfortunately, JPG images were supplied for use as splash screen images, not PNG images, so the renamed files were found by the build system to be invalid.

Convert your splash screen images to PNG format. Renaming JPG images to PNG will not work! You must convert your JPG images into PNG format images using an appropriate image editing tool. The Intel XDK does not provide any such conversion tool.
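
If you prefer to script the conversion rather than use an image editor, a short Python sketch using the Pillow imaging library (pip install Pillow) can re-encode every JPG in a folder as a true PNG file. The www/images folder name below is only an assumption; point it at wherever your splash screen images actually live.

# convert_splash.py -- batch-convert JPG splash screen images to real PNG files
import glob
from PIL import Image

for jpg in glob.glob("www/images/*.jpg"):              # assumed folder; adjust to your project
    png = jpg.rsplit(".", 1)[0] + ".png"
    Image.open(jpg).convert("RGB").save(png, "PNG")    # re-encode; renaming alone will not work
    print("converted", jpg, "->", png)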

Beginning with Cordova CLI 5, all icons and splash screen images must be supplied in PNG format. This applies to all supported platforms. This is an undocumented "new feature" of the Cordova CLI 5 build system that was implemented by the Apache Cordova project.

Why do I get a "Parse Error" when I try to install my built APK on my Android device?

Because you have built an "unsigned" Android APK. You must click the "signed" box in the Android Build Settings section of the Projects tab if you want to install an APK on your device. The only reason you would choose to create an "unsigned" APK is if you need to sign it manually. This is very rare and not the normal situation.

My converted legacy keystore does not work. Google Play is rejecting my updated app.

The keystore you converted when you updated to 3088 (now 3240 or later) is the same keystore you were using in 2893. When you upgraded to 3088 (or later) and "converted" your legacy keystore, you re-signed and renamed your legacy keystore and it was transferred into a database to be used with the Intel XDK certificate management tool. It is still the same keystore, but with an alias name and password assigned by you and accessible directly by you through the Intel XDK.

If you kept the converted legacy keystore in your account following the conversion you can download that keystore from the Intel XDK for safe keeping (do not delete it from your account or from your system). Make sure you keep track of the new password(s) you assigned to the converted keystore.

There are two problems we have experienced with converted legacy keystores at the time of the 3088 release (April, 2016):

  • Foreign (non-ASCII) characters used in the new alias name and passwords were being corrupted.
  • Final signing of your APK by the build system was being done with SHA256 rather than SHA1.

Both of the above items have been resolved and should no longer be an issue.

If you are currently unable to complete a build with your converted legacy keystore (i.e., builds fail when you use the converted legacy keystore but they succeed when you use a new keystore) the first bullet above is likely the reason your converted keystore is not working. In that case we can reset your converted keystore and give you the option to convert it again. You do this by requesting that your legacy keystore be "reset" by filling out this form. For 100% surety during that second conversion, use only 7-bit ASCII characters in the alias name you assign and for the password(s) you assign.

IMPORTANT: using the legacy certificate to build your Android app is ONLY necessary if you have already published an app to an Android store and need to update that app. If you have never published an app to an Android store using the legacy certificate you do not need to concern yourself with resetting and reconverting your legacy keystore. It is easier, in that case, to create a new Android keystore and use that new keystore.

If you ARE able to successfully build your app with the converted legacy keystore, but your updated app (in the Google store) does not install on some older Android 4.x devices (typically a subset of Android 4.0-4.2 devices), the second bullet cited above is likely the reason for the problem. The solution, in that case, is to rebuild your app and resubmit it to the store (that problem was a build-system problem that has been resolved).

How can I have others beta test my app using Intel App Preview?

Apps that you sync to your Intel XDK account, using the Test tab's green "Push Files" button, can only be accessed by logging into Intel App Preview with the same Intel XDK account credentials that you used to push the files to the cloud. In other words, you can only download and run your app for testing with Intel App Preview if you log into the same account that you used to upload that test app. This restriction applies to downloading your app into Intel App Preview via the "Server Apps" tab, at the bottom of the Intel App Preview screen, or by scanning the QR code displayed on the Intel XDK Test tab using the camera icon in the upper right corner of Intel App Preview.

If you want to allow others to test your app, using Intel App Preview, it means you must use one of two options:

  • give them your Intel XDK userid and password
  • create an Intel XDK "test account" and provide your testers with that userid and password

For security's sake, we highly recommend you use the second option (create an Intel XDK "test account").

A "test account" is simply a second Intel XDK account that you do not plan to use for development or builds. Do not use the same email address for your "test account" as you are using for your main development account. You should use a "throw away" email address for that "test account" (an email address that you do not care about).

Assuming you have created an Intel XDK "test account" and have instructed your testers to download and install Intel App Preview; have provided them with your "test account" userid and password; and you are ready to have them test:

  • sign out of your Intel XDK "development account" (using the little "man" icon in the upper right)
  • sign into your "test account" (again, using the little "man" icon in the Intel XDK toolbar)
  • make sure you have selected the project that you want users to test, on the Projects tab
  • go to the Test tab
  • make sure "MOBILE" is selected (upper left of the Test tab)
  • push the green "PUSH FILES" button on the Test tab
  • log out of your "test account"
  • log into your development account

Then, tell your beta testers to log into Intel App Preview with your "test account" credentials and instruct them to choose the "Server Apps" tab at the bottom of the Intel App Preview screen. From there they should see the name of the app you synced using the Test tab and can simply start it by touching the app name (followed by the big blue and white "Launch This App" button). Starting the app this way is actually easier than sending them a copy of the QR code. The QR code is very dense and is hard to read with some devices, depending on the quality of the camera in their device.

Note that when running your test app inside of Intel App Preview they cannot test any features associated with third-party plugins, only core Cordova plugins. Thus, you need to ensure that those parts of your app that depend on non-core Cordova plugins have been disabled or have exception handlers to prevent your app from crashing or freezing.

I'm having trouble making Google Maps work with my Intel XDK app. What can I do?

There are many reasons that can cause your attempt to use Google Maps to fail. Mostly it is due to the fact that you need to download the Google Maps API (JavaScript library) at runtime to make things work. However, there is no guarantee that you will have a good network connection, so if you do it the way you are used to doing it, in a browser...

<script src="https://maps.googleapis.com/maps/api/js?key=API_KEY&sensor=true"></script>

...you may get yourself into trouble, in an Intel XDK Cordova app. See Loading Google Maps in Cordova the Right Way for an excellent tutorial on why this is a problem and how to deal with it. Also, it may help to read Five Useful Tips on Getting Started Building Cordova Mobile Apps with the Intel XDK, especially item #3, to get a better understanding of why you shouldn't use the "browser technique" you're familiar with.

An alternative is to use a mapping tool that allows you to include the JavaScript directly in your app, rather than downloading it over the network each time your app starts. Several Intel XDK developers have reported very good luck with the open-source JavaScript library named LeafletJS, which uses OpenStreetMap as its map database source.

You can also search the Cordova Plugin Database for Cordova plugins that implement mapping features, in some cases using native SDKs and libraries.

How do I fix "Cannot find the Intel XDK. Make sure your device and intel XDK are on the same wireless network." error messages?

You can either disable your firewall or allow access through the firewall for the Intel XDK. To allow access through the Windows firewall, go to the Windows Control Panel and search for the Firewall (Control Panel > System and Security > Windows Firewall > Allowed Apps) and enable Node Webkit (nw or nw.exe) through the firewall.

See the image below (this image is from a Windows 8.1 system).

Google Services needs my SHA1 fingerprint. Where do I get my app's SHA fingerprint?

Your app's SHA fingerprint is part of your build signing certificate. Specifically, it is part of the signing certificate that you used to build your app. The Intel XDK provides a way to download your build certificates directly from within the Intel XDK application (see the Intel XDK documentation for details on how to manage your build certificates). Once you have downloaded your build certificate you can use these instructions provided by Google, to extract the fingerprint, or simply search the Internet for "extract fingerprint from android build certificate" to find many articles detailing this process.

Why am I unable to test or build or connect to the old build server with Intel XDK version 2893?

This is an Important Note Regarding the use of Intel XDK Versions 2893 and Older!!

As of June 13, 2016, versions of the Intel XDK released prior to March 2016 (2893 and older) can no longer use the Build tab, the Test tab, or Intel App Preview; and can no longer create custom debug modules for use with the Debug and Profile tabs. This change was necessary to improve the security and performance of our Intel XDK cloud-based build system. If you are using version 2893 or older of the Intel XDK, you must upgrade to version 3088 or greater to continue to develop, debug, and build Intel XDK Cordova apps.

The error message you see below, "NOTICE: Internet Connection and Login Required," when trying to use the Build tab is due to the fact that the cloud-based component used by those older versions of the Intel XDK has been retired and is no longer present. The error message appears to be misleading, but it is the easiest way to identify this condition.

How do I run the Intel XDK on Fedora Linux?

See the instructions below, copied from this forum post:

$ sudo find xdk/install/dir -name libudev.so.0
$ cd dir/found/above
$ sudo rm libudev.so.0
$ sudo ln -s /lib64/libudev.so.1 libudev.so.0

Note the "xdk/install/dir" is the name of the directory where you installed the Intel XDK. This might be "/opt/intel/xdk" or "~/intel/xdk" or something similar. Since the Linux install is flexible regarding the precise installation location you may have to search to find it on your system.

Once you find that libudev.so file in the Intel XDK install directory you must "cd" to that directory to finish the operations as written above.

Additional instructions have been provided in the related forum thread; please see that thread for the latest information regarding hints on how to make the Intel XDK run on a Fedora Linux system.

The Intel XDK generates a path error for my launch icons and splash screen files.

If you have an older project (created prior to August of 2016 using a version of the Intel XDK older than 3491) you may be seeing a build error indicating that some icon and/or splash screen image files cannot be found. This is likely due to the fact that some of your icon and/or splash screen image files are located within your source folder (typically named "www") rather than in the new package-assets folder. For example, inspecting one of the auto-generated intelxdk.config.*.xml files you might find something like the following:

<icon platform="windows" src="images/launchIcon_24.png" width="24" height="24"/><icon platform="windows" src="images/launchIcon_434x210.png" width="434" height="210"/><icon platform="windows" src="images/launchIcon_744x360.png" width="744" height="360"/><icon platform="windows" src="package-assets/ic_launch_50.png" width="50" height="50"/><icon platform="windows" src="package-assets/ic_launch_150.png" width="150" height="150"/><icon platform="windows" src="package-assets/ic_launch_44.png" width="44" height="44"/>

where the first three images are not being found by the build system because they are located in the "www" folder and the last three are being found, because they are located in the "package-assets" folder.

This problem usually comes about because the UI does not include the appropriate "slots" to hold those images. This results in some "dead" icon or splash screen images inside the <project-name>.xdk file which need to be removed. To fix this, make a backup copy of your <project-name>.xdk file and then, using a CODE or TEXT editor (e.g., Notepad++ or Brackets or Sublime Text or vi, etc.), edit your <project-name>.xdk file in the root of your project folder.

Inside of your <project-name>.xdk file you will find entries that look like this:

"icons_": [
  {"relPath": "images/launchIcon_24.png","width": 24,"height": 24
  },
  {"relPath": "images/launchIcon_434x210.png","width": 434,"height": 210
  },
  {"relPath": "images/launchIcon_744x360.png","width": 744,"height": 360
  },

Find all the entries that are pointing to the problem files and remove those problem entries from your <project-name>.xdk file. Obviously, you need to do this when the XDK is closed and only after you have made a backup copy of your <project-name>.xdk file, just in case you end up with a missing comma. The <project-name>.xdk file is a JSON file and needs to be in proper JSON format after you make changes or it will not be read properly by the XDK when you open it.
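
If you are comfortable scripting the cleanup, a Python sketch along these lines can strip the dead entries for you. It assumes "icons_" is a top-level key of the <project-name>.xdk JSON (as in the excerpt above) and that the problem images are the ones referenced from the "images/" folder; the project file name is hypothetical, the sketch only handles the "icons_" array, and you should verify the result against your backup before trusting it.

# prune_icons.py -- remove icon entries that point into the old "images" folder
import json
import shutil

xdk_file = "MyProject.xdk"                      # hypothetical name; use your <project-name>.xdk
shutil.copyfile(xdk_file, xdk_file + ".bak")    # keep a backup, as advised above

with open(xdk_file) as f:
    project = json.load(f)

# Drop the entries whose images live under "images/"; keep the package-assets ones.
project["icons_"] = [icon for icon in project.get("icons_", [])
                     if not icon["relPath"].startswith("images/")]

with open(xdk_file, "w") as f:
    json.dump(project, f, indent=2)             # result stays valid JSON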

Then move your problem icons and splash screen images to the package-assets folder and reference them from there. Use this technique (below) to add additional icons by using the intelxdk.config.additions.xml file.

<!-- alternate way to add icons to Cordova builds, rather than using XDK GUI --><!-- especially for adding icon resolutions that are not covered by the XDK GUI --><!-- Android icons and splash screens --><platform name="android"><icon src="package-assets/android/icon-ldpi.png" density="ldpi" width="36" height="36" /><icon src="package-assets/android/icon-mdpi.png" density="mdpi" width="48" height="48" /><icon src="package-assets/android/icon-hdpi.png" density="hdpi" width="72" height="72" /><icon src="package-assets/android/icon-xhdpi.png" density="xhdpi" width="96" height="96" /><icon src="package-assets/android/icon-xxhdpi.png" density="xxhdpi" width="144" height="144" /><icon src="package-assets/android/icon-xxxhdpi.png" density="xxxhdpi" width="192" height="192" /><splash src="package-assets/android/splash-320x426.9.png" density="ldpi" orientation="portrait" /><splash src="package-assets/android/splash-320x470.9.png" density="mdpi" orientation="portrait" /><splash src="package-assets/android/splash-480x640.9.png" density="hdpi" orientation="portrait" /><splash src="package-assets/android/splash-720x960.9.png" density="xhdpi" orientation="portrait" /></platform>

Upgrading to the latest version of the Intel XDK results in a build error with existing projects.

Some users have reported that by creating a new project, adding their plugins to that new project and then copying the www folder from the old project to the new project they are able to resolve this issue. Obviously, you also need to update your Build Settings in the new project to match those from the old project.


Code Sample: Plant Lighting System in Python*


Introduction

This Plant Lighting System application is part of a series of how-to Intel® Internet of Things (IoT) code sample exercises using the Intel® IoT Developer Kit, Intel® Edison board, Intel® IoT Gateway, cloud platforms, APIs, and other technologies.

From this exercise, developers will learn how to:

  • Connect the Intel® Edison board or Intel® IoT Gateway, computing platforms designed for prototyping and producing IoT and wearable computing products.
  • Interface with the Intel® Edison board or Arduino 101* (branded Genuino 101* outside the U.S.) board IO and sensor repository using MRAA and UPM from the Intel® IoT Developer Kit, a complete hardware and software solution to help developers explore the IoT and implement innovative projects.
  • Store the Plant Lighting System data using Azure Redis Cache* from Microsoft Azure*, Redis Store* from IBM Bluemix*, or ElastiCache* using Redis* from Amazon Web Services (AWS)*, different cloud services for connecting IoT solutions including data analysis, machine learning, and a variety of productivity tools to simplify the process of connecting your sensors to the cloud and getting your IoT project up and running quickly.
  • Set up an MQTT-based server using IoT Hub from Microsoft Azure*, IoT from IBM Bluemix*, or IoT from Amazon Web Services (AWS)*, different cloud machine-to-machine messaging services based on the industry-standard MQTT protocol.
  • Invoke the services of the Twilio* API for sending text messages.

This article continues here on GitHub.

Code Sample: Smart Stove Top in Python*


Introduction

This Smart Stove Top application is part of a series of how-to Intel® Internet of Things (IoT) code sample exercises using the Intel® IoT Developer Kit, Intel® Edison board, Intel® IoT Gateway, cloud platforms, APIs, and other technologies.

From this exercise, developers will learn how to:

  • Connect the Intel® Edison board or Intel® IoT Gateway, computing platforms designed for prototyping and producing IoT and wearable computing products.
  • Interface with the Intel® Edison board or Arduino 101* (branded Genuino 101* outside the U.S.) board IO and sensor repository using MRAA and UPM from the Intel® IoT Developer Kit, a complete hardware and software solution to help developers explore the IoT and implement innovative projects.
  • Store the Smart Stove Top data using Azure Redis Cache* from Microsoft Azure*, Redis Store* from IBM Bluemix*, or ElastiCache* using Redis* from Amazon Web Services (AWS)*, different cloud services for connecting IoT solutions including data analysis, machine learning, and a variety of productivity tools to simplify the process of connecting your sensors to the cloud and getting your IoT project up and running quickly.
  • Set up an MQTT-based server using IoT Hub from Microsoft Azure*, IoT from IBM Bluemix*, or IoT from Amazon Web Services (AWS)*, different cloud machine-to-machine messaging services based on the industry-standard MQTT protocol.

This article continues here on GitHub.

Code Sample: Watering System in Python*


Introduction

This Watering System application is part of a series of how-to Intel® Internet of Things (IoT) code sample exercises using the Intel® IoT Developer Kit, Intel® Edison development platform, cloud platforms, APIs, and other technologies.

From this exercise, developers will learn how to:

  • Connect the Intel® Edison development platform, a computing platform designed for prototyping and producing IoT and wearable computing products.
  • Interface with the Intel® Edison platform IO and sensor repository using MRAA and UPM from the Intel® IoT Developer Kit, a complete hardware and software solution to help developers explore the IoT and implement innovative projects.
  • Store the Watering System data using Azure Redis Cache* from Microsoft Azure*, Redis Store* from IBM Bluemix*, or ElastiCache* using Redis* from Amazon Web Services (AWS)*, different cloud services for connecting IoT solutions including data analysis, machine learning, and a variety of productivity tools to simplify the process of connecting your sensors to the cloud and getting your IoT project up and running quickly.
  • Set up an MQTT-based server using IoT Hub from Microsoft Azure*, IoT from IBM Bluemix*, or IoT from Amazon Web Services (AWS)*, different cloud machine-to-machine messaging services based on the industry-standard MQTT protocol.
  • Invoke the services of the Twilio* API for sending text messages.

This article continues here on GitHub.

How to Build a Heart Rate Monitor Using Zephyr* on the Arduino 101* (branded Genuino 101* outside the U.S.) on Ubuntu* under VMware*


Introduction

This article is an extension of Flashing the Zephyr* Application Using a JTAG Adapter on the Arduino 101*. It demonstrates to new users how to build a heart rate monitor using Zephyr* on the X86 and ARC processors of an Arduino 101* platform. This is done with Ubuntu* in a VMware* Workstation using a JTAG adapter. The JTAG adapter method enables engineers to perform advanced development and debugging on the Arduino 101 platform through a small number of dedicated pins.

Hardware Components

The hardware components used in this project are listed below:

Setting up a VMware Workstation on Ubuntu*

Go to the VMware website to download and install the latest VMware Workstation Player for Windows*. Browse to the Ubuntu website to download the latest version of Ubuntu Desktop. Open VMware and create a new virtual machine using the downloaded Ubuntu image. Check the virtualization settings in your computer's BIOS to ensure they are enabled; otherwise, VMware will not work.

Set up a Development Environment for Zephyr* on Ubuntu* under VMware*

Ensure the Ubuntu OS is up to date and install dependent Ubuntu packages.

  • sudo apt-get update
  • sudo apt-get install git make gcc gcc-multilib g++ libc6-dev-i386 g++-multilib python3-ply

Install the Zephyr Software Development Kit (SDK) and run the installation binary.

When the “Enter target directory for SDK” prompt is displayed, enter the directory where you want to store the Zephyr SDK.

Export the Zephyr SDK environment variables. The following assumes the Zephyr SDK is installed in ~/zephyr-sdk.

  • export ZEPHYR_GCC_VARIANT=zephyr
  • export ZEPHYR_SDK_INSTALL_DIR=~/zephyr-sdk

Save the Zephyr SDK environment variables for later use in new sessions.

  • vi  ~/.zephyrrc
  • export ZEPHYR_GCC_VARIANT=zephyr
  • export ZEPHYR_SDK_INSTALL_DIR=~/zephyr-sdk

Building the Heart Rate Monitor on Zephyr

Clone the Zephyr repository onto the Ubuntu system and check out the v1.5.0 tag.

  • git clone https://gerrit.zephyrproject.org/r/zephyr && cd zephyr && git checkout tags/v1.5.0

Navigate to the Zephyr project directory and set the project environment variables.

  • cd zephyr
  • source zephyr-env.sh

Connecting FlySwatter2 to Arduino 101* (branded Genuino 101* outside the U.S.)

The first step is to connect the ARM micro JTAG connector to the FlySwatter2. Then, connect the FlySwatter2 to the Arduino 101 micro JTAG connector. The small white dot beside the micro JTAG header on the Arduino 101 indicates the location of pin one. Insert one end of the cable so it matches the dot. For more information on locating the micro JTAG connector on the Arduino 101, visit https://www.zephyrproject.org/doc/1.4.0/board/arduino_101.html.


Figure 1: Connect FlySwatter2* to Arduino 101* using ARM* Micro JTAG Connector.

A Hardware Abstraction Layer (HAL) allows the computer operating system to interact with a hardware device. Add your username to the plugdev group so that you have permission to control the FlySwatter2.

$ sudo usermod -a -G plugdev $USERNAME

Grant members of the plugdev group permission to control the FlySwatter2.

$ sudo vi /etc/udev/rules.d/99-openocd.rules
# TinCanTools FlySwatter2
ATTRS{idVendor}=="0403", ATTRS{idProduct}=="6010", MODE="664", GROUP="plugdev"

Reload udev rules to apply the new udev rule without restarting the system.

$ sudo udevadm control --reload-rules

Insert the second standard A-plug to B-plug USB Type-B cable into the FlySwatter2 and the computer, then check that the adapter is detected:

$ dmesg | grep FTDI

In the Ubuntu window, the FTDI USB Serial device should be displayed as shown below.


Figure 2: FlySwatter2 on Ubuntu.

The FlySwatter2 device should also appear in Ubuntu as a removable device.


Figure 3: FlySwatter2 pop up message on virtual machine.

Backing Up and Restoring the Factory Settings

Visit Flashing the Zephyr* Application Using a JTAG Adapter on the Arduino 101* to back up and restore the Arduino 101 factory settings.

Flashing the Zephyr* Image onto the Intel® Quark™ SE SoC

Navigate to the Zephyr project and build the binary image. The board type for X86 processor is arduino_101_factory.

$ cd zephyr
$ make pristine
$ make BOARD=arduino_101_factory

Ensure the FlySwatter2 is connected to the Arduino 101 platform and the computer, and then flash the image.

$ make BOARD=arduino_101_factory flash

The Heart Rate Monitor image should be successfully flashed onto the X86 processor.


Figure 4: Flash image success message.

Flashing the Zephyr* Image onto the ARC Processor

Navigate to the Zephyr project and build the binary image. The board type for ARC processor is arduino_101_sss_factory.

$ cd heartrate-monitor
$ make pristine
$ make BOARD=arduino_101_sss_factory

Ensure the FlySwatter2 is connected to the Arduino 101 platform and the computer, and then flash the image.

$ make BOARD=arduino_101_sss_factory flash

Troubleshooting

This section contains tips for establishing and detecting permission to control the FlySwatter2.

Got “Error: libusb_open() failed with LIBUSB_ERROR_ACCESS” when flashing the Zephyr image onto the X86 or ARC processor


Figure 5: Error LIBUSB_ERROR_ACCESS

Ensure that your username has been added to the plugdev group so that you have permission to control the FlySwatter2.

$ sudo usermod -a -G plugdev $USERNAME

Also ensure permission was granted for plugdev to control the FlySwatter2.

$ sudo vi /etc/udev/rules.d/99-openocd.rules
# TinCanTools FlySwatter2
ATTRS{idVendor}=="0403", ATTRS{idProduct}=="6010", MODE="664", GROUP="plugdev"

Got “Error: no device found” when flashing the Zephyr image onto the X86 or ARC processor


Figure 6: Error No device found

Make sure Ubuntu detected the FlySwatter2 device. Unplugging and re-plugging the standard A-plug to B-plug USB Type-B cable between the FlySwatter2 and the computer may help Ubuntu detect the FlySwatter2. Then test the connection with the following command:

$ dmesg | grep FTDI

Summary

We have described how to build and flash the Zephyr Heart Rate Monitor image onto the ARC or X86 processor of the Arduino 101 platform on Ubuntu in VMware. Experiment with the Zephyr Heart Rate Monitor application by connecting a portable device that supports BLE, such as a Jarv RunBT, and a Grove-LCD RGB Backlight to the Arduino 101 board. Install a BLE app such as nRF Toolbox on an Android device and pair it with the device connected to the Arduino 101. Follow the instructions on the portable device for heart rate input. The heart rate data should appear on the Android screen and the Grove LCD display. To find more information about the Intel® Curie™ module and the Zephyr project, go to https://software.intel.com/en-us/iot/hardware/curie.

Helpful References

About the Author

Nancy Le is a software engineer at Intel Corporation in the Software and Services Group working on Intel® Atom™ processor scale-enabling and IoT projects.

 

How to Add a Landing Page


Curabitur blandit tempus porttitor. Donec sed odio dui. Curabitur blandit tempus porttitor. Aenean lacinia bibendum nulla sed consectetur. Sed posuere consectetur est at lobortis.

Cras justo odio, dapibus ac facilisis in, egestas eget quam. Cras mattis consectetur purus sit amet fermentum. Praesent commodo cursus magna, vel scelerisque nisl consectetur et. Cum sociis natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Nullam id dolor id nibh ultricies vehicula ut id elit.


MariaDB* Performance with Intel® Xeon® Processor E5 v4 Family


MariaDB* increases database throughput by 51% and cuts response times by 15%

As businesses become more and more data intensive, the cost per transaction becomes an important metric. There are two ways to lower cost per transaction. The first is to lower the cost of data infrastructure; and the second is to increase hardware efficiency. With the MariaDB Enterprise and Intel® Xeon® processor-based solution, organizations can do both by using an enterprise subscription to reduce database costs and multi-core processors to increase performance with existing servers.

“By adopting the Intel® Xeon® Processor E5-2600 v4, our users and customers will not only get faster response times, they’ll also reduce total cost of ownership, getting even more value out of MariaDB.”
— Bruno Šimić, Solutions Engineer, MariaDB
 

Using MPI-3 Shared Memory in Intel® Xeon Phi™ Processors


This whitepaper introduces the MPI-3 shared memory feature, the corresponding APIs, and a sample program to illustrate the use of MPI-3 shared memory in the Intel® Xeon Phi™ processor.

Introduction to MPI-3 Shared Memory

MPI-3 shared memory is a feature introduced in version 3.0 of the message passing interface (MPI) standard. It is implemented in Intel® MPI Library version 5.0.2 and beyond. MPI-3 shared memory allows multiple MPI processes to allocate and have access to the shared memory in a compute node. For applications that require multiple MPI processes to exchange huge local data, this feature reduces the memory footprint and can improve performance significantly.

In the MPI standard, each MPI process has its own address space. With MPI-3 shared memory, each MPI process exposes its own memory to other processes. The following figure illustrates the concept of shared memory: Each MPI process allocates and maintains its own local memory, and exposes a portion of its memory to the shared memory region. All processes then can have access to the shared memory region. Using the shared memory feature, users can reduce the data exchange among the processes.

Figure 1

By default, the memory created by an MPI process is private. It is best to use MPI-3 shared memory when only memory needs to be shared and all other resources remain private. As each process has access to the shared memory region, users need to pay attention to process synchronization when using shared memory.

Sample Code

In this section, sample code is provided to illustrate the use of MPI-3 shared memory.

A total of eight MPI processes are created on the node. Each process maintains a long array of 32 million elements. For each element j in its array, the process updates the element value based on its current value and the values of element j in the arrays of the two nearest-neighbor processes, and the same procedure is applied to the whole array. The following pseudo-code shows the computation when running the program with eight MPI processes for 64 iterations:

Repeat the following procedure 64 times:
    for each MPI process n from 0 to 7:
        for each element j in the array An:
            An[j] ← 0.5*An[j] + 0.25*Aprevious[j] + 0.25*Anext[j]

where An is the long array belonging to process n, and An[j] is the value of element j in the array belonging to process n. In this program, since each process exposes its local memory to the others, all processes can access all arrays, although each process only needs the two neighbor arrays (for example, process 0 needs data from processes 1 and 7, process 1 needs data from processes 0 and 2, and so on).

Figure 2

Besides the basic APIs used for MPI programming, the following MPI-3 shared memory APIs are introduced in this example:

  • MPI_Comm_split_type: Used to create a new communicator where all processes share a common property. In this case, we pass MPI_COMM_TYPE_SHARED as an argument in order to create a shared memory from a parent communicator such as MPI_COMM_WORLD, and decompose the communicator into a shared memory communicator shmcomm.
  • MPI_Win_allocate_shared: Used to create a shared memory that is accessible by all processes in the shared memory communicator. Each process exposes its local memory to all other processes, and the size of the local memory allocated by each process can be different. By default, the total shared memory is allocated contiguously. The user can pass an info hint “alloc_shared_noncontig” to specify that the shared memory does not have to be contiguous, which can cause performance improvement, depending on the underlying hardware architecture. 
  • MPI_Win_free: Used to release the memory.
  • MPI_Win_shared_query: Used to query the address of the shared memory of an MPI process.
  • MPI_Win_lock_all and MPI_Win_unlock_all: Used to start an access epoch to all processes in the window. Only shared epochs are needed. The calling process can access the shared memory on all processes.
  • MPI_Win_sync: Used to ensure the completion of copying the local memory to the shared memory.
  • MPI_Barrier: Used to block the caller process on the node until all processes reach a barrier. The barrier synchronization API works across all processes.
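
The downloadable sample referenced in the Appendix is a C program. As a purely illustrative companion, the sketch below expresses the same pattern with the mpi4py Python bindings, which wrap the identical MPI-3 calls; it assumes mpi4py and NumPy are installed and uses a much smaller array than the 32-million-element arrays described above. It shows the communicator split, the shared window allocation, the address query, and the barrier/sync points around each stencil update.

# mpishared_sketch.py -- run with: mpirun -n 8 python mpishared_sketch.py
from mpi4py import MPI
import numpy as np

# Split MPI_COMM_WORLD into a node-local, shared-memory communicator.
shmcomm = MPI.COMM_WORLD.Split_type(MPI.COMM_TYPE_SHARED)
rank, nprocs = shmcomm.Get_rank(), shmcomm.Get_size()

N = 1 << 20                                   # elements per process (small, for illustration)
itemsize = MPI.DOUBLE.Get_size()

# Each process contributes one segment of the shared window (contiguous by default).
win = MPI.Win.Allocate_shared(N * itemsize, itemsize, comm=shmcomm)

# Query every rank's segment address and wrap it in a NumPy view.
arrays = []
for r in range(nprocs):
    buf, _ = win.Shared_query(r)
    arrays.append(np.ndarray(buffer=buf, dtype='d', shape=(N,)))

mine = arrays[rank]
mine[:] = rank                                # initialize the local segment

win.Lock_all()                                # shared access epoch on all ranks
for _ in range(64):
    win.Sync()                                # publish local stores
    shmcomm.Barrier()                         # everyone's data is now visible
    prev, nxt = arrays[(rank - 1) % nprocs], arrays[(rank + 1) % nprocs]
    new = 0.5 * mine + 0.25 * prev + 0.25 * nxt   # read neighbors first,
    shmcomm.Barrier()                             # wait until all reads are done,
    mine[:] = new                                 # then overwrite the local segment
win.Unlock_all()
win.Free()

if rank == 0:
    print("done on", nprocs, "processes")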

Basic Performance Tuning for Intel® Xeon Phi™ Processor

This test is run on an Intel Xeon Phi processor 7250 at 1.40 GHz with 68 cores, installed with Red Hat Enterprise Linux* 7.2 and Intel® Xeon Phi™ Processor Software 1.5.1, and Intel® Parallel Studio 2017 update 2. By default, the Intel compiler will try to vectorize the code, and each MPI process has a single thread of execution. OpenMP* pragma is added at loop level for later use. To compile the code, run the following command line to generate the binary mpishared.out:

$ mpiicc mpishared.c -qopenmp -o mpishared.out
$ mpirun -n 8 ./mpishared.out
Elapsed time in msec: 5699 (after 64 iterations)

To explore thread parallelism, run four threads per core and re-compile with -xMIC-AVX512 to take advantage of Intel® Advanced Vector Extensions 512 (Intel® AVX-512) instructions:

$ mpiicc mpishared.c -qopenmp -xMIC-AVX512 -o mpishared.out
$ export OMP_NUM_THREADS=4
$ mpirun -n 8 ./mpishared.out
Elapsed time in msec: 4535 (after 64 iterations)

As MCDRAM in this system is currently configured in flat mode, the Intel Xeon Phi processor appears as two NUMA nodes. Node 0 contains all CPUs and the on-platform DDR4 memory, while node 1 contains the on-package MCDRAM memory:

$ numactl -H
available: 2 nodes (0-1)
node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271
node 0 size: 98200 MB
node 0 free: 92775 MB
node 1 cpus:
node 1 size: 16384 MB
node 1 free: 15925 MB
node distances:
node   0   1
  0:  10  31
  1:  31  10

To allocate the memory in the MCDRAM (node 1), pass the argument –m 1 to the command numactl as follows:

$ numactl -m 1 mpirun -n 8 ./mpishared.out
Elapsed time in msec: 3070 (after 64 iterations)

This simple optimization technique greatly improves performance speeds.

Summary

This whitepaper introduced the MPI-3 shared memory feature, followed by sample code that used MPI-3 shared memory APIs. The pseudo-code explained what the program is doing, along with an explanation of the shared memory APIs. The program ran on an Intel Xeon Phi processor and was further optimized with simple techniques.

Reference

  1. MPI Forum, MPI 3.0
  2. Message Passing Interface Forum, MPI: A Message-Passing Interface Standard Version 3.0
  3. The MIT Press, Using Advanced MPI
  4. James Reinders, Jim Jeffers, Publisher: Morgan Kaufmann, Chapter 16 - MPI-3 Shared Memory Programming Introduction, High Performance Parallelism Pearls Volume Two

Appendix

The code of the sample MPI program is available for download.

ADXL345* Accelerometer Tutorial


Data from the built-in 3-axis accelerometer of the Terasic DE10-Nano is measured on all three axes to show when the board is in motion. The raw output of the accelerometer is converted to g-force values by a sensor library and then sent to graphing software for data visualization and interpretation.

In this tutorial you will:

  • Interface with the board's built-in digital accelerometer using an I2C interface.
  • Use Intel’s I/O and sensor libraries (MRAA and UPM) to get data from the accelerometer.
  • Monitor and observe acceleration data for small vibration and movement along the x, y, z axes.
  • Translate the acceleration data into +/- g-force values to demonstrate the motion of the Terasic DE10-Nano board.
  • Show the accelerometer data using different open-source technologies:
    • Express* (web server)
    • Plotly* (graphing library)
    • Websocket* (data stream).

Note: Both Express.js and Plotly.js are non-restrictive MIT* licensed technologies.

Visit GitHub for this project's code samples. 
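
As a taste of what the project code does, here is a minimal Python sketch that reads the raw ADXL345 registers directly through the MRAA I2C API and converts them to approximate g-force values. The bus number (0) and the device address (0x53) are assumptions that depend on how your board exposes the sensor, and the UPM driver used by the actual samples handles these details for you; treat this purely as an illustration of the register-level protocol.

# adxl345_raw.py -- poll the ADXL345 over I2C and print g-force values
import time
import mraa

ADXL345_ADDR = 0x53      # common ADXL345 address (assumption; check your board)
POWER_CTL    = 0x2D      # power-control register
DATAX0       = 0x32      # first of six data registers: X0, X1, Y0, Y1, Z0, Z1
SCALE        = 0.0039    # ~3.9 mg per LSB in the default +/-2 g, 10-bit mode

i2c = mraa.I2c(0)                    # bus 0 is an assumption; adjust for your board
i2c.address(ADXL345_ADDR)
i2c.writeReg(POWER_CTL, 0x08)        # set the Measure bit to start sampling

while True:
    i2c.address(ADXL345_ADDR)
    raw = i2c.readBytesReg(DATAX0, 6)
    # Each axis is a little-endian signed 16-bit value.
    x = int.from_bytes(raw[0:2], "little", signed=True) * SCALE
    y = int.from_bytes(raw[2:4], "little", signed=True) * SCALE
    z = int.from_bytes(raw[4:6], "little", signed=True) * SCALE
    print("g-force  x=%+.3f  y=%+.3f  z=%+.3f" % (x, y, z))
    time.sleep(0.5)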

Additional Resources

BigDL – Scale-out Deep Learning on Apache Spark* Cluster


Summary

BigDL is a distributed deep learning library for Apache Spark*. With BigDL, users can write their deep learning applications as standard Spark programs, which can run directly on top of existing Spark or Hadoop* clusters. This can enable deep learning functionalities on existing Hadoop/Spark clusters and analyze data that is already present in HDFS*, HBase*, Hive*, etc. Other common features of BigDL include:

  • Rich, deep learning support. Modeled after Torch*, BigDL provides comprehensive support for deep learning, including numeric computing (via Tensor) and high-level neural networks; in addition, users can load pre-trained Caffe* or Torch models into Spark programs using BigDL.
  • Extremely high performance. To achieve high performance, BigDL uses Intel® Math Kernel Library (Intel® MKL) and multi-threaded programming in each Spark task. Consequently, it is orders of magnitude faster than out-of-the-box open source Caffe, Torch, or TensorFlow on a single-node Intel® Xeon® processor (that is, comparable with a mainstream GPU).
  • Efficiently scale out. BigDL can efficiently scale out to perform data analytics at big data scale, by leveraging Apache Spark (a lightning-fast distributed data processing framework), as well as efficient implementations of synchronous SGD and all-reduce communications on Spark.

Figure 1 shows a basic overview of how a BigDL program is executed on an existing Spark cluster. With the help of a cluster manager and an application master process or a driver program, Spark tasks are distributed across the Spark worker nodes or containers (executors). BigDL enables faster execution of Spark tasks using Intel MKL.

Figure 1. Basic overview of a BigDL program running on a Spark* cluster.

Experimental Setup

Virtual Hadoop Cluster

The Cloudera* administrator training guide for Apache Hadoop was referenced for setting up an experimental four-node virtual Hadoop cluster with YARN* as a resource manager. Standalone Spark and Spark on YARN were both installed on the cluster.

Virtual Machine | Node_1                     | Node_2             | Node_3          | Node_4
Services        | NameNode                   | Secondary NameNode | ResourceManager | JobHistoryServer
                | NodeManager                | NodeManager        | NodeManager     | NodeManager
                | DataNode                   | DataNode           | DataNode        | DataNode
                | Spark Master, Spark Worker | Spark Worker       | Spark Worker    | Spark Worker

Physical Machine (Host) – System Configuration

System/Host Processor: Intel® Xeon® processor E7-8890 v4 @ 2.20 GHz (4 sockets)
Total Physical Cores: 96
Host Memory: 512 GB DDR-1600 MHz
Host OS: Linux*; version 3.10.0-327.el7.x86_64
Virtual Guests: 4

Virtual Machine Guest - System Configuration

System/Guest Processor: Intel® Xeon® processor E7-8890 v4 @ 2.20 GHz
Physical Cores: 18
Host Memory: 96 GB DDR-1600 MHz
Host OS: Linux*; version 2.6.32-642.13.1.el6.x86_64
Java version: 1.8.0_121
Spark version: 1.6
Scala version: 2.10.5
CDH version: 5.10

BigDL Installation

Prerequisites

Java*: Java is required for building BigDL. The latest version of Java can be downloaded from the Oracle website. It is highly recommended to use Java 8 when running with Spark 2.0; otherwise, you may observe performance issues.

 export JAVA_HOME=/usr/java/jdk1.8.0_121/

Maven*: Apache Maven, a software management tool, is required for downloading and building BigDL. The latest version of Maven can be downloaded and installed from the Maven website.

wget http://mirrors.ibiblio.org/apache/maven/maven-3/3.3.9/binaries/apache-maven-3.3.9-bin.tar.gz
export M2_HOME=/home/training/Downloads/apache-maven-3.3.9
export PATH=${M2_HOME}/bin:$PATH
export MAVEN_OPTS="-Xmx2g -XX:ReservedCodeCacheSize=512m"

When compiling with Java 7, you need to add the option "-XX:MaxPermSize=1G" to avoid an OutOfMemoryError while using BigDL:

export MAVEN_OPTS="-Xmx2g -XX:ReservedCodeCacheSize=512m -XX:MaxPermSize=1G"

Building BigDL

Download BigDL. BigDL source code is available at GitHub*.

git clone https://github.com/intel-analytics/BigDL.git

It is highly recommended that you build BigDL using the make-dist.sh script.

bash make-dist.sh

This creates a dist directory with a utility script (${BigDL_HOME}/dist/bin/bigdl.sh) to set up the environment for BigDL, and packaged JAR* files with the required dependencies for Spark, Python*, and other supporting tools and libraries.

By default, make-dist.sh uses Scala* 2.10 for Spark 1.5.x or 1.6.x, and Scala 2.11 for Spark 2.0. Alternative ways to build BigDL are published on the BigDL build page.

BigDL and Spark Environment

BigDL can be used with a variety of local and cluster environments using Java, standalone Spark, Spark with Hadoop YARN, or the Amazon EC2 cloud. Here, we use LeNet* as an example to explain and differentiate each mode. Further details about the LeNet model and its usage are explained in the following sections.

  • Local Java application - In this mode, the application can be launched with BigDL using the local Java environment.
    ${BigDL_HOME}/dist/bin/bigdl.sh -- java \
      -cp ${BigDL_HOME}/dist/lib/bigdl-0.1.0-SNAPSHOT-jar-with-dependencies-and-spark.jar \
       com.intel.analytics.bigdl.models.lenet.Train \
       -f $MNIST_DIR \
       --core 8 --node 1 \
       --env local -b 512 -e 1
  • Spark standalone - In this mode, Spark’s own cluster manager is used to allocate resources across applications running with BigDL.
    • Local environment - In this mode a BigDL application is launched locally using the --master local[$NUM_OF_THREADS] and --env local flags. For example, LeNet model training can be started on a local node as follows:
      ${BigDL_HOME}/dist/bin/bigdl.sh -- spark-submit --master local[16] \
        --driver-class-path ${BigDL_HOME}/dist/lib/bigdl-0.1.0-SNAPSHOT-jar-with-dependencies.jar \
        --class com.intel.analytics.bigdl.models.lenet.Train \
        ${BigDL_HOME}/dist/lib/bigdl-0.1.0-SNAPSHOT-jar-with-dependencies.jar \
        -f $MNIST_DIR \
        --core 16 --node 1 \
        -b 512 -e 1 --env local
  • Spark cluster environment - In this mode a BigDL application is launched in a cluster environment. Depending on where the driver is deployed there are two ways in which BigDL can be used in a Spark cluster environment.
    • Spark standalone cluster in client deploy mode - In this mode the driver program is launched locally as an external client. This is the default mode, where application progress can be viewed on the client.
      ${BigDL_HOME}/dist/bin/bigdl.sh -- spark-submit --master spark://Node_1:7077 \
         --deploy-mode client --executor-cores 8 --executor-memory 4g --total-executor-cores 32 \
         --driver-class-path ${BigDL_HOME}/dist/lib/bigdl-0.1.0-SNAPSHOT-jar-with-dependencies.jar \
         --class com.intel.analytics.bigdl.models.lenet.Train \
         ${BigDL_HOME}/dist/lib/bigdl-0.1.0-SNAPSHOT-jar-with-dependencies.jar \
         -f $MNIST_DIR \
         --core 8 --node 4 \
         -b 512 -e 1 --env spark
    • Spark standalone cluster in cluster deploy mode - In this mode the driver is launched on one of the worker nodes. You can use webUI* or Spark log files to track the progress of your application.
      ${BigDL_HOME}/dist/bin/bigdl.sh -- spark-submit --master spark://Node_1:7077 \
        --deploy-mode cluster --executor-cores 8 --executor-memory 4g \
        --driver-cores 1 --driver-memory 4g --total-executor-cores 33 \
        --driver-class-path ${BigDL_HOME}/dist/lib/bigdl-0.1.0-SNAPSHOT-jar-with-dependencies.jar \
        --class com.intel.analytics.bigdl.models.lenet.Train \
         ${BigDL_HOME}/dist/lib/bigdl-0.1.0-SNAPSHOT-jar-with-dependencies.jar \
         -f $MNIST_DIR \
         --core 8 --node 4 \
         -b 512 -e 1 --env spark
  • Spark with YARN as a cluster manager - In this mode Hadoop’s YARN cluster manager is used to allocate resources across applications running with BigDL.
    • Client deployment mode - In this mode the spark driver runs on the host where the job is submitted.
      ${BigDL_HOME}/dist/bin/bigdl.sh -- spark-submit --master yarn \
        --deploy-mode client --executor-cores 16 --executor-memory 64g \
        --driver-cores 1 --driver-memory 4g --num-executors 4 \
        --driver-class-path ${BigDL_HOME}/dist/lib/bigdl-0.1.0-SNAPSHOT-jar-with-dependencies.jar \
        --class com.intel.analytics.bigdl.models.lenet.Train \
        ${BigDL_HOME}/dist/lib/bigdl-0.1.0-SNAPSHOT-jar-with-dependencies.jar \
        -f $MNIST_DIR \
        --core 16 --node 4 \
        -b 512 -e 1 --env spark
    • Cluster deployment mode - In this mode the spark driver runs on the cluster host chosen by YARN.
      ${BigDL_HOME}/dist/bin/bigdl.sh -- spark-submit --master yarn --deploy-mode cluster \
        --executor-cores 16 --executor-memory 64g \
        --driver-cores 1 --driver-memory 4g --num-executors 4 \
        --driver-class-path ${BigDL_HOME}/dist/lib/bigdl-0.1.0-SNAPSHOT-jar-with-dependencies.jar \
        --class com.intel.analytics.bigdl.models.lenet.Train \
        ${BigDL_HOME}/dist/lib/bigdl-0.1.0-SNAPSHOT-jar-with-dependencies.jar \
        -f $MNIST_DIR \
        --core 16 --node 4 \
        -b 512 -e 1 --env spark
  • Running on Amazon EC2 - The BigDL team has made available a public Amazon Machine Image* (AMI*) file for experimenting with BigDL with Spark on EC2. Detailed information on steps to run BigDL examples on Spark in the Amazon EC2 environment is provided on GitHub.

BigDL Sample Models

This tutorial shows training and testing for two sample models, LeNet and VGG*, to demonstrate usage of BigDL for distributed deep learning on Apache Spark.

LeNet

LeNet 5 is a classical CNN model used in digital number classification. For detailed information, please refer to http://yann.lecun.com/exdb/lenet/.

The MNIST* database can be downloaded from http://yann.lecun.com/exdb/mnist/. We downloaded images and labels for both training and validation data. 

A JAR file for training and testing the sample LeNet model is created as part of the BigDL installation. If not yet created, please refer to the section on Building BigDL.

Training the LeNet Model

An example command to train the LeNet model using BigDL with Spark running on YARN can be given as follows:

${BigDL_HOME}/dist/bin/bigdl.sh -- spark-submit --master yarn \
  --deploy-mode cluster --executor-cores 16 --executor-memory 64g \
  --driver-cores 1 --driver-memory 4g --num-executors 4 \
  --driver-class-path ${BigDL_HOME}/dist/lib/bigdl-0.1.0-SNAPSHOT-jar-with-dependencies.jar \
  --class com.intel.analytics.bigdl.models.lenet.Train \
  ${BigDL_HOME}/dist/lib/bigdl-0.1.0-SNAPSHOT-jar-with-dependencies.jar \
  -f $MNIST_DIR \
  --core 16 --node 4 \
  -b 512 -e 5 --env spark --checkpoint ~/models

Usage:
LeNet parameters
  -f <value> | --folder <value>
        where you put the MNIST data
  -b <value> | --batchSize <value>
        batch size
  --model <value>
        model snapshot location
  --state <value>
        state snapshot location
  --checkpoint <value>
        where to cache the model
  -r <value> | --learningRate <value>
        learning rate
  -e <value> | --maxEpoch <value>
        epoch numbers
  -c <value> | --core <value>
        cores number on each node
  -n <value> | --node <value>
        node number to train the model
  -b <value> | --batchSize <value>
        batch size (currently this value should be a multiple of --core * --node)
  --overWrite
        overwrite checkpoint files
  --env <value>
        execution environment
YARN parameters
  --master yarn --deploy-mode cluster
        Use Spark with the YARN cluster manager in cluster deployment mode.
  --executor-cores 16 --num-executors 4
        Sets the number of executors and cores per executor for YARN to match the --core and --node parameters for LeNet training. Currently this is a known issue and hence is required for successful cluster training with BigDL using Spark.

Testing the LeNet Model

An example command to test the LeNet model using BigDL with Spark running on YARN can be given as follows:

${BigDL_HOME}/dist/bin/bigdl.sh -- spark-submit --master yarn --deploy-mode cluster \
  --executor-cores 16 --executor-memory 64g \
  --driver-cores 1 --driver-memory 4g --num-executors 4 \
  --driver-class-path ${BigDL_HOME}/dist/lib/bigdl-0.1.0-SNAPSHOT-jar-with-dependencies.jar \
  --class com.intel.analytics.bigdl.models.lenet.Test \
  ${BigDL_HOME}/dist/lib/bigdl-0.1.0-SNAPSHOT-jar-with-dependencies.jar \
  -f $MNIST_DIR \
  --core 16 --nodeNumber 4 \
  -b 512 --env spark --model ~/models/model.591

Usage:
  -f <value> | --folder <value>
        where you put the MNIST data
  --model <value>
        model snapshot location (model.iteration#)
  -c <value> | --core <value>
        cores number on each node
  -n <value> | --nodeNumber <value>
        node number to test the model
  -b <value> | --batchSize <value>
        batch size
  --env <value>
        execution environment

For quick verification, the model accuracy can be checked from the YARN application logs as follows:

yarn logs -applicationId <application_id> | grep accuracy

Refer to the Hadoop cluster web UI for additional information about this training run.

VGG model on CIFAR-10*

This example demonstrates the use of BigDL to train and test a VGG-like model on the CIFAR-10* dataset. Details about this model can be found here.
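
To give a concrete sense of what "VGG-like" means here, the following Scala sketch shows the kind of convolution-batch normalization-ReLU block such a model stacks, written against BigDL's Sequential API. It is a simplified, illustrative excerpt, not the shipped sample model; the actual definition lives under com.intel.analytics.bigdl.models.vgg in the BigDL source and stacks many such blocks followed by fully connected layers.

import com.intel.analytics.bigdl.nn._
import com.intel.analytics.bigdl.numeric.NumericFloat

// One VGG-style block: 3x3 convolution (stride 1, padding 1), batch
// normalization, then ReLU. The channel counts below are illustrative.
def convBNReLU(nIn: Int, nOut: Int): Sequential[Float] = {
  val block = Sequential()
  block.add(SpatialConvolution(nIn, nOut, 3, 3, 1, 1, 1, 1))
  block.add(SpatialBatchNormalization(nOut, 1e-3))
  block.add(ReLU(true))
  block
}

// A CIFAR-10 feature extractor stacks blocks like these with max pooling
// in between, e.g. 3 -> 64 -> 64 channels on the 32x32 input.
val features = Sequential()
features.add(convBNReLU(3, 64))
features.add(convBNReLU(64, 64))
features.add(SpatialMaxPooling(2, 2, 2, 2))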

You can download the binary version of the CIFAR-10 dataset from here.

A JAR file for training and testing the sample VGG model is created as part of the BigDL installation. If not yet created, please refer to the section Building BigDL.

Training the VGG Model

An example command to train the VGG model on the CIFAR-10 dataset using BigDL with Spark running on YARN can be given as follows:

${BigDL_HOME}/dist/bin/bigdl.sh -- spark-submit --master yarn --deploy-mode cluster \
  --executor-cores 16 --executor-memory 64g \
  --driver-cores 1 --driver-memory 16g --num-executors 4 \
  --driver-class-path ${BigDL_HOME}/dist/lib/bigdl-0.1.0-SNAPSHOT-jar-with-dependencies.jar \
  --class com.intel.analytics.bigdl.models.vgg.Train \
  ${BigDL_HOME}/dist/lib/bigdl-0.1.0-SNAPSHOT-jar-with-dependencies.jar \
  -f $VGG_DIR \
  --core 16 --node 4 \
  -b 512 -e 5 --env spark --checkpoint ~/models

Usage:
  -f <value> | --folder <value>
        where you put the Cifar10 data
  --model <value>
        model snapshot location
  --state <value>
        state snapshot location
  --checkpoint <value>
        where to cache the model and state
  -c <value> | --core <value>
        cores number on each node
  -n <value> | --node <value>
        node number to train the model
  -e <value> | --maxEpoch <value>
        epoch numbers
  -b <value> | --batchSize <value>
        batch size
  --overWrite
        overwrite checkpoint files
  --env <value>
        execution environment

Testing the VGG Model

An example command to test the VGG model on the CIFAR-10 dataset using BigDL with Spark running on YARN can be given as follows:

${BigDL_HOME}/dist/bin/bigdl.sh -- spark-submit --master yarn \
  --deploy-mode cluster --executor-cores 16 --executor-memory 64g \
  --driver-cores 1 --driver-memory 16g --num-executors 4 \
  --driver-class-path ${BigDL_HOME}/dist/lib/bigdl-0.1.0-SNAPSHOT-jar-with-dependencies.jar \
  --class com.intel.analytics.bigdl.models.vgg.Test \
  ${BigDL_HOME}/dist/lib/bigdl-0.1.0-SNAPSHOT-jar-with-dependencies.jar \
  -f $VGG_DIR \
  --core 16 --nodeNumber 4 \
  -b 512 --env spark --model ~/models/model.491

Usage:
  -f <value> | --folder <value>
        where you put the Cifar10 data
  --model <value>
        model snapshot location
  -c <value> | --core <value>
        cores number on each node
  -n <value> | --nodeNumber <value>
        node number to test the model
  -b <value> | --batchSize <value>
        batch size
  --env <value>
        execution environment

Detailed steps for training and testing other sample models with BigDL, such as recurrent neural networks (RNNs), residual networks (ResNet), Inception*, and autoencoders, are published on the BigDL GitHub site.

BigDL can also be used to load pre-trained Torch and Caffe models into a Spark program for classification or prediction. One such example is shown on the BigDL GitHub site.
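
As a rough outline of that pattern, the Scala sketch below loads a Torch-serialized model and runs a single local forward pass to sanity-check it. The loader name Module.loadTorch (and the commented Module.loadCaffe variant) reflects the BigDL 0.1.x API as we understand it, and the file paths are placeholders, so check the BigDL documentation for the exact signatures in your release before using this in a Spark job.

import com.intel.analytics.bigdl.nn.Module
import com.intel.analytics.bigdl.numeric.NumericFloat
import com.intel.analytics.bigdl.tensor.Tensor

// Load a Torch-serialized model (the path is a placeholder).
val model = Module.loadTorch[Float]("/path/to/pretrained.t7")

// For a Caffe model, weights typically come from a prototxt plus a
// .caffemodel file; the exact loader signature varies by BigDL release:
// val caffeModel = Module.loadCaffe[Float](bigdlDefinition,
//   "/path/to/deploy.prototxt", "/path/to/weights.caffemodel")

// A single local forward pass on a dummy input (shaped here for a
// MNIST-style network), just to verify that the model loads and
// produces an output of the expected shape.
val input = Tensor[Float](1, 28, 28).rand()
val output = model.forward(input)
println(output)

In a full Spark program, the loaded model would then be used for distributed classification or prediction, as demonstrated in the GitHub example referenced above.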

Performance Scaling

Figure 2 shows the performance scaling of training the VGG and ResNet models using BigDL on Spark with an increasing number of cores and nodes (virtual nodes in the current setup). Here, we compare the average time taken to train both models on the CIFAR-10 dataset for five epochs.

Figure 2: Performance scaling of VGG and ResNet with BigDL on Spark running with YARN.

Conclusion

In this article, we validated the steps to install and use BigDL for training and testing some commonly used deep neural network models on Apache Spark with a four-node virtual Hadoop cluster. We saw how BigDL can easily enable deep learning functionality on existing Hadoop/Spark clusters. The total time to train a model can be reduced significantly: first, through Intel MKL and multi-threaded programming within each Spark task, and second, by distributing Spark tasks across multiple nodes of the Hadoop/Spark cluster.

References

BigDL GitHub

Apache Spark

Spark on YARN – Cloudera Enterprise 5.10.x

LeNet/MNIST

VGG on CIFAR-10 in Torch

Deep Residual Learning for Image Recognition

CIFAR-10 Dataset

BigDL: Distributed Deep Learning on Apache Spark

BigDL: Known Issues

Cloudera Administrator Training for Apache Hadoop

Cloudera Archive – CDH 5.10

Java SE Development Kit

VirtualBox

Intel® Nervana™ AI Academy University Workshops and Seminars

$
0
0

The Intel® Nervana™ AI Academy Student Developer Program is engaging a large number of universities around the globe in 2017, hosting technical workshops and seminars on university campuses. Students from various fields, clubs, and interests are invited to attend and learn what Intel is doing in machine learning, deep learning, and artificial intelligence. Faculty and students also participate, showcasing their research and expertise in the field, while Intel presents the latest tools, technologies, and programs supporting AI on Intel® architecture.

Currently these workshops are being held in the Americas, Africa, India, China, and Europe. Each is adjusted to the needs and interests of the students, but most start with Intel or a guest host welcoming the students with an overview of the technical agenda and information on how students can engage with Intel going forward.

Video Overview of the Intel® Student Developer AI Workshop - University of California (UC) Berkeley

At these events, Intel reviews its technology and program offerings for students, including:

  • Technologies available to all students attending these events, including remote access to Intel® servers running Intel® optimized libraries and frameworks.

  • Select university club sponsorships for student clubs seeking AI content to share and discuss.

  • The Intel Student Ambassador Program, for advanced students seeking Intel's support and evangelism of their work on AI running on Intel architecture.

The figure above illustrates the various technologies optimized for Intel architecture, from Intel® CPUs to libraries such as the Intel® Math Kernel Library (Intel® MKL) and the Intel® Math Kernel Library for Deep Neural Networks (Intel® MKL-DNN). Deep learning frameworks are at the heart of emerging AI solutions, and Intel is working with each of the leading frameworks to optimize them for Intel architecture. The circled areas denote the technologies that are immediately and freely available to students for trial use as part of the Student Developer Program. Additionally, select students who apply to become Student Ambassadors will have dedicated access to servers based on the latest Intel® Xeon Phi™ processors, as well as access to designated tools, for as long as they participate in the program. Learn more about the Student Ambassador Program.

Sample Welcome and Agenda Talk at UC Berkeley Student Workshop for AI

Technical Sessions: These workshops provide at least two technical sessions, often tailored to the needs, curriculum, or focus of the university, and may cover:

  • Overview of machine learning and deep learning

  • Technical deep dive into deep learning

  • Demo/training on the Intel® Deep Learning SDK

  • Demo/training on Caffe* optimized for Intel architecture

  • Introduction to deep learning using Intel® Nervana™ technology and the Neon framework

  • Demo/training using Intel Nervana technology and the Neon framework

  • Faculty or student work in machine learning, deep learning, and AI

Example Sessions:

Introduction to deep learning with Intel® Nervana™ technology and the Neon framework

Deep learning demos of the Intel Nervana technology platform using the Neon framework

Student session on using a deep learning approach to model behavior in MOOCs

Get more information on the Intel Nervana AI Academy. Learn more about the Intel Student Ambassador Program for AI and how to apply.
