Channel: Intel Developer Zone Articles

Intel® Distribution for Python 2017 Update 2 Readme


Intel® Distribution for Python powered by Anaconda gives you ready access to high-performance tools and techniques to supercharge all your Python applications on modern Intel platforms. Whether you are a seasoned high-performance developer or a data scientist looking to speed up your workflows, the Intel Distribution for Python delivers an easy-to-install, performance-optimized Python experience to meet even your most demanding requirements.

The Intel® Distribution for Python 2017 Update 2 for Linux*, Windows*, and macOS* packages are now ready for download. The Intel® Distribution for Python is available as a stand-alone product and as part of the Intel® Parallel Studio XE.

New in this release:

  • Includes Intel-optimized Deep Learning frameworks Caffe and Theano, powered by the new Intel® MKL-DNN
  • Select Scikit-learn algorithms now accelerated with Intel® Data Analytics Acceleration Library for ~200X speedup
  • Arithmetic, transcendental, and 1D and multi-dimensional FFT functions are significantly faster in NumPy and SciPy

Refer to the Intel® Distribution for Python Release Notes for more details.

Contents:

  • Intel® Distribution for Python 2017 Update 2 for Linux*
    • File: l_python2_pu2_2017.2.045.tgz

      A file containing the complete product installation for Python 2.7 on Linux (x86-64bit/Intel® Xeon Phi™ coprocessor development)

    • File: l_python3_pu2_2017.2.045.tgz

      A file containing the free runtime environment installation for Python 3.5 on Linux (x86-64bit/Intel® Xeon Phi™ coprocessor development)

  • Intel® Distribution for Python 2017 Update 2 for Windows*
    • File: w_python27_pu2_2017.2.044.exe

      A file containing the complete product installation for Python 2.7 on Windows (x86-64bit development)

    • File: w_python35_pu2_2017.2.044.exe

      A file containing the free runtime environment installation for Python 3.5 on Windows (x86-64bit development)

  • Intel® Distribution for Python 2017 Update 2 for macOS*
    • File: intelpython27-2017.2.044.tgz

      A file containing the complete product installation for Python 2.7 on macOS (x86-64bit development)

    • File: intelpython35-2017.2.044.tgz

      A file containing the free runtime environment installation for Python 3.5 on macOS (x86-64bit development)


Intel® Parallel Studio XE 2017 Update 2 Readme


Intel® Parallel Studio XE 2017 Update 2 for Linux*, Windows*, and macOS*

Deliver top application performance and reliability with the Intel® Parallel Studio XE 2017 Update 2. This software development suite combines Intel's C/C++ compiler and Fortran compiler; performance and parallel libraries; error checking, code robustness, and performance profiling tools into a single suite offering.

Key Features

  • Faster code: Boost application performance that scales on today’s and next-generation processors
  • Create code faster: Utilize a toolset that simplifies creating fast, reliable parallel applications

This package is for users who develop on and build for Intel® 64 architectures on Linux*, Windows*, and macOS*, as well as customers running on Intel® Xeon Phi™ processors and coprocessors on Linux*. There are currently three editions of the suite:

  • Intel® Parallel Studio XE 2017 Update 2 Composer Edition, which includes:
    • Intel® C++ Compiler 17.0 Update 2
    • Intel® Fortran Compiler 17.0 Update 2
    • Intel® Data Analytics Acceleration Library (Intel® DAAL) 2017 Update 2
    • Intel® Integrated Performance Primitives (Intel® IPP) 2017 Update 2
    • Intel® Math Kernel Library (Intel® MKL) 2017 Update 2
    • Intel® Threading Building Blocks (Intel® TBB) 2017 Update 4
    • Intel-provided Debug Solutions
    • Intel® Distribution for Python* 2017 Update 2
  • Intel® Parallel Studio XE 2017 Update 2 Professional Edition adds the following utilities:
    • Intel® VTune™ Amplifier XE 2017 Update 2
    • Intel® Advisor 2017 Update 2
    • Intel® Inspector 2017 Update 2
  • Intel® Parallel Studio XE 2017 Update 2 Cluster Edition includes all previous tools plus:
    • Intel® MPI Library 2017 Update 2
    • Intel® Trace Analyzer and Collector 2017 Update 2
    • Intel® Cluster Checker 2017 Update 2 (Linux* only)
    • Intel® MPI Benchmarks 2017 Update 1

New in this release:

  • All components updated to current versions
  • Migration to SHA-256 digital signatures on Linux*
  • Intel® Advisor:
    • Roofline Analysis is released as a public feature
    • Added call stacks for FLOPS and Trip Counts that enable total metrics
    • Filter by module for Survey, FLOPS, and Trip Counts collections
  • Intel® Cluster Checker:
    • Added additional support for Intel® Xeon Phi™ Product Family x200 processors
    • Added additional support for Intel® Omni-Path Architecture
  • Intel® Data Analytics Acceleration Library:
    • Added Deep Learning feature extensions
    • Added API extensions for data parallelism scheme
  • Intel® Inspector: Support for C++17 std::shared_mutex
  • Intel® Integrated Performance Primitives:
    • Introduced support for the Intel® Xeon Phi™ processor x200 leverage boot mode in examples
    • Added new functions in ZLIB to support user-defined Huffman tables
  • Intel® Math Kernel Library:
    • Intel® AVX-512 code is dispatched by default on Intel® Xeon® processors
    • Added support for Intel® Threading Building Blocks in various functions
  • Intel® MPI Library: Added a new environment variable, I_MPI_MEMORY_LOCK, to prevent memory swapping to the hard drive
  • Intel® Threading Building Blocks:
    • Added template class gfx_factory to the flow graph API
    • Fixed a possible deadlock caused by missed wakeup signals in task_arena::execute()
  • Intel® Trace Analyzer and Collector:
    • Improved the color changing scheme
    • MPI Performance Snapshot adds Pcontrol support
    • MPI Performance Snapshot adds idle time per function metric
  • Intel® VTune™ Amplifier XE:
    • Added support for mixed Python and native code in Locks and Waits analysis
    • Added support for performance analysis of a guest Linux* operating system via Kernel-based Virtual Machine (KVM) from a Linux* host system with the KVM Guest OS option
    • Enriched HPC Performance Characterization

For more information on the changes listed above, please read the individual component release notes available from the main Intel® Parallel Studio XE Release Notes page.

Contents:

  • Linux* packages
    • File: parallel_studio_xe_2017_update2.tgz
      Offline Installer package which is larger and contains all components of the product
    • File: parallel_studio_xe_2017_update2_cluster_edition_online.tgz
      Online Installer for the Intel® Parallel Studio XE Cluster Edition which has smaller file size. This installer may save you download time as it allows you to select only those components you desire to download. You must be connected to the internet during installation with this installer.
    • File: parallel_studio_xe_2017_update2_professional_edition_online.tgz
      Online Installer for the Intel® Parallel Studio XE Professional Edition for Fortran and C++ which has smaller file size. This installer may save you download time as it allows you to select only those components you desire to download. You must be connected to the internet during installation with this installer.
    • File: parallel_studio_xe_2017_update2_professional_edition_for_cpp_online.tgz
      Online Installer for the Intel® Parallel Studio XE Professional Edition for C++ which has smaller file size. This installer may save you download time as it allows you to select only those components you desire to download. You must be connected to the internet during installation with this installer.
    • File: parallel_studio_xe_2017_update2_professional_edition_for_fortran_online.tgz
      Online Installer for the Intel® Parallel Studio XE Professional Edition for Fortran which has smaller file size. This installer may save you download time as it allows you to select only those components you desire to download. You must be connected to the internet during installation with this installer.
    • File: parallel_studio_xe_2017_update2_composer_edition.tgz
      Offline Installer package for the Intel® Parallel Studio XE Composer Edition for Fortran and C++ only
    • File: parallel_studio_xe_2017_update2_composer_edition_online.tgz
      Online Installer for the Intel® Parallel Studio XE Composer Edition for Fortran and C++ which has smaller file size. This installer may save you download time as it allows you to select only those components you desire to download. You must be connected to the internet during installation with this installer.
    • File: parallel_studio_xe_2017_update2_composer_edition_for_cpp.tgz
      Offline Installer package for the Intel® Parallel Studio XE Composer Edition for C++ only
    • File: parallel_studio_xe_2017_update2_composer_edition_for_cpp_online.tgz
      Online Installer for the Intel® Parallel Studio XE Composer Edition for C++ which has smaller file size. This installer may save you download time as it allows you to select only those components you desire to download. You must be connected to the internet during installation with this installer.
    • File: parallel_studio_xe_2017_update2_composer_edition_for_fortran.tgz
      Offline Installer package for the Intel® Parallel Studio XE Composer Edition for Fortran only
    • File: parallel_studio_xe_2017_update2_composer_edition_for_fortran_online.tgz
      Online Installer for the Intel® Parallel Studio XE Composer Edition for Fortran which has smaller file size. This installer may save you download time as it allows you to select only those components you desire to download. You must be connected to the internet during installation with this installer.
    • File: l_comp_lib_2017.2.174_comp.cpp_redist.tgz
      Redistributable Libraries C++
    • File: l_comp_lib_2017.2.174_comp.for_redist.tgz
      Redistributable Libraries Fortran
    • File: get-ipp-2017-crypto-library.htm
      Directions on how to obtain the Cryptography Library
  • Windows* packages
    • File: parallel_studio_xe_2017_update2_setup.exe
      Offline Installer package which is larger and contains all components of the product
    • File: parallel_studio_xe_2017_update2_cluster_edition_online_setup.exe
      Online Installer for the Intel® Parallel Studio XE Cluster Edition which has smaller file size. This installer may save you download time as it allows you to select only those components you desire to download. You must be connected to the internet during installation with this installer.
    • File: parallel_studio_xe_2017_update2_professional_edition_online_setup.exe
      Online Installer for the Intel® Parallel Studio XE Professional Edition for Fortran and C++ which has smaller file size. This installer may save you download time as it allows you to select only those components you desire to download. You must be connected to the internet during installation with this installer.
    • File: parallel_studio_xe_2017_update2_professional_edition_for_cpp_online_setup.exe
      Online Installer for the Intel® Parallel Studio XE Professional Edition for C++ which has smaller file size. This installer may save you download time as it allows you to select only those components you desire to download. You must be connected to the internet during installation with this installer.
    • File: parallel_studio_xe_2017_update2_professional_edition_for_fortran_online_setup.exe
      Online Installer for the Intel® Parallel Studio XE Professional Edition for Fortran which has smaller file size. This installer may save you download time as it allows you to select only those components you desire to download. You must be connected to the internet during installation with this installer.
    • File: parallel_studio_xe_2017_update2_composer_edition_setup.exe
      Offline Installer package for the Intel® Parallel Studio XE Composer Edition for Fortran and C++ only
    • File: parallel_studio_xe_2017_update2_composer_edition_online_setup.exe
      Online Installer for the Intel® Parallel Studio XE Composer Edition for Fortran and C++ which has smaller file size. This installer may save you download time as it allows you to select only those components you desire to download. You must be connected to the internet during installation with this installer.
    • File: parallel_studio_xe_2017_update2_composer_edition_for_cpp_setup.exe
      Offline Installer package for the Intel® Parallel Studio XE Composer Edition for C++ only
    • File: parallel_studio_xe_2017_update2_composer_edition_for_cpp_online_setup.exe
      Online Installer for the Intel® Parallel Studio XE Composer Edition for C++ which has smaller file size. This installer may save you download time as it allows you to select only those components you desire to download. You must be connected to the internet during installation with this installer.
    • File: parallel_studio_xe_2017_update2_composer_edition_for_fortran_setup.exe
      Offline Installer package for the Intel® Parallel Studio XE Composer Edition for Fortran only
    • File: parallel_studio_xe_2017_update2_composer_edition_for_fortran_online_setup.exe
      Online Installer for the Intel® Parallel Studio XE Composer Edition for Fortran which has smaller file size. This installer may save you download time as it allows you to select only those components you desire to download. You must be connected to the internet during installation with this installer.
    • File: ww_icl_redist_msi_2017.2.187.zip
      Redistributable Libraries for 32- and 64-bit msi files for the Intel® Parallel Studio XE Composer Edition for C++
    • File: ww_ifort_redist_msi_2017.2.187.zip
      Redistributable Libraries for 32- and 64-bit msi files for the Intel® Parallel Studio XE Composer Edition for Fortran
    • File: get-ipp-2017-crypto-library.htm
      Directions on how to obtain the Cryptography Library
  • macOS* packages
    • File: m_ccompxe_2017.2.046.dmg
      Offline Installer package for the Intel® Parallel Studio XE Composer Edition for C++ which is larger and contains all components of the product
    • File: m_ccompxe_online_2017.2.046.dmg
      Online Installer which has smaller file size. This installer may save you download time as it allows you to select only those components you desire to download. You must be connected to the internet during installation with this installer.
    • File: m_comp_lib_icc_redist_2017.2.163.dmg
      Redistributable Libraries C++
    • File: m_fcompxe_2017.2.046.dmg
      Offline Installer package for the Intel® Parallel Studio XE Composer Edition for Fortran which is larger and contains all components of the product
    • File: m_fcompxe_online_2017.2.046.dmg
      Online Installer which has smaller file size. This installer may save you download time as it allows you to select only those components you desire to download. You must be connected to the internet during installation with this installer.
    • File: m_comp_lib_ifort_redist_2017.2.163.dmg
      Redistributable Libraries Fortran
    • File: get-ipp-2017-crypto-library.htm
      Directions on how to obtain the Cryptography Library

Intel® MPI Library 2017 Update 2 Readme


The Intel® MPI Library is a high-performance interconnect-independent multi-fabric library implementation of the industry-standard Message Passing Interface, v3.1 (MPI-3.1) specification.  This package is for MPI users who develop on and build for Intel® 64 architectures on Linux* and Windows*, as well as customers running on the Intel® Xeon Phi™ product family.  You must have a valid license to download, install, and use this product.

The Intel® MPI Library 2017 Update 2 for Linux* and Windows* packages are now ready for download.  The Intel® MPI Library is available as a stand-alone product and as part of the Intel® Parallel Studio XE Cluster Edition.

New in this release:

  • Intel® MPI Library adds new environment variables, including I_MPI_MEMORY_LOCK, which prevents memory from being swapped to the hard drive.
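As a sketch, the variable can be set in the job environment before launching an application. The value `1` and the `mpirun` invocation below are illustrative assumptions, not documented behavior; check the Intel® MPI Library Release Notes for the supported values.

```shell
# Illustrative sketch: enable memory locking for an MPI run.
# The value "1" is an assumption; consult the release notes for documented values.
export I_MPI_MEMORY_LOCK=1
echo "I_MPI_MEMORY_LOCK=$I_MPI_MEMORY_LOCK"
# Typical launch (./my_app is a placeholder application):
# mpirun -n 4 ./my_app
```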

Refer to the Intel® MPI Library Release Notes for more details.

Contents:

  • Intel® MPI Library 2017 Update 2 for Linux*
    • l_mpi_2017.2.174.tgz - A file containing the complete product installation for Linux* OS.
    • l_mpi-rt_2017.2.174.tgz - A file containing the free runtime environment installation for Linux* OS.
       
  • Intel® MPI Library 2017 Update 2 for Windows*
    • w_mpi_p_2017.2.187.exe - A file containing the complete product installation for Windows* OS.
    • w_mpi-rt_p_2017.2.187.exe - A file containing the free runtime environment installation for Windows* OS.

Intel® Trace Analyzer and Collector 2017 Update 2 Readme


The Intel® Trace Analyzer and Collector for Linux* and Windows* is a low-overhead scalable event-tracing library with graphical analysis that reduces the time it takes an application developer to enable maximum performance of cluster applications.  This package is for users who develop on and build for Intel® 64 architectures on Linux* and Windows*, as well as customers running on Intel® Xeon Phi™.  The package also includes an optional download for OS X* for analysis only.  You must have a valid license to download, install, and use this product.

The Intel® Trace Analyzer and Collector 2017 Update 2 packages for Linux* and Windows* are now ready for download.  The Intel® Trace Analyzer and Collector is only available as part of Intel® Parallel Studio XE Cluster Edition.

New in this release:

  • Enhancements of function color selection on timelines

Refer to the Intel® Trace Analyzer and Collector Release Notes for more details.

Contents:

  • Intel® Trace Analyzer and Collector 2017 Update 2 for Linux*
    • l_itac_p_2017.2.028.tgz - A file containing the complete product installation for Linux* OS.
    • w_ita_p_2017.2.028.exe - A file containing the Graphical User Interface (GUI) installation for Windows* OS.
    • m_ita_p_2017.2.028.tgz - A file containing the Graphical User Interface (GUI) installation for OS X*.
       
  • Intel® Trace Analyzer and Collector 2017 Update 2 for Windows*
    • w_itac_p_2017.2.025.exe - A file containing the complete product installation for Windows* OS.
    • m_ita_p_2017.2.028.tgz - A file containing the Graphical User Interface (GUI) installation for OS X*.

Known problems in Intel® Integrated Performance Primitives Cryptography XTS-AES, GFp, and HMAC functions


The following issues were identified in the Intel® Integrated Performance Primitives (Intel® IPP) Cryptography XTS-AES, GFp, and HMAC functions. The problems affect the Intel® IPP 2017 Update 2 and earlier releases.

These issues will be fixed in future versions of Intel® IPP. If your code is affected, use the following workarounds to fix the problems and improve code security:

  • ippsAESEncryptXTS_Direct and ippsAESDecryptXTS_Direct
    Problem: The ippsAESEncryptXTS_Direct and ippsAESDecryptXTS_Direct functions do not check the number of blocks in AES-XTS encryption/decryption operations.  AES-XTS operations must not exceed 2^20 AES blocks.

    Workaround: To avoid issues with large AES block counts, check the number of blocks in the application code before calling these functions.
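    A minimal sketch of such a length check (the payload length below is a placeholder; the 16-byte AES block size is standard):

```shell
# Verify that an XTS data unit does not exceed the 2^20 AES-block limit
len_bytes=1048576                     # placeholder payload length in bytes
blocks=$(( (len_bytes + 15) / 16 ))   # number of 16-byte AES blocks, rounded up
max_blocks=$(( 1 << 20 ))
if [ "$blocks" -le "$max_blocks" ]; then
  echo "block count $blocks: OK"
else
  echo "block count $blocks: exceeds 2^20 limit" >&2
fi
```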

  • ippsGFpxGetSize and ippsGFpECGetSize
    Problem: The ippsGFpxGetSize and ippsGFpECGetSize functions do not check for integer overflow.

    Workaround: Check the GF tower construction height in your application, and limit the extension of the basic prime GF to less than 8.  The degree parameter of the ippsGFpxGetSize() function should satisfy 2 <= degree <= 8.
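    A minimal sketch of the bound check described above (the degree value is a placeholder):

```shell
# Validate the GF extension degree before sizing a GF(p^d) context;
# the [2, 8] bounds come from the workaround above.
degree=3    # placeholder extension degree
if [ "$degree" -ge 2 ] && [ "$degree" -le 8 ]; then
  echo "degree $degree: OK"
else
  echo "degree $degree: out of range [2, 8]" >&2
fi
```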

  • ippsHMACGetTag_rmf and ippsHMACGetTag
    Problem: The ippsHMACGetTag_rmf and ippsHMACGetTag functions leave sensitive data in memory after exit, which may lead to a data leak.

    Workaround: Replace the ippsHMACGetTag_rmf and ippsHMACGetTag functions with the following pairs of sequential calls:
        ippsHMAC_Duplicate() and ippsHMAC_Final()
        ippsHMACDuplicate_rmf() and ippsHMACFinal_rmf()

Installing Intel® Performance Libraries and Intel® Distribution for Python* Using YUM Repository


This page provides general installation and support notes about the Community forum supported Intel® Performance Libraries and Intel® Distribution for Python* as they are distributed via the YUM repositories described below.

These software development tools are also available as part of the Intel® Parallel Studio XE and Intel® System Studio products. These products include enterprise-level Intel® Online Service Center support.

Setting up the Repository

Here is how to install the Intel YUM Repository. [Note: You must be logged in as root to set up and install the repository]

  1. Add the repositories in one of two ways:
    • Add all Intel® Performance Libraries and Intel® Distribution for Python* repositories at once:
      sudo yum-config-manager --add-repo http://yum.repos.intel.com/setup/intelproducts.repo
      You can enable or disable repositories in the intelproducts.repo file by setting the value of the enabled directive to 1 or 0 as required.
    • Add an individual product:
      • Intel® Math Kernel Library (Intel® MKL):
        sudo yum-config-manager --add-repo http://yum.repos.intel.com/mkl/setup/intel-mkl.repo
      • Intel® Integrated Performance Primitives (Intel® IPP):
        sudo yum-config-manager --add-repo http://yum.repos.intel.com/ipp/setup/intel-ipp.repo
      • Intel® Distribution for Python*:
        sudo yum-config-manager --add-repo http://yum.repos.intel.com/intelpython/setup/intelpython.repo
      • Intel® Threading Building Blocks (Intel® TBB): Coming soon
      • Intel® Data Analytics Acceleration Library (Intel® DAAL): Coming soon
  2. Import the GPG public key for the repository:
    sudo rpm --import http://yum.repos.intel.com/intel-gpg-keys/GPG-PUB-KEY-INTEL-SW-PRODUCTS-2019.PUB

Intel® Performance Libraries and Intel® Distribution for Python* versions available in the repository

<COMPONENT>                                   <VERSION>   <UPDATE>   <BUILD_NUM>
Intel® MKL                                    2017        2          050
Intel® IPP                                    2017        2          050
Intel® Distribution for Python*               2017        2          045
Intel® TBB                                    TBD         TBD        TBD
Intel® DAAL                                   TBD         TBD        TBD

By downloading Intel® Performance Libraries and Intel® Distribution for Python* you agree to the terms and conditions stated in the End-User License Agreement (EULA).

Installing the library and Python packages using the YUM Package Manager

The following variables are used in the installation commands:
<PYTHON_VERSION>: 2, 3
<VERSION>: 2017, ...
<UPDATE>: 0, 1, 2, ...
<BUILD_NUMBER>: build number; see the table in the previous section
<COMPONENT>: a component name from the list of available components below

Component                                     <COMPONENT>
Intel® Math Kernel Library                    intel-mkl, intel-mkl-32bit, intel-mkl-64bit
Intel® Integrated Performance Primitives      intel-ipp, intel-ipp-32bit, intel-ipp-64bit
Intel® Distribution for Python*               intelpython2, intelpython3
Intel® Threading Building Blocks              Coming soon
Intel® Data Analytics Acceleration Library    Coming soon

How do I install a particular version?

  1. To install a particular version of one of the Intel® Performance Libraries:
    yum install <COMPONENT>-<VERSION>.<UPDATE>-<BUILD_NUMBER>

    Example:

    yum install intel-mkl-2017.2-050
  2. To install a particular language version of the Intel® Distribution for Python*:
    yum install intelpython<PYTHON_VERSION>

    Example:

    yum install intelpython3
  3. To specify which version of the Intel® Distribution for Python* to install:
    yum install intelpython<PYTHON_VERSION>-<VERSION>.<UPDATE>-<BUILD_NUMBER>

    Example:

    yum install intelpython3-2017.2-050
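The package names above are built by direct substitution of the documented variables; as a quick sketch, using the values from the version table:

```shell
# Compose a yum package name: <COMPONENT>-<VERSION>.<UPDATE>-<BUILD_NUMBER>
COMPONENT=intel-mkl
VERSION=2017
UPDATE=2
BUILD_NUMBER=050
PACKAGE="${COMPONENT}-${VERSION}.${UPDATE}-${BUILD_NUMBER}"
echo "yum install $PACKAGE"
```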


How do I uninstall a particular version?

  1. To uninstall one of the Intel® Performance Libraries:
    yum autoremove <COMPONENT>-<VERSION>.<UPDATE>-<BUILD_NUMBER>

    Example:

    yum autoremove intel-mkl-2017.2-050
  2. To uninstall the Intel® Distribution for Python*:
    yum remove intelpython<PYTHON_VERSION>

    Example:

    yum remove intelpython3

Have Questions?

Check out the FAQ
Or ask in our User Forums

 

Installing Intel® Performance Libraries and Intel® Distribution for Python* Using APT Repository


This page provides general installation and support notes about the Community forum supported Intel® Performance Libraries and Intel® Distribution for Python* as they are distributed via the APT repositories described below.

These software development tools are also available as part of the Intel® Parallel Studio XE and Intel® System Studio products. These products include enterprise-level Intel® Online Service Center support.

Setting up the Repository

Install the GPG key for the repository

Grab the public key and install it as follows:

wget http://apt.repos.intel.com/intel-gpg-keys/GPG-PUB-KEY-INTEL-SW-PRODUCTS-2019.PUB
apt-key add GPG-PUB-KEY-INTEL-SW-PRODUCTS-2019.PUB

Add the APT Repository

Add the repositories in one of two ways:

  • Add all Intel® Performance Libraries and Intel® Distribution for Python* repositories at once:
    sudo wget http://apt.repos.intel.com/setup/intelproducts.list -O /etc/apt/sources.list.d/intelproducts.list
    You can enable or disable repositories by editing the intelproducts.list file.
  • Add an individual product:
    • Intel® Math Kernel Library (Intel® MKL):
      sudo sh -c 'echo deb http://apt.repos.intel.com/mkl stable main > /etc/apt/sources.list.d/intel-mkl.list'
    • Intel® Integrated Performance Primitives (Intel® IPP):
      sudo sh -c 'echo deb http://apt.repos.intel.com/ipp stable main > /etc/apt/sources.list.d/intel-ipp.list'
    • Intel® Distribution for Python*:
      sudo sh -c 'echo deb http://apt.repos.intel.com/intelpython binary/ > /etc/apt/sources.list.d/intelpython.list'
    • Intel® Threading Building Blocks (Intel® TBB): Coming soon
    • Intel® Data Analytics Acceleration Library (Intel® DAAL): Coming soon

Update the list of packages

sudo apt-get update

Intel® Performance Libraries and Intel® Distribution for Python* versions available in the repository

<COMPONENT>                                   <VERSION>   <UPDATE>   <BUILD_NUM>
Intel® MKL                                    2017        2          050
Intel® IPP                                    2017        2          050
Intel® Distribution for Python*               2017        2          045
Intel® TBB                                    TBD         TBD        TBD
Intel® DAAL                                   TBD         TBD        TBD

By downloading Intel® Performance Libraries and Intel® Distribution for Python* you agree to the terms and conditions stated in the End-User License Agreement (EULA).

Installing the library and Python packages using the APT-GET Package Manager

The following variables are used in the installation commands:
<PYTHON_VERSION>: 2, 3
<VERSION>: 2017, ...
<UPDATE>: 0, 1, 2, ...
<BUILD_NUMBER>: build number; see the table in the previous section
<COMPONENT>: a component name from the list of available components below

Component                                     <COMPONENT>
Intel® Math Kernel Library                    intel-mkl, intel-mkl-32bit, intel-mkl-64bit
Intel® Integrated Performance Primitives      intel-ipp, intel-ipp-32bit, intel-ipp-64bit
Intel® Distribution for Python*               intelpython2, intelpython3
Intel® Threading Building Blocks              Coming soon
Intel® Data Analytics Acceleration Library    Coming soon

How do I install a particular version?

  1. To install a particular version of one of the Intel® Performance Libraries:
    sudo apt-get install <COMPONENT>-<VERSION>.<UPDATE>-<BUILD_NUMBER>

    Example:

    sudo apt-get install intel-mkl-2017.2-050
  2. To install a particular language version of the Intel® Distribution for Python*:
    sudo apt-get install intelpython<PYTHON_VERSION>

    Example:

    sudo apt-get install intelpython3
  3. To specify which version of the Intel® Distribution for Python* to install:
    sudo apt-get install intelpython<PYTHON_VERSION>=<VERSION>.<UPDATE>.<BUILD_NUMBER>

    Example:

    sudo apt-get install intelpython3=2017.2.045
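Note that the APT version pin differs from the YUM form: the version is attached with `=` and the build number follows a dot rather than a hyphen. A quick sketch of the substitution:

```shell
# Compose an apt version pin: intelpython<PYTHON_VERSION>=<VERSION>.<UPDATE>.<BUILD_NUMBER>
PYTHON_VERSION=3
VERSION=2017
UPDATE=2
BUILD_NUMBER=045
PIN="intelpython${PYTHON_VERSION}=${VERSION}.${UPDATE}.${BUILD_NUMBER}"
echo "sudo apt-get install $PIN"
```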


How do I uninstall a particular version?

  1. To uninstall one of the Intel® Performance Libraries:
    sudo apt-get autoremove <COMPONENT>-<VERSION>.<UPDATE>-<BUILD_NUMBER>

    Example:

    sudo apt-get autoremove intel-mkl-2017.2-050
  2. To uninstall the Intel® Distribution for Python*:
    sudo apt-get remove intelpython<PYTHON_VERSION>

    Example:

    sudo apt-get remove intelpython3

Have Questions?

Check out the FAQ
Or ask in our User Forums

 

IoT Reference Implementation: Making of an Environment Monitor Solution


To demonstrate the value of the Internet of Things (IoT) code samples provided on Intel's Developer Zone as the basis for more complex solutions, this project extends previous work on air quality sensors. This IoT reference implementation builds on the existing Intel® IoT air quality sensor code samples to create a more comprehensive Environment Monitor solution. Developed using Intel® System Studio IoT Edition and an Intel® Next Unit of Computing (Intel® NUC) as a gateway connected to sensors using Arduino* technology, the solution is based on Ubuntu*. Data is transferred to the cloud using Amazon Web Services (AWS)*.

Visit GitHub for this project’s latest code samples and documentation.

IoT promises to deliver data from previously obscure sources, as the basis for intelligence that can deliver business value and human insight, ultimately improving the quality of human life. Air quality is an excellent example of vital information that is all around us but often unseen, with importance that ranges from our near-term comfort to the ultimate well-being of our species, in terms of pollution and its long-term effects on our climate and environment.

Given that IT infrastructure has historically been seen as having a negative environmental footprint, IoT can be seen as a potential inflection point: as we strive for greater understanding of the world and our impact on it, technology is being developed to give us information that can drive wise decisions.

For example, IoT sensors gather data about levels of carbon dioxide, methane, and other greenhouse gases, toxins such as carbon monoxide, hexane, and benzene, as well as particulate irritants that include dust and smoke. They can also be used to capture granular information about the effects of these contaminants over vast or confined areas, based on readings such as temperature, light levels, and barometric pressure.

Building on a Previous Air-Quality Sensor Application

Part of the Intel® Developer Zone value proposition for IoT comes from the establishment of seed projects that provide points of entry into a large number of IoT solution domains. These are intended both as instructive recipes that interested developers can recreate and as points of departure for novel solutions. 

The Existing Air Quality Sensor Application

The Environment Monitor described in this development narrative is an adaptation of a previous project: a more modest air-quality sensor implementation. The previous air-quality sensor solution was built as part of a series of how-to Intel® IoT code sample exercises using the Intel® IoT Developer Kit.

Further information on that application can be found in the GitHub repository: https://github.com/intel-iot-devkit/how-to-code-samples/blob/master/air-quality-sensor/javascript/README.md

The application's core functionality is centered on a single air-quality sensor and includes the following:

  • Continuously checking air quality for airborne contaminants, based on whether any of several gases exceeds a defined threshold.
  • Alerting with an audible alarm whenever an alert is generated based on one of the threshold values being exceeded, which indicates unhealthy air.
  • Storing alert history in the cloud, tracking and providing an historical record of each time an alert is generated by the application.
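The original application implements this logic in JavaScript; as an illustrative sketch only (not the project's actual code, and with hypothetical threshold values), the threshold check at its core can be expressed in a few lines of Python:

```python
# Hypothetical thresholds (raw sensor values); the actual project defines its own.
THRESHOLDS = {"co": 400, "methane": 350, "smoke": 300}

def check_air_quality(readings, thresholds=THRESHOLDS):
    """Return the contaminants whose reading exceeds its alert threshold."""
    return [gas for gas, value in readings.items()
            if gas in thresholds and value > thresholds[gas]]

# An alert (and the audible alarm) is triggered whenever this list is non-empty.
alerts = check_air_quality({"co": 420, "methane": 100, "smoke": 250})
```

Each alert would then be stored in the cloud to build the historical record described above.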

The application code is built as a Node.js* project using JavaScript*, with MRAA (I/O library) and UPM (sensor library) from the Intel® IoT Developer Kit. This software interfaces with the following hardware components:  

  • Intel® NUC
  • Arduino 101*
  • Grove* Starter Kit 

Note: This project uses an Intel® NUC, but other compatible gateways can also be used.

You can also use the Grove* IoT Commercial Developer Kit for this solution. The IDE used in creating the application is the Intel® System Studio IoT Edition.

The air-quality sensor application provides a foundation for the Environment Monitor.

Adapting the Existing Project to Create a New Solution

This document shows how projects such as the previous air-quality sensor can readily be modified and expanded. In turn, the new Environment Monitor solution, shown in Figure 1, extends the IoT capability by adding sensors and gives the user flexibility in the choice of operating system (i.e., you can use the Intel® IoT Gateway Software Suite or Ubuntu* Server).

Environmental Monitor
Figure 1. Enhanced Environmental Monitor Solution

Sensor Icons
Figure 2. Sensor Icons

Table 1. Sensor Icon Legend

Dust | Gas | Temperature | Humidity

The relatively simple scope of the existing air-quality sensor application allows significant opportunity for expanding on its capabilities.

The team elected to add a number of new sensors to expand on the existing solution’s functionality. The Environment Monitor solution includes new sensors for dust, temperature, and humidity.   

In addition to the changes made as part of this project initiative, interested parties could make changes to other parts of the solution as needed. These could include, for example, adding or removing sensors or actuators, switching to a different integrated development environment (IDE), programming language, or OS, or adding entirely new software components or applications to create novel functionality.

Hardware Components of the Environment Monitor Solution

  • Intel® NUC
  • Arduino 101*
  • Grove* sensors

Based on the Intel® Atom™ processor E3815, the Intel® NUC offers a fanless thermal solution, 4 GB of onboard flash storage (and SATA connectivity for additional storage), as well as a wide range of I/O ports. The Intel® NUC is conceived as a highly compact and customizable device that provides capabilities at the scale of a desktop PC. It provides the following key benefits:

  • Robust compute resources to ensure smooth performance without bogging down during operation.
  • Ready commercial availability to help ensure that the project could proceed on schedule.
  • Pre-validation for the OS used by the solution (Ubuntu).

The Arduino 101* board makes the Intel® NUC both hardware and pin compatible with Arduino shields, in keeping with the open-source ideals of the project team. While Bluetooth* is not used in the current iteration of the solution, the hardware does have that functionality, which the team is considering for future use.

The Intel® NUC and Arduino 101 board are pictured in Figure 3, and the specifications of each are given in Table 2.

Intel NUC Kit
Figure 3. Intel® NUC Kit DE3815TYKHE and Arduino* 101 board.

Table 2. Prototype hardware used in the Environment Monitor solution

                          | Intel® NUC Kit DE3815TYKHE                           | Arduino 101 Board
Processor/Microcontroller | Intel® Atom™ processor E3815 (512K Cache, 1.46 GHz)  | Intel® Curie™ Compute Module @ 32 MHz
Memory                    | 8 GB DDR3L-1066 SODIMM (max)                         | 196 KB flash memory; 24 KB SRAM
Networking / IO           | Integrated 10/100/1000 LAN                           | 14 digital I/O pins; 6 analog I/O pins
Dimensions                | 190 mm x 116 mm x 40 mm                              | 68.6 mm x 53.4 mm
Full Specs                | specs                                                | specs

For the sensors and other components needed in the creation of the prototype, the team chose the Grove* Starter Kit for Arduino*(manufactured by Seeed Studio*), which is based on the Grove* Starter Kit Plus used in the Grove* IoT Commercial Developer Kit. This collection of components is available at low cost, and because it is a pre-selected set of parts, it reduces the effort required to identify and procure the bill of materials. The list of components, their functions, and connectivity are given in Table 3.

Table 3. Bill of Materials

Category    | Component                                    | Details                                  | Pin Connection | Connection Type
Base System | Intel® NUC Kit DE3815TYKHE                   | Gateway                                  |                |
            | Arduino* 101 board                           | Sensor hub                               |                | USB
            | Grove* - Base Shield                         | Arduino 101 shield                       |                | Shield
            | USB Type A to Type B Cable                   | Connects Arduino 101 board to Intel® NUC |                |
Sensors     | Grove - Green LED                            | LED indicates status of the monitor      | D2             | Digital
            | Grove - Gas Sensor (MQ2)                     | Gas sensor (CO, methane, smoke, etc.)    | A1             | Analog
            | Grove - Dust Sensor                          | Particulate matter sensor                | D4             | Digital
            | Grove - Temp&Humi&Barometer Sensor (BME280)  | Temperature, humidity, barometer sensor  | Bus 0          | I2C

Using that bill of materials and connectivity schema, the team assembled the bench model of the Environment monitor, as illustrated in the figures below.

Intel NUC Kit
Figure 4. Intel® NUC, Arduino* 101 board, and sensor components.

Pin Connections
Figure 5. Pin Connections to the Arduino* 101 Board

Sensor Callouts
Figure 6. Sensor callouts.

Software Components of the Environment Monitor Solution

Apart from the physical model, the Environment monitor solution also includes a variety of software components, which are described in this section. As mentioned above, the solution includes an administrative application running on the Intel® NUC, as well as a mobile customer application for general users, which is designed to run on a tablet PC or smartphone.

The previous air quality sensor application runs on Intel® IoT Gateway Software Suite. In contrast, the Environment monitor solution uses Ubuntu* Server. 

The IDE used to develop the software for the Environment monitor solution is Intel® System Studio IoT Edition that facilitates connecting to the Intel® NUC and developing applications.

Like the previous air-quality sensor application, this solution uses MRAA and UPM from the Intel® IoT Developer Kit to interface with platform I/O and sensor data. The MRAA library provides an abstraction layer for the hardware to enable direct access to I/O on the Intel® NUC, as well as Firmata*, which allows for programmatic interaction with the Arduino* development environment, taking advantage of Arduino’s hardware-abstraction capabilities. Abstracting Firmata using MRAA enables greater programmatic control of I/O on the Intel® NUC, simplifying the process of gathering data from sensors. UPM is a library developed on top of MRAA that exposes a user-friendly API and provides the specific function calls used to access sensors.
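The solution's application is written in JavaScript, but MRAA and UPM also ship Python bindings. As a hedged sketch (the analog pin number comes from Table 3; the averaging helper is an illustration, not project code), reading the MQ2 gas sensor might look like this:

```python
# The MRAA calls are commented out because they need the library and hardware:
# import mraa                     # platform I/O library from the IoT Developer Kit
# gas_pin = mraa.Aio(1)           # MQ2 gas sensor on analog pin A1 (see Table 3)

def poll(read_fn, samples=3):
    """Average several raw reads to smooth out sensor noise."""
    return sum(read_fn() for _ in range(samples)) / samples

# On the device: value = poll(gas_pin.read)
# Off the device, any callable works, which keeps the logic testable:
value = poll(lambda: 412)
```

Separating the read function from the polling logic keeps the sensor-handling code testable without hardware attached.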

Administrative Application

A simple administrative application, whose user interface is shown below, is built into the solution and runs on the Intel® NUC. This application provides a view of the data from the solution's sensor array, with the ability to generate views of changes to that data over time. It also provides buttons that an administrator can use to trigger events by generating simulated sensor data outside the preset “normal” ranges. This application is built to be extensible, with the potential to support additional types of sensors in multiple geographic locations, for example.

Figure 7 shows the main window of the administrative application, simulating normal operation when sensor data is all within the preset limits. This window includes controls to set the thresholds for alerts based on the various sensors, with display of the thresholds themselves and current sensor readings.

Admin Window
Figure 7. Main administrative window showing status of sensors within normal parameters.

In Figure 8, one of the buttons to generate a simulated event has been pressed. The active state of that button is indicated visually, the data reading is shown to be outside normal parameters, and the alert indicator is active. The operator can acknowledge the alert, dismissing the alarm while the application continues to register the non-normal data. Once the sensor data passes back into the normal range, the screen will return to its normal operating state.

Simulated Alert
Figure 8. Main administrative window during generation of a simulated alert.

The administrative application also provides the ability to view time-series sensor data, as illustrated in Figure 9. Using this functionality, operators can track changes over time in the data from a given sensor, which provides simple trending information that could be augmented using analytics.

Log File Screen History
Figure 9. Log-file screen showing historical sensor data.

The Environment monitor solution uses AWS* cloud services to provide a central repository for real-time and historical sensor data. This cloud-based storage could be used, for example, to aggregate information from multiple sensors as the basis for a view of contaminant levels and resultant effects over a large geographical area. Analytics run against either stored or real-time streaming data could potentially generate insights based on large-scale views of substances of interest in the atmosphere.

Using capabilities such as these, development organizations could establish a scope of monitoring that is potentially open-ended in terms of the factors under investigation and the geographic area under observation. The administrative application takes advantage of cloud-based resources in order to support large-scale analytics on big data, as the foundation for future innovation.
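As a hedged illustration of how readings might be packaged for the cloud repository (the field names and topic string are assumptions for this sketch, not taken from the project), each set of sensor values could be serialized as JSON before being sent:

```python
import json
import time

def build_payload(sensor_id, readings):
    """Serialize one set of sensor readings as JSON for transmission."""
    return json.dumps({
        "sensor_id": sensor_id,
        "timestamp": int(time.time()),
        "readings": readings,
    })

payload = build_payload("monitor-01", {"gas": 412, "dust": 18.5})
# With a configured MQTT client (e.g., paho-mqtt), publishing might look like:
# client.publish("environment/monitor-01", payload)
```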

Note: The administrative application can access data on AWS* using either a backend datastore or data storage using Message Queue Telemetry Transport* (MQTT*), a machine-to-machine messaging server. Implementation guidance for those options is available at the following locations:

Conclusion

The solution discussed in this development narrative provides an example of how existing IoT solutions provided by Intel can provide a springboard for development of related projects. In this case, a relatively simple air-quality sensor application is the basis of a more complex and robust Environment Monitor solution, which incorporates additional sensors and more powerful system hardware, while retaining the ability of the original to be quickly brought from idea to reality. Using this approach, IoT project teams working on the development of new solutions don’t need to start from scratch.

More Information


Intel(R) Distribution for Python* 2017 Update 2


Intel Corporation is pleased to announce the release of Intel® Distribution for Python* 2017 Update 2, which offers both performance improvements and new features. 

Update 2 offers great performance improvements for NumPy*, SciPy*, and Scikit-learn* that you can see across a range of Intel processors, from Intel® Core™ CPUs to Intel® Xeon® and Intel® Xeon Phi™ processors. 

Benchmarks for all these accelerations will be published soon. In the meantime, this post previews their nature, extent, and the impact you can expect.

Fast Fourier Transforms
In addition to initial Fast Fourier Transforms (FFT) optimizations offered in previous releases, Update 2 brings widespread optimizations for NumPy and SciPy FFT. It offers a layered interface for the Intel® Math Kernel Library (Intel® MKL) that allows efficient access to native FFT optimizations from a range of NumPy and SciPy functions. The optimizations include real and complex data types, both single and double precision. Update 2 covers both 1D and multidimensional data, in place and out of place. As a result, performance may improve up to 60x over Update 1 and is now close to native C/Intel MKL.
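These optimizations are transparent to user code: the standard NumPy FFT calls simply dispatch to Intel MKL under the hood. A typical round-trip that benefits, with no code changes required:

```python
import numpy as np

# A simple signal: the sum of 50 Hz and 120 Hz sine waves over one second.
t = np.linspace(0.0, 1.0, 1000, endpoint=False)
signal = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

# Real-input forward FFT and inverse FFT; in the Intel Distribution for
# Python these routines run on the Intel MKL FFT backend.
spectrum = np.fft.rfft(signal)
roundtrip = np.fft.irfft(spectrum, n=signal.size)
```

With a one-second window, the dominant spectral peak lands at bin 50, matching the stronger 50 Hz component.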

Arithmetic and transcendental expressions
NumPy is designed for high-performance basic arithmetic and transcendental operations on ndarrays. Some umath primitives are optimized to benefit from SSE, AVX and (recently) from AVX2 instruction sets, but not from AVX-512. Also, original NumPy functions did not take advantage of multiple cores. Update 2 provides substantial changes to the guts of NumPy to incorporate the Intel MKL Vector Math Library (VML) in respective umath primitives, which enables support for all available cores on a system and all CPU instruction sets. 

The logic in Update 2 NumPy umath works as follows:
•    For short NumPy arrays, the overheads to distribute work across multiple threads are high relative to the amount of computation work. In such cases, Update 2 uses the Intel MKL Short Vector Math Library (SVML), which is optimized for good performance across a range of Intel CPUs on short vectors. 
•    For large arrays, threading overheads are lower compared to the amount of computation and Update 2 uses the Intel MKL VML, which is optimized for utilizing multiple cores and a range of Intel CPUs.
NumPy Arithmetic and transcendental operations on vector-vector and vector-scalar are accelerated up to 400x for Intel® Xeon Phi processors.
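Again, no API changes are needed to benefit; the usual vectorized ufunc expressions pick up the SVML/VML code paths automatically. A representative pattern:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 100_000)

# Elementwise transcendental ufuncs (exp, sin, log1p, ...) are the umath
# primitives that Update 2 routes through Intel MKL SVML/VML.
y = np.exp(x) * np.sin(x) + np.log1p(x)
```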

Memory management optimizations
Update 2 introduces widespread optimizations in NumPy memory management operations. As a dynamic language, Python manages memory for the user. Memory operations, such as allocation, de-allocation, copy, and move, affect performance of essentially all Python programs. 

Specifically, Update 2 ensures NumPy allocates arrays that are properly aligned in memory on Linux, so that NumPy and SciPy compute functions can benefit from respective aligned versions of SIMD memory access instructions. This is especially relevant for Intel® Xeon Phi processors.
The most significant memory optimization improvements in Update 2 come from replacing the original memory copy and move operations with optimized implementations from Intel MKL. The result: improved performance, because these Intel MKL routines are optimized both for a range of Intel CPUs and for multiple CPU cores.

Faster Machine Learning with Scikit-learn
Scikit-learn is among the most popular Python machine learning packages. The initial release of Intel Distribution for Python provided Scikit-learn optimizations via respective NumPy and SciPy functions accelerated by Intel MKL. Update 2 optimizes selective key machine learning algorithms in Scikit-learn, accelerating them with the Intel® Data Analytics Acceleration Library (Intel® DAAL).

Specifically, Update 2 optimizes Principal Component Analysis (PCA), Linear and Ridge Regressions, Correlation and Cosine Distances, and K-Means. Speedups may range from 1.5x to 160x.
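The accelerated algorithms keep the standard scikit-learn API, so existing code picks up the Intel DAAL paths unchanged. A minimal PCA-plus-K-Means pipeline on random data (for illustration only; not a benchmark):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.RandomState(0)
X = rng.rand(200, 10)

# PCA and K-Means are among the scikit-learn algorithms that Update 2
# accelerates with Intel DAAL; the user-facing API is unchanged.
X2 = PCA(n_components=2).fit_transform(X)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X2)
```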

Intel-optimized Deep Learning
Deep learning is becoming an essential tool for knowledge discovery. Intel engineers put much effort into optimizing the most popular Deep Learning frameworks. Update 2 incorporates two Intel-optimized Deep Learning frameworks, Caffe* and Theano*, into the distribution so Python users can take advantage of these optimizations out of the box.

Neural network enhancements for pyDAAL
Intel DAAL:
•    Introduces a number of extensions for neural networks, such as the transposed convolution layer and the reshape layer. 
•    Now supports input tensors of arbitrary dimension in loss softmax cross-entropy layers, sigmoid cross-entropy criterion, and truncated Gaussian initializer for tensors.  
•    Extends support for distributed computing by adding the objective function with pre-computed characteristics.  
pyDAAL comes with improved performance for neural network layers used in topologies such as AlexNet. 

Summary
The Intel Distribution for Python is powered by Anaconda* and conda build infrastructures that give all Python users the benefit of interoperability within these two environments and access to the optimized packages through a simple conda install command.
Intel Distribution for Python 2017 Update 2 delivers significant performance optimizations for many core algorithms and Python packages, while maintaining the ease of download and install. 

Update 2 is available for free download at the Intel Distribution for Python website or through the Intel channel at Anaconda.org.
The Python team at Intel welcomes you to try it out and email us any feedback. 

Unreal Engine 4 Optimization Tutorial, Part 2


This is part 2 of a tutorial to help developers improve the performance of their games in Unreal Engine* 4 (UE4). In this tutorial, we go over a collection of tools to use within and outside of the engine, as well as some best practices for the editor and scripting, to help increase the frame rate and stability of a project.

Editor Optimizations

Forward Versus Deferred Rendering

Deferred is the standard rendering method used by UE4. While it typically looks the best, there are important performance implications to understand, especially for VR games and lower end hardware. Switching to Forward Rendering may be beneficial in these cases.

For more detail on the effect of using Forward Rendering, see the Epic documentation.

If we look at the Reflection scene from Epic’s Marketplace, we can see some of the visual differences between Deferred and Forward Rendering.


Figure 13: Reflection scene with Deferred Rendering


Figure 14:  Reflection scene with Forward Rendering

While Forward Rendering comes with a loss of visual fidelity in reflections, lighting, and shadows, the rest of the scene remains visually unchanged, and the performance increase may be worth the trade-off.

If we look at a frame capture of the scene using the Deferred Rendering in the Intel GPA Frame Analyzer tool, we see that the scene is running at 103.6 ms (9 fps) with a large duration of time being taken by lighting and reflections.


Figure 15:  Capture of the Reflection scene using Deferred Rendering on Intel® HD Graphics 530

When we look at the Forward Rendering capture, we see that the scene's frame time has improved from 103.6 ms to 44.0 ms, a roughly 57 percent reduction (about 2.4x faster), with most of the time taken up by the base pass and post processing, both of which can be optimized further.


Figure 16: Capture of the Reflection scene using Forward Rendering on Intel® HD Graphics 530

Level of Detail

Static Meshes within UE4 can have thousands, even hundreds of thousands of triangles in their mesh to show all the smallest details a 3D artist could want to put into their work. However, when a player is far away from that model they won’t see any of that detail, even though the engine is still rendering all those triangles. To solve this problem and optimize our game we can use Levels of Detail (LOD) to have that detail up close, while also showing a less intensive model at a distance.

LOD Generation

In a standard pipeline, LODs are created by the 3D modeler during the creation of that model. While this method allows for the most control over the final appearance, UE4 now includes a great tool for generating LODs.

Auto LOD Generation

To auto generate Static Mesh LODs, go into that model’s details tab. On the LOD Settings panel select the Number of LODs you would like to have.


Figure 17: Creating auto generated level of details.

Clicking Apply Changes signals the engine to generate the LODs and number them, with LOD0 as the original model. In the example below, we see that generating 5 LODs takes our Static Mesh from 568 triangles down to 28, a huge optimization for the GPU.


Figure 18: Triangle and vertex count, and the screen size setting for each level of detail.

When we place our LOD mesh in scene we can see the mesh change the further away it is from the camera.


Figure 19: Visual demonstration of level of detail based on screen size.

LOD Materials

Another feature of LODs is that each one can have its own material, allowing us to further reduce the cost of our Static Mesh.


Figure 20: Material instances applied to each level of detail.

For example, the use of normal maps has become standard in the industry. However, in VR there is a problem; normal maps aren’t ideal up close as the player can see that it’s just a flat surface.

A way to solve this issue is with LODs. By having the LOD0 Static Mesh detailed to the point where bolts and screws are modeled on, the player gets a more immersive experience when examining it up close. Because all the details are modeled on, the cost of applying a normal map can be avoided on this level. When the player is further away from the mesh and it switches LODs, a normal map can then be swapped in while also reducing the detail on the model. As the player gets even further away and the mesh gets smaller, the normal map can again be removed, as it becomes too small to see.

Instanced Static Meshes

Everything brought into a scene adds draw calls to the graphics hardware; for a static mesh in a level, this applies to every copy of that mesh. If the same static mesh is repeated several times in a level, one way to optimize is to instance those meshes, reducing the number of draw calls made.

For example, here we have two spheres of 200 octahedron meshes; one set in green, and the other in blue.


Figure 21: Sphere of static and instanced static meshes.

The green set of meshes are all standard static meshes, meaning that each has its own collection of draw calls.


Figure 22: Draw calls from 200 static mesh spheres in scene (Max 569).

The blue set of meshes are a single-instanced static mesh, meaning that they share a single collection of draw calls.


Figure 23: Draw calls from 200 instanced static mesh spheres in scene (Max 143).

Looking at the GPU Visualizer for both, the Base Pass duration for the green (static) sphere is 4.30 ms and the blue (instanced) sphere renders in 3.11 ms; a duration optimization of ~27 percent in this scene.

One thing to know about instanced static meshes is that if any part of the mesh is rendered, the whole of the collection is rendered. This wastes potential throughput if any part is drawn off camera. It’s recommended to keep a single set of instanced meshes in a smaller area; for example, a pile of stone or trash bags, a stack of boxes, and distant modular buildings.


Figure 24: Instanced Mesh Sphere still rendering when mostly out of sight.

Hierarchical Instanced Static Meshes

If collections of static meshes that have LODs are used, consider a Hierarchical Instanced Static Mesh.


Figure 25: Sphere of Hierarchical Instanced Meshes with Level Of Detail.

Like a standard instanced mesh, hierarchical instances reduce the number of draw calls made by the meshes, but the hierarchical instance also uses the LOD information of its meshes.


Figure 26: Up close to that sphere of Hierarchical Instanced Meshes with Level Of Detail.

Occlusion

In UE4, occlusion culling is a system where objects not visible to the player are not rendered. This helps to reduce the performance requirements of a game as you don’t have to draw every object in every level for every frame.


Figure 27: Spread of Octahedrons.

To see the occluded objects with their green bounding boxes, you can enter r.VisualizeOccludedPrimitives 1 (0 to turn off) into the console command of the editor.


Figure 28: Viewing the bounds of occluded meshes with r.VisualizeOccludedPrimitives 1

The controlling factor of whether or not a mesh is drawn is its bounding box. Because of this, some objects may be drawn even though they are not visible to the player, because their bounding boxes are still visible to the camera.


Figure 29: Viewing bounds in the meshes details window.

If a mesh needs to be rendered before a player sees it, for additional streaming time or to let an idle animation render before being seen for example, the size of the bounding boxes can be increased under the Static Mesh Settings > Positive Bounds Extension and Negative Bounds Extension in the meshes settings window.


Figure 30: Setting the scale of the mesh’s bounds.

Because the bounding box of a complex mesh or shape always extends to the edges of that mesh, white space inside the box causes the mesh to be rendered more often than necessary. It is important to think about how mesh bounding boxes will affect the performance of the scene.

For a thought experiment on 3D model design and importing into UE4, let’s think about how a set piece, a colosseum-style arena, could be made.

Imagine we have a player standing in the center of our arena floor, looking around our massive colosseum, about to face down his opponents. When the player rotates the camera, the direction and angle of the camera define what the game engine renders. Since this arena is a set piece for our game it is highly detailed, but to save on draw calls we need to build it from solid pieces. First, we can discard the idea of the arena being one solid piece: in that case, every triangle of the arena has to be drawn, in view or not, because the whole structure is a single object. How can the model be cut up to bring it into the game more efficiently?

It depends. There are a few things that will affect our decision. First is how the slices can be cut, and second is how those slices will affect their bounding boxes for occlusion culling. For this example, let’s say the player is using a camera angle of 90 degrees, to make the visuals easier.

If we look at a pizza-style cut, we can create eight identical slices to be wheeled around a zero point to make our whole arena. While this method is simple, it is far from efficient for occlusion, as there are a lot of overlapping bounding boxes. If the player is standing in the center and looking around, their camera will always cross three or four bounds, resulting in half the arena being drawn most the time. In the worst case, with a player standing back to the inner wall and looking across the arena, all eight pieces will be rendered, granting no optimization.

Next, if we take the tic-tac-toe cut, we create nine slices. This method is not quite orthodox, but has the advantage that there are no overlapping bounding boxes. As with the pizza cut, a player standing in the center of the arena will always cross three or four bounds. However, in the worst case of the player standing up against the inner wall, they will be rendering six of the nine pieces, giving an optimization over the pizza cut.

As a final example, let’s make an apple core cut (a single center piece and eight wall slices). This method is the most common approach to this thought experiment and, with little overlap, a good way to build out the model. When the player is standing in the center they will be crossing five or six bounds, but unlike the other two cuts, the worst case for this cut is also five or six pieces rendered out of nine.

Figure 31: Thought experiment showing how a large model can be cut up, and how that effects bounding boxes and their overlap.

Cascaded Shadow Maps

Dynamic Shadow Cascades bring a high level of detail to your game, but they can be expensive and require a powerful gaming PC to run without a loss of frame rate.

Fortunately, as the name suggests, these shadows are dynamically created every frame, so can be set in game to allow the player to optimize to their preferences.

Cost of Dynamic Shadow Cascades using Intel® HD Graphics 350

The level of Dynamic Shadow Cascades can be dynamically controlled in several ways:

  • Shadow quality settings under the Engine Scalability Settings
  • Editing the integer value of r.Shadow.CSM.MaxCascades under the BaseScalability.ini file (between 0 and 4) and then changing the sg.ShadowQuality (between 0 – 3 for Low, Medium, High, and Epic)
  • Adding an Execute Console Command node in a blueprint within your game where you manually set the value of r.Shadow.CSM.MaxCascades
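The ini route maps each sg.ShadowQuality tier to a cascade count. As an illustrative excerpt only (exact defaults vary by engine version, so confirm against your own BaseScalability.ini; the values below simply follow the 0-4 range stated above):

```ini
; BaseScalability.ini (excerpt) - illustrative values only
[ShadowQuality@0]
r.Shadow.CSM.MaxCascades=1

[ShadowQuality@3]
r.Shadow.CSM.MaxCascades=4
```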


Unreal Engine 4 Optimization Tutorial, Part 3


This is part 3 of a tutorial to help developers improve the performance of their games in Unreal Engine* 4 (UE4). In this tutorial, we go over a collection of tools to use within and outside of the engine, as well as some best practices for the editor and scripting, to help increase the frame rate and stability of a project.

Scripting Optimizations

Disabling Fully Transparent Objects

Even fully transparent game objects consume rendering draw calls. To avoid these wasted calls, set the engine to stop rendering them.

To do this with Blueprints, UE4 needs multiple systems in place.

Material Parameter Collection

First, create a Material Parameter Collection (MPC). These assets store scalar and vector parameters that can be referenced by any material in the game, and can be used to modify those materials during play to allow for dynamic effects.

Create an MPC by selecting it under the Create Advanced Asset > Materials & Textures menu.


Figure 32: Creating a Material Parameter Collection.

Once in the MPC, default values for scalar and vector parameters can be created, named and set. For this optimization, we need a scalar parameter that we will call Opacity and we’ll use it to control the opacity of our material.


Figure 33: Setting a Scalar Parameter named Opacity.

Material

Next, we need a material to use the MPC. In that material, create a node called Collection Parameter. Through this node, select an MPC and which of its parameters will be used.


Figure 34: Getting the Collection Parameter node in a material.

Once the node is created, drag off its return pin to use the value of that parameter.


Figure 35: Setting the Collection Parameter in a material.

The Blueprint Scripting Part

After creating the MPC and material we can set and get the values of the MPC through a blueprint. The values can be called and changed with the Get/Set Scalar Parameter Value and Get/Set Vector Parameter Value. Within those nodes, select the Collection (MPC) to use and a Parameter Name within that collection.

For this example, we set the Opacity scalar value to be the sine of the game time, to see values between 1 and -1.


Figure 36: Setting and getting a scalar parameter and using its value in a function.

To set whether the object is being rendered, we create a new function called Set Visible Opacity with an input of the MPC’s Opacity parameter value and a static mesh component, and a Boolean return for whether or not the object is visible.

From that we run a greater-than check against a near-zero value, 0.05 in this example. A check against 0 could work, but as zero is approached the player can no longer see the object anyway, so we can turn it off just before it reaches zero. This also provides a buffer against floating-point errors that leave the scalar parameter not exactly at 0, making sure the object is turned off if the value is, say, 0.0001.

From there, run a branch where a True condition will Set Visibility of the object to be true, and a False condition to be set to false.


Figure 37: Set Visible Opacity function.
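The blueprint above is built visually, but its control flow is easy to express in plain Python as a sketch (the 0.05 cutoff comes from the text; the dict stands in for the static mesh component, which is an illustration, not engine API):

```python
VISIBILITY_EPSILON = 0.05  # the near-zero cutoff described above

def set_visible_opacity(opacity, mesh):
    """Mirror of the Set Visible Opacity blueprint: hide the mesh when its
    opacity is effectively zero, show it otherwise. Returns visibility."""
    visible = opacity > VISIBILITY_EPSILON
    mesh["visible"] = visible  # stand-in for the Set Visibility node
    return visible

mesh = {"visible": True}
set_visible_opacity(0.0001, mesh)  # floating-point near-zero: hidden
```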

Tick, Culling, and Time

If blueprints within the scene use Event Tick, those scripts are being run even when those objects no longer appear on screen. Normally this is fine, but the fewer blueprints ticking every frame in a scene, the faster it runs.

Some examples of when to use this optimization are:

  • Things that do not need to happen when the player is not looking
  • Processes that run based on game time
  • Non-Player Characters (NPC) that do not need to do anything when the player is not around

As a simple solution, we can add a Was Recently Rendered check to the beginning of our Event Tick. In this way, we do not have to worry about connecting on custom events and listeners to get our tick to turn on and off, and the system can still be independent of other actors within the scene.


Figure 38: Using the culling system to control the content of Event Tick.

Following that method, if we have a process that runs based on game time, say an emissive material on a button that dims and brightens every second, we use the method that we see below.


Figure 39: Emissive Value of material collection set to the absolute sine of time when it is rendered. 

What we see in the figure is the game time passed through the absolute value of sine, plus one, which gives a sine wave ranging from 1 to 2.

The advantage is that no matter when the player looks at this button, even if they spin in circles or stare, it always appears to be timed correctly to this curve thanks to the value being based on the sine of game time.

This also works well with modulo, though the graph looks a bit different.
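Both timing curves can be sketched as pure functions of game time, so the value is correct whenever the object is next rendered, no matter how long it was culled. The function names here are illustrative:

```python
import math

def emissive_from_sine(game_time):
    """abs(sin(t)) + 1 sweeps smoothly between 1 and 2."""
    return abs(math.sin(game_time)) + 1.0

def emissive_from_modulo(game_time, period=2.0):
    """(t mod period) / period + 1 gives a repeating ramp from 1 toward 2."""
    return (game_time % period) / period + 1.0
```

Because neither function stores any state between frames, skipping evaluation while the object is culled costs nothing: the next call lands on the curve exactly where game time says it should.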

This check can also be called later in the Event Tick. If the actor has several important tasks that must run every frame, they can execute before the render check. Any reduction in the number of nodes called per tick within a blueprint is an improvement.


Figure 40: Using culling to control visual parts of a blueprint.

Another approach to limiting the cost of a blueprint is to slow it down and only let it tick once every time interval. This can be done using the Set Actor Tick Interval node so that the time needed is set through scripting.


Figure 41: Switching between tick intervals.

In addition, the Tick Interval can be set in the Details tab of the blueprint. This allows setting when the blueprint will tick based on time in seconds.


Figure 42: Finding the Tick Interval within the Details tab.

For example, this is useful for counting seconds.


Figure 43: Setting a second counting blueprint to only tick once every second.
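The effect of a tick interval can be sketched with a simulated game loop: the tick body runs only once enough frame time has accumulated. The names below are illustrative, not UE4 API:

```python
def run_ticks(frame_dt, total_time, tick_interval):
    """Count how many times a throttled tick fires over a span of game time."""
    ticks = 0
    accumulated = 0.0
    elapsed = 0.0
    while elapsed < total_time:
        elapsed += frame_dt
        accumulated += frame_dt
        if accumulated >= tick_interval:
            ticks += 1          # the blueprint's tick body would run here
            accumulated = 0.0
    return ticks
```

Throttling a 100 fps loop to one tick per second cuts the number of tick bodies executed by roughly two orders of magnitude, which is exactly where the savings in the profiler come from.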

To see how much this optimization can reduce the average frame time, consider the following example.


Figure 44: An extremely useful example of something not to do.

Here we have a ForLoop node that counts from 0 to 10,000, setting the integer Count to the current loop index. This blueprint is so costly and inefficient that it has our scene running at 53.49 ms.


Figure 45: Viewing the cost of the extremely useful example with Stat Unit.

If we go into the Profiler we see why. This simple yet costly blueprint takes 43 ms per tick.


Figure 46: Cost of extremely useful example ticking every frame as viewed in the Profiler.

However, if we only tick this blueprint once every second, it takes 0 ms most of the time. If we look at the average time (click and drag over an area in the Graph View) over three tick cycles for the blueprint, we see that it uses an average of 0.716 ms.


Figure 47: Cost average of the extremely useful example ticking only once every second as viewed in the Profiler.

To look at a more common example: a blueprint that takes 1.4 ms per tick in a scene running at 60 fps consumes 1.4 ms × 60 = 84 ms of processing time every second. If we can reduce how often it ticks, we reduce the blueprint's total processing time.

Mass Movement, ForLoops, and Multithreading

The idea of several meshes all moving at once looks awesome and can really sell the visual style of a game. However, the processing cost can put a huge strain on the CPU and, in turn, the FPS. Thanks to multithreading and UE4’s handling of worker threads, we can break up the handling of this mass movement across multiple blueprints to optimize performance.

For this section, we will use the following blueprint scripts to dynamically move a collection of 1600 instanced sphere meshes up and down along a modified sine curve.

Here is a simple construction script to build out the grid. Simply add an Instanced Static Mesh component to an actor, choose the mesh to use for it in the Details tab, and then add these nodes to its construction.


Figure 48: Construction Script to build a simple grid.

Once the grid is created, add this blueprint script to the Event Graph.

There is something to note about the Update Instance Transform node: when the transform of any instance is modified, the change will not be visible unless Mark Render State Dirty is set to true. That is an expensive operation, however, as it goes through every mesh in the instance and marks it dirty. To save on processing, especially if the node runs multiple times in a single tick, update the meshes only at the end of the blueprint. In the script below we set Mark Render State Dirty to true only on the last index of the ForLoop, that is, when Index equals Grid Size minus one.


Figure 49:  Blueprint for dynamic movement for an instanced static mesh.
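The last-index batching in the blueprint above can be sketched as follows. InstanceStub and the method name mirror the node names in the text, but this is illustrative Python, not engine code:

```python
class InstanceStub:
    """Stand-in for one instanced-mesh entry; counts dirty marks."""
    dirty_marks = 0

    def update_instance_transform(self, transform, mark_render_state_dirty):
        self.transform = transform
        if mark_render_state_dirty:
            InstanceStub.dirty_marks += 1  # the expensive path

def update_grid(instances, new_transforms):
    """Update every instance, marking the render state dirty only once."""
    last = len(instances) - 1
    for index, transform in enumerate(new_transforms):
        instances[index].update_instance_transform(
            transform,
            mark_render_state_dirty=(index == last),  # true only on the last index
        )
```

Every transform still gets written, but the expensive dirty pass runs once per tick instead of once per instance.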

With the grid-creation construction script and the dynamic-movement event in our actor blueprint, we can place several different variants with the goal of always having 1600 meshes displayed at once.


Figure 50:  Diagram of the grid of 1600 broken up into different variations.

When we run the scene we get to see the pieces of our grid traveling up and down.


Figure 51:  Instanced static mesh grid of 1600 moving dynamically.

However, the breakdown of the pieces we have affects the speed at which our scene runs.

Looking at the chart above, we see that 1600 pieces of one Instanced Static Mesh each (negating the purpose of even using instancing) and the single piece of 1600 run the slowest, while the rest all hover around a performance of 19 and 20 ms.

The reason the individual pieces run the slowest is that the cost of running the 1600 blueprints is 16.86 ms, an average of only 0.0105 ms per blueprint. While the cost of each blueprint is tiny, the sheer number of them slows the system down, and the only available optimization is to reduce the number of blueprints running per tick. The other slowdown comes from the increased number of draw calls and mesh transform commands caused by the large number of individual meshes.

On the opposite side of the graph we see the next biggest offender, the single piece of 1600 meshes. This mesh is very efficient on draw calls, since the whole grid is only one draw call, but the cost of running the blueprint that must update all 1600 meshes per tick causes it to take 19.63 ms of time to process.

When looking at the processing time for the other three sets we see the benefits of breaking up these mass-movement actors, thanks to smaller script time and taking advantage of multithreading within the engine. Because UE4 takes advantage of multithreading, it spreads the blueprints across many worker threads, allowing the evaluation to run faster by effectively utilizing all CPU cores.

If we look at a simple breakdown of the processing time for the blueprints and how they are split among the worker threads, we see the following.

Data Structures

Using the correct type of data structure is imperative to any program, and this applies to game development as much as to any other software. When programming with blueprints in UE4, no data structures are provided beyond the templated array that acts as the main container; others can be built by hand using functions and the nodes UE4 provides.

Example of Usage

As an example of why and how a data structure could be used in game development, consider a shoot ’em up (Shmup) style game. One of the main mechanics of a Shmup is shooting thousands of bullets across the screen toward incoming enemies. While one could spawn each bullet and then destroy it, that would require a lot of garbage collection on the part of the engine and could cause a slowdown or loss of frame rate. To get around this, developers can use a spawning pool (a collection of objects placed into an array or list when the game starts), enabling and disabling bullets as needed, so the engine only needs to create each bullet once.

A common method of using these spawning pools is to grab the first bullet in the array/list that is not enabled, move it into a starting position, enable it, and then disable it when it flies off screen or into an enemy. The problem with this method comes from the run time, or Big O, of the script. Because you are iterating through the collection looking for the next disabled object, if the collection holds 5000 objects, for example, it could take up to that many iterations to find one. This type of function has a run time of O(n), where n is the number of objects in the collection.

While O(n) is far from the worst an algorithm can perform, the closer we can get to O(1), a fixed cost regardless of size, the more efficient our script and game will be. To do this with a spawning pool we use a data structure called a Queue. Like a queue in real life, this data structure takes the first object in the collection, uses it, and then removes it, continuing the line until every object has been de-queued from the front.

By using a queue for our spawning pool, we can get the front of our collection, enable it, pop it (remove it) from the collection, and immediately push it (add it) to the back, creating an efficient cycle within our script and reducing its run time to O(1). We can also add an enabled check to this cycle: if the object that would be popped is already enabled, the script instead spawns a new object, enables it, and pushes it to the back of the queue, growing the collection without reducing the efficiency of the run time.
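A minimal sketch of this queue-backed pool, using Python's deque; the Bullet class and its fields are illustrative stand-ins for whatever the game actually pools:

```python
from collections import deque

class Bullet:
    """Illustrative pooled object."""
    def __init__(self):
        self.enabled = False

class SpawningPool:
    """Queue-backed pool: take the front, enable it, push it to the back."""
    def __init__(self, size):
        self.queue = deque(Bullet() for _ in range(size))

    def spawn(self):
        front = self.queue[0]
        if front.enabled:
            # Every pooled bullet is in use, so grow the pool instead
            # of searching for a free one.
            front = Bullet()
        else:
            self.queue.popleft()
        front.enabled = True
        self.queue.append(front)   # O(1): no iteration over the pool
        return front
```

Each spawn touches only the front and back of the queue, so the cost stays constant no matter how large the pool grows.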

Queues

Below is a collection of pictures that illustrate how to implement a queue in blueprints, using functions to help maintain code cleanliness and reusability.

Pop


Figure 52: A queue pop with return implemented in blueprints.

Push


Figure 53: A queue push implemented in blueprints.

Empty


Figure 54: A queue empty implemented in blueprints.

Size


Figure 55: A queue size implemented in blueprints.

Front


Figure 56:  A queue front implemented in blueprints.

Back


Figure 57: A queue back implemented in blueprints.

Insert


Figure 58:  A queue insert with position check implemented in blueprints.

Swap


Figure 59: A queue swap with position checks implemented in blueprints.
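The queue functions pictured above map onto a plain array; here is one way to sketch them, with a Python list standing in for the blueprint array and None standing in for the empty-return case:

```python
class ArrayQueue:
    """Queue built on a plain array, mirroring the blueprint functions."""
    def __init__(self):
        self.items = []

    def push(self, value):              # add to the back
        self.items.append(value)

    def pop(self):                      # remove and return the front
        return self.items.pop(0) if self.items else None

    def empty(self):
        return len(self.items) == 0

    def size(self):
        return len(self.items)

    def front(self):
        return self.items[0] if self.items else None

    def back(self):
        return self.items[-1] if self.items else None

    def insert(self, value, position):
        if 0 <= position <= len(self.items):   # position check, as in Figure 58
            self.items.insert(position, value)

    def swap(self, i, j):
        # position checks, as in Figure 59
        if 0 <= i < len(self.items) and 0 <= j < len(self.items):
            self.items[i], self.items[j] = self.items[j], self.items[i]
```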

Stacks

Below is a collection of pictures that illustrate how to implement a stack in blueprints, using functions to help maintain code cleanliness and reusability.

Pop


Figure 60: A stack pop with return implemented in blueprints.

Push


Figure 61: A stack push implemented in blueprints.

Empty


Figure 62: A stack empty implemented in blueprints.

Size


Figure 63: A stack size implemented in blueprints.

Back


Figure 64: A stack back implemented in blueprints.

Insert


Figure 65: A stack insert with position check implemented in blueprints.
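The stack functions pictured above can be sketched the same way; the only real difference from the queue is that pop removes from the back rather than the front:

```python
class ArrayStack:
    """Stack built on a plain array, mirroring the blueprint functions."""
    def __init__(self):
        self.items = []

    def push(self, value):              # add to the top (back of the array)
        self.items.append(value)

    def pop(self):                      # remove and return the top
        return self.items.pop() if self.items else None

    def empty(self):
        return len(self.items) == 0

    def size(self):
        return len(self.items)

    def back(self):                     # peek at the top without removing it
        return self.items[-1] if self.items else None

    def insert(self, value, position):
        if 0 <= position <= len(self.items):   # position check, as in Figure 65
            self.items.insert(position, value)
```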

Back to Part 2

IoT Reference Implementation: How to Build an Environment Monitor Solution


This guide demonstrates how existing IoT solutions can be adapted to address more complex problems (e.g., solutions that require more sensor monitoring). The solution we present here, an Environment Monitor, incorporates additional hardware and extends the use of IoT software libraries (sensors and I/O). Also, the solution has been adapted so the gateway can work with multiple operating systems.

Visit GitHub for this project’s latest code samples and documentation. 

The Environment Monitor solution, shown in Figure 1, is built using an Intel® NUC Kit DE3815TYKHE, an Arduino 101* (branded Genuino 101* outside the U.S.) board, and Grove* sensors available from Seeed Studio*. The solution runs on Ubuntu* Server with the Intel® System Studio IoT Edition IDE which creates the code to enable the sensors.


Figure 1. Adapted Environment Monitor Solution

The Intel® NUC acts as a gateway for the solution. It provides plenty of compute power to function as a router, run higher-level services such as a web server, and interact with other cloud services (AWS, MQTT, etc.). However, it does not provide any I/O ports for interfacing directly with sensors, so the Arduino 101* acts as an edge device/sensor hub. Firmata*, a communication protocol, is used to control the Arduino 101 from the application running on the Intel® NUC. In turn, the gateway can be programmed using Intel® System Studio IoT Edition from the host computer.

This solution is built around MRAA (I/O library) and UPM (sensor library) from the Intel® IoT Developer Kit to interface with platform I/O and sensor data. In this case, the MRAA library provides an abstraction layer for the hardware to enable direct access to I/O on the Arduino 101 board using Firmata*. The UPM sensor library was developed on top of MRAA and exposes a user-friendly API that will allow the user to capture sensor data with just a few lines of code. Data is then sent periodically to Amazon Web Services (AWS)* using MQTT*.

The exercise in this document describes how to build the Environment Monitor solution.

From this exercise, developers will learn how to:

  • Setup the system hardware
    • Intel® NUC Kit DE3815TYKHE
    • Arduino 101* board
    • Sensors
  • Install and configure the required software
  • Connect to Cloud Services
    • Amazon Web Service (AWS)* using MQTT* 

Setup the System Hardware

This section describes how to set up all the required hardware for the Environment Monitor solution: the Intel® NUC Kit DE3815TYKHE, the Arduino 101 board, and Grove* sensors.

Intel® NUC Setup


Figure 2. Intel® NUC with bootable USB


Figure 3. Back of the Intel® NUC

Setting up the Intel® NUC for this solution consists of the following steps:

  1. Follow the Intel® NUC DE3815TYKHE User Guide (available online here) and determine if additional components, such as system memory, need to be installed. Optionally, an internal drive and/or wireless card can be added.
  2. Connect a monitor via the HDMI or VGA port and a USB keyboard. These are required for OS deployment and can be removed after the Intel® NUC has been connected to the network and a connection from the development environment has been established.
  3. Plug in an Ethernet cable from your network’s router. This step can be omitted if a wireless network card has been installed instead.
  4. Plug in the power supply for the Intel® NUC but DO NOT press the power button yet. First connect the Arduino 101 and other hardware components and then power on the Intel® NUC. 

Note: The Intel® NUC provides a limited amount of internal eMMC storage (4 GB). Consider using an internal drive or a USB thumb drive to extend the storage capacity.

Arduino 101 Setup

In general, the Arduino 101 board will be ready to use out of the box without any additional changes. A USB Type A to Type B cable is required to connect the Arduino 101 to the Intel® NUC.

Additional setup instructions for the Arduino 101 board are available at https://www.arduino.cc/en/Guide/Arduino101.

Sensors Setup

The sensors used for this project are listed in Table 1.

First, plug in the Grove* Base Shield on top of the Arduino 101 board.

Three sensors with various functions relevant to monitoring the environment have been selected:

  • Grove - Gas Sensor (MQ2) measures the concentration of several gases (CO, CH4, propane, butane, alcohol vapors, hydrogen, liquefied petroleum gas) and is connected to analog pin 1 (A1). It can detect hazardous levels of gas concentration.
  • Grove - Dust Sensor detects fine and coarse particulate matter in the surrounding air and is connected to digital pin 4 (D4).
  • Grove - Temperature, Humidity & Barometer Sensor is based on the Bosch* BME280 chip and is used to monitor temperature and humidity. It can be plugged into any of the connectors labeled I2C on the shield.

The Grove green LED acts as an indicator LED to show whether the application is running or not, and is connected to digital pin 2 (D2).

Table 1. Bill of Materials

 Component                                 Details                                    Pin Connection   Connection Type
 Base System
   Intel® NUC Kit DE3815TYKHE              Gateway
   Arduino* 101 board                      Sensor hub                                                  USB
   Grove* - Base Shield                    Arduino 101 shield                                          Shield
   USB Type A to Type B Cable              Connects Arduino 101 board to Intel® NUC
 Sensors
   Grove - Green LED                       LED indicating status of the monitor       D2               Digital
   Grove - Gas Sensor (MQ2)                Gas sensor (CO, methane, smoke, etc.)      A1               Analog
   Grove - Dust Sensor                     Particulate matter sensor                  D4               Digital
   Grove - Temp&Humi&Barometer (BME280)    Temperature, humidity, barometer sensor    Bus 0            I2C


Note: Here a green LED is used, but any color LED (red, blue, etc.) can be used as an indicator.


Figure 4. Sensor connections to the Arduino 101


Figure 5. Sensors and pin connections

Install and Configure the Required Software

This section gives instructions for installation of the operating system and connecting the Intel® NUC to the Internet, installing required software libraries, and finally cloning the project sources from a GitHub* repository.

Installing the OS: Ubuntu Server

Note: Find additional information about drivers and troubleshooting: http://www.intel.com/content/www/us/en/support/boards-and-kits/000005499.html.

Connecting the Intel® NUC to the Internet

This section describes how to connect the Intel® NUC to your network, which will enable you to deploy and run the project from a different host on the same network (e.g., your laptop). Internet access is required in order to download the additional software libraries and the project code.

The following steps list commands to be entered into a terminal (shell) on the Intel® NUC. You can connect to the Internet through an Ethernet cable or Wi-Fi*.

Ethernet
  1. After Ubuntu is installed, restart the Intel® NUC and login with your user ID.
  2. Type in the command ifconfig and locate the interface named enp3s0 (or eth0). Use the interface name for the next step.
  3. Open the network interface file using the command: vim /etc/network/interfaces and type:

    auto enp3s0
    iface enp3s0 inet dhcp

  4. Save and exit the file and restart the network service using:
    /etc/init.d/networking restart

Note: If you connect to external networks via a proxy, the proxy settings must also be configured.

Wi-Fi (optional)

This is an optional step that only applies if a wireless card has been added to the Intel® NUC.

  1. Install Network Manager using the command: sudo apt install network-manager and then install WPA supplicant using: sudo apt install wpasupplicant
  2. Once these are in place, check your Wi-Fi* interface name using ifconfig. This example uses wlp2s0. Now run these commands:
    • Add the wifi interface to the interfaces file at: /etc/network/interfaces by adding the following lines:

      auto wlp2s0
      iface wlp2s0 inet dhcp

    • Restart the networking service: /etc/init.d/networking restart
    • Run nmcli networking, nmcli n connectivity, and nmcli radio. These commands report whether the network is actually enabled; if any of them says not enabled, you will have to enable full connectivity. To enable the radio, use the following command: nmcli radio wifi on
    • Now check the connection status: nmcli device status
    • If the Wi-Fi interface shows up as unmanaged then troubleshoot.
    • To check for and add wifi connections: nmcli d wifi rescan

      nmcli d wifi
      nmcli c add type wifi con-name [network-name] ifname [interface-name] ssid [network-ssid]

    • Running nmcli c should show the connection you tried to add. If you are connecting to an enterprise network, you might have to make changes to /etc/NetworkManager/system-connections/[network-name]
    • Now bring up the connection and the network interfaces:

      nmcli con up [network-name]
      ifdown wlp2s0
      ifup wlp2s0

You can now use the Intel® NUC remotely from your development machine if you are on the same network. 

Installing the MRAA and UPM libraries

To put UPM and MRAA on your system, you can use the MRAA PPA to install the libraries. The instructions are as follows:

sudo add-apt-repository ppa:mraa/mraa
sudo apt-get update
sudo apt-get install libupm-dev python-upm python3-upm upm-examples libmraa1 mraa-firmata-fw mraa-imraa

You can also build from source:

MRAA instructions: https://github.com/intel-iot-devkit/mraa/blob/master/docs/building.md

UPM instructions: https://github.com/intel-iot-devkit/upm/blob/master/docs/building.md

Note: You’ll need CMake if you plan to build from source.

Plug in an Arduino 101* board and reboot the Intel® NUC. Once the Firmata* sketch has been flashed onto the Arduino 101, you are ready to use MRAA and UPM. If you get an error, run the command imraa -a. If you are missing dfu-util, install it after adding the MRAA PPA so you get the dfu-util that is included with MRAA.

Cloning the Git* repository

Clone the reference implementation repository with Git* on your development computer using:

$ git clone https://github.com/intel-iot-devkit/reference-implementation.git

Alternatively, you can download the repository as a .zip file. To do so, from your web browser (make sure you are signed in to your GitHub account) click the Clone or download button on the far right (green button in Figure 6 below). Once the .zip file is downloaded, unzip it, and then use the files in the directory for this example.


Figure 6

Create the Development and Runtime Environment

This section gives instructions for setting up the rest of the computing environment needed to support the Environment Monitor solution, including installation of Intel® System Studio IoT Edition, creating a project, and populating it with the files needed to build the solution.

Install Intel® System Studio IoT Edition

Intel® System Studio IoT Edition allows you to connect to, update, and program IoT projects on the Intel® NUC.

Windows Installation

Note: Some files in the archive have extended paths. We recommend using 7-Zip*, which supports extended path names, to extract the installer files.

  1. Install 7-Zip (Windows only):
    • Download the 7-Zip software from http://www.7-zip.org/download.html.
    • Right-click on the downloaded executable and select Run as administrator.
    • Click Next and follow the instructions in the installation wizard to install the application.
  2. Download the Intel® System Studio IoT Edition installer file for Windows.
  3. Using 7-Zip, extract the installer file.

Note: Extract the installer file to a folder location that does not include any spaces in the path name.

For example, DO use C:\Documents\ISS. DO NOT include spaces, such as C:\My Documents\ISS.

Linux* Installation
  1. Download the Intel® System Studio IoT Edition installer file for Linux*.
  2. Open a new Terminal window.
  3. Navigate to the directory that contains the installer file.
  4. Enter the command: tar -jxvf file to extract the tar.bz2 file, where file is the name of the installer file. For example, ss-iot-linux.tar.bz2. The command to enter may vary slightly depending on the name of your installer file.
Mac OS X® Installation
  1. Download the Intel® System Studio IoT Edition installer file for Mac OS X.
  2. Open a new Terminal window.
  3. Navigate to the directory that contains the installer file.
  4. Enter the command: tar -jxvf file to extract the tar.bz2 file, where file is the name of the installer file. For example, tar -jxvf iss-iot-mac.tar.bz2. The command to enter may vary slightly depending on the name of your installer file.

Note: If you get a message "iss-iot-launcher can’t be opened because it is from an unidentified developer", right-click the file and select Open with. Select the Terminal app. In the dialog box that opens, click Open.

Launch Intel® System Studio IoT Edition

  1. Navigate to the directory where you extracted the contents of the installer file.
  2. Open Intel® System Studio IoT Edition:
    • On Windows, double-click iss-iot-launcher.bat to launch Intel® System Studio IoT Edition.
    • On Linux, run export SWT_GTK3=0 and then ./iss-iot-launcher.sh.
    • On Mac OS X, run iss-iot-launcher.

Note: Using the iss-iot-launcher file (instead of the Intel® System Studio IoT Edition executable) will open Intel® System Studio IoT Edition with all the necessary environment settings. Use the iss-iot-launcher file to launch Intel® System Studio IoT Edition every time.

Add the Solution to Intel® System Studio IoT Edition

This section provides the steps to add the solution to Intel® System Studio IoT Edition, including creating a project and populating it with the files needed to build and run.

  1. Open Intel® System Studio IoT Edition. When prompted, choose a workspace directory and click OK.
  2. From Intel® System Studio IoT Edition, select File | New | Create a new Intel Project for IoT. Then choose Intel® Gateway 64-Bit, as shown in Figure 7, and click Next until you reach the Create or select the SSH target connection screen, as shown in Figure 8. Input the IP address of the Intel® NUC (run ifconfig on the Intel® NUC if you don't know the IP address).

    New Intel IoT Project
    Figure 7. New Intel® IoT Project.
    Adding target Connection
    Figure 8. Adding Target Connection.

  3. Now give the project the name “Environment Monitor” and in the examples choose the “Air Quality Sensor” as the How To Code Sample (shown in Figure 9) and then click Next.

    Adding Project Name
    Figure 9. Adding Project Name.

  4. The preceding steps create a How to Code Sample project. A couple of small changes convert it into the Environment Monitor project:
    • Copy the air-quality-sensor.cpp and grovekit.hpp files from the Git repository's src folder into the new project's src folder in Intel® System Studio IoT Edition, overwriting the local files.
    • Next, right-click on the project name and follow C/C++ Build → Settings → IoT WRS 64-Bit G++ Linker → Libraries, then add the libraries shown in the following screenshot. This is done by clicking the small green '+' icon at the top right of the Libraries view; the red 'x' next to it deletes libraries.

    Adding Libraries
    Figure 10. Adding libraries to the build path

  5. To run this project, first connect to the Intel® NUC using its IP address (provided earlier). This can be done from the Target Selection View tab; you can also right-click on the target (gateway device) and choose the Connect option. Enter the username/password for the Intel® NUC when prompted.

Note: Ensure the Intel® NUC and the laptop (running Intel® System Studio IoT Edition) are connected to the same network.

Setup and Connect to a Cloud Service

Amazon Web Services (AWS)*

This solution was designed to send sensor data to AWS* using the MQTT* protocol. To connect the application to a cloud service, first set up and create an account.

To set up and create an account: https://github.com/intel-iot-devkit/intel-iot-examples-mqtt/blob/master/aws-mqtt.md

The following information should now be available:

  • MQTT_SERVER - use the host value you obtained by running the aws iot describe-endpoint command, along with the ssl:// (for C++) or mqtts:// protocol (for JavaScript*)
  • MQTT_CLIENTID - use <your device name>
  • MQTT_TOPIC - use devices/<your device name>
  • MQTT_CERT - use the filename of the device certificate as described above
  • MQTT_KEY - use the filename of the device key as described above
  • MQTT_CA - use the filename of the CA certificate (/etc/ssl/certs/VeriSign_Class_3_Public_Primary_Certification_Authority_-_G5.pem)
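The application reads these values from its environment at run time. In Python the lookup could be sketched like this; the variable names come from the list above, while the placeholder defaults are our own assumptions, not part of the project:

```python
import os

def load_mqtt_config():
    """Collect the MQTT parameters the application expects from the environment."""
    return {
        "server":    os.environ.get("MQTT_SERVER", "ssl://localhost:8883"),
        "client_id": os.environ.get("MQTT_CLIENTID", "device"),
        "topic":     os.environ.get("MQTT_TOPIC", "devices/device"),
        "cert":      os.environ.get("MQTT_CERT"),   # device certificate file
        "key":       os.environ.get("MQTT_KEY"),    # device key file
        "ca":        os.environ.get("MQTT_CA"),     # CA certificate file
    }
```

Keeping the connection parameters in the environment is what lets the same binary run against different AWS endpoints without recompiling.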

Additional Setup for C++ Projects

  1. When running your C++ code on the Intel® NUC, set the MQTT* client parameters in Eclipse* as follows: go to Run Configurations and, in the commands to execute before application field, type:

    chmod 755 /tmp/;
    export MQTT_SERVER="ssl://<Your host name>:8883";
    export MQTT_CLIENTID="<Your device ID>";
    export MQTT_CERT="/home/root/.ssh/cert.pem";
    export MQTT_KEY="/home/root/.ssh/privateKey.pem";
    export MQTT_CA="/etc/ssl/certs/VeriSign_Class_3_Public_Primary_Certification_Authority_-_G5.pem";
    export MQTT_TOPIC="devices/<your device ID>"

  2. Click Apply to save these settings.
  3. Click Run.

Adding MQTT
Figure 11. Adding MQTT variables to a Run Configuration

More Information

Intel® VTune™ Amplifier, Intel® Advisor and Intel® Inspector Now Include Cross-OS Support

New York University


Consume Me

This game was an easy choice for the NYU Game Center. It’s a thoughtful, fun, and aesthetically unique game that is personal, smart, and clearly an artist's passion project.

 

 

The Team:

Jenny Jiao Hsia – As this is a solo project, Jenny did all the design, visuals, coding, and audio.

The Inspiration:

Jenny says: “When I was in high school, I started dieting and exercising a lot. I created a long list of rules that I made myself follow, such as: no eating after 7 pm; drink at least a glass of water before each meal; perform 50 sit-ups if I go over my calorie budget. When I started making games, one of the first things I learned about game design is that many games are sets of constraints that result in a particular experience. That notion of a game has always stuck with me. I thought it was interesting how my dieting experience shared many similarities to this systematic structure of games. That’s why I decided to make this personal game about my experience with dieting.”

The Game:

“Consume Me” puts the player into the mind of the dieter. These prototypes explore a three-way dynamic between the player, the character, and the fact that this character is based on Jenny, the author. What does it mean to push and prod the character into certain eating behaviors when the player doesn’t get full control of the character’s thoughts and internal state? The player is put in an awkward position of performing as the character, but only in a limited sense. These prototypes embrace an intimate and confessional mood and present a goal-oriented relationship with food using simple, but distressing mechanics. Cramming Tetris-shaped pieces of food on a plate to hit a calorie target, putting a flopping avatar through a fat-burning workout, and showing the protagonist’s distress as she tries on a crop top are mechanics which place the powerful feelings of self-consciousness and anxiety front and center, with a discomforting undercurrent of humor. Is it okay to “play” – or have fun – with someone else’s pain? By giving you permission to poke fun at her suffering, the mechanics of Jenny’s game attempt to bring humor and vulnerability to this serious and uncomfortable subject matter.

Development and Hardware:

The game was developed over the course of several semesters and included multiple builds using different hardware during Jenny’s tenure as an undergraduate at the NYU Game Center. NYU is happy to be showing the game at the Intel University Games Showcase using Intel hardware.

Drexel University


Sole

Drexel has been proud to be invited to IUGS since the beginning. It’s a great opportunity for Drexel students to have their work recognized on a national stage. Holding it at the premier game development industry conference makes it a great practical exercise in promoting their projects, as well as a strong networking opportunity, regardless of the competition outcome.

Drexel’s Digital Media program produces many student projects every year, so when they open their internal competition, student teams apply from different years (sophomores through PhD candidates), courses, and programs under the DIGM umbrella. They hold their own competition using a format similar to the actual event, modeling their judging process on the IUGS rules and adding the overall quality of the presentation to the gameplay and visual quality categories. After energetic discussions among the faculty, they choose from the 6-8 participating teams the one that will represent Drexel.

It’s an exciting process that becomes a goal for the students, especially after Drexel’s first-place win for gameplay last year. Opportunities like IUGS inspire students to stay focused on their projects as goals beyond just grades and portfolios.

The Team:

  • Nabeel Ansari – Composer who is responsible for the game’s beautiful music and audio design
  • Nina DeLucia and Vincent De Tommaso – Artists who work tirelessly to paint, model, and sculpt all of Sole’s art assets
  • Thomas Sharpe – Creative director and programmer

The Inspiration:

The team says: “The creative process for conceptualizing ‘Sole’ has been one of the most challenging and rewarding journeys of our artistic careers. We believe video games are an incredibly powerful tool for capturing abstract emotions that are hard to put into words. So in approaching the original design of Sole, we started with a particular emotion and worked backwards to find what kinds of interactions would evoke that feeling. The game’s core mechanic and thematic content were inspired by the internal struggles we’re currently facing in trying to figure out who we are and where we’re going in our personal and creative lives. In many ways, Sole is an allegory for all that uncertainty we’re feeling working through our first major artistic endeavor.”

The Game:

Sole is an abstract, aesthetic-driven adventure game where you play as a tiny ball of light in a world shrouded in darkness. The game is a quiet journey through desolate environments where you’ll explore the remnants of great cities to uncover the history of an ancient civilization. Paint the land with light as you explore an abandoned world with a mysterious past.

Sole is a game about exploring a world without light. As you move through the environment and discover your surroundings, you’ll leave a permanent trail of light wherever you go. Free from combat or death, Sole invites players of all skill levels to explore at their own pace. With no explicit instructions, players are left to discover their objective as they soak up the somber ambiance.

Development and Hardware:

The team says: “Developing the game on Intel’s newest hardware has given us the opportunity to experiment with many visual effects we previously couldn’t achieve. As a result, we are now able to incorporate DirectX 11 shaders for our grass, add more props and details to the environment, and render the world with multiple post-processing effects. The feel of our game changed dramatically once we had access to hardware that was capable of powering the latest rendering technology.”


Savannah College of Art and Design


Kyon

The Savannah College of Art and Design participated in the IUGS 2016 competition and found that it was a great experience for the students, giving them an opportunity to present their work in front of judges. This year, SCAD sent out a department-wide call for entries. Faculty members evaluated entries based on a balance of game design, aesthetics, and overall product polish. “Kyon” was chosen as the best among its peers after the students produced an interesting playable version in their first 10 weeks of development.

The Team:

  • Jonathan Gilboa – Art lead, character artist
  • Neal Krupa – Environment artist, level designer
  • Chris Miller – Tech lead, gameplay programmer
  • Jason Thomas – Prop artist, UI programmer
  • Remi Gardaz – Prop artist, character rigger
  • Erika Flusin – Environment artist, prop artist
  • Jack Lipoff – Project lead, lead designer

The Inspiration:

The team says: “We were actually in development of a totally different game last year, and while going over what types of ambient wildlife we wanted at one point, sheep were a popular option. A running joke began of having sheep play a more and more central role in the game. Eventually, we were tasked with developing a different game and we decided to just roll with what was originally just a joke and develop a sheep-herding video game.”

The Game:

Kyon is a top-down third-person adventure game in which the player assumes the role of a sheepdog named Kyon in mythological Ancient Greece. Kyon is sent by his master, Polyphemus, to find lost sheep and bring the herd home. The player must guide the herd with physical movement and special bark commands through dangerous environments filled with AI threats. All art assets are made using a PBR workflow, and the art team utilized advanced software such as NeoFur and SpeedTree for realistic effects. Level streaming allows an entire playthrough with no loading screens to interrupt gameplay.

Development and Hardware:

The team made use of Intel products in each of its machines, relying heavily on the power inside to push the boundaries of its sheep herd size and particle systems. The game was made entirely on machines using Intel technology.

SMU Guildhall


Mouse Playhouse

SMU Guildhall was asked to participate in the inaugural University Games Showcase in 2014 and proudly entered with “Kraven Manor.” The 2014 event was a great experience for both the students and the university, resulting in an invitation they look forward to annually. The team’s selection process is the same every year: a small panel of three reviews the capstone games developed over the school year. The panel members are Gary Brubaker, Director of the Guildhall; Mark Nausha, Deputy Director of the Game Lab; and Steve Stringer, Capstone Faculty. This panel uses three demanding but simple measures: 1) quality in gameplay and visuals; 2) does the game demonstrate the team game pillars of the program?; and 3) are the students excellent ambassadors of their game and the university? Guildhall has quite a few games and students that exceed the panel’s expectations, making the job of choosing only one team very difficult.

The Team:

  • Clay Howell – Game designer
  • Taylor Bishop – Lead programmer
  • Ben Gibson – Programmer
  • Jeremy Hicks – Programmer
  • Komal Kanukuntla – Programmer
  • Michael Feffer – Lead level designer
  • Alexandre Foures – Level designer
  • Steve Kocher – Level designer
  • Jacob Lavender – Level designer
  • James Lee – Level designer
  • Sam Pate – Level designer
  • Taylor McCart – Lead artist
  • Devanshu Bishnoi – Artist
  • Nina Davis – Artist
  • Taylor Gallagher – Artist
  • Mace Mulleady – Artist
  • Mitchell Massey – Usability producer
  • Mario Rodriguez – Producer

The Inspiration:

The team says: “The idea for ‘Mouse Playhouse’ started out as a companion-based platformer for PC but slowly evolved into a VR puzzle game. The team rallied on the idea of being the first VR Capstone game and developing it for a new platform. Therefore, after the team decided to do a VR game, we studied what was fun about playing in VR by playing games already on the market. We found that throwing and being able to move objects around were fun mechanics to do in VR. This is why the main mechanic of the game revolves around moving and placing objects in specific locations to solve a puzzle. Additionally, we included throwing mechanics as extra mini-games such as shooting basketballs into a toy basketball hoop and throwing darts. We wanted to use simple but fun mechanics to showcase the skills of our developers to create a game for a new platform.”

The Game:

‘Mouse Playhouse’ is a light-hearted VR puzzle game in which you manipulate objects to solve puzzles and guide your pet mice toward the cheese. You can also throw objects around, play basketball and darts, and even play the xylophone. There are 15 levels in the game, and each one presents a different challenge. Players must use the blue objects to guide the mice away from trouble and toward the cheese. During development, Unreal Engine did not yet support more than two Vive controllers or mixed reality recording, so the level designers created clever workarounds, using tools such as the Unreal Sequencer to “fake” mixed reality in the engine. This allowed the team to record gameplay and live footage on a green screen for their trailer.

Development and Hardware:

The students used Intel® Core™ i7 processor-based desktops won at a previous Intel GDC Showcase event for development. With the addition of an NVidia* 1080 GPU, these machines provided a lag-free development environment. When the students did usability testing, a clear result was that high-performance computing was required for a comfortable VR experience.

Recipe: Building and Running GROMACS* on Intel® Processors


Purpose

This recipe describes how to get, build, and run the GROMACS* code on Intel® Xeon® and Intel® Xeon Phi™ processors for better performance on a single node.

Introduction

GROMACS is a versatile package for performing molecular dynamics, using Newtonian equations of motion, for systems with hundreds to millions of particles. GROMACS is primarily designed for biochemical molecules like proteins, lipids, and nucleic acids that have a multitude of complicated bonded interactions. But, since GROMACS is extremely fast at calculating the non-bonded interactions typically dominating simulations, many researchers use it for research on non-biological systems, such as polymers.

GROMACS supports all the usual algorithms expected from a modern molecular dynamics implementation.

The GROMACS code is maintained by developers around the world. The code is available under the GNU General Public License from www.gromacs.org.

Code Access

Download GROMACS:

Workloads Access

Download the workloads:

Generate Water Workloads Input Files:

To generate the .tpr input file:

	tar xf water_GMX50_bare.tar.gz
	cd water-cut1.0_GMX50_bare/1536
	gmx_mpi grompp -f pme.mdp -c conf.gro -p topol.top -o topol_pme.tpr
	gmx_mpi grompp -f rf.mdp -c conf.gro -p topol.top -o topol_rf.tpr

Build Directions

Build the GROMACS binary using a CMake configuration with Intel® Compiler 2017.1.132, Intel® MKL, and Intel® MPI 2017.1.132:

Set the Intel Xeon Phi BIOS options to be:

  • Quadrant Cluster mode
  • MCDRAM Flat mode
  • Turbo Enabled

For Intel Xeon Phi, build the code as:

	BuildDir="${GromacsPath}/build"        # Create the build directory
	installDir="${GromacsPath}/install"
	mkdir $BuildDir

	# Source the Intel compiler, MKL, and Intel MPI environments
	source /opt/intel/<version>/bin/compilervars.sh intel64
	source /opt/intel/impi/<version>/mpivars.sh
	source /opt/intel/mkl/<version>/mklvars.sh intel64

	# Set the build environment for Intel Xeon Phi and configure
	cd $BuildDir
	FLAGS="-xMIC-AVX512 -g -static-intel"
	CFLAGS=$FLAGS CXXFLAGS=$FLAGS CC=mpiicc CXX=mpiicpc cmake .. \
	    -DBUILD_SHARED_LIBS=OFF -DGMX_FFT_LIBRARY=mkl \
	    -DCMAKE_INSTALL_PREFIX=$installDir -DGMX_MPI=ON -DGMX_OPENMP=ON \
	    -DGMX_CYCLE_SUBCOUNTERS=ON -DGMX_GPU=OFF -DGMX_BUILD_HELP=OFF \
	    -DGMX_HWLOC=OFF -DGMX_SIMD=AVX_512_KNL -DGMX_OPENMP_MAX_THREADS=256

For Intel Xeon, set the build environment and configure as above, with two changes:

  • FLAGS="-xCORE-AVX2 -g -static-intel"
  • -DGMX_SIMD=AVX2_256
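Putting the two changes together, the complete Intel Xeon configure command looks like the following sketch. It is assembled as a string and printed rather than executed, since ${GromacsPath} is a placeholder; run the printed command from $BuildDir after sourcing the same compiler, MKL, and MPI scripts as in the Xeon Phi steps.

```shell
# Sketch: the Xeon Phi cmake line above with the two documented changes
# (compiler FLAGS and -DGMX_SIMD) applied. Printed, not executed.
installDir='${GromacsPath}/install'   # placeholder path, as in the steps above
FLAGS="-xCORE-AVX2 -g -static-intel"
cmd="CFLAGS=\"$FLAGS\" CXXFLAGS=\"$FLAGS\" CC=mpiicc CXX=mpiicpc cmake .. \
-DBUILD_SHARED_LIBS=OFF -DGMX_FFT_LIBRARY=mkl -DCMAKE_INSTALL_PREFIX=$installDir \
-DGMX_MPI=ON -DGMX_OPENMP=ON -DGMX_CYCLE_SUBCOUNTERS=ON -DGMX_GPU=OFF \
-DGMX_BUILD_HELP=OFF -DGMX_HWLOC=OFF -DGMX_SIMD=AVX2_256 -DGMX_OPENMP_MAX_THREADS=256"
echo "$cmd"
```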

Build GROMACS:

	make -j 4
	sleep 5
	make check

Run Directions

Run the workloads on Intel Xeon Phi with the following environment settings and command lines (nodes.txt contains: localhost:272):


	export  I_MPI_DEBUG=5
	export I_MPI_FABRICS=shm
	export I_MPI_PIN_MODE=lib
	export KMP_AFFINITY=verbose,compact,1

	gmxBin="${installDir}/bin/gmx_mpi"

	mpiexec.hydra -genvall -machinefile ./nodes.txt -np 66 numactl -m 1 $gmxBin mdrun -npme 0 -notunepme -ntomp 4 -dlb yes -v -nsteps 4000 -resethway -noconfout -pin on -s ${WorkloadPath}water-cut1.0_GMX50_bare/1536/topol_pme.tpr
	export KMP_BLOCKTIME=0
	mpiexec.hydra -genvall -machinefile ./nodes.txt -np 66 numactl -m 1 $gmxBin mdrun -ntomp 4 -dlb yes -v -nsteps 1000 -resethway -noconfout -pin on -s ${WorkloadPath}lignocellulose-rf.BGQ.tpr
	mpiexec.hydra -genvall -machinefile ./nodes.txt -np 64 numactl -m 1 $gmxBin mdrun -ntomp 4 -dlb yes -v -nsteps 5000 -resethway -noconfout -pin on -s ${WorkloadPath}water-cut1.0_GMX50_bare/1536/topol_rf.tpr
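One way to read the launch parameters above: the Xeon Phi 7250 exposes 68 cores with 4 hardware threads each, and the PME run pairs -np 66 MPI ranks with -ntomp 4 OpenMP threads, occupying 66 of the 68 cores (leaving two cores free is an inference from the numbers, not something the recipe states). The numactl -m 1 prefix binds allocations to MCDRAM, which appears as NUMA node 1 in quadrant/flat mode. The arithmetic, as a sketch:

```shell
# Illustration only: how -np 66 relates to the Xeon Phi 7250 topology.
# spare_cores=2 is an assumed choice, inferred from the run line above.
cores=68; threads_per_core=4; omp_per_rank=4; spare_cores=2
usable_threads=$(( (cores - spare_cores) * threads_per_core ))
ranks=$(( usable_threads / omp_per_rank ))
echo "$ranks"   # 66, matching -np 66 in the PME run
```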

Run the workloads on Intel Xeon with the following environment settings and command lines:


	export  I_MPI_DEBUG=5
	export I_MPI_FABRICS=shm
	export I_MPI_PIN_MODE=lib
	export KMP_AFFINITY=verbose,compact,1

	gmxBin="${installDir}/bin/gmx_mpi"

	mpiexec.hydra -genvall -machinefile ./nodes.txt -np 72 $gmxBin mdrun -notunepme -ntomp 1 -dlb yes -v -nsteps 4000 -resethway -noconfout -s ${WorkloadPath}water-cut1.0_GMX50_bare/1536_bdw/topol_pme.tpr
	export KMP_BLOCKTIME=0
	mpiexec.hydra -genvall -machinefile ./nodes.txt -np 72 $gmxBin mdrun -ntomp 1 -dlb yes -v -nsteps 1000 -resethway -noconfout -s ${WorkloadPath}lignocellulose-rf.BGQ.tpr
	mpiexec.hydra -genvall -machinefile ./nodes.txt -np 72 $gmxBin mdrun -ntomp 1 -dlb yes -v -nsteps 5000 -resethway -noconfout -s ${WorkloadPath}water-cut1.0_GMX50_bare/1536_bdw/topol_rf.tpr

Performance Testing

Performance tests for GROMACS are illustrated below with comparisons between an Intel Xeon processor and an Intel Xeon Phi processor against three standard workloads: water1536k_pme, water1536k_rf, and lignocellulose3M_rf. In all cases, turbo mode is turned on.
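When comparing runs, the throughput figure GROMACS reports at the end of each md.log (a “Performance:” line whose first column is ns/day) can be extracted with a one-liner. The snippet below fabricates a sample log line to stand in for a real run, since actual numbers depend on the platform:

```shell
# Sketch: pull ns/day out of a GROMACS md.log for side-by-side comparison.
# The printf writes a stand-in line; real runs produce this footer themselves.
printf 'Performance:      25.000       0.960\n' > md.log   # ns/day, hour/ns
ns_per_day=$(awk '/^Performance:/ {print $2}' md.log)
echo "$ns_per_day"   # 25.000
```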

Testing Platform Configurations

The following hardware was used for the above recipe and performance testing.

Intel® Xeon® Processor E5-2697 v4:

  • Stepping: 1 (B0)
  • Sockets / TDP: 2S / 290W
  • Frequency / Cores / Threads: 2.3 GHz / 36 / 72
  • DDR4: 8x16 GB 2400 MHz (128 GB)
  • MCDRAM: N/A
  • Cluster/Snoop Mode/Mem Mode: Home
  • Turbo: On
  • BIOS: GRRFSDP1.86B.0271.R00.1510301446
  • Compiler: ICC-2017.1.132
  • Operating System: Red Hat Enterprise Linux* 7.2 (kernel 3.10.0-327.el7.x86_64)

Intel® Xeon Phi™ Processor 7250:

  • Stepping: 1 (B0) Bin1
  • Sockets / TDP: 1S / 215W
  • Frequency / Cores / Threads: 1.4 GHz / 68 / 272
  • DDR4: 6x16 GB 2400 MHz
  • MCDRAM: 16 GB Flat
  • Cluster/Snoop Mode/Mem Mode: Quadrant/flat
  • Turbo: On
  • BIOS: GVPRCRB1.86B.0011.R04.1610130403
  • Compiler: ICC-2017.1.132
  • Operating System: Red Hat Enterprise Linux 7.2 (kernel 3.10.0-327.13.1.el7.xppsl_1.3.3.151.x86_64)

GROMACS Build Configurations

The following configurations were used for the above recipe and performance testing.

  • GROMACS Version: GROMACS-2016.1
  • Intel® Compiler Version: 2017.1.132
  • Intel® MPI Library Version: 2017.1.132
  • Workloads used: water1536k_pme, water1536k_rf, and lignocellulose3M_rf

University of Central Florida


The Channeler

FIEA's decision to participate was an easy one. In their view, IUGS has turned into a great celebration and showcase of student games at GDC and they love competing with peer programs. “The Channeler” was a great game for them to pick because it has a mix of innovation, gameplay, and beautiful art. Also, because it uses eye-tracking (through a partnership with the Tobii Eye Tracker) as its main controller, they believe it will really stand out from the rest of the field.

The Team:

  • Summan Mirza – Project lead
  • Nihav Jain – Lead programmer
  • Derek Mattson – Lead designer
  • Alex Papanicolaou – Lead artist
  • Peter Napolitano – Technical designer
  • Raymond Ng – Technical designer/art manager
  • Claire Rice – Environment/UI artist
  • KC Brady – Environment artist/UX design
  • Matt Henson – Gameplay programmer
  • Steven Ignetti – Gameplay/AI programmer
  • Yu-Hsiang Lu – SDK programmer
  • Kishor Deshmukh – UI programmer

The Inspiration:

Summan Mirza says: “The original pitch started with the idea of innovating by creating gameplay that could not be replicated with traditional controllers. Eye-tracking, as a mechanic, had many fascinating possibilities that lent itself well to immersing a player in a world. After all, the vast majority of us already use our eyes heavily to play games! We had first looked into developing ‘The Channeler’ by installing eye-tracking into a VR headset, but found that modding one to use eye-tracking was too problematic and expensive at the time. Instead, we found the Tobii EyeX, a slim and affordable eye-tracking peripheral that did not carry the cumbersome and obtrusive nature of headset-style peripherals. From then on, we mass-prototyped aspects such as narrative, combat, and puzzles, and surprisingly found the breadth of possibilities for eye-tracking-based gameplay vast and exciting, but sadly, beyond our scope. So, we focused on using eye-tracking for puzzle-based gameplay. The kooky City of Spirits in The Channeler formed the perfect wrapper for this. It allowed us to create some out-of-the-box puzzles that worked well in such a silly world, and the feeling of controlling the world with your eyes really gave players a sense of possessing an otherworldly power.”

The Game:

“The Channeler” takes place in a kooky city of spirits, where the denizens are plagued by mysterious disappearances. Fortunately, you are a Channeler. Gifted with the “Third Eye,” you possess a supernatural ability to affect the world around you with merely your sight. Explore the spooky night market and solve innovative puzzles to find the missing spirits! Innovation is what really sets The Channeler apart from other games; not many games out there use eye-tracking as a main mechanic.  Whether it’s trying to beat a seedy ghost in a shuffling shell game, tracing an ancient rune with your gaze, or confronting possessed statues that rush toward you with every blink—our game utilizes eye movement, blinking, and winking mechanics that provide only a sample of the vast possibilities for eye-tracking games.

Development and Hardware:

Summan Mirza says: “Game development brings up many challenges along the way, especially when troubleshooting hardware issues. However, using Intel hardware for ‘The Channeler’ was virtually effortless. We never had to worry about things like frame-rate issues, even though our game heavily used taxing graphical aspects such as overlapping transparencies with the ghosts. To put it simply, working with Intel hardware was certainly the smoothest part of the game development process.”

University of Utah


Wrecked: Get Your Ship Together

In the University of Utah’s Entertainment Arts and Engineering program, all undergraduate capstone projects and all master’s student projects are automatically entered into a university event where faculty not involved in the game projects review all entries and select four finalists. A subcommittee of three faculty members then chooses the winner. This year, “Wrecked” was chosen.

The Team:

  • Matt Barnes – Lead artist
  • Jeff Jackman – Artist
  • Brock Richards – Lead technical artist
  • Sam Russell – Lead engineer
  • Samrat Pasila – Engineer
  • Yash Bangera – Engineer
  • Michael Brown – Engineer
  • Shreyas Darshan – Engineer
  • Sydnie Ritchie – Producer, team lead

The Inspiration:

Bob Kessler, Executive Director and founder of the School of Computing, says: “I was co-teaching the class when the games were originally created. We went through a long process that involved studying award-winning games, coming up with many different ideas, narrowing those down to a handful, paper prototyping, and then digital prototyping to try to find the fun. Sydnie's team had a goal to try to solve the problem that the VR experience is typically for one person at a time. For example, if you had a party, then only one person can experience the game. They decided that by integrating players using their mobile phones with the person using the VR headset, it would be a better experience for all.”

The Game:

“Wrecked: Get Your Ship Together” is the living-room party game for VR! One player is on the Vive while everyone else in the room plays on their mobile phones. Together, they must repair their ship to escape the planet on time. The Vive player must navigate a foreign planet, both on foot and by hover ship, to scavenge parts and repair the team’s mothership. The mobile players command helpful drones which follow the player and give aid. Specifically, the mobile players can give directional guidance, or they can obtain speed boosts for their captain by successfully executing orders.

Another problem specific to VR is traversing world-scale environments in a room-scale experience; a living room is generally a bit smaller than the world of Skyrim. The development team’s solution is to give the player a hover ship. This means their actual physical chair is part of the play space. When they sit, they can fly around at world scale. When they stand up, they can experience the full joys of room scale.

The development team feels both the mobile integration with VR and the physical augmentation of the game are compelling, and they are excited to be exploring this new space.

Development and Hardware:

The EAE studio has over 100 computers, all with Intel hardware inside. In addition to fast Intel processors, the team’s machines were equipped with donated SSDs, which eliminate the latency of file and asset access.
