
Exploring Smart Remote Control Airplanes with the Arduino 101* (branded Genuino 101* outside the U.S.)


Introduction:

Remote control airplanes are a fun hobby whether you are young or old.  The technology and kits offered today have evolved considerably over the years, yielding smaller footprints, lower costs, and more integrated electronics.  This article explores emulating a three-channel smart remote control airplane, with a focus on the hardware interface and software.

The Arduino 101* (branded Genuino 101* outside the U.S.) with the Intel® Curie™ module is a good choice for this example given the module's low-latency, microcontroller-like characteristics, low power consumption, PWM peripheral, and integrated accelerometer and gyroscope.  The Arduino 101 forms the heart of the solution: it gathers user input from a remote control, drives a DC motor that represents the propeller, and drives two servo motors that represent the rudder and elevator controls respectively. The rudder controls the left and right “steering” of the plane and the elevator controls the climb and dive of the plane.

Since the Intel Curie module also comes packaged with an accelerometer and gyroscope, we will also incorporate an advanced “learning/beginner” mode. Once selected, this mode detects dramatic/steep changes to rates of climb/descent or turns and automatically counters the movement to prevent crashes.

 

Hardware Interface:

The various hardware components of this prototype are:

  • 1 x Arduino 101 (https://www.arduino.cc/en/Main/ArduinoBoard101)
  • 2 x Micro Servos (https://www.sparkfun.com/products/9065)
  • 1 x ArduMoto Motor Shield (https://www.sparkfun.com/products/9815)
  • 1 x Brushed DC Motor (https://www.sparkfun.com/products/11696)
  • 1 x R415x 2.4GHz Receiver (https://hobbyking.com/media/file/672761531X1606554X3.pdf)
  • 2.4GHz Transmitter (https://www.rcplanet.com/ParkZone_2_4GHz_4_Channel_Transmitter_Spektrum_DSM_p/pkz3341.htm?gclid=CLz6uPnvsNICFVCBfgodFcUKqw)

The brushed DC motor is used as the throttle, in conjunction with the Ardumoto shield, which provides the H-bridge drivers for energizing the motor and inputs for controlling the motor speed and direction. The hardware connections are listed below, followed by a short pin-setup sketch.

  1. The motor terminals connect to the motor A port screw terminals on the shield.
  2. The Arduino 101 is able to control the motor speed using the Pin 3 PWMA pin as indicated on the shield.
    1. The motor only needs to spin in the clockwise direction and is configured by grounding Pin 12 DIRA on the shield.
  3. The rudder and elevator are controlled using servos with the control wires connected to PWM Pins 9 and 6 respectively.
  4. The remote control receiver output pin is connected as a Digital Input to Pin 2 on the Arduino 101.
  5. All components are powered by +5V.
  6. The Arduino 101 device is powered with an external power supply.
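
If the DIRA pin is driven from the sketch rather than hard-wired to ground, a minimal setup() fragment for the motor pins might look like the sketch below. The pin numbers come from the list above; the DIRA_PIN name and driving it LOW are assumptions chosen to match the clockwise-only requirement.

#define THROTTLE_PIN 3   //PWMA on the Ardumoto shield
#define DIRA_PIN     12  //Motor A direction input (name assumed for illustration)

void setup() {
  pinMode(THROTTLE_PIN, OUTPUT);
  pinMode(DIRA_PIN, OUTPUT);
  digitalWrite(DIRA_PIN, LOW);   //Clockwise only, per the wiring notes above
  analogWrite(THROTTLE_PIN, 0);  //Keep the motor off until the throttle channel is decoded
}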

Conceptual model of the plane

 

Hardware Diagram

 

Processing Data From the Receiver:

Data Stream Format:

 

cPPM Data Stream

The transmitter used in this example contains left and right sticks for user input.  The transmitter sends the stick position values to the receiver, which then outputs them to the Arduino 101 for processing. The receiver output connects to the Arduino 101 using a single digital I/O pin.  The receiver outputs data in a cPPM (Combined Pulse Position Modulation) format that the Arduino 101 can decode to determine the channel values sent from the transmitter.  The data stream contains a start pulse and 6 channel pulses as shown in the oscilloscope capture above.

 

Start Pulse

The start pulse is the longest pulse in the data stream and is the beginning of a frame.  The start pulse for the receiver in this example measures around 12.48ms as shown by the blue cursors in the oscilloscope capture above.

 

Channel Pulses

After the start pulse, there are six channel pulses that will have varying pulse widths depending on the position of the left and right sticks on the transmitter.  The channel 1 pulse width in this example measures 1.16ms as shown by the blue cursors in the oscilloscope capture above. 

 

Decoding the data:

To decode the data stream, the pulses are measured to detect the start pulse, and then the time of each channel pulse is measured.  The resources used on the Arduino 101 to decode the data stream are a Timer and an Interrupt on Pin Change.  The Timer is used to count the number of microseconds between edges and the interrupt is used to detect edge transitions.  To initialize the Interrupt on Pin Change and Timer, see the setup() function code below:

#define cPPM_PIN 2
unsigned long startT=0;
void setup() {
   pinMode(cPPM_PIN, INPUT);
   attachInterrupt(digitalPinToInterrupt(cPPM_PIN), isr, CHANGE);  //Trigger isr() on every edge
   startT=micros();
}

The interrupt service routine will trigger on each edge transition in the data stream to detect the start pulse and measure each channel pulse width using the Timer.  The start pulse is detected to determine the start of the current data stream frame and then each channel pulse width is measured to determine the channel value.  The pulse width is measured by subtracting the previous timer value from the current timer value during each interrupt.  The channel values are continuously updated in the isr and are stored in the channel array.  The interrupt service routine isr() implementation is shown in the code below.

#define SOF_TIME 11000
#define NUM_CHANNELS 6
unsigned long endT=0;
unsigned long pwT=0;
int cppm=0;
bool sof=false;
int count=0;
volatile int channel[NUM_CHANNELS];  //Updated in the isr and read in the main loop
void isr()
{
    //Save End Time
    endT = micros();

    //Read cPPM Pin
    cppm = digitalRead(cPPM_PIN);

    //Measure Pulsewidth Time
    pwT = endT - startT;
    startT = endT;

    //SOF Pulse Detect
    if (cppm == LOW && pwT > SOF_TIME)
    {
      sof = true;
    }

    //Channel Pulse Detect
    else if (cppm == LOW && sof)
    {
      channel[count++] = pwT;
      if (count == NUM_CHANNELS)
      {
        sof = false;
        count = 0;
      }
    }
} 

 

Correlating the data:

After decoding the data stream, the data is correlated to map each transmitter stick movement to a channel number and to find each channel's minimum and maximum pulse width times.  After experimenting with the ranges of the left and right sticks on the transmitter, the correlation table below was produced.  This table is used in the next sections, where controlling the airplane with the rudder, elevator, and throttle is discussed.

Channel | Control | Stick | Direction | Min Value (us) | Max Value (us)
1 | Rudder | Right | Left/Right | 790 (Left) | 1615 (Right)
2 | Elevator | Right | Up/Down | 790 (Down) | 1627 (Up)
3 | Throttle | Left | Up/Down | 790 (0%) | 1630 (100%)
5 | Flight Intelligence | Right | Push | 790 (Enabled) | 1630 (Disabled)

Below is some test code that can be put in the main loop to print out the channel values (this assumes Serial.begin(9600) has been called in setup()).

for (int i=0; i < NUM_CHANNELS; i++)
    Serial.println(channel[i]);
Serial.println("");
delay(250);

 

 

Controlling the Rudder and Elevator:

The rudder steers the airplane left or right and the elevator allows the airplane to climb or descend.  Individual servos are used to move the rudder and elevator.  The servo control lines are connected to the Arduino 101 using two PWM pins.  The Arduino 101 Servo library is used to quickly interface to the servos.  For example, the servos are initialized for rudder and elevator control in the setup() function below.

#include <Servo.h>
#define RUDDER_PIN  9
#define ELEVATOR_PIN 6
Servo rudder;
Servo elevator;
void setup() {
  rudder.attach(RUDDER_PIN);
  elevator.attach(ELEVATOR_PIN);
 }

After the servos are initialized, the main loop maps the channel values measured in isr() to servo position values.  From the correlation table above, the minimum and maximum channel values are mapped to the minimum and maximum positions that each servo should move to.  This interpolation is easy to do with the map() function.  The servos are actuated by calling the write() function.  The example main loop code for rudder and elevator control is shown below.

#define RUDDER_CHANNEL      0
#define ELEVATOR_CHANNEL    1

#define RUDDERVAL_MIN     790
#define RUDDERVAL_MAX     1615
#define RUDDERSERVO_MIN   0
#define RUDDERSERVO_MAX   180

#define ELEVATORVAL_MIN   790
#define ELEVATORVAL_MAX   1627
#define ELEVATORSERVO_MIN 0
#define ELEVATORSERVO_MAX 180

int rudderVal=0;
int elevatorVal=90;

void loop() {
  rudderVal   = map(channel[RUDDER_CHANNEL],RUDDERVAL_MIN,RUDDERVAL_MAX,RUDDERSERVO_MIN,RUDDERSERVO_MAX);
  elevatorVal = map(channel[ELEVATOR_CHANNEL],ELEVATORVAL_MIN,ELEVATORVAL_MAX,ELEVATORSERVO_MIN,ELEVATORSERVO_MAX);
  rudder.write(rudderVal);
  elevator.write(elevatorVal);
  delay(100);
}

 

Controlling the Throttle:

The throttle is used to control the air speed of the airplane and is connected to an Arduino 101 PWM pin.  Increasing the throttle increases the PWM duty cycle, which causes the motor to spin the propeller faster.  Decreasing the throttle decreases the PWM duty cycle, which causes the motor to spin the propeller slower.  The example implementation of the main loop below shows how to accomplish simple throttle control.  A similar channel-mapping interpolation is used to convert the throttle percentage to a PWM duty cycle value.  This value is simply passed in a call to analogWrite() to adjust the motor speed.

#define THROTTLE_PIN 3
#define THROTTLE_CHANNEL    2
#define THROTTLEVAL_MIN   790
#define THROTTLEVAL_MAX   1630
#define THROTTLEMOTOR_MIN 0
#define THROTTLEMOTOR_MAX 255
int throttleVal=0;
void loop() {
  throttleVal = map(channel[THROTTLE_CHANNEL],THROTTLEVAL_MIN,THROTTLEVAL_MAX,THROTTLEMOTOR_MIN,THROTTLEMOTOR_MAX);
  analogWrite(THROTTLE_PIN, throttleVal);
  delay(100);
}

 

Flight Intelligence:

One of the challenges when flying remote control airplanes is getting used to the sensitivity of the control sticks on the transmitter.  It is very easy to apply too much stick during a flight, which can cause you to lose control of the airplane.  Using the onboard accelerometer and gyroscope, closed-loop flight intelligence can be added to measure and correct steep banking angles during turns and steep dive angles while descending or climbing.  Correcting these conditions could prevent the plane from stalling or crashing.  To enter or exit the flight intelligence mode, the right control stick is pushed in to toggle the mode. This correlates to channel 5, as indicated in the channel correlation table discussed earlier.

Measuring the Angles:

To measure the angles of the airplane during flight, real-time data received from the sensors is converted to roll and pitch values.  The roll value gives the angle when the airplane is turning left or right.  The pitch value gives the angle when climbing or descending.  The sign of the measured angle determines the direction for a given pitch or roll value.  Please see the table below for correlating roll and pitch values to the airplane's movements.  Please also note that the values assume the front orientation of the Arduino 101 board is the side where the ICSP header resides.

Airplane Movement | Measured Angle
Left Turn | Negative Roll
Right Turn | Positive Roll
Climbing | Negative Pitch
Descending | Positive Pitch

Correcting the Angles:

When the system detects that the roll or pitch threshold angles are exceeded, it can override the user's input and actuate the rudder or elevator servos intelligently to bring the roll or pitch angles back below the threshold while holding the throttle steady.  The implementation is achievable using the CurieIMU library to read the sensor data and the Madgwick filter to compute the pitch and roll angles.  An example flowchart, initialization of the sensors, and a modified implementation of the main loop are shown below.

 

//Initialization
#include <CurieIMU.h>
#include <MadgwickAHRS.h>
#define RATE                    25    //Hz
#define ACCEL_RANGE             2     //G
#define GYRO_RANGE              250   //Degrees/Second
Madgwick filter;                      //Filter instance (declaration assumed; used by the main loop below)
void setup() {
  CurieIMU.begin();
  CurieIMU.setGyroRate(RATE);
  CurieIMU.setAccelerometerRate(RATE);
  filter.begin(RATE);
  CurieIMU.setAccelerometerRange(ACCEL_RANGE);
  CurieIMU.setGyroRange(GYRO_RANGE);
}

//Main Loop
#define ROLL_THRESHOLD          15
#define PITCH_THRESHOLD         25
#define CENTER_ELEVATOR         90
#define CENTER_RUDDER           90
#define UP_ELEVATOR             (CENTER_ELEVATOR+30)
#define LEFT_RUDDER             (CENTER_RUDDER-30)
#define RIGHT_RUDDER            (CENTER_RUDDER+30)
#define HALF_THROTTLE           128
#define FLIGHTMODE_CHANNEL      4     //Channel 5 (zero-based index); assumed, not in the original listing
#define FLIGHTMODE_THRESHOLD    1200  //Assumed midpoint between ~790 (enabled) and ~1630 (disabled)
void loop() {
  int aix, aiy, aiz;
  int gix, giy, giz;
  float ax, ay, az;
  float gx, gy, gz;
  int roll, pitch;

  //If Flight Intelligence Disabled, Regular User Control
  if (channel[FLIGHTMODE_CHANNEL] > FLIGHTMODE_THRESHOLD)
  {
    rudderVal   = map(channel[RUDDER_CHANNEL],RUDDERVAL_MIN,RUDDERVAL_MAX,RUDDERSERVO_MIN,RUDDERSERVO_MAX);
    elevatorVal = map(channel[ELEVATOR_CHANNEL],ELEVATORVAL_MIN,ELEVATORVAL_MAX,ELEVATORSERVO_MIN,ELEVATORSERVO_MAX);
    throttleVal = map(channel[THROTTLE_CHANNEL],THROTTLEVAL_MIN,THROTTLEVAL_MAX,THROTTLEMOTOR_MIN,THROTTLEMOTOR_MAX);
  }
  //If Flight Intelligence Enabled
  else
  {
    //Measure Pitch and Roll Angles
    CurieIMU.readMotionSensor(aix, aiy, aiz, gix, giy, giz);
    ax = convertRawAcceleration(aix);
    ay = convertRawAcceleration(aiy);
    az = convertRawAcceleration(aiz);
    gx = convertRawGyro(gix);
    gy = convertRawGyro(giy);
    gz = convertRawGyro(giz);
    filter.updateIMU(gx, gy, gz, ax, ay, az);
    roll = (int)filter.getRoll();
    pitch = (int)filter.getPitch();

    //If Steep Dive
    if (pitch >= 0 && abs(pitch) >= PITCH_THRESHOLD)
    {
      //Apply Up Elevator
      elevatorVal = UP_ELEVATOR;

      //Apply 50% Throttle
      throttleVal = HALF_THROTTLE;
    }

    //If Steep Climb
    else if (pitch < 0 && abs(pitch) >= PITCH_THRESHOLD)
    {
      //Apply Center Elevator
      elevatorVal = CENTER_ELEVATOR;

      //Apply 50% Throttle
      throttleVal = HALF_THROTTLE;
    }

    //If Right Turn
    else if (roll >= 0 && abs(roll) >= ROLL_THRESHOLD)
    {
      //Apply Left Rudder
      rudderVal = LEFT_RUDDER;

      //Apply 50% Throttle
      throttleVal = HALF_THROTTLE;
    }

    //If Left Turn
    else if (roll < 0 && abs(roll) >= ROLL_THRESHOLD)
    {
      //Apply Right Rudder
      rudderVal = RIGHT_RUDDER;

      //Apply 50% Throttle
      throttleVal = HALF_THROTTLE;
    }

    //Else Flight Mode User Control
    else
    {
      //Map Channel Values to Control Values
      rudderVal   = map(channel[RUDDER_CHANNEL],RUDDERVAL_MIN,RUDDERVAL_MAX,RUDDERSERVO_MIN,RUDDERSERVO_MAX);
      elevatorVal = map(channel[ELEVATOR_CHANNEL],ELEVATORVAL_MIN,ELEVATORVAL_MAX,ELEVATORSERVO_MIN,ELEVATORSERVO_MAX);
      throttleVal = map(channel[THROTTLE_CHANNEL],THROTTLEVAL_MIN,THROTTLEVAL_MAX,THROTTLEMOTOR_MIN,THROTTLEMOTOR_MAX);
    }
  }
  rudder.write(rudderVal);
  elevator.write(elevatorVal);
  analogWrite(THROTTLE_PIN, throttleVal);
  delay(100);
}
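
The listing above calls convertRawAcceleration() and convertRawGyro(), which are not shown.  A minimal sketch of these helpers, assuming the ±2 g accelerometer range and 250 degrees/second gyroscope range configured in setup(), is shown below.

float convertRawAcceleration(int aRaw) {
  //With a +/-2 g range, -32768 maps to -2 g and 32767 maps to +2 g
  return (aRaw * 2.0) / 32768.0;
}

float convertRawGyro(int gRaw) {
  //With a +/-250 degrees/second range, -32768 maps to -250 and 32767 maps to +250
  return (gRaw * 250.0) / 32768.0;
}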

 

Conclusion:

We discussed the hardware and software used in an example smart three-channel remote control airplane.  We laid the groundwork by learning how to process the receiver data and how to use the motor and servo components to control the airplane.  We concluded by adding real-time flight intelligence using the sensors onboard the Intel Curie module to keep the airplane flying under control.  The concepts discussed in this article can also be applied to other types of projects that use remote control, servos, and motor control.

 

About the Author:

Mike Rylee is a Software Engineer at Intel Corporation with a background in developing embedded systems and apps for Android*, Windows*, iOS*, and Mac*.  He currently works on Internet of Things projects.

 

Notices:

No license (express or implied, by estoppel or otherwise) to any intellectual property rights is granted by this document.

Intel disclaims all express and implied warranties, including without limitation, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement, as well as any warranty arising from course of performance, course of dealing, or usage in trade.

This document contains information on products, services and/or processes in development. All information provided here is subject to change without notice. Contact your Intel representative to obtain the latest forecast, schedule, specifications and roadmaps.

The products and services described may contain defects or errors known as errata which may cause deviations from published specifications. Current characterized errata are available on request.

Copies of documents which have an order number and are referenced in this document may be obtained by calling 1-800-548-4725 or by visiting www.intel.com/design/literature.htm.

Intel, the Intel logo, and Intel RealSense are trademarks of Intel Corporation in the U.S. and/or other countries.

 

*Other names and brands may be claimed as the property of others

**This sample source code is released under the Intel Sample Source Code License Agreement

© 2017 Intel Corporation.



Meet the “Action Ribbon”


This dark blue band is one of the key features we've implemented to ensure all the top resources developers look for are within easy reach.

 

 

 


Specifications

  • Only approved items are allowed in the action ribbon (please contact the DX team if you wish to request an addition)
  • The ribbon is not a navigation bar
  • The ribbon can't be placed at the very top of the page (there always needs to be a content block between the IDZ top area and the ribbon)
  • You must have at least 3 items in the action ribbon 
  • You can have up to 6-7 items in the action ribbon (the number depends on how wordy the chosen items are)
  • Intel brand names are not allowed in the action ribbon
  • 3rd party brand names may be allowed if they represent known resources developers seek out, e.g., "GitHub Repo", "Twitch Stream", etc.

 


 

Approved Action Ribbon Items*

Most of these items can't be changed (icon or copy). A few, such as "Training Guide", can have subtle changes made.

*Subject to change without notice.

Approved Action Ribbon Items

 

 

DX "Artifacts" Explained



Personas

A persona is a representation of a user, typically based on user research and incorporating user goals, needs, and interests. Cooper categorizes personas into three types. Each has its own advantages and shortcomings. Marketing personas focus on demographic information, buying motivations and concerns, shopping or buying preferences, marketing messages, media habits, and the like. Proto-personas are used when there is no money or time to create true research-based personas; they are based on secondary research and the team's educated guess about who they should be designing for. Design personas focus on user goals, current behavior, and pain points as opposed to buying or media preferences and behaviors. They are based on field research and real people.

Persona Example

  • They reflect patterns observed in research
  • They focus on the current state, not the future
  • They are realistic, not idealized
  • They describe a challenging (but not impossible) design target
  • They help you understand users'
    • Context
    • Behaviors
    • Attitudes
    • Needs
    • Challenges/pain points
    • Goals and motivations

 


OOBE & User Flows

Users engage with organizations across various channels, including the web, email, mobile devices, kiosks, online chat, and by visiting physical locations (such as storefronts or service centers). An organization, like ours, with a multichannel ecosystem should ensure independent channel interactions coordinate to create one cohesive, consistent customer experience.

OOBE flow

  • Consistency
    • Familiarity and confidence
    • Learnability
    • Efficiency
    • Trust
  • Seamlessness
  • Optimization
  • Orchestration
  • Collaboration

Content Outline

Content Outline (a TOC for your copy)

Every area, page or task should have a goal. This outline will help you discover the purpose, determine what is valuable and prioritize it. Think of it like a "Table of Contents" for your copy. The first item is your most important piece of content and whatever is at the bottom may be cut if the page becomes overwhelming.

  • How much do you need to share with developers?
  • How much would they find interesting?
  • If a developer only sees one thing, what would that be? Remember, nothing is equal.
  • If a developer glances down the outline, will they get a good idea of what your tool will do for them?
  • Include any known links you want to feature

 


Sitemaps

Designing a new space can be a daunting process, only made more complicated by the volume of information that sometimes needs to be organized and incorporated. A sitemap is a centralized planning tool that can help organize and clarify the content that needs to be on your site, as well as help you eliminate unnecessary pages. 

Sitemap

  • Provides scope of your project 
  • Indicates all security requirements
  • Shows how your pages connect with external sites

 


Wireframes

Before designs are started, our team will create wireframes based on the content outline/sitemap. The aim of a wireframe is to provide a visual understanding of a page early in a project, to get approval before the creative phase gets under way.

  • Validates that the content outline is delivering on the “goal” of the page
  • Provides guidance for character count and visual support (images/graphics)
  • Ensures nothing is missing (that might be missed in copy form)

 


Copy

Each tool team is expected to provide a refined draft of their copy in GC (not in a Word doc) that has been validated by all stakeholders. Our editorial team will:

Gather Content (online copy tool)

  • Check for TM&B
  • Ensure your copy is talking to the right level of developer for your tool
  • Ensure your copy is clear and easy to understand
  • Validate that your copy is following the goal and outline

 


Design "Comps"

Not all pages will be provided in a visual “comp”. Our team will determine which pages require this step. Note:

Design "Comps"

  • Designs are for visual reference and will not have the latest copy
  • Comps will only be updated if the design of a page changes extensively (and not for copy changes)

 


Web Page Previews

When you receive preview links to your web page, they will have a lot of pink. Don’t worry.

Web Page Preview

  • Design & copy are locked prior to web page creation. Any changes may affect your launch date
  • Check for layout, content and links (there will be minimal differences between these pages and live)
  • Your custom tool menu/footer won’t be viewable until launch
  • Check documentation feed content for accuracy

YouTrack Guide


YouTrack is Intel® Developer Zone's issue tracking tool. It has many features not mentioned below, but if you need any work done on the Intel® DZ website you must provide a ticket. Use the following guidelines to get started.

 

Access 

The first time you log in, your account will be created and you will be able to submit issues right away:

  • To log in, go to https://youtrack.or.intel.com
  • For your username, enter your Intel email address (not your IDSID).
  • For your password, enter your usual Intel domain password.

 

Basics 

Main YouTrack page: https://youtrack.or.intel.com

Logging in:

Please use your Intel email & network password to log into the system.
Once you've signed in, the system will take you to your Dashboard view.

You can select 'Assigned to Me' and click the arrow button to see a list of issues assigned to you.
YouTrack assigned to me

 

New Issue: Click the 'Create Issue' button; see the Filling out a Ticket section for details
YouTrack create issue

 

If you are currently looking at a ticket, you can also instantly choose a ticket relationship by selecting from the drop-down. This is for when you know the new ticket will be related to or a subtask of the ticket you are currently looking at.
YouTrack create issue drop-down

 

BKMs 

  • Funnel work coming in
    • Better to have a limited number of people so reviewing, questions, answers, etc. are consistent.
    • Also keeps from having multiple tickets for updating the same page
  • Prioritizing your ticket: We have a 3 business day minimum Service Level Agreement (SLA) that applies to minor updates, article or blog creation. Large projects (creating a set of pages or redesigning) and Developer projects will be based on an agreed-upon schedule. Due dates should only be selected for launch-related tickets. Guidelines:
    • Showstopper: Only for full site down, do not use otherwise
    • High Emergency: For partial site down or urgent legal errors
    • High: For legal errors or urgent updates.  Note that if we find a person using this level for a majority of their tickets we will presume that they do not plan ahead and we will require a valid business reason to retain this priority level.
    • Medium: For most tickets, whether they are minor updates or projects
    • Low: For 'nice to have' updates or features that are not launch related
  • Monitoring - very important!
    • The IDZ Web team will be sending tickets back and forth with you for review, approval to publish, or questions. It is imperative that you respond with answers in the ticket and assign it back to us; otherwise we may not see it for weeks. Emails get lost, buried, etc.
  • Create your own query to review daily
    • Set up your own YouTrack Dashboard and save it to your bookmarks, check it frequently.
    • https://youtrack.or.intel.com/dashboard
    • If you know you have a work request in, keep that window open in your browser.
    • This contains your queue and queries
    • Create your widget with your own query so you will always instantly see what is assigned to you.
      • Click the Add Widget button and select Issues
        YouTrack add widget
      • Type 'Assignee' into the field
      • Begin to type your email address
        YouTrack assignee query
      • Then type -closed -rejected (this will keep closed and rejected tickets from appearing here and cluttering up your list). Then click Create new saved search. (The complete assembled query is shown at the end of this section.)
        YouTrack full query
      • You can move this 'widget' around, resize it, etc. You can create as many of these as you want. Perhaps you'll want one that lists all the tickets you have created. Example query: created by: your-email-login@intel.com -closed -rejected
      • There are many other queries you can run and the system has many 'type ahead' suggested options to select from
      • It may be easier to create and test your query using the long text field so you can read all of it.
        YouTrack full query test
        • Select the Everything option on left so you can see anything assigned to you in any project
        • Enter the query and click the magnifying glass to see a full page with the list of results
        • If the results are what you want to see, copy/paste the query into the widget
    • Notifications: You should also receive notifications from IDZ Admin. The email contains a link to the ticket as well as the most recent comment made by the user.
      • If you are not receiving notifications go into your YouTrack profile to the Filters and Notifications tab and set up what you want to be notified on.

 

Filling out a Ticket 

YouTrack create project drop-down

  • Project: Various projects are set up to be assigned to the correct person on initial creation of a ticket in the project. Confirm that the project selected in the upper left corner is correct.
    • IDZ -WebOps: content updates including text, links, adding images, file uploads, content removal, redirects, etc. Essentially any updates that don't require a new or updated layout. This is also what to select for creating and/or editing articles, blogs, videos, books/documentation as they have defined, locked in templates. These will be assigned to ChristinaX M King Wojcik
    • IDZ - Design: New landing pages or updates to existing layout and graphics support. These often will generate the need for a Kickoff meeting with Design and WebOps team. These will be assigned to Tracy Johnson
    • IDZ - Dev Core: Issues with site functionality such as site down or a feature no longer working. These will be assigned to Jeffrey Wu
    • IDZ Business Intelligence: Reports, metrics. These should be assigned to Timothy C Manders
    • IDZ :: IoT Zone: Content related to IoT Zone pages, this can include WebOps or Design. These should be assigned to KayX Martin
    • YouTrack Administration: Issues with or features for YouTrack. These will be assigned to Matthew Groener
    • If you're not sure, select IDZ – Routing instead
    • Disregard other projects as they are handled internally
  • Summary: Write a few words to briefly describe the work, but don't be too generic; provide a page or product name.
    • Bad example: Please update page
    • Good example: Please update the Android is Awesome article
  • Description:
    • Page updates: ALWAYS include link to page being updated
      • Include details, be clear, where on page to update, if link – provide link, where to place on page, and link text
    • New Page: For articles, blogs, event entries, videos, etc. that are not in our Editorial team's tool, Gather Content, give a summary of the work to be done and why (i.e. need a new article or landing page for blah, blah, event). Do not provide all content for the page unless it's a simple title and a couple of paragraphs.
      • If a lot of content, provide copydeck (Word doc) to indicate desired layout as well as content, graphics, etc. Attach final content – copydeck, assets (graphics, PDFs, downloadable files, etc.) Graphics should be pre-sized to work on the page.
      • If multiple updates are provided in a Word doc, please set the Word doc up with Track Changes on so the changes can easily be seen and no need to rebuild entire page.
      • Provide metadata such as Software Product, Subject Matter, OS, Technology, Topic, Zone, etc. Don't overdo it; 5 to 10 tags will do.  For Software Product, it is best to select only the most prominent one, and it must be an Intel SWP.
    • You can preview your description below the text box
      YouTrack preview
    • Note that this tool uses 'WikiText', a markup format different from HTML or Rich Text. There are many ways to format it. You can use the YouTrack sandbox to 'code' your formatting. This is available from the 'More' link at the upper right of the text box
      YouTrack sandbox
  • Attach file: You can attach a file in multiple ways:
    • drag and drop, or using the Attach button near the bottom of the content:
      YouTrack attach or drag
    • Using the paperclip icon on the bar above the ticket content (see #4 in the Icon Buttons section below for details)

 

Settings in the right column
YouTrack settings

  • Project - verify this is assigned to the correct Project
  • Priority - standard is Medium. See the SLA info under BKMs. Never use Showstopper unless the entire site is down. High Emergency is only for partial site down or legal issues that will affect Intel Corporation across the board. We expect you to plan ahead; do not create High priority tickets without a qualified business reason.
  • Type:
    • For an update or new content, select 'Tasks'
    • For a bug or error, select 'Bug'
    • For new content or a page rework, select 'Feature'
  • State: New (selected by default in the WebOps project)
  • Category: Articles, Landing pages, Videos, Webforms are most common
  • Working Group: WebOps (selected by default in the WebOps project)
  • Watchers:
    • Add users individually, they will receive email notifications every time the ticket is updated
  • Assignee: on ticket creation this is automated according to the Project selected (see the Filling out a Ticket section). When updating a ticket you will need to select the individual you are working with on the ticket

Click "Create Issue" button at bottom left of page

 

Icon Buttons 

These are the buttons above the Summary. As with most software tools, there are multiple ways of implementing settings. These buttons are another way of adding them.
YouTrack icon buttons

  1. Go to ticket listing
  2. Assign to
  3. Add Comment
  4. Attach File or Image. Note, you can also edit an image that you attach (adding lines, notes, etc.). When creating a new ticket, be patient when attaching; the system needs to upload the entire file before you can submit the ticket.
  5. Link to another ticket. Select the type of relationship; the most common relationships are Relates to, Parent of, or Subtask of. Then begin typing the ticket number, i.e. WEBOPS-### or DESIGN-###. The system will give a list of results to select from after you partially type it in.
  6. Add Tags. If this is for the DPD team, add the DPD tag. If for the IOT landing zone, use the IOT tag. You can also add custom tags for your team:
    YouTrack add tag
  7. Clone issue or go to Command dialog. Note, if cloning an issue it will be a duplicate of the existing issue, including all attachments and the same project

 

Managing and Commenting 

  • Monitor: If you know you have a ticket you created - monitor it. Watch for notifications that may have questions or A/Rs for you.  Check your queue daily if you know you have an open ticket.
    • If you receive a ticket in Testing-QA state, please advise whether all is okay to publish or if there are any problems with the content, and assign it back to the WebOps person working the ticket.
    • If you receive a ticket in Fixed-Verify state, please review the content.
      • If all is published and looks fine do one of the following:
        • For content that has localized (translated) versions on the site, send it back to WebOps advising that it is ready to go to our localization team (GLS)
        • For non-localized content, close the ticket
      • If all is published and you find an error, make a note of the problem in a comment and assign back to the WebOps working the ticket.
    • If you do not manage the ticket on your own, it may be forgotten and the work not completed, or, when it is found during one of our occasional queue reviews, we will need to make assumptions about its status and either close it or pass it on as needed.
  • Updating: Type in your note/comment in the bottom text box
    • Set status (to Assigned or Feedback)
    • Assign the ticket back to the person working it or whoever last commented to you
      • To help ensure that the target individual is notified, also type their name in your comment, starting with the '@' sign, i.e. @john.m.doe@intel.com. As you begin to type the email, their name should come up to select as long as they also have a YouTrack account. This does not automatically assign the ticket to that person.
      • Assigning works even if you have not added a note/comment
    • You will see a preview of your comment before you add it below the text box
    • Click the Add Comment Button.
      YouTrack add comment

 

Notifications 

To check your notification settings:

  1. Click on your Profile icon on the far right of the screen and select Profile from the drop-down:
    YouTrack profile
  2. Go to Filters and Notifications tab
    YouTrack notify tab
  3. There are a couple of tables that you can select from, but the most important is the lower table (the top table is for Tags).  The best selections to make are: on issue created | on issue updated | on issue resolved | on comment posted
  4. Select the items you want notifications for; this should include Assigned to me, Commented by me, and Reported by me.
    YouTrack notify assigned
    YouTrack notify commented
    YouTrack notify reported
  5. Go back to General Settings tab and click the Save button at the bottom of the page
    YouTrack profile save

 

Additional Help

In the Dashboard view you will see a tab labeled "Learn". From there you can search for videos and information on using the tool.

Bare-metal performance for Big Data workloads on Docker* Containers


BlueData® and Intel® have collaborated on an unprecedented benchmark of the performance of Big Data workloads. These workloads were benchmarked in a bare-metal environment versus a container-based environment that uses the BlueData EPIC™ software platform. The workloads for both test environments ran on apples-to-apples configurations on Intel® Xeon® processor-based architecture.

This in-depth study shows that performance ratios for container-based Apache Hadoop* workloads on BlueData EPIC are equal to — and in some cases, better than — bare-metal Hadoop. For example, benchmark tests showed that the BlueData EPIC platform demonstrated an average 2.33% performance gain over bare-metal, for a configuration with 50 Hadoop compute nodes and 10 terabytes (TB) of data.

These results show that you can take advantage of the BlueData benefits of agility, flexibility, and cost reduction while running Big Data workloads in Docker* containers, and still gain the performance of a bare-metal environment.

Updating Older Unity* Games for VR


Many developers have created great works and games using Unity* for PCs or mobile devices.  And now with VR capabilities integrated into Unity 3D, it's worth a look at how to update any of these older games for a good VR experience. This post will show developers how to convert existing Unity games to be compatible with the HTC Vive* VR hardware.

  • Target VR Category: Premium VR
  • Target VR Hardware: HTC Vive
  • Target Unity Version: Unity 5.0

This post will cover the following topics:

  • Redesigning your old game for a new VR experience
  • Tips for integrating the SteamVR* plugin
  • Starting to use the controller scripts
  • New C# to old JavaScript* solutions
  • Migrating 2D GUIs to 3D UIs

Designing Your New VR Experience

Your existing Unity games were not designed for VR, but more likely than not a good portion of the work to make them a good VR experience is already done, since Unity is a 3D environment made to simulate real-world graphics and physics. So the first thing you need to do before adding controller scripts is to think through your old game as a new VR experience. Here are some tips for designing the right VR experience for your old game.

Design Out Motion Sickness:  A standard Unity game or application can be an amazing experience when played, but it doesn't fool your senses. This is important to realize. If you had created an amazing high-speed racing game and then ported it to VR, it would throw a mess of bad brain signals at the user's reflexes. The issue is that as your brain sees first-person motion in VR, it sends signals to your legs and torso to counterbalance the motion it sees. While for some people this might be fun, for most it's just a bad experience, especially if the user is standing upright while this is happening and could fall over.  However, if they are in a seated experience, perceived motion is much less of an issue for the user and is safer. Evaluate the best experience for your game and consider recommending a seated or standing position, as well as adjusting the locomotion so users can move in the scene without having balance issues.

For standing users, one way to design out motion sickness is to remove motion entirely and add a teleport feature to move from space to space in the game.  With a teleport feature, you can confine the standing experience to a few feet in any direction, then allow the user to point their controller at a space in the game and teleport to that space.  For example, FPS shooters might be better limited to a few steps forward and back, using the teleport feature to go farther than that.

My game sample was originally designed as a first-person space driving experience. Thus, I’ve made the decision for the game to be a seated experience and not a room-scale experience. This will avoid the “jimmy” leg problem of mixed brain signals as a user tries to maintain balance on their legs, and allow the user to move around the scene naturally.  I also made the decision to have a lot of motion damping so any movement is eased for the user.  This is a space game, so that works out well; there are no hard turns or movements as you float in space. As a seated experience, you still get the feeling of motion, but it feels more controlled without the crazy signals of your brain trying to adjust and account for the motion. (To learn about the original source game I am porting to VR, read this post on using an Ultrabook™ to create a Unity game.) 
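
As a rough illustration of the damping described above, the sketch below eases a camera rig toward its anchor instead of snapping to it, which softens perceived motion for a seated experience. The class and field names are illustrative and not part of the original game.

using UnityEngine;

public class DampedFollow : MonoBehaviour
{
    public Transform target;      //The ship or anchor the camera rig follows
    public float smoothing = 2f;  //Lower values give heavier damping

    void LateUpdate()
    {
        //Ease position and rotation toward the target rather than snapping to it
        transform.position = Vector3.Lerp(transform.position, target.position, smoothing * Time.deltaTime);
        transform.rotation = Quaternion.Slerp(transform.rotation, target.rotation, smoothing * Time.deltaTime);
    }
}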

Design for First Person: Another issue in VR is that it is primarily a first-person experience. If your old game was not designed from a first-person perspective, you may want to consider how to make the third-person view an interesting first-person experience, or how to shift your game into a first-person perspective.

My game was a hybrid first-person game, but there was a camera follow-the-action feature.  I felt this was too passive or an indirect first-person experience.  Thus, I made the decision to anchor the person directly to the action rather than follow indirectly via a script.

Natural Interactive UI or Controls:  Interactive UI and controls are another issue, as there is no set camera position where typical onscreen menus might work. The position of the camera is wherever the user is looking. Thus, be sure the controls, menus, and interactivity you design are natural.  Anyone who developed a console game is in a good position, as most console controls are on a gamepad and not on screen.  If you designed a mobile game with touch controls, or complex keyboard or mousable screen controls, you'll need to rethink your control scheme.

For my game, I have four controls: turn left, turn right, forward momentum, and fire.  The mobile version of the game had on-screen touchable controls while the PC version used key commands.  For the VR version, I started to design it with tanklike controls: the Left Controller rotates the ship clockwise, the Right Controller rotates it counterclockwise, etc. This seemed to make sense until I tested it. I quickly found that my left thumb wanted to control both spin directions.  It turns out I have muscle memory from gamepad driving games that I could not avoid. Thus I redesigned, since HTC Vive controllers are held in hand with thumbs and fingers free, similar to gamepads. I made the Left Controller pad control the steering while the Right Trigger fuels the vehicle forward, and my right thumb fires lasers from roughly where the A or X buttons would be on a gamepad.

Leverage 360 Degrees:  A VR experience allows for a 360-degree view of the world, so the game would be a waste if I created a forward-only experience.  This was actually a problem with the regular version of the game. In that game I had designed a mini-HUD display of all the enemies, so you could see if an enemy was approaching from the side or back of the ship.  With VR, this is elegantly solved by just looking left or right. I am also going to more purposely design in hazards that will be out of the forward view, thus taking advantage of the full 360-degree VR experience.

The VR redesign decisions for the sample game are as follows:

  • Seated position to avoid motion sickness and twitchy leg syndrome
  • Disable follow-me camera and lock the camera to the vehicle forward position
  • Dampen rotation to avoid balance issues and ease the movement of the game
  • Map controls to the HTC Vive controllers similar to the layout of gamepad controls
  • Build in features to leverage views and action from a wider panoramic experience

Adding The SteamVR Plugin

This is either the simplest thing you'll do or potentially the biggest pain.  It all depends on how old your code is and to what degree it will allow the SteamVR plugin to work.  I'll explain the way it is supposed to work, as well as the way you may need to get it to work.

Clean Your Game for the Latest Unity Build: First, make a copy of your original game. Download the latest version of Unity.  Open the old game in that version. If prompted, update the APIs.  Play the game and review the console for any outdated APIs the game suggests you update.

Install the SteamVR Plugin: If the game works as expected with a modern version of Unity, you are ready to download and import the SteamVR Plugin. Go to the Unity Asset Store and search for the SteamVR Plugin. It is the only one with the familiar Steam logo. Click Import.

Select All and import all features, models, and scripts.  Everything will be neatly put in the SteamVR folder so you know which scripts and assets came from the plugin.

Test Example Scene: Before going to the next step, be sure your HTC Vive hardware is connected and working.  Now, in Unity, go to the Project Tab and open the SteamVR folder, select the Scenes Folder, and Select Example scene.  You’ll see a scene with a bunch of boxes and SteamVR screenshots.  Play the scene and put on the headset.  You should be able to see the scene in full VR and see the HTC Vive controllers.  If that is working, then it is safe to assume the SteamVR plugin is working.

Doesn’t Work – Try a New Project: If you can’t see the controllers or the SteamVR example doesn’t load in the headset, you’ll need to create a new blank project.  Import the SteamVR plugin and then try the example scene. If that doesn’t work, then something is wrong with your version of Unity or your Steam hardware.

New Project Works, but Not the Old:  If a brand new project works great BUT your original game doesn’t work with the SteamVR plugin, then you’ll have to again create a blank project, install the SteamVR plugin, test that it works, THEN move your game into this new project. To do this, first close the project. Copy EVERYTHING in the Assets folder of your cleaned Unity project. Then go to the Assets folder of your new project and paste. Now open your Unity project and test the VR.  It should work because it just works.  However, your game will be busted.  All the project settings for many of your assets are lost, including links from your project components to your game assets such as models, sound files, etc.  Go to each asset and each component and make sure your game objects and sound files are linked correctly. This is the painful part. I’ve done this 2 or 3 times. It’s not that bad; it's just busywork.

Replacing the Camera:  Before you do anything more, you should be able to put on the HTC Vive headset and run your game as you normally would, and be able to turn your head to move the camera and look around your game. But you will not have the HTC Vive controllers or what’s really needed for VR to work with your camera.  The biggest and most key step of this integration is next.  It’s so simple it’s kind of silly.  To get VR to work in your application, you simply need to replace the Main Camera in your scene with the Prefab camera in the SteamVR folder.  That’s it. However, you may have a ton of scripts or settings on your existing camera.  My suggestion is not to immediately replace the camera, but instead to drop the Prefab camera in as a child of your existing camera.

Next, in the Properties panel of your existing camera, turn off the camera and all of its components. Be sure the checkmark next to Camera in the Properties panel is unchecked.

Then, start to copy and paste the camera components and settings to either the Camera Head or Camera Eye child inside the Prefab. Move, copy and tweak the settings so that your skyboxes and things work as you expect them to with the HTC Vive headset on.

At this point, your old game is now VR-enabled. You should be able to play this scene, put on your VR headset, look around your game, and see the controllers in your hand. The rest is getting the controllers and scripts to work with your existing game logic.

Controller Scripts

If you are like me and you developed your game using JavaScript, it may be a bit more challenging, but either way it’s fairly straightforward to get the controller scripts working with your game. For those who have JavaScript in their game and need the controllers to work with those scripts, see the section below.  You’ll want to do the following to the Left and Right Controllers in your scene.  The controllers have been added as child assets of the Prefab camera you dropped under your main camera.  The Tracked Controller script is located in the Extra folder.  Drag it to the Controller (left) inside the Camera Prefab. Do the same for the Controller (right).

With this done, you should be able to see the Trigger Pressed and other items in the Inspector, and these will check on and off as you use them in your game.

The concept of using the controllers is similar to an event listener pattern.  To capture an event like a trigger pull or touchpad touch, you'll need to create a C# script.  The following script is a technique I found in a number of forums. In my example, I show how you can create the event handlers for pulling the trigger, letting go of the trigger, touching the controller pad, and lifting your finger off the controller pad.

C# Script For Triggering Events when Trigger Pulled or TouchPad touched:

using UnityEngine;
using System.Collections;

public class EasyController : MonoBehaviour {
    private SteamVR_TrackedController device;

    void Start () {
        //Get the Tracked Controller component attached to this controller object
        device = GetComponent<SteamVR_TrackedController>();

        //Subscribe to the controller events
        device.TriggerClicked += Trigger;
        device.TriggerUnclicked += UnTrigger;
        device.PadTouched += PadTouch;
        device.PadUntouched += PadLift;
    }

    void Trigger(object sender, ClickedEventArgs e)
    {
        //Place code here for pulling the trigger
    }

    void UnTrigger(object sender, ClickedEventArgs e)
    {
        //Place code here for lifting the trigger
    }

    void PadTouch(object sender, ClickedEventArgs e)
    {
        //Place code here for touching the pad
    }

    void PadLift(object sender, ClickedEventArgs e)
    {
        //Place code here for lifting off the pad
    }
}

Controller Tip:  As you develop your experience, you will find that you will not know which controller is the left vs. the right when in VR.  I suggest adding a 3D asset to one of the controllers, e.g., a light, a sphere, or anything that tells you which one is which. This is for development purposes. Later, in the UI section, I suggest how to provide icons and labels to help the user know how to use the controllers.

Additionally, you may want to use the touchpad for more than an on-or-off input.  In that case, this script will allow you to control your game based on the location of the tap on the touchpad. The following script will allow you to trigger activity depending on whether the left side or the right side of the touchpad is tapped.

C# Script For Using X or Y Values from VR Touchpad

using UnityEngine;
using System.Collections;
using Valve.VR;

public class myTouchpad : MonoBehaviour
{
    SteamVR_Controller.Device device;
    SteamVR_TrackedObject controller;
    Vector2 touchpad;     //x/y position of the finger on the pad (declaration assumed)
    bool touchstate;      //Whether the pad is being pressed (declaration assumed)

    void Start()
    {
        controller = gameObject.GetComponent<SteamVR_TrackedObject>();
    }

    void Update()
    {
        device = SteamVR_Controller.Input((int)controller.index);

        //If finger is on touchpad
        if (device.GetTouch(SteamVR_Controller.ButtonMask.Touchpad))
        {
            //Read the touchpad values
            touchpad = device.GetAxis(EVRButtonId.k_EButton_SteamVR_Touchpad);
            touchstate = device.GetPress(EVRButtonId.k_EButton_SteamVR_Touchpad);

            if (touchpad.x < 0)
            {
                //  Add code if left side of controller is touched
            }

            if (touchpad.x > 0)
            {
                //  Add code if right side of controller is touched
            }
        }
        else
        {
            // Add code if pad is not touched
        }
    }
}
 

 

C# and JavaScript Issues

SteamVR scripts are C#, and that may make it difficult or impossible to have the HTC Vive controllers interact with your existing JavaScript game logic. The following are simple techniques to get this going without conversion.  However, it is highly recommended that you eventually port JavaScript to C#.

Why They Don’t Talk to Each Other: There is a compilation order for scripts in Unity. While it is possible to get C# variables to JavaScript or the other way around, to do that you’d need to move those scripts into the Standard Assets folder.  That folder compiles first, thus making its variables available.  But since you need JavaScript to see SteamVR values, you’d need the SteamVR scripts to be in the Standard Assets folder to compile first.  And if you move them, you break the SteamVR Plugin.  Thus you can’t get the C# variables to pass to JavaScript.

BUT there is another way.

A Simple Workaround:  It dawned on me that both C# and JavaScript can GET and SET values on game objects. For example, both types of scripts can get and/or define the tag value on a game object in the scene. Game Object Tags themselves are variables that can pass between the scripts. For example, if the LaserCannon in the scene is initially tagged “notfired”, you can have a touchpad event set the LaserCannon.tag to “fired” in C#.  Your existing JavaScript can look for the value of that object's tag each frame. When the LaserCannon.tag = “fired” (written by the C# script), the JavaScript can pick that up and thus run a function that fires the laser cannon. This trick allows C# to pass events and values to JavaScript or back in the other direction. 

Using the previous C# sample above, let me show you how to share variables with JavaScript. The idea is that if I tap one side of the touchpad, the appropriate particle emitter has its tag value changed in C#.  The code for turning on the particle emitter, playing a sound, collision detection, and the associated points is all in an existing JavaScript attached to the left and right particle emitters. So the first thing I need to do is identify these particle emitters in C#. I first declare 'rthrust' and 'lthrust' as GameObjects. Then in the Start section, I define 'lthrust' as the left particle emitter in my scene, and 'rthrust' as the right.

C# Adding Game Object To Script

public class myTouchpad : MonoBehaviour
{
    public GameObject rthrust;
    public GameObject lthrust;

    SteamVR_Controller.Device device;
    SteamVR_TrackedObject controller;


    void Start()
    {
        controller = gameObject.GetComponent<SteamVR_TrackedObject>();
        rthrust = GameObject.Find("PartSysEngineRight");
        lthrust = GameObject.Find("PartSysEngineLeft");
    }

Next, inside the If statement that determines whether the left or right side of the trackpad is touched, I add code to change the tag names for 'lthrust' and 'rthrust'. (By the way: you might think this is backwards, but in space, to turn right, your left thruster must fire.)

C# Changing Tag Value of Object When Touching Pad 

if (touchpad.x < 0)
            {
                lthrust.tag = "nolthrust";
                rthrust.tag = "rthrust";
            }

 if (touchpad.x > 0)
            {
                rthrust.tag = "northrust";
                lthrust.tag = "lthrust";
            }

Finally, in the existing JavaScript attached to each particle emitter, I add an additional condition, || this.gameObject.tag=="rthrust", at the end of the If statement, which checks whether the tag equals the value set in the C# script.

JavaScript Executing Game Logic Based On Tag Updated From C#

if (Input.GetKey ("left") || this.gameObject.tag=="rthrust"){
         GetComponent.<ParticleSystem>().enableEmission= true;
         GetComponent.<AudioSource>().PlayOneShot(thrustersright);
    }

And there you have it: C# talking to JavaScript.  The same technique can be used the other way around. This is a simple workaround that gets the controllers working and lets the two languages and their scripts share the basic gameplay logic. Once you have things ironed out, I suggest taking the time to convert the JavaScript to C#.

C# Tips:  If you are new to C#, here are some tips to consider.  If you plan to add, multiply, or divide numbers that will leave decimals, you will need to declare those variables as floats.  Be sure that any number you add, subtract, multiply, or divide with them is also treated as a float.

  • Declarations look like this: public float myVar;
  • Calculations should look like this: myVar = (float)otherVar + 1f;

2D GUI to 3D UI

Depending on how long ago you built this Unity game, you may have been using GUI menus, buttons, or other on-screen controls and UI elements. My old game was designed to work on tablets and PCs, so it had both on-screen controls and GUIs.  I used GUIs for startup menus, title graphics, on-screen scores, and alerts.  All of them were disabled inside the headset, but viewable on my PC screen.  With some Google searches I found that you'll need to convert GUIs to UIs: UIs can sit in 3D space inside your game, making them viewable and interactive in VR, but GUIs cannot.

UI Tips for VR: To get UIs going, simply right-click in the Hierarchy panel, choose UI, and select Canvas.  By default, the Canvas is set to Screen Space, so all the dimension and location attributes are greyed out in the Inspector.  But in the Canvas component of the Inspector, you can change the Render Mode from Screen Space to World Space.

This will make it so the UI features can be placed and sized anywhere in the scene.  Once you have this Canvas you can add in specific UI items like buttons, images or text items into the canvas. 

To test, add a Text UI element as a child of the Canvas item (see left side of image below).  Then in the Inspector, type "Hello World" in the Text input field.  If you do not see these words in the scene, go to the Inspector for the Text element and, under Paragraph, change the Horizontal and Vertical Overflow to "Overflow". I found the default scaling to be way off and things to be oversized, and the overflow setting lets you see the words even if they are too big.  You may need to scale down either the Canvas or the text using the Scale fields in the Inspector.  Play with the scaling and the font to make sure your fonts render cleanly; if they look jagged, reduce the scaling and increase the font size.

Controller UIs:  A great use case for UI is to provide some instructions to your controllers.  Label which controller is left vs right, and put icons that explain what the controllers do and where. On the left is my startup scene with UI added to each controller to guide the user on which controllers belong to which hands. Then in the main game scene, the controllers have graphic UIs to explain how to operate the game.

In the end I found that updating this game wasn't terribly difficult, and it allowed me to think through the interaction to be a more immersive and intuitive experience than originally designed. From the exercise alone I now have a lot more knowledge on how to tackle an original VR game.  If you have thoughts or would like to connect with me on this experience, add a comment below or connect with me on Twitter @bobduffy.

Intel® MKL-DNN: Part 1 – Library Overview and Installation


Introduction

Deep learning is one of the hottest subjects in the field of computer science these days, fueled by the convergence of massive datasets, highly parallel processing power, and the drive to build increasingly intelligent devices. Deep learning is described by Wikipedia as a subset of machine learning (ML), consisting of algorithms that model high-level abstractions in data. As depicted in Figure 1, ML is itself a subset of artificial intelligence (AI), a broad field of study in the development of computer systems that attempt to emulate human intelligence.


Figure 1. Relationship of deep learning to AI.

Intel has been actively involved in the area of deep learning through the optimization of popular frameworks like Caffe* and Theano* to take full advantage of Intel® architecture (IA), the creation of high-level tools like the Intel® Deep Learning SDK for data scientists, and providing software libraries to the developer community like Intel® Data Analytics Acceleration Library (Intel® DAAL) and Intel® Math Kernel Library for Deep Neural Networks (Intel® MKL-DNN).

Intel MKL-DNN is an open source, performance-enhancing library for accelerating deep learning frameworks on IA. Software developers who are interested in the subject of deep learning may have heard of Intel MKL-DNN, but perhaps haven’t had the opportunity to explore it firsthand.

The Developer's Introduction to Intel MKL-DNN tutorial series examines Intel MKL-DNN from a developer’s perspective. Part 1 identifies informative resources and gives detailed instructions on how to install and build the library components. Part 2 of the tutorial series provides information on how to configure the Eclipse* integrated development environment to build the C++ code sample, and also includes a source code walkthrough.

Intel® MKL-DNN Overview

As depicted in Figure 2, Intel MKL-DNN is intended for accelerating deep learning frameworks on IA. It includes highly vectorized and threaded building blocks for implementing convolutional neural networks with C and C++ interfaces.


Figure 2.Deep learning framework on IA.

Intel MKL-DNN operates on these main object types: primitive, engine, and stream. These objects are defined in the library documentation as follows:

  • Primitive - any operation, including convolution, data format reorder, and memory. Primitives can have other primitives as inputs, but can have only memory primitives as outputs.
  • Engine - an execution device, for example, CPU. Every primitive is mapped to a specific engine.
  • Stream - an execution context; you submit primitives to a stream and wait for their completion. Primitives submitted to a stream may have different engines. Stream objects also track dependencies between the primitives.

A typical workflow is to create a set of primitives, push them to a stream for processing, and then wait for completion. Additional information on the programming model is provided in the Intel MKL-DNN documentation.
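As a rough skeleton of that workflow (assuming the mkldnn.hpp C++ API that Part 2 of this series walks through; the construction of the individual primitives is elided here), it looks like this:

#include "mkldnn.hpp"
#include <vector>

void run_net() {
    // Engine: the execution device (here, CPU index 0)
    auto cpu_engine = mkldnn::engine(mkldnn::engine::cpu, 0);

    // Primitives: operations such as memory, reorder, and convolution,
    // collected into a "net" (their construction is elided in this sketch)
    std::vector<mkldnn::primitive> net;
    /* ... create primitives bound to cpu_engine and push_back() them onto net ... */

    // Stream: the execution context; submit the primitives and wait for completion
    mkldnn::stream(mkldnn::stream::kind::eager).submit(net).wait();
}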

Resources

There are a number of informative resources available on the web that describe what Intel MKL-DNN is, what it is not, and what a developer can expect to achieve by integrating the library with his or her deep learning project.

GitHub Repository

Intel MKL-DNN is an open source library available to download for free on GitHub*, where it is described as a performance library for DL applications that includes the building blocks for implementing convolutional neural networks (CNN) with C and C++ interfaces.

An important thing to note on the GitHub site is that although the Intel MKL-DNN library includes functionality similar to Intel® Math Kernel Library (Intel® MKL) 2017, it is not API compatible. At the time of this writing the Intel MKL-DNN release is a technical preview, implementing the functionality required to accelerate image recognition topologies like AlexNet* and VGG*.

Intel Open Source Technology Center

The MKL-DNN|01.org project microsite is a member of the Intel Open Source Technology Center known as 01.org, a community supported by Intel engineers who participate in a variety of open source projects. Here you will find an overview of the Intel MKL-DNN project, information on how to get involved and contribute to its evolution, and an informative blog entitled Introducing the Intel® Math Kernel Library for Deep Neural Networks (Intel® MKL-DNN) by Kent Moffat.

Installing Intel MKL-DNN

This section elaborates on the installation information presented on the GitHub repository site by providing detailed, step-by-step instructions for installing and building the Intel MKL-DNN library components. The computer you use will require an Intel® processor supporting Intel® Advanced Vector Extensions 2 (Intel® AVX2). Specifically, Intel MKL-DNN is optimized for Intel® Xeon® processors and Intel® Xeon Phi™ processors.

GitHub indicates the software was validated on RedHat* Enterprise Linux* 7; however, the information presented in this tutorial was developed on a system running Ubuntu* 16.04.

Install Dependencies

Intel MKL-DNN has the following dependencies:

  • CMake* – a cross-platform tool used to build, test, and package software.
  • Doxygen* – a tool for generating documentation from annotated source code.

If these software tools are not already set up on your computer you can install them by typing the following:

sudo apt install cmake

sudo apt install doxygen

Download and Build the Source Code

Clone the Intel MKL-DNN library from the GitHub repository by opening a terminal and typing the following command:

git clone https://github.com/01org/mkl-dnn.git

Note: if Git* is not already set up on your computer you can install it by typing the following:

sudo apt install git

Once the installation has completed you will find a directory named mkl-dnn in the Home directory. Navigate to this directory by typing:

cd mkl-dnn

As explained on the GitHub repository site, Intel MKL-DNN uses the optimized general matrix to matrix multiplication (GEMM) function from Intel MKL. The library supporting this function is also included in the repository and can be downloaded by running the prepare_mkl.sh script located in the scripts directory:

cd scripts && ./prepare_mkl.sh && cd ..

This script creates a directory named external and then downloads and extracts the library files to a directory named mkl-dnn/external/mklml_lnx*.

The next command is executed from the mkl-dnn directory; it creates a subdirectory named build and then runs CMake and Make to generate the build system:

mkdir -p build && cd build && cmake .. && make

Validating the Build

To validate your build, execute the following command from the mkl-dnn/build directory:

make test

This step executes a series of unit tests to validate the build. All of these tests should indicate Passed, along with their processing times, as shown in Figure 3.


Figure 3.Test results.

Library Documentation

Documentation for Intel MKL-DNN is available online. This documentation can also be generated locally on your system by executing the following command from the mkl-dnn/build directory:

make doc

Finalize the Installation

Finalize the installation of Intel MKL-DNN by executing the following command from the mkl-dnn/build directory:

sudo make install

This command installs the libraries and other components required to develop Intel MKL-DNN enabled applications under the /usr/local directory:

Shared libraries (/usr/local/lib):

  • libiomp5.so
  • libmkldnn.so
  • libmklml_intel.so

Header files (/usr/local/include):

  • mkldnn.h
  • mkldnn.hpp
  • mkldnn_types.h

Documentation (/usr/local/share/doc/mkldnn):

  • Intel license and copyright notice
  • Various files that make up the HTML documentation (under /reference/html)

Building the Code Examples on the Command Line

The GitHub repository contains C and C++ code examples that demonstrate how to build a neural network topology block that consists of convolution, rectified linear unit, local response normalization, and pooling. The following section describes how to build these code examples from the command line in Linux. Part 2 of the tutorial series demonstrates how to configure the Eclipse integrated development environment for building and extending the C++ code example.

C++ Example Command-Line Build (G++)

To build the C++ example program (simple_net.cpp) included in the Intel MKL-DNN repository, first go to the examples directory:

cd ~/mkl-dnn/examples

Next, create a destination directory for the executable:

mkdir -p bin

Build the simple_net.cpp example by linking the shared Intel MKL-DNN library and specifying the output directory as follows:

g++ -std=c++11 simple_net.cpp -lmkldnn -o bin/simple_net_cpp


Figure 4.C++ command-line build using G++.

Go to the bin directory and run the executable:

cd bin

./simple_net_cpp

C Example Command-Line Build Using GCC

To build the C example application (simple_net.c) included in the Intel MKL-DNN repository, first go to the examples directory:

cd ~/mkl-dnn/examples

Next, create a destination directory for the executable:

mkdir -p bin

Build the simple_net.c example by linking the Intel MKL-DNN shared library and specifying the output directory as follows:

gcc -Wall -o bin/simple_net_c simple_net.c -lmkldnn


Figure 5.C command-line build using GCC.

Go to the bin directory and run the executable:

cd bin

./simple_net_c

Once completed, the C application will print either passed or failed to the terminal.

Next Steps

At this point you should have successfully installed the Intel MKL-DNN library, executed the unit tests, and built the example programs provided in the repository. In Part 2 of the Developer's Introduction to Intel MKL-DNN you’ll learn how to configure the Eclipse integrated development environment to build the C++ code sample along with a walkthrough of the code.


Intel® MKL-DNN: Part 2 – Sample Code Build and Walkthrough


Introduction

In Part 1 we introduced Intel® Math Kernel Library for Deep Neural Networks (Intel® MKL-DNN), an open source performance library for deep learning applications. Detailed steps were provided on how to install the library components on a computer with an Intel processor supporting Intel® Advanced Vector Extensions 2 (Intel® AVX2) and running the Ubuntu* operating system. Details on how to build the C and C++ code examples from the command line were also covered in Part 1.

In Part 2 we will explore how to configure an integrated development environment (IDE) to build the C++ code example, and provide a code walkthrough based on the AlexNet* deep learning topology. In this tutorial we’ll be working with the Eclipse Neon* IDE with the C/C++ Development Tools (CDT). (If your system does not already have Eclipse* installed you can follow the directions on the Ubuntu Handbook site, specifying the Oracle Java* 8 and Eclipse IDE for C/C++ Developers options.)

Building the C++ Example in Eclipse IDE

This section describes how to create a new project in Eclipse and import the Intel MKL-DNN C++ example code.

Create a new project in Eclipse:

  • Start Eclipse.
  • Click New in the upper left-hand corner of screen.
  • In the Select a wizard screen, select C++ Project and then click Next (Figure 1).


Figure 1.Create a new C++ project in Eclipse.

  • Enter simple_net for the project name. For the project type select Executable, Empty Project. For toolchain select Linux GCC. Click Next.
  • In the Select Configurations screen, click Advanced Settings.

Enable C++11 for the project:

  • In the Properties screen, expand the C/C++ Build option in the menu tree and then select Settings.
  • In the Tool Settings tab, select GCC C++ Compiler, and then Miscellaneous.
  • In the Other flags box, add -std=c++11 to the end of the existing string, separated by a space (Figure 2).


Figure 2. Enable C++11 for the project (1 of 2).

  • In the Properties screen, expand C/C++ General and then select Preprocessor Include Paths, Macros etc.
  • Select the Providers tab and then select the compiler you are using (for example, CDT GCC Built-in Compiler Settings).
  • Locate the field named Command to get compiler specs: and add -std=c++11. The command should look similar to this when finished:
    "${COMMAND} ${FLAGS} -E -P -v -dD "${INPUTS}" -std=c++11".
  • Click Apply and then OK (Figure 3).


Figure 3. Enable C++11 for the project (2 of 2).

Add library to linker settings:

  • In the Properties screen, expand the C/C++ Build option in the menu tree and then select Settings.
  • In the Tool Settings tab, select GCC C++ Linker, and then Libraries.
  • Under the Libraries (l) section click Add.
  • Enter mkldnn and then click OK (Figure 4).


Figure 4. Add library to linker settings.

Finish creating the project:

  • Click OK at the bottom of the Properties screen.
  • Click Finish at the bottom of the C++ Project screen.

Add the C++ source file (note: at this point the simple_net project should appear in Project Explorer):

  • Right-click the project name in Project Explorer and select New, Source Folder. Enter src for the folder name and then click Finish.
  • Right-click the src folder in Project Explorer and select Import…
  • In the Import screen, expand the General folder and then highlight File System. Click Next.
  • In the File System screen, click the Browse button next to the From directory field. Navigate to the location containing the Intel MKL-DNN example files, which in our case is /mkl-dnn/examples. Click OK at the bottom of the screen.
  • Back in the File System screen, check the simple_net.cpp box and then click Finish.

Build the Simple_Net project:

  • Right-click on the project name simple_net in Project Explorer.
  • Click on Build Project and verify no errors are encountered.

Simple_Net Code Example

Although it’s not a fully functional deep learning framework, Simple_Net provides the basics of how to build a neural network topology block that consists of convolution, rectified linear unit (ReLU), local response normalization (LRN), and pooling, all in an executable project. A brief step-by-step description of the Intel MKL-DNN C++ API is presented in the documentation; however, the Simple_Net code example provides a more complete walkthrough based on the AlexNet topology. Hence, we will begin by presenting a brief overview of the AlexNet architecture.

AlexNet Architecture

As described in the paper ImageNet Classification with Deep Convolutional Neural Networks, the AlexNet architecture contains an input image (L0) and eight learned layers (L1 through L8)—five convolutional and three fully-connected. This topology is depicted graphically in Figure 5.


Figure 5. AlexNet topology (credit: MIT*).

Table 1 provides additional details of the AlexNet architecture:

Layer / Type / Description

  • L0: Input image, 227 x 227 x 3
  • L1: Convolution, 55 x 55 x 96; 96 filters, size 11 × 11; stride 4; padding 0. Size = (N - F)/S + 1 = (227 - 11)/4 + 1 = 55
  • Max-pooling (after L1): 27 x 27 x 96; 96 filters, size 3 × 3; stride 2. Size = (N - F)/S + 1 = (55 - 3)/2 + 1 = 27
  • L2: Convolution, 27 x 27 x 256; 256 filters, size 5 x 5; stride 1; padding 2
  • Max-pooling (after L2): 13 x 13 x 256; 256 filters, size 3 × 3; stride 2. Size = (N - F)/S + 1 = (27 - 3)/2 + 1 = 13
  • L3: Convolution, 13 x 13 x 384; 384 filters, size 3 × 3; stride 1; padding 1
  • L4: Convolution, 13 x 13 x 384; 384 filters, size 3 × 3; stride 1; padding 1
  • L5: Convolution, 13 x 13 x 256; 256 filters, size 3 × 3; stride 1; padding 1
  • Max-pooling (after L5): 6 x 6 x 256; 256 filters, size 3 × 3; stride 2. Size = (N - F)/S + 1 = (13 - 3)/2 + 1 = 6
  • L6: Fully connected, 4096 neurons
  • L7: Fully connected, 4096 neurons
  • L8: Fully connected, 1000 neurons

Table 1. AlexNet layer descriptions.
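For reference, the footnoted size calculations above are instances of the general output-size formula for convolution and pooling layers (a standard relation, stated here for convenience), where N is the input size, F the filter size, S the stride, and P the padding (the footnoted rows all have P = 0):

    \text{output size} = \frac{N - F + 2P}{S} + 1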

A detailed description of convolutional neural networks and the AlexNet topology is beyond the scope of this tutorial, but the reader can find many useful resources online if more information is required.

Simple_Net Code Walkthrough

The source code presented below is essentially the same as the Simple_Net example contained in the repository, except it has been refactored to use the fully qualified Intel MKL-DNN types to enhance readability. This code implements the first layer (L1) of the topology.

  1. Add include directive for the library header file:
    	#include "mkldnn.hpp"
  2. Initialize the CPU engine as index 0:
    	auto cpu_engine = mkldnn::engine(mkldnn::engine::cpu, 0);
  3. Allocate data and create tensor structures:
    	const uint32_t batch = 256;
    	std::vector<float> net_src(batch * 3 * 227 * 227);
    	std::vector<float> net_dst(batch * 96 * 27 * 27);
    
    	/* AlexNet: conv
    	 * {batch, 3, 227, 227} (x) {96, 3, 11, 11} -> {batch, 96, 55, 55}
    	 * strides: {4, 4}
    	 */
    	mkldnn::memory::dims conv_src_tz = {batch, 3, 227, 227};
    	mkldnn::memory::dims conv_weights_tz = {96, 3, 11, 11};
    	mkldnn::memory::dims conv_bias_tz = {96};
    	mkldnn::memory::dims conv_dst_tz = {batch, 96, 55, 55};
    	mkldnn::memory::dims conv_strides = {4, 4};
    	auto conv_padding = {0, 0};
    
    	std::vector<float> conv_weights(std::accumulate(conv_weights_tz.begin(),
    		conv_weights_tz.end(), 1, std::multiplies<uint32_t>()));
    
    	std::vector<float> conv_bias(std::accumulate(conv_bias_tz.begin(),
    		conv_bias_tz.end(), 1, std::multiplies<uint32_t>()));
  4. Create memory for user data:
    	auto conv_user_src_memory = mkldnn::memory({{{conv_src_tz},
    		mkldnn::memory::data_type::f32,
    		mkldnn::memory::format::nchw}, cpu_engine}, net_src.data());
    
    	auto conv_user_weights_memory = mkldnn::memory({{{conv_weights_tz},
    		mkldnn::memory::data_type::f32, mkldnn::memory::format::oihw},
    		cpu_engine}, conv_weights.data());
    
    	auto conv_user_bias_memory = mkldnn::memory({{{conv_bias_tz},
    		mkldnn::memory::data_type::f32, mkldnn::memory::format::x}, cpu_engine},
    	    conv_bias.data());
    	
  5. Create memory descriptors for convolution data using the wildcard any for the convolution data format (this enables the convolution primitive to choose the data format that is most suitable for its input parameters—kernel sizes, strides, padding, and so on):
    	auto conv_src_md = mkldnn::memory::desc({conv_src_tz},
    		mkldnn::memory::data_type::f32,
    		mkldnn::memory::format::any);
    
    	auto conv_bias_md = mkldnn::memory::desc({conv_bias_tz},
    		mkldnn::memory::data_type::f32,
    		mkldnn::memory::format::any);
    
    	auto conv_weights_md = mkldnn::memory::desc({conv_weights_tz},
    		mkldnn::memory::data_type::f32, mkldnn::memory::format::any);
    
    	auto conv_dst_md = mkldnn::memory::desc({conv_dst_tz},
    		mkldnn::memory::data_type::f32,
    		mkldnn::memory::format::any);
    	
  6. Create a convolution descriptor by specifying the algorithm, propagation kind, shapes of input, weights, bias, output, and convolution strides, padding, and padding kind:
    	auto conv_desc = mkldnn::convolution_forward::desc(mkldnn::prop_kind::forward,
    		mkldnn::convolution_direct, conv_src_md, conv_weights_md, conv_bias_md,
    		conv_dst_md, conv_strides, conv_padding, conv_padding,
    		mkldnn::padding_kind::zero);
  7. Create a descriptor of the convolution primitive. Once created, this descriptor has specific formats instead of any wildcard formats specified in the convolution descriptor:
    	auto conv_prim_desc =
    		mkldnn::convolution_forward::primitive_desc(conv_desc, cpu_engine);
  8. Create a vector of primitives that represents the net:
    	std::vector<mkldnn::primitive> net;
  9. Create reorders between the user data and the formats expected by the convolution, if needed, and add them to net before the convolution:
    	auto conv_src_memory = conv_user_src_memory;
    	if (mkldnn::memory::primitive_desc(conv_prim_desc.src_primitive_desc()) !=
    	conv_user_src_memory.get_primitive_desc()) {
    
    		conv_src_memory = mkldnn::memory(conv_prim_desc.src_primitive_desc());
    
    		net.push_back(mkldnn::reorder(conv_user_src_memory, conv_src_memory));
    	}
    
    	auto conv_weights_memory = conv_user_weights_memory;
    	if (mkldnn::memory::primitive_desc(conv_prim_desc.weights_primitive_desc()) !=
    			conv_user_weights_memory.get_primitive_desc()) {
    
    		conv_weights_memory =
    			mkldnn::memory(conv_prim_desc.weights_primitive_desc());
    
    		net.push_back(mkldnn::reorder(conv_user_weights_memory,
    			conv_weights_memory));
    	}
    
    	auto conv_dst_memory = mkldnn::memory(conv_prim_desc.dst_primitive_desc());
    	
  10. Create convolution primitive and add it to net:
    	net.push_back(mkldnn::convolution_forward(conv_prim_desc, conv_src_memory,
    		conv_weights_memory, conv_user_bias_memory, conv_dst_memory));
  11. Create a ReLU primitive and add it to net:
    	/* AlexNet: relu
    	 * {batch, 96, 55, 55} -> {batch, 96, 55, 55}
    	 */
    	const double negative_slope = 1.0;
    	auto relu_dst_memory = mkldnn::memory(conv_prim_desc.dst_primitive_desc());
    
    	auto relu_desc = mkldnn::relu_forward::desc(mkldnn::prop_kind::forward,
    	conv_prim_desc.dst_primitive_desc().desc(), negative_slope);
    
    	auto relu_prim_desc = mkldnn::relu_forward::primitive_desc(relu_desc, cpu_engine);
    
    	net.push_back(mkldnn::relu_forward(relu_prim_desc, conv_dst_memory,
    	relu_dst_memory));
    	
  12. Create an AlexNet LRN primitive:
    	/* AlexNet: lrn
    	 * {batch, 96, 55, 55} -> {batch, 96, 55, 55}
    	 * local size: 5
    	 * alpha: 0.0001
    	 * beta: 0.75
    	 */
    	const uint32_t local_size = 5;
    	const double alpha = 0.0001;
    	const double beta = 0.75;
    
    	auto lrn_dst_memory = mkldnn::memory(relu_dst_memory.get_primitive_desc());
    
    	/* create lrn scratch memory from lrn src */
    	auto lrn_scratch_memory = mkldnn::memory(lrn_dst_memory.get_primitive_desc());
    
    	/* create lrn primitive and add it to net */
    	auto lrn_desc = mkldnn::lrn_forward::desc(mkldnn::prop_kind::forward,
    		mkldnn::lrn_across_channels,
    	conv_prim_desc.dst_primitive_desc().desc(), local_size,
    		alpha, beta);
    
    	auto lrn_prim_desc = mkldnn::lrn_forward::primitive_desc(lrn_desc, cpu_engine);
    
    	net.push_back(mkldnn::lrn_forward(lrn_prim_desc, relu_dst_memory,
    	lrn_scratch_memory, lrn_dst_memory));
    	
  13. Create an AlexNet pooling primitive:
    	/* AlexNet: pool
    	* {batch, 96, 55, 55} -> {batch, 96, 27, 27}
    	* kernel: {3, 3}
    	* strides: {2, 2}
    	*/
    	mkldnn::memory::dims pool_dst_tz = {batch, 96, 27, 27};
    	mkldnn::memory::dims pool_kernel = {3, 3};
    	mkldnn::memory::dims pool_strides = {2, 2};
    	auto pool_padding = {0, 0};
    
    	auto pool_user_dst_memory = mkldnn::memory({{{pool_dst_tz},
    		mkldnn::memory::data_type::f32,
    		mkldnn::memory::format::nchw}, cpu_engine}, net_dst.data());
    
    	auto pool_dst_md = mkldnn::memory::desc({pool_dst_tz},
    			mkldnn::memory::data_type::f32,
    		mkldnn::memory::format::any);
    
    	auto pool_desc = mkldnn::pooling_forward::desc(mkldnn::prop_kind::forward,
    		mkldnn::pooling_max, lrn_dst_memory.get_primitive_desc().desc(),
    		pool_dst_md, pool_strides, pool_kernel, pool_padding, pool_padding,
    		mkldnn::padding_kind::zero);
    
    	auto pool_pd = mkldnn::pooling_forward::primitive_desc(pool_desc, cpu_engine);
    	auto pool_dst_memory = pool_user_dst_memory;
    
    	if (mkldnn::memory::primitive_desc(pool_pd.dst_primitive_desc()) !=
    			pool_user_dst_memory.get_primitive_desc()) {
    		pool_dst_memory = mkldnn::memory(pool_pd.dst_primitive_desc());
    	}
    	
  14. Create pooling indices memory from pooling dst:
    	auto pool_indices_memory =
    		mkldnn::memory(pool_dst_memory.get_primitive_desc());
  15. Create pooling primitive and add it to net:
    	net.push_back(mkldnn::pooling_forward(pool_pd, lrn_dst_memory,
    		pool_indices_memory, pool_dst_memory));
  16. Create a reorder between internal and user data if needed, and add it to net after pooling:
    	if (pool_dst_memory != pool_user_dst_memory) {
        	net.push_back(mkldnn::reorder(pool_dst_memory, pool_user_dst_memory));
    	}
  17. Create a stream, submit all the primitives, and wait for completion:
    	mkldnn::stream(mkldnn::stream::kind::eager).submit(net).wait();
  18. The code described above is contained in the simple_net() function, which is called in main with exception handling:
    	int main(int argc, char **argv) {
    	    try {
    	        simple_net();
    	    }
    	    catch(mkldnn::error& e) {
    	        std::cerr << "status: "<< e.status << std::endl;
    	        std::cerr << "message: "<< e.message << std::endl;
    	    }
    	    return 0;
    	}

Conclusion

Part 1 of this tutorial series identified several resources for learning about the technical preview of Intel MKL-DNN. Detailed instructions on how to install and build the library components were also provided. In this paper (Part 2 of the tutorial series), information on how to configure the Eclipse integrated development environment to build the C++ code sample was provided, along with a code walkthrough based on the AlexNet deep learning topology. Stay tuned as Intel MKL-DNN approaches production release.

GENOMICSDB SOLUTION WHITE PAPER

By Hao Li, Danny Zhang, Carl Li, Hui Lv, Jianlei Gu, Welles Du, Haitong Wang
 

Abstract

In genomics and life science research, data volumes generated by whole-genome analysis and life science algorithms continue to grow into the terabyte, petabyte, and even exabyte range. The key problem is how to store and analyze this data in an optimized way. This paper demonstrates how Intel big data technology and architecture help facilitate and accelerate genomics life science research in data storage and utilization. Intel provides the high-performance GenomicsDB for variant call data queries. On top of this technology, Intel defines a genomics knowledge share and exchange architecture, which has been deployed and validated at Shanghai Children's Hospital and Shanghai Jiao Tong University with very positive feedback. This big data technology can be scaled to many more genomics life science partners worldwide.

Keywords

GenomicsDB; TileDB; Big Data; Life Science

 

1. INTRODUCTION

In genomics life science research, data volumes keep growing into the terabyte, petabyte, and even exabyte range. For example, genomic sequencing generates more than 1 TB of data per patient; during 2015, 1.65 million new patients in the US generated more than 4 EB of data. The CNGB (China National Gene Bank) currently has 500 PB of storage deployed, and that volume is estimated to grow by 5-10 PB per year. The SCH (Shanghai Children's Hospital) and SJTU (Shanghai Jiao Tong University) Super Computing Center have hundreds of nodes with a total of 30 PB of storage deployed. The key priorities are how to store and analyze the data in an optimized way and how to exchange and share it between institutions.

Intel defines and deploys a scalable genomics knowledge share and exchange architecture, which has been deployed and validated at Shanghai Children's Hospital and Shanghai Jiao Tong University with very positive feedback. The solution architecture provides a customized GenomicsDB engine that searches genomics variant call data by position with very high speed. Because genomic positions are discrete, GenomicsDB also optimizes the sparse array storage by saving only the "useful" data. Intel also defines the genomics knowledge data sharing process and architecture so that genomics knowledge can be consolidated and utilized more efficiently.

2. INTEL BIG DATA ARCHITECTURE FOR LIFE SCIENCE

In real-world scenarios, genomics research must operate on big data. This solution provides a big data architecture (Figure 1) that is separated into two layers. One is the application framework, which serves as the interface to end users and supports the Genomics Knowledge App, the GenomicsDB UI, and the Genomics Work Flow. The other is the core framework and Linux kernel, which provides the services that support requests from the application framework, including big-data-scale components such as the TileDB and GenomicsDB engines for large genomics variant data.

Figure 1. Big Data Architecture for Life Science

2.1. GENOMICS KNOWLEDGE SHARE MODEL

Figure 2. Genomics Knowledge Share Model

Figure 3. Genomics Knowledge Share Architecture

Figures 2 and 3 show the key usage model and architecture of genomics knowledge sharing. A Central Function is defined as the data share agent; its purpose is secure and consolidated data sharing and exchange. In practice, genomics data involves patient privacy, so hospitals cannot share much of it publicly. In this solution, hospitals keep the raw data in their local private data centers, while the Central Function provides a statistical, summarized genomics knowledge database that can be shared for queries. For example, hospitals A and B store raw genomics data processed by the genomics work flow and create private GenomicsDB instances in their own data centers. At the same time, hospitals A and B can contribute statistical and representative variant call data to the Central Function, where the data management system creates a consolidated GenomicsDB knowledge center for sharing. End users can then query the consolidated GenomicsDB knowledge from both hospital A and hospital B. What is shared is not raw genomics data but statistical data and GenomicsDB knowledge. Besides the hospitals' private data centers, the Central Function can also be deployed on a public cloud with secured access control for data sharing.

2.2. Visualized Genomics Work Flow

Figure 4. Visualized Genomics Work Flow

Figure 4 shows an example of a visualized genomics work flow, such as converting data from FASTQ to BAM and then creating the final VCF (variant call format) data. In life science, some bio researchers do not have a strong IT background, so the visual UI helps them considerably when customizing genomics data analysis and conversion. This raw data should be kept in a secure environment, such as a hospital's private data center, and is used to create the GenomicsDB knowledge.

2.3. GenomicsDB and TileDB

Figure 5. TileDB and GenomicsDB

Figure 5 shows the TileDB working model. TileDB is a system for efficiently storing, querying, and accessing sparse array data; it is optimized for sparse data and supports high-performance linear algebra. For example, when storing data and querying cells, TileDB skips the empty cells, saving a large amount of storage and query time. GenomicsDB is an instance of TileDB that stores variant data in a 2D TileDB array, where each row corresponds to a sample in a VCF and each column corresponds to a genomic position. Figure 5 also shows an example of discrete genomic position data.
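As a purely conceptual sketch (this is not the GenomicsDB or TileDB API; the container, positions, and variant strings below are illustrative assumptions), the layout can be pictured as a sparse 2D structure keyed by (sample row, genomic position column), where empty cells are simply never stored:

#include <cstdint>
#include <iostream>
#include <map>
#include <string>
#include <utility>

int main() {
    // Sparse "2D array": key = (sample row, genomic position column), value = variant record.
    // Only non-empty cells are stored, so empty positions cost nothing to store or scan.
    std::map<std::pair<int, int64_t>, std::string> variants;
    variants[{0, 17384}] = "A>G";   // sample 0, position 17384 (illustrative values)
    variants[{1, 17384}] = "A>T";
    variants[{0, 99102}] = "C>G";

    // Query a position range across all samples by visiting only the stored cells.
    const int64_t begin_pos = 10000, end_pos = 50000;
    for (const auto& cell : variants) {
        if (cell.first.second >= begin_pos && cell.first.second < end_pos)
            std::cout << "sample " << cell.first.first
                      << " pos " << cell.first.second
                      << " variant " << cell.second << "\n";
    }
    return 0;
}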

Figure 6. GenomicsDB Performance Report

Figure 6 shows a real GenomicsDB testing report from Shanghai Children's Hospital using an 11 GB sample VCF. Queries respond within seconds and show better performance when run in parallel and when scaled to ranges of millions of variant data columns.

Figure 7. GenomicsDB UI

Figure 8 GenomicsDB UI Testing Report

Since some bio researchers do not have much IT background, this solution provides a friendly web UI to support end-user queries. Figure 7 shows the UI deployed at Shanghai Children's Hospital, and Figure 8 shows the GenomicsDB UI testing report. Although UI parsing, rendering, and annotation processing add time, the system still responds within seconds and shows good performance when scaling to larger data with larger column ranges.

2.4. Customized Genomics Annotation

Figure 9. Genomics Annotation

Figure 9 shows genomics annotations that can be customized together with GenomicsDB results. Besides the INFO and FORMAT data from the VCF content, additional annotations related to variant call data are very useful to bio researchers. With its scalable design and interface, GenomicsDB can seamlessly integrate more annotations into the standard VCF INFO column, contributing more valuable information to genomics research.

3. Conclusions

As the big data era arrives for the genomics life science industry, traditional approaches can no longer keep up with the growth of data. Using the Intel big data architecture with GenomicsDB, genomics data can be stored and utilized in an optimized way. The Central Function with genomics knowledge sharing can be scaled into a future commercial standard, facilitating life science research and accelerating genomics precision medicine.

Acknowledgements

We thank Carl Li, Hao Li, Guangjun Yu, Hui Lv, Jianlei Gu, Hong Sun, Ketan Paranjape, Paolo Narvaez, Karthik Gururaj, Kushal Datta, Danny Zhang, Julia Liang, Chang Yu, Ying Liu, Jian Li, Hua Ding, Hong Zhu, and Ansheng Yang for their great help and support during solution design and development.


Getting Started with Parallel STL

Parallel STL Release Notes



Find the latest Release Notes for Parallel STL

This page provides the current Release Notes for the Parallel STL for Linux*, Windows* and OS X* products. All files are in TXT format.

Tencent Ultra-Cold Storage System Optimization with Intel® ISA-L – A Case Study


Download PDF  [823KB]

In this era of data explosion, the cumulative amount of obsolete data is becoming extremely large. For storage cost reasons, many independent Internet service providers are developing their own cold storage systems. This paper discusses one such collaboration between Tencent and Intel to optimize the ultra-cold storage project in the Tencent File System* (TFS). The XOR functions in Intel® Intelligent Storage Acceleration Library (Intel® ISA-L) successfully helped TFS meet its performance requirements.

Introduction to Tencent and TFS

Tencent is one of the largest Internet companies in the world, whose services include social networks, web portals, e-commerce, and multiplayer online games. Its offerings in China include the well-known instant messenger Tencent QQ*, one of the largest web portals, QQ.com, and the mobile chat service WeChat. These offerings have helped bolster Tencent's continuous expansion.

Behind these offerings, TFS serves as the core of the file services that many of these businesses rely on. With hundreds of millions of users, TFS is facing performance and capacity challenges. Since the Tencent data center is mainly based on Intel® architecture, Tencent has been working with Intel to optimize TFS's performance.

Challenge of ultra-cold storage project in TFS

Unlike for Online Systems, procurement of processors for TFS's ultra-cold storage project is not a budget priority, so existing processors are recycled from outdated systems. This approach does not yield powerful compute capability, and calculation performance is easily the biggest bottleneck in the system.

Previously, in order to save disk capacity and maintain high reliability, the project adopted the erasure code 9+3 solution (see Figure 1).

Figure 1: Original Erasure Code 9+3 solution.

Tencent has reconsidered erasure coding for several reasons:

  • Much of the data stored in this ultra-cold storage system are outdated pictures. Occasional data corruptions are acceptable.
  • Redundancy rate of erasure code 9+3 may be too much of a luxury for this kind of data.
  • Even optimized with Intel ISA-L erasure code, it is still a heavy workload for these outdated, low-performance servers assigned to ultra-cold storage system.

In order to reduce the redundancy rate and relieve the performance bottleneck, a solution that uses XOR operations on 10 stripes to generate 2 parities was adopted (see Figure 2). The first parity is generated by horizontal processing, and the second by vertical processing.

Figure 2: New XOR 10+2 solution

This new solution still had one obvious hotspot: the XOR operation limits system performance. Despite simplifying the data protection algorithm, this cost-optimized solution couldn’t meet the performance requirements that Tencent Online Systems needed.

Tencent was seeking an effective and convenient way to reduce the calculation effort of the XOR operation. It needed an efficient and optimized version of XOR to alleviate the performance bottleneck and meet the design requirements for the ultra-cold storage solution.

About Intel® Intelligent Storage Acceleration Library

Intel ISA-L is a collection of optimized, low-level functions used primarily in storage applications. The general library for Intel ISA-L contains an expanded set of functions used for erasure code, data protection and integrity, compression, hashing, and encryption. It is written primarily in hand-coded ASM but with bindings for the C/C++ programming languages. Intel ISA-L contains highly optimized algorithms behind an API, automatically choosing an appropriate binary implementation for the detected processor architecture, allowing ISA-L to run on past, current, and next-generation CPUs without interface changes.

The library includes an XOR generation function, gen_xor_avx, as part of the Intel ISA-L data-protection functions. Intel ISA-L is highly performance optimized by Intel’s Single Instruction Multiple Data instructions.
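To illustrate what an XOR parity generation routine computes (a conceptual sketch only; it does not use the Intel ISA-L API or its actual function signatures), the following XORs 10 source blocks byte by byte into a single parity block, which is exactly the kind of loop the library's SIMD-optimized functions accelerate:

#include <cstdint>
#include <vector>

// Conceptual XOR parity: parity[i] = src0[i] ^ src1[i] ^ ... ^ src9[i].
// Intel ISA-L provides vectorized, optimized versions of this operation.
std::vector<uint8_t> xor_parity(const std::vector<std::vector<uint8_t>>& sources) {
    std::vector<uint8_t> parity(sources.front().size(), 0);
    for (const auto& block : sources)
        for (size_t i = 0; i < parity.size(); ++i)
            parity[i] ^= block[i];
    return parity;
}

int main() {
    // 10 data stripes of 4 KB each, as in the TFS 10+2 layout described above.
    std::vector<std::vector<uint8_t>> stripes(10, std::vector<uint8_t>(4096, 0));
    stripes[0][0] = 0x5a;                            // sample data
    std::vector<uint8_t> p1 = xor_parity(stripes);   // "horizontal" parity
    // The second ("vertical") parity in the 10+2 scheme is computed the same way
    // over a different grouping of the data.
    return p1.empty() ? 1 : 0;
}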

Collaboration between Tencent and Intel

Tencent and Intel have worked together using Intel ISA-L to optimize ultra-cold storage project in TFS.

The XOR function used in the ultra-cold storage project was originally coded in C Language and in Galois code format, named galois_xor. The first optimization proposal was to replace galois_xor with Intel ISA-L gen_xor_avx directly. The test results from this single change showed a ~50-percent performance gain.

After analyzing the parity generation method of the ultra-cold storage system, we suggested using gen_xor_avx in pointer-array form. This second optimization proposal improved coding efficiency further by avoiding unnecessary memory operations.

Results

The performance optimization scheme, based on the Intel ISA-L XOR function, helped solve the practical problems encountered in building an ultra-cold storage system. Tencent's test results showed throughput rising to 250 percent of the previous method's (from 800 MB/s to 2 GB/s), as summarized below.

  • Galois xor: 800 MB/s
  • Intel ISA-L gen_xor_avx (non-array form): 1.2 GB/s
  • Intel ISA-L gen_xor_avx (array form): 2 GB/s

This distinct performance gain successfully met the requirements from Online Systems. Even better, since Intel ISA-L is open-source (BSD-licensed) code, there was no cost to the Tencent team for the huge improvement in system performance.

Acknowledgement

As a result of this successful collaboration with Intel, Sands Zhou, principal of the Tencent ultra-cold storage system, said: “TFS ultra-cold storage project, based on entire cabinet program, CPU became a performance bottleneck. In the meantime, the project got strong supports from Intel based on ISA-L XOR program. Thanks again, wish more collaborations with Intel in the following work.”

How to Create a Support Request at Online Service Center


If you have purchased an Intel® Software Development Tools product, please go to Intel® Registration Center to register your product and obtain a support account.  You will receive priority support at the Online Service Center web site for the duration of your support period.

Requesting Support

When you need to create a support ticket go to http://www.intel.com/supporttickets.

Online Service Center

Click Request Support.

Requesting Support
Step 1: Selection

What Product Do You Need Support For

1A. Choose: I need help with “A product service I already own or use”

1B. Choose: How would you like to find your product or service? “Search for a product or service by name”

1C. In the search box, type in a portion of the name or component product name for which you want support:

Find Your Product or Service

Choose the most appropriate product for your support request.

Request Support
Step 2: Request

Request Support Request Step

Answer the first two questions.  If you answer, “yes” to either question, please do not attach any files to the support request.

In the “What steps have you taken to troubleshoot this issue?” dialog box, please describe your issue or question.  All support is provided in English.

Request Support
Step 3: Details

Request Support Details Step

Specific questions will be asked based on the product selected.  Not all fields are required.

You can attach files up to 25 MB.  If you need to provide a larger size file, please let the support agent know that in the description.

Customer Improvement Program Question

You are not required to enter an answer for the Customer improvement program.  This question is regarding the support tool not the Intel® Software Development Tools.

Please read the privacy policy and click Submit Request.

Request Support
Step 4: Confirmation

Request Support Confirmation

The support request number is displayed.  You will also receive an email with the support request number.

Upon submission, the case is routed to a specific Product Queue for an Agent to pick up.

Online Service Center Support History

You can return to Online Service Center to check the status of your open support requests and see updates from Intel.  You can also respond to the email notifications you receive to provide additional information on your support request.

Code Sample Guide


Getting Started

The goal isn’t to recommend one way of coding as being “the right way,” because there are so many different technologies, practices, styles, tools, frameworks, and libraries to use, and different people have different favorites. Instead, we break it down to the core, a single minimalist way that is predictable, transparent, and simple enough to not get in the way of learning what each demo is trying to teach.

The biggest advantage to adhering to these guidelines is that it will foster consistency across our samples and demos on Intel® DZ, which increases readability and comprehension overall.

 

What is a Code Sample?

Developers of all types have no hard and fast rules when it comes to defining code samples. They are looking to solve a problem and that usually involves code.

The many faces of “code” on Intel® DZ:

  • Code Samples
  • Code Snippets
  • Projects
  • Product Code Samples
  • Code Examples
  • Code Files
  • Code Packages

In an effort to create a consistent feel for code across the community, we will simplify down to three basic types: Code Snippet, Code Sample, or a project/tutorial/how-to.

 

Code Snippet

Code Snippet – A small amount of code designed to illustrate a specific purpose. For example, if you have a problem with your code and are asking for help, no one wants to see your whole application; they want you to show them a code snippet they can use to reproduce the problem. Code snippets are supporting characters to your story.

Code Snippets in an Article require:

  1. description of what the code is for (prior to seeing it)
  2. Copy/paste friendly code (no matter how small)
  3. Link to full documentation/additional resources

 

Code Sample

Code Sample – Much like a code snippet, samples follow the same guidelines. Just enough to illustrate a topic or problem. The main difference is a code sample is the lead story. It drives the purpose of your content from start to finish.

"Simple" Code Samples Requires:

  • Clear Header
  • Version #
  • Intro
  • Assumptions, Software and Hardware Requirements
  • Skill Level
  • Alternative options for easier/harder
  • Copy/Paste friendly code (no images!)
  • Clearly labeled headers for easy scanning
  • End with a CTA (something else to do)

"Download Only" Code Samples Require:

  • Clear Header
  • Version #
  • Assumptions, Software and Hardware Requirements
  • Skill Level
  • Alternative options for easier/harder
  • Details about download 
  • End with a CTA (something else to do)

Complex Code Samples Require:

A linear content type that may have code samples, code snippets or just procedural items. Tutorials have a purpose or problem to solve and go through that solution one item at a time. They should be written in a very straight forward, plain english, and easy to translate way.

  • Clear Header
  • Version #
  • in-page navigation
  • Sample Details:
    • Assumptions, Software and Hardware Requirements
    • Skill Level
    • Tested on
    • Alternative options for easier/harder
    • License
  • Intro
  • Copy/Paste friendly code (no images!)
  • Clearly labeled headers for easy scanning
  • Use images/videos for visual/onscreen steps (vs. trying to just describe it)
  • Use headers, subheaders, bullets, etc. instead of one wall of copy
  • Include troubleshooting tips (don't make developers hunt for them)
  • End with a CTA (something else to do)

 

Writing Better Code Samples

When was the last time you sat down and on paper wrote out exactly how you would approach a code sample? It’s easy to jump in thinking you know exactly how it should be structured, only to realize halfway through that you need to go back and rewrite.

Here are some tips to getting better results and increase the popularity of your code sample.

 

The Best Topics are Specific

Samples and Tutorials that are popular and easy to follow are ones that are specific rather than overly general. Don’t offer the kitchen sink. Rather, focus on a single aspect of the subject at hand, which will be much easier to present in an effective way and will, as a bonus, be much more likely to gain SEO value.

 

Write for EVERY Audience

Always remember, we are not the developer. Keep a vision of the person you are writing for in your head and ask these questions…

  1. Do they understand all the concepts you’ve used in your tutorial?
    • If not, explain them or link to another tutorial that does.
    • Explain things conceptually, so the reader has a big picture of what he’s going to do. Then lay out instructions on how to use it step-by-step.
  2. How much time would they have to make sense of it?
    • Use short, simple, declarative sentences and commands rather than compound sentences. It’s easier for the reader to digest one instruction at a time.
    • Give directions in no uncertain terms
  3. Are some of your users beginners?
    • Don’t Say Things Like “It’s Easy” or “It’s Very Good”. If something is “easy” or “very good” your readers will decide based on the evidence you present. Not everyone is at the same level. If you’re writing an article about the command line, and you say stuff like “It’s easy, simply type git [whatever]”, you’re going to alienate many readers.
    • Whenever you use a technical term for the first time, define it. Have someone else check that you defined all new terms. There are probably one or two that you missed.
    • When forced to choose between technically simple and technically efficient, choose the former

Test what reading comprehension level your code sample is written at online. Your audience (no matter who) will appreciate that the language is simple and direct.  Their main goal is to learn about your code, not be impressed with your vocabulary.

 

Assumptions, Prerequisites, and Requirements

Don't make your users angry by not letting them know what they need; tell them up front.

  • Hardware list
  • Software/Plugin/API list
  • Programing Language Requirements
  • Specific Computer Setups

Provide links to these resources so they don't have to go hunting on Google.

Add tips like  “If you have no prior experience, you will find a very steep learning curve diving straight into _____”. It will save beginners from frustration and give you the opportunity to get them to the right content quickly.

In the case of API/SDK samples, if you are using multiple APIs together point out what APIs are included, and what features come from where.

 

Include a Brief But Effective Introduction

Too many authors write intros that basically amount to “filler” content. Here’s an example of a bad introduction:

Front-end developers have been trying many different frameworks in recent years. Bootstrap is a really popular framework, and it has many tools you can use today. We no longer need to scratch our heads wondering how to create a cross-browser drop-down menu, or a responsive grid, or a tab switcher. Bootstrap can do it all. But how accessible are its components? In this post, I’ll show you how to take Bootstrap to the next level by making your Bootstrap website more accessible.

Notice all the unnecessary fluff leading up to the main point of the article about accessibility? Instead, the following is better:

Bootstrap is the most widely-used framework in the world. But a common complaint I hear is that its markup is unsemantic and inaccessible. Let’s use two Bootstrap components as examples to see if we can remedy this.

This intro is clear, it presents exactly the problem you’re going to solve, and it gets right into the meat of the post, which is how to make a Bootstrap website (or component) accessible.

In brief, a good intro has two necessary ingredients:

  • Present a problem to solve
  • Tell us briefly how you will solve that problem

 

Headings Alone Should Develop the Theme

A reader should be able to read the title of your sample, then read each of the primary headings in order, and get a good higher-level understanding of the subject being discussed.

To do this, your headings should be clear in stating which part of the article is now being developed and they should not be general or vague (Step 1).

 

Solve a problem with your example

Most examples don’t do anything useful. Instead, try to create code that solves a real problem. Writing “hello world” code teaches people to write code, but not how to solve issues with it.

Good code examples should answer the “how does this technology help me solve a problem?” and not the “how do I use this?” question. Code examples should get people excited about using the product and entice them to dive into the documentation to find out about the details.

 

Write Tool/Technology Names in the Correct Format

Do the necessary research. Take five seconds to look at our Intel TM List and the Third-Party TM List.

By writing tool names properly, in the way the creators intended, we show respect for these tools, adhere to legal requirements and we help to represent their brands correctly in our writing.

 

 

Code Tips

Have Consistent and Easy-to-Read Code Blocks

When including code blocks in articles and tutorials, there are a few general rules:

  • Include working code that users can copy and paste
  • Users will put it into production. Make sure it works, but does not do anything that will cause an app to be insecure, grossly inefficient, or bloated.
  • Understood as quickly as possible
  • Consistent throughout your example
  • Descriptive
  • Explain your code before you show the example
  • Write smart code that explains itself, instead of “empty” code  with comments throughout it

 

No Walls of Code

If any code block is longer than 10 or 15 lines, it becomes much harder for the developer to digest each step. You can try one of the following:

  • Remove any unnecessary parts of the code.
  • If you’re talking about one specific feature, don’t include all the elements that are not being affected with that feature.
  • Split up the code and describe what’s happening step by step.
  • Make it downloadable.

 

Show a working example

There is nothing more powerful than letting readers see what the sample does by clicking a link or entering some data and sending off a form. Telling readers that it works is one thing; letting them try it out and see it for themselves is much more rewarding. For examples that aren’t easy to show online, video is a great tool for showing implementation and success.

Remember, users don’t care about code, but about getting projects done.

 

Link to the Latest Version of Your Documentation

If you are talking about, for example, Flexbox, you might search for the spec and find a link like this:

http://www.w3.org/TR/2014/WD-css-flexbox-1-20140325/

But that’s a dated snapshot of the spec that won’t be updated. What you want is the permanent link to the current version of the spec. So make sure the URL looks more like this:

http://dev.w3.org/csswg/css-flexbox/

 

Before you Publish - Checklist

We know you want to get this task completed, but before you hit that “publish” button, please take the time to go through this checklist to ensure your content has everything it needs to be a success.


Has someone who was not involved in creating the content tried to follow your instructions?


This alone will uncover a number of issues with your sample and show whether the directions are clear. If you can’t enlist support, here is a list of items to help with QA:

  1. Have you listed all the requirements/assumptions/resources needed?
  2. Do you have all the steps?
  3. Are the steps in the right order?
  4. Is the code easy to copy/paste?
  5. Has the code been tested on multiple system setups?
  6. Do all of your links work? (don’t assume)
  7. Did you use the product names correctly?
  8. Do you have any large paragraph blocks? (Avoid walls of copy and walls of code)
  9. Have you checked the reading level? (There are lots of online tools that can help ensure we aren’t overthinking our samples.)
  10. Have you checked for legal compliance?
 

Autonomous Vehicle and Remote Surveillance with the Intel® Edison Platform

$
0
0

 

Introduction

This article investigates using the Intel® Edison platform for small-scale autonomous vehicles and remote surveillance.

Materials

● Hardware

Here we use the Devastator Tank* Mobile Robot Platform from DFRobot*, a mobile robotics platform built around a Romeo* board powered by the Intel® Edison module.

The kit includes two servomotors, a UVC-compliant webcam, a Passive Infrared (PIR) sensor, and an Ultrasonic Range Finder.

● Software

The client device software is written in Node.js* using the Intel® XDK, and leverages the mraa libraries for peripheral access (motors, lights, etc).

Objectives

● Construct an Odometer

To assist in navigation and distance tracking, we will construct an odometer using a conductive switch called a Reed switch, coupled with a small neodymium magnet. 

● Push Images to the Edge Device

Using Secure Copy (SCP), we will push surveillance images from the robot to an edge device, and walk through how to use ImageMagick*, an open-source image-processing application, to scan for an object’s presence.

Recommended Reading

Before proceeding, it might be helpful to reference the following articles, which describe construction of the robot discussed here, the addition of a Wi-Fi enabled camera, and programming the tank using the Intel® XDK and mraa libraries.

 

Building a Robotic Platform Using the Intel® Edison Module

https://software.intel.com/en-us/articles/overview-of-intel-edison-based-robotics-platform

Using the Intel® Edison module to control robots

https://software.intel.com/en-us/articles/using-the-intel-edison-module-to-control-robots

Programming Robotics using the Intel® XDK, Node.js, and mraa library

https://software.intel.com/en-us/articles/programming-robotics-using-the-intel-xdk-nodejs-and-mraa-library

Software Design for Basic Autonomous System

Before we talk about hardware, let’s first go through the design approach for our autonomous robot.  Using the Intel® XDK, Node.js*, and the mraa libraries, we will construct an application that abstracts the lower-level complexities involved with hardware communication.  We will also review each of the robot’s sensors and how each might be modeled in software.  Sample code for each class and JavaScript* file can be found in the appendix at the end of this document.

Our goal is to design a robot that can independently navigate a course, offer obstacle avoidance, and provide remote surveillance by uploading images for offline analysis.

Using a collection of sensors, plus knowledge of the course dimensions, we can build a software solution that enables autonomous navigation while collecting and providing constant feedback on location and surrounding objects. Image capture can be triggered programmatically when a destination has been reached, and the images uploaded to a local gateway device for real-time or offline analysis.

We divide the application into separate JavaScript* files, each of which represents an object in our working solution.  There are files for:
1) Robot
2) Light
3) Camera
4) Object detector
5) Distance detector
6) Odometer
7) Navigation
8) CourseLeg

 

The entry point of the application is main.js, which is responsible for instantiating a robot and a course, responding to heartbeat timers and sensor events, and kicking off course execution.  A constants file provides a centralized configuration that can be referenced from any other module.  The heart of the autonomous capability lies in the Navigation.js module, which is responsible for all directional motor control, course tracking, and exposing the course and robot status through properties and triggered events. The relationship of the classes can be seen in the class diagram below.

PROGRAM EXECUTION FLOW

The application is designed to execute asynchronously, responding to both timer-driven and event-driven triggers that come from the robot.  main.js contains one main timer, called heartbeat, that is started when the application launches; the default callback simply displays robot status.  The Navigation class also has one timer, which executes much faster and is responsible for tracking robot movement through the course as well as issuing navigation commands.  This timer is stopped when a course ends.  Wheel rotation events are also handled in the Navigation class: the Odometer class fires an external event, 'WheelRevolutionEvent', on every edge trigger detected on the configured pin.

Summary of configurable timer objects:

Main.js
    timerHeartBeat

Navigation.js
    timerPolling

Callbacks:

Navigation.js
    odometer.SensorEvent.on('WheelRevolutionEvent')
    _OnEndOfLegEvent()

MAIN.JS

At the heart of the application, the main.js script facilitates:

  • Setting up the Robot, Distance, Presence, and main Navigation object.
  • Listening for heartbeat checks for overall health
  • Executing response listeners for Object and Distance detection
  • Communicating directly with the Navigation object for inquiry and control

While most of the application logic lives in the Navigation class, main.js is still responsible for the highest-level health status checks using a timer-based polling mechanism. It also handles events triggered by the distance detection and object presence sensors. Finally, main.js is responsible for enabling/disabling the on-board camera and transmitting images to a remote server.

Constants.js

The constants file provides one place for all configuration data.  Any piece of data that can be changed should be placed in this file. Throughout the code, the data held within these constants can then easily be referenced by first 1.) Adding a require statement to include the constants.js file, and 2.) Referencing the variable correctly. 
 

Example

1.) var constants = require("./constants.js");
2.) var objectDetector = new ObjectDetector(constants.GPIO_PIR_SENSOR, false);

Robot.js

The robot object is designed to encapsulate properties and operations relevant to the robot itself, without consideration of any external sensors or functions.  The constructor accepts parameters for the most important fields, such as I2C details and physical characteristics.  The lights are added by way of composition and can easily be toggled using an external interface.  Finally, the servomotors are created as individual instances, with public methods exposed for directional control of the camera and distance finder modules.

 

The Light object is very straightforward.  It indicates which GPIO pin the LED is attached to, and provides an external interface for toggling the light on or off.  The lights are part of the Robot object composition.
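
For example, main.js toggles both indicator LEDs through the Robot’s public interface. A minimal usage sketch (thisBot and constants are the objects created in main.js, shown in the appendix):

//Turn both indicator LEDs on, then back off, using the Robot's Light composition
thisBot.ToggleLight(constants.LIGHT_LEFT, true);
thisBot.ToggleLight(constants.LIGHT_RIGHT, true);
thisBot.TurnLightsOff();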

 

 

 

 

ObjectDetector.js

The ObjectDetector class subscribes to changes in the sensor and broadcasts them, using an event emitter, to any interested clients.  It also exposes methods for toggling listening, resetting the trigger, and checking the current value.  Finally, a SensorEvent is triggered each time an object is detected.

 

Listening and reacting to the PIR events is accomplished by setting up a subscription and listener in the main.js file.  Alternatively, or in addition, the value of the sensor can be checked on demand using a polling mechanism.  The code example below shows both approaches, both of which are housed in our main.js script.

(From main.js)
 

var ObjectDetector = require("./ObjectDetector.js");
var objectDetector = new ObjectDetector(constants.GPIO_PIR_SENSOR, false);

//Start Listening
objectDetector.StartListening();
//Check using Polling
if (objectDetector.Listening){
               objectDetector.CheckForObject();
}
//Event Listener – Only called when status changes on the device
objectDetector.SensorEvent.on('ObjectDetected', function _onPIRSensorTriggered() {
    //PIR sensor triggered
    if (constants.DEBUG)
        console.info("[OBJECT DETECTION].  Something detected.");
});

 

DistanceDetector.js


The distance class is designed to create an instance of an interface to the ultrasonic range finder, as well as rebroadcast events when an object is detected.  To save power, the device is not always scanning; it can be queried at any time using the following public methods (a usage sketch follows the method descriptions):

 

GetDistance() - Returns the distance in cm to the closest object.

IsObjectClose(thresholdCM) - Returns true/false indicating whether an object is within the distance set by the threshold.
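
A minimal usage sketch is shown below; the 9600 baud rate and the 30 cm threshold are assumptions and should be adjusted to match the range finder’s documentation and your course.

var constants = require("./constants.js");
var DistanceDetector = require("./DistanceDetector.js");

//UART number comes from constants.js; 9600 baud is an assumed value
var distanceDetector = new DistanceDetector(constants.GPIO_DIST_UART, 9600);

var cm = distanceDetector.GetDistance();
console.info("Closest object is " + cm + " cm away");

if (distanceDetector.IsObjectClose(30)) {   //example threshold of 30 cm
    console.warn("Obstacle ahead - stopping.");
}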

Odometer.js

The Odometer class is responsible for firing an external event whenever an Interrupt Service Routine (ISR) fires on the configured GPIO pin.

 

 

 

 

Setting up the ISR to detect the continuity trigger is accomplished by simply calling the isr() method on the gpioPin object and creating a listener.  Inside the handler, the public event is then triggered, which will be handled by the Navigation object’s listener method.

Here is an example taken from the Odometer constructor that shows setting up the ISR and firing the external event.  The full source code can be found in the appendix at the end of the document.

    this.gpioPin = new mraa.Gpio(pinNumber);
    this.gpioPin.dir(mraa.DIR_IN);
    this.gpioPin.isr(mraa.EDGE_BOTH, function _onPIRAlert()
    {
       if(self.Listening)
           self.SensorEvent.emit('WheelRevolutionEvent');
    }); 

CourseLeg.js

The course currently being navigated is a composition of CourseLeg objects referenced by the Navigation object.  Each CourseLeg maintains the pertinent information for that leg, such as how far to go in centimeters, the number of revolutions required to travel that distance, and which direction to turn at the end.  The conversion from distance to revolutions is performed in the CourseLeg constructor using the sprocket diameter stored on the robot.

Navigation.js

The navigation class is the largest in the design and is responsible for robot control and management of the CurrentCourse, which is represented by a collection of CourseLeg objects.   The CurrentCourse[] collection is populated inside of the SetCourse() method, which is expected to be called prior to starting any course. 

 

On initialization of the Navigation object, an odometer instance is created, which internally triggers a public event, “WheelRevolutionEvent”, each time the wheel turns.  A callback handler captures these events and increments the current wheel rotation count when applicable, using a minimum-cycle-time filter to ignore duplicate hits while the magnet approaches or departs the reed switch.  The handler also manages leg and course termination logic by calling a method or triggering a separate event.

 

 

 

Navigation Odometer revolution handler:

this.odometer.SensorEvent.on('WheelRevolutionEvent', function _OnRobotWheelRevolution(){
        if (self.CurrentLeg < CurrentCourse.length)
        {
            if ((Date.now() - lastClickTime) > constants.ODO_CYCLE_TIME_MIN)
            {
                lastClickTime = Date.now();
                var current = ++CurrentCourse[self.CurrentLeg].CurrentBotRevolutions;
                var required = CurrentCourse[self.CurrentLeg].RevolutionsRequired;

                console.info("CLICK[" + current +"] of ["  + required + "]");

                //Trigger the end of Leg Event
                if (current >= required)
                {
                    self._OnEndOfLegEvent();
                    if (current>required)
                    {
                        console.info("Received a click event after required has been met.");
                    }
                    self.RobotStop();
                    self._OnEndOfLegEvent();
                }
            }
        }
        else
        {
            if (constants.DEBUG)
                console.info("---- COURSE ENDED ----");
            self.StopCourse();
        }
    });

Once a course has started, a callback timer periodically checks on the navigational status of the robot to determine if it is time to turn, if it is currently turning, or ready to start the next leg.  

The following methods are called to start and stop the polling navigation timer.
 

this._StartTimerNavigationPoll = function _StartTimerNavigationPoll() {
        timerPolling = setInterval(this._onNavigationCheck, constants.POLLINGNAV_MS);
    };

    this._StopTimerNavigationPoll = function _StopTimerNavigationPoll() {
        console.info("[NAV]: STOPPING polling timer");
        clearInterval(timerPolling);
       _navigationStopped = true;
    };

The complete source code for this class can be found in the Appendix at the end of this document.

Camera.js

A USB Video Class (UVC) camera is used to take still images for upload to a remote server.  The Camera object exposes an easy interface for taking pictures on demand.  When asked to capture an image, a child process is created that executes an external application, fswebcam, to take a snapshot.  The image is then uploaded to a remote server using SCP.

To position the camera, call Robot.Look(anglePan, angleTilt).  The Robot class holds instances of the servomotors that allow it to point to a given X, Y location.  Public methods are available to move the camera to any position using Look(angleX, angleY); for instance, to center the camera, call LookCenter(), which positions both servos at 90 degrees.

Start by installing the fswebcam package:

opkg install fswebcam

Once installed and verified, add the camera device details to the constants.js file, and set the CAMERA_USE flag to true.  Setting this flag is required for any images to be captured.

Image capture configurations:

FTP_REMOTE_IP : "192.168.1.166",
FTP_REMOTE_BASE_FOLDER : "/home/root/inbound",
FTP_REMOTE_USER : "username",
FTP_REMOTE_PASSWORD : "passwd",
CAMERA_USE : true

The source code for this class can be found in the appendix at the end of this document.
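
The capture-and-upload flow can be sketched with Node.js child processes. The snippet below is a simplified illustration of what Camera.js does rather than the full class; the fswebcam arguments are typical usage, and it assumes key-based SSH authentication is configured for scp (the real class may handle credentials differently).

var exec = require('child_process').exec;
var constants = require("./constants.js");

//Capture a still image with fswebcam, then push it to the edge device with scp
function captureAndUpload(tag) {
    var fileName = "/tmp/" + tag + "_" + Date.now() + ".jpg";
    var captureCmd = "fswebcam -r 800x600 --no-banner " + fileName;
    var uploadCmd = "scp " + fileName + " " +
                    constants.FTP_REMOTE_USER + "@" + constants.FTP_REMOTE_IP + ":" +
                    constants.FTP_REMOTE_BASE_FOLDER;

    exec(captureCmd, function (captureErr) {
        if (captureErr) { return console.error("Capture failed: " + captureErr); }
        exec(uploadCmd, function (uploadErr) {
            if (uploadErr) { return console.error("Upload failed: " + uploadErr); }
            console.info("Uploaded " + fileName);
        });
    });
}

captureAndUpload("Surveillance");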

Robot Automation Components

Adding automation capabilities requires several sensors to monitor the surrounding environment.  We need the ability to track distance traveled, detect object presence and distance, and capture and submit image data to a nearby edge device for real-time analysis.  All of the components required and discussed in this article are provided in the kit, with the exception of the odometer, which we will construct.  This additional component requires a small (<10 mm) button neodymium magnet, a reed switch, and two resistors.  We will walk through construction of the odometer in the section below.

The component pin mapping used in these examples is shown below.

Component                 | Peripheral          | Intel® Edison Pin | Translated Pin       | MRAA Pin
--------------------------|---------------------|-------------------|----------------------|---------
Passive Infrared Sensor   | Digital Input       | GPIO43            | D11                  | 38
Tilt Servo                | PWM                 | PWM1              | D5                   | 14
Pan Servo                 | PWM                 | PWM0              | D3                   | 20
Ultrasonic Sensor         | UART TX / UART RX   | GP131 / GP130     | D1/TX / D0/RX        | UART0
Left LED                  | Digital Output      | GPIO48            | D7                   | 33
Right LED                 | Digital Output      | GPIO41            | D10                  | 51
Buzzer                    | PWM                 | PWM2              | D6                   | 0
Brushed DC Motors         | I2C SCL / I2C SDA   | SCL1 / SDA1       | I2C1_SCL / I2C1_SDA  | I2C0
Odometer                  | Digital Input       | GP129             | D4                   | 25

POWER

The numerous sensors, combined with parallel motor movement, Wi-Fi transmission, and a USB-powered camera, require additional power above the 9 volts provided by default.  Initial tests on the Devastator* platform show that 9 volts is not sufficient to consistently power the DC motors in parallel with the Wi-Fi enabled camera.  In addition, unpredictable behavior was observed below 9 volts, including servo alignment and motor controller direction issues.  We therefore increase the input Vcc voltage to around 14 volts by adding four more AA batteries.  Having rechargeable batteries on hand during testing is very helpful.

SENSORS

To help our robot navigate autonomously, we will need to employ a few sensors, which are outlined in the following table:

Sensor                    | Description                                   | In Kit
--------------------------|-----------------------------------------------|-------
Passive Infrared Sensor   | Object detection                              | YES
Ultrasonic Range Finder   | Detect distance to an object                  | YES
Odometer                  | Counts revolutions, tracks distance traveled  | NO

 

First, to know where or how far to go, we have to know how far we have come.  Distance traveled is measured using a component called an odometer, which we will construct.  Second, knowing whether something is within our field of motion is critical to avoid hitting it.  The passive infrared sensor helps here by detecting objects that emit infrared radiation (such as humans or small animals).  Finally, the ability to detect how far away a physical object is becomes a crucial part of autonomous navigation.  Distance detection, using an ultrasonic range finder, allows our robot to detect an in-path obstacle as well as determine how close we are to a wall as we travel alongside it, say, in a maze.

Object Detection - Passive Infrared

The Passive Infrared (PIR) module is connected to digital GPIO pin D11 and toggles to a HIGH signal when an object is detected within range.

Object Distance – Ultrasonic Range Finder

Knowing the distance to an object is as important a feature of autonomous navigation as the ability to know whether something is there at all.  We employ a device called an ultrasonic range finder, which bounces ultrasonic sound waves off surrounding objects and, by measuring the elapsed time of the return signal, provides the distance to the object.

For this example, we wire the range finder to communicate using UART, and as discussed in the software section, respond to events in the DistanceDetector class.

Pin connections from RIGHT TO LEFT:  +5, GND, Rx, Tx

Odometer – Track distance travelled

There are multiple approaches to tracking distance travelled, including GPS, magnetometers, IMUs, and optical versus conductive revolution counting.  Each method has its pros and cons.  For instance, GPS only provides accuracy down to about three meters, and maneuvering a small robot requires much finer accuracy.  Similarly, using a magnetometer or an IMU to determine relative distance can suffer inaccuracies due to disruptions in the magnetic field near the device, perhaps caused by the spinning DC motors.

A simple approach to tracking distance is to construct a revolution counter using a magnetic switch.  The parts required for construction of this odometer can easily be found online:

PARTS LIST

Reed switch x 1
200 Ω resistor x 2
10 mm neodymium magnet x 1
Crafter's glue, electrical tape, and jumper wires

A reed switch is a small component that houses two pieces of metal a few millimeters apart.  When a magnet comes near the switch, the two pieces of metal attract each other and close the circuit, allowing electricity to flow.  For more information on reed switches, see the following wiki:

https://en.wikipedia.org/wiki/Reed_switch

ASSEMBLY

Onto the outer sprocket wall that faces the chassis, attach a button neodymium magnet.  Position the magnet onto a stable portion of the sprocket with super glue, and then place the sprocket back onto the tank.

Onto the outside of the chassis, near where the sprocket turns, attach a reed switch in line with a 200 Ω resistor to an open digital pin and ground.  Attach the other end of the reed switch to an open power pin on the GPIO rail with the additional resistor.  The prototype of the circuit can be seen below.  In this example, you can see the matching colored wires at the bottom, ready for attachment to the rail: green is signal, red is input voltage, and black is ground.

[Figure 1 Layout of Odometer prototype]

[Figure 2 Wired prototype]

[Figure 3 Odometer connected]

Secure with tape, and prior to attaching the odometer to the chassis, be sure to test one more time.

The completed installation should look like the image in Figure 4 below.  You can see the magnet attached to the sprocket, just to the right of the reed switch that is affixed to the outside of the chassis with a small piece of tape.  Pull the LED through the front of the robot and secure with some electrical tape.

[Figure 4 Reed Switch and Magnet installed]

[Figure 5 Front of robot with mounted LED indicator]

With the hardware completed, we can focus on the software implementation, which can be verified using an Arduino sketch. In this example, I used an Arduino 101 (branded Genuino 101 outside the U.S.) attached to a breadboard.  The migration of this sample code** to the Intel® Edison module is discussed in the previous section detailing the Odometer class.

/*Sample Arduino code** */

volatile int revolutionCount = 0;
unsigned long startCycle = 0;
const unsigned long CYCLE_TIME_MIN = 200;

void setup() {
  Serial.begin(9600);
  attachInterrupt(digitalPinToInterrupt(2), isr, FALLING);
}
void isr(){
  if (startCycle==0){
    startCycle=millis();
    Serial.print("[TURN START] ");
    Serial.println(++revolutionCount);
  }
  else if ( (millis() - startCycle)>CYCLE_TIME_MIN){
      startCycle = 0;
      Serial.print("-----[TURN END] ");
      Serial.println(revolutionCount);
  }
}
void loop() {
}

Attach a piece of tape to the sprocket, and count the turns for different distances and speed combinations.  Compare the visual observations with what is reported at the end of each cycle as shown in Figure 6.

[Figure 6 Arduino* sample output revolution counting]

Figure 7 below shows the data collected using the magnetic reed switch odometer.  Each rise in voltage in the signal triggers an increment of a revolution counter in software.

[Figure 7 Arduino* Plotter output]

To complete calibration of the odometer, we simply need to know the diameter of our wheel sprocket in mm.  Knowing the diameter, we can compute the circumference, which is π × d (equivalently 2πr).  One full turn of the sprocket moves one circumference of tread, which is the distance traveled.  We can also capture the tread length and compute a ratio as an additional reference.

 

We find that the track length is 48 cm and the diameter of the drive sprocket is 45 mm.  We enter these values into our constants.js configuration file and can later run some tests to see how close our estimates are.
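
As a quick sanity check, the same conversion used in the CourseLeg constructor can be run by hand: with a 45 mm sprocket diameter, one revolution moves roughly 3.14 x 4.5 cm ≈ 14.1 cm of tread, so a 100 cm leg needs about seven revolutions.

var constants = require("./constants.js");

//Distance travelled per sprocket revolution, in cm (same math as CourseLeg.js)
var cmPerRevolution = 3.14 * (constants.SPROCKET_DIAMETERMM / 10);   //~14.1 cm

var legDistanceCM = 100;                                    //example leg length
var revolutionsRequired = legDistanceCM / cmPerRevolution;  //~7.1 revolutions

console.info("One revolution = " + cmPerRevolution.toFixed(1) + " cm; a " +
             legDistanceCM + " cm leg needs ~" + revolutionsRequired.toFixed(1) + " revolutions");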

VISION

A basic requirement for surveillance is the ability to capture an image snapshot.  The kit we are using comes with a UVC-compliant webcam, so our hardware is ready to go.  If your camera is not working correctly and you are unable to see a /dev/video* device, take a look at the article “Using the Intel® Edison Module to Control Robots” listed in the Recommended Reading section.

Also included with the kit are two servomotors: one for panning left and right along the X axis, and another for tilting up and down along the Y axis.  Before using the servomotors, be sure all of the wires have enough slack to cover the full range of motion and that they do not protrude onto the moving tracks.  The servos should be connected to the appropriate PWM GPIO pins.  The software for controlling the camera is covered in the section detailing Camera.js.

Autonomous navigation

For our robot to move by itself, it must be able to follow a static path.  To begin, we first establish the ability to move accurately in a known direction for a known distance, and to turn accurately.  During setup of a course, the sprocket circumference is used to set the number of revolutions per leg.  During navigation, this per-leg limit is checked to determine whether the end of the leg has been reached, which can then trigger a call to StartTurning() or StopCourse().

The Navigation class manages the movement of the robot through a course by controlling the direction and speed of movement based on a fixed course map.  The CurrentLeg index is tracked as a property on the class, incremented after each turn completes, and compared against the course length to ensure no overrun occurs.
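
Putting this together, a course can be defined and started in a few lines. The sketch below mirrors the StartWarmupSequence() method from the appendix (thisBot, thisNavigationControl, and constants come from main.js): a square course of four 100 cm legs with a right turn at the end of each.

var courseLeg = require("./CourseLeg.js");

//Four 100 cm legs, each ending in a right turn (a square), driven at crawl speed
var squareCourse = [];
for (var leg = 0; leg < 4; leg++) {
    squareCourse.push(new courseLeg.CourseLeg(leg, 100, constants.RIGHT, false,
                                              thisBot, constants.MOTOR_SPEED_CRAWL));
}

thisNavigationControl.SetCourse(squareCourse);
thisNavigationControl.ShowCourse();    //print each leg and its required revolutions
thisNavigationControl.StartCourse();   //starts the polling timer and drives forward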

Navigation.js Pseudocode

FOR EACH Leg in Course
    GO FORWARD(speed)

EVERY 500ms - TIMER POLL
    IF NOT Turning AND CourseLeg.EndOfLeg
        IF Is Time to Turn
            TURN, start TurnTimerCallback
    ELSE IF Stopped Turning
        START NEXT LEG
    ELSE IF EndOfCourse
        Stop Polling Timer

onTurnTimerCompleted
    StopRobot
    StoppedTurning = true

Also, at any point during the journey, a camera image can be uploaded:

this.camera.TakePicture("StartTurn");
this.camera.TakePicture("EndTurn");

Going Further

The Navigation.js class handles where the robot needs to go by iterating the collection of CourseLegs; however, it does not know how accurate the forward distances and turn times are, nor whether it has drifted off course.  Even with knowledge of these factors, different surfaces and the current power level add further variables that must be considered.  The Navigation and CourseLeg classes could easily be extended to accept adjustments, such as motor speed offsets, through configuration.  Another nice touch would be to include drift correction in response to known alignment issues, or to use IMU outputs from an Intel® Curie module as an additional point of reference for directional control.  Given any new terrain, you could then run a few tests and feed the observed offsets back into the calculations for each leg of the journey.
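
The constants file already reserves MOTOR_LEFT_OFFSET, MOTOR_RIGHT_OFFSET, and DRIFT_ANGLE_PER_METER for this kind of tuning. One possible approach, shown only as a sketch and not part of the current Navigation.js, is to fold a per-motor trim offset into the speed byte before it is written over I2C:

//Hypothetical helper: apply a per-motor trim offset and clamp to a valid speed byte.
//MOTOR_LEFT_OFFSET and MOTOR_RIGHT_OFFSET come from constants.js; tune them per surface.
function trimmedSpeed(baseSpeed, offset) {
    var speed = baseSpeed + offset;
    if (speed > constants.MAX_SPEED) speed = constants.MAX_SPEED;
    if (speed < 0) speed = 0;
    return speed & 0xFF;
}

//Inside RobotGo(), the left/right speed messages could then use:
//  i2C_MSGBFR[3] = trimmedSpeed(speed, constants.MOTOR_LEFT_OFFSET);   //left motor
//  i2C_MSGBFR[3] = trimmedSpeed(speed, constants.MOTOR_RIGHT_OFFSET);  //right motor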

Image processing at the Edge

An Intel® IoT Gateway can be used as our edge device, providing storage for the surveillance images and an execution environment for image analysis services.  On the client side, the Camera.js class is responsible for capturing the images with a call to fswebcam and transferring them using SCP, so the server simply needs to have SCP configured correctly.  Most Linux* builds come with SCP installed; if not, it can easily be added by installing the openssh-client package.

Once the surveillance images are captured with fswebcam, they can be scanned for object detection.  By taking a series of images from a fixed location, the images can be analyzed for differences using an application like ImageMagick*, which provides a convenient command line interface for comparing two images.

For more information, see ImageMagick* at http://www.imagemagick.org/.

On Ubuntu: 

sudo apt-get install imagemagick

Now create a script that parses a set of images in a folder and flags when differences are detected.  With the compare command, the Absolute Error (AE) metric shows when the pixels in two files differ in any way.  Applying a fuzz percentage eliminates unwanted noise for a more accurate result.  The -metric parameter tells compare to output a number representing the pixel difference, rather than an image.

compare -metric AE -fuzz 5% image1 image2
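
A small Node.js script on the gateway can then walk the inbound folder and run that compare command over each consecutive pair of images, flagging pairs whose pixel difference exceeds a threshold. The folder path and threshold below are example values; note that compare writes the AE metric to stderr and exits non-zero when the images differ.

var fs = require('fs');
var path = require('path');
var execFile = require('child_process').execFile;

var inboundDir = '/home/root/inbound';   //example folder; should match FTP_REMOTE_BASE_FOLDER
var threshold = 500;                     //example: pixels allowed to differ before flagging a change

var images = fs.readdirSync(inboundDir)
               .filter(function (f) { return /\.jpg$/i.test(f); })
               .sort();

for (var i = 1; i < images.length; i++) {
    compareImages(path.join(inboundDir, images[i - 1]), path.join(inboundDir, images[i]));
}

function compareImages(imageA, imageB) {
    //'null:' discards the difference image; the AE pixel count arrives on stderr
    execFile('compare', ['-metric', 'AE', '-fuzz', '5%', imageA, imageB, 'null:'],
        function (err, stdout, stderr) {
            var diff = parseInt(stderr, 10);
            if (!isNaN(diff) && diff > threshold) {
                console.info('[CHANGE DETECTED] ' + imageA + ' -> ' + imageB +
                             ' (' + diff + ' pixels differ)');
            }
        });
}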

Source Code

 

constants.js**

/* constants.js */

// Matt Chandler
// Intel, Corp.
// March, 2017

//NOTE that all GPIO PINS are MRAA Mappings to Edison pins
// https://iotdk.intel.com/docs/master/mraa/edison.html
// which are mapped using the Romeo schematic's GPIOs to edison pins.


/* jslint node:true */
/* jshint unused:true */

module.exports = {  SIMULATION : true, //Do not talk to motors, run sensors using callback timers
                    DEBUG : true,      //Show more debug messages
                    POLLINGNAV_MS : 1000,   //Navigation callback timer, should be lowest number=quickest
                    POLLINGHEARTBEAT_MS : 2000, //overall status poll/health check
                    I2CBusNumber : Number(0),
                    I2CAddress : Number(4),
                    I2CMotorDirectionLeft : 0xB1,
                    I2CMotorDirectionRight : 0xB2,
                    I2CMotorSpeedLeft : 0xC1,
                    I2CMotorSpeedRight : 0xC2,
                    MAX_SPEED : 0xF9,
                    RIGHT : 0,
                    LEFT : 180,
                    FORWARD : 90,
                    BACKWARD : 270,
                    MOTOR_SPEED_TURN: 0x90,
                    TURNTIME_90_MS : 1000,   //assumed ms to complete a 90-degree turn; referenced by Navigation.StartTurning, tune per surface
                    LIGHT_LEFT : 0,
                    LIGHT_RIGHT : 1,
                    TRACK_LENGTHCM : 48,
                    SPROCKET_DIAMETERMM : 45,
                    MOTOR_MSG_CLOCKWISE : 0x0,
                    MOTOR_MSG_COUNTERCLOCKWISE : 0x1,
                    MOTOR_SPEED_CRAWL : 0x10,
                    MOTOR_SPEED_WALK : 0x20,
                    MOTOR_SPEED_JOG : 0x30,
                    MOTOR_SPEED_RUN : 0x80,
                    GPIO_LIGHT_LEFT : 33,
                    GPIO_LIGHT_RIGHT : 51,
                    GPIO_BUZZER : 0,
                    GPIO_DIST_UART : 0,
                    GPIO_PIR_SENSOR : 38,
                    GPIO_ODO_CONTACT_SIGNAL : 50,
                    GPIO_ODO_CONTACT_MONITOR : 25,
                    GPIO_PAN : 20,
                    GPIO_TILT : 14,
                    SERVO_PERIOD_MS_PAN : 10000,
                    SERVO_PERIOD_MS_TILT : 10000,
                    SERVO_PULSE_WIDTH_PAN_0 : 1000,
                    SERVO_PULSE_WIDTH_PAN_90 : 1500,
                    SERVO_PULSE_WIDTH_PAN_180 : 2000,
                    ODO_CYCLE_TIME_MIN : 500,
                    FTP_REMOTE_IP : "192.168.1.166",
                    FTP_REMOTE_BASE_FOLDER : "/home/root/inbound",
                    FTP_REMOTE_USER : "username",
                    FTP_REMOTE_PASSWORD : "passwd",
                    CAMERA_USE : true,
                    LEG_MAXTIME_MS : 10000,
                    MOTOR_RIGHT_OFFSET : 0,
                    MOTOR_LEFT_OFFSET : 50,
                    DRIFT_ANGLE_PER_METER : 5
                 };

main.js**

/* main.js */

// Matt Chandler
// Intel, Corp.
// March, 2017

/* jslint node:true */
/* jshint unused:false */

var m = require("mraa");

var Robot = require("./Robot.js");
var courseLeg = require("./CourseLeg.js");
var Navigation = require("./Navigation.js");
var constants = require("./constants.js");
var ObjectDetector = require("./ObjectDetector.js");
var DistanceDetector = require("./DistanceDetector.js");

_setupListeners();
_buildCourse();

//START PROGRAM
//NOTE: the Robot constructor also expects servo configuration; the values below come from constants.js,
//      with the tilt servo reusing the pan pulse widths and a 500 ms settle delay as assumptions.
var thisBot = new Robot(constants.I2CBusNumber, constants.I2CAddress, 0x55, 0xaa, "SurveyOne",
                        constants.TRACK_LENGTHCM, constants.SPROCKET_DIAMETERMM,
                        constants.GPIO_PAN, constants.GPIO_TILT,
                        constants.SERVO_PERIOD_MS_PAN, constants.SERVO_PERIOD_MS_TILT,
                        constants.SERVO_PULSE_WIDTH_PAN_0, constants.SERVO_PULSE_WIDTH_PAN_90, constants.SERVO_PULSE_WIDTH_PAN_180,
                        constants.SERVO_PULSE_WIDTH_PAN_0, constants.SERVO_PULSE_WIDTH_PAN_90, constants.SERVO_PULSE_WIDTH_PAN_180,
                        500, 500);
var thisNavigationControl = new Navigation(thisBot);
var objectDetector = new ObjectDetector(constants.GPIO_PIR_SENSOR, false);

_enableSensors();

_initialize(thisBot);

//--------------------------------------------
// LISTENERS/Callbacks

function _onHeartBeat() {

        if (constants.DEBUG===true)
        {
           ShowStatus();
        }
    }
function ShowStatus(){
     var statusString = "-----[HEARTBEAT]: " + thisBot.Name;
            if (thisBot.IsCurrentlyMoving)
                statusString += " MOVING on LEG[" + thisNavigationControl.CurrentLeg + "]";
            else
                statusString += " IDLE";

            statusString += " [PIR]: ";
            if(objectDetector.Listening===true)
                statusString += "on";
            else
                statusString += "off";

            statusString += " , ObjectDetected=" + objectDetector.ObjectDetected;
            statusString += "-----";
}

objectDetector.SensorEvent.on('ObjectDetected', function _onPIRSensorTriggered() {
    //PIR sensor triggered
    if (constants.DEBUG)
        console.info("[OBJECT DETECTION].  Something detected.");
});

function _reset() {
    thisNavigationControl.RobotStop();
}
function _setupListeners() {
    console.info("Setting up listeners. [Heartbeat]");
    setInterval(_onHeartBeat, constants.POLLINGHEARTBEAT_MS);
}
function _buildCourse() {
    console.info("Building course");
}
function _initialize(bot) {
    console.info("Initializing robot: ");
    console.info(thisBot.StatusString);
    thisBot.TurnLightsOff();
    thisNavigationControl.StartWarmupSequence();
}
function _enableSensors(){
    objectDetector.StartListening();
    if(constants.DEBUG)
        console.info("PIR Sensor enabled: ");
}
function _ToggleLights(lightState){
    thisBot.ToggleLight(constants.LIGHT_LEFT, lightState);
    thisBot.ToggleLight(constants.LIGHT_RIGHT, lightState);
}

Robot.js**

/* Robot.js */

// Matt Chandler
// Intel, Corp.
// March, 2017

/* jslint node:true */
/* jshint unused:false*/

var constants = require("./constants.js");
var Light = require("./Light.js");
try {
        require.resolve("mraa");
 }
 catch (e) {
        console.error("Critical: mraa node module is missing, try 'npm install -g mraa' to fix.", e);
        process.exit(-1);
 }
var mraa = require("mraa") ;

module.exports = Robot;
function Robot(I2CBusNumber, I2CAddress, I2CMessageHeader1, I2CMessageHeader2, name, tracklengthCM, sprocketDiameterMM, panPinNumber, tiltPinNumber, panPeriodus, tiltPeriodus, pwPan0, pwPan90, pwPan180, pwTilt0, pwTilt90, pwTilt180, panDelayAfterPosition, tiltDelayAfterPosition) {
    var self = this;   //keep a reference for callbacks where 'this' is not the Robot instance
    this.Name = name;
    this.I2CBusNumber = I2CBusNumber;
    this.I2CAddress = I2CAddress;
    this.SprocketDiametermm = sprocketDiameterMM;
    this.TrackLengthcm = tracklengthCM;

    this.I2CMessageHeader1 = I2CMessageHeader1;
    this.I2CMessageHeader2 = I2CMessageHeader2;
    this.IsCurrentlyMoving = false;
    this.IsCurrentlyTurning = false;
    this.CurrentDirection = constants.FORWARD;
    this.CurrentSpeed = 0x00;

    // Setup the Servo motors
    this.pinNumberPan = panPinNumber;
    this.pinNumberTilt = tiltPinNumber;
    this.pwPan0 = pwPan0;
    this.pwPan90 = pwPan90;
    this.pwPan180 = pwPan180;
    this.pwTilt0 = pwTilt0;
    this.pwTilt90 = pwTilt90;
    this.pwTilt180 = pwTilt180;
    this.panDelayAfterPosition = panDelayAfterPosition;
    this.tiltDelayAfterPosition = tiltDelayAfterPosition;

    var tiltServo = new mraa.Pwm(parseInt(tiltPinNumber));
    var panServo = new mraa.Pwm(parseInt(panPinNumber));
    tiltServo.period_us(parseInt(tiltPeriodus));  //100Hz -> 10ms Period
    panServo.period_us(parseInt(panPeriodus));   //100Hz -> 10ms Period

    //Setup the lights
    this.LightLeft = new Light(constants.LIGHT_LEFT, false, constants.GPIO_LIGHT_LEFT);
    this.LightRight = new Light(constants.LIGHT_RIGHT, false, constants.GPIO_LIGHT_RIGHT);

    this.StatusString = "Name: " + name +", Bus:" + this.I2CBusNumber +", Address: " + this.I2CAddress +", Sprocket Diameter: " + this.SprocketDiametermm +", Messager Headers: [" + this.I2CMessageHeader1 + ":" + this.I2CMessageHeader2 +"], " + (this.IsCurrentlyMoving ? "Moving":"Not Moving") +", " + (this.IsCurrentlyTurning ? "Turning":"Stopped") +", SPEED = " + this.CurrentSpeed + ", Direction = " + this.CurrentDirection;

    this.ToggleLight = function ToggleLight(lightNumber, lightOn){

        var txt = "";
        if (lightNumber===constants.LIGHT_LEFT){
            this.LightLeft.ToggleState(lightOn);
            txt += "LEFT ";
        }
        else if (lightNumber===constants.LIGHT_RIGHT){
            txt += "RIGHT ";
            this.LightRight.ToggleState(lightOn);
        }
        txt += " light turned ";
        if(lightOn===true)
            txt += "on";
        else
            txt += "off";

        console.info(txt);

    };
    this.TurnLightsOn = function TurnLightsOn(){
        this.ToggleLight(constants.LIGHT_LEFT, true);
        this.ToggleLight(constants.LIGHT_RIGHT, true);
    };
    this.TurnLightsOff = function TurnLightsOff(){
        this.ToggleLight(constants.LIGHT_LEFT, false);
        this.ToggleLight(constants.LIGHT_RIGHT, false);
    };

    this._togglePanServo = function _togglePanServo(state){
            panServo.enable(state);
    };
    this._toggleTiltServo = function _toggleTiltServo(state){
            tiltServo.enable(state);
    };
    this.mapValue = function _mapValue(val, origMin, origMax, destMin, destMax){
        //http://stackoverflow.com/questions/345187/math-mapping-numbers
        var ratio = (destMax - destMin) / (origMax - origMin);
        return ratio * (val - origMin) + destMin;
    };

    this.Pan = function Pan(angle){
        var pwPan = this.mapValue(angle, 0, 180, this.pwPan0, this.pwPan180);

        this._togglePanServo(false);
        panServo.pulsewidth_us(pwPan);

        this._togglePanServo(true);
        setTimeout(function () { self._togglePanServo(false); }, this.panDelayAfterPosition);
    };

    this.Tilt = function Tilt(angle){
        var pwTilt = this.mapValue(angle, 0,180, this.pwTilt0, this.pwTilt180);

        this._toggleTiltServo(false);
        tiltServo.pulsewidth_us(pwTilt);

        this._toggleTiltServo(true);
        setTimeout(function () { self._toggleTiltServo(false); }, this.tiltDelayAfterPosition);
    };
    this.Look = function Look(anglePan, angleTilt){
        this.Pan(anglePan);
        this.Tilt(angleTilt);
    };
    this.PanCenter = function PanCenter(){
        this._togglePanServo(false);
        panServo.pulsewidth_us(this.pwPan90);
        this._togglePanServo(true);
        setTimeout(function () { self._togglePanServo(false); }, this.panDelayAfterPosition);
    };
    this.TiltCenter = function TiltCenter(){
        this._toggleTiltServo(false);
        tiltServo.pulsewidth_us(this.pwTilt90);
        this._toggleTiltServo(true);
        setTimeout(function () { self._toggleTiltServo(false); }, this.tiltDelayAfterPosition);

    };
    this.LookCenter = function LookCenter(){
        this.PanCenter();
        this.TiltCenter();
    };
}

Robot.prototype.toString = function toString() { return this.StatusString; };

CourseLeg.js**

 /* CourseLeg.js */

// Matt Chandler
// Intel, Corp.
// March, 2017

/* jslint node:true */
/* jshint unused:false*/

function CourseLeg(leg, distFwcm, directionPost, blockAtEnd, bot, legSpeed)
{
    this.LegNumber = leg;
    this.DistanceForwardcm = distFwcm;
    this.DirectionToTurn = directionPost;
    this.BlockAtEndOfStep = blockAtEnd;

    this.CurrentBotRevolutions = 0; //Number of revolutions counted for this leg
    this.RevolutionsRequired = 0.00; //Calculated value
    this.EndOfLegReached = false;

    this.Speed = legSpeed;

    //Distance travelled by one revolution of sprocket
    //  = circumference = 3.14 * diameter = (3.14 * 45)mm ≈ 141mm = 14.1cm

    this.RevolutionsRequired = distFwcm/(3.14 * (bot.SprocketDiametermm/10));

}
exports.CourseLeg = CourseLeg;

Light.js**

/* Light.js */

// Matt Chandler
// Intel, Corp.
// March, 2017

/* jslint node:true */
/* jshint unused:false*/
var constants = require("./constants.js");
try {
        require.resolve("mraa");
 }
 catch (e) {
        console.error("Critical: mraa node module is missing, try 'npm install -g mraa' to fix.", e);
        process.exit(-1);
 }
var mraa = require("mraa") ;

//Light Object
module.exports = Light;
function Light(locn, defaultState, pinNumber)
{
    this.LightLocation = locn;
    this.LightState = defaultState;
    this.gpioPin = new mraa.Gpio(pinNumber);
    this.gpioPin.dir(mraa.DIR_OUT);

    this.ToggleState = function ToggleState(lightOn){
        //var myDigitalPin5 = new mraa.Gpio(pinNumber); //setup digital read on Digital pin #5 (D5), mraa 33
       // this.gpioPin.dir(mraa.DIR_OUT); //set the gpio direction to output
        this.gpioPin.write( (lightOn===true)? 1:0); //set the digital pin to high (1)
        this.LightState = lightOn;
        console.info("Toggled State to: " + (lightOn===true)? "ON":"OFF");
    };
}

Odometer.js**

/* Odometer.js */

// Matt Chandler
// Intel, Corp.
// March, 2017

/* jslint node:true */
/* jshint unused:false*/

var constants = require("./constants.js");
try {
        require.resolve("mraa");
 }
 catch (e) {
        console.error("Critical: mraa node module is missing, try 'npm install -g mraa' to fix.", e);
        process.exit(-1);
 }
var mraa = require("mraa") ;

var EventEmitter = require('events');
module.exports = Odometer;
function Odometer(pinNumber)
{
    var self = this;

    this.Listening = false;
    this.SensorEvent = new EventEmitter();

    this.gpioPin = new mraa.Gpio(pinNumber);
    this.gpioPin.dir(mraa.DIR_IN);
    this.gpioPin.isr(mraa.EDGE_BOTH, function _onPIRAlert()
    {
       if(self.Listening)
           self.SensorEvent.emit('WheelRevolutionEvent');
    });
    this.StartListening = function StartListening()
    {
        this.Listening = true;
    };
    this.StopListening = function StopListening()
    {
        this.Listening = false;
    };
    this.IsContacted = function IsContacted(){
        return (this.gpioPin.read()>0) ? true : false;

    };
    this.GetPinValue = function GetPinValue(){
        return this.gpioPin.read();

    };
}

Navigation.js**

/*Navigation.js*/

// Matt Chandler
// Intel, Corp.
// March, 2017

/* jslint node:true */
/* jshint unused:false*/

var constants = require("./constants.js");
try {
        require.resolve("mraa");
 }
 catch (e) {
        console.error("Critical: mraa node module is missing, try 'npm install -g mraa' to fix.", e);
        process.exit(-1);
 }

var mraa = require("mraa") ;
var courseLeg = require("./CourseLeg.js");
var Robot = require("./Robot.js");
var Odometer = require("./Odometer.js");
var Camera = require("./Camera.js");

var CurrentCourse = [];//Array of CourseLegs

var lastClickTime = Date.now();


//mraa I2C vars used for motor control
var i2C_MSGBFR = new Buffer(5);
i2C_MSGBFR[0] = 0x55;
i2C_MSGBFR[1] = 0xaa;

var _navigationRunning = false;
var _navigationStopped = false;

if (false)//constants.DEBUG
    console.log(mraa);     // prints mraa object to XDK IoT debug output panel

module.exports = Navigation;
function Navigation(bot) {
    var self=this;
    var timerPolling;
    var thisBot = bot;

    var maxLegTimeMS = 5000;
    var timeNavigationStart = Date.now();

    this.CurrentLeg = 0;
    this.odometer = new Odometer(constants.GPIO_ODO_CONTACT_MONITOR);
    this.odometer.StartListening();
    this.camera = new Camera("800x600", true);

    //Navigation Status Tracking
    var turnCompleted = false;
    var turnStarted = false;

    this.SetCourse = function SetCourse(courselegs) {
        CurrentCourse = courselegs;
        console.info("Course set.");
    };
    this.AddLeg = function AddLeg(courseleg) {
        console.info("Leg added: Leg:( " + courseleg.Leg + ", Distcm:(" + courseleg.DistanceForwardcm + "), TurnAtEnd:(" + courseleg.DirectionToTurn + ")");
        CurrentCourse.push(courseleg);
    };
    this.ShowCourse = function ShowCourse() {
        console.info("[COURSE]: " + CurrentCourse.length + " legs.");
        //iterate the current course
        for (var idx=0;idx<CurrentCourse.length;idx++) {
            var txt = "[COURSE LEG]: (" + idx + ")," +"Distcm:(" + CurrentCourse[idx].DistanceForwardcm + ")," +"Revolutions Required: (" + CurrentCourse[idx].RevolutionsRequired + ")," +"TurnAtEnd:(" + CurrentCourse[idx].DirectionToTurn + ")," +"BlockAtEnd:(" + CurrentCourse[idx].BlockAtEndOfStep + ")";
            console.info(txt);
        }
    };
//-------------------------------------------
// MOTOR CONTROL

    this.StartTurning = function StartTurning(angle, turnSpeed)
    {
        // Turning is always relative to the bot's direction, with standard cartesian coord layout :
        // 0 is right 90'
        // 90 is dead ahead
        // 180 is left 90'
        // 270 is turnaround/backwards

        /* Turning is always performed at a configurable rate, the same for each motor
            Testing the amount of time it takes to complete one 90' turn, a ratio can be configured which
            can be applied to any angle:
                90 = 1 second
                45 = .5
                 A = A/90 * (Time for 90)

        */

        this.camera.TakePicture("StartTurn");
        var timeToCompleteTurn = (angle/90) * constants.TURNTIME_90_MS;

        console.info("[NAV]: TURNING " + angle + "' .  Should be done in about " + timeToCompleteTurn/1000 + "seconds");
        turnCompleted = false;
        turnStarted = true;

        if (angle<90){
            self.RobotRight(turnSpeed);
        }
        else if (angle<260){
            self.RobotLeft(turnSpeed);
        }
        else if (angle<360){
            self.RobotRight(turnSpeed);
        }

        //Set timer callback when turn has completed
        setTimeout(self._onTurnTimeoutCompleted, timeToCompleteTurn);
    };
    this.Sleep = function Sleep(ms){
        var done=false;
        var startTime = Date.now();

        while(!done){
            if ( (Date.now() - startTime) > ms)
                done=true;
        }
    };
    this._onTurnTimeoutCompleted = function _onTurnTimeoutCompleted(){
        self.StopTurning();
    };
    this.StopTurning = function StopTurning(){
        console.info("[NAV]" + self.CurrentLeg + ":  TURN COMPLETED");
        turnCompleted = true;
        thisBot.IsCurrentlyTurning = false;
        self.RobotStop();
    };
    this.RobotGo = function RobotGo(direction, speed){

        var rightMotorDirection;
        var leftMotorDirection;

        timeNavigationStart = Date.now();

        if (speed > constants.MAX_SPEED)
            speed = constants.MAX_SPEED;
        if (speed > 0xFF)
            speed = 0xFF;

        thisBot.IsCurrentlyMoving = true;
        thisBot.CurrentDirection = direction;
        thisBot.CurrentSpeed = speed;

        switch(direction)
            {
                case constants.FORWARD:
                    rightMotorDirection = constants.MOTOR_MSG_COUNTERCLOCKWISE;
                    leftMotorDirection = constants.MOTOR_MSG_COUNTERCLOCKWISE;
                    break;
                case constants.BACKWARD:
                    rightMotorDirection = constants.MOTOR_MSG_CLOCKWISE;
                    leftMotorDirection = constants.MOTOR_MSG_CLOCKWISE;
                    break;
                case constants.LEFT:
                    rightMotorDirection = constants.MOTOR_MSG_CLOCKWISE;
                    leftMotorDirection = constants.MOTOR_MSG_COUNTERCLOCKWISE;
                    thisBot.IsCurrentlyTurning = true;
                    break;
                case constants.RIGHT:
                    rightMotorDirection = constants.MOTOR_MSG_COUNTERCLOCKWISE;
                    leftMotorDirection = constants.MOTOR_MSG_CLOCKWISE;
                    thisBot.IsCurrentlyTurning = true;
                    break;

            }
            //Set motor directions
            i2C_MSGBFR[2] = constants.I2CMotorDirectionLeft;
            i2C_MSGBFR[3] = leftMotorDirection;
            i2C_MSGBFR[4] = (i2C_MSGBFR[0] + i2C_MSGBFR[1] + i2C_MSGBFR[2] + i2C_MSGBFR[3]) & 0xFF;
            this.I2CBus.write(i2C_MSGBFR);

            i2C_MSGBFR[2] = constants.I2CMotorDirectionRight;
            i2C_MSGBFR[3] = rightMotorDirection;
            i2C_MSGBFR[4] = (i2C_MSGBFR[0] + i2C_MSGBFR[1] + i2C_MSGBFR[2] + i2C_MSGBFR[3]) & 0xFF;
            this.I2CBus.write(i2C_MSGBFR);

            //Set Motor Speeds
            i2C_MSGBFR[2] = constants.I2CMotorSpeedLeft;
            i2C_MSGBFR[3] = speed;
            i2C_MSGBFR[4] = (i2C_MSGBFR[0] + i2C_MSGBFR[1] + i2C_MSGBFR[2] + i2C_MSGBFR[3]) & 0xFF;
            this.I2CBus.write(i2C_MSGBFR);

            //Right Motor Speed
            i2C_MSGBFR[2] = constants.I2CMotorSpeedRight;
            i2C_MSGBFR[3] = speed;
            i2C_MSGBFR[4] = (i2C_MSGBFR[0] + i2C_MSGBFR[1] + i2C_MSGBFR[2] + i2C_MSGBFR[3]) & 0xFF;
            this.I2CBus.write(i2C_MSGBFR);

    };
    this.RobotStop = function RobotStop() {
        this.RobotGo(constants.FORWARD, 0x00);
        thisBot.IsCurrentlyMoving = false;

    };
    this.RobotForward = function RobotForward(speed) {
        console.info("Going Forward");
        this.RobotGo(constants.FORWARD, speed);
    };
    this.RobotBackward = function RobotBackward(speed) {
        this.RobotGo(constants.BACKWARD, speed);
    };
    this.RobotLeft = function RobotLeft() {
       this.RobotGo(constants.LEFT, constants.MOTOR_SPEED_TURN);
    };
    this.RobotRight = function RobotRight() {
        this.RobotGo(constants.RIGHT, constants.MOTOR_SPEED_TURN);
    };

    this.StartCourse = function StartCourse() {
        console.info("[NAV]: STARTING COURSE with " + CurrentCourse.length + " legs in course.");
        _navigationStopped = false;
        _navigationRunning = true;

        thisBot.TurnLightsOn();

        this._StartTimerNavigationPoll();
        this.RobotForward(constants.MOTOR_SPEED_CRAWL); //Always start in a forward motion

    };
    this.StopCourse = function StopCourse() {
        console.info("[NAV]: STOPPING COURSE. ");
        console.info("[" + thisBot.Name + "]: STOPPING.");
        this.RobotStop();
        this._StopTimerNavigationPoll();
        CurrentCourse = [];
        _navigationRunning = false;
        thisBot.TurnLightsOff();

    };
    this.IsReadyToTurn = function IsReadyToTurn() {
        if (constants.DEBUG)
            console.info("[DEBUG]:  Ready to TURN?? " + CurrentCourse[self.CurrentLeg].EndOfLegReached);
        return CurrentCourse[self.CurrentLeg].EndOfLegReached;
    };

    // ---------------------
    // CALLBACKS/EventHandlers
    // ---------------------
    this._onNavigationCheck = function _onNavigationCheck() {

        if (_navigationRunning) {

            if ( (Date.now() - timeNavigationStart) > constants.LEG_MAXTIME_MS)
                {
                    console.warn("EMERGENCY STOP - LEG HAS REACHED MAX CONFIGURED TIME.");
                    _navigationRunning = false;
                }

            //NOT Turning, but Ready to Turn
            if (thisBot.IsCurrentlyTurning===false && turnStarted===false)
            {
                if (self.IsReadyToTurn()===true){
                        self.RobotStop();
                        self.StartTurning(CurrentCourse[self.CurrentLeg].DirectionToTurn, constants.MOTOR_SPEED_TURN);
                }
            }
            else if (thisBot.IsCurrentlyTurning && turnCompleted===false){
                // TURNING but not complete
                if (turnCompleted===false) {
                  console.info("[NAV] " + self.CurrentLeg + " : TURNING TURNING ...");
                }

            }
            else if (turnCompleted===true){

                    if (self.CurrentLeg + 1 >= CurrentCourse.length)
                    {
                        if (constants.DEBUG)
                            console.info("----- END OF COURSE ------");
                        self.StopCourse();
                    }
                    else
                    {
                        self.CurrentLeg++;
                        if (constants.DEBUG)
                            console.info("[NAV]:  STARTING NEXT LEG : " + self.CurrentLeg);

                        CurrentCourse[self.CurrentLeg].CurrentBotRevolutions = 0;
                        self.RobotForward(constants.MOTOR_SPEED_CRAWL);
                    }
            }
            else
            {
                if (constants.DEBUG) {
                    var currentDirection = "UNKNOWN";

                    switch(thisBot.CurrentDirection)
                    {
                        case constants.FORWARD:
                            currentDirection = "FORWARD";
                            break;
                        case constants.BACKWARD:
                            currentDirection = "BACKWARD";
                            break;
                        case constants.LEFT:
                            currentDirection = "LEFT";
                            break;
                        case constants.RIGHT:
                            currentDirection = "RIGHT";
                            break;
                        default:
                           break;
                    }

                    var revsToGo = CurrentCourse[self.CurrentLeg].RevolutionsRequired - CurrentCourse[self.CurrentLeg].CurrentBotRevolutions;

                    console.info("[LEG]:" + self.CurrentLeg + ". " + currentDirection + ", " + revsToGo + " revolutions to go...");
                }
            }
        }
        else if (!_navigationStopped) {
            console.warn("Navigation was not stopped by CourseEnd action.  Stopping Course and Timer Navigation Poll.");
            self.StopCourse();
            self._StopTimerNavigationPoll();
        }
        else {
            if(constants.DEBUG)
                console.info("[NAV]: " + thisBot.Name + " Not Running, and Stopped..");
        }
    };
    this._OnEndOfLegEvent = function _OnEndOfLegEvent(){
        if (constants.DEBUG)
            console.info("[NAV]:  ----- END OF LEG " + self.CurrentLeg + " REACHED ---");
        CurrentCourse[self.CurrentLeg].EndOfLegReached = true;
    };

    this.odometer.SensorEvent.on('WheelRevolutionEvent', function _OnRobotWheelRevolution(){
        if (self.CurrentLeg < CurrentCourse.length)
        {
            if ((Date.now() - lastClickTime) > constants.ODO_CYCLE_TIME_MIN)
            {
                lastClickTime = Date.now();
                var current = ++CurrentCourse[self.CurrentLeg].CurrentBotRevolutions;
                var required = CurrentCourse[self.CurrentLeg].RevolutionsRequired;

                console.info("CLICK[" + current +"] of ["  + required + "]");

                //Trigger the end of Leg Event
                if (current >= required)
                {
                    self._OnEndOfLegEvent();
                    if (current>required)
                    {
                        console.info("Received a click event after required has been met.");
                    }
                    self.RobotStop();
                    self._OnEndOfLegEvent();
                }
            }
        }
        else
        {
            if (constants.DEBUG)
                console.info("---- COURSE ENDED ----");
            self.StopCourse();
        }
    });

// -----------
// TIMER STARTER/STOPPERS
// -----------

    this._StartTimerNavigationPoll = function _StartTimerNavigationPoll() {
        timerPolling = setInterval(this._onNavigationCheck, constants.POLLINGNAV_MS);
    };
    this._StopTimerNavigationPoll = function _StopTimerNavigationPoll() {
        console.info("[NAV]: STOPPING polling timer");
        clearInterval(timerPolling);
        _navigationStopped = true;
    };

    this.StartWarmupSequence = function StartWarmupSequence() {
        var warmupCourse = [];
        warmupCourse.push(new courseLeg.CourseLeg(0, 100, constants.RIGHT, false, thisBot, constants.MOTOR_SPEED_CRAWL));
        warmupCourse.push(new courseLeg.CourseLeg(1, 100, constants.RIGHT, false, thisBot, constants.MOTOR_SPEED_CRAWL));
        warmupCourse.push(new courseLeg.CourseLeg(2, 100, constants.RIGHT, false, thisBot, constants.MOTOR_SPEED_CRAWL));
        warmupCourse.push(new courseLeg.CourseLeg(3, 100, constants.RIGHT, false, thisBot, constants.MOTOR_SPEED_CRAWL));


        console.info("Starting warmup sequence...");

        this.camera.TakePicture("Start");

        this.RobotStop();
        this.SetCourse(warmupCourse);
        this.ShowCourse();
        this.StartCourse();
    };

    _navigationRunning = false;

    this.I2CBus = new mraaClass.I2c(thisBot.I2CBusNumber);
    this.I2CBus.address(thisBot.I2CAddress);

    console.info("Navigation object created.  No course defined yet.");
}

ObjectDetector.js**

/* ObjectDetector.js */

// Matt Chandler
// Intel, Corp.
// March, 2017

/* jslint node:true */
/* jshint unused:false*/

var constants = require("./constants.js");
try {
    require.resolve("mraa");
}
catch (e) {
    console.error("Critical: mraa node module is missing, try 'npm install -g mraa' to fix.", e);
    process.exit(-1);
}
var mraa = require("mraa") ;
var EventEmitter = require('events');
module.exports = ObjectDetector;
function ObjectDetector(pinNumber, startListening)
{
    var self = this;
    this.ObjectDetected = false;
    this.Listening = startListening;
    this.SensorEvent = new EventEmitter();

    this.gpioPin = new mraa.Gpio(pinNumber);
    this.gpioPin.dir(mraa.DIR_IN);
    this.gpioPin.isr(mraa.EDGE_RISING, function _onPIRAlert()
    {
        if (self.Listening){ // 'this' inside the ISR callback is not the ObjectDetector; use the captured 'self'
            self.SensorEvent.emit('ObjectDetected');
            self.ObjectDetected = true;
        }
    });

    this.ResetTrigger = function ResetTrigger()
    {
        this.ObjectDetected = false;
    };
    this.StartListening = function StartListening()
    {
        this.Listening = true;
    };
    this.StopListening = function StopListening()
    {
        this.Listening = false;
    };
    this.CheckForObject = function CheckForObject(){

        var pirValue = this.gpioPin.read();
        this.ObjectDetected = (pirValue>0) ? true:false;
        console.info("checking PIR:" + pirValue);
    };
}

DistanceDetector.js**

/* DistanceDetector.js */

// Matt Chandler
// Intel, Corp.
// March, 2017

/* jslint node:true */
/* jshint unused:false*/

var constants = require("./constants.js");

try {
    require.resolve("mraa");
}
catch (e) {
    console.error("Critical: mraa node module is missing, try 'npm install -g mraa' to fix.", e);
    process.exit(-1);
}
var mraa = require("mraa") ;

module.exports = DistanceDetector;
function DistanceDetector(uartNum, baudRate){

    this.uart = new mraa.Uart(uartNum); //Default
    this.uart.setBaudRate(baudRate);
    this.uart.setMode(8,0,1);
    this.uart.setFlowcontrol(false, false);
    sleep(200);
    // Distance-measurement request: command byte 0x22, two data bytes, then a checksum (sum of the first three bytes)
    var command = new Buffer(4);
    command[0] = 0x22;
    command[1] = 0x00;
    command[2] = 0x00;
    command[3] = 0x22;

    this.GetDistance = function GetDistance(){
        var rxBuf;
        var distanceCM;
        this.uart.write(command);
        sleep(200);
        rxBuf = this.uart.read(4);
        sleep(200);

        // Validate the response checksum (low byte of the sum of the first three bytes) before decoding
        if (rxBuf[3] == ((rxBuf[0] + rxBuf[1] + rxBuf[2]) & 0xFF)) {
            distanceCM = (rxBuf[1]<<8) | rxBuf[2];
            return distanceCM;
        }
    };
    this.IsObjectClose = function IsObjectClose(thresholdCM) {
        var distanceCM = this.GetDistance(); // JavaScript is case sensitive; the method is GetDistance

        return (distanceCM < thresholdCM);

    };
}
// Busy-wait for approximately ms milliseconds (blocking; used only for short UART settling delays)
function sleep(ms){
    var startTime = Date.now();

    while ((Date.now() - startTime) < ms) {
        // spin until the requested interval has elapsed
    }
}

Camera.js**

/* Camera.js */

// Matt Chandler
// Intel, Corp.
// March, 2017

/* jslint node:true */
/* jshint unused:false*/

var constants = require("./constants.js");
try {
    require.resolve("mraa");
}
catch (e) {
    console.error("Critical: mraa node module is missing, try 'npm install -g mraa' to fix.", e);
    process.exit(-1);
}

var mraa = require("mraa") ;
var cpLib = require('child_process');

module.exports = Camera;
function Camera(resolution, greyscale){

    this.CameraOn = false;
    this.Resolution = resolution;
    this.GreyScale = greyscale;
    var cp = null;

    this.TakePicture = function TakePicture(imagePrefix){
        this.CameraOn = true;
        var currentDate = new Date();
        var m = currentDate.getMonth() + 1;              // getMonth() is zero-based
        var month = (m < 10) ? "0" + m : m;
        var h = currentDate.getHours();
        var hours = (h < 10) ? "0" + h : h;

        var cmd = "fswebcam -r 800x600 --jpeg 100 -S 13 ";
        if (this.GreyScale===true)
            cmd+="--greyscale ";
        var fileName = "/home/root/" + imagePrefix + "_" + currentDate.getFullYear() + month +
                        currentDate.getDate() + hours + currentDate.getMinutes() +
                        currentDate.getSeconds() + ".jpg";

        cmd += fileName;
        cmd += " --exec " + "\"sshpass -p '" + constants.FTP_REMOTE_PASSWORD + "' scp " + fileName + "" + constants.FTP_REMOTE_USER + "@" + constants.FTP_REMOTE_IP + ":" + constants.FTP_REMOTE_BASE_FOLDER + "\"";

        if (constants.DEBUG)
            console.info(cmd);
        cp = cpLib.exec(cmd);
        cp.on('close', function (code) {
            console.log('child process exited with code ' + code);
        });
    };
}

Summary

In this article we investigated some of the possibilities for using the Intel® Edison platform for autonomous navigation and remote surveillance. We outlined example Node.js code that provides a starting framework for an autonomous application built on the Romeo* Intel® Edison based robotics platform. Given a map, we showed how to navigate to a given location and upload an image to a remote server, and how image analysis at the edge can then be scripted for object detection.

About the Author

Matt Chandler is a software engineer at Intel working on scale enabling projects for Internet of Things.

References

DFRobot* Product Description
https://www.dfrobot.com/wiki/index.php/Romeo_for_Edison_Controller_SKU:_DFR0350

Notices

No license (express or implied, by estoppel or otherwise) to any intellectual property rights is granted by this document.

Intel disclaims all express and implied warranties, including without limitation, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement, as well as any warranty arising from course of performance, course of dealing, or usage in trade.

This document contains information on products, services and/or processes in development. All information provided here is subject to change without notice. Contact your Intel representative to obtain the latest forecast, schedule, specifications and roadmaps.

The products and services described may contain defects or errors known as errata which may cause deviations from published specifications. Current characterized errata are available on request.

Copies of documents which have an order number and are referenced in this document may be obtained by calling 1-800-548-4725 or by visiting www.intel.com/design/literature.htm.

Intel, the Intel logo, and Intel RealSense are trademarks of Intel Corporation in the U.S. and/or other countries.

 

*Other names and brands may be claimed as the property of others

**This sample source code is released under the Intel Sample Source Code License Agreement.

© 2017 Intel Corporation.

Enhancing Outlier Detection with Intel® Data Analytics Acceleration Library


Introduction

How do credit card companies detect fraud or abuse? How do network administrators discover intrusions? How do scientists know whether or not their experiments run correctly?

In order to do these things, they analyze the data set and look for data points that deviate from the norm. For example, credit card companies look for unusually high charges in certain transactions or strange buying behaviors. These actions might indicate that a credit card has been stolen. Network administrators search the log files for irregular activities on the network, such as an unusual load from some locations or network access from a foreign IP address, which are good indications of a potential network intrusion. Similarly, scientists check whether data is outside the normal or expected ranges as an indicator that an experiment is not running correctly.

These types of unusual or irregular activities are called outliers or anomalies. This article describes different methods to detect outliers1 in the data and how the Intel® Data Analytics Acceleration Library (Intel® DAAL)2 helps optimize outlier detection when running it on systems equipped with Intel® Xeon® processors.

What is an Outlier?

An outlier is a data point that is significantly different from, or deviates markedly from, the remaining data, making it look abnormal or irregular (see Figure 1).


Figure 1: Outlier case #1.

Each purple dot represents a data point in the data set. In the graph, the two isolated data points are considered outliers since they are very far away from the rest of the data points.


Figure 2: Outlier case #2.

Figure 2 shows another case of outliers. In this case a data set is grouped into three groups (clusters). Any data points that lie outside the groups are considered outliers.


Figure 3: Outlier case #3.

Figure 3 shows another case of outliers. Although the data points are again grouped into clusters, this case differs from Figure 2 because of the density of the data points: in Figure 2 the points are almost uniformly distributed within each group, while the groups in Figure 3 have different densities.

What Causes Outliers?

Outliers can be both useful and harmful. By detecting irregular activities (outliers) on the network, network administrators can potentially discover and prevent intrusions. On the other hand, detecting and eliminating outliers can minimize their impact on calculation results. Outliers can skew and mislead the training process of machine learning3 algorithms, resulting in longer training times and less accurate models. For example, in K-means clustering, outliers in the data set pull the centroid of a cluster away from its intended location.
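
As a quick numeric illustration of this effect, the following snippet shows how a single extreme value shifts a simple mean, which stands in here for a cluster centroid (the numbers are made up for illustration):

# A single outlier pulls the mean (a stand-in for a K-means centroid) away from the bulk of the data
points = [2.0, 2.1, 1.9, 2.2, 2.0]            # a tight cluster around 2.0
outlier = 50.0

mean_clean = sum(points) / len(points)                           # ~2.04
mean_with_outlier = (sum(points) + outlier) / (len(points) + 1)  # ~10.03

print("mean without outlier: %.2f" % mean_clean)
print("mean with outlier:    %.2f" % mean_with_outlier)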

Common outlier causes include the following:

  • Data collection error: The data collection devices can collect unusual data due to noise.
  • Data entry error: Incorrect data is entered. For example, mistyping the sale price of a house in a specific neighborhood can cause the price of that house to be out of the average range of house prices of that neighborhood.
  • Selection type error: For example, consider high-school student heights. Some high school basketball players are very tall compared to their fellow students, so their heights are outliers. To be accurate, the heights of the basketball players should be measured separately from those of the overall student population.
  • Conversion error: Manipulation or extraction errors when extracting data from multiple sources can cause outliers.

Methods of Detecting Outliers

A common way to detect outliers is to plot the data set and inspect the graph, similar to the plots shown in Figures 1–3.
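
For a quick visual check, a scatter plot is usually sufficient; the short sketch below uses the matplotlib package (assumed to be installed) with made-up values:

# Plot the points and look for isolated ones
import matplotlib.pyplot as plt

x = [1.0, 1.2, 0.9, 1.1, 1.3, 5.0]   # the last point sits far from the rest
y = [2.0, 2.1, 1.9, 2.2, 2.0, 8.0]

plt.scatter(x, y)
plt.xlabel("feature 1")
plt.ylabel("feature 2")
plt.title("Visual inspection for outliers")
plt.show()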

In the second edition of the book Outlier Analysis,4 Charu C. Aggarwal provides the following outlier detection methods:

  • Probabilistic models
  • Linear models
  • Proximity-based models
  • High-dimensional outlier detection

Applications of Outlier Detection

Because outlier detection methods can detect strange or abnormal data, they can be used to:

  • Detect irregular activities and strange addresses when analyzing network security
  • Identify credit card fraud by observing unusual buying patterns or very high-charge transactions
  • Diagnose potential patient health problems by spotting unusual symptoms or test results from the patients
  • Identify exceptional players in sports by analyzing performance data that is abnormal compared to that of their peers

These are just some of the applications of outlier detection methods; there are many more.

Intel® Data Analytics Acceleration Library

Intel DAAL is a library of basic building blocks for data analytics and machine learning, highly optimized for the latest features of the latest Intel® processors. In this article, we use the Python* API of Intel DAAL to illustrate how to invoke its outlier detection functions. To install it, follow the instructions in the Intel DAAL documentation.5

Using the Outlier Detection Method in the Intel Data Analytics Acceleration Library

The following paragraph from the Intel® DAAL manual describes what a univariate outlier is and gives the formula that defines the outlier region:

“Given a set X of n feature vectors x₁ = (x₁₁, …, x₁ₚ), …, xₙ = (xₙ₁, …, xₙₚ) of dimension p, the problem is to identify the vectors that do not belong to the underlying distribution. The algorithm for univariate outlier detection considers each feature independently. The univariate outlier detection method can be parametric, assumes a known underlying distribution for the data set, and defines an outlier region such that if an observation belongs to the region, it is marked as an outlier. Definition of the outlier region is connected to the assumed underlying data distribution. The following is an example of an outlier region for the univariate outlier detection:

Outlier(αₙ, mₙ, σₙ) = { x : |x − mₙ| > g(n, αₙ) · σₙ }

where mₙ and σₙ are (robust) estimates of the mean and standard deviation computed for a given data set, αₙ is the confidence coefficient, and g(n, αₙ) defines the limits of the region and should be adjusted to the number of observations.”
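
As a concrete, non-DAAL illustration of this region, the short sketch below uses plain (non-robust) estimates of the mean and standard deviation and sets g(n, αₙ) = 3, the familiar three-sigma rule; both choices are simplifications made only for illustration:

# Flag values in the region |x - m| > 3 * sigma (three-sigma rule, illustrative choice of g)
values = [10.2, 9.8, 10.1, 10.0, 9.9, 10.3, 10.1, 9.7, 10.0, 10.2, 9.9, 10.1, 25.0]

n = len(values)
mean = sum(values) / float(n)
sigma = (sum((x - mean) ** 2 for x in values) / float(n)) ** 0.5

outliers = [x for x in values if abs(x - mean) > 3 * sigma]
print("mean = %.2f, sigma = %.2f, outliers = %s" % (mean, sigma, outliers))   # flags 25.0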

This section shows how to invoke the outlier method in Python6 using Intel DAAL.

The following steps are used to invoke the univariate outlier detection algorithm from Intel DAAL; a consolidated sketch that puts the steps together appears after the list:

  1. Import the necessary packages using the from and import statements:
    1. Import the Intel DAAL numeric table by issuing the following command:

      from daal.data_management import FileDataSource, writeOnly, DataSourceIface, BlockDescriptor_Float64

    2. Import the univariate outlier detection algorithm using the following commands:

      from daal.algorithms.univariate_outlier_detection import InitIface, Batch_Float64DefaultDense, data, weights

  2. Initialize the file data source if the data input is from the .csv file:

      DataSet = FileDataSource(
          trainDatasetFileName, DataSourceIface.doAllocateNumericTable,
          DataSourceIface.doDictionaryFromContext
      )

  3. Load input data:

      DataSet.loadDataBlock()
      nFeatures = DataSet.getNumberOfColumns()

  4. Create the algorithm:
    1. First, create the algorithm object:

      algorithm = Batch_Float64DefaultDense()

    2. Pass the data set to the algorithm:

      algorithm.input.set(data, DataSet.getNumericTable())

  5. Compute the outliers and get the results:

    results = algorithm.compute()

  6. The results can be printed using printNumericTable, a helper function that accompanies the Intel DAAL Python examples:

    printNumericTable(results.get(weights), "outlier results")

Note: some common data sets can be found at the UCI Machine Learning Repository.7
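
Putting these steps together, the following is a minimal end-to-end sketch. The input file name outlier_data.csv is an assumption for illustration, and printNumericTable is taken from the utilities that accompany the Intel DAAL Python examples (its import path may vary by installation):

# Consolidated sketch of the univariate outlier detection steps described above
from daal.data_management import FileDataSource, DataSourceIface
from daal.algorithms.univariate_outlier_detection import Batch_Float64DefaultDense, data, weights
# from utils import printNumericTable   # helper shipped with the Intel DAAL Python examples; path may vary

trainDatasetFileName = "outlier_data.csv"   # assumed input .csv file with numeric columns

# Step 2: initialize the file data source
DataSet = FileDataSource(
    trainDatasetFileName, DataSourceIface.doAllocateNumericTable,
    DataSourceIface.doDictionaryFromContext
)

# Step 3: load the input data
DataSet.loadDataBlock()
nFeatures = DataSet.getNumberOfColumns()

# Step 4: create the algorithm object and pass it the data set
algorithm = Batch_Float64DefaultDense()
algorithm.input.set(data, DataSet.getNumericTable())

# Step 5: compute the outliers
results = algorithm.compute()

# Step 6: print the weights table, which flags the detected outliers
printNumericTable(results.get(weights), "outlier results")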

Conclusion

Outlier detection plays an important role in fraud detection, network security, and more. Intel DAAL optimizes its outlier detection methods by taking advantage of new features in current and future generations of Intel Xeon processors when running the methods on computer systems equipped with these processors.

References

1. General outlier detection

2. Introduction to Intel DAAL

3. Wikipedia – machine learning

4. Outlier detection

5. How to install the Python Version of Intel DAAL in Linux*

6. Python website

7. Common data sets

